You Only Look Once, But Compute Twice: Service Function Chaining for Low-Latency Object Detection in Softwarized Networks

With increasing numbers of computer vision and object detection application scenarios, those requiring ultra-low service latency have become increasingly prominent, e.g., for autonomous and connected vehicles or smart city applications. The incorporation of machine learning through the application of trained models in these scenarios can pose a computational challenge. The softwarization of networks provides opportunities to incorporate computing into the network, increasing flexibility by distributing workloads through offloading from client and edge nodes over in-network nodes to servers. In this article, we present an example of splitting the inference component of the YOLOv2 trained machine learning model between client, network, and service-side processing to reduce the overall service latency. Assuming a client has 20% of the server's computational resources, we observe a more than 12-fold reduction in service latency when incorporating our service split compared to on-client processing, and an increase in speed of more than 25% compared to performing everything on the server. Our approach is not only applicable to object detection, but can also be applied to a broad variety of machine learning-based applications and services.
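The latency argument in the abstract can be illustrated with a simple cost model: total service latency is client compute time plus network transfer time plus server compute time, and splitting inference pays off when the intermediate feature maps shipped to the server are much smaller than the raw input frame. The sketch below is a toy model with entirely illustrative numbers (work units, frame and feature sizes, bandwidth are assumptions, not values from the paper); only the 20% client-to-server compute ratio is taken from the abstract.

```python
# Toy latency model for split DNN inference. All numeric constants below
# are illustrative assumptions, not measurements from the paper.

def service_latency(client_work, server_work, transfer_bytes,
                    client_speed=0.2, server_speed=1.0, bandwidth=1e6):
    """Latency = client compute + network transfer + server compute.

    client_speed is relative to server_speed; the paper assumes the
    client has 20% of the server's computational resources.
    """
    return (client_work / client_speed
            + transfer_bytes / bandwidth
            + server_work / server_speed)

W = 1.0         # total inference work, in server-seconds (assumed)
FRAME = 2e6     # raw input frame size in bytes (assumed)
FEATURES = 0.2e6  # intermediate feature map size in bytes (assumed)

client_only = service_latency(W, 0.0, 0.0)                 # everything local
server_only = service_latency(0.0, W, FRAME)               # offload raw frame
split = service_latency(0.1 * W, 0.9 * W, FEATURES)        # early layers local

print(client_only, server_only, split)  # 5.0 3.0 1.6
```

Even in this crude model, the split configuration beats both extremes: the slow client only runs the cheap early layers, and the network carries compact features instead of the full frame. The actual gains reported in the paper depend on the YOLOv2 layer at which the split is made and on the measured network conditions.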

Bibliographic Details
Main Authors: Zuo Xiang, Patrick Seeling, Frank H. P. Fitzek
Format: Article
Language: English
Published: MDPI AG, 2021-03-01
Series: Applied Sciences
Subjects: object detection; latency optimization; mobile edge cloud; connected autonomous cars; smart city; video surveillance
Online Access: https://www.mdpi.com/2076-3417/11/5/2177
id doaj-8ae8c4ed1c57445fa7db9dbbb14c01e1
record_format Article
doi 10.3390/app11052177
issn 2076-3417
citation Applied Sciences, vol. 11, no. 5, article 2177, 2021-03-01
affiliations Zuo Xiang: Centre for Tactile Internet with Human-in-the-Loop, Technische Universität Dresden, 01187 Dresden, Germany; Patrick Seeling: Department of Computer Science, Central Michigan University, Mount Pleasant, MI 48859, USA; Frank H. P. Fitzek: Centre for Tactile Internet with Human-in-the-Loop, Technische Universität Dresden, 01187 Dresden, Germany
topic object detection; latency optimization; mobile edge cloud; connected autonomous cars; smart city; video surveillance
url https://www.mdpi.com/2076-3417/11/5/2177