Dissecting Latency in 360° Video Camera Sensing Systems
360° video camera sensing is an increasingly popular technology. Compared with traditional 2D video systems, ensuring a good viewing experience in 360° video camera sensing is challenging because the massive omnidirectional data adversely affect start-up delay, event-to-eye delay, and frame rate. Understanding the time consumed by computing tasks in 360° video camera sensing is therefore a prerequisite to improving the system's delay performance and viewing experience. Despite prior measurement studies on 360° video systems, none delves into the system pipeline and dissects latency at the task level. In this paper, we perform the first in-depth measurement study of task-level time consumption for 360° video camera sensing. We start by identifying the subtle relationship between the three delay metrics and the time-consumption breakdown across the system's computing tasks. Next, we develop an open research prototype, Zeus, to characterize this relationship in various realistic usage scenarios. Our measurements of task-level time consumption demonstrate the importance of the camera's CPU-GPU transfer and the server initialization, as well as the negligible effect of 360° video stitching on the delay metrics. Finally, we compare Zeus with a commercial system to validate that our results are representative and can be used to improve today's 360° video camera sensing systems.
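The task-level breakdown the abstract describes can be approximated with simple wall-clock instrumentation of each pipeline stage. A minimal sketch follows; the stage names echo the kinds of tasks a 360° camera sensing pipeline involves (capture, stitch, encode, upload, decode, render), but the functions and durations here are placeholders, not the paper's actual pipeline or measurements:

```python
import time

def run_stage(name, seconds):
    """Simulate one pipeline task and return its elapsed wall-clock time."""
    start = time.perf_counter()
    time.sleep(seconds)  # stand-in for real per-task work
    return name, time.perf_counter() - start

def measure_pipeline(stages):
    """Return a per-task latency breakdown and the end-to-end total."""
    breakdown = dict(run_stage(name, dur) for name, dur in stages)
    return breakdown, sum(breakdown.values())

# Hypothetical stage durations in seconds, for illustration only.
stages = [("capture", 0.010), ("stitch", 0.020), ("encode", 0.010),
          ("upload", 0.010), ("decode", 0.005), ("render", 0.005)]

breakdown, total = measure_pipeline(stages)
for task, t in breakdown.items():
    print(f"{task:8s} {t * 1000:6.1f} ms  ({t / total:5.1%} of total)")
```

Summing the per-task times in this way gives a serial approximation of event-to-eye delay; a real study would also account for pipelining and queuing between tasks.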