Search engine for discovering works of Art, research articles, and books related to Art and Culture

FPGA-based Feature Extraction and Tracking Accelerator for Real-Time Visual SLAM

View through CrossRef
FPGA-based acceleration has been increasingly studied and applied in computer vision in recent years, owing to its low latency, low power consumption, and high flexibility. This paper proposes an FPGA-based feature extraction and tracking accelerator for real-time visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM), which provides complete acceleration of the image front end and directly outputs feature-point IDs and coordinates to the back end. The accelerator consists of image preprocessing, pyramid processing, optical flow processing, and feature extraction and tracking modules. For the first time, it implements a hardware solution that combines features from accelerated segment test (FAST) corners with Gunnar Farneback (GF) dense optical flow, achieving better feature tracking performance and a more flexible choice of technical route. To address the lack of scale and rotation invariance in FAST features, an efficient pyramid module with a five-layer thumbnail structure is designed and implemented. The accelerator is implemented on a modern Xilinx Zynq FPGA. Evaluation shows that it tracks features stably in violently shaking image sequences, and its results are consistent with those of MATLAB code running on a PC. Operating at 100 MHz, the accelerator processes 108 frames per second for 720p images and 48 frames per second for 1080p images. Compared with PC CPUs, which take seconds per frame, the processing latency is reduced to the order of milliseconds, making GF dense optical flow an efficient and practical technical solution at the edge.

Related Results

Method of QoS evaluation of FPGA as a service
The subject of study in this article is the evaluation of the performance issues of cloud services implemented using FPGA technology. The goal is to improve the performance of clou...
Information metrics for localization and mapping
Decades of research have made possible the existence of several autonomous systems that successfully and efficiently navigate within a variety of environments under certain conditi...
FPGA-Based Feature Extraction and Tracking Accelerator for Real-Time Visual SLAM
Due to its advantages of low latency, low power consumption, and high flexibility, FPGA-based acceleration technology has been more and more widely studied and applied in the field...
Analysis of the Application of FPGA Technologies in IoT
The subject of study in this article and work is the modern technologies of programmable logic devices (PLD) classified as FPGA, and the peculiarities of its application in Interne...
GNV2-SLAM: vision SLAM system for cowshed inspection robots
Simultaneous Localization and Mapping (SLAM) has emerged as one of the foundational technologies enabling mobile robots to achieve autonomous navigation, garnering significant atte...
Event based SLAM
Event-based cameras are novel sensors with a bio-inspired design that exhibit a high dynamic range and extremely low latency. Their sensing principle is different than the...
Methods of Deployment and Evaluation of FPGA as a Service Under Conditions of Changing Requirements and Environments
Applying Field Programmable Gate Array (FPGA) technology in cloud infrastructure and heterogeneous computations is of great interest today. FPGA as a Service assumes that the progr...
Dynamic SLAM: A Visual SLAM in Outdoor Dynamic Scenes
Simultaneous localization and mapping (SLAM) has been widely used in augmented reality (AR), virtual reality (VR), robotics, and autonomous vehicles as the theoretic...
