Title

Multi-Sensor Data Fusion for Vehicle Detection in Autonomous Vehicle Applications

Abstract

In autonomous vehicle systems, sensing the surrounding environment is essential for the vehicle to make the right decisions about its actions. Understanding the neighboring environment from sensing data enables the vehicle to be aware of other moving objects nearby (e.g., vehicles or pedestrians) and therefore to avoid collisions. This local situational awareness depends largely on extracting information from a variety of sensors (e.g., camera, LIDAR, RADAR), each of which has its own operating conditions (e.g., lighting, range, power). One of the open issues in reconstructing and understanding the environment of an autonomous vehicle is how to fuse locally sensed data to support a specific decision task such as vehicle detection. In this paper, we study the problem of fusing data from camera and LIDAR sensors and propose a novel 6D (RGB+XYZ) data representation to support visual inference. This work extends the previous Position- and Intensity-included Histogram of Oriented Gradients (PIHOG, or pHOG) from color space to the proposed 6D space, aiming to achieve more reliable vehicle detection than a single-sensor approach. Our experimental results validate the effectiveness of the proposed multi-sensor data fusion approach: it achieves a detection accuracy of 73% on the challenging KITTI dataset.
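To make the 6D (RGB+XYZ) representation concrete, the following sketch (not the authors' code) shows one common way to pair each camera pixel with the 3D coordinates of the LIDAR point that projects onto it. The function name, input shapes, and the projection matrix P are illustrative assumptions; in practice the camera/LIDAR transform comes from the KITTI calibration files, and the paper's exact fusion pipeline may differ.

    # Minimal sketch (assumed, not from the paper): build a (H, W, 6) array
    # whose first three channels are RGB and last three are the LIDAR XYZ
    # of the point hitting that pixel (zeros where no LIDAR return lands).
    import numpy as np

    def fuse_rgb_xyz(rgb, lidar_xyz, P):
        """rgb: (H, W, 3) uint8 image; lidar_xyz: (N, 3) points in the LIDAR
        frame; P: (3, 4) projection matrix from LIDAR frame to image pixels."""
        H, W, _ = rgb.shape
        fused = np.zeros((H, W, 6), dtype=np.float32)
        fused[..., :3] = rgb / 255.0                     # RGB channels

        # Project LIDAR points to pixel coordinates (homogeneous -> Euclidean).
        pts_h = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))])
        uvw = pts_h @ P.T
        keep = uvw[:, 2] > 0                             # points in front of the camera
        u = (uvw[keep, 0] / uvw[keep, 2]).astype(int)
        v = (uvw[keep, 1] / uvw[keep, 2]).astype(int)
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)

        # Write the 3D coordinates into the XYZ channels of the hit pixels.
        fused[v[inside], u[inside], 3:] = lidar_xyz[keep][inside]
        return fused

A descriptor such as pHOG can then be computed over this 6D array instead of the color image alone, so that gradient information from both appearance and geometry contributes to the detector.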

Publication Date

1-1-2018

Publication Title

IS&T International Symposium on Electronic Imaging Science and Technology

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.2352/ISSN.2470-1173.2018.17.AVM-257

Scopus ID

85055757974 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85055757974

