Abstract:
The field of autonomous driving research has made significant strides toward full automation, endowing vehicles with self-awareness and independent decision making. However, integrating automation into vehicular operations presents formidable challenges, especially because these vehicles must navigate public roads alongside other cars and pedestrians. An intriguing yet relatively underexplored domain within autonomous driving is overtaking. Overtaking involves a dynamic interplay of complex tasks, including precise steering and speed control, making it one of the most intricate maneuvers for augmented intelligence driving technologies. Surprisingly, overtaking by autonomous vehicles (AVs) remains largely uncharted territory in the context of augmented intelligence for autonomous systems, inviting further research in this nascent field. In response to this need, our review paper systematically synthesises overtaking methodologies based on computer vision techniques tailored to augmented intelligence autonomous driving scenarios. Our analysis covers an array of domains central to overtaking in augmented intelligence AVs: Object Detection, Lane/Line Detection, Depth Estimation, Obstacle Detection, Segmentation, and Pedestrian Detection. We meticulously analyze each domain using well-established multimodal data sets and assess the performance of different models across various parameters using graphical structures that enable visual comparative analyses. In object detection, YOLOv4 achieves the top performance with 0.90 mAP on the BDD100K data set. For lane detection, CLRNet excels with the highest F1 score of approximately 0.96 on the LLAMAS data set.