Multi-Modal Sensor Fusion for Autonomous Vehicles: Advancing Road Safety Through AI-Driven Perception and Ethical Integration
Abstract
Multi-modal sensor fusion has become a foundational component of perception systems for autonomous vehicles, enabling robust environmental understanding across diverse and safety-critical operating conditions. This article presents a narrative review of camera-, LiDAR-, radar-, and ultrasonic-based perception architectures, with emphasis on their complementary sensing characteristics, fusion strategies, and system-level trade-offs. Existing literature on probabilistic, learning-based, and track-level fusion approaches is synthesized to highlight how redundancy and cross-modal validation improve reliability under adverse weather, occlusion, and high-speed driving conditions. Beyond technical architectures, the review examines emerging work on explainable artificial intelligence for autonomous perception, including methods for decision traceability, post-incident analysis, and regulatory accountability. Fairness-aware training practices and dataset diversity considerations are also discussed in the context of equitable protection for vulnerable road users across varying environmental and demographic conditions. Finally, the article surveys broader system-level implications of advanced perception, including environmental efficiency, economic impact, and future directions in cooperative and networked perception. By integrating technical, ethical, and societal perspectives, this review provides a structured reference for researchers, engineers, and policymakers engaged in the design and deployment of responsible autonomous vehicle perception systems.
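To make the probabilistic fusion idea referenced above concrete, the short Python sketch below combines independent range estimates from camera, radar, and LiDAR by inverse-variance weighting, a basic building block of probabilistic and track-level fusion. The sensor readings, variances, and function names are illustrative assumptions for this sketch, not values or methods drawn from the reviewed literature.

# Minimal, illustrative sketch of inverse-variance (probabilistic) fusion.
# All sensor names, ranges, and variances are hypothetical placeholders.

def fuse_estimates(measurements):
    """Fuse independent 1-D estimates given as (mean, variance) pairs.

    Each measurement is weighted by the inverse of its variance, so sensors
    reporting higher confidence dominate the fused estimate.
    """
    weights = [1.0 / var for _, var in measurements]
    fused_var = 1.0 / sum(weights)
    fused_mean = fused_var * sum(w * mean for w, (mean, _) in zip(weights, measurements))
    return fused_mean, fused_var

# Hypothetical range-to-object estimates in metres: (mean, variance) per modality.
readings = {
    "camera": (24.8, 4.0),   # degraded by glare or fog, so variance is large
    "radar":  (25.3, 0.5),   # comparatively robust to weather, so variance is small
    "lidar":  (25.1, 0.8),
}

mean, var = fuse_estimates(list(readings.values()))
print(f"fused range: {mean:.2f} m (variance {var:.3f})")

Because each modality is weighted by its inverse variance, a sensor whose uncertainty grows (for example, a camera in fog) contributes less to the fused estimate, which is one way the redundancy and cross-modal validation highlighted in the abstract can preserve reliability under adverse conditions.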