Peering beneath the Surface of Self-Driving Cars
Autonomous vehicles, much like human drivers, rely on sensory input to navigate the world. Companies such as Waymo and Uber are expanding access to robotaxis, yet many people still don't fully grasp how these self-driving wonders work. We're here to shed some light on that by breaking down the four main subsystems that make up the high-level architecture of AVs: Perception, State Estimation, Planning and Prediction, and Control.
Perception
Perception, in AVs, is all about observing the environment. Each autonomous vehicle company has its own preference for the type and placement of sensors, but the three most common are cameras, radar, and LiDAR. Data from these sensors is processed into a detailed, accurate picture of the driving environment.
Cameras provide rich information about objects in the surroundings, such as a pedestrian's direction of travel and body language, but they degrade in darkness. Radar, on the other hand, works in the dark and in bad weather and can measure the distance and speed of objects precisely; its low resolution, however, means it might struggle to differentiate between two pedestrians standing close together. Lastly, LiDAR emits laser pulses and measures the time each pulse takes to reflect back, building a detailed 3D map of the world. While excellent at detecting objects, it can struggle in bad weather because rain, snow, and fog scatter and reflect the laser pulses.
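The distance measurement at the heart of LiDAR is simple time-of-flight arithmetic. Here is a minimal sketch of that calculation; it is illustrative only, and real sensors also handle calibration, noise filtering, and multiple returns per pulse:

```python
# Minimal sketch: how a LiDAR unit turns a laser round-trip time into a range.
# Illustrative only; real sensors deal with calibration, noise, and multiple returns.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in metres.

    The pulse travels out and back, so we halve the round trip.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A return after 200 nanoseconds corresponds to roughly 30 m.
print(f"{range_from_time_of_flight(200e-9):.1f} m")  # ~30.0 m
```

Sweeping millions of such pulses per second across the scene is what produces the 3D point cloud.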
Combining these strengths makes self-driving cars more robust. The fused sensor data is fed into machine learning models that classify detected objects into categories such as vehicles, pedestrians, and cyclists.
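To make the fusion idea concrete, here is a hedged sketch of one simple approach, "late fusion," where the camera pipeline contributes the class label and radar contributes range and speed. The data structures, and the assumption that a tracker has already matched detections by a shared track_id, are hypothetical rather than taken from any real AV stack:

```python
# Hypothetical "late fusion" sketch: the camera supplies class labels,
# radar supplies range and speed, and a fused track combines both.
# Field names and structure are illustrative, not from any real AV stack.
from dataclasses import dataclass

@dataclass
class CameraDetection:
    track_id: int
    label: str        # e.g. "pedestrian", "vehicle", "cyclist"

@dataclass
class RadarDetection:
    track_id: int
    range_m: float    # distance to the object
    speed_mps: float  # radial speed

def fuse(cameras: list[CameraDetection], radars: list[RadarDetection]) -> list[dict]:
    """Join detections that tracking has already matched by track_id."""
    radar_by_id = {r.track_id: r for r in radars}
    fused = []
    for cam in cameras:
        radar = radar_by_id.get(cam.track_id)
        if radar is not None:
            fused.append({
                "track_id": cam.track_id,
                "label": cam.label,          # what it is (the camera's strength)
                "range_m": radar.range_m,    # how far away (radar's strength)
                "speed_mps": radar.speed_mps,
            })
    return fused

print(fuse([CameraDetection(7, "pedestrian")],
           [RadarDetection(7, range_m=14.2, speed_mps=1.3)]))
```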
State Estimation
State Estimation is the car's counterpart to how we use our senses to navigate the world. Where we combine visual, auditory, and kinesthetic input, the vehicle fuses sources such as GPS, inertial measurements, and wheel odometry into a picture of its own location and motion within the surrounding environment. Most importantly, it tells the car exactly where it is and where other road users are in relation to it. Without state estimation, an AV would be like a blindfolded driver in a spinning room.
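The classic tool behind state estimation is the Kalman filter, which blends a motion prediction with noisy measurements according to how much each is trusted. Below is a toy one-dimensional version tracking position from a GPS-like sensor; production systems run multi-dimensional variants that also fold in inertial and wheel-odometry data, and all the numbers here are made up for illustration:

```python
# A toy 1-D Kalman filter, the standard workhorse behind state estimation.
# It fuses a noisy position sensor (think GPS) with a motion model; real
# AV stacks run multi-dimensional variants with IMU and odometry inputs.

def kalman_1d(measurements, process_var=0.1, sensor_var=4.0):
    x, p = 0.0, 1000.0   # state estimate and its variance (start very uncertain)
    estimates = []
    for z in measurements:
        # Predict: the car may have moved, so uncertainty grows.
        p += process_var
        # Update: blend prediction and measurement by their certainties.
        k = p / (p + sensor_var)   # Kalman gain
        x += k * (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates

noisy_gps = [5.2, 4.8, 5.1, 5.4, 4.9]
print(kalman_1d(noisy_gps))  # converges toward the true position near 5.0
```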

Planning and Prediction
Planning and Prediction use the information gathered by Perception and State Estimation to chart a safe, efficient course. Prediction estimates how the environment will change over the next few seconds; planning then chooses the best path in response, from high-level decisions such as which route and lane to take down to the low-level trajectory the vehicle actually follows. Together they ensure the vehicle navigates effectively around obstacles and traffic.
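A minimal way to see prediction and planning interact is a constant-velocity rollout: assume each tracked object keeps its current velocity, project it forward, and reject any candidate path it comes too close to. The sketch below is a deliberately simplified stand-in for the learned, multi-hypothesis prediction models real planners use, with all coordinates and margins invented for illustration:

```python
# Simplified prediction-plus-planning check: roll each obstacle forward under a
# constant-velocity assumption and reject paths that come too close. Real
# planners use learned, multi-hypothesis prediction models instead.

def predict_position(pos, vel, dt):
    """Constant-velocity prediction: where will the object be in dt seconds?"""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def path_is_clear(waypoints, obstacles, horizon_s=3.0, step_s=0.5, margin_m=2.0):
    """Reject a path if any predicted obstacle comes within margin_m of it."""
    steps = int(horizon_s / step_s)
    for pos, vel in obstacles:
        for i in range(1, steps + 1):
            future = predict_position(pos, vel, i * step_s)
            for wx, wy in waypoints:
                if ((future[0] - wx) ** 2 + (future[1] - wy) ** 2) ** 0.5 < margin_m:
                    return False
    return True

path = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]        # straight ahead
crossing_pedestrian = [((10.0, 6.0), (0.0, -2.0))]  # walking toward the path
print(path_is_clear(path, crossing_pedestrian))     # False: replan needed
```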
Control
The Control system translates these planned actions into reality. It converts the Planner's decisions into precise inputs for the vehicle's steering, throttle, and brakes, keeping the car on the planned trajectory at the planned speed for a smooth, safe ride.
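A common building block for this translation is a PID controller, which nudges an actuator in proportion to the current error, its accumulated history, and its rate of change. The sketch below holds a target speed against a toy vehicle model; the gains and the plant response are invented for illustration, and production controllers are carefully tuned, bounded, and paired with lateral (steering) control:

```python
# Minimal PID speed-control sketch: turns "hold 15 m/s" into a throttle
# command. Gains and the toy vehicle response are made up for illustration.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=0.5, ki=0.1, kd=0.05)
speed = 0.0
for _ in range(50):                      # simulate 5 s at 10 Hz
    throttle = controller.step(target=15.0, measured=speed, dt=0.1)
    speed += throttle * 0.1              # toy first-order vehicle response
print(f"speed after 5 s: {speed:.1f} m/s")  # approaches the 15 m/s target
```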
Conclusion
Autonomous vehicles are intricate systems, with each subsystem playing a critical role in ensuring safe and efficient navigation. Companies should continue sharing insights into the inner workings of AVs to build public trust in this transformative technology.
- A balanced mix of cameras, radar, and LiDAR lets each sensor's strengths offset the others' weaknesses while still providing accurate, detailed information about the environment.
- The advancement of autonomous transportation through shared robotaxis relies on the four main subsystems working in concert: Perception, State Estimation, Planning and Prediction, and Control.
- As shared autonomous vehicles continue to reach the market, explaining their inner workings, from sensors to processing to machine learning, will be key to building public trust in this transformative technology.