The Self Driving Car Deployment Dilemma

The Uncertain Deployment Plans

The benefits of the Self-Driving Car (SDC) are appealing, mainly in terms of safety, economy, time efficiency, the environment, convenience, and inclusion for non-drivers.

Apart from fairy tales such as Aladdin's magic carpet and movies such as "The Car" (1977), the wishful dream of the SDC has been in the mind of mankind for more than 500 years; Leonardo da Vinci's self-propelled cart is one such innovation. In the 1920s-30s, the automotive industry concluded that the SDC was 20 years away. In the 1950s, and again in the 1970s and the 1990s, it was still conceived to be 20 years away! Following the DARPA challenges in the 2000s, and after the adoption of LiDAR (Light Detection And Ranging) for accurate 3D imaging of the surroundings, most carmakers and technology companies started serious R&D. Since 2010, and with the rapid evolution of Artificial Intelligence (Ai), the SDC has been conceived to be 5-10 years away. When I joined "Quanergy" (a leading Silicon Valley LiDAR company) in 2015, MoUs were signed with major OEMs such as Daimler/Mercedes to provide solid-state LiDAR serving their strategies for SDCs by 2019-2021. In 2015, Elon Musk said the SDC would be available in 2-3 years.

Today, the SDC plans of all OEMs have been postponed. China moved its ambitious SDC mass-production goal from 2020 to 2025. There are a few trials in a number of countries, limited to applications such as robo-taxis or robo-delivery vehicles in certain confined areas.

To gain better insight into this uncertain future of SDC deployment, three highly interrelated enabling pillars of the SDC need to be addressed and assessed, namely: Technology, Regulatory, and Trustability. In this article, we focus on Technology, being the most fundamental pillar in achieving functionality and safety; the other two pillars also rely heavily upon technology. We explain SDC technology by examining the SDC conceptual architecture, which is struggling to achieve levels 3-5 of "automation" as defined by SAE International.

Levels of Automation

SAE defined six levels of driving automation, from 0 (fully manual) to 5 (fully autonomous).

Level-0 autonomous driving (no automation): The driver performs all driving tasks and is in complete control of the vehicle.

Level-1 autonomous driving (driver assistance): The system performs a single driving task, such as Adaptive Cruise Control or Lane-Keep Assist.

Level-2 autonomous driving (partial driving automation): The car performs combined primary driving functions, such as adaptive cruise control together with lane keeping. The driver may temporarily take her/his hands off the wheel.

Level-3 autonomous driving (conditional driving automation): The car is aware of its surroundings and controls steering, throttle, and braking in most driving conditions. However, the driver has to be ready to take control when called upon.

Level-4 autonomous driving (high driving automation): The car fully perceives the environment and performs all driving tasks. The car should handle highly complex driving situations, and the driver can safely relax and even read a book, while the car responsibly and safely drives on the highway and/or city roads. The car can still call the driver to take over control, and it is able to bring itself to a safe stop.  

Level-5 autonomous driving (full driving automation): Level-5 autonomy requires no human attention at all. The SDC performs all driving tasks under all circumstances, including complex conditions such as crossings with distracted pedestrians. There is no need even for a steering wheel, brakes, or pedals.
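The six SAE levels above can be captured in a small lookup. The sketch below is purely illustrative; the enum names and the supervision helper are assumptions for this article, not part of any SAE-published API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (0-5), as summarized above."""
    NO_AUTOMATION = 0           # driver performs all tasks
    DRIVER_ASSISTANCE = 1       # one assist task, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined steering + speed assist; driver supervises
    CONDITIONAL_AUTOMATION = 3  # car drives in most conditions; driver on standby
    HIGH_AUTOMATION = 4         # car handles its whole design domain; can stop safely
    FULL_AUTOMATION = 5         # no human attention needed, any conditions

def driver_must_supervise(level: SAELevel) -> bool:
    """At levels 0-2 the driver must constantly supervise the vehicle."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))     # False
```

The key boundary sits between levels 2 and 3: below it, the human is the fallback at every instant; above it, the system is, at least conditionally.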

Conceptual Architecture of SDC

To achieve levels 3-5 of automation, the traditional SDC autonomy architecture stack includes the following modules/functionalities, in which Ai and several sensors are the most critical factors enabling SDC safety and functionality:

Localization: The SDC uses GPS and high-definition 3D maps of the driving area, including traffic signs/signals and other relevant infrastructure information, helping the car understand its location within its surroundings.

Object Detection, Classification and Tracking (Perception): The SDC uses a combination of sensors to detect, classify, and track objects in the surroundings. The most commonly used sensors are camera, LiDAR, radar, and ultrasound. The data collected from the sensors is processed through Ai algorithms to give the SDC situational awareness. There is broad consensus in the automotive industry that LiDAR is the prime and most accurate sensor for perceiving the 3D surroundings of the SDC.

Prediction: The prediction capability of the SDC is its most critical and difficult function, and it depends on very accurate perception and very strong Ai. Not only must the motions of all surrounding objects be predicted; the SDC itself must also be predictable to other cars.

Motion Trajectory Planning and Control: Building on the situational awareness, the destination target, and the predictions about surrounding objects, the SDC decides on its motion trajectory and applies controls to steering, throttle, and/or brake.
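The localization → perception → prediction → planning flow described above can be sketched as a toy pipeline. Everything here is an illustrative assumption: the data structures, the stubbed perception output, and the constant-velocity predictor are deliberately the simplest possible stand-ins, nothing like a production SDC stack:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    obj_id: int
    kind: str        # e.g. "car", "pedestrian" (illustrative labels)
    position: tuple  # (x, y) in metres, relative to the ego car
    velocity: tuple  # (vx, vy) in m/s

@dataclass
class WorldModel:
    ego_pose: tuple                        # (x, y, heading) from localization
    objects: list = field(default_factory=list)

def localize(gps_fix, hd_map):
    """Localization stub: real systems match LiDAR scans against the HD map."""
    return gps_fix

def perceive(sensor_frames):
    """Perception stub: pretend the sensors detected one pedestrian ahead."""
    return [TrackedObject(1, "pedestrian", (12.0, 0.5), (-1.0, 0.0))]

def predict(obj, horizon_s=2.0):
    """Constant-velocity prediction: the simplest possible motion model."""
    x, y = obj.position
    vx, vy = obj.velocity
    return (x + vx * horizon_s, y + vy * horizon_s)

def plan(world):
    """Trivially conservative planner: brake if anything is predicted in our lane."""
    for obj in world.objects:
        px, py = predict(obj)
        if abs(py) < 2.0 and 0 < px < 15.0:  # predicted close ahead, in our lane
            return "brake"
    return "cruise"

world = WorldModel(ego_pose=localize((0.0, 0.0, 0.0), hd_map=None),
                   objects=perceive(sensor_frames=None))
print(plan(world))  # brake
```

Each real module is, of course, a deep subsystem: perception alone involves the Ai algorithms and sensor fusion discussed above, and prediction in practice uses learned behavior models rather than constant velocity.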

The SDC Deployment Dilemma

The two most mature and advanced pre-SDCs today are Waymo and Tesla. 

Waymo has driven 20+ million miles on public roads and 20+ billion miles in simulation to train its Ai deep-learning perception/detection/planning algorithms. It has been trialed since 2019 as a level-4 robo-taxi, and it uses in-house-developed 360° mechanical LiDAR sensors. The cost of automation may reach $100K per SDC; therefore, it is not practical for large-scale deployment.

After some fatal accidents, the Tesla "Auto-Pilot" was classified as level 2+. Elon Musk/Tesla claimed that Ai without LiDAR is sufficient. In fact, some experts have concluded that mimicking human driving requires hundreds of "narrow-Ai" systems combined, resembling a form of AGI (Artificial General Intelligence), which is seen by many experts to be decades away. Nevertheless, last month Tesla was seen testing its system using a Luminar LiDAR, which could expedite reaching level-3 or level-4 autonomy.

In conclusion, the SDC is not technologically ready, and when it is, the Regulatory and Trustability pillars will have to catch up. SDCs driving in the wild may only become possible after LiDAR sensors become practically cheap (<$100), reliable, and small. This should be fulfilled by CMOS-based OPA (Optical Phased Array) solid-state LiDAR when available (expected in 1-2 years). Further, smarter libraries of LiDAR point-cloud-based Ai algorithms need to become available to designers. Currently, almost all OEMs and most universities are working in this direction.

Dr. Essam Mitwally is a Life Senior Member of IEEE, with a PhD from the Royal Institute of Technology, Stockholm. He is the founder of AiTech4U (Artificial Intelligence Technology for You). AiTech4U is a startup at Dtec in Dubai and SRTIP in Sharjah, aiming to facilitate "mature", "trusted" Ai-based products to empower organizations and people in the region for effective performance, convenience, and safety. Before AiTech4U, he was the Dubai-based Director of the Silicon Valley startup "Quanergy", driving the use of LiDAR sensors in Ai applications such as autonomous cars and automated security. He was also a senior consultant for Akhet-Consulting in Dubai. Previously, he worked at STC Engineering, Operations and Regulations, and he was the cofounder of the STC R&D Department. As Chief of Engineering, Switching Design, at Bell Canada International and STC, he was key to the introduction of several networks/services. He has published many conference papers and magazine articles and conducted numerous training workshops. He has chaired conferences and moderated sessions.