Editor's Note: This piece was written by Sam Abuelsamid, a senior research analyst at Navigant Research. The opinions represented in this piece are independent of Smart Cities Dive's views.
As we edge closer to the day when people no longer actively control how vehicles get them to their desired destinations, engineers are working feverishly to make sure we can arrive safely. Automated driving technology has been slowly evolving for decades, but in the past five years the necessary pieces to really make it work finally started to come together. The fabled self-driving car is ready to move from research project to production viability. Now the hard work begins for the engineering teams.
Today, any halfway decent engineer with about $100,000 in seed funding and a few months of dedicated development work can buy all of the necessary off-the-shelf pieces to put together a passable automated driving demo. There are likely dozens of startups in Silicon Valley that have done exactly that with some lidar sensors, cameras, and radar units from a handful of automotive suppliers.
Hacking away at some software frameworks can probably get a startup to a second round of funding. What it will not do is produce a product that I would ever let my family ride in. As an engineer, I spent the better part of two decades working on some of the electronic control systems that were the precursors of the technology generating hype today.
I sat through many technical presentations at conferences where researchers showed off simulations of amazing new anti-lock braking systems or traction control algorithms operating on a homogeneous surface. By that time, I had already been through more than one reboot of my own systems, and knew that those simplistic algorithms would never function in the real world of potholes, rumble strips, black ice patches, and surface transitions.
I learned early how much more there is to making a system safe enough for non-engineers to use — and that was with systems that humans could still take control of when the inevitable failure occurred.
The automated driving systems that will hit the road in the next few years must have much more sophisticated diagnostic systems, and a level of redundancy that has never before been implemented in mass-production vehicles. A so-called level 4 automation system must be capable of functioning safely within its operating domain without the need for human intervention. That means if a sensor, computer, or actuator fails, a vehicle cannot rely on a person to pull it over and bring it to a safe stop. The virtual driver must have all the tools to do this on its own.
These systems will have three main functional areas: the perception system, the compute platform, and the actuation system.
The perception system consists of sensors, communications, and high definition maps that enable the vehicle to sense its surroundings and understand where it is. On today's level 1 cars with radar-based adaptive cruise control or lane keeping systems, if a sensor gets coated in salt spray and cannot see, the system simply disengages and the driver must take control. In a level 4 (L4) car with no steering wheel or pedals, multiple types of sensors will be needed to cover for the weaknesses of each type: cameras can struggle in poor weather or harsh lighting, while radar and lidar can be blocked by road salt or slush. Vehicle-to-vehicle communications and 3-D maps provide situational awareness beyond the line of sight available to the sensors.
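To make that cross-coverage idea concrete, a minimal sketch follows. It is illustrative only: the sensor names, the "at least two healthy modalities" rule, and the SensorHealth structure are assumptions invented for the example, not any manufacturer's actual design.

```python
from dataclasses import dataclass

@dataclass
class SensorHealth:
    """Self-reported status of one sensor modality."""
    name: str
    operational: bool   # hardware and cleaning system OK
    degraded: bool      # partially blocked (e.g., slush, glare)

def coverage_sufficient(sensors: list[SensorHealth]) -> bool:
    """Illustrative rule: keep driving only if at least two
    independent modalities remain fully operational, so the
    weakness of any single sensor type is still covered."""
    healthy = [s for s in sensors if s.operational and not s.degraded]
    return len(healthy) >= 2

# Example: camera struggling with harsh lighting, lidar coated in slush.
status = [
    SensorHealth("camera", operational=True, degraded=True),
    SensorHealth("radar", operational=True, degraded=False),
    SensorHealth("lidar", operational=False, degraded=True),
]
if not coverage_sufficient(status):
    print("Coverage too thin: begin a safe-stop maneuver")
```

The real decision logic would weigh which directions each sensor covers and what the vehicle is currently doing, but the principle is the same: no single blinded sensor should leave the virtual driver without options.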
It has been projected that L4 and level 5 (L5) vehicles will need an average of three lidar sensors, six cameras, and four radar units to provide a full 360-degree view around the vehicle. The exact number on each model will vary depending on the specific sensors used and how they are packaged on the vehicle. Each sensor will also need a mechanism to keep it clean enough to see without a person having to get out and wipe it off, the way drivers in snowy climates do with headlights today.
In the higher layers of the perception stack, sophisticated software fuses these signals into a coherent picture of the world around the vehicle. Using complementary information from the different sensors, pedestrians can be distinguished from raccoons or cardboard boxes, and the speed and trajectory of each target relative to the vehicle can be calculated. Doing that in real time requires significant computing resources, and if a computer suffers any sort of failure while driving, a backup will be required.
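As a rough illustration of what that fusion step does, here is a toy sketch that combines position estimates from several sensors and derives a relative velocity from successive fused positions. Production perception stacks use far more sophisticated methods (Kalman filters, learned trackers); the median-based fusion and the numbers below are invented for the example.

```python
import statistics

def fuse_position(detections):
    """Toy fusion: each detection is (sensor, x, y) in meters in the
    vehicle frame; take the per-axis median so one bad reading
    (e.g., a lidar ghost) does not skew the estimate."""
    xs = [d[1] for d in detections]
    ys = [d[2] for d in detections]
    return statistics.median(xs), statistics.median(ys)

def relative_velocity(prev_pos, curr_pos, dt):
    """Speed of a target relative to the vehicle, estimated from
    two fused positions taken dt seconds apart."""
    vx = (curr_pos[0] - prev_pos[0]) / dt
    vy = (curr_pos[1] - prev_pos[1]) / dt
    return vx, vy

# A pedestrian seen by camera, radar, and lidar in two 100 ms frames.
frame1 = [("camera", 12.1, 2.0), ("radar", 12.3, 2.1), ("lidar", 12.2, 2.0)]
frame2 = [("camera", 11.6, 2.0), ("radar", 11.8, 2.1), ("lidar", 11.7, 2.0)]
p1, p2 = fuse_position(frame1), fuse_position(frame2)
print("closing velocity (m/s):", relative_velocity(p1, p2, dt=0.1))
```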
Such highly automated vehicles will probably need at least two independent compute platforms running in parallel at all times. Some manufacturers may simply double up on the main unit, perhaps with a watchdog computer to verify that both systems come to the same results, but most are likely to opt for a so-called big-little architecture. In that configuration, a smaller, lower-power compute platform of the type available from veteran automotive suppliers runs a similar algorithm in parallel with the main control unit to ensure that they agree. In the event of a failure in the primary system, the secondary unit also provides enough performance to bring the vehicle to a safe location.
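A minimal sketch of that cross-check-and-failover idea might look like the following. The agreement thresholds, the command format, and the safe-stop behavior are all placeholders assumed for the example, not any supplier's real implementation.

```python
from dataclasses import dataclass

@dataclass
class DriveCommand:
    steer_deg: float
    brake_pct: float

def commands_agree(primary: DriveCommand, backup: DriveCommand,
                   steer_tol: float = 1.0, brake_tol: float = 5.0) -> bool:
    """Watchdog-style check: the big and little computers should
    reach nearly the same answer on every control cycle."""
    return (abs(primary.steer_deg - backup.steer_deg) <= steer_tol
            and abs(primary.brake_pct - backup.brake_pct) <= brake_tol)

def select_command(primary, backup, primary_alive: bool) -> DriveCommand:
    """Big-little failover: follow the primary in normal operation,
    but if it dies or the two units disagree, let the smaller unit
    bring the vehicle to a safe stop."""
    if primary_alive and commands_agree(primary, backup):
        return primary
    return DriveCommand(steer_deg=backup.steer_deg, brake_pct=100.0)

cmd = select_command(DriveCommand(2.0, 0.0), DriveCommand(2.1, 0.0),
                     primary_alive=True)
print(cmd)
```

The design choice the sketch highlights is that the little computer never needs to match the primary's full capability; it only needs enough performance to detect disagreement and execute a safe stop.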
All of this sensing and number crunching is pointless without the actuators that actually make the wheels accelerate, stop, and turn. In a traditional human-driven car, if the power steering or stability control actuator fails, the driver can still control the vehicle, albeit with more effort. If the steering or brake actuator fails in a highly automated vehicle, a secondary actuator needs to take over.
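In code form, that actuator failover reduces to something like the sketch below; the actuator class and fault simulation are hypothetical stand-ins for the real electromechanical hardware and its diagnostics.

```python
class SteeringActuator:
    """Hypothetical stand-in for one electric steering motor."""
    def __init__(self, name):
        self.name = name
        self.faulted = False

    def apply(self, angle_deg):
        if self.faulted:
            raise RuntimeError(f"{self.name} actuator failed")
        print(f"{self.name}: steering to {angle_deg} deg")

def steer_with_redundancy(primary, secondary, angle_deg):
    """Try the primary steering actuator; on any fault, hand the
    same command to the independent secondary unit so the virtual
    driver never loses the ability to turn the wheels."""
    try:
        primary.apply(angle_deg)
    except RuntimeError:
        secondary.apply(angle_deg)

main, backup = SteeringActuator("primary"), SteeringActuator("secondary")
main.faulted = True          # simulate a failure while driving
steer_with_redundancy(main, backup, angle_deg=3.5)
```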
None of this redundancy is needed for the basic operation of automated driving as witnessed in the countless demonstrations provided by technology companies and entrepreneurial startups. It is, however, going to be essential to deploying a safe and robust ecosystem of highly automated vehicles in the coming decade.
Technology startups like the idea of deploying a minimum viable product to get user adoption and feedback, which in turn drives rapid iteration and improvement. The threshold of minimum viability for a photo-sharing app or a mobile game is low, since the consequence of failure is mere annoyance. With automated vehicles that are expected to transport us and our families safely to work, school, and leisure activities, minimum viability will require a great deal more effort to achieve. There is room in this development landscape for both the fast movers and the traditional incumbents to collaborate on the creation of a bright new future for mobility.