Sensors, Video, and a 1965 Ferrari

It should come as little surprise that Stanford is playing a large role in the rapid progress toward fully autonomous vehicles. Research data and video recorded by John Kegelman, Lene Harbott, Chris Gerdes, and others in the Dynamic Design Lab are now deposited in, and streamable from, the Stanford Digital Repository (SDR). These data are useful in a variety of ways, such as informing self-driving cars that can respond to changing conditions the way an expert driver handles a race car on a track.

It was only 11 short years ago that Stanford's Stanley won the 2005 DARPA Grand Challenge by driving 132 miles through the California and Nevada desert. What seemed amazing at the time has rapidly become a feature available to everyday drivers. Though not fully autonomous (yet), Tesla's Autopilot mode is available today, a company called Otto is delivering beer with trucks that drive themselves, and Ford is promising fully autonomous ride-sharing vehicles within five years.

A lot of hardware is needed for a car to operate without a human driver, such as high-powered CPUs, cameras, radar, GPS, and other sensors and detectors, but that's only one part of the equation. You also need sophisticated software to fuse the output from all of these sensors and process it in real time, deciding on the inputs to the vehicle's sub-systems, such as the steering, brakes, and power output. And you need to do this many times a second, accounting for changes in road conditions, weather, and other drivers, all while reading road signs, obeying speed limits and other laws, and keeping the car within its normal operating conditions.
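The cycle described above can be sketched in a few lines of code. This is only an illustrative toy, not any real vehicle's software: the sensor fields, thresholds, and decision rules here are all invented for the example.

```python
import time

def read_sensors():
    # Hypothetical fused sensor snapshot; in a real vehicle these values
    # would be derived from cameras, radar, GPS, and wheel-speed sensors.
    return {"speed_mps": 20.0, "lane_offset_m": 0.3, "obstacle_dist_m": 55.0}

def decide(state, speed_limit_mps=25.0):
    # Toy decision logic: steer back toward the lane center, and brake
    # if over the speed limit or closing on an obstacle.
    steering = -0.1 * state["lane_offset_m"]  # proportional correction
    braking = (state["speed_mps"] > speed_limit_mps
               or state["obstacle_dist_m"] < 30.0)
    return {
        "steering": steering,
        "throttle": 0.0 if braking else 0.2,
        "brake": 1.0 if braking else 0.0,
    }

def control_loop(iterations=3, hz=50):
    # Run the sense -> decide -> actuate cycle many times a second.
    commands = []
    for _ in range(iterations):
        state = read_sensors()
        commands.append(decide(state))
        time.sleep(1.0 / hz)
    return commands
```

A real system would run such a loop far faster, with redundant sensors and far richer decision logic, but the sense/decide/actuate structure is the same.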

Essentially, you need to teach a computer how to drive, which is probably even harder than teaching a 16-year-old to drive (though possibly with less cursing).

One way to do this is to show the computer how an expert drives, and that is exactly what John Kegelman, Lene Harbott, Professor Chris Gerdes, and the Dynamic Design Lab did. By instrumenting vintage race cars and recording expert drivers as they negotiated the track, they collected data such as accelerator pedal position, brake pedal position, and steering input, then correlated these measurements with track position and corresponding video from multiple viewpoints. These data form the basis of the Revs Vehicle Dynamic Database and provide valuable insight into how vehicles operate at the limits of their capabilities. Among the potential uses is informing the algorithms that will drive (pun intended) future autonomous vehicles. No cursing required.
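Correlating channels like pedal position and track position comes down to aligning time-series recorded at different rates. A minimal sketch of that alignment step is shown below; the sample data and field layout are hypothetical, not the lab's actual format.

```python
from bisect import bisect_left

def align_channels(reference, channel):
    """For each (timestamp, value) sample in `reference`, find the
    sample in `channel` closest in time, producing matched triples
    suitable for correlation. Both lists must be sorted by timestamp."""
    times = [t for t, _ in channel]
    aligned = []
    for t, ref_val in reference:
        i = bisect_left(times, t)
        # Pick the nearer neighbor of the insertion point.
        if i > 0 and (i == len(times) or abs(times[i - 1] - t) <= abs(times[i] - t)):
            i -= 1
        aligned.append((t, ref_val, channel[i][1]))
    return aligned

# Hypothetical samples: throttle position logged at 100 Hz, track
# position logged at a slower, slightly offset rate.
throttle = [(0.00, 0.8), (0.01, 0.9), (0.02, 1.0)]
track_pos = [(0.005, 120.4), (0.015, 121.1)]
```

Calling `align_channels(throttle, track_pos)` pairs each throttle sample with the nearest track-position reading, after which the matched series can be plotted or analyzed together.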

This Data Story was written by Peter Mangiafico.

Find out more about the data featured in this Data Story.