In Chandler, Ariz., where Alphabet is testing its autonomous cars, its software came across something it had never encountered before: a flashing yellow left-turn signal.
When the signal is flashing, drivers, human and robot alike, are expected to maneuver their way through oncoming traffic. It's a difficult move for humans, let alone for a computer that has never come across a traffic light like this.
The safety driver behind the wheel of the self-driving car took over, and Alphabet's researchers later translated the event into a virtual simulation where the company could teach the software how to handle the situation.
This is a crucial part of getting fully self-driving cars on the road. Since it's nearly impossible for Alphabet's cars to encounter every driving situation in the real world, testing challenging scenarios virtually is integral to ensuring these cars can safely drive themselves at all times.
An autonomous car isn't considered fully self-driving until it can drive in all situations and geographies. That's why companies often promote the number of miles their autonomous cars have driven: More driving experience means a company's car should be able to navigate more scenarios.
Today, Alphabet's cars have driven more than three million real-world miles. But in simulation, they drive about eight million miles a day; in 2016 alone, Alphabet's autonomous cars drove 2.5 billion miles virtually.
“[We] focus on the most interesting driving out there and most challenging scenarios [in simulation],” James Stout, the lead software engineer of the simulation team told Recode. “You might see a challenging situation once in every hundred or thousand miles [in the real world]. In simulation, [we can create those].”
That’s what Alphabet’s self-driving arm Waymo did with this particular intersection. It’s just one example, but it illustrates how these cars learn how to drive in all settings.
"We don't have to test every possible variation at this flashing yellow light," Stout said. "Instead what we can do is teach our cars basic skills by giving it example scenarios to make sure it's built those fundamental skills, to make sure it can handle [all versions of this situation] well."
In situations like this, the safety driver behind the wheel of the car will take over and typically note that this is a scenario that requires more testing.
Alphabet's self-driving engineers will then go back to the virtual world they've created to test its cars by recreating this exact event. They build the model using real data from the array of cameras, radar and lidar sensors on the car that encountered the new situation.
Then the engineers will add thousands of variations to this single scene. That process, called fuzzing, can include adding anything from faster-moving cars to motorcycles splitting lanes to joggers running alongside the road.
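Waymo has not published its simulation tooling, but the fuzzing step described above can be sketched in rough terms: start from one recorded scenario and generate many perturbed copies of it. Everything here, from the parameter names to the kinds of perturbations, is a hypothetical illustration, not Waymo's actual system.

```python
import random

def fuzz_scenario(base, n_variants=1000, seed=0):
    """Generate perturbed copies of a base driving scenario (illustrative only)."""
    rng = random.Random(seed)
    extra_agents = ["lane-splitting motorcycle", "jogger on shoulder", "stopped delivery truck"]
    variants = []
    for _ in range(n_variants):
        v = dict(base)
        # Vary oncoming traffic speed around the recorded value.
        v["oncoming_speed_mph"] = base["oncoming_speed_mph"] + rng.uniform(-10, 15)
        # Vary the gap between oncoming vehicles, keeping it physically plausible.
        v["gap_seconds"] = max(0.5, base["gap_seconds"] + rng.uniform(-2, 2))
        # Occasionally drop an extra road user into the scene.
        if rng.random() < 0.3:
            v["extra_agent"] = rng.choice(extra_agents)
        variants.append(v)
    return variants

# One recorded flashing-yellow left turn becomes a thousand test cases.
base = {"signal": "flashing yellow left", "oncoming_speed_mph": 40, "gap_seconds": 3.0}
variants = fuzz_scenario(base)
print(len(variants))  # 1000
```

The point of the sketch is the multiplier: a single rare real-world event, seen perhaps once in thousands of miles, yields an arbitrarily large family of related test cases to re-drive in simulation.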
Once the car drives and re-drives that specific situation and its many variations, that skill is then added to its knowledge base and shared with Waymo’s network of self-driving cars.
Now, or at least the hope is, these cars will be able to make these left turns confidently regardless of the circumstances.