In the months since the coronavirus prompted shelter-in-place orders worldwide, entire firms, industries, and economies have been decimated. One market that appeared poised to escape the impact was autonomous vehicles, particularly in the case of companies whose cars can be used to transport supplies to health care workers. But it appears even they aren’t immune.

In March, many high-profile startups and spinoffs, including Uber, GM’s Cruise, Aurora, Argo AI, and Lyft, suspended real-world testing of their autonomous vehicle fleets, citing safety concerns and a need to limit contact between drivers and riders. Waymo went one step further, announcing that it would pause its commercial Waymo One operations in Phoenix, Arizona (including its fully driverless cars, which don’t require human operators behind the wheel) until further notice.

The interruptions present a mammoth engineering challenge: how to replicate the data collected by real-world cars while the fleets remain grounded for months or longer. This problem has never been tackled before, and some experts believe it’s insurmountable. Even Waymo CEO John Krafcik has said that real-world experience is “unavoidable” in driverless car development.

But some of the industry’s biggest players, including Waymo, are trying anyway.

Data is the new oil


Virtually every autonomous vehicle development pipeline leans heavily on logs from sensors mounted to the outsides of cars, including lidar sensors, cameras, radar, inertial measurement units (IMUs), odometry sensors, and GPS. This data is used to train the family of machine learning models that underpin driverless vehicles’ perception, prediction, and motion planning capabilities. These systems make sense of the world and the objects within it and dictate the paths vehicles ultimately take.
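Because those sensors tick at different rates, one early step in turning such logs into training data is time alignment: grouping each lidar sweep with the closest reading from every other stream. A minimal sketch of that idea in Python — the `SensorReading` layout and align-to-lidar policy are illustrative, not any company’s actual schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorReading:
    timestamp: float  # seconds since the start of the log
    source: str       # e.g. "lidar", "camera", "radar", "imu", "gps"
    payload: object   # raw measurement (point cloud, image, etc.)

def nearest_reading(readings: List[SensorReading], t: float) -> SensorReading:
    """Pick the reading closest in time to t."""
    return min(readings, key=lambda r: abs(r.timestamp - t))

def align_to_lidar(log: List[SensorReading]) -> List[dict]:
    """Pair each lidar sweep with the temporally closest reading from
    every other sensor, yielding one fused training sample per sweep."""
    lidar = [r for r in log if r.source == "lidar"]
    others = {}
    for r in log:
        if r.source != "lidar":
            others.setdefault(r.source, []).append(r)
    samples = []
    for sweep in lidar:
        sample = {"lidar": sweep}
        for source, readings in others.items():
            sample[source] = nearest_reading(readings, sweep.timestamp)
        samples.append(sample)
    return samples
```

Real pipelines add interpolation, extrinsic calibration, and motion compensation on top of this, but nearest-timestamp grouping is the basic shape of the operation.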

For instance, Tesla compiled a corpus of tens of thousands of occluded stop signs to teach a model to recognize similar signs in the wild. And GM’s Cruise used a mixture of synthetic and real-world audio and visual data to train a system that detects police cars, fire trucks, ambulances, and other emergency vehicles.

Real-world data collection also entails mapping, which in the context of autonomous vehicles refers to the creation of 3D, high-definition, centimeter-level maps of roads, buildings, vegetation, and other static objects in the world. Ahead of testing in a new location, companies like Waymo and Cruise deploy sensor-equipped, manually driven cars to map the routes driverless vehicles might conceivably take. These maps help the vehicles localize themselves in the world, and they also provide valuable contextual information, like speed limits and the location of traffic lanes and pedestrian crossings.

In lieu of all this, autonomous vehicle companies must rely on the data they’ve collected so far, along with perturbations or modifications of that data, for system development and evaluation. Fortunately, many of these companies have invested in simulation to scale testing beyond what’s possible in the real world.



Waymo says it drives 20 million miles a day in its Carcraft simulation platform, the equivalent of over 100 years of real-world driving on public roads. Moreover, the company says that Waymo Driver, its autonomous vehicle software suite, has amassed over 15 billion simulated autonomous miles to date. That’s up from 10 billion simulated autonomous miles as of July 2019.
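The arithmetic behind that comparison is straightforward. Assuming an average urban driving speed of roughly 20 mph (an assumption, not a figure Waymo has published), 20 million miles a day works out to more than a century of continuous driving:

```python
SIM_MILES_PER_DAY = 20_000_000
AVG_SPEED_MPH = 20          # assumed average urban driving speed
HOURS_PER_DAY = 24

# Car-days of nonstop driving needed to cover the same distance.
continuous_car_days = SIM_MILES_PER_DAY / (AVG_SPEED_MPH * HOURS_PER_DAY)
years = continuous_car_days / 365   # roughly 114 years at these assumptions
```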

“There’s a lot of information in [Carcraft],” Jonathan Karmel, Waymo’s product lead for simulation and automation, told VentureBeat. “That’s why we use a range of tools internally to extract the most important signals — the most interesting miles and useful information.”

Using web-based interfaces to interact with Carcraft simulations, Waymo engineers leverage real-world data to prepare for edge cases and explore ideas, selecting encounters from Waymo’s more than 20 million autonomous miles driven on roads in 25 cities. As both the software and the scene evolve, keeping the environment around Waymo Driver up to date maintains realism. That entails modeling agent behavior and using reactive agents (such as other cars, cyclists, and pedestrians) that respond to the new positions of the virtual vehicles.

Waymo says it also synthesizes realistic sensor data for cars and models scenes in updated environments. As its virtual cars drive through the same scenarios Waymo vehicles experience in the real world, engineers modify the scenes and evaluate potential situations. They also manipulate these scenes by virtually adding new agents into the scenario, such as cyclists, or by modulating the speed of oncoming traffic to gauge how the Waymo Driver would have reacted.
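That kind of scene manipulation amounts to parameter fuzzing over a logged encounter: sweep a few variables, inject new agents, and replay each variant. A minimal sketch — the `Scenario` fields are illustrative placeholders, since Waymo’s actual scenario format is not public:

```python
import itertools
from dataclasses import dataclass, replace
from typing import List, Optional

@dataclass(frozen=True)
class Scenario:
    oncoming_speed_mph: float
    extra_agent: Optional[str] = None  # e.g. "cyclist" injected into the scene

def fuzz(base: Scenario,
         speeds: List[float],
         agents: List[Optional[str]]) -> List[Scenario]:
    """Expand one logged encounter into a grid of variants by sweeping
    oncoming-traffic speed and optionally injecting a new agent."""
    return [replace(base, oncoming_speed_mph=s, extra_agent=a)
            for s, a in itertools.product(speeds, agents)]

variants = fuzz(Scenario(oncoming_speed_mph=25.0),
                speeds=[15.0, 25.0, 35.0],
                agents=[None, "cyclist"])
```

Each variant would then be replayed against the driving software, which is how a single real-world encounter multiplies into thousands of simulated tests.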

Over time, the simulation scenarios are amplified through countless variations to assess the desired behavior of Waymo Driver. That information is used to improve both safety and performance. “I look at sensor simulation work as being able to [augment] our real-world miles,” said Karmel. “We have the capabilities to incrementally understand the real world as things change, and as we continue to make changes to improve the state of [our systems’] performance, we continue to [create] new challenges in simulation.”

The challenges of developing autonomous vehicles during a pandemic

Above: Real-world Waymo car data used in simulation.

Image Credit: Waymo

In addition to developing scenarios informed by real-world driving data, Waymo deploys never-before-tested synthetic scenarios captured from its own test track. The company says this allows it to keep growing the number of miles it can simulate. The majority of learning and development is done in simulation, according to Karmel, well before updated versions of Waymo Driver hit real-world roads.

An oft-overlooked aspect of these learning and development processes is comfort. Waymo says it evaluates a number of “comfort metrics,” like the ways people respond to vehicles’ various driving behaviors. This on-road testing feedback is used to train AI models and run them in simulation to validate how different scenarios affect rider comfort, from figuring out the ideal braking speed to ensuring the car drives smoothly.

“We’re … beginning to better understand the components that make a ride comfortable,” explained Karmel. “Some of the key components are things like acceleration and deceleration, and we want to receive that information into simulation to predict what we think a rider or driver reaction would have been in the real world. There’s a machine learning model to predict what those reactions are in [Carcraft].”

Beyond Carcraft, Waymo’s engineers tap tools like Content Search, Progressive Population-Based Augmentation (PPBA), and Population-Based Training (PBT) to support various development, testing, and validation efforts. Content Search draws on technology similar to that powering Google Photos and Google Image Search to let data scientists locate objects in Waymo’s driving history and logs. PBT, which was architected in collaboration with Alphabet’s DeepMind, begins with multiple machine learning models and replaces underperforming members with “offspring”; the technique cut false positives by 24% in pedestrian, bicyclist, and motorcyclist recognition tasks. As for PPBA, it bolsters object classifiers while lowering costs and accelerating the training process, mainly because it needs only annotated lidar data for training.
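The exploit-and-explore loop at the heart of PBT fits in a few lines. The sketch below is in the spirit of the published technique, not Waymo’s implementation; the single `learning_rate` hyperparameter and the scoring function are illustrative stand-ins:

```python
import random
from dataclasses import dataclass

@dataclass
class Member:
    learning_rate: float
    score: float = 0.0  # higher is better, e.g. recall on pedestrians

def pbt_step(population, evaluate, frac=0.25, rng=random):
    """One round of Population-Based Training: the bottom quartile copies
    hyperparameters from the top quartile ("exploit"), then perturbs
    them ("explore")."""
    for m in population:
        m.score = evaluate(m)
    population.sort(key=lambda m: m.score, reverse=True)
    n = max(1, int(len(population) * frac))
    for loser, winner in zip(population[-n:], population[:n]):
        loser.learning_rate = winner.learning_rate * rng.choice([0.8, 1.25])
    return population
```

Repeated over many rounds, the population drifts toward hyperparameter schedules that no single fixed configuration would have found.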


GM’s Cruise additionally runs numerous simulations — about 200,000 hours of compute jobs every day in Google Cloud Platform — one among which is an end-to-end, three-dimensional Unreal Engine atmosphere that Cruise staff name The Matrix. It allows engineers to construct any sort of state of affairs they’re capable of dream up, and to synthesize sensor inputs like digital camera footage and radar feeds to autonomous digital vehicles.

“Handling the long tail is the reason autonomous vehicles are one of the most difficult and exciting AI problems on the planet, and also the fact that we expect extremely high performance levels from autonomous vehicles and their underlying models,” Cruise head of AI Hussein Mehanna told VentureBeat. “When you look at the training data, you have thousands of lidar scan points, high-resolution images, radar data, and information from all sorts of other sensors. All of that requires a significant amount of infrastructure.”


Above: An end-to-end simulation environment inside The Matrix, GM Cruise’s simulation platform.

Image Credit: GM Cruise

Cruise spins up 30,000 instances daily across over 300,000 processor cores and 5,000 graphics cards, each of which loops through a single drive’s worth of scenarios and generates 300 terabytes of results. (It’s essentially like having 30,000 virtual cars driving around at the same time.) Among other testing approaches, the company employs replay, which involves extracting real-world sensor data, playing it back against the car’s software, and comparing the performance with human-labeled ground truth data. It also leverages planning simulation, which lets Cruise create up to hundreds of thousands of variations of a scenario by tweaking variables like the speed of oncoming cars and the space between them.
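Replay testing boils down to a frame-by-frame comparison between what the software reports and what human labelers saw. The toy version below, which represents each frame’s perception output as a set of object labels, is a deliberate simplification of a real pipeline:

```python
from typing import List, Set, Tuple

def replay_score(detected: List[Set[str]],
                 ground_truth: List[Set[str]]) -> Tuple[int, int]:
    """Replay-style evaluation: for each logged frame, compare the objects
    the software reports against human labels, tallying false positives
    (reported but not labeled) and false negatives (labeled but missed)."""
    false_pos = false_neg = 0
    for found, labeled in zip(detected, ground_truth):
        false_pos += len(found - labeled)   # phantom detections
        false_neg += len(labeled - found)   # missed objects
    return false_pos, false_neg
```

Because the sensor data is fixed, any change in these tallies between two software versions can be attributed to the code, which is what makes replay a useful regression test.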

Tools inside Cruise’s engineering suite include the web-based Webviz, which has its roots in a hackathon project and is now used by roughly a thousand monthly active employees. The latest production version lets engineers save configurations, share various parameters, and watch vehicle simulations as they run on remote servers. There’s also Worldview, a lightweight and extensible 2D/3D scene renderer that lets engineers quickly build custom visualizations.


Aurora, the self-driving car company founded by former Waymo engineer Chris Urmson, says that the virtual cars inside its Virtual Testing Suite platform complete over 1 million tests per day on average. This platform and other tools enable the company’s engineers to quickly identify, review, categorize, and convert the majority of events and interesting on-road scenarios into virtual tests, and to run thousands of tests to evaluate a single change to the master codebase.

The Virtual Testing Suite comprises a mixture of codebase tests, perception tests, manual driving evaluations, and simulations. Engineers write both unit tests (e.g., checking whether a method that calculates velocity gives the right answer) and integration tests (e.g., checking whether that same method works well with other parts of the system). New work must pass all relevant tests before it’s merged into the larger codebase, thereby allowing engineers to identify and fix any issues.
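The velocity example translates almost directly into code. A minimal sketch — the `velocity` helper and its test are hypothetical illustrations of the practice, not Aurora’s actual method:

```python
def velocity(p0: float, p1: float, t0: float, t1: float) -> float:
    """Average velocity between two timestamped positions
    (meters and seconds in, meters per second out)."""
    if t1 <= t0:
        raise ValueError("timestamps must be strictly increasing")
    return (p1 - p0) / (t1 - t0)

def test_velocity():
    # Unit test in the style described above: check the method in isolation.
    assert velocity(0.0, 10.0, 0.0, 2.0) == 5.0
    assert velocity(5.0, 5.0, 0.0, 1.0) == 0.0
```

An integration test would then exercise the same method through the components that consume it, such as a tracker or motion planner, rather than in isolation.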

A series of specialized perception tests in simulation are created from real-world log data, and Aurora says it’s developing “highly realistic” sensor simulations so that it can generate tests for rare and high-risk scenarios. Other experiments the company regularly runs in the Virtual Testing Suite assess how well Aurora Driver, Aurora’s full-stack driverless platform, performs across a range of driving benchmarks.

No matter the nature of the test, custom-designed tools automatically extract information from Aurora’s log data (e.g., how fast a pedestrian is walking) and plug it into various simulation models, a process designed to save engineers time.


Above: A visualization of real-world driving data from one of Aurora’s test cars.

Image Credit: Aurora

The company says that in the months since Aurora halted all real-world testing, its vehicle operators have joined forces with its triage and labeling teams to mine manual and autonomous driving data for on-road events that can be turned into simulated virtual tests. Aurora also says it’s building new tools, such as a web app designed to make crafting simulations even easier for engineers, and that it’s improving existing pipelines that will support the creation of new testing scenarios.

Elsewhere, Aurora engineers are continuing to build and refine the company’s vehicle maps, the Aurora Atlas, in areas where Aurora Driver will operate when it resumes on-road testing, a spokesperson tells VentureBeat. They’re adding new maps to Cloud Atlas, the versioned database specifically designed to hold Atlas data, tapping into machine learning models that automatically generate annotations like traffic lights.

Advancements in AI and machine learning have made it easier to teach car-driving agents to navigate never-before-seen roads inside simulations. In a recent technical paper, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory describe an approach not unlike Aurora’s that involves Virtual Image Synthesis and Transformation for Autonomy (VISTA), a photorealistic simulator that uses only a real-world corpus to synthesize viewpoints from potential vehicle trajectories. VISTA was able to train a model that navigated a car through previously unseen streets, even when the car was positioned in ways that mimicked near-crashes.

“We don’t anticipate that COVID-19 will delay our progress in the long term, largely due to our investments in virtual testing, but it has demonstrated the urgency for self-driving transportation that can move people and goods safely and quickly without the need of a human driver,” Urmson said in a statement. “That’s why we’re more committed to our mission than ever and continue to hire experts in all disciplines, pay everyone at the company, and find ways to advance development on the Aurora Driver. As our industry comes together, ingenuity, dedication, and thoughtful leadership will get us through these challenging times.”


Uber’s Advanced Technologies Group (ATG), the division spearheading Uber’s autonomous vehicle projects, maintains a team that continuously expands the test set inside Uber’s simulator based on test track and road behavior data. Every time any adjustment is made to the self-driving system’s software, it’s automatically re-run against the full suite of simulation tests, ATG head of systems engineering and testing Adrian Thompson told VentureBeat.

ATG engineers use tools like DataViz, a web-based interface that ATG developed in collaboration with Uber’s Data Visualization Team, to see how cars in simulation interpret and perceive the virtual world. DataViz offers realistic representations of elements like cars, ground imagery, lane markers, and signs. It does the same in abstract form (by way of color and geometric coding) for algorithmically generated information such as object classification, prediction, planning, and lookaheads. Together, these enable employees to inspect and debug information collected from offline and online testing, as well as to explore information in the process of creating new scenarios.


Above: Uber’s Autonomous Visualization System, a web-based platform for vehicle data.

Image Credit: Uber

Thompson says that Uber’s decision to ramp up development of its modeling and simulation tools over the past two years is paying dividends. In something of a case in point, the company is now using over 2 million miles of sensor logs augmented with simulations to accomplish the “vast majority” of its AI training and validation, he said.

“We have experienced very little disruption to our AI model development trajectory due to the absence of road operations,” said Thompson. “Our test track testing is meant to validate our models, so we’re able to maintain, if not accelerate, our pace of development during this time.”

Perhaps unsurprisingly, Thompson also says that the virtual cars in Uber’s simulation environment are driving more miles than before the pandemic. He doesn’t attribute this to the health crisis per se, but he says that COVID-19 presented an opportunity to continue scaling simulations.

“We have well-established strategic plans in place to expand our simulated mileage further. It’s part serendipity that our model-based development approach has made our operations more robust to situations like this pandemic,” he added. “We will continue this rapid expansion of our simulation capability for the foreseeable future, and have no plans to reduce simulated miles driven even after the pandemic is behind us.”


Lyft was in the midst of developing a new vehicle platform when it was forced to halt all real-world testing. Nevertheless, Jonny Dyer, director of engineering at Lyft’s Level 5 self-driving division, tells VentureBeat that the company is “doubling down” on simulation by leveraging data from the roughly 100,000 miles its real-world autonomous cars have driven and calibrating its simulation environment ahead of validation.

Specifically, Lyft is refining the methods it uses in simulation to direct agents (such as virtual pedestrians) to react realistically to vehicles, in part with AI and machine learning models. It’s also building out tools like a benchmarking framework that allows engineers to test and improve the performance of behavior detectors, as well as a dashboard that dynamically updates visualizations to help create varied simulation content.

Dyer says that Lyft isn’t so much focused on challenges like simulating camera, lidar, and radar sensor data as on traditional physics-based mechanisms, along with methods that help identify the right sets of parameters to simulate. “It’s not a game of scale with simulation models — it’s really more about simulating the right miles with high fidelity,” he said. “We’re focusing on that fidelity aspect and getting simulation closer to telling us the types of things that real-world driving does. It’s the question of not just simulating a large number of miles, but simulating the right miles.”

Lyft also reworked its validation strategy to lean more heavily on things like structural and dynamic simulation in light of the pandemic, according to Dyer. The company had planned to perform real-world testing before these steps (and it still will in some capacity), but the shutdown forced its hardware engineers to pivot toward simulation.

For instance, a senior computer engineer mounted the high-performance server that runs Lyft’s autonomous vehicle technology stack, which includes eight graphics cards and a powerful x86 processor, in her bedroom with four desk fans blowing on it to keep it cool. Another engineer built an electrolytic corrosion setup in his garage with a Raspberry Pi and circuit boards he’d bought on eBay. Yet another engineer converted the cricket pitch in his yard into a lidar sensor range, complete with full-sized street signs he’s using to perform calibration for the new sensors Lyft plans to integrate.

Industry challenges

Despite the Herculean efforts of the autonomous vehicle companies grounded by COVID-19, it seems likely that quite a few will emerge from the pandemic worse for wear. Simulation is no substitute for testing on real roads, some experts assert.

A longstanding challenge in simulations involving real data is that every scene must respond to a self-driving car’s actions, even maneuvers that might not have been recorded by the original sensors. Whatever angle or viewpoint isn’t captured by a photo or video has to be rendered or simulated using predictive models, which is why simulation has historically relied on computer-generated graphics and physics-based rendering that somewhat crudely represents the world. (Tellingly, even Wayve, a U.K.-based startup that trains self-driving models entirely in simulation, relies on feedback from safety drivers to fine-tune those models.)

A paper published by researchers at Carnegie Mellon outlines other challenges with simulation that impede real-world hardware development:

  • The actuality hole: Simulated environments don’t at all times adequately signify bodily actuality — for instance, a simulation missing an correct tire mannequin may not account for real looking automotive behaviors when cornering at excessive speeds.
  • Resource prices: The computational overhead of simulation requires specialised {hardware} like graphics playing cards, which drives excessive cloud prices. According to a recent Synced report, coaching a state-of-the-art machine studying mannequin just like the University of Washington’s Grover, which generates and detects faux information, can price in extra of $25,000 over a two-week interval.
  • Reproducibility: Even one of the best simulators can comprise non-deterministic components that make reproducing assessments unattainable.
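The reproducibility problem is commonly attacked by threading a single seeded random number generator through every stochastic component of a simulation, so that a failing test can be replayed bit-for-bit. A minimal sketch, with placeholder scenario contents:

```python
import random
from typing import List

def run_scenario(seed: int, steps: int = 5) -> List[float]:
    """Drive all stochastic choices in a simulated scenario from one
    seeded RNG so that any run can be reproduced exactly from its seed."""
    rng = random.Random(seed)
    # Stand-ins for the simulator's random choices (agent speeds, spawn
    # times, etc.): each draw comes from the same seeded generator.
    return [round(rng.uniform(10.0, 35.0), 3) for _ in range(steps)]
```

The discipline is easy to state and hard to maintain: any component that quietly draws from an unseeded global generator, wall-clock time, or thread scheduling reintroduces the non-determinism the seed was meant to remove.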

Indeed, Yandex, which continues to operate its self-driving cars on public roads in locations where it’s allowed, such as Moscow, says that while simulation can aid autonomous vehicle development, public testing remains critical. Shifting to a full simulation program without on-road tests will slow the progress of autonomous vehicle development in the short term, the company asserts, because developing a simulation with 100% accuracy and complexity might require as much problem-solving and resources as developing self-driving technology itself.

“[Without real-world testing,] self-driving companies won’t be able to collect critical real-world driving data,” a Yandex spokesperson told VentureBeat. “[Additionally,] driving simulations and running vehicles on test tracks can help prove that vehicles meet specific requirements in a laboratory environment. Driving on public roads presents much more complex, real-world dynamics that the self-driving platforms need to face, including different weather conditions and a variety of pedestrian and driver behavior.”

Beyond exposing autonomous driving systems to these complex dynamics, Ars Technica’s Timothy B. Lee notes that testing ensures sensors and other hardware have a low failure rate; that the cars will use safe passenger pickup and dropoff locations; and that fleet operators are well trained to handle any contingency. It also allows companies to identify issues that might crop up, like whether there are enough vehicles available for rush-hour service.

Dyer doesn’t disagree with these sentiments entirely, but he’s generally more optimistic about the prospect of simulated testing. Simulation is well suited for structured and purposeful testing on test track data, he says, which makes up a large slice of Lyft’s autonomous vehicle roadmap.

“The reality is that all simulation is somewhat limited in that you have to calibrate it and validate it against reality. … It’s not going to replace driving on the roads anytime soon [because] you can’t do everything in simulation. But I do think that we’re making tremendous progress in our simulation environment,” he said. “In this respect, the pandemic hasn’t been a setback at all. There’s a lot of basic stuff that comes out of these big engineering projects like tech debt and infrastructural things that you want to fix, but that becomes hard to fix when you’re in the middle of an operational program. Investing in these will, in my opinion, pay off in a big way once we’re back.”

Skeptics like Boston Consulting Group senior partner and managing director Brian Collie anticipate the pandemic will delay the commercialization of driverless car technology by at least three years. Karmel concedes that there could be bumps in the road, particularly with Waymo’s testing paused, but he says with confidence that the pandemic hasn’t materially affected planned rollouts.

“If you just focus on synthetic miles and don’t start bringing in some of the realism that you have from driving in the real world, it actually becomes very difficult to know where you are on that curve of realism,” said Karmel. “That said, what we’re trying to do is learn as much as we can — we’re still getting thousands of years of experience during this period of time.”