Autonomous cars may be smart enough to drive on the highway, but there is still a long way to go before self-driving works in urban areas.

That is because of the immense number of objects and situations on the road every day. Before autonomous cars can be allowed onto these roads, they need to recognize and understand each situation, and make the correct decision, in less than a second.


We are far from that being a reality: autonomous cars still struggle to tell where the path ends and the road begins in the rain, or to decide what to do when construction work blocks a road. Barcelona’s Computer Vision Centre, however, has created a virtual simulation to speed up the process.

The simulation, which is named SYNTHIA, provides the autonomous car with a wide variety of incidents and labelled objects. Automakers and developers can test “corner case” situations, like a car accident or pigeons on the road, without spending thousands of hours on the road hoping that this situation will occur.

SYNTHIA going to Vegas

The team, led by researchers Germán Ros and Antonio M. López, plans to present the simulation at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Las Vegas this month.

“These vehicles require the use of artificial intelligences to understand what is happening in their surroundings and depend on artificial systems which simulate the functioning of human neural connections. Our simulator, SYNTHIA, represents a giant leap within this process,” Germán Ros says.

Currently, most automakers label objects manually, pixel by pixel, a process that requires thousands of workers. The new simulation should reduce the amount of manual work needed by providing thousands of pre-labelled images in an open library.
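To make "pixel-by-pixel labelling" concrete, here is a minimal sketch of what such an annotation looks like: every pixel of an image carries a class ID (road, sidewalk, car, and so on), and a program can tally how much of the scene each class covers. The class names and IDs below are hypothetical, chosen for illustration; SYNTHIA's actual label scheme may differ.

```python
import numpy as np

# Hypothetical class IDs for illustration; not SYNTHIA's real label scheme.
CLASSES = {0: "void", 1: "road", 2: "sidewalk", 3: "car", 4: "pedestrian"}

def class_pixel_counts(label_map: np.ndarray) -> dict:
    """Count how many pixels carry each class ID in a per-pixel label map."""
    ids, counts = np.unique(label_map, return_counts=True)
    return {CLASSES.get(int(i), f"class_{i}"): int(c) for i, c in zip(ids, counts)}

# Toy 4x4 "image": each entry is the class ID assigned to that pixel.
label_map = np.array([
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [1, 3, 3, 2],
    [1, 1, 1, 2],
])

print(class_pixel_counts(label_map))  # {'road': 8, 'sidewalk': 6, 'car': 2}
```

A human annotator producing such a map for a single street photo can take the better part of an hour; a simulator like SYNTHIA generates the image and its label map together, which is why synthetic data cuts the manual workload so sharply.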

Once the simulation has been validated for one car, the self-driving developer can pass the results on to every other car in the fleet. This could be critical for avoiding accidents when new traffic laws or objects appear on the road.