It is not yet clear who will win the battle for the first commercial fully autonomous vehicle.
(Source: public domain / Pexels)

Artificial Intelligence: How deep learning and AI are making autonomous vehicles a reality

Author / Editor: Cate Lawrence / Isabell Page

The fight to roll out the first commercial fully autonomous Level-5 car is on. Young enterprises steeped in academia are driving the scientific advances, using deep tech such as AI and machine learning to make it a reality.


Last month, Elon Musk asserted that Tesla was very close to achieving Level 5 autonomous vehicles, telling the opening of Shanghai’s annual World Artificial Intelligence Conference (WAIC):

I remain confident that we will have the basic functionality for level 5 autonomy complete this year.

But the reality is that while industry stalwarts like Tesla and Google are often the public face of autonomous vehicles, there's a whole sub-layer of enterprises hard at work creating the technology that makes the dream of mainstream autonomous vehicles a reality. These companies are frequently spin-offs and graduates from academia and are heavily driven by scientific, mathematical and engineering rigour. It can even be hard to find out what many of them are specifically working on, as they operate in stealth mode. They all play a long game in a highly competitive sector, working to satisfy both investors and potential customers, and they are often acquired by bigger companies like Amazon (Zoox) and Apple (Drive.ai).

These companies are heavily invested in AI and deep learning as they work to create the capabilities needed to teach machines to act autonomously and safely on the road.

Here is some of what they are working on:

Oxbotica: The value of deep fakes

When you think of deep fakes, your first thought is probably fake videos of Mark Zuckerberg or Donald Trump. Deep fakes combine machine learning and AI: generative neural network architectures such as generative adversarial networks (GANs) are trained to produce realistic images and video that are difficult to distinguish from the real thing. But deep fakes are not just a tool for artists and creators of fake news; they are also being used in the simulated testing of autonomous vehicles. Autonomous vehicle software company Oxbotica, an Oxford University spin-off, has developed a form of deep fake technology that can generate thousands of realistic photographic images, exposing a vehicle to a near-infinite number of scenarios.

Sophisticated deep fake algorithms reproduce the same scene in poor weather or adverse conditions and subject the vehicles to rare occurrences. The technology can swap images of trees for buildings, reverse road signs, and change the apparent time of day with the appropriate lighting. Oxbotica then uses these synthetic images to teach its software, producing thousands of accurately labelled, true-to-life experiences and rehearsals that are generated rather than real, right down to the raindrops on camera lenses.

According to Oxbotica, the data is generated by an advanced teaching cycle made up of two co-evolving AIs: one attempts to create ever more convincing fake images, while the other tries to detect which are real and which have been reproduced. Oxbotica's engineers have designed a feedback mechanism through which both entities improve over time in a bid to outsmart their adversary. Eventually the detection mechanism becomes unable to spot the difference, at which point the deep fake AI module is ready to generate data to teach other AIs.
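This adversarial set-up is essentially the classic GAN training loop. Below is a minimal sketch in PyTorch, purely illustrative: the architectures, sizes and hyperparameters are assumptions, not Oxbotica's actual system.

```python
# Minimal GAN training loop sketch (generic, NOT Oxbotica's system).
# Two co-evolving networks: a generator makes fakes, a discriminator
# tries to tell fakes from real data; each improves against the other.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the detector: separate real images from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()  # freeze generator here
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: fool the detector into calling fakes "real".
    noise = torch.randn(batch, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

When the discriminator's loss stops falling, that corresponds to the point Oxbotica describes: the detector can no longer tell real from fake, so the generator's output is realistic enough to use as training data.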

Helm.ai: Unsupervised learning

Helm.ai was co-founded by Vladislav Voroninski, a former faculty member in the MIT mathematics department. Earlier this year, the company announced a breakthrough in unsupervised learning technology: a new methodology, called Deep Teaching, which enables Helm.ai to train neural networks without human annotation or simulation.

Supervised learning is the process of training neural networks to perform particular tasks using labelled training examples, while unsupervised learning enables AI systems to learn from unlabelled data, inferring structure and producing solutions without pre-established input-output pairs, as the sketch below illustrates.
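Here is a toy contrast of the two paradigms in Python using scikit-learn. The models are generic stand-ins chosen for brevity; Helm.ai's Deep Teaching methodology itself is proprietary and not shown here.

```python
# Supervised vs. unsupervised learning in miniature (generic example,
# not Helm.ai's Deep Teaching). Same data, two learning regimes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic point clouds standing in for two classes of input.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # ground-truth labels

# Supervised: the model is shown (input, label) pairs.
clf = LogisticRegression().fit(X, y)

# Unsupervised: the model sees only inputs and infers structure itself.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
```

The practical significance is cost: labelled examples require human annotation at scale, while unlabelled dashcam footage is abundant, which is why training without annotation matters for autonomous driving.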

In the first use case of Helm.ai's Deep Teaching technology, the company trained a neural network to detect lanes in tens of millions of images from thousands of dashcam videos from across the world, without any human annotation or simulation. The resulting neural network is robust out of the box to a slew of corner cases well known to be difficult in the autonomous driving industry, such as rain, fog, glare, faded or missing lane markings and varied illumination conditions.

The company has also built a full-stack autonomous vehicle that can steer autonomously on steep and curvy mountain roads using only one camera and one GPU (no maps, no Lidar, no GPS), without ever having trained on data from those roads, and performing well above today's state-of-the-art production systems. Since then, Helm.ai has applied Deep Teaching throughout the entire AV stack, including semantic segmentation for dozens of object categories, monocular vision depth prediction, pedestrian intent modelling, Lidar-vision fusion and automation of HD mapping.

Humanising Autonomy: Human intent prediction

Autonomous systems struggle to understand the complexities of human behaviour from implicit cues, such as facial expressions or gestures shared between passengers or pedestrians, and from behaviour such as jaywalking. This remains an obstacle to developing autonomous vehicles suitable for urban environments.

Humanising Autonomy has built a human intent prediction application and platform that can recognise and predict human behaviour from visual camera footage. Its main application is in automated vehicles: it allows the vehicle to make better decisions in vehicle path planning and pedestrian interactions, improving the safety, societal acceptance and deployment of Level 2+ advanced driver assistance systems and fully autonomous vehicles. The platform can be built into any AV stack for vehicle perception, path planning, passenger detection and intuitive interaction between people and machines.

The software extracts observable and inferable behaviours from video data for intent prediction, using a combination of behavioural psychology, statistical AI and novel deep learning algorithms. As a critical perception technology, it provides real-time accident and near-miss prevention, improving the safety and efficiency of urban mobility systems across cities worldwide.
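To make the pipeline concrete, here is a hypothetical sketch of intent prediction over pose-keypoint sequences in PyTorch. Everything in it, the model, the keypoint format and the intent classes, is an assumption for illustration, not Humanising Autonomy's actual software.

```python
# Hypothetical pedestrian intent classifier over pose-keypoint windows.
# Illustrative only; NOT Humanising Autonomy's proprietary platform.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 17          # e.g. a COCO-style body skeleton (assumption)
INTENTS = ["waiting", "about_to_cross", "crossing"]  # hypothetical classes

class IntentPredictor(nn.Module):
    """Classifies intent from a short window of per-frame pose keypoints."""
    def __init__(self, hidden=64):
        super().__init__()
        # Each frame is flattened (x, y) coordinates for every keypoint.
        self.rnn = nn.GRU(NUM_KEYPOINTS * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(INTENTS))

    def forward(self, keypoints):      # (batch, frames, keypoints * 2)
        _, h = self.rnn(keypoints)     # summarise the motion over time
        return self.head(h[-1])        # logits over intent classes

model = IntentPredictor()
window = torch.randn(1, 16, NUM_KEYPOINTS * 2)  # 16 frames of (x, y) coords
intent = INTENTS[model(window).argmax(dim=-1).item()]
```

A sequence model of this general shape lets the system react to how a pedestrian's pose evolves over time, rather than to a single frame, which is the essence of predicting intent rather than merely detecting presence.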
