Researchers and experts are working to solve what is arguably the hardest problem for automated driving: reading the state of mind of humans for the safe large-scale rollout of automated vehicles.
(Source: public domain / Unsplash)

MACHINE DECISION MAKING
Solving the challenge of autonomous vehicles and pedestrians

Author / Editor: Cate Lawrence / Nicole Kareta

One of the most complex challenges for autonomous driving is the unpredictability of pedestrians. Perceptive Automata is using psychology, data science, and deep learning to help autonomous vehicles understand and predict human behavior.

We've long been predicting a future where autonomous vehicles roam our roads, communicating seamlessly with other vehicles and infrastructure via V2X communications. But the role of humans coexisting with autonomous vehicles is often forgotten, beyond our shifting role as cars progress to Level 5 autonomy. As humans, we act unpredictably and exhibit thoughts, emotions, and behaviors that are the antithesis of machine decision making. As pedestrians, we jaywalk, at times walk in the middle of the road, and perform actions like starting to cross a road and then promptly returning to where we started, all of which is very difficult for a machine such as an autonomous vehicle to understand.

Enter the work of Perceptive Automata, a company founded by a team of Harvard and MIT neuroscientists, computer vision researchers, software engineers, and machine learning experts. They are working to solve what is arguably the hardest problem for automated driving: reading the state of mind of humans for the safe large-scale rollout of automated vehicles. The company enables vehicles to better understand what people might do next so they can navigate safely around humans, increasing safety and enabling more natural, smooth, human-like driving for autonomous vehicles. This is essential for deploying highly automated vehicles into human-dominated road environments.

I recently spoke to Sam Anthony, CTO and co-founder of Perceptive Automata, to find out more. He explained that the company has developed technology that uses techniques from behavioral science, including neuroscience, cognitive science, and a subfield called visual psychophysics, to train deep learning algorithms that enable machines to understand and predict human behaviors, such as whether someone will step in front of a vehicle to cross the road.

He explains: "I was walking, and I was sort of thinking about the things that humans do effortlessly. When I went to walk across the street, I got annoyed as a car didn't seem to know I was trying to cross the street. And then I thought, an autonomous vehicle would never know that I wanted to cross; it would lack this ability. This is a really human cognitive ability. So we talked to people in the industry, and we looked at publications, and determined that this was something where everyone kind of knew this was a problem. If you didn't give autonomous vehicles the ability to look at a human and understand what was in that person's head as it related to you, it would be basically impossible for the vehicles to succeed."

The company built out a proof of concept, secured $20 million in Series A VC funding, and has been working with autonomous vehicle companies since. Its investors include Toyota and Hyundai.

The challenge of intuition

Essential to the work of Perceptive Automata is the notion of human intuition: the ability to collect and analyze subtle, unconscious cues (thousands of individual signals that we give off constantly) in a split second.

Perceptive Automata uses a toolbox of behavioral science techniques to analyze extensive vehicle data showing interactions with pedestrians. The footage is sliced into clips and shown to groups of people, who answer questions about the depicted pedestrian's level of intention and awareness based on what they see. "People are asked: imagine you're driving a car and you're driving past; does this person intend to cross the road?" Every time a person appears and interacts with a vehicle, they give off hundreds of signals that another human could use to understand their awareness and intention: their state of mind. The process is repeated hundreds of thousands of times, and the data is used to train models to interpret and understand these interactions.
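To make this concrete, here is a minimal, hypothetical sketch in PyTorch of what training on aggregated crowd judgments could look like. This is not Perceptive Automata's actual code: the model name, architecture, and data are invented for illustration. The key idea from the article is that the model regresses onto graded human ratings of intention and awareness rather than onto hard labels.

```python
import torch
import torch.nn as nn

class StateOfMindModel(nn.Module):
    """Predicts continuous intention/awareness scores from a pedestrian crop."""
    def __init__(self):
        super().__init__()
        # Stand-in for a real vision backbone (e.g., a ResNet).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # two outputs: [intention, awareness]

    def forward(self, x):
        # Sigmoid keeps scores in [0, 1], matching normalized crowd ratings.
        return torch.sigmoid(self.head(self.backbone(x)))

# Toy batch: 8 pedestrian crops with mean crowd ratings in [0, 1],
# where e.g. 0.9 intention means "clearly wants to cross the road".
crops = torch.randn(8, 3, 64, 64)
mean_ratings = torch.rand(8, 2)

model = StateOfMindModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(crops), mean_ratings)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

Regressing onto averaged (or distributional) human judgments, rather than binary "crossing / not crossing" labels, is what lets such a model express graded uncertainty the way human observers do.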

Sam explained: "After we trained some of our early models, we went back, and we did an explainability exercise to determine what parts of the image or video our model cares about. One of the things we noticed is that when someone was carrying a bag, our model was very attuned to that fact. And at first, we didn't really know what was going on, but after looking at some of these samples where this happened, what we realized is, if you have a bag on your shoulder and you're planning to cross the street, you infinitesimally change how you're holding that bag. Maybe you tuck it in a little bit, because you're going to start moving and don't want that bag to go flying around. And so it turns out that whether someone's holding a bag, and how they're holding the bag, gets incredibly diagnostic of whether they want to cross the road."
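Sam's bag example maps onto standard model-explainability practice. Continuing the hypothetical sketch above, here is one common technique, input-gradient saliency, which highlights the pixels that most influence a prediction; the article does not specify which explainability method the team actually used.

```python
# Input-gradient saliency on the hypothetical model defined above.
crop = torch.randn(1, 3, 64, 64, requires_grad=True)
intention_score = model(crop)[0, 0]  # output index 0 = "intention"
intention_score.backward()

# Per-pixel influence on the intention score. A bright region over a
# carried bag would suggest the model keys on how the bag is held.
saliency = crop.grad.abs().max(dim=1).values  # heatmap, shape (1, 64, 64)
print(saliency.shape)
```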

How does the person know the car has seen them?

I wondered how a person can be sure the car has seen them. Sam notes that while some OEMs have suggested light-up displays that signal the pedestrian is free to cross, the problem is more complex. Unless the car has a real understanding of whether people want to cross the road, it will pick up every human movement indiscriminately. "If you have a light that says it's okay to cross, and someone isn't paying attention to the car because they're waiting for a bus, the autonomous car is just going to sit there blinking."

Perceptive Automata's software is massively scalable and runs faster than real time using only light compute resources. It is ready for the next generation of vehicles and driver-assistance systems such as Tesla, GM Super Cruise, and Toyota Guardian, whose onboard computers have enough spare capacity for the software to piggyback on existing hardware.

Perceptive Automata's strength is its grounding in academia. As Sam details: "We have been doing this for 11 years, and commercially for five years. And we have the internal expertise, the internal tools, the internal data to be able to understand and predict human intuition reliably and repeatedly for production systems that have functional safety and compliance concerns." The technology is not limited to autonomous vehicles and could have future applicability in any vertical where autonomous systems have to deal with humans.

(ID:46813723)