Perception & Sensor Tech, Sensor Fusion

The Challenge of Identifying Pedestrian Actions

The vast majority of motor vehicle fatalities involving pedestrians – roughly 75 percent – occur at night. Reduced visibility is only part of the problem: drivers may also be tired, distracted by the workday behind them and eager to get home. That doesn’t make for the safest commute, but it’s one most of us complete every single day.

Ove J. Salomonsson, senior director and LiDAR product architect at AEye, an artificial perception company, believes that autonomous vehicles could have an advantage in this regard. In addition to the fact that AVs won’t get tired, annoyed or distracted by everyday human problems, they will also come equipped with a multitude of sensors that can see in every direction.

“The autonomous vehicle hopefully sees the pedestrian,” said Salomonsson. “Then the vehicle can pre-fill the brakes, so the brake pads are very close and ready to go. And then there’s no foot and no decision-making – other than the computer itself – to just hit the brake, with basically just one g of braking on dry asphalt. That makes a big difference as well – the autonomous vehicle’s reaction time.” The time of day is less relevant when LiDAR and radar enter the picture, but there is still much to consider.
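Salomonsson’s point about reaction time can be put in rough numbers. The sketch below compares total stopping distance for a human driver versus an automated system that has pre-filled the brakes, using the simple kinematic relation d = v²/(2g) for braking at one g. The speeds and reaction times are illustrative assumptions, not figures from AEye:

```python
# Back-of-envelope stopping-distance comparison. The reaction times and
# speed below are assumptions for illustration, not AEye data.
G = 9.81  # m/s^2 - "one g of braking on dry asphalt"

def stopping_distance(speed_ms: float, reaction_time_s: float) -> float:
    """Distance covered while reacting, plus braking distance at 1 g."""
    reaction = speed_ms * reaction_time_s
    braking = speed_ms ** 2 / (2 * G)
    return reaction + braking

speed = 50 / 3.6  # 50 km/h expressed in m/s

human = stopping_distance(speed, 1.5)  # typical human perception-reaction time
av = stopping_distance(speed, 0.2)     # assumed AV latency with pre-filled brakes

print(f"human: {human:.1f} m, AV: {av:.1f} m")
```

Even with identical braking hardware, the shorter reaction time alone cuts the total stopping distance by more than half at urban speeds.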

When human drivers read the road to anticipate what other drivers or pedestrians will do next, they may take preemptive action to avoid a potential accident. An autonomous car is likely to do the same, but by very different means. It could, for example, gauge the trajectory of a thrown object (such as a ball) to determine whether it will actually land on the road ahead. If the answer is no, there may be no reason for the vehicle to stop – unless it believes the ball is a sign of something more to come.
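The trajectory check described above amounts to simple ballistics: given the object’s observed position and velocity, project its arc and test whether the landing point falls within the roadway. The function below is a minimal sketch under that assumption (no air drag, flat road); the names and geometry are hypothetical, not from any production AV stack:

```python
# Hypothetical trajectory check: predict where a thrown object lands and
# ask whether that point is inside the road ahead. Drag is ignored.
import math

G = 9.81  # m/s^2

def lands_on_road(x0: float, y0: float, vx: float, vy: float,
                  road_start: float, road_end: float) -> bool:
    """Project a ballistic arc and test the landing x-coordinate.

    x0, y0: observed position (m); y0 is height above the road surface.
    vx, vy: velocity components (m/s); positive vx is toward the road.
    road_start, road_end: the span of roadway ahead, in the same frame.
    """
    # Time until the object returns to road level (y = 0): positive root of
    # y0 + vy*t - 0.5*G*t^2 = 0.
    t = (vy + math.sqrt(vy ** 2 + 2 * G * y0)) / G
    landing_x = x0 + vx * t
    return road_start <= landing_x <= road_end

# A ball released 1 m above the ground, moving 5 m/s toward a road that
# spans 3 m to 10 m away from the release point:
print(lands_on_road(0.0, 1.0, 5.0, 2.0, 3.0, 10.0))
```

A slower throw from the same spot would fall short of the road, and the vehicle could ignore it – which is exactly the filtering the article describes.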

“Of course, the AI connected to the camera would be close to impossible to train,” said Salomonsson, “for all the different variants of the ball – the color, size, the way it bounces, and so on.” A camera could “maybe” provide supporting information, Salomonsson explained. However, he argued that AEye’s software-controlled LiDAR – which can place density points on a detected object and then follow and track it – is a more effective solution.

“That would give a lot of insight into why this ball is there, what it’s doing and what might come next,” said Salomonsson. “It might be a dog coming next, but there could also be a child and an adult [running behind the ball]. And then you know from what direction this ball was coming and you could once again be able to classify it.” These are just a few of the many problems that need to be tackled, but Salomonsson is already looking forward to deployment. When ready, he would like for the first few thousand units to be allocated toward serving a greater good.
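The detect-and-track loop Salomonsson describes can be caricatured in a few lines: maintain a track for the detected object, predict where it will be on the next scan, and concentrate extra measurement points there. The alpha-filter update and all names below are illustrative assumptions – the article does not describe AEye’s actual scan-scheduling interface:

```python
# Toy sketch of "place density points and track": a constant-velocity
# track predicts where a detection will appear next, and the sensor would
# aim denser scanning at that region. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # longitudinal position (m)
    y: float   # lateral position (m)
    vx: float  # estimated velocity components (m/s)
    vy: float

    def predict(self, dt: float) -> tuple[float, float]:
        """Constant-velocity prediction: where to point the next dense scan."""
        return self.x + self.vx * dt, self.y + self.vy * dt

    def update(self, mx: float, my: float, dt: float, gain: float = 0.5) -> None:
        """Blend the prediction with a new LiDAR return (alpha-filter step)."""
        px, py = self.predict(dt)
        self.x = px + gain * (mx - px)
        self.y = py + gain * (my - py)
        self.vx += gain * (mx - px) / dt
        self.vy += gain * (my - py) / dt

# A ball detected 20 m ahead, 2 m to the side, rolling toward the car:
track = Track(x=20.0, y=2.0, vx=-1.0, vy=0.0)
roi = track.predict(dt=0.1)  # region to enrich with scan points next frame
print(roi)
```

The history of such a track – where the ball came from and how it is moving – is what lets the system reason about what might follow it, as the quote suggests.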

“The first self-driving cars will be very expensive,” said Salomonsson. “The best bang for the buck would actually be to give them out to drunk drivers, those who have several DUIs, because they still cause a lot of accidents. And the second phase of that would be to hand them over to handicapped, elderly individuals who cannot get to the doctor.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.