Don’t let the lawyers stop us saving lives

3rd October 2017
Joe Bush

Steve Rogerson reports from last month’s AutoSens conference in Brussels.

There is no single sensor technology that can handle the complexities of data gathering that will be essential if the vision of autonomous vehicles is to be realised. As such, car makers and their tier-one suppliers are wrestling with the difficulty of finding the best mix of vision systems, radar, lidar, ultrasonics and even sonar to provide the car with the necessary information.

Even more tricky is discovering better ways to integrate the information from these varied technologies into a coherent whole that can be acted on in a meaningful way.

Getting to grips with this so-called sensor fusion problem was the major talking point at last month’s AutoSens show in Brussels.

“Sensors need to operate in different weather conditions and by day and night,” said Sören Molander, Senior Engineer at Panasonic. “We need sensor fusion and a combination of active and passive sensors working together.”

As an example, Markus Heimberger, System Architect at Valeo, described an automatic parking system that combined a dozen ultrasonic sensors with four surround-view cameras.

“With parallel parking,” he said, “the distances can be very close. You need centimetre accuracy to get into a very narrow parking slot, and to get this you need to combine the two sensor systems. Each sensor has some uncertainty. Combining them gives more accuracy and makes the uncertainty smaller.”
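
Heimberger’s point about combining sensors to shrink uncertainty is, at its simplest, inverse-variance fusion of independent measurements. The sketch below illustrates the arithmetic with made-up sensor noise figures; it is not Valeo’s algorithm.

```python
# Minimal sketch of the uncertainty-reduction idea Heimberger describes:
# fusing two independent range measurements by inverse-variance weighting.
# All numbers are illustrative, not Valeo's actual parameters.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two independent Gaussian measurements of the same distance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always smaller than either input variance
    return fused, fused_var

# Ultrasonic: 0.42m with 3cm std dev; camera: 0.45m with 5cm std dev
d, v = fuse(0.42, 0.03**2, 0.45, 0.05**2)
print(f"fused distance: {d:.3f} m, std dev: {v**0.5:.3f} m")
```

On these made-up numbers the fused standard deviation (about 2.6cm) is smaller than either sensor’s alone, which is the effect Heimberger describes.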

However, for on-road autonomous driving, the interaction between the sensors and the cognitive functions of the vehicle needs to be tighter than it is today, believes Ronny Cohen, CEO at Vayavision.

“This is the big challenge for level three and above autonomous driving,” he said. Below SAE level three, the driver is always in control of the car, even though some driver assistance features may be used. Levels four and five are different levels of autonomous driving in which the driver is not expected to take control of the car. Level three is the tricky one where the car can drive autonomously but the driver must be ready to retake control at any time.

He said high accuracy and low latency were also essential. “If a kid jumps into the road, you need the system to be fast and reliable,” he said, and that involved a combination of cameras, radar and lidar to give reliable 3D information.

Radar, he said, was very reliable, especially in poor weather conditions, but had low resolution. Lidar was still a high-cost sensor, and cameras could give high definition but their detection ability was not high enough.

When it came to fusing these together, the companies doing the fusion were having to combine sensors from different suppliers, creating, Cohen said, a whole that was not reliable enough.

“People are trying to take unreliable information and fuse it, and it is not working,” he said. He demonstrated this with an example: a child painted on the road surface that, from the right angle, looked like a real 3D child. The vision system reported seeing a child; the lidar said there was nothing there.

To solve this problem, he said the fusion had to take place at each pixel. However, though lidar was becoming popular, he said its designers had “a tough fight with physics” to get the resolution needed. He said this might be impossible.
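
Cohen’s per-pixel fusion implies aligning lidar returns with camera pixels so that every pixel carries both appearance and depth. A minimal sketch of that projection step follows; the camera intrinsics and points are invented for illustration, and a real pipeline would first apply the lidar-to-camera extrinsic calibration.

```python
import numpy as np

# Sketch of per-pixel (low-level) fusion: project lidar points into the
# camera image so each pixel can carry both colour and depth. The intrinsic
# matrix K and the points below are illustrative assumptions.

K = np.array([[800.0,   0.0, 640.0],   # fx, skew, cx
              [  0.0, 800.0, 360.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

def project_to_pixels(points_cam: np.ndarray) -> np.ndarray:
    """Map Nx3 lidar points (already in camera frame, metres) to Nx3 (u, v, depth)."""
    uvw = (K @ points_cam.T).T           # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]        # normalise by depth
    return np.hstack([uv, points_cam[:, 2:3]])

points = np.array([[0.5, 0.1, 10.0],    # a point 10m ahead, slightly right
                   [-1.0, 0.2, 25.0]])
for u, v, z in project_to_pixels(points):
    print(f"pixel ({u:.0f}, {v:.0f}) -> depth {z:.1f} m")
```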

Limited Lidar
Lidar on the market today has only a limited range. Once the laser pulse hits an object, it scatters in different directions, so only a small percentage of the light returns to the sensor.
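
This scattering loss is why returned power falls off sharply with range. The back-of-envelope sketch below uses a simplified link budget for a diffuse (Lambertian) target; the reflectivity and aperture values are assumptions, not figures from the talk.

```python
import math

# Back-of-envelope illustration of why lidar range is limited: a diffuse
# (Lambertian) target scatters the pulse over a hemisphere, so the receiver
# aperture captures only a tiny fraction, falling off with 1/R^2.
# Simplified link equation; all parameter values are illustrative.

def return_fraction(reflectivity: float, aperture_cm2: float, range_m: float) -> float:
    """Fraction of transmitted power reaching the receiver from a Lambertian target."""
    aperture_m2 = aperture_cm2 * 1e-4
    return reflectivity * aperture_m2 / (math.pi * range_m**2)

for r in (10, 50, 100, 200):
    frac = return_fraction(reflectivity=0.1, aperture_cm2=5.0, range_m=r)
    print(f"{r:4d} m -> {frac:.2e} of transmitted power returned")
```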

“This means you cannot have high resolution, speed, range and definition all in one,” said Edel Cashman, Senior Applications Engineer at fabless semiconductor company Sensl. “You can get a subset but not all combined. There is a huge opportunity there to get a system together.”

However, for now, she said, lidar manufacturers needed to take the sensor output and combine it with optical and radar data.

“It is very hard,” she said. “It is complex to get them all working in harmony. There is a race to do that quickly and cheaply. For now, the algorithms behind it are more important.”

Junmuk Lee, Senior Research Engineer at Hyundai Autron, added: “We have developed a test bench for sensor fusion with a lidar sensor. Lidar is too expensive at the moment, but we are getting ready for when that improves.”

The advantages of lidar include a potentially better range than the alternatives; it also works well at night because it is not affected by ambient lighting.

“It is more robust,” said Olivier Garcia, Chief Technical Officer at Dibotics. “It is not easily affected by environmental conditions like lighting. It is very precise and accurate. The main problem is that it depends on how reflective the object is. Darker objects do not have good reflectivity. Lidars are also not very good at seeing the ground more than 30m away.”

He said lidars were also not fast enough, at around 20 frames per second, and needed better angular resolution.

The reflectivity problem can be eased by changing the wavelength. Laser diodes on the market operate at wavelengths that are not ideal, so some companies are looking at making their own diodes at more suitable wavelengths.

“All this is evolving very fast,” said Garcia. “All the problems will be resolved very soon. I am confident about this. For example, we have a simple algorithm that can improve things.”

Despite the problems with lidar, Rudy Burger, Managing Partner at Woodside Capital Partners, said he expected to see the lidar segment grow significantly over the next year or so.

“We are seeing venture capitalists getting more interested in investing,” he said. “About 37 lidar companies have been funded to date and that is still growing.”

However, he said the main competition for lidar companies would come from vision systems. “These will evolve over the next few years to provide a very significant threat to lidar systems,” he said.

One answer to the technical problems with lidar might be to take a lesson from human vision, said Cohen. Humans have high resolution in the direction they are looking, lower resolution to either side, and at the edges of vision they can only really detect motion. To get full information about their surroundings, they turn their heads.

“You can do that with existing lidar,” he said. “You need controllable sensors, or point-and-shoot lidar. This can give high resolution at certain points in the image. This means I can use low cost lidar and generate a 3D model.”
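
One way to picture Cohen’s point-and-shoot idea is as a beam budget spent unevenly across the field of view. The sketch below allocates a fixed number of beams per scan line, densely in a region of interest and coarsely elsewhere; the budget, field of view and 70/30 split are all illustrative assumptions, not Vayavision figures.

```python
# Sketch of the "point-and-shoot" idea Cohen describes: spend a fixed beam
# budget unevenly, with dense sampling in a region of interest (the fovea)
# and a coarse sweep over the rest of the field of view.

def allocate_scan(budget: int, fov_deg: float, roi: tuple[float, float],
                  roi_share: float = 0.7) -> list[float]:
    """Return azimuth angles (degrees) for one scan line of a steerable lidar."""
    roi_lo, roi_hi = roi
    n_roi = int(budget * roi_share)          # beams devoted to the fovea
    n_rest = budget - n_roi                  # beams for the coarse full sweep
    step_roi = (roi_hi - roi_lo) / n_roi
    step_rest = fov_deg / n_rest
    angles = [roi_lo + i * step_roi for i in range(n_roi)]
    # the coarse pass covers the whole field, including the fovea
    angles += [-fov_deg / 2 + i * step_rest for i in range(n_rest)]
    return sorted(angles)

# 1,000 beams over a 120-degree field, 70% aimed at a 10-degree region of interest
angles = allocate_scan(1000, 120.0, roi=(-5.0, 5.0))
print(f"fovea: {10.0 / 700:.3f} deg/beam, periphery: {120.0 / 300:.3f} deg/beam")
```

On these illustrative numbers the fovea is sampled almost thirty times more finely than the periphery, from the same fixed beam budget.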

Taking responsibility
Another tricky question when it comes to sensor fusion is who takes responsibility. There is a supply chain from the component makers, through the tier-two and tier-one suppliers, to the car makers themselves.

If there is a serious accident caused by an autonomous vehicle, lawyers may initially want to blame the car maker, but the car maker will in turn look for where the failure happened and pass the blame along.

“The lawyer will say to the car maker, ‘In that case show how the vehicle interacts with the tier-one’s black box’, and the OEM might not be able to do that,” said Patrick Denny, Senior Expert at Valeo. “They will have to show what went wrong, whether it is the camera, the lidar or the radar. These are not easy questions.”

Lee added: “We are moving from ADAS to autonomous driving and fusion can take place anywhere. But as we move towards autonomous driving I believe there will be a shift to the system-on-chip makers who will want to take control of everything.”

However, Martin Pfeifle, Head of ADAS Perception at Visteon, thinks this will be the job of the tier-one suppliers. “It is a very project-centred thing,” he said. “This is something a system vendor can do.”

Heimberger, though, put the responsibility at the door of the car maker. “I think the major responsibility is on the OEM,” he said. “It is not alone, but it has the major responsibility.”

However, Denny said this was something the industry needed to sort out before a court case. “If the lawyers take over, the danger is the technology will not even get to market,” he said. “This means the lives that can be saved will not get saved, and the companies won’t make money; the lawyers will make money. That is the danger.”
