Putting the pieces together in the autonomous jigsaw

6th October 2016
Posted By : Joe Bush
Steve Rogerson looks at some of the discussions that will be coming up in the Automotive Conference at this year’s electronica as the industry prepares for autonomous vehicles.

There is no doubt that the automotive industry is going through the biggest transformation in its history. The goal of autonomous driving is now seen as achievable and the road to that involves driver safety systems being added to vehicles that are available today.

However, this will involve engineers in different disciplines working together to solve the many problems that these create and it is thus no surprise that the automotive session at this year’s electronica exhibition in Munich will perhaps be one of the most exciting ever.

Four examples of the areas that need to be tackled are the fusion of different sensor systems so the car can monitor its environment in real time, communications with other cars and the infrastructure, the integrated displays letting drivers know what is happening, and whether the crucial software controlling all these systems can be proprietary or open source. And these are just a few of the pieces that make up the autonomous jigsaw.

Sensors
Josef Stockinger from STMicroelectronics will be tackling the sensors. “Having surround sensors is not totally new,” he said. “There are surround cameras, rear-view cameras are available, there are sensors for blind-spot detection, there are ultrasonic sensors.”

However, when it comes to the levels of autonomous driving - six in all, from level 0 to level 5 - today’s sensors really only cover the lowest. Level 0 is no assistance at all - the driver is completely in control. At level 1, the car can assist with the braking. At level 2, the car can take some control of the steering in a lane-keep assist situation, as well as braking for cruise control, but the driver must still be alert and able to take control. Today’s sensor systems can handle these two.

Level 3 is where it starts to change seriously and the car can effectively drive itself, but the driver still has to be ready to take control within five to ten seconds.

“Level 3 is not yet in the market,” said Stockinger. “There is more responsibility on the sensors as the driver may not take over immediately. This means a totally different level of quality for the sensors.”

This will involve sensor fusion systems and much higher quality sensors. It is, however, seen as an intermediate stage and some car makers may skip this altogether as they do not consider the five to ten seconds as being fair on the driver. A big advantage of autonomous driving is seen as the driver not having to be in control at all. And what happens if the driver is unable to regain control, say, from falling asleep or being taken ill?

Above: There needs to be a consolidation of the amount of information being presented to the driver

That is why many car makers will jump straight from level 2 to levels 4 and 5. Level 4 means the car can be put in a fully autonomous mode and there is no need for driver intervention at all. If a situation arises that the car cannot handle, it will come to a safe stop. Level 5 cars are similar but have no option for a driver – no steering wheels, no pedals or other controls.
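The taxonomy above - commonly labelled the SAE levels - can be summarised in a small lookup. This is purely an illustrative sketch, with each level paraphrased from the descriptions in the article:

```python
# Illustrative summary of the six automation levels discussed above.
SAE_LEVELS = {
    0: "No assistance; the driver is completely in control",
    1: "The car can assist with braking",
    2: "Some steering control (lane-keep assist) plus braking for cruise control",
    3: "The car drives itself, but the driver must take over within 5-10 seconds",
    4: "Fully autonomous mode; the car comes to a safe stop if it cannot cope",
    5: "No driver at all: no steering wheel, pedals or other controls",
}

def driver_must_stay_alert(level: int) -> bool:
    """Levels 0-3 still rely on a human being available; 4 and 5 do not."""
    return level <= 3

print(driver_must_stay_alert(2))  # True
print(driver_must_stay_alert(4))  # False
```

The boundary at level 3 is exactly the point the article raises: some car makers consider the five-to-ten-second handover unfair on the driver and plan to skip it.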

“The technology for this as such is here,” said Stockinger, “but there needs to be improvements. The vertical resolution on radar needs to be improved, for example. We need to define what is meant by a safe halt.”

A human driver relies on the visual spectrum and in theory that should be good enough for an autonomous car. This though is already being enhanced by radar, and LIDAR can bring in real 3D mapping.

“You need to use the information from the various sensors to develop the 3D model of the environment and make reasonable decisions on how to drive,” he said.
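As a toy illustration of this kind of fusion - not ST’s actual pipeline - independent range estimates from different sensors can be combined by inverse-variance weighting, so the least noisy sensor counts for most. The noise figures below are invented for the example:

```python
# Toy sensor fusion: combine independent range estimates (metres) to the same
# object by inverse-variance weighting. Noise figures are illustrative only.
def fuse_ranges(estimates):
    """estimates: list of (range_m, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(r * w for (r, _), w in zip(estimates, weights)) / sum(weights)
    return fused

readings = [
    (52.0, 0.25),  # radar: good range accuracy
    (50.0, 4.0),   # camera: noisier depth estimate
    (51.5, 0.09),  # LIDAR: best range accuracy
]
print(round(fuse_ranges(readings), 2))
```

A production system would fuse full 3D tracks over time (typically with a Kalman filter), but the principle is the same: weight each sensor by how much you trust it.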

He does not, however, see ultrasonic sensors as having a future. “Ultrasonic sensors are cheap but their range is very limited. It is fine for parking assist but their range does not go beyond seven to ten metres. I do not regard ultrasonic sensors as very reliable.”

Communications
Another area that needs to improve for levels 4 and 5 is the car’s ability to communicate with other cars and the infrastructure. That, of course, requires the infrastructure to be in place, and Bernd Lübben, Vice President of Business Development at Cohda Wireless, will be talking about the work the company has been doing with Siemens.

“We are working on a number of smart city projects,” said Lübben. These include traffic light control for emergency vehicles to create what he calls a “green wave” of lights. Another is platooning for heavy lorries, so that the driver of the first is effectively towing the others wirelessly. This again needs interaction with the infrastructure - traffic lights need to know the length of the platoon so that all the lorries are let through in one go.
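The timing problem Lübben describes can be sketched with a back-of-the-envelope calculation: given the platoon length reported to the traffic light, how long must the green phase last for the whole platoon to pass? All figures here (lorry length, gap, speed, margin) are assumptions for illustration, not Cohda or Siemens parameters:

```python
# Illustrative V2I calculation: seconds of green a traffic light must hold so
# a whole lorry platoon clears the junction in one go. Figures are assumptions.
def green_time_needed(n_lorries, lorry_len_m=18.0, gap_m=10.0,
                      speed_mps=13.9, margin_s=2.0):
    """Time for the full platoon to pass the stop line, plus a safety margin."""
    platoon_length_m = n_lorries * lorry_len_m + (n_lorries - 1) * gap_m
    return platoon_length_m / speed_mps + margin_s

print(round(green_time_needed(5), 1))  # five-lorry platoon at ~50 km/h
```

The real message exchange would use a V2X standard such as DSRC/ITS-G5, but the point stands: the light needs the platoon’s length and speed, not just “a vehicle is approaching”.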

“We are partnering with Siemens on connected vehicle projects in the US in New York, Tampa and Wyoming,” said Lübben. “We are putting our technology on traffic lights and emergency vehicles. This includes more than 10,000 vehicles in New York.”

The company is also working with Columbus, which won the US Smart City Challenge for government funding to develop projects of this nature. In Europe, it is involved with the ITS (Intelligent Transport System) corridor from Amsterdam through Germany to Austria and on a smaller scheme in the UK between London and Dover.

Displays
One difficulty for levels 1 and 2, and on to level 3, is keeping the driver informed of what the vehicle is doing without causing an information overload that could end up distracting them. At the same time, the growing number of displays, both for driver information and passenger entertainment, is driving up demand for processing power. These problems will be tackled in a talk by Hans Roth, Senior Director of Technical Marketing at Harman.

“There is a need for consolidation,” he said. “This will let you reduce the number of ECUs, but these cannot run on the same operating system. You need a hypervisor to separate the domains, the safety and cluster display domain from the infotainment domain. They can run on the same hardware but the hypervisor separates the software.”
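A hypervisor is low-level systems software, but the isolation property Roth describes can be illustrated conceptually: a fault in the infotainment domain must not take down the safety-critical cluster domain, even though both share the same hardware. The sketch below is a toy simulation of that containment, with invented names - not a real hypervisor:

```python
# Conceptual sketch of hypervisor-style domain isolation (illustrative only):
# each guest domain runs behind a boundary that contains its own faults.
class Domain:
    def __init__(self, name, workload):
        self.name = name
        self.workload = workload
        self.alive = True

    def tick(self):
        """Run one scheduling slice; a fault stops only this domain."""
        if not self.alive:
            return
        try:
            self.workload()
        except Exception:
            self.alive = False  # contained: other domains keep running

def cluster_display():
    pass  # safety/cluster domain keeps rendering speed and warnings

def infotainment():
    raise RuntimeError("media app crash")  # fault in the infotainment guest

domains = [Domain("cluster", cluster_display), Domain("infotainment", infotainment)]
for d in domains:
    d.tick()
print([(d.name, d.alive) for d in domains])
```

In a real system the boundary is enforced in hardware by the hypervisor’s memory and CPU partitioning, not by an exception handler, but the design goal is the same separation of domains.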

Intelligence is also needed in the driver displays to control what information is put before the driver, and when. This can be combined with a driver monitoring system that checks whether the driver is becoming drowsy or distracted, and reacts accordingly.

“The car will become more intelligent and act like a personal assistant,” said Roth. “It will learn the driver’s routines. For example, if the driver regularly leaves at 8.00am to go to the office it will have that route already on the navigation system.”

Software
The glue that will hold together many of the driver assistance features will be software, but there is a debate over whether proprietary or open source software is the route to go. Car manufacturers are nervous about open source because they see many of the driver assistance features they are introducing as selling points for their vehicles, and are therefore reluctant to share. But they also see the benefits of open source in the vast amount of shared resources that become available, and in the reduced chance of bugs as more engineers can work on the code.

“Open source has been discussed in automotive for a couple of years now,” said Martin Schleicher, Executive Vice President for Elektrobit, which is now part of Continental. “But there is a debate as to whether this can be successful in automotive.”

Though the GENIVI Alliance has done solid work on producing an open source model for vehicle infotainment systems, this has not happened in other areas, notably driver assistance and safety features.

“Safety control is more or less untouched,” said Schleicher. “Can you develop safety critical software based on an open source model? Open source software for security in the PC world is very successful. It means more people can help identify security issues. Maybe that could be applied to functional safety. It might work.”

The fundamental issue though is that the car makers and suppliers would have to open up their source code, something of which they are highly protective.

“The companies in automotive have to decide whether they want control over the software,” he said. “Software in vehicles will be growing tremendously going forward and it will be very difficult to keep up with the complexity if everything is done in house. You need to set up ways of alternative software development.”

But, he said, the suppliers want to protect their IP and not transfer their knowledge to competitors.

“There is no perfect answer,” he said. “The OEMs need to respect the software suppliers. There needs to be certain licence models that deal with this.”

Conclusion
The automotive industry is well down the road to autonomous vehicles, but there are still a lot of kilometres to travel before the technical and licensing hurdles are overcome. Now that the goal can be seen, however, the industry is working hard to take us on what looks like an unstoppable journey to self-driving cars.

electronica Automotive Conference will take place on 7th November 2016, at ICM – Internationales Congress Center München.
