Steve Rogerson reports from last month’s AutoSens conference in Brussels on the progress towards autonomous vehicles. The goal is clear: autonomous, or self-driving, cars will be on our roads in the not-too-distant future. The route there, however, is still littered with hurdles, one of which is finding the right balance between the different types of sensors and combining the information from them into something an autonomous system can use.
Views at September’s AutoSens conference in Brussels ranged from those who believed everything could be handled with sophisticated image sensors alone to those who believed a full combination of cameras, radar, lidar and thermal sensors was needed.
Investment in this field is huge and looks set to continue growing, even though the returns remain a long way off.
“Over $100bn has been invested in autonomous driving,” said Rudy Burger, Managing Partner at Woodside Capital Partners, “but this is producing zero revenue at this point. It hasn’t matured yet. This is still a pre-revenue market, and that is not sustainable.”
A lot of that money is being spent on artificial intelligence (AI), but Burger questioned whether the current approach to AI could deliver vehicles that are autonomous and, most importantly, safe. Deep learning, he said, was very good at well-bounded problems such as playing chess.
“But that is not how we learn to drive,” he said. “You can take a 15-year-old and teach them to be a good driver in a few months, but can we get cars that are safe enough through brute-force deep learning?”
As an example, he said light aircraft pilots were taught that the safest place to land if they were in trouble was on a highway. Human car drivers who see this happening will react accordingly, and in the few instances it has happened there has not been a pile-up.
“Does that mean we have to teach autonomous vehicles what a light aircraft looks like?” said Burger. “How do we teach them for every unusual occurrence?”

Charles Sevior, CTO at Dell Technologies, said the hazards varied from country to country.
“In Australia, kangaroos can leap out in front of you,” he said. “That is unique to Australia. That is why the OEMs are test driving in different countries to build up a database. It will be too difficult to have one network that takes care of everything in every country.”
Sevior was also critical of the terminology. “I don’t like the term AI because these products are not sentient,” he said.
Burger also criticised car makers who were publicising goals of zero fatalities, accusing them of setting the industry up for failure.
“We may never get to Vision Zero,” he said. “Why not Vision 50%? Let us talk about halving the number of deaths on the road. That is achievable.”
Theresa Cypher-Plissart, Autonomous Driving Researcher at the Alliance Innovation Lab in Silicon Valley, agreed. She said: “We need to minimise accidents. I am not talking about zero accidents or zero fatalities.”
She said that to achieve the vision of autonomous driving, safety had to be ensured. “Systems have to be fail-safe with multiple redundancy,” she said. “An important consideration is anticipation of hazards and things that can go wrong. That is defensive driving, and humans do it naturally when we reach intersections. We adjust our driving based on the environment, whether it is residential, whether there are other road users, and the weather.”
On sensors, Burger pointed out that humans have two very capable image sensors and a very capable processor. At the moment, semiconductor processors are not as good as a human brain and image sensors are not as good as human eyes. However, he said humans were pretty good drivers with just two image sensors.
“We don’t need lidar and radar,” he said. “Cars do because we haven’t got good enough image sensors yet, but maybe we will in the long run. As image sensors get closer to the ability of our eyes, they will take a larger slice of the pie, and as they mature I think they will become the dominant technology.”
At the moment, he said, the bulk of the investment is going into lidar followed by cameras, processors and radar.
“The vast share is going into lidar,” he said. “Over 100 new lidar companies have emerged in the past few years, and only a handful of these will survive.”
James Hodgson, Principal Analyst at ABI Research, agreed. He said: “There will be consolidation in the supply chain and, at each level of the supply chain, there is a case for winner takes all.”
Dexin Chen, Principal Analyst at IHS Markit, said that while some thought lidar would not be needed, the industry consensus was that it would be.
“Lidar is needed but the tier ones and OEMs are working with alternatives,” said Chen. “They are being technology agnostic. It is a really crowded area right now. There are 80 to 100 companies working on lidar.”
He said lidar’s strengths included applications such as 3D mapping; it worked well in low light and offered good resolution and precision. On the downside, it was expensive and performed poorly in bad weather.
In 2018, he said lidar shipments were around the four million mark, dominated by basic lidar systems such as Continental’s pre-collision sensor.
“Basic lidar dominates now, but more performance will be needed,” said Chen. “We will see innovative lidar designs catching up.”
On price, lidar still has a long way to go. “Everyone is talking about lidar becoming low cost,” said Chen, “but companies are not getting the price down yet. The cost of lidar is still a moving target. Everyone is looking at $200, but it depends on the type of lidar you want and the performance required.”
IHS forecasts automotive lidar shipments to hit 18 million units in 2025, but several technology options exist and it is not yet clear which will win.
Wilfried Philips, a senior professor at Ghent University, also commented on the battle between radar and lidar. He said: “Radar is cheaper than lidar but not as good. But radar is improving and lidar is getting cheaper. We will see which wins for distance detection.”
He also said thermal sensors could help detect pedestrians in difficult cases. “They are also good at night when optical cameras are not so good,” he said. “It helps eliminate false positives because people have a temperature.”
Also booming was interest in AI processors, or accelerators, which Hodgson said had rekindled interest in silicon. More than 50 new AI processor companies have emerged, and this year 83% of the investment flowing into semiconductor companies went to those involved with AI processors.
“I expect to see an accelerated rate of consolidation over the next few years,” he said. “But it is going to take longer than most of us were hoping for the autonomous vehicle market to mature. Nothing is going to change overnight.”
The number of pixels coming into a vehicle is huge, far more than a multi-core CPU can handle. That is why companies are looking at AI accelerators, each of which can be designed for a different class of algorithm and sensor.
However, according to Andrew Richards, CEO of Codeplay Software, current coding standards such as Misra C++ cannot handle the use of AI accelerators.
“You need AI accelerators to achieve AI in automotive,” he said. “Misra C++ targets the language as it stood in 2008, but for an accelerator you need C++11. Misra supports the old designs but not these.”
The Misra C++ group is updating the standard to support accelerator programming in collaboration with Autosar.
Sometimes that goal of autonomous vehicles seems as far away as ever, yet progress is being made, and probably at a faster rate than most people realise. The problem is there are still disagreements about the configuration of the technology that will be used in these vehicles, and that looks set to continue for some time. There may never be a consensus until after these vehicles are on the road and driving themselves.