Throughout 2015, the development of driverless cars has dominated technology headlines across the UK. Between the futuristic predictions of Elon Musk and the real-life road tests of Google’s SDCs (Self-Driving Cars), driverless vehicles are quickly moving from science fiction to genuine commercial reality.
Already, Google’s fleet of 23 automated vehicles has clocked up over a million driverless miles without any (significant) incidents. Now, as the search giant’s official road test moves out of the streets of Silicon Valley and expands into the UK, the Department for Transport has been forced to devise a regulatory “plan of action” to support the use of driverless vehicles on British roads. While still in the very early stages, the mere consideration of this regulatory framework has further cemented the view that driverless cars are set to become a common fixture of our daily lives.
While the futuristic nature of driverless cars may have led to widespread discussion and debate, in terms of the actual technology, projects such as the Google car still have a long way to go before they are ready for any sort of reliable commercial use. As it stands, the vast majority of self-driving cars still struggle to navigate in anything other than the most basic of weather conditions. Rain, snow and even potholes have so far confounded the driverless technology, causing Google’s fleet to stall and swerve when faced with anything more challenging than the sunny streets of San Francisco. Most prohibitive of all, however, has been the price tag. As it stands, each Google Car contains around $150,000 of robotics technology, including a $70,000 roof-mounted LIDAR sensor system. While the price of these components will inevitably drop in years to come, we probably shouldn’t expect to see a budget-friendly model hitting the consumer market any time soon.
But while driverless cars may not yet be ready for the real world, that isn’t to say that their incorporated technologies don’t have a place in the existing automotive marketplace. Over the last ten years, automotive electronics have grown staggeringly sophisticated. A car released in 2015 now houses the equivalent of twenty desktop computers, incorporates more than 10 million lines of code, and typically processes up to 600 GB of data every single day. Combined with advancements in automation, Bluetooth-based monitoring systems and a 50% increase in electric power supplies, the incorporation of new technologies is revolutionising the way we drive. These “connected cars” are already extremely popular, with independent research from element14 suggesting that 61% of consumers either own or are planning to own a connected car in the near future.
Already, much of the low-level technology found within driverless cars is being adopted into advanced driver assistance systems (ADAS). Rather than taking complete control of the vehicle, these AEC-approved systems employ video cameras and radar to alert drivers to dangers on the road. In some instances this information is then being connected to cloud-based databases, combining with third-party information platforms to create a more all-encompassing IoT system. One of the most recent examples of this has been the adoption of Catapult transport standards, which were designed to tie together different transportation sectors and ultimately encourage “intelligent mobility” around the UK. By adapting previously unreleased data streams from the Met Office, Catapult has been able to develop an “instant weather” project, which provides highly localised real-time weather updates to help ADAS navigate hazardous conditions.
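To make the idea concrete, here is a minimal sketch (in Python) of how an ADAS unit might fuse an on-board radar hazard score with a localised weather feed when deciding whether to warn the driver. The function name, inputs and thresholds are all illustrative assumptions, not the actual Catapult or Met Office interface:

```python
def alert_level(radar_hazard: float, visibility_m: float, road_friction: float) -> str:
    """Return a driver alert level from fused sensor and weather inputs.

    radar_hazard  -- 0.0 (clear road) to 1.0 (imminent obstacle), from radar/camera
    visibility_m  -- visibility in metres, from a localised weather update
    road_friction -- estimated grip, 1.0 (dry tarmac) down to ~0.2 (ice)
    """
    # Poor weather lowers the threshold at which a warning escalates.
    weather_penalty = 0.0
    if visibility_m < 200:
        weather_penalty += 0.2
    if road_friction < 0.5:
        weather_penalty += 0.2

    score = radar_hazard + weather_penalty
    if score >= 0.8:
        return "brake-assist"
    if score >= 0.5:
        return "warn"
    return "normal"

# The same moderate radar reading escalates in icy, low-visibility conditions
# but passes silently on a clear, dry day.
print(alert_level(0.4, 150, 0.3))
print(alert_level(0.4, 1000, 0.9))
```

The point of the design is that neither data stream alone triggers the alert: the weather feed merely shifts the sensitivity of the on-board sensors.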
Not only are vehicles using this tech to get to know their surroundings, they’re even starting to get to know us as individual drivers as well. Earlier this month, researchers at the Department of Electrical Engineering at Berkeley announced the successful development of an automotive algorithm designed to predict the (somewhat erratic) decisions that human drivers make on the road. Following a simulated test run, this algorithm has so far managed to predict, with 92% accuracy and without any physical input, whether a driver is planning to change lanes. While still in its infancy, this type of predictive technology may prove an effective hybrid between traditional vehicles and fully driverless automation. A similar example is the inclusion of sophisticated “Mid-Air Touch” systems, which allow drivers to navigate their on-board computer systems without the need to rely on either voice activation or physical touch. Once again this reduces the workload of the driver, without delegating excessive responsibility to the vehicle itself.
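For a flavour of how such prediction can work, here is a toy logistic-scoring sketch. This is not the Berkeley team’s algorithm: the feature names and hand-set weights are invented for illustration, whereas a real system would learn its weights from driving data:

```python
import math

# Illustrative hand-set weights for hypothetical driver signals.
WEIGHTS = {
    "mirror_glance": 1.8,   # driver checked the side mirror recently (0 or 1)
    "lateral_drift": 2.5,   # drift toward the lane boundary, in metres
    "indicator_on": 3.0,    # turn signal active (0 or 1)
    "speed_delta": 0.04,    # km/h faster than the car ahead
}
BIAS = -4.0

def lane_change_probability(features: dict) -> float:
    """Logistic score: estimated probability the driver intends to change lanes."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A driver who glanced at the mirror and is drifting toward the line scores high...
drifting = {"mirror_glance": 1, "lateral_drift": 0.8, "speed_delta": 15}
# ...while a driver holding the lane centre scores low.
steady = {"lateral_drift": 0.05, "speed_delta": 0}

print(round(lane_change_probability(drifting), 2))
print(round(lane_change_probability(steady), 2))
```

Crucially, none of the inputs is a “physical input” in the sense of the driver operating a control: the prediction comes entirely from observed behaviour, which is what makes this a bridge between conventional and driverless vehicles.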
In many respects, this is where we should expect to see the real innovation over the next few years. Not in attempting to realise some sci-fi vision of driverless cars (which would require an unrealistic redesign of infrastructure), but in using that same technology to improve the overall driving experience of existing vehicles. In that regard, driverless cars have almost proved more useful as a testing ground than as an actual end product. By pushing the limits of automated tech, they help us to discover the best possible balance between human and machine activities, improving safety and functionality without compromising on the benefits of automation.
For more information from Farnell, click here.