Why the automotive industry is turning to Ethernet for increased in-vehicle network bandwidth. By Dr. Kai Richter and Jonas Diemer of Symtavision and Daniel Thiele, Philip Axer and Dr. Rolf Ernst of Technische Universität Braunschweig.
The superior bandwidth and flexibility of Ethernet, along with the potential for sharing the cost of ownership with other industrial segments, make it ideal for addressing the high demands of new functions in infotainment and advanced driver assistance systems (stereo camera, surround view, etc.), as well as for reducing ECU flashing and updating costs.
While automotive OEMs initially intend to use Ethernet as a complementary communication medium, they are also considering its use as a powerful, future backbone technology capable of carrying traffic originating in CAN (-FD) and other bus sub-systems (see Figure 1).
Figure 1: An example of future automotive Ethernet backbone network
However, using Ethernet in automotive applications is no easy task, as it was developed primarily for bandwidth-hungry applications that are neither time- nor safety-critical. Vehicles have a far richer set of real-time requirements, including quality-of-service (QoS), guaranteed end-to-end timing (i.e. latency), guaranteed delivery, guaranteed bandwidth and best-effort traffic. Furthermore, Ethernet differs from CAN (-FD) and FlexRay in many ways. It is a packet-switched network with point-to-point links and switches, which add delays to the end-to-end latency. It offers (at most) 8 priority levels, compared to the 2^11 (or 2^29 with extended identifiers) of CAN. Frames can even be dropped by switches without any ECU noticing. Being able to analyse architectural concepts, load, performance and real-time capabilities of Ethernet networks has therefore become an urgent priority. While such analyses are well-established for CAN and FlexRay networks, they barely exist for Ethernet-based E/E architectures, where real-time behaviour is more complex.
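To make the switch-induced delays concrete, the store-and-forward wire time of a single hop can be estimated from the frame size and link speed. The following is a minimal sketch, assuming 100 Mbit/s links and standard Ethernet framing overheads; queueing delays from interfering traffic, which dominate the hard cases, are deliberately ignored:

```python
# Illustrative per-hop latency estimate; assumes 100 Mbit/s links and
# ignores queueing delays caused by interfering traffic.
LINK_MBPS = 100  # assumed link speed

def hop_latency_us(payload_bytes: int) -> float:
    """Wire time of one frame in microseconds: payload plus 18 bytes
    of MAC header/FCS, 8 bytes of preamble/SFD and a 12-byte gap."""
    wire_bytes = payload_bytes + 18 + 8 + 12
    return wire_bytes * 8 / LINK_MBPS

# Every store-and-forward switch on the path adds at least the wire
# time of the frame before it can leave the next port.
print(round(hop_latency_us(1500), 2))  # 123.04 (microseconds)
```

Even without contention, a path through three switches therefore adds several hundred microseconds for full-size frames, which is why hop count matters in the topology discussion that follows.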
Fortunately, there are many Ethernet parameters that can be used to tailor a solution for the automotive domain, provided a careful design approach based on analytical methods is taken. Over the past two years, Symtavision has collaborated closely with the Technische Universität Braunschweig and its iTUBS innovation centre, as well as several premium and volume automotive OEMs and industrial partners (Siemens, VW, BMW and Daimler), to develop a technology that is now commercially available as part of Symtavision’s SymTA/S tool suite. Integrated with Symtavision’s well-established analyses for CAN, FlexRay and AUTOSAR-based ECUs, the Ethernet analysis solution enables OEMs and Tier 1 suppliers to plan, optimise and verify timing when introducing Ethernet. With the ability to undertake end-to-end timing analysis for distributed functions via Ethernet, as well as connecting Ethernet to legacy CAN and FlexRay networks via gateways, extensive experimentation and evaluation is now possible.
Traditional automotive communication buses like CAN (-FD) or FlexRay are shared by all nodes connected to the same physical wire. Switched Ethernet, on the other hand, can only connect two nodes directly in a point-to-point fashion. This gives rise to several potential topologies, including Star, Line and Clustered variants. A Star topology connects all ECUs to one large central switch. This reduces network contention by minimising shared links, allowing very low latencies, but increases cable length and wiring costs. A Line topology seeks to reduce overall cable length by adding a 3-port switch to each ECU and connecting them in a daisy-chain fashion. However, this increases ECU costs and the potential for interference in the network. Clustered topologies are a mixture of Star and Line topologies and facilitate a trade-off between their individual strengths and weaknesses (see Figure 2).
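The wiring-versus-contention trade-off can be illustrated with a toy model. The helper functions below are purely hypothetical (unit-length links, no redundancy); they count only the number of links and the worst-case number of switch hops for n ECUs:

```python
# Toy topology comparison for n ECUs; an assumption-laden sketch.

def star(n: int) -> dict:
    # One central switch: n ECU-to-switch links, and any two ECUs
    # are separated by exactly one switch hop, keeping contention low.
    return {"links": n, "max_switch_hops": 1}

def line(n: int) -> dict:
    # Daisy chain of 3-port switches (one per ECU): only n-1 short
    # inter-switch links, but worst-case traffic crosses all n switches.
    return {"links": n - 1, "max_switch_hops": n}

for n in (4, 8, 16):
    print(n, star(n), line(n))
```

A Clustered layout would fall between the two rows for each n: a few local stars joined by short backbone links, trading some extra hops for shorter cabling.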
Figure 2: Typical topology candidates
Choosing the optimal Ethernet topology requires thorough analysis. In a recent study of the impact of topology on Ethernet network timing for a typical industrial automation scenario (similar to automotive control applications), a Star topology achieved the best latencies, while a Line topology showed the worst, including some very high delays. This is explained by the increased contention on shared links. As expected, the performance of a Clustered topology fell between the two. Topology choice can, therefore, have a significant impact on the overall cost and performance of the network. Timing analysis methods and tools can provide the support needed to achieve the optimal choice, while also allowing other design space options and switch parameters to be considered.
Switches perform the key role of transporting traffic, forwarding incoming frames over a switch fabric to one or more output ports (unicast, multicast or broadcast) and arbitrating between different, concurrent traffic streams according to configurable scheduling schemes. Frames are stored in queues, from which a scheduler selects them for transmission over the output port (see Figure 3). This scheduling results in highly dynamic delays depending on the interfering traffic, which can lead to highly complex real-time behaviour and non-intuitive corner cases with increased latencies.
Figure 3: Internal switch architecture
The scheduling policy depends on which ‘flavour’ of Ethernet is employed. The original Ethernet standard (IEEE 802.3) mostly uses a single FIFO queue per port, making differentiated predictions per traffic class, or even per message, almost impossible. IEEE 802.1Q VLAN adds the concept of traffic classes: VLAN switches can have up to 8 separate FIFO queues per port, each with a distinct strict priority (SP). Just as with CAN buses, high-priority traffic is always served before lower-priority traffic, with the key difference that CAN offers 2^11 or even 2^29 priority levels, instead of 8. The recent Ethernet AVB standard extends the fixed-priority scheduler with credit-based traffic shapers that facilitate bandwidth reservation, preventing higher-priority traffic classes from permanently blocking lower-priority traffic. The traffic shaping, however, also imposes an additional delay and thus increases latencies. Other switches provide a weighted round-robin (WRR) policy, where bandwidth is distributed according to weights associated with each traffic class. This, however, can exhibit very complex timing behaviour with sudden jumps in latency. Time-Triggered Ethernet (TTE) uses a different approach based on a synchronised time-triggered schedule, which avoids collisions by design instead of resolving them on the fly. Here, just as with FlexRay, the guaranteed latencies come at the cost of reduced flexibility and efficiency. A combination of these properties is currently being pursued for the upcoming AVB Generation 2 (also called time-sensitive networking, TSN). Overall, the best strategy in terms of short latencies depends on the number of ECUs in the system and the traffic patterns in the network.
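The strict-priority behaviour of an 802.1Q output port can be sketched in a few lines. This is a deliberately simplified model (no shaping, no frame sizes, no wire timing), and the traffic names are hypothetical:

```python
import collections

class SPOutputPort:
    """Simplified 802.1Q output port: up to 8 FIFO queues, and the
    scheduler always drains the highest non-empty priority first."""

    def __init__(self, num_classes: int = 8):
        self.queues = [collections.deque() for _ in range(num_classes)]

    def enqueue(self, priority: int, frame: str) -> None:
        self.queues[priority].append(frame)

    def dequeue(self):
        for q in reversed(self.queues):  # highest priority wins
            if q:
                return q.popleft()       # FIFO within a class
        return None                      # all queues empty

port = SPOutputPort()
port.enqueue(1, "diagnostics")  # low-priority frame arrives first...
port.enqueue(6, "camera-A")     # ...but camera traffic overtakes it
port.enqueue(6, "camera-B")
print(port.dequeue())  # camera-A
```

Note that without a shaper, a steady stream of priority-6 frames could starve the diagnostics queue indefinitely; that is exactly the blocking behaviour the AVB shapers are designed to bound.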
Ethernet and AUTOSAR
Timing is not only affected by the network infrastructure, but also by the higher-layer protocols that govern how data is encapsulated into Ethernet frames (see Figure 4). Due to the lack of automotive-specific standards and experience, plenty of questions arise. Is the use of MAC frames sufficient, or do we always need network and/or transport layer protocols such as IP, UDP and TCP? Higher-layer protocols provide additional services such as an increased address space and routing (IP), fragmentation of very large payloads (IP), data integrity checks (UDP, TCP), and end-to-end flow control, re-transmission of lost frames and in-order delivery (TCP). At the same time, higher-layer protocols increase overhead due to additional protocol data, handshaking traffic and the corresponding delays.
Figure 4: Signal-to-PDU-to-Frame mapping options on CAN, FlexRay and Ethernet
So, how can AUTOSAR signals or protocol data units (PDUs) be packed into MAC or UDP/IP frames? A CAN-like 1:1 mapping would be inefficient. Ethernet frames offer roughly a 1500-byte payload, which fits about 187 PDUs (of 8 bytes each). UDP datagrams take up to 64 kByte. Larger frames reduce the overhead, but at the same time increase the per-signal latency, as additional time is required for collecting the signals (including waiting for updates), especially with different signal update cycles. Where domain-gateway architectures (see Figure 1) are deployed, such delays can occur twice, on both ‘ends’ of the Ethernet backbone; this is called end-of-line blocking. Richer protocols, such as SOME/IP (see AUTOSAR 4.0.3), provide more flexibility at the cost of additional complexity. Finally, it should be expected that the packing will also be driven by the way the limited VLAN priorities are used, because packing will happen for priority sharing anyway.
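The tension between overhead and per-signal latency can be sketched with a toy model: k eight-byte PDUs share one frame, and one fresh PDU arrives per cycle, so the first PDU packed waits (k-1) cycles before the frame is sent. The 10 ms cycle and 18-byte header are illustrative assumptions, not figures from any specific network:

```python
# Toy packing trade-off: more PDUs per frame means less header
# overhead, but a longer worst-case wait for the first PDU packed.

HDR_BYTES = 18   # assumed MAC header + FCS
PDU_BYTES = 8

def overhead_ratio(k: int) -> float:
    """Fraction of the frame consumed by headers when k PDUs share it."""
    return HDR_BYTES / (k * PDU_BYTES + HDR_BYTES)

def worst_wait_ms(k: int, cycle_ms: float = 10.0) -> float:
    """Worst-case packing delay if one PDU arrives per cycle."""
    return (k - 1) * cycle_ms

for k in (1, 10, 100):
    print(k, round(overhead_ratio(k), 3), worst_wait_ms(k))
```

Going from 1 to 100 PDUs per frame cuts the header overhead from roughly 69% to about 2%, but the worst-case packing delay grows to nearly a full second at a 10 ms cycle; in a domain-gateway architecture this delay can be paid on both ends of the backbone.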
Automotive OEMs have made it very clear that Ethernet is on its way into vehicle electronics in a big way. The question is how to determine the ‘best’ topology, switch architecture and higher-level protocol. For this, a clear strategy and appropriate analysis tools are needed. Working in collaboration, the Technische Universität Braunschweig, via its innovation centre iTUBS, has developed the necessary foundations, and Symtavision the commercial tools, for Ethernet as an automotive backbone technology. Initial solutions now exist, but the design space is still very large, and covering all potential options with real-time analysis methods is immensely expensive. Consequently, the automotive industry, including component suppliers, needs to develop ‘deeper’ standards as soon as possible; otherwise it will not be possible to share the research and development costs among all players.
Any standard must be driven by the specific needs of OEMs coupled with timing, safety, and QoS considerations, as well as the connection of legacy standards (CAN to CAN-FD). All this demands a holistic, network-centric approach to automotive Ethernet standards development which combines specific technologies (e.g. AVB or TTE) with the way the technology is used (e.g. topology and higher-level protocols). Only then will it be possible to achieve cost efficiency for components, processes and tools.