
PerceptIn Chassis Software Adaptation Layer


1. Introduction


In a previous article we introduced the Chassis technology for autonomous robots and vehicles (https://www.perceptin.io/blog/chassis-technologies-for-autonomous-robots-and-vehicles). In this article we introduce the PerceptIn Chassis Software Adaptation Layer, which provides a layer of abstraction over different chassis, so that chassis manufacturers can easily integrate PerceptIn's autonomous driving technology stack and convert their chassis into autonomous vehicles.


Figure 1 below shows the PerceptIn technology architecture. In the upper layer we integrate various sensors (radar, cameras, LiDAR, sonar, GPS, etc.) to capture information from the environment. The raw data is then fed into the main computing engine to perform perception, localization, and planning. Next, the planning module generates control commands and sends them to the chassis. This step is facilitated by the PerceptIn Chassis Software Adaptation Layer, which we explain in more detail in this article.

We have integrated many chassis using the methodology introduced in this article, including an autonomous passenger pod [1], an autonomous explorer [2], and an intelligent advertising machine [3].


Figure 1: PerceptIn technology architecture

2. Communication Protocol

We assume all chassis platforms use the CAN bus for communication [4]. CAN bus allows multiple devices to communicate with one another. Each device on the network has a CAN controller chip, and all devices see all transmitted messages. Each device decides whether a message is relevant or should be filtered out. Every message also has a priority, so if two nodes try to send messages simultaneously, the one with the higher priority is transmitted and the one with the lower priority is postponed. This arbitration is non-destructive and results in uninterrupted transmission of the highest-priority message, which also allows CAN networks to meet deterministic timing constraints.
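The arbitration rule above can be sketched in a few lines. On a real bus the electrical dominant/recessive bit levels do the work, but the outcome is that the numerically lowest identifier wins; the identifiers below are hypothetical 11-bit standard-frame IDs chosen for illustration.

```python
def arbitrate(competing_ids):
    """Return the CAN identifier that wins bus arbitration.

    A dominant bit (0) overwrites a recessive bit (1) on the wire, so
    bitwise arbitration selects the numerically smallest identifier,
    i.e. the highest-priority message.
    """
    return min(competing_ids)

# Three nodes start transmitting simultaneously; 0x120 wins and the
# other two back off and retry, with no corruption of the winning frame.
print(hex(arbitrate([0x2A0, 0x120, 0x1F4])))  # 0x120
```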


The CAN specification also includes a Cyclic Redundancy Code (CRC) to perform error checking on each frame's contents. Frames with errors are disregarded by all nodes, and an error frame can be transmitted to signal the error to the network. The controller distinguishes global from local errors, and if a node detects too many errors, it can stop transmitting or disconnect itself from the network entirely.
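As a rough sketch of this error check, the CAN specification defines a 15-bit CRC with generator polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599), computed bit by bit over the frame up to the CRC field:

```python
CAN_CRC15_POLY = 0x4599  # x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1

def crc15(bits):
    """Compute the 15-bit CAN CRC over a sequence of 0/1 bits."""
    crc = 0
    for bit in bits:
        crcnxt = bit ^ ((crc >> 14) & 1)   # incoming bit vs. CRC MSB
        crc = (crc << 1) & 0x7FFF          # shift, keep 15 bits
        if crcnxt:
            crc ^= CAN_CRC15_POLY
    return crc

# Any single corrupted bit changes the CRC, so the receiver can
# detect the error and discard the frame.
```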

CAN was first created for automotive use, so its most common application is in-vehicle electronic networking. However, as other industries have realized the dependability and advantages of CAN over the past 20 years, they have adopted the bus for a wide variety of applications. Railway applications such as streetcars, trams, undergrounds, light railways, and long-distance trains incorporate CAN.


An open source, CAN-based communication system is CANopen [5]. CANopen comprises higher-layer protocols and profile specifications. It was developed as a standardized embedded network with highly flexible configuration capabilities, originally for motion-oriented machine control systems such as handling systems. Today it is used in many application fields, such as medical equipment, off-road vehicles, maritime electronics, railway applications, and building automation.
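As a concrete example of CANopen's higher-layer conventions, its predefined connection set derives each message's 11-bit COB-ID from a 4-bit function code plus a 7-bit node-ID, sketched below for a few common message types:

```python
# Base COB-IDs from CANopen's predefined connection set; the final
# identifier is the base plus the 7-bit node-ID.
TPDO1 = 0x180      # first transmit process-data object
SDO_TX = 0x580     # SDO server-to-client
SDO_RX = 0x600     # SDO client-to-server
HEARTBEAT = 0x700  # node heartbeat / NMT error control

def cob_id(function_base, node_id):
    """Compose an 11-bit COB-ID from a function code base and node-ID."""
    if not 1 <= node_id <= 127:
        raise ValueError("CANopen node-IDs are 7 bits (1..127)")
    return function_base + node_id

# Node 5's first transmit PDO goes out with COB-ID 0x185,
# and its heartbeat with COB-ID 0x705.
print(hex(cob_id(TPDO1, 5)), hex(cob_id(HEARTBEAT, 5)))
```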


3. Chassis Software Adaptation Layer


Figure 2 below shows the architecture of PerceptIn's chassis software adaptation layer; in a way, it is a more detailed view of the architecture in Figure 1. Note that the Planning & Control module generates control commands and sends them down to the chassis module for execution. Also, the perception sensors, such as sonars and radars [6, 7], interact with both the chassis module for passive perception and the perception module for active perception.


The core of the chassis module consists of three parts:

· VehicleControlUnit: this interface provides abstraction over different chassis platforms so that developers do not have to fully understand the details of the CAN communication protocols. Instead, to integrate a new chassis platform, a developer only needs to derive a new class from the VehicleControlUnit virtual interface and implement its core functions.

· Sensors: this interface provides abstraction for the sensors connected to the CAN bus, mostly passive perception sensors such as radars and sonars. Using this interface, developers can easily obtain perception data without digging into the details of how these sensors work.

· PassiveSafety: developers can implement and adjust their passive perception logic in this interface. For instance, a developer may decide to stop the vehicle when a radar or sonar detects an obstacle within 2 meters. In that case, the developer takes passive perception sensor data from the Sensors interface and implements this logic in the PassiveSafety interface.
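The 2-meter stopping rule above can be sketched as a minimal policy function. This is an illustration, not PerceptIn's actual API; the threshold and function names are assumptions, and in the real system this logic would live behind the PassiveSafety interface and consume data from the Sensors interface.

```python
STOP_DISTANCE_M = 2.0  # hypothetical threshold from the example above

def should_stop(ranges_m):
    """Return True if any radar/sonar range reading (in meters) is
    closer than the stop threshold, meaning the chassis should brake."""
    return any(r < STOP_DISTANCE_M for r in ranges_m)

# One sonar sees a clear 5.0 m, another sees an obstacle at 1.8 m:
# the vehicle stops.
print(should_stop([5.0, 1.8]))  # True
```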


Figure 2: PerceptIn chassis interface

4. Connecting Your Chassis

In this section, we delve into the details of how to integrate a new chassis. Figure 3 shows the hardware setup diagram. For simplicity, we can use a two-CAN-bus setup, in which the chassis platform occupies one CAN bus and the passive perception sensors occupy the other. Both CAN buses then connect to the control computer through a CAN card. It is also possible to put all the sensors and the chassis on the same CAN bus; in that case, we have to agree with the chassis provider on which CAN IDs the chassis uses.
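When everything shares one bus, incoming frames must be dispatched by the agreed-upon CAN IDs. A minimal routing sketch follows; the ID values are hypothetical placeholders for whatever is negotiated with the chassis provider.

```python
# Hypothetical ID allocation agreed with the chassis provider.
CHASSIS_IDS = {0x101, 0x102}           # e.g. chassis feedback frames
SENSOR_IDS = set(range(0x300, 0x308))  # e.g. one ID per sonar/radar unit

def route(can_id):
    """Decide which module should consume a frame with this CAN ID."""
    if can_id in CHASSIS_IDS:
        return "chassis"
    if can_id in SENSOR_IDS:
        return "sensor"
    return "ignore"  # unknown traffic on a shared bus is dropped

print(route(0x101), route(0x303), route(0x7FF))
```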


Figure 3: CAN bus connection setup

Figure 4 shows the DragonFly pod software interface. For each new chassis platform, we need to implement the virtual interface VehicleControlUnit and its essential functions, including SetSpeed, SetBrake, SetAngle, GetSpeed, GetBrake, and GetAngle. These functions are required for the Planning & Control module to interact with the chassis.
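The shape of such an integration can be sketched as follows. The real interface is a C++ virtual class; here we model it in Python for brevity, and the derived `DemoChassisVCU`, its units, and its internal state are assumptions standing in for real CAN-frame encoding.

```python
from abc import ABC, abstractmethod

class VehicleControlUnit(ABC):
    """Abstract chassis interface; one subclass per chassis platform."""
    @abstractmethod
    def SetSpeed(self, speed): ...
    @abstractmethod
    def SetBrake(self, brake): ...
    @abstractmethod
    def SetAngle(self, angle): ...
    @abstractmethod
    def GetSpeed(self): ...
    @abstractmethod
    def GetBrake(self): ...
    @abstractmethod
    def GetAngle(self): ...

class DemoChassisVCU(VehicleControlUnit):
    """Hypothetical adapter for one chassis. A real implementation
    would encode each setter into CAN frames and decode each getter
    from chassis feedback frames."""
    def __init__(self):
        self._speed = self._brake = self._angle = 0.0
    def SetSpeed(self, speed): self._speed = speed
    def SetBrake(self, brake): self._brake = brake
    def SetAngle(self, angle): self._angle = angle
    def GetSpeed(self): return self._speed
    def GetBrake(self): return self._brake
    def GetAngle(self): return self._angle
```

With this split, the Planning & Control module only ever calls the six abstract functions, so swapping chassis platforms means swapping the derived class, nothing more.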


Figure 4: DragonFly pod software interface

5. Software Interfaces

In this section we provide the software interface definitions below:
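The original definitions are not reproduced here; as a hedged sketch of the remaining two interfaces from Section 3, the Sensors and PassiveSafety abstractions might look like the following (method names and the threshold policy are illustrative, not PerceptIn's actual API):

```python
from abc import ABC, abstractmethod

class Sensors(ABC):
    """Abstraction over passive perception sensors on the CAN bus."""
    @abstractmethod
    def GetRadarRanges(self):
        """Return the latest radar ranges in meters."""
    @abstractmethod
    def GetSonarRanges(self):
        """Return the latest sonar ranges in meters."""

class PassiveSafety(ABC):
    """Hook where developers plug in their passive-safety logic."""
    @abstractmethod
    def ShouldStop(self, sensors):
        """Return True if the chassis must be stopped immediately."""

class ThresholdSafety(PassiveSafety):
    """Demo policy: stop if any reading is closer than stop_distance_m."""
    def __init__(self, stop_distance_m=2.0):
        self.stop_distance_m = stop_distance_m
    def ShouldStop(self, sensors):
        readings = list(sensors.GetRadarRanges()) + list(sensors.GetSonarRanges())
        return any(r < self.stop_distance_m for r in readings)
```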

References

1. PerceptIn Launches the DragonFly Pod: The World's First $40,000 AV. Accessed 1 December 2018. https://www.futurecar.com/2649/PerceptIn-Launches-the-DragonFly-Pod-The-Worlds-First-$40000-AV

2. PerceptIn Autonomous Vehicle Technology Demo. Accessed 1 December 2018. https://www.youtube.com/watch?v=qVX2mSvKHR8&t=5s

3. PerceptIn unleashes a driverless mobile vending machine that displays video ads. Accessed 1 December 2018. https://venturebeat.com/2018/11/15/perceptin-unleashes-a-driverless-mobile-vending-machine-that-displays-video-ads/

4. Controller Area Network (CAN) Overview. Accessed 1 December 2018. http://www.ni.com/white-paper/2732/en/

5. CANopen. Accessed 1 December 2018. https://www.can-cia.org/canopen/

6. PerceptIn DragonFly mmWave Radar. Accessed 1 December 2018. https://www.youtube.com/watch?v=ZLOgcc7GUiQ

7. PerceptIn DragonFly Sonar. Accessed 1 December 2018. https://www.youtube.com/watch?v=-H3YdC-xSgQ

©COPYRIGHT 2018 PerceptIn (Shenzhen PerceptIn Technology Co., Ltd.). ALL RIGHTS RESERVED.