
PI-SIM: PerceptIn Simulator for Autonomous Robots and Vehicles


1. Introduction


This is part of the PerceptIn technology blog on how to build your own autonomous vehicles and robots. The other technical articles on these topics can be found at https://www.perceptin.io/blog/.


This article introduces PerceptIn's simulation system, PI-SIM. Note that PI-SIM is an extremely lightweight simulator: simplicity and modularity are its design principles, which sets it apart from function-rich simulators such as CARLA [1]. With CARLA, users can integrate perception, planning, and other functions to perform heavyweight simulations. With PI-SIM, by contrast, users can easily import a selected map (the mapping process is described at https://www.perceptin.io/blog/map-creation-for-autonomous-robots-and-vehicles), change the chassis model, and select which behaviors to simulate. Advanced users can also modify the built-in planning and control algorithms, or simply import their own.


To enable a customized autonomous vehicle (such as the DragonFly Car, https://www.perceptin.io/blog/build-your-own-autonomous-vehicle-with-dragonfly-technologies) to run in user-defined areas, a simulator that provides unified functionality and interfaces, and that can be integrated into an autonomous vehicle architecture, is needed. Such a simulator is quite useful for accelerating the development and deployment process. To be more specific, the simulator should have the following features.


First, it can import user-specified maps and cars. Second, it can run simulations on the specified maps and allows users to visualize them in action, so the car's navigation and path planning functions can be easily verified in the simulator. Third, the car and the user can interact with the simulation environment: for example, the car's state and behavior can be visualized in real time by the simulator, which is extremely useful for integrating and debugging the whole system.


The following sections delve into the technical details of PI-SIM. We start with the architecture of PI-SIM and describe its components. After that, we show a demo of the DragonFly car's navigation behavior simulated on the University of California, Irvine (UCI) campus. In addition, PI-SIM can be used to visualize the vehicle state as the DragonFly car navigates in the real world.



2. PI-SIM Architecture


Figure 1 below shows the architecture diagram of PI-SIM, which utilizes a modular design and consists of a map, a car configuration module, one or more planning and control algorithms, a simulation module, and a visualization module.


The beauty of the modular design is that you can create your own map and run your simulation in any environment. Also, if you want to develop your own planning and control algorithms, you can do so as long as your modules conform to the specified interface APIs.


The Map module specifies the lane and road information. Users specify a car's form factor in the Car Config module; the configured car model is used by the planner to calculate the car's pose and to predict future trajectories. The Planner generates the car's trajectory and states and passes them to the Simulation Module. The Simulation Module controls the simulation environment and feeds obstacle information back to the Planner. The Visualization Module presents the simulation UI and allows users to interact with it; the UI displays information such as the current vehicle status, the vehicle trajectory, and the map.
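To make the module boundaries concrete, the sketch below shows what a pluggable planner interface could look like. This is an illustrative sketch, not PI-SIM's actual API; all class and method names here are assumptions.

```python
import math
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians


@dataclass
class VehicleState:
    pose: Pose
    speed: float  # m/s


class Planner(ABC):
    """Any planning/control module conforming to this interface can be
    plugged into the simulation loop."""

    @abstractmethod
    def plan(self, state: VehicleState, obstacles: list) -> list:
        """Return the next trajectory segment as a list of Poses."""


class StraightLinePlanner(Planner):
    """Trivial example planner: step straight ahead along the current heading."""

    STEP = 0.5  # meters per planning step (illustrative value)

    def plan(self, state, obstacles):
        p = state.pose
        return [Pose(p.x + self.STEP * (i + 1) * math.cos(p.heading),
                     p.y + self.STEP * (i + 1) * math.sin(p.heading),
                     p.heading)
                for i in range(3)]


state = VehicleState(Pose(0.0, 0.0, 0.0), speed=1.0)
traj = StraightLinePlanner().plan(state, obstacles=[])
```

Because the Simulation Module only ever calls `plan()`, any user-written planner that honors the interface can be swapped in without touching the rest of the system.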


Figure 1: PI-SIM Architecture


3. Simulation


We use the DragonFly car and the map of the UC Irvine campus as an example to demonstrate the functionality of PI-SIM. To precisely emulate the car's behavior, a set of parameters of the car needs to be specified. Taking the DragonFly pod [2] as an example (illustrated in the figure below), users can specify the following parameters:


  • changeDis: the distance over which the vehicle performs a lane change. The larger the distance, the smoother the motion, but the longer the lane-change trajectory.

  • senseDst: the sense distance for passive perception; the chassis is notified when obstacles appear within this distance.

  • laneWidth: the width of the lanes.

  • regularSpeed: the regular forward speed; the planning algorithm tries to maintain this speed when moving forward.

  • turnSpeed: the turning speed; the planning algorithm tries to maintain this speed when making a turn.

  • maxAngle: the maximum angle between the front wheels and the vehicle heading when making a turn.

  • maxTurnSpeed: the maximum rotation rate of the front wheels when turning.

  • vehicleLength: the vehicle length.

  • vehicleWidth: the vehicle width.

  • wheelDst: the distance between the front and rear wheels, i.e., the wheelbase.
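A car configuration could be expressed as a simple structure like the one below. The field names follow the list above; the default values are illustrative placeholders, not DragonFly's actual numbers. As a usage example, the wheelbase and maximum steering angle together determine the minimum turning radius under a simple bicycle model.

```python
import math
from dataclasses import dataclass


@dataclass
class CarConfig:
    """Chassis parameters from the list above; the default values are
    illustrative placeholders, not DragonFly's actual numbers."""
    changeDis: float = 3.0      # lane-change distance (m)
    senseDst: float = 5.0       # passive-perception sense distance (m)
    laneWidth: float = 2.5      # lane width (m)
    regularSpeed: float = 1.5   # target forward speed (m/s)
    turnSpeed: float = 0.8      # target speed while turning (m/s)
    maxAngle: float = 30.0      # max front-wheel steering angle (degrees)
    maxTurnSpeed: float = 20.0  # max front-wheel rotation rate
    vehicleLength: float = 2.0  # vehicle length (m)
    vehicleWidth: float = 1.2   # vehicle width (m)
    wheelDst: float = 1.5       # wheelbase (m)


def min_turn_radius(cfg: CarConfig) -> float:
    """Minimum turning radius from a bicycle model: R = L / tan(delta_max)."""
    return cfg.wheelDst / math.tan(math.radians(cfg.maxAngle))


r = min_turn_radius(CarConfig())
```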


Figure 2: DragonFly pod

The demonstration video in [3] shows the DragonFly pod's navigation and path planning behavior on the UCI campus. In the video, after the DragonFly pod's chassis model and the UCI map are imported, the simulator is ready to run.


After that, users set the starting location, intermediate stops, and destination through PI-SIM's UI, and a globally optimal route is generated. As shown in Figure 3, once the DragonFly pod starts running in the simulator, the UI shows the current road situation, the map, and the vehicle's current speed.
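One standard way to produce such a route is to run a shortest-path search over the road graph for each leg (start to first stop, stop to stop, last stop to destination) and chain the legs together. The sketch below uses Dijkstra's algorithm on a toy graph; the node names and the routing approach are assumptions for illustration, not PI-SIM's actual implementation.

```python
import heapq


def shortest_path(graph, start, goal):
    """Dijkstra over a road graph {node: [(neighbor, meters), ...]}."""
    dist, prev, seen = {start: 0.0}, {}, set()
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))


def plan_route(graph, start, stops, destination):
    """Chain shortest-path legs: start -> each stop in order -> destination."""
    route, cur = [start], start
    for nxt in stops + [destination]:
        route += shortest_path(graph, cur, nxt)[1:]
        cur = nxt
    return route


# Toy road graph (hypothetical node names, not the UCI map).
g = {"A": [("B", 1.0), ("C", 4.0)],
     "B": [("C", 1.0), ("D", 5.0)],
     "C": [("D", 1.0)],
     "D": []}
route = plan_route(g, "A", ["C"], "D")
```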


Figure 3: PI-SIM UI

Figure 4 shows the dynamic obstacle avoidance behavior in the simulation. When a dynamic obstacle is detected, the UI shows the obstacle's distance to the vehicle. The planning and control algorithm then decides whether to stop the vehicle and wait for the obstacle to disappear, or to go around the obstacle.
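The stop-or-go-around decision can be sketched as a simple rule keyed on the senseDst parameter. The function name, the adjacent-lane check, and the returned labels are illustrative assumptions, not PI-SIM's actual logic.

```python
def react_to_dynamic_obstacle(obstacle_dist, sense_dst, adjacent_lane_free):
    """Decision sketch for the dynamic-obstacle scenario (illustrative,
    not PI-SIM's actual implementation)."""
    if obstacle_dist > sense_dst:
        return "continue"       # obstacle not yet within sense distance
    if adjacent_lane_free:
        return "go_around"      # bypass the obstacle via the adjacent lane
    return "stop_and_wait"      # wait for the obstacle to clear
```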


Figure 4: dynamic obstacle avoidance

Figure 5 shows the car-following behavior in the simulation. In this scenario, a vehicle is moving in front of the host vehicle, so the host vehicle follows the front vehicle while maintaining a safe distance.
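A minimal way to realize car following is a proportional gap controller: track the lead vehicle's speed while correcting toward a safe following distance, clamped to the regularSpeed cruise limit. The gain k and all parameter names are illustrative assumptions, not PI-SIM's controller.

```python
def follow_speed(gap, lead_speed, safe_gap, regular_speed, k=0.5):
    """Proportional gap-controller sketch for car following (illustrative,
    not PI-SIM's actual control law)."""
    cmd = lead_speed + k * (gap - safe_gap)   # close or open the gap
    return max(0.0, min(cmd, regular_speed))  # never reverse, never exceed cruise speed
```

At the safe gap the host simply matches the lead vehicle's speed; when the gap shrinks the command drops toward zero, and when the gap grows the command saturates at the cruise speed.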


Figure 5: car following behavior

Figure 6 shows the static obstacle avoidance behavior in the simulation. When a static obstacle is detected, the host vehicle slows down and eventually changes to a different lane to bypass the obstacle.
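The lane change itself can be described by a lateral-offset profile spread over the changeDis parameter. The cosine ramp below is one assumed shape (not necessarily the one PI-SIM uses); it illustrates why a larger changeDis yields a smoother, longer maneuver.

```python
import math


def lane_change_profile(change_dis, lane_width, n=5):
    """Cosine-ramp lateral-offset profile for a lane change (an assumed
    shape): blend from the current lane center to the adjacent one over
    change_dis meters of forward travel."""
    pts = []
    for i in range(n):
        s = change_dis * i / (n - 1)  # longitudinal distance traveled
        lateral = lane_width * 0.5 * (1 - math.cos(math.pi * s / change_dis))
        pts.append((s, lateral))
    return pts


profile = lane_change_profile(change_dis=3.0, lane_width=2.5)
```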


Figure 6: static obstacle avoidance

Figure 7 shows the parking behavior in the simulation. When the vehicle reaches the destination, it searches for the nearest parking spot and automatically parks itself. To achieve this, parking spot locations need to be labeled on the map.
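Given labeled parking spots on the map, the nearest-spot search reduces to a minimum-distance lookup. The spot labels and coordinates below are hypothetical.

```python
import math


def nearest_parking_spot(vehicle_xy, spots):
    """Pick the labeled parking spot closest to the vehicle's position.
    `spots` are (name, x, y) labels assumed to be annotated on the map."""
    return min(spots, key=lambda s: math.hypot(s[1] - vehicle_xy[0],
                                               s[2] - vehicle_xy[1]))


spot = nearest_parking_spot((0.0, 0.0),
                            [("P1", 3.0, 4.0), ("P2", 1.0, 1.0)])
```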


Figure 7: parking


4. Interaction with a real DragonFly car


By replacing the perception and localization parts of the Simulation Module with the DragonFly vision module [4] and GNSS sensor [5], and connecting the planning and control module to a DragonFly pod [6], the simulation system serves as the UI for the physical vehicle and allows users to interact with it. While the vehicle navigates the user-defined area, physical information about the vehicle and the environment, such as the vehicle's pose, trajectory, lane information, and obstacles, is displayed on the UI in real time. The video in [6] shows an example of this interaction between the simulator and a real DragonFly pod. Thanks to the modular design, comparing the car's behavior between the simulation environment and the real testing environment, as well as integrating the full autonomous vehicle system, becomes easy.
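This swap works because simulated and physical perception can sit behind the same interface, so the planner never knows which one is feeding it. The class and method names below are an illustrative sketch, not PI-SIM's actual module API.

```python
class PerceptionSource:
    """Shared interface behind which simulated and physical perception
    are interchangeable (illustrative, not PI-SIM's actual API)."""

    def obstacle_distances(self):
        raise NotImplementedError


class SimulatedPerception(PerceptionSource):
    """Simulation Module side: obstacles come from the scripted scenario."""

    def __init__(self, scripted_distances):
        self.scripted = scripted_distances

    def obstacle_distances(self):
        return self.scripted


class HardwarePerception(PerceptionSource):
    """Real-vehicle side: would wrap the DragonFly vision module and the
    GNSS sensor; left unimplemented in this sketch."""

    def obstacle_distances(self):
        raise NotImplementedError("read from the vision module")


def planner_tick(source, stop_dist=2.0):
    """One planning step; identical whether fed by simulation or hardware."""
    return "stop" if any(d < stop_dist for d in source.obstacle_distances()) else "go"


decision = planner_tick(SimulatedPerception([5.0, 3.5]))
```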


Figure 8: DragonFly Pod UI

We will release PI-SIM in the next few weeks for you to play with, so please stay tuned.



References


  1. CARLA, an open-source simulator for autonomous driving research, accessed Dec 15 2018, http://carla.org/

  2. PerceptIn DragonFly Pod, accessed Dec 15 2018, https://www.perceptin.io/products

  3. PerceptIn Simulation Demo, accessed Dec 15 2018, https://www.youtube.com/watch?v=7w9_KHO4SBE

  4. DragonFly Computer Vision Module, accessed Dec 15 2018, https://www.perceptin.io/products

  5. DragonFly GNSS Module, accessed Dec 15 2018, https://www.perceptin.io/products

  6. PerceptIn DragonFly Pod Navigation UI, accessed Dec 15 2018, https://www.youtube.com/watch?v=HJoitMxcUQo&feature=youtu.be


©COPYRIGHT 2018 PerceptIn  ALL RIGHTS RESERVED
