Our team, the Autonomous Ground Vehicle Research Group, was one of the top 13 teams selected for the final round of the Mahindra Rise Prize challenge out of more than 400 participating organisations. We were entrusted with converting the Mahindra e2o electric vehicle into a fully autonomous vehicle. The AGV lab is one of the few labs in the country equipped with state-of-the-art sensors for autonomous driving, such as the Velodyne 32E 3D LiDAR, the Bumblebee stereo camera, VectorNav inertial measurement units, and Hokuyo 2D LiDARs. The entire group worked on this project; below, I briefly describe the modules I worked on and give an overview of the work done on the other modules.
The first task was to make the e2o controllable with a joystick. The car came pre-installed with motors for the steering and for the accelerator and brake pedals, and it could be interfaced over the CAN protocol. CAN (Controller Area Network) allows microcontrollers and devices to communicate with one another without a host computer: the various electronic units exchange information over a common serial bus. Another benefit of CAN is that commands carry priorities, so the vehicle can handle emergency situations without much trouble. Using the ROS environment, we mapped the joystick buttons to different CAN commands such as velocity increment/decrement, steering-angle increment/decrement, headlights, and indicator lights. The video below shows the testing of the drive-by-wire control.
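As a rough illustration of the button-to-command translation, the sketch below maps one joystick sample to CAN frame payloads. The button indices, CAN IDs, and byte layout are hypothetical placeholders, not the actual e2o message format; in practice this logic would sit inside a ROS node subscribed to the joystick topic and publish frames through a CAN interface.

```python
# Hypothetical button indices on the joystick (illustrative only).
VEL_INC, VEL_DEC, HEADLIGHT = 0, 1, 2


def joy_to_can_frames(buttons, steer_axis):
    """Translate one joystick sample into a list of (can_id, data) frames.

    All CAN IDs and byte encodings below are made-up placeholders,
    not the real e2o protocol.
    """
    frames = []
    if buttons[VEL_INC]:
        frames.append((0x101, bytes([0x01])))   # velocity +1 step
    if buttons[VEL_DEC]:
        frames.append((0x101, bytes([0xFF])))   # velocity -1 step
    if buttons[HEADLIGHT]:
        frames.append((0x103, bytes([0x01])))   # toggle headlights
    # Map the steering axis in [-1, 1] to a signed one-byte step command.
    step = max(-127, min(127, int(round(steer_axis * 127))))
    frames.append((0x102, step.to_bytes(1, 'big', signed=True)))
    return frames
```

Because CAN arbitration gives lower IDs higher priority, an emergency-brake frame would typically be assigned a lower ID than routine commands so it always wins the bus.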
Sensor Setup and Localisation
The next step was to set up the sensors on the car and implement accurate localisation. The vehicle had inbuilt encoders, but they reported velocity with a least count of 1 km/h, which was not accurate enough. We therefore designed mounts and coupled two external rotary encoders, one to each rear wheel. An inertial measurement unit (IMU) was mounted inside the vehicle and a GPS receiver was attached, while the LiDAR mounts were prepared in parallel. After installing the sensors, their data had to be fused to produce accurate odometry: the wheel-encoder odometry was combined with the IMU and GPS data using an extended Kalman filter to give a filtered output. Finally, we performed a loop-closure test to check the accuracy of the localisation, and after tuning a few parameters we obtained an almost perfect loop closure.
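The raw wheel odometry comes from dead reckoning over the two rear-wheel encoders. A minimal sketch, assuming hypothetical wheel geometry and encoder resolution (not the e2o's actual specifications):

```python
import math


def update_pose(x, y, theta, ticks_l, ticks_r,
                ticks_per_rev=1024, wheel_radius=0.28, track_width=1.2):
    """Dead-reckoning pose update from two rear-wheel encoder readings.

    The geometry defaults are illustrative placeholders.
    Returns the new (x, y, theta) after one encoder sample.
    """
    circ = 2 * math.pi * wheel_radius
    d_l = ticks_l / ticks_per_rev * circ      # left wheel travel (m)
    d_r = ticks_r / ticks_per_rev * circ      # right wheel travel (m)
    d = (d_l + d_r) / 2.0                     # forward distance (m)
    dth = (d_r - d_l) / track_width           # heading change (rad)
    # Integrate along the mid-arc heading for better accuracy.
    x += d * math.cos(theta + dth / 2.0)
    y += d * math.sin(theta + dth / 2.0)
    return x, y, theta + dth
```

This raw estimate is what gets fused with the IMU and GPS readings in the extended Kalman filter; in ROS, a node such as the one provided by the robot_localization package is commonly used for that fusion step.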
Lateral and Longitudinal Controls
This was a crucial and time-consuming part. Various controllers were tested for longitudinal velocity control, such as a standard PID controller and an adaptive PID controller. The tests were run in a simulator before being tried on the car. Several methods were explored for the lateral control of the car, using both a kinematic model and a dynamic model of the vehicle. The full blog post on this project can be found here.
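For reference, the standard PID law we started from for velocity control can be sketched as below. The gains and output range are illustrative, not the values tuned on the car.

```python
class PID:
    """Textbook PID controller for longitudinal velocity tracking.

    Gains and output limits here are illustrative placeholders.
    """

    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured, dt):
        """One control step: returns a throttle/brake command."""
        err = setpoint - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Clamp to the actuator's command range.
        return max(self.out_min, min(self.out_max, u))
```

An adaptive PID extends this by adjusting the gains online based on the observed tracking error, which helps when vehicle dynamics change with speed and load.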
Path Planning and Trajectory Generation
Work on this module is currently in progress. The planning problem can be divided into two parts: a global plan and a local plan. The global plan is a fixed path from the start point to the goal and remains constant, whereas the local plan covers only a limited look-ahead distance and is continuously updated based on the vehicle's current configuration. We tested and tuned the Dijkstra-based global planner available in ROS together with the Timed Elastic Band (TEB) local planner, and used the ROS move_base package to execute the goals set by the planner.
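To illustrate the global-planning half, here is a minimal Dijkstra search on a 4-connected occupancy grid. It is a simplified stand-in for the ROS global planner, which additionally handles costmaps, map resolution, and inflation around obstacles.

```python
import heapq


def dijkstra_grid(grid, start, goal):
    """Shortest path on a 4-connected grid (0 = free, 1 = blocked).

    Returns the path as a list of (row, col) cells, or None if the
    goal is unreachable. Simplified stand-in for a ROS global planner.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue          # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

The local planner then refines a short window of this path into a dynamically feasible trajectory that respects the vehicle's kinematics and nearby obstacles.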
Work is ongoing on the remaining modules, such as vision, simultaneous localisation and mapping (SLAM), and planning. I am currently working on the advanced control and planning parts for the vehicle.