On the technologies involved in autonomous driving, from both the hardware and the software side

Autonomous driving is a large and complex project involving many technologies. This article discusses the technologies involved in autonomous vehicles from both the hardware and the software perspective.

First, the hardware

Let's look at a picture. The following picture contains essentially all the hardware needed for autonomous driving development.

Of course, not all of these sensors necessarily appear on the same car at the same time. Which sensors are present depends on what task the car needs to accomplish. If you only need autonomous driving on highways, like Tesla's AutoPilot function, you don't need lidar at all; if you need autonomous driving on urban roads, it is very difficult to rely on vision alone.

Therefore, autonomous driving system engineers should be task-oriented in hardware selection and cost control.

The car

Since we are doing autonomous driving, a car is of course essential. Experience suggests that, for development, you should avoid choosing a pure gasoline car.

On the one hand, the entire autonomous driving system consumes a large amount of electric power, and hybrid and pure electric vehicles have an obvious advantage here. On the other hand, the low-level control algorithms for an engine are much more complicated than those for an electric motor. Rather than spending a great deal of time on low-level calibration and debugging, it is better to choose an electric vehicle directly and study the higher-level algorithms.

Controller

In the early algorithm research stage, it is recommended to use an Industrial PC (IPC) as the most direct controller solution, because an IPC is more stable and reliable than an embedded device, and its community support and companion software are also richer.

When the algorithms are more mature, an embedded system can be used as the controller. For example, the zFAS jointly developed by Audi and TTTech has been applied in the latest Audi A8 production car.

CAN card

The interaction between the IPC and the car's chassis must go through a special language: CAN. To obtain information such as the current speed and steering wheel angle from the chassis, the data the chassis sends onto the CAN bus must be parsed; conversely, after the IPC computes the desired steering wheel angle and speed from the sensor information, the CAN card encodes these commands into signals the chassis can recognize, and the chassis responds accordingly.

The CAN card can be directly installed in the industrial computer and then connected to the CAN bus through an external interface.
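As a concrete illustration, here is a minimal sketch of both directions of that exchange using the python-can library. The message IDs, byte layouts, and scaling factors below are made-up placeholders; on a real vehicle they come from the manufacturer's CAN database (DBC file).

```python
# Minimal sketch using the python-can library (pip install python-can).
# The IDs and byte layouts are hypothetical placeholders, not real values.
import can

SPEED_MSG_ID = 0x158      # hypothetical: chassis speed feedback message
STEER_CMD_ID = 0x2E4      # hypothetical: steering command message

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Read one frame from the chassis and decode the vehicle speed.
msg = bus.recv(timeout=1.0)
if msg is not None and msg.arbitration_id == SPEED_MSG_ID:
    # Assume speed is a 16-bit big-endian value, 0.01 km/h per bit.
    speed_kph = int.from_bytes(msg.data[0:2], "big") * 0.01
    print(f"chassis speed: {speed_kph:.2f} km/h")

# Encode a desired steering wheel angle and send it back to the chassis.
angle_deg = 15.0
raw = int(angle_deg * 10) & 0xFFFF   # assume 0.1 deg/bit scaling
cmd = can.Message(arbitration_id=STEER_CMD_ID,
                  data=[raw >> 8, raw & 0xFF, 0, 0, 0, 0, 0, 0],
                  is_extended_id=False)
bus.send(cmd)
```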

Global Positioning System (GPS) + Inertial Measurement Unit (IMU)

When a human drives from point A to point B, they need to know the map from A to B and their current position on it, in order to know whether to turn right or go straight at the next intersection.

The same is true for a driverless system. Relying on GPS+IMU, it knows where it is (latitude and longitude) and which direction it is heading; the IMU can also provide richer information such as yaw rate and angular acceleration, all of which contributes to the positioning and decision control of the autonomous vehicle.
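To make the idea concrete, here is a toy dead-reckoning sketch: between GPS fixes, the IMU yaw rate is integrated to update the heading, and the speed is projected into x/y to update the position. The speed source, the 100 Hz rate, and the numbers are illustrative assumptions, not a production localization scheme.

```python
# Toy dead reckoning: integrate IMU yaw rate for heading, project speed
# into x/y for position. All values below are illustrative assumptions.
import math

def dead_reckon(x, y, heading, speed, yaw_rate, dt):
    """Propagate pose (x, y, heading) by one IMU step.

    speed:    vehicle speed in m/s (e.g. from wheel odometry or GPS)
    yaw_rate: from the IMU, in rad/s
    dt:       time step in seconds
    """
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Example: drive at 10 m/s while turning at 0.1 rad/s for one second.
x, y, heading = 0.0, 0.0, 0.0
for _ in range(100):                      # 100 steps at dt = 0.01 s
    x, y, heading = dead_reckon(x, y, heading, 10.0, 0.1, 0.01)
print(f"pose after 1 s: x={x:.2f} m, y={y:.2f} m, heading={heading:.3f} rad")
```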

Perception sensors

I believe everyone is familiar with car sensors. There are many types of perception sensors, including vision sensors, laser sensors (lidar), radar sensors, and so on.

The vision sensor is the camera, and cameras come in monocular and binocular (stereo) varieties. The best-known vision sensor provider is Israel's Mobileye, which was acquired by Intel this year.

Laser sensors range from single-line up to 64-line models. Roughly speaking, each additional line adds about 10,000 RMB to the cost, with correspondingly better detection performance. Well-known laser sensor providers include Velodyne and Quanergy in the United States, Ibeo in Germany, and others.

Radar sensors are the strength of Tier 1 suppliers, because radar sensors are already widely used in cars. Well-known suppliers are, of course, Bosch, Delphi, Denso, and so on.

Summary of hardware

Assembling an autonomous driving system that can perform a given function takes a great deal of experience and close familiarity with each sensor's performance limits and the controller's computing power. Excellent system engineers keep costs to a minimum while meeting the functional requirements, making the system more likely to reach mass production and real deployment.

Second, the software

The software consists of four layers: perception, fusion, decision, and control. Each layer requires code to be written to convert information.

Data collection

When sensors communicate with our PC or embedded module, different transmission methods are involved.

For example, when we collect image information from cameras, some communicate through a Gigabit Ethernet card and some directly through a video cable. Some millimeter-wave radars send information downstream through the CAN bus, so we must write code to parse the CAN messages.

Different transmission media require different protocols to parse this information. This is called the "driver layer." In plain terms, it takes all the information collected by the sensors and decodes it into data the team can use.
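As a sketch of what one driver-layer module might look like, the snippet below grabs camera frames with OpenCV and hands them to a placeholder downstream publisher. The device index and the publish function are assumptions; a real team would publish to its own middleware (for example, a ROS topic).

```python
# A minimal "driver layer" sketch: grab frames from a camera with OpenCV
# and hand them downstream in a common format (numpy arrays).
# The device index 0 and publish_to_downstream() are assumptions.
import cv2

def publish_to_downstream(frame):
    # Placeholder for the team's real transport (e.g. a ROS topic, a queue).
    print("frame:", frame.shape)

cap = cv2.VideoCapture(0)          # 0 = first camera on this machine
try:
    for _ in range(10):            # grab a few frames as a demo
        ok, frame = cap.read()     # frame is an HxWx3 BGR numpy array
        if not ok:
            break
        publish_to_downstream(frame)
finally:
    cap.release()
```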

Preprocessing

Once the sensor information is obtained, you will find that not all of it is useful. The sensor layer sends data downstream frame by frame at a fixed frequency, but the downstream cannot simply take each frame's data and use it directly for decision-making or fusion. Why?

Because a sensor's readings are not 100% reliable, deciding whether there is an obstacle ahead based only on the signal of a single frame (the sensor may have misdetected) is extremely irresponsible to downstream decision-making. Therefore, the upstream needs to preprocess the information to ensure that an obstacle in front of the vehicle is persistent in the time dimension, rather than flashing past.

Here an algorithm commonly used in the intelligent driving field comes in: the Kalman filter.
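As a hedged illustration, here is a minimal one-dimensional Kalman filter that smooths a noisy per-frame obstacle distance, so a single misdetection does not flip the downstream decision. The process and measurement noise values q and r are illustrative, not tuned.

```python
# Minimal 1-D Kalman filter: smooth a noisy per-frame obstacle distance.
# q (process noise) and r (measurement noise) are illustrative values.
def kalman_1d(measurements, q=0.01, r=0.5):
    x, p = measurements[0], 1.0       # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                        # predict: state assumed near-constant
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update with the new measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Noisy distance readings (meters); note how the spike at frame 3 is damped.
readings = [3.0, 3.1, 2.9, 5.0, 3.0, 3.1]
print([f"{e:.2f}" for e in kalman_1d(readings)])
```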

Coordinate transformation

Coordinate transformation is very important in the field of intelligent driving.

Sensors are installed in different places. For example, ultrasonic radars are arranged around the vehicle. When there is an obstacle on the right side of the vehicle, 3 meters away from the ultrasonic radar, can we say the obstacle is 3 meters away from the car?

Not necessarily! Because the decision control layer does vehicle motion planning in the vehicle body coordinate system (whose origin O is generally at the center of the rear axle), all sensor information must ultimately be transferred into the vehicle coordinate system.

Therefore, after the perception layer obtains the obstacle position of 3 meters, it must transform that position into the ego-vehicle coordinate system before it can be used for planning and decision-making.

Similarly, the camera is generally installed on the windshield behind the rearview mirror, and its data is based on the camera coordinate system; downstream, this data also needs to be converted into the vehicle coordinate system.

What is the vehicle coordinate system? Take out your right hand and read off X, Y, Z in the order thumb → index finger → middle finger. Your hand will then be held in the following shape:

Place the intersection of the three axes (the root of the index finger) at the center of the car's rear axle, with the Z axis pointing to the roof and the X axis pointing in the direction of vehicle travel.

The coordinate system directions each team defines may differ; as long as the development team is consistent internally, it is fine.
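As an illustration, the sketch below transforms an obstacle position from a sensor frame into the vehicle (rear-axle) frame with a 2-D rotation and translation. The sensor mounting pose is a made-up example.

```python
# Transform a 2-D point from a sensor frame into the vehicle frame.
# The mounting pose used in the example is a made-up assumption.
import numpy as np

def sensor_to_vehicle(point_sensor, mount_xy, mount_yaw):
    """Rotate + translate a 2-D point from sensor to vehicle coordinates.

    mount_xy:  sensor position in the vehicle frame (meters)
    mount_yaw: sensor heading relative to the vehicle X axis (radians)
    """
    c, s = np.cos(mount_yaw), np.sin(mount_yaw)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ np.asarray(point_sensor) + np.asarray(mount_xy)

# An ultrasonic sensor on the right side of the car, pointing right
# (yaw = -90 degrees), mounted 1.0 m ahead of the rear axle.
obstacle_vehicle = sensor_to_vehicle(point_sensor=[3.0, 0.0],
                                     mount_xy=[1.0, -0.8],
                                     mount_yaw=-np.pi / 2)
print(obstacle_vehicle)   # the "3 m" reading lands at x=1.0, y=-3.8
```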

Information fusion

Information fusion refers to merging multiple pieces of information about the same attribute.

For example, the camera detects an obstacle in front of the vehicle, the millimeter-wave radar also detects it, and the lidar detects it as well. In reality there is only one obstacle ahead, so what we have to do is fuse the multi-sensor information about this vehicle and tell the downstream that there is one car ahead, not three.
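One simple way to picture the association step is a greedy clustering of detections: reports that fall within a small distance gate of each other are merged into one fused obstacle. This is only a sketch; the 1-meter gate and the running-average merge are illustrative choices, and real systems use more sophisticated association and tracking.

```python
# Obstacle-level fusion sketch: detections of the same physical object
# from different sensors are clustered by distance, so downstream sees
# one obstacle instead of three. The 1 m gate is an illustrative choice.
import math

def fuse_detections(detections, gate=1.0):
    """Greedy clustering: merge detections closer than `gate` meters."""
    fused = []
    for x, y in detections:
        for cluster in fused:
            cx, cy = cluster["x"], cluster["y"]
            if math.hypot(x - cx, y - cy) < gate:
                n = cluster["n"]
                # running average of the cluster position
                cluster["x"] = (cx * n + x) / (n + 1)
                cluster["y"] = (cy * n + y) / (n + 1)
                cluster["n"] = n + 1
                break
        else:
            fused.append({"x": x, "y": y, "n": 1})
    return fused

# Camera, radar, and lidar each report the car ahead at about 20 m.
dets = [(20.1, 0.0), (19.9, 0.1), (20.0, -0.1)]
print(fuse_detections(dets))   # -> one fused obstacle, not three
```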

Decision making

The main work at this level is how to plan correctly after getting the data. Planning includes longitudinal control and lateral control. Longitudinal control is speed control: deciding when to accelerate and when to brake. Lateral control is behavior control: deciding when to change lanes, when to overtake, and so on.
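As a minimal illustration of longitudinal control, the sketch below uses a proportional controller to track a target speed, clamped to acceleration and braking limits. The gain and limits are made-up values; production systems typically use PID or model-predictive control plus vehicle-specific calibration.

```python
# Longitudinal control sketch: proportional speed tracking with clamps.
# Gain and limits are illustrative assumptions, not calibrated values.
def speed_controller(target_speed, current_speed, kp=0.5,
                     max_accel=2.0, max_brake=-4.0):
    """Return a commanded acceleration (m/s^2) from the speed error."""
    accel = kp * (target_speed - current_speed)
    return max(max_brake, min(max_accel, accel))

# Approaching a slower lead vehicle: command braking, then settle.
v = 20.0                       # current speed, m/s
for step in range(5):
    a = speed_controller(target_speed=15.0, current_speed=v)
    v += a * 0.5               # integrate over a 0.5 s control period
    print(f"step {step}: accel={a:+.2f} m/s^2, speed={v:.2f} m/s")
```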

What does the software look like?

Some of the software modules in an autonomous driving system look similar to the following.

Each module's name reflects its actual role:

App_driver_camera: camera driver
App_driver_hdmap: high-precision map driver
App_driver_ins: INS driver
App_driver_lidar: laser sensor driver
App_driver_mwr: millimeter-wave radar driver
App_fusion_freespace: drivable free-space fusion
App_fusion_lane: lane line fusion
App_fusion_obstacle: obstacle fusion
App_planning&decision: planning and decision

In addition, engineers write other software to support their own debugging work, such as tools for recording and playing back data.

There is also a visualization program for sensor information display, similar to the effect shown below.
