From ADAS to full Level 5 autonomy, autonomous driving builds on artificial intelligence, machine learning, and sensors, with the computing unit providing the main control for the self-driving vehicle: converting throttle input into torque requests, monitoring safety systems, running control loops, and limiting power. Autonomous Vehicles (AVs) could therefore help make future mobility more efficient, safer, and cleaner.
- How an Autonomous Vehicle Works
Sensors are the key components that make a vehicle driverless. Cameras, radar, ultrasonic sensors, and LiDAR enable an autonomous vehicle to visualize its surroundings and detect objects. Cars today are fitted with a growing number of environmental sensors that perform a multitude of tasks. The sensor-integrated control system of an AV encompasses three parts: perception, decision, and execution.
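The three parts form a repeating pipeline: sensor data flows into perception, perception results feed decision-making, and decisions drive the actuators. The following minimal Python sketch illustrates that flow; every name, threshold, and data field here is hypothetical, chosen only to make the structure concrete.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    # Hypothetical, simplified schema: each field holds one sensor's latest reading.
    camera_objects: list        # e.g. labels from a vision detector
    radar_ranges_m: list        # distances to radar returns, in metres
    ultrasonic_ranges_m: list   # near-field distances, in metres

def perceive(frame):
    """Perception: merge all range sources into one nearest-obstacle estimate."""
    ranges = frame.radar_ranges_m + frame.ultrasonic_ranges_m
    return {"objects": frame.camera_objects,
            "nearest_m": min(ranges) if ranges else float("inf")}

def decide(world):
    """Decision: pick a target speed from the perceived world (toy rule)."""
    if world["nearest_m"] < 5.0:
        return {"target_speed_kph": 0.0}   # obstacle close: stop
    return {"target_speed_kph": 25.0}      # otherwise: cruise

def execute(plan):
    """Execution: translate the plan into actuator commands."""
    stopping = plan["target_speed_kph"] == 0.0
    return {"throttle": 0.0 if stopping else 0.3,
            "brake": 1.0 if stopping else 0.0}

# One tick of the control loop: a pedestrian 3.2 m ahead triggers full braking.
frame = SensorFrame(camera_objects=["pedestrian"],
                    radar_ranges_m=[3.2], ultrasonic_ranges_m=[4.1])
command = execute(decide(perceive(frame)))
```

A real stack replaces each toy function with heavy subsystems (neural detectors, planners, CAN actuation), but the layered data flow remains the same.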
01. PERCEPTION LAYER
Perception enables the vehicle not only to detect objects, but also to acquire, classify, and eventually track the objects surrounding it. The perception task includes three parts: localization, detection, and tracking, all achieved through data fusion performed at different levels. For instance, localization is usually performed by algorithms that fuse data from GPS, IMU, and LiDAR into a high-resolution map. Vision-based deep-learning technologies are achieving accurate results for object detection, as they can autonomously handle huge amounts of data.
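The fusion idea behind localization can be sketched very simply: an IMU gives smooth short-term motion estimates that drift, while GPS gives absolute but noisy fixes, so the two are blended. The one-liner below is an illustrative complementary-filter-style blend, not any vendor's actual algorithm; the function name, weight, and coordinates are all assumptions (production systems use Kalman filters over GPS, IMU, and LiDAR scan matching).

```python
def fuse_position(gps_xy, imu_xy, gps_weight=0.2):
    """Blend an absolute GPS fix with an IMU dead-reckoned position.

    A low gps_weight trusts the smooth IMU estimate in the short term
    while still pulling the result toward GPS to correct drift.
    (Illustrative weighted average, not a full Kalman filter.)
    """
    return tuple(gps_weight * g + (1.0 - gps_weight) * i
                 for g, i in zip(gps_xy, imu_xy))

# GPS says (10.0, 20.0); dead reckoning has drifted to (10.5, 20.5).
est = fuse_position(gps_xy=(10.0, 20.0), imu_xy=(10.5, 20.5))
```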
02. DECISION LAYER
Decision-making is one of the most challenging tasks an AV must perform. It encompasses prediction, path planning, and obstacle avoidance, all carried out on the basis of the preceding perception results. As the most important part of the integrated sensor system, decision-making needs at least two computers to complete its critical missions: one (the Image Processing Computer) processes the huge amount of data delivered by the sensors and transfers the classified data to the other (the Driving Computer). With the analyzed data, the Driving Computer can command devices such as the accelerator and brakes to adjust the vehicle's speed.
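The two-computer split is a producer-consumer pipeline: the Image Processing Computer publishes classified results, and the Driving Computer consumes them and issues commands. The sketch below models the two computers as two threads joined by a queue; the frame fields, labels, and 5-metre threshold are invented for illustration only.

```python
import queue
import threading

classified = queue.Queue()  # the link between the two "computers" (here: two threads)

def image_processing_computer(raw_frames):
    """Producer: classify raw sensor data and forward results to the driving side."""
    for frame in raw_frames:
        classified.put("obstacle" if frame["range_m"] < 5.0 else "clear")
    classified.put(None)  # end-of-stream marker

def driving_computer(commands_out):
    """Consumer: turn each classified label into a speed command."""
    while (label := classified.get()) is not None:
        commands_out.append("brake" if label == "obstacle" else "cruise")

frames = [{"range_m": 3.0}, {"range_m": 12.0}]
commands = []
producer = threading.Thread(target=image_processing_computer, args=(frames,))
consumer = threading.Thread(target=driving_computer, args=(commands,))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

Decoupling the two stages this way lets the heavy image-processing side run at its own rate without stalling the latency-sensitive driving side.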
03. EXECUTION LAYER
The execution layer consists of the interconnections between the accelerator, brakes, gearbox, and so forth. Driven by a Real-Time Operating System (RTOS), all these devices carry out the commands issued by the Driving Computer, achieving what the AI set out to do. The AV is thus able to execute all the necessary mechanical actions known as the “four pillars” of vehicle operation: Steering, Shifting, Acceleration, and Braking.
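A command covering the four pillars can be pictured as a small record that the RTOS side validates before it ever reaches the actuators. The sketch below is a hypothetical example of such a record and a range-clamping safety check; the field names and limits (for instance the ±35° steering bound) are assumptions, and in a real vehicle these values would travel as CAN bus messages at a fixed rate.

```python
from dataclasses import dataclass

@dataclass
class ActuatorCommand:
    """One command covering the 'four pillars': steering, shifting, acceleration, braking."""
    steering_deg: float   # road-wheel angle, positive = right (hypothetical convention)
    gear: str             # "P", "R", "N", or "D"
    throttle: float       # 0.0 to 1.0
    brake: float          # 0.0 to 1.0

def clamp_command(cmd):
    """Bound every channel before it reaches hardware: a typical execution-side safety check."""
    return ActuatorCommand(
        steering_deg=max(-35.0, min(35.0, cmd.steering_deg)),
        gear=cmd.gear if cmd.gear in ("P", "R", "N", "D") else "N",
        throttle=max(0.0, min(1.0, cmd.throttle)),
        brake=max(0.0, min(1.0, cmd.brake)),
    )

# An out-of-range request from upstream is clipped to safe limits.
safe = clamp_command(ActuatorCommand(steering_deg=50.0, gear="D",
                                     throttle=1.4, brake=-0.1))
```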
- Required High Performance Computing Power
7StarLake GPGPU Series
An automated-driving control unit is the core controller of an autonomous vehicle. 7StarLake designed a high-performance GPGPU computer for EASYMILE's EZ10, one of the most advanced driverless shuttles. Launched in 2015, the EZ10 has operated in over 26 countries and at up to 200 sites across Asia, the Middle East, North America, and Europe. The EZ10 has no steering wheel, gas pedal, or brake pedal: it is 100% driverless. Compared with conventional cars, hardware accelerators such as GPUs, CPUs, and FPGAs are extremely important to autonomous vehicles for handling computation-intensive tasks.
In response to the exponential growth in the use of autonomous vehicles across the globe, 7StarLake continuously develops products suited to self-driving cars. 7StarLake's GPGPU AI Fusion computers provide a complete platform for image processing and driving, with remarkable durability under unpredictable conditions and adaptability to multiple uses. They can process data from varied vision sensors synchronously and offer a high-performance solution for automated driving that supports all relevant sensor interfaces, buses, and networks.
Depending on the environmental conditions and application, an AV requires a different equipment composition and system organization. In recent innovation and field testing, AVs have commonly been used in three main fields: load lifters, shuttle buses, and the Battle MUTT. To learn more about these operations, please check out the highlighted solutions below.