On April 15, 2025, Tesla CEO Elon Musk announced news on the social media platform X: Tesla plans to launch a "universal full self-driving (FSD) solution" built on pure artificial intelligence. The solution dispenses with complex sensor systems such as lidar, relying only on the vehicle's cameras working together with Tesla's self-developed AI chip. The move marks another important step for Tesla in the field of intelligent driving.
In the fierce competition over autonomous driving technology, the contest between Tesla's vision-only approach and lidar-based solutions has become the focus of industry attention. Tesla sticks to the pure-vision route, relying on cameras and its self-developed chips (including the Dojo training system) to achieve autonomous driving, while the lidar camp improves perception capabilities through multi-sensor fusion. This showdown bears not only on the future of autonomous driving but also has a profound impact on the entire semiconductor industry.
Pictured: Musk announces a purely visual self-driving solution that needs only cameras and Tesla's AI chip
Ⅰ The technical logic and advantages of the purely visual route
Tesla's vision-only route rests on a simple logic: humans drive with two eyes, so autonomous driving should likewise be achievable with cameras. The hardware configuration centers on eight cameras surrounding the body, providing a 360-degree field of view; earlier Tesla hardware also included millimeter-wave radar and ultrasonic sensors, but the company has since removed them in favor of the camera-only "Tesla Vision" approach.
The advantages of this approach are low cost and high data consistency. Cameras are cheap, giving an obvious cost advantage over lidar units that can easily cost tens of thousands of dollars. In addition, the pure-vision solution has a single data source, so historical data is not invalidated by the addition of new sensors; this provides strong data support for Tesla's "shadow mode". With data collected from millions of vehicles every day, Tesla's neural networks are continuously optimized, forming a self-evolving closed loop.
However, purely visual solutions are not perfect. A camera's two-dimensional imaging degrades in complex weather (such as heavy rain or fog) or in low light, reducing perception performance. In addition, vision-only solutions rely heavily on algorithms and AI models, requiring training data that covers virtually all possible driving scenarios. This dependency makes Tesla's performance in extreme, long-tail scenarios uncertain.
Ⅱ Technical characteristics and industry support of LiDAR
LiDAR emits laser pulses and measures the reflection time to generate high-precision 3D point-cloud maps, providing accurate environmental perception. Compared with cameras, lidar is more robust in complex weather and lighting conditions, and its detection range typically exceeds 100 meters.
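The time-of-flight principle above can be sketched in a few lines: a pulse's round-trip time yields range, and each (range, azimuth, elevation) return becomes one point in the 3D cloud. This is a minimal illustration of the physics, not any particular sensor's firmware; the function names and sample numbers are assumptions.

```python
# Minimal sketch of lidar time-of-flight ranging; illustrative values only.
import math

C = 299_792_458.0  # speed of light in m/s


def tof_to_range(round_trip_s: float) -> float:
    """Range to target: the pulse travels out and back, so halve the path."""
    return C * round_trip_s / 2.0


def to_cartesian(r: float, azimuth_deg: float, elevation_deg: float):
    """Convert one (range, azimuth, elevation) return to an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)


# A pulse returning after ~667 nanoseconds corresponds to a target ~100 m away,
# which matches the detection ranges quoted for automotive lidar.
r = tof_to_range(667e-9)
point = to_cartesian(r, azimuth_deg=30.0, elevation_deg=0.0)
```

Sweeping such returns across azimuth and elevation is what builds the dense point-cloud map that multi-sensor fusion stacks consume.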
At present, the lidar route enjoys wide support among Chinese automakers. Huawei, Xpeng, Li Auto and others have adopted multi-sensor fusion solutions, improving perception through combinations of cameras, lidar, and millimeter-wave radar. For example, Huawei's ADS 3.0 introduces the GOD (General Obstacle Detection) network, a leap from simple obstacle recognition to deeper scene understanding. Xpeng combines lidar with vision algorithms to improve adaptability in urban road scenarios.
The high cost of lidar was once its main bottleneck, but in recent years, with technical progress and large-scale mass production, prices have dropped significantly. Flash lidar, for example, is highly chip-integrated, and its cost is expected to fall further as production volumes grow.
Figure: Fully autonomous driving system solutions are evolving rapidly
Ⅲ The future direction of the technology route and the impact of the industry
From the perspective of industry trends, lidar and pure-vision solutions each have advantages and disadvantages and may well coexist in the market. LiDAR is hard to replace in high-end models and safety-critical scenarios, while pure-vision solutions are more attractive in lower-priced models thanks to their cost advantage.
Although Tesla's pure-vision route has advantages in cost and data accumulation, its perception in complex scenarios still needs improvement. In contrast, lidar's robustness and high-precision perception make it highly competitive in autonomous driving. As lidar costs decline further, the contest between the two technical routes will only intensify.
Ⅳ Conclusion
Tesla's odds of displacing lidar with its self-developed chips and vision algorithms depend on whether it can achieve breakthroughs in algorithm optimization and hardware performance. Meanwhile, the lidar camp continues to improve its perception capabilities through multi-sensor fusion and is gradually narrowing the gap with Tesla. This technology showdown is not only a core issue in autonomous driving but will also profoundly shape the future direction of the entire semiconductor industry.