As AI technology advances, demand for computing power keeps growing, driving up the cost of AI computing. Raghib Hussain, CTO of Marvell, noted that the cost of AI computing is prohibitive even for most top chipmakers, making it a significant challenge for them to keep pace with AI technology.
The high cost of AI computing stems mainly from the demand for highly reliable, high-performance, and highly secure computing power. As AI algorithms develop rapidly, model training requires ever larger amounts of compute, and growing data volumes likewise require computing power to evolve. Computing power has therefore become a key factor in AI breakthroughs.
Pictured: Noam Mizrahi, executive vice president and chief technology officer, Marvell (Source: Nikkei).
In addition, the AI computing power market is growing rapidly. Data, computing power, and algorithms are the three pillars of artificial intelligence development, and both data and algorithms depend on the support of computing power. A global "arms race" in computing power is underway, and segments such as data centers, AI chips, and servers are expected to be highly valued as computing infrastructure.
However, the high cost of computing also puts enormous pressure on AI companies. OpenAI, for example, expects to lose $5 billion this year, with computing costs as high as $7 billion, accounting for more than 80% of total operating costs. This means that even with an advanced model and a large user base, high computing costs make it difficult to turn a profit.
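A quick sanity check of the arithmetic behind these figures (using only the numbers cited above, which are the article's and are not independently verified): if compute spending is $7 billion and that is at least 80% of operating costs, total operating costs can be at most $8.75 billion.

```python
# Back-of-the-envelope check of the article's reported figures.
compute_cost = 7.0   # billions of USD, reported compute spend
compute_share = 0.8  # "more than eighty percent" of operating costs

# If compute is at least 80% of operating costs, total operating
# costs are at most compute_cost / compute_share.
max_total_opex = compute_cost / compute_share  # 8.75 (billions)

print(f"Implied total operating costs: at most ${max_total_opex:.2f}B")
```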
Facing these challenges, tech giants such as Meta, Google, Microsoft, and Amazon are investing heavily in expanding and improving AI infrastructure to meet market demand for cloud computing. The high cost of AI computing is driven mainly by data scale and processing, model complexity, training time and iteration, hardware requirements, storage and bandwidth, and energy consumption. To reduce these costs, companies can optimize models, use distributed training, exploit the elasticity of cloud computing, employ efficient data-processing algorithms, and prioritize hardware energy efficiency. These strategies and technical approaches can reduce compute requirements, speed up training, avoid unnecessary waste, and promote the broad application and development of AI technology.
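As a toy illustration of how two of the measures above compound, the sketch below models training cost as GPU-hours times hourly price. All numbers are hypothetical, not real cloud prices or benchmark results; `training_cost` is an illustrative helper, not any vendor's API.

```python
# Toy cost model: training cost = GPU-hours consumed x hourly price.
# All figures below are hypothetical illustrations.

def training_cost(gpu_hours: float, price_per_gpu_hour: float) -> float:
    """Total training cost in dollars for a given amount of compute."""
    return gpu_hours * price_per_gpu_hour

# Baseline run: 100,000 GPU-hours at a hypothetical $2.00/hour.
baseline = training_cost(gpu_hours=100_000, price_per_gpu_hour=2.0)

# Model optimization (e.g. mixed precision) that halves GPU-hours:
optimized = training_cost(gpu_hours=50_000, price_per_gpu_hour=2.0)

# Adding elastic/spot capacity at a hypothetical 60% price discount:
elastic = training_cost(gpu_hours=50_000, price_per_gpu_hour=0.8)

print(baseline, optimized, elastic)  # 200000.0 100000.0 40000.0
```

Under these assumed numbers, combining the two measures cuts cost to a fifth of the baseline, which is why articles like this one list several levers rather than a single fix.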
Still, the high cost of AI computing is a problem that needs to be addressed. Nvidia's CEO, Jensen Huang, has said that the future of AI lies in services that can "reason," but reaching that stage requires reducing the cost of computing. Nvidia plans to lay the foundation for these advancements by increasing chip performance two to three times per year at the same energy and cost.
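If that stated goal holds, the cost of a fixed amount of compute shrinks geometrically over time. The sketch below projects this as a simple compound-improvement calculation; the 2.5x midpoint and the three-year horizon are illustrative assumptions, not Nvidia figures.

```python
# Projecting "2-3x more performance per year at the same energy and
# cost": a fixed workload's cost falls geometrically. Illustration only.

def relative_compute_cost(years: int, annual_gain: float) -> float:
    """Cost of a fixed workload after `years`, relative to today."""
    return 1.0 / (annual_gain ** years)

# At the 2.5x midpoint, the same workload after 3 years would cost
# 1 / 2.5**3 = 1 / 15.625 = 6.4% of what it costs today.
print(round(relative_compute_cost(3, 2.5), 4))  # 0.064
```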
Overall, the high cost of AI computing poses a challenge for both chipmakers and AI companies. However, as the technology continues to advance and the market matures, more solutions may emerge to reduce these costs, making AI technology more accessible and affordable.