
Intellifusion Releases First Domestic 14nm Chiplet Large Model Inference Chip

At the opening ceremony of the Hi-Tech Fair on November 15, Intellifusion released its new-generation AI chip, the DeepEdge10. The product is reported to be the first domestic 14nm Chiplet large model inference chip, built on an independently controllable domestic process, integrating a domestic RISC-V core, and supporting the deployment of large model inference. The X5000 inference card, built on the self-developed DeepEdge10 chip and its innovative D2D chiplet architecture, has already been adapted to run the SAM computer-vision large model, Llama2, and other tens-of-billions-parameter models, and can be widely used in AIoT edge video, mobile robots, and other scenarios.

Figure 1: DeepEdge10, a 14nm Chiplet large model inference chip from Intellifusion (image from Intellifusion)

 

AI Inference Chips: The Key Carrier for Deploying Applications in the Era of Large Models


The edge computing market is growing rapidly. IDC predicts that the global edge computing market will reach $200 billion by the end of 2023 and exceed $300 billion by 2026.


Edge computing scenarios are characterized by fragmented compute, long-tail algorithms, non-standard products, and fragmented deployment scale, and traditional algorithm development and chips struggle to meet the productization needs of this new generation of AI edge computing scenarios. Large models offer the industry a solution at the algorithm level. For large models to play a real role in edge computing scenarios, however, they need the support of AI large model inference chips.


For AI chips, large models bring new demands on computing generality and performance. A chip needs more compute, more memory bandwidth, and more memory capacity to run models with huge parameter counts at the edge. At the same time, AI edge inference chips take on the "last kilometer" of application deployment, which means they must not only support large models and other AI computing tasks but also provide strong general-purpose computing power.
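As a rough illustration of why memory capacity matters at the edge, the sketch below estimates the weight footprint of a tens-of-billions-parameter model at several numeric precisions. The 13-billion-parameter count (roughly Llama2-13B scale) and the precision choices are illustrative assumptions, not DeepEdge10 or X5000 specifications.

```python
# Back-of-envelope estimate of model weight memory at different precisions.
# The 13e9 parameter count is an illustrative assumption (roughly Llama2-13B scale),
# not a DeepEdge10 or X5000 specification.
PARAMS = 13e9

BYTES_PER_PARAM = {
    "FP16": 2.0,   # half precision
    "INT8": 1.0,   # 8-bit quantization
    "INT4": 0.5,   # 4-bit quantization
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision}: ~{gib:.1f} GiB of weights")

# FP16 already needs roughly 24 GiB for weights alone (before activations and
# KV cache), which is why edge inference leans heavily on quantization,
# memory bandwidth, and memory capacity.
```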


In response to these scenarios, Intellifusion created the new-generation edge computing chip DeepEdge10, which integrates a main-control SoC, adopts D2D Chiplet technology and a C2C Mesh expansion architecture for flexible scaling of compute, and includes Intellifusion's latest fourth-generation neural network processor.


Based on this series of innovative technologies, Intellifusion has built a family of chips. To date it has developed the Edge10C, Edge10 Standard Edition, and Edge10Max; shipment forms include chips, boards, boxes, acceleration cards, and inference servers, which can be widely used in AIoT edge video, mobile robots, and other scenarios. The X5000 inference card, built on the DeepEdge10's innovative D2D chiplet architecture, has been adapted to run the SAM computer-vision large model, Llama2, and other tens-of-billions-parameter models.
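For context on the class of workload such a card targets, the sketch below shows generic Llama2 inference using the open-source Hugging Face Transformers API on an ordinary GPU/CPU host. The article does not describe the X5000's own SDK or toolchain, so this is only an illustration of the workload; the model ID and prompt are placeholders.

```python
# Generic Llama2 inference with Hugging Face Transformers, shown only to
# illustrate a tens-of-billions-parameter inference workload.
# This is NOT the X5000 toolchain; the article does not document that SDK.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-13b-chat-hf"  # placeholder model ID (gated on the Hub)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to cut the weight footprint
    device_map="auto",          # spread layers across available GPU/CPU memory
)

inputs = tokenizer("Edge inference matters because", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```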

Intellifusion has already licensed its neural network processor IP to leading domestic AIoT chip design companies, smart vehicle chip design companies, service robot manufacturers, and national key laboratories.

Intellifusion's Core Strength in Building AI Chips: Algorithm Siliconization

Intellifusion is a domestic AI company that has pursued chip R&D since its founding in 2014, something still rare in China. To date, Intellifusion has completed three generations of instruction set architecture and four generations of neural network processor architecture, each of which has been commercialized. More importantly, through years of investment the company has built a core chip team with an average of more than 14 years of design experience. The company's chips have been supported by artificial intelligence special projects from the Ministry of Industry and Information Technology, the National Development and Reform Commission, and the Ministry of Science and Technology, and have won the Wu Wenjun Artificial Intelligence Science and Technology Award three times.

Intellifusion continues to build up its core capability of "algorithm siliconization". Algorithm siliconization is not simply "algorithm + chip"; it is an AI chip design process in which the designer's concepts and ideas are integrated with the algorithm, based on an understanding of the application scenario and a quantitative analysis of the algorithm's key computational tasks in that scenario. Only in this way can an AI chip play its full role in practical applications.


Intellifusion's self-developed chips are also an important engine of the company's Self-Evolving City Intelligence strategy, which it officially released at the Hi-Tech Fair in 2020. The core logic driving self-evolving city intelligence is a data flywheel: applications produce data, data trains algorithms, algorithms define chips, and chips empower applications at scale. The chip is the key carrier that determines the breadth and depth of AI applications, and an important source of compute for building self-evolving city intelligence.


Going forward, Intellifusion will continue to increase its investment in independent R&D and, on the foundation of independent, controllable, self-developed "cores", provide a powerful engine for the development of self-evolving city intelligence.
