"The computing cost required to operate ChatGPT is staggering" said OpenAI CEO Sam Altman. The 'ChatGPT' service provided by OpenAI processes users' questions based on approximately tens of thousands of GPUs (Graphics Processing Units). This GPU-based AI computing system is easy to design for specific purposes and has the advantage of low implementation costs. However, this approach faces challenges of high power consumption and high costs due to processing massive computations on GPUs. Particularly, with AI and machine learning characteristics rapidly increasing computational loads, alternatives are needed.
[Image] GPU-based AI computing system
The NPU (Neural Processing Unit) is gaining attention as an alternative capable of handling such AI computations. An NPU is a processor built for AI and machine learning computation, able to process large volumes of data through artificial neural network structures. Compared with conventional GPU-based computation, its distributed, parallel processing structure is more refined, so it consumes less power and addresses the high power draw of GPU-based systems. The semiconductors used in NPUs are also distinctive: AI semiconductors for NPUs are designed to execute the data operations at the core of AI workloads, such as training and inference, with low power and high processing speed to maximize efficiency.
[Image] Differences in CPU, GPU, and NPU structures
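To make that workload concrete, the sketch below (illustrative only, written with NumPy rather than any particular NPU's SDK; the layer size and quantization scheme are assumptions) shows the kind of computation NPUs are built to accelerate: large numbers of multiply-accumulate operations on low-precision integer tensors, which is what lets dedicated hardware reach far better performance per watt than general-purpose processors.

```python
# Illustrative sketch of the multiply-accumulate (MAC) workload that NPUs accelerate.
# Runs with NumPy on a CPU; a real NPU would execute the int8 path on dedicated MAC arrays.
# The layer size and symmetric quantization scheme here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# One fully connected neural-network layer: y = W @ x
in_features, out_features = 1024, 1024
W = rng.standard_normal((out_features, in_features)).astype(np.float32)
x = rng.standard_normal(in_features).astype(np.float32)

# Reference result in 32-bit floating point (a typical GPU inference path).
y_fp32 = W @ x

def quantize(t: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize a tensor to int8 with a symmetric per-tensor scale."""
    scale = np.abs(t).max() / 127.0
    return np.round(t / scale).astype(np.int8), scale

W_q, w_scale = quantize(W)
x_q, x_scale = quantize(x)

# int8 x int8 multiplies accumulated in int32 -- the core operation NPUs optimize.
acc = W_q.astype(np.int32) @ x_q.astype(np.int32)
y_int8 = acc.astype(np.float32) * (w_scale * x_scale)  # dequantize back to float

# The low-precision result tracks the fp32 result closely, while each multiply
# moves and processes 4x fewer bits -- the source of the power savings.
rel_err = np.linalg.norm(y_int8 - y_fp32) / np.linalg.norm(y_fp32)
print(f"MACs per output element: {in_features}, relative error: {rel_err:.4f}")
```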
AI semiconductors applied to NPUs are classified into three stages of technology: ① improved versions of existing semiconductors, ② 1st-generation AI semiconductors, and ③ 2nd-generation AI semiconductors. ① Improved versions of existing semiconductors can be programmed for various purposes using CPUs, GPUs, and Field-Programmable Gate Arrays (FPGAs); however, their computational performance and power efficiency are relatively low for NPU workloads. ② 1st-generation AI semiconductors are chips developed to address these issues, customized to the target AI application to accelerate its computations. Although these 1st-generation chips achieve high computational efficiency and structures optimized for low power compared with the previous stage, they are expensive custom chips and are difficult to use for general purposes. ③ 2nd-generation AI semiconductors are drawing attention as general-purpose AI chips based on a non-von Neumann approach. These chips are easy to mass-produce, achieving low power and low cost while offering computational capability comparable to the previous generation.
[Image] Classification of semiconductors by generation
Mobilint Inc. (CEO Shin Dong-joo) succeeded in attracting investment (Series B, 20 billion won) with this AI semiconductor technology. Mobilint's AI semiconductor ARIES is a 1st-generation AI semiconductor, offering roughly four times the computational performance of competing products while consuming less than one-fifth of the energy. The company is running demonstration projects with domestic and international companies and has applied and verified its products in fields such as smart factories, smart cities, and robotics.
Furthermore, Mobilint is developing its next-generation chip, REGULUS, a 2nd-generation AI semiconductor that delivers high-performance AI functions at under 5 W of power consumption and can be applied not only to NPUs but also to robotics, drones, on-device AI, and other fields. Mobilint's next-generation AI semiconductor technology is protected by domestic and international patent portfolios, giving it a technology portfolio differentiated from competitors. Mobilint plans to use the new investment for mass production of its existing AI semiconductor ARIES and for development of the next-generation chip REGULUS.
[Image] Mobilint's AI semiconductor ARIES
In addition, domestic companies are actively entering the market. Among them, DeepX Inc. (CEO Kim Nok-won) is leading AI semiconductor production as a fabless AI semiconductor chip maker. DeepX has developed the DX-M1, a vision AI semiconductor for image and video recognition, and is verifying it with more than 40 customers. The DX-M1 is DeepX's flagship semiconductor for AI image analysis; thanks to optimized computational processing, it consumes up to 20 times less power than competing AI semiconductors, and it can also reduce manufacturing costs by cutting the amount of SRAM required. Based on this technology, DeepX won three 'CES Innovation Awards' in 2023, in 'Embedded Technology', 'Computer Hardware', and 'Robotics', becoming the first AI semiconductor company to win in three categories.
Another prominent AI semiconductor company is Sapeon Inc. (CEO Ryu Su-jeong). Sapeon's 'X330' is an AI semiconductor for NPUs, with four times the computational processing speed and twice the power efficiency of existing AI semiconductors. Based on this technology, Sapeon is collaborating with SK Telecom and SK Broadband to build NPU farms, and it plans to apply the chip to the high-performance NPUs required for autonomous driving, CCTV, and other fields.
[Image] DeepX's AI semiconductor 'DX-M1' and Sapeon's AI semiconductor 'X330'
Government support for NPU and AI semiconductor development is active. The Ministry of Science and ICT announced the 'K Cloud Initiative utilizing domestic AI semiconductors'. The K Cloud policy aims to develop domestically produced AI semiconductors with world-class ultra-high-speed and low-power capabilities and apply them to data centers to enhance competitiveness. To achieve this, the government plans to invest a total of 826.2 billion won by 2030. This will be carried out in three stages.
[Image] The Ministry of Science and ICT's 'K Cloud' policy for domestic NPU and AI development
The AI computing market changes vigorously every year. Even OpenAI, once the clear winner in the market, faces intensified competition from global tech giants such as Google, Meta, and Apple. Companies that adapt well to these changes are expected to emerge as the new winners.
As of 2024, BLT Law Firm has been a partner chosen by more than 2,000 innovative startups, supporting IP acquisition and strategy formulation, as well as investment attraction, technology special listing, and other business support utilizing IP to drive corporate growth and success.
'BLT insight' introduces a recently invested technology field every week.
If you have any questions about the Korean market or related to intellectual property rights, please ask your questions via the link below:
www.BLT.kr/contact
Or, you can inquire by emailing shawn@BLT.kr
"The computing cost required to operate ChatGPT is staggering" said OpenAI CEO Sam Altman. The 'ChatGPT' service provided by OpenAI processes users' questions based on approximately tens of thousands of GPUs (Graphics Processing Units). This GPU-based AI computing system is easy to design for specific purposes and has the advantage of low implementation costs. However, this approach faces challenges of high power consumption and high costs due to processing massive computations on GPUs. Particularly, with AI and machine learning characteristics rapidly increasing computational loads, alternatives are needed.
[Image] GPU-based AI computing system
NPU (Neural Processing Unit) is gaining attention as an alternative capable of addressing such AI computations. NPU is a structure implemented for AI and machine learning computations, capable of processing large amounts of data in artificial neural network structures. This structure achieves low power consumption through an improved distributed parallel processing structure compared to conventional GPU-based computations, addressing the high power consumption issues of conventional GPU computations. The semiconductor applied to NPU is unique. AI semiconductors for NPU application are designed to process data operations for core AI workload functions such as learning and inference with low power and high-speed processing to enhance efficiency.
[Image] Differences in CPU, GPU, and NPU structures
AI semiconductors applied to NPU are classified into three stages of technology: ① Improved versions of existing semiconductors, ② 1st generation AI semiconductors, ③ 2nd generation AI semiconductors. ① Improved versions of existing semiconductors have the characteristic of being programmable for various purposes using CPU, GPU, and Field-Programmable Gate Array (FPGA). However, there are relatively low computational performance and power consumption efficiency issues tailored to the characteristics of NPU. ② 1st generation AI semiconductors are chips developed to address these issues, customized for the applied AI to accelerate computations. Although these 1st generation chips achieve high computational efficiency and optimized structures for low power consumption compared to previous generations, they are expensive as custom chips and difficult to use for general purposes. ③ 2nd generation AI semiconductors are focused on as universal AI chips using a 'non von Neumann' approach. These chips are easy to mass-produce, achieving low power and low cost with similar computational processing capabilities to previous generations.
[Image] Classification of semiconductors by generation
Mobilint Inc. (CEO Shin Dong-joo) succeeded in attracting investment (Series B, 20 billion won) with this AI semiconductor technology. Mobilint 's AI semiconductor ARIES is a 1st generation AI semiconductor, offering approximately four times higher computational performance compared to competing products and using less than one-fifth of the energy consumption. The company is conducting demonstration projects with domestic and international companies and has applied and verified its products in various fields such as smart factories, smart cities, and robotics.
Furthermore, Mobilint is developing the next-generation chip, REGULUS, a 2nd generation AI semiconductor capable of high-performance AI functions with less than 5W power consumption, which can be applied not only to NPU but also to robotics, drones, on-device, and other fields. Mobilint's next-generation artificial intelligence semiconductor technology is protected by domestic and international patent portfolios, demonstrating differentiated technological composition compared to competitors. Based on this investment, Mobilint plans to use it for mass production of the existing AI semiconductor ARIES and development of the next-generation chip REGULUS.
[Image] Mobileint's AI semiconductor ARIES
In addition, domestic companies' participation in the market is active. Among them, DeepX Inc. (CEO Kim Nok-won) is leading the production of AI semiconductors in the market as a chip maker for Pelis AI semiconductors. DeepX has developed the DX-M1 for image and video identification, a vision AI semiconductor, and is verifying it with over 40 customers. DX-M1 is DeepX's flagship semiconductor supporting AI image analysis, characterized by up to 20 times lower power consumption than competing AI semiconductors due to optimized computational processing. It is a product that can reduce manufacturing costs by lowering the cost of SRAM input devices. Based on this technology, DeepX won three 'CES Innovation Awards' in 2023 in the fields of 'Embedded Technology', 'Computer Hardware', and 'Robotics', becoming the first AI semiconductor company to win in three categories.
Another prominent AI semiconductor company is Sapeon Inc. (CEO Ryu Su-jeong). Sapeon's AI semiconductor 'X330' is an AI semiconductor for NPU, with four times faster computational processing speed and twice the power efficiency compared to existing AI semiconductors. Based on this AI semiconductor technology, Sapeon is collaborating with SK Telecom and SK Broadband to build NPU farms. Sapeon plans to apply it to high-performance NPUs required for autonomous driving, CCTV, and other fields in the future.
[Image] DeepX's AI semiconductor 'DX-M1' and Sapeon's AI semiconductor 'X330'
Government support for NPU and AI semiconductor development is active. The Ministry of Science and ICT announced the 'K Cloud Initiative utilizing domestic AI semiconductors'. The K Cloud policy aims to develop domestically produced AI semiconductors with world-class ultra-high-speed and low-power capabilities and apply them to data centers to enhance competitiveness. To achieve this, the government plans to invest a total of 826.2 billion won by 2030. This will be carried out in three stages.
[Image] The Ministry of Science and ICT's 'K Cloud' policy for domestic NPU and AI development
The AI computing market undergoes vigorous changes every year. Even OpenAI, which was a winner in the market, faces intensified competition from global tech giants such as Google, Meta, Apple, etc. Companies that adapt well to these changes are expected to emerge as new winners.
As of 2024, BLT Law Firm has been a partner chosen by more than 2,000 innovative startups, supporting IP acquisition and strategy formulation, as well as investment attraction, technology special listing, and other business support utilizing IP to drive corporate growth and success.
'BLT insight' introduces a recently invested technology field every week.
If you have any questions about the Korean market or related to intellectual property rights, please ask your questions via the link below:
www.BLT.kr/contact
Or, you can inquire by emailing shawn@BLT.kr