
“The computing cost required to run ChatGPT is enough to make you cry.”
This remark by Sam Altman, CEO of OpenAI, highlights a major issue. The chatbot service ChatGPT processes user queries using approximately 10,000 GPUs (Graphics Processing Units). GPU-based AI computation offers advantages such as flexible architecture and relatively low adoption costs. However, this approach faces a critical challenge: massive computation handled solely by GPUs results in high power consumption and high operational costs. As AI and machine-learning workloads continue to expand exponentially, these burdens are expected to grow further—creating an urgent need for new solutions.

[Fig] GPU-Based AI Computing System
The NPU (Neural Processing Unit) is emerging as a compelling alternative. Designed specifically for AI and machine-learning workloads, the NPU processes massive datasets according to artificial neural-network structures. Through improved distributed parallel processing, NPUs consume significantly less power than GPUs while maintaining equivalent or superior computing efficiency. The semiconductors used in NPUs are specialized devices engineered to process data efficiently for AI workloads such as model training and inference, enabling high-speed computation at low power.
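One of the main reasons NPUs can match GPU throughput at far lower power is that neural-network layers tolerate low-precision arithmetic: dedicated INT8 multiply-accumulate (MAC) arrays are much cheaper in energy than general-purpose FP32 units. The sketch below is purely illustrative (the layer sizes, scale factors, and values are hypothetical, not any vendor's actual design) and shows that an INT8-quantized matrix-vector product closely tracks the FP32 result:

```python
# Illustrative sketch: NPUs run neural-network layers as low-precision
# (e.g. INT8) multiply-accumulate operations instead of full FP32 math.
# All numbers below are hypothetical toy values.

def quantize(values, scale):
    """Map floats to INT8 range [-128, 127] with a per-tensor scale factor."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_layer(weights, inputs, w_scale, x_scale):
    """Matrix-vector product done in integer MACs, as an NPU array would."""
    qw = [quantize(row, w_scale) for row in weights]
    qx = quantize(inputs, x_scale)
    # Accumulate in integers; rescale back to float only once at the end.
    return [sum(wi * xi for wi, xi in zip(row, qx)) * w_scale * x_scale
            for row in qw]

weights = [[0.12, -0.50, 0.33], [0.80, 0.05, -0.27]]
inputs = [1.5, -0.4, 2.0]

fp32 = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
int8 = int8_layer(weights, inputs, w_scale=0.01, x_scale=0.02)

for f, q in zip(fp32, int8):
    print(f"fp32={f:+.4f}  int8={q:+.4f}  error={abs(f - q):.4f}")
```

The quantization error stays small while every MAC runs on narrow integers, which is the arithmetic trade-off NPU silicon is built around.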

[Fig] Structural Differences Between CPU, GPU, and NPU
AI semiconductors for NPUs are categorized into three generations: ① Enhanced conventional semiconductors, ② 1st-generation AI semiconductors, and ③ 2nd-generation AI semiconductors.
Enhanced conventional semiconductors (CPUs, GPUs, FPGAs) can be programmed for a wide range of use cases, but their power efficiency and AI computation performance fall short of NPU-optimized architectures. 1st-generation AI semiconductors are custom-designed for specific AI workloads, improving computational efficiency and reducing power usage; because they are customized, however, they are expensive and difficult to deploy universally. 2nd-generation AI semiconductors adopt a non-von-Neumann architecture, enabling mass production and universal application, and deliver low-power, low-cost operation with processing capabilities comparable to those of earlier generations.

[Fig] Semiconductor Classification by Generation
Mobilint Co., Ltd. (CEO Dongjoo Shin) has raised 20 billion KRW in Series B funding on the strength of its AI semiconductor technology. Its AI chip ARIES is a 1st-generation AI semiconductor that delivers four times the computational performance of competing products while consuming less than one-fifth of the energy. Mobilint has run pilot projects with domestic and global partners, validating the chip in smart factories, smart cities, robotics, and other sectors. The company is also developing REGULUS, a 2nd-generation AI semiconductor capable of powerful AI processing at under 5 W. REGULUS is expected to be deployed in NPUs, robots, drones, and other on-device AI systems. Mobilint’s semiconductor portfolio is protected by a global patent strategy, demonstrating clear technological differentiation. The newly raised funds will go toward mass production of ARIES and development of REGULUS.
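Taken together, the two claims above compound: four times the performance at less than one-fifth the energy implies at least a twenty-fold gain in performance per watt over the competing product. A quick sanity check of that arithmetic, with the competitor normalized to one unit of performance and one unit of power (these units are assumptions for illustration, not published benchmark figures):

```python
# Sanity-check the efficiency claim: 4x performance at <= 1/5 the power
# implies at least a 20x gain in performance per watt.
# The competitor is normalized to 1 unit of performance and 1 unit of power.

competitor_perf, competitor_power = 1.0, 1.0
aries_perf = 4.0 * competitor_perf      # "four times the computational performance"
aries_power = competitor_power / 5.0    # "less than one-fifth of the energy"

perf_per_watt_gain = (aries_perf / aries_power) / (competitor_perf / competitor_power)
print(perf_per_watt_gain)
```

Since the power figure is an upper bound ("less than one-fifth"), 20× is a floor on the perf/watt advantage, not an exact value.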

[Fig] Mobilint Co., Ltd.’s AI Semiconductor ARIES
Domestic market activity is also accelerating. DeepX Co., Ltd. (CEO Nokwon Kim), a fabless AI semiconductor developer, is leading volume production in the sector. The company developed DX-M1, a vision AI semiconductor for image and video recognition that has been validated by more than 40 customer companies. DX-M1, DeepX’s flagship chip, cuts power consumption by up to 20-fold compared with rival AI semiconductors through optimized computing, while also reducing SRAM costs and thus manufacturing expenses. On the strength of these achievements, DeepX won three CES Innovation Awards in 2023 (Embedded Technologies, Computer Hardware, and Robotics), becoming the first AI semiconductor firm to do so.
SAPEON Co., Ltd. (CEO Sujeong Ryu) is another major player. Its AI semiconductor X330 is optimized for NPU integration and delivers four times faster processing and double the power efficiency compared to previous solutions. SAPEON is collaborating with SK Telecom and SK Broadband to establish NPU farms and plans to expand into autonomous driving, CCTV, and other high-performance AI markets.

[Fig] DeepX’s AI Semiconductor ‘DX-M1’ and SAPEON’s ‘X330’
The Korean government is vigorously supporting the development of NPUs and AI semiconductors. The Ministry of Science and ICT announced its “K-Cloud Initiative Using Domestic AI Semiconductors,” which aims to develop world-class, ultra-fast, low-power AI chips and deploy them in data centers to enhance national competitiveness. The government plans to invest 826.2 billion KRW by 2030 across three phases to execute this strategy.

[Fig] The Ministry of Science and ICT’s “K-Cloud” Policy for Domestic AI Semiconductor and NPU Development
The AI computing landscape is evolving rapidly. Even OpenAI, once the undisputed leader of the field, now faces fierce competition from global tech giants such as Google, Meta, and Apple. In this turbulent transition, companies that successfully adapt to the shift toward NPU-centric architectures are poised to become the next dominant winners in the AI era.
#NPU #AISemiconductor #NeuralProcessingUnit #Mobilint #DeepX #SAPEON #OpenAI #ChatGPT #AIComputing #AIChip