Chinese chipmaker launches 14nm AI processor that’s 90% cheaper than GPUs — $140 chip’s older node sidesteps US sanctions



Aiming at the high-end hardware that dominates the AI market and has prompted China-specific GPU bans from the US, Chinese manufacturer Intellifusion is introducing "DeepEyes" AI boxes with touted AI performance of 48 TOPS for 1,000 yuan, or roughly $140. Using an older 14nm node and (most likely) an ASIC is another way for China to sidestep sanctions and remain competitive in the AI market.

The first DeepEyes AI box for 2024 leverages a DeepEdge10Max SoC rated at 48 TOPS of int8 training performance. The 2024 H2 DeepEyes box will use a DeepEdge10Pro with up to 24 TOPS, and finally, the 2025 H1 DeepEyes box is aiming for a considerable performance boost with the DeepEdge10Ultra's rating of up to 96 TOPS. The pricing of these upcoming higher-end models is unclear. Still, if Intellifusion can maintain the starting ~1,000 yuan cost long-term, it may achieve its goal of "90% cheaper AI hardware" that still "covers 90% of scenarios".
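To put those figures in perspective, here is a rough cost-per-TOPS sketch in Python. It uses only the TOPS ratings and the ~$140 price quoted above, and it assumes the higher-end boxes keep roughly the same starting price, which is not confirmed.

```python
# Rough cost-per-TOPS arithmetic using the figures quoted in the article.
# Assumption: all three boxes sell near the ~1,000 yuan (~$140) starting price,
# which is only confirmed for the first DeepEdge10Max-based box.
boxes = {
    "DeepEdge10Max (2024 H1)":   {"tops": 48, "price_usd": 140},
    "DeepEdge10Pro (2024 H2)":   {"tops": 24, "price_usd": 140},
    "DeepEdge10Ultra (2025 H1)": {"tops": 96, "price_usd": 140},
}

for name, spec in boxes.items():
    print(f"{name}: ${spec['price_usd'] / spec['tops']:.2f} per int8 TOPS")
```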

All of the above fully domestically produced hardware leverages Intellifusion's custom NNP400T neural network processor. Alongside the other components expected of an SoC (a 1.8 GHz 2+8-core RISC CPU and a GPU clocked up to 800 MHz in the DeepEdge10), the capable NPU on board makes this a pretty tasty option within its market.

Photo of Intellifusion’s DeepEdge10 chips. (Image credit: zhidx.com)

For reference, to meet Microsoft's stated requirements for an "AI PC," modern PCs must have at least 40 TOPS of NPU performance. So, Intellifusion's immediate trajectory seems like it should soon be suitable for many AI workloads, especially considering most existing NPUs are only as fast as 16 TOPS. However, Qualcomm's Snapdragon X Elite chips are set to boast 40+ TOPS alongside industry-leading iGPU performance later this year.
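As a quick illustration of that threshold, the snippet below simply compares the TOPS figures mentioned in this article against the 40 TOPS "AI PC" bar; the numbers are the article's, not independently verified specifications.

```python
# Compare the quoted NPU ratings against the 40 TOPS "AI PC" requirement.
AI_PC_THRESHOLD_TOPS = 40

ratings = {
    "DeepEdge10Max": 48,
    "DeepEdge10Pro": 24,
    "DeepEdge10Ultra": 96,
    "typical existing NPU": 16,
}

for chip, tops in ratings.items():
    verdict = "meets" if tops >= AI_PC_THRESHOLD_TOPS else "falls short of"
    print(f"{chip}: {tops} TOPS {verdict} the {AI_PC_THRESHOLD_TOPS} TOPS bar")
```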

As Dr. Chen Ning, chairman of Intellifusion, posted: "In the next three years, 80% of companies around the world will use large models. […] The cost of training a large model is in the tens of millions, and the price of mainstream all-in-one training and inference machines is generally one million yuan. Most companies cannot afford such costs."

While the claim that 80% of companies worldwide will be leveraging AI seems questionable at best, a fair point is being made here about the cost of entry for businesses to make meaningful use of AI, especially when it comes to creating their own models. The DeepEdge chips use "independent and controllable domestic technology" and a RISC-V core to support extensive model training and inference deployment.




