AMD unveils its first small language model, AMD-135M — AI performance enhanced by speculative decoding



As AMD flexes its muscles in the AI game, it is not only introducing new hardware but is also working on software, trying to reach market segments not already dominated by Nvidia.

Thus, AMD has unveiled its first small language model, AMD-135M, which belongs to the Llama family and is aimed at private business deployments. It is unclear whether the new model has anything to do with the company’s recent acquisition of Silo AI (the deal still has to be finalized and cleared by various authorities, so probably not), but it is a clear step toward addressing the needs of specific customers with a pre-trained model built by AMD – using AMD hardware for inference.

The main reason AMD’s models are fast is that they use so-called speculative decoding. Speculative decoding introduces a smaller ‘draft model’ that generates multiple candidate tokens in a single forward pass. The tokens are then passed to a larger, more accurate ‘target model’ that verifies or corrects them. On the one hand, this approach allows multiple tokens to be generated per verification pass; on the other hand, it comes at the cost of higher power consumption due to increased data transactions.
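The draft-then-verify loop described above can be sketched in a few lines of Python. This is a toy illustration with stand-in "models" (simple token rules, not AMD's actual networks), meant only to show why the output still matches what the target model would have produced on its own:

```python
import random

# Toy sketch of speculative decoding. The "models" below are hypothetical
# stand-ins: the target counts upward modulo 10, and the draft usually
# agrees with it but sometimes guesses wrong.

def target_next(context):
    """'Ground truth' next token from the accurate (slow) target model."""
    return (context[-1] + 1) % 10

def draft_next(context):
    """Cheap draft model: agrees with the target ~80% of the time."""
    return target_next(context) if random.random() < 0.8 else random.randrange(10)

def speculative_decode(context, n_tokens, k=4):
    """Generate n_tokens after context, k draft tokens per verification pass."""
    out = list(context)
    while len(out) - len(context) < n_tokens:
        # 1) The draft model proposes k candidate tokens autoregressively.
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) The target model verifies: it accepts the longest prefix it
        #    agrees with, then emits its own correction at the first mismatch.
        ctx = list(out)
        for t in proposal:
            correct = target_next(ctx)
            if t == correct:
                out.append(t)
                ctx.append(t)
            else:
                out.append(correct)  # target's corrected token
                break
        # (In a real system the target scores all k draft tokens in ONE
        # forward pass, which is where the speedup comes from.)
    return out[:len(context) + n_tokens]

print(speculative_decode([0], 8))  # always the target's sequence: 0,1,2,...
```

Note that every emitted token is either draft-proposed-and-verified or target-corrected, so the final sequence is exactly what the target model alone would produce; the draft only changes how many tokens each expensive target pass yields.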

AMD’s new release comes in two versions: AMD-Llama-135M and AMD-Llama-135M-code, each designed to accelerate inference for specific tasks using speculative decoding – a logical approach for a small-language-model-based AI service. Both prevail in performance tests, although those tests were conducted by AMD itself.

  • The base model, AMD-Llama-135M, was trained from the ground up on 670 billion tokens of general data. This process took six days using four 8-way AMD Instinct MI250-based nodes (in AMD’s nomenclature these are just ‘four AMD MI250 nodes’). 
  • In addition, AMD-Llama-135M-code was fine-tuned with an extra 20 billion tokens specifically focused on coding, completing this task in four days using the same hardware.
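From the figures above, a rough back-of-the-envelope calculation (assuming the stated six days at full utilization, which the source does not confirm) gives the pretraining throughput for the base model:

```python
# Back-of-the-envelope pretraining throughput from the stated figures.
tokens = 670e9        # training tokens for the base model
days = 6              # stated training time
gpus = 4 * 8          # four 8-way AMD Instinct MI250 nodes

seconds = days * 24 * 3600
total_tps = tokens / seconds        # aggregate tokens per second
per_gpu_tps = total_tps / gpus      # per-accelerator throughput

print(f"{total_tps:,.0f} tokens/s total, {per_gpu_tps:,.0f} tokens/s per MI250")
# → 1,292,438 tokens/s total, 40,389 tokens/s per MI250
```

That is, roughly 1.3 million tokens per second across the cluster, or about 40 thousand tokens per second per MI250 accelerator, under the idealized assumptions above.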

AMD believes that further optimizations can lead to even better performance. And since the company only shares benchmark numbers for its previous-generation GPUs, we can only imagine what its current-generation (MI300X) and next-generation (MI325X) accelerators could do.




