MediaTek Announces Optimisation of Microsoft’s Phi-3.5 AI Models on Dimensity Chipsets



MediaTek announced on Monday that it has now optimised several of its mobile platforms for Microsoft’s Phi-3.5 artificial intelligence (AI) models. The Phi-3.5 series of small language models (SLMs), comprising Phi-3.5 Mixture of Experts (MoE), Phi-3.5 Mini, and Phi-3.5 Vision, was released in August, and the open-source models were made available on Hugging Face. Rather than being typical conversational models, these are instruct models that require users to input specific instructions to get the desired output.
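Because these are instruct models, a typical way to try them is to pull a checkpoint from Hugging Face and prompt it with an explicit instruction. The following is a minimal, illustrative sketch using the Transformers library and the publicly listed microsoft/Phi-3.5-mini-instruct model ID; the prompts and generation settings are placeholders, and it targets a generic GPU/CPU rather than MediaTek’s NPU stack.

```python
# Minimal sketch of instruct-style prompting with Phi-3.5 Mini via Hugging Face
# Transformers. Model ID and chat format follow Microsoft's public model card;
# running this through a Dimensity NPU is not covered here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-mini-instruct"  # public Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduce memory; assumes a bf16-capable device
    device_map="auto",
)

# Instruct models expect an explicit instruction rather than free-form chat.
messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Summarise the following text in two sentences: ..."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```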

In its announcement, MediaTek said that its Dimensity 9400, Dimensity 9300, and Dimensity 8300 chipsets are now optimised for the Phi-3.5 AI models. With this, these mobile platforms can efficiently process and run inference for on-device generative AI tasks using MediaTek’s neural processing units (NPUs).

Optimising a chipset for a specific AI model involves tailoring the hardware design, architecture, and operation of the chipset to efficiently support that model’s processing demands, memory access patterns, and data flow. Once optimised, the model runs with lower latency and power consumption and higher throughput.
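The practical gains from such optimisation are usually reported as latency and throughput. As a rough, hardware-agnostic illustration of what those metrics mean, the sketch below times a single generation call and derives tokens per second and per-token latency; it reuses the hypothetical model, tokenizer, and inputs from the earlier snippet and does not involve MediaTek’s actual NPU tooling.

```python
import time

# Hypothetical measurement helper: times one generate() call and reports
# throughput (tokens/s) and average per-token latency. Reuses `model` and
# `inputs` from the snippet above.
def measure_generation(model, inputs, max_new_tokens=128):
    start = time.perf_counter()
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    elapsed = time.perf_counter() - start

    generated = outputs.shape[-1] - inputs.shape[-1]     # tokens actually produced
    throughput = generated / elapsed                      # tokens per second
    latency_per_token = elapsed / max(generated, 1)       # seconds per token
    return generated, throughput, latency_per_token

tokens, tps, spt = measure_generation(model, inputs)
print(f"{tokens} tokens, {tps:.1f} tokens/s, {spt * 1000:.0f} ms/token")
```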

MediaTek highlighted that its processors are optimised not only for Microsoft’s Phi-3.5 MoE but also for Phi-3.5 Mini, which offers multilingual support, and Phi-3.5 Vision, which comes with multi-frame image understanding and reasoning.

Notably, the Phi-3.5 MoE has 16×3.8 billion parameters, but only 6.6 billion of them are active when two experts are used (the typical configuration). Phi-3.5 Vision, meanwhile, features 4.2 billion parameters along with an image encoder, and Phi-3.5 Mini has 3.8 billion parameters.
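The gap between total and active parameters comes from the mixture-of-experts design: each token is routed to only a small subset of expert networks, so only those experts’ weights participate in a given forward pass. The toy sketch below illustrates top-2 routing with made-up dimensions; it is not Phi-3.5 MoE’s actual architecture, and the shared layers that account for part of the 6.6 billion active parameters are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: 16 experts, but each token only uses the
# top-2 experts chosen by a router. Dimensions are illustrative, not Phi-3.5's.
d_model, n_experts, top_k = 64, 16, 2
router = rng.standard_normal((d_model, n_experts))
experts = rng.standard_normal((n_experts, d_model, d_model))  # one weight matrix per expert

def moe_forward(x):
    """Route a single token vector through its top-k experts."""
    scores = x @ router                                   # router logits, one per expert
    top = np.argsort(scores)[-top_k:]                     # indices of the top-k experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over chosen experts
    # Only the selected experts' parameters are touched for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)

total_params = experts.size                    # all expert parameters
active_params = top_k * d_model * d_model      # expert parameters used for this token
print(f"total expert params: {total_params}, active per token: {active_params}")
```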

Coming to performance, Microsoft claimed that the Phi-3.5 MoE outperformed both the Gemini 1.5 Flash and GPT-4o mini AI models on the SQuALITY benchmark, which tests readability and accuracy when summarising a block of text.

While developers can access Microsoft’s Phi-3.5 models directly via Hugging Face or the Azure AI Model Catalogue, MediaTek’s NeuroPilot SDK toolkit also offers access to these SLMs. The chipmaker stated that the latter will let developers build optimised on-device applications capable of generative AI inference using these models across the above-mentioned mobile platforms.


