It’s been a little over eight years since we first started talking about Neural Processing Units (NPUs) inside our smartphones and the early promise of on-device AI. Bonus points if you remember that the HUAWEI Mate 10’s Kirin 970 processor was the first, although related concepts had been floating around, particularly in imaging, before then.
Of course, a lot has changed in the last eight years: Apple has finally embraced AI, albeit with mixed results, and Google has clearly leaned heavily on its Tensor Processing Unit for everything from imaging to on-device language translation. Ask any of the big tech companies, from Arm and Qualcomm to Apple and Samsung, and they’ll all tell you that AI is the future of smartphone hardware and software.
And yet the landscape for mobile AI still feels quite confined; we’re limited to a small but growing pool of on-device AI features, curated mostly by Google, with very little in the way of a creative developer ecosystem. NPUs are partly to blame, not because they’re useless, but because they’ve never been exposed as a real platform. Which begs the question: what exactly is this silicon sitting in our phones really good for?
What’s an NPU anyway?
Before we can decisively answer whether phones really “need” an NPU, we should probably acquaint ourselves with what one actually does.
Just like your phone’s general-purpose CPU for running apps, its GPU for rendering games, or its ISP dedicated to crunching image and video data, an NPU is a purpose-built processor for running AI workloads as quickly and efficiently as possible. Simple enough.
Specifically, an NPU is designed to handle smaller data sizes (such as tiny 4-bit or even 2-bit models), particular memory access patterns, and highly parallel mathematical operations, most notably fused multiply-add, also known as fused multiply-accumulate.
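To make that concrete, here is a minimal, illustrative sketch of the multiply-accumulate loop at the heart of quantized inference (not any vendor’s actual kernel; the function name and 8-bit types are ours for illustration). An NPU’s job is essentially to run enormous numbers of these lanes in parallel, in fixed-function hardware:

```kotlin
// Illustrative sketch: the multiply-accumulate (MAC) pattern behind
// quantized inference. Real NPUs run huge numbers of these lanes in
// parallel in hardware, often at 4-bit or lower precision.
fun dotProductInt8(weights: ByteArray, activations: ByteArray): Int {
    require(weights.size == activations.size)
    var acc = 0 // accumulate in 32 bits so the 8-bit products don't overflow
    for (i in weights.indices) {
        acc += weights[i].toInt() * activations[i].toInt() // one MAC step
    }
    return acc
}
```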
Mobile NPUs have taken hold to run AI workloads that traditional processors struggle with.
Now, as I said back in 2017, you don’t strictly need an NPU to run machine learning workloads; plenty of smaller algorithms can run on even a modest CPU, while the data centers powering various Large Language Models run on hardware that’s closer to an NVIDIA graphics card than the NPU in your phone.
Still, a dedicated NPU can let you run models that your CPU or GPU can’t handle at pace, and it can often perform those tasks more efficiently. What this heterogeneous approach to computing costs in complexity and silicon area, it can gain back in power and performance, both of which are clearly key for smartphones. No one wants their phone’s AI tools to eat up their battery.
Wait, but doesn’t AI also run on graphics cards?

If you’ve been following the ongoing RAM price crisis, you’ll know that AI data centers and the demand for powerful AI and GPU accelerators, particularly those from NVIDIA, are driving the shortages.
What makes NVIDIA’s CUDA architecture so effective for AI workloads (as well as graphics) is that it’s massively parallelized, with tensor cores that handle fused matrix multiply-accumulate (MMA) operations across a wide range of matrix and data formats, including the tiny bit-depths used for modern quantized models.
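In formula terms, each tensor core step amounts to a small tile-level fused matrix multiply-accumulate. A simplified view (glossing over tile shapes and scheduling) looks like this, with low-precision inputs A and B feeding a wider-precision accumulator C:

```latex
% Tile-level fused MMA: A and B in low precision (e.g., FP16/INT8/INT4),
% C and the result D held in a wider accumulator (e.g., FP32/INT32).
D_{ij} = C_{ij} + \sum_{k} A_{ik}\, B_{kj}
```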
While modern mobile GPUs, like Arm’s Mali and Qualcomm’s Adreno lineups, support 16-bit and increasingly 8-bit data types with highly parallel math, they don’t execute very small, heavily quantized models (such as INT4 or lower) with anywhere near the same efficiency. Likewise, despite supporting these formats on paper and offering substantial parallelism, they aren’t optimized for AI as a primary workload.
Mobile GPUs focus on efficiency; they’re far less powerful for AI than their desktop rivals.
Unlike beefy desktop graphics chips, mobile GPU architectures are designed first and foremost for power efficiency, using concepts such as tile-based rendering pipelines and sliced execution units that aren’t entirely conducive to sustained, compute-intensive workloads. Mobile GPUs can certainly perform AI compute and are quite good in some situations, but for highly specialized operations, there are often more power-efficient options.
Software development is the other, equally important half of the equation. NVIDIA’s CUDA exposes key architectural attributes to developers, allowing for deep, kernel-level optimizations when running AI workloads. Mobile platforms lack comparable low-level access for developers and device manufacturers, relying instead on higher-level and often vendor-specific abstractions such as Qualcomm’s Neural Processing SDK or Arm’s Compute Library.
This highlights a significant pain point in the mobile AI development environment. While desktop development has mostly settled on CUDA (though AMD’s ROCm is gaining traction), smartphones run a variety of NPU architectures. There’s Google’s proprietary Tensor, Qualcomm’s Snapdragon Hexagon, Apple’s Neural Engine, and more, each with its own capabilities and development platforms.
NPUs haven’t solved the platform problem

Smartphone chipsets that boast NPU capabilities (which is basically all of them) are built to solve one problem: supporting smaller data values, complex math, and tricky memory patterns efficiently, without having to retool GPU architectures. However, discrete NPUs introduce new challenges, especially when it comes to third-party development.
While APIs and SDKs are available for Apple, Snapdragon, and MediaTek chips, developers have traditionally had to build and optimize their applications separately for each platform. Even Google doesn’t yet provide easy, general developer access on its AI showcase Pixels: the Tensor ML SDK remains in experimental access, with no guarantee of a general release. Developers can experiment with higher-level Gemini Nano features via Google’s ML Kit, but that stops well short of true, low-level access to the underlying hardware.
Worse, Samsung withdrew support for its Neural SDK altogether, and Google’s more universal Android NNAPI has since been deprecated. The result is a labyrinth of specifications and abandoned APIs that makes efficient third-party mobile AI development exceedingly difficult. Vendor-specific optimizations were never going to scale, leaving us stuck with cloud-based and in-house compact models controlled by a few major vendors, such as Google.
LiteRT runs on-device AI across Android, iOS, Web, IoT, and PC environments.
Thankfully, Google launched LiteRT in 2024 (effectively repositioning TensorFlow Lite) as a single on-device runtime that supports CPUs, GPUs, and vendor NPUs (currently Qualcomm and MediaTek). It was specifically designed to maximize hardware acceleration at runtime, leaving the software to pick the most suitable strategy, which addresses NNAPI’s biggest flaw. While NNAPI was supposed to abstract away vendor-specific hardware, it ultimately standardized the interface rather than the behavior, leaving performance and reliability to vendor drivers, a gap LiteRT attempts to close by owning the runtime itself.
Interestingly, LiteRT is designed to run inference entirely on-device across Android, iOS, embedded systems, and even desktop-class environments, signaling Google’s ambition to make it a truly cross-platform runtime for compact models. However, unlike desktop AI frameworks or diffusion pipelines that expose dozens of runtime tuning parameters, a TensorFlow Lite model is fully specified, with precision, quantization, and execution constraints decided ahead of time so it can run predictably on constrained mobile hardware.
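As a sketch of what that looks like in practice, here’s a minimal Android snippet using the long-standing TensorFlow Lite Interpreter API that LiteRT inherits (the input size and 10-class output shape are hypothetical; a real model fixes these at conversion time):

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Minimal sketch: running a fully specified .tflite model on-device.
// Shapes and quantization were baked in when the model was converted,
// so there are no runtime tuning knobs to worry about.
fun classify(modelFile: File, input: FloatArray): FloatArray {
    val output = Array(1) { FloatArray(10) } // [1, 10] output, fixed by the model
    Interpreter(modelFile).use { interpreter ->
        interpreter.run(arrayOf(input), output) // [1, N] input batch
    }
    return output[0]
}
```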

While abstracting away the vendor-NPU problem is a major perk of LiteRT, it’s still worth considering whether NPUs will remain as central as they once were in light of other modern developments.
For instance, Arm’s new SME2 (Scalable Matrix Extension 2) for its latest C1 series of CPUs provides up to 4x CPU-side AI acceleration for some workloads, with broad framework support and no need for dedicated SDKs. It’s also possible that mobile GPU architectures will shift to better support advanced machine learning workloads, potentially reducing the need for dedicated NPUs altogether. Samsung is reportedly exploring its own GPU architecture specifically to better leverage on-device AI, which could debut as early as the Galaxy S28 series. Likewise, Imagination’s E-Series is built specifically for AI acceleration, debuting support for FP8 alongside INT8. Maybe Pixel will adopt this chip, eventually.
LiteRT complements these developments, freeing developers to worry less about exactly how the hardware market shakes out. Advancing complex instruction support could make CPUs increasingly efficient tools for running machine learning workloads rather than a fallback. Meanwhile, GPUs with advanced quantization support could eventually become the default accelerators instead of NPUs, and LiteRT can handle the transition. That makes LiteRT feel closer to the mobile-side equivalent of CUDA we’ve been missing: not because it exposes hardware, but because it finally abstracts it properly.
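You can see that philosophy even in today’s API surface. A minimal sketch (using the familiar TFLite GPU delegate as a stand-in for any accelerator; vendor NPU delegates plug in the same way) shows an app opportunistically attaching an accelerator and falling back to the CPU, with the model file untouched either way:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import java.io.File

// Sketch: prefer a GPU-accelerated interpreter, but fall back to the
// default CPU path if the device or driver can't accelerate this model.
fun createInterpreter(modelFile: File): Interpreter =
    try {
        Interpreter(modelFile, Interpreter.Options().addDelegate(GpuDelegate()))
    } catch (e: Exception) {
        Interpreter(modelFile) // CPU fallback keeps the app working everywhere
    }
```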
Dedicated mobile NPUs are unlikely to disappear, but apps may finally start leveraging them.
Dedicated mobile NPUs are unlikely to disappear any time soon, but the NPU-centric, vendor-locked approach that defined the first wave of on-device AI clearly isn’t the endgame. For most third-party applications, CPUs and GPUs will continue to shoulder much of the practical workload, particularly as they gain more efficient support for modern machine learning operations. What matters more than any single block of silicon is the software layer that decides how, and if, that hardware is used.
If LiteRT succeeds, NPUs become accelerators rather than gatekeepers, and on-device mobile AI finally becomes something developers can target without betting on a specific chip vendor’s roadmap. With that in mind, there’s probably still some way to go before on-device AI has a vibrant ecosystem of third-party features to enjoy, but we’re finally inching a little closer.