Thursday, January 15, 2026

Why software will save Nvidia from an AI bubble burst • The Register


Today, Nvidia's revenues are dominated by hardware sales. But when the AI bubble inevitably pops, the GPU giant will become the single most important software company in the world.

Since ChatGPT kicked off the AI arms race in late 2022, Nvidia has shipped millions of GPUs, predominantly for use in AI training and inference.

That's a lot of chips that are going to be left idle when the music stops and the finance bros come to the sickening realization that using a fast-depreciating asset as collateral for multi-billion dollar loans wasn't such a great idea after all.

Still, anyone suggesting these GPUs will be rendered worthless when the dust settles is naive.

GPUs may be synonymous with AI by this point, but they're far more versatile than that. As a reminder, GPU stands for graphics processing unit. These chips were originally designed to speed up video game rendering, which, by the late '90s, was quickly becoming too computationally intensive for the single-threaded CPUs of the time.

As it turns out, the same thing that made GPUs great at pushing pixels also made them particularly well suited to other parallel workloads. You know, like simulating the physics of a hydrogen bomb going critical. Many of Nvidia's most powerful accelerators, chips like the H200 or GB300, have long since ditched the graphics pipeline to make room for more of the vector and matrix math units required in HPC and AI.

If an app can be parallelized, there's a good chance it'll benefit from GPU acceleration, provided you have the software to do it. That's why there are so few GPU companies: a GPU needs to be broadly programmable; an AI ASIC only needs to do inference or training well.
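The distinction between serial and parallel formulations is easy to see in miniature. The NumPy sketch below runs on a CPU and is purely illustrative (the function and data are made up), but the vectorized form at the bottom is the shape of workload that GPU libraries accelerate: the same operation applied independently across millions of elements.

```python
import numpy as np

# One million data points; the same arithmetic applied to each.
x = np.linspace(0.0, 1.0, 1_000_000)

# Serial formulation: one element at a time, as a single CPU thread would.
serial = np.empty_like(x)
for i in range(x.size):
    serial[i] = 3.0 * x[i] ** 2 + 2.0

# Data-parallel formulation: one expression over the whole array at once.
# This independent, per-element structure is what GPUs exploit.
parallel = 3.0 * x ** 2 + 2.0

assert np.allclose(serial, parallel)
```

Rendering a frame, multiplying the matrices inside a neural network, or simulating fluid flow all decompose into this per-element pattern, which is why the same silicon serves all three.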

CUDA-X many reasons to buy a GPU

Since introducing CUDA, its low-level GPU programming environment and API, in 2007, Nvidia has built hundreds of software libraries, frameworks, and microservices to accelerate any and every workload it can think of.

The libraries, collectively marketed under the CUDA-X banner, cover everything from computational fluid dynamics and electronic design automation to drug discovery, computational lithography, materials design, and even quantum computing. The company also has frameworks for visualizing digital twins and for robotics.

For now, AI has turned out to be the most lucrative of these, but when the hype train runs out of steam, there's still plenty that can be done with the hardware.

For example, Nvidia built cuDF and integrated it into the popular RAPIDS data science and analytics framework to accelerate SQL databases and Pandas, achieving a 150x speedup in the process. It's no wonder database giant Oracle is so keen on Nvidia's hardware. Any compute it can't rent out to OpenAI for a profit, it can use to accelerate its database and analytics platforms.
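The pitch of cuDF's pandas accelerator is that it's a drop-in: ordinary pandas code like the toy sketch below runs unchanged, and launching it with the accelerator loaded (`python -m cudf.pandas script.py`) transparently moves supported operations onto the GPU. The dataframe contents here are invented for illustration.

```python
# Plain pandas code. With Nvidia's cuDF pandas accelerator loaded
# (`python -m cudf.pandas script.py`), the same lines run on the GPU.
import pandas as pd

# Toy data standing in for a large analytics table.
df = pd.DataFrame({
    "region": ["east", "west", "east", "west", "east"],
    "sales":  [100, 250, 175, 300, 125],
})

# A typical groupby-aggregate, the kind of operation RAPIDS accelerates.
totals = df.groupby("region")["sales"].sum()
print(totals["east"], totals["west"])  # 400 550
```

On a five-row frame the GPU buys you nothing, of course; the speedups Nvidia quotes come from tables with hundreds of millions of rows, where the groupby above becomes a massively parallel reduction.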

Nvidia doesn't offer a complete solution, and that's by design. Some of its libraries are open source, while others are made available as more comprehensive frameworks and microservices. These form the building blocks that software developers can use to accelerate their workloads, with a growing number of them tied back to revenue-generating licensing schemes.

The only problem: up to now, these benefits required buying or leasing a pricey GPU and then integrating these frameworks into your code base, or waiting for an independent software vendor (ISV) to do it for you.

But when the bubble bursts and GPU pricing drops through the floor, anyone who can find a use for these stranded assets stands to make a fortune. Nvidia has already built the software necessary to do it; the ISVs just have to integrate and sell it.

In this context, Nvidia's steady transition from building low-level software libraries aimed at developers to selling enterprise-focused microservices starts to make a lot of sense. The lower the barrier to adoption, the easier it is to sell hardware and the subscriptions that go with it.

It appears that Nvidia may even open this software stack to a broader ecosystem of hardware vendors. GPUzilla has begun transitioning to a disaggregated architecture that breaks up workloads and offloads them to third-party silicon.

This week, Nvidia completed a $5 billion investment in Intel. The x86 giant is currently developing a prefill accelerator to speed up prompt processing for large language model inference. Meanwhile, Nvidia signed a deal last week to acqui-hire rival chip vendor Groq, though it remains to be seen how the GPU slinger intends to integrate the company's tech long term.

In addition to its home-grown software platforms, Nvidia has made several strategic software acquisitions over the past few years, buying Run:ai's Kubernetes-based GPU orchestration and Deci AI's model optimization platforms in 2024. Earlier this month, Nvidia added SchedMD's Slurm workload management platform, which is widely deployed across AMD, Nvidia, and Intel-based clusters for HPC and AI workloads, ensuring a profit even if you don't buy its hardware.

GenAI is here to stay

To be clear, generative AI as we know it today isn't going away. The cash that's fueled AI development over the past three years may evaporate, but the underlying technology, imperfect as it is, is still valuable enough that enterprises will keep using it.

Rather than chasing the mirage that is artificial general intelligence, applications of the tech will be far more mundane.

In fact, many of Nvidia's more comprehensive microservices make extensive use of domain-specific AI models for things like weather forecasting or physics simulation.

When the dot-com bubble burst, people didn't stop building web services or buying switches and routers. This time around, people aren't going to stop consuming AI services either. They'll just be one of several reasons to buy GPUs. ®
