Many companies running AI workloads consider GPU-as-a-service providers to cut long wait times at hyperscalers.

GPU processing-as-a-service (GPUaaS) providers, known as neoclouds, are relatively new on the AI scene, but their popularity is growing quickly as customers look for lower prices and more GPU availability than hyperscalers can provide.
As the AI market has exploded, the GPUaaS business model has emerged as an alternative to hyperscalers, with providers such as CoreWeave, Lambda Labs, and Crusoe gaining traction.
The business model appears to be catching on: A recent survey found that nine in 10 organizations are either using neoclouds already, currently piloting their use, or planning on adopting them.
The July survey by web hosting review site HostingAdvice found 25% of respondents already using neoclouds extensively, with another 34% testing their use, and 21% planning to adopt within six months. The survey engaged data engineers, data scientists, cloud architects, and FinOps professionals in the healthcare, software, IT, finance, and automotive industries.
GPUs needed now
GPU availability and cost are the primary reasons enterprise AI users are turning to neoclouds. Nearly a third of respondents cite reduced wait times as the top reason to sign up with neocloud providers.
More than a third of respondents say they typically wait two to four weeks for GPU access from traditional cloud providers, while 20% wait three months or longer.
GPU wait times of weeks or months are a huge problem, says Joe Warnimont, senior analyst at HostingAdvice. “Imagine telling your board that some big AI project you have is delayed by a quarter because you can’t get these specific computer chips,” he adds. “If a competitor can deploy models three months faster than you, that’s not just an inconvenience, that’s a huge competitive disadvantage.”
While only 13% of respondents say cost was the primary factor for using neoclouds, nearly half of those surveyed say they can save 25% or more by making the switch from hyperscalers.
“There’s a lot of uncertainty right now with inflation pressures and tariffs and all that,” Warnimont says. “If you can find just a little bit of savings — and this isn’t even a little bit of savings when you’re finding a 25% to 50% savings for your AI infrastructure budget — that could be massive; it could be transformational.”
Neoclouds are becoming an attractive option with GPU shortages slowing down AI initiatives, says Bijit Ghosh, managing director of AI/ML, data, and cloud at banking firm Wells Fargo. While Wells Fargo hasn’t yet moved to a neocloud, the company is evaluating the options, he adds.
“The most significant advantage is access to GPUs without the multi-month waitlists we’ve seen with hyperscalers,” he says. “As a bank, when we have a new risk model or fraud detection AI ready, the last thing we want is to wait 90 days for [Nvidia] A100s.”
Cost can also be an advantage. While neoclouds aren’t always cheaper in dollars per hour than AWS or Azure, their pricing can be more predictable because of fewer hidden costs for data egress or storage, Ghosh says.
The issues of cost and access have made the emergence of neoclouds a crucial, and somewhat predictable, market expansion driven by the demands of AI, says Mitch Ashley, vice president and practice lead for DevOps and application development at IT analyst firm The Futurum Group.
“Neocloud vendors are not just new players; they are a direct market response to the growing and anticipated need for AI compute workloads, including the availability and cost of premium GPUs,” he adds. “Organizations are looking to these neoclouds and their existing hyperscalers to offer a focused and more cost-effective solution.”
The neocloud market represents an expansion from general-purpose cloud services to specialized, purpose-built platforms, Ashley adds.
“The future of this market isn’t about competing with hyperscalers on every front, but about providing a highly performant and accessible on-ramp to AI innovation,” he says. “Neoclouds will thrive by maintaining this focus, creating a competitive pressure on larger players to address their own GPU supply and pricing models.”
CIOs should be aware of a couple of issues, however, he says. They should examine compatibility with their existing networks, because neoclouds often rely on high-throughput fabrics such as InfiniBand.
Neoclouds also don’t offer the full complement of cloud capabilities that hyperscalers do, meaning that multicloud and cross-environment workload orchestration are usually required, he says.
The future of neoclouds
The future of the market is also uncertain. HostingAdvice’s Warnimont expects that neoclouds will become takeover targets for hyperscalers, but he suggests that a handful of market leaders could emerge and survive.
Wells Fargo’s Ghosh believes neoclouds will carve out a permanent niche alongside hyperscalers. “Hyperscalers are great for scale and breadth of services, but neoclouds win on specialization, performance tuning, and speed of access,” he says.
In regulated sectors such as banking, neoclouds can sell themselves as strategic partners for burst capacity, he adds.
“We can keep our steady-state workloads on-prem or in private clouds and then tap neoclouds when we need GPU power yesterday,” Ghosh says. “The key is having more control over where workloads run, placing them in a fit-for-purpose way, and providing the best yield from those investments.”