The Infrastructure Behind AI

What Is a Neocloud?

There’s a new kind of cloud in town—one that’s purpose-built for AI and complementary to traditional cloud offerings

David Tairych
Aaron Delp

TL;DR

  • Neoclouds are AI-first cloud providers offering specialized GPU infrastructure at a lower cost than hyperscalers, making AI workloads more accessible.
  • These vendors deliver purpose-built AI services, including GPUaaS, optimized storage, data pipelines and inference, with transparent hourly pricing models.
  • Enterprises can integrate neoclouds into hybrid multicloud strategies for AI experimentation while using hyperscalers for traditional workloads.

Since large language models (LLMs) burst into the mainstream in late 2022, the AI boom has been building momentum, and IT infrastructure solutions and services have been evolving to accommodate it. Behind every great AI breakthrough, there’s technology infrastructure: servers, specialized processors, storage systems and networking. As AI adoption increases and enterprises explore how to use it effectively in their businesses, the demand for compute power keeps rising.

In addition to growing demand for GPUs, manufacturing bottlenecks, supply chain disruptions and geopolitical issues have impacted the availability and cost of specialized AI hardware. Enterprises thus face the twin challenges of determining how to best use AI and accessing the necessary resources affordably.

As the IT industry struggles to meet unprecedented demand for AI-specific equipment, new players have emerged to creatively address some of the biggest challenges. Enter neoclouds—the new AI-first clouds that specialize in enabling AI workloads.

Neoclouds: Delivering AI Infrastructure as a Service

A neocloud is a vendor that offers AI-specific infrastructure and services. Because GPUs are the dominant type of processor in AI today, many neoclouds specialize in delivering GPU as a Service (GPUaaS). This is great for enterprises, because GPUaaS offerings make the newest, most powerful GPUs more accessible and affordable—especially for speculative projects where ROI isn’t predetermined. Companies can rent GPU compute capacity from neocloud providers instead of having to acquire AI infrastructure themselves, shifting from CAPEX to OPEX.
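
To see why shifting from CAPEX to OPEX appeals for speculative projects, it helps to compare the cost of owning a GPU server with renting capacity on demand. The sketch below is purely illustrative; the purchase price, operating costs, rental rate and useful life are assumed figures, not numbers from this article.

    # Illustrative rent-vs-buy break-even; every figure here is a hypothetical assumption
    purchase_price = 300_000      # assumed cost of an 8-GPU server, USD
    annual_opex = 45_000          # assumed power, cooling, space and support per year, USD
    useful_life_years = 3
    rental_rate = 2.50            # assumed neocloud price per GPU-hour, USD
    gpus = 8

    # Cost per GPU-hour if the owned server ran flat out for its whole life
    owned_hours = gpus * useful_life_years * 365 * 24
    owned_cost_per_gpu_hour = (purchase_price + annual_opex * useful_life_years) / owned_hours

    # Utilization below which renting is cheaper than owning
    break_even_utilization = owned_cost_per_gpu_hour / rental_rate

    print(f"Owned cost per GPU-hour at 100% utilization: ${owned_cost_per_gpu_hour:.2f}")
    print(f"Renting wins when utilization stays below {break_even_utilization:.0%}")

Under these assumed numbers, renting comes out ahead unless the owned hardware stays busy more than roughly 80% of the time, which is exactly the uncertainty a speculative AI project faces.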

Neoclouds offer more than just GPUs and compute infrastructure, though. They also provide:

  • AI-optimized object and file storage
  • Data pipeline support and other data transformation services
  • AI model training and fine-tuning
  • Low-latency, high-bandwidth networking and connectivity
  • AI inference
  • Monitoring and observability tools

As more AI-specific hardware becomes available, such as the inference-specific language processing units (LPUs) from Groq, the more expansive term “AI as a Service” (AIaaS) is taking hold. AIaaS is an umbrella term for the full range of services neoclouds offer to accelerate the AI lifecycle. Moving forward, neoclouds are likely to keep adopting chip architectures beyond GPUs that deliver better power efficiency and cost savings for certain workloads.

Some of the leading neoclouds in the market today are CoreWeave, Crusoe, Denvr Dataworks, GroqCloud, Lambda Labs and Nebius. (And a little side note: All these neocloud vendors have a presence in Equinix data centers.)

What’s different about neoclouds?

The term “neocloud” (new cloud) arose in late 2024 and gained traction quickly in 2025. It distinguishes the newer AI-first cloud vendors from traditional hyperscale clouds, which specialize in helping companies modernize existing infrastructure. Hyperscalers excel at providing compute, storage and networking resources for a wide variety of workloads. Neoclouds, on the other hand, are purpose-built for AI-specific workloads.

Neoclouds offer numerous benefits to enterprises:

Specialized offerings for AI

Neoclouds offer products specifically designed for AI workloads instead of providing generic building blocks that can be used for AI. For this reason, they can often deliver AI infrastructure to customers faster than other providers.

Better pricing

For enterprises, startups and research organizations, relying on hyperscale infrastructure for AI can get expensive. Since neoclouds offer fewer services, they’re less complex, run leaner and can therefore price more competitively. They also monetize their services differently: instead of complex, layered pricing that includes per-hour resource costs, egress/ingress fees and API call charges, neoclouds tend to offer transparent hourly pricing. The Uptime Institute found that the average hourly cost of an NVIDIA DGX H100 instance purchased on demand from a hyperscaler was $98, while an approximately equivalent neocloud instance cost $34, a savings of roughly two-thirds.[1]
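
As a quick check on those cited figures (the two-week run length below is a hypothetical example, not something from the report):

    # Arithmetic on the Uptime Institute hourly figures quoted above
    hyperscaler_hourly = 98   # USD per hour, NVIDIA DGX H100 instance on demand
    neocloud_hourly = 34      # USD per hour, roughly equivalent neocloud instance

    savings = 1 - neocloud_hourly / hyperscaler_hourly
    print(f"Savings vs. hyperscaler on-demand pricing: {savings:.0%}")   # ~65% from the rounded figures

    # What that gap means for a hypothetical two-week training or fine-tuning run
    hours = 14 * 24
    print(f"Hyperscaler: ${hyperscaler_hourly * hours:,}")   # $32,928
    print(f"Neocloud:    ${neocloud_hourly * hours:,}")      # $11,424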

Elasticity and scalability

Hyperscale clouds offer amazing elasticity for traditional workloads, making it easy for enterprises to increase and decrease infrastructure resources on demand. Neoclouds, however, are more elastic for GPUs specifically. Because they tend to have more readily available AI resources, they can offer greater scaling for AI clusters. Organizations can increase resources for model training and then scale back when less compute is needed.

Better fit for AI workload sizing

AI-centric workloads need not only the storage, compute and networking that traditional data centers offer but also AI-specific resources like GPUs, LPUs and TPUs. Neoclouds can provide a larger number of available parallel processors, higher-bandwidth interconnects and larger memory pools on AI chips. Thus, neoclouds are better equipped to accommodate AI-specific sizing parameters.

Including neoclouds in your hybrid multicloud strategy

Investments have been pouring into neoclouds, and the GPUaaS market is expected to grow significantly over the next several years: the market was worth an estimated $3.80 billion in 2024 and is projected to reach $12.26 billion by 2030, a compound annual growth rate (CAGR) of 22.9% over the report’s 2025–2030 forecast window.[2]
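
The implied 2025 starting point follows from the CAGR arithmetic; the 2025 value below is derived from the cited figures, not quoted from the report:

    # CAGR math on the cited Grand View Research figures
    cagr = 0.229
    end_2030 = 12.26                               # USD billions, projected
    implied_2025 = end_2030 / (1 + cagr) ** 5      # five compounding years, 2025 through 2030
    print(f"Implied 2025 market size: ${implied_2025:.2f}B")   # ~$4.37B, up from $3.80B in 2024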

While neoclouds may face some challenges with enterprise adoption, many enterprises will be motivated to use them because of better pricing and faster access to the latest GPUs. Neoclouds can empower companies to explore new AI use cases without committing to massive upfront spending, accelerating AI innovation and experimentation. By making AI equipment more accessible, neoclouds are also helping to democratize AI. And because they run specialized AI hardware at high utilization rates, neoclouds achieve excellent resource efficiency, which matters as AI growth strains power availability and data center capacity.

That said, neoclouds address a niche requirement for cost-effective AI infrastructure. As the market continues to mature and AI ecosystems grow, neoclouds will complement the offerings of major cloud providers. Neoclouds can help enterprises secure AI chips while de-risking some of the infrastructure investments for hyperscalers. We’re already seeing partnerships form between neoclouds, hyperscalers, data center operators and AI leaders.

Meanwhile, enterprises, startups, research labs and other organizations can benefit from making neoclouds a part of their hybrid multicloud strategy. This will look different for each organization, but here are some broad guidelines:

  • Use neoclouds for specialized AI workloads, such as training or fine-tuning an LLM, and for new, exploratory AI use cases where the ROI isn’t tightly defined.
  • Use hyperscale clouds for traditional services like databases, storage and analytics, and connect those data sources to neoclouds as needed through a private interconnection service like Equinix Fabric®.
  • Use private infrastructure for sensitive data and authoritative copies of data you want to keep under your control, and for AI inference at the edge.

Of course, there are many variations on this breakdown of where organizations can place workloads. Private infrastructure should carry more weight for heavily regulated industries and those handling sensitive data.
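
As a rough illustration of how these guidelines might be encoded, here is a minimal sketch; the workload categories and placement rules are simplified assumptions for illustration, not a prescriptive policy.

    # Minimal sketch of the placement guidelines above (categories and rules are assumptions)
    def place_workload(kind: str, data_sensitivity: str) -> str:
        """Suggest where a workload might run under a hybrid multicloud strategy."""
        if data_sensitivity == "regulated" or kind == "edge_inference":
            return "private infrastructure"   # sensitive or authoritative data and edge inference stay under your control
        if kind in {"model_training", "fine_tuning", "exploratory_ai"}:
            return "neocloud"                 # specialized, cost-effective GPU capacity
        return "hyperscaler"                  # databases, storage, analytics and other traditional services

    for workload in [("fine_tuning", "public"), ("database", "internal"), ("edge_inference", "regulated")]:
        print(workload, "->", place_workload(*workload))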

Neoclouds and enterprises alike benefit from private colocation

Neoclouds will continue to evolve. Even if AI growth slows, the need for fast access to specialized infrastructure will continue. In terms of their own infrastructure, neoclouds use a mix of new data center builds, acquisitions and colocation presence. Colocation offers several advantages for fast-growing neocloud vendors:

  • Available space and power without taking years to build new facilities
  • Ability to scale to new locations quickly
  • High-speed multicloud networking, including private connectivity
  • Advanced cooling capabilities like liquid cooling for high-density deployments
  • Built-in security and compliance
  • Energy efficiency and sustainable design

Connectivity, in particular, is a crucial factor for neoclouds and their customers to focus on. Similar to hyperscalers, neoclouds are working to ensure that private on-ramp connectivity is available for enterprises that don’t want to send their most valuable data across public networks.

Learn more about the role Equinix AI-ready data centers can play in your AI strategy in our AI-Ready Data Center Guide.

[1] Neoclouds: a cost-effective AI infrastructure alternative, Uptime Institute, February 26, 2025.

[2] GPU as a Service Market (2025–2030), Grand View Research.

David Tairych, Principal Solutions Architect
Aaron Delp, Former Director, AI Technical Solutions