Neoclouds and GPU Colocation, New Cornerstones of AI Computing

May 16, 2025

While artificial intelligence (AI) computing has taken businesses by storm, it has also taken data centers by surprise. The colossal calculations and datasets for AI create pressing needs for technology upgrades. Traditional installations cannot keep up; even conventional cloud hyperscalers struggle. So, a new generation of solutions, neoclouds, has rolled in to fill the gap. They can offer advantageous AI workload performance and pricing, but there is a condition. Neoclouds themselves must have access to suitable AI-ready data center infrastructure to enable their services.

A new perspective on AI workloads

Enterprises and organizations want to accelerate their AI operations and reap the benefits sooner. Consequently, the overall trend of AI use is upwards, at times exponentially, but the market dynamics can be complex. Although AI’s recent breakthroughs were made possible through massive use of graphics processing units (GPUs), availability of GPUs for AI workloads remains limited.

Specializing in GPU as a Service (GPUaaS), neoclouds already offer competitive pricing and rapid technology refresh cycles. To woo customers further, they innovate with new resource-sharing models and efficiency gains. They offer fractional GPU rental. They bundle their services with bare-metal resources or thin virtual machines (thin VMs) that occupy main memory only as needed, instead of reserving it ahead of time. Adaptive resource scheduling algorithms help optimize utilization, and dynamic cost renegotiation gives customers better deals while maximizing revenue.
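To make the resource-sharing idea concrete, here is a minimal sketch of how a fractional GPU allocator might pack jobs onto shared devices. It is illustrative only: the class names, the best-fit policy, and the capacity numbers are assumptions, not any vendor's actual scheduler or API.

```python
# Hypothetical fractional-GPU allocator. A "fraction" of 0.25 means a
# quarter of one GPU; best-fit packing keeps utilization high by placing
# each job on the GPU with the least (but still sufficient) free share.

from dataclasses import dataclass

@dataclass
class Gpu:
    name: str
    capacity: float = 1.0   # 1.0 = one whole GPU
    allocated: float = 0.0

    @property
    def free(self) -> float:
        return self.capacity - self.allocated

@dataclass
class Scheduler:
    gpus: list

    def place(self, job: str, fraction: float):
        """Best-fit placement: returns the chosen GPU's name, or None
        if no GPU currently has enough free capacity (job must wait)."""
        candidates = [g for g in self.gpus if g.free >= fraction]
        if not candidates:
            return None
        best = min(candidates, key=lambda g: g.free)
        best.allocated += fraction
        return best.name

sched = Scheduler([Gpu("gpu-0"), Gpu("gpu-1")])
print(sched.place("train-a", 0.5))    # gpu-0
print(sched.place("infer-b", 0.25))   # gpu-0 (tightest fit)
print(sched.place("train-c", 0.75))   # gpu-1
```

A production scheduler would also weigh memory isolation, preemption, and pricing tiers, which is where the dynamic cost renegotiation mentioned above comes into play.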

Growing AI hardware diversity

Instead of adapting conventional cloud configurations, neoclouds take AI compute requirements as their service design starting point. These requirements include growing demand for custom AI computing solutions, fueled by the diversification of AI hardware.

The success of Nvidia, currently the GPU market leader, has encouraged competitors. Mainstream processor vendors and cloud providers are marketing new processor designs for AI acceleration and improved efficiency. Tensor processing units (TPUs) offer advantages for deep learning, while field programmable gate arrays (FPGAs) are attractive for their low latency in real-time AI applications.

As a result, neoclouds may manage diverse hardware portfolios to ensure AI workloads run on the most suitable processors. This in turn means a wider range of supporting infrastructure requirements.
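The matching of workloads to processors described above can be sketched as a simple preference table. The mapping below is a deliberate simplification for illustration, echoing the tradeoffs named earlier (GPUs for massively parallel training, TPUs for deep learning, FPGAs for low-latency real-time inference); real placement logic would weigh far more factors.

```python
# Toy workload-to-accelerator router. Preference lists are illustrative
# assumptions, not a recommendation for any specific hardware portfolio.

PREFERENCES = {
    "training":       ["GPU", "TPU"],   # massive parallelism first
    "deep_learning":  ["TPU", "GPU"],
    "realtime_infer": ["FPGA", "GPU"],  # latency-sensitive work
}

def route(workload, available):
    """Return the first preferred accelerator type that the provider
    currently has available, or None if nothing suitable is free."""
    for accel in PREFERENCES.get(workload, []):
        if accel in available:
            return accel
    return None

print(route("realtime_infer", {"GPU", "FPGA"}))  # FPGA
print(route("training", {"TPU"}))                # TPU
```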

Neocloud infrastructure needs

Much of AI’s appetite for resources comes from its continuing use of GPUs. These processors are the hardware of choice for AI model development (the AI training phase) due to their large-scale parallel processing capabilities. This high-throughput computing comes, however, at the price of increased power and cooling requirements.

Whereas traditional data centers might function with 5 to 10 kilowatts of power per rack, GPU rack consumption can climb to well over 60 kilowatts. Air cooling alone is insufficient for the heat produced. Liquid cooling becomes essential for an AI-ready data center to ensure that AI workloads can run continuously. This is a key consideration as interruptions can mean considerable extra effort to resume the AI processing correctly.
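A back-of-envelope calculation shows why those figures change data center design. The numbers below come from the text (10 kW at the top of the traditional range, 60 kW for a GPU rack); the 1 MW hall is an assumed example, and since nearly all electrical power entering a rack leaves as heat, cooling must remove roughly the same number of kilowatts.

```python
# Illustrative rack-power arithmetic; not a sizing guide.

TRADITIONAL_KW = 10   # upper end of a conventional rack
GPU_RACK_KW = 60      # dense GPU rack, per the text; often higher

# Virtually all rack power becomes heat the cooling system must remove.
heat_ratio = GPU_RACK_KW / TRADITIONAL_KW
print(f"One GPU rack dissipates ~{heat_ratio:.0f}x a traditional rack's heat")

# How many racks a hypothetical 1 MW data hall can feed:
hall_kw = 1000
print(f"Traditional racks per MW: {hall_kw // TRADITIONAL_KW}")  # 100
print(f"GPU racks per MW:         {hall_kw // GPU_RACK_KW}")     # 16
```

The same megawatt that once powered a hundred racks now feeds only around sixteen, each shedding six times the heat, which is why liquid cooling moves from option to requirement.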

Connectivity must match data transfer and application end-to-end performance needs. Within data centers, InfiniBand is the reference for high-speed networking. Direct cloud connections (onramps) and carrier neutrality also help neoclouds select optimal paths for the ultra-low connection latency that many AI applications need to function properly. Geographical location is another key factor: it determines proximity to end-users for edge data center computing and supports sovereign cloud infrastructure requirements.

Data center facilities: build or buy?

Some neocloud companies, like those whose previous business was bitcoin mining (another GPU-intensive activity), already have data centers. Others do not. Building their own data centers may be unrealistic, not only because of the high costs of capital typical for neoclouds, but also because they cannot afford to wait several years for the full cycle of site selection, permitting, design, and construction.

Renting space in colocation data centers is a natural alternative. A colocation facility that is an AI-ready data center and that offers business flexibility is a strategic enabler for neoclouds to scale rapidly and cost-efficiently. Leveraging asset-light GPU colocation, they are free to focus on their service delivery and innovation. Supporting services from the colocation provider are not limited to build-to-suit power and cooling, but include data center security, compliance, and technical and remote hands assistance as well.

Sustainability and CHG footprints

Sustainability is a major concern for the AI industry due to its intensive power consumption. The carbon, heat, and greenhouse gas emissions (CHG) footprint of a neocloud is an important element for many customers comparing service offers.

AI hardware is one factor determining this footprint: more energy-efficient processors can shrink it. Sustainable AI-ready data centers also help neoclouds to be good corporate citizens. Examples include the use of green energy like hydroelectricity, carbon-neutral initiatives and carbon emission recycling, and avoiding the use of water in cooling.

These measures must still be part of a solution that makes good business sense. Thus, in its design and operation of AI-ready colocation centers, eStruxture ensures that its locations combine full core and edge data center capabilities as well as access to sustainable power and cooling solutions.

Changes on the horizon

As projects move out of development and into production, AI is extending from model training-centric operations to the larger market of inference, the application of those models. However, the power-hungry GPUs that made intensive training computation possible can be inefficient for relatively lightweight inference workloads, and new classes of processors may displace them there. Following the example of DeepSeek, more efficient AI models may further erode the attraction of GPUs, changing the shape of infrastructure requirements.

Customers will also want their AI capabilities to mesh more closely with their business activities. Neocloud infrastructure may change significantly as a result, supporting mixed digital ecosystems while scrupulously meeting individual power and cooling requirements. Overall, colocation AI-ready data centers that can flex and scale in sustainable power and cooling, with suitable price structures, will continue to be excellent partners for neoclouds as they meet evolving customer requirements for AI computing.

Contact eStruxture today to find out more about our AI-ready colocation data centers and how they can provide your business with the support it needs.