As artificial intelligence workloads grow more compute-intensive, data centers are running into a less visible constraint: not chips, not software, but cooling. For every watt that powers a server, nearly another watt can be lost keeping that server from overheating. One startup believes a significant share of that power can be reclaimed by rethinking how cooling decisions are made in real time.
That startup is NexDCCool Technologies, a software company spun out of Penn State University that is now turning years of academic research into a commercial platform designed to optimize data center cooling as AI demand scales.
NexDCCool is betting that smarter software — guided by AI — can free up power already locked inside existing infrastructure.
Cooling: The Hidden Power Sink in AI Infrastructure
Data centers are known for their high energy use, but much of that electricity isn’t powering servers — it’s used to keep them cool. Cooling is often the second-largest operating cost after IT.
Servers generate heat constantly, and conventional cooling systems are designed conservatively — often assuming peak loads, even when real-world demand fluctuates hour to hour. The result is excess cooling that consumes power without delivering additional compute value.
As AI workloads scale, that inefficiency becomes more expensive. Power availability is finite, and every kilowatt directed toward cooling is one that can’t be used to run additional servers. This is the inefficiency NexDCCool is targeting.
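To see the size of that trade-off, consider a rough back-of-the-envelope sketch. The numbers below are illustrative assumptions, not figures from NexDCCool or any operator: a cooling plant run as if the facility were always at peak IT load burns extra energy whenever actual load dips, and each kilowatt of that excess is a kilowatt that could have powered more servers.

```python
# Illustrative back-of-the-envelope model (hypothetical numbers, not vendor data):
# compare cooling that assumes peak IT load against cooling that follows actual load.

peak_it_load_kw = 1_000    # assumed peak IT load of a small facility
cooling_per_it_kw = 0.4    # assumed cooling power needed per kW of IT load
server_power_kw = 0.8      # assumed draw of one additional server

# Hypothetical hourly utilization over one day (fraction of peak IT load).
hourly_utilization = [0.55, 0.50, 0.45, 0.45, 0.50, 0.60,
                      0.70, 0.80, 0.90, 0.95, 1.00, 1.00,
                      0.95, 0.95, 0.90, 0.85, 0.85, 0.80,
                      0.75, 0.70, 0.65, 0.60, 0.60, 0.55]

# Conservative operation: cooling runs as if the facility were always at peak.
cooling_at_peak_kwh = cooling_per_it_kw * peak_it_load_kw * len(hourly_utilization)

# Load-following operation: cooling scales with the actual IT load each hour.
cooling_tracking_kwh = sum(cooling_per_it_kw * peak_it_load_kw * u
                           for u in hourly_utilization)

excess_kwh = cooling_at_peak_kwh - cooling_tracking_kwh
avg_excess_kw = excess_kwh / len(hourly_utilization)

print(f"Excess cooling energy over the day: {excess_kwh:.0f} kWh")
print(f"Average power freed if cooling followed load: {avg_excess_kw:.0f} kW")
print(f"Roughly {avg_excess_kw / server_power_kw:.0f} additional servers' worth of power")
```

The absolute numbers matter less than the direction of the effect: cooling that follows actual conditions, rather than an assumed peak, frees power that can be redirected to compute.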
From Academic Models to Operator Reality
NexDCCool grew out of research led by Wangda Zuo, a professor of architectural engineering at Penn State, whose group spent more than a decade developing models to improve energy performance in buildings and data centers.
While the research proved effective in academic settings, the team encountered challenges when moving toward commercialization. Industry feedback indicated that the technology, though sophisticated, was too complex and slow for day-to-day data center operations.
Those gaps became clearer during the National Science Foundation’s I-Corps program, which requires research teams to test their ideas with potential customers. Rather than focusing on technical detail, operators emphasized practical considerations such as response time, ease of integration, and operational impact.
That feedback led the team to refocus its approach, shifting from research-grade optimization tools to a software platform designed to support real-time decision-making in live data center environments.
NexDCCool’s software integrates with existing data center cooling systems and IT infrastructure, using AI to adjust cooling operations in response to changing conditions such as power prices, IT load, and thermal constraints. Operators can set different objectives, including minimizing overall energy costs, reducing cooling power use, or increasing available capacity for computing.
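In broad strokes, that decision loop can be pictured as follows. The sketch below is a simplified illustration built on toy models and made-up names, not NexDCCool's algorithm or API: on each control cycle, the software reads current conditions, filters candidate cooling settings through a thermal constraint, and picks the one that best serves the operator's chosen objective.

```python
# Minimal sketch of an objective-driven cooling control loop.
# Every name, number, and model below is an illustrative assumption,
# not NexDCCool's product, algorithm, or API.

from dataclasses import dataclass

@dataclass
class Conditions:
    it_load_kw: float          # current IT power draw
    power_price: float         # current electricity price, $/kWh
    inlet_temp_limit_c: float  # thermal constraint that must be respected
    power_budget_kw: float     # total facility power available

def cooling_power_kw(setpoint_c: float, cond: Conditions) -> float:
    # Toy model: warmer supply-air setpoints need less cooling power.
    return max(0.1, 0.6 - 0.02 * setpoint_c) * cond.it_load_kw

def predicted_inlet_temp_c(setpoint_c: float, cond: Conditions) -> float:
    # Toy model: server inlet temperature rises with setpoint and IT load.
    return setpoint_c + 5.0 + 0.004 * cond.it_load_kw

def choose_setpoint(cond: Conditions, objective: str) -> float:
    candidates = [18.0, 20.0, 22.0, 24.0, 26.0]  # candidate supply-air setpoints, degC
    feasible = [s for s in candidates
                if predicted_inlet_temp_c(s, cond) <= cond.inlet_temp_limit_c]
    if not feasible:
        raise RuntimeError("no candidate setpoint satisfies the thermal constraint")
    if objective == "min_energy_cost":
        return min(feasible, key=lambda s: cooling_power_kw(s, cond) * cond.power_price)
    if objective == "min_cooling_power":
        return min(feasible, key=lambda s: cooling_power_kw(s, cond))
    if objective == "max_it_headroom":
        # Power not spent on cooling stays in the facility budget for more servers.
        return max(feasible, key=lambda s: cond.power_budget_kw - cond.it_load_kw
                                           - cooling_power_kw(s, cond))
    raise ValueError(f"unknown objective: {objective}")

# One tick of the loop, with made-up telemetry.
now = Conditions(it_load_kw=850.0, power_price=0.12,
                 inlet_temp_limit_c=32.0, power_budget_kw=1_500.0)
for obj in ("min_energy_cost", "min_cooling_power", "max_it_headroom"):
    print(obj, "->", choose_setpoint(now, obj), "degC")
```

In this toy model the three objectives largely coincide, because cooling power is the only cost in play; in a real facility they diverge once time-varying power prices, pre-cooling, equipment ramp limits, and redundancy requirements enter the picture.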
According to the company, customer demand has increasingly centered on maximizing IT capacity by reallocating power from cooling systems to servers. As a result, NexDCCool now frames its platform less around energy savings alone and more around enabling additional computing capacity and revenue, a positioning that has shaped its transition from academic research to commercial product.
A Growing Category, With an Open Question
NexDCCool isn’t alone in seeing opportunity at the intersection of AI, infrastructure, and efficiency. Across the sector, startups are racing to address the physical limits of digital growth — from advanced cooling to power-aware scheduling to on-site generation.
What remains uncertain is how far software-driven optimization can go. For NexDCCool, the bet is that reclaiming power lost to cooling isn’t just incremental — it’s one of the fastest ways to unlock capacity without waiting years for new infrastructure to come online.
In an AI economy increasingly defined by physical constraints, that kind of leverage may prove just as valuable as the next generation of chips.