Efficiency, Not Brute Power, Will Shape the AI Race: Ex-Facebook Privacy Chief

Former Facebook executive Chris Kelly says AI’s future winners will be defined by efficiency, as energy costs emerge as the biggest challenge.
Artificial intelligence continues to transform industries and redefine competition among global technology giants. While much of the current AI race has been driven by a relentless push for smarter and larger models, a former Facebook executive believes the next phase will be shaped by a very different factor: efficiency.
Chris Kelly, former chief privacy officer and general counsel at Facebook, has argued that raw computing power alone will no longer determine leadership in AI. Instead, the decisive advantage will belong to companies that can deliver advanced intelligence while using less energy and spending less on infrastructure.
In a recent interview, Kelly explained that the industry is approaching a critical inflection point. As AI models grow in size and complexity, the energy required to train and operate them is rising sharply. Massive data centres, packed with specialised hardware and cooling systems, are becoming increasingly expensive to build and maintain. According to Kelly, this growing dependence on power-intensive infrastructure is turning into one of the industry’s biggest bottlenecks.
“The companies that learn how to do more with less are going to have a real edge,” Kelly suggested, highlighting how energy efficiency could become the defining metric of success in AI.
The scale of investment already pouring into AI infrastructure shows how serious the challenge has become. Data from S&P Global indicates that data centre dealmaking surpassed $61 billion in 2025. Major players such as OpenAI, Google, Meta and xAI are racing to construct facilities capable of handling enormous AI workloads. This rapid expansion has intensified demand for electricity, land, cooling systems and advanced chips, placing pressure not only on corporate finances but also on regional power grids.
Much of today’s AI ambition is centred on achieving artificial general intelligence (AGI) — systems that can match or exceed human-level reasoning. However, Kelly points out a striking contrast between machine intelligence and the human brain.
“We run our brains on 20 watts... We don’t need gigawatt power centres to reason,” he said. “I think that finding efficiency is going to be one of the key things that the big AI players look to.”
For Kelly, this comparison highlights the need for smarter engineering rather than endless scaling. He believes future breakthroughs will depend on rethinking how models are trained and deployed, focusing on reducing both cost and energy consumption without sacrificing performance.
Concerns around power usage are already becoming impossible to ignore. In September, Nvidia and OpenAI revealed plans involving at least 10 gigawatts of new data centre capacity. Ten gigawatts of continuous draw is roughly equivalent to the annual electricity use of around eight million US households, or to New York City’s peak summer demand in 2024.
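The household comparison holds up to a quick back-of-envelope check. The sketch below assumes, as a rough figure, that an average US household uses about 10,700 kWh of electricity per year (a commonly cited EIA-style estimate; the exact number varies by year and region):

```python
# Back-of-envelope check: how many US households does 10 GW of
# continuous data centre draw correspond to over a year?
HOURS_PER_YEAR = 24 * 365                 # 8,760 hours
capacity_gw = 10                          # planned capacity from the article
annual_twh = capacity_gw * HOURS_PER_YEAR / 1000   # GW·h -> TWh per year

# Assumed average annual household consumption (rough figure, in kWh).
household_kwh_per_year = 10_700
households = annual_twh * 1e9 / household_kwh_per_year

print(f"{annual_twh:.1f} TWh/year ≈ {households / 1e6:.1f} million households")
```

With those assumptions the result lands at roughly 8.2 million households, consistent with the “around eight million” figure reported above.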
As AI systems continue to grow more capable and demanding, their appetite for energy will only increase. Kelly’s message is clear: the next leaders in artificial intelligence will not be those who simply build the biggest models, but those who can teach machines to think smarter while consuming far less power.