In the relentless pursuit of artificial intelligence (AI) that can match and surpass human capabilities, the tech industry has long believed that bigger is better. But as we push the boundaries of scale, we are hitting a wall: the costs and risks of expansion are starting to outweigh the benefits.
The problem lies in the physical and economic constraints that come with scaling AI. Data centers, the backbone of AI processing, are projected to roughly double their electricity demand by 2030, reaching levels once associated with entire industrial sectors. In the U.S. alone, meeting that growth is expected to require trillions of dollars in new investment and significant expansions of grid capacity.
The Cost of Scaling
Training frontier AI models has already become extraordinarily expensive, with costs multiplying year over year. Single training runs could soon exceed $1 billion, yet the larger expense lies in inference: running these models continuously under real latency, uptime, and verification requirements. Every query consumes energy and every deployment demands robust infrastructure, so costs and energy use compound with scale.
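The compounding can be made concrete with a back-of-the-envelope model. All figures below are illustrative assumptions for the sketch, not reported numbers: a one-time training spend, a blended per-query serving cost, and steady deployment traffic.

```python
# Illustrative sketch (all figures are hypothetical assumptions, not
# measurements): comparing a one-time training cost against the
# cumulative cost of serving inference at scale.

TRAINING_COST_USD = 100e6    # assumed one-time training spend
COST_PER_QUERY_USD = 0.002   # assumed blended compute + energy cost per query
QUERIES_PER_DAY = 50e6       # assumed deployment traffic

def cumulative_inference_cost(days: int) -> float:
    """Total inference spend after `days` of continuous operation."""
    return COST_PER_QUERY_USD * QUERIES_PER_DAY * days

# Days until cumulative inference spending overtakes the training run itself.
breakeven_days = TRAINING_COST_USD / (COST_PER_QUERY_USD * QUERIES_PER_DAY)
print(f"Inference overtakes training after ~{breakeven_days:.0f} days")
```

Under these assumed parameters, serving costs pass the training bill in under three years, and unlike training they never stop accruing; that is the sense in which inference, not training, dominates the long-run economics.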
Amplifying Weaknesses
As AI systems are integrated into critical sectors like law, finance, and compliance, the stakes are higher than ever. In June 2025, the UK High Court warned lawyers to stop submitting filings that cited fabricated case law generated by AI tools. This incident underscores a broader issue: scaling AI without improving its reasoning capabilities amplifies its weaknesses.
Large language models (LLMs) have grown increasingly fluent because fluency is pattern-based: the more examples they see, the better they imitate. Deeper intelligence, however, does not scale the same way. The next generation of AI must understand cause and effect, recognize uncertainty, and explain its conclusions, and none of these capabilities reliably improve with more parameters or compute.
The Verification Burden
As AI systems are deployed more widely, the burden of verifying their outputs grows. Humans must spend more time checking machine-generated content, which undermines the efficiency and reliability these systems are supposed to provide. In financial markets, fluent but unreliable AI can propagate errors, move capital based on false signals, and undermine public trust.
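The verification burden can also be modeled simply. The sketch below, with hypothetical parameters, shows the expected time saved per task when an AI drafts the work and a human must check it: as the error rate rises, review and rework consume the benefit.

```python
# Hypothetical model of the verification burden. A human either does a
# task by hand, or reviews an AI draft and reworks it when it is wrong.
# All parameter values are illustrative assumptions.

def net_minutes_saved(task_minutes: float,
                      review_minutes: float,
                      error_rate: float,
                      rework_minutes: float) -> float:
    """Expected minutes saved per task with an AI draft plus human review.

    task_minutes   -- time to do the task fully by hand
    review_minutes -- time to check the AI draft
    error_rate     -- probability the draft is wrong and needs rework
    rework_minutes -- time to redo a bad draft
    """
    expected_checking_cost = review_minutes + error_rate * rework_minutes
    return task_minutes - expected_checking_cost

# Reliable drafts save real time; unreliable ones barely break even.
print(net_minutes_saved(30, 5, 0.05, 30))  # ~23.5 minutes saved
print(net_minutes_saved(30, 5, 0.80, 30))  # ~1.0 minute saved
```

The point of the model is that fluency without reliability shifts work rather than removing it: beyond some error rate, checking the machine costs nearly as much as doing the task yourself.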
Time for a New Approach
Ensuring that AI is economically viable and socially valuable requires a shift from simply scaling to improving the underlying reasoning mechanisms. Cognitive or neurosymbolic systems, which organize knowledge into interrelated concepts, can deliver high reasoning capability with far lower energy and infrastructure demands. These systems can operate on local servers or edge devices, allowing users to maintain control over their own knowledge.
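A minimal sketch can show what "knowledge organized into interrelated concepts" looks like in the symbolic half of such a system. The concepts and the single inference rule below are illustrative assumptions, not any particular product's design: facts are explicit relations, and a small forward-chaining loop derives new ones.

```python
# Minimal sketch of a symbolic concept graph: knowledge stored as
# explicit (subject, relation, object) triples, with one hypothetical
# forward-chaining rule that derives new facts from existing ones.

facts = {
    ("invoice", "is_a", "financial_document"),
    ("financial_document", "requires", "audit_trail"),
}

def forward_chain(facts):
    """Rule: if X is_a Y and Y requires Z, then X requires Z.

    Repeats until no new fact can be derived (a fixed point).
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(derived):
            for (y2, r2, z) in list(derived):
                if r1 == "is_a" and r2 == "requires" and y == y2:
                    new_fact = (x, "requires", z)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

closed = forward_chain(facts)
print(("invoice", "requires", "audit_trail") in closed)  # True
```

Because every derived fact traces back to explicit relations and a named rule, the conclusion is auditable rather than opaque, and the whole structure is small enough to run on a local server or edge device rather than a hyperscale data center.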
Decentralizing AI development through blockchain can further reduce concentration risk and align deployment with local needs. By enabling both individuals and corporations to contribute data, models, and computing resources, these approaches can foster a more resilient and trustworthy AI ecosystem.
Conclusion
The era of scaling AI at all costs is coming to an end. The industry must now focus on developing architectures that enhance reasoning and reduce the verification burden. By prioritizing reliability over size, we can build AI systems that are not only powerful but also trustworthy and sustainable.
