Who Controls AI? Not Government, and That’s the Real Problem

The EU AI Act, the White House’s Executive Order on AI, and China’s strict AI governance laws all claim to be tackling the risks of artificial intelligence. Lawmakers promise that these policies will make AI safer, fairer, and more accountable. A reassuring thought for us all, as we increasingly see AI reshaping economies, workplaces, and governments at an accelerating pace.
But there’s a problem. Most AI laws are focused on the wrong risks.
Governments are treating AI like a software issue: something to be audited for safety and ethics. That’s why the biggest regulatory conversations centre on algorithmic fairness, misinformation risks, and ethical AI frameworks.
I’m not saying these concerns are unimportant, but the policies built around them ignore the bigger picture: who is controlling and profiting from AI.
Because AI isn’t just another technology; it’s reshaping economic power. And if AI regulation doesn’t address who controls and profits from this transformation, it’s not really regulation at all.
AI is Reshaping Power, But Not in the Way Regulators Think
For years, we’ve been told that AI is a disruptive force, breaking down barriers and unlocking new opportunities. And it is, but only to an extent. For every opportunity AI opens up, it is also concentrating power in a few corporations at a scale never seen before.
The companies that own the infrastructure of AI (compute power, training data, and foundational models) are quietly becoming the gatekeepers of the digital economy. It’s not just about who builds the best AI models anymore. It’s about who controls the resources that make AI possible in the first place.
For example, while most of the AI conversation revolves around OpenAI’s breakthroughs in generative AI, the real story is how Microsoft has positioned itself as the backbone of AI development. By securing exclusive partnerships and providing the cloud infrastructure OpenAI relies on, Microsoft has outsourced AI innovation while keeping control of its economic value.
Amazon is doing the same with AWS, which dominates the AI cloud computing market, providing infrastructure for countless AI startups and enterprise applications. Even Google’s DeepMind, often seen as an AI research powerhouse, ultimately serves the company’s broader advertising and cloud business models.
And yet, AI regulation barely touches these issues of increasingly centralised control.
Instead, regulators are focusing on whether AI chatbots spread misinformation, whether automated hiring tools are fair, or whether deepfakes are a security risk. All valid concerns but none of them address the deeper issue of economic control.
If regulators were serious about AI’s risks, they wouldn’t just be auditing AI models for bias. They’d be asking different questions, looking at how AI is turbocharging inequality and concentrating control.
Ironically, most AI regulations do the opposite of what they claim to achieve. Instead of reining in AI’s power, they are entrenching it further, especially for the largest players.
The reason is simple: complying with regulation is expensive.
Take the EU AI Act, which classifies certain AI systems as “high risk,” requiring them to undergo rigorous testing, documentation, and regulatory approval before deployment. This includes AI used in biometric surveillance, critical infrastructure, law enforcement, and systems that impact hiring, education, or access to essential services. In theory, these rules are meant to protect consumers from harmful AI applications.
In practice, however, the cost of compliance (ensuring data quality, conducting risk assessments, and maintaining human oversight) means that only companies with the resources to navigate these legal hurdles can afford to compete. While tech giants like Google, Microsoft, and Amazon have the infrastructure to meet these demands, smaller AI startups may struggle, unintentionally reinforcing industry consolidation rather than fostering fair competition.
Regulation becomes a reinforcer of power, ensuring that only the biggest players survive, instead of a check on it.
But the problem runs deeper than just compliance costs. Governments themselves are becoming increasingly dependent on AI companies — not just as regulators, but as customers.
In the UK, government agencies rely on AI models developed by private firms for everything from predictive policing and social service assessments to immigration decisions and fraud detection. This raises concerns about transparency and accountability when critical public functions are outsourced to private AI systems.
In the US, defence contracts for AI development are flowing to tech giants like Microsoft, Google, and Palantir, further entrenching their influence. The Pentagon’s Joint Artificial Intelligence Center (JAIC), for example, partners heavily with Big Tech to develop AI-driven military and surveillance applications, blurring the lines between private-sector innovation and government power.
Even in Europe, where AI regulation is stricter, policymakers still rely on AI-driven infrastructure controlled by U.S. corporations. From cloud computing services to AI-driven cybersecurity, European governments are tied to the same American firms that EU regulators claim to be keeping in check. This creates a paradox: while the EU seeks to regulate AI’s risks, it remains structurally dependent on the very companies shaping its AI landscape.
This raises the question: how can governments effectively regulate AI companies if they are financially and operationally tied to them? And how can we manage AI risks, including those that reinforce the economic structures pushing us beyond capitalism into a new era of AI Feudalism?
Regulatory Blind Spots
The problem with AI regulation today isn’t that it’s unnecessary. It’s that it’s focused on the wrong risks.
Current policies treat AI as a technical problem, something that needs better ethics, improved safeguards, and tighter moderation. But AI isn’t just a technological challenge; it’s an economic and governance challenge.
If AI laws were truly designed to regulate power, they wouldn’t just focus on algorithmic fairness or misinformation. They would address the deeper systemic risks AI is creating: risks that go beyond individual model biases and into the structure of economic control itself.
AI Monopolies Are Controlling Economic Infrastructure
Governments once broke up monopolies in oil, telecoms, and railroads to prevent private control over essential infrastructure. AI should be no different.
Yet today, a handful of companies own the compute power, cloud storage, and foundation models that make AI possible, turning AI infrastructure into a privately controlled economic gatekeeping system.
AI as an Economic Dependency
AI is becoming the backbone of decision-making across industries. If businesses, governments, and entire sectors rely on privately owned AI models to function, then the question of who owns these systems becomes central to regulating them effectively. Without intervention, AI will reinforce economic dependencies, locking smaller players into a pay-to-play system where access is dictated by corporate gatekeepers.
AI-Driven Rent Extraction
AI isn’t just replacing human labour; it’s extracting economic value while reducing reinvestment. Instead of redistributing the gains of automation, AI is being designed to maximise profits for a select few. Without intervention, AI won’t just widen inequality; it will cement new digital landlords who own the infrastructure everyone else must pay to access.
Will Policymakers Catch Up?
Right now, policymakers are regulating AI as if it’s just a technological challenge, ignoring the larger economic takeover unfolding in real time.
If AI is allowed to become privately owned infrastructure, the future won’t belong to the most innovative companies; it will belong to the ones that already own everything.
Regulation is coming — but whether it serves the public or entrenches corporate rule depends on the decisions we make today.
Think Box Project provides insights, research, and strategy consultations on the systemic risks and governance of AI. Head to our website to find out more: www.thinkboxproject.org.