Who Really Holds the Power in “Powerful AI”?
“Powerful AI,” a term recently championed by Anthropic’s CEO Dario Amodei, sounds like a futuristic marvel, a technology advancing all on its own. But the phrase isn’t a description of AI’s raw abilities; it’s a strategic narrative. By casting the technology itself as the source of power, it asks us to accept the authority of the people behind the scenes who steer it in directions that serve their own interests. So when we call AI “powerful,” we should ask: who is really powering it? And, more importantly, who stands to gain?
Who Gives AI Its Power?
The term “powerful AI” might imply a technology with innate authority, but let’s be clear: AI’s influence depends entirely on who enables it and who shapes its direction. We’re talking about two core forms of power here:
1. Literal Power: The Energy Behind AI
To function at the scale touted by tech leaders, AI demands enormous energy. Running today’s advanced models consumes staggering amounts of electricity, and as these models expand, their environmental impact only grows. The physical power AI requires makes clear that there is no “inevitable” technological trajectory; there are deliberate decisions about where resources are spent and which environmental costs are accepted.
2. Political and Economic Power: Who Decides AI’s Future?
It’s not AI driving itself forward, but the agendas of those with the power to invest, regulate, and deploy it. Governments and corporations are deciding the rules, setting priorities, and determining the future of AI. This isn’t a neutral evolution—it’s a politically charged process that puts immense authority in the hands of a few, who then shape the technology’s direction, influence public trust, and control what we see, hear, and believe.
Enter the Age of AI Agents: Who Are They Really Serving?
The next wave of AI development will center on agents: systems that act on our behalf, personalizing recommendations, executing commands, and guiding decisions. But here’s the critical question: whose interests do these agents truly serve? Even when presented as personal assistants, they are at the very least double agents, serving both the user and the platform or company that controls them.
If these agents already answer to two parties, what prevents them from quietly prioritizing a third agenda, one that benefits other corporate, political, or private interests? As these agents take a more active role in shaping our decisions, preferences, and perceptions, they introduce biases that are easy to overlook yet profoundly influential.
Powerful AI Isn’t Inevitable—It’s Political
So when we hear “powerful AI,” we should question what power means in this context. This isn’t simply a force to admire or fear; it’s a technology built by human hands, powered by real-world resources, and designed with particular interests in mind. The authority we ascribe to AI is, ultimately, a matter of political choice, shaped by those who wield power over it and control its growth.
As we face this new reality, the critical questions are: Who gets to decide the future of AI? Who benefits from its evolution? And whose interests will these “powerful” systems serve? In the race to build the next great AI, these are the questions of authority and influence we can’t afford to ignore.