Authoritarian by Design: AI, Big Tech, and the Architecture of Control


By Luca Greiner


Introduction
Over the past few years, Artificial Intelligence (AI) has evolved rapidly from a research field into an inescapable, transformative technology (Sheikh et al. 2023). Systems across almost every area of society, including politics, industry, healthcare, education, and security, are being reimagined through the lens of AI. Companies are racing to bring AI applications to market, competing to be the first to gain a foothold in consumers’ daily lives. Meanwhile, governments are struggling to keep pace and introduce regulations.
We are just beginning to understand AI, and while some believe the technology has nearly limitless potential to advance humanity, concerns about its dangers are growing. In its Global Risks Report 2025, the World Economic Forum presents the findings of the 2024–2025 Global Risks Perception Survey, which analyzes the key global risks humanity faces in the short, medium, and long term. Nine hundred experts from academia, business, government, international organizations, and civil society were asked to estimate the impact of risks over two-year (2025) and ten-year (2035) horizons. The survey ranks the adverse outcomes of AI technologies as only the 31st most impactful risk over the next two years, but sixth when a ten-year timeframe is considered (Elsner et al. 2025). Experts, in other words, regard AI as one of the most severe risks humanity will face in the coming decade.
One reason AI poses such a significant risk is its potential to undermine rights and freedoms. As this technological revolution unfolds, authoritarianism is on the rise around the world, and AI is already playing a part in this development. Unsurprisingly, there is growing concern that AI will increasingly contribute to the global trend of digital authoritarianism. But while much of this fear centers on the technology’s use by authoritarian states and non-state actors to repress rights and freedoms, I want to argue that there is another threat: if tech companies continue to develop AI the way they have developed previous digital technologies, these systems will inherently contribute to digital authoritarianism, just as those earlier technologies have, not by accident but as a consequence of the prevailing mode of operation and ideology within the tech industry.
Defining Digital Authoritarianism
But what is digital authoritarianism? The concept was first introduced when digital technologies made new authoritarian practices for suppressing citizens’ rights and freedoms possible (Roberts/Oosterom 2024). One widely accepted definition is “the use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations” (Roberts/Oosterom 2024: 4). The concept has mostly been used to study regimes such as those in Russia, China, Syria, Singapore, and Pakistan. Although research on digital authoritarianism has primarily focused on the state as an actor, some studies have also examined state-affiliated entities, including influencers, trolls, and hackers. The practices of digital authoritarianism vary in form, but they can generally be categorized into four main areas: online censorship, including content filtering, blocking, and internet shutdowns; digital surveillance, encompassing both mass and targeted monitoring; digital disinformation, meaning the deliberate spread of false or misleading information to manipulate public discourse; and repressive laws and regulations that curtail citizens’ freedoms (Roberts/Oosterom 2024). The consequences of these practices are similarly diverse and far-reaching: the suppression of dissent, interference in elections, the erosion of open public debate and access to reliable information, and violations of the right to individual privacy (Roberts/Oosterom 2024).
AI in the Hands of Authoritarian States
AI is already contributing to the trend of digital authoritarianism, as authoritarian states and non-state actors use it to restrict rights and freedoms. In its Freedom on the Net 2023 report, Freedom House highlights the danger that advances in AI pose to human rights online, exploring how AI can increase the “scale, speed and efficiency” (Shahbaz et al. 2023: 2) of digital repression by supercharging surveillance, censorship, disinformation campaigns, algorithmic discrimination, and tracking (Shahbaz et al. 2023). This focus on state use is the primary framework through which AI’s role in digital authoritarianism has so far been examined.
Beyond the Authoritarian Agent: Rethinking AI’s Role
But how can AI systems inherently contribute to digital authoritarianism, as described above, without deliberate or strategic use by states or non-state actors? Addressing this question requires a broader definition of digital authoritarianism, one that does not presuppose a centralized agent actively engaging in digital authoritarian practices for these effects and consequences to occur. Definitions that do assume an authoritarian agent intentionally exploiting digital technology to repress rights and freedoms can be classified as “intention-based definitions” (Pearson 2024: 1). However, digital authoritarianism and its effects can manifest even without the involvement of an explicit authoritarian agent. This is evident because even in democracies we find “ways that digital technologies systematically foster authoritarianism without any politically repressive agents intentionally causing these effects” (Pearson 2024: 2). Thinking back to the practices and consequences associated with digital authoritarianism, we can find comparable examples in contexts without an authoritarian agent: the suppression of voices through shadow banning on platforms like Instagram; mass surveillance via location tracking in services such as Google Maps; the algorithmic spread of disinformation on YouTube; allegations that X (formerly Twitter) interfered in Germany’s February 2025 federal election; the erosion of open public debate due to polarization amplified by Facebook’s filter bubbles; the diminishing accessibility of reliable information because of Google’s increasingly ad-driven search algorithm; and the ongoing violation of privacy rights through extensive user data collection. Consequently, the intention-based definition of digital authoritarianism is “underinclusive” (Pearson 2024: 3). A more expansive alternative is the “promotion-based definition” (Pearson 2024: 13), which describes digital authoritarianism as “any situation where digital technologies systematically promote authoritarian politics” (Pearson 2024: 13). Adopting this broader definition makes it possible to argue that AI can inherently contribute to the spread of digital authoritarianism even in the absence of deliberate use by authoritarian state or non-state actors.
Learning from Social Media’s Trajectory
To understand how AI might inherently contribute to digital authoritarianism in practice, it is helpful to examine how previous digital technologies developed by tech companies have done so. The parallels between AI and social media are particularly striking. Not only are leading social media companies like Meta and X now at the forefront of AI development; both technologies were also initially hailed as forces for democratization, empowerment, and global connectivity (Diamond 2019). Moreover, early concerns about social media largely centered on authoritarian regimes seeking to exert influence and on malicious actors adapting the technology to serve their own ends (Diamond 2019). This closely parallels the current focus on AI’s strategic use by authoritarian states and non-state actors to repress rights and freedoms. The threats social media poses today, however, are far more diverse and ominous (Diamond 2019). No one anticipated that social media would have the inherent capacity to destabilize society even without deliberate interference from authoritarian regimes or other malicious actors. Effects such as the creation of echo chambers, algorithmic radicalization, rising political polarization, the rapid spread of misinformation, and the erosion of public trust were largely unforeseen. Likewise, the psychological consequences, ranging from addiction and deteriorating mental health to increased loneliness and social isolation, were not expected outcomes of social media’s early promise to connect the world.
One can only speculate about the unforeseen outcomes AI might have that could reshape the fabric of our society as profoundly as social media has. Potential candidates include a post-truth media landscape driven by deepfakes, widespread job displacement, increased social alienation and dependence on AI for companionship, a gradual erosion of human agency, value misalignment between AI systems and human goals, and ultimately, the possibility of losing control over these systems entirely.
Yet there are ways in which we can reasonably expect AI to contribute to digital authoritarianism if tech companies develop it the way they developed social media and other digital technologies. The most obvious is the surveillance of users and the large-scale extraction of data. Social media platforms function as vast data farms, systematically harvesting information about their users (Askonas 2019). If the same companies develop AI, the likely result is a pervasive system of mass surveillance in which individuals are continuously monitored without their awareness or meaningful control, infringing on their right to privacy. Closely linked to this is the algorithmic manipulation of information. Social media algorithms are optimized to maximize engagement by prioritizing content that captures and holds user attention (Askonas 2019). Similarly, AI systems may tailor recommendations and interactions based on behavioral data, distorting what users see. A third way is behavioral control. Just as social media subtly steers user behavior both online and in real life (Askonas 2019), AI systems are likely to do the same, but with even greater precision. Finally, there is the consolidation of power. A small number of tech companies already dominate the social media landscape and control the infrastructure of global communication. These same companies are now leading the development of advanced AI systems, and a similar concentration of power is emerging.
Why Tech Companies Build Authoritarian Tools
These ways in which established digital technologies have contributed to digital authoritarianism, and in which AI is likely to contribute if it is developed by the big tech companies, are not accidental flaws (Askonas 2019). Rather, they are the inevitable outcome of the prevailing operational model and ideological frameworks of the companies developing them. The operational model at the core of digital tech’s authoritarian aspects is free-market capitalism (Morozov 2024). Tech companies are private businesses that must compete with others, so they extract value from users, content creators, and businesses by monetizing engagement and leveraging user data for targeted advertising. Their business models prioritize growth and profit over user autonomy, equity, and, often, democratic accountability. But it is not just the mode of operation that contributes to the authoritarian nature of the products, platforms, and systems tech companies develop; equally important is the prevailing ideology within those companies. The foundational ideology of Silicon Valley traces its roots to the California counterculture of the 1960s, which envisioned a utopian global community and an ideal social order enabled by the liberating power of communication technologies (Turner 2019). While contemporary tech leaders often claim to uphold Enlightenment ideals such as “reason, progress, and freedom” (LaFrance 2024: 3), the dominant ideology guiding these companies increasingly exhibits authoritarian characteristics.
Central to this worldview is technological solutionism, the belief that all social, political, and environmental challenges can be resolved through technological innovation (LaFrance 2024). This reductive mindset not only obscures the complexity of challenges but often leads to inappropriate or harmful solutions. Another key tenet is disruption, the notion that breaking industries, norms, and institutions constitutes progress. Instead of deliberate, responsible development, the prevailing ethos suggests that "you should always build it because you can," with consequences addressed only after the fact (LaFrance 2024: 3). Additionally, this ideology is marked by a professed political neutrality of technology, a deep skepticism toward government oversight, and a reverence for the founder figure, often portrayed as a visionary whose authority should remain unchallenged (LaFrance 2024).
This operational model and ideology are also embedded in the development of artificial intelligence. A notable example is Marc Andreessen’s essay “Why AI Will Save the World.” Andreessen, a prominent software engineer, entrepreneur, investor, and one of the most influential figures in Silicon Valley, reproduces the same ideological beliefs outlined earlier. In his essay, he proclaims that AI will “save the world” (Andreessen 2023: 1) and categorically dismisses concerns about its potential risks.
Conclusion: Preventing Authoritarian Futures
I have argued that AI contributes to the growing trend of digital authoritarianism. However, this threat does not arise solely from AI’s use by authoritarian states or repressive non-state actors. When AI is developed under the current practices of tech companies, it inherently reinforces digital authoritarianism, and this outcome is not accidental but a direct consequence of the operational models and prevailing ideologies within those companies. The relevance of this observation extends beyond AI alone. With artificial general intelligence (AGI) potentially on the horizon, we may face even greater risks that force us to confront these same questions. In addition, AI is likely to drive advances in other technological fields, which could similarly be shaped by companies operating under the same models and ideologies. I believe it is essential to develop AI with caution: by implementing strong government oversight, removing AI development from market-driven forces, and increasing transparency and collaboration among researchers throughout the development process.

References
Andreessen, Marc (2023): Why AI Will Save the World. https://a16z.com/ai-will-save-the-world/.
Askonas, Jon (2019): How Tech Utopia Fostered Tyranny. Authoritarians’ Love for Digital Technology is no Fluke - It’s a Product of Silicon Valley’s “Smart” Paternalism. In: The New Atlantis. https://www.thenewatlantis.com/publications/how-tech-utopia-fostered-tyranny.
Diamond, Larry (2019): The Road to Digital Unfreedom: The Threat of Postmodern Totalitarianism. In: Journal of Democracy, 30 (1), 20-24.
Elsner, Mark/Atkinson, Grace/Zahidi, Saadia (2025): Global Risks Report 2025. https://www.weforum.org/publications/global-risks-report-2025/.
LaFrance, Adrienne (2024): The Rise of Techno-authoritarianism. Silicon Valley has its own ascendant political ideology. It’s past time we call it what it is. In: The Atlantic. https://www.theatlantic.com/magazine/archive/2024/03/facebook-meta-silicon-valley-politics/677168/.
Morozov, Evgeny (2024): Can AI Break Out of Panglossian Neoliberalism? What Big Tech has done to our institutional and infrastructural imagination. In: Boston Review. https://www.bostonreview.net/articles/can-ai-break-out-of-panglossian-neoliberalism/.
Pearson, James (2024): Defining Digital Authoritarianism. In: Philosophy & Technology.
Roberts, Tony/Oosterom, Marjoke (2024): Digital Authoritarianism: A Systematic Literature Review. In: Information Technology for Development.
Shahbaz, Adrian/Funk, Allie/Brody, Jennifer/Vesteinsson, Kian/Baker, Grant/Grothe, Cathryn/Barak, Matthew/Manisin, Maddie/Modi, Rucha/Sutterlin, Elizabeth (2023): Freedom on the Net 2023. https://freedomhouse.org/sites/default/files/2023-11/FOTN2023Final.pdf.
Sheikh, Haroon/Prins, Corien/Schrijvers, Erik (2023): Mission AI. The New Systems Technology. Cham: Springer.
Turner, Fred (2019): Machine Politics. The rise of the internet and a new age of authoritarianism. In: Harper’s Magazine. https://harpers.org/archive/2019/01/machine-politics-facebook-political-polarization/.