The Silence of the Digital Flock and the Decline of Western Democracy

El silencio del rebaño digital y el declive de la democracia occidental

 

Ph.D. Nguyen Vân Han

PhD in Dialectical and Historical Materialism. Lecturer and Researcher at the Academy of Journalism and Communication, Ho Chi Minh National Academy of Politics, Hanoi, Vietnam. nvahanhajc@gmail.com ORCID: 0009-0001-7986-8907

How to cite (APA, 7th edition): Vân Han, N. (2025). The Silence of the Digital Flock and the Decline of Western Democracy. Política Internacional, VII(4), 332-346. https://doi.org/10.5281/zenodo.17306257


 

Received: July 20, 2025

Approved: September 10, 2025

Published: October 20, 2025

 

 

RESUMEN En la era digital, las plataformas monopolísticas controlan el flujo de información, manipulando el discurso público mediante algoritmos y minería de datos, creando un «rebaño digital» carente de reflexión crítica. Este artículo analiza cómo esta dinámica, descrita como «capitalismo de la vigilancia», erosiona la democracia occidental al fomentar las cámaras de eco, la polarización y la desinformación, mientras que Vietnam construye proactivamente un modelo de democracia socialista que prioriza la soberanía digital y el interés público. Desde una perspectiva marxista, se argumenta que la concentración de poder en manos de oligarcas digitales amenaza la autonomía informativa, pero el enfoque vietnamita, basado en normativas como la Ley de Ciberseguridad y el programa «Hecho en Vietnam», ofrece una alternativa para el Sur Global, promoviendo un modelo de gobernanza digital que equilibra libertad y disciplina.

Palabras clave: Democracia burguesa; Democracia en la era digital; Autoritarismo digital; Poder algorítmico; Democracia socialista.

 

ABSTRACT In the digital age, monopolistic platforms control the flow of information, manipulating public discourse through algorithms and data mining, creating a “digital flock” devoid of critical reflection. This article analyzes how these dynamics, described as “surveillance capitalism,” erode Western democracy by fostering echo chambers, polarization, and disinformation, while Vietnam proactively builds a model of socialist democracy that prioritizes digital sovereignty and the public interest. From a Marxist perspective, it is argued that the concentration of power in the hands of digital oligarchs threatens informational autonomy, but the Vietnamese approach, based on regulations such as the Cybersecurity Law and the “Make in Vietnam” program, offers an alternative for the Global South, promoting a model of digital governance that balances freedom and discipline.

Keywords: Bourgeois democracy; Digital-age democracy; Digital authoritarianism; Algorithmic power; Socialist democracy.

 

 

INTRODUCTION

In Western culture, the “flock” serves as a metaphor for a submissive, unresisting, and easily manipulated collective, while the “shepherd” represents power—whether institutional, authoritarian, or dictatorial. The phrase “the silence of the lambs” metaphorically captures the condition of marginalized groups in Western society who choose silence in the face of oppression and injustice. Today, in the digital age, Western societies are witnessing the emergence of a new flock—the digital flock—who remain “silent” before the guiding power of all-encompassing algorithms. This silence does not imply a literal absence of speech, but rather a passive state of critical paralysis and the flattening of public discourse, as users are unknowingly swept into algorithmically curated information flows. Herman and Chomsky (1988) describe this phenomenon as manufactured consent, referring to how public opinion is engineered to serve the interests of dominant power groups. This crisis is not confined to the West. Across the Global South, many developing nations face growing concerns over digital dependency and the erosion of informational sovereignty. As data infrastructures, communication channels, and algorithmic systems are increasingly controlled by transnational corporations, the threat of becoming “data colonies” looms large—where national autonomy is undermined by foreign algorithmic governance. In this global context, the struggle for digital independence has become a new frontier in the broader fight for postcolonial sovereignty and self-determination. Vietnam, in particular, has emerged as a notable case of resistance and strategic foresight—taking early and concrete steps to assert control over its digital space and ensure that technological development aligns with national interests and socialist values.

DEVELOPMENT

1. The Silence of the Digital Flock: From Passive Users to Epistemic Subjects in Crisis

1.1 The Suppression of Critical Social Voices in Algorithmic Spaces

Despite the abundance of available information in the digital age, users increasingly experience cognitive overload, making them more susceptible to conspiracy theories and misinformation. As Ecker et al. (2022) note, “Misinformation has been identified as a major contributor to various contentious contemporary events” (p. 6). For example, during Canada’s 2025 federal election, distorted narratives flooded the Canadian digital space, casting doubt on the electoral system and turning the online environment into an “information minefield” (Media Ecosystem Observatory, 2022, p. 7). Surveys similarly indicate that 73% of Canadians encountered questionable content online in the past year, with 59% expressing serious concern about distinguishing true from false information (Evidence for Democracy, 2023, p. 8). Similar dynamics unfolded elsewhere: TikTok, for example, became a key platform for spreading disinformation about the war in Ukraine (Bösch & Divon, 2024, p. 5081).

Conspiracy theories and false information have shaped the views of large public segments, creating misguided “consensus” on critical issues—such as the belief that COVID-19 vaccines are dangerous or that the 2020 U.S. election was fraudulent. Even more troubling, digital advertising can unintentionally magnify the spread of fake news.

Believers in such narratives often gather in closed online communities, where their views are reinforced despite expert rebuttals. This reflects a new form of “manufactured consent”: no longer requiring press censorship, it suffices to saturate algorithms with ideologically loaded content to steer public opinion toward a consensus engineered by unseen powers. In today’s social media landscape, public opinion is increasingly fragmented and saturated with information overload. Although misinformation often fuels division, it represents only one dimension of a broader and deeper conflict that may pose even greater long-term risks to democratic institutions.

Philosopher Hannah Arendt had already cautioned against such dangers in the last century. Reflecting on the political manipulation of truth during the Vietnam War, she warned that “truthfulness has never been counted among the political virtues, and lies have always been regarded as justifiable tools in political dealings” (Arendt, 1971, p. 1). In her view, the erosion of factual truth undermines the common ground upon which public reasoning and critical judgment depend. She further emphasized that “the deliberate denial of factual truth — the ability to lie — and the capacity to change facts — the ability to act — are interconnected; they owe their existence to the same source: imagination” (Arendt, 1971, p. 2), pointing to the fragility of truth when subjected to the calculated operations of political power.

Narratives crafted by political elites gain easier acceptance when citizens are overwhelmed by conflicting information and cease to seek truth. Algorithms, as analyzed above, have become potent tools for spreading opinion-shaping content, whether profit-driven or politically motivated. Social bots further amplify misinformation by exposing users to such content and encouraging them to share it. Gombar (2025, p. 1) argues this constitutes cognitive warfare, noting that digital technologies enable “algorithmic manipulation” and mobilize media theories to shape public perception.

Consequently, an artificial consensus emerges—not through democratic deliberation, but via manipulated information. Disinformation’s goal is not only to deceive but to induce confusion, apathy, and passive acceptance of distorted narratives. More damaging in the long run is the erosion of public intellectual and moral quality. Users are saturated with personalized content that gratifies immediate desires, fostering habits of shallow consumption. Over time, they lose the capacity for critical reflection and engagement with reality. Without cultivating critical thinking and civil discourse, the public becomes vulnerable—guided by algorithms exploiting crowd psychology, and forming a generation with diminished independent judgment.

The younger generation, as the most digitally immersed demographic, is increasingly exhibiting signs of diminished critical awareness. Shaped by algorithmic environments, their social reality often narrows to screen-sized spaces where the boundaries between truth and fabrication become increasingly indistinct. As O’Hara (2022, p. 124) points out, “The disruptive effect of misinformation on cyberspace is hard-wired into digital modernity.”

Herbert Marcuse foresaw this predicament back in 1964. He argued that in advanced industrial society, instrumental rationality and consumerist imperatives “flatten out the antagonisms between culture and social reality” (Marcuse, 1964/2013, p. 57), reducing individuals to a single-dimensional existence where critical and oppositional thinking are systematically suppressed. This one-dimensional society, driven by the pursuit of material comforts and mass media, erodes the capacity for critique and reduces consciousness to the mold of a preordained consumer society. He termed this the one-dimensional man. In this context, the digital flock is a new manifestation of this archetype—citizens under a novel form of control that limits freedom under the guise of technological progress. As critical thinking declines or the ability to detect domination fades, people increasingly desire only what the system provides—unable to envision alternatives or challenge the status quo. Propaganda today seeks not just obedience but cognitive submission—people who no longer ask “why”.

1.2 Echo Chambers and Filter Bubbles: The Fragmentation of Public Discourse into Isolated Realms

The digital age has further contributed to the fragmentation of public discourse through the formation of echo chambers and filter bubbles. These phenomena describe how individuals are increasingly exposed only to content that reinforces their pre-existing beliefs, as algorithms prioritize information aligned with users’ interests and ideological preferences. Algorithms have, in many respects, replaced the traditional gatekeepers of information—editors, journalists, and public broadcasters—by determining what content is shown, to whom, and in what format. The prominence or obscurity of a piece of information within the information flow no longer depends primarily on its objective value or factual accuracy. “The algorithmic filtering and adaptation of online content to personal preferences and interests is often associated with a decrease in the diversity of information to which users are exposed” (Helberger, Karppinen & D’Acunto, 2020, p. 6). Algorithms prioritize content not based on truth or importance but on predicted engagement and profit maximization.

Data-driven algorithms enable a hyper-personalized curation of content aligned with users’ behavioral profiles. On Facebook, the News Feed algorithm optimizes engagement by continuously analyzing user behavior. Over time, a user’s feed becomes a mirror of the self, where opposing perspectives are rarely encountered—an emblematic case of hyper-personalized information flows. On TikTok, the For You algorithm is notorious for its uncanny ability to “read” users, quickly learning their preferences and relentlessly feeding matching content. TikTok’s algorithm has been likened to a perfectly calibrated television channel for each user’s brain—an image that captures its extreme personalization. While personalization boosts short-term user satisfaction and engagement, it limits exposure to diverse viewpoints and reduces cognitive flexibility. As Bozdag (2013, p. 209) observes, online platforms like Facebook and Google “introduced personalization features, algorithms that filter information per individual,” which “introduces new biases” while failing to eliminate existing ones. For example, during the 2024 U.S. presidential election, TikTok became an optimistic bubble for Harris’s supporters, repeatedly reinforcing her expected victory—so much so that many believed it inevitable. This case illustrates how algorithmic personalization can lull users into cognitive safe zones, rendering them silent in the face of opposing warnings or critical information.

People tend to self-select into groups of like-minded individuals, creating enclaves of mutual reinforcement. As a result, many online communities have developed into isolated information islands, with little meaningful dialogue or genuine debate between them. Instead of accessing objective and multi-perspectival information, the modern public is increasingly guided by misleading or inflammatory messages. Consequently, sensational, angry, or controversial content tends to be amplified and widely circulated—because such content provokes stronger user reactions and generates more comments and engagement. As Dunaway (2024, para. 5) notes, summarizing Mochon et al.’s findings, “platforms benefit from keeping users active, regardless of whether the interaction is positive or negative,” and their study “found that users frequently react to opposing viewpoints with heightened engagement, often driven by outrage.” Attempts to correct false beliefs often fail within echo chambers, where users prioritize group identity over facts. In fact, as Zollo et al. (2017) note, “attempts to debunk are often undertaken” yet such efforts “remain mainly confined to the scientific echo chamber. Only few conspiracy users engage with corrections… and their liking and commenting rates on conspiracy posts increases after the interaction” (p. 1).

The localization of information is not confined to a single platform. Different platforms host parallel echo chambers: Twitter, YouTube, Reddit all foster community clusters that amplify internal consensus. Users expressing minority views face algorithmic invisibility—less engagement, lower reach, and eventual self-censorship, resulting in the silencing of minority voices within each community. In essence, this constitutes a digital version of the spiral of silence—a communication theory which posits that individuals who perceive themselves to be in the minority are less likely to express their views, thereby allowing dominant opinions to grow ever more dominant. As a result, the online public sphere faces the risk of fragmenting into multiple isolated herds, each silent in the face of perspectives from other groups—leaving little room for genuine, society-wide dialogue.

Beyond personalization, algorithms also shape public discourse by amplifying specific types of content. Major social media platforms such as Facebook and Twitter deploy algorithms that prioritize posts deemed popular or trending within a given community—typically measured by metrics like likes, shares, and comments. This effectively creates a system of algorithmic rewards and punishments: content that drives high engagement is widely disseminated, while posts with limited interaction are quickly buried in obscurity. When algorithms operate under a commercial logic of “more engagement is better,” the value of truth and objective information is often relegated to a secondary position. “When false headlines come with warning tags, participants assume that untagged headlines are true” (Pennycook, Bear, Collins, & Rand, 2020, p. 1). In such an environment, truth may become the victim—silenced in an unequal competition with sensationalist content and misinformation.
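The reward-and-punishment logic described above can be made concrete with a deliberately simplified sketch. This is not any platform's actual code; the posts and engagement scores are invented for illustration. The point is structural: when ranking depends only on predicted engagement, accuracy has no influence on visibility.

```python
# Toy illustration of an engagement-driven feed ranker. The "accurate"
# field is carried along with each post but plays no role in the ordering:
# only the predicted engagement score decides what rises to the top.
posts = [
    {"title": "Fact-checked policy analysis", "accurate": True,  "predicted_engagement": 0.12},
    {"title": "Outrage-bait rumour",          "accurate": False, "predicted_engagement": 0.87},
    {"title": "Balanced debate recap",        "accurate": True,  "predicted_engagement": 0.31},
]

def rank_feed(posts):
    # Sort descending by predicted engagement; truth value is ignored.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(f"{post['predicted_engagement']:.2f}  {post['title']}")
```

In this sketch the inaccurate but provocative post is ranked first and the fact-checked analysis last, mirroring the asymmetry the paragraph describes.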

Excessive information personalization is exacerbating the problem of social polarization in contemporary society, with rising levels of prejudice and ideological entrenchment. In recent years, the political landscape in many countries has witnessed growing divisions between communities supporting different parties, making it increasingly difficult to find common ground. Public debate often descends into impasse or radical confrontation. The more personalized the content, the more fragmented the public sphere becomes—leading to a loss of shared facts and common discourse. Online echo chambers contribute to the illusion that one’s group is always right and represents the majority, while dissenting views are seen as deviant and marginal. As a result, when real-world outcomes fail to align with group expectations, communities may experience collective shock and outrage—leading to a crisis of trust in institutions and, in some cases, extreme reactions. This growing polarization erodes social cohesion and undermines constructive dialogue—both of which are foundational to a healthy democracy. Citizens are manipulated into believing falsehoods or incited to distrust one another, rather than engaging in rational discussions about public policy and their nation’s future.

All of these manifestations of silence—the marginalization of dissent, the algorithmic filtering of information, and the collapse of discursive plurality—contribute to the construction of a digital public sphere that appears, on the surface, highly consensual. Yet this apparent consensus often masks a deeper epistemic crisis. The absence of visible disagreement on social media platforms does not necessarily indicate genuine harmony; rather, it reflects the systematic suppression of alternative perspectives through algorithmic invisibility. Karl Marx once warned that: “All our invention and progress seem to result in endowing material forces with intellectual life, and in stultifying human life into a material force” (Marx, 2000, p. 10). In today’s algorithmic environment, this warning takes on renewed urgency. Algorithms—driven by proprietary logic and capitalist imperatives—appear to possess agency, shaping what is seen, said, and known. Meanwhile, the digital public becomes epistemically hollow: deprived of critical reflection, stripped of discursive diversity, and rendered passive in the face of programmed informational conformity. The silence of the digital flock, then, is not simply the absence of voice—it is the absence of epistemic agency. It signals a condition in which public reason is subdued not through force, but through design.

2. Digital Shepherds of Capital: The Weaponization of Data for Public Manipulation

If the online public constitutes a digital flock of lambs, then the digital shepherds are the powerful technology corporations that exercise monopolistic control over data and algorithmic infrastructure. These corporate actors—guardians of vast reservoirs of behavioral and communicative data—have transformed algorithms into the new gatekeepers of human attention. “Media scholars have only recently begun to recognize and investigate the importance of algorithms to a wide range of processes related to the production and consumption of media content” (Napoli, 2014, p. 340). These platforms subtly but systematically guide the digital flock by privileging agreeable content, filtering out dissenting or inconvenient information, and enclosing users within algorithmically defined cognitive territories. The result is a form of informational enclosure: users are comforted, confirmed, and confined, often unaware that their digital environment is shaped by invisible logics of control. In this scenario, the shepherd’s power lies not in silencing the lambs through coercion, but in orchestrating what appears to be free movement within predetermined boundaries. Over time, users lose not only their autonomy in navigating information but also their capacity for critical scrutiny—reduced to docile actors in a closed circuit of algorithmic repetition, endlessly circling within the pasture delineated by capital.

Karl Marx had already anticipated in the 19th century that machinery, while representing a victory of humankind over the forces of nature, could paradoxically become a new mechanism of domination under capitalism. As he wrote in Capital, “in itself it is a victory of man over the forces of nature, but in the hands of capital it makes man the slave of these forces” (Marx & Engels, 2002, p. 694). In the digital age, it is not big data technologies or algorithms themselves that have fragmented society into digital flock and digital shepherds, but rather the capitalist mode of deploying such technologies. At the heart of this lies the system of private ownership over the means of production. Ownership, in this context, does not simply concern material assets or productive tools; it reflects and institutionalizes the social relations among individuals within the production process.

Today, big data is regarded as the “oil” of the digital age—an essential input for intelligent production processes. The latest developments in the evolution of the Internet increasingly depend on datafication (the transformation of many aspects of the world and people’s lives into data) and the mediation of content by algorithms and other intelligent technologies. However, unlike any previous form of resource, data is uniquely abundant: it is an intangible asset that can be infinitely replicated and that grows more valuable the more it accumulates.

At present, this vast global data reservoir remains largely under private ownership and continues to expand, as public use of online platforms and social media unwittingly contributes individual digital labor to capitalist enterprises. “This is not just a case of platforms extracting data from users, but of treating them as [...] unpaid labourers in the process of data extraction” (Couldry & Mejias, 2019, p. 4). Digital capitalists are able to appropriate, often at virtually no cost, the labor products of digital users and convert them into surplus value. Within the framework of the market economy, big data is gradually becoming the exclusive property of digital capitalists. Consequently, this data reservoir primarily serves the economic interests of capital. These actors seek to develop big data technologies in ways that optimize profit; transforming the public into a digital flock becomes an effective means to that end, while the capitalist hides behind online platforms as a sophisticated, invisible digital shepherd.

Online platforms resemble two-sided markets. On one side, capitalists present themselves as neutral actors, claiming not to favor any particular party and merely providing a mechanism to remove informational barriers and administer platform governance. On the other side, however, the algorithmic mechanisms embedded within these platforms are fundamentally designed to serve the interests of powerful groups.

Tech giants such as Google and Facebook have been described as attention merchants: they offer users free access to information and entertainment services, only to sell their attention to advertisers. Their power compounds in the spirit of Metcalfe’s Law, which “states that the value of a communication network is proportional to the square of the size of the network” (Briscoe, Odlyzko & Tilly, 2006, pp. 34–39). The more data these corporations accumulate, the greater their capacity to manipulate public opinion. When behavioral data is sufficiently vast and granular, it enables platforms to read users—to understand their desires, emotions, and preferences—and subsequently guide their behavior, often even before users themselves are consciously aware of it.
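The quadratic scaling claimed by Metcalfe's Law can be shown in a back-of-envelope sketch. This is only an illustration of the formula as quoted above, with an arbitrary proportionality constant; it makes no empirical claim about any particular platform's valuation.

```python
# Sketch of Metcalfe's Law: network value modeled as proportional to the
# square of the user count. Each doubling of users roughly quadruples the
# value of the network (and, by extension, of the data it concentrates).
def metcalfe_value(n_users, k=1.0):
    # k is an arbitrary proportionality constant for illustration only.
    return k * n_users ** 2

for n in (1_000, 2_000, 4_000):
    print(f"{n:>6} users -> relative value {metcalfe_value(n):,.0f}")
```

The superlinear growth is why data accumulation compounds: an incumbent with twice the users commands roughly four times the network value, making monopoly positions self-reinforcing.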

This condition has been identified by Professor Shoshana Zuboff of Harvard University as surveillance capitalism. She defines it as “a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales” (Zuboff, 2019, p. 1). In this new phase of capitalism, personal data and human behavior become the central raw materials for capital accumulation. Surveillance capitalism profits by collecting, analyzing, and commercializing human behavior—especially what Zuboff calls behavioral surplus, namely, “data that surpasses what is required for product or service improvement and is instead repurposed as a means to behavioral prediction” (Zuboff, 2019, p. 377).

Large technology corporations harvest such data through users’ everyday activities on digital platforms and deploy algorithms to predict, influence, and ultimately control future behavior—often without explicit and informed consent. Zuboff warns that this emerging regime of power does not rely on violence or force but rather operates through psychological manipulation and behavioral control, posing a fundamental threat to personal freedom, democracy, and human rights. She describes this process as “an expropriation of critical human rights that is best understood as a coup from above: an overthrow of the people’s sovereignty” (Zuboff, 2019, p. 1).

For instance, according to statistics from NapoleonCat, as of December 2023, Canada had approximately 31.6 million Facebook users—accounting for around 80.5% of the country’s total population (NapoleonCat, 2023). With such a high penetration rate, Meta effectively holds a near-monopoly in shaping public discourse online within the country. In other words, once a platform achieves data monopoly status, any form of manipulation—whether by the platform itself or through it—can exert systemic influence across the entire society. This concentration of power has allowed tech empires such as Meta, Google, SpaceX, and X (formerly Twitter) not only to dominate the market but also to acquire structural authority over the informational sphere—a role once reserved for state institutions or traditional journalism. Behind these digital empires stand the digital shepherds who guide and govern the behavior of the connected masses.
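The quoted penetration figures are internally consistent, as a quick calculation using only the numbers cited above shows:

```python
# Sanity check of the NapoleonCat figures quoted above: 31.6 million
# Facebook users at an 80.5% penetration rate implies a total population
# of roughly 39 million, consistent with Canada's population in 2023.
facebook_users = 31.6e6
penetration_rate = 0.805

implied_population = facebook_users / penetration_rate
print(f"Implied population: {implied_population / 1e6:.1f} million")
```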

The Cambridge Analytica scandal brought this hidden commercial regulation into sharp focus, implicating the social-media giant Facebook in unethical dealings with people’s data (Cadwalladr & Graham-Harrison, 2018). Nigel Farage and businessman Arron Banks, who had direct ties to Cambridge Analytica, leveraged data analytics to support the Leave.EU campaign advocating for the United Kingdom’s exit from the European Union. Algorithms were deployed to deliver micro-targeted political messaging to specific groups and constituencies, aiming to provoke emotions such as fear, anger, and anxiety—particularly related to immigration from poorer EU countries. Special focus was placed on “swing regions,” where voters remained undecided. As Berry (2022, p. 135) observes, “Algorithms were used to provoke emotions such as fear, anger and anxiety, particularly around immigration,” revealing how emotional manipulation through data-driven targeting became central to the campaign’s strategy. The result, as is now well known, was the victory of the Leave.EU campaign. Disinformation is not merely a deviation from truth but a strategic form of political communication.

In 2024, Elon Musk—CEO of major tech companies such as Tesla, SpaceX, and X (formerly Twitter)—leveraged his social media platform to promote incentive campaigns, attract attention, and influence voter behavior during the U.S. presidential election (Financial Times, 2024). This case illustrates how the intersection of technology and politics can generate novel forms of political campaigning. At the same time, however, it raises serious concerns about democratic backsliding when digital platforms interfere with the political environment. Musk’s actions in the 2024 U.S. election also exemplify the growing political influence of a digital capitalist—one whose technological reach translates directly into political power.

In contemporary capitalist societies, the concentration of ownership over key economic resources—particularly the means of production—has profound implications not only for material inequality but also for broader social and political disparities. The distribution of economic power shapes who has influence, whose voices are heard, and how collective decisions are made. The connection between economic control and democratic values such as freedom and equality is not peripheral but structural, affecting the very fabric of institutional life. The advent of digital technologies initially raised expectations of a more inclusive and participatory information society—one that could enhance the scale and reach of democratic engagement. However, in reality, the monopolization of big data and control over core digital infrastructures has contributed to a subtler erosion of democratic principles. This regression is often masked by the rhetoric of innovation and empowerment, giving rise to what may be called a polished illusion of “digital freedom,” behind which new forms of exclusion and asymmetry quietly emerge.

3. Algorithmic Power and the Erosion of Public Reason in Democratic Life

Hannah Arendt argued that the blurring of the line between truth and falsehood is a particularly insidious tool for paralyzing the public’s will to resist. As she wrote: “The ideal subject of totalitarian rule is not the convinced Nazi or the dedicated communist, but people for whom the distinction between fact and fiction, true and false, no longer exists” (Arendt, 1971, p. 11).

When the modern public devolves into a silent herd, devoid of critical thinking and unaccustomed to multi-perspectival verification, it becomes increasingly vulnerable to fake news, populist rhetoric, and conspiracy theories. On the political front, extremist forces may exploit micro-targeted advertising on social media to disseminate ideologically tailored propaganda to specific demographic segments, thereby fracturing public opinion and manipulating electoral outcomes in their favor. For example, as Neudert (2017, p. 4) observed, “in Germany our prior research has found active social bots and an abundance of German junk news during the federal presidency elections”. Howard, Woolley and Calo (2018, p. 81) likewise define political bots as “automated scripts designed to manipulate public opinion.” Furthermore, algorithms trained with biased data have resulted in algorithmic discrimination—“recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender” (Buolamwini & Gebru, 2018, p. 77).

The “silence of the digital flock” also gives rise to a paradox of information: although the public lives in an age of information abundance, the quality of public discourse is in decline. Critical voices grow faint, while noise—misinformation, disinformation, and irrelevant content—prevails. The public becomes disoriented in the face of a constant flood of conflicting narratives, often retreating into generalized skepticism or placing trust in the most simplistic explanations. This phenomenon corresponds to what McIntyre (2018, p. 1) terms the “post-truth” era, in which “feelings have more weight than evidence.” In such a context, objective facts lose their influence in public discourse, giving way to emotional or ideological persuasion. A society operating under such conditions becomes increasingly fragile and vulnerable to manipulation by corporate oligarchs.

In today’s digital society, workers in capitalist countries are no longer bound to the machinery of industrial civilization in the same way as in the 19th and early 20th centuries. Instead, they are caught in a new form of subjugation that Herbert Marcuse called "sublimated slavery" (Marcuse, 1964/2013, p. 9). This condition is characterized by individuals’ inability to recognize their own enslavement; rather than perceiving repression as coercion, they internalize it as a form of freedom. This paradoxical transformation marks a fundamental shift in decision-making power, from individual volition to algorithmic control. In other words, when the inner world of human beings is shaped and directed by algorithms, the modern public becomes digital slaves living under digital dictatorships, ruled by the invisible owners of omnipotent algorithms.

Those in power manipulate and manufacture consent to legitimize policies that benefit oligarchic capitalist groups. As a result, democracy is weakened: voters make decisions based on distorted perceptions, and public support or opposition to policies stems more from manipulated public opinion than from rational deliberation. Instead of engaging with objective and diverse information, citizens are guided by misleading or emotionally charged messages. Moreover, belief in misinformation not only leads to poor judgments and decisions; it also exerts a lingering influence on people’s reasoning even after the misinformation has been corrected (Ecker et al., 2022, p. 13). This trend runs counter to the Enlightenment spirit of modern democracy, which demands autonomous, informed citizens capable of open public debate, and it creates fertile ground for authoritarianism. The one-dimensional silence of today’s public is a forewarning of looming political tragedies. In the digital age, that tragedy is the gradual erosion of democracy itself. Several recent electoral controversies in the United States, the United Kingdom, and Canada suggest that this erosion is no longer hypothetical but unfolding in real time.

The foundation of democracy rests on open deliberation and freedom of critical discourse. However, when factual information no longer occupies a central role and rational debate is displaced by mass emotion and group bias, the collective decision-making process—whether in elections, referenda, or policy formulation—loses its clarity and discernment. As Zuboff (2019, p. 8) warns, surveillance capitalism fosters “a new form of power that is not rooted in democratic oversight but in unilateral knowledge, asymmetry, and behavioral control,” undermining both personal autonomy and collective self-governance. Political advertising has the potential to reinforce existing misperceptions among the electorate, particularly when it reduces complex policy issues to emotionally charged and simplistic narratives. Such messaging strategies can distort public understanding and weaken the foundations of informed democratic participation. This dynamic contributes to a new form of informational despotism—one that operates under the guise of freedom of expression. Unlike overt censorship, this mode of influence deceives individuals into believing they are exercising free choice, even as their perceptions and decisions are subtly shaped by algorithmic filtering and the strategic curation of content.

Each individual now resides within their own information silo, contributing to a growing fragmentation of society. Social media platforms have effectively become relentless polarization engines, deepening societal divisions through algorithmically curated content. Without meaningful intervention, society risks being further splintered into increasingly intolerant factions, devoid of shared understanding or common ground—conditions ripe for unrest and even violence. Empirical evidence shows that online hate speech can translate into real-world harm. For example, Müller & Schwarz (2020, p. 2131) found that “anti-refugee sentiment on Facebook predicts crimes against refugees in otherwise similar municipalities with higher social media usage”. Even more alarming is the way mutual suspicion and hostility, once seeded online, gradually erode the fabric of social trust. When societies become polarized, they struggle to reach consensus on collective action, leaving them vulnerable in times of global crises such as pandemics, climate change, or economic recession.
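The polarization ratchet described above can be made explicit with a minimal toy simulation. Everything in it is an assumption for illustration: the one-dimensional opinion scale, the engagement heuristic that favors content slightly more extreme than the user's current view, and the preference-update rule. It models no real platform's ranking system:

```python
import random

random.seed(7)

# Opinion spectrum: items from -1.0 (one pole) to +1.0 (the other), step 0.02.
items = [i / 50 - 1 for i in range(101)]

def recommend(pref, pool, k=5):
    # Assumed engagement heuristic: content a bit more extreme than the
    # user's current position maximizes engagement, so the ranker targets it.
    target = pref + 0.1 * (1 if pref >= 0 else -1)
    return sorted(pool, key=lambda x: abs(x - target))[:k]

preference = 0.05  # a barely perceptible initial lean
history = [preference]
for _ in range(200):
    shown = recommend(preference, items)
    consumed = random.choice(shown)
    # Consuming agreeable content pulls the preference toward it.
    preference += 0.3 * (consumed - preference)
    preference = max(-1.0, min(1.0, preference))
    history.append(preference)

print(f"initial lean {history[0]:+.2f} -> final lean {history[-1]:+.2f}")
```

Because each recommendation slightly overshoots the user's current position and each consumed item pulls that position toward it, a near-neutral starting lean drifts steadily toward one pole. The feedback loop, not any single piece of content, does the polarizing.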

Moreover, the unchecked power of digital platforms poses a significant threat to the informational sovereignty of nation-states. A handful of global corporations can manipulate content across dozens of countries without being meaningfully subject to local governmental regulation. The 2023 case of Meta (Facebook) blocking news outlets in Canada in retaliation against the Online News Act, which requires platforms to share advertising revenue with news publishers, demonstrated the supra-national power of such platforms. Meta’s willingness to effectively “black out” national information ecosystems in defense of corporate interests exemplifies how democratic principles and the public good can be sacrificed for profit. As Müller and Schwarz (2020, pp. 2160–2167) illustrate, content availability on Facebook can directly influence real-world behavior, showing how platform control over information can shape public action.

Today, although overt forms of political repression are less common, democracy is being quietly suffocated by more subtle means. Control over freedom of expression and electoral influence no longer resides solely in the hands of traditional industrial oligarchs but increasingly rests with digital oligarchs—technology conglomerates that possess vast troves of data and operate all-powerful algorithms capable of subtly steering public discourse and manipulating political behavior at an unprecedented scale.

4. From the Crisis of Western Democracy to the Proactive Role of the Communist Party of Vietnam

4.1 Lessons from the Crisis of Western Democracy

Western societies have long celebrated their model of democracy as the highest ideal of political development. They assert that free elections, multi-party competition, the separation of powers, and individual liberties such as freedom of speech and freedom of the press have created a society in which the “general will” of the people is respected and state power is tightly constrained. Western democracies claim that their model is not only a universal value but also the sole legitimate framework to be replicated globally. Today, however, that model is being stifled in the subtle ways described above: as overt repression through violence recedes, control over speech and the ballot passes from traditional industrial oligarchs to digital tycoons, whose vast datasets and omnipotent algorithms shape public opinion and political behavior on an unprecedented scale. Behind the façade of democracy, capitalist states readily deploy both economic and technological power to steer labor movements and suppress dissent, all in the service of preserving the capitalist order. This marks the rise of a new form of “informational despotism” disguised as freedom of expression. In essence, under digital capitalism, bourgeois democracy not only fails to resolve its internal contradictions but degenerates into what may be termed “digital authoritarianism.”

From a Marxist perspective, in capitalist societies, the concentration of the means of production in the hands of a small bourgeois elite not only generates economic inequality but also serves as the root cause of social and political disparities. With the advent of digital technologies, it was once believed that the democratization of information would be accelerated, allowing Western democracy to reach an unprecedented scale and level of perfection. However, the monopolization of big data and other digital technologies has led instead to a subtle and seductive decline. Under the guise of a “new democratic freedom,” the masses are subjected to domination—yet deceived into believing they are free. This model cannot represent the future of a truly progressive humanity.

4.2. The Proactive Role of the Communist Party of Vietnam

The manipulation of public opinion and user data in Western countries serves as a stark warning for Vietnam in the context of global integration and digital transformation: without effective governance, the nation risks becoming a data colony of transnational digital platforms. However, the Communist Party of Vietnam has been timely and proactive in recognizing this threat, advancing strategic policies to develop national digital infrastructures and reduce dependence on Western tech giants. Whereas cyberspace in the West is increasingly left at the mercy of digital oligarchs, in Vietnam, it is treated as a sovereign domain—governed by law and imbued with political responsibility. The 2018 Law on Cybersecurity (National Assembly, 2018), Resolution No. 35-NQ/TW (Politburo, 2018), on defending the ideological foundation of the Party, along with initiatives such as fake news response centers and the digital transformation of the press and media system, represent concrete steps reflecting the Party’s determination to construct a robust “digital border.” On this new front, each cadre, journalist, and intellectual becomes an “information warrior,” contributing to the struggle against distorted narratives and defending the ideological stronghold in the digital age.

Vietnamese technology corporations, under the leadership of the Communist Party, have clearly recognized the roles of communication, cyberspace, and digital sovereignty as foundational pillars in building a socialist democracy centered on human dignity rather than driven by profit. Refusing to fall into the trap of “unregulated freedom of speech” as seen in many Western countries, Vietnam has consistently upheld the principle that freedom must go hand in hand with discipline, and that technological development must be inseparable from national and information security. The Law on Cybersecurity, national digital transformation strategies, and mechanisms for addressing fake and misleading news all reflect the proactive stance and high sense of responsibility assumed by the Party and the State.

Democracy in Vietnam is not limited to periodic elections, but is understood as a long-term, comprehensive process—manifested in the assurance of the people’s voice in policymaking through institutions such as the Vietnam Fatherland Front, the National Assembly, the revolutionary press, and increasingly, in the digital environment. The development of e-government and digital citizenship is not merely a technological goal; it embodies the vision of a modern, organized, and purposeful democracy.

The Party and State of Vietnam have proactively implemented a series of major policies aimed at guiding and safeguarding cyberspace while building a modern socialist democracy. At the state level, Decision No. 749/QĐ-TTg (2020) issued by the Prime Minister approved the National Digital Transformation Program, which sets forth the goal of developing a digital government, digital economy, and digital society—alongside the strong growth of domestically led digital technology enterprises (Communist Party of Vietnam, 2024b). The Party places particular emphasis on the protection of national cybersecurity. The National Cybersecurity and Safety Strategy (2020) identifies cybersecurity as a central focus of the digital transformation process and as a pillar for establishing “digital trust” within society (Dong A, 2024). Resolution No. 57-NQ/TW (2024) of the Politburo further affirms that digital transformation is a “decisive factor for Vietnam to become a prosperous and powerful nation,” and calls upon the entire political system and the people to take the lead in this process and create new momentum for transformation (Communist Party of Vietnam, 2024a). The strategic goal is that by 2030, Vietnam will become a self-reliant nation in terms of cybersecurity and information safety (Communist Party of Vietnam, 2022).

In the leadership philosophy of the Communist Party of Vietnam, the promotion of democracy must be closely linked with discipline and socialist orientation. The 13th National Party Congress emphasized that “the people are the center and the subject of the process of renewal, nation-building, and national defense”; all policies must originate from the legitimate aspirations and interests of the people, with the goal of achieving their happiness and well-being (Nguyen, 2022). This principle has been institutionalized through directives, resolutions, and legal frameworks developed with public oversight. For example, the Law on Cybersecurity (2018), along with accompanying regulations on information safety and the protection of state secrets, has been widely supported by the public for its timely response to national security needs in cyberspace. The Party’s leadership role in shaping information sovereignty and cybersecurity has thus been further affirmed. In addition, the Party has introduced strategic orientations and action programs to promote the development of domestic digital platforms and Vietnamese-led technological products under the “Make in Vietnam” initiative (Ninh, 2025). This approach seeks to ensure national self-reliance by reducing dependence on transnational platforms, thereby safeguarding critical data and information from external exploitation.

In contrast to the image of a “digital flock” passively guided by algorithms, Vietnam’s younger generation must be equipped with critical thinking skills, political awareness, and the capacity to filter information—so that they may become exemplary digital citizens. The role of the revolutionary press, political education, and ideological guidance from the Communist Party is crucial in safeguarding the ideological foundation and shaping public opinion. The combination of democratic promotion and disciplined governance, under the consistent leadership of the Party, has fostered a clean and healthy digital environment—one that avoids the perils of information manipulation and the surveillance traps of Western technological capitalism. Through its reform policies, stringent legal frameworks, and a people-centered approach to democratic participation, Vietnam is actively constructing a modern socialist democracy—one that guarantees guided freedom and resilient sovereignty in the digital age.

CONCLUSIONS

“The silence of the digital flock” is not an inevitable fate of the digital age, but rather the consequence of an information ecosystem manipulated by monopolistic platforms in pursuit of profit and power. Experience shows that when technological development proceeds without democratic oversight, it can become a tool for eroding critical thinking and weakening democratic life. While Western nations struggle with crises of democracy—ironically fueled by the very digital platforms they once championed—Vietnam is charting a different course: proactively constructing a modern model of socialist democracy, in which technology is governed and directed to serve the interests of the people. In doing so, Vietnam not only resists algorithmic domination but also affirms the enduring relevance and superiority of socialist democracy in the context of digital globalization.

This path holds valuable lessons not only for Vietnam but also for many developing countries grappling with asymmetrical control over digital infrastructures. In the emerging global struggle for informational sovereignty, Vietnam’s approach demonstrates that it is possible for nations of the Global South to assert strategic autonomy in cyberspace—resisting algorithmic dependence and reclaiming control over data, discourse, and digital futures. By prioritizing national interests over technological subservience, Vietnam offers a compelling alternative to the dominant model shaped by Western techno-capitalism.

BIBLIOGRAPHICAL REFERENCES

Arendt, H. (1971). Lying in politics: Reflections on the Pentagon Papers. The New York Review of Books, 18 November. Retrieved from https://www.cia.gov/readingroom/document/cia-rdp80-01601r000300360048-1

Berry, D. M. (2022). Cambridge Analytica, Brexit and the death of democracy. In J. Pila & D. Erdos (Eds.), The algorithmic society (pp. 123–147). Springer. https://doi.org/10.1007/978-3-031-13551-4_6

Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. https://doi.org/10.1007/s10676-013-9321-6

Briscoe, B.; Odlyzko, A., & Tilly, B. (2006). Metcalfe’s law is wrong. IEEE Spectrum, 43(7), 34–39. https://doi.org/10.1109/MSPEC.2006.1653003

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research, Vol. 81, pp. 77–91). PMLR. https://proceedings.mlr.press/v81/buolamwini18a.html

Bösch, M., & Divon, T. (2024). The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine. New Media & Society, 26(9), 5081–5106. https://doi.org/10.1177/14614448241251804

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Communist Party of Vietnam. (2022). Decision No. 964/QD-TTg dated August 10, 2022, of the Prime Minister approving the National Cybersecurity and Safety Strategy, proactively responding to challenges from cyberspace by 2025, with a vision to 2030. https://tulieuvankien.dangcongsan.vn/he-thong-van-ban/van-ban-quy-pham-phap-luat/quyet-dinh-so-964qd-ttg-ngay-1082022-cua-thu-tuong-chinh-phu-phe-duyet-chien-luoc-an-toan-an-ninh-mang-quoc-gia-chu-dong-ung-8804

Communist Party of Vietnam. (2024a). Resolution No. 57-NQ/TW dated December 22, 2024, of the Politburo on breakthrough development of science, technology, innovation, and national digital transformation. https://tulieuvankien.dangcongsan.vn/he-thong-van-ban/van-ban-cua-dang/nghi-quyet-so-57-nqtw-ngay-22122024-cua-bo-chinh-tri-ve-dot-pha-phat-trien-khoa-hoc-cong-nghe-doi-moi-sang-tao-va-chuyen-11162

Communist Party of Vietnam. (2024b). Decision No. 749/QD-TTg dated June 3, 2020, of the Prime Minister approving the National Digital Transformation Program to 2025, with orientation to 2030. https://tulieuvankien.dangcongsan.vn/he-thong-van-ban/van-ban-quy-pham-phap-luat/quyet-dinh-so-749qd-ttg-ngay-0362020-cua-thu-tuong-chinh-phu-phe-duyet-chuong-trinh-chuyen-doi-so-quoc-gia-den-nam-2025-dinh-huong-6476

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

Dong, A. (2024). Digital transformation and new requirements for cybersecurity and safety. Nhân Dân Newspaper. https://nhandan.vn/chuyen-doi-so-va-yeu-cau-moi-ve-an-toan-an-ninh-mang-post829697.html

Dunaway, R. (2024, October 9). Rage clicks: Study shows how political outrage fuels social media engagement. Tulane University News. https://news.tulane.edu/pr/rage-clicks-study-shows-how-political-outrage-fuels-social-media-engagement

Ecker, U. K. H.; Lewandowsky, S.; Cook, J.; Schmid, P.; Fazio, L. K.; Brashier, N.; Kendeou, P.; Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y

Evidence for Democracy. (2023). Misinformation in Canada. https://evidencefordemocracy.ca/sites/default/files/reports/misinformation-in-canada-2023.pdf

Financial Times. (2024, October 21). Elon Musk’s riskiest bet yet: Donald Trump. Financial Times. https://www.ft.com/content/17792d26-7dca-495c-98e2-0c93cd385b9d

Gombar, M. (2025). Algorithmic manipulation and information science: Media theories and cognitive warfare in strategic communication. European Journal of Communication and Media Studies, 4(2), 1–11. https://doi.org/10.24018/ejmedia.2025.4.2.41

Helberger, N.; Karppinen, K., & D’Acunto, L. (2020). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 23(2), 256–272. https://doi.org/10.1080/1369118X.2018.1475620

Herman, E. S., & Chomsky, N. (1988). Manufacturing consent: The political economy of the mass media. Pantheon Books.

Howard, P. N.; Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81–93. https://doi.org/10.1080/19331681.2018.1448735

Marcuse, H. (2013). One-dimensional man: Studies in the ideology of advanced industrial society (Original work published 1964). Beacon Press.

Marx, K. (2000). Toàn tập, Tập 12 [Collected Works, Vol. 12]. Hanoi: National Political Publishing House.

Marx, K., & Engels, F. (2002). Toàn tập, Tập 23 [Collected Works, Vol. 23]. Hanoi: Truth National Political Publishing House.

McIntyre, L. (2018). Post-truth. MIT Press. https://doi.org/10.7551/mitpress/11483.001.0001

Media Ecosystem Observatory. (2022). Mis and disinformation during the 2021 Canadian federal election. McGill University. https://www.mediaecosystemlab.ca/reports/mis-and-disinformation-2021-canadian-federal-election.pdf

Mir, A. (2025). The benefits of platform monopoly. ProMarket. https://www.promarket.org/2025/04/16/the-benefits-of-platform-monopoly/

Müller, K., & Schwarz, C. (2020). Fanning the flames of hate: Social media and hate crime. Journal of the European Economic Association, 19(4), 2131–2167. https://doi.org/10.1093/jeea/jvaa045

NapoleonCat. (2023). Facebook users in Canada December 2023. https://napoleoncat.com/stats/facebook-users-in-canada/2023/12/

Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360. https://doi.org/10.1111/comt.12039

National Assembly. (2018). Law on Cybersecurity. https://datafiles.chinhphu.vn/cpp/files/vbpq/2022/07/24-2018-qh14..pdf

Neudert, L.-M. N. (2017). Computational propaganda in Germany: A cautionary tale (Working Paper No. 2017.7). Computational Propaganda Research Project, University of Oxford. https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/06/Comprop-Germany.pdf

Nguyen, N. H. (2022). The role of the people as the central subject in the national development strategy and the process of national construction and defense, according to the spirit of the 13th National Congress of the Party. Communist Review. https://www.tapchicongsan.org.vn/media-story/-/asset_publisher/V8hhp4dK31Gf/content/vai-tro-chu-the-vi-tri-trung-tam-cua-nhan-dan-trong-chien-luoc-phat-trien-dat-nuoc-va-qua-trinh-xay-dung-bao-ve-to-quoc-theo-tinh-than-dai-hoi-xiii-cu

Ninh, C. (2025). “Make in Vietnam” increasingly affirms its brand. Nhân Dân Newspaper. https://nhandan.vn/make-in-vietnam-ngay-cang-khang-dinh-thuong-hieu-post857881.html

O’Hara, K. (2022). Digital modernity. Foundations and Trends® in Web Science, 9(1–2), 1–254. https://doi.org/10.1561/1800000031

Pennycook, G.; Bear, A.; Collins, E. T., & Rand, D. G. (2020). The Implied Truth Effect: Attaching warnings to a subset of fake news stories increases perceived accuracy of stories without warnings. Management Science, 66(11), 4944–4957. https://doi.org/10.1287/mnsc.2019.3478

Politburo. (2018). Resolution No. 35-NQ/TW on strengthening the protection of the Party’s ideological foundation, combating wrong and hostile views in the new situation. https://tulieuvankien.dangcongsan.vn/he-thong-van-ban/nghi-quyet-cua-chinh-phu/nghi-quyet-so-35nq-cp-ngay-0462019-cua-chinh-phu-ve-tang-cuong-huy-dong-cac-nguon-luc-cua-xa-hoi-dau-tu-cho-phat-trien-5446

Zollo, F.; Bessi, A.; Del Vicario, M.; Scala, A.; Caldarelli, G.; Shekhtman, L.; Havlin, S., & Quattrociocchi, W. (2017). Debunking in a world of tribes. PLOS ONE, 12(7), e0181821. https://doi.org/10.1371/journal.pone.0181821

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: PublicAffairs.

 

CONFLICT OF INTEREST

The authors declare that there are no conflicts of interest related to this article.

ACKNOWLEDGMENTS

Not applicable.

FUNDING

Not applicable.

PREPRINT

Not published.

COPYRIGHT

The copyright is held by the authors, who grant Journal Política Internacional exclusive rights for first publication. The authors may enter into additional agreements for the non-exclusive distribution of the version of the work published in this journal (for example, posting in an institutional repository, on a personal website, publishing a translation, or as a book chapter), with acknowledgment that it was first published in this journal. Regarding copyright, the journal does not charge any fees for submission, processing, or publication of articles.