Draft:Algorithmic democracy

Algorithmisation has become the main response of governments to the challenges facing contemporary democracies. Social and political bots, especially in their generative form, colonise the public sphere with synthetic content that perverts and distorts both reality and public opinion. Virtual politicians have run as candidates in presidential and municipal elections (Russia, Denmark, Japan, New Zealand), promising neutrality, comprehensiveness and fairness in the face of the self-serving bias that characterises human decision-making. Virtual twins are presented as the most effective way to address participatory disaffection and improve representative democratic systems. Meanwhile, digital platforms and the cyber-political ecosystems they offer constitute the pillars underpinning the structures of political governance, becoming today's great centres of despotism.[1][2][3]

The consequences of these algorithmic excesses are beginning to be felt: a growing perception of human obsolescence, the vulnerability of systems and societies, social entropy, the instrumentalisation of the functional spheres of society and its institutions, the automation of injustices, political irresponsibility, despotism and technological imperialism, moral hardship among the most vulnerable citizens, and distrust of the human, among many other effects.[1][4][5]

According to studies on the subject, behind all of this lie the particular interests of governmental bodies and large technology companies, which use their enormous power to design, promote and disseminate intelligent algorithms in order to satisfy strategic and highly instrumental ends: maintaining or increasing power, controlling the citizenry, subjugating society, manipulating the vote, muzzling public opinion, and provoking compulsive consumption, among others.

One of the main problems is that AI has been creeping for over a decade into streets, homes, cars and even bodies through smart devices that collect private and intimate data for large governmental and technological corporations: robots, mobile phones, speakers, watches, glasses, cameras, headsets, computers, RFID chips, and many other AI-enabled or AI-linked devices. The more data AI has, the more vulnerable people and societies become, together with the functional spheres, such as democracy, through which they project and develop a good life in relation to others. With this data, AI generates synthetic discourses capable of steering the free will of consumers or the electorate towards particular interests; these are clearly not the interests of the algorithms themselves, but those of the governmental and technological corporations that design and disseminate them to satisfy their strategic and instrumental ends.

Another fundamental problem is that the public sphere, the space in which citizens use dialogue and deliberation to reach agreements on different aspects of democracy, such as the legitimacy of a government's actions and decisions, has been shifting towards the digital world, in particular social networks. This displacement has allowed the hybridisation of the human and the non-human and the algorithmic colonisation of public debate, thereby facilitating interference by governments and large corporations through massive synthetic content, echo chambers, filter bubbles and other mechanisms that produce fields of distortion of reality.

The war in the Gaza Strip is a clear example. Both sides in the conflict have used generative AI to manipulate public opinion by means of synthetic data packets that reproduce highly emotional content, stirring consciences and provoking rejection by international society of one side or the other. The application of this disruptive and distorting potential of generative AI in a super election year such as 2024, in which 74 countries around the world elect their representatives, can have highly corrosive impacts on democracy on a global scale.

These studies also highlight the problem of shrinking spaces of freedom. The potential of algorithmic democracies rests on the production, collection and exploitation of the massive volumes of data generated by a digitally hyper-connected society. To this end, states apply policies of mass surveillance, deploying thousands or millions of cameras, sensors and other surveillance devices connected to artificial intelligence in public and, in some cases, private spaces. Through this technological deployment, governments exert increasingly tight and oppressive control over citizens, producing a spiral of silence that slows down or prevents the regeneration, development and good health of modern democracies.

Nevertheless, different studies[1][6] do not advocate paralysing the development of AI, but rather governing it. On the one hand, they call for promoting a strong civil society and a decoupling of the public sphere from the digital sphere that allows for the recovery of relationality, criticism and agreement on political actions, decisions and democratic processes; public opinion is key to the progress and good health of democratic institutions and organisations, and thus to the development of society. On the other hand, they advocate developing, applying and implementing self-regulation instruments and mechanisms for the ethical governance of algorithmic democracy, such as codes of ethics and conduct to guide its design and operation, ethics channels to report cases of malpractice, ethical audits to check its correct functioning, and accountability and explainability reports that account to society for its uses and impacts.

References

  1. García-Marzá, Domingo; Calvo, Patrici (2024). Algorithmic Democracy: A Critical Perspective Based on Deliberative Democracy. Cham: Springer. ISBN 978-3-031-53017-3.
  2. Pérez-Zafrilla, Jesús. "Polarización artificial: cómo los discursos expresivos inflaman la percepción de polarización política en internet". Recerca. Revista de Pensament i Anàlisi. 26 (2): 1–23.
  3. Zuboff, Shoshana (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
  4. Innerarity, Daniel; Colomina, Carme. "La verdad en las democracias algorítmicas". Revista CIDOB d'Afers Internacionals.
  5. Conill, Jesús (2023). "Ética discursiva e Inteligencia artificial. ¿Favorece la inteligencia artificial la razón pública?". Daimon. Revista Internacional de Filosofía. 90: 115–130.
  6. Gudiño, Jairo F.; Grandi, Umberto; Hidalgo, César. "Large Language Models (LLMs) as Agents for Augmented Democracy". arXiv.