AI and the 2024 Indonesian election: Are ethical guidelines enough?

Damar Juniarto
4 min read · Feb 20, 2024

The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth, and truth be defamed as lie, but that the sense by which we take our bearings in the real world — and the category of truth versus falsehood is among the mental means to this end — is being destroyed. (Hannah Arendt, 1973)

Artificial Intelligence (AI) has moved from the realm of science fiction into our everyday lives. Its growth has been exponential, and its use is becoming ever harder to avoid. AI now permeates our lives, from the voice assistants on our phones to the virtual backgrounds we use for video calls to automatically generated meeting minutes, and much more.

During election periods, AI is also employed in the political sphere. Take, for example, pemilu.ai, a political consulting service that uses AI to create political content. With a single click, politicians can generate content tailored to their specific needs, much as ChatGPT by OpenAI does. AI is also used by politicians for facial enhancement, lending them youthful, attractive appearances in campaign posters and banners. Even as reality is being constructed by AI, no one has raised any objections or complaints with its creators.

Generative AI is also utilized extensively. The latest generation of AI can produce images, videos and voices that are difficult to distinguish from the genuine article. It was previously reported that generative AI videos could not blink, but this is no longer the case: their eyes now blink, making them even more lifelike. AI-generated characters that once looked stiff and unnatural can now be made to appear strikingly realistic.

In tandem with this growth in AI is an expanding body of documentation on how such advanced technologies should be governed and managed. As of May 2021, at least 470 documents addressing the legal, social, ethical and policy issues around AI had been identified.

In December 2023, Indonesia's Communication and Informatics Minister issued Circular Letter No. 9 of 2023 concerning the Ethics of Artificial Intelligence. The Circular urges businesses and Electronic Service Operators (ESOs) that want to deploy AI-based programming in Indonesia to uphold nine values, including inclusiveness, transparency, credibility and accountability, and security.

Unfortunately, this Circular is just another AI ethics recommendation, and compliance is purely voluntary. As a result, ethical violations are not promptly and thoroughly addressed. Among digital rights activists and academics, the Circular is considered "faint" and "unconvincing," especially in comparison with the European Union's effort to draw a line between what is and is not allowed with AI technology.

In my view, Indonesia's Circular on AI ethics cannot answer the problems posed by AI, as the following examples show. By my count, generative AI has been misused twice so far during this election.

The first occurred in October 2023, when a deepfake video circulated widely showing Indonesian President Joko Widodo speaking eloquently in Mandarin, a language he does not have a strong command of. The video was captioned "Jokowi speaks Mandarin," but an AI-generated voice had simply been embedded into an old video uploaded to the US-Indonesia Society (USINDO) YouTube channel on November 13, 2015. The Communications and Informatics Ministry responded by stating that the video was a deepfake, a type of disinformation created using AI, and published an explanation of deepfakes on its website.

The second occurred in early January 2024. Erwin Aksa, deputy chairman of the Golkar Party, disseminated a deepfake video through his X account @erwinaksa_id on January 7, 2024. The video featured a likeness of New Order dictator Soeharto wearing batik, seated in a relaxed pose with the ornate Golkar Party flag behind him. The video, approximately 2 hours and 52 minutes in length, contained an invitation from the late Soeharto to vote for Golkar representatives on election day, February 14, 2024. To date, it has been viewed by more than 4.4 million people.

Setting aside the creative merit of these videos, it is their ethical implications that require evaluation. If ethical behavior means doing what is right and avoiding what is wrong, relates to the concepts of good and bad in ethics, and rests on agreed-upon principles of conduct, then I believe these two deepfake videos are unethical.

Moreover, both videos are examples of political disinformation. The information they present can mislead, which could lead citizens to make the wrong decisions on voting day. Such violations and dangerous content should therefore be met with clear and definitive punishments.

Hannah Arendt, who witnessed how propaganda works in the 1940s, wrote that a constant barrage of lies renders society helpless to defend itself. As she argued in The Origins of Totalitarianism, the end result of substituting lies for truth is the impairment of our capacity to discern what is real from what is false. For that reason, we should not allow this to happen to us.


This article was first published in The Jakarta Post daily on January 30, 2024, as part of its Outlook 2024 special edition: https://www.thejakartapost.com/epost-read/2024-01-30


Damar Juniarto

He writes on new media, digital democracy, internet policy, freedom of expression and cybersecurity. https://linktr.ee/damarjuniarto