In this essay, I examine the ethics of Artificial Intelligence as it pertains to the spread of misinformation.
Misinformation is spread efficiently and at unprecedented rates through the use of Artificial Intelligence (AI). Developers of this technology have an obligation not to introduce unwarranted bias into AI systems that make decisions affecting people's lives.
Artificial Intelligence and Misinformation
Artificial Intelligence (AI) refers to the ability of technology to demonstrate human characteristics such as learning and decision-making. The field has advanced rapidly in recent years, and AI has become an increasingly powerful tool for spreading information efficiently. It can also help mitigate the spread of misinformation by detecting and removing misleading content that, if spread further, could endanger the well-being of society. However, as AI techniques have become more prevalent and accessible, their use to spread harmful misinformation is a growing concern, particularly with respect to user-profiling, the creation of "deep fakes", and the removal of human oversight from information generation. The consequences of misusing AI are serious, ranging from deepening societal divisions to inciting violence, lowering vaccination rates, and influencing election results.
Artificial intelligence and machine learning (ML), a subset of AI in which prior information is used to "train" machines to inform future decisions and judgements, are being used by technology companies to improve their products. AI is integrated into almost every aspect of Google: the search engine uses AI techniques to return results, Maps uses AI to estimate where the user is headed in order to help navigate, and Photos uses AI to suggest photos to share with friends [1]. Twitter uses AI to decide which Tweets to show users first, based on algorithms that determine what individual users will be interested in [2]. Companies also use AI to detect fake information, find trolls, remove bots, and flag sensitive or inappropriate content. Facebook is currently using AI to detect COVID-19 misinformation and content that violates its policies [3]. Five years ago, Facebook largely "relied on users to flag offending accounts" and content, which were then reviewed by human moderators [4].
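To make the idea of "training on prior information" concrete, the sketch below shows a toy text classifier built with the scikit-learn library. The posts and labels are invented for illustration, and real moderation systems at these companies are far larger and more sophisticated; the point is only that past labels shape future judgements.

```python
# Minimal sketch (not any company's actual system): a toy classifier
# "trained" on previously labeled posts, so that prior information
# determines how it judges new content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = misleading, 0 = benign.
posts = [
    "Miracle cure doctors don't want you to know about",
    "City council meeting rescheduled to Thursday",
    "Vaccines contain mind-control microchips",
    "New bike lane opens downtown next week",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The model scores unseen posts based only on patterns in its training data.
print(model.predict_proba(["Secret cure they are hiding from you"])[0][1])
```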
Unsurprisingly, the task of regulating misinformation is not one that can easily be taken away from humans yet. While AI techniques can perform repetitive tasks, such as noticing repeated posts by bots, they are not sophisticated enough to pick up on nuances of human language like sarcasm and hyperbole [5]. Linguistic barriers, such as idioms and grammatical structure, also make it difficult to fully automate the detection of misinformation. On the other hand, human moderators have drawbacks of their own. Moderation by human reviewers is highly subjective; whether a post gets taken down depends on a person's biases and mood on a particular day. Human labor is also costly and often inefficient, and human moderators are exposed to sensitive and traumatizing content. Facebook has a team of 30,000 moderators and spent $52 million to compensate content moderators suffering from PTSD [6, 7]. As a result of both human and AI limitations, technology companies rely on hybrid computer/human moderation, which may not necessarily be a bad thing.
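As an illustration of the kind of repetitive work machines handle well, the following sketch flags near-identical posts, a common signature of bot activity. It is a deliberately simple, hypothetical example; it would do nothing to catch sarcasm, hyperbole, or idioms, which is exactly where human reviewers remain necessary.

```python
# Minimal sketch of a repetitive moderation task: flagging posts that are
# repeated nearly verbatim, which can suggest coordinated bot activity.
from collections import Counter
import re

def normalize(text: str) -> str:
    # Lowercase and drop punctuation so trivial edits map to the same string.
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def flag_repeated_posts(posts, threshold=3):
    # Count normalized posts; anything repeated `threshold` times is suspicious.
    counts = Counter(normalize(p) for p in posts)
    return {text for text, n in counts.items() if n >= threshold}

# Hypothetical feed: the same claim pasted repeatedly with minor edits.
feed = [
    "The election was RIGGED!!",
    "the election was rigged",
    "The election was rigged...",
    "Lovely weather in the park today",
]
print(flag_repeated_posts(feed))  # {'the election was rigged'}
```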
The continual development of increasingly sophisticated AI techniques means that developers, companies, and governments must be aware of the potential for misuse and must develop regulation and policy to prevent AI from becoming a dangerous tool. User-profiling, in which data on users is collected and used to disseminate targeted ads and content, is one such issue. When social media companies use it to show user-preferred content, it can improve the user experience. But when the algorithms that decide what a user sees distribute only one type of content, the practice becomes deeply problematic, especially if that content is misinformation. The user may also believe that the information they are seeing is what everyone else sees. User-profiling can likewise be used to target vulnerable populations and influence their opinions about whom to vote for or what to think of certain groups of people. In most elections, political candidates use demographic data to profile voters and send information to those who might support them based on age, sex, education, employment, and so on. In 2016, the Trump campaign contracted a company called Cambridge Analytica to perform psychographic profiling, targeting potential voters based on their personalities [8]. Used improperly or politically, user-profiling has the potential to influence democratic elections and spread misinformation.
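The narrowing effect described above can be illustrated with a crude, hypothetical engagement-based recommender. The topics and posts are invented; the point is only that ranking purely by past engagement naturally limits what a user is shown.

```python
# Minimal sketch of engagement-driven profiling: the feed keeps narrowing
# toward whatever the user already clicks on, which is how a single type
# of content (including misinformation) can come to dominate.
from collections import Counter

# Hypothetical click history: the "profile" is just engagement counts.
click_history = ["politics", "politics", "sports", "politics", "politics"]

# Hypothetical pool of new posts, grouped by topic.
candidate_posts = {
    "politics": ["post A", "post B", "post C"],
    "sports": ["post D"],
    "cooking": ["post E"],
}

profile = Counter(click_history)
top_topic, _ = profile.most_common(1)[0]

# Ranking by past engagement crowds out every other topic.
recommended = candidate_posts[top_topic]
print(top_topic, recommended)  # politics ['post A', 'post B', 'post C']
```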
Additionally, "deep fakes", audio and visual edits that are barely distinguishable from reality, are created using AI and are powerful mediums for spreading misinformation. They were originally used in movie-making but have become increasingly prevalent. In 2017, a team from the University of Washington released a paper called "Synthesizing Obama", describing a program that takes existing audio and video of Barack Obama and creates clips of him lip-syncing phrases he never said [9]. Techniques that can manipulate audio and video into believable media have the potential to incite violence, create civil unrest, and divide groups of people even further.
As much as computer scientists might like to believe otherwise, AI algorithms are not completely objective. Because humans create the algorithms and the data they learn from, there will always be an element of human flaw in AI systems. The information we feed algorithms is subject to human biases, and the information those algorithms produce will carry the same biases. The result can be an AI system that makes discriminatory judgements about particular groups of people. Even from these few examples, multiple ethical concerns about AI arise [10]. User-profiling can infringe on user privacy. "Deep fakes", if used by political figures or in diplomatic situations, could cause major conflicts and undermine the legitimacy of democracy. Can we allow AI systems to make judgements that affect people's lives if we cannot guarantee those judgements are unbiased? How much censorship should be allowed, and does removing content impact freedom of speech? These questions cannot be answered definitively, but they point to considerations that developers, companies, and governments using AI must take into account.
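How bias in training data resurfaces in a model's judgements can be shown with a deliberately simple, hypothetical example: past decisions that treated two groups differently become the labels a model learns from, so the model reproduces the disparity.

```python
# Minimal sketch of bias propagation, using invented data.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical lending decisions.
# Features: [group (0 = A, 1 = B), income in tens of thousands]
X = [[0, 5], [0, 6], [1, 5], [1, 6]]
# Past approvals mirror a biased process rather than applicants' merit:
# group A was approved and group B denied, at identical incomes.
y = [1, 1, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two applicants with the same income but different group membership
# receive different predictions, because the bias was baked into the labels.
print(model.predict([[0, 5], [1, 5]]))  # expected: [1 0]
```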
It is imperative to consider the implications of AI so that regulations and policy can be created to prevent its misuse. Accountability and transparency are central considerations. While companies may not want to release the source code of their AI algorithms, doing so could help combat disinformation. Regulations could establish what responsibility companies bear for the information their algorithms spread, and those algorithms could be altered to counteract misinformation by sending users information from a variety of sources. The Algorithmic Accountability Act, introduced in 2019, would require companies to evaluate their automated decision systems for bias and other risks [11]. Partnerships between technology companies and governments could be created to secure sensitive algorithms and prevent misuse. Furthermore, educating people to spot misinformation and to think critically is essential. In Finland, students are taught specific skills for identifying bots on the internet [12].
Ultimately, AI techniques are incredibly powerful and useful tools for identifying misinformation. They are also dangerous and enormously influential when used improperly. The consequences of misuse need to be evaluated by the developers, social media and technology companies, and governments who have the power to influence society through the spread of information. Regulations and policies then need to be implemented so that AI algorithms are not misused and so that misinformation does not reach the public so readily and quickly.
References
[1] Pichai, S. (2020, September 29). Google is AI first: 12 AI projects powering Google products. Retrieved November 14, 2020, from https://research.aimultiple.com/ai-is-already-at-the-heart-of-google/
[2] Koumchatzky, N., & Andryeyev, A. (2017, May 9). Using Deep Learning at Scale in Twitter’s Timelines. Retrieved November 14, 2020, from https://blog.twitter.com/engineering/en_us/topics/insights/2017/using-deep-learning-at-scale-in-twitters-timelines.html
[3] Using AI to detect COVID-19 misinformation and exploitative content. (2020, May 12). Retrieved November 14, 2020, from https://ai.facebook.com/blog/using-ai-to-detect-covid-19-misinformation-and-exploitative-content/
[4] Kahn, J. (2020, March 04). Meet the A.I. Facebook relies on to police its social network. Retrieved November 14, 2020, from https://fortune.com/2020/03/04/facebook-a-i-fake-accounts-disinformation/
[5] Zawacki, K. (2015, January 22). Why Can’t Robots Understand Sarcasm? Retrieved November 14, 2020, from https://www.theatlantic.com/technology/archive/2015/01/why-cant-robots-understand-sarcasm/384714/
[6] Jee, C. (2020, June 08). Facebook needs 30,000 of its own content moderators, says a new report. Retrieved November 14, 2020, from https://www.technologyreview.com/2020/06/08/1002894/facebook-needs-30000-of-its-own-content-moderators-says-a-new-report/
[7] Whittaker, Z. (2020, May 12). Facebook to pay $52 million to content moderators suffering from PTSD. Retrieved November 14, 2020, from https://techcrunch.com/2020/05/12/facebook-moderators-ptsd-settlement/
[8] Wade, M. (2020, January 08). Psychographics: The behavioural analysis that helped Cambridge Analytica know voters’ minds. Retrieved November 14, 2020, from https://theconversation.com/psychographics-the-behavioural-analysis-that-helped-cambridge-analytica-know-voters-minds-93675
[9] Suwajanakorn, S., Seitz, S. M., & Kemelmacher-Shlizerman, I. (2017, July). Synthesizing Obama: Learning Lip Sync from Audio. Retrieved November 13, 2020, from https://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf
[10] Santa Clara University, Markkula Center for Applied Ethics. (n.d.). A Framework for Ethical Decision Making. Retrieved November 14, 2020, from https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/a-framework-for-ethical-decision-making/
[11] Clarke, Y. (2019, April 11). All Info - H.R.2231 - 116th Congress (2019-2020): Algorithmic Accountability Act of 2019. Retrieved November 14, 2020, from https://www.congress.gov/bill/116th-congress/house-bill/2231/all-info
[12] Mackintosh, E. (n.d.). Finland is winning the war on fake news. Other nations want the blueprint. Retrieved November 14, 2020, from https://edition.cnn.com/interactive/2019/05/europe/finland-fake-news-intl/
[13] Kertysova, K. (2018). Artificial Intelligence and Disinformation. Security and Human Rights, 29, 55-81. doi:10.1163/18750230-02901005. Retrieved from https://www.researchgate.net/publication/338042476_Artificial_Intelligence_and_Disinformation