Why the Combination of Fake News and Artificial Intelligence is Dangerous
We live in unprecedented times, where, unfortunately, things are increasingly not what they seem or what they should be. We have been in this situation for less than 18 months, but it is rapidly affecting our daily lives. I am talking about fake news and how it has become one of the greatest threats to democracy, free debate and capitalism. Unfortunately, for many, fake news is not a problem at all; it is even Trump’s favourite topic. But above all, there is disagreement about what constitutes fake news, how big the problem is and what to do about it. And that is a very dangerous situation to be in.
The reason for this week’s article was a piece I read on Monday about how Aviv Ovadya, Chief Technologist at the Center for Social Media Responsibility, fears the fake news crisis and is now worried about an information apocalypse. Unfortunately, the fake news crisis is a lot bigger than Russian propaganda during the US elections, which in itself is already a massive problem. Fake news is spreading to other domains, and stopping it is becoming increasingly difficult.
Disturbing Examples of Fake News
Since the concept of fake news emerged, technology has advanced rapidly. As a consequence, we now face examples of fake news that were inconceivable only a few months ago.
The latest trend in fake news is swapping celebrity faces onto porn performers’ bodies. It was first revealed by Motherboard, which discovered a Redditor named ‘deepfakes’ who was pursuing this as a hobby. Since then, there has been an explosion of face-swapped porn, and the implications are truly terrifying. Within weeks, AI-assisted fake porn became scarily easy for anyone to create, and while most of the videos are still pornographic, more users are now experimenting with the technology to place any face in any video. Fortunately, Reddit shut down the deepfakes subreddit a few days ago, and more websites, including Pornhub, are banning these ‘deepfake’ videos.
Realistic Lip-synced Videos
But it gets scarier: last year, researchers from the University of Washington created a program capable of manipulating a video by turning an audio clip into a realistic lip-synced video. Combine this with ‘deepfakes’ technology and you have a tool that lets you make anyone say anything on video, even if they never said it or were never at that location. The consequences are enormous, as Trump has already shown by denying his vulgar remarks in the famous “Access Hollywood” tape.
Ovadya envisions armies of highly intelligent AI-powered bots that distort what is real and what is fake. Already, bots have been used to flood US Senators’ inboxes with messages seemingly written by voters, taking up their attention and preventing them from reading the real messages. As these AI bots become smarter, they could start to imitate anyone by using publicly available data and have them say or write anything to anyone. With the bots mimicking your tone of voice, it will become harder to deny something you did not write or say.
Why is AI-powered Fake News Dangerous?
Ordinary fake news is already dangerous and has the potential to influence elections, as we have seen with Brexit and the US elections. AI-powered fake news will be infinitely more dangerous, as it becomes increasingly difficult to tell the difference between what is fake and what is real. For any society, that is deeply disturbing. Of course, there are many reasons why AI-powered fake news is dangerous, but the most prominent ones for me are:
- Information and reliable news are vital for running our society. Checks and balances on those in power are important to prevent a democracy from turning into a dictatorship. It is important that we can hold someone accountable for their actions, but if we cannot be sure those actions actually occurred, this becomes very difficult. The opposite is also true: with AI-powered fake news, it will become a lot easier for dictatorships to prosecute opponents by simply having them appear to say things they did not say. If we can no longer determine what is true, we enter a very dangerous situation.
- AI-powered fake news is not only a threat to our democracy, but also to our capitalist system. If it becomes very easy to disseminate fake stories about certain companies, it becomes possible to influence stock market prices. Already, fake news is infiltrating financial markets and affecting stock prices, which could easily amount to billions of dollars in lost market capitalisation.
- The third and most serious threat is information fatigue, or what Ovadya calls “reality apathy”: the moment people simply give up and stop paying attention to the news, since most of it is fake anyway. When that happens, the “fundamental level of informedness required for functional democracy becomes unstable”, as Ovadya puts it.
How to Solve AI-powered Fake News?
It may be clear that we are dealing with a massive problem, and the more advanced AI-powered fake news becomes, the harder it will be to stop. Therefore, we have to act now and collectively prevent bad actors from being able to create such sophisticated AI-powered fake news. Of course, that is a daunting task; as the ‘deepfakes’ crisis has shown, anyone with a computer can already create advanced forms of fake news.
Nevertheless, we are not without options, and the growing attention to this problem will hopefully help solve it. There are several ways, I believe, to contribute to solving the problem of AI-powered fake news:
- We should combat AI-powered fake news with the help of artificial intelligence and the community. With the help of a global community, we have been able to create wonderful services such as Wikipedia, so let’s use the power of the community and the same technology that creates fake news to combat it. At Datafloq, this is something we are currently working on, and hopefully, within a few months, we will be able to present an initial application to start fighting fake news.
- We should establish a global governing body for AI regulation, preferably created and run by the United Nations at the highest level. The first signs of this are already appearing, as the United Nations Interregional Crime and Justice Research Institute (UNICRI) recently launched the UNICRI Centre for Artificial Intelligence and Robotics. This centre is to serve as an international resource on matters related to AI and robotics. However, this is not enough. Earlier, I argued that countries should follow Dubai’s example of appointing a minister for AI, and that these ministers should convene regularly to discuss the required regulations for, among other things, AI-powered fake news, as only on a global scale can we solve this global problem.
- We should have programmers, data scientists and AI developers take some kind of Hippocratic oath, similar to the oath doctors take. Of course, not before they have had classes on ethical AI, such as the new course on the ethics and regulation of artificial intelligence that will be jointly offered by Harvard University and the Massachusetts Institute of Technology. An oath does not by itself produce better developers, but it will at least help steer them in the right direction. Agreeing on what the oath should contain will, of course, be a challenge in itself.
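To make the first suggestion above more concrete — fighting fake news with the same machine-learning techniques that create it — a first building block could be a simple text classifier. The sketch below is a minimal illustration, not any existing application: the training headlines and labels are invented for this example, and a production system would need a large, fact-checked corpus and far more sophisticated models.

```python
# Minimal sketch of a fake-news headline classifier: TF-IDF features
# plus logistic regression. The tiny labelled dataset is invented
# purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: headlines labelled, for instance,
# by a community of fact-checkers.
train_texts = [
    "Scientists publish peer-reviewed study on climate trends",
    "Central bank announces quarterly interest rate decision",
    "Parliament passes new data protection legislation",
    "Miracle cure doctors don't want you to know about",
    "Secret video proves politician is an alien shapeshifter",
    "Anonymous insider reveals election was decided in advance",
]
train_labels = ["real", "real", "real", "fake", "fake", "fake"]

# TF-IDF turns each headline into word-frequency features; the
# classifier then learns which words correlate with the 'fake' label.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

def fake_probability(headline: str) -> float:
    """Return the model's estimated probability that a headline is fake."""
    fake_index = list(model.classes_).index("fake")
    return model.predict_proba([headline])[0][fake_index]
```

The community element would come from the labels: crowd-sourced fact-checking supplies the training data, and the model scales that human judgement to the volume of content no human community could review alone.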
The problems we face with artificial intelligence are enormous if we do not get it right. We have already come this far, and now it is time to solve the side effects of this brilliant technology. Responsible AI and responsible usage of AI are a prerequisite for an AI-driven society that wants to remain open, free and democratic, and I certainly hope we will achieve that.
Image: charles taylor/Shutterstock