AI: A Force for Good or Bad?

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI and the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

This week, Elon Musk praised the work of OpenAI after a team of five neural networks defeated a team of five human players, ranked in the 99.95th percentile worldwide, in the popular game Dota 2. The five bots had learned the game by playing against themselves at a staggering rate of 180 years of gameplay per day. The game requires strong teamwork among its five players, which makes the achievement all the more remarkable and further evidence that artificial intelligence (AI) is rapidly becoming more advanced.

However, directly after the five bots beat the five humans 2-1, Musk cautioned against the power of AI, urging OpenAI to focus on AI that works with humans instead of against them. His statement is in line with his earlier warnings about AI, which Musk believes could result in a robot dictatorship or in an AI arms race among superpowers that he considers the most plausible cause of World War III.

As artificial intelligence becomes increasingly sophisticated, the warnings against it become more pervasive, and the question remains: is AI good or bad?

AI: A Force for Good

Artificial intelligence offers tremendous opportunities for mankind, and Bill Gates believes that AI can be our friend and a force for good. Already, artificial intelligence is helping humans tremendously in a wide variety of disciplines, enabling us to do more with less. These advantages range from offering private banking services, normally reserved for the rich, to the general public, to helping aid workers in times of disaster, to improving healthcare and making it more accessible, to making our roads safer with self-driving cars.

The list is long and continues to grow. Virtually every industry can benefit from the advances made in artificial intelligence, and hence you could argue that AI is good for society: like any other major technology invented by humans, it can be applied to help us and improve our lives.

AI: A Force for Bad

Nevertheless, technology can also be turned against us. After all, technology itself is neutral; it is how humans use it that determines whether it is good or bad. A hammer can be used to build a house or to kill someone. Hence, there are people who create AI intended to cause harm; computer viruses, built specifically to damage people and organisations, are a good example. With AI, however, there is an additional problem: even when it is created with the best intentions, it can still cause harm.

Humans can have the objective to create benevolent AI, but if it is done incorrectly, it could easily turn against us, even if that was not our intention. A famous example is Microsoft's Twitter bot Tay, which was created to have pleasant conversations with Twitter users but, within 24 hours of its launch, became a racist chatbot that praised Hitler. I am sure it was not Microsoft's intention to create a racist chatbot.

What we are dealing with here is the Principal-Agent Problem. This dilemma is best known in political science and economics, but it is becoming increasingly relevant for computer science as well. It means that the agent, in this case the AI, has different intentions and objectives than the principal, the developer or organisation that built it. Solving the Principal-Agent Problem for AI is difficult because an advanced AI's objective might change over time or, if the system is developed incorrectly, might never sufficiently reflect the objectives of the principal. If that happens, the AI can turn rogue and cause significant havoc. Since it is impossible for humans to describe every possible outcome of an AI in advance, AI can easily turn bad if we do not solve the Principal-Agent Problem, as the sketch below illustrates.
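To make the mismatch concrete, here is a minimal, hypothetical sketch in Python. Everything in it, the policy names, the scores, the threshold between them, is an illustrative assumption rather than a description of any real system: the agent faithfully maximises the proxy objective it was given, yet still ends up working against what the principal actually wanted.

```python
# A toy illustration of the Principal-Agent Problem in AI terms.
# The principal cares about one objective; the agent optimises a proxy.
# All names and numbers below are illustrative assumptions.

# Candidate "policies" the agent can choose between, each with a proxy score
# (what the agent is told to maximise) and a true score (what the principal
# actually wants). In practice the true score is never fully specified.
policies = {
    "helpful_answers":   {"proxy_reward": 0.70, "principal_value": 0.90},
    "clickbait_answers": {"proxy_reward": 0.95, "principal_value": 0.20},
}

# The agent simply maximises the proxy reward it was given...
agent_choice = max(policies, key=lambda p: policies[p]["proxy_reward"])

# ...while the principal would have preferred the highest true value.
principal_choice = max(policies, key=lambda p: policies[p]["principal_value"])

print(f"Agent picks:     {agent_choice}")      # clickbait_answers
print(f"Principal wants: {principal_choice}")  # helpful_answers
```

The point is not the code itself but the gap it exposes: as long as the proxy objective is an imperfect stand-in for the principal's true goals, optimising it harder only widens that gap.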

Solving the AI Principal-Agent Problem

Whether AI is good or bad is determined not only by the intentions of the developer but also by how well the AI has been developed and by the data it uses. Therefore, to minimise the chances that an AI turns rogue, we should ensure that its code is written correctly and without bugs (which has so far proven impossible, as code always contains bugs), that we only use bias-free training data, and that we understand the decision-making processes of the algorithm. Many of today's algorithms are black boxes, which is why we need Explainable AI to understand why a certain decision was made. Knowing why certain decisions were made will enable us to improve AI over time; a simple illustration of the data challenge follows below.
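As a small, hypothetical illustration of the data challenge, the sketch below checks whether any group is badly under-represented in a training set before a model is trained on it. The dataset, the group labels and the 30 per cent threshold are all assumptions chosen for the example; real bias audits are far more involved.

```python
# A minimal check for representation bias in training data.
# The rows, groups and threshold are illustrative assumptions only.
from collections import Counter

training_data = [
    {"text": "loan approved", "group": "A", "label": 1},
    {"text": "loan denied",   "group": "A", "label": 0},
    {"text": "loan approved", "group": "A", "label": 1},
    {"text": "loan denied",   "group": "B", "label": 0},
]

def check_representation(rows, key="group", min_share=0.3):
    """Warn if any group falls below a minimum share of the training data."""
    counts = Counter(row[key] for row in rows)
    total = sum(counts.values())
    for group, count in counts.items():
        share = count / total
        if share < min_share:
            print(f"Warning: group '{group}' is only {share:.0%} of the data")

check_representation(training_data)  # flags group 'B' at 25%
```

Checks like this do not make data bias-free, but they show the kind of routine scrutiny that the data, the code and the resulting decisions all require.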

Bug-free code, unbiased data and Explainable AI are three enormous challenges that we need to overcome if we wish to minimise the risk of AI becoming a force for bad. Nevertheless, it is vital that we overcome them, because the more intelligent AI becomes, the more difficult it becomes to steer it in the right direction if the underlying code, assumptions and objectives are incorrect. After all, once we reach Artificial General Intelligence, we can no longer make any assumptions about the AI's behaviour, except that it will have full access to its own source code and can overrule any control mechanisms.

Therefore, if we wish to ensure AI remains benevolent in the future, we should start today to make sure that any AI developed incorporates human values, uses unbiased data, is understandable and bug-free. A huge challenge, but one we cannot ignore.

Image: metamorworks/Shutterstock

Dr Mark van Rijmenam

Dr Mark van Rijmenam is The Digital Speaker. He is a leading strategic futurist who thinks about how technology changes organisations, society and the metaverse. Dr Van Rijmenam is an international innovation keynote speaker, 5x author and entrepreneur. He is the founder of Datafloq and the author of the book on the metaverse: Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy, detailing what the metaverse is and how organizations and consumers can benefit from the immersive internet. His latest book is Future Visions, which was written in five days in collaboration with AI. Recently, he founded the Futurwise Institute, which focuses on elevating the world’s digital awareness.
