Algorithms are Black Boxes, That is Why We Need Explainable AI

Artificial intelligence offers organisations significant advantages: it makes operations more efficient, improves customer service with conversational AI and reduces a wide variety of risks across industries. Although we are only at the beginning of the AI revolution, we can already see that artificial intelligence will have a profound effect on our lives. As a result, AI governance and Explainable AI are becoming increasingly important if we want to reap the benefits of artificial intelligence.

Data governance and ethics have always been important, and a few years ago I developed ethical guidelines for organisations that want to get started with big data. Such guidelines are becoming ever more important now that algorithms are taking over more and more decisions. Automated decision-making is great until it produces a negative outcome for you and you cannot change that decision or, at least, understand the rationale behind it. Algorithms offer tremendous opportunities, but they have two major flaws:

  1. Algorithms are extremely literal; they pursue their (ultimate) goal literally and do exactly what they are told, while ignoring any other important consideration;
  2. Algorithms are black boxes; whatever happens inside an algorithm is known only to the organisation that uses it, and quite often not even to them.

Algorithms Are Very Literal

An algorithm only understands what it has been explicitly told. Algorithms are not yet, and perhaps never will be, smart enough to know what they do not know, and as such they might miss vital considerations that we humans would have thought of automatically. Therefore, it is important to tell an algorithm as much as possible when developing it: the more you tell the algorithm, the more it understands. In addition, when designing the algorithm, you must be crystal clear about what you want it to do.

Algorithms focus on the data they have access to, and often that data has a short-term focus. As a result, algorithms tend to focus on the short term. Most humans understand the importance of a long-term approach, but algorithms do not, unless they are told to focus on the long term as well. Therefore, developers (and managers) should ensure algorithms are consistent with any long-term objectives that have been set within the area of focus. This can be achieved by offering a wider variety of data sources to incorporate into their decisions and by including so-called soft goals (which relate to behaviours and attitudes) alongside hard goals, as sketched below.
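As a minimal sketch of what that could look like in practice, consider a scoring function that blends a short-term metric with long-term and soft-goal terms. Everything here, the metric names, the weights and the function itself, is an illustrative assumption rather than anything from a real system:

```python
# Illustrative sketch only: the metrics and weights below are assumptions,
# not a real production objective.

def blended_score(clicks_now: float,
                  retention_proxy: float,
                  fairness_proxy: float,
                  w_long: float = 0.4,
                  w_soft: float = 0.2) -> float:
    """Score a candidate decision on short-term, long-term and soft-goal terms.

    clicks_now      -- short-term metric the algorithm would otherwise chase
    retention_proxy -- stand-in for a long-term objective
    fairness_proxy  -- stand-in for a 'soft' behavioural goal
    """
    w_short = 1.0 - w_long - w_soft
    return (w_short * clicks_now
            + w_long * retention_proxy
            + w_soft * fairness_proxy)


print(blended_score(clicks_now=0.9, retention_proxy=0.4, fairness_proxy=0.7))
```

The point is not the exact weights but that the long-term and soft terms appear in the objective at all: an algorithm only optimises what it is explicitly given.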

As such, when developing algorithms, one should focus on a variety of internal and external data, or mixed data. This concept of Mixed Data, which I developed a few years ago to help small businesses get started with big data as well, is important when building algorithms. For smaller organisations especially, the Mixed Data approach showed that they too can obtain valuable insights without needing petabytes of data; the trick lies in having a wide variety of (un)structured and internal/external data sources.

Now, I would like to expand this approach to building algorithms. Organisations should use a variety of long-term- and short-term-focused data sources, and give algorithms soft goals as well as hard goals, to create a stable algorithm. With a mixed data approach, the algorithm can calibrate the different data sources for their relative importance, resulting in better predictions and better algorithms. The more data sources, and the more diverse they are, the better the algorithm's predictions will become.
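As a rough illustration of that calibration, a linear model fitted on several feature blocks exposes how heavily each source weighs in its predictions. The data sources, feature names and coefficients below are entirely invented for the example:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical feature blocks -- none of these sources come from the article.
internal = rng.normal(size=(200, 3))   # e.g. recent sales, stock levels, returns
external = rng.normal(size=(200, 2))   # e.g. weather index, social sentiment
X = np.hstack([internal, external])

# Synthetic target with a known mix of source contributions.
true_weights = np.array([0.8, 0.1, 0.3, 0.5, 0.2])
y = X @ true_weights + rng.normal(scale=0.1, size=200)

model = Ridge(alpha=1.0).fit(X, y)

# The fitted coefficients act as a crude calibration of each source's
# relative importance in the prediction.
names = ["sales", "stock", "returns", "weather", "sentiment"]
print(dict(zip(names, model.coef_.round(2))))
```

In a real system one would rely on held-out validation or proper feature-importance methods rather than raw coefficients, but the principle is the same: diverse sources let the model weigh them against each other.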

Algorithms and Explainable AI (XAI)

Algorithms are black boxes, and often we do not know why an algorithm comes to a certain decision. They can make great predictions on a wide range of topics, but how much are those predictions worth if we do not understand the reasoning behind them? Therefore, it is important to build explanatory capabilities into the algorithm, so we can understand why a certain decision was made.

Explainable AI, or XAI, is a new field of research that tries to make AI more understandable to humans. The term was first coined in a 2004 paper as a way to offer users of AI an easily understood chain of reasoning for the decisions made by the AI, in that case especially for simulation games. The objective of XAI is to ensure that an algorithm can explain the rationale behind its decisions and the strengths or weaknesses of those decisions. Explainable AI can therefore help uncover what the algorithm does not know, even though the algorithm cannot know this itself. Consequently, XAI can help identify which data sources are missing from the mathematical model, which can then be used to improve the AI.
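As one concrete example of such explanatory capabilities (a common post-hoc technique, not one named in this article), feature-importance methods ask how much each input actually drives a model's predictions. The sketch below uses scikit-learn's permutation importance on a public dataset; the model and data are stand-ins for whatever system one wants to explain:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the test score drops -- a rough answer to "what drove this model?"
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Features whose shuffling barely moves the score contribute little to the decision, which is exactly the kind of signal that points at missing or redundant data sources.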

In addition, explainable AI can help prevent so-called self-reinforcing loops. These arise from feedback loops, which are important and indeed required to constantly improve an algorithm. However, if the AI misses soft goals and only focuses on the short term, or if the AI is biased because it was trained on limited historical data, these feedback loops can become biased and discriminatory. Self-reinforcing loops should therefore be prevented. Using Explainable AI, researchers can understand why such self-reinforcing loops appear and why certain decisions have been made, and thereby understand what the algorithms do not know. Once that is known, the algorithm can be changed by adding additional (soft) goals and different data sources to improve its decision-making capabilities.
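To make the mechanism concrete, here is a toy simulation; the lending scenario and every number in it are invented for illustration. Two groups have the same true repayment rate, but one starts with a lower estimated score because of biased historical data. Since the model only observes outcomes for applicants it approves, the disadvantaged group never generates new evidence and its score never recovers:

```python
import numpy as np

rng = np.random.default_rng(7)

TRUE_REPAY_RATE = 0.75               # identical for both groups
estimate = {"A": 0.80, "B": 0.55}    # group B starts low: biased history
THRESHOLD = 0.60                     # approve a group only above this score

for round_no in range(1, 6):
    for group in ("A", "B"):
        if estimate[group] >= THRESHOLD:
            # Approve 100 applicants and observe their real outcomes.
            outcomes = rng.random(100) < TRUE_REPAY_RATE
            # Blend the old estimate with the fresh evidence.
            estimate[group] = 0.5 * estimate[group] + 0.5 * outcomes.mean()
        # else: no approvals -> no new data -> the estimate never updates,
        # and the loop locks in the initial bias indefinitely.
    print(f"round {round_no}: " +
          ", ".join(f"{g}={v:.3f}" for g, v in estimate.items()))
```

Group A's score converges towards the true rate while group B's stays frozen at its biased starting value. An explainability check that surfaces why group B is never approved is what lets a developer break the loop, for example by occasionally approving borderline cases to gather fresh data.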

Explainable AI should be an important aspect of any algorithm. When an algorithm can explain why certain decisions have been, or will be, made, and what the strengths and weaknesses of those decisions are, the algorithm becomes accountable for its actions, just as humans are. It can then be altered and improved if it becomes (too) biased or too literal, resulting in better AI for everyone.

Independent AI Watchdog

A few years ago, I argued for the need for big data auditors who would audit the proprietary algorithms organisations use to automate their decision-making. Today, this has become more important than ever. Too often, algorithms go awry and discriminate against consumers, most of the time because they were trained on historical, biased data. Recently, a report by Sandra Wachter, Brent Mittelstadt and Luciano Floridi, a research team at the Alan Turing Institute in London and the University of Oxford, called for an independent third-party body that can investigate AI decisions for people who believe they have been discriminated against by an algorithm. A great idea, and I think it should be expanded to a governing body that also audits and verifies that algorithms work as they should, in order to prevent discrimination.

A combination of an independent auditor and Explainable AI will help organisations ensure consumers are treated equally and help developers build better algorithms, which in the end will result in better products and services.

Image credit: Customdesigner/Shutterstock

Dr Mark van Rijmenam

Dr Mark van Rijmenam is The Digital Speaker. He is a leading strategic futurist who thinks about how technology changes organisations, society and the metaverse. Dr Van Rijmenam is an international innovation keynote speaker, 5x author and entrepreneur. He is the founder of Datafloq and the author of the book on the metaverse: Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy, detailing what the metaverse is and how organizations and consumers can benefit from the immersive internet. His latest book is Future Visions, which was written in five days in collaboration with AI. Recently, he founded the Futurwise Institute, which focuses on elevating the world’s digital awareness.
