Ensuring a Thriving Digital Future in a Post-Truth World

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI and the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

A few months ago, I delivered my second TEDx talk (my first was the world's first TEDx talk in virtual reality!), and I'm thrilled to share that my TEDx talk, "Ensuring a Thriving Digital Future in a Post-Truth World," is now available online!

As the opening talk of TEDxAthens, my presentation began with a digital deepfake of myself, followed by a dive into the swirling currents of our post-truth era, where AI technologies like deepfakes and large language models are reshaping our reality.

This talk isn't just about the challenges; it's about finding our path forward. As an 'optimistic dystopian,' I explore the complexities of AI's impact on society and propose a three-fold strategy: Education, Verification and Regulation.

These are not just abstract ideas; they're actionable steps towards a thriving digital future. In this era where AI blurs lines between real and artificial, understanding and adapting is crucial.

Whether you're a tech enthusiast, a concerned citizen, or somewhere in between, this talk has something for you. So, are you ready to join me on this transformative journey? Let's shape a future where AI is not a dystopian threat but a tool for positive change. Watch the talk, get inspired, and let's discuss how we can collectively navigate these uncharted digital waters.

Transcript

Here is the transcript of my talk, including the sources used and the corresponding images, which were crucial to the story as they depict the emotions I was trying to convey:

What you just saw was a digital deepfake of me. An exact digital replica, for which I recreated myself digitally, cloned my voice, mimicked my movements, and asked ChatGPT to write the script based on my content. The process showed me the power of this amazing technology: AI truly has the potential to revolutionize society, or to destroy it.

As Morpheus from The Matrix once asked, "What is real? How do you define 'real'?" In our increasingly digital world, determining who and what we can trust becomes more challenging every day. After all, if I can create this digital deepfake with minimal costs, what can bad actors do with near-infinite budgets? We have entered the post-truth world, and things will get dangerous from here on.

And that worries me a lot. I have been in this space for over a decade, starting with big data and moving on to blockchain, AI, the metaverse, and back to AI. I did a PhD on how these technologies change organizations, wrote four books on the subject, and I always try to practice what I preach to really understand how these emerging technologies will affect society. My work allows me to travel a lot to help organizations around the world understand these disruptive forces. However, when my son was born in the summer of 2022, I paused my travels, which gave me time to think deeply about what was happening.

And the more I thought about it, the more worried I became. Especially now that I have two children, I am determined to do everything I can to keep them from ending up in a dystopian digital future. The question is how?

Unfortunately, we are rapidly moving in the direction of a future nobody wants. Already, it seems we have become enslaved by our technologies.

Every time I see parents with a young child in a pram and a mobile phone attached to it, right in front of the child's eyes so it cannot look anywhere except at the bright colours dancing on the screen, it hurts me. I feel for the kid, who has no choice but to get addicted to digital technologies before he or she knows it.

If that is painful to watch, consider that Dutch research showed that 25% of babies under one year old spend two hours or more on a phone or tablet, and this is self-reported, so the real figure is most likely higher. Almost 10% of parents give their children a phone or tablet to watch in bed when it is time to go to sleep!

And that was even before ChatGPT was launched.

In my quest to understand how the world had changed when ChatGPT was launched in November 2022, I decided to write my fifth book with it. I dare to say I was the first person in the world to publish a book written by ChatGPT, because it became available only 1.5 weeks after the launch of this extraordinary tool.

And that’s also when it hit me. The world had changed, and it would never be the same.

Moreover, the dystopian future I had feared since the birth of my son had suddenly become a lot closer than I had anticipated, and with that, the already closing window of time to do something about it had become even shorter. The clock is ticking, and I feel the Doomsday Clock moved closer to midnight on the day ChatGPT was released.

I realized that these Large Language Models, whether from OpenAI, Google or any of the other Big Tech firms or startups developing generative AI tools, have become so powerful that increasingly we are at risk of losing control to the machines, and that is not something to look forward to.

Or, as the famous Jaron Lanier recently stated: “AI will not kill us, but AI driving us insane will.”

Now, I am an optimistic dystopian, trying to hold two opposing positions to understand how we can end up on the right side of history and ensure a thriving digital future instead of a dystopian one.

So, let’s first explore my dystopian side.

Isaac Asimov once said: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” That was in 1988, and his quote still seems eerily accurate.

For the past 35 years, we have sleepwalked into the digital age, and our lack of digital awareness has resulted in a range of problems.

If we thought that the polarization, manipulation, and misinformation of the social media era were bad, they will seem like child's play compared to what we can expect in the AI era, where technologies such as AI, quantum computing and the metaverse will converge, creating exponentially more polarization, manipulation, and misinformation. We are creating a perfect storm, but the question is, why?

I think the metaphorical representation of Moloch can help us understand the challenges and dilemmas we face. Moloch was originally an ancient mystical deity associated with child sacrifice. In the past 100 years, it has evolved into a metaphor for the destructive forces within society. Moloch symbolizes the consequences of competition, FOMO and self-interest that can lead to harmful outcomes, even when individuals act rationally, resulting in sub-optimal states for everyone.

The metaphor is derived from the idea that competition and self-interest, like the god Moloch, require constant sacrifice and consume the creativity and individuality of those subject to them.

The best example is capitalism, which drives innovation and wealth, but also results in exploitation, inequality, and environmental degradation. Just like capitalism, AI can help us tackle climate change, improve healthcare, help restore the Amazon and even create more empathetic machines, but it can also result in exploitation, massive inequality, and other negative consequences.

Already, ChatGPT has been used for misinformation campaigns in Venezuela and for phishing and malware attacks, and earlier this year, ChaosGPT was created with the explicit goal of destroying humanity. And that is only the beginning. Now we see the convergence of these forces, where AI created by a capitalist society will result in a tiny elite controlling society, or worse, the end of society as a result of FOMO and a pursuit of short-term shareholder value.

Unfortunately, capitalism has created a suboptimal state for humanity when it comes to the development of AI. It is Moloch at work. A significant shift in recent years has seen AI research move from universities, where ethics boards are standard, to the private sector, where oversight is often lacking.

According to the AI Index, compiled by researchers from Stanford University as well as AI companies, including Google, Anthropic, and Hugging Face, in 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia.

That is a problem, as companies are driven by a short-term shareholder perspective. This transition has been driven by the fear of missing out on AI advancements, as evidenced by Elon Musk's decision to develop his own AI competitor, TruthGPT, shortly after calling for a pause in AI development.

While we now have very advanced machine learning models, and some even believe we are getting close to artificial general intelligence, the models are often optimized to benefit the shareholders of the companies who created them and not society as a whole. In this process, a tiny elite aims to control vast amounts of wealth.

A good example is ChatGPT. Sam Altman predicted that OpenAI could capture up to $100 trillion of the world's wealth. And that is no surprise, given how they are deploying the technology they have developed. While it was extremely beneficial for OpenAI to release ChatGPT to the public, with over 100 million users testing it and feeding it lots of data in just two months, completely for free, the positive impact on society remains to be seen.

For example, their API now allows anyone with access to integrate the powerful Large Language Model into their own tools, where there is far less oversight and control. An example is Snapchat’s virtual friend driven by GPT-4, which, as Tristan Harris showed in a recent talk, encourages child abuse. Not the kind of technology we want on a platform with hundreds of millions of children active.

The more advanced AI becomes, the bigger the risks for humanity, and if we do not build it correctly, AI can pose an existential threat to us. Therefore, we must establish global alignment on the ethical use of AI and incorporate it into our culture, which is a separate challenge from ensuring that AI is aligned with human values.

We must fight Moloch. Although this seems like an insurmountable challenge, it is certainly possible, as we have done before.

For example, we have prevented the world from going down the rabbit hole of cloning humans, stopped anyone from freely creating bioweapons, and avoided every country possessing nuclear weapons. So, let’s remain optimistic and see how we can fight Moloch, as we have already done on a smaller scale.

So, how can we replicate the theatre’s positive environment in the realm of AI?

Let’s turn to my optimistic side.

I really do believe that, to quote Eminem, we still have "one shot, one opportunity to seize everything we ever wanted" for a better future. Will we capture it, or will we let it slip?

We must approach AI with greater caution and consideration to avoid the pitfalls of moving too fast and breaking things, as we've seen with social media. Instead of repeating past mistakes, let's harness the power of AI to create a future that benefits humanity as a whole, not just a select few.

To achieve this vision, we must become digitally aware. We must wake up from doom-scrolling TikTok videos, and we must read the instruction manuals, if they existed, of these very powerful tools, especially when children are involved. After all, we don’t allow kids to drive cars either; they must wait until they are 18 years old and obtain a driver’s license if they want to do so.

We can become digitally aware if we focus on three key areas: education, verification, and regulation.

First, education. We should focus on educating the world on the tools that will define our lives. Not only should we use AI in education, as my digital counterpart mentioned, though in a more ethical way, but to truly address the challenges posed by AI, we must elevate the world's digital awareness.

While today's generation may be considered digital natives, it's crucial to recognize that being native to the digital world doesn't automatically equate to being digitally aware.

To tackle the complex ethical and societal issues arising from AI, we need a comprehensive understanding of the implications of our digital footprint and the importance of data rights.

To achieve this, we must invest in education beyond mere technological literacy. It should encompass the ethical, social, and political dimensions of technology, empowering people to make informed decisions about how they engage with AI and the digital world. And this should start from an early age, as children nowadays become familiar with these tools from early on.

By fostering a culture of digital awareness, we can equip individuals with the knowledge and tools necessary to vote with their data, demand transparency and accountability from tech companies, and actively participate in shaping a responsible and equitable digital future.

Second, verification. As AI becomes an increasingly powerful force in our lives, we must develop robust methods to verify its functionality, intentions, and output. This involves creating systems and tools that can effectively identify AI-generated content and confirm the authenticity of digital identities in the rapidly evolving online landscape.

After all, if everyone can easily create a hyper-realistic digital deepfake as I did, we need to be able to verify whether the person we are dealing with, whether via voice, video or in the metaverse, is indeed who they say they are.
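The core idea behind such verification can be sketched in a few lines: a publisher attaches a cryptographic tag to a piece of media, and anyone can later check whether the content still matches what was originally published. The sketch below is a minimal illustration using Python's standard-library HMAC with a hypothetical shared key; real content-provenance standards such as C2PA use public-key signatures and signed metadata instead, and all names here are illustrative.

```python
import hashlib
import hmac

# Hypothetical shared secret between publisher and verifier. Real provenance
# systems use public-key signatures, so verifiers never hold a signing key.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Return an HMAC-SHA256 tag binding the content to the publisher's key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check whether the content still matches the tag the publisher issued."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"frame data of the original video"
tag = sign_content(original)

print(verify_content(original, tag))              # unmodified content -> True
print(verify_content(b"deepfaked frames", tag))   # tampered content -> False
```

Any alteration to the media, such as swapping in deepfaked frames, changes the hash and fails verification; the hard part in practice is distributing keys and embedding tags so they survive re-encoding and sharing.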

Finally, regulation. To manage the impact of AI on society and ensure the responsible development and deployment of AI technologies that impact our minds, we must establish a regulatory body with a comprehensive approval process similar to the FDA's process for drugs.

Just as the FDA evaluates the safety and efficacy of drugs that affect our physical health, this new regulatory agency would assess AI technologies that influence our mental well-being, cognitive processes, and decision-making abilities. Europe’s upcoming AI Act is a good start, but AI operates without borders, so we would need this at a global level.

Next, we should require companies developing AI to have ethics boards with genuine authority. These boards should have real decision-making power and be composed of diverse experts, including ethicists, engineers, social scientists, and representatives from affected communities. They must be given the authority to set guidelines, oversee the development and deployment of AI systems, and, if necessary, halt projects that don't meet ethical and safety standards.

These three solutions are very much achievable if we work together as a species. As we embark on this journey, let us draw inspiration from the theatre example, where cultural norms and values around the world guide our actions and promote cooperation. We must foster a long-term global culture that prioritizes ethical AI development and embraces the need for transparency, accountability, and collaboration across all sectors, in order to fight Moloch.

Despite our challenges, I remain optimistic. What keeps me hopeful? The simple truth is that we all want the best for our children, and that is a thriving digital future instead of a dystopian one.

For the past decade, I have been experimenting with the technology, and no, I am not a developer, data scientist or machine learning specialist. I cannot even code, but I am still trying to understand the inner workings of these powerful digital technologies, and so should everyone else. Yes, we are all busy leading our lives, but this is something so fundamental that we have to look up; we have to become aware of what’s happening and where we are going. Because if we are not looking up, if we are ignoring the signals that our society is on a path of destruction, how will we be able to protect future generations?

The potential of AI is vast, and the stakes are high. We have the power to determine whether AI becomes a force for good or a source of destruction. It is up to us to make the right choices, establish the necessary safeguards, and ensure that the future we create is one we can be proud of. With the right approach, we can shape a future where AI enriches our lives, solves complex problems, and ultimately serves humanity.

Thank you.

Images: Midjourney 

Dr Mark van Rijmenam


Dr. Mark van Rijmenam is a strategic futurist known as The Digital Speaker. He stands at the forefront of the digital age and lives and breathes cutting-edge technologies to inspire Fortune 500 companies and governments worldwide. As an optimistic dystopian, he has a deep understanding of AI, blockchain, the metaverse, and other emerging technologies, blending academic rigor with technological innovation.

His pioneering efforts include the world’s first TEDx Talk in VR in 2020. In 2023, he further pushed boundaries when he delivered a TEDx talk in Athens with his digital twin, delving into the complex interplay of AI and our perception of reality. In 2024, he launched a digital twin of himself, offering interactive, on-demand conversations via text, audio, or video in 29 languages, thereby bridging the gap between the digital and physical worlds – another world’s first.

Dr. Van Rijmenam is a prolific author and has written more than 1,200 articles and five books in his career. As a corporate educator, he is celebrated for his candid, independent, and balanced insights. He is also the founder of Futurwise, which focuses on elevating global knowledge on crucial topics like technology, healthcare, and climate change by providing high-quality, hyper-personalized, and easily digestible insights from trusted sources.
