Why Data Rights Will Be The New Civil Rights - EP12

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI and the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

Hello, everyone, and welcome to The Tech Journal. My name is Mark van Rijmenam, and I am The Digital Speaker. In this series, I project my digital twin into cyberspace to bring you the latest and greatest from the digital world.

I cover all the latest digital news from blockchain and crypto to quantum computing and AI. And I always try to take it a step further and look into what these digital innovations mean for our personal and professional lives.

You can view the episode above, watch it on Vimeo or YouTube, or listen to it on Anchor.fm, SoundCloud or Spotify.


In today's show, we have Dan Turchin sharing his insights with us on today's big hype, AI, and especially AI ethics. Dan Turchin is the Co-founder and CEO of PeopleReign, the leading AI platform for employee service automation. Prior to PeopleReign, Dan was the CEO of AIOps leader InsightFinder. He also co-founded Astound, an AI-first enterprise platform for HR and IT, and he was involved with DevOps leader BigPanda and security analytics company AccelOps. The common thread in his career is the future of work and how we can use AI to create better organisations. It is a pleasure to have you on the show. Welcome.

Dan Turchin 01:12

Dr. Mark, pleasure to be here. Thanks for having me.

Mark van Rijmenam 01:15

Is there anything else that you want to add that I forgot to mention in your introduction?

Dan Turchin 01:21

Today, we're going to talk about my life's passion. Like you said, it's thinking about how to be human first in an era of technology, and how to use AI and other really advanced technologies, but always with the focus on how we can augment the intelligence of humans rather than replace them. All these topics we're going to discuss today, some of which we started discussing on AI and the Future of Work, my podcast, a few weeks back. I'm really eager to start unpacking some of them and get your feedback as well.

Mark van Rijmenam 01:56

Sounds good, and I'm really excited to talk to you. As you mentioned, I was just a guest on your podcast, and I will include the link to your show AI and the Future of Work in the bio below. We had a great conversation in which we unpacked a lot about AI, and this time we'll continue the conversation and focus on the important topic you already mentioned: the ethical side, the human side of AI. This topic is very important to me; it was part of my PhD a few years ago, and the recent Netflix documentary has put a lot of new attention on this crucial topic. So I'd like to dive straight in, if that's okay with you. I'm very curious to hear: what is AI ethics to you, and why is it so important?

What is AI Ethics and Why is it so Important?

Dan Turchin 02:46

Yeah, so Mark, we're having this conversation at a fascinating time. I think most of us would agree that the world of work will change more in the next decade than it has in the last 250 years, since the first industrial revolution in Europe. But it is absolutely the case that there's never been a time in history when so many critical decisions have been made on our behalf with so little transparency.

Every day, big life decisions are made by algorithms, sometimes without our knowledge, and we're very infrequently informed about how they're made. Those are things like who gets a loan, who gets a job, who gets a discounted insurance premium, and, as we all know, even who gets on an aeroplane. Algorithms can predict risk based on data that contains information about historical decisions and the results of those decisions. But applying the field of ethics to AI requires a nuanced way of thinking about ethics.

Now, just to get some definitions out: ethics is really the set of moral principles that govern a person's behaviour. As AI increasingly makes human life decisions, it also needs a set of moral principles. AI ethics, the way I think about it, is really the framework we use to communicate when automated decisions are made, to understand how they're made, to evaluate the implications if they're made based on bad data, and to know when they were made using biased inputs and potentially flawed algorithms.

I'd say AI is great at telling us what is, let's say, popular. It's terrible at knowing what is fair. We need to enforce a strict code of ethics on AI before we can expect it to really improve the human condition. That's what this emerging field of AI ethics is to me.

What is Good and Bad for AI?

Mark van Rijmenam 04:53

Yeah, I think that immediately shows the difficulty in it, because when we talk about human ethics, it's already very difficult to know what is good and what is bad; what's good for you is maybe not good for me, and vice versa. Ethics is a very difficult topic. So then we move to AI, and we now have AI agents that are autonomous and can almost think for themselves and make decisions autonomously. What is then good and bad? That's what makes this field so tricky: if it's already difficult for us as humans to decide, what do good and bad mean for AI? How do you look at that? How do you determine what is good and bad for AI?

Dan Turchin 05:43

Well, first, I think we imbue AI with a level of intelligence, so we often think of it as a sentient being because of what we've seen in science fiction movies. Go back to the 1940s, when Isaac Asimov first proposed his three rules for robots; ever since then, the science fiction community has convinced us that these are beings that are much closer to being like humans. In fact, here is a great one for the show notes: Janelle Shane, who calls herself an AI humorist, wrote a book called 'You Look Like a Thing and I Love You', and it's all about the ways that we can trick AI into being quite silly.

So I think it's first important that, as a human society, we appreciate that we are responsible for applying a set of ethical guidelines to the AI that we develop. We need to remember that AI in its current state is only really intelligent in very narrow domains; when it comes to AGI, artificial general intelligence, AI has about the brainpower of an earthworm.

We should not get too carried away, and we should rein in the power of AI by realising that it's essentially math plus data. There's a big conceptual leap between math plus data and sentient human beings, and hence the reason I think it's incumbent on us in the AI community to really give structure to this kind of ethical framework.

A Risk-based Approach to AI

Mark van Rijmenam 07:30

Yeah, but despite that big gap between sentient AI and math plus data, there are already plenty of examples where AI, or algorithms, have gone rogue. That's why it is important that we create a society where we have these guidelines, as you mentioned, for how to deal with it. And that's the interesting thing about what's happening here in Europe: the EU recently announced a risk-based approach to AI, apart from launching its AI guidelines in 2019 as well.

This risk-based approach basically sorts AI into four risk categories. Unacceptable risk covers AI systems that are considered a threat to the safety, livelihoods, and rights of people; those will be banned. Then you have high-risk systems, in areas such as critical infrastructure, law enforcement, and the administration of justice, where it's important that we do the right thing with AI and ensure a high level of robustness, security, and accuracy. The third one is limited risk, where the AI system requires specific transparency obligations.

Think of chatbots: we should know that we are interacting with a machine. And the lowest one is minimal risk, where the framework proposes free use of AI in applications such as games or spam filters, which actually have a clear benefit to society. I think this risk-based approach is a great first step forward. What do you think of it? And what is the US doing in this regard?
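To make the four tiers easier to picture, here is a minimal illustrative sketch in Python; the tier descriptions paraphrase the proposal, and the example systems are my own assumptions rather than an official classification:

```python
from enum import Enum

class AIRiskTier(Enum):
    """Illustrative version of the EU's proposed four-tier risk model."""
    UNACCEPTABLE = "banned outright (e.g. systems that threaten safety, livelihoods or rights)"
    HIGH = "allowed only with strict obligations (e.g. critical infrastructure, law enforcement)"
    LIMITED = "allowed with transparency duties (e.g. chatbots must disclose they are machines)"
    MINIMAL = "largely unregulated (e.g. spam filters, games)"

# Hypothetical mapping, only to illustrate how a compliance team might
# triage its systems; not an official list from the regulation.
EXAMPLE_SYSTEMS = {
    "real-time biometric surveillance in public spaces": AIRiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring": AIRiskTier.HIGH,
    "customer-service chatbot": AIRiskTier.LIMITED,
    "email spam filter": AIRiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} - {tier.value}")
```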

Data Rights are the new Civil Rights

Dan Turchin 09:22

So on AI and the Future of Work, my podcast, you introduced the concept, and I know Yuval Noah Harari has discussed it too: essentially the replacement of liberalism, the democratic, free-flowing exchange of ideas, with digitalism. I love the concept, and I've been thinking about it since our conversation, not so much in a cynical way, but in a very pragmatic way. I feel like, increasingly, it's the big seven tech companies, you know, Amazon, Google, Facebook, Microsoft, and maybe I'd include Alibaba and Tencent in China, that are really functioning as governments. That is where decisions are made about what we think, what we see, where we get healthcare advice. A lot of the things that were in the traditional domain of democracies or governments are now in the domain of this very small group of owners of data.

I feel strongly that for the next century, data rights are really the new civil rights. Policy moves much slower than technology, and as a result, decisions that would historically be made in the halls of power by elected representatives are increasingly made by seven global tech companies, and maybe that number expands or contracts. But this point you've made about digitalism replacing liberalism, I think, is a nice way to think about the responsibility that we have.

You asked what's going on in the United States. Well, most of us listening have probably seen the viral video of Mark Zuckerberg explaining how the internet works to Congress back in 2018. It's just a farce to think that those legislators will do anything to intelligently regulate innovation when they're competing with the profit motive of companies like Google and Facebook, and a very poorly informed public.

Now, in terms of regulation pushing its way through federal and state legislative bodies in the United States: as of 2021, 16 of our 50 states have at least one bill going through the legislative review process. New Jersey, in fact, has seven; a small state, but it's actually leading the country. The federal Congress also has some bills; there's one that was recently introduced called the Advancing American AI Act. But I'd say all of these are, again, far behind where technology companies are. The reason I say that is because oftentimes they are just in the process of creating ways to educate legislators, and by the time the legislators are sufficiently educated, the tech companies are three, four, five, seven years ahead.

Traditional government is at a big disadvantage here, and I firmly believe we need to think differently about how we legislate or regulate AI. I'm in favour of not just calling tech executives to these circus acts to testify in front of Congress; I propose something more like a decentralised approach to managing these automated decisions. I think anyone who is potentially impacted by these automated decisions should be able to share their experiences in a kind of open forum.

I propose using some kind of technological construct like a blockchain, whereby we all have greater visibility into what decisions were made, what data was used, and how the decisions would be sensitive to changes in the data. In the United States, we have something called the FDA, the Food and Drug Administration, which says whether or not food and drug products are safe for consumers. I think there needs to be a parallel framework, only governed in a decentralised way, that has authority over whether AI is being used responsibly to make decisions. To me, that's the AI governance framework of the future.
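As a rough illustration of what such a tamper-evident decision record could look like, here is a minimal Python sketch of an append-only, hash-chained log; the field names and the loan-scoring example are hypothetical, and a real decentralised system would of course involve far more than this:

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained log of automated decisions (illustrative only)."""

    def __init__(self):
        self.entries = []

    def record(self, system, inputs, decision):
        # Each entry commits to the previous one, so later tampering is detectable.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {
            "timestamp": time.time(),
            "system": system,
            "inputs": inputs,
            "decision": decision,
            "prev_hash": prev_hash,
        }
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.entries.append(payload)

    def verify(self):
        """Recompute every hash; any altered entry breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev_hash"] != prev or entry["hash"] != recomputed:
                return False
            prev = entry["hash"]
        return True

log = DecisionLog()
log.record("loan-scoring-model", {"income": 52000}, "declined")
log.record("loan-scoring-model", {"income": 87000}, "approved")
print(log.verify())  # True unless an entry was altered after the fact
```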

Mark van Rijmenam 13:39

Yeah, and ideally, of course, this decentralised framework is not controlled and owned by big tech, as so much is at the moment. These big tech companies have so much money and such big lobbying power that they can basically create or cancel any legislation they want, because they simply have the power to do so.

Combine that with a largely ignorant government, which I think is not only happening in America; it's also happening here in the Netherlands and in many countries where the people responsible for creating the legislation have zero understanding of what's going on. To a certain extent, I can't really blame them, because one of my objectives, part of my work, is to remain up to date with all the changes happening in the technological space, and even I can barely keep up because the world is changing so fast.

How can you expect legislators to keep up with what's happening if they also have to do all kinds of other things? I think that's where the problem lies. And then the question is, of course: do we want to rely on the legislator, or do we want the markets to organise this themselves?

We live in a capitalist society, and relying purely on the market will be challenging, because AI offers so many advantages in terms of efficiency and making your organisation leaner that it will be quite difficult to do so. But maybe the academic world might offer a solution there as well. Speaking of academics: during my PhD, responsible AI in particular was part of the observations I did, and academics also often say that this is what we need to do to rein in AI, that we need to create responsible AI. But in your opinion, what is responsible AI, and how can we create it?

What is Responsible AI and how can we Create Responsible AI?

Dan Turchin 15:44

I love that we're having this conversation, and I wish more people were considering what it means to practise AI responsibly. One of the things to keep in mind is that AI is, at its foundations, a human activity.

Everything that AI does wrong can potentially be undone and done in the right way. Because humans control AI, we should never feel like we've ceded control to robot overlords. To me, the three principles of responsible AI are: 1) AI must be transparent, so we know when automated decisions are made on our behalf; 2) it must be predictable, so the same inputs reliably generate the same outputs; and 3) it must be configurable.

When it's determined that AI is making poor decisions, either because of biased data or because of issues with the algorithms, we, as those impacted by automated decisions, should be able to run our own sensitivity analyses to figure out how the outputs would change if the inputs were changed. I feel strongly that all algorithms should be scored on a scorecard, almost like you'd score the hygiene of a restaurant: it should be very clear what weights are assigned to what data. It's easy to say that AI should behave 'fairly', which is a subjective term, and that it should be explainable, but I don't think that goes far enough.
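To make the predictability and configurability principles concrete, here is a minimal sketch in Python, assuming a scikit-learn-style model; the loan-scoring framing, the synthetic data and the feature names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy, fully synthetic "loan" data: columns are [income, debt_ratio]
# (both standardised); the label is whether the loan was approved.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[0.2, 0.4]])

# Predictability: the same input should reliably give the same output.
assert np.allclose(model.predict_proba(applicant),
                   model.predict_proba(applicant))

# Configurability / sensitivity: nudge one input (income) while holding
# everything else fixed and watch how the decision responds.
for delta in (-0.5, 0.0, 0.5):
    perturbed = applicant.copy()
    perturbed[0, 0] += delta
    p = model.predict_proba(perturbed)[0, 1]
    print(f"income shift {delta:+.1f} -> approval probability {p:.2f}")
```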

In the real world, I think it's incumbent on us to define what it means to have responsible AI and to go a step further than that: to actually have frameworks that enforce the adoption of these principles of responsible AI, not just by the developers of the algorithms, but by everyone who participates in the ecosystem around these automated decisions.

Audit System for AI-based Systems

Mark van Rijmenam 17:52

Yeah, and I like that you mention three criteria, or guidelines, that responsible AI needs to adhere to. I actually want to add a fourth one: AI needs to be able to explain itself, ideally in plain language, whether that's plain English or plain Dutch or whatever. When it makes a decision, or when an error occurs, it should be able to say: I made an error because of x, y and z, and if you go to the code here, you can find why I made this decision and you can change it. Ideally, we would embed this explainability within the AI systems of the future, because with AI all becoming black boxes, it is so difficult for us, even for developers, to understand what's going on that we almost need to rely on the AI itself to explain to us what is going wrong and why a certain decision has been made.
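As a small sketch of what such plain-language explanations could look like for a simple model, here is an illustrative Python example; the feature names, data and wording are hypothetical, and real black-box models would need dedicated explanation techniques rather than reading coefficients directly:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A deliberately simple linear model, so its decision can be decomposed
# exactly into per-feature contributions. Data and names are illustrative.
feature_names = ["income (normalised)", "debt ratio (normalised)"]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def explain_in_plain_language(x):
    """Turn per-feature contributions into a short human-readable sentence."""
    contributions = model.coef_[0] * x            # signed pull of each feature
    decision = "approved" if model.predict([x])[0] == 1 else "declined"
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda kv: abs(kv[1]), reverse=True)
    top_name, top_value = ranked[0]
    direction = ("supported" if (top_value > 0) == (decision == "approved")
                 else "worked against")
    return (f"The application was {decision}; the biggest factor was "
            f"'{top_name}', which {direction} this outcome.")

print(explain_in_plain_language(np.array([0.8, -0.3])))
```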

In addition to that, I think we are moving to a society where our financials are done automatically, so the accountancy firms of the world become less and less relevant, because with the push of a button we can get the full financial statements of a company. But these firms are very well equipped to move to the next phase of monitoring: doing the accountancy not on the finances anymore, but on the AI. Ideally, we would start with publicly listed companies, which would need a stamp of approval from one of the big four accountancy firms confirming that, yes, their AI systems are honest, they are transparent, they do what they say they do, and you can trust this AI. Only once they have that approval, just as they need approval for their finances, can they move ahead. I think that's something we as a society should really strive to move towards.

Dan Turchin 19:59

Good idea, yeah. An audit system for AI-based systems.

Mark van Rijmenam 20:05

Yeah, especially because AI is becoming so much more difficult to understand. That is also what happened a few years ago: for those who remember, Facebook killed an AI that had created its own language that even its developers no longer understood. And then there is a famous book by Nick Bostrom, Superintelligence from 2014: if you want to be scared about the future, I can definitely recommend reading that book.

It's sort of the holy grail in terms of AI books. He discusses several technical and organisational methods to prevent AI from going crazy, and one tactical approach would be to implement a kill switch, as the Facebook developers did: they just killed the AI. What is your idea around this? Do you think such technical measures would actually work if we move to an era, although still far away, where AI becomes more and more intelligent?

How to Create Responsible AI?

Dan Turchin 21:06

Superintelligence is a good one by Bostrom. I would add to that list Artificial Unintelligence by Meredith Broussard, who I think is a professor at NYU; also a good one, slightly less cynical than Bostrom, but a good read to your point. I do recall the chatbots that Facebook introduced, I think they were called Bob and Alice, that started talking in their own language.

Then there was the Microsoft Tay bot, which I'm sure we all remember, that started spewing misogynistic and anti-semitic comments after about 12 hours, because it turned out it was really easy to weaponise the AI and make it essentially parrot back whatever users said. These are dangers, and these are the things that seep into the popular press and get a lot of coverage.

Every technology company has had, or will soon have, their Tay or their Bob and Alice moment if we're not very careful about how these technologies are deployed. My fear, which is kind of what Bostrom covers, and even Meredith Broussard, is that the implications could be a lot greater. It may not just be somewhat innocuous, although hurtful, remarks made by a chatbot that's in experimental mode.

When we start talking about flight paths, and minefields, and global warming, let alone policing in urban areas, these are all domains that are being affected by AI. Before we're prepared to just devolve control to automated decision-making, I think it's really important that we think about not just what can happen if the AI performs as it should, but that we always ask the question: what could go wrong? Until we really feel the technology is a lot more mature, and culturally we're more prepared, having adopted these AI ethical frameworks, to coexist alongside systems that make automated decisions, I think it's really important that we tread lightly and get really good at thinking through a few very tangible things to prevent AI from going rogue. I mentioned asking what could go wrong, but I think it's also important that we understand and introspect the data being fed to the AI, to see what kind of bias is encoded in the data before we deploy a system.

Again, math plus data equals AI, and data is a really important component. I think we also need to get more comfortable involving more types of personas, more types of users, in the AI vetting process. It should not just be the domain of developers or data scientists; I think it's equally important for, say, finance or marketing to be involved. If you work at a bank that is using AI to make automated decisions about who gets a loan, I think it's really important that everyone who's part of the lifecycle of the loan management process is aware of how AI is being used to impact the end consumers of the bank. So those are three specific things that we could do better as we acknowledge the power of AI and, either intentionally or unintentionally, cede more really important decision-making to it.
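As a minimal sketch of what introspecting the data for encoded bias might look like in practice, here is an illustrative Python example; the dataset, column names and threshold are hypothetical:

```python
import pandas as pd

# Hypothetical historical hiring data; column names and values are
# invented purely for illustration.
df = pd.DataFrame({
    "gender": ["M", "M", "F", "F", "M", "F", "M", "F"],
    "hired":  [1,   1,   0,   1,   1,   0,   1,   0],
})

# A simple pre-deployment check: compare positive-outcome rates per group.
rates = df.groupby("gender")["hired"].mean()
print(rates)

# Flag a large gap (the threshold is arbitrary and context-dependent).
gap = rates.max() - rates.min()
if gap > 0.2:
    print(f"Warning: outcome rates differ by {gap:.0%}; "
          "the training data may encode historical bias.")
```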

Mark van Rijmenam 24:56

I couldn't agree more. And I think the recent Netflix documentary Coded Bias, which also features Cathy O'Neil, author of the very good book Weapons of Math Destruction, showed us how many things can go wrong with AI. The main thing we should not forget, of course, is that AI is always developed by biased developers and trained with biased data; no matter how hard we try to remove the bias, there will always be some in there. This documentary really showed some of the problems of biased AI causing havoc and chaos within organisations. But apart from the Tay example, which of course was pretty bad, can you name some more recent examples you are aware of where AI really affected people?

AI Gone Rogue

Dan Turchin 25:56

So yes, Coded Bias is a great example. The documentary follows the work of Joy Buolamwini, who was at MIT doing some work on facial recognition. She was kind of an artist first as well as a technologist, and has since founded the AJL, the Algorithmic Justice League, to track a lot of these situations where AI has been used and has had unintended consequences.

A few notable examples: this goes back quite a few years, but there was the massive IBM Watson failure at the MD Anderson Cancer Centre, where they literally spent $62 million to develop a Watson-based system that could detect cancer. It was subsequently determined that it was frequently misdiagnosing cases and performed far worse than humans at cancer detection. Amazon famously stopped using AI to screen candidates for engineering jobs because it was disproportionately recommending white males, since, to no one's surprise, that's who had had the most success in Amazon engineering roles.

Janelle Shane gives a funny example in the book I mentioned previously, where a hedge fund used AI for candidate selection and found that people named Jared who play lacrosse tended to be the most successful in roles at the hedge fund. But humour aside, the implications are real; I mentioned who gets a job and things like that.

But these are just the examples we hear about in the popular press. Joy Buolamwini gives a lot of more interesting examples; in fact, in Coded Bias they talk about Big Brother Watch, an initiative in the UK that was following police vehicles doing on-the-street face detection to look for potential criminals in and around London.

There's also a famous example in the United States where Amazon Rekognition, a facial recognition product from Amazon, was used to try to match the faces of members of Congress with the faces of people in a known criminal database. It turned out that 28 members of Congress, none of whom actually had a criminal record, were determined by Rekognition to match the mugshots of criminals. So we start to get into some pretty deep water in terms of using AI in ways that violate our civil rights.

These are all the telltale signs of an authoritarian state that we need to be careful about. It's not imminent, but if we as a technology community don't really govern how these decisions are made, it's clear that regulatory bodies and elected officials aren't really in a good position to make these decisions for us. All the more reason why conversations like this are so critical to be having now.

Mark van Rijmenam 29:34

Yeah, absolutely. I think the examples you mention show us what a long road we still have to go in order to create AI that is good for us, AI that's beneficial to organisations, to society, to citizens, to customers, and so on. So what kind of tips would you give to organisations, but also to legislators, on how to regulate AI? What's the best approach to follow here?

Vote with your Data

Dan Turchin 30:07

Yeah, I'm going to go back in the Wayback Machine; I referred to Isaac Asimov earlier. I'll start with his three rules for robots, which have withstood the test of time, and then I've got a few other thoughts to add. Asimov published these in a science fiction short story called Runaround, which I think actually goes back to the 40s.

He said: one, robots may not injure humans, which makes sense; two, robots must obey orders from humans, except where such orders would conflict with rule number one; and three, robots must protect their own existence, as long as such protection doesn't conflict with rules one or two. A bit of a high-level summary, but a good framework, I think, for thinking about the use of AI.

I would add, let's say, two rules to Asimov's that put a more modern spin on that framework. I'd say: get educated, don't be complacent, channel your inner Joy Buolamwini. Look to leaders in cities like San Francisco, Portland or Boston that have already banned the use of facial recognition by police. I think things like that are a good step forward, at least until we can verify that these automated decisions won't disproportionately impact underrepresented communities.

So get educated, have a voice. Another actionable behaviour I propose for all of us is to vote with your data. Things like GDPR in Europe, and the similar laws proliferating in the United States, make it easy for us to get access to our data when we're concerned it might be misused. Whenever you see organisations that are misusing your data, or using it for questionable purposes, boycott those organisations. Exercise your right to enforce the principles of responsible AI not just on those big seven tech companies, but on all of the tech companies that look to the big seven as role models.

Mark van Rijmenam 32:22

Yeah, I agree with you, there is also a responsibility with us, the consumers, the citizens, because how often do we just not read the terms and conditions and accept whatever is asked, simply to use an application? I'm guilty of it as well; everyone does it. So there is a real responsibility for us, and we can indeed vote with our data; we can say: stop, this is enough.

We can decide to no longer use a particular service, although that might be a bit challenging. We've seen it recently happen with WhatsApp, when Facebook decided to share WhatsApp data with Facebook, which they always promised they would never do. All of a sudden they were doing it, and a lot of people did leave, but it's still difficult to leave because of the high barrier, the high switching cost, that comes with a social network. That's what Zuckerberg knows and what he counts on when making such a move. It's a shame that we are moving to a society where these big tech companies are so powerful.

There's a real responsibility for us consumers to take action and also to force legislators, as you correctly mentioned, to get educated, to start doing something with this, and to pay attention to what's happening in the field of AI. So, thanks, Dan; that's a wrap. We could easily talk for hours and hours on this topic, there's so much happening, but I really would like to thank you for joining me on the show, and I'm looking forward to continuing the conversation on another podcast in the near future.

Dan Turchin 34:04

Mark, always a pleasure. I think it's so critical that we're having this conversation and that we involve more people in it, and I'm glad that The Digital Speaker is having it. We'd love to hear feedback from anyone listening. I feel like we're all in an important position right now, at a critical place in history, where we can influence whether technology gets applied for good or not. I look forward to continuing the dialogue in the future.

Mark van Rijmenam 34:34

Absolutely. Well, thank you so much for joining us today. I've been your host, Mark van Rijmenam, The Digital Speaker, and this has been The Tech Journal. Do not forget to hit the like and subscribe button, and I will see you next time for your information download. Stay digital.

Dr Mark van Rijmenam

Dr. Mark van Rijmenam is a strategic futurist known as The Digital Speaker. He stands at the forefront of the digital age and lives and breathes cutting-edge technologies to inspire Fortune 500 companies and governments worldwide. As an optimistic dystopian, he has a deep understanding of AI, blockchain, the metaverse, and other emerging technologies, blending academic rigor with technological innovation.

His pioneering efforts include the world's first TEDx Talk in VR in 2020. In 2023, he further pushed boundaries when he delivered a TEDx talk in Athens with his digital twin, delving into the complex interplay of AI and our perception of reality. In 2024, he launched a digital twin of himself, offering interactive, on-demand conversations via text, audio, or video in 29 languages, thereby bridging the gap between the digital and physical worlds, another world's first.

Dr. Van Rijmenam is a prolific author and has written more than 1,200 articles and five books in his career. As a corporate educator, he is celebrated for his candid, independent, and balanced insights. He is also the founder of Futurwise, which focuses on elevating global knowledge on crucial topics like technology, healthcare, and climate change by providing high-quality, hyper-personalized, and easily digestible insights from trusted sources.
