Why Data Rights Will Be The New Civil Rights - EP12

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI or the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

Hello, everyone, and welcome to The Tech Journal. My name is Mark van Rijmenam, and I am The Digital Speaker. In this series, I project my digital twin into cyberspace to bring you the latest and greatest from the digital world.

I cover all the latest digital news from blockchain and crypto to quantum computing and AI. And I always try to take it a step further and look into what these digital innovations mean for our personal and professional lives.

You can either watch the episode above, watch it on Vimeo or YouTube, or listen to it on Anchor.fm, SoundCloud or Spotify.


In today’s show, we have Dan Turchin, sharing his insights with us on the hype of today, AI, but especially AI ethics. Dan Turchin is the Co-founder and CEO of PeopleReign – the leading AI platform for employee service automation. Prior to PeopleReign, Dan was the CEO of AIOps leader InsightFinder. He has also co-founded Astound, an AI-first enterprise platform for HR and IT, and he was involved with DevOps leader BigPanda and security analytics company AccelOps. The common thread in his career is the future of work and how we can use AI to create better organisations. It is a pleasure to have you on the show. Welcome.

Dan Turchin  01:12

Dr. Mark, pleasure to be here. Thanks for having me.

Mark van Rijmenam  01:15

Is there anything else that you want to add that I forgot to mention in your introduction?

Dan Turchin  01:21

Today we’re going to talk about my life’s passion. Like you said, it’s thinking about how to be human first in an era of technology, and how to use AI and other really advanced technologies with the focus always on augmenting the intelligence of humans rather than replacing humans. All these topics we’re going to discuss today, some of which we started discussing on AI and the Future of Work, my podcast, a few weeks back, I’m really eager to start to unpack, and to get your feedback as well.

Mark van Rijmenam  01:56

Sounds good, and I’m really excited to talk to you. As you mentioned, I was just a guest on your podcast, and I will include the link to your show, AI and the Future of Work, in the bio below. We had a great conversation where we unpacked a lot around AI, and this time we’ll continue the conversation and focus on, as you already mentioned, an important topic: the ethical side, the human side of AI. This topic is very important to me; it was part of my PhD a few years ago, and the recent Netflix documentary has put renewed attention on this crucial topic. So I’d like to dive straight in, if that’s okay with you. I’m very curious to hear: what is AI ethics to you, and why is it so important?

What is AI Ethics and Why is it so Important?

Dan Turchin  02:46

Yeah, so Mark, we’re having this conversation at a fascinating time. I think most of us would agree that the world of work will change more in the next decade than it has in the last 250 years, since the first industrial revolution in Europe. But it is absolutely the case that there’s never been a time in history when so many critical decisions have been made on our behalf with so little transparency.

Every day, big life decisions are made by algorithms, sometimes without our knowledge, and we’re very infrequently informed about how they’re made. Those are things like who gets a loan, who gets a job, who gets a discounted insurance premium, and even, as we all know, who gets on an aeroplane. Algorithms can predict risk based on data that contains information about historical decisions and the results of those decisions. But applying the field of ethics to AI requires a bit of nuanced thinking about ethics.

Now, just to get some definitions out: ethics is really the set of moral principles that govern a person’s behaviour. But as AI increasingly makes human life decisions, it also needs a set of moral principles. AI ethics, the way I think about it, is really the framework we use to communicate when automated decisions are made, to understand how they’re made, to evaluate the implications if they’re made based on bad data, and to know when they were made using biased inputs and potentially flawed algorithms.

AI is great at telling us what is, let’s say, popular. It’s terrible at knowing what is fair. We need to enforce a strict code of ethics on AI before we can expect it to really improve the human condition. That’s what AI ethics, this emerging field, is to me.

What is Good and Bad for AI?

Mark van Rijmenam  04:53

Yeah, well, I think that immediately shows the difficulty in it, because when we talk about human ethics it’s already very difficult to know what is good and what is bad; what’s good for you may not be good for me, and vice versa. Ethics is a very difficult topic. And then we move to AI, where we now have AI agents that are autonomous and can almost think for themselves and make decisions autonomously. What is then good and bad? That’s what makes this field so tricky: if it’s already difficult for humans to decide what good and bad mean, what do they mean for AI? How do you look at that? How do you determine what is good and bad for AI?

Dan Turchin  05:43

Well, first, I think we imbue AI with a level of intelligence and often think of it as a sentient being because of what we’ve seen in science fiction. Go back to Isaac Asimov, who first proposed three rules for robots, and ever since then the science fiction community has convinced us that these are beings much closer to being like humans. In fact, here’s a great one for the show notes: Janelle Shane, who calls herself an AI humorist, wrote a book called ‘You Look Like a Thing and I Love You’, and it’s all about the ways we can trick AI into being quite silly.

So I think it’s first important that, as a human society, we appreciate that we are responsible for applying a set of ethical guidelines to the AI that we develop. And we need to remember that AI in its current state is only really intelligent in very narrow domains. When it comes to AGI – artificial general intelligence – AI has about the brainpower of an earthworm.

We shouldn’t get too carried away, and we should rein in the power of AI by realising that it’s essentially math plus data. There’s a big conceptual leap between math plus data and sentient human beings. Hence, I think it’s incumbent on us in the AI community to really give structure to this kind of ethical framework.

A Risk-based Approach to AI

Mark van Rijmenam  07:30

Yeah, but despite that big gap between sentient AI and math plus data, there are already plenty of examples where AI, or algorithms, have gone rogue. That’s why I think it is important that we create a society where we have these guidelines, as you mentioned, for how to deal with it. And I think the interesting thing happening here in Europe is that the EU recently announced a risk-based approach to AI, apart from launching its AI guidelines in 2019.

This risk-based approach basically classifies AI into four risk levels. Unacceptable risk means AI systems considered a threat to safety, livelihoods and the rights of people; those will be banned. Then you have high-risk systems, used in areas such as critical infrastructure, law enforcement and the administration of justice, where it’s important that we do the right thing with AI and ensure a high level of robustness, security and accuracy. The third one is limited risk, where the AI system requires specific transparency obligations.

Think of chatbots, where we should know that we are interacting with a machine. And the lowest one is minimal risk, where the framework proposes free use of AI in applications such as games or spam filters, which have a clear benefit to society. I think this risk-based approach is a great first step forward. What do you think of it? And what is the US doing in this regard?
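For readers who like to see the structure laid out, here is a tiny illustrative sketch of the four tiers Mark describes and how an organisation might tag its own use cases against them. This is an editorial addition, not part of the conversation; the EU AI Act defines the categories in legal text, not code, and the example use cases are hypothetical.

```python
from enum import Enum

class EURiskTier(Enum):
    UNACCEPTABLE = "banned: threat to safety, livelihoods or the rights of people"
    HIGH = "strict obligations: robustness, security, accuracy, oversight"
    LIMITED = "transparency obligations, e.g. disclose that users talk to a bot"
    MINIMAL = "free use, e.g. spam filters and games"

# Hypothetical internal inventory of AI use cases mapped to tiers.
use_cases = {
    "social scoring of citizens": EURiskTier.UNACCEPTABLE,
    "CV screening for hiring":    EURiskTier.HIGH,
    "customer-support chatbot":   EURiskTier.LIMITED,
    "email spam filter":          EURiskTier.MINIMAL,
}

for name, tier in use_cases.items():
    print(f"{name:28s} -> {tier.name}: {tier.value}")
```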

Data Rights are the new Civil Rights

Dan Turchin  09:22

So on AI and the Future of Work, my podcast, you introduced the concept – and I know Yuval Noah Harari has discussed it – of essentially replacing liberalism, the democratic, free-flowing exchange of ideas, with digitalism. I love the concept, and I’ve been thinking about it since our conversation, not so much in a cynical way but in a very pragmatic way. Increasingly, it’s the big seven tech companies – Amazon, Google, Facebook, Microsoft, and maybe I’d include Alibaba and Tencent in China – that are really functioning as governments. That is where decisions are made about what we think, what we see, where we get healthcare advice. A lot of the things that were in the traditional domain of democracies or governments are now in the domain of this very small group of owners of data.

I feel strongly that for the next century, data rights are really the new civil rights. Policy moves much slower than technology, and as a result, decisions that would historically be made in the halls of power by elected representatives are increasingly made by seven global tech companies – and maybe that number expands or contracts. But the point you’ve made about digitalism replacing liberalism is, I think, a nice way to think about the responsibility we have.

You asked what’s going on in the United States. Well, most of us listening have probably seen the viral video of Mark Zuckerberg explaining how the internet works to Congress back in 2018. It’s a farce to think that those legislators will do anything to intelligently regulate innovation when they’re competing with the profit motive of companies like Google and Facebook and a very poorly informed public.

Now, in terms of regulation pushing its way through federal and state legislative bodies in the United States: as of 2021, 16 of our 50 states have at least one bill going through the legislative review process. New Jersey, in fact, has seven – a small state, but it’s actually leading the country. The federal Congress also has some bills; one that was recently introduced is called the Advancing American AI Act. But I’d say all of these are, again, far behind where the technology companies are. The reason I say that is because oftentimes they are just in the process of creating ways to educate legislators, and by the time the legislators are sufficiently educated, the tech companies are three, four, five, seven years ahead.

My take is that traditional government is at a big disadvantage here, and I firmly believe we need to think differently about how we legislate or regulate AI. I’m not in favour of just calling tech executives in for these circus acts of testifying in front of Congress; I propose something more like a decentralised approach to managing these automated decisions. I think anyone who’s potentially impacted by these automated decisions should be able to share their experiences in an open forum.

I propose using some kind of technological construct like a blockchain, whereby we all have greater visibility into what decisions were made, what data was used, and how sensitive the decisions would be to changes in the data. In the United States we have the FDA, the Food and Drug Administration, which says whether or not food and drug products are safe for consumers. I think there needs to be a parallel framework, only governed in a decentralised way, that has authority over whether AI is being used responsibly to make decisions. To me, that’s the AI governance framework of the future.
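A minimal sketch of what such a shared, tamper-evident decision record could look like in code, added here purely as an illustration and not something discussed on the show: each automated decision is stored with the model, inputs and outcome, and hash-chained so that later alterations are detectable. All class, field and model names are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionRecord:
    """One automated decision: what was decided, by which model, on what data."""
    model_id: str          # e.g. "loan-approval-v3" (hypothetical)
    inputs: dict           # the features the model saw
    outcome: str           # e.g. "denied"
    prev_hash: str         # hash of the previous record (the chain link)
    record_hash: str = ""  # filled in when the record is sealed

    def seal(self) -> None:
        payload = json.dumps(
            {"model_id": self.model_id, "inputs": self.inputs,
             "outcome": self.outcome, "prev_hash": self.prev_hash},
            sort_keys=True)
        self.record_hash = hashlib.sha256(payload.encode()).hexdigest()

class DecisionLedger:
    """Append-only, hash-chained log of automated decisions."""
    def __init__(self) -> None:
        self.records: List[DecisionRecord] = []

    def append(self, model_id: str, inputs: dict, outcome: str) -> DecisionRecord:
        prev = self.records[-1].record_hash if self.records else "genesis"
        rec = DecisionRecord(model_id, inputs, outcome, prev_hash=prev)
        rec.seal()
        self.records.append(rec)
        return rec

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "genesis"
        for rec in self.records:
            expected = DecisionRecord(rec.model_id, rec.inputs, rec.outcome, prev)
            expected.seal()
            if expected.record_hash != rec.record_hash:
                return False
            prev = rec.record_hash
        return True

if __name__ == "__main__":
    ledger = DecisionLedger()
    ledger.append("loan-approval-v3", {"income": 54000, "age": 31}, "approved")
    ledger.append("loan-approval-v3", {"income": 23000, "age": 47}, "denied")
    print("ledger intact:", ledger.verify())    # True
    ledger.records[0].inputs["income"] = 99000  # simulate tampering
    print("ledger intact:", ledger.verify())    # False
```

In a real decentralised deployment the same records would be replicated across independent parties rather than held in one Python process; the hash chain is only the smallest piece of the idea.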

Mark van Rijmenam  13:39

Yeah, and ideally, of course, this decentralised framework is not controlled and owned by big tech, which at the moment has so much money and such big lobbying power that it can basically create or cancel any legislation it wants, because it simply has the power to do so.

Combine that with a largely ignorant government, which is not only the case in America but also here in the Netherlands and in many other countries, where the people responsible for creating legislation have little understanding of what’s going on. To a certain extent, I can’t really blame them, because one of the objectives of my work is to stay up to date with all the changes happening in the technological space, and even I can barely keep up, because the world is changing so fast.

How can you expect legislators to keep up with what’s happening if they also have all kinds of other work to do? I think that’s where the problem lies. And then the question is, of course: do we want to rely on the legislator, or do we want the market to organise this itself?

We live in a capitalist society, and relying purely on the market will be challenging, because AI offers so many advantages in terms of efficiency and making your organisation leaner. But maybe the academic world can offer a solution as well. Speaking of academia: during my PhD, responsible AI in particular was part of the observations I did, and academics often say that this is what we need to do to rein in AI – we need to create responsible AI. But in your opinion, what is responsible AI, and how can we create it?

What is Responsible AI and how can we Create Responsible AI?

Dan Turchin  15:44

I love that we’re having this conversation, and I wish more people were considering what it means to practise AI responsibly. One of the things to keep in mind is that AI is, at its foundation, a human activity.

Everything that AI does wrong can potentially be undone and done the right way, because humans control AI; we should never feel like we’ve ceded control to robot overlords. To me, the three principles of responsible AI are: 1) AI must be transparent – we need to know when automated decisions are made on our behalf; 2) it must be predictable – the same inputs should reliably generate the same outputs; and 3) it must be configurable.

When it’s determined that AI is making poor decisions, either because of biased data or because of issues with the algorithms, we, as those impacted by automated decisions, should be able to run our own sensitivity analyses to figure out how the outputs would change if the inputs were changed. I feel strongly that all algorithms should be scored on a scorecard, almost like you’d score the hygiene of a restaurant; it should be very clear what weights are assigned to what data. It’s easy to say that AI should behave “fairly”, which is a subjective term, and that it should be explainable, but I don’t think that goes far enough.

In the real world, I think it’s incumbent on us to define what it means to have responsible AI and to go a step further and actually have frameworks that enforce the adoption of these principles of responsible AI – not just by the developers of the algorithms, but by everyone in the ecosystem participating in these automated decisions.
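To make the scorecard and sensitivity-analysis ideas concrete, here is a minimal illustrative sketch, added editorially rather than taken from the conversation: a toy loan-approval model whose feature weights are published as a simple scorecard, and a probe that nudges one input at a time to see how the approval probability moves. The model, feature names and numbers are all assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a toy loan-approval model on synthetic data (a stand-in for a real system).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))  # hypothetical features: income, debt, age (scaled)
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

FEATURES = ["income", "debt", "age"]

def scorecard(model, names):
    """Expose the weight assigned to each input, like a restaurant hygiene score."""
    return dict(zip(names, model.coef_[0].round(3)))

def sensitivity(model, applicant, feature_idx, deltas):
    """How does the approval probability move as one input is nudged?"""
    out = {}
    for d in deltas:
        probe = applicant.copy()
        probe[feature_idx] += d
        out[d] = float(model.predict_proba([probe])[0, 1])
    return out

applicant = [0.2, 1.5, -0.3]  # a hypothetical rejected applicant
print("scorecard:", scorecard(model, FEATURES))
print("approval probability vs. change in debt:",
      sensitivity(model, applicant, FEATURES.index("debt"), [-1.0, -0.5, 0.0, 0.5]))
```

A published scorecard plus the ability to run probes like this is one way the "configurable" principle could be exercised by people affected by a decision, not just by the model's developers.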

Audit System for AI-based Systems

Mark van Rijmenam  17:52

Yeah, and I like that you mention three criteria responsible AI needs to adhere to. I would actually add a fourth one: AI needs to be able to explain itself, ideally in plain language, whether that’s plain English or plain Dutch or whatever. When it makes a decision, or when an error occurs, it should be able to say: I made an error because of x, y and z, and if you go to the code you can find why I made this decision and change it. Ideally, we would want to embed this explainability within the AI systems of the future, because with AI systems all becoming black boxes, it becomes so difficult for us – even for developers – to understand what’s going on that we almost need to rely on the AI itself to explain what is going wrong and why a certain decision has been made.

In addition, we are moving to a society where our financials are handled automatically, so the accountancy firms of the world become less and less relevant, because with the push of a button we can get the full financial statements of a company. But these firms are very well equipped to move to the next phase of monitoring: auditing not the finances anymore, but the AI. Ideally, we would start with publicly listed companies, which would need a stamp of approval from one of the big four accountancy firms stating that their AI systems are honest, are transparent, do what they say they do and can be trusted – just as they need approval for their finances – before they can move ahead. I think that’s something we, as a society, should really strive towards.

Dan Turchin  19:59

Good idea yeah. An audit system for AI-based systems.

Mark van Rijmenam  20:05

Yeah, especially because AI becomes so much more difficult to understand. That was also what happened a few years ago: for those who remember, Facebook killed an AI that had created its own language that even its developers no longer understood. And then there’s the famous book by Nick Bostrom, Superintelligence, from 2014; if you want to be scared about the future, I can definitely recommend reading that book.

It’s sort of the holy grail of AI books, and in it he discusses several technical and organisational methods to prevent AI from going off the rails. One tactical approach would be to implement a kill switch, as the Facebook developers did – they just killed the AI. What is your view on this? And do you think such technical measures would actually work if we move to an era where, although still far away, AI becomes more and more intelligent?

How to Create Responsible AI?

Dan Turchin  21:06

Superintelligence is a good one by Bostrom. I would add to that list Artificial Unintelligence by Meredith Broussard – I think she is a professor at NYU – also a good one, slightly less cynical than Bostrom, but a good read to your point. I do recall the chatbots Facebook introduced, I think they were called Bob and Alice, that started talking in their own language.

And then there was Microsoft’s Tay bot, which I’m sure we all remember, that started spewing misogynistic and anti-semitic comments after about 12 hours, because it turned out it was really easy to weaponise the AI – to make it essentially parrot back whatever users said. These are dangers, and these are the things that seep into the popular press and get a lot of coverage.

Every technology company has had, or will soon have, its Tay or its Bob and Alice moment if we’re not very careful about how these technologies are deployed. My fear, which is kind of what Bostrom covers, and even Meredith Broussard, is that the implications could be a lot greater. It may not just be somewhat innocuous, although hurtful, remarks made by a chatbot that’s in experimental mode.

When we start talking about flight paths, minefields and global warming – let alone policing in urban areas – these are all domains being affected by AI. Before we’re prepared to devolve control to automated decision-making, I think it’s really important that we think about not just what can happen if the AI performs as it should, but that we always ask the question: what could go wrong? Until we really feel the technology is a lot more mature, and until we’re culturally more prepared, having adopted these AI ethical frameworks, to coexist alongside systems that make automated decisions, I think it’s really important that we tread lightly and get really good at a few very tangible things to prevent AI from going rogue. I mentioned asking what could go wrong, but I also think it’s important that we understand and introspect the data being fed to AI, to see what kind of bias is encoded in the data before we deploy a system.

Again, math plus data equals AI, and data is a really important component. I think we also need to get more comfortable involving more types of personas, more types of users, in the AI vetting process. It should not just be the domain of developers or data scientists; I think it’s equally important for, say, finance or marketing to be involved. If you work at a bank that is using AI to make automated decisions about who gets a loan, I think it’s really important that everyone who’s part of the lifecycle of the loan management process is aware of how AI is being used to impact the end consumers of the bank. So those are three specific things we could do better as we acknowledge the power of AI and, either intentionally or unintentionally, cede more really important decision-making to it.
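As one illustration of what "introspecting the data being fed to AI" can mean in practice, here is a minimal sketch, added editorially rather than taken from the conversation: it checks historical loan decisions for a simple disparity, the gap in approval rates between groups, sometimes called a demographic-parity difference. The column names and the threshold are assumptions for the example.

```python
import pandas as pd

def approval_rate_gap(df: pd.DataFrame, group_col: str, label_col: str) -> dict:
    """Approval rate per group and the largest gap between any two groups."""
    rates = df.groupby(group_col)[label_col].mean()
    return {"rates": rates.to_dict(), "max_gap": float(rates.max() - rates.min())}

if __name__ == "__main__":
    # Toy historical loan decisions (hypothetical data).
    data = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
        "approved": [1,   1,   0,   0,   0,   1,   0,   1],
    })
    report = approval_rate_gap(data, group_col="group", label_col="approved")
    print(report)  # {'rates': {'A': 0.75, 'B': 0.25}, 'max_gap': 0.5}

    THRESHOLD = 0.2  # assumed policy value, not a standard
    if report["max_gap"] > THRESHOLD:
        print("Warning: historical decisions look skewed; investigate before training.")
```

A check like this does not prove fairness on its own, but it is the kind of pre-deployment data review that people beyond the data-science team can read and challenge.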

Mark van Rijmenam  24:56

I couldn’t agree more. And I think the recent Netflix documentary Coded Bias – which features Cathy O’Neil, author of the very good book Weapons of Math Destruction – showed how many things can go wrong with AI. The main thing we should not forget, of course, is that AI is always developed by biased developers and trained with biased data; no matter how hard we try to remove the bias, there will always be some bias in there. This documentary really showed some of the problems with biased AI causing havoc and chaos within organisations. But can you name some more examples of AI going wrong – apart from the Tay example, which of course was pretty bad – some recent examples that you are aware of?

AI Gone Rogue

Dan Turchin  25:56

So yes, Coded Bias is a great example. The documentary follows the work of Joy Buolamwini, who was at MIT doing work on facial recognition. She was kind of an artist first as well as a technologist, and has since founded the AJL, the Algorithmic Justice League, to track a lot of these situations where AI has been used and has had unintended consequences.

A few notable examples: this goes back quite a few years, but there was the massive IBM Watson failure at the MD Anderson Cancer Centre, where they literally spent $62 million to develop a Watson-based system that could detect cancer. It was subsequently determined that it was frequently misdiagnosing cases, and it performed far worse than humans at cancer detection. Amazon famously stopped using AI to screen candidates for engineering jobs because it was disproportionately recommending white males – no surprise, because that’s who had had the most success in Amazon engineering roles.

Janelle Shane gives a funny example in the book I mentioned previously, where a hedge fund used AI for candidate selection and found that people named Jared who played lacrosse tended to be the most successful in roles at the hedge fund. But humour aside, the implications are real – I mentioned who gets a job and things like that.

But these are just the examples we hear about in the popular press. Joy Buolamwini gives a lot more interesting examples; in fact, in Coded Bias they talk about Big Brother Watch, an initiative in the UK that was following vehicles doing on-the-street face detection to look for potential criminals in and around London.

There’s also a famous example in the United States where Amazon Rekognition, Amazon’s facial recognition technology, was used to try to match the faces of members of Congress against a database of known criminal mug shots. It turned out that 28 members of Congress, none of whom actually had a criminal record, were determined by Rekognition to match the mug shots of criminals. So we start to get into some pretty deep water in terms of using AI in ways that violate our civil rights.

These are all the telltale signs of an authoritarian state that we need to watch out for. It’s not imminent, but if we as a technology community don’t really govern how these decisions are made, it’s clear that regulatory bodies and elected officials aren’t in a good position to make these decisions for us. All the more reason why conversations like this are so critical to be having now.

Mark van Rijmenam  29:34

Yeah, absolutely. I think the examples you mention show what a long road we still have ahead to ensure that we create AI that is good for us – AI that’s beneficial to organisations, to society, to citizens, to customers, and so on. So what kind of tips would you give to organisations, but also to legislators, on how to regulate AI? What’s the best approach to follow here?

Vote with your Data

Dan Turchin  30:07

Yeah, I’m going to go back in the Wayback Machine – I referred to Isaac Asimov earlier. I’ll start with his three rules for robots, which have withstood the test of time, and then I’ve got a few other thoughts to add. Asimov introduced these in a short story called Runaround, a science fiction story that I think actually goes back to the ’40s.

He said: one, robots may not injure humans – makes sense; two, robots must obey orders from humans, except where such orders would conflict with rule number one; and three, robots must protect their own existence, as long as such protection doesn’t conflict with rules one or two. A bit of a high-level summary, but a good framework, I think, for thinking about the use of AI.

I would add, let’s say, two rules to Asimov’s, to put a modern spin on that framework. First: get educated, don’t be complacent – channel your inner Joy Buolamwini. Look to leaders in cities like San Francisco, Portland or Boston that have already banned the use of facial recognition by police. I think things like that are a good step forward, at least until we can verify that these automated decisions won’t disproportionately impact underrepresented communities.

So get educated, have a voice. The second actionable behaviour I propose for all of us is: vote with your data. Things like GDPR in Europe, and similar laws proliferating in the United States, make it easy for us to get access to our data when we’re concerned it might be misused. Whenever you see organisations misusing your data, or using it for questionable purposes, boycott those organisations. Exercise your right to enforce the principles of responsible AI not just on those big seven tech companies, but on all of the tech companies that look to those big seven as role models.

Mark van Rijmenam  32:22

Yeah, I agree with you: there is also a responsibility with us, the consumers, the citizens, because how often do we simply not read the terms and conditions and accept whatever is needed just to use an application? I’m complacent about it as well – everyone does it. So there is a real responsibility for us, and we can indeed vote with our data; we can say: stop, this is enough.

We can stop using a particular service, although that might be a bit challenging. We’ve seen it happen recently with WhatsApp, when Facebook decided that WhatsApp data would be shared with Facebook, which they had always promised they would never do. All of a sudden they were doing it, and a lot of people did leave, but it’s still difficult to leave because of the high barrier, the high switching cost that comes with a social network. That’s what Zuckerberg knows and counts on when we consider making such a move. It’s a shame that we are moving to a society where these big tech companies are so powerful.

There’s a real responsibility for us consumers to take action and also to force the legislators, as you correctly mentioned, to get educated, to start doing something with this, and to pay attention to what’s happening in the field of AI. So, thanks, Dan – that’s a wrap. We could easily talk for hours and hours on this topic; there’s so much happening. But I really would like to thank you for joining me on the show, and I’m looking forward to continuing the conversation on another podcast in the near future.

Dan Turchin  34:04

Mark, always a pleasure. I think it’s so critical that we’re having this conversation and that we involve more people in it, and I’m glad that The Digital Speaker is having it. We’d love to hear feedback from anyone listening. I feel we’re all in an important position right now, at a critical place in history, where we can influence whether technology gets applied for good or not. I look forward to continuing the dialogue in the future.

Mark van Rijmenam  34:34

Absolutely. Well, thank you so much for joining us today. I’ve been your host, Mark van Rijmenam, The Digital Speaker, and this has been The Tech Journal. Don’t forget to hit the like and subscribe button, and I will see you next time for your information download. Stay digital.

Dr Mark van Rijmenam

Dr Mark van Rijmenam is The Digital Speaker. He is a leading strategic futurist who thinks about how technology changes organisations, society and the metaverse. Dr Van Rijmenam is an international innovation keynote speaker, 5x author and entrepreneur. He is the founder of Datafloq and the author of the book on the metaverse: Step into the Metaverse: How the Immersive Internet Will Unlock a Trillion-Dollar Social Economy, detailing what the metaverse is and how organizations and consumers can benefit from the immersive internet. His latest book is Future Visions, which was written in five days in collaboration with AI. Recently, he founded the Futurwise Institute, which focuses on elevating the world’s digital awareness.
