Why We Need Ethical AI: 5 Initiatives to Ensure Ethics in AI

Artificial intelligence (AI) has already had a profound impact on business and society. Applied AI and machine learning (ML) are creating safer workplaces, more accurate health diagnoses and better access to information for global citizens. The Fourth Industrial Revolution will represent a new era of partnership between humans and AI, with potentially positive global impact. AI advancements can help society address problems such as income inequality and food insecurity, creating a more “inclusive, human-centred future”, according to the World Economic Forum (WEF).

AI innovation has nearly limitless potential, which is both exciting and frightening. Experts like Elon Musk, Bill Gates, and Stephen Hawking have expressed clear concerns about the need for AI ethics and risk assessment. AI is, ultimately, an advanced tool for computation and analysis, and it is susceptible to errors and bias when it is developed with malicious intent or trained on adversarial data. AI also has enormous potential to be weaponised in ways that threaten public safety, security, and quality of life, which is why AI ethics is so important.

Ethical AI isn’t just a discussion of far-off possibilities like superhuman intelligence or a robot uprising. AI ethics issues such as bias and deception have already started to affect businesses and individuals. In the past year alone, there have been numerous examples of unethical AI risks.

When AI Ethics Go Awry

Example #1: Deepfakes

Deepfakes are AI-generated video or audio content created with an intent to deceive. Machine-generated videos have massive potential to harm society by fuelling the spread of misinformation and enabling cybercrime. Government agencies, researchers, and social media platforms are scrambling to develop deepfake-detection technologies. There have even been successful instances of deepfakes used in targeted social engineering attacks.

Last year, an AI-generated impersonation of a CEO’s voice duped an employee into completing a fraudulent wire transfer of $243,000. Creating a deepfake video or audio file may not even be particularly difficult: a tech journalist recently produced a deepfake video of Mark Zuckerberg in less than two weeks with a widely available smartphone app and less than $600 in cloud services.

Example #2: Biased AI

AI can perpetuate troubling gender, race, and age biases. Biased AI can reinforce harmful stereotypes and put women, minorities, and other disadvantaged social groups at risk. A recent study from UNESCO found potentially harmful gender stereotypes are rampant among public-facing chatbots. Biased AI could lead to negative outcomes in numerous settings, including healthcare, education, and hiring practices. Amazon abandoned a machine learning algorithm designed to source talent after discovering a clear bias against female applicants. The algorithm proposed candidates based on historical data, which caused a preference for male applicants since past hires were predominantly men.
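
To make that mechanism concrete, here is a minimal Python sketch trained on entirely synthetic, deliberately skewed hiring data; the features and numbers are hypothetical, not Amazon’s. (In the real case, gender was inferred from proxies in CVs rather than supplied as an explicit feature, but the underlying dynamic is the same.)

    # Illustrative sketch only: synthetic data and hypothetical numbers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 5000

    # Synthetic hiring history: gender (0 = female, 1 = male) and a skill score.
    gender = rng.integers(0, 2, size=n)
    skill = rng.normal(50, 10, size=n)

    # Past decisions favoured men: "hired" depends on gender as well as skill.
    # This is the bias baked into the training data.
    hired = (skill + 8 * gender + rng.normal(0, 5, size=n) > 58).astype(int)

    model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

    # Score two equally skilled candidates who differ only in gender.
    p_female = model.predict_proba([[0, 60]])[0, 1]
    p_male = model.predict_proba([[1, 60]])[0, 1]
    print(f"P(hire | female, skill=60): {p_female:.2f}")
    print(f"P(hire | male,   skill=60): {p_male:.2f}")
    # The male candidate scores higher despite identical skill: the model
    # has faithfully reproduced the bias in its training history.

The model isn’t malicious; it simply optimises for agreement with past decisions, which is exactly why biased training data produces biased outcomes.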

Example #3: Facial Recognition Error

A leading facial-recognition system misidentified 25 professional athletes as criminals, including three-time Super Bowl champion Duron Harmon of the New England Patriots. An experiment run by the American Civil Liberties Union (ACLU) of Massachusetts found a roughly 1-in-6 false positive identification rate when hundreds of professional athletes’ photos were compared to a mugshot database.
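
To put that statistic in concrete terms, here is a quick back-of-the-envelope calculation in Python; the photo count is a hypothetical figure chosen to be consistent with the reported “hundreds” of photos and the 1-in-6 rate, not the ACLU’s raw data.

    # Hypothetical numbers illustrating the reported 1-in-6 rate;
    # these are not the ACLU's raw figures.
    total_photos = 150     # athlete photos screened against the database
    false_matches = 25     # innocent people incorrectly flagged as criminals

    false_positive_rate = false_matches / total_photos
    print(f"False positive rate: {false_positive_rate:.1%}")  # 16.7%, about 1 in 6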

The concerning study results have compounded existing fears that AI could contribute to false criminal convictions. According to the ACLU, 91% of registered voters believe facial recognition technology should be subject to regulation. It therefore makes sense that the European Union is considering a temporary ban on public use of facial recognition.

5 Global Initiatives for AI Ethics

AI innovation and adoption are unstoppable forces in business and society. Nearly 80% of Chief Technology Officers plan to increase their adoption of AI and machine learning technologies in 2020. While there’s real cause for concern about unmitigated AI innovation, there’s also plenty of reason to be hopeful. A number of global initiatives are dedicated to researching ethical AI development and deployment for the benefit of businesses, media, educators, and politicians worldwide.

1. Responsible Computer Science Challenge

The Responsible Computer Science Challenge (RCSC) is an initiative designed to improve ethics education in undergraduate computer science programs worldwide. The RCSC hopes to educate “a new wave of engineers who bring holistic thinking to the design of technology products.”

The education challenge is a joint effort between the Mozilla Foundation and three philanthropic partners: Omidyar Network, Schmidt Futures, and Craig Newmark Philanthropies. Between December 2018 and July 2020, the challenge will award $3.5 million to grant recipients who propose “promising approaches” to embedding ethics in undergraduate technology and engineering education.

Winners of the first round of the challenge have been announced, and RCSC results are being tracked through the end of the initial challenge period. Winners include Georgetown University, which created formal collaborations between computer science students and ethicists, and the Georgia Institute of Technology, which has introduced gamification into introductory-level computer science classes: Georgia Tech freshmen play role-playing games (RPGs) to simulate the possible impact of technologies on society.

While the initial funding competition is closed to applications, there are still opportunities to get involved. Educators and students worldwide can join the Challenge Community to learn from the winners. Joining the RCSC community or mailing list can also provide insight into potential future funding rounds and international competitions for educators.

2. The Institute for Ethical AI & Machine Learning

The Institute is a UK-based global research centre dedicated to cutting-edge technical research on ethical AI development. Its areas of focus include ethical processes, frameworks, operations, and deployment. The Institute is staffed by volunteer teams of machine learning and data science practitioners, who partner with industry experts, academics, and policy-makers on research projects.

The Institute has created a list of eight principles for ethical AI, which include bias evaluation, security controls, and human augmentation to ensure safe AI outcomes. Other projects have included the AI-RFX Procurement Framework, a series of templates designed to help businesses select AI systems based on safety, quality, and performance. The AI-RFX project also offers free, open-source models and resources for evaluating the maturity of homegrown or acquired AI systems.

Interested parties can apply to join the beta pilot of the Ethical AI Network for access to additional resources, case studies, and invitations to exclusive events. Volunteers can also inquire directly about contributing to the Institute’s active research initiatives.

3. Berkman Klein Center

The Berkman Klein Center is a Harvard-based hub for academic research and inquiry into the intersection of “the internet & society.” Since its initial launch, Berkman Klein has expanded to include more comprehensive research on emerging technologies, including AI ethics and innovation. The Center’s primary mission is to educate stakeholders both inside and outside the Harvard community and to help members of the public “turn education into action.” Historically, the Berkman Klein Center has incubated several successful projects that have spun off and expanded into independent research organisations.

Currently, the Berkman Klein Center has six active projects related to AI. Its initiatives include research on:

  • AI: Algorithms and Justice
  • AI: Global Governance and Inclusion
  • AI: Transparency

In addition, there are active projects dedicated to studying the impact of AI on the media, autonomous vehicle safety, and effective AI education. Berkman Klein’s various AI initiatives publish research and support public education efforts, such as the renowned AGTech Forum. AGTech fosters collaboration between attorneys general, business leaders, and academics through frequent in-person events and open forum discussions.

4. European AI Alliance

The European Union was a pioneer in creating a government-level forum to broadly examine ethical AI development from all angles, including economic and societal impact. Following the launch of the European AI Strategy in early 2018, the EU Commission created a High-Level Expert Group on AI (HLEG AI) and a parallel AI Alliance. These efforts engage hundreds of expert stakeholders from multiple disciplines in the discussion of frameworks, policy, and legislation. The goal of the Alliance and the HLEG AI is to create balanced, expert recommendations for policymakers, legislators, and business investors.

The HLEG AI has defined a list of seven key requirements for trustworthy AI development, including technical standards for applying ethical AI principles in industry. The ethics guidelines recommend human oversight, transparency, and accountability. The long-term goal of the Alliance is to broaden involvement to a “diverse set of participants,” including representatives from businesses, consumer advocacy organisations, trade unions, and other interests.

The EU AI Alliance has also created an official forum for open exchange of AI concerns and best practices among EU citizens and global AI experts. The Futurium forum is free to join, and there are no barriers to entry. The public can engage with the EU AI Alliance in other ways, including open-participation surveys and research.

5. The Open Roboethics Institute

The Open Roboethics Institute (ORI) is a non-profit think tank dedicated to exploring the potential ethical and societal impacts of AI innovation. Founded in 2012, ORI has a primary goal of improving knowledge of AI ethics among technologists, business leaders, and regulators. In 2017, ORI expanded its capabilities to include a consultancy called Generation R, whose consultants specialise in helping business leaders apply a systematic approach to ethical AI development.

ORI and Generation R offer their AI Ethics Consulting Toolkit free of charge to non-consulting clients. Other notable projects to date have included a statement to the United Nations Convention on Certain Conventional Weapons and extensive research into human-AI interaction topics, including ethical AI for senior healthcare.

The Future of AI Ethics

Google CEO Sundar Pichai made headlines recently when he called for government officials worldwide to step up AI regulation to avoid risks like mass surveillance and human rights violations. Pichai believes that “sensible regulation” is necessary to balance potential AI harms against social opportunities. Pichai isn’t the first tech leader to worry about the ethical risks of AI. An open letter on AI ethics published by the Future of Life Institute has gathered over 8,000 signatures to date from tech industry leaders, researchers, and others.

Fortunately, numerous global initiatives are dedicated to ensuring that AI development includes ethical standards. Frameworks and policy can minimise the risks of AI and ensure the technology is used to create a safer, fairer world for everyone. There’s an opportunity for everyone to contribute to ethical AI, even without a background in data science or machine learning. Journalists, legislators, and private citizens can all partner with global AI ethics initiatives to support safe innovation.

Image: Liu zishan/Shutterstock