AI journalism: possibilities, limitations, and outcomes
Microsoft made waves this May when it announced that it would lay off more than 50 journalists and editors from its workforce. Many of these employees worked in news curation, hand-picking the content that would run on the Microsoft News and MSN websites. But their positions won’t go unfilled for long: Microsoft is expected to replace them not with new hires, but with AI software that can identify trending and interesting articles, essentially performing their old job function.
But curation is just one task that AI “journalists” can now accomplish. Kristian Hammond, a computer science professor and co-founder of the data storytelling company Narrative Science, predicted in 2012 that within 15 years, more than 90 per cent of news articles would be written by a computer. With seven years to go until the deadline, we’re still very far from that forecast, yet also much closer than we used to be, as we’ll discuss below.
The good news for human journalists is that there’s still very much a need for their services when writing articles like this one. But with machines now capable of doing more tasks than ever (and more complex ones), we face several important questions: what’s the role of AI in journalism, and what are the challenges and considerations when bringing AI into the newsroom?
What’s the current state of AI journalism?
Over the past several years, AI journalists have become able to write “plug and play” articles that don’t require independent research. For example, below is the opening of an AI journalist’s article on a college basketball game, created by the natural language generation platform Wordsmith in 2015. There’s little indication that the piece wasn’t written by a human hand:
UNC beats Louisville 72-71 on late Paige basket
Led by a Paige game-winner, North Carolina defeats Louisville 72-71
CHAPEL HILL, N.C. — Marcus Paige scored with nine seconds remaining in the game to give North Carolina a 72-71 lead over Louisville. The Heels held on to win by that same score following a missed 3-pointer by Wayne Blackshear and an unsuccessful second-chance attempt by Terry Rozier.
For now, AI-written articles are limited to relatively simple and formulaic topics: stock market results, M&A announcements, sports game scores, etc. Bloomberg News, for example, uses an AI system called Cyborg that automatically scans companies’ quarterly reports and then outputs a quick article with the most relevant information.
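The core technique behind these formulaic stories is natural language generation from structured data, which at its simplest means filling a narrative template with fields from a data feed. Here is a minimal sketch in that spirit; the template, data structure, and function names are invented for illustration, not taken from Wordsmith or Cyborg:

```python
# Minimal sketch of template-based article generation, the approach
# behind formulaic sports recaps and earnings stories.
# All names and figures are illustrative.

TEMPLATE = (
    "{winner} beat {loser} {w_score}-{l_score} after {player} "
    "scored with {seconds} seconds remaining."
)

def generate_recap(game: dict) -> str:
    """Fill a fixed narrative template from structured game data."""
    return TEMPLATE.format(**game)

game = {
    "winner": "North Carolina", "loser": "Louisville",
    "w_score": 72, "l_score": 71,
    "player": "Marcus Paige", "seconds": 9,
}
print(generate_recap(game))
```

Production systems add many templates, synonym variation, and rules for choosing which facts to foreground, but the pipeline remains the same: structured input in, formulaic prose out. That is also why these systems struggle with anything that lacks a clean data feed.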
Anything more complicated than this is out of reach for now—at least if journalists want to publish stories that won’t put them at risk of a defamation lawsuit. Language models such as GPT-3 have created a buzz for their ability to synthesise several paragraphs of highly realistic English text, but there’s no guarantee that the content they generate has any basis in reality.
Assisting human journalists
When AI isn’t writing articles itself, it can also help human reporters with work that’s too intricate for it to handle, such as long-form articles, in-depth analyses, and investigative journalism.
One tremendously valuable AI use case: automated transcription of interviews, which can save human journalists untold hours of grunt work. While the results of AI transcription are rarely flawless, the few errors that the software does commit can easily be corrected by a human editor.
For example, the automated speech-to-text service Trint was founded by former TV news reporter Jeff Kofman. Trint closed a $4.5 million funding round in April 2019 and counts the Associated Press and Google Digital News Innovation Fund among its investors. The transcriptions output by the Trint platform are automatically mapped to the source audio or video, allowing journalists to easily search for specific content. According to Kofman, the software can deliver accuracy rates of 95 to 99 per cent for relatively clean audio within just a few minutes. I used Trint extensively during my PhD, and it saved me many hours of work.
AI technology can also identify interesting trends and developments that are worthy of investigation by human reporters. In 2018, Forbes rolled out a new AI-enabled content management system called “Bertie” that recommends possible article topics and headlines to contributors, based on their previous work. Thus far, the experiment seems to have been a success: according to Forbes, the number of loyal visitors (i.e. those who visit the Forbes website more than once a month) doubled in the months after Bertie’s rollout.
Curated content for users
AI is not only affecting the news that gets written; it is also controlling the articles that people see, as we’ve seen with Microsoft’s decision to use AI news curators.
It’s well-known that the Facebook news feed algorithm suggests (what it believes to be) the most relevant content to users, based on the pages they’ve liked and their previous interactions on the platform. Similarly, AI technology can keep track of the articles that subscribers visit on a news website, learning about their behaviour and preferences (e.g. how much time they spend reading each article).
The more a news outlet knows about its customers, the more relevant content it can display on the website or in a weekly newsletter, delivering a personalised experience that helps encourage user interaction and discourage attrition. According to a 2019 survey by the digital media company Digiday, 70 per cent of digital publishers say that they personalise content for visitors.
Some examples of content personalisation in journalism include:
- The New York Times mobile app displays a prominent “For You” section on its homepage.
- The Boston Globe uses a customer data platform to collect information on subscribers and has found that its readers respond 70 per cent more favourably to targeted messaging.
- The Hearst Newspapers group has used Google Cloud Natural Language API to categorise its digital content, helping to segment users based on their reading preferences.
Data-driven business decisions
The business office of a news outlet can benefit just as much from AI as the reporters themselves. By collecting data and crunching the numbers, AI can help managers and executives make decisions about which type of content to produce, which subscribers and former subscribers to target, which marketing campaigns to run, how much to charge for ads and subscriptions, and more.
One intriguing use of AI for the business of journalism is the Wall Street Journal’s dynamic paywall, in which different users are shown different amounts of information based on the likelihood that they will purchase a subscription. For example, users who have searched for articles on the WSJ website are seen as more likely customers than users who have found the same articles through social media. As a result, these promising leads are able to see more content before they’re asked to subscribe.
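The metering logic behind such a dynamic paywall can be sketched as a propensity score mapped to a free-article allowance. The referral sources, scores, and thresholds below are invented for illustration and are not the WSJ’s actual model, which reportedly weighs dozens of behavioural signals:

```python
# Illustrative dynamic paywall: readers judged more likely to subscribe
# (e.g. those arriving via on-site search) get a larger free allowance.
# All scores and thresholds are hypothetical.

SUBSCRIBE_PROPENSITY = {"site_search": 0.8, "newsletter": 0.6, "social": 0.2}

def free_article_quota(referral: str) -> int:
    """Map an estimated subscription propensity to a monthly free quota."""
    score = SUBSCRIBE_PROPENSITY.get(referral, 0.1)
    if score >= 0.7:
        return 5  # promising lead: show more content before the paywall
    if score >= 0.4:
        return 3
    return 1

def show_paywall(referral: str, articles_read: int) -> bool:
    """True once the reader has exhausted their free quota."""
    return articles_read >= free_article_quota(referral)

print(show_paywall("site_search", 3))  # False: still within quota
print(show_paywall("social", 3))       # True: quota exhausted
```

The design choice worth noting is that the paywall decision is deferred to serving time rather than fixed per article, which is what lets the same URL behave differently for different visitors.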
The limitations of AI journalism
AI-enabled journalism has shown great promise thus far, but is there a hard limit to that promise? According to the November 2019 JournalismAI report, which surveyed 71 news organisations in 32 countries, there are still significant difficulties that news companies face in adopting AI. The top three challenges to bringing AI into the newsroom were:
- Financial resources (27 per cent)
- Lack of knowledge or skills (24 per cent)
- Cultural resistance (24 per cent)
Beyond these institutional barriers, there’s also a good reason to question whether AI in journalism is really the game-changer that its advocates claim it will be. Columbia journalism professor Francesco Marconi estimates that in the future, only 8 to 12 per cent of reporters’ tasks will be replaceable by a machine.
The state of artificial intelligence for journalism helps illustrate the difference between strong and weak AI. Strong or “general” AI, a machine that approaches human-level intelligence across the board, is still decades away and may never be truly attainable. Weak AI, on the other hand, refers to machine intelligence that is highly skilled at a narrowly defined task or set of tasks.
Essentially, there’s very good reason to be sceptical that newsrooms will ever see a “robot reporter” that pounds the pavement like its human colleagues, conducting interviews and writing stories from scratch. But there’s also evidence all around us that AI has an important role to play in the field of journalism—creating article summaries, generating ideas and proposals, analysing data to find interesting stories—and that this role is increasing with every passing year.
The repercussions of AI journalism
Journalistic ethics is already a sensitive topic, and so is ethics in AI, which makes their combination especially contentious. After all, this is an era in which “fake news” is a growing concern: only 41 per cent of Americans say they trust the media to be fair and accurate, and 46 per cent of Europeans say they trust their written press. That is exactly why we are developing Mavin: so that everyone can instantly recognise which articles to trust.
There are two distinct concerns for how AI technology is applied in journalism: the use of AI to generate content, and the use of AI to curate and display content to the end-user. Both of these issues need to be handled with care.
In 2014, for example, Facebook revealed the results of a controversial experiment with users’ news feeds. People who were exposed to more “positive emotional content” on Facebook made more positive posts of their own, while the inverse effect occurred in people who saw more negative content.
Similarly, a news company that promotes articles based only on users’ interactions would likely find itself in a race to the bottom in terms of quality: spitting out clickbait articles and news that provokes strong emotions, such as sadness and outrage.
To avoid this fate, news outlets need to maintain a steady human hand on the wheel, even as they cede more responsibilities to AI journalists and curators. Although machines can churn out hundreds of cut-and-paste stories per day, these articles still require human supervision and fact-checking before they go to print. Likewise, humans need to oversee and verify news curation algorithms to ensure that the results are consistently high-quality (and to curb the backlash that recommendation algorithms have received in recent years).
AI journalists have shown tremendous promise in clearing away much of the field’s hard labour: collecting data, transcribing recordings, writing routine and formulaic articles, etc. But when it comes to the work that truly makes a news organisation stand out—in-depth reporting and analysis, political commentary, opinion columns—it’s clear that humans will be an essential part of the equation well into the future.
Image: Bas Nastassia/Shutterstock