AI is here to stay. What impact is this going to have on the media?

Blog
19 Feb 2024, 11:10

By Alona Cherkassky, Director of Strategic Communications

Some predict the rapid rise of artificial intelligence (AI) is set to change the world as we know it. Media and news will not escape the effects of AI, as readers increasingly turn to AI tools to help them process information and provide analysis of complex issues.

While AI has the ability to simplify concepts and support wider understanding, there is growing concern about accuracy. AI is an evolving technology that relies on the information supplied to it. Inevitably, some of that data will be out of date, which can lead to unintentional misinformation.

However, AI tools are also increasingly being used to create intentional misinformation, especially in the form of audio and video deepfakes. It takes just minutes for AI tools to simulate someone’s voice or manipulate images. In a year of globally significant elections, this has potential for dangerous consequences, especially as the public shows a continued preference for consuming news through social media, where misinformation spreads easily. Just recently, an audio recording of what sounded like President Joe Biden was used to call numerous constituents in New Hampshire and discourage them from voting, an incident which is clearly a major concern for democracy. Even if not of the same importance, a deepfake of Taylor Swift was viewed some 47 million times before X (Twitter) took it down, emphasising just how quickly mis- and disinformation can spread.

Governments and tech companies are starting to take action on AI and seek solutions to growing concerns. In the US, the Federal Election Commission has launched a consultation into a proposal to prohibit deepfakes as an example of fraudulent representation in campaigning. At the recent World Economic Forum, Nick Clegg, president of global affairs at Meta and former Liberal Democrat Deputy Prime Minister of the UK, described the effort to detect AI-driven fake content as “the most urgent task” facing the tech industry. Companies and individuals subjected to deepfakes could find themselves in the middle of a whole new type of crisis management. For companies, managing the reputational fallout from deepfakes will require a very different approach to traditional public relations.

Like the public, journalists are increasingly using AI, whether to distil information, summarise complex reports, write headlines, edit writing, or transcribe audio and video. But how news organisations will adapt to AI, and ultimately compete with it, remains to be seen. Newsroom leaders may learn to work with AI to achieve greater efficiency, understanding reader data and individual content preferences. Fully utilised, this could tailor content even further, with newsrooms creating outputs catered to individual interests. These types of shifts must be closely watched, as they will undoubtedly change how companies use the media to communicate with their audiences.

ChatGPT boasted an estimated 1.6 billion website visits in December 2023 and some 100 million active users. No news organisation could possibly match these numbers, let alone OpenAI’s reported revenues of $5 billion in 2023. If these numbers are indicative of anything, there is appetite for these tools. How AI changes how, and with whom, companies engage when trying to reach the public is something to watch. One thing remains certain: 20th century tactics for content creation, media outreach and impact evaluation are shifting to a new age.