Breaking News: UK public uneasy about ChatGPT writing news content

The use of AI in news media is raising concerns among the UK public. While some predict AI will replace static content, over half of adults express discomfort with AI-generated news articles. The public is particularly uneasy about AI writing opinion pieces, instead valuing authenticity in content creation. However, there may be a role for AI in assisting reporters by automating tasks, such as generating headlines and fact-checking.

Jessica Fairhurst, Executive | 11/10/2023
Generative AI technologies, such as ChatGPT, are reshaping the landscape of content creation and news media.

This is not news for those who have been following BuzzFeed CEO Jonah Peretti, who claimed in recent months that, “over the next few years, generative AI will replace the majority of static content”, with articles written by human authors making way for tailored content generated for users by AI. This bold statement came amidst the decision to shut down BuzzFeed’s news department and lay off around 15% of the company’s workforce.

However, a recent survey conducted by Savanta suggests that the UK public is not quite as enthused by this prospect. A survey of over 2,000 adults reveals widespread concern about the use of generative AI technology to produce news articles and opinion pieces, and suggests that its use is likely to damage perceptions of media and news organisations.

The public see a role for generative AI, but not as an author

Comfort with the use of generative AI depends on its intended use. AI-generated news articles cause far more widespread unease than more ‘lowbrow’ AI-generated content. Over half of adults say they are uncomfortable with AI being used to write news articles (52%), compared with just 3 in 10 who say the same for non-news content, such as the quizzes for which BuzzFeed is so well known (31%).

The public showed even greater unease with generative AI being used to write opinion pieces, with 55% of UK adults expressing discomfort with this. This suggests authenticity is important to the public in content creation and is something that media organisations ought to consider carefully when developing an AI strategy.

There may nonetheless be a role for generative AI in news content production. The public is less uncomfortable with generative AI assisting reporters when it is not directly responsible for the writing. Two-fifths say they are uncomfortable with the technology being used to generate headlines (42%), fact-check a human author’s work (39%) or research a topic for a human author to write about (38%). Whilst these levels of discomfort are still relatively high, roughly as many people feel comfortable with these use cases as feel uncomfortable.

Attitudes vary considerably across generations, with Baby Boomers (those aged over 55) generally more uncomfortable with the use of generative AI than younger generations. Given that older people are more likely to be accustomed to ‘traditional’ news media, and may be warier of new technologies like generative AI, this is unsurprising.

Interestingly, there is also a noticeable gender divide in attitudes. Men are significantly more likely to say that they are comfortable with each use case than women, with the difference in attitudes being most pronounced when it comes to fact-checking – 51% of men say they are comfortable with generative AI being used for this, compared to just 41% of women.

News media organisations need to tread carefully to maintain the public’s trust

News media organisations should take the public’s wariness seriously, as the use of generative AI directly affects their reputation with the public.

This effect is more pronounced when the use of generative AI is undisclosed. A majority of the public say they would trust a news organisation less if it used generative AI without making clear how it was used (52% for local news organisations, 51% for national news organisations). Even where news organisations do disclose exactly how they are using generative AI, over a third of the public say they would trust the organisation less for using the technology (37% and 36% for local and national news organisations respectively).

Trust is crucial for news media, and a significant decrease in public trust towards a specific publication can lead to substantial damage to its reputation. The survey results therefore highlight the need for news organisations to tread carefully as they look to benefit from advances in generative AI technology.

How should the UK government respond?

One could argue that maintaining public trust in the news media ecosystem also falls within the UK government’s remit, not just that of individual organisations. Public opinion supports this view, with almost four-fifths of UK adults (78%) saying that the UK government should take some action in response to recent advances in generative AI. This figure is similar across all age groups, despite the generational differences in comfort with generative AI noted above.

There is a lack of consensus amongst the public regarding the specific actions the government should take. However, a significant proportion of UK adults (30%) support requiring businesses, including news media organisations, to publicly disclose how they use generative AI. This is unsurprising, given how strongly disclosure of generative AI use bears upon public trust in news media organisations. However, several other measures attract almost as much support, as the chart below shows.

Notably, a significant minority of the public (20%) want the UK government to ban all forms of generative AI until it is better understood, as was initially done in Italy. This is indicative of the depth of public concern around the technology in the UK. It is something that ought to be taken seriously as the government looks to respond to recent advancements in the technology, even if a complete ban is an unlikely approach.

It is possible that the public’s attitudes will soften as familiarity with generative AI grows. For now, however, there is clear and widespread unease about the prospect of generative AI authoring news and opinion pieces.

Individual news media organisations and the government alike must take this into account as they look to benefit from the technology in the coming months and years.

Note: Generative AI is a type of artificial intelligence that is capable of producing text, images, or other media in response to prompts, with ChatGPT being a prominent example. It is trained on large amounts of data such as books, articles and websites.

For more insights, get in touch with us at [email protected]
