
Clear policies for AI in journalism, imperative for ethics

Developing clear policies and guidelines for the use of artificial intelligence (AI) in journalism is imperative to ensure media organisations remain committed to ethical and transparent practices. But how?

This article is part of our special report AI4TRUST – AI-based-technologies for trustworthy solutions against disinformation

The core principles state that ethics must govern technological choices within the media; human agency must remain central in editorial decisions. [Getty Images: brightstars]

Xhoi Zajmi | Euractiv's Public Projects | Oct 29, 2024 00:34 | 4 min. read
Underwritten

Produced with financial support from an organization or individual, yet not approved by the underwriter before or after publication.

Recent developments and greater integration of AI have been transforming global industries, and journalism is not exempt. AI tools are ushering in a new era for journalism, where speed, content diversity and efficiency are key.

From automating news production to generating news articles from structured data, AI has helped journalists save time and produce more content with limited resources.
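
At its simplest, this kind of automation amounts to filling a template from a structured record, with a journalist reviewing the draft before publication. The short Python sketch below is a hypothetical illustration of that approach, not any newsroom's actual system; the MatchResult record and draft_report function are invented for the example.

from dataclasses import dataclass

@dataclass
class MatchResult:
    """One structured record, e.g. a row from a sports results feed."""
    home: str
    away: str
    home_score: int
    away_score: int
    venue: str

def draft_report(r: MatchResult) -> str:
    """Fill a fixed template from one record; the output is a draft for human review."""
    if r.home_score > r.away_score:
        outcome = f"{r.home} beat {r.away}"
    elif r.home_score < r.away_score:
        outcome = f"{r.away} beat {r.home}"
    else:
        outcome = f"{r.home} drew with {r.away}"
    return f"{outcome} {r.home_score}-{r.away_score} at {r.venue}."

# Hypothetical usage: one record in, one reviewable sentence out.
print(draft_report(MatchResult("Ajax", "PSV", 2, 1, "Johan Cruyff Arena")))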

However, it has also raised ethical and editorial concerns at a time when society’s trust in media stands at around 40 per cent, according to a Reuters Institute for the Study of Journalism report.

The future impact of AI is uncertain, but it could profoundly influence how journalism is made and consumed. Research shows that its adoption is unevenly distributed, even though AI is already a significant part of journalism.

“The reality and the potential of AI, machine learning, and data processing is to give journalists new powers of discovery, creation, and connection,” says Charlie Beckett, who led the research.

“Algorithms will power the systems. But the human touch, the insight and judgement of the journalist will be at a premium. Can the news industry seize this opportunity? What of the economic, ethical, and editorial threats AI technologies also bring?” Beckett asks in the research’s preface.

Attempts at regulation

On November 10, 2023, Reporters Without Borders (RSF) and 16 partner organisations published the Paris Charter on AI and Journalism in the context of the Paris Peace Forum. Work on the Charter was launched in July 2023.

The Charter defines ten key principles for safeguarding the integrity of information and preserving journalism’s social role as a response to the “turmoil” that AI has created in the news and information arena.

The core principles state that ethics must govern technological choices within the media; that human agency must remain central in editorial decisions; that media must help society distinguish between authentic and synthetic content with confidence; and that media must participate in global AI governance and defend the viability of journalism when negotiating with tech companies.

Maria Ressa, Nobel Peace Prize laureate who chaired the commission initiated by RSF on the matter, argues that “technological innovation does not inherently lead to progress. Therefore, it must be steered by ethics to truly benefit humanity.”

The official code of ethics of the Society of Professional Journalists (SPJ) rests on four principles: seek truth and report it; minimise harm; act independently; and be accountable and transparent. All apply equally to the use of AI in journalism.

The overall consensus is that AI can be used in journalism as long as it is applied in moderation and subjected to human fact-checking. Moreover, journalists who use AI are encouraged to be transparent about its role in their work.

More studies needed

In addressing the challenges AI poses for journalism, legislation will need to offer clear definitions of AI categories and specific disclosures for each, argues an article from the Centre for News, Technology & Innovation (CNTI).

Newer generative AI (GAI) tools, such as ChatGPT and DALL-E, while offering new ways to streamline news production, also risk reducing search traffic to news sites and raising questions of copyright infringement.

Critics argue that GAI’s role in content creation often lacks transparency, and some publishers have sought legal protection against the unauthorised use of their content for AI training.

The article also notes research gaps in understanding AI's role in journalism and how audiences perceive AI-generated content, which they often view as less biased. Much of the current research is theoretical, pointing to the need for more data-driven studies, especially in non-Western contexts.

The CNTI article highlights countries' varying approaches, with legislation lagging behind AI developments. Some jurisdictions, such as the EU, prioritise data privacy, while others, such as China, focus on state control.

The lack of consensus on AI regulation complicates global agreements. Effective policies must account for the complexity of AI itself, avoid vague definitions, and involve diverse stakeholders, especially those most affected by AI, concludes CNTI.

[Edited by Brian Maguire | Euractiv's Advocacy Lab]
