Ethical challenges in using Generative AI for News and Information Services

Written by Matt Morton | March 06 2025

The double-edged sword of GenAI in News and Information Services

Generative AI (GenAI) is reshaping the news and information services industry, offering unprecedented efficiency, scalability, and creative potential. From automated news summaries to GenAI-assisted investigative journalism, these technologies promise to revolutionise how information is produced, shared, and consumed.

However, this innovation comes with profound ethical dilemmas. The rise of AI-generated misinformation, deepfakes, and content bias presents new challenges for public trust and information integrity. In an era where credibility is paramount, how can organisations harness GenAI responsibly while maintaining accuracy, transparency, and accountability? This article explores these ethical challenges and outlines practical steps for embedding ethical GenAI principles as part of digital transformation in news and information services.

What are the risks? Misinformation, deepfakes, and bias

GenAI-generated misinformation

GenAI can generate misleading or entirely false information at scale, posing a significant risk to public discourse. The speed at which GenAI-created misinformation can spread far exceeds traditional fact-checking capabilities, threatening the credibility of legitimate journalism. Without robust safeguards, GenAI could be weaponised to manipulate public opinion, distort political narratives, and erode trust in democratic institutions.

A recent study by the BBC found that 51% of GenAI answers to questions about the news were judged to have “significant issues” of some form. In other words, more than half of the responses contained inaccuracies serious enough to materially mislead the reader.

Deepfakes

Deepfake technology is advancing rapidly, making it increasingly difficult to distinguish authentic media from synthetic content. GenAI-powered video and audio manipulation can be used for malicious purposes, from political disinformation campaigns to financial fraud. It is projected that 8 million deepfakes will be shared online in 2025, up from half a million in 2023.

Compounding this risk is the recent announcement that Meta (Facebook, Instagram, and Threads) is ending its fact-checking programmes, further weakening efforts to combat misinformation and raising concerns about the credibility of social media platforms as news sources. Without proactive measures, deepfake technology could fundamentally alter the landscape of trust in digital media.

Content bias

GenAI models are only as objective as the data they are trained on. If training datasets contain biases—whether racial, gender, political, or ideological—these biases can be perpetuated and amplified. This is especially concerning for topics where bias is most consequential, such as politics, public health, and climate change.

A recent analysis of almost 30,000 images from news media, technology news websites, social media, and knowledge-sharing platforms found that GenAI news reporting can reinforce pervasive gender biases, such as framing women as disempowered or confined to traditional gender stereotypes.

Addressing this requires a concerted effort to ensure GenAI systems are trained on diverse, representative datasets and are regularly audited for fairness.

Ethical imperatives: Accountability, accuracy and transparency

‘Human-in-the-Loop’ accountability

GenAI-generated content should not replace human editorial judgment. Instead, it must be subject to rigorous verification and oversight, adopting a Human-in-the-Loop (HITL) approach. Editorial teams must establish clear governance frameworks to review GenAI-generated content, ensuring alignment with journalistic standards and ethical principles, and preserving clear human accountability.
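By way of illustration, the sketch below shows one way a HITL gate might be enforced in a publishing pipeline: GenAI drafts are held until a named editor signs them off, so accountability always rests with a person. The names (`Draft`, `approve`, `publish`) are hypothetical for this article, not a reference to any particular CMS.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Draft:
    """A GenAI-generated draft held for human editorial review."""
    draft_id: str
    body: str
    ai_generated: bool = True
    approved_by: Optional[str] = None  # named editor, for accountability
    approved_at: Optional[datetime] = None

def approve(draft: Draft, editor: str) -> None:
    """Record the named editor who verified the draft against editorial standards."""
    draft.approved_by = editor
    draft.approved_at = datetime.now(timezone.utc)

def publish(draft: Draft) -> str:
    """Refuse to publish AI-generated content without a human sign-off."""
    if draft.ai_generated and draft.approved_by is None:
        raise PermissionError(f"Draft {draft.draft_id} requires editorial approval")
    return f"Published {draft.draft_id} (approved by {draft.approved_by})"

draft = Draft(draft_id="d-101", body="GenAI-assisted market summary...")
approve(draft, editor="j.smith")   # the human gate: no approval, no publication
print(publish(draft))
```

The design point is that the publish step itself enforces the rule, rather than relying on editors remembering a policy document.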

Verification systems and transparency

Transparency is critical to maintaining trust. GenAI-generated content should be clearly labelled and fact-checked. Readers must be informed when GenAI has been used in the development of content—just as this article acknowledges its reliance on GenAI-powered insights. Establishing verification systems can further enhance credibility.
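As a minimal sketch of this idea (assuming a simple in-house content model, not any specific provenance standard such as C2PA content credentials), the example below derives the reader-facing disclosure directly from article metadata, so labelling cannot be forgotten at publish time.

```python
from dataclasses import dataclass

@dataclass
class Article:
    headline: str
    body: str
    ai_assisted: bool   # was GenAI used anywhere in production?
    fact_checked: bool  # has a human verified the claims?

def disclosure(article: Article) -> str:
    """Derive the reader-facing label from metadata rather than memory."""
    notes = []
    if article.ai_assisted:
        notes.append("This article was produced with the assistance of generative AI.")
    if article.fact_checked:
        notes.append("Its claims have been independently fact-checked.")
    return " ".join(notes) or "No AI assistance was used in this article."

article = Article(
    headline="Energy prices explained",
    body="...",
    ai_assisted=True,
    fact_checked=True,
)
print(disclosure(article))
```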

Embedding ethical GenAI in news and information services organisations

Ethical AI principles

Industry-wide ethical AI principles must guide the deployment of GenAI in the news and information services sector. The OECD has outlined five AI principles promoting a human-centric and sustainable approach. Organisations should align with international standards such as these and embed them into their AI governance frameworks. Leading institutions, including the BBC, The University of Edinburgh and Thomson Reuters, have already published AI principles to ensure responsible GenAI use.

That said, it is not enough for organisations simply to sign up to ethical AI principles and signal intent—they must back this up with action by embedding these values into the design and operationalisation of all GenAI transformations. Every GenAI use case should be evaluated through the lens of ethical responsibility, ensuring that technology is used in a way that upholds societal trust.

Practical steps for embedding ethical GenAI principles into delivery

At Clarasys, we advocate a structured approach to ethical GenAI-led transformation, ensuring that responsible AI principles are embedded from the outset and throughout.

To bring this to life in a practical way, we have set out below some simple steps we recommend for integrating ethical considerations into a GenAI discovery and design sprint:

  1. Agree on ethical AI principles and translate them into clear design principles for the project.
  2. Conduct consequence scanning to identify potential positive and negative impacts on people, communities, and the planet. This involves cross-functional teams answering key questions:
    - “What are the intended and unintended consequences of this GenAI system?”
    - “What are the positive outcomes to prioritise?”
    - “What risks must be mitigated?”
  3. Develop user stories and research exercises to refine hypotheses and ensure diverse user needs are accurately represented.
  4. Prioritise based on risk impact, likelihood, and ethical alignment, deciding whether to act, influence, or monitor specific GenAI applications (see the scoring sketch after this list).
  5. Use divergent thinking exercises to explore innovative solutions for enhancing benefits and mitigating risks.
  6. Create an action plan with measurable outcomes, governance structures, and accountability mechanisms to ensure ethical AI principles remain embedded throughout development.
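To make step 4 concrete, here is a minimal sketch of one way to triage GenAI use cases by scoring risk impact against likelihood. The 1–5 scales, thresholds, and example use cases are illustrative assumptions for this article, not an industry standard; each organisation should calibrate its own scoring model.

```python
# Illustrative only: the scales, thresholds, and example use cases below
# are assumptions made for this sketch, not an established framework.

def triage(impact: int, likelihood: int) -> str:
    """Map a risk score (impact x likelihood, each rated 1-5) to a response."""
    score = impact * likelihood
    if score >= 15:
        return "act"        # mitigate directly before the use case proceeds
    if score >= 8:
        return "influence"  # shape the design to reduce the risk
    return "monitor"        # track through regular governance reviews

use_cases = {
    "automated breaking-news summaries": (5, 4),  # severe harm if wrong, likely
    "internal research assistant": (3, 3),
    "headline phrasing suggestions": (2, 2),
}

for name, (impact, likelihood) in use_cases.items():
    print(f"{name}: {triage(impact, likelihood)}")
```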

A call for responsible GenAI in News and Information Services

The use of GenAI in news and information services presents both opportunities and risks. While GenAI has the potential to revolutionise journalism, it must be implemented responsibly, with clear ethical guidelines and oversight.

The future regulatory landscape for AI remains uncertain, and the UK is no exception: its frameworks are continually evolving. Nevertheless, it is clear that AI will play a pivotal role in supporting the UK Government in achieving its five missions, as highlighted in its AI Opportunities Action Plan. This plan underscores a push for pro-innovation regulation and signals a future Industrial Strategy focused on AI adoption in key industries.

News and information services firms must act now to define and embed ethical AI principles, not only as a moral obligation but also as a strategic advantage. By doing so, they can position themselves as credible partners in shaping GenAI governance, ensuring compliance with future regulations while fostering responsible innovation.

At Clarasys, we are committed to helping organisations navigate the ethical challenges of GenAI implementation. By embedding responsible AI principles into every stage of design and delivery, we ensure that GenAI serves as a force for good in the information ecosystem.

If you need help implementing GenAI, get in touch.