More than 30 UK female politicians targeted in a malicious deepfake pornography campaign designed to embarrass them
British broadcaster Channel 4 has reported that prominent UK politicians, including Labour deputy leader Angela Rayner, Education Secretary Gillian Keegan, Commons Leader Penny Mordaunt, former Home Secretary Priti Patel and Labour backbencher Stella Creasy, were among the targets of digitally altered pornographic images hosted on an unnamed, sexually explicit website [1].
The site, which allows anonymous users and encourages the sharing of explicit content, also hosts thousands of photos of men masturbating [2]. While this incident has not been confirmed to be connected to the recent arrest of a teenage boy for creating graphic deepfake AI images of more than 50 female students [6], it underscores a growing trend of deepfake pornography targeting high-profile individuals such as actors, musicians, influencers and politicians.
Sharing sexually explicit deepfake images without the subject's consent is already illegal under laws prohibiting the intentional distribution of intimate images without consent [3]. The UK government is taking further steps to combat the problem: in April 2024 it announced amendments to the Criminal Justice Bill, including a new offence of creating this kind of content [4].
The recently passed Data (Use and Access) Act 2025 introduces new offences under the Sexual Offences Act 2003 covering the non-consensual creation, or requesting the creation, of purported intimate images. This explicitly includes AI-generated deepfakes depicting adults in sexual or intimate contexts, even where no real image exists [1], and is expected to provide significant legal protection against deepfake pornography.
Moreover, the UK is considering broader frameworks for AI transparency and regulation aligned with the EU AI Act's requirements for deepfake labelling and disclosure [2][3]. Under that Act, providers and deployers of AI systems that generate synthetic or manipulated content must clearly label such outputs as AI-generated, countering deepfakes through transparency and platform accountability.
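To make the idea of machine-readable labelling concrete, here is a minimal illustrative sketch of how a generator might stamp an image as AI-generated in its metadata. The metadata keys and workflow are assumptions for the example; neither the EU AI Act nor UK law prescribes this particular mechanism, and real deployments would more likely rely on a standard such as C2PA content credentials.

```python
# Illustrative only: embed an "AI-generated" marker in a PNG's metadata.
# The key names below are assumptions, not a legally prescribed scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, attaching a simple machine-readable provenance note."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")            # hypothetical flag
    metadata.add_text("generator", "example-model-v1")   # assumed identifier
    image.save(dst_path, pnginfo=metadata)

label_as_ai_generated("output.png", "output_labelled.png")
```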
The Online Safety Act 2023 requires pornography sites and platforms to implement strict age verification and to take measures against illegal and harmful content. By holding platforms accountable for content safety, it indirectly affects sites that host deepfake pornography, though it does not target deepfakes specifically [4].
The Criminal Justice Bill amendment turns on the creator's intent to cause harm rather than on consent alone [5]. This evolving legal landscape reflects heightened concern about deepfake pornography and aims to provide robust protection through a combination of criminal law, AI regulation and platform accountability.
Several of the victims are now planning to take the matter to the police [1]. As the fight against deepfake pornography continues, it is crucial that individuals and governments work together to ensure the safety and dignity of all people.
References
1. Channel 4 News
2. EU AI Act
3. The Guardian
4. Online Safety Bill
5. The Telegraph
6. BBC News