Fact-Check Dispatch: Racist tropes and transphobia fanned by AI, fake magazine writers, and the case against Bob Vylan
Issue 25 of the Fact-Check Dispatch
A heat wave gripping Europe gave rise to a wave of videos and pictures of people cooling off in the summer sunshine — but alongside real content, old and AI-generated clips featuring non-white people were shared to provoke a reaction.
As Pride Month closed, there was a resurgence in LGBT-related misinformation, and an examination of the long-running conspiracy theories facing several famous women who are accused, without evidence, of being secretly transgender.
Confusion reigned over a claim from the Gaza Humanitarian Foundation that it does not hand out sugar, contrary to previous evidence, while in the UK, legal experts unpicked the likelihood of a case being brought against punk duo Bob Vylan for its “Death to the IDF” chants.
Here’s our fact-check dispatch.
AI & old content stokes anti-migrant sentiment
Anti-immigrant rage-bait content is nothing new on European social media. Even more common are the speculative narratives about the proliferation of Islam, which have been widespread for at least a decade.
What is newer about these narratives is the vehicle being used to spread them. Where out-of-context clips were once shared to exaggerate the presence of certain ethnic groups in European cities, artificial intelligence can now generate the exact scenes sought by the spreaders of these theories.
The propaganda issue could become even worse, reports Bram Vandendriessche for VRT nws check. Google’s new Veo3 model, which turns text to video, can create very realistic videos and has just been launched in Belgium, he writes.
Among the AI-generated clips Vandendriessche analysed were interviews with asylum seekers coming ashore in the UK, who talked about beautiful women and free houses and money from the government. Another example was a series of clips about cities like Brussels and Madrid and how they might look in 2050. These videos included racist depictions of Muslims with the streets full of rubbish and camels. Meanwhile, eastern European cities like Moscow and Budapest “look neat and prosperous”, Vandendriessche writes.
“The rather cartoonish videos about the future are not intended to come across as ‘real’ but are mainly intended to provoke emotional reactions and thus gain more likes,” he adds. The videos show that even when AI-generated videos are not being presented as real, they are still being leveraged to influence political opinions and ride the rising wave of anti-migrant sentiment across Europe.
There were plenty of examples of similar content being shared amid the heat wave that baked the continent over the past week. For Eurovision News Spotlight, Jenny Hauser unpacked a viral video posted by serial misinformation spreader @RadioGenoa, an account that shares almost exclusively anti-immigrant content.
The probe found the video was at least partially old: one of the clips in it had been circulating since 2023. The footage appeared to have been shared to suggest immigrants were ‘flooding’ Paris, but as Hauser notes, this is already the most ethnically diverse part of France.
How AI fuels transphobia & Pride Month debunks
The list of famous women who have found themselves at the centre of false claims about their gender is long. For many of them, like Brigitte Macron, Imane Khelif, and Michelle Obama, the theories have endured over years, resurfacing every few months when they are back in the news.
This genre of misinformation is, once again, nothing new. When Lady Gaga first burst onto the music scene in the late 2000s, for example, she was the subject of a years-long misinformation campaign claiming that she was intersex, which was even covered by mainstream media outlets at the time.
For VerificaRTVE, Paula Peña tracked the newer prevalence of AI-altered images that push these narratives forward. Old pictures of famous women, changed with AI to make them look more masculine, are widely shared and reshared with claims that they are transgender. The false images purport to show proof that they were once men.
The wife of the French president, Brigitte Macron, is a very common target for theories about her gender identity. In fact, we have already reviewed a long-running conspiracy theory about her for Eurovision News Spotlight.
In recent days, the discourse took a new turn, as Armêl Balogog reports for franceinfo’s Vrai ou Faux. An article claimed that a surgeon who had proof of Brigitte Macron’s transgender identity died after a fall — suggesting foul play.
The investigation by Vrai ou Faux found the website hosting the article was impersonating five real journalists to lend its stories an air of credibility. “This process is reminiscent of a well-known method used in pro-Russian interference operations, which involves hijacking the identities of media outlets to spread fake news, except this time they are targeting individuals,” Balogog writes.
Artificial intelligence was used to create a deepfake video featured in the article. Journalists who do exist, and whose names are credited as authors on the website, are considering legal action, Balogog reports.
Elsewhere during Pride Month, an image circulated showing the inside of a metro car decorated with the rainbow Pride colours, claiming to show a subway in Spain. As Javier Menasalvas from VerificaRTVE reports, it is actually from Rome in 2024.
There was also a video showing Donald Trump announcing that he was cancelling the celebratory month-long festival, which turned out to be an AI-altered video, reports VerificaRTVE.
Top magazines invent fake writers for AI-generated articles
An investigation by VRT nws check’s Tom Buytaert, Bram Vandendriessche, and Daan Nicolay revealed an extraordinary story for the media industry: the Belgian edition of Elle magazine created fake journalists and enlisted the help of AI to write more than half of its online output across several months, without disclosing it to readers.
The investigation further found that other brands owned by the same parent company, Ventures Media, were doing the same, including Marie Claire and Forbes. The company said it was a test, but after questions from the VRT journalists, it changed the authors’ profiles and added a disclaimer to the articles “written” by them to say they were actually generated using AI.
There were a number of factors that served to tip off the VRT journalists. For one, photos used for the authors were found on a database of AI-generated headshots. Through the Wayback Machine, they were able to find that one of the authors even underwent a name change but kept the same image. Her email address as listed on the website did not exist.
The investigation even uncovered a psychologist from Psychologies magazine who did not actually exist, despite references in her articles to “my practice”. Carl Defreyne, chairman of the Psychologists' Commission, told VRT that “psychologist” is a protected title and added: “The example of Femke, who presents herself as a psychologist under a journalistic profile and provides advice from that role, is not only misleading, but also legally and socially unacceptable.”
Gaza Humanitarian Foundation claims it doesn’t hand out sugar
The Israel- and U.S.-backed Gaza Humanitarian Foundation has begun posting on X about what it says are false claims made about events at its distribution centres and the aid it hands out.
On July 1, the GHF published a screenshot of a post by a Palestinian man claiming personnel at a GHF distribution site had dug a trap for people trying to get to sugar that, he said, had been placed in a separate area from the rest of the aid.
In a bizarre development, the aid foundation said the claim could not be true because it “is not distributing sugar”, Jenny Hauser writes for Eurovision News Spotlight. The head of the organisation, Rev. Johnnie Moore, doubled down with the same statement.
There were no widespread claims about the “trap” alleged by the original poster. However, Hauser looked into the statement that the Gaza Humanitarian Foundation does not hand out sugar at all. “There are a number of reports appearing to refute this claim. An Associated Press report from May 30 cites a GHF spokesperson that the food boxes being handed out contain sugar among other food products. Photos in news reports by the Middle East Eye and ABC News of the content of boxes handed out by the GHF also showed packages with the Hebrew word for sugar on them. Eyewitness footage and photos shared on social media also showed sugar inside the aid boxes,” Hauser writes.
In fact, the image below was published directly by the Israeli military on May 27, alongside a statement about the establishment of GHF sites and the caption: “Photos of humanitarian aid packages”. Packets of white sugar are clearly visible.
Legal experts weigh in on Bob Vylan chants
The controversy that beset the Glastonbury music festival in the UK lasted all week after police said they were investigating a performance by punk rap duo Bob Vylan.
The group’s singer led chants of “Death, death to the IDF” and “Free, free Palestine” during the set on June 28, which was broadcast live on the BBC. In the UK, some politicians and commentators said the chant amounted to antisemitism, and Prime Minister Keir Starmer labelled it “hate speech”. Confirming an investigation, the police said they were probing potential hate crime or public order offences.
Even before any police investigation had been carried out, Bob Vylan was dropped by its management and agency. The group has denied antisemitism and pledged to continue to speak up for Palestinians.
For RTÉ Clarity, Jack McCarron and Kate McDonald asked British legal experts if there was a clear legal basis for any case against the duo. Professor David Mead told them the prosecution would need to be satisfied that they have enough evidence to ensure a realistic prospect of conviction, and would need to decide whether bringing charges was in the public interest. Meanwhile, Jonathan Hall KC said a section of the Public Order Act relating to “stirring up racial hatred” could be used, but added police could struggle to bring a charge.
“It's quite hard to show off the back of saying ‘Death to the IDF’ that he intended people in the audience to hate Jews … that's because what he said was to a military of a country — although, there is an exceptionally strong link between Jews and that country,” he said.
Both experts agreed that any legal case would hinge on how the intent behind the chant is interpreted. As this story shows, leaning on the expertise of specialists can be an invaluable way to reach a fuller analysis of a complex story.