Taylor Swift is said to be furious and contemplating legal action after AI-generated explicit images of her emerged on social media earlier this week.
Swift is looking into taking action against a deepfake porn website that hosted the images, according to a report in the Daily Mail.
The images, generated by AI – known as deepfakes because of their realistic look – started to circulate on social media platforms including X (formerly Twitter), Instagram and Reddit.
Once the images became common knowledge, Swift’s fanbase – the Swifties – mobilised, campaigning to get the images taken down and posting messages of support to the singer, who will soon be performing in Japan and Singapore.
But it took 17 hours for the images to be removed from social media. The source of the images was traced to an account on X, which appears to have since been banned by the site.
On the day, the term ‘Taylor Swift AI’ trended in various regions across the world, with one post receiving more than 45 million views before it was finally taken down.
The Daily Mail reported that sources close to Swift said the singer had not yet decided whether to initiate legal proceedings, but described the images as abusive, offensive and exploitative, and created without her knowledge or consent.
Where the AI images originated is still unconfirmed. Some have speculated that they came from a Telegram group whose users have been known to share such content.
Meanwhile, X put out a tweet reaffirming its stance on deepfakes, without mentioning Swift specifically: ‘Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed. We’re committed to maintaining a safe and respectful environment for all users.’