LONDON - The wildfire spread of fabricated porn images of pop megastar Taylor Swift has fuelled calls in the United States for strong legislation to tackle an explosion of deepfake sexual abuse facilitated by artificial intelligence.
The images - which grafted Swift's face onto another woman's body - attracted tens of millions of views on social media last week in what one lawyer said was the biggest such case to date.
There has been media speculation that the billionaire music icon could pursue legal action, but given the limited legislation around deepfake pornography, it is not clear how she might do so.
Here is a look at what laws are out there and why it is so hard to bring a case.
While politicians worry about the potential for fabricated images and videos to skew elections, the vast majority of deepfakes are non-consensual porn of women.
The first such clips were shared on social media platform Reddit in 2017. Creating them required technical skill and numerous images of the targeted woman's face.
Today, multiple apps allow anyone to make deepfakes with just one photo and no expertise.
"I call it point-and-click violence against women. It's that easy now," said Adam Dodge, an attorney and founder of online safety company EndTAB.
The number of forged videos being created has skyrocketed.
More than 144,000 clips - over 14,000 hours of footage - were posted last year on the main sites for deepfake porn, according to independent analyst Genevieve Oh. This is more than the combined total for all preceding years.
Her research shows there have been more than 4.2 billion views on these sites since 2017.