Law and Tech Will Have to Work Together to Tackle Deepfake Porn

After what feels like a hundred false dawns, this could finally be the year that AI goes mainstream. In the four months since it launched, ChatGPT has attracted hundreds of millions of users while embarrassing some of the most established players in tech with its uncanny ability to generate realistic and intelligent-sounding text.

But as a lawyer, I was struck by another AI story that made the news earlier this month, when content creator Brandon "Atrioc" Ewing was caught on a stream with a deepfake pornography subscription service open in his browser. Not only that, but the deepfakes he was viewing featured other popular creators in the gaming space, many of them online friends of Atrioc.

It was a reminder that for all of generative AI's potential use cases, from restoring speech to those with medical conditions to bringing us new Elvis tracks, it will also put us in a world where anyone with an app can generate convincingly spoofed content. By the end of the decade, synthetic media could be dominant, and we will face a new and as-yet-unimaginable class of harms. According to one estimate, 96% of existing deepfakes are pornographic, and the majority target women.

The law and deepfake porn

There is no federal law tackling deepfake porn, which means the ability to bring criminal or civil charges differs between states. California and Virginia have laws that allow victims of deepfakes to sue, but the internet doesn't stop at state lines. If the person creating the deepfake is in a different state, or a different country, from their target, there isn't much the law can do to help.

A bigger problem is that the technology is developing faster than the law can catch up. Many states have now passed laws against intimate image abuse ("revenge porn"), but it took 10–15 years to get there. Deepfakes were not a realistic technical threat when lawyers started to draft those laws, so deepfake content often falls into a gray area even in states where revenge porn is illegal. Unfortunately, we'll likely have to wait another 15 years before similar legislation covers the threats emerging now, by which time we'll have something else to worry about.

Beyond this, there is an almost complete lack of regulatory oversight. As with any nascent industry, it's the companies at the forefront that get to decide the rules. Are these the same rules that the rest of us want? If not, perhaps we shouldn't allow technologies based on a level of computational power that was unthinkable a few years ago to develop without any serious oversight. China is the only country so far to have introduced an outright ban on deepfakes, though presumably not out of concern for the harms they may cause (China's response to ChatGPT, ChatYuan, was recently suspended for giving users "incorrect answers" to politically sensitive questions).

Law is always going to be behind the curve when it comes to AI, but developments at the intersection of law and tech indicate how we might mitigate some of the worst harms.  The International Bar Association-initiated eyeWitness to Atrocities project is bringing legal and technological analysis together to verify photo and video evidence in human rights abuse prosecutions. A similar project by Stanford’s Starling Lab and Hala Systems has been used to create a digital evidence package and make a cryptographic submission showing evidence of war crimes in Ukraine to the International Criminal Court. It’s easy to see how these techniques could be used to track metadata, allowing online platforms to automatically authenticate or remove deepfake content.
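The authentication idea above can be sketched in a few lines. The following is a simplified, hypothetical illustration (not the actual eyeWitness or Starling Lab implementation, which uses asymmetric signatures, trusted timestamps, and chain-of-custody metadata): a capture device signs each file's hash at creation time, and a platform holding the corresponding key can later check whether footage is unaltered.

```python
import hashlib
import hmac

# Hypothetical key embedded in a capture device; real systems would use
# public-key signatures rather than a shared secret.
DEVICE_KEY = b"secret-key-embedded-in-capture-device"

def sign_media(data: bytes) -> str:
    """Produce an authenticity tag for captured media bytes."""
    return hmac.new(DEVICE_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Check that media bytes still match the tag made at capture time."""
    expected = hmac.new(DEVICE_KEY, data, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing tags
    return hmac.compare_digest(expected, tag)

original = b"...raw video bytes..."
tag = sign_media(original)

print(verify_media(original, tag))         # True: untouched footage
print(verify_media(original + b"x", tag))  # False: content was altered
```

A platform could run a check like this automatically at upload time: content that verifies against a known capture key is authenticated, while content that fails, or carries no provenance data at all, can be flagged for review.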

Honza Cervenka is a columnist at Grit Daily. He is a solicitor of England & Wales and an attorney registered with the California Bar. He works on a wide variety of cases on both sides of the Atlantic, including civil, employment, and data protection litigation in the UK, and Title VII and Title IX discrimination cases in the US. Due to his background in IT, he works on the technical aspects of helping people whose reputations have been attacked online.
