Generative AI has gone from zero to a hundred in under a year. Though still early, it has shown its potential to transform business. That much we can all agree on. Where we diverge is on how to contain the dangers it poses.
To be clear, I am pro-innovation and far from a fearmonger. But the recent uptick in misinformation, much of it aimed at polarizing people around controversial issues of the moment, has made it clear that, if left unchecked, gen AI could wreak havoc on societies.
We’ve seen this movie before with social media, but it took years and hard lessons for us to wake up to its flaws. We’ve (presumably) learned something. The question today is who will help stem the tide of reality distortion from gen AI, and how?
Predictably, governments are beginning to act. Europe is leading the charge, as it increasingly has on regulating tech. The US is right behind, with President Biden issuing an executive order this past October.
But it’s going to take a global village acting together to “keep gen AI honest.” And before government can help, it needs to understand the limitations of available approaches.
The identity problem has gotten much worse
In this new world, truth becomes the needle in a haystack of opinions masquerading as facts. Knowing who the content comes from matters more than ever.
And it’s not as easy as decreeing that every social media account must be identity-verified. There is fierce opposition to that, and in some cases anonymity is needed to justifiably protect account holders. Moreover, many consumers of the worst content don’t care whether it is credible or where it came from.
Despite those caveats, the potential role of identity in coping with gen AI is underappreciated. Skeptics, hear me out.
Let’s imagine that regulation or social conscience causes platforms to give every account holder these choices:
- Verify their identity or not, and
- Publicly reveal their verified identity, or just be labeled “ID Verified”
Then the social media audience can better decide who is credible. Equally important, if not more so, identity supports accountability. Platforms can decide what actions to take against serial “disinformers” and repeat abusers of AI-generated content, even if they pop up under different account names.
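To make the idea concrete, here is a minimal sketch of how a platform might represent those choices. Everything in it (the names `VerificationStatus`, `Account`, `displayLabel`) is a hypothetical illustration, not any platform’s actual API.

```typescript
// Hypothetical model of the account-holder choices above.
// All names are illustrative assumptions, not a real platform API.

type VerificationStatus = "unverified" | "verifiedPrivate" | "verifiedPublic";

interface Account {
  handle: string;
  status: VerificationStatus;
  legalName?: string; // only meaningful when status is "verifiedPublic"
}

// The label the audience would see next to each post.
function displayLabel(account: Account): string {
  switch (account.status) {
    case "unverified":
      return "Unverified";
    case "verifiedPrivate":
      return "ID Verified"; // verified, but identity stays hidden
    case "verifiedPublic":
      return account.legalName ?? "ID Verified"; // fall back if name is missing
  }
}

// Example: a verified account that keeps its legal name private.
console.log(displayLabel({ handle: "@newsbot42", status: "verifiedPrivate" })); // "ID Verified"
```

The key design point is that verification and disclosure are separate decisions: an account can prove who it is to the platform without revealing that identity to the audience, yet still carry a label that supports accountability.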
With gen AI raising the stakes, I believe that identity—knowing exactly who posted what—is critical. Some will oppose it, and identity is not a comprehensive answer. In fact, no solution will satisfy all stakeholders. But if regulation compels the platforms to offer identity verification to all accounts, I’m convinced the impact will be a huge positive.
The moderation conundrum
Content moderation, automated and human, is the last line of defense against undesirable content. Human moderation is a rough job, with risk of psychological harm from exposure to the worst humanity can offer. It’s also expensive, and often accused of the very biased censorship platforms say they strive to avoid.
Automated moderation scales beyond human capacity to cope with the torrents of new content, but it fails to understand context (memes being a common example) and cultural nuance. Both forms of moderation are necessary, but they are only part of the answer.
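One common way the two layers divide the work is by confidence: automation handles the clear-cut cases, and ambiguous ones go to human reviewers. Below is a minimal sketch of that routing, assuming a hypothetical classifier that returns a risk score between 0 and 1; the scorer, thresholds, and names are all stand-ins, not any platform’s real system.

```typescript
// Hypothetical hybrid moderation router. The classifier, thresholds,
// and labels are illustrative assumptions only.

interface Post {
  id: string;
  text: string;
}

type Decision = "publish" | "block" | "human_review";

// Toy stand-in for an automated classifier; real systems use trained models.
function riskScore(post: Post): number {
  const flagged = ["scam", "fake cure"].filter((w) =>
    post.text.toLowerCase().includes(w)
  );
  return Math.min(1, flagged.length * 0.6); // 0 = benign, 1 = clearly abusive
}

const BLOCK_THRESHOLD = 0.9;  // confident enough to remove automatically
const REVIEW_THRESHOLD = 0.5; // too ambiguous: context, memes, cultural nuance

function route(post: Post): Decision {
  const score = riskScore(post);
  if (score >= BLOCK_THRESHOLD) return "block";         // automation takes clear cases
  if (score >= REVIEW_THRESHOLD) return "human_review"; // humans take the gray zone
  return "publish";
}

console.log(route({ id: "1", text: "Try this fake cure, not a scam!" })); // "block"
```

Tuning those thresholds is exactly the tradeoff described above: push them down and human reviewers drown in volume; push them up and automation silently makes the nuanced calls it is worst at.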
The oft-heard, conventional prescription for controlling gen AI is: “Collaboration between tech leaders, government, and civil society is needed.” Sure, but what specifically?
Governments, for their part, can push social and media platforms to offer identity verification and to display it prominently on all posts. Regulators can also pave the way for credibility metrics that actually help audiences gauge whether a source is believable. Collaboration is necessary to develop universal standards that give specific guidance and direction, so the private sector doesn’t have to guess.
Finally, should it be illegal to create malicious AI output? Legislation to prohibit content meant for illegal activity could reduce the volume of toxic content and lighten the load on moderators. I don’t see regulation and laws as capable of defeating disinformation, but they are essential in confronting the threat.
The sunny side of the street: innovation
The promise of innovation makes me an optimist here. We can’t expect politicians or platform owners to fully protect against AI-generated deception. They leave a big gap, and that is exactly what will inspire the invention of new technology to authenticate content and detect fakery.
Because we now know the downside of social media, we’ve been quick to realize that generative AI, with its ability to polarize and mislead, could turn out to be a huge net negative for humanity.
Optimistically, I see benefits in a multi-pronged approach where control methods work together: first at the source, limiting the creation of content designed for illegal use; then, prior to publication, verifying the identity of those who decline anonymity; next, clear labeling to show credibility ratings and the poster’s identity or lack thereof; and finally, automated and human moderation to filter out some of the worst. I’d also anticipate new authentication technology coming online soon.
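A rough sketch of how those stages might compose appears below. Every dependency here (`blockedAtSource`, `identityLabel`, `credibilityOf`, `passesModeration`) is a hypothetical placeholder for what would be an entire system in practice; the only point is the ordering of the stages.

```typescript
// Hypothetical end-to-end pipeline for the multi-pronged approach above.
// Each dependency is a placeholder, not a real service.

interface Submission {
  authorId: string;
  text: string;
}

interface PublishedPost extends Submission {
  label: string;       // identity label shown to the audience
  credibility: number; // 0..1 rating from a (hypothetical) metrics provider
}

type Result = { ok: true; post: PublishedPost } | { ok: false; reason: string };

interface Controls {
  blockedAtSource: (s: Submission) => boolean;  // stage 1: source restrictions
  identityLabel: (authorId: string) => string;  // stage 2: identity verification
  credibilityOf: (authorId: string) => number;  // stage 3: credibility labeling
  passesModeration: (text: string) => boolean;  // stage 4: auto + human moderation
}

function publish(s: Submission, c: Controls): Result {
  if (c.blockedAtSource(s)) return { ok: false, reason: "blocked at source" }; // stage 1
  const label = c.identityLabel(s.authorId);       // stage 2
  const credibility = c.credibilityOf(s.authorId); // stage 3
  if (!c.passesModeration(s.text)) return { ok: false, reason: "failed moderation" }; // stage 4
  return { ok: true, post: { ...s, label, credibility } };
}
```

No single stage has to be airtight; each one only has to thin out what reaches the next, which is the whole argument for layering them.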
Add it all up, and we’ll have a much better, though never perfect, solution. Meanwhile, we should build up our skill set to figure out what’s real, who’s telling the truth, and who is trying to fool us.