Since at least the 2016 election, when concerns around disinformation burst into the public consciousness, experts have been sounding the alarm about deepfakes. The implications of this technology were—and remain—terrifying. The unchecked proliferation of hyper-realistic synthetic media poses a threat to everyone—from politicians to everyday people. In a combustible environment already characterized by widespread mistrust, deepfakes promised to only stoke the flames further.
As it turns out, our fears were premature. The technical skill required to actually make a deepfake, coupled with the often shoddy quality of the results, meant that for at least the last two presidential election cycles, deepfakes remained a minimal concern.
But all of that is about to change—is changing already. Over the last two years, generative AI technology has entered the mainstream, radically simplifying the process of creating deepfakes for the average consumer. These same innovations have significantly increased the quality of deepfakes, such that, in a blind test, most people would be unable to distinguish a doctored video from the real thing.
We have already started to see indications of how this technology might affect society if steps aren't taken to combat it. Last year, for instance, an AI-generated photo of Pope Francis wearing an unusually stylish coat went viral and was taken by many to be authentic. While this might seem, on one level, like an innocuous bit of fun, it reveals the dangerous potency of these deepfakes and how hard it can be to curb misinformation once it has started to spread. We can expect to see far less amusing, and far more dangerous, instances of this kind of viral fakery in the months and years to come.
For this reason, it is imperative that organizations of every stripe, from the media to finance to governments to social media platforms, take a proactive stance toward deepfake detection and content authenticity verification. A culture of trust, backed by concrete safeguards, needs to be established now, before a tidal wave of deepfakes can wash away our shared understanding of reality.
Understanding the deepfake threat
Before delving into what organizations can do to combat this surge in deepfakes, it's worth elaborating on precisely why safeguarding tools are necessary. Typically, those concerned about deepfakes cite their potential effects on politics and societal trust. Those consequences are extremely important and should not be neglected in any conversation about deepfakes. But the rise of this technology also has potentially dire effects across multiple sectors of the US economy.
Take insurance, for instance. Right now, annual insurance fraud in the United States tallies up to $308.6 billion, a number roughly one-quarter the size of the entire industry. At the same time, the back-end operations of most insurance companies are increasingly automated, with 70% of standard claims projected to be touchless by 2025. Decisions, in other words, are increasingly made with minimal human intervention: self-service on the front end and AI-facilitated automation on the back end.
Ironically, the very technology that has permitted this increase in automation, namely machine learning and artificial intelligence, has all but guaranteed its exploitation by bad actors. It is now easier than ever for the average person to manipulate claims, for instance by using generative AI programs like DALL-E, Midjourney, or Stable Diffusion to make a car look more damaged than it is. Apps already exist specifically for this purpose, such as Dude Your Car!, which allows users to artificially create dents in photos of their vehicles.
The same applies to official documents, which can now be easily manipulated—with invoices, underwriting appraisals, and even signatures adjusted or invented wholesale. This ability is a problem not just for insurers but across the economy. It’s a problem for financial institutions, which must verify the authenticity of a wide range of documents. It’s a problem for retailers, who may receive a complaint that a product arrived defective, accompanied by a doctored image.
Businesses simply cannot operate with this degree of uncertainty. Some degree of fraud may always be inevitable, but with deepfakes we are not talking about fraud on the margins; we are talking about a potential epistemological catastrophe in which businesses have no clear means of distinguishing truth from fiction and wind up losing billions of dollars to the confusion.
Fighting fire with fire: how AI can help
So, what can be done to combat this? Perhaps unsurprisingly, the answer lies in the very technology that facilitates deepfakes. If we want to stop this scourge before it gathers more momentum, we need to fight fire with fire. AI can help generate deepfakes—but it also, thankfully, can help identify them automatically and at scale.
Using the right AI tools, businesses can automatically determine whether a given photograph, video, or document has been tampered with. By bringing dozens of disparate models to bear on the task, AI can tell businesses precisely which photographs and videos are suspicious. Like the tools businesses are already deploying to automate daily operations, these detectors can run in the background without burdening overstretched staff or taking time away from important projects.
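To make that workflow concrete, here is a minimal sketch of how such a system might fuse several detectors' verdicts into a single, explainable alert. Everything in it is an illustrative assumption rather than any vendor's actual API: the Signal and aggregate names, the equal-weight average, and the example scores are all invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    score: float   # 0.0 = looks clean, 1.0 = almost certainly manipulated
    detail: str

def aggregate(signals: list[Signal], threshold: float = 0.5) -> dict:
    """Fuse per-detector scores into one verdict plus human-readable reasons.

    An equal-weight average stands in here for whatever fusion logic a real
    system would use (learned weights, calibration, and so on).
    """
    overall = sum(s.score for s in signals) / len(signals)
    flagged = [s for s in signals if s.score >= threshold]
    return {
        "suspicious": overall >= threshold,
        "confidence": round(overall, 2),
        "reasons": [f"{s.name}: {s.detail}" for s in flagged],
    }

# Scores that individual detectors might emit for one claim photo.
signals = [
    Signal("metadata", 0.9, "EXIF edit timestamp postdates capture time"),
    Signal("reverse_search", 0.7, "near-identical image found on the web"),
    Signal("pixel_forensics", 0.3, "weak resampling artifacts only"),
]

verdict = aggregate(signals)
if verdict["suspicious"]:
    # Hand off to a human reviewer with the AI's reasoning attached.
    print(f"Review needed (confidence {verdict['confidence']}):")
    for reason in verdict["reasons"]:
        print(" -", reason)
```

The design point worth noting is that the output carries reasons alongside the score, which is what makes the human handoff described next possible.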
If and when a photograph is flagged as potentially altered, human staff can then be alerted to evaluate the problem directly, aided by the information the AI provides. Using deep-scan analysis, the AI can tell businesses why it believes a photograph has likely been doctored, pointing, for instance, to manually altered metadata, identical copies of the image elsewhere on the web, or telltale photographic irregularities.
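As one concrete example of such a signal, a detector might start by reading an image's EXIF metadata with the Pillow library and flagging fields that suggest post-capture editing. The heuristic below, which treats a Software tag naming an editing tool as grounds for review, is a deliberately simplified illustration; real forensic analysis goes much deeper, and the file path is hypothetical.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Illustrative heuristic: editor names whose presence in the EXIF
# "Software" field is worth surfacing to a human reviewer.
EDITOR_HINTS = ("photoshop", "gimp", "lightroom")

def metadata_flags(path: str) -> list[str]:
    """Return human-readable warnings gleaned from an image's EXIF data."""
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to their readable names.
    fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    flags = []
    software = str(fields.get("Software", "")).lower()
    if any(hint in software for hint in EDITOR_HINTS):
        flags.append(f"Software tag names an editing tool: {software!r}")
    if not fields:
        flags.append("No EXIF data at all, which is common after re-export")
    return flags

# Hypothetical file path; point this at a real image to try it.
print(metadata_flags("claim_photo.jpg"))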
None of this is to denigrate the incredible advancements we’ve seen in generative AI technology over the last few years, which do indeed have useful and productive applications across industries. But the very potency—not to mention simplicity—of this emerging technology nearly guarantees its abuse by those looking to manipulate organizations, whether for personal gain or to sow societal chaos.
Organizations can have the best of both worlds: the productivity benefits of AI without the downsides of ubiquitous deepfakes. But getting there requires a new degree of vigilance, especially given that generative AI's outputs are only becoming more persuasive, detailed, and lifelike by the day. The sooner organizations turn their attention to this problem, the sooner they can reap the full benefits of an automated world.