Dr. Nick Oberheiden: Is Your Business Protected Against the Coming AI Litigation Wave?

When any new technology hits the market, its adoption presents novel legal risks. Privacy concerns accompanied the advent of the internet and social media. More recently, complex regulatory issues have surrounded the emergence of blockchain technology, and cryptocurrencies that were originally welcomed as alternative investment tools are now being scrutinized by the SEC and prospective plaintiffs for possible securities law violations.

It’s the same with artificial intelligence (AI). AI has quickly taken on a prominent role in an extremely wide range of industries. Yet the legal framework surrounding AI is not fully developed—and in many respects it has not developed at all.

Dr. Nick Oberheiden, an attorney who has authored a book on AI Litigation, is intimately familiar with the legal challenges—and legal risks—that AI’s widespread adoption presents. He has gained this familiarity on the front lines, representing CEOs and businesses on both sides of AI-related disputes, in and out of court.

Dr. Nick Oberheiden, with permission.

“While still significantly hamstrung by a dearth of relevant case law, judges today are confronted with a whole host of novel questions surrounding AI. For example, if an AI platform generates copyrightable material or produces new technology that qualifies for patent protection, where do the associated intellectual property rights lie? Do the human creators of the AI platform own the AI-generated intellectual property? Do the intellectual property rights vest in the individual or company that used the platform to generate the original material or new technology? Or could the intellectual property rights possibly belong to the AI platform itself?”

This is just the beginning, says Dr. Oberheiden. Along with questions regarding ownership of intellectual property rights, he says that he is already seeing cases arise involving issues ranging from employment discrimination (when companies rely on AI to weed out job applicants) to criminal liability (when individuals use AI in connection with fraud and other crimes). Then, of course, there is the question of who (or what company or entity) is liable when a self-driving vehicle or other AI-controlled product causes a serious or fatal accident.

So, is your business ready?

According to Dr. Oberheiden, this is one of the most important questions that CEOs are facing in 2024. “Artificial intelligence is here, and the inevitable wave of AI-related litigation isn’t far behind. While there have been relatively few AI-specific lawsuits filed to date, all indications are that 2024 will be the breakout year for AI litigation.”

What can your business do to prepare? Dr. Oberheiden offers several tips for CEOs of AI companies as well as executives at companies that are adopting AI-driven tools. For example:

1. Learn About the (Limited) AI Case Law That Currently Exists

While there isn’t much out there, Dr. Oberheiden says it is worth executives’ time to study the limited AI case law that currently exists. Because AI presents so many unique and complex legal issues, he says, CEOs should handle AI differently from other technologies—proactively learning about these issues before releasing or adopting AI-based platforms, products, services, and content.

“To the extent possible, a proactive approach to risk mitigation is going to be critical with AI,” he says. “The cases that have been decided to date—while limited in number—provide some useful insight into where the law may be headed in the coming years.”

2. Draft and Negotiate Contracts with Potential AI-Related Litigation in Mind

One of the most effective ways that companies can protect themselves is by drafting and negotiating contracts with potential AI-related litigation in mind, says Dr. Oberheiden. He points specifically to the value of so-called “liability-shifting” provisions, which are not unique to AI and which have a long history of withstanding scrutiny in the courts.

One example of a liability-shifting provision is an indemnification clause. “Essentially, an indemnification clause says, ‘If your company gets sued, our company will take responsibility.’” Notably, some major companies—including Microsoft and Google—have publicly offered to indemnify any customers who get sued as a result of using their generative AI platforms. As Dr. Oberheiden points out, though, this may not be as risky or avant-garde as it seems. “While some generative AI companies are facing legal risks because they aren’t licensing the content used to ‘train’ their platforms, companies like Microsoft and Google are almost certainly taking the necessary steps to protect themselves. As a result, the risk of facing claims that fall under their limited indemnification clauses is almost certainly extremely small.”

Along with indemnification clauses, Dr. Oberheiden says that “hold harmless” clauses, warranties, and other contract terms can provide suitable protections for parties on both sides of transactions involving AI technologies. “The key,” he says, “is to negotiate terms that address the unique risks associated with artificial intelligence and not rely on boilerplate clauses that were drafted for different purposes.”

3. Make Smart Decisions About How and When You Release AI-Driven Products and Services

Currently, companies are racing to be the first to market with their AI-driven products and services. But, while this might make sense from a marketing perspective, Dr. Oberheiden says that, from a legal perspective, rushing to market can be risky. “Any time you have a new product or service, it is crucial to make sure it is ready before you start selling it to clients or customers. But, this is especially important with artificial intelligence. Already, we have seen several companies release AI-driven products and services without giving due consideration to the legal risks involved—and they are facing negative publicity and lawsuits as a result.”

Dr. Oberheiden points out that this raises another major issue as well. Oftentimes, the developers behind AI platforms aren’t aware of the legal implications that their coding entails. For example, your average AI software developer isn’t likely to be familiar with the laws that restrict use of competitors’ trademarks—and that prevent the adoption of “confusingly similar” trademarks for use in commerce. “If an AI platform is ‘trained’ to read competitors’ marketing copy or examine existing logos when creating new ones, then trademark infringement may essentially be baked into the underpinnings of the platform.” If this is the case, he says, “litigation is almost inevitable.”

4. Make Smart Decisions About How and When You Use AI for Marketing-Related Purposes

With this in mind, Dr. Oberheiden also advises companies to make smart decisions about how and when they use AI for marketing-related purposes. But, here too, he notes that intellectual property infringement is just one of numerous potential pitfalls. “Already, we’re seeing disputes arise out of generative AI platforms creating defamatory content and content that violates individuals’ and companies’ rights in other ways.”

Once again, Dr. Oberheiden notes that the legal issues inherent in AI present unique challenges that don’t exist with the business use of other types of technologies. Individuals in a company’s marketing department might not be aware that AI-generated content carries trademark or copyright implications, for example. Even if they are aware of these concerns at a general level, they will almost certainly lack the expertise needed to assess whether content is potentially infringing.

So, what is the solution? Dr. Oberheiden says that, at least for now, companies should adopt policies and procedures that restrict how and when generative AI can be used. Companies that choose to use AI for content-generation purposes should also seriously consider engaging counsel to review all AI-generated content prior to publication, he says. “The risks associated with using AI for marketing-related purposes are very real. Even if the company behind an AI platform offers indemnification, seeking indemnification after a competitor or consumer files a lawsuit is not an ideal solution.”

5. Put Appropriate Safeguards in Place

This dovetails with Dr. Oberheiden’s last tip: make sure your company has appropriate safeguards in place. Along with liability-shifting provisions and content-focused policies, this includes adopting various other contractual protections, internal procedures, and quality control standards. “While there is currently a rush to adopt AI, companies that move too quickly will necessarily be putting themselves at risk. Once companies have the necessary safeguards in place, then they can start cautiously moving toward the future.”

Even when businesses take all of these steps, Dr. Oberheiden says that they should still be prepared for the possibility of AI-related litigation. “With AI, there are still a lot of unknowns. As a result, in addition to putting the necessary safeguards in place, companies that leverage AI should also proactively prepare for the possibility of investigations, lawsuits, and enforcement actions. Their executive leadership teams should have defense strategies in place, and they should be prepared to deploy these strategies quickly and effectively when a legal threat arises. While AI may be the cutting edge of technology, this is one area in which the old adage still very much applies: An ounce of prevention is worth a pound of cure.”

