The Biden administration may be funding AI research, but it also hopes to hold companies accountable for their behavior. Vice President Kamala Harris has met with the CEOs of Alphabet (Google’s parent), Microsoft, OpenAI and Anthropic in a bid to secure stronger safeguards for AI. Private firms have an “ethical, moral and legal responsibility” to make their AI products safe and secure, Harris says in a statement. She adds that they still have to honor existing laws.
The Vice President casts generative AI technologies like Bard, Bing Chat and ChatGPT as having the potential to both help and harm the country. The technology can address some of the “biggest challenges,” but it can also be used to violate rights, create distrust and weaken “faith in democracy,” according to Harris. She pointed to investigations into Russian interference in the 2016 presidential election as evidence that hostile nations will use technology to undercut democratic processes.
Finer details of the discussions aren’t available as of this writing. However, Bloomberg claims invitations to the meeting outlined discussions of the risks of AI development, efforts to limit those risks and other ways the government could cooperate with the private sector to safely embrace AI.
Generative AI has proven helpful for providing detailed search answers, producing art and even writing messages for job hunters. Accuracy remains a problem, however, and there are concerns about cheating, copyright violations and job automation. IBM said this week it would pause hiring for roles that could eventually be replaced by AI. There’s been enough worry about AI’s dangers that industry leaders and experts have called for a six-month pause on experiments to address ethical issues.
Biden’s officials aren’t waiting for companies to act. The National Telecommunications and Information Administration is asking for public comments on possible rules for AI development. Even so, the Harris meeting sends a not-so-subtle message that AI creators face a crackdown if they don’t act responsibly.