The AI race is ‘out of control’ & starting to freak out tech titans, with Musk, Woz and others wanting a 6-month freeze

Elon Musk, Steve Wozniak, and more than 1,300 academics, technologists, and business luminaries have signed a Future of Life Institute (FLI) open letter calling for a six-month freeze on “out-of-control” AI development that, they say, poses “profound risks to society and humanity.”

That development has accelerated at a furious rate since last November’s release of ChatGPT – the natural-language chatbot built on OpenAI’s GPT-3 family of models – with generative AI already being used to answer interview questions, develop malware, write application code, revolutionise web browsing, create prize-winning art, bolster productivity suites from Microsoft and Google, and more.

A global race to embrace and improve the technology – and its new successor, the ‘multimodal’ GPT-4, which can analyse images and displays significantly improved deductive reasoning – has fuelled investment so rapid and unchecked, the FLI letter warns, that “human-competitive” AI is now being adopted without consideration of its long-term implications.

Those implications, according to the letter, include the potential to “flood our information channels with propaganda and untruth”; automation of “all the jobs”; “loss of control of our civilisation”; and development of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.”

To stave off such AI-driven annihilation, the letter calls for a “public and verifiable” six-month hiatus on development of AI models more powerful than GPT-4 – or, in the absence of a rapid pause, a government-enforced moratorium on AI development.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development [to] ensure that systems adhering to them are safe beyond a reasonable doubt,” the letter argues.

The letter is not calling for a complete pause on AI development, FLI notes, but a “stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Tech giants all but absent

The letter comes less than a year after Google engineer Blake Lemoine was put on administrative leave for claiming Google’s own LaMDA AI engine had become so advanced that it was sentient – a claim that Google’s ethicists and technologists flatly rejected.

Lemoine is not listed among the signatories to the FLI open letter, but many who are share responsibility for AI development’s breakneck pace: Musk – one of the original co-founders of GPT-3 creator OpenAI – was recently reported to have pitched AI researchers on developing an alternative, non-“woke” platform with fewer restrictions on the creation of offensive content.

The list of signatories – which has been paused to allow vetting processes to catch up amidst high demand – includes executives at content-based companies such as Pinterest and Getty Images, as well as researchers at AI and ethics organisations including the Center for Humane Technology, the Cambridge Centre for the Study of Existential Risk, the Edmond and Lily Safra Center for Ethics, the UC Berkeley Center for Human-Compatible AI, Unanimous AI, and more.

Australian signatories include Western Sydney University mathematics professor Andrew Francis; University of Melbourne professors Andrew Robinson and David Balding, and neuroscience research fellow Colin G Hales; UNSW Scientia Professor Robert Brooks; University of Queensland honorary professor Joachim Diederich; University of Sydney law professor Kimberlee Weatherall; and others.

Tech giants such as Meta – which recently closed its Responsible Innovation team after barely a year – are all but absent from the list, which features no Apple, Twitter, or Instagram employees; just one Meta employee; three Google researchers and software engineers; and three employees of Alphabet’s AI subsidiary DeepMind.

The letter isn’t the first time FLI has warned about the risks of AI, with previous open letters warning against lethal autonomous weapons, setting out guiding AI principles, and calling for research to prioritise “robust and beneficial” AI.

