AI and the Fight Against Technophobia

When it comes to Generative AI and Large Language Models (LLMs), such as ChatGPT, enthusiasm for AI is mixed with technophobia. This is natural for the general public: people like exciting new things, but they fear the unknown. What is new is that several prominent scientists have themselves become techno-sceptics, if not technophobes. The scientists and industrialists calling for a six-month ban on AI research, or the skepticism of the leading AI scientist Prof. Geoffrey Hinton, are such examples. The only related historical equivalent I can recall is the criticism of atomic and nuclear weapons by part of the scientific community during the Cold War. Luckily, humanity managed to address those concerns in a rather satisfactory way.

Of course, everyone has the right to question the current state of AI affairs:

  • Nobody knows why Large Language Models work so well, or whether they have a limit.
  • Many dangers lurk that bad actors may create ‘AI bombs’, particularly if states remain passive bystanders in terms of regulation.

These are legitimate concerns that fuel the fear of the unknown, even among prominent scientists. After all, they too are human.

However, can AI research stop, even temporarily? In my view, no, as AI is humanity's response to a global society and a physical world of ever-increasing complexity. As the processes driving this increase in physical and social complexity are very deep and seemingly relentless, AI and citizen morphosis are our only hope for a smooth transition from the current Information Society to a Knowledge Society. Otherwise, we may face a catastrophic social implosion.

The solution is to deepen our understanding of AI advances, speed up AI development, and regulate its use so as to maximize its positive impact while minimizing its already evident, as well as still hidden, negative effects. AI research can and should become different: more open, democratic, scientific, and ethical. Here is a proposed list of points to this end:

  • The first word on important AI research issues with far-reaching social impact should belong to elected Parliaments and Governments, rather than to corporations or individual scientists.
  • Every effort should be made to facilitate the exploration of the positive aspects of AI in social and financial progress and to minimize its negative aspects.
  • The positive impact of AI systems can greatly outweigh their negative aspects, if proper regulatory measures are taken. Technophobia is neither justified nor a solution.
  • In my view, the biggest current threat comes from the fact that such AI systems can remotely deceive large numbers of people who have little (or average) education and/or limited capacity for critical investigation. This can be extremely dangerous for democracy and for any form of socio-economic progress.
  • In the near future, we should counter the big threat posed by the use of LLMs and/or CANs in illegal activities (cheating in University exams is a rather benign use within the space of related criminal possibilities).
  • Their impact on labor and markets will be very positive in the medium-to-long run.
  • In view of the above, AI systems should: a) be required by international law to be registered in an ‘AI system register’, and b) notify their users that they are conversing with, or using the results of, an AI system.
  • As AI systems have a huge societal impact, and in order to maximize benefit and socio-economic progress, key advanced AI system technologies should become open.
  • AI-related data should be (at least partially) democratized, again towards maximizing benefit and socio-economic progress.
  • Proper, strong financial compensation schemes must be foreseen for AI technology champions to compensate for any profit loss due to the aforesaid openness, and to ensure strong future investment in AI R&D (e.g., through technology patenting and obligatory licensing schemes).
  • The AI research balance between Academia and Industry should be rethought to maximize research output, while maintaining competitiveness and granting rewards for undertaken R&D risks.
  • Education practices should be revisited at all education levels to maximize the benefit of AI technologies, while creating a new breed of creative and adaptable citizens and (AI) scientists.
  • Proper AI regulatory, supervisory, and funding mechanisms should be created and strengthened to ensure the above.

Several of these points are treated in detail in my recent 4-volume book on ‘AI Science and Society’, particularly in volumes A (rewritten in May 2023 to cover LLMs and Artificial General Intelligence) and C.

Book References:

Artificial Intelligence Science and Society Part A: Introduction to AI Science and Information Technology

Artificial Intelligence Science and Society Part C: AI Science and Society
