Evaluating the Potential for Consciousness in AI: A Scientific Exploration of Indicator Properties Based on Neuroscientific Theories

The possibility of conscious AI systems is a topic of intense current interest. Leading researchers are drawing inspiration from brain processes linked to human consciousness to advance AI capabilities, and progress in AI has been remarkably swift. Meanwhile, AI systems that convincingly mimic human conversation are likely to lead many users to perceive them as conscious. The researchers behind this study contend that the best way to evaluate consciousness in AI is by appealing to neuroscientific theories of consciousness. They discuss prominent theories of this kind and examine their implications for AI.

They consider the following to be the main contributions of their report:

1. Demonstrating that the evaluation of consciousness in AI is scientifically tractable, since consciousness can be investigated scientifically and the results of that research apply to AI.

2. Providing preliminary evidence that many indicator properties can be implemented in AI systems using current techniques, even though no existing system appears to be a strong candidate for consciousness.

3. Outlining a rubric for evaluating consciousness in AI in the form of a list of indicator properties derived from scientific theories. The rubric is tentative: they expect the list of indicator properties to change as research progresses.

They rely on three fundamental principles in studying consciousness in AI. First, they adopt computational functionalism as a working hypothesis: the view that performing the right kind of computations is both necessary and sufficient for consciousness. Although controversial, this claim is a mainstream position in contemporary philosophy of mind. They adopt it for pragmatic reasons: unlike rival views, it implies that consciousness in AI is possible in principle, and that studying the inner workings of AI systems is relevant to determining whether they are likely to be conscious. It is therefore worthwhile to consider what computational functionalism entails for AI consciousness. Second, they contend that neuroscientific theories of consciousness enjoy substantial empirical support and can be used to evaluate consciousness in AI.

These theories seek to identify the functions that are both necessary and sufficient for consciousness in humans, and computational functionalism implies that analogous functions would suffice for consciousness in AI. Third, they contend that the best strategy for examining consciousness in AI is a theory-heavy one. This entails determining whether AI systems perform functions similar to those that scientific theories associate with consciousness, and then assigning a credence that a given system is conscious based on the following factors (a toy numerical sketch follows the list):

  1. The similarity of the functions.
  2. The strength of the evidence for the theories.
  3. One’s credence in computational functionalism.
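
To illustrate how these three factors might be combined, here is a toy calculation in Python. The multiplicative rule and the numbers are purely our assumptions for illustration; the paper itself prescribes no such formula.

```python
# Toy illustration only: one crude way to combine the three factors above into
# an overall credence that a system is conscious. The multiplicative rule and
# the numbers are our assumptions; the paper prescribes no such formula.

similarity = 0.7        # how closely the system's functions match the theory's
theory_evidence = 0.6   # strength of the empirical evidence for the theory
functionalism = 0.8     # one's credence in computational functionalism

credence = similarity * theory_evidence * functionalism
print(f"credence that the system is conscious: {credence:.2f}")  # prints 0.34
```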

The primary alternative to this strategy is to test for consciousness behaviorally. However, that approach is less reliable, since AI systems can be trained to mimic human behavior while operating internally in quite different ways.

Because several theories remain live contenders in the science of consciousness, they do not endorse any particular one. Instead, they compile a list of indicators from a survey of consciousness theories. Each indicator property is claimed by at least one theory to be necessary for consciousness, and some subsets of indicators may be jointly sufficient. On their view, the more indicator properties an AI system has, the more likely it is to be conscious. To determine whether a current or planned AI system is a serious candidate for consciousness, one should evaluate whether it has, or would have, these properties. They address several scientific theories, including computational higher-order theories, global workspace theories, and recurrent processing theories. They do not consider integrated information theory, since it is incompatible with computational functionalism.
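
To make the rubric idea concrete, the sketch below represents indicator properties as a simple checklist and counts how many a given system satisfies. The indicator names are paraphrased from the kinds of theories the researchers survey; the data structure and the tallying are our illustrative assumptions, not the paper's method.

```python
# Illustrative sketch, not the paper's method: a rubric as a checklist of
# indicator properties, each tied to the theory it is derived from.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    theory: str        # the theory from which the indicator is derived
    satisfied: bool = False

def assess(system_name: str, rubric: list) -> None:
    """Report how many indicator properties a system exhibits. Following the
    paper's reasoning, more satisfied indicators make a system a stronger,
    though never a certain, candidate for consciousness."""
    met = [ind for ind in rubric if ind.satisfied]
    print(f"{system_name}: {len(met)}/{len(rubric)} indicator properties satisfied")
    for ind in met:
        print(f"  - {ind.name} ({ind.theory})")

# Hypothetical assessment of a hypothetical system.
rubric = [
    Indicator("Recurrent processing in perceptual modules",
              "recurrent processing theory", satisfied=True),
    Indicator("Limited-capacity workspace with global broadcast",
              "global workspace theory"),
    Indicator("Metacognitive monitoring of internal states",
              "computational higher-order theory"),
]
assess("ExampleSystem", rubric)
```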

They also consider the idea that agency and embodiment are indicators, though these are best understood in terms of the computational features they imply. They discuss the Perceiver architecture and Transformer-based large language models, which they assess in light of global workspace theory. They also examine three further systems: a virtual rodent, trained to perform tasks by controlling a simulated rodent body; PaLM-E, described as an “embodied multimodal language model”; and DeepMind’s Adaptive Agent, a reinforcement learning agent operating in a 3D virtual environment. These three systems serve as case studies for the indicator properties relating to agency and embodiment.
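
To give a sense of the functional structure that global workspace theory describes, here is a deliberately toy sketch: several specialist modules compete for a limited-capacity workspace, and the winner’s contents are broadcast back to all modules. The module names, salience scores, and integration rule are all our illustrative assumptions; this does not depict the Perceiver, PaLM-E, or any other real system.

```python
# Toy sketch of the global-workspace functional structure: competition for a
# limited-capacity workspace, then broadcast of its contents to all modules.

import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # dimensionality of each module's internal state

# Specialist modules with their own states, plus assumed salience scores that
# decide the competition for the workspace.
modules = {name: rng.normal(size=DIM) for name in ("vision", "audio", "memory")}
salience = {"vision": 0.9, "audio": 0.4, "memory": 0.2}

# Competition: the limited-capacity workspace admits only the most salient state.
winner = max(salience, key=salience.get)
workspace = modules[winner].copy()

# Broadcast: every module integrates the workspace contents into its own state.
alpha = 0.5  # assumed integration rate
for name in modules:
    modules[name] = (1 - alpha) * modules[name] + alpha * workspace

print(f"workspace contents were broadcast from: {winner}")
```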

Check out the pre-print paper. All credit for this research goes to the researchers on this project.



Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is in image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.



