Researchers from the University of Washington and NVIDIA Propose Humanoid Agents: An Artificial Intelligence Platform for Human-like Simulations of Generative Agents
Human-like generative agents are commonly used in chatbots and virtual assistants to provide natural, engaging user interactions. They can understand and respond to user queries, hold conversations, and perform tasks such as answering questions and making recommendations. These agents are typically built with natural language processing (NLP) techniques and machine learning models, such as GPT-3, to produce coherent, contextually relevant responses. They can also create interactive stories, dialogues, and characters in video games or virtual worlds, enhancing the gaming experience.
Human-like generative agents can assist writers and creatives in brainstorming ideas, generating story plots, or even composing poetry or music. However, this process differs from how humans actually think: humans constantly adapt their plans in response to changes in their physical environment. Researchers at the University of Washington and the University of Hong Kong propose Humanoid Agents, a platform that guides generative agents to behave more like humans by introducing several such elements.
Inspired by dual-process theories in human psychology, the researchers propose a two-system mechanism: System 1 handles intuitive, effortless thinking, while System 2 handles deliberate, logical thinking. To influence the behavior of these agents, they introduce aspects such as basic needs, emotions, and the closeness of their social relationships with other agents.
The agents need to fulfill their basic needs, in part by interacting with others; when they fail to do so, they experience negative states such as loneliness, sickness, and tiredness.
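A minimal sketch of how such basic-need tracking might work, loosely following the article's description. The class, attribute names, the 0–10 need scale, and the decay rule are all our own illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class HumanoidAgent:
    # Hypothetical agent state; names and scales are assumptions.
    name: str
    # Needs scored 0-10; low values trigger negative states.
    needs: dict = field(default_factory=lambda: {
        "social": 10, "health": 10, "energy": 10,
    })

    def decay(self):
        """Each simulation step, unmet needs slowly deplete."""
        for k in self.needs:
            self.needs[k] = max(0, self.needs[k] - 1)

    def negative_states(self):
        """Map depleted needs to the negative feedback the article mentions."""
        mapping = {"social": "lonely", "health": "sick", "energy": "tired"}
        return [mapping[k] for k, v in self.needs.items() if v <= 3]

agent = HumanoidAgent("alice")
for _ in range(8):  # eight steps with no need-fulfilling activity
    agent.decay()
print(agent.negative_states())  # all three needs have dropped to 2
```

In a full simulation, activities such as eating or socializing would replenish the corresponding need, and the planner would pick activities to keep these values up.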
The social brain hypothesis proposes that a large part of our cognitive ability evolved to track the quality of social relationships, and people adjust how they interact with others accordingly. To mimic this behavior, the researchers enable Humanoid Agents to adapt their conversations based on how close they are to one another. They visualize the agents in a Unity WebGL game interface and present the statuses of the simulated agents over time in an interactive analytics dashboard.
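One way such closeness-conditioned dialogue could be sketched is below; the numeric scale, thresholds, and style labels are hypothetical assumptions for illustration, not the paper's actual values:

```python
# Hypothetical: pick a dialogue style from a relationship-closeness score.
# The 0-15 scale and cutoffs are assumptions, not taken from the paper.
def dialogue_style(closeness: int) -> str:
    if closeness >= 10:
        return "warm and informal"
    if closeness >= 5:
        return "friendly but polite"
    return "reserved and formal"

print(dialogue_style(12))  # close friends speak warmly
print(dialogue_style(2))   # strangers stay reserved
```

The chosen style string could then be injected into the language-model prompt that generates each agent's utterance.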
They created a sandbox game environment, built with Unity and deployed via WebGL, to visualize Humanoid Agents in their world. Users can select one of three worlds to see each agent's status and location at every time step. The game interface ingests JSON-structured files from the simulated worlds and turns them into animations. A dashboard built with Plotly Dash visualizes the status of the various Humanoid Agents over time.
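A per-timestep status record of the kind such a front end might ingest could look like the following; every field name and value here is an illustrative assumption, not the project's actual schema:

```python
import json

# Hypothetical per-timestep agent record; field names are assumptions.
record = {
    "agent": "alice",
    "step": 42,
    "location": "kitchen",
    "activity": "making breakfast",
    "emotion": "content",
    "needs": {"social": 7, "health": 9, "energy": 5},
}

# Serialize one record per line, as a game client or dashboard might consume.
line = json.dumps(record)
print(line)
```

Both the Unity WebGL interface and a Plotly Dash dashboard could read a stream of such records: the former to animate agent positions, the latter to plot need and emotion values over time.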
Their system currently supports dialogues between only two agents, with multi-party conversations left as future work. Because the simulation does not perfectly reflect human behavior in the real world, users must be informed that they are interacting with a simulation. Despite these agents' capabilities, it is essential to consider ethical and privacy concerns, such as the potential for spreading misinformation, biases in the training data, and the need for responsible usage and monitoring.
Check out the Paper and GitHub. All credit for this research goes to the researchers on this project.
Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive advances in technology. He is passionate about understanding nature fundamentally with the help of tools like mathematical models, ML models, and AI.