Researchers from South Korea Propose a Machine Learning Model that Adjusts Video Game Difficulty based on Player Emotions

Dynamic difficulty adjustment (DDA) is a technique for automatically altering a game’s features, behaviors, and scenarios in real time based on the player’s proficiency, so that the player gets neither bored when the game is too easy nor frustrated when it is too hard. DDA aims to keep the player engaged and appropriately challenged throughout the game. In classic games, difficulty rises linearly or in fixed steps over the course of play; parameters such as enemy frequency, starting levels, or spawn rates can be set only once, at the start, by selecting a difficulty level. This can make for an unpleasant experience, as players are forced to follow a predetermined learning curve. DDA addresses this issue by tailoring the difficulty curve to each individual player.

In video games, difficulty is hard to balance: some players want an easy experience, while others prefer a stiff challenge. Most developers address this with dynamic difficulty adjustment (DDA), which changes a game’s difficulty in real time in response to player performance. For instance, if a player’s performance exceeds the developer’s expectations for a given difficulty level, the game’s DDA agent may automatically raise the difficulty to keep the player challenged. This tactic is useful but limited, because it considers the player’s performance rather than how much enjoyment they are actually having.
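To make the conventional approach concrete, here is a minimal sketch (an illustration, not code from the paper) of performance-based DDA: the game tracks the player’s recent win rate and nudges a difficulty knob back toward a target band. The class name, thresholds, window size, and 1–10 difficulty scale are all assumptions for the example.

```python
from collections import deque

# Illustrative performance-based DDA: keep the player's recent win rate
# inside a target band by stepping a 1-10 difficulty knob up or down.
class PerformanceDDA:
    def __init__(self, target_low=0.4, target_high=0.6, window=20):
        self.results = deque(maxlen=window)  # 1 = player won, 0 = player lost
        self.target_low = target_low
        self.target_high = target_high
        self.difficulty = 5                  # assumed 1-10 scale

    def record_round(self, player_won: bool) -> int:
        self.results.append(1 if player_won else 0)
        if len(self.results) == self.results.maxlen:
            win_rate = sum(self.results) / len(self.results)
            if win_rate > self.target_high:    # player dominating: go harder
                self.difficulty = min(10, self.difficulty + 1)
            elif win_rate < self.target_low:   # player struggling: go easier
                self.difficulty = max(1, self.difficulty - 1)
        return self.difficulty
```

Note that nothing in this loop asks whether the player is actually enjoying the game, which is exactly the limitation the study targets.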

A research team recently modified the DDA approach in a study published in Expert Systems With Applications. They created DDA agents that, instead of tracking the player’s performance, adjust the game’s difficulty to maximize one of four affective states associated with the player’s satisfaction:

  •  Challenge – how challenged the player feels.
  •  Competence – the player’s sense of their capacity to achieve in-game objectives.
  •  Flow – the feeling of absorbed, effortless play within the rules of the game.
  •  Valence – positive and negative game-related emotions; in this study, the valence factor is the sum of the positive-affect score and the reverse-scored negative-affect score, as sketched below.
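As a small illustration of the valence factor, here is a sketch of that computation, assuming affect items are scored on a 1-to-5 scale (the scale itself is an assumption for the example):

```python
# Valence = positive-affect score + reverse-scored negative-affect score.
# The 1-5 response scale below is an illustrative assumption.
def valence(positive_affect: float, negative_affect: float,
            scale_min: float = 1.0, scale_max: float = 5.0) -> float:
    reversed_negative = (scale_max + scale_min) - negative_affect
    return positive_affect + reversed_negative

# Example: strong positive affect (4.2) and mild negative affect (1.5)
# yield a high valence of 4.2 + 4.5 = 8.7.
print(valence(4.2, 1.5))
```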

The DDA agents were trained with machine learning on data from real-world gamers who played a fighting game against different artificial intelligences (AIs) and then provided feedback on their experience.

Each DDA agent used real-world and simulated play data to adjust the opposing AI’s fighting style in a way that maximized a particular feeling, or “affective state,” using a process called Monte-Carlo tree search.
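The sketch below shows, in broad strokes, how such an agent might work: a standard Monte-Carlo tree search over the opponent AI’s possible actions, except that rollouts are scored by a learned player model’s predicted affective state (e.g., flow) instead of a win/loss outcome. The `game` interface and `player_model.predict_affect` are hypothetical placeholders standing in for the paper’s trained player-state models, not its actual implementation.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state        # simulated game state
        self.parent = parent
        self.action = action      # opponent-AI action that led here
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

def ucb1(node, c=1.4):
    # Upper Confidence Bound: balance exploiting good actions and exploring.
    if node.visits == 0:
        return float("inf")
    return (node.total_reward / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def affect_mcts(root_state, game, player_model, n_iterations=500):
    root = Node(root_state)
    for _ in range(n_iterations):
        # 1. Selection: descend the tree by UCB1 until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb1)
        # 2. Expansion: add a child per legal opponent action.
        if node.visits > 0 and not game.is_terminal(node.state):
            for action in game.legal_actions(node.state):
                node.children.append(
                    Node(game.step(node.state, action), node, action))
            node = random.choice(node.children)
        # 3. Simulation: random playout, then score it with the player model's
        #    predicted affective state -- the key change from standard MCTS.
        state = node.state
        while not game.is_terminal(state):
            state = game.step(state, random.choice(game.legal_actions(state)))
        reward = player_model.predict_affect(state)  # assumed in [0, 1]
        # 4. Backpropagation: push the affect score up to the root.
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    # Play the most-visited action at the root.
    return max(root.children, key=lambda n: n.visits).action
```

The only departure from textbook MCTS here is the reward: by swapping the win/loss signal for a predicted affective state, the same search machinery can yield a different agent for each of the four states listed above.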

Through an experiment involving 20 volunteers, the team established that the proposed DDA agents could produce opponent AIs that improved players’ overall experience regardless of their individual preferences. This is the first time affective states have been directly incorporated into DDA agents, an approach that may prove advantageous for commercial games.

Large amounts of player data are already available to commercial game companies. Using the proposed method, these companies could model their players and address various balancing-related problems. Notably, the strategy may also apply to other fields that can be “gamified,” such as health care, physical fitness, and education.

This article is written as a research summary by Marktechpost staff based on the research paper 'Diversifying dynamic difficulty adjustment agent by integrating player state models into Monte-Carlo tree search'. All credit for this research goes to the researchers on this project. Check out the paper and reference article.
