Bandai Namco AI Researchers Release A Repository For Motion Datasets That Can Be Used for Motion Style Transfer (MST) Models
Motion datasets gathered by the team have been shared in a Bandai Namco Research repository. The datasets cover a wide variety of content, including dancing, fighting, and daily activities, performed in styles such as busy, exhausted, and cheerful. They can be used as training data for Motion Style Transfer (MST) models.
For games and movies that aim for realistic and expressive character animation, there has long been interest in producing a variety of stylized motions; nevertheless, creating new motions that cover the many possible forms of expression is challenging with current techniques. For this reason, Motion Style Transfer (MST), which aims to transform a motion clip with particular content into a motion in a different style while preserving that content, has gained attention recently. A motion is made up of two parts: content and style. The content serves as the motion's framework, while the style comprises characteristics such as the character's mood and personality.
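The content/style decomposition above can be sketched as data labeling. The snippet below is a minimal illustration of how clips might be organized into (content, style) pairs for MST training; the file names and class are hypothetical, not part of the released repository's API.

```python
# Hedged sketch: organizing motion clips as (content, style) pairs for
# Motion Style Transfer training. The labels mirror the article's examples;
# the file names and this class are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class MotionClip:
    path: str      # BVH file on disk (hypothetical names)
    content: str   # what the motion is -- the motion's "framework"
    style: str     # how it is performed -- mood/personality traits

clips = [
    MotionClip("walk_exhausted.bvh",  content="walk",  style="exhausted"),
    MotionClip("walk_cheerful.bvh",   content="walk",  style="cheerful"),
    MotionClip("dance_cheerful.bvh",  content="dance", style="cheerful"),
]

# An MST model learns to change the style while preserving the content,
# e.g. turning an "exhausted" walk into a "cheerful" walk.
styles_for_walk = sorted({c.style for c in clips if c.content == "walk"})
```

With pairs organized this way, a model can be shown the same content in multiple styles, which is exactly what the diverse style labels in these datasets provide.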
An animation of example motions is shown below.
Currently, this repository offers two datasets:
- The first has 36,673 frames covering 17 types of content, such as daily activities, fighting, and dancing, in 15 styles with diverse expressions.
- The second has 384,931 frames covering ten types of content, primarily locomotion and hand actions, in seven styles, each with a single consistent expression.
Each dataset is based on the motions of three professional actors captured at Bandai Namco's motion capture studio. The team applied clipping, proportion alignment, and noise removal before saving the data in BVH format.
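BVH is a plain-text format with a skeleton hierarchy followed by a `MOTION` section listing the frame count, frame time, and per-frame channel values. As a rough illustration of working with such files, the sketch below reads the frame count and frame time from a standard BVH header; the function name and file path are assumptions, not part of the released repository.

```python
# Minimal sketch: summarize a BVH motion file, assuming the standard
# "MOTION / Frames: N / Frame Time: t" header layout. The function name
# is hypothetical and not from the Bandai Namco repository.

def read_bvh_summary(path):
    """Return (frame_count, frame_time, duration_seconds) for a BVH file."""
    frames = None
    frame_time = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("Frames:"):
                frames = int(line.split(":", 1)[1])
            elif line.startswith("Frame Time:"):
                frame_time = float(line.split(":", 1)[1])
                break  # header values found; per-frame data follows
    duration = frames * frame_time if frames and frame_time else None
    return frames, frame_time, duration
```

For example, a clip stored at 30 fps with 36,673 frames would come out to roughly 20 minutes of motion, which is the kind of sanity check this summary makes easy.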
This article is a summary written by Marktechpost Staff based on the source article and GitHub repository. All credit for this research goes to the researchers on this project.