AI-Powered Model EDGE Choreographs Human Dance Animation to Match Any Piece of Music
Stanford University researchers have developed a generative AI model that can choreograph human dance animation to match any piece of music. It's called Editable Dance GEneration (EDGE).
"EDGE shows that AI-enabled characters can bring a level of musicality and artistry to dance animation that was not possible before," says Karen Liu, a professor of computer science who led a team that included two student collaborators, Jonathan Tseng and Rodrigo Castellon, in her lab.
The researchers believe the tool will help choreographers design sequences and communicate their ideas to live dancers by visualizing 3D dance sequences. Key to the program's advanced capabilities is editability. Liu imagines that EDGE could be used to create computer-animated dance sequences by letting animators intuitively edit any part of a dance motion.
For example, an animator can design specific leg movements for a character, and EDGE will "auto-complete" the rest of the body from that positioning in a way that is realistic, seamless, and physically plausible, meaning a human could actually perform the moves. Above all, the moves are consistent with the animator's choice of music.
Like other generative models for images and text, such as ChatGPT and DALL-E, EDGE represents a new tool for choreographic idea generation and movement planning. This editability means that dance artists and choreographers can iteratively refine their sequences move by move and position by position, adding specific poses at precise moments; EDGE then incorporates the additional details into the sequence automatically. In the near future, EDGE will allow users to input their own music and even demonstrate the moves themselves in front of a camera.
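For readers curious how that kind of constraint-driven "auto-complete" can work, here is a minimal sketch. EDGE is a diffusion-based generative model, and masked in-painting during sampling is a standard way such models honor user-fixed content; everything below (the toy denoiser, tensor shapes, noise schedule, and function names like edit_sample) is an illustrative placeholder, not the authors' code.

```python
import numpy as np

# Sketch of mask-guided diffusion sampling, the general technique behind
# "auto-completing" a dance from user-fixed joints. The denoiser, shapes,
# and linear noise schedule are hypothetical stand-ins for illustration.

T_STEPS = 50                   # number of denoising steps (placeholder)
FRAMES, JOINT_DIM = 120, 72    # hypothetical sequence length and pose size

def toy_denoiser(x_t, t, music_features):
    """Stand-in for a learned, music-conditioned denoiser (returns an x0 estimate)."""
    # A real model would be a trained network conditioned on the music;
    # here we simply damp the noise so the loop runs end to end.
    return x_t * (1.0 - t / T_STEPS)

def edit_sample(constraints, mask, music_features, rng):
    """Generate a full-body sequence while honoring user-specified joints.

    constraints: (FRAMES, JOINT_DIM) array holding the animator's chosen values.
    mask:        same shape; 1 where the animator fixed a joint, 0 elsewhere.
    """
    x = rng.standard_normal((FRAMES, JOINT_DIM))   # start from pure noise
    for t in reversed(range(1, T_STEPS + 1)):
        x0_hat = toy_denoiser(x, t, music_features)
        # Re-noise the estimate down to step t-1 (toy linear schedule).
        x = x0_hat + (t - 1) / T_STEPS * rng.standard_normal(x.shape)
        # Key editing step: clamp the constrained joints back to the user's
        # values (noised to the same level), so the sampler in-paints only
        # the free joints. At t = 1 the constraints are honored exactly.
        x_known = constraints + (t - 1) / T_STEPS * rng.standard_normal(x.shape)
        x = mask * x_known + (1 - mask) * x
    return x

rng = np.random.default_rng(0)
mask = np.zeros((FRAMES, JOINT_DIM))
mask[:, :12] = 1.0                                    # e.g., fix the leg joints
legs = rng.standard_normal((FRAMES, JOINT_DIM)) * mask  # animator's leg motion
full_body = edit_sample(legs, mask, music_features=None, rng=rng)
print(full_body.shape)  # (120, 72): legs kept, the rest "auto-completed"
```

The same pattern covers move-by-move refinement: fixing any subset of joints or frames through the mask lets the sampler fill in everything else around the animator's choices.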
Credit: Stanford University.
"We think it's a really a fun and engaging way for everyone, not just dancers, to express themselves through movement and tap into their own creativity," Liu says.
"With its ability to generate captivating dances in response to any music, we think EDGE represents a major milestone in the intersection of technology and movement," adds Tseng. "It will unlock new possibilities for creative expression and physical engagement," says Castellon.
The team has published a paper and will formally introduce EDGE at the Computer Vision and Pattern Recognition conference in Vancouver, British Columbia, in June. There is also a website, the "EDGE Playground," where anyone interested can pick a tune and watch as EDGE creates a new dance sequence from scratch.
"Everyone is invited to play with it. It's fun!" Liu says.