Don't be Shy! This AI Technology Can Help You Dress up for Work
According to computer scientists from the Georgia Institute of Technology and Google Brain, the task of dressing up is quite complex and involves several different physical interactions between the character and his or her clothing.

Do you find getting dressed in the morning a mundane task? Take heart: a novel computational method driven by machine learning techniques will assist you in the multi-step process of putting on clothes.

According to computer scientists from the Georgia Institute of Technology and Google Brain — Google's artificial intelligence research arm — the task of dressing up is quite complex and involves several different physical interactions between the character and his or her clothing, primarily guided by the person's sense of touch.

The team leveraged simulation to teach a neural network to accomplish the complex task of dressing by breaking it down into smaller sub-tasks with well-defined goals.

The simulation allowed the character to attempt the task thousands of times, providing reward or penalty signals when the character made beneficial or detrimental changes to its policy.

The researchers' method then updates the neural network one step at a time to make the discovered positive changes more likely to occur in the future.
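To make that trial-and-reward loop concrete, here is a minimal, hypothetical sketch in Python. The toy environment, the linear policy, and the reward below are illustrative stand-ins assumed for this example, not the researchers' actual cloth simulator or network; it only shows the general shape of a reinforcement-learning update in which rewarded changes become more likely over time.

```python
# Minimal REINFORCE-style sketch of the trial-and-reward loop described above.
# ToyDressingEnv, the state/action sizes, and the reward are hypothetical
# stand-ins, not the authors' simulator or model.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 8, 2   # toy sizes, for illustration only


class ToyDressingEnv:
    """Hypothetical stand-in for one dressing sub-task with a well-defined goal."""

    def reset(self):
        self.state = rng.normal(size=STATE_DIM)
        return self.state

    def step(self, action):
        # Reward is higher when the action points toward a fixed "goal" direction;
        # a real cloth simulator would instead measure dressing progress and contact.
        goal = np.ones(ACTION_DIM) / np.sqrt(ACTION_DIM)
        reward = float(action @ goal)
        self.state = rng.normal(size=STATE_DIM)
        return self.state, reward


def policy(weights, state, noise_std=0.1):
    """Linear Gaussian policy: mean action = weights @ state, plus exploration noise."""
    mean = weights @ state
    action = mean + noise_std * rng.normal(size=ACTION_DIM)
    return action, mean


env = ToyDressingEnv()
weights = np.zeros((ACTION_DIM, STATE_DIM))
learning_rate, noise_std = 0.01, 0.1

for episode in range(2000):                      # "thousands of tries"
    state = env.reset()
    action, mean = policy(weights, state, noise_std)
    _, reward = env.step(action)
    # One small update per trial: nudge the policy so rewarded (beneficial)
    # deviations become more likely and penalised ones less likely.
    grad_log_prob = np.outer(action - mean, state) / (noise_std ** 2)
    weights += learning_rate * reward * grad_log_prob
```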

"We've opened the door to a new way of animating multi-step interaction tasks in complex environments using reinforcement learning," said lead author Alexander Clegg, a doctoral student at the Georgia Institute of Technology.

"There is still plenty of work to be done continuing down this path, allowing simulation to provide experience and practice for task training in a virtual world."

In the study, the researchers demonstrated their approach on several dressing tasks: putting on a t-shirt, throwing on a jacket and robot-assisted dressing of a sleeve.

With the trained neural network, they were able to achieve complex re-enactments of the various ways an animated character puts on clothes. A key ingredient is incorporating the sense of touch into the framework to overcome the challenges of cloth simulation.

The researchers found that careful selection of the cloth observations and the reward functions in their trained network is crucial to the framework's success. As a result, this novel approach not only enables single dressing sequences but also yields a character controller that can successfully dress under various conditions.
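The article does not spell out how those observations and rewards are composed, but a hypothetical sketch of how touch readings and cloth features might be folded into an observation vector and a shaped reward could look like the following; every function name, feature, and weight here is an assumption made for illustration, not the paper's formulation.

```python
# Hypothetical sketch: combining body pose, haptic (touch) readings, and tracked
# cloth features into an observation, plus a shaped dressing reward.
import numpy as np


def build_observation(joint_angles, haptic_contacts, garment_feature_positions):
    """Concatenate body pose, per-sensor contact forces, and tracked cloth points."""
    return np.concatenate([
        np.asarray(joint_angles, dtype=float),
        np.asarray(haptic_contacts, dtype=float),                      # e.g. force per arm sensor
        np.asarray(garment_feature_positions, dtype=float).ravel(),    # e.g. sleeve-opening vertices
    ])


def dressing_reward(arm_progress, contact_force, deformation_penalty,
                    w_progress=1.0, w_force=0.1, w_deform=0.05):
    """Reward progress through the sleeve while discouraging excessive force and cloth distortion."""
    return (w_progress * arm_progress
            - w_force * contact_force
            - w_deform * deformation_penalty)


# Example usage with made-up numbers:
obs = build_observation(
    joint_angles=[0.1, -0.4, 0.8],
    haptic_contacts=[0.0, 0.3, 0.0],
    garment_feature_positions=[[0.2, 0.1, 0.5], [0.25, 0.12, 0.48]],
)
r = dressing_reward(arm_progress=0.6, contact_force=0.3, deformation_penalty=0.1)
```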
