Over the past few decades, computer scientists have been trying to train robots to tackle a variety of tasks, including household chores and manufacturing processes. One of the most renowned strategies used to train robots on manual tasks is imitation learning.
As its name suggests, imitation learning entails teaching a robot how to do something using human demonstrations. While this training strategy has achieved promising results in some studies, it typically requires large, annotated datasets containing hundreds of videos of humans completing a given task.
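The core idea of learning from demonstrations can be illustrated with a toy sketch. The data, dimensions, and policy below are illustrative assumptions for this article, not details from the VINN paper: demonstrations are recorded as (observation, action) pairs, and a simple policy acts by copying the action from the most similar demonstrated state.

```python
import numpy as np

# Hypothetical toy demonstration data: (observation, action) pairs recorded
# while a human performs a task. Dimensions and values are illustrative
# assumptions, not from the paper discussed in this article.
demo_observations = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
demo_actions = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

def imitate(observation):
    """1-nearest-neighbor imitation policy: return the action taken in the
    most similar demonstrated state (Euclidean distance over observations)."""
    dists = np.linalg.norm(demo_observations - observation, axis=1)
    return demo_actions[np.argmin(dists)]

# A new observation near the first demo state copies that state's action.
print(imitate(np.array([0.1, -0.1])))  # -> [1. 0.]
```

Even this minimal sketch hints at the data problem the article raises: the policy is only as good as the coverage of the demonstration set, which is why standard imitation learning tends to demand many annotated examples.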
Researchers at New York University have recently developed VINN, an alternative imitation learning framework that does not necessarily require large training datasets. This new approach, presented in a paper pre-published on arXiv, works by decoupling ...
Copyright of this story solely belongs to phys.org.