While traditional automation is well suited to repetitive tasks, it reaches its limits when robots have to act in a Human-Robot Collaboration (HRC) environment. Challenges such as high product and variant diversity, small lot sizes and, in particular, the need for short setup times call for new ways of interacting with robots.
The focus of this project is to learn activities by observing the worker. Building on the results of the previous project, methods will be investigated for aligning visual information about objects with robotic motion.
To this end, powerful 2D image-processing algorithms based on deep learning are combined with current developments in the field of reinforcement learning.
The main research goals of the project are:
* Visual understanding of a demonstrated process through deep neural networks supported by instrumented tools.
* Generalization of process knowledge through deep reinforcement learning.
* Synthesis of movements for new parts using movement primitives (illustrated by the sketch following this list).
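To give a concrete idea of the movement-primitive concept mentioned above, the following is a minimal sketch of a one-dimensional Dynamic Movement Primitive (DMP), one common formulation of movement primitives. The class name, parameter values and the demonstrated trajectory are illustrative assumptions and not taken from the project; the sketch only shows how a single demonstrated motion can be re-synthesised towards a new goal.

```python
import numpy as np

class DMP1D:
    """Minimal 1-D Dynamic Movement Primitive (illustrative sketch only)."""

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
        self.n_basis = n_basis
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Basis-function centres in phase space and heuristic widths.
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = n_basis / self.c
        self.w = np.zeros(n_basis)
        self.y0, self.g = 0.0, 1.0

    def _psi(self, x):
        # Gaussian basis activations; returns shape (T, n_basis).
        return np.exp(-self.h * (np.atleast_1d(x)[:, None] - self.c) ** 2)

    def fit(self, y_demo, dt):
        """Learn the forcing-term weights from one demonstrated trajectory."""
        T = len(y_demo)
        tau = T * dt
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / tau)  # canonical phase
        f_target = tau**2 * ydd - self.alpha_z * (self.beta_z * (self.g - y_demo) - tau * yd)
        psi = self._psi(x)
        s = x * (self.g - self.y0)
        # Locally weighted regression, one weight per basis function.
        self.w = (s @ (psi * f_target[:, None])) / (s**2 @ psi + 1e-10)

    def rollout(self, g, dt, tau):
        """Reproduce the learned motion towards a (possibly new) goal g."""
        y, v, x = self.y0, 0.0, 1.0
        traj = []
        for _ in range(int(tau / dt)):
            psi = self._psi(x)[0]
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)
            vd = (self.alpha_z * (self.beta_z * (g - y) - v) + f) / tau
            v += vd * dt
            y += (v / tau) * dt
            x += (-self.alpha_x * x / tau) * dt
            traj.append(y)
        return np.array(traj)


if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0.0, 1.0, dt)
    y_demo = 10 * t**3 - 15 * t**4 + 6 * t**5   # hypothetical demonstrated motion
    dmp = DMP1D()
    dmp.fit(y_demo, dt)
    new_traj = dmp.rollout(g=0.5, dt=dt, tau=1.0)  # re-synthesise towards a new goal
    print(new_traj[0], new_traj[-1])
```

The rollout towards a changed goal illustrates the kind of generalisation targeted in the project goals above: the shape of the demonstrated motion is retained while start and goal can be adapted, for example to a new part.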
Project name:
LERN4MRKII: Extended modelling, learning and abstraction of processes for human-robot cooperation.
Funding:
AIT Strategy Research Programme
Duration:
01.01.2019 – 31.12.2019