
Robot Learning Challenge

Automation meets Artificial Intelligence

Researchers at Siemens Corporate Technology in Berkeley, CA, have developed a set of gears for testing different robot learning approaches to assembly. Researchers around the world can now use this test case as a benchmark for machine-learning-based robot assembly.

Autonomous Manufacturing

Innovation Challenge

Robot Learning and, more generally, Autonomous Manufacturing form an exciting research field at the intersection of Machine Learning and Automation. Combining "traditional" control techniques with Artificial Intelligence holds the promise of allowing robots to learn new behaviors through experience. This has motivated many labs around the world to focus their attention on this area of research.

This raises the question: how can we benchmark different machine learning algorithms and apply them to the challenges of industrial automation?


Researchers at Siemens Corporate Technology in Berkeley, CA, have developed a set of gears to test different robot learning approaches to assembly. The assembly of these gears requires high precision and the ability to learn changing complex dynamics.


If you want to benchmark your robot learning algorithms and apply them to a challenging problem, 3D print the gears and share your results with us! 


How fast can your system learn? How much training data is required? What would you change in the design to make it even more challenging?  These are all important questions that we want to open to the research community. 


You can access the CAD files of the gears here.

Robot Assembly

Robot Learning covers the methodology, theory, and art of enabling a robot, or any other automation system, to learn new skills and adapt to a changing environment. Traditional control and Artificial Intelligence approaches are combined to increase automation flexibility in tasks such as locomotion, grasping, or assembly.


Robotic assembly typically involves object manipulation tasks with substantial contact and friction, such as inserting or removing tight-fitting parts or twisting a bolt into place. Designing robot controllers for such tasks is difficult because contact dynamics are hard to model and estimate accurately. Consequently, nearly all real-world robotic assembly applications are implemented in repetitive scenarios, where the substantial engineering effort required can pay off. In addition, implementations often rely on clever special-purpose fixtures to guide the assembly and on part feeders to ensure repeatable initial conditions.


Prominent approaches to autonomous manipulation are based on either motion planning or reinforcement learning (RL). Recently, many promising results for autonomous control applications have emerged in the area of Deep Reinforcement Learning (DRL), a synergy between RL and Deep Learning (DL). DRL algorithms have already been applied to problems ranging from video games to robotics.
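To make the RL framing concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy one-dimensional "peg insertion" task. The environment, its reward shaping, and all hyperparameters are hypothetical stand-ins chosen for illustration; the actual gear-assembly benchmark involves continuous states and complex contact dynamics that this sketch does not capture.

```python
# Illustrative sketch only: tabular Q-learning on a toy 1-D insertion task.
# The environment and hyperparameters are hypothetical, not part of the benchmark.
import random

N_POS = 10          # discrete peg depths; position N_POS - 1 = fully inserted
ACTIONS = (-1, +1)  # retract / push

def step(pos, action):
    """Toy dynamics: move one cell; reward only on full insertion."""
    nxt = max(0, min(N_POS - 1, pos + action))
    done = (nxt == N_POS - 1)
    return nxt, (1.0 if done else -0.01), done

def train(episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning; returns the learned Q-table."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_POS)]  # q[state][action index]
    for _ in range(episodes):
        pos, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(2)                        # explore
            else:
                a = max(range(2), key=lambda i: q[pos][i])  # exploit
            nxt, r, done = step(pos, ACTIONS[a])
            # Standard Q-learning update toward the bootstrapped target
            q[pos][a] += alpha * (r + gamma * max(q[nxt]) - q[pos][a])
            pos = nxt
    return q

if __name__ == "__main__":
    q = train()
    # Greedy policy per non-terminal state (index 1 = push)
    print([max(range(2), key=lambda i: q[s][i]) for s in range(N_POS - 1)])
```

The same agent/environment loop structure carries over to DRL, where the Q-table is replaced by a neural network and the toy dynamics by the real contact-rich assembly task.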


The question, however, is how Siemens researchers can maximize the robustness and precision of DRL algorithms so that they can be applied with the highest confidence to industrial applications.

See related work by our collaborators:

https://arxiv.org/abs/1609.09001

https://arxiv.org/abs/1501.05611

https://arxiv.org/abs/1504.00702