NOTE: This code is being refactored; see the reviewer branch.
This repository contains code for reproducing and building upon the work described in "Simulation to Reality: Reinforcement Learning for Robotic Assembly of Timber Joints". A video of our results is available here.
ABSTRACT -- We demonstrate the first successful application of Reinforcement Learning to assembly tasks using industrial robots in architectural construction, focusing specifically on the assembly of lap joints in timber structures. We adapt Ape-X DDPG to train policies entirely in simulation, using force/torque and pose observations, and show that these policies can be deployed successfully in reality. We also show that these policies generalize to a moderate range of inaccuracy and variation in the pose and shape of materials in the real world.
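As a rough illustration of the observation space described above, a policy might consume a 6-DoF force/torque reading concatenated with an end-effector pose (position plus orientation quaternion). The sketch below is hypothetical and is not taken from this repository; the actual observation layout used by RLRoboticAssembly may differ.

```python
import numpy as np

def make_observation(force_torque, position, quaternion):
    """Concatenate a 6D force/torque reading, a 3D position, and a
    4D orientation quaternion into a single 13D observation vector.
    Hypothetical sketch only -- not the repository's actual layout."""
    ft = np.asarray(force_torque, dtype=np.float64)   # [Fx, Fy, Fz, Tx, Ty, Tz]
    pos = np.asarray(position, dtype=np.float64)      # [x, y, z]
    quat = np.asarray(quaternion, dtype=np.float64)   # [qx, qy, qz, qw]
    assert ft.shape == (6,) and pos.shape == (3,) and quat.shape == (4,)
    return np.concatenate([ft, pos, quat])

obs = make_observation([0.1, 0.0, -9.8, 0.0, 0.0, 0.0],
                       [0.5, 0.0, 0.3],
                       [0.0, 0.0, 0.0, 1.0])
print(obs.shape)  # (13,)
```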
-
Clone repositories.
$ mkdir rllib
$ cd rllib
$ git clone https://github.com/AutodeskRoboticsLab/rllib
$ mkdir RLRoboticAssembly
$ cd RLRoboticAssembly
$ git clone https://github.com/AutodeskRoboticsLab/RLRoboticAssembly
-
Optionally, set up a virtual environment.
$ python3 -m venv pyenv
$ source pyenv/bin/activate
-
Install requirements.
$ (pyenv/) pip3 install -r requirements.txt
-
Patch rllib.
$ (pyenv/) python3 setup-rllib-dev.py
$ (pyenv/) python3 RLRoboticAssembly/setup/setup-rllib-local.py
-
Inspect the simulation.
$ python3 viewer.py --taskdir=example
-
Provide a demonstration.
$ python3 demonstrate.py --taskdir=example
-
Inspect hyperparameters.
$ cat tasks/example/hyperparameters/example.yaml
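The example file above is the authoritative reference for this repository. Purely as an illustration, an Ape-X DDPG configuration in RLlib-style YAML might contain fields like the following; the task name and the specific keys and values below are assumptions, not copied from this repository.

```yaml
# Hypothetical sketch only -- consult tasks/example/hyperparameters/example.yaml
# for the real fields used by this repository.
apex-ddpg-example:
    run: APEX_DDPG
    config:
        num_workers: 4
        learning_starts: 1000
        buffer_size: 100000
        train_batch_size: 512
        actor_hiddens: [256, 256]
        critic_hiddens: [256, 256]
```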
-
Train a policy.
$ python3 train.py --taskdir=example
-
Roll out a policy.
$ python3 rollout.py --taskdir=example --checkpoint=120
- All code is written in Python 3.6.
- All code was tested on macOS, Windows, and Ubuntu.
- For licensing information see LICENSE.
- For a list of contributors see AUTHORS.
- For a list of dependencies see requirements.
- For notes on bugs and quirks see buglist.
- For notes on URDFs and STLs see urdfs.
- For notes on input devices see inputs.