From 1b1ffa9b048e163a956605f45be945d55e4fa762 Mon Sep 17 00:00:00 2001
From: Ztira <aritz.martinez@tecnalia.com>
Date: Wed, 27 Jan 2021 10:02:00 +0100
Subject: [PATCH] README updated

---
 README.md | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index f0fee49..50444e8 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,3 @@
-# Citing A-MFEA-RL
-> Aritz D. Martinez, Javier Del Ser, Eneko Osaba and Francisco Herrera, Adaptive Multi-factorial Evolutionary Optimization for Multi-task Reinforcement Learning, 2020.
 # A-MFEA-RL: Adaptive Multi-factorial Evolutionary Optimization for Multi-task Reinforcement Learning
 >(ABSTRACT) Evolutionary Computation has largely exhibited its potential to replace conventional learning algorithms in a manifold of Machine Learning tasks, especially those related to unsupervised (clustering) and supervised learning. It has not been until lately when the computational efficiency of evolutionary solvers has been put in prospective for training Reinforcement Learning (RL) models. However, most studies framed in this context so far have considered environments and tasks conceived in isolation, without any exchange of knowledge among related tasks. In this manuscript we present A-MFEA-RL, an adaptive version of the well-known MFEA algorithm whose search and inheritance operators are tailored for multitask RL environments. Specifically, our A-MFEA-RL approach includes crossover and inheritance mechanisms for refining the exchange of genetic material that rely on the multi-layered structure of modern Deep Learning based RL models. In order to assess the performance of the proposed evolutionary multitasking approach, we design an extensive experimental setup comprising different multitask RL environments of varying levels of complexity, comparing them to those furnished by alternative non-evolutionary multitask RL approaches. As concluded from the discussion of the obtained results, A-MFEA-RL not only achieves competitive success rates over the tasks being simultaneously solved, but also fosters the exchange of knowledge among tasks that could be intuitively expected to keep a degree of synergistic relationship.
@@ -23,21 +21,27 @@
 conda env create -f mujoco36.yml
 conda activate mujoco36
 ```
-A-MFEA-RL depends on Metaworld and [MuJoco](https://github.com/openai/mujoco-py) (license required). To install Metaworld please follow the instructions in the [official GitHub](https://github.com/rlworkgroup/metaworld) or run:
+A-MFEA-RL depends on Metaworld and [MuJoco](https://github.com/openai/mujoco-py) (license required). To install Metaworld, please run:
 ```bash
-pip install git+https://github.com/rlworkgroup/metaworld.git@master#egg=metaworld
+cd a-mfea-rl/metaworld
+pip install -e .
+pip install ray
 ```
 The experimentation can be replicated by running the `RUN_ALL.sh`. In order to run experiments independently:
 ```
-python3 exp.py -exp INT -t INT -p STR
+python3 exp.py -exp INT -t INT -p STR -r INT
 ```
 * `-exp`: Integer. 0 = TOY, 1 = MT-10/MT-10-R, 2 = MT-50/MT-50-R.
 * `-t`: Integer. Number of threads used by Ray.
 * `-p`: STRING. Name of the folder under `summary` where results are saved.
+* `-r`: Integer. 1 (default) for random reinitialization, 0 for fixed reinitialization.
+
+# Citing A-MFEA-RL
+> Aritz D. Martinez, Javier Del Ser, Eneko Osaba and Francisco Herrera, Adaptive Multi-factorial Evolutionary Optimization for Multi-task Reinforcement Learning, 2020.
 # Results
 | **Environment name (complexity)** | **MT-10** | **MT-10-R** | **MT-50** | **MT-50-R** |
--
GitLab
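For reference, below is a minimal end-to-end sketch that combines the setup and run steps introduced by this patch. The environment file `mujoco36.yml`, the `a-mfea-rl/metaworld` folder, `exp.py`, and the command-line flags are taken from the README text above; the thread count `8`, the results folder name `demo_run`, and the assumption that `exp.py` lives at the repository root are hypothetical placeholders rather than values confirmed by the patch.

```bash
# Sketch: end-to-end setup and a single experiment run, following the updated README.
# Assumes the repository has been cloned as `a-mfea-rl` and a valid MuJoCo license is installed.

# Create and activate the conda environment shipped with the repository.
conda env create -f mujoco36.yml
conda activate mujoco36

# Install the bundled Metaworld in editable mode, plus Ray.
cd a-mfea-rl/metaworld
pip install -e .
pip install ray
cd ..

# Run the MT-10 experiment (-exp 1) on 8 Ray threads (hypothetical value),
# saving results under summary/demo_run (hypothetical folder name),
# with random reinitialization enabled (-r 1, listed as the default).
python3 exp.py -exp 1 -t 8 -p demo_run -r 1
```

Omitting `-r` should behave like `-r 1`, since the README lists 1 as the default.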