The code works on top of [Metaworld-v1](https://github.com/rlworkgroup/metaworld).
# Running the experimentation
For convenience, it is recommended to use the conda environment provided with the code (`mujoco36.yml`):
```bash
conda env create -f mujoco36.yml
conda activate mujoco36
...
...
pip install -e .
pip install ray
```
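As an optional sanity check that the main dependencies were installed correctly (this only verifies that the packages are importable; it does not exercise the MuJoCo bindings):
```bash
# Optional sanity check: verify the installed packages can be imported.
python3 -c "import metaworld, ray; print('metaworld and ray imported OK')"
```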
The full set of experiments can be replicated by running `RUN_ALL.sh`. To run experiments independently:
```bash
python3 exp.py -exp INT -t INT -p STR -r INT
```
...
...
* `-p`: String. Name of the folder under `summary` where results are saved.
* `-r`: Integer. 1 (default) for random reinitialization; 0 for fixed reinitialization.
```bash
# Example: Running Fixed MT-10
python3 exp.py -exp 1 -t 8 -p MT-10-F -r 0
# Example: Running Random MT-50
python3 exp.py -exp 2 -t 12 -p MT-50-R
```
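As a rough sketch of how the independent runs above could be chained in a single script (this is only illustrative and not necessarily the actual contents of `RUN_ALL.sh`; the folder names and the `-exp`/`-t` pairings are copied or extrapolated from the examples above):
```bash
#!/bin/bash
# Illustrative sketch: chain several independent runs back to back.
# Adjust the arguments to match the experiments you want to replicate.
python3 exp.py -exp 1 -t 8 -p MT-10-F -r 0    # Fixed MT-10
python3 exp.py -exp 1 -t 8 -p MT-10-R         # Random MT-10 (-r defaults to 1)
python3 exp.py -exp 2 -t 12 -p MT-50-F -r 0   # Fixed MT-50
python3 exp.py -exp 2 -t 12 -p MT-50-R        # Random MT-50
```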
# Citing A-MFEA-RL
> Aritz D. Martinez, Javier Del Ser, Eneko Osaba and Francisco Herrera, Adaptive Multi-factorial Evolutionary Optimization for Multi-task Reinforcement Learning, 2020.