diff --git a/README.md b/README.md
index 50444e85c19e9c3a1e6b0ef6daa61fb7c730e3e5..364b94cafe6606a305645cfc99ce7082c2831a27 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@ The code works on top of [Metaworld-v1](https://github.com/rlworkgroup/metaworld
 
 # Running the experimentation
 
-It is recommended to use the conda environment provided with the code (*mujoco36.yml*) for ease:
+It is recommended to use the conda environment provided with the code (`mujoco36.yml`) for ease:
 ```bash
 conda env create -f mujoco36.yml
 conda activate mujoco36
@@ -29,9 +29,9 @@ pip install -e .
 pip install ray
 ```
 
-The experimentation can be replicated by running the `RUN_ALL.sh`. In order to run experiments independently:
+In order to run experiments:
 
-```
+```bash
 python3 exp.py -exp INT -t INT -p STR -r INT
 ```
 
@@ -40,6 +40,13 @@ python3 exp.py -exp INT -t INT -p STR -r INT
 * `-p`: STRING. Name of the folder under `summary` where results are saved.
 * `-r`: Integer. 1 (default) for random reinitialization 0 for fixed reinitialization.
 
+```bash
+# Example: Running Fixed MT-10
+python3 exp.py -exp 1 -t 8 -p MT-10-F -r 0
+# Example: Running Random MT-50
+python3 exp.py -exp 2 -t 12 -p MT-50-R
+```
+
 # Citing A-MFEA-RL
 
 > Aritz D. Martinez, Javier Del Ser, Eneko Osaba and Francisco Herrera, Adaptive Multi-factorial Evolutionary Optimization for Multi-task Reinforcement Learning, 2020.
diff --git a/RUN_ALL.sh b/RUN_ALL.sh
deleted file mode 100755
index 601a9768a0e11b10a9a44a3b56db1b00dd832abf..0000000000000000000000000000000000000000
--- a/RUN_ALL.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-python3 exp.py -exp 0
-python3 exp.py -exp 1
-python3 exp.py -exp 2