From a7b3bce88d582d52c567bb31d670e39fd7d38cb6 Mon Sep 17 00:00:00 2001
From: Ztira <aritz.martinez@tecnalia.com>
Date: Wed, 27 Jan 2021 10:13:49 +0100
Subject: [PATCH] Some extra info added to README

---
 README.md  | 13 ++++++++++---
 RUN_ALL.sh |  3 ---
 2 files changed, 10 insertions(+), 6 deletions(-)
 delete mode 100755 RUN_ALL.sh

diff --git a/README.md b/README.md
index 50444e8..364b94c 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@ The code works on top of [Metaworld-v1](https://github.com/rlworkgroup/metaworld
 
 # Running the experimentation
 
-It is recommended to use the conda environment provided with the code (*mujoco36.yml*) for ease:
+It is recommended to use the conda environment provided with the code (`mujoco36.yml`) for ease:
 ```bash
 conda env create -f mujoco36.yml
 conda activate mujoco36
@@ -29,9 +29,9 @@ pip install -e .
 pip install ray
 ```
 
-The experimentation can be replicated by running the  `RUN_ALL.sh`. In order to run experiments independently:
+To run an experiment:
 
-```
+```bash
 python3 exp.py -exp INT -t INT -p STR -r INT
 ```
 
@@ -40,6 +40,13 @@ python3 exp.py -exp INT -t INT -p STR -r INT
 * `-p`: STRING. Name of the folder under `summary` where results are saved.
 * `-r`: Integer. 1 (default) for random reinitialization, 0 for fixed reinitialization.
 
+```bash
+# Example: MT-10 with fixed reinitialization
+python3 exp.py -exp 1 -t 8 -p MT-10-F -r 0
+# Example: MT-50 with random reinitialization (-r defaults to 1)
+python3 exp.py -exp 2 -t 12 -p MT-50-R
+```
+
 # Citing A-MFEA-RL
 > Aritz D. Martinez, Javier Del Ser, Eneko Osaba and Francisco Herrera, Adaptive Multi-factorial Evolutionary Optimization for Multi-task Reinforcement Learning, 2020. 
 
diff --git a/RUN_ALL.sh b/RUN_ALL.sh
deleted file mode 100755
index 601a976..0000000
--- a/RUN_ALL.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-python3 exp.py -exp 0
-python3 exp.py -exp 1
-python3 exp.py -exp 2
-- 
GitLab