Commit 5cb68421 authored by Perez Visaires, Jon's avatar Perez Visaires, Jon

Notebooks updated.

parent 892ef5f6
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Autoencoder"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import the required libraries"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"import os\n",
"import shutil\n",
"import sys\n",
"import math\n",
"import random\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"import scipy.misc\n",
"import matplotlib.pyplot as plt\n",
"sys.path.append(\"../tools\") # MantaFlow's own tools\n",
"import uniio # Library for reading .uni files"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We seed the random-number generators. Since they are initialized with the same number, the results will not change between runs."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(13)\n",
"tf.set_random_seed(13)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Path to the simulation data, where the results are also saved."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"base_path = \"../data\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading the simulation data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We load the data from the .uni files into NumPy arrays."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"densities = []"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"for sim in range(1000, 2000):\n",
" if os.path.exists(\"%s/simSimple_%04d\" % (base_path, sim)): # Check that the folder exists (each one holds 100 frames of data)\n",
" for i in range(0, 100):\n",
" filename = \"%s/simSimple_%04d/density_%04d.uni\" # Name of each (density) frame\n",
" uni_path = filename % (base_path, sim, i) # 100 frames per sim; fill in the path parameters\n",
" header, content = uniio.readUni(uni_path) # Returns a NumPy array [Z, Y, X, C]\n",
" h = header[\"dimX\"]\n",
" w = header[\"dimY\"]\n",
" arr = content[:, ::-1, :, :] # Reverse the Y order\n",
" arr = np.reshape(arr, [w, h, 1]) # Discard Z\n",
" densities.append(arr)"
]
},
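{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `binary_crossentropy` loss used further down assumes values in [0, 1]. MantaFlow density fields are normally already in that range, but a quick sanity check (a sketch added here, not part of the original pipeline) does not hurt:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: verify the value range before training with binary_crossentropy\n",
"d = np.array(densities)\n",
"print(\"min: %f, max: %f\" % (d.min(), d.max())) # Expected to lie within [0, 1]"
]
},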
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We need at least 2 complete simulations to work properly."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"load_num = len(densities)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"if load_num < 200:\n",
" print(\"Error - use at least two complete simulations\")\n",
" exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We convert the \"densities\" list into a NumPy array."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(1100, 64, 64, 1)"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"densities = np.reshape(densities, (len(densities), 64, 64, 1))\n",
"densities.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creación de set de validación"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We build the validation set from the generated simulation data: at least one complete simulation or 10% of the data (whichever is larger)."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Read uni files, total data (1100, 64, 64, 1)\n",
"Split into 990 training and 110 validation samples\n"
]
}
],
"source": [
"print(\"Read uni files, total data \" + format(densities.shape))\n",
"\n",
"vali_size = max(100, int(load_num * 0.1)) # At least one complete simulation\n",
"vali_data = densities[load_num - vali_size : load_num, :]\n",
"densities = densities[0 : load_num - vali_size, :]\n",
"\n",
"print(\"Split into %d training and %d validation samples\" % (densities.shape[0], vali_data.shape[0]))\n",
"\n",
"load_num = densities.shape[0]"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(990, 64, 64, 1)\n",
"(110, 64, 64, 1)\n"
]
}
],
"source": [
"densities = np.reshape(densities, (len(densities), 64, 64, 1))\n",
"vali_data = np.reshape(vali_data, (len(vali_data), 64, 64, 1))\n",
"\n",
"print(densities.shape)\n",
"print(vali_data.shape)"
]
},
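{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since `matplotlib` is already imported, we can inspect a sample training frame (an illustrative sketch added here; the frame index is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: visualize one training frame to check the data looks sensible\n",
"plt.imshow(densities[50, :, :, 0], cmap=\"gray\", origin=\"lower\")\n",
"plt.title(\"Density frame 50\")\n",
"plt.show()"
]
},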
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the autoencoder model with Keras (Sequential)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import the Keras libraries"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using TensorFlow backend.\n"
]
}
],
"source": [
"from keras.models import Sequential\n",
"from keras.layers import Conv2D, MaxPooling2D, UpSampling2D"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating the model layers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the first layer we must define the expected input dimensions."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"l0 = Conv2D(filters = 1, \n",
" kernel_size = (3, 3), \n",
" activation = \"relu\", \n",
" padding = \"same\", \n",
" input_shape = (densities.shape[1], densities.shape[2], densities.shape[3]))"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"autoencoder = Sequential([l0])"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"autoencoder.compile(optimizer = \"adadelta\", loss = \"binary_crossentropy\")"
]
},
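{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `MaxPooling2D` and `UpSampling2D` are imported but never used: the model above is a single convolution with no spatial bottleneck, so it does not compress anything. The following cell is a sketch (hypothetical layer sizes, not trained or used below) of how those layers could turn it into a real encoder–decoder:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch (not used below): an encoder-decoder with a spatial bottleneck,\n",
"# built from the already-imported MaxPooling2D / UpSampling2D layers.\n",
"deep_autoencoder = Sequential([\n",
" Conv2D(16, (3, 3), activation=\"relu\", padding=\"same\", input_shape=(64, 64, 1)),\n",
" MaxPooling2D((2, 2), padding=\"same\"), # 64x64 -> 32x32\n",
" Conv2D(8, (3, 3), activation=\"relu\", padding=\"same\"),\n",
" MaxPooling2D((2, 2), padding=\"same\"), # 32x32 -> 16x16 (bottleneck)\n",
" Conv2D(8, (3, 3), activation=\"relu\", padding=\"same\"),\n",
" UpSampling2D((2, 2)), # 16x16 -> 32x32\n",
" Conv2D(16, (3, 3), activation=\"relu\", padding=\"same\"),\n",
" UpSampling2D((2, 2)), # 32x32 -> 64x64\n",
" Conv2D(1, (3, 3), activation=\"sigmoid\", padding=\"same\") # Back to one channel\n",
"])\n",
"deep_autoencoder.compile(optimizer=\"adadelta\", loss=\"binary_crossentropy\")"
]
},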
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Train on 990 samples, validate on 110 samples\n",
"Epoch 1/10\n",
"990/990 [==============================] - 1s 508us/step - loss: 0.1760 - val_loss: 0.1559\n",
"Epoch 2/10\n",
"990/990 [==============================] - 0s 406us/step - loss: 0.1219 - val_loss: 0.1284\n",
"Epoch 3/10\n",
"990/990 [==============================] - 0s 407us/step - loss: 0.1087 - val_loss: 0.1143\n",
"Epoch 4/10\n",
"990/990 [==============================] - 0s 405us/step - loss: 0.1011 - val_loss: 0.1074\n",
"Epoch 5/10\n",
"990/990 [==============================] - 0s 409us/step - loss: 0.0943 - val_loss: 0.0977\n",
"Epoch 6/10\n",
"990/990 [==============================] - 0s 408us/step - loss: 0.0870 - val_loss: 0.0896\n",
"Epoch 7/10\n",
"990/990 [==============================] - 0s 416us/step - loss: 0.0790 - val_loss: 0.0824\n",
"Epoch 8/10\n",
"990/990 [==============================] - 0s 409us/step - loss: 0.0731 - val_loss: 0.0789\n",
"Epoch 9/10\n",
"990/990 [==============================] - 0s 422us/step - loss: 0.0688 - val_loss: 0.0766\n",
"Epoch 10/10\n",
"990/990 [==============================] - 0s 414us/step - loss: 0.0658 - val_loss: 0.0754\n"
]
},
{
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7fcd0db7bf98>"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"autoencoder.fit(densities, densities, \n",
" epochs = 10,\n",
" verbose = 1,\n",
" validation_data = (vali_data, vali_data),\n",
" shuffle = True)"
]
},
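{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see what the trained model does, we can compare a validation frame with its reconstruction (an illustrative sketch added here; the frame index is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: compare an original validation frame with its reconstruction\n",
"recon = autoencoder.predict(vali_data)\n",
"fig, (ax1, ax2) = plt.subplots(1, 2)\n",
"ax1.imshow(vali_data[0, :, :, 0], cmap=\"gray\", origin=\"lower\")\n",
"ax1.set_title(\"Original\")\n",
"ax2.imshow(recon[0, :, :, 0], cmap=\"gray\", origin=\"lower\")\n",
"ax2.set_title(\"Reconstruction\")\n",
"plt.show()"
]
},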
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}