Commit 3766a3d3 authored by Cernivec, Ales

y1 baseline

.vagrant
wazuh-ansible/
*.swp
*.retry
security-monitoring-ansible/ansible/opendistro/
[submodule "wazuh-docker"]
path = wazuh-docker
url = https://github.com/wazuh/wazuh-docker.git
[submodule "sm-controller"]
path = sm-controller
url = git@git.code.tecnalia.com:piacere/private/t64-runtime-security-monitoring/security-monitoring-controller.git
@startuml
skinparam responseMessageBelowArrow true
participant RuntimeController as RTPRC
participant MonitoringController as RUNMON #99FF99
participant PerformanceMonitoring as IAMON
participant PerformanceMonitoringAgents as IAMONAGENTS
participant PerformanceSelfLearning as IASEL
participant SecurityMonitoring as SECMON #99FF99
participant SecurityMonitoringAgents as SECMONAGENTS #99FF99
participant SecuritySelfLearning as SECSEL #99FF99
participant SelfHealing as IASEH
participant "DOML & IaC\nrepository" as DBDOMLIAC
participant "Infrastructural Code\nGenerator (ICG)" as DESICG
participant "IaC Execution Manager (IEM)" as RTIEM
participant "Infrastructural\nElements\nCatalogue" as DBINFRACAT
group start
RTPRC -> DBDOMLIAC: application info including monitoring info (includes the anomaly detection model)
RTPRC <- DBDOMLIAC: configuration with the anomaly detection model
note over RTPRC, SECMON: Wait for application to be deployed\n including Monitoring agents. These steps are the same as given in Runtime Monitoring - T6.1.
RUNMON -> SECMON: Start Security Monitoring stack
RUNMON <- SECMON: Ack
RUNMON -> SECSEL: Start Security Self Learning stack (configuration with a chosen model)
RUNMON <- SECSEL: Ack
end
group Security Monitoring
SECMON <- SECMONAGENTS: Data
SECMON -> SECMON: Store data
SECMON <-- SECMON: Data stored
SECMON -> SECMON: Calculate IOP support data
SECMON -> DBINFRACAT: Add IOP Data
SECMON <- DBINFRACAT: Store data ACK
SECMON <-- SECMON: Evaluate events (continuously)
group Anomaly detection
SECMON <- DBINFRACAT: Load a model/configuration
SECMON <-- SECMON: Detect anomalies (continuously)
end
note over SECMON, IASEH: In cases where an event has raised a warning and it needs direct healing action
SECMON -> IASEH: Notify event
SECMON <-- IASEH: Ack
end
group Security Self Learning
SECMON <- SECSEL: Acquire data
SECMON --> SECSEL: Data (continuous)
group Model training (optional)
SECSEL <- SECSEL: Train a model
SECSEL -> DBINFRACAT: Store the model to file storage
SECSEL <- DBINFRACAT: Store data ACK
end
end
group end
note over RTPRC, SECMON: These steps are similar to those from T6.1. Wait for application to be undeployed\n including Monitoring agents.
RUNMON -> SECMON: Stop Security Monitoring stack
RUNMON <- SECMON: Ack
RUNMON -> SECSEL: Stop Security Self Learning stack
RUNMON <- SECSEL: Ack
end
@enduml
README.md 0 → 100644
# Security Monitoring
This project is meant for quickly setting up a Wazuh instance using Ansible scripts
on top of infrastructure provisioned using Vagrant.
## Setting up the demo using docker-compose
### Requirements
Tested with:
* `docker-compose` 1.28.6
* `docker` version `20.10.12, build e91ed57`
```
$ docker --version
Docker version 20.10.12, build e91ed57
$ docker-compose version
docker-compose version 1.28.6, build unknown
docker-py version: 4.4.4
CPython version: 3.8.10
OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020
```
### Set up the demo
Check out the latest code and initialize and update the submodules:
```
git clone git@git.code.tecnalia.com:piacere/private/t64-runtime-security-monitoring/security-monitoring-deployment.git
git submodule init
git submodule update
# To fetch the latest submodules' code
git submodule update --remote
```
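To confirm that the submodules were fetched, list their status (the exact commit hashes depend on the pinned revisions):
```
$ git submodule status
```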
Important: check `sm-controller`'s configuration:
```bash
$ cat configuration/smc_settings.cfg
```
It should resemble the following:
```text
# Configuration of the PIACERE Security Monitoring Controller
[sqlalchemy]
SQLALCHEMY_DATABASE_URI = sqlite:///../storage/security-monitoring-controller-db.sqlite
SQLALCHEMY_TRACK_MODIFICATIONS = False
[sm]
# Security Monitoring section
SM_KIBANA_ENDPOINT = https://0.0.0.0:443/kibana
SM_ELASTICSEARCH_USERNAME = admin
SM_ELASTICSEARCH_PASSWORD = admin
SM_ELASTICSEARCH_ENDPOINT = elasticsearch:9200
SM_ELASTICSEARCH_SCHEMA = https
SM_DEFAULT_DEPLOYMENT_NAME = PIACERE Deployment
[smsl]
# Security Monitoring Self Learning section
SMSL_ENDPOINT = https://piacere-security-monitoring.xlab.si
SMSL_API_ENDPOINT = https://piacere-security-monitoring.xlab.si/api
SMSL_GRAFANA_ENDPOINT = https://piacere-security-monitoring.xlab.si/grafana
```
Check that the `sm` section is correct and that the endpoints are resolvable from your environment.
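A quick sanity check of the endpoints (a minimal sketch using the hostnames from the sample configuration above; adjust to your environment):
```
# DNS resolution of the self-learning endpoint
$ getent hosts piacere-security-monitoring.xlab.si
# HTTPS reachability, headers only (-k skips certificate verification)
$ curl -skI https://piacere-security-monitoring.xlab.si
```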
Then run:
```
$ docker-compose -f docker-compose.yml --env-file sm-controller/MANIFEST up
```
Examine the running services:
```
$ docker-compose -f docker-compose.yml --env-file sm-controller/MANIFEST ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
security-monitoring-deployment_elasticsearch_1 /usr/local/bin/docker-entr ... Up 0.0.0.0:9200->9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp
security-monitoring-deployment_kibana_1 /bin/sh -c ./entrypoint.sh Up 0.0.0.0:443->5601/tcp
security-monitoring-deployment_sm-c_1 python3 -m swagger_server Up 0.0.0.0:8080->8080/tcp
security-monitoring-deployment_wazuh_1 /init Up 0.0.0.0:1514->1514/tcp, 0.0.0.0:1515->1515/tcp, 1516/tcp, 0.0.0.0:514->514/udp,
0.0.0.0:55000->55000/tcp
```
Wazuh runs on `https://0.0.0.0:443`; the default username and password are `admin:admin` and can be changed via environment variables in the docker-compose file.
Sample requests towards `sm-c` (i.e., the Security Monitoring Controller, `sm-controller`) can be sent to the following endpoints:
```
$ curl http://localhost:8080/security-monitoring/v1/monitoring
[
{
".kibana_1": {
"aliases": {
".kibana": {}
}
},
".kibana_92668751_admin": {
"aliases": {}
},
".opendistro_security": {
"aliases": {}
},
"security-auditlog-2022.03.01": {
"aliases": {}
},
"wazuh-alerts-4.x-2022.03.01": {
"aliases": {}
},
"wazuh-monitoring-2022.9w": {
"aliases": {}
}
}
]
$ curl http://localhost:8080/security-monitoring/v1/events
[
{},
{},
{
"_shards": {
"failed": 0,
"skipped": 0,
"successful": 3,
"total": 3
},
"hits": {
"hits": [],
"max_score": null,
"total": {
"relation": "eq",
"value": 0
}
},
"timed_out": false,
"took": 3
}
]
```
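If `jq` is available, the responses are easier to inspect from the command line; for example, listing only the index names returned by the monitoring endpoint (a convenience sketch, not part of the controller itself):
```
$ curl -s http://localhost:8080/security-monitoring/v1/monitoring | jq '.[0] | keys'
```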
## Setting up the environment on dedicated VMs using Ansible
### Requirements
* Vagrant 2.2.14
* Ansible 2.9.16
* (optional, for integrations) `npm` / `npx` to run the simple HTTP echo server
### Setting up the demo - Ansible, VMs environment
First, check out Wazuh's tag `v4.1.5` into the current directory:
```
$ git clone https://github.com/wazuh/wazuh-ansible.git
$ cd wazuh-ansible
$ git checkout tags/v4.1.5
$ cd ..
```
You need to update two files and provide the IPs of the manager and the agents:
* wazuh-ansible/playbooks/wazuh-agent.yml
* wazuh-ansible/playbooks/wazuh-odfe-single.yml
```
diff --cc playbooks/wazuh-odfe-single.yml
index ce98cfa,d3ef5a3..0000000
--- a/playbooks/wazuh-odfe-single.yml
+++ b/playbooks/wazuh-odfe-single.yml
@@@ -12,10 -12,11 +12,10 @@@
single_node: true
minimum_master_nodes: 1
elasticsearch_node_master: true
- elasticsearch_network_host: <your server host>
- elasticsearch_network_host: 192.168.33.10
++ elasticsearch_network_host: 192.168.33.10
filebeat_node_name: node-1
- filebeat_output_elasticsearch_hosts: <your server host>
- filebeat_output_elasticsearch_hosts: 192.168.33.10
++ filebeat_output_elasticsearch_hosts: 192.168.33.10
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
- ip: <your server host>
+ ip: 192.168.33.10
- ansible_shell_allow_world_readable_temp: true
diff --git a/playbooks/wazuh-agent.yml b/playbooks/wazuh-agent.yml
index be73e03..79150b5 100644
--- a/playbooks/wazuh-agent.yml
+++ b/playbooks/wazuh-agent.yml
@@ -1,10 +1,10 @@
---
-- hosts: <your wazuh agents hosts>
+- hosts: 192.168.33.11, 192.168.33.12
roles:
- ../roles/wazuh/ansible-wazuh-agent
vars:
wazuh_managers:
- - address: <your manager IP>
+ - address: 192.168.33.10
port: 1514
protocol: tcp
api_port: 55000
```
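The IP placeholders can also be substituted non-interactively (a sketch; it covers only the placeholder strings shown above, so review the remaining hunks of the diff by hand):
```
$ sed -i 's/<your server host>/192.168.33.10/g' wazuh-ansible/playbooks/wazuh-odfe-single.yml
$ sed -i -e 's/<your manager IP>/192.168.33.10/' \
    -e 's/<your wazuh agents hosts>/192.168.33.11, 192.168.33.12/' \
    wazuh-ansible/playbooks/wazuh-agent.yml
```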
1. Provision Wazuh server and Wazuh agents:
```
$ cd security-monitoring-ansible
$ ENVIRONMENT=vagrant-1manager-2agents make create provision
```
2. Check the running instances:
Navigate your browser to `https://192.168.33.10:5601` and log in with the default credentials `admin:changeme`. Navigate to the `wazuh` section on the left-hand side.
You should see 2 agents registered and running with Wazuh.
3. Run the simple HTTP echo server using `npx`:
```
$ PORT=8088 npx http-echo-server
```
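To verify the echo server accepts connections on the integration hook port (cf. `custom_integration_hook`; the server simply echoes the request back):
```
$ curl -X POST http://localhost:8088 -H 'Content-Type: application/json' -d '{"test": "alert"}'
```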
### Checking that ElasticSearch is working
List the indices:
```
$ curl -X GET https://192.168.33.10:9200/_cat/indices?v -u admin:changeme -k
```
List all entries in the index `wazuh-alerts`:
```
$ curl -X GET https://192.168.33.10:9200/wazuh-alerts-4.x-2021.11.03/_search -u admin:changeme -k
```
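The search can be narrowed with a Lucene-style URI query, e.g. alerts with rule level 10 or higher (the index name embeds a date and will differ on your deployment):
```
$ curl -X GET 'https://192.168.33.10:9200/wazuh-alerts-4.x-2021.11.03/_search?q=rule.level:%3E=10&size=5&pretty' -u admin:changeme -k
```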
### Potential issues
#### Vagrant issue
```
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
umount /mnt
Stdout from the command:
Stderr from the command:
umount: /mnt: not mounted.
```
Solution:
```
$ vagrant plugin uninstall vagrant-vbguest
```
#### Ansible failing due to SSH issues
This applies to both the `manager` and the `agents`: the VMs need to be running already.
```
[sre maj 12][10:33:33][ales@~/workspace/PIACERE/security-monitoring/wazuh-ansible]
$ ssh vagrant@192.168.33.10 -i ../inventory-server/.vagrant/machines/default/virtualbox/private_key
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:tq9iDMmDjQP9igfVLfIO/R7hKfyzbzfXT/F+KkTcn54.
Please contact your system administrator.
Add correct host key in /home/ales/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/ales/.ssh/known_hosts:336
remove with:
ssh-keygen -f "/home/ales/.ssh/known_hosts" -R "192.168.33.10"
ECDSA host key for 192.168.33.10 has changed and you have requested strict checking.
Host key verification failed.
[sre maj 12][10:35:34][ales@~/workspace/PIACERE/security-monitoring/wazuh-ansible]
```
Solution:
```
ssh-keygen -f "/home/ales/.ssh/known_hosts" -R "192.168.33.10"
ssh-keygen -f "/home/ales/.ssh/known_hosts" -R "192.168.33.11"
ssh-keygen -f "/home/ales/.ssh/known_hosts" -R "192.168.33.12"
ssh-keyscan -H 192.168.33.10 >> /home/ales/.ssh/known_hosts
ssh-keyscan -H 192.168.33.11 >> /home/ales/.ssh/known_hosts
ssh-keyscan -H 192.168.33.12 >> /home/ales/.ssh/known_hosts
```
@startuml
participant RuntimeController
participant RuntimeMonitoring
participant SecurityMonitoring #99FF99
participant DOML
participant Selflearning
group Configure security monitoring
RuntimeController->SecurityMonitoring: Start security monitoring configuration
SecurityMonitoring -> DOML: Acquire information about the NFRs to configure security monitoring
SecurityMonitoring -> SecurityMonitoring: Configure server (rules)
SecurityMonitoring -> SecurityMonitoring: Configure agents (register agents, define rules)
SecurityMonitoring -> RuntimeController: Security monitoring configured
note over SecurityMonitoring,RuntimeController:The server and agents should be deployed beforehand (included in the IaC already? Described within DOML/IaC implicitly?)
end
group Start security monitoring
RuntimeController->SecurityMonitoring: Start security monitoring
SecurityMonitoring -> SecurityMonitoring : Start security monitoring
RuntimeController<-SecurityMonitoring: Security monitoring started
end
group Security monitoring runtime
Selflearning<-SecurityMonitoring: Send notification/alarm
end
group Stop security monitoring
RuntimeController ->SecurityMonitoring: Stop security monitoring
SecurityMonitoring ->RuntimeController: Security monitoring stopped
end
@enduml
version: '3.7'
## Runs all the needed services on the piacere-network
services:
sm-c:
extends:
file: sm-controller/docker-compose.yml
service: sm-controller
depends_on:
- elasticsearch
links:
- elasticsearch:elasticsearch
wazuh:
image: wazuh/wazuh-odfe:4.2.5
hostname: wazuh-manager
restart: always
ports:
- "1514:1514"
- "1515:1515"
- "514:514/udp"
- "55000:55000"
environment:
- ELASTICSEARCH_URL=https://elasticsearch:9200
- ELASTIC_USERNAME=admin
- ELASTIC_PASSWORD=admin
- FILEBEAT_SSL_VERIFICATION_MODE=none
volumes:
- ossec_api_configuration:/var/ossec/api/configuration
- ossec_etc:/var/ossec/etc
- ossec_logs:/var/ossec/logs
- ossec_queue:/var/ossec/queue
- ossec_var_multigroups:/var/ossec/var/multigroups
- ossec_integrations:/var/ossec/integrations
- ossec_active_response:/var/ossec/active-response/bin
- ossec_agentless:/var/ossec/agentless
- ossec_wodles:/var/ossec/wodles
- filebeat_etc:/etc/filebeat
- filebeat_var:/var/lib/filebeat
elasticsearch:
image: amazon/opendistro-for-elasticsearch:1.13.2
hostname: elasticsearch
restart: always
ports:
- "9200:9200"
environment:
- discovery.type=single-node
- cluster.name=wazuh-cluster
- network.host=0.0.0.0
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- bootstrap.memory_lock=true
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
kibana:
image: wazuh/wazuh-kibana-odfe:4.2.5
hostname: kibana
restart: always
ports:
- 443:5601
environment:
- ELASTICSEARCH_USERNAME=admin
- ELASTICSEARCH_PASSWORD=admin
- SERVER_SSL_ENABLED=true
- SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/opendistroforelasticsearch.example.org.cert
- SERVER_SSL_KEY=/usr/share/kibana/config/opendistroforelasticsearch.example.org.key
depends_on:
- elasticsearch
links:
- elasticsearch:elasticsearch
- wazuh:wazuh
volumes:
ossec_api_configuration:
ossec_etc:
ossec_logs:
ossec_queue:
ossec_var_multigroups:
ossec_integrations:
ossec_active_response:
ossec_agentless:
ossec_wodles:
filebeat_etc:
filebeat_var:
ENVIRONMENT ?= vagrant-1manager-2agents
DEPLOY_DIR = $(PWD)
ENV_DIR = $(DEPLOY_DIR)/environments/$(ENVIRONMENT)
ANSIBLE_DIR = $(DEPLOY_DIR)/ansible
include $(ENV_DIR)/$(ENVIRONMENT).mk
ANSIBLE_ARGS = -i $(ENV_DIR)/inventory.txt \
-e ansible_dir=$(ANSIBLE_DIR) \
-e environment_dir=$(ENV_DIR)
reprovision:
@ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook $(ANSIBLE_ARGS) $(ANSIBLE_DIR)/provision-reset.yml
provision-managers:
@ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook $(ANSIBLE_ARGS) $(ANSIBLE_DIR)/provision-managers.yml
provision-agents:
@ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook $(ANSIBLE_ARGS) $(ANSIBLE_DIR)/provision-agents.yml
provision:
@ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook $(ANSIBLE_ARGS) $(ANSIBLE_DIR)/provision.yml
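Typical invocation, assuming the selected environment directory provides `inventory.txt` and the matching `.mk` include:
```
$ ENVIRONMENT=vagrant-1manager-2agents make provision-managers
$ ENVIRONMENT=vagrant-1manager-2agents make provision-agents
```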
# Security Monitoring
This project is meant for quickly setting up a Wazuh instance using Ansible scripts
on top of infrastructure provisioned using Vagrant.
In addition to Wazuh, ClamAV is also installed on the agent machines (for testing purposes).
## Requirements
* Vagrant 2.2.14
* Ansible 2.9.16
## Setting up the demo
First, check out Wazuh's tag `v4.1.5` into the directory above the current one:
```
$ cd ..
$ git clone https://github.com/wazuh/wazuh-ansible.git
$ cd wazuh-ansible
$ git checkout tags/v4.1.5
$ cd ../security-monitoring-ansible
```
1. Provision Wazuh server and Wazuh agents:
```
[sre maj 12][10:31:32][ales@~/workspace/PIACERE/security-monitoring/security-monitoring-ansible]
$ make create provision
```
2. Check the running instances:
Navigate your browser to `https://192.168.33.10:5601` and log in with the default credentials `admin:changeme`. Navigate to the `wazuh` section on the left-hand side.
You should see 2 agents registered and running with Wazuh.
---
- name: Install Epel-Release
become: True
yum:
name: epel-release
- name: Install ClamAV packages
become: True
yum:
name:
- clamav-server
- clamav-data
- clamav-update
- clamav-filesystem
- clamav
- clamav-scanner-systemd
- clamav-devel
- clamav-lib
- clamav-server-systemd
- name: Configure SELinux
become: true
command: setsebool -P {{ item }}
with_items:
- antivirus_can_scan_system 1
- clamd_use_jit 1
- name: Edit ClamAV configuration
become: true
replace:
path: /etc/clamd.d/scan.conf
regexp: '^Example'
replace: '#Example'
- name: Edit ClamAV socket location configuration
become: true
replace:
path: /etc/clamd.d/scan.conf
regexp: '#LocalSocket /run/clamd.scan/clamd.sock'
replace: 'LocalSocket /tmp/clamd.sock'
- name: Edit ClamAV’s freshclam update engine configuration
become: true
replace:
path: /etc/freshclam.conf
regexp: '^Example'
replace: '#Example'
- name: Run virus definition database update
become: True
command: freshclam
- name: Start ClamAV and run it on boot
become: True
service:
name: clamd@scan
state: restarted
enabled: yes
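After provisioning, ClamAV detection can be exercised on an agent with the standard (harmless) EICAR test string; depending on the configured rules this should also surface as a Wazuh event:
```
$ printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/eicar.txt
$ clamscan /tmp/eicar.txt
```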
#!/bin/sh
# Copyright (C) 2015-2020, Wazuh Inc.
# Created by Wazuh, Inc. <info@wazuh.com>.
# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2
WPYTHON_BIN="framework/python/bin/python3"
SCRIPT_PATH_NAME="$0"
DIR_NAME="$(cd $(dirname ${SCRIPT_PATH_NAME}); pwd -P)"
SCRIPT_NAME="$(basename ${SCRIPT_PATH_NAME})"
case ${DIR_NAME} in
*/active-response/bin | */wodles*)
if [ -z "${WAZUH_PATH}" ]; then
WAZUH_PATH="$(cd ${DIR_NAME}/../..; pwd)"
fi
PYTHON_SCRIPT="${DIR_NAME}/${SCRIPT_NAME}.py"
;;
*/bin)
if [ -z "${WAZUH_PATH}" ]; then
WAZUH_PATH="$(cd ${DIR_NAME}/..; pwd)"
fi
PYTHON_SCRIPT="${WAZUH_PATH}/framework/scripts/${SCRIPT_NAME}.py"
;;
*/integrations)
if [ -z "${WAZUH_PATH}" ]; then
WAZUH_PATH="$(cd ${DIR_NAME}/..; pwd)"
fi
PYTHON_SCRIPT="${DIR_NAME}/${SCRIPT_NAME}.py"
;;
esac
${WAZUH_PATH}/${WPYTHON_BIN} ${PYTHON_SCRIPT} "$@"
#!/usr/bin/env python
import json
import sys
import time
import os
try:
import requests
from requests.auth import HTTPBasicAuth
except Exception as e:
print("No module 'requests' found. Install: pip install requests")
sys.exit(1)
# Global vars
debug_enabled = False
pwd = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
json_alert = {}
now = time.strftime("%a %b %d %H:%M:%S %Z %Y")
# Set paths
log_file = '{0}/logs/integrations.log'.format(pwd)
def main(args):
debug("# Starting")
# Read args
alert_file_location = args[1]
webhook = args[3]
debug("# Webhook")
debug(webhook)
debug("# File location")
debug(alert_file_location)
# Load alert. Parse JSON object.
with open(alert_file_location) as alert_file:
json_alert = json.load(alert_file)
debug("# Processing alert")
debug(json_alert)
debug("# Generating message")
msg = generate_msg(json_alert)
debug(msg)
debug("# Sending message")
send_msg(msg, webhook)
def debug(msg):
if debug_enabled:
msg = "{0}: {1}\n".format(now, msg)
print(msg)
f = open(log_file, "a")
f.write(msg)
f.close()
def generate_msg(alert):
level = alert['rule']['level']
msg = {}
msg['pretext'] = "WAZUH Alert"
msg['title'] = alert['rule']['description'] if 'description' in alert['rule'] else "N/A"
msg['text'] = alert.get('full_log')
msg['fields'] = []
if 'agent' in alert:
msg['fields'].append({
"title": "Agent",
"value": "({0}) - {1}".format(
alert['agent']['id'],
alert['agent']['name']
),
})
if 'agentless' in alert:
msg['fields'].append({
"title": "Agentless Host",
"value": alert['agentless']['host'],
})
msg['fields'].append({"title": "Location", "value": alert['location']})
msg['fields'].append({
"title": "Rule ID",
"value": "{0} _(Level {1})_".format(alert['rule']['id'], level),
})
msg['ts'] = alert['id']
attach = {'attachments': [msg]}
return json.dumps(attach)
def send_msg(msg, url):
headers = {'content-type': 'application/json', 'Accept-Charset': 'UTF-8'}
res = requests.post(url, data=msg, headers=headers)
debug(res)
if __name__ == "__main__":
try:
# Read arguments
bad_arguments = False
if len(sys.argv) >= 4:
msg = '{0} {1} {2} {3} {4}'.format(
now,
sys.argv[1],
sys.argv[2],
sys.argv[3],
sys.argv[4] if len(sys.argv) > 4 else '',
)
debug_enabled = (len(sys.argv) > 4 and sys.argv[4] == 'debug')
else:
msg = '{0} Wrong arguments'.format(now)
bad_arguments = True
# Logging the call
f = open(log_file, 'a')
f.write(msg + '\n')
f.close()
if bad_arguments:
debug("# Exiting: Bad arguments.")
sys.exit(1)
# Main function
main(sys.argv)
except Exception as e:
debug(str(e))
raise
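A local smoke test of the integration script (a sketch: Wazuh normally calls it through the shell wrapper with an alert file, an API-key placeholder, and the hook URL; run it from `/var/ossec/integrations` so the relative log path resolves, with the echo server from above listening on port 8088):
```
$ cat > /tmp/alert.json <<'EOF'
{"id": "1646123456.789", "location": "/var/log/secure",
 "full_log": "example log line",
 "rule": {"id": "5710", "level": 10, "description": "sshd: attempt to login using a non-existent user"}}
EOF
$ cd /var/ossec/integrations
$ python3 custom-integration.py /tmp/alert.json '' http://localhost:8088 debug
```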
---
- name: Add custom integration with Wazuh deployment
copy: src={{ item.src }} dest={{ item.dest }} owner=root group=ossec mode=0750
with_items:
- { src: 'custom-integration', dest: '/var/ossec/integrations/' }
- { src: 'custom-integration.py', dest: '/var/ossec/integrations/' }
- name: Restart wazuh-manager
service:
name: wazuh-manager
state: restarted
enabled: true
---
- name: add Docker CE repository
yum_repository:
name: docker-ce-stable
file: docker
description: Docker CE Stable - $basearch
baseurl: https://download.docker.com/linux/centos/7/$basearch/stable
enabled: yes
gpgcheck: yes
gpgkey: https://download.docker.com/linux/centos/gpg
- name: create a docker group
group: name=docker
- name: install Docker CE from repository
yum: name=docker-ce state=installed
- name: add the current user '{{ ansible_user }}' to the docker group
user: name={{ ansible_user }} groups=docker append=yes
- name: enable docker service
service: name=docker enabled=yes state=started
- name: reset connection to apply group permissions
meta: reset_connection
---
custom_integration_hook: 'http://10.0.2.2:8088'
custom_integration_alert_level: 10
custom_integration_alert_format: 'json'
elasticsearch_host_ip: '0.0.0.0'
wazuh_manager_ip: '0.0.0.0'
---
- name: include globals
include_vars: globals.yml
---
# Agents
- hosts: wazuh_agents
become: yes
pre_tasks:
- import_tasks: "{{ ansible_dir }}/globals/vars.yml"
roles:
- ../../../wazuh-ansible/roles/wazuh/ansible-wazuh-agent
- docker
vars:
wazuh_managers:
- address: "{{ wazuh_manager_ip }}"
port: 1514
protocol: tcp
api_port: 55000
api_proto: 'http'
api_user: admin
max_retries: 5
retry_interval: 5
tasks:
- name: Import ClamAV tasks
import_tasks: "{{ ansible_dir }}/clamav/tasks/install-clamav.yml"
---
# Manager
- hosts: wazuh_managers
become: yes
become_user: root
pre_tasks:
- import_tasks: "{{ ansible_dir }}/globals/vars.yml"
roles:
- role: ../../../wazuh-ansible/roles/opendistro/opendistro-elasticsearch
- role: ../../../wazuh-ansible/roles/wazuh/ansible-wazuh-manager
- role: ../../../wazuh-ansible/roles/wazuh/ansible-filebeat-oss
- role: ../../../wazuh-ansible/roles/opendistro/opendistro-kibana
- role: custom-integration
vars:
single_node: true
## Set-up integrations
wazuh_manager_integrations:
# custom-integration
- name: custom-integration
hook_url: "{{ custom_integration_hook }}"
alert_level: "{{ custom_integration_alert_level }}"
alert_format: "{{ custom_integration_alert_format }}"
minimum_master_nodes: 1
elasticsearch_node_master: true
elasticsearch_network_host: "{{ elasticsearch_host_ip }}"
filebeat_node_name: node-1
filebeat_output_elasticsearch_hosts: "{{ elasticsearch_host_ip }}"
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: "{{ elasticsearch_host_ip }}"
---
- name: Start provision of the Wazuh Managers
import_playbook: provision-managers.yml
- name: Start provision of the Wazuh Agents
import_playbook: provision-agents.yml
# -*- mode: ruby -*-
# vi: set ft=ruby :
servers=[
{
:hostname => "manager",
:ip => "192.168.33.10",
:box => "centos/7",
:ram => 4096,
:cpu => 2
},
{
:hostname => "agent1",
:ip => "192.168.33.11",
:box => "centos/7",
:ram => 512,
:cpu => 1
},
{
:hostname => "agent2",
:ip => "192.168.33.12",
:box => "centos/7",
:ram => 512,
:cpu => 1
}
]
Vagrant.configure(2) do |config|
servers.each do |machine|
config.vm.define machine[:hostname] do |node|
# Can cause error:
# "You are trying to forward a host IP that does not exist. Please set `host_ip`
# to the address of an existing IPv4 network interface, or remove the option
# from your port forward configuration."
if machine[:hostname] == "manager"
node.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "192.168.33.10"
node.vm.network "forwarded_port", guest: 443, host: 8443 , host_ip: "192.168.33.10"
node.vm.network "forwarded_port", guest: 55000, host: 55000 , host_ip: "192.168.33.10"
node.vm.network "forwarded_port", guest: 1514, host: 1514 , host_ip: "192.168.33.10"
node.vm.network "forwarded_port", guest: 1515, host: 1515 , host_ip: "192.168.33.10"
node.vm.network "forwarded_port", guest: 1516, host: 1516 , host_ip: "192.168.33.10"
end
node.vm.box = machine[:box]
node.vm.hostname = machine[:hostname]
node.vm.network "private_network", ip: machine[:ip]
node.vm.provider "virtualbox" do |vb|
vb.customize ["modifyvm", :id, "--memory", machine[:ram]]
end
end
end
end
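With this Vagrantfile, the three machines are brought up and inspected in the usual way (run from the directory containing the Vagrantfile):
```
$ vagrant up
$ vagrant status
$ vagrant ssh manager
```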
[wazuh_managers]
192.168.33.10 public_ip=192.168.33.10 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant ansible_ssh_private_key_file=environments/vagrant-1manager-2agents/.vagrant/machines/manager/virtualbox/private_key
[wazuh_managers:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
[wazuh_agents]
192.168.33.11 public_ip=192.168.33.11 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant ansible_ssh_private_key_file=environments/vagrant-1manager-2agents/.vagrant/machines/agent1/virtualbox/private_key
192.168.33.12 public_ip=192.168.33.12 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant ansible_ssh_private_key_file=environments/vagrant-1manager-2agents/.vagrant/machines/agent2/virtualbox/private_key
[wazuh_agents:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
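Connectivity to every host in this inventory can be verified with Ansible's `ping` module before provisioning (run from the `security-monitoring-ansible` directory so the relative key paths resolve):
```
$ ansible -i environments/vagrant-1manager-2agents/inventory.txt all -m ping
```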