    Security Monitoring Demo

    This project is meant for quickly setting up a demo of evidence collection with Wazuh.

    The project is deployed using Ansible scripts on top of infrastructure provisioned with Vagrant.

    It creates 5 CentOS virtual machines (if run in full-setup mode):

    • Wazuh server (manager),
    • 2x machines acting as Wazuh agents,
    • Evidence Collector,
    • Clouditor.

    In addition to Wazuh, ClamAV is also installed on agent machines.


    Requirements

    • Vagrant 2.2.19
    • VirtualBox 6.1.32
    • Ansible >=2.9.6
  • (optional, for integrations) npm / npx, in order to run the simple HTTP server for the integrations

    Setting up the demo

    Important: make sure you have installed the right versions of Vagrant and VirtualBox!

    1. Check out Wazuh's tag v4.1.5 into the current directory:

      $ make clone-wazuh
    2. Select your ENVIRONMENT in the Makefile. Set it to full-setup or no-collector (for development purposes, when evidence-collector runs on the local machine); see the sketch after this list.

      Note: Docker registry credentials used for pulling the Evidence Collector are located in /ansible/docker/credentials/credentials.yml. They don't need to be changed unless you explicitly want to use another registry.

    3. If you're using the full-setup environment, you can set custom environment variables (that will be passed to evidence-collector) in /environments/full-setup/.env.

      If you wish to set or remove the custom resource ID mapping scheme used by evidence-collector, you can change the mapped values inside /environments/full-setup/resource-id-map.json.

      See Evidence collector's documentation for more information.

      Note: neither of these two files has to be changed for security-monitoring to work. You can (and, in the case of .env, should) leave them unchanged.

    4. Create and provision VMs:

      $ make create provision

      Note: the create command also adds the /etc/vbox/networks.conf config required by Vagrant/VirtualBox.
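
    A minimal sketch of the environment selection from step 2, assuming the root Makefile uses a plain variable assignment (the exact line may differ):

    ENVIRONMENT = full-setup    # or no-collector, for local development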


    Using demo components

    Alert forwarding

    To test Wazuh's alert forwarding, run a simple HTTP echo server using npx:

    $ PORT=8088 npx http-echo-server

    Clouditor

    Note: the Clouditor version is defined in /ansible/provision-clouditor.yml and can be changed if needed.

    Clouditor starts automatically when the Clouditor VM is provisioned.

    To see Clouditor's output, ssh to its machine and examine the log file:

    $ make logs-clouditor

    To manually (re)start Clouditor (normally not needed), SSH to the Clouditor VM and run make inside /home/vagrant/clouditor:

    $ make ssh-clouditor    # on host machine
    
    $ make run              # on VM

    Evidence Collector

    To see Evidence Collector's output, ssh to its machine and open Docker logs:

    $ make logs-evidence-collector

    Wazuh

    To check running instances (via the Wazuh web interface):

    1. Navigate your browser to: https://192.168.33.10:5601.

    2. Log in with the default credentials admin:changeme.

    3. Navigate to the Wazuh section on the left-hand side.

    You should see 2 agents registered and running with Wazuh.
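
    Alternatively, connected agents can be listed directly on the manager machine with Wazuh's agent_control tool (assuming the default install path; run this on the manager VM):

    $ sudo /var/ossec/bin/agent_control -l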


    Vagrant & Ansible environment configuration

    Vagrant boxes (and variables later used by Ansible) are defined inside the /environments/ folder. Each environment contains 3 main files:

    • inventory.txt:

      contains environment variables/configs that will be used by Ansible when provisioning.

    • Makefile:

      named the same as the environment (for easier referencing from the main Makefile in the root directory); adds additional environment-specific commands.

    • Vagrantfile:

      contains the Vagrant configuration. Machine IPs, hostnames, etc. have to match those defined in the corresponding inventory.txt.

    Note: the full-setup environment contains an additional .env file with environment variables required by evidence-collector.
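
    Assuming the structure described above, the /environments/ folder looks roughly like this (full-setup shown; no-collector is analogous, without the evidence-collector files):

    environments/
    ├── full-setup/
    │   ├── inventory.txt
    │   ├── Makefile
    │   ├── Vagrantfile
    │   ├── .env
    │   └── resource-id-map.json
    └── no-collector/
        ├── inventory.txt
        ├── Makefile
        └── Vagrantfile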

    To deploy to other existing machines (assuming they run the same or a similar Linux distribution), use the custom-provision functionality.


    Provision existing machines

    Ansible playbooks allow for easy installation and setup of Wazuh (both manager and agents) and the Evidence collector.

    As part of the Wazuh agent deployment, machines will also have ClamAV installed.

    The Wazuh manager and Evidence collector should be installed on the same clean machine, while Wazuh agents can (and should) be installed on existing machines with other software already running.

    Note: this functionality was developed primarily for CentOS-based machines (as it uses the YUM package manager).

    Possible problems: CentOS 7 / RHEL 7.9 machines could have problems starting Docker containers due to the deprecation of the libseccomp-devel package.

    1. Generate an SSH key pair on the remote server(s) as well as on your local machine (if you haven't done so yet or want to use separate credentials):

      $ ssh-keygen -t rsa
    2. Copy your SSH public key to the remote server's authorized_keys file:

      $ ssh-copy-id root@192.168.0.13

      Note: this copies your default SSH public key from ~/.ssh/id_rsa.pub.

    3. Add machine info to the /custom-provision/custom-inventory.txt file (see an /environments/.../inventory.txt file for an example).

      Make sure to set correct variables:

      Variable Description
      public_ip Machine's IP address.
      ansible_sudo_pass Machine's root password.
      ansible_ssh_user Username used to SSH (and later used by Ansible).
      ansible_ssh_pass SSH password (corresponding to ansible_ssh_user).
      ansible_ssh_private_key_file Location of your private key (corresponding to public key set in previous step).

      Example (user: root, password: admin, @ 192.168.0.13):

      192.168.0.13 public_ip=192.168.0.13 ansible_sudo_pass=admin ansible_ssh_pass=admin ansible_ssh_user=root ansible_ssh_private_key_file=~/.ssh/id_rsa
    4. Set the evidence-collector environment variables in /custom-provision/.env. See Evidence collector's documentation for more information. A minimal sketch is shown after this list.

      If you're installing both the Evidence collector and the Wazuh manager on the same machine (as intended), you only have to set the clouditor_host, elastic_host & wazuh_host variables (where elastic_host & wazuh_host are the same).

      Note: an empty line in the .env file can cause an Invalid line in environment file Docker error. This happens only on certain Docker builds and is distro dependent.

    5. Set variables in /ansible/globals/globals.yml:

      Variable Description
      elasticsearch_host_ip IP of the machine running Elasticsearch (same as Wazuh manager).
      wazuh_manager_ip IP of the machine running Wazuh manager.
    6. Set the custom resource ID mapping scheme used by evidence-collector by changing /custom-provision/resource-id-map.json.

      Note: this doesn't need to be changed or set for the provisioning to work.

    7. Provision:

      $ make -B custom-provision
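
    As referenced in step 4, a minimal /custom-provision/.env sketch for the intended single-machine setup (the IP is a placeholder; check Evidence collector's documentation for the exact variable format, and avoid empty lines, per the note above):

    clouditor_host=192.168.0.13
    elastic_host=192.168.0.13
    wazuh_host=192.168.0.13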

    Wazuh troubleshooting

    Depending on your machine and network configuration, Wazuh could have problems connecting agents to the manager. Check Wazuh's web interface to see if the agents work correctly - if the interface doesn't work, you probably need to open ports first (see below).

    To troubleshoot in more detail, check the logs in /var/ossec/logs/ossec.log and consult the official troubleshooting manual.
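
    For example, to follow the log on the manager or an agent machine:

    $ sudo tail -f /var/ossec/logs/ossec.log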

    Two of the most common problems (oftentimes in tandem) are closed ports and invalid agent names (when an agent machine's hostname matches the hostname of an already registered Wazuh machine).

    The current version of security-monitoring automatically opens the required ports on the manager (using Ansible; see ansible/provision-managers.yml). However, if that fails, run the following commands on the Wazuh manager machine to open the required ports:

    $ sudo firewall-cmd --zone=public --add-port=1514/tcp --permanent
    $ sudo firewall-cmd --zone=public --add-port=1515/tcp --permanent
    $ sudo firewall-cmd --zone=public --add-port=55000/tcp --permanent
    $ sudo firewall-cmd --zone=public --add-port=5601/tcp --permanent
    $ sudo firewall-cmd --reload

    After this, you should at least be able to see Wazuh's web interface, available at https://wazuh_manager_ip:5601 (make sure to include the https protocol prefix).

    Minimum hardware requirements

    Component   Wazuh manager + Evidence collector machine   Wazuh agent machine
    Memory      2 GB                                          1 GB
    CPU         2                                             1
    Storage     10 GB                                         10 GB

    Potential issues

    ClamAV (re)start failed/timed out

    ClamAV restarts can time out due to slow disk read/write speeds (if using an HDD) and lack of memory. To resolve this, provide the machine with more RAM. The current implementation sets it to 1024 MB (which should suffice for the majority of host machine configurations). If you're using an SSD, you can lower it to 512 MB.

    Vagrant issue

    The following SSH command responded with a non-zero exit status.
    Vagrant assumes that this means the command failed!
    
    umount /mnt
    Stdout from the command:
    
    Stderr from the command:
    umount: /mnt: not mounted.

    Solution:

    $ vagrant plugin uninstall vagrant-vbguest

    Ansible failing due to SSH issues

    This applies to both the manager and the agents - the VMs need to be running already.

    [sre maj 12][10:33:33][ales@~/workspace/PIACERE/security-monitoring/wazuh-ansible]
    $ ssh vagrant@192.168.33.10 -i ../inventory-server/.vagrant/machines/default/virtualbox/private_key
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that a host key has just been changed.
    The fingerprint for the ECDSA key sent by the remote host is
    SHA256:tq9iDMmDjQP9igfVLfIO/R7hKfyzbzfXT/F+KkTcn54.
    Please contact your system administrator.
    Add correct host key in /home/ales/.ssh/known_hosts to get rid of this message.
    Offending ECDSA key in /home/ales/.ssh/known_hosts:336
      remove with:
      ssh-keygen -f "/home/ales/.ssh/known_hosts" -R "192.168.33.10"
    ECDSA host key for 192.168.33.10 has changed and you have requested strict checking.
    Host key verification failed.
    [sre maj 12][10:35:34][ales@~/workspace/PIACERE/security-monitoring/wazuh-ansible]

    Solution:

    ssh-keygen -f ".ssh/known_hosts" -R "192.168.33.10"
    ssh-keygen -f ".ssh/known_hosts" -R "192.168.33.11"
    ssh-keygen -f ".ssh/known_hosts" -R "192.168.33.12"
    ssh-keygen -f ".ssh/known_hosts" -R "192.168.33.13"

    Virtual networking problem

    If your Vagrant / hypervisor for whatever reason doesn't make the 192.168.33.0 virtual network directly accessible from the host, you need to manually specify the IP address and port for SSH connections to each of the VMs.

    After the VMs have been created, the SSH connection parameters can be seen with the vagrant ssh-config command:

    $ cd environments/full-setup/
    $ vagrant ssh-config
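
    The relevant entries in the output are HostName, Port and IdentityFile for each VM; a sketch of what the manager's entry might look like (values will differ per machine):

    Host manager
      HostName 127.0.0.1
      User vagrant
      Port 2222
      IdentityFile .vagrant/machines/manager/virtualbox/private_key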

    Edit environments/full-setup/inventory.txt and add ansible_host and ansible_port parameters to each of the VMs. Example:

    [wazuh_managers]
    192.168.33.10 ansible_host=127.0.0.1 ansible_port=2222 public_ip=192.168.33.10 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant ansible_ssh_private_key_file=environments/full-setup/.vagrant/machines/manager/virtualbox/private_key