
Evidence Collector

This project includes modules for collecting evidence from Wazuh and VAT and sending it to Clouditor for further processing.

Wazuh evidence collector

Wazuh evidence collector uses Wazuh's API to access the manager's and agents' system information and configurations. As an additional measure to ensure correct configuration of ClamAV (if installed on the machine), we also use Elasticsearch's API to directly access the collected logs. The Elastic Stack is one of Wazuh's required components; it is usually installed on the same machine as the Wazuh server, but can be standalone as well.
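
For illustration, here is a minimal sketch of that interaction, assuming the Wazuh 4.x REST API and the environment variables listed under Component configuration (endpoint paths and response handling are simplified):

import os

import requests

# Base URL of the Wazuh manager's REST API (Wazuh 4.x assumed).
wazuh_url = f"https://{os.environ['wazuh_host']}:{os.environ['wazuh_port']}"

# Authenticate with username/password to obtain a JWT token. Certificate
# verification is disabled, mirroring the current implementation (see
# "API User authentication" below).
token = requests.get(
    f"{wazuh_url}/security/user/authenticate",
    auth=(os.environ["wazuh_username"], os.environ["wazuh_password"]),
    verify=False,
).json()["data"]["token"]

# List the agents whose system information and configuration are collected.
agents = requests.get(
    f"{wazuh_url}/agents",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()["data"]["affected_items"]

print([agent["id"] for agent in agents])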

Installation & use

Using docker

  1. Set up your Wazuh development environment. Use the Security Monitoring repository to create and deploy a Vagrant box with all the required components.

  2. Clone this repository.

  3. Build Docker image:

    $ make build
  4. Run the image:

    $ make run

    Note: See the Environment variables section for more information about the configuration of this component and its interaction with Wazuh, Clouditor etc.

Local environment

  1. Set up your Wazuh development environment. Use the Security Monitoring repository to create and deploy a Vagrant box with all the required components.

  2. Clone this repository.

  3. Install dependencies:

    $ pip install -r requirements.txt
  4. Set environment variables:

    $ source .env
  5. a) Install Redis server locally:

    $ sudo apt-get install redis-server

    Note: To stop the Redis server, use /etc/init.d/redis-server stop.

    b) Run Redis server in Docker container:

    $ docker run --name my-redis-server -p 6379:6379 -d redis

    In this case, also comment out the server start command in entrypoint.sh:

    #redis-server &
  6. Run entrypoint.sh:

    $ ./entrypoint.sh

    Note: This repository consists of multiple Python modules. When running Python code manually, use of the -m flag might be necessary.

Component configuration

Environment variables

Required environment variables (if deployed locally) are located and can be set in the .env file.

Variables used when deploying to Kubernetes can be edited in the data section of the /kubernetes/wazuh-vat-evidence-collector-configmap.yaml file.

All of the following environment variables have to be set (or passed to the container) for evidence-collector to work; a minimal startup check is sketched after the list:

  • demo_mode,
  • wazuh_host,
  • wazuh_port,
  • wazuh_username,
  • wazuh_password,
  • elastic_host,
  • elastic_port,
  • elastic_username,
  • elastic_password,
  • redis_host,
  • redis_port,
  • redis_queue,
  • clouditor_host,
  • clouditor_port.
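
Such a startup check (illustrative, not part of the repository) would fail fast when a variable is missing:

import os

# Names match the list above.
REQUIRED_VARS = [
    "demo_mode",
    "wazuh_host", "wazuh_port", "wazuh_username", "wazuh_password",
    "elastic_host", "elastic_port", "elastic_username", "elastic_password",
    "redis_host", "redis_port", "redis_queue",
    "clouditor_host", "clouditor_port",
]

missing = [name for name in REQUIRED_VARS if name not in os.environ]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")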

Generate gRPC code from .proto files

pip3 install grpcio-tools # (included in requirements.txt)
python3 -m grpc_tools.protoc --proto_path=proto evidence.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
python3 -m grpc_tools.protoc --proto_path=proto assessment.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
python3 -m grpc_tools.protoc --proto_path=proto metric.proto --python_out=grpc_gen --grpc_python_out=grpc_gen

As we are interacting with Clouditor, .proto files are taken from there.
Because of dependencies on Google APIs, .proto files in proto/google are taken from here.

Note: Since we run the code as a package, imports in the newly generated code have to be modified: import evidence_pb2 as evidence__pb2 --> import grpc_gen.evidence_pb2 as evidence__pb2. Check all generated files.
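
A hypothetical helper that applies this rewrite to every generated file (assumes the generated code lives in grpc_gen/):

import pathlib
import re

# Prefix bare generated-module imports with the grpc_gen package, e.g.
# "import evidence_pb2 as evidence__pb2" becomes
# "import grpc_gen.evidence_pb2 as evidence__pb2".
for path in pathlib.Path("grpc_gen").glob("*_pb2*.py"):
    source = path.read_text()
    source = re.sub(r"^import (\w+_pb2)", r"import grpc_gen.\1",
                    source, flags=re.MULTILINE)
    path.write_text(source)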

API User authentication

The current implementation has SSL certificate verification disabled and uses simple username/password authentication (defined inside /constants/constants.py). A production version should replace this with certificate verification.

Manual Elasticsearch API testing with cURL

Example command for testing the API via CLI:

$ curl --user admin:changeme --insecure -X GET "https://192.168.33.10:9200/wazuh-alerts*/_search?pretty" -H 'Content-Type: application/json' -d'
  {"query": {
    "bool": {
      "must": [{"match": {"predecoder.program_name": "clamd"}},
              {"match": {"rule.description": "Clamd restarted"}},
              {"match": {"agent.id": "001"}}]
      }
    }
  }'
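
The equivalent query from Python with the elasticsearch package (a sketch; host and credentials match the cURL example above, certificate verification is disabled as in the current implementation, and the version constraint from Known issues applies):

from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://192.168.33.10:9200",
    http_auth=("admin", "changeme"),
    verify_certs=False,
)

# Same bool query as above: ClamAV restart alerts for agent 001.
query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"predecoder.program_name": "clamd"}},
                {"match": {"rule.description": "Clamd restarted"}},
                {"match": {"agent.id": "001"}},
            ]
        }
    }
}

print(es.search(index="wazuh-alerts*", body=query))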

Running RQ and RQ-scheduler locally

  1. Install (if needed) and run redis-server:

    $ sudo apt-get install redis-server
    
    $ redis-server

    Note: By default, the server listens on port 6379. Take this into consideration when starting other components.

  2. Install RQ and RQ-scheduler:

    $ pip install rq
    
    $ pip install rq-scheduler
  3. Run both components in 2 terminals:

    $ rqworker low
    
    $ rqscheduler --host localhost --port 6379

    Note: low in the first command references the task queue the worker will use.

  4. Run the Python script containing RQ commands as usual:

    $ python3 -m wazuh_evidence_collector.wazuh_evidence_collector
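
For reference, a job can be scheduled on the same low queue with rq-scheduler along these lines (a sketch; the dotted function path is hypothetical):

from datetime import datetime

from redis import Redis
from rq_scheduler import Scheduler

# Scheduler bound to the same "low" queue the worker above listens on.
scheduler = Scheduler(queue_name="low",
                      connection=Redis(host="localhost", port=6379))

scheduler.schedule(
    scheduled_time=datetime.utcnow(),  # first run: immediately
    func="wazuh_evidence_collector.wazuh_evidence_collector.run",  # hypothetical entry point
    interval=3600,                     # then repeat every hour
)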

Known issues

Python Elasticsearch library problems with ODFE

The latest versions (7.14.0 & 7.15.0) of the Python Elasticsearch library have problems connecting to Open Distro for Elasticsearch and produce the following error when trying to do so:

elasticsearch.exceptions.UnsupportedProductError: The client noticed that the server is not a supported distribution of Elasticsearch

To resolve this, downgrade to an older package version:

$ pip install 'elasticsearch<7.14.0'