# Wazuh-VAT Evidence Collector

Author: XLAB

This project includes modules for collecting evidence regarding Wazuh and VAT and sending it to Clouditor for further processing.
## Wazuh evidence collector

The Wazuh evidence collector uses Wazuh's API to access system information and configuration of the manager and its agents. As an additional measure to ensure the correct configuration of ClamAV (if installed on the machine), it also uses Elasticsearch's API to directly access the collected logs. The Elastic Stack is one of Wazuh's required components (usually installed on the same machine as the Wazuh server, but it can be standalone as well).
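For illustration, querying the Wazuh API for agent information might look like the following sketch (the endpoint paths follow the Wazuh 4.x REST API; the host and credentials are placeholders, and certificate verification is disabled only to mirror the development setup described later in this README):

```python
# Minimal sketch: authenticate against the Wazuh 4.x REST API and list agents.
# Host and credentials are placeholders.
import requests

WAZUH_URL = "https://192.168.33.10:55000"

# The Wazuh API issues a JWT token on basic-auth login.
token = requests.post(
    f"{WAZUH_URL}/security/user/authenticate",
    auth=("wazuh", "wazuh"),
    verify=False,
).json()["data"]["token"]

# Use the token to fetch information about registered agents.
agents = requests.get(
    f"{WAZUH_URL}/agents",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()["data"]["affected_items"]

for agent in agents:
    print(agent["id"], agent.get("name"), agent.get("status"))
```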
## VAT evidence collector

The VAT evidence collector uses the VAT API to create w3af & OWASP scans and retrieve their results. These are later processed and forwarded to Clouditor (Assessment Interface).
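As a rough illustration of that create-scan / poll-results flow (the endpoint names and payload fields below are hypothetical, not the actual VAT API):

```python
# Hypothetical sketch of the scan flow; the real VAT endpoints and payloads
# are defined by the VAT API and will differ from the names used here.
import time
import requests

# vat_protocol://vat_host:vat_port + vat_api_prefix (placeholder values)
VAT_URL = "http://192.168.33.101:80/api"

# Ask VAT to start a w3af scan against one of the configured hosts.
scan = requests.post(
    f"{VAT_URL}/scans",
    json={"tool": "w3af", "target": "192.168.33.101"},
).json()

# Poll until the scan finishes, then process and forward the findings.
while True:
    result = requests.get(f"{VAT_URL}/scans/{scan['id']}").json()
    if result["status"] == "finished":
        break
    time.sleep(30)
```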
## Installation & use

### Using Docker
- Set up your Wazuh & VAT development environment. Use the Wazuh Deploy repository to create and deploy a Vagrant box with all the required components.

  Note: The Wazuh Deploy repository is not up to date! Use it only for development.

- Clone this repository.

- Build the Docker image:

  ```
  $ make build
  ```

- Run the image:

  ```
  $ make run
  ```

Note: See the Environment variables section for more information about the configuration of this component and its interaction with Wazuh, Clouditor etc.
### Local environment

- Set up your Wazuh & VAT development environment. Use the Wazuh Deploy repository to create and deploy a Vagrant box with all the required components.

  Note: The Wazuh Deploy repository is not up to date! Use it only for development.

- Clone this repository.

- Install dependencies:

  ```
  $ pip install -r requirements.txt
  ```

- Set environment variables:

  ```
  $ source .env
  ```

- a) Install Redis server locally:

  ```
  $ sudo apt-get install redis-server
  ```

  Note: To stop the Redis server, use `/etc/init.d/redis-server stop`.

  b) Run Redis server in a Docker container:

  ```
  $ docker run --name my-redis-server -p 6379:6379 -d redis
  ```

  In this case, also comment out the server start command in `entrypoint.sh`:

  ```
  #redis-server &
  ```

- Run `entrypoint.sh`:

  ```
  $ ./entrypoint.sh
  ```

Note: This repository consists of multiple Python modules. When running Python code manually, use of the `-m` flag might be necessary.
## Component configuration

### Environment variables

Required environment variables (if deployed locally) are located in, and can be set via, the `.env` file.

Variables used when deploying to Kubernetes can be edited in the `data` section of the `/kubernetes/wazuh-vat-evidence-collector-configmap.yaml` file.

All of the following environment variables have to be set (or passed to the container) for the `wazuh-vat-evidence-collector` to work:
| Variable | Description |
|---|---|
| `redis_host` | Redis server host's IP address. Usually `localhost`. |
| `redis_port` | Redis server port. Default value `6379`. |
| `redis_queue` | Redis queue name. Default value `low`. Can be set to any name. |
| `dummy_wazuh_manager` | Default value `true`. Set to `false` in case you have Wazuh running and don't want to use dummy generated data. |
| `wazuh_host` | Wazuh manager host's IP address. |
| `wazuh_port` | Wazuh manager port. Default value `55000`. |
| `wazuh_username` | Wazuh manager's username. |
| `wazuh_password` | Wazuh manager's password. |
| `elastic_host` | Elasticsearch host's IP address. Usually the same as `wazuh_host`. |
| `elastic_port` | Elasticsearch port. Default value `9200`. |
| `elastic_username` | Elasticsearch's username. |
| `elastic_password` | Elasticsearch's password. |
| `dummy_vat` | Default value `true`. Set to `false` in case you have VAT running and don't want to use dummy generated data. |
| `vat_protocol` | VAT API transfer protocol. Can be set to either `http` or `https`. Default value `http`. |
| `vat_host` | VAT host's IP address. |
| `vat_port` | VAT port. Default value `80`. |
| `vat_api_prefix` | VAT API's prefix. Default value `/api`. |
| `vat_check_hosts` | Comma-separated list of IPs (hosts) for VAT to check. |
| `vat_nmap_check_timeout` | VAT Nmap check timeout in minutes. Default value `2`. |
| `vat_w3af_check_timeout` | VAT w3af check timeout in minutes. Default value `15`. |
| `wazuh_rule_level` | Minimum Wazuh rule severity level required for an event to be counted as a threat; values from `0` to `15`. Default value `10`. |
| `vat_vulnerability_level` | Minimum VAT vulnerability risk level required for a finding to be counted as a vulnerability; values from `0` to `100`. Default value `75`. |
| `wazuh_check_interval` | Interval in minutes; how often evidence should be created and forwarded. Should be the same as the check interval set on the Wazuh manager. Default value `15`. |
| `vat_check_timeout` | Interval in minutes; how often VAT checks should be performed (i.e. how often evidence should be created and forwarded). Default value `15`. |
| `local_clouditor_deploy` | Default value `true`. Set to `false` in case the Evidence collector will be using a Kubernetes-deployed Clouditor. |
| `clouditor_host` | Clouditor host's IP address. |
| `clouditor_port` | Clouditor port. Default value `9090`. |
| `orchestrator_host` | Orchestrator host's IP address. |
| `orchestrator_port` | Orchestrator port. Default value `443`. |
| `clouditor_oauth2_port` | Clouditor port used for authentication services. Default value `8080`. |
| `clouditor_client_id` | Clouditor OAuth2 default id. Default value `clouditor`. |
| `clouditor_client_secret` | Clouditor OAuth2 default secret. Default value `clouditor`. |
| `clouditor_oauth2_scope` | Must be defined if `local_clouditor_deploy` is set to `false`. Defines the scope used when requesting an OAuth2 token. |
K8s Clouditor DEV environment variables configuration example:

```
clouditor_host=security-assessment-dev.k8s.medina.esilab.org
clouditor_port=443
orchestrator_host=orchestrator-dev.k8s.medina.esilab.org
orchestrator_port=443
clouditor_oauth2_host=catalogue-keycloak-dev.k8s.medina.esilab.org/auth/realms/medina/protocol/openid-connect/token
clouditor_oauth2_port=443
clouditor_client_id=wazuh-vat-evidence-collector-dev
clouditor_client_secret=secret
clouditor_oauth2_scope=openid
```
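Inside the collector, these variables would typically be read at startup; a minimal sketch of how that might look (variable names and defaults taken from the table above; the actual code may structure this differently):

```python
# Minimal sketch of reading the documented configuration from the environment.
import os

config = {
    "redis_host": os.getenv("redis_host", "localhost"),
    "redis_port": int(os.getenv("redis_port", "6379")),
    "redis_queue": os.getenv("redis_queue", "low"),
    "wazuh_host": os.getenv("wazuh_host"),
    "wazuh_port": int(os.getenv("wazuh_port", "55000")),
    # Boolean flags arrive as strings ("true"/"false") from .env or the ConfigMap.
    "dummy_wazuh_manager": os.getenv("dummy_wazuh_manager", "true") == "true",
    # Comma-separated host list, e.g. "192.168.33.101,192.168.33.102".
    "vat_check_hosts": os.getenv("vat_check_hosts", "").split(","),
}
```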
## Medina Resource ID mapping

Resource IDs used to generate evidence resources can be easily mapped to required values. In case an ID isn't set, the Evidence collector will use the `name` parameter (which is set to the machine's hostname, unless explicitly set to something else) acquired from Wazuh, or the IP address in the case of VAT.

IDs can be set as `key:value` pairs inside the `id_maps/resource_id_map.json` file, which is later passed to the Docker container:

```
{
    "manager": "wazuh_manager",
    "agent1": "test_agent_1",
    "agent2": "test_agent_2",
    "192.168.33.101": "vat_test_vm"
}
```

Here the key represents Wazuh's `name` parameter (the machine's hostname/IP) and the value is the string the name will be mapped to.
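In code, applying such a map is a dictionary lookup with a fallback; a sketch under the assumptions above (file path as documented, fallback to the Wazuh name or VAT IP):

```python
# Sketch: resolve a resource ID from the map, falling back to the
# Wazuh `name` parameter (or the IP address for VAT resources).
import json

with open("id_maps/resource_id_map.json") as f:
    resource_id_map = json.load(f)

def resolve_resource_id(name_or_ip: str) -> str:
    # Unmapped resources keep their hostname/IP as the resource ID.
    return resource_id_map.get(name_or_ip, name_or_ip)

print(resolve_resource_id("manager"))        # -> "wazuh_manager"
print(resolve_resource_id("unknown-agent"))  # -> "unknown-agent"
```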
## Medina Cloud Service ID mapping

Cloud Services used by Wazuh & VAT can be configured by editing the `id_maps/cloud_service_name_map.json` file:

```
{
    "vat": {
        "name": "vat-test-service",
        "description": ""
    },
    "wazuh": {
        "name": "wazuh-test-service",
        "description": ""
    }
}
```

The top-level keys `vat` & `wazuh` and their `name` fields must be defined for the app to work. If a Cloud Service with a certain name already exists, the app will find and fetch its `id` from the Orchestrator API. Otherwise, it will create a new Cloud Service with the `name` and `description` defined in this file.
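The lookup-or-create logic roughly corresponds to the following sketch (the Orchestrator endpoint paths and response shapes shown here are illustrative assumptions, not the exact API):

```python
# Illustrative sketch of the lookup-or-create flow against the Orchestrator;
# endpoint paths and response fields are assumptions for illustration.
import requests

ORCHESTRATOR_URL = "https://orchestrator-dev.k8s.medina.esilab.org"

def get_or_create_cloud_service(name: str, description: str, token: str) -> str:
    headers = {"Authorization": f"Bearer {token}"}

    # Reuse an existing Cloud Service if one with this name is registered.
    services = requests.get(
        f"{ORCHESTRATOR_URL}/v1/orchestrator/cloud_services",
        headers=headers,
    ).json().get("services", [])
    for service in services:
        if service["name"] == name:
            return service["id"]

    # Otherwise register a new one with the name/description from the map file.
    created = requests.post(
        f"{ORCHESTRATOR_URL}/v1/orchestrator/cloud_services",
        headers=headers,
        json={"name": name, "description": description},
    ).json()
    return created["id"]
```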
Note: creation of Cloud Services with more complex definitions (including `catalogs_in_scope` & `configured_metrics`) is not yet supported by the Evidence Collector and should be done through the Web GUI. In this case, just change the corresponding `name` variable accordingly.
## Medina Tool ID

The Tool ID is generated from the information contained in the `MANIFEST` file, in `<SERVICE>:<VERSION>` format, e.g. `wazuh-vat-evidence-collector:v0.0.1`.
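Assuming the `MANIFEST` file holds simple `KEY=value` lines (an assumption for illustration; the actual format may differ), the Tool ID could be assembled like this:

```python
# Sketch: build the <SERVICE>:<VERSION> tool ID from a MANIFEST file.
# The KEY=value line format is an assumption, not a documented contract.
def read_tool_id(path: str = "MANIFEST") -> str:
    manifest = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, value = line.strip().split("=", 1)
                manifest[key] = value
    return f"{manifest['SERVICE']}:{manifest['VERSION']}"

# e.g. "wazuh-vat-evidence-collector:v0.0.1"
```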
## Dependent components: Wazuh, ClamAV, VAT

The Wazuh-VAT Evidence Collector generates evidence using information acquired from the Wazuh and Vulnerability Assessment Tool APIs. These components should be installed and configured in accordance with the instructions given in the relevant repositories.

Wazuh Agents also require the ClamAV tool to be installed on their machines (to successfully cover all the requirements).

Required component versions:

- Wazuh: `v4.1.5`,
- ClamAV: `latest`,
- VAT: `latest`.

See `wazuh-deploy` for further details on how to set up Wazuh & ClamAV.

Note: the `wazuh-deploy` repository is deprecated and its information regarding (Wazuh-VAT) Evidence Collector configuration could be incomplete. However, the information regarding Wazuh configuration is still up to date.

See `vat-deploy` for relevant information regarding VAT installation.
## Development

### Generate gRPC code from `.proto` files

If Clouditor's API changes, new gRPC code has to be generated using its prototype files:

```
$ pip3 install grpcio-tools  # included in requirements.txt
$ python3 -m grpc_tools.protoc --proto_path=proto evidence.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
$ python3 -m grpc_tools.protoc --proto_path=proto assessment.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
$ python3 -m grpc_tools.protoc --proto_path=proto metric.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
$ python3 -m grpc_tools.protoc --proto_path=proto tagger.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
$ python3 -m grpc_tools.protoc --proto_path=proto validate.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
```

Note: some `.proto` import paths might need to be updated before generating gRPC code. This is because our directory structure is not a direct copy of Clouditor's. See the next note for additional info relating to the same cause.
Clouditor prototype files origin: https://github.com/clouditor/clouditor/tree/main/api
Dependencies:

- googleapis: https://github.com/googleapis/googleapis/tree/master/google/api
- protoc-gen-gotag: https://github.com/srikrsna/protoc-gen-gotag/blob/master/tagger/tagger.proto
- protoc-gen-validate: https://github.com/bufbuild/protoc-gen-validate/blob/main/validate/validate.proto

All the required Clouditor prototype files and their dependencies are already added to the repository for easier usage. However, they need to be updated manually in case anything changes. Alternatively, they can be updated automatically using Golang, but this repository does not support/automate this in any way.
Note: since we are running this code as a package, we have to modify imports in the newly generated Python code:

```
import evidence_pb2 as evidence__pb2  -->  import grpc_gen.evidence_pb2 as evidence__pb2
```

(Check all generated files!)
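This rewrite can be scripted; a small sketch (assuming the generated files live in `grpc_gen/`, as in the protoc commands above):

```python
# Sketch: prefix bare generated-module imports with the grpc_gen package.
import pathlib
import re

for path in pathlib.Path("grpc_gen").glob("*_pb2*.py"):
    source = path.read_text()
    # Turn `import evidence_pb2 as ...` into `import grpc_gen.evidence_pb2 as ...`
    fixed = re.sub(r"^import (\w+_pb2)", r"import grpc_gen.\1", source, flags=re.M)
    path.write_text(fixed)
```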
### Wazuh & Elastic API user authentication

The current implementation has SSL certificate verification disabled and uses simple username/password verification (credentials passed via the `.env` file). The production version should change this to certificate verification, unless all the components are installed inside a local private network.
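With the Python Elasticsearch 7.x client, that setup corresponds to something like the following sketch (host and credentials match the development defaults used in the cURL example below):

```python
# Sketch: Elasticsearch 7.x client with basic auth and certificate
# verification disabled, mirroring the development setup described above.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    [{"host": "192.168.33.10", "port": 9200}],
    http_auth=("admin", "changeme"),
    use_ssl=True,
    verify_certs=False,  # development only; verify certificates in production
    ssl_show_warn=False,
)
```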
### Manual Elasticsearch API testing with cURL

Example command for testing the API via the CLI:
```
$ curl --user admin:changeme --insecure -X GET "https://192.168.33.10:9200/wazuh-alerts*/_search?pretty" -H 'Content-Type: application/json' -d'
{"query": {
    "bool": {
      "must": [{"match": {"predecoder.program_name": "clamd"}},
               {"match": {"rule.description": "Clamd restarted"}},
               {"match": {"agent.id": "001"}}]
    }
  }
}'
```
### Running RQ and RQ-scheduler locally

- Install (if needed) and run `redis-server`:

  ```
  $ sudo apt-get install redis-server
  $ redis-server
  ```

  Note: By default, the server listens on port `6379`. Take this into consideration when starting other components.

- Install RQ and RQ-scheduler:

  ```
  $ pip install rq
  $ pip install rq-scheduler
  ```

- Run both components in 2 terminals:

  ```
  $ rqworker low
  $ rqscheduler --host localhost --port 6379
  ```

  Note: `low` in the first command references the task queue the worker will use.

- Run a Python script containing RQ commands as usual:

  ```
  $ python3 -m wazuh_evidence_collector.wazuh_evidence_collector
  ```
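For reference, periodic jobs are enqueued through rq-scheduler roughly like this (the queue name and interval match the defaults above; the job function path is a placeholder):

```python
# Sketch: schedule a recurring job on the "low" queue with rq-scheduler.
from datetime import datetime

from redis import Redis
from rq_scheduler import Scheduler

scheduler = Scheduler(queue_name="low", connection=Redis("localhost", 6379))

scheduler.schedule(
    scheduled_time=datetime.utcnow(),  # first run immediately
    func="wazuh_evidence_collector.wazuh_evidence_collector.main",  # placeholder path
    interval=15 * 60,  # seconds between runs (15-minute default interval)
    repeat=None,       # repeat indefinitely
)
```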
## Known issues & debugging

### Debugging gRPC services

gRPC can easily be set to verbose debug mode by adding the following variables to the `.env` file passed to the Docker container:

```
GRPC_VERBOSITY=DEBUG
GRPC_TRACE=http,tcp,api,channel,connectivity_state,handshaker,server_channel
```

The full list of gRPC environment variables is available in the gRPC documentation.
### Python Elasticsearch library problems with ODFE

The latest versions (`7.14.0` & `7.15.0`) of the Python Elasticsearch library have problems connecting to Open Distro for Elasticsearch and produce the following error when trying to do so:

```
elasticsearch.exceptions.UnsupportedProductError: The client noticed that the server is not a supported distribution of Elasticsearch
```

To resolve this, downgrade to an older package version:

```
$ pip install 'elasticsearch<7.14.0'
```