This project includes modules for collecting evidence regarding Wazuh and VAT and sending it to [Clouditor](https://github.com/clouditor/clouditor) for further processing.
## Wazuh evidence collector
Wazuh evidence collector uses [Wazuh's API](https://documentation.wazuh.com/current/user-manual/api/reference.html) to access the manager's and agents' system information and configuration. As an additional measure to ensure correct configuration of [ClamAV](https://www.clamav.net/) (if installed on the machine), we also make use of [Elasticsearch's API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search.html) to directly access the collected logs. The Elastic Stack is one of Wazuh's required components (usually installed on the same machine as the Wazuh server, but it can run standalone as well).
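For illustration, a minimal sketch of this kind of Wazuh API interaction is shown below; the host, credentials, and fields read are example values, not necessarily what the collector itself does:
```
# Minimal sketch of reading agent system information from the Wazuh API.
# Host and credentials are placeholders; the collector's actual requests
# may differ.
import requests

WAZUH_API = "https://192.168.33.10:55000"

# Authenticate with basic auth to obtain a JWT token.
token = requests.post(
    f"{WAZUH_API}/security/user/authenticate",
    auth=("wazuh", "wazuh"),
    verify=False,  # certificate verification is disabled (see below)
).json()["data"]["token"]

# List agents together with basic OS information.
agents = requests.get(
    f"{WAZUH_API}/agents",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()["data"]["affected_items"]

for agent in agents:
    print(agent["id"], agent["name"], agent.get("os", {}).get("name"))
```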
## Installation & use
### Using docker
1. Set up your Wazuh development environment. Use the [Wazuh-deploy](https://git.code.tecnalia.com/medina/public/wazuh-deploy) repository to create and deploy a Vagrant box with all the required components.
2. Clone this repository.
3. Build Docker image:
```
$ make build
```
4. Run the image:
```
$ make run
```
> Note: See the `Environment variables` section for more information about the configuration of this component and its interaction with Wazuh, Clouditor etc.
### Local environment
1. Set up your Wazuh development environment. Use the [Wazuh-deploy](https://git.code.tecnalia.com/medina/public/wazuh-deploy) repository to create and deploy a Vagrant box with all the required components.
2. Clone this repository.
3. Install dependencies:
```
$ pip install -r requirements.txt
```
4. Set environment variables:
```
$ source .env
```
5. a) Install Redis server locally:
```
$ sudo apt-get install redis-server
```
> Note: To stop the Redis server, use `/etc/init.d/redis-server stop`.
b) Run Redis server in Docker container:
```
$ docker run --name my-redis-server -p 6379:6379 -d redis
```
In this case, also comment out the server start command in `entrypoint.sh`:
```
#redis-server &
```
6. Run `entrypoint.sh`:
```
$ ./entrypoint.sh
```
> Note: This repository consists of multiple Python modules. When running Python code manually, the `-m` flag might be necessary.
## Component configuration
### Environment variables
Required environment variables (for local deployment) are located, and can be set, in the `.env` file.
Variables used when deploying to Kubernetes can be edited in the `data` section of the `/kubernetes/wazuh-vat-evidence-collector-configmap.yaml` file.
All of the following environment variables have to be set (or passed to the container) for `evidence-collector` to work (see the example `.env` sketch after the table):
| Variable | Description |
| ---------- | ---------- |
| `dummy_wazuh_manager` | Default value `false`. Set to `true` when the Evidence collector runs locally on its own (without the `security-monitoring` framework); dummy data is generated in that case. |
| `wazuh_host` | Wazuh manager host's IP address. |
| `wazuh_port` | Wazuh manager port. Default value `55000`. |
| `wazuh_username` | Wazuh manager's username. |
| `wazuh_password` | Wazuh manager's password. |
| `elastic_host` | Elasticsearch host's IP address. Usually same as `wazuh_host`. |
| `elastic_port` | Elasticsearch port. Default value `9200`. |
| `elastic_username` | Elasticsearch's username. |
| `elastic_password` | Elasticsearch's password. |
| `redis_host` | Redis server host's IP address. Usually `localhost`. |
| `redis_port` | Redis server port. Default value `6379`. |
| `redis_queue` | Redis queue name. |
| `local_clouditor_deploy` | Default value `true`. Set to `false` if the Evidence collector will use a Kubernetes-deployed Clouditor. |
| `clouditor_host` | Clouditor host's IP address. |
| `clouditor_port` | Clouditor port. Default value `9090`. |
| `clouditor_oauth2_port` | Clouditor port used for authentication services. Default value `8080`. |
| `clouditor_client_id` | Clouditor OAuth2 client ID. Default value `clouditor`. |
| `clouditor_client_secret` | Clouditor OAuth2 client secret. Default value `clouditor`. |
| `clouditor_oauth2_scope` | Must be defined if `local_clouditor_deploy` is set to `false`. Defines scope used when requesting OAuth2 token. |
| `wazuh_check_interval` | Interval in seconds (rounded to whole-minute/60-second intervals) at which evidence is created and forwarded. Should match the check interval set on the Wazuh manager. |
| `wazuh_rule_level` | Minimum Wazuh rule severity level required for an event to be counted as a threat. |
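For a local deployment, a minimal `.env` sketch could look as follows. All values are placeholders (e.g. `wazuh_rule_level=10` is an example threshold); the actual file in this repository may differ:
```
# Wazuh & Elasticsearch (example addresses/credentials)
export dummy_wazuh_manager=false
export wazuh_host=192.168.33.10
export wazuh_port=55000
export wazuh_username=wazuh
export wazuh_password=wazuh
export elastic_host=192.168.33.10
export elastic_port=9200
export elastic_username=admin
export elastic_password=changeme

# Redis / task queue
export redis_host=localhost
export redis_port=6379
export redis_queue=low

# Clouditor (clouditor_oauth2_scope only needed if local_clouditor_deploy=false)
export local_clouditor_deploy=true
export clouditor_host=localhost
export clouditor_port=9090
export clouditor_oauth2_port=8080
export clouditor_client_id=clouditor
export clouditor_client_secret=clouditor

# Collection behaviour
export wazuh_check_interval=60
export wazuh_rule_level=10
```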
### Medina resource ID mapping
Resource IDs used to generate evidence resources can be easily mapped to required values. If an ID isn't set, the Evidence collector will use the `name` parameter acquired from Wazuh, which defaults to the machine's hostname unless explicitly set to something else.
IDs can be set as `key:value` pairs inside the `resource_id_map.json` file, which is later passed to the Docker container:
```
{
"manager": "wazuh_manager",
"agent1": "test_agent_1",
"agent2": "test_agent_2"
}
```
Here `key` represents Wazuh's `name` parameter (the machine's hostname) and `value` is the string that `name` will be mapped to.
### Generate gRPC code from `.proto` files
```
pip3 install grpcio-tools # (included in requirements.txt)
python3 -m grpc_tools.protoc --proto_path=proto evidence.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
python3 -m grpc_tools.protoc --proto_path=proto assessment.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
python3 -m grpc_tools.protoc --proto_path=proto metric.proto --python_out=grpc_gen --grpc_python_out=grpc_gen
```
As we are interacting with Clouditor, the `.proto` files are taken from [its repository](https://github.com/clouditor/clouditor/tree/main/proto).
Because of dependencies on Google APIs, the `.proto` files in `proto/google` are taken from [here](https://github.com/googleapis/googleapis/tree/master/google/api).
> Note:
> Since we run the code as a package, imports in the newly generated code have to be modified:
> `import evidence_pb2 as evidence__pb2` --> `import grpc_gen.evidence_pb2 as evidence__pb2`
> (check all generated files)
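A small helper along these lines could automate the rewrite (a sketch; the `grpc_gen` path matches the output directories used above):
```
# Rewrite top-level "*_pb2" imports in the generated files so they
# resolve inside the grpc_gen package, e.g.
# "import evidence_pb2 as evidence__pb2"
#   -> "import grpc_gen.evidence_pb2 as evidence__pb2".
import pathlib
import re

for path in pathlib.Path("grpc_gen").glob("*_pb2*.py"):
    text = path.read_text()
    fixed = re.sub(r"^import (\w+_pb2)", r"import grpc_gen.\1",
                   text, flags=re.MULTILINE)
    path.write_text(fixed)
```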
### API User authentication
The current implementation has SSL certificate verification disabled and uses simple username/password authentication (defined inside `/constants/constants.py`). A production version should replace this with certificate verification.
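For illustration only, certificate verification with the `requests` library could look like the sketch below; the paths are placeholders, and this assumes `requests` is the HTTP client in use:
```
# Hypothetical hardening sketch: verify the server certificate against a
# CA bundle instead of disabling verification. All paths are placeholders.
import requests

session = requests.Session()
session.verify = "/etc/ssl/certs/wazuh-ca.pem"  # CA bundle for server cert
session.cert = ("/etc/ssl/client.crt", "/etc/ssl/client.key")  # optional client cert
response = session.get("https://192.168.33.10:55000/")
```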
### Manual Elasticsearch API testing with cURL
Example command for testing the API via CLI:
```
$ curl --user admin:changeme --insecure -X GET "https://192.168.33.10:9200/wazuh-alerts*/_search?pretty" -H 'Content-Type: application/json' -d'
{"query": {
"bool": {
"must": [{"match": {"predecoder.program_name": "clamd"}},
{"match": {"rule.description": "Clamd restarted"}},
{"match": {"agent.id": "001"}}]
}
}
}'
```
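The same query can also be issued from Python with the `elasticsearch` package (a sketch with the same example credentials; see the version note under `Known issues & debugging`):
```
# Python equivalent of the cURL query above. Credentials and agent ID
# are example values; verify_certs=False mirrors curl's --insecure flag.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    ["https://192.168.33.10:9200"],
    http_auth=("admin", "changeme"),
    verify_certs=False,
)

query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"predecoder.program_name": "clamd"}},
                {"match": {"rule.description": "Clamd restarted"}},
                {"match": {"agent.id": "001"}},
            ]
        }
    }
}

print(es.search(index="wazuh-alerts*", body=query))
```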
### Running [RQ](https://github.com/rq/rq) and [RQ-scheduler](https://github.com/rq/rq-scheduler) locally
1. Install (if needed) and run `redis-server`:
```
$ sudo apt-get install redis-server
$ redis-server
```
> Note: By default, the server listens on port `6379`. Take this into consideration when starting other components.
2. Install RQ and RQ-scheduler:
```
$ pip install rq
$ pip install rq-scheduler
```
3. Run both components in 2 terminals:
```
$ rqworker low
$ rqscheduler --host localhost --port 6379
```
> Note: `low` in the first command references the task queue the worker will use.
4. Run the Python script containing RQ commands as usual:
```
$ python3 -m wazuh_evidence_collector.wazuh_evidence_collector
```
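To make the moving parts concrete, a minimal scheduling sketch is shown below; the `collect_evidence` job function is hypothetical, while the `low` queue name matches the worker started above:
```
# Minimal rq-scheduler sketch: run a hypothetical job on the "low" queue
# every 60 seconds. Connection values match the defaults above.
from datetime import datetime

from redis import Redis
from rq_scheduler import Scheduler


def collect_evidence():
    # Hypothetical placeholder; in real use the job function must live
    # in a module the worker can import.
    print("collecting evidence...")


scheduler = Scheduler(queue_name="low", connection=Redis("localhost", 6379))
scheduler.schedule(
    scheduled_time=datetime.utcnow(),  # first run: now
    func=collect_evidence,
    interval=60,   # repeat every 60 seconds
    repeat=None,   # repeat indefinitely
)
```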
## Known issues & debugging
### Debugging gRPC services
gRPC can easily be set to verbose debug mode by adding the following variables to the `.env` file passed to the Docker container:
```
GRPC_VERBOSITY=DEBUG
GRPC_TRACE=http,tcp,api,channel,connectivity_state,handshaker,server_channel
```
The full list of gRPC environment variables is available [here](https://github.com/grpc/grpc/blob/master/doc/environment_variables.md).
### Python Elasticsearch library problems with ODFE
The latest versions (`7.14.0` & `7.15.0`) of the Python Elasticsearch library have problems connecting to Open Distro for Elasticsearch and produce the following error when trying to do so:
```
elasticsearch.exceptions.UnsupportedProductError: The client noticed that the server is not a supported distribution of Elasticsearch
```
To resolve this, downgrade to an older package version:
```
$ pip install 'elasticsearch<7.14.0'
```