Commit e9bb14da authored by Matevz Erzen's avatar Matevz Erzen Committed by Matevz Erzen

Added scheduling and proto files

parent 7a579ff1
...@@ -2,3 +2,4 @@ __pycache__/
*.pyc
*$py.class
.idea/
dump.rdb
\ No newline at end of file
...@@ -2,11 +2,13 @@
FROM python:3.8-slim-buster
WORKDIR /evidence-collector
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
RUN apt-get update && apt-get install -y redis-server
ENTRYPOINT ["./entrypoint.sh"]
\ No newline at end of file
...@@ -8,9 +8,11 @@ Wazuh evidence collector uses [Wazuh's API](https://documentation.wazuh.com/curr
## Installation & use
### Using docker
> Note: The Docker image is not yet complete and might not work due to recent changes around the scheduler.
1. Set up your Wazuh development environment. Use the [Security Monitoring](https://gitlab.xlab.si/medina/security-monitoring) repository to create and deploy a Vagrant box with all the required components.
2. Clone this repository.
...@@ -28,7 +30,7 @@ docker run evidence-collector
> Note: The current simple image runs code from `test.py`. If you wish to test anything else, change this file or edit `Dockerfile`.
### Local environment
1. Set up your Wazuh development environment. Use the [Security Monitoring](https://gitlab.xlab.si/medina/security-monitoring) repository to create and deploy a Vagrant box with all the required components.
...@@ -40,13 +42,21 @@ docker run evidence-collector
pip install -r requirements.txt
```
4. Install Redis server (or run it in a separate Docker container; in that case, remove the server start command from `entrypoint.sh`):
```
sudo apt-get install redis-server
```
> Note: To stop the Redis server, use `/etc/init.d/redis-server stop`.
5. Run `entrypoint.sh`:
```
./entrypoint.sh
```
> Note: This repository consists of multiple Python modules. When running Python code manually, use of the `-m` flag might be necessary.
### API User authentication
...@@ -68,6 +78,42 @@ curl --user admin:changeme --insecure -X GET "https://192.168.33.10:9200/wazuh-a
}'
```
### Running [RQ](https://github.com/rq/rq) and [RQ-scheduler](https://github.com/rq/rq-scheduler) locally
1. Install (if needed) and run `redis-server`:
```
sudo apt-get install redis-server
redis-server
```
> Note: By default, the server listens on port `6379`. Take this into consideration when starting other components.
2. Install RQ and RQ-scheduler:
```
pip install rq
pip install rq-scheduler
```
3. Run both components in 2 terminals:
```
rqworker low
rqscheduler --host localhost --port 6379
```
> Note: `low` in the first command references the task queue the worker will use.
4. Run Python script containing RQ commands as usual:
```
python3 ...
```
## Known issues
### Python Elasticsearch library problems with ODFE
...
...@@ -7,3 +7,7 @@ ELASTIC_IP = '192.168.33.10'
ELASTIC_API_PORT = 9200
ELASTIC_USERNAME = 'admin'
ELASTIC_PASSWORD = 'changeme'
REDIS_IP = 'localhost'
REDIS_PORT = '6379'
REDIS_QUEUE_NAME = 'low'
\ No newline at end of file
#!/bin/bash
redis-server &
rqworker low &
rqscheduler &
# Keep the scheduler in the foreground so the container does not exit.
python3 -m scheduler.scheduler
\ No newline at end of file
...@@ -2,7 +2,7 @@ import json
class Evidence:
    def __init__(self, evidence_id, timestamp, resource_id, tool, resource_type, feature_type, feature_property, measurement_result, raw):
        self.evidence_id = evidence_id
        self.timestamp = timestamp
        self.resource_id = resource_id
...@@ -11,11 +11,11 @@ class Evidence:
        self.feature_type = feature_type
        self.feature_property = feature_property
        self.measurement_result = measurement_result
        self.raw = raw

    def toJson(self):
        return json.dumps(self.__dict__)

def simple_evidence(evidence_id, timestamp, resource_id, measurement_result, raw):
    return Evidence(evidence_id, timestamp, resource_id, None, None, None, None, measurement_result, raw)
\ No newline at end of file
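The rename from `body` to `raw` changes the JSON key every consumer sees, since `toJson` dumps the instance `__dict__` directly. A self-contained copy of the class, showing the serialized shape:

```python
import json

class Evidence:
    def __init__(self, evidence_id, timestamp, resource_id, tool, resource_type,
                 feature_type, feature_property, measurement_result, raw):
        self.evidence_id = evidence_id
        self.timestamp = timestamp
        self.resource_id = resource_id
        self.tool = tool
        self.resource_type = resource_type
        self.feature_type = feature_type
        self.feature_property = feature_property
        self.measurement_result = measurement_result
        self.raw = raw

    def toJson(self):
        # Serializes every attribute, so renamed fields rename JSON keys too.
        return json.dumps(self.__dict__)

def simple_evidence(evidence_id, timestamp, resource_id, measurement_result, raw):
    # Descriptive fields are left as None, as in the module above.
    return Evidence(evidence_id, timestamp, resource_id, None, None, None, None,
                    measurement_result, raw)

e = simple_evidence('05.4-1', '2021-08-01T00:00:00Z', '001', 'false', '{"status": "ok"}')
print(e.toJson())
```

The example IDs and timestamp are illustrative, not values produced by the collector.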
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: proto/evidence.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2
from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='proto/evidence.proto',
package='',
syntax='proto3',
serialized_options=_b('Z\010evidence'),
serialized_pb=_b('\n\x14proto/evidence.proto\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xc1\x01\n\x08\x45vidence\x12\n\n\x02id\x18\x01 \x01(\t\x12\x12\n\nservice_id\x18\x02 \x01(\t\x12\x13\n\x0bresource_id\x18\x03 \x01(\t\x12-\n\ttimestamp\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x1a\n\x12\x61pplicable_metrics\x18\x05 \x03(\x05\x12\x0b\n\x03raw\x18\x06 \x01(\t\x12(\n\x08resource\x18\x07 \x01(\x0b\x32\x16.google.protobuf.ValueB\nZ\x08\x65videnceb\x06proto3')
,
dependencies=[google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,])
_EVIDENCE = _descriptor.Descriptor(
name='Evidence',
full_name='Evidence',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='Evidence.id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='service_id', full_name='Evidence.service_id', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='resource_id', full_name='Evidence.resource_id', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='timestamp', full_name='Evidence.timestamp', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='applicable_metrics', full_name='Evidence.applicable_metrics', index=4,
number=5, type=5, cpp_type=1, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='raw', full_name='Evidence.raw', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='resource', full_name='Evidence.resource', index=6,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=88,
serialized_end=281,
)
_EVIDENCE.fields_by_name['timestamp'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_EVIDENCE.fields_by_name['resource'].message_type = google_dot_protobuf_dot_struct__pb2._VALUE
DESCRIPTOR.message_types_by_name['Evidence'] = _EVIDENCE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Evidence = _reflection.GeneratedProtocolMessageType('Evidence', (_message.Message,), dict(
DESCRIPTOR = _EVIDENCE,
__module__ = 'proto.evidence_pb2'
# @@protoc_insertion_point(class_scope:Evidence)
))
_sym_db.RegisterMessage(Evidence)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
syntax = "proto3";
import "google/protobuf/struct.proto";
import "google/protobuf/timestamp.proto";
option go_package = "evidence";
// TODO
// Copied from https://github.com/clouditor/clouditor/blob/main/proto/evidence.proto
message Evidence {
string id = 1;
string service_id = 2;
string resource_id = 3;
// TODO: replace with google/type/date.proto timestamp.proto or date.proto?
google.protobuf.Timestamp timestamp = 4;
repeated int32 applicable_metrics = 5;
// "raw" evidence (for the auditor), for example the raw JSON response from
// the API. This does not follow a defined schema
string raw = 6;
// optional; a semantic representation of the Cloud resource according to our
// defined ontology. a JSON serialized node of our semantic graph. This may be
// Clouditor-specific.
google.protobuf.Value resource = 7;
}
\ No newline at end of file
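For readers without `protoc` at hand, the message above maps onto roughly the following Python shape (an illustrative stdlib-only sketch; the real generated class lives in `proto/evidence_pb2.py`):

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class EvidenceSketch:
    """Plain-Python mirror of the proto3 Evidence message (field numbers in comments)."""
    id: str = ''                        # 1
    service_id: str = ''                # 2
    resource_id: str = ''               # 3
    timestamp: Optional[str] = None     # 4: google.protobuf.Timestamp in the real message
    applicable_metrics: List[int] = field(default_factory=list)  # 5: repeated int32
    raw: str = ''                       # 6: raw JSON evidence for the auditor
    resource: Any = None                # 7: google.protobuf.Value in the real message

ev = EvidenceSketch(id='e1', resource_id='001', applicable_metrics=[5, 7], raw='{}')
```

Unset proto3 scalar fields default to the zero value (`''`, `0`, empty list), which the dataclass defaults imitate.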
elasticsearch==7.13.4
urllib3==1.25.8
redis==3.5.3
elasticsearch_dsl==7.4.0
rq==1.10.0
rq_scheduler==0.11.0
from redis import Redis
from rq import Queue
from rq_scheduler import Scheduler
from constants import constants
from wazuh_evidence_collector import wazuh_evidence_collector
def remove_jobs(scheduler):
    jobs = scheduler.get_jobs()
    for job in jobs:
        scheduler.cancel(job)

def print_jobs(scheduler):
    jobs = scheduler.get_jobs()
    for job in jobs:
        print(job)
redis = Redis(constants.REDIS_IP, constants.REDIS_PORT)
q = Queue(constants.REDIS_QUEUE_NAME, connection=redis)
scheduler = Scheduler(connection=redis)
# TODO: Remove if needed
remove_jobs(scheduler)
# TODO: Change cron expression and repeat value for production version.
scheduler.cron(
    '* * * * *',
    func=wazuh_evidence_collector.run_full_check,
    args=[],
    repeat=10,
    queue_name='low',
    use_local_timezone=False
)
# TODO: Remove if needed
print_jobs(scheduler)
import pprint
from wazuh_evidence_collector.wazuh_evidence_collector import *
evidences = run_full_check()
for evidence in evidences:
    pprint.pprint(evidence.__dict__)
...@@ -11,7 +11,7 @@ class WazuhClient:
        self._auth_token = None

    def req(self, method, resource, data=None, headers={}, auth_retry=True):
        # TODO: add cert verification
        c = urllib3.HTTPSConnectionPool(self._ip, port=self._port, cert_reqs='CERT_NONE', assert_hostname=False)
        url = "https://%s:%i/%s" % (self._ip, self._port, resource)
...
...@@ -19,6 +19,7 @@ es = Elasticsearch(
    ssl_show_warn=False,
)

# TODO: Get real data.
# Get (temporary) ID
def get_id(reqId):
    return reqId + '-' + str(randint(0, maxsize))
...@@ -50,6 +51,11 @@ def run_full_check():
        agent_evidences.append(wazuh_monitoring_enabled(wc, agent))
        agent_evidences.append(malvare_protection_enabled(wc, es, agent))

    # TODO: Remove for production. This is only output for easier local testing.
    for evidence in agent_evidences:
        pprint.pprint(evidence.toJson())

    return agent_evidences

# Check Wazuh's configuration
...@@ -181,7 +187,7 @@ def malvare_protection_enabled(wc, es, agent_id):
    return simple_evidence(get_id('05.4'), get_timestamp(), agent_id, "false", raw_evidence)

# Check last Syscheck & Rootcheck scan times
# TODO: When producing 'real' evidence, make sure to provide differentiation between Syscheck and Rootcheck outputs.
def check_last_scan_time(wc, agent_id):
    body = wc.req('GET', 'syscheck/' + agent_id + '/last_scan')
...