Commit e32388a6 authored by Radosław Piliszek
Publish y2 code

Showing 2255 additions and 0 deletions
assert_used:
skips: ["./tests/*"]
.tox/
.*_cache
*/*/__pycache__
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# Visual Studio Code
.vscode/
FROM python:3.8.14-slim-bullseye AS kolla-ansible-build-stage
RUN apt-get update -eany
RUN apt-get upgrade -y --no-install-recommends
RUN /usr/local/bin/python3.8 -m venv /opt/kolla-ansible-venv
# Xena
COPY openstack_requirements/upper-constraints.txt /tmp/
RUN /opt/kolla-ansible-venv/bin/python -m pip install -c /tmp/upper-constraints.txt \
kolla-ansible==13.4.0 \
ansible==4.10.0
FROM python:3.10.7-slim-bullseye AS csep-build-stage
RUN apt-get update -eany
RUN apt-get upgrade -y --no-install-recommends
RUN /usr/local/bin/python3.10 -m venv /opt/csep-venv
COPY *requirements.txt /tmp/
RUN /opt/csep-venv/bin/python -m pip install \
-r /tmp/requirements.txt \
-r /tmp/uvicorn-requirements.txt
FROM debian:bullseye-slim AS csep-deps
# need openssh-client to enable ssh access in ansible
# need sshpass to support non-interactive ssh with passwords
# libexpat is needed to run kolla ansible commands due to reliance on pbr
# (which in turn relies on packaging which requires XML parsing)
RUN apt-get update -eany \
&& apt-get upgrade -y --no-install-recommends \
&& apt-get install -y --no-install-recommends \
openssh-client \
sshpass \
libexpat1 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY --from=kolla-ansible-build-stage /usr/local/bin/python3.8 /usr/local/bin/
COPY --from=kolla-ansible-build-stage /usr/local/lib/libpython3.8.so* /usr/local/lib/
COPY --from=kolla-ansible-build-stage /usr/local/lib/python3.8 /usr/local/lib/python3.8
COPY --from=kolla-ansible-build-stage /opt/kolla-ansible-venv /opt/kolla-ansible-venv
COPY --from=csep-build-stage /usr/local/bin/python3.10 /usr/local/bin/
COPY --from=csep-build-stage /usr/local/lib/libpython3.10.so* /usr/local/lib/
COPY --from=csep-build-stage /usr/local/lib/python3.10 /usr/local/lib/python3.10
COPY --from=csep-build-stage /opt/csep-venv /opt/csep-venv
# We copied shared objects, need to rerun ldconfig to refresh the ld cache.
RUN ldconfig
FROM csep-deps AS csep
COPY csep_api /opt/csep/csep_api
COPY csep_common /opt/csep/csep_common
COPY csep_worker /opt/csep/csep_worker
Feature: Learning the details of the CSEP API
Scenario: User wishes to know the details of the API
When User navigates to GET /docs of the API service with their web browser
Then the service displays OpenAPI docs
Feature: General operations on deployments
Scenario: Querying the status of all deployments
When User requests the details of all deployments via the GET /deployments API
Then the CSEP API returns the details of all deployments
Scenario: Querying the status of a single deployment
When User requests the details of a deployment via the GET /deployments/{name} API
Then the CSEP API returns the deployment details
Scenario: Querying the deployment events of a single deployment
When User requests a deployment's events via the GET /deployments/{name}/events API
Then the CSEP API returns the deployment events
Scenario: Querying the last event message of a single deployment
When User requests a deployment's last event message via the GET /deployments/{name}/events/last_message API
Then the CSEP API returns the deployment's last event message
# for details on the following regarding OpenStack, please see the openstack.feature
Scenario: Initiation of the OS CSE Deployment
Given the JSON Description of the Deployment
When User requests a deployment via the POST /deployments API
Then the CSEP initiates the deployment
Scenario: Redeploying an existing deployment
When User requests a redeployment via the POST /deployments/{name}/redeploy API
Then the CSEP initialises a new deployment invocation
Feature: Deployment of OpenStack Canary Sandbox Environment (CSE)
Scenario: Initiation of the OS CSE Deployment
Given the JSON Description of the OpenStack CSE Deployment
# An example of which is the following:
#
# {
# "name": "test_os",
# "spec": {
# "type": "OpenStack",
# "auth": {
# "type": "username_password",
# "username": "debian",
# "password": "password"
# },
# "hosts": [
# {
# "ip_address": "192.0.2.2",
# "network_interface_name": "veth0"
# }
# ]
# }
# }
#
# The details on the description can be learnt via docs.feature
#
When User requests a deployment via the POST /deployments API
Then the CSEP initiates the deployment
Scenario: Obtaining clouds.yaml for an OpenStack CSE deployment
Given the deployment has finished
# the above can be validated by main.feature query scenarios
When User requests clouds.yaml via the GET /deployments/{name}/custom_output/clouds.yaml API
Then the CSEP API returns clouds.yaml with necessary access credentials
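The clouds.yaml served by the custom_output endpoint is assembled in code (see OpenStackDeploymentSpec.get_custom_output in the datamodels below); a minimal stdlib sketch of its structure, with placeholder values instead of real credentials:

```python
# Sketch of the clouds.yaml structure returned by
# GET /deployments/{name}/custom_output/clouds.yaml.
# The host IP and password below are placeholders.
def build_clouds_yaml(host_ip: str, admin_password: str) -> dict:
    return {
        "clouds": {
            "csep-cloud": {
                "auth": {
                    "auth_url": f"http://{host_ip}:5000/",
                    "project_name": "admin",
                    "username": "admin",
                    "password": admin_password,
                    "user_domain_name": "Default",
                    "project_domain_name": "Default",
                    "domain_name": "Default",
                },
                "region_name": "RegionOne",
            }
        }
    }

doc = build_clouds_yaml("192.0.2.2", "placeholder")
```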
LICENSE
Copyright 2022 7bulls.com
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor’s Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party’s
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients’ rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipients’
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party’s negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party’s ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.
# CSEP
Canary Sandbox Environment Provisioner
# Run for development
Use Docker and ``docker-compose``:
docker-compose up -d
The API server auto-reloads.
The worker can be restarted by restarting its container.
The API server is, by default, accessible on ``http://127.0.0.1:8000``
(also available as ``http://localhost:8000`` in most cases).
The OpenAPI docs are specifically at ``http://127.0.0.1:8000/docs``.
# Static OpenAPI definition
The static OpenAPI specification in ``openapi.json`` can be regenerated
from code using ``tools/dump_openapi_spec.sh`` which requires the
``csep_api`` container to be running locally with the default settings.
# Example request data
Look into ``tests/sample_requests/``.
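A request body for the dummy deployment type can be sketched with the standard library; the field names follow ``DeploymentIn`` and ``DummyDeploymentSpec`` from ``csep_common.datamodels``, and the values are illustrative only:

```python
import json

# Illustrative POST /deployments/ body for the "dummy" spec type;
# field names mirror the datamodels, values are made up.
request_body = {
    "name": "test_dummy",
    "spec": {
        "type": "dummy",
        "auth": {
            "type": "username_password",
            "username": "debian",
            "password": "password",
        },
        "hosts": [{"ip_address": "192.0.2.2"}],
        "time_to_complete": 10,
        "should_succeed": True,
    },
}

payload = json.dumps(request_body)
```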
# License
The contents of this repository are licensed under Mozilla Public License
version 2.0 (MPL 2.0) as attached in the LICENSE file and copyright 2022
7bulls.com.
from datetime import datetime
from typing import List, Optional
from uuid import uuid4
from fastapi import Body, Depends, FastAPI, HTTPException, Response
from csep_common.datamodels import (
DeploymentIn,
DeploymentInDB,
DeploymentOut,
DeploymentSpecPatch,
DeploymentStatus,
Event,
IncompatibleSpecPatch,
OperationProgress,
UnknownCustomOutputTypeError,
apply_spec_patch,
)
from csep_common.datastore import (
DataStore,
DeploymentExistsAlreadyError,
DeploymentNotFoundError,
get_data_store,
)
app = FastAPI()
# TODO: refactor error handling to a common place
@app.get(
"/deployments/",
response_model=List[DeploymentOut],
response_model_exclude_unset=True,
)
def get_all_deployments(
data_store: DataStore = Depends(get_data_store),
) -> List[DeploymentInDB]:
return data_store.get_all_deployments()
@app.post(
"/deployments/", response_model=DeploymentOut, response_model_exclude_unset=True
)
def post_deployment(
deployment_in: DeploymentIn, data_store: DataStore = Depends(get_data_store)
) -> DeploymentInDB:
now = datetime.now()
deployment = DeploymentInDB(
**deployment_in.dict(exclude_unset=True),
uuid=uuid4(),
created=now,
status=DeploymentStatus(
last_updated=now,
progress=OperationProgress.New,
),
)
try:
data_store.add_deployment(deployment)
return deployment
except DeploymentExistsAlreadyError:
raise HTTPException(status_code=409, detail="Deployment exists already")
@app.get(
"/deployments/{deployment_name}",
response_model=DeploymentOut,
response_model_exclude_unset=True,
)
def get_deployment(
deployment_name: str, data_store: DataStore = Depends(get_data_store)
) -> DeploymentInDB:
try:
deployment = data_store.get_deployment(deployment_name)
return deployment
except DeploymentNotFoundError:
raise HTTPException(status_code=404, detail="Deployment not found")
@app.get(
"/deployments/{deployment_name}/events",
response_model=list[Event],
response_model_exclude_unset=True,
)
def get_deployment_events(
deployment_name: str, data_store: DataStore = Depends(get_data_store)
) -> list[Event]:
try:
deployment = data_store.get_deployment(deployment_name)
return deployment.status.events
except DeploymentNotFoundError:
raise HTTPException(status_code=404, detail="Deployment not found")
@app.get(
"/deployments/{deployment_name}/events/last_message",
)
def get_deployment_events_last_message(
deployment_name: str, data_store: DataStore = Depends(get_data_store)
) -> Response:
try:
deployment = data_store.get_deployment(deployment_name)
msg = ""
if len(deployment.status.events) > 0:
msg = deployment.status.events[-1].message
return Response(content=msg, media_type="text/plain")
except DeploymentNotFoundError:
raise HTTPException(status_code=404, detail="Deployment not found")
@app.get(
"/deployments/{deployment_name}/custom_output/{custom_output_type}",
)
def get_deployment_custom_output(
deployment_name: str,
custom_output_type: str,
data_store: DataStore = Depends(get_data_store),
) -> Response:
try:
deployment = data_store.get_deployment(deployment_name)
out = deployment.get_custom_output(custom_output_type)
return Response(content=out, media_type="text/plain")
except DeploymentNotFoundError:
raise HTTPException(status_code=404, detail="Deployment not found")
except UnknownCustomOutputTypeError:
raise HTTPException(status_code=400, detail="Unknown custom output type")
@app.post(
"/deployments/{deployment_name}/redeploy",
response_model=DeploymentOut,
response_model_exclude_unset=True,
)
def redeploy_deployment(
deployment_name: str,
deployment_spec_patch: Optional[DeploymentSpecPatch] = Body(default=None),
data_store: DataStore = Depends(get_data_store),
) -> DeploymentInDB:
try:
deployment = data_store.get_deployment(deployment_name)
if deployment_spec_patch is not None:
apply_spec_patch(deployment.spec, deployment_spec_patch)
deployment.status.progress = OperationProgress.New
data_store.update_deployment(deployment)
return deployment
except DeploymentNotFoundError:
raise HTTPException(status_code=404, detail="Deployment not found")
except IncompatibleSpecPatch:
raise HTTPException(status_code=400, detail="Incompatible spec patch")
@app.delete("/deployments/{deployment_name}")
def delete_deployment(
deployment_name: str, data_store: DataStore = Depends(get_data_store)
) -> None:
try:
data_store.delete_deployment(deployment_name)
except DeploymentNotFoundError:
raise HTTPException(status_code=404, detail="Deployment not found")
# TODO: websocket to watch for changes
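The /events/last_message handler above returns the newest event's message, or an empty body when no events exist; its selection logic, sketched standalone on plain dicts:

```python
# Standalone sketch of get_deployment_events_last_message's core logic:
# the newest event wins; an empty event list yields an empty message.
def last_message(events: list[dict]) -> str:
    if events:
        return events[-1]["message"]
    return ""
```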
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Literal, Optional, Union
import yaml
from pydantic import UUID4, BaseModel, Field
# NOTE: use discriminator functionality with "type" fields once implemented upstream
# TODO: more descriptions
# TODO: validations for addresses, ports and alike
class UnknownCustomOutputTypeError(Exception):
pass
class OperationProgress(str, Enum):
New = "New"
Running = "Running"
Failed = "Failed"
Completed = "Completed"
class EventSeverity(str, Enum):
Info = "Info"
Warning = "Warning"
Error = "Error"
class Event(BaseModel):
timestamp: datetime
severity: EventSeverity
message: str
class DeploymentStatus(BaseModel):
last_updated: datetime
progress: OperationProgress
events: List[Event] = []
def add_event(self, event: Event) -> None:
# NOTE: Simply appending will not trigger saving the field.
# Thus, we use this concatenation and assignment.
self.events = self.events + [event]
class DeploymentStatusOut(BaseModel):
last_updated: datetime
progress: OperationProgress
class Host(BaseModel):
ip_address: str
port: Optional[int]
mac_address: Optional[str]
bmc_ip_address: Optional[str]
name: Optional[str]
role: Optional[str]
network_interface_name: Optional[str]
class UsernameSSHKeyAuth(BaseModel):
type: Literal["username_sshkey"]
username: str
private_ssh_key: str
class UsernamePasswordAuth(BaseModel):
type: Literal["username_password"]
username: str
password: str
class VaultAuth(BaseModel):
type: Literal["vault"]
key: str
class DeploymentSpecBase(BaseModel):
auth: Union[UsernamePasswordAuth, UsernameSSHKeyAuth, VaultAuth]
bmc_auth: Optional[UsernamePasswordAuth]
hosts: List[Host] = Field(min_items=1)
def get_custom_output(self, custom_output_type: str) -> str:
raise UnknownCustomOutputTypeError()
class DummyDeploymentSpec(DeploymentSpecBase):
type: Literal["dummy"]
time_to_start: int = 0
time_to_complete: int = 10
should_succeed: bool = True
class OpenStackDeploymentSpec(DeploymentSpecBase):
type: Literal["OpenStack"]
globals: Dict[str, Any] = {}
passwords: Dict[str, Any] = {}
def get_custom_output(self, custom_output_type: str) -> str:
if custom_output_type != "clouds.yaml":
raise UnknownCustomOutputTypeError()
# else must be clouds.yaml
# TODO: support user overrides of all the names below (lo-prio)
clouds_yaml: dict[str, Any] = {
"clouds": {
"csep-cloud": {
"auth": {
"auth_url": f"http://{self.hosts[0].ip_address}:5000/",
"project_name": "admin",
"username": "admin",
"password": self.passwords.get("keystone_admin_password"),
"user_domain_name": "Default",
"project_domain_name": "Default",
"domain_name": "Default",
},
"region_name": "RegionOne",
}
}
}
return yaml.safe_dump(clouds_yaml, sort_keys=False) # type: ignore
class OpenStackDeploymentSpecOut(BaseModel):
type: Literal["OpenStack"]
hosts: List[Host] = Field(min_items=1)
# TODO: figure out how to elegantly make this dynamic
DeploymentSpec = Union[DummyDeploymentSpec, OpenStackDeploymentSpec]
DeploymentSpecOut = Union[DummyDeploymentSpec, OpenStackDeploymentSpecOut]
class DeploymentIn(BaseModel):
# TODO: make sure name is a safe string like [a-zA-Z][a-zA-Z0-9_]*
name: str = Field(..., min_length=1, max_length=63)
spec: DeploymentSpec
class DeploymentOut(BaseModel):
# TODO: make sure name is a safe string like [a-zA-Z][a-zA-Z0-9_]*
name: str = Field(..., min_length=1, max_length=63)
spec: DeploymentSpecOut
uuid: UUID4
created: datetime
status: DeploymentStatusOut
class DeploymentInDB(BaseModel):
# TODO: make sure name is a safe string like [a-zA-Z][a-zA-Z0-9_]*
name: str = Field(..., min_length=1, max_length=63)
spec: DeploymentSpec
uuid: UUID4
created: datetime
status: DeploymentStatus
def get_custom_output(self, custom_output_type: str) -> str:
return self.spec.get_custom_output(custom_output_type)
class DummyDeploymentSpecPatch(BaseModel):
type: Literal["dummy"]
time_to_start: Optional[int]
time_to_complete: Optional[int]
should_succeed: Optional[bool]
class OpenStackDeploymentSpecPatch(BaseModel):
type: Literal["OpenStack"]
globals: Optional[Dict[str, Any]]
DeploymentSpecPatch = Union[DummyDeploymentSpecPatch, OpenStackDeploymentSpecPatch]
class IncompatibleSpecPatch(Exception):
pass
def apply_spec_patch(target: DeploymentSpec, patch: DeploymentSpecPatch) -> None:
if target.type != patch.type:
raise IncompatibleSpecPatch()
for key, value in patch.dict(exclude_unset=True).items():
if isinstance(value, dict):
target_value = getattr(target, key)
if target_value is None:
target_value = {}
setattr(target, key, target_value)
deep_patch_dict(target_value, value)
else:
setattr(target, key, value)
def deep_patch_dict(target: dict, patch: dict) -> None:
for key, value in patch.items():
if isinstance(value, dict):
target_value = target.setdefault(key, {})
deep_patch_dict(target_value, value)
else:
target[key] = value
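apply_spec_patch delegates nested dict fields (such as globals) to deep_patch_dict; a small demo of the merge semantics, with the function copied verbatim so it runs on its own:

```python
# Copy of deep_patch_dict so this demo is self-contained: nested dicts
# merge recursively; scalar values in the patch overwrite the target.
def deep_patch_dict(target: dict, patch: dict) -> None:
    for key, value in patch.items():
        if isinstance(value, dict):
            target_value = target.setdefault(key, {})
            deep_patch_dict(target_value, value)
        else:
            target[key] = value

conf = {"settings": {"a": 1, "b": {"c": 2}}}
deep_patch_dict(conf, {"settings": {"b": {"c": 3}, "d": 4}})
# "a" survives untouched, "b.c" is overwritten, "d" is added
```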
import json
from abc import ABC, abstractmethod
from datetime import datetime
from typing import Callable, Iterator, List, Tuple
import etcd3
from pydantic import BaseSettings
from .datamodels import DeploymentInDB
class Settings(BaseSettings):
etcd_host: str = "localhost"
etcd_port: int = 2379
settings = Settings()
class ObjectExistsAlreadyError(Exception):
pass
class DeploymentExistsAlreadyError(ObjectExistsAlreadyError):
pass
class ObjectNotFoundError(Exception):
pass
class DeploymentNotFoundError(ObjectNotFoundError):
pass
class DataStore(ABC):
@abstractmethod
def get_deployment(self, deployment_name: str) -> DeploymentInDB:
...
@abstractmethod
def get_all_deployments(self) -> List[DeploymentInDB]:
...
@abstractmethod
def delete_deployment(self, deployment_name: str) -> None:
...
@abstractmethod
def add_deployment(self, deployment: DeploymentInDB) -> None:
...
@abstractmethod
def update_deployment(self, deployment: DeploymentInDB) -> None:
...
class Etcd3DataStore(DataStore):
DEPLOYMENTS_KEY_PREFIX = "/deployments/"
def __init__(self, etcd3_client: etcd3.Etcd3Client) -> None:
super().__init__()
self.etcd3_client = etcd3_client
def _get_deployment_key(self, deployment_name: str) -> str:
return f"{self.DEPLOYMENTS_KEY_PREFIX}{deployment_name}"
def get_deployment(self, deployment_name: str) -> DeploymentInDB:
key = self._get_deployment_key(deployment_name)
value, _ = self.etcd3_client.get(key)
if value:
return DeploymentInDB(**json.loads(value))
else:
raise DeploymentNotFoundError()
def get_all_deployments(self) -> List[DeploymentInDB]:
result = []
for (value, _) in self.etcd3_client.get_prefix(self.DEPLOYMENTS_KEY_PREFIX):
result.append(DeploymentInDB(**json.loads(value)))
return result
def delete_deployment(self, deployment_name: str) -> None:
key = self._get_deployment_key(deployment_name)
response_deleted = self.etcd3_client.delete(key)
if not response_deleted:
raise DeploymentNotFoundError()
def add_deployment(self, deployment: DeploymentInDB) -> None:
key = self._get_deployment_key(deployment.name)
deployment_serialised = deployment.json(exclude_unset=True)
success, _ = self.etcd3_client.transaction(
compare=[
self.etcd3_client.transactions.version(key) == 0,
],
success=[
self.etcd3_client.transactions.put(key, deployment_serialised),
],
failure=[],  # optional in the call signature, but required for the transaction to work
)
if not success:
raise DeploymentExistsAlreadyError()
def update_deployment(self, deployment: DeploymentInDB) -> None:
deployment.status.last_updated = datetime.now()
key = self._get_deployment_key(deployment.name)
deployment_serialised = deployment.json(exclude_unset=True)
success, _ = self.etcd3_client.transaction(
compare=[
self.etcd3_client.transactions.version(key) > 0,
# TODO: check if matches the previous content
],
success=[
self.etcd3_client.transactions.put(key, deployment_serialised),
],
failure=[],  # optional in the call signature, but required for the transaction to work
)
if not success:
raise DeploymentNotFoundError()
def watch_deployments(
self,
) -> Tuple[Iterator[etcd3.events.Event], Callable[[], None]]:
return self.etcd3_client.watch_prefix( # type: ignore
self.DEPLOYMENTS_KEY_PREFIX
)
def get_data_store() -> DataStore:
# only etcd3 for now
etcd3_client = etcd3.client(
host=settings.etcd_host,
port=settings.etcd_port,
)
return Etcd3DataStore(etcd3_client)
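The etcd3-backed store needs a live etcd server, so unit tests may prefer a dict-backed stand-in with the same conflict semantics (add fails if the key exists, update/get/delete fail if it does not). A hypothetical sketch, not part of this codebase; `InMemoryDataStore` and the local exception classes are assumptions:

```python
from types import SimpleNamespace

# Local stand-ins for the datastore exception hierarchy above.
class ObjectExistsAlreadyError(Exception):
    pass

class ObjectNotFoundError(Exception):
    pass

class InMemoryDataStore:
    """Dict-backed DataStore mirroring the etcd3 transaction semantics."""

    def __init__(self) -> None:
        self._deployments: dict = {}

    def add_deployment(self, deployment) -> None:
        # Mirrors the version(key) == 0 compare: fail if it already exists.
        if deployment.name in self._deployments:
            raise ObjectExistsAlreadyError(deployment.name)
        self._deployments[deployment.name] = deployment

    def update_deployment(self, deployment) -> None:
        # Mirrors the version(key) > 0 compare: fail if it does not exist.
        if deployment.name not in self._deployments:
            raise ObjectNotFoundError(deployment.name)
        self._deployments[deployment.name] = deployment

    def get_deployment(self, name: str):
        try:
            return self._deployments[name]
        except KeyError:
            raise ObjectNotFoundError(name) from None

    def delete_deployment(self, name: str) -> None:
        if self._deployments.pop(name, None) is None:
            raise ObjectNotFoundError(name)

store = InMemoryDataStore()
store.add_deployment(SimpleNamespace(name="demo"))
print(store.get_deployment("demo").name)  # demo
```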
from abc import ABC, abstractmethod
from csep_common.datamodels import DeploymentInDB
from csep_common.datastore import DataStore
class Backend(ABC):
def __init__(self, data_store: DataStore):
super().__init__()
self.data_store = data_store
@abstractmethod
def run_deployment(self, deployment: DeploymentInDB) -> None:
...
import time
from typing import cast
from csep_common.datamodels import (
DeploymentInDB,
DummyDeploymentSpec,
OperationProgress,
)
from csep_worker.backend import Backend
class DummyBackend(Backend):
def run_deployment(self, deployment: DeploymentInDB) -> None:
spec = cast(DummyDeploymentSpec, deployment.spec)
# hard analysis happening here!
time.sleep(spec.time_to_start)
deployment.status.progress = OperationProgress.Running
self.data_store.update_deployment(deployment)
# hard work happening here!
time.sleep(spec.time_to_complete)
if not spec.should_succeed:
raise Exception()
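The dummy backend's control flow (sleep, mark running, persist, sleep, optionally fail) can be exercised without etcd or pydantic. A sketch using hypothetical stand-ins (`RecordingStore`, `SimpleNamespace` objects) in place of `DataStore` and `DeploymentInDB`:

```python
import time
from types import SimpleNamespace

class RecordingStore:
    """Hypothetical DataStore stand-in that records progress updates."""

    def __init__(self) -> None:
        self.progress_updates = []

    def update_deployment(self, deployment) -> None:
        self.progress_updates.append(deployment.status.progress)

class DummyBackendSketch:
    """Mirrors DummyBackend: sleep, flip status to running, sleep, maybe fail."""

    def __init__(self, data_store) -> None:
        self.data_store = data_store

    def run_deployment(self, deployment) -> None:
        spec = deployment.spec
        time.sleep(spec.time_to_start)
        deployment.status.progress = "running"
        self.data_store.update_deployment(deployment)
        time.sleep(spec.time_to_complete)
        if not spec.should_succeed:
            raise RuntimeError("dummy deployment failed as requested")

store = RecordingStore()
deployment = SimpleNamespace(
    spec=SimpleNamespace(time_to_start=0, time_to_complete=0, should_succeed=True),
    status=SimpleNamespace(progress="pending"),
)
DummyBackendSketch(store).run_deployment(deployment)
print(store.progress_updates)  # ['running']
```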
import logging
import os
import shutil
import subprocess # nosec
import tempfile
from datetime import datetime
from os import path
from typing import List, cast
import yaml
from csep_common.datamodels import (
DeploymentInDB,
Event,
EventSeverity,
Host,
OpenStackDeploymentSpec,
OperationProgress,
UsernamePasswordAuth,
UsernameSSHKeyAuth,
)
from csep_worker.backend import Backend
from csep_worker.vault_client import vault_client
KOLLA_BASE = "/opt/kolla-ansible-venv"
ANSIBLE_PLAYBOOK_CMD = path.join(KOLLA_BASE, "bin/ansible-playbook")
KOLLA_GENPWD_CMD = path.join(KOLLA_BASE, "bin/kolla-genpwd")
KOLLA_MERGEPWD_CMD = path.join(KOLLA_BASE, "bin/kolla-mergepwd")
KOLLA_ANSIBLE_SITE_PLAYBOOK = path.join(
KOLLA_BASE, "share/kolla-ansible/ansible/site.yml"
)
KOLLA_ANSIBLE_KOLLA_HOST_PLAYBOOK = path.join(
KOLLA_BASE, "share/kolla-ansible/ansible/kolla-host.yml"
)
KOLLA_PASSWORDS_YAML = path.join(
KOLLA_BASE, "share/kolla-ansible/etc_examples/kolla/passwords.yml"
)
KOLLA_GLOBALS_DEFAULTS = {
# the following two values are soon to be the only available
# variants, so defaulting to them already
"kolla_base_distro": "debian",
"kolla_install_type": "source",
# no need to touch the hosts too much
"create_kolla_user": False,
"customize_etc_hosts": False,
# save resources
"rabbitmq_server_additional_erl_args": (
"+S 1:1 +sbwt none +sbwtdcpu none +sbwtdio none"
),
# this is going to be non-HA
"enable_haproxy": False,
"kolla_internal_vip_address": "{{ 'api' | kolla_address(groups.control | first) }}",
"neutron_external_interface": "",
}
# get trailing inventory (constant kolla groups)
# TODO: it would be better to try to obtain this from kolla-ansible itself
TRAILING_INVENTORY_FILE_PATH = path.join(
path.dirname(__file__), "openstack/inventory.ini"
)
with open(TRAILING_INVENTORY_FILE_PATH, "r") as f:
TRAILING_INVENTORY = f.read()
def private_file_opener(path: str, flags: int) -> int:
return os.open(path, flags, mode=0o0600)
class OpenStackBackend(Backend):
def run_deployment(self, deployment: DeploymentInDB) -> None:
spec = cast(OpenStackDeploymentSpec, deployment.spec)
deployment.status.progress = OperationProgress.Running
self.data_store.update_deployment(deployment)
logging.info(f"Deployment {deployment.name!r} marked running")
auth: UsernamePasswordAuth | UsernameSSHKeyAuth
if spec.auth.type == "vault":
# need to establish the type dynamically
s = vault_client.read_secret(spec.auth.key)
if "username" in s and "password" in s:
auth = UsernamePasswordAuth(
type="username_password",
username=s["username"],
password=s["password"],
)
elif "username" in s and "private_ssh_key" in s:
auth = UsernameSSHKeyAuth(
type="username_sshkey",
username=s["username"],
private_ssh_key=s["private_ssh_key"],
)
else:
# TODO: use a custom exception type
raise Exception("Unknown auth type")
else:
auth = spec.auth
def gen_host_entry(host: Host) -> str:
# TODO: it would probably be nicer to build this as a list and join it later
if host.name is None:
name = "host_" + host.ip_address.replace(".", "_")
else:
name = host.name
ip_address = host.ip_address
entry = f"{name} ansible_host={ip_address}"
if host.port is not None:
entry += f" ansible_port={host.port}"
entry += ansible_auth_str
if host.network_interface_name is None:
network_interface_name = "eth0"
else:
network_interface_name = host.network_interface_name
entry += f" network_interface={network_interface_name}"
return entry
with tempfile.TemporaryDirectory() as tmpdirpath:
def gen_file_path(name: str) -> str:
return path.join(tmpdirpath, name)
# NOTE: we are not using match/case because mypy fails to narrow
# the union type through that pattern
if auth.type == "username_sshkey":
ssh_private_key_file_path = gen_file_path("ssh_key")
with open(
ssh_private_key_file_path, "w", opener=private_file_opener
) as f:
# TODO: validate the key format
f.write(auth.private_ssh_key)
# append a trailing newline in case it is missing (very likely)
# from the input string
f.write("\n")
ansible_auth_str = (
f" ansible_user={auth.username}"
f" ansible_ssh_private_key_file={ssh_private_key_file_path}"
)
elif auth.type == "username_password":
ansible_auth_str = (
f" ansible_user={auth.username}"
f" ansible_password={auth.password}"
)
# TODO: validate if roles are coherent and avoid the "I use a single one
# below" (also in kolla_external_fqdn)
inventory_file_path = gen_file_path("inventory.ini")
inventory_lines: List[str] = []
inventory_lines.append("[control]")
inventory_lines.append(gen_host_entry(spec.hosts[0]))
inventory_lines.append("[network]")
inventory_lines.append(gen_host_entry(spec.hosts[0]))
inventory_lines.append("[compute]")
for host in spec.hosts:
if host.role is None or host.role == "compute":
inventory_lines.append(gen_host_entry(host))
inventory_lines.append(gen_host_entry(spec.hosts[0]))
inventory_lines.append("[monitoring]")
inventory_lines.append(gen_host_entry(spec.hosts[0]))
inventory_lines.append("[storage]")
inventory_lines.append(gen_host_entry(spec.hosts[0]))
with open(inventory_file_path, "w") as f:
f.writelines((f"{line}\n" for line in inventory_lines))
f.write(TRAILING_INVENTORY)
# prepare globals
globals_file_path = gen_file_path("globals.yaml")
# init globals with defaults
globals = KOLLA_GLOBALS_DEFAULTS.copy()
globals["kolla_external_fqdn"] = spec.hosts[0].ip_address
# update with existing globals (user-provided or from previous run)
globals.update(**spec.globals)
with open(globals_file_path, "w") as f:
yaml.safe_dump(globals, f)
spec.globals = globals
self.data_store.update_deployment(deployment)
logging.info(f"Updated globals for deployment {deployment.name!r}")
# prepare passwords
passwords_file_path = gen_file_path("passwords.yaml")
# generate passwords
shutil.copyfile(KOLLA_PASSWORDS_YAML, passwords_file_path)
subprocess.run( # nosec
[KOLLA_GENPWD_CMD, "-p", passwords_file_path],
check=True,
capture_output=True,
)
with open(passwords_file_path, "r") as f:
passwords = yaml.safe_load(f)
# update with existing passwords (user-provided or from previous run)
passwords.update(**spec.passwords)
with open(passwords_file_path, "w") as f:
yaml.safe_dump(passwords, f)
spec.passwords = passwords
self.data_store.update_deployment(deployment)
logging.info(f"Updated passwords for deployment {deployment.name!r}")
self._run_kolla_ansible(
deployment,
inventory_file_path,
globals_file_path,
passwords_file_path,
tmpdirpath,
"bootstrap-servers",
)
self._run_kolla_ansible(
deployment,
inventory_file_path,
globals_file_path,
passwords_file_path,
tmpdirpath,
"pull",
)
self._run_kolla_ansible(
deployment,
inventory_file_path,
globals_file_path,
passwords_file_path,
tmpdirpath,
"precheck",
)
self._run_kolla_ansible(
deployment,
inventory_file_path,
globals_file_path,
passwords_file_path,
tmpdirpath,
"deploy",
)
# TODO: could we capture the progress as it moves forward?
def _run_kolla_ansible(
self,
deployment: DeploymentInDB,
inventory_file_path: str,
globals_file_path: str,
passwords_file_path: str,
config_dir_path: str,
action: str, # TODO: turn into an Enum
) -> None:
if action == "bootstrap-servers":
playbook = KOLLA_ANSIBLE_KOLLA_HOST_PLAYBOOK
else:
playbook = KOLLA_ANSIBLE_SITE_PLAYBOOK
# run ansible-playbook to deploy OpenStack using kolla-ansible
# adapted from kolla-ansible cmd
# TODO: watch out, this command reports success even when it does
# nothing, e.g., when the inventory is malformed
try:
deployment.status.add_event(
Event(
timestamp=datetime.now(),
severity=EventSeverity.Info,
message=f"Starting {action}",
)
)
self.data_store.update_deployment(deployment)
subprocess.run( # nosec
[
ANSIBLE_PLAYBOOK_CMD,
"--ssh-extra-args",
"-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null",
"-i",
inventory_file_path,
"-e",
f"@{globals_file_path}",
"-e",
f"@{passwords_file_path}",
"-e",
f"CONFIG_DIR={config_dir_path}",
"-e",
f"kolla_action={action}",
playbook,
],
check=True,
capture_output=True,
)
deployment.status.add_event(
Event(
timestamp=datetime.now(),
severity=EventSeverity.Info,
message=f"Finished {action}",
)
)
self.data_store.update_deployment(deployment)
except subprocess.CalledProcessError as e:
# TODO: it would be nice to parse the actual error message
# instead of dumping the whole output
p_out = e.stdout.decode("utf-8")
p_err = e.stderr.decode("utf-8")
deployment.status.add_event(
Event(
timestamp=datetime.now(),
severity=EventSeverity.Error,
message=f"stdout:\n{p_out}\nstderr:\n{p_err}",
)
)
self.data_store.update_deployment(deployment)
raise
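The inventory generation above derives a safe host name from the IP address when none is given and falls back to `eth0` for the network interface. A standalone sketch of that entry format, with a hypothetical `Host` dataclass standing in for the pydantic model:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for the Host model consumed by gen_host_entry.
@dataclass
class Host:
    ip_address: str
    name: Optional[str] = None
    port: Optional[int] = None
    network_interface_name: Optional[str] = None

def gen_host_entry(host: Host, ansible_auth_str: str = "") -> str:
    # Derive an inventory-safe name from the IP when none is given.
    name = host.name or "host_" + host.ip_address.replace(".", "_")
    parts = [f"{name} ansible_host={host.ip_address}"]
    if host.port is not None:
        parts.append(f"ansible_port={host.port}")
    if ansible_auth_str:
        parts.append(ansible_auth_str)
    parts.append(f"network_interface={host.network_interface_name or 'eth0'}")
    return " ".join(parts)

print(gen_host_entry(Host(ip_address="192.0.2.10", port=2222)))
# host_192_0_2_10 ansible_host=192.0.2.10 ansible_port=2222 network_interface=eth0
```

Each entry is one line of Ansible INI inventory; the auth string (user plus password or SSH key file) is appended verbatim, as in the backend above.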
# This file (below) is a fragment copy from kolla-ansible source code.
# It includes only the groups that usually need not be modified.
# (Xena branch)
[deployment]
# This is modified (in ansible_python_interpreter) to work in the container.
localhost ansible_connection=local ansible_python_interpreter=/opt/kolla-ansible-venv/bin/python
[baremetal:children]
control
network
compute
storage
monitoring
[tls-backend:children]
control
# You can explicitly specify which hosts run each project by updating the
# groups in the sections below. Common services are grouped together.
[common:children]
control
network
compute
storage
monitoring
[collectd:children]
compute
[grafana:children]
monitoring
[etcd:children]
control
[influxdb:children]
monitoring
[prometheus:children]
monitoring
[kafka:children]
control
[kibana:children]
control
[telegraf:children]
compute
control
monitoring
network
storage
[elasticsearch:children]
control
[hacluster:children]
control
[hacluster-remote:children]
compute
[loadbalancer:children]
network
[mariadb:children]
control
[rabbitmq:children]
control
[outward-rabbitmq:children]
control
[qdrouterd:children]
control
[monasca-agent:children]
compute
control
monitoring
network
storage
[monasca:children]
monitoring
[storm:children]
monitoring
[keystone:children]
control
[glance:children]
control
[nova:children]
control
[neutron:children]
network
[openvswitch:children]
network
compute
manila-share
[cinder:children]
control
[cloudkitty:children]
control
[freezer:children]
control
[memcached:children]
control
[horizon:children]
control
[swift:children]
control
[barbican:children]
control
[heat:children]
control
[murano:children]
control
[solum:children]
control
[ironic:children]
control
[magnum:children]
control
[sahara:children]
control
[mistral:children]
control
[manila:children]
control
[ceilometer:children]
control
[aodh:children]
control
[cyborg:children]
control
compute
[gnocchi:children]
control
[tacker:children]
control
[trove:children]
control
[senlin:children]
control
[vmtp:children]
control
[vitrage:children]
control
[watcher:children]
control
[octavia:children]
control
[designate:children]
control
[placement:children]
control
[bifrost:children]
deployment
[zookeeper:children]
control
[zun:children]
control
[skydive:children]
monitoring
[redis:children]
control
[blazar:children]
control
# Additional control implemented here. These groups allow you to control which
# services run on which hosts at a per-service level.
#
# Word of caution: Some services are required to run on the same host to
# function appropriately. For example, neutron-metadata-agent must run on the
# same host as the l3-agent and (depending on configuration) the dhcp-agent.
# Common
[cron:children]
common
[fluentd:children]
common
[kolla-logs:children]
common
[kolla-toolbox:children]
common
# Elasticsearch Curator
[elasticsearch-curator:children]
elasticsearch
# Glance
[glance-api:children]
glance
# Nova
[nova-api:children]
nova
[nova-conductor:children]
nova
[nova-super-conductor:children]
nova
[nova-novncproxy:children]
nova
[nova-scheduler:children]
nova
[nova-spicehtml5proxy:children]
nova
[nova-compute-ironic:children]
nova
[nova-serialproxy:children]
nova
# Neutron
[neutron-server:children]
control
[neutron-dhcp-agent:children]
neutron
[neutron-l3-agent:children]
neutron
[neutron-metadata-agent:children]
neutron
[neutron-ovn-metadata-agent:children]
compute
[neutron-bgp-dragent:children]
neutron
[neutron-infoblox-ipam-agent:children]
neutron
[neutron-metering-agent:children]
neutron
[ironic-neutron-agent:children]
neutron
# Cinder
[cinder-api:children]
cinder
[cinder-backup:children]
storage
[cinder-scheduler:children]
cinder
[cinder-volume:children]
storage
# Cloudkitty
[cloudkitty-api:children]
cloudkitty
[cloudkitty-processor:children]
cloudkitty
# Freezer
[freezer-api:children]
freezer
[freezer-scheduler:children]
freezer
# iSCSI
[iscsid:children]
compute
storage
ironic
[tgtd:children]
storage
# Manila
[manila-api:children]
manila
[manila-scheduler:children]
manila
[manila-share:children]
network
[manila-data:children]
manila
# Swift
[swift-proxy-server:children]
swift
[swift-account-server:children]
storage
[swift-container-server:children]
storage
[swift-object-server:children]
storage
# Barbican
[barbican-api:children]
barbican
[barbican-keystone-listener:children]
barbican
[barbican-worker:children]
barbican
# Heat
[heat-api:children]
heat
[heat-api-cfn:children]
heat
[heat-engine:children]
heat
# Murano
[murano-api:children]
murano
[murano-engine:children]
murano
# Monasca
[monasca-agent-collector:children]
monasca-agent
[monasca-agent-forwarder:children]
monasca-agent
[monasca-agent-statsd:children]
monasca-agent
[monasca-api:children]
monasca
[monasca-grafana:children]
monasca
[monasca-log-transformer:children]
monasca
[monasca-log-persister:children]
monasca
[monasca-log-metrics:children]
monasca
[monasca-thresh:children]
monasca
[monasca-notification:children]
monasca
[monasca-persister:children]
monasca
# Storm
[storm-worker:children]
storm
[storm-nimbus:children]
storm
# Ironic
[ironic-api:children]
ironic
[ironic-conductor:children]
ironic
[ironic-inspector:children]
ironic
[ironic-pxe:children]
ironic
[ironic-ipxe:children]
ironic
# Magnum
[magnum-api:children]
magnum
[magnum-conductor:children]
magnum
# Sahara
[sahara-api:children]
sahara
[sahara-engine:children]
sahara
# Solum
[solum-api:children]
solum
[solum-worker:children]
solum
[solum-deployer:children]
solum
[solum-conductor:children]
solum
[solum-application-deployment:children]
solum
[solum-image-builder:children]
solum
# Mistral
[mistral-api:children]
mistral
[mistral-executor:children]
mistral
[mistral-engine:children]
mistral
[mistral-event-engine:children]
mistral
# Ceilometer
[ceilometer-central:children]
ceilometer
[ceilometer-notification:children]
ceilometer
[ceilometer-compute:children]
compute
[ceilometer-ipmi:children]
compute
# Aodh
[aodh-api:children]
aodh
[aodh-evaluator:children]
aodh
[aodh-listener:children]
aodh
[aodh-notifier:children]
aodh
# Cyborg
[cyborg-api:children]
cyborg
[cyborg-agent:children]
compute
[cyborg-conductor:children]
cyborg
# Gnocchi
[gnocchi-api:children]
gnocchi
[gnocchi-statsd:children]
gnocchi
[gnocchi-metricd:children]
gnocchi
# Trove
[trove-api:children]
trove
[trove-conductor:children]
trove
[trove-taskmanager:children]
trove
# Multipathd
[multipathd:children]
compute
storage
# Watcher
[watcher-api:children]
watcher
[watcher-engine:children]
watcher
[watcher-applier:children]
watcher
# Senlin
[senlin-api:children]
senlin
[senlin-conductor:children]
senlin
[senlin-engine:children]
senlin
[senlin-health-manager:children]
senlin
# Octavia
[octavia-api:children]
octavia
[octavia-driver-agent:children]
octavia
[octavia-health-manager:children]
octavia
[octavia-housekeeping:children]
octavia
[octavia-worker:children]
octavia
# Designate
[designate-api:children]
designate
[designate-central:children]
designate
[designate-producer:children]
designate
[designate-mdns:children]
network
[designate-worker:children]
designate
[designate-sink:children]
designate
[designate-backend-bind9:children]
designate
# Placement
[placement-api:children]
placement
# Zun
[zun-api:children]
zun
[zun-wsproxy:children]
zun
[zun-compute:children]
compute
[zun-cni-daemon:children]
compute
# Skydive
[skydive-analyzer:children]
skydive
[skydive-agent:children]
compute
network
# Tacker
[tacker-server:children]
tacker
[tacker-conductor:children]
tacker
# Vitrage
[vitrage-api:children]
vitrage
[vitrage-notifier:children]
vitrage
[vitrage-graph:children]
vitrage
[vitrage-ml:children]
vitrage
[vitrage-persistor:children]
vitrage
# Blazar
[blazar-api:children]
blazar
[blazar-manager:children]
blazar
# Prometheus
[prometheus-node-exporter:children]
monitoring
control
compute
network
storage
[prometheus-mysqld-exporter:children]
mariadb
[prometheus-haproxy-exporter:children]
loadbalancer
[prometheus-memcached-exporter:children]
memcached
[prometheus-cadvisor:children]
monitoring
control
compute
network
storage
[prometheus-alertmanager:children]
monitoring
[prometheus-openstack-exporter:children]
monitoring
[prometheus-elasticsearch-exporter:children]
elasticsearch
[prometheus-blackbox-exporter:children]
monitoring
[masakari-api:children]
control
[masakari-engine:children]
control
[masakari-hostmonitor:children]
control
[masakari-instancemonitor:children]
compute
[ovn-controller:children]
ovn-controller-compute
ovn-controller-network
[ovn-controller-compute:children]
compute
[ovn-controller-network:children]
network
[ovn-database:children]
control
[ovn-northd:children]
ovn-database
[ovn-nb-db:children]
ovn-database
[ovn-sb-db:children]
ovn-database