Commit 817d7760 authored by dobli

Updated documentation

parent 74f14397
#### Requirements
The building manager script has a few requirements, both for the system and for the Python environment, that have to be met before it can be executed.
**System:**
```
docker
docker-compose
docker-machine
mosquitto (needed for mosquitto password generation)
ssh-keygen
```
On an Ubuntu system, most of these can be installed with the following commands:
```bash
sudo apt install mosquitto python3-pip # Needed to use mosquitto_passwd
sudo systemctl stop mosquitto # Stop Mosquitto service
sudo systemctl disable mosquitto # Disable Mosquitto service
```
To install Docker it is **not** recommended to use the versions in the Ubuntu repository. Instead, the official install instructions should be used for [Docker](https://docs.docker.com/install/linux/docker-ce/ubuntu/), [Docker Compose](https://docs.docker.com/compose/install/) and [Docker Machine](https://docs.docker.com/machine/install-machine/).
While the other requirements are only needed on the single machine that runs the script, Docker has to be available on all machines.
```
bcrypt # generate bcrypt hashes
pip-tools # manage requirements (Optional)
```
Again, on an Ubuntu system the following command can be used to install them for the current user (you need to be in the cloned folder):
```
pip3 install --user -r requirements.txt
```

Updating the `requirements.txt` file can be done using `pip-compile` again.
### Preparation
After installing the requirements it is necessary to connect all instances that are intended to be used to docker-machine. Docker-machine makes it possible to manage multiple machines running the Docker daemon.
[These instructions](https://docs.docker.com/machine/drivers/generic/) explain how to add a machine to docker-machine.
**NOTE:** The following example assumes the machines have the hostnames *building1* (IP: 192.168.1.10) and *building2* (IP: 192.168.1.20) and that both have a user called *pbuser*. These values need to be **adapted** to your setup.
The following steps need to be executed on every machine that should run the script:
1. Generate keys on the master node for ssh access
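   The concrete commands are collapsed in this diff; a typical sequence for this and the collapsed step 2 (copying the key to each node), assuming default key paths and the example hosts from above, would be:

   ```sh
   ssh-keygen -t rsa -b 4096          # generate a key pair, accept the default path ~/.ssh/id_rsa
   ssh-copy-id pbuser@192.168.1.10    # authorize the key on building1
   ssh-copy-id pbuser@192.168.1.20    # authorize the key on building2
   ```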
This allows accessing the machines via ssh without a password.
3. Docker-machine needs the users on **each node** to be able to use sudo without a password. To enable this for our example user *pbuser*, add the following line to `/etc/sudoers`:
```sh
pbuser ALL=(ALL) NOPASSWD: ALL
```
To add this line to the file with a single command, execute the following (**on each node**):
```sh
echo "pbuser ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers
```
4. Finally, add all nodes to docker-machine on each machine that shall run the script:
```sh
docker-machine create --driver generic --generic-ip-address=192.168.1.10 --generic-ssh-key ~/.ssh/id_rsa --generic-ssh-user pbuser building1 # Building 1
docker-machine create --driver generic --generic-ip-address=192.168.1.20 --generic-ssh-key ~/.ssh/id_rsa --generic-ssh-user pbuser building2 # Building 2
```

This will open the script in interactive mode. It shows a menu with various options.
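The launch command itself is collapsed in this diff; assuming the script is simply started with Python 3 from the repository root, it would look like:

```sh
python3 building_manager.py
```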
### Initial Setup
When the script is started for the first time, the only option is to create an initial setup. This will ask multiple questions about the setup, e.g. which machine nodes will be used, which services they shall provide and what the administrative password should be. It then generates all needed files and places them in the `custom_configs/` folder.
![init_menu](docs/images/init_menu.png)
### Start and stop the stack
After a successful initial execution the stack can be started and stopped either by rerunning the application and using the service menu or by executing the following commands from the repo directory:
```sh
docker stack deploy -c custom_configs/docker-stack.yml ohpb # Start stack
docker stack rm ohpb # Stop stack
```
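To verify that the stack actually came up, the standard swarm status commands can be used, for example:

```sh
docker stack services ohpb   # list the services of the stack and their replica counts
docker stack ps ohpb         # show the stack's tasks and the nodes they run on
```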
### Manage Services
As already mentioned, the application offers the option to start and stop services; this is done by executing the commands above. In addition it allows adjusting, creating and removing services by modifying the stack file.
![service_menu](docs/images/service_menu.png)
### Manage Users
Basic user management is also possible through the application. It allows creating new users (for access to the web applications only), as well as changing their passwords and removing them.
![user_menu](docs/images/user_menu.png)
### Manage Backups
A further addition is the backup menu. It allows performing backups by executing the necessary volumerize commands. It also allows restoring files from a backup to the correct volumes.
![backup_menu](docs/images/backup_menu.png)
### Manage Devices
An automation setup needs access to several different devices to be able to communicate with sensors and actuators (e.g. a USB Z-Wave modem). By default these are not accessible to docker containers, and docker swarm does not provide a built-in way to grant access to them. Docker does, however, use cgroups to manage device access. This enables us to grant the correct cgroup permissions when a container launches. The script offers a menu entry to install the necessary files and rules on any connected node. A second entry then allows setting up links between containers and devices.
![device_menu](docs/images/device_menu.png)
These steps can also be performed by hand by running the `install_usb_support.sh` script manually. To link and unlink devices on a specific node it is then only necessary to create or remove the corresponding systemd service:
```sh
# Create and start link service for openhab locally
sudo systemctl enable --now swarm-device@zwave_stick\\x20openhab.service
# Remove and stop systemd service for openhab locally
sudo systemctl disable --now swarm-device@zwave_stick\\x20openhab.service
```
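Under the hood such a unit grants access through the cgroup device controller; a minimal sketch of the mechanism (assuming cgroup v1 with the cgroupfs driver, and USB ACM serial devices with major number 166) looks like this:

```sh
# Allow read/write/mknod access to ACM serial devices (major 166, any minor)
# for a running container; <container-id> is a placeholder.
echo 'c 166:* rwm' | sudo tee /sys/fs/cgroup/devices/docker/<container-id>/devices.allow
```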
## How it works
The following parts describe in a little more detail how the script works and how individual steps may be executed manually.
### Configuration file generation
The generated swarm stack consists of multiple configuration files that need to be available and will be used by the docker containers. The *Public Building Manager* script generates these for convenience. In addition they are documented here, sorted by application/folder, to explain what they do.
**docker-stack.yml**
- Main docker stack file that contains all services
- Generated by copying and modifying snippets from two templates:
  - *docker-skeleton.yml*: contains the base structure of the compose file
  - *docker-templates.yml*: contains templates for service entries
**mosquitto**
- *mosquitto.conf*: basic configuration of mosquitto
  - copied from template folder
  - disables anonymous access to the MQTT server
  - enables usage of a password file
- *mosquitto_passwords*: list of users/passwords that gain access to mosquitto
  - generated with `mosquitto_passwd`
  - uses custom SHA512 crypt
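A minimal sketch of producing such a SHA512 crypt hash with Python's standard `crypt` module (the script's exact invocation may differ; user name and password are placeholders):

```python
import crypt

# Create a SHA512 crypt hash for one password file line
password_hash = crypt.crypt('secret', crypt.mksalt(crypt.METHOD_SHA512))
print(f'username:{password_hash}')
```

The same mechanism backs the *sftp_users.conf* entries described below.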
**nodered**
- *nodered_package.json*: packages to be installed when node red is set up
  - copied from template folder
  - contains entry for openhab package
- *nodered_settings.js*: basic node red settings
  - copied from template folder
**ssh**
- *sshd_config*: basic ssh config
  - copied from template folder
- *sftp_users.conf*: file containing users for sftp container
  - generated, grants access to configuration files over SFTP
  - usually `makepasswd` is used to generate MD5 hashed passwords
  - the script uses Python's `crypt` module to generate them instead
  - as it relies on the Linux password system, we can even use stronger hashes like SHA512
- *known_hosts*: makes backup (volumerize) hosts know the internal ssh servers
  - generated using ssh-keygen
- *ssh_host_x_key*: host key for ssh, where x is the cryptosystem
  - generated using ssh-keygen
**postgres**
- *passwd*: contains the initial password for the database administrator
  - MD5 hash generated in Python
- *user*: contains the initial user name for the database administrator
**traefik**
- *traefik.toml*: basic traefik configuration
  - copied from template folder
  - `entryPoints.http.auth.basic` contains a `usersFile` entry that points to a htpasswd file
- *traefik_users*: htpasswd-style file that contains users and hashed passwords
  - file and contained hashes are generated using the `bcrypt` library in Python
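A sketch of how such an htpasswd-style line can be generated with the `bcrypt` library (user name and password are placeholders):

```python
import bcrypt

# Produce one htpasswd-style line as used in traefik_users
password_hash = bcrypt.hashpw(b'secret', bcrypt.gensalt())
print(f'admin:{password_hash.decode()}')
```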
**pb-framr**
- *logo.svg*: the logo used in the frame menu, may be replaced by a custom one
  - copied from the template folder
- *pages.json*: configures the menu entries in the frame menu
  - generated by the script based on the chosen services
**volumerize**
- *backup_config_X.json*: backup/volumerize config for each building, where X is replaced by the building name
  - contains the backup targets/restore sources for volumerize
## Development
### Setup
To develop the application, a setup similar to the one described in the production instructions above is needed.
It is recommended to use a `virtualenv` Python environment though. This can either be set up for the project in the IDE used, or directly on the system using solutions like [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/). The `virtualenv` keeps the Python environment of the project separate from the system. Ensure it is a Python 3 environment (at least Python 3.6).
When the `virtualenv` is activated, execute the following commands from the root directory of the repository to ensure all Python dependencies are installed:
```sh
pip install pip-tools # installs tools for dependency management
pip-sync # ensures virtualenv matches requirements.txt
```
When additional requirements are needed they can be added to the `requirements.in` file and installed into the `virtualenv` with the following commands:
```sh
pip-compile # compile requirements.txt from requirements.in
pip-sync # ensures virtualenv matches requirements
```
**Test environment**
To be able to properly try and test the script, separate docker machines are needed. An obvious way to achieve this locally is to use a set of VMs. These can easily be created using [docker machine](https://docs.docker.com/machine/install-machine/). Once it is installed on the development machine it is enough to execute:
```sh
docker-machine create --driver virtualbox building1 # Creates VM with name building1
docker-machine create --driver virtualbox building2 # Creates VM with name building2
```
This will create a VM, install a minimal OS that only contains docker, and start it. The VM can then be managed with the following commands:
```sh
docker-machine start <machine-name> # Start the VM
docker-machine stop <machine-name> # Stop the VM
```
What makes handling easy is that all docker commands can be executed as if they were run on one of the VMs (e.g. `docker ps`). To achieve this, the docker environment can be set with the following command:
```sh
eval $(docker-machine env <machine-name>) # Set docker to a machine
```
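To point the CLI back at the local docker engine the environment can be unset again:

```sh
eval $(docker-machine env -u)   # Reset docker to the local engine
```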
### Manual Swarm environment
The generated stack needs a docker swarm to execute it. On the initial run the script will offer to create one for the chosen machines. The swarm can also be created manually.

To initialize swarm mode, switch the environment to the first machine (see above) and execute:
```sh
docker swarm init --advertise-addr <MANAGER-IP> # Replace <MANAGER-IP> by the IP of the machine (check with docker-machine ls)
```
This will set up a swarm environment and print a command, similar to the following, that has to be used on the other machines to join this swarm:
```sh
docker swarm join --token SWMTKN-1-44lk56nj5h6jk4h56yz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwachk4h567c <MANAGER-IP>:2377
```
This then has to be executed on the second machine (again, switch the environment first). The result can be checked by running `docker node ls` on the first machine.
#### Add building labels
A manually created swarm is not yet able to launch the stack, or rather will not start its applications. It is first necessary to assign building roles to the hosts. This is solved by labels assigned to the nodes. By default they match the machines' node names, so the labels could e.g. be `building1` and `building2`. To assign these, run the following commands on the first machine:
```sh
docker node update --label-add building=building1 <NAME_OF_HOST_1>
docker node update --label-add building=building2 <NAME_OF_HOST_2>
```
Docker swarm should pick up the changes automatically and start the stack on each machine if it was already deployed.
### Adding own services
As the script does not save any information itself, it is possible to add your own services by manually modifying the generated stack file in `custom_configs/docker-stack.yml`. Just be sure to set a deployment label to ensure the service runs on the intended building; take the existing services as a reference.
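For illustration, a hypothetical additional entry under the `services:` section with such a deployment label could look like this (service name and image are placeholders):

```yml
  my-service:
    image: nginx:alpine
    deploy:
      placement:
        constraints:
          - node.labels.building == building1   # run only on the building1 node
```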
### Extending Device Support
As of now the device link helper only supports the *Aeotec Z-Stick Gen5* Z-Wave stick. To add support for additional devices it is necessary to create a udev rule for each of them. This ensures the device can be uniquely identified later when linking it to docker containers.
For this we first need to get the vendor and product ID of the new device. For devices connected via USB they can be obtained by executing `lsusb` before and after connecting the device; the new entry is the just connected device. The IDs can be found after the `ID` keyword in the format `vvvv:pppp`, where the first number (`vvvv`) is the vendor ID and the second (`pppp`) is the product ID.
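For the already supported Aeotec stick, for example, the relevant `lsusb` line looks roughly like this (vendor ID `0658`, product ID `0200`):

```sh
$ lsusb
Bus 001 Device 004: ID 0658:0200 Sigma Designs, Inc.
```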
After obtaining the IDs, add a new rule to the `./template_configs/docker-devices.rules` file. It should look like this:
```sh
# Serial USB device notice singular SUBSYSTEM
SUBSYSTEM=="tty", ATTRS{idVendor}=="vvvv", ATTRS{idProduct}=="pppp", GROUP="dialout", MODE="0666", SYMLINK+="my_serial"
# Real USB device notice plural SUBSYSTEMS
SUBSYSTEMS=="usb", ATTRS{idVendor}=="vvvv", ATTRS{idProduct}=="pppp", GROUP="dialout", MODE="0666", SYMLINK+="my_usb"
```
**Notice:** Serial devices connected through USB (like ZigBee or Z-Wave modems) still need a `tty` entry.
To use this new rule, simply rerun the installation of the device scripts as explained above. After replugging the device it will now always be available by the name defined with `SYMLINK+=`, e.g. `/dev/my_serial`.
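Instead of replugging, udev can usually also be told to reload and re-apply the rules directly:

```sh
sudo udevadm control --reload-rules   # re-read the rule files
sudo udevadm trigger                  # re-apply the rules to already connected devices
```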
From now on the device can already be linked manually by creating a systemd entry using its device name. To make the script aware of the new device (and add an entry to the menu), simply extend the constant `USB_DEVICES` in `building_manager.py` with a new entry that uses the device name as the value.
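The exact shape of `USB_DEVICES` is not shown in this diff; a hypothetical entry, using the `SYMLINK+=` name as the value, might look like:

```python
# Hypothetical sketch; the real structure in building_manager.py may differ.
USB_DEVICES = {
    'Aeotec Z-Stick Gen5': 'zwave_stick',
    'My Serial Device': 'my_serial',  # new entry, value = name set via SYMLINK+=
}
```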
**building_manager.py**
```python
# ******************************
# Docker client commands <<<
# ******************************
def deploy_docker_stack(base_dir, machine):
    """Deploys the custom stack in the base_dir

    :base_dir: Base directory to look for stack file
    :machine: Docker machine to execute command
    """
    # Set CLI environment to target docker machine
    machine_env = get_machine_env(machine)
    os_env = os.environ.copy()
    os_env.update(machine_env)

    # Get compose file and start stack
    compose_file = f'{base_dir}/{CUSTOM_DIR}/{COMPOSE_NAME}'
    deploy_command = f'docker stack deploy -c {compose_file} ohpb'
    run(deploy_command, shell=True, env=os_env)


def remove_docker_stack(machine):
    """Removes the custom stack

    :machine: Docker machine to execute command
    """
    # Set CLI environment to target docker machine
    machine_env = get_machine_env(machine)
    os_env = os.environ.copy()
    os_env.update(machine_env)

    remove_command = 'docker stack rm ohpb'
    run(remove_command, shell=True, env=os_env)


def resolve_service_nodes(service):
    """Returns nodes running a specified service
```
```python
def service_menu(args):
    ...
    # Ask for action
    choice = qust.select("What do you want to do?", choices=[
        'Re-/Start docker stack', 'Stop docker stack',
        'Modify existing services', 'Add additional service',
        'Exit'], style=st).ask()
    if "Add" in choice:
        service_add_menu(base_dir)
    elif "Modify" in choice:
        service_modify_menu(base_dir)
    elif "Start" in choice:
        machine = docker_client_prompt(" to execute deploy")
        deploy_docker_stack(base_dir, machine)
    elif "Stop" in choice:
        machine = docker_client_prompt(" to execute remove")
        remove_docker_stack(machine)
```
```python
def service_add_menu(base_dir):
    ...


def device_menu(args):
    """...

    :args: Arguments from commandline
    """
    # Base directory for configs
    base_dir = args.base_dir
```