
docs: document services

keep-around/a917dd4ee5fade96d7f44672a570002acf0f9571
Loïc Dachary 1 year ago
committed by Loic Dachary
parent
commit
a917dd4ee5
Signed by: dachary GPG Key ID: 992D23B392F9E4F2
  1. 57
      docs/community/domain.rst
  2. 3
      docs/community/index.rst
  3. 49
      docs/services/VPN.rst
  4. 7
      docs/services/authorized_keys.rst
  5. 80
      docs/services/backup.rst
  6. 90
      docs/services/bind.rst
  7. 18
      docs/services/enough.rst
  8. 4
      docs/services/forum.rst
  9. 17
      docs/services/gitlab.rst
  10. 18
      docs/services/ids.rst
  11. 18
      docs/services/index.rst
  12. 15
      docs/services/mattermost.rst
  13. 329
      docs/services/monitoring.rst
  14. 10
      docs/services/nextcloud.rst
  15. 10
      docs/services/pad.rst
  16. 30
      docs/services/postfix.rst
  17. 12
      docs/services/weblate.rst
  18. 4
      docs/services/website.rst
  19. 10
      docs/services/wekan.rst
  20. 18
      docs/user-guide.rst
  21. 6
      inventory/group_vars/all/openvpn.yml
  22. 5
      inventory/host_vars/icinga-host/monitoring.yml
  23. 2
      playbooks/openvpn/roles/openvpn/defaults/main.yml

57
docs/community/domain.rst

@ -0,0 +1,57 @@
.. _domain:
Domain
======
The `enough.community` domain name is registered at `Gandi
<https://gandi.net>`_ under the user EC8591-GANDI.
After the `bind-host` virtual machine is created, click on `Glue record management` in the Gandi web
interface, set ns1 to the IP of the machine (i.e. 51.68.79.8) and wait a few
minutes. Click on `Update DNS`, set the `DNS1` server to
ns1.enough.community and click on `Add Gandi's secondary nameserver`,
which should add a new entry in DNS2: it will automatically act as a
secondary DNS.
The `bind-host` virtual machine should be initialized before any other
because everything depends on it.
Mail
----
The `admin mail <admin@enough.community>`_ is
hosted at Gandi and is used as the primary contact for all
`enough.community` resources (hosting etc.). In case a password is lost
this is the mail receiving the link to reset the password etc.
Zones
-----
enough.community
````````````````
The `enough.community` zone is managed on a dedicated virtual machine
`ns1.enough.community`. It is generated via `the bind playbook
<http://lab.enough.community/main/enough-community/blob/master/playbooks/bind/bind-playbook.yml>`_.
* The port udp/53 is open to all but recursion is only allowed for IPs
of the enough.community VMs
* An **A** record is created for all existing VM names
* A **CNAME** record is created for all VM names without the `-host` suffix
* The `SPF` **TXT** record helps :doc:`send mail <../services/postfix>` successfully.
test.enough.community and d.enough.community
````````````````````````````````````````````
They can be updated locally by the `debian` user via ``nsupdate``. Example:
::
debian@bind-host:~$ nsupdate <<EOF
server localhost
zone test.enough.community
update add bling.test.enough.community. 1800 TXT "Updated by nsupdate"
show
send
quit
EOF
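To verify that the record was actually published, the name server can
be queried directly. A quick check following the example above:

::

   debian@bind-host:~$ dig @localhost +short bling.test.enough.community TXT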

3
docs/community/index.rst

@ -2,11 +2,10 @@ Community
=========
.. toctree::
:caption: Community
:name: Community
:maxdepth: 2
documentation
domain
infrastructure
extending
contribute

49
docs/services/VPN.rst

@ -0,0 +1,49 @@
.. _vpn:
VPN
===
Enough hosts can be connected to a public network (with public IP
addresses) and an internal network (with private IP addresses). When a
host is not connected to the public network, it can only be accessed
in two ways:
* By connecting to a host connected to both the public network and the
internal network.
* By connecting to the VPN (which is running on a host connected to
both the public network and the internal network).
VPN Server configuration
------------------------
The `OpenVPN <https://openvpn.net/>`__ server is configured with
variables (see `the documentation
<https://lab.enough.community/main/infrastructure/blob/master/playbooks/openvpn/roles/openvpn/defaults/main.yml>`__).
VPN Clients
-----------
The certificates for clients to connect to the VPN will be created
from the list in the `openvpn_active_clients` variable in
`~/.enough/example.com/inventory/group_vars/all/openvpn.yml`,
using `this example
<https://lab.enough.community/main/infrastructure/blob/master/inventory/group_vars/all/openvpn.yml>`__.
For each name in the `openvpn_active_clients` list, a `.tar.gz` file will be created in the
`~/.enough/example.com/openvpn/` directory. For instance, with the following configuration:
.. code::
---
openvpn_active_clients:
- loic
- glen
After running `enough --domain example.com playbook`, the files
`~/.enough/example.com/openvpn/loic.tar.gz` and
`~/.enough/example.com/openvpn/glen.tar.gz` will be created and
will contain the credentials.
On Debian GNU/Linux the `.tar.gz` can be extracted in a `vpn`
directory and the `.conf` file it contains imported using the `Network
=> VPN` system settings.
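From the command line, the same credentials can be used with the
``openvpn`` client directly. A minimal sketch (the file names follow
the example above; the exact name of the ``.conf`` file inside the
archive may differ):

.. code::

   $ mkdir vpn && tar -xzf ~/.enough/example.com/openvpn/loic.tar.gz -C vpn
   $ # the archive contains the client configuration and its credentials
   $ sudo openvpn --config vpn/loic.conf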

7
docs/services/authorized_keys.rst

@ -1,7 +0,0 @@
SSH public keys
===============
Each :doc:`team <../community/team>` member with access to the infrastructure
should have their ssh public key in `the ssh_keys directory
<http://lab.enough.community/main/infrastructure/tree/master/playbooks/authorized_keys/roles/authorized_keys/files/ssh_keys>`_ so it is added to the `~debian/.ssh/authorized_keys` file.

80
docs/services/backup.rst

@ -1,65 +1,35 @@
Backups and recovery
====================
Backups
=======
Backup policy
-------------
Persistent data is placed in :ref:`encrypted volumes
<attached_volumes>` otherwise it may be deleted at any moment, when
the host fails. A daily backup of all volumes is done by the host in
the `backup-service-group` group. Exactly one host must be set in the
`~/.enough/example.com/inventory/services.yml` file, like so:
Each VM is `snapshotted daily <http://lab.enough.community/main/infrastructure/blob/master/playbooks/backup/roles/backup/templates/backup.sh>`_ and snapshots `older than 30 days <http://lab.enough.community/main/infrastructure/blob/master/playbooks/backup/roles/backup/templates/prune-backup.sh>`_ are removed.
.. code:: yaml
Disaster recovery
-----------------
backup-service-group:
hosts:
bind-host:
The VMs are cheap and do not provide any kind of guarantee: all
data they contain can be lost. To recover a lost production VM:
The number of backups is defined with the `backup_retention_days` variable
as documented `in this file <https://lab.enough.community/main/infrastructure/blob/master/playbooks/backup/roles/backup/defaults/main.yml>`__ and can be set in `~/.enough/example.com/inventory/group_vars/backup-service-group.yml` like so:
* log in to debian@ansible.enough.community and get OpenStack credentials from `~/openrc.sh` or :doc:`ask a team member <../community/team>`.
* cd /srv/checkout
.. code:: yaml
If the virtual machine is cattle
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
---
backup_retention_days: 7
* Delete the broken machine if it is still around, e.g. ``openstack stack delete website-host``
* Create a new machine by the same name
* Run the playbook so the DNS updates with the IP of the newly created VM
.. note::
If the virtual machine is a pet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the quota for volume snapshots displayed by `enough --domain
example.com quota show` is too low, a support ticket should be
opened with the cloud provider to increase it.
* Get the name of the latest backup with ``openstack image list --private``
* Rename the broken machine if it is still around, e.g. ``openstack server set --name packages-destroyed packages-host``
* Get the flavor for the machine
* Create a new machine from the backup, e.g. ``openstack server create --flavor s1-2 --image 2018-05-14-packages-host --security-group infrastructure --wait packages-host``
* Edit ``inventories/common/01-hosts.yml`` and replace the IP of the broken machine with the IP of the new machine
* Clear the ansible cache ``rm -fr ~/.ansible``
* Run the playbook so the DNS updates with the IP of the newly created VM
* Reboot the machine, in case it had IPs from before the DNS was updated by ansible
* Run the playbook so the DNS updates with the IP of the newly created VM
A volume backup can be used to :ref:`restore a service
<restore_service_from_backup>` in the state it was at the time of the
backup.
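To see which backups are available before restoring, the OpenStack CLI
can be queried directly. A minimal sketch, assuming the OpenStack
credentials of the Enough cloud are loaded in the environment (the
snapshot name is illustrative):

.. code::

   $ # volume snapshots created by the daily backup
   $ openstack volume snapshot list
   $ # details of a given snapshot
   $ openstack volume snapshot show 2020-04-12-cloud-volume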
Disaster recovery exercise
--------------------------
If the virtual machine is cattle
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Remove the machine, e.g. ``openstack server delete website-host``
If the virtual machine is a pet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Rename the machine, e.g. ``openstack server set --name packages-destroyed packages-host``
* Suspend, e.g. ``openstack server suspend packages-destroyed``
* Remove the machine when the recovery is successful, e.g. ``openstack server delete packages-destroyed``
Booting from a backup
---------------------
* Log in to https://horizon.cloud.ovh.net/
* Pause the broken instance
* Go to Images
* Locate the latest backup
* Launch with the expected flavor and the same name as the broken instance
* Edit the security groups: remove `default` and add the security group that has the same name as the broken instance
* Add the IP of the new instance to /etc/hosts and manually check that it can be logged into and responds
* Log in to the new instance and set /etc/resolv.conf to 8.8.8.8
* Edit hosts.yml and replace the IP of the broken instance with the IP of the new instance
* Run ansible-playbook on the new instance
The volumes are replicated three times and their data cannot be lost
because of a hardware failure.

90
docs/services/bind.rst

@ -3,72 +3,46 @@
DNS
===
Registrar
---------
Records
-------
The `enough.community` domain name is registered at `Gandi
<https://gandi.net>`_ under the user EC8591-GANDI.
When a new host is created (for instance with `enough --domain
example.com host create cloud-host`) the names
`cloud-host.example.com` and `cloud.example.com` are added to the DNS.
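The result can be verified with ``dig``. A quick check (the names
follow the example above):

.. code::

   $ dig +short cloud-host.example.com A
   $ dig +short cloud.example.com CNAME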
After the `bind-host` virtual machine is created, click on `Glue record management` in the Gandi web
interface and set ns1 to the IP, i.e. 51.68.79.8 and wait a few
minutes. Click on `Update DNS` and set the `DNS1` server to
ns1.enough.community and click on `Add Gandi's secondary nameserver`
which should add a new entry in DNS2: it will automatically act as a
secondary DNS.
The `bind_zone_records` variable is inserted in the `example.com` zone
declaration verbatim (see the `BIND documentation for information <https://bind9.readthedocs.io/en/latest/reference.html#zone-file>`__).
It can be set in `~/.enough/example.com/inventory/host_vars/bind-host/zone.yml` like so:
The `bind-host` virtual machine should be initialized before any other
because everything depends on it.
.. code:: yaml
Mail
----
bind_zone_records: |
imap 1800 IN CNAME access.mail.gandi.net.
pop 1800 IN CNAME access.mail.gandi.net.
smtp 1800 IN CNAME relay.mail.gandi.net.
@ 1800 IN MX 50 fb.mail.gandi.net.
@ 1800 IN MX 10 spool.mail.gandi.net.
The `admin mail <admin@enough.community>`_ is
hosted at Gandi and is used as the primary contact for all
`enough.community` resources (hosting etc.). In case a password is lost
this is the mail receiving the link to reset the password etc.
Zones
-----
enough.community
````````````````
The `enough.community` zone is managed on a dedicated virtual machine
`ns1.enough.community`. It is generated via `the bind playbook
<http://lab.enough.community/main/enough-community/blob/master/playbooks/bind/bind-playbook.yml>`_.
* The port udp/53 is open to all but recursion is only allowed for IPs
of the enough-community VMs
* An **A** record is created for all existing VM names
* A **CNAME** record is created for all VM names without the `-host` suffix
* Manually maintained records are added to `the bind playbook <http://lab.enough.community/main/enough-community/blob/master/playbooks/bind/bind-playbook.yml>`_.
* The `SPF` **TXT** record helps :doc:`send mail <postfix>` successfully.
test.enough.community
`````````````````````
The `test.enough.community` zone is managed on the same dedicated virtual machine
`ns1.enough.community`. It is generated via `the bind playbook
<http://lab.enough.community/main/enough-community/blob/master/playbooks/bind/bind-playbook.yml>`_.
Host Resolver
-------------
It can be updated locally by the `debian` user via ``nsupdate``. This enables
any enough.community administrator to set up new preproduction testing
subdomains. Example:
The resolver of all hosts (in `/etc/resolv.conf`) is set with the IP
of the DNS server that was :ref:`created to bootstrap Enough
<bind_create>`. It is used to resolve the host names in the Enough
domain (for instance `example.com` or `cloud.example.com`) and all
other domain names (for instance `gnu.org` or `fsf.org`).
::
VPN Resolver
------------
debian@bind-host:~$ nsupdate <<EOF
server localhost
zone test.enough.community
update add bling.test.enough.community. 1800 TXT "Updated by nsupdate"
show
send
quit
EOF
When a client connects to the :doc:`VPN <VPN>`, its resolver is set to the
Enough DNS server.
VMs resolvers
-------------
.. note::
Each VM is set to use `ns1.enough.community` as a resolver via `the bind-client playbook <http://lab.enough.community/main/enough-community/blob/master/playbooks/bind/bind-client-playbook.yml>`_
which also sets the FQDN.
Using the Enough DNS instead of the DNS of an internet service
provider bypasses rewrites of DNS entries (imposed by `the state
<https://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000030195477&dateTexte=&categorieLien=id>`__
in some cases).

18
docs/services/enough.rst

@ -1,18 +0,0 @@
Enough
======
`cloud.enough.community <http://lab.enough.community/main/infrastructure/tree/master/playbooks/enough/roles/nextcloud>`_ is installed `with docker <https://github.com/nextcloud/docker>`_.
The ``/var/lib/docker`` directory is mounted on a 3 replica volume and
should be manually backed up from time to time to keep the history. If
there is not enough space, it can be resized with:
.. code::
$ openstack volume set --size 200 cloud-volume
$ ssh debian@cloud.enough.community
$ sudo resize2fs /dev/sdb
Note that the ``size`` in the ansible role for the ``os_volume`` tasks
is only used when the volume is created and cannot be used to shrink
or enlarge the volume.

4
docs/services/forum.rst

@ -0,0 +1,4 @@
Forum
=====
`Discourse <https://discourse.org/>`__ is available at `forum.example.com`.

17
docs/services/gitlab.rst

@ -1,13 +1,10 @@
GitLab
======
`lab.enough.community <http://lab.enough.community/main/infrastructure/tree/master/playbooks/gitlab/roles/gitlab>`_ is installed `with docker <https://hub.docker.com/r/sameersbn/gitlab/>`_.
The configuration variables are set in `inventories/common/host_vars/gitlab-host/gitlab.yml` at
the root of the repository. It can be copied from
`playbooks/gitlab/roles/gitlab/defaults/main.yml`.
* `gitlab_password`: database password
* `gitlab_shared_runners_registration_token`: runner registration token predefined for tests
* `gitlab_secrets_db_key_base`, `gitlab_secrets_otp_key_base` and `gitlab_secrets_secret_key_base`: unique keys that can be generated with `pwgen -Bsv1 64`
* `gitlab_os_*`: default to the OpenStack tenant variables. In production they should be set to a dedicated tenant, entirely separated from the production tenant, because it will be used by every commit pushed to the repository.
`GitLab <https://gitlab.com/>`__ is available at `lab.example.com`.
The user with administrative rights is `root`. Its password can be set
as documented in `this file
<https://lab.enough.community/main/infrastructure/blob/master/inventory/group_vars/gitlab/gitlab.yml>`__
and can be modified in the
`~/.enough/example.com/inventory/group_vars/gitlab/gitlab.yml`
file.

18
docs/services/ids.rst

@ -3,18 +3,6 @@
Intrusion Detection System
==========================
Wazuh
-----
The `Wazuh <http://wazuh.com/>`_ server/manager is installed on a
dedicated host and all other hosts run an agent. The roles used by the `wazuh playbook <https://lab.enough.community/main/infrastructure/tree/master/playbooks/wazuh>`_ are
from a submodule including `a short lived fork
<https://lab.enough.community/singuliere/wazuh-ansible>`_ of the
`wazuh-ansible repository
<https://github.com/wazuh/wazuh-ansible>`_. All commits unique to the
fork must match a pull request so they are eventually merged.
Notifications
-------------
All notifications are sent to `ids@enough.community`.
The `Wazuh <http://wazuh.com/>`_ Intrusion Detection System watches
over all hosts and will report problems to the `ids@example.com` mail
address.

18
docs/services/index.rst

@ -2,17 +2,19 @@ Services
========
.. toctree::
:caption: Services
:name: Services
:maxdepth: 2
nextcloud
forum
mattermost
pad
weblate
gitlab
website
wekan
bind
VPN
postfix
ids
authorized_keys
backup
monitoring
weblate
gitlab
mattermost
enough
backup

15
docs/services/mattermost.rst

@ -1,18 +1,5 @@
Mattermost
==========
`chat.enough.community <http://lab.enough.community/main/infrastructure/tree/master/playbooks/chat/roles/mattermost>`_ is installed `with docker <https://docs.mattermost.com/install/prod-docker.html>`_. The configuration is done `via the admin console web interface <https://chat.enough.community/admin_console>`_.
`Mattermost <https://mattermost.com/>`__ is available at `chat.example.com`.
Using the CLI:
.. code::
cd /srv/mattermost
docker-compose -f docker-compose-infrastructure.yml exec app platform
Entering the Mattermost container:
.. code::
cd /srv/mattermost
docker-compose -f docker-compose-infrastructure.yml exec app sh

329
docs/services/monitoring.rst

@ -1,314 +1,19 @@
.. _monitoring:
Monitoring howto
================
Icinga2 follows an "apply logic style" for its configuration but it
still allows adding isolated services, to handle exceptions to
the rules.
Most of the service definitions are based on predefined
commands which are documented
`here <https://www.icinga.com/docs/icinga2/latest/doc/10-icinga-template-library/#plugin-check-commands-for-monitoring-plugins>`__.
Monitoring deployment
---------------------
Monitoring is deployed by importing the
`playbooks/icinga/icinga-playbook.yml` playbook. The Icinga2 master is
`icinga-host`. See also
`inventories/common/host_vars/icinga-host/monitoring.yml` for specific
deployment attributes: icingaweb credentials, https, virtualhost fqdn.
Each host is monitored by default.
To disable monitoring for a host, define the host variable
``not_monitored``, as sketched below.
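A minimal sketch of how this can be set in the inventory (assuming, as
is usual for such flags, that only the presence of the variable
matters):

::

   # host_vars/some-host/monitoring.yml (path depends on the inventory layout)
   not_monitored: true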
Base system monitoring
^^^^^^^^^^^^^^^^^^^^^^
For each host we:
- check ping (default host check in Icinga)
- check ssh
- check apt
- check etckeeper
- check icinga
- check load
- check procs
- check swap when ``vars.swap`` is defined
- check users
- check run\_kernel (check if it runs the most up-to-date kernel)
- check fail2ban process
- check sshd process
- check rsyslogd process
- check icinga2 process
- check cron process
Git repos monitoring
^^^^^^^^^^^^^^^^^^^^
A host can declare a git repo to be checked (designed originally for
`etckeeper`):
::
vars.repos["Bling"] = {
dir = "/var/git/bling"
}
The git check command is sudoed.
Example of use in a role: `playbooks/icinga/roles/deploy_dummy_monitoring_objects/tasks/main.yml`.
Disk and partitions monitoring
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A host can declare any partition to be checked:
::
vars.disks["disk"] = {
}
vars.disks["disk /"] = {
disk_partitions = "/"
}
vars.disks["disk /var"] = {
disk_partitions = "/var"
}
vars.disks["disk /tmp"] = {
disk_partitions = "/tmp"
}
Processes monitoring
^^^^^^^^^^^^^^^^^^^^
A host can declare any process presence to be checked:
::
vars.process["Incron"] = {
procs_command = "incrond"
procs_critical = "1:1"
}
Example of use in a role: `playbooks/icinga/roles/deploy_dummy_monitoring_objects/tasks/main.yml`.
Mail sending monitoring
^^^^^^^^^^^^^^^^^^^^^^^
A host can declare any non-null value in ``vars.sendmail``. Then
mailname, mail queue and process are checked.
It's currently not sufficient.
I wrote a qshape-based test some time ago which is suitable for detecting
delivery problems for mass mailing (much better than the mail queue,
which can legitimately grow when needed). But it is not adapted to
sparse emailing.
So, some projects that would help monitor our ability to
send emails:
- check qshape.
- check rbls.
- a mail loop test (verify the self-delivery of a sent mail routed via
another relay)
- a mail delivery test (verify the delivery of a sent mail to some of
the major mail domains)
Web services monitoring
^^^^^^^^^^^^^^^^^^^^^^^
A host can declare that it hosts a web service at a given fqdn:
::
vars.http_vhosts["Secure Drop Forum"] = {
http_vhost = "forum.enough.community"
http_uri = "/c/devops"
http_ssl = true
http_string = "devops discussions"
}
- Each fqdn is processed via ``check_http`` from the Icinga master and
should provide ``http_string`` in the answer's body.
- Each fqdn is processed via ``check_http`` from the Icinga master and
should *not* contain certain strings in the answer. This is useful to
prevent accidentally deploying spyware. For now, the spyware domains
checked are:
- googleapis.com
- cloudflare.com
- google-analytics.com
- gravatar.com
- If ``http_ssl = true`` the check is processed using https and the TLS
certificate is retrieved for a validity check.
Moreover, if a host declares ``vars.httpd = "apache"`` or
``vars.httpd = "apache2"`` or ``vars.httpd = "nginx"``, then process
checks are executed.
If a host declares ``vars.sqlserver = "mysql"`` or
``vars.sqlserver = "mariadb"`` or ``vars.sqlserver = "pgsql"``, then
process checks are executed.
It is probably easy to associate a list of scripts with each
fqdn for more advanced checks (checking the result of a POST, etc.) if needed.
Since monitoring `http vhosts` happens often in `enough.community`, an Ansible
role helps to declare it:
::
- role: monitor_http_vhost
http_vhost_name: Secure Drop Forum
http_vhost_fqdn: "forum.{{ domain }}"
http_vhost_uri: /c/devops
http_vhost_string: "devops discussions"
http_vhost_https: true
Torified Web services monitoring
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Similarly to `http_vhosts`, a host can declare a `tor_http_vhosts` dictionary.
The main difference is that what is transmitted is not a `fqdn` but the
path of the file containing the service hostname. An Ansible role helps to declare it:
::
- role: monitor_tor_http_vhost
tor_hostname_file: /var/lib/tor/services/cloud/hostname
tor_http_vhost_name: Cloud
tor_http_vhost_uri: "/login"
tor_http_vhost_string: "Forgot password"
.. note:: For now the only handled case concerns plain http over tor. TLS hasn't yet been defined.
DNS service monitoring
^^^^^^^^^^^^^^^^^^^^^^
A host can declare hosted zone files which can be checked via
``named-checkzone`` (syntax consistency) and ``check_whois`` (domain
expiration):
::
/* Define zones and files for checks */
vars.zones["Secure Drop Club"] = {
fqdn = "enough.community"
file = "/etc/bind/zones/masters/enough.community"
view = "external"
}
Example of use in a role: `playbooks/bind/roles/monitoring-bind/tasks/main.yml`.
Maybe we could add a ``dig`` check on the A and NS records, and possibly
use ``zonemaster`` or a web service providing ``zonemaster`` results.
Monitoring tweaking
-------------------
Service templates
^^^^^^^^^^^^^^^^^
A host can set a preferred service template, using the icinga variable
``vars.service_template``.
The templates can be found in `playbooks/icinga/roles/icinga2/files/templates.conf`.
Hosts vars
^^^^^^^^^^
A host can define a list of lines to be added to its icinga configuration,
using the Ansible variable ``monitoring_host_vars``. See e.g.
``inventories/common/host_vars/icinga-host/monitoring.yml`` for an example.
The default is empty.
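A minimal sketch of what the variable could contain; the lines reuse
icinga attributes mentioned elsewhere in this document:

::

   monitoring_host_vars:
     - 'vars.swap = true'
     - 'vars.httpd = "nginx"'
     - 'vars.service_template = "generic-service"'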
Monitoring architecture
=======================
Icinga2 overview
----------------
Icinga2 is very flexible and doesn't impose a monitoring
architecture, so we have to define one. The simplest is to follow the
`master with clients
setup <https://www.icinga.com/docs/icinga2/latest/doc/06-distributed-monitoring/#master-with-clients>`__. It implies:
- getting a master server (including web GUI),
- getting a client for each VM on which you would execute checks.
Icinga2 uses the same software for clients and masters and their
configuration defines whether one is a master or a client. We can use the same
configuration objects for master and clients.
Our master server deployment is defined in `playbooks/icinga/roles/icinga2`.
Our client deployment is defined in `playbooks/icinga/roles/icinga2_client`.
Executable checks are managed locally on each computer, as
well as the required sudo permissions. To deploy them, a common role is defined in
`playbooks/icinga/roles/icinga2_common`.
Icinga2 defines "zones", which are a way to control information sharing
over the monitoring infrastructure. We define:
- a global zone for the configuration shared among the whole cluster;
- a master zone for the master;
- a client zone for each client, with the master zone as a parent.
Clients don't know about each other (the master will distribute only
what is required).
Apply logic style
-----------------
Icinga2 uses the "`apply logic style
<https://www.icinga.com/docs/icinga2/latest/doc/08-advanced-topics/#advanced-use-of-apply-rules>`__".
All behavior is described using a language (including lists and
associative arrays), as host attributes (e.g. a list of hardware
block devices, a list of mounted volumes, a list of vhosts and some
associated attributes, a list of processes you would like to
check and their associated limits, a list of git repos to be checked,
etc.).
Based on those attributes, generic services can be defined.
Here is how one can check all the certificates of all the vhosts which
are declared to use TLS:
::
apply Service "Check TLS certificate " for (http_vhost => config in host.vars.http_vhosts) {
  import "generic-service"
  check_command = "http"
  vars.http_address = config.http_vhost
  command_endpoint = " ... "
  vars.http_certificate = 21
  vars.http_sni = true
  vars += config
  assign where config.http_ssl == true
}
with a host which contains this declaration:
::
vars.http_vhosts["Forum"] = {
  http_vhost = "forum.enough.community"
  http_ssl = true
}
The main monitoring configuration for enough.community is available in
`playbooks/icinga/roles/icinga2/files/services/` and deployed in the
global Icinga zone, thus available to all the cluster.
There are checks for vhosts, DNS zones consistency, DNS views
consistency, attended processes, attended vhosts, attended output IPs,
git repos, mails queues, services banners (ssh, smtp, etc.), upgrades,
running kernels, mailname consistency, volumes, databases, etc.
Monitoring
==========
The `Icinga <https://icinga.com/>`_ Monitoring System watches over all
hosts (disk space, load average, security updates, etc.). In addition
services may add specific monitoring probes such as loading a web page
and verifying its content is valid.
The Icinga web interface is at `icinga.example.com`. The user name
and password with administrator rights must be defined in
`~/.enough/example.com/inventory/host_vars/icinga-host/icinga-secrets.yml`
with variables documented in `this file
<https://lab.enough.community/main/infrastructure/blob/master/playbooks/icinga/roles/icinga2/defaults/main.yml>`__.
Problems found by Icinga are reported via email to the address defined in
`~/.enough/example.com/inventory/host_vars/icinga-host/mail.yml` with
a variable documented in `this file <https://lab.enough.community/main/infrastructure/blob/master/inventory/host_vars/icinga-host/monitoring.yml>`__.

10
docs/services/nextcloud.rst

@ -0,0 +1,10 @@
Nextcloud
=========
`Nextcloud <https://nextcloud.com/>`__ is available at
`cloud.example.com`. The user with administrative rights, the
Nextcloud version and other variables are documented in `this file
<https://lab.enough.community/main/infrastructure/blob/master/playbooks/enough/roles/nextcloud/defaults/main.yml>`__
and can be modified in the
`~/.enough/example.com/inventory/host_vars/cloud-host/nextcloud.yml`
file.

10
docs/services/pad.rst

@ -0,0 +1,10 @@
Etherpad
========
`Etherpad <https://etherpad.org/>`__ is available at `pad.example.com`.
The user with administrative rights is `admin`. Its password can be set
as documented in `this file
<https://lab.enough.community/main/infrastructure/blob/master/playbooks/pad/roles/pad/defaults/main.yml>`__
and can be modified in the
`~/.enough/example.com/inventory/group_vars/pad-service-group.yml`
file.
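The override is a single variable in that file. A sketch (the variable
name here is hypothetical; the authoritative name is documented in the
defaults file linked above):

.. code:: yaml

   # ~/.enough/example.com/inventory/group_vars/pad-service-group.yml
   ---
   # hypothetical variable name, check the role defaults for the real one
   pad_admin_password: "a-long-random-password"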

30
docs/services/postfix.rst

@ -1,24 +1,16 @@
.. _postfix:
Postfix mail server
===================
SMTP server
===========
Each VM installed via Ansible is able to send emails from the `enough.community`
domain.
An SMTP server is running on each host. A service running on
`some-host.example.com` can use the SMTP server as follows:
Postfix mail relay
------------------
* Address: some-host.example.com
* Port: 25
* Authentication: No
* SSL/TLS: No
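For instance, a quick way to check that the local SMTP server accepts
mail from the host itself (a sketch, assuming a command line mail
client such as ``bsd-mailx`` is installed; the destination address is
illustrative):

.. code::

   $ echo "test body" | mail -s "test from some-host" admin@example.com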
A mail relay based on Postfix is defined on the host `postfix-host`, via the
playbook `playbooks/postfix/postfix-relay-playbook.yml`, based on the
`Postfix DebOps <https://github.com/debops/ansible-postfix>`_ role.
It is configured as an open relay using `smtps`. The relaying restrictions are
set in :ref:`firewall <firewall>` using OpenStack.
Postfix mail satellite
----------------------
A Postfix satellite is defined on each host (except for `postfix-host`),
via the playbook `playbooks/postfix/postfix-client-playbook.yml`, based on the
`Postfix DebOps <https://github.com/debops/ansible-postfix>`_ role.
It is not possible (and it would not be secure) for services running
on another host (`other-host.example.com` for instance) to use this
SMTP server.

12
docs/services/weblate.rst

@ -1,8 +1,10 @@
Weblate
=======
`weblate.enough.community <http://lab.enough.community/main/infrastructure/tree/master/playbooks/weblate/roles/weblate>`_ is installed `with docker <https://github.com/WeblateOrg/docker>`_.
See also `host_vars/weblate-host/weblate.yml`.
The `docker-compose file <http://lab.enough.community/main/infrastructure/blob/master/playbooks/weblate/roles/weblate/templates/docker-compose-infrastructure.yml>`_ is adapted from the one found `in the weblate repository <https://github.com/WeblateOrg/docker/blob/master/docker-compose-https.yml>`_
`Weblate <https://weblate.org/>`__ is available at
`weblate.example.com`. The user with administrative rights and the
contact email are defined as documented in `this file
<https://lab.enough.community/main/infrastructure/blob/master/playbooks/weblate/roles/weblate/defaults/main.yml>`__
and can be modified in the
`~/.enough/example.com/inventory/host_vars/weblate-host/secrets.yml`
file.

4
docs/services/website.rst

@ -0,0 +1,4 @@
Hugo
====
`Hugo <https://gohugo.io/>`__ is available at `www.example.com`.

10
docs/services/wekan.rst

@ -0,0 +1,10 @@
Wekan
=====
`Wekan <https://wekan.github.io/>`__ is available at `wekan.example.com`.
The user with administrative rights is `admin`. Its password can be set
as documented in `this file
<https://lab.enough.community/main/infrastructure/blob/master/playbooks/wekan/roles/wekan/defaults/main.yml>`__
and can be modified in the
`~/.enough/example.com/inventory/group_vars/wekan-service-group.yml`
file.

18
docs/user-guide.rst

@ -58,6 +58,8 @@ Installation
enough --version
.. _bind_create:
Create the DNS name server
--------------------------
@ -123,9 +125,9 @@ Create or update a service
The following services are available:
* :doc:`bind <services/bind>`, for a `DNS server <https://www.isc.org/bind/>`__ at ``bind.example.com``
* ``openvpn``, for `VPN <https://openvpn.net/>`__ at ``openvpn.example.com``
* :doc:`OpenVPN <services/VPN>`, for `VPN <https://openvpn.net/>`__ at ``openvpn.example.com``
* :doc:`chat <services/mattermost>`, for `instant messaging <https://mattermost.com/>`__ at ``chat.example.com``
* :doc:`cloud <services/enough>`, for `file sharing <https://nextcloud.com/>`__ at ``cloud.example.com``
* :doc:`cloud <services/nextcloud>`, for `file sharing <https://nextcloud.com/>`__ at ``cloud.example.com``
* ``forum``, for `discussions and mailing lists <https://www.discourse.org/>`__ at ``forum.example.com``
* ``packages``, a `static web service <https://www.nginx.com/>`__ at ``packages.example.com``
* ``pad``, for `collaborative note taking <https://etherpad.org/>`__ at ``pad.example.com``
@ -161,12 +163,14 @@ fails. For instance, if `bind-host` is lost:
$ enough --domain example.com host create bind-host
$ enough --domain example.com playbook
However, most services such as :doc:`file sharing <services/enough>`
However, most services such as :doc:`file sharing <services/nextcloud>`
and :doc:`translations <services/weblate>` rely on persistent
information that is located in an encrypted volume attached to the
machine. A daily :doc:`backup <services/backup>` is made in case a
file is inadvertently lost.
.. _restore_service_from_backup:
Restore a service from a backup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -182,7 +186,7 @@ To restore the volume attached to a service from a designated backup:
In this example, the restoration is done as follows:
* The :doc:`cloud service <services/enough>` is created, if it does not
* The :doc:`cloud service <services/nextcloud>` is created, if it does not
already exist.
* The machine (``cloud-host``) attached to the volume (``cloud-volume``) is
@ -225,14 +229,14 @@ cloning is done as follows:
* A volume is created from the ``2020-04-12-cloud-volume``, in the
``~/.enough/test-clouds.yml`` OpenStack region.
* The :doc:`cloud service <services/enough>` is created, as well as
* The :doc:`cloud service <services/nextcloud>` is created, as well as
all the services it depends on, if they do not already
exist. Including the :doc:`DNS server <services/bind>`.
* The ``test.d.enough.community`` domain is delegated to the
:doc:`DNS server <services/bind>` located in the ``~/.enough/test-clouds.yml``
OpenStack region so that ``https://cloud.test.d.enough.community`` resolves
to the newly created :doc:`cloud service <services/enough>`.
to the newly created :doc:`cloud service <services/nextcloud>`.
Infrastructure services and access
----------------------------------
@ -295,6 +299,8 @@ The default can also be changed for a given host (for instance
`weblate-host`) by setting the desired value in the
`~/.enough/example.com/inventory/host_vars/weblate-host/network.yml` file.
.. _attached_volumes:
Attached volumes
~~~~~~~~~~~~~~~~

6
inventory/group_vars/all/openvpn.yml

@ -12,9 +12,3 @@ openvpn_local_directory: "{{ enough_domain_config_directory }}/openvpn"
# List of active openvpn clients
#
openvpn_active_clients: []
#
#############################################
#
# List of retired openvpn clients
#
openvpn_retired_clients: []

5
inventory/host_vars/icinga-host/monitoring.yml

@ -5,4 +5,9 @@
# mail address icinga will use to send notifications
#
icingaadmins_email: icingaadmins@{{ domain }}
#
#############################################
#
# DO NOT MODIFY BELOW THIS LINE
#
icinga_vhost_fqdn: icinga.{{ domain }}

2
playbooks/openvpn/roles/openvpn/defaults/main.yml

@ -38,5 +38,7 @@ openvpn_public_ip: "{{ ansible_host }}"
#
######################################################
#
# DO NOT MODIFY VARIABLES BELOW THIS LINE
#
openvpn_local_directory: /tmp/enough-openvpn
openvpn_overwrite_nftables_conf: yes