Ansible Roles¶
Documentation for roles included in system-config
There are two types of roles. Top-level roles, kept in the roles/ directory, are available to be used as roles in Zuul jobs. This places some constraints on the roles, such as not being able to use plugins. Add

  roles:
    - zuul: openstack-infra/system-config

to your job definition to source these roles.
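For context, a minimal sketch of a complete Zuul job definition sourcing these roles; the job name and playbook path are hypothetical:

  # Hypothetical job; only the roles stanza above is prescribed.
  - job:
      name: example-system-config-job
      roles:
        - zuul: openstack-infra/system-config
      run: playbooks/example.yaml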
Roles in playbooks/roles are designed to be run on the Infrastructure control-plane (i.e. from bridge.openstack.org). These roles are not available to be shared with Zuul jobs.
Role documentation¶
- accessbot¶
Set up accessbot
- add-inventory-known-hosts¶
Add the host keys from inventory to global known_hosts
- afs-release¶
afs-release
Install the script and related bits and pieces for periodic release of various AFS volumes. This role is really only intended to be run on the mirror-update host, as it uses the ssh-key installed by that host to run vos release under -localauth on the remote AFS servers.
Role Variables
- afsmon¶
afsmon
Install the afsmon tool and related bits and pieces for periodic monitoring of AFS volumes. This role is really only intended to be run on the mirror-update host as we only need one instance of it running.
Role Variables
- apache-ua-filter¶
Reject requests from problematic user agent strings
This role installs and configures a filter macro called UserAgentFilter which can be included in Apache vhosts.
Role Variables
- base¶
Directory to hold base roles.
- bazelisk-build¶
Run bazelisk build
Runs bazelisk build with the specified targets.
Role Variables
-
bazelisk_targets¶
Default:["release"]
The bazelisk targets to build.
-
bazelisk_executable¶
Default:bazelisk
The path to the bazelisk executable.
-
zuul_work_dir¶
Default:{{ ansible_user_dir }}/{{ zuul.project.src_dir }}
The working directory in which to run bazelisk.
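As an illustration, a hedged sketch of a play applying this role; the hosts pattern, target, and executable path are hypothetical:

  - hosts: all
    roles:
      - role: bazelisk-build
        vars:
          bazelisk_targets:
            - release                                  # hypothetical build target
          bazelisk_executable: /usr/local/bin/bazelisk # hypothetical path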
- borg-backup¶
Configure a host to be backed up
This role sets up a host to use borg for backup to any hosts in the borg-backup-server group. A separate ssh key will be generated for root to connect to the backup server(s) and the host key for the backup servers will be accepted to the host.

The borg tool is installed and a cron job is set up to run the backup periodically.

Note the borg-backup-server role must run after this to create the user correctly on the backup server. This role sets a tuple borg_user with the username and public key; the borg-backup-server role uses this variable for each host in the borg-backup group to initialise users.

Hosts can place scripts into /etc/borg-streams; each should output to stdout the data to be fed into a backup archive on each run. This will be saved to an archive with the name of the file. This is useful for raw database dumps, which allow borg to deduplicate as much as possible.

Role Variables
-
borg_username¶
The username to connect to the backup server. If this is left undefined, it will be automatically set to
borg-$(hostname)
-
borg_backup_excludes_extra¶
Default:[]
A list of extra items to pass as
--exclude
arguments to borg. Appended to the global default list of excludes set withborg_backup_excludes
.
-
borg_backup_dirs_extra¶
Default:[]
A list of extra directories to backup. Appended to the global default list of directories set with
borg_backup_dirs
.
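As a sketch only (directory names are hypothetical), a play applying this role with the extra-directory and exclude variables:

  - hosts: borg-backup
    roles:
      - role: borg-backup
        vars:
          borg_backup_dirs_extra:
            - /var/lib/exampleapp         # hypothetical extra directory
          borg_backup_excludes_extra:
            - /var/lib/exampleapp/cache   # hypothetical exclude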
- borg-backup-server¶
Setup backup server
This role configures backup server(s) in the borg-backup-server group to accept backups from remote hosts.

Note that the borg-backup role must have run on each host in the borg-backup group before this role. That role will create a borg_user tuple in the hostvars for each host, consisting of the required username and public key.

Each required user gets a separate home directory in /opt/backups. Their authorized_keys file is configured with the public key to allow the remote host to log in and only run borg in server mode.

Role Variables
-
borg_retire_users¶
Default:[]
A list of backup user names that are in a “retired” state. The host should not be in the inventory or active. The backup user will be disabled, and when running a prune we will only keep the latest backup to save space.
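For instance, retiring the backup user of a removed host might look like this sketch (the username is hypothetical):

  borg_retire_users:
    - borg-oldserver01   # hypothetical user for a host no longer in the inventory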
- codesearch¶
Run a hound container to index Opendev code
- configure-kubectl¶
Configure kube config files
Configure kubernetes files needed by kubectl.
Role Variables
-
kube_config_dir¶
Default:/root/.kube
-
kube_config_owner¶
Default:root
-
kube_config_group¶
Default:root
-
kube_config_file¶
Default:{{ kube_config_dir }}/config
-
kube_config_template¶
- configure-openstacksdk¶
Configure openstacksdk files
Configure openstacksdk files needed by nodepool and ansible.
Role Variables
-
openstacksdk_config_dir¶
Default:/etc/openstack
-
openstacksdk_config_owner¶
Default:root
-
openstacksdk_config_group¶
Default:root
-
openstacksdk_config_file¶
Default:{{ openstacksdk_config_dir }}/clouds.yaml
-
openstacksdk_config_template¶
- create-venv¶
Create a venv
You would think this role is unnecessary and roles could just install a venv directly … except sometimes pip/setuptools get out of date on a platform and can’t understand how to install compatible things. For example, the pip shipped on Bionic will upgrade itself to a version that doesn’t support Python 3.6 because it doesn’t understand the metadata tags the new version marks itself with. We’ve seen similar problems with wheels. History has shown that whenever this problem appears solved, another issue will appear. So for reasons like this, we have this as a synchronization point for setting up venvs.

Role Variables
-
create_venv_path¶
Default:unset
Required argument; the directory to make the
venv
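A minimal sketch of calling this role (the venv path is hypothetical):

  - include_role:
      name: create-venv
    vars:
      create_venv_path: /usr/local/example-venv   # hypothetical venv location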
- disable-puppet-agent¶
Disable the puppet-agent service on a host
Role Variables
None
- dstat-logger¶
Install, configure, and run a dstat logger
This is primarily useful for testing where we don’t have instances hooked up to cacti. You can use this to get a csv log file at /var/log/dstat-csv.log in test jobs that records similar system performance information.
- edit-secrets-script¶
This role installs a script called edit-secrets to /usr/local/bin that allows you to safely edit the secrets file without needing to manage gpg-agent yourself.
- etherpad¶
Run an Etherpad server.
- gerrit¶
Run Gerrit.
This role deploys MariaDB alongside the Gerrit service using docker-compose to run both services in docker containers. Variables below configure MariaDB connection details.
Role Variables
-
gerrit_reviewdb_mariadb_dbname¶
Default:gerrit
The database to make and connect to.
-
gerrit_reviewdb_mariadb_username¶
Default:gerrit
The MariaDB user to make and connect with.
-
gerrit_reviewdb_mariadb_password¶
Default:<unset>
The password to set for
gerrit_reviewdb_mariadb_username
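For illustration, a sketch of these variables; the vaulted password variable is hypothetical:

  gerrit_reviewdb_mariadb_dbname: gerrit
  gerrit_reviewdb_mariadb_username: gerrit
  gerrit_reviewdb_mariadb_password: "{{ vault_gerrit_db_password }}"   # hypothetical secret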
- gerritbot¶
Set up gerritbot
- gitea¶
Install, configure, and run Gitea.
Role Variables
- gitea-git-repos¶
Create git repos on a gitea server
Role Variables
-
gitea_url¶
Default:https://localhost:3000
The gitea server to talk to. This is evaluated relative to the remote ansible node, which means localhost:3000 refers to the ansible inventory node.
-
gitea_always_update¶
Default:false
Whether or not all projects should be coerced to the configured and desired state. This defaults to false because it can be expensive to run, but if project attributes like issue trackers or descriptions change, set this to make those changes.
- gitea-lb¶
Install the gitea-lb services
This configures haproxy
Role Variables
-
gitea_lb_listeners¶
The backends to configure
- gitea-set-org-logos¶
Set custom logos for organisations in gitea
Note that logos should be a PNG file. We’ve standardised on 400x400 pixels to keep it simple.
Images should respect the limits set by gitea; see https://docs.gitea.io/en-us/config-cheat-sheet/#picture-picture
- grafana¶
Run Grafana
- graphite¶
Run Graphite
- haproxy¶
Install, configure, and run a haproxy server.
Role Variables
-
haproxy_config_template¶
Default:Undefined
Type: string The config template to install for haproxy. Must be defined.
-
haproxy_run_statsd¶
Default:True
Type: string Run the
haproxy-statsd
docker container to report back-end stats to graphite.opendev.org
- import-gpg-key¶
import-gpg-key
Import a gpg ASCII armored public key to the local keystore.
Role Variables
-
gpg_key_id¶
The ID of the key to import. If it already exists, the file is not imported.
-
gpg_key_asc¶
The path of the ASCII armored GPG key to import
- install-ansible¶
Install and configure Ansible on a host via pip
This will install ansible into a virtualenv at
/usr/ansible-venv
Role Variables
-
install_ansible_requirements¶
Default:[ansible, openstacksdk]
The packages to install into the virtualenv. A list in Python
requirements.txt
format.
-
install_ansible_collections¶
Default:undefined
A list of Ansible collections to install. In the format
-
install_ansible_ara_enable¶
Default:false
Whether or not to install the ARA Records Ansible callback plugin into Ansible. If using the default install_ansible_requirements, the ARA package will be installed too.
-
install_ansible_ara_config¶
A dictionary of configuration keys and their values for ARA’s Ansible plugins.
Default configuration keys:

* api_client: offline (can be http for sending to remote API servers)
* api_server: http://127.0.0.1:8000 (has no effect when using offline)
* api_username: null (if required, an API username)
* api_password: null (if required, an API password)
* api_timeout: 30 (the timeout on http requests)

For a list of available configuration options, see the ARA documentation.
- install-ansible-roles¶
Install additional Ansible roles from git repos
- install-apt-repo¶
Install an APT repo
Role Variables
-
repo_name¶
The name of the repo (used for filenames).
-
repo_key¶
The contents of the GPG key, ASCII armored.
-
repo_content¶
The file content for the sources list.
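An illustrative sketch (the repo name, key, and sources entry are hypothetical):

  repo_name: example                    # used for filenames
  repo_key: |
    -----BEGIN PGP PUBLIC KEY BLOCK-----
    ...ASCII armored key content...
    -----END PGP PUBLIC KEY BLOCK-----
  repo_content: |
    deb https://apt.example.org/ubuntu focal main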
- install-borg¶
Install borg backup tool to /opt/borg
Install borg to a virtualenv; the binary will be available at /opt/borg/bin/borg.

Role Variables
-
borg_version¶
The version of
borg
to install. This should likely be pinned to be the same between server and client.
- install-certcheck¶
Install ssl-cert-check
Installs the ssl-cert-check tool and a cron job to check the freshness of the SSL certificates for the configured domains daily.
Role Variables
-
ssl_cert_check_domain_list¶
Default:/var/lib/certcheck/domainlist
The list of domains to check
-
ssl_cert_check_days¶
Default:30
Warn about certificates that have fewer than this number of days to expiry.
-
ssl_cert_check_email¶
Default:root
The email to send reports to
- install-docker¶
An ansible role to install docker in the OpenStack infra production environment
This also installs a log redirector for syslog docker-<appname> tags. For most containers, this can be set up in the compose file with a section such as:

  logging:
    driver: syslog
    options:
      tag: docker-<appname>
Role Variables
-
use_upstream_docker¶
Default:True
By default this role adds repositories to install docker from upstream docker. Set this to False to use the docker that comes with the distro.
-
docker_update_channel¶
Default:stable
Which update channel to use for upstream docker. The two choices are stable, which is the default and updates quarterly, and edge, which updates monthly.
- install-kubectl¶
Install kubectl
Role Variables
None
- install-launch-node¶
Install the launch node script to a venv
- install-podman¶
An ansible role to install podman in the OpenDev production environment
- iptables¶
Install and configure iptables
Role Variables
-
iptables_allowed_hosts¶
Default:[]
A list of dictionaries, each item in the list is a rule to add for a host/port combination. The format of the dictionary is:
-
iptables_allowed_hosts.hostname¶
The hostname to allow. It will automatically be resolved, and the inventory IP address will be added to the firewall.
-
iptables_allowed_hosts.protocol¶
One of “tcp” or “udp”.
-
iptables_allowed_hosts.port¶
The port number.
-
iptables_allowed_groups¶
Default:[]
A list of dictionaries, each item in the list is a rule to add for a host/port combination. The format of the dictionary is:
-
iptables_allowed_groups.group¶
The ansible inventory group to add. Every host in the group will be added to the firewall.
-
iptables_allowed_groups.protocol¶
One of “tcp” or “udp”.
-
iptables_allowed_groups.port¶
The port number.
-
iptables_public_tcp_ports¶
Default:[]
A list of public TCP ports to open.
-
iptables_public_udp_ports¶
Default:[]
A list of public UDP ports to open.
-
iptables_rules¶
Default:[]
A list of iptables ingress rules. Each item is a string containing the iptables command line options for the rule. These will be expanded to cover IPv4 and IPv6.
-
iptables_rules_v4¶
Default:[]
A list of iptables v4 ingress rules. Each item is a string containing the iptables command line options for the rule.
-
iptables_rules_v6¶
Default:[]
A list of iptables v6 ingress rules. Each item is a string containing the iptables command line options for the rule.
-
iptables_egress_rules¶
Default:[]
A list of iptables egress rules. Each item is a string containing the iptables command line options for the rule. These will be expanded to cover IPv4 and IPv6.
-
iptables_egress_rules_v4¶
Default:[]
A list of iptables v4 egress rules. Each item is a string containing the iptables command line options for the rule.
-
iptables_egress_rules_v6¶
Default:[]
A list of iptables v6 egress rules. Each item is a string containing the iptables command line options for the rule.
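Putting the formats above together, a hedged example of host variables; the hostname, group, ports, and rule string are hypothetical:

  iptables_public_tcp_ports: [22, 80, 443]
  iptables_public_udp_ports: []
  iptables_allowed_hosts:
    - hostname: stats.example.org   # resolved; its inventory IP is allowed
      protocol: udp
      port: 8125
  iptables_allowed_groups:
    - group: example-group          # every host in this group is allowed
      protocol: tcp
      port: 7900
  iptables_rules:
    - "-p tcp --dport 8080 -s 192.0.2.0/24 -j ACCEPT"   # hypothetical rule options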
- jaeger¶
Run a Jaeger (tracing) server.
- jitsi-meet¶
Install, configure, and run jitsi-meet.
Note that the Jitsi Meet docker images supply template files in /defaults on the image. These template files are used to generate configs in /config on the image (/var/jitsi-meet on the host) using the docker-compose .env file and its vars.
If we need to make changes to the configs, we need to bind mount in a modified template file so that the config file generation produces what we expect. If we try to write the config files ourselves, then when jitsi-meet restarts we will lose those configs until the next ansible run.
- kerberos-client¶
An ansible role to configure a kerberos client
Note
k5start is installed on Debuntu distributions, but is not part of RedHat distributions.

Role Variables
-
kerberos_realm¶
The realm for Kerberos authentication. You must set the realm, e.g. MY.COMPANY.COM. This will be the default realm.
-
kerberos_admin_server¶
Default:{{ ansible_fqdn }}
The host where the administration server is running. Typically this is the master Kerberos server.
-
kerberos_kdcs¶
Default:[ {{ ansible_fqdn }} ]
A list of key distribution center (KDC) hostnames for the realm.
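A sketch with hypothetical realm and hostnames:

  kerberos_realm: MY.COMPANY.COM
  kerberos_admin_server: kdc01.my.company.com
  kerberos_kdcs:
    - kdc01.my.company.com
    - kdc02.my.company.com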
- kerberos-kdc¶
Configure a Kerberos KDC server
All KDC servers (primary and replicas) should be in a common kerberos-kdc group that defines kerberos_kdc_realm and kerberos_kdc_master_key.

The kerberos-kdc-primary group should have a single primary KDC host. It will be configured to replicate its database to hosts in the kerberos-kdc-replica group.

Hosts in the kerberos-kdc-replica group will be configured to receive updates from the kerberos-kdc-primary host.

The role should be run twice; once limited to the primary group and then a second time limited to the replica group.
Role Variables
-
kerberos_kdc_realm¶
The realm for all KDC servers.
-
kerberos_kdc_master_key¶
The master key written into the stash file for each KDC, which allows them to authenticate.
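For instance, group variables for the common kerberos-kdc group might look like this sketch (the realm and vaulted key variable are hypothetical):

  kerberos_kdc_realm: MY.COMPANY.COM
  kerberos_kdc_master_key: "{{ vault_kdc_master_key }}"   # hypothetical secret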
- keycloak¶
Run a Keycloak server.
- letsencrypt-acme-sh-install¶
Install acme.sh client
This makes the acme.sh client available on the host.

Additionally, a driver.sh script is installed to run the authentication procedure and parse output.

Role Variables
-
letsencrypt_gid¶
Default:unset
Unix group gid for the letsencrypt group which has permissions on the /etc/letsencrypt-certificates directory. If unset, uses system default. Useful if this conflicts with another role that assumes a gid value.
-
letsencrypt_account_email¶
Default:undefined
The email address to register with accounts. Renewal mail and other info may be sent here. Must be defined.
- letsencrypt-config-certcheck¶
Generate SSL check list
This role automatically generates a list of domains for the certificate validation checks. This ensures our certificates are valid and are being renewed as expected.
This role must run after the letsencrypt-request-certs role, as that builds the letsencrypt_certcheck_domains variable for each host and certificate. It must also run on a host that has already run the install-certcheck role.

Role Variables
-
letsencrypt_certcheck_domain_list¶
Default:/var/lib/certcheck/ssldomains
The ssl-cert-check domain configuration file to write. See also the
install-certcheck
role.
-
letsencrypt_certcheck_additional_domains¶
Default:[]
A list of additional domains to check for hosts not using the letsencrypt-* roles. Each entry should be in the format hostname port.
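An illustrative value for the additional-domains list (hostnames and port are hypothetical):

  letsencrypt_certcheck_additional_domains:
    - "static.example.org 443"   # checked on the default HTTPS port
    - "paste.example.org 5000"   # certificate served on another port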
- letsencrypt-create-certs¶
Generate letsencrypt certificates
This must run after the letsencrypt-acme-sh-install, letsencrypt-request-certs and letsencrypt-install-txt-record roles. It will run the acme.sh process to create the certificates on the host.

Role Variables
-
letsencrypt_self_sign_only¶
Default:False
If set to True, will locally generate self-signed certificates in the same locations the real script would, instead of contacting letsencrypt. This is set during gate testing as the authentication tokens are not available.
-
letsencrypt_self_generate_tokens¶
Default:False
When set to True, self-generate fake DNS-01 TXT tokens rather than acquiring them through the ACME process with letsencrypt. This avoids leaving “half-open” challenges during gate testing, where we have no way to publish the DNS TXT records letsencrypt gives us to complete the certificate issue. This should be True if letsencrypt_self_sign_only is True (unless you wish to specifically test the acme.sh operation).
-
letsencrypt_use_staging¶
Default:False
If set to True will use the letsencrypt staging environment, rather than make production requests. Useful during initial provisioning of hosts to avoid affecting production quotas.
-
letsencrypt_certs¶
The same variable as described in
letsencrypt-request-certs
.
- letsencrypt-install-txt-record¶
Install authentication records for letsencrypt
Install TXT records to the acme.opendev.org domain. This role runs only on the adns server, and assumes ownership of the /var/lib/bind/zones/acme.opendev.org/zone.db file. After installation the nameserver is refreshed.

After this, letsencrypt-create-certs can run on each host to provision the certificates.

Role Variables
-
acme_txt_required¶
A global dictionary of TXT records to be installed. This is generated in a prior step on each host by the letsencrypt-request-certs role.
- letsencrypt-request-certs¶
Request certificates from letsencrypt
The role requests certificates (or renews expiring certificates, which is fundamentally the same thing) from letsencrypt for a host. This requires the acme.sh tool and driver, which should have been installed by the letsencrypt-acme-sh-install role.

This role does not create the certificates. It will request the certificates from letsencrypt and populate the authentication data into the acme_txt_required variable. These values need to be installed and activated on the DNS server by the letsencrypt-install-txt-record role; the letsencrypt-create-certs role will then finish the certificate provisioning process.

Role Variables
-
letsencrypt_self_generate_tokens¶
Default:False
When set to True, self-generate fake DNS-01 TXT tokens rather than acquiring them through the ACME process with letsencrypt. This avoids leaving “half-open” challenges during gate testing, where we have no way to publish the DNS TXT records letsencrypt gives us to complete the certificate issue. This should be True if letsencrypt_self_sign_only is True (unless you wish to specifically test the acme.sh operation).
-
letsencrypt_use_staging¶
If set to True will use the letsencrypt staging environment, rather than make production requests. Useful during initial provisioning of hosts to avoid affecting production quotas.
-
letsencrypt_certs¶
A host wanting a certificate should define a dictionary variable letsencrypt_certs. Each key in this dictionary is a separate certificate to create (i.e. a host can create multiple separate certificates). Each key should have a list of hostnames valid for that certificate. The certificate will be named for the first entry. Naming the cert for the service (rather than the hostname) will simplify references to the file (for example in Apache VirtualHost configs), so listing it first is preferred.

For example:
  letsencrypt_certs:
    hostname-main-cert:
      - hostname.opendev.org
      - hostname01.opendev.org
    hostname-secondary-cert:
      - foo.opendev.org
will ultimately result in two certificates being provisioned on the host, in /etc/letsencrypt-certs/hostname.opendev.org and /etc/letsencrypt-certs/foo.opendev.org.
Note the creation role letsencrypt-create-certs will call a handler letsencrypt updated {{ key }} (for example, letsencrypt updated hostname-main-cert) when that certificate is created or updated. Because Ansible errors if a handler is called with no listeners, you must define a listener for the event; letsencrypt-create-certs has handlers/main.yaml where handlers can be defined. Since handlers reside in a global namespace, you should choose an appropriately unique name.
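A minimal sketch of such a listener in handlers/main.yaml, assuming (hypothetically) that Apache should be reloaded when the hostname-main-cert certificate is created or updated:

  - name: letsencrypt updated hostname-main-cert   # must match the handler name exactly
    service:
      name: apache2
      state: reloaded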
Note that each entry will require a CNAME pointing the ACME challenge domain to the TXT record that will be created in the signing domain. For the example above, the following records would need to be pre-created:

  _acme-challenge.hostname01.opendev.org. IN CNAME acme.opendev.org.
  _acme-challenge.hostname.opendev.org.   IN CNAME acme.opendev.org.
  _acme-challenge.foo.opendev.org.        IN CNAME acme.opendev.org.
The hostname in the first entry for each certificate will be registered with the letsencrypt-config-certcheck role for periodic freshness tests; from the example above, hostname.opendev.org and foo.opendev.org would be checked. By default this will check on port 443; if the certificate is actually active on another port, you can specify it after a colon, e.g. foo.opendev.org:5000 would indicate this host listens with this certificate on port 5000.
- limnoria¶
Setup limnoria and meetbot logging
TODO:

* ubuntu-bots bug tracker to highlight bug links
* https://git.launchpad.net/~krytarik/ubuntu-bots/+git/ubuntu-bots/
- lodgeit¶
lodgeit
Paste service. Runs a mariadb container and lodgeit container.
Role Variables
-
lodgeit_db_username¶
Default:lodgeit
db username
-
lodgeit_db_password¶
Default:<unset>
The password to set for lodgeit_db_username.
-
lodgeit_db_dbname¶
Default:lodgeit
database to connect to
-
lodgeit_secret_key¶
Default:<unset>
secret key
- logrotate¶
Add log rotation file
Note
This role does not manage the logrotate package or configuration directory; it is assumed to be installed and available.

This role installs a log rotation file in /etc/logrotate.d/ for a given file.

For information on the directives see logrotate.conf(5). This is not an exhaustive list of directives (contributions are welcome).

Role Variables
-
logrotate_file_name¶
The full path to log file on disk to rotate. May be a wild-card; e.g.
/var/log/progname/*.log
.
-
logrotate_config_file_name¶
Default:Unique name based on the hash of logrotate_file_name
The name of the configuration file in /etc/logrotate.d. If this is specified, it is up to the caller to ensure it is unique across all calls of this role.
-
logrotate_compress¶
Default:yes
-
logrotate_copytruncate¶
Default:yes
-
logrotate_delaycompress¶
Default:yes
-
logrotate_missingok¶
Default:yes
-
logrotate_rotate¶
Default:7
-
logrotate_frequency¶
Default:daily
One of hourly, daily, weekly, monthly, yearly or size.

If choosing size, logrotate_size must be specified.
-
logrotate_size¶
Default:None
Size; e.g. 100K, 10M, 1G. Only used when logrotate_frequency is size.
-
logrotate_notifempty¶
Default:yes
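A hedged sketch of invoking the role (the log path and values are hypothetical):

  - include_role:
      name: logrotate
    vars:
      logrotate_file_name: /var/log/exampleapp/*.log   # hypothetical wildcard path
      logrotate_frequency: weekly
      logrotate_rotate: 4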
- mailman3¶
Role to configure mailman3.
- mariadb¶
Run MariaDB
This role deploys a standalone MariaDB using docker-compose. Variables below configure MariaDB connection details.
Role Variables
-
mariadb_dbname¶
The database to create.
-
mariadb_username¶
The MariaDB user to make and connect with.
-
mariadb_password¶
The password to set for
mariadb_username
-
mariadb_root_password¶
The password to set for the root mariadb user.
- master-nameserver¶
Configure a hidden master nameserver
This role installs and configures bind9 to be a hidden master nameserver.
Role Variables
-
tsig_key¶
Type: dict The TSIG key used to control named.
-
tsig_key{}.algorithm¶
The algorithm used by the key.
-
tsig_key{}.secret¶
The secret portion of the key.
-
dnssec_keys¶
Type: dict This is a dictionary of DNSSEC keys. Each entry is a dnssec key, where the dictionary key is the dnssec key id and the value is a dictionary with the following contents:
-
dnssec_keys{}.zone¶
The name of the zone for this key.
-
dnssec_keys{}.public¶
The public portion of this key.
-
dnssec_keys{}.private¶
The private portion of this key.
-
dns_repos¶
Type: list A list of zone file repos to check out on the server. Each item in the list is a dictionary with the following keys:
-
dns_repos[].name¶
The name of the repo.
-
dns_repos[].url¶
The URL of the git repository.
-
dns_repos[].refspec¶
An additional refspec passed to the git checkout.
-
dns_repos[].version¶
An additional version passed to the git checkout
-
dns_zones¶
Type: list A list of zones that should be served by named. Each item in the list is a dictionary with the following keys:
-
dns_zones[].name¶
The name of the zone.
-
dns_zones[].source¶
The repo name and path of the directory containing the zone file. For example, if a repo was provided to master-nameserver.dns_repos.name with the name example.com, and within that repo the zone.db file was located at zones/example_com/zone.db, then the value here should be example.com/zones/example_com.
-
dns_zones[].unmanaged¶
Default:False
Type: bool If True, the zone is considered unmanaged. The source file will be put in place if it does not exist, but will otherwise be left alone.
-
dns_notify¶
Type: list A list of IP addresses of nameservers which named should notify on updates.
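Tying the variables together, a sketch with placeholder values (the algorithm, secret, repo URL, and notify address are hypothetical):

  tsig_key:
    algorithm: hmac-sha256              # hypothetical algorithm
    secret: "{{ vault_tsig_secret }}"   # hypothetical vaulted secret
  dns_repos:
    - name: example.com                 # hypothetical zone repo
      url: https://git.example.org/example/zone-example
  dns_zones:
    - name: example.com
      source: example.com/zones/example_com
  dns_notify:
    - 203.0.113.10                      # documentation-range address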
- matrix-eavesdrop¶
Run a matrix-eavesdrop bot
- matrix-gerritbot¶
Run the gerritbot-matrix bot.
Create the gerritbot_matrix_access_token with this command:
  HOMESERVER_URL="https://opendev.ems.host"
  USER="@gerritbot:opendev.org"
  PASS="supersecret"
  export MATRIX_TOKEN=$(curl -XPOST ${HOMESERVER_URL}/_matrix/client/r0/login \
    -d '{"user": "'${USER}'", "password": "'${PASS}'", "type": "m.login.password"}' \
    | jq -r ".access_token")
  echo "gerritbot_matrix_access_token: ${MATRIX_TOKEN}"
Verify the token:
  curl -H "Authorization: Bearer ${MATRIX_TOKEN}" ${HOMESERVER_URL}/_matrix/client/r0/account/whoami
Delete the token:
  curl -H "Authorization: Bearer ${MATRIX_TOKEN}" -X POST ${HOMESERVER_URL}/_matrix/client/r0/logout -d '{}'
Create the gerritbot_matrix_identity_token with this command:
  MATRIX_OPENID=$(curl -XPOST ${HOMESERVER_URL}/_matrix/client/r0/user/${USER}/openid/request_token \
    -H "Authorization: Bearer ${MATRIX_TOKEN}" -d '{}')
  IDENTITY_URL="https://matrix.org"
  export MATRIX_IDENTITY_TOKEN=$(curl -XPOST ${IDENTITY_URL}/_matrix/identity/v2/account/register \
    -d "${MATRIX_OPENID}" | jq -r '.access_token')
  echo "gerritbot_matrix_identity_token: ${MATRIX_IDENTITY_TOKEN}"
You might need to accept matrix terms:
  curl -H "Authorization: Bearer ${MATRIX_IDENTITY_TOKEN}" ${IDENTITY_URL}/_matrix/identity/v2/terms
  curl -XPOST ${IDENTITY_URL}/_matrix/identity/v2/terms \
    -H "Authorization: Bearer ${MATRIX_IDENTITY_TOKEN}" \
    -d '{"user_accepts": ["https://matrix.org/legal/identity-server-privacy-notice-1"]}'
- mirror¶
Configure an opendev mirror
This role installs and configures a mirror node.
Role Variables
- mirror-update¶
mirror-update
This role sets up the mirror-update host, which does the periodic sync of upstream mirrors to the AFS volumes.

It is not intended to be a particularly generic or flexible role, as there is usually only one instance of the mirror-update host (to avoid conflicting updates).

At this stage, it handles the mirrors that are updated by rsync only. It is expected that it will grow to cover mirroring other volumes that are currently done by the legacy openstack.org host and managed by puppet.

Role Variables
- nameserver¶
Configure an authoritative nameserver
This role installs and configures nsd to be an authoritative nameserver.
Role Variables
-
tsig_key¶
Type: dict The TSIG key used to authenticate connections between nameservers.
-
tsig_key{}.algorithm¶
The algorithm used by the key.
-
tsig_key{}.secret¶
The secret portion of the key.
-
dns_zones¶
Type: list A list of zones that should be served by named. Each item in the list is a dictionary with the following keys:
-
dns_zones[].name¶
The name of the zone.
-
dns_zones[].source¶
The repo name and path of the directory containing the zone file. For example, if a repo was provided to master-nameserver.dns_repos.name with the name example.com, and within that repo the zone.db file was located at zones/example_com/zone.db, then the value here should be example.com/zones/example_com.
-
dns_master_ipv4¶
Required argument. The IPv4 addresses of the master nameserver.
-
dns_master_ipv6¶
Required argument. The IPv6 addresses of the master nameserver.
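A sketch of the required variables with placeholder values, assuming a single master address:

  dns_master_ipv4: 203.0.113.10         # documentation-range example
  dns_master_ipv6: "2001:db8::10"
  dns_zones:
    - name: example.com
      source: example.com/zones/example_com
  tsig_key:
    algorithm: hmac-sha256              # hypothetical algorithm
    secret: "{{ vault_tsig_secret }}"   # hypothetical vaulted secret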
- nodepool-base¶
nodepool base setup
Role Variables
-
nodepool_base_install_zookeeper¶
Install zookeeper to the node. This is not expected to be used in production, where the nodes would connect to an externally configured zookeeper instance. It can be useful for basic loopback tests in the gate, however.
- nodepool-builder¶
Deploy nodepool-builder container
Role Variables
-
nodepool_builder_container_tag¶
Default:unset
Override tag for container deployment
-
nodepool_builder_upload_workers¶
Default:8
The number of upload workers
- nodepool-launcher¶
Deploy nodepool launchers
- openafs-client¶
An ansible role to configure an OpenAFS client
Note
This role uses system packages where available, but for platforms or architectures where they are not available it will utilise external packages. Defaults will pick packages built from the OpenDev infra project, but you should evaluate if this is suitable for your environment.

This role configures the host to be an OpenAFS client. Because OpenAFS is very reliant on distribution internals, kernel versions and host architecture, this role has limited platform support. Currently supported are:

* Debian family with system packages available
* Ubuntu LTS family with external 1.8 series packages
* CentOS 7, 8-stream and 9-stream with external packages
Role Variables
-
openafs_client_cell¶
Default:openstack.org
The default cell.
-
openafs_client_cache_size¶
Default:500000
The OpenAFS client cache size, in kilobytes.
-
openafs_client_cache_directory¶
Default:/var/cache/openafs
The directory to store the OpenAFS cache files.
-
openafs_client_yum_repo_url¶
Default:https://tarballs.openstack.org/project-config/package-afs-centos7
The URL to a yum/dnf repository with the OpenAFS client RPMs. These are assumed to be created from the
.spec
file included in the OpenAFS distribution.
-
openafs_client_yum_repo_gpg_check¶
Default:no
Enable or disable gpg checking for openafs_client_yum_repo_url.
-
openafs_client_service_timeout_sec¶
Default:480
The TimeoutSec for service start. Accounting for the cache during startup can cause a high load which may necessitate a longer startup timeout on some platforms.
- openafs-db-server¶
Configure a host as an AFS db server (pts/vldb)
- openafs-file-server¶
Configure a host as an AFS fileserver
- openafs-server-config¶
Install openafs server components
- opendev-ca¶
Generate TLS certs for ZooKeeper
This will copy the certs to the remote node into the /etc/zuul directory by default.
- pip3¶
Install system packages for python3 pip and virtualenv
Role Variables
None
- ptgbot¶
Deploy ptgbot
- puppet-install¶
Install puppet on a host
Note
This role uses puppetlabs versions where available, in preference to system packages.

This role installs puppet on a host.
Role Variables
-
puppet_install_version¶
Default:3
The puppet version to install. Platform support varies between versions.
-
puppet_install_system_config_modules¶
Default:yes
If we should clone and run install_modules.sh from the OpenDev system-config repository to populate required puppet modules on the host.
- puppet-run¶
Run puppet on remote servers
Omnibus role that takes care of installing puppet and then running puppet. Uses include_role so that the installation of the puppet role can run as the first task, then the puppet role can be used in a following task.
This role should run after puppet-setup-ansible.
-
manifest¶
Default:manifests/site.pp
Puppet manifest file to run.
- puppet-setup-ansible¶
Setup Ansible on this host to run puppet on remote hosts.
Import the ansible-roles-puppet role for running puppet on remote hosts and bring in the repository of required puppet modules.
- rax-dns-backup¶
Backup Rackspace managed DNS domain names
Export a bind file for each of the domains used in the Rackspace managed DNS-as-a-service.
- refstack¶
Install, configure, and run a refstack server.
- registry¶
Install, configure, and run a Docker registry.
- reprepro¶
reprepro

Install reprepro configuration for various repositories. Note that this role is only intended to be called from the mirror-update role.
- root-keys¶
Write out root SSH private key
Role Variables
-
root_rsa_key¶
The root key to place in
/root/.ssh/id_rsa
- run-selenium¶
run-selenium
Run a selenium container that listens on port 4444 on a host.
This is intended only for use during gate testing to capture screenshots from a local service. Usually used from testinfra jobs.
- set-hostname¶
Set hostname
Statically set the hostname, hosts and mailname
Role Variables
None
- static¶
Configure a static webserver

This role installs and configures a static webserver to serve content published in AFS.
Role Variables
- statusbot¶
Deploy statusbot
Note
This should be turned into a Limnoria plugin. Until this is done, we run it as a separate daemon.
- sync-project-config¶
Sync project-config to remote host
This syncs the project-config repo checked out on the bastion host (which is actually running the Ansible that runs this role) to the current host. This repo holds configuration for some production hosts, and thus we want to make sure to deploy those services with the checked-out tree Zuul has prepared for a given deploy-pipeline CD job run (i.e. so we apply config updates in commit order).

Also see setup-src for where this checkout is set up; there are some tricks: for example, for hourly and periodic jobs we want to ensure we run from master at the time the job runs, not at the time the job was enqueued.
- vos-release¶
vos release with localauth
Install a user and script to do remote vos release with localauth authentication. This can avoid kerberos or AFS timeouts.

This relies on vos_release_keypair, which is expected to be a single keypair set previously by hosts in the “mirror-update” group. It will allow that keypair to run /usr/local/bin/vos_release.sh, which filters the incoming command. Releases are expected to be triggered on the update host with:

  ssh -i /root/.ssh/id_vos_release afs01.dfw.openstack.org vos release <mirror>.<volume>

Future work, if required:

* Allow multiple hosts to call the release script (i.e. handle multiple keys).
* Implement locking within the vos_release.sh script to prevent too many simultaneous releases.
Role Variables
-
vos_release_keypair¶
The authorized key to allow to run the
/usr/local/bin/vos_release.sh
script
- zookeeper¶
Install, configure, and run zookeeper servers.
- zuul¶
Install Zuul
- zuul-executor¶
Run Zuul Executor
- zuul-launcher¶
Run zuul launcher
- zuul-lb¶
Install the zuul-lb services
This configures haproxy
Role Variables
-
zuul_lb_listeners¶
The backends to configure
- zuul-merger¶
Run zuul merger
- zuul-preview¶
Install, configure, and run zuul-preview.
- zuul-scheduler¶
Run Zuul Scheduler
- zuul-status-backup¶
Backup zuul status info
- zuul-user¶
zuul user

Install a user zuul that has the per-project key from system-config as an authorized_key.

Role Variables
-
zuul_user_enable_sudo¶
Default:False
Enable passwordless
sudo
access for the zuul user.
-
zuul_user_authorized_key¶
Default:per-project key from system-config
Authorized key content for the zuul user.
- zuul-web¶
Run zuul-web and zuul-fingergw