Copyright 2019 Red Hat Inc.

This work is licensed under a Creative Commons Attribution 3.0
Unported License.


StoryBoard story: !/story/2006598

Move the services provided by the host to less centralised approaches more consistent with modern deployment trends.

Problem Description

The host is a monolithic server providing various hosting services via a large amount of volume-attached storage.

The immediate problem is that the host is currently running Ubuntu Trusty, which is reaching the end of its supported life.

The secondary problems are twofold:

Firstly, we would like to move the various publishing and hosting operations from centralised volumes on a single server to our AFS distributed file-system.

Secondly, we would like to make the hosting portion more OpenDev compatible; this means avoiding working on legacy deployment methods (i.e. puppet) and integrating with our general idea of a “whitebox” service that can be used by many different projects.

Thus we propose breaking up the services it offers to utilise more modern infrastructure alternatives, and retiring the host.

Proposed Change

We can break the services down as follows:

Log storage

Legacy log storage (~14tb)


Apache service redirects a number of legacy URLs to new locations

Static site serving

100gb attached partition holding various static sites (i.e. plain HTML publishing, no middleware, etc)


512gb partition which holds and publishes release tarballs for all projects.


apt-get dist-upgrade the host to a more recent distribution, fix any puppet issues, and ignore it until the next time it needs updating.



Primary assignee:


Gerrit Topic

Use Gerrit topic “static-services” for all patches related to this spec.

git-review -t static-services

Work Items

Log storage

OpenDev CI logs have been moved to various object-storage backends provided by donors. The existing logs will age out per our existing old-log cleanup jobs.

Since logs were always ephemeral there should be no issues with old links. For clarity we will remove (rather than redirect) the DNS entry so there is no confusion that logs might still live there.


  • remove DNS entries after the old log entries have cleared out

Legacy redirects

The following do straight redirects from their config hostnames to








The following have slightly different semantics

The following redirects to


Clearly there is a need for a generic ability to redirect various URLs as things change over time.

We will use a single containerised haproxy instance to handle redirects for the OpenDev project. Although initially it will simply be handling 302 redirects, it is imagined that future services can use it for its availability or load-balancing services as well. Note that gitea services also have their own load-balancer; although it reuses all the deployment mechanisms, the production service is kept separate to maintain isolation between probably the most important service (code) and more informational services.
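As a sketch of what the containerised haproxy redirect configuration might look like (all hostnames here are hypothetical placeholders; the actual entries will come from the proof-of-concept reviews), a minimal 302-redirect frontend could be:

```
# Minimal sketch of an haproxy frontend issuing 302 redirects.
# Hostnames are hypothetical examples, not the actual entries.
frontend redirects
    bind *:80
    # Redirect a legacy hostname to its new location, preserving the path
    http-request redirect code 302 location https://docs.example.org%[path] if { hdr(host) -i legacy.example.org }
    # Refuse any hostname we do not recognise
    http-request deny
```

Each legacy hostname would get its own `http-request redirect` line, keyed on the Host header.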

Proof-of-concept reviews are provided at:

The work items consist of:

  • approval of the above reviews

  • starting the production host

  • iterating the extant DNS records and pointing them to the new load-balancer

OpenDev infrastructure migration

We wish to provide new services only using our latest deployment methods, to avoid introducing even more legacy services and to provide a basis for the migration process to OpenDev services.

Although has an existing role as a webserver serving content from the / AFS mount, it is configured using legacy puppet. Thus a new server will be provisioned using our Ansible environment, rather than adding more hosts to legacy configuration.

This server should be a “whitebox” server that is capable of serving a range of domains that OpenDev would like to serve. However, its role will only be to serve static directories on AFS volumes. After this process, there will be numerous examples of SSL certificate generation, vhost configuration, AFS volume setup and publishing jobs for other projects to copy and implement.

Initially this server needs to serve https sites for the replacement services; namely







Currently, SSL certificates are manually provisioned and entered into puppet secret data, from where they are deployed to the host. We wish to use automatically renewing letsencrypt certificates per our other infrastructure, utilising our DNS based authentication. However, since the domain remains administered by external teams in RAX’s proprietary environment, we will make an exception and set up DNS validation records manually for these legacy sites until a full migration of the domain to OpenDev infrastructure is possible. Other domains will use OpenDev nameservers, which support automated DNS validation renewals.
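To illustrate the DNS-01 delegation (all names here are hypothetical; the real records depend on the domains involved), the manually-created record in the externally-managed zone points the ACME challenge name at an OpenDev-managed zone, where the tooling can write tokens automatically:

```
; In the externally-managed (RAX) zone: delegate the ACME
; challenge for a hypothetical site to an OpenDev-managed zone.
_acme-challenge.static.example.org.  IN  CNAME  acme.opendev-example.org.

; In the OpenDev-managed zone: the letsencrypt tooling writes the
; short-lived validation token here at each renewal.
acme.opendev-example.org.  IN  TXT  "<validation-token>"
```

The CA follows the CNAME when validating, so only the OpenDev-managed zone needs to change at renewal time.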

We will have the new server provisioned and ready before we begin the steps of migrating publishing locations. This means we can debug any setup issues outside production, and effects a zero-downtime cutover when the sites are ready.

Work items are as follows:

  • Write roles and tests to provision a new server which will be limited to running Apache and serving AFS directories.

  • Create the server

  • Create CNAME which will be the main service hostname, to provide for easier server replacement or other updates in the future.

  • Pre-provision https certificates for the above listed services

    • Using the RAX web interface for name services and the openstack infra permissions, set up _acme-challenge.<service> records as a CNAME to

    • Each site should have a separate certificate provisioned. The configuration would be something like

    • Debug any failures; however, the theory is (taking one example): the existing letsencrypt roles should request a certificate for on and receive the authentication key, which is placed in a TXT record in The certificate creation will trigger a lookup of which will be a CNAME to, which contains the correct TXT record. The certificate is issued on

  • Preconfigure the vhost configuration for the above sites (using prior provisioned keys for SSL)

  • Confirm correct operation of the sites with dummy content.
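The per-site vhost configuration could be sketched roughly as follows (hostnames and paths are hypothetical examples; the real configuration will come from the Ansible roles above):

```
# Sketch of an Apache vhost serving a static site from AFS.
# All names and paths are hypothetical examples.
<VirtualHost *:443>
    ServerName static.example.org
    DocumentRoot /afs/example.org/project/static.example.org

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt-certs/static.example.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt-certs/static.example.org/privkey.pem

    <Directory /afs/example.org/project/static.example.org>
        Require all granted
    </Directory>
</VirtualHost>
```

One such vhost per site keeps the certificate, document root and access rules independently manageable.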

Static hosting

A number of jobs publish directly to /srv/static on the server. These are then served by Apache as static websites.

In general, we want these jobs to publish to our AFS volumes. By publishing to AFS we remove the central point of failure of a single server and its attached disks (mitigated by multiple AFS servers and replicas).

The AFS volumes are then served by which has a dedicated role as an AFS to HTTP bridge.

The sites in question are:

The extant AFS layout has volumes for each project. Thus we will continue this theme and an admin will create one volume for each of the above static sites; e.g.

  • /afs/ (~200mb)

  • /afs/ (100mb)

  • /afs/ (520k)

  • /afs/ (current 706mb)

  • /afs/ (current 57mb)

  • /afs/ (current 134gb)
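Volume creation follows the usual AFS administration pattern; roughly (server, partition, volume and mount-point names here are hypothetical examples):

```
# Create a volume, add read-only replica sites, mount it in the
# read-write tree, and release it (all names are hypothetical).
vos create afs01.example.org a project.static -localauth
vos addsite afs01.example.org a project.static -localauth
vos addsite afs02.example.org a project.static -localauth
fs mkmount /afs/.example.org/project/static project.static
vos release project.static -localauth
```

Replicas on multiple fileservers are what remove the single-disk point of failure noted above.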

The work items are as follows:

  • Create the volumes for each site as described above

  • Migrate the extant data to the new volumes. It is impractical to recreate all the sites, as doing so would require triggering jobs in many, often infrequently updated, repos.

  • Publishing jobs will be updated to use AFS publishing to these new locations. During the transition period, we can publish to both locations.

  • Update the site configuration on to serve the site from the new location

  • We should be able to fully test the new sites at this point with manual host entries. Ensure:

    • https certificates work correctly

    • old links remain consistent

  • For each site, move to production by updating the CNAME entries in the domain for the main server to point to (note, not the server directly, i.e., to give us flexibility in managing the backend service with server replacements or load-balancing in the future). Per prior testing, this should be transparent.

  • Remove the old publishing jobs
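The pre-production testing with manual host entries can be done without editing /etc/hosts by using curl's --resolve option; for example (hostname and address are hypothetical):

```
# Force resolution of the site name to the new server's address,
# then check the certificate and headers for a known legacy URL.
curl --resolve static.example.org:443:203.0.113.10 \
     -sSI https://static.example.org/some/old/path
```

This exercises the real SSL certificate and vhost on the new server while production DNS still points at the old host.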


Unlikely to require new repositories


  • a new http server for serving AFS content

  • A load-balancer server is suggested to host the haproxy container

DNS Entries

Quite a few DNS entries will need to be updated, as described above.


Developers should largely not care where the results are published.

Small doc updates for any new services.

A guide to setting up jobs, host configuration, etc. for publishing static data for other projects may be useful.




Since all updates are replacements, we can confirm that the new sites are operational before putting them into production. Any DNS switches can be essentially zero impact.


N/A at this time