Standard Operating Environment – Part III: A Reference Implementation

In the previous article in this series we discussed the workflows and processes involved in an SOE. In this final part we discuss the implementation and operation of an SOE in practice, using Red Hat Satellite, Ansible Tower, and Jenkins to drive continuous integration and testing. This architecture is implemented using the SOE-CI scripts, available here. While this article focuses on version 1.0 of SOE-CI, please note that a new version is under development: a complete rewrite in Python using the Nailgun API to Satellite, available here.¹


The general architecture of the solution is shown below. It consists of the following components:

| Component | Function |
| --- | --- |
| Red Hat Satellite 6 | Content Repository, deployment server |
| Ansible Tower | Client configuration management |
| Jenkins | Orchestration of build creation and testing |
| Test client(s) | Test servers, one for each role |

The components and how they interact are shown in the diagram below:
[Figure: High Level Architecture]

The general flow of operation is as follows:

A developer (1) commits a change to the git repository (2). A change is typically:
* A kickstart template or snippet
* An Ansible playbook
* Source code to package into an SRPM and build an RPM
* Binaries to package into an RPM

The Jenkins build plan (3) polls the git repository (or responds to a git hook) and performs the following actions in order:

  1. Create any RPMs of custom or repackaged software that need to be created. This is done using mock. The mock configuration used always builds against the latest SOE build. Built RPMs are exported via a yum repository on the Jenkins server.
  2. Build any puppet modules that need to be built. Built puppet modules are exported via a pulp repository on the Jenkins server.
  3. Using hammer, instruct the Satellite **(4)** to synchronise custom product RPM and Puppet repositories to import any newly created or modified RPMs and Puppet modules.
  4. Using hammer, load kickstarts and snippets into the satellite. This will automatically import any changes, as well as new files.
  5. Republish the Content View that describes the build.
  6. Promote the Content View into a Lifecycle Environment containing test clients **(6)**.
  7. Instruct the deployment component of satellite (Foreman) to rebuild a group of test clients, corresponding to the different roles described in the build **(7)**.
  8. Instruct the Ansible server **(8)** to configure each test server into a role (this may also be done with Puppet from the Satellite if Puppet is preferred for server configuration).
  9. Deploy test scripts from the git repository to the test servers and execute them.
  10. Report the results of test scripts to the developers and flag the overall build as either passed or failed.
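For step 8, the build plan invokes the Ansible server once per role. A minimal sketch of how such a call could be composed with the tower-cli client is shown below; the `soe-` job template naming and the `DRY_RUN` wrapper are illustrative, not SOE-CI's actual code:

```shell
#!/bin/sh
# Compose a Tower job launch for one role's test client(s).
# DRY_RUN=1 prints the command instead of executing it, so the
# call can be inspected; the job template name is a placeholder.
launch_role() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo tower-cli job launch --job-template "soe-$1" --monitor
    else
        tower-cli job launch --job-template "soe-$1" --monitor
    fi
}
```

In the build plan this would be called once per role, e.g. `launch_role webserver`, with `--monitor` blocking until the job completes so the test step only starts on configured machines.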

The architecture described above implements the Development Activity described in detail in the previous article.

Implementation in Detail

NB: the SOE-CI scripts currently execute hammer on the Satellite via an ssh session. This was a compromise: originally the hammer CLI RPM package was only available in the Red Hat Satellite channels, and therefore would not have been available on a Jenkins host (this oversight has since been rectified).
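A wrapper for these remote hammer invocations might look like the following sketch. The host and user defaults are placeholders, and note that forwarding `"$@"` over ssh naively loses quoting for arguments containing spaces:

```shell
#!/bin/sh
# Run hammer on the Satellite over ssh rather than locally.
# SAT_HOST/SAT_USER are illustrative defaults; DRY_RUN=1 prints
# the command instead of executing it.
SAT_HOST=${SAT_HOST:-satellite.example.com}
SAT_USER=${SAT_USER:-jenkins}

hammer_remote() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo ssh "${SAT_USER}@${SAT_HOST}" hammer "$@"
    else
        ssh "${SAT_USER}@${SAT_HOST}" hammer "$@"
    fi
}
```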

The build plan executed in Jenkins consists of the following shell scripts:

This script searches the build git repository for source RPM packages to build – for example, using the demo SOE here² it will find all directories under the rpms/ directory containing an RPMSPEC file, and rebuild the RPMs therein using mock. Using git hashes ensures that RPMs are only rebuilt if they have actually been modified. It is up to the developer to ensure that RPM version and release tags are increased as necessary.
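The change-detection logic can be sketched as follows. The hash cache location and the recording step are illustrative rather than SOE-CI's exact code:

```shell
#!/bin/sh
# Rebuild a package only when the newest git commit touching its
# directory differs from a cached hash. HASH_DIR is illustrative.
HASH_DIR=${HASH_DIR:-.soe-ci-hashes}

pkg_hash() {
    # newest commit that touched this path
    git log -n 1 --format=%H -- "$1"
}

needs_rebuild() {
    cached=$(cat "$HASH_DIR/$(basename "$1")" 2>/dev/null)
    [ "$(pkg_hash "$1")" != "$cached" ]
}

for dir in rpms/*/; do
    [ -f "${dir}RPMSPEC" ] || continue
    if needs_rebuild "$dir"; then
        # the real script runs mock here, then records the new
        # hash so the unchanged package is skipped next time:
        pkg_hash "$dir" > "$HASH_DIR/$(basename "$dir")"
    fi
done
```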

This script performs a similar function to the previous one, but instead rebuilds puppet modules as required, searching under the puppet/ path for directories containing either a Modulefile or a metadata.json file. As with RPM builds, it is incumbent on the developer to ensure that version tags in the metadata are incremented.
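The discovery step can be sketched like this; the rebuild condition is simplified (SOE-CI applies the same git-hash check as for RPMs):

```shell
#!/bin/sh
# Treat any directory under puppet/ containing a Modulefile or a
# metadata.json as a buildable puppet module.
is_module() {
    [ -f "$1/Modulefile" ] || [ -f "$1/metadata.json" ]
}

for dir in puppet/*/; do
    is_module "$dir" || continue
    # 'puppet module build' writes pkg/<author>-<name>-<version>.tar.gz
    puppet module build "$dir"
done
```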

This script simply synchronises template files in the kickstarts/ directory of the build repo to an export directory in preparation for uploading to the satellite.

This script copies newly built RPMs to a yum repository exported from the Jenkins server via HTTP. A hammer command is then used to instruct the satellite to synchronise to this repository. The repository to synchronise is set using a Jenkins build variable.
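The two halves of this step, regenerating the yum metadata and triggering the Satellite sync, can be sketched as below. Organisation, product, and repository names come from Jenkins build variables; the wrapper here just prints the hammer command so it can be inspected:

```shell
#!/bin/sh
# Regenerate yum metadata for the export directory, then compose the
# hammer call that triggers the Satellite synchronisation.
refresh_repo() {
    createrepo --update "$1"
}

sync_cmd() {
    # $1=organisation $2=product $3=repository (Jenkins build variables)
    echo hammer repository synchronize \
        --organization "$1" --product "$2" --name "$3"
}
```

In SOE-CI the resulting hammer command is executed on the Satellite over ssh rather than printed.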

Similar to the previous script, this script copies newly built Puppet modules to a Pulp repository exported from the Jenkins server via HTTP. A hammer command is then used to instruct the satellite to synchronise to the repository. Additional logic ensures that newly created Puppet modules are uploaded, and the version is set to ‘latest’ in the satellite repository. The repository to synchronise is set using a Jenkins build variable.

This script takes template files (e.g. kickstarts, snippets, cloud-init files) and synchronises them to the satellite.

This script republishes the build Content View (and dependent Content Views in the case of a Composite Content View) and promotes it into the engineering test environment (in the environment in which test machines are resident). Both the CV to republish, and the lifecycle environment into which to promote it are determined using Jenkins build variables.
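The hammer calls behind this step can be sketched as follows. The wrappers print the commands, the version lookup for promotion is elided (the script must determine the newly published version first), and all names are placeholders supplied via Jenkins build variables:

```shell
#!/bin/sh
# Compose the publish and promote hammer calls for the build Content View.
publish_cmd() {
    # $1=organisation $2=content view
    echo hammer content-view publish \
        --organization "$1" --name "$2"
}

promote_cmd() {
    # $1=organisation $2=content view $3=lifecycle environment $4=version
    echo hammer content-view version promote \
        --organization "$1" --content-view "$2" \
        --to-lifecycle-environment "$3" --version "$4"
}
```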

This script instructs the satellite to trigger a rebuild of all clients in the test machine host collection (the name of the host collection is set using a Jenkins build variable).
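The per-host hammer call can be sketched as below; iterating over the host collection's members and the subsequent power-cycle are elided, and the wrapper prints the command for inspection:

```shell
#!/bin/sh
# Flag a host for rebuild: on its next boot it will PXE-boot and be
# reprovisioned from its kickstart by Foreman.
rebuild_cmd() {
    echo hammer host update --name "$1" --build yes
}
```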

This script waits for all test machines to rebuild. If they do not rebuild within a given time (hardcoded into the script), the build plan is aborted and the build is marked as failed. Once all machines have successfully rebooted, Bats scripts (in the tests/ directory of the build repo) are copied to every test machine and executed there, and the results are collected using the TAP format. Tests that should only execute on machines with certain profiles need to be written to execute conditionally – for example, to execute only if a specific puppet module is present in the test client’s manifest.
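A conditional check of this kind can be sketched directly in TAP terms (Bats emits TAP output). The sketch below uses the state of the httpd service as an illustrative role check, standing in for SOE-CI's puppet-manifest test:

```shell
#!/bin/sh
# Emit one TAP-formatted result, skipping when the client does not
# match the role being tested.
tap_httpd_check() {
    echo "1..1"
    if ! command -v systemctl >/dev/null 2>&1; then
        echo "ok 1 # SKIP not applicable on this client"
    elif systemctl is-active --quiet httpd; then
        echo "ok 1 httpd is active"
    else
        echo "not ok 1 httpd is active"
    fi
}
```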

Finally, the TAP Plugin for Jenkins is used to display build results – examples are shown below:

[Screenshots: TAP test results displayed in Jenkins]

Further Areas for Development

Currently, this implementation only automates the Development and Maintenance activities discussed in the previous article. However, it would also be possible to automate the Inception, Release and Retirement activities. These are areas for further development, and all contributions are gratefully received.

Finally, the SOE-CI solution is currently being rewritten to use Python and the Nailgun library, and all contributors are welcome.

  1. Version 2 of SOE-CI is currently a work in progress and has limited functionality. 
  2. Note that this is an older demo SOE that is puppet-oriented rather than Ansible-oriented. 