Functionally testing pipeline stage steps (build, run, publish, deploy,
exports) is tricky: it requires either mocking all calls to workflow
script and/or pipeline runner methods, or somehow setting up dependent
backend systems such as k8s for system tests.
The lack of coverage for these methods has resulted in a number of bugs
and incremental fixes that testing could have caught, so in these cases
the value of heavy mocking and system-test complexity seems to outweigh
the cost.
By installing a stubbed version of the `docker-pusher` script into the
Jenkins container, we can run a system test for the `publish` step, and
by stubbing out some of the previous stage/step context, we can more
effectively test `PipelineStage` step methods.
Change-Id: Ibed4f781e6c46dcbf5741eaf8adb79ebe4ef2d39
Groovy CPS does not correctly implement `replaceAll(Pattern, Closure)`.
Specifically, when the pattern contains a capture group, it does not
pass each capture group's match to the given closure as a separate
argument, as plain Groovy's implementation does.
The workaround in this case is to use a substring operation to omit the
surrounding `${}` from the matched variable expression when
interpolating string values against a node context.
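A minimal sketch of that workaround, assuming an interpolation helper
along these lines (the helper and bindings map are illustrative, not
the actual interface):

  // Match the whole "${...}" expression without a capture group, since
  // CPS mishandles capture-group arguments, then strip the "${" and "}"
  // delimiters manually with substring.
  def interpolate(String value, Map bindings) {
    value.replaceAll(/\$\{[\w.]+\}/) { String match ->
      bindings[match.substring(2, match.length() - 1)]
    }
  }

  assert interpolate('image: ${.imageID}', ['.imageID': 'abc123']) == 'image: abc123'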
Change-Id: I23dcceeb416d5c22ad0f36ab3dca8303a9ed7fd4
Importing PipelineStage into PipelineBuilder was causing an
unknown-class `ExecutionContext.NodeContext` exception, but only when
importing within a Jenkins CPS context. Instead of pulling out my hair
trying to debug anything related to CPS, I opted to remove the explicit
type on the `PipelineStage.context` property.
Change-Id: If33cb029b9d585bf2c6ac9754d46f64ac14143a9
Properly sets the build status for the non-exceptional code path when
teardown is reached, and fixes the conditional for finding the teardown
stage in the exceptional code path.
Change-Id: I61e01b6920488a330af59ead141842430572876c
Fixes a bug where any set value of `deploy.test` was overwritten with
null.
Refactored tests for default configuration to be clearer and included
tests for the deploy step.
Change-Id: I2ecda922d7ddc2d873e4c31513f92a12e6ebfcf7
Refactored `ExecutionContext#getAll` to return a map of `stageName:
value`, which allows one to retrieve multiple bound values at once and
know in which stage each was bound.
Refactored `PipelineStage#teardown` to include all image tags in the
Gerrit report.
Change-Id: I723ce2edcf1201be5eb1a726920395b24abadd9f
The previous implementation of `PipelineStage#getDefaultNodeLabels`
caused a `groovy.lang.MissingPropertyException` as it was assuming a
previously implemented but since refactored structure for the `publish`
configuration.
Tests were added to cover `Pipeline#getRequiredNodeLabels`, which
transitively covers `PipelineStage#getRequiredNodeLabels` and seemed
like a better user-facing interface to test.
Change-Id: Iced37fe8ab4482f7931f91c3a0d6255cbeda68ba
The configure stage previously depended on the `scm` job property being
available. This is only available, however, for jobs that source a
`Jenkinsfile` from the repo, not the pipeline scripts sourced from
`integration/config` groovy files.
Calling `PipelineBuilder#build` in the context of the latter results in
the following error:
> ERROR: 'checkout scm' is only available when using "Multibranch
> Pipeline" or "Pipeline script from SCM"
The builder needs to support being called from both a `Jenkinsfile` and
the scripts defined in integration/config.
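A rough sketch of the needed branching (the guard shown here is
illustrative; `PatchSet.fromZuul` is the helper introduced elsewhere in
this series):

  // use `checkout scm` when the job sources a Jenkinsfile from the repo;
  // otherwise build the SCM mapping from the Zuul-provided parameters
  def source = binding.hasVariable('scm') ? scm : PatchSet.fromZuul(params).getSCM()
  checkout(source)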
Change-Id: I1dae576c35c4ea1a8746ec418290c52679314598
The "configure" stage needs somewhere to execute and the "blubber" nodes
seem like reasonable choices.
Change-Id: I81a50fad6c6dace77a72c6eef6cfe343968675be
Since the WMF docker registry only allows pushes from our internal
network, it makes more sense to always use the internal discovery name
when tagging/registering images, and the public name when qualifying
images for reporting, etc.
This should simplify the logic in downstream jobs that have previously
set the registry value based on where they will execute (which node and
its access).
Removed setting of the docker registry from pipeline configuration, as
this should not be a user-configurable value, but retained properties
for both public and internal registries so they may be overridden by
client code.
Change-Id: If0567703257296a72edc3807ecb18fdc866c1078
Adds a `pipelineName` argument to `PipelineBuilder#build` that can be
used to restrict execution to a single pipeline defined in
`.pipeline/config.yaml`. This should allow for greater flexibility in
upstream gating systems like Zuul. For example, different pipelines can
be triggered for test vs gate-and-submit events, etc.
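For example (a sketch; the build arguments shown are assumptions):

  // execute only the pipeline named "test" from .pipeline/config.yaml
  new PipelineBuilder('.pipeline/config.yaml').build(this, 'test')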
Change-Id: I47937d6a2354204ca81cf602815e04465749d34b
Pipeline setup/teardown methods were referencing the wrong constants for
the internal stage names.
Added test coverage of `Pipeline#stack` which calls the offending
methods.
Change-Id: Iaec8fe56e382c57d8530d44cceb19b80c5fca5aa
Provides `PipelineBuilder` for reading `.pipeline/config.yaml` and
mapping user-defined pipelines, stages, and execution graphs to actual
Jenkins Pipeline stage definitions.
Provides `Pipeline` class that constructs a "stack" of `PipelineStage`
objects from the user-provided configs, each with its own `NodeContext`
for binding output values to names and consuming bound values from
previous stages.
Provides `PipelineStage` that contains core stage step implementations
based on the existing `service-pipeline` JJB job definition in
`integration/config`. Each stage returns a closure that the builder
hands off to Jenkins Pipeline stage definitions.
Steps have a fixed order within a given stage: build, run, publish,
deploy, exports. This allows for concise definition of a stage that
performs multiple steps, and deterministic behavior of default
configuration that references locally bound output values (e.g. the
default configuration of `image:` for a `publish: { type: image }`
entry is `${.imageID}`, referencing the image built in the current
stage's `build` step). If the user needs to change ordering,
they can simply break the stage out into multiple stages.
See the `Pipeline` class for currently supported configuration. Note
that the aforementioned context system allows users to make use of the
same value bindings that step implementations use internally. They can
also use the `exports` configuration field to bind new values.
To illustrate the minimally required configuration, the following would
approximate the current `service-pipeline-test-and-publish` JJB job for
a project named "foo".
  pipelines:
    foo:
      directory: src/foo
      stages:
        - name: test # builds/runs "test" variant
        - name: candidate
          build: production
          publish:
            image: true
          deploy: # currently only the "ci" cluster
            chart: https://releases.wikimedia.org/charts/foo-0.0.1.tgz
            test: true
And to illustrate how the "candidate" stage in this example could be
expressed as multiple stages using references to the output names that
steps bind/export:
  pipelines:
    foo:
      directory: src/foo
      stages:
        - name: tested
        - name: built
          build: production
        - name: published
          publish:
            image:
              id: '${built.imageID}'
          exports:
            image: '${.imageFullName}:${.imageTag}'
        - name: staged
          deploy:
            image: '${published.image}'
            chart: https://releases.wikimedia.org/charts/foo-0.0.1.tgz
            test: true
Bug: T210267
Change-Id: I5a41d0d33ed7e9174db6178ab7921f5143296c75
Our current pipelinelib based jobs require repos to conform to a number
of rigid conventions: assume the repo contains source for only a single
application, build "test" variant, run "test" variant, build
"production" variant, helm deploy/test, publish under a single tag name.
These jobs also assume all of these operations need to be performed
linearly.
While this design was sufficient for our very first use cases, its
convention-based design is already proving prohibitively inflexible.
For example, teams maintaining repos that contain multiple interrelated
applications cannot build and test these applications as independent
images; teams wanting to execute multiple test suites would have to
wrap them in a single entrypoint and implement their own concurrency
should they need it; etc.
Instead of Release Engineering maintaining a new specialized pipeline
job for each team that performs only slightly different permutations of
the same operations (resulting in duplication of job definitions and a
large maintenance burden), we can instead establish a configuration
format and interface by which teams provide their own pipeline
compositions.
This initial commit in a series of pipeline related commits implements
two fundamental components to support a CI/CD pipeline that can execute
any number of user-defined variant build/test/publish/deploy stages and
steps in a safely concurrent model: a directed-graph based execution
model, and name bindings for stage outputs. The former provides the
model for composing stage execution, and the latter provides a decoupled
system for defining what outputs each subsequent stage operates upon.
First, an `ExecutionGraph` class that can represent a directed acyclic
graph given a number of linearly defined arcs (aka branches/edges). This
component will allow users to provide the overall execution flow as
separate linear processes but allow parallel branches of the execution
graph to be scheduled concurrently.
Example:
  /* To represent a graph with separate parallel branches like:
   *
   *   a   x
   *    ⇘ ⇙
   *     b
   *    ⇙ ⇘
   *   y   c
   *    ⇘ ⇙
   *     z
   *
   * One only needs to provide each linear execution arc
   */
  def graph = new ExecutionGraph([["a", "b", "c", "z"], ["x", "b", "y", "z"]])

  /* The ExecutionGraph can solve how those arcs intersect and how the
   * nodes can be scheduled with a degree of concurrency that Jenkins
   * allows.
   */
  graph.stack() // => [["a", "x"], ["b"], ["y", "c"], ["z"]]
Second, a set of context classes for managing immutable global and local
name/value bindings between nodes in the graph. Effectively this will
provide a way for pipeline stages to safely and deterministically
consume inputs from previous stages along the same branch, and to
provide their own outputs for subsequent stages to consume.
For example, one stage called "build" that builds a container image
will save the image ID in a predetermined local binding called
`.imageID`, and a subsequent "publish" stage configured by the user can
reference that image as `${build.imageID}`.
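A hypothetical sketch of that flow (the method names here are
assumptions, not the actual interface):

  def context = new ExecutionContext(graph)
  context.ofNode('build').bind('imageID', 'abc123')          // immutable once bound
  context.ofNode('publish').interpolate('${build.imageID}')  // => 'abc123'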
Once a value is bound to a name, that name cannot be reused; bindings
are immutable. Node contexts are only allowed to access namespaces for
nodes that precede them in the same branch of the graph, ensuring
deterministic behavior during parallel graph branch execution. See the
unit tests for `ExecutionContext` for details on expected behavior.
Put together, these two data structures can constitute an execution
"stack" of sorts that can be safely mapped to Jenkins Pipeline stages,
and make use of parallel execution for graph branches. Specifically, the
`ExecutionGraph.stack()` method is implemented to yield each set of
independent stack "frames" in topological sort order which can safely be
scheduled to run in parallel.
Bug: T210267
Change-Id: Ic5d01bf54c703eaf14434a36f1e2b3e276b48b6f
Writes and posts easy-to-understand Gerrit comments from the Deployment
Pipeline.
This is an alternative to the current circuitous path of links that
users of the pipeline must follow to find basic information about
build status, Docker images, and Docker tags.
Adds two new classes that are meant to be invoked from inside a Jenkins
job defined as a Jenkins Pipeline script. By providing access to a few
build parameters from within a Jenkins job, this patch will post a
comment to Gerrit using an output format that will be styled by Gerrit
commentstyles implemented in I5b04aa10d54b6f2587da196c02ebff9bfe5ba166.
Example posted message:
  pipeline-dashboard: service-pipeline
  pipeline-build-result: SUCCESS (job: service-pipeline, build: 27)
  IMAGE:
  docker-registry.wikimedia.org/wikimedia/mediawiki-services-citoid
  TAGS:
  test, latest
Change-Id: I2fc0924996eb1a969fcbf41bac333d3c35cd34ea
Simplified the `Blubber` client to handle only the transcompilation of
Blubber configuration into a Dockerfile, not the actual building of
images. Building was moved into `PipelineRunner`, where all other
Docker commands are invoked.
Blubber client was refactored to use the WMF production deployment of
Blubberoid instead of relying on a locally installed binary of the
`blubber` CLI.
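A rough sketch of the service call (the endpoint path and content type
here are assumptions):

  def generateDockerfile(String blubberConfig, String variant) {
    def conn = new URL("https://blubberoid.wikimedia.org/v1/${variant}").openConnection()
    conn.requestMethod = 'POST'
    conn.doOutput = true
    conn.setRequestProperty('Content-Type', 'application/yaml')
    conn.outputStream.withWriter { it << blubberConfig }
    conn.inputStream.text // the generated Dockerfile
  }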
Bug: T212247
Change-Id: Ib403786af7af6ce9d469798452da512fa535f2b4
Implemented a method to remove images. Note that the `--force` flag is
used to allow removal by ID even when one or more tags exist for the
image locally.
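A minimal sketch of such a method (the name and use of the `sh` step
are assumptions; `--force` is Docker's flag for forcing removal):

  def removeImage(String imageID) {
    sh "docker rmi --force ${imageID}"
  }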
Change-Id: I4a09ab84fde76e5c91bb4d515b4af5f647b120b2
Implemented `kubeConfig` property for `PipelineRunner`, allowing a
specific Kubernetes config path to be set, and added
`--tiller-namespace` to the invocation of Helm using the same value as
for `--namespace`.
Added tests for the Helm-related methods of `PipelineRunner`.
Bug: T204391
Change-Id: I16f4a1a3d1f8deedccdd0f24b8fcf2a6beca7e54
Additional `PipelineRunner` tests were also implemented to improve
coverage where the missing method exception previously surfaced.
Change-Id: Ib10701dd33ee10b9127e44d056a62b84333b35d3
Since shared Groovy libraries (and workflow scripts) are compiled and
executed on the Jenkins master, they cannot use native functions to
check for things like file existence as they do not have direct access
to the job's workspace. They must instead use abstracted Jenkins
Pipeline steps.
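For example, checking for a file must go through the corresponding step
rather than `java.io.File` (`fileExists` is the standard Pipeline
step):

  // wrong in a shared library: checks the master's filesystem
  // new File('.pipeline/config.yaml').exists()

  // right: delegated to the node executing the job
  fileExists('.pipeline/config.yaml')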
Change-Id: I5f411f462ef862d6ed28922916b3b5694ef98a13
Requiring a list of "name=value" pairs seems odd when a Groovy Map
could be accepted instead. Refactored `Blubber.build` and
`PipelineRunner.build` to accept map arguments, and additionally made
the labels arguments optional.
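A sketch of the difference (exact signatures are assumptions):

  // before: runner.build('production', ['team=releng', 'tier=1'])
  // after:
  runner.build('production', [team: 'releng', tier: '1'])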
Change-Id: I948b0e9d9cdebb59098095730189c19f7b4e3cda
Fixes an exception due to `java.nio.file.Files` methods not accepting
string values. The string path is now converted to a
`java.nio.file.Path` first, and new test methods were implemented for
`PipelineRunner.build` to verify this and other functionality.
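The fix amounts to the standard conversion (a minimal illustration):

  import java.nio.file.Files
  import java.nio.file.Paths

  // Files.exists(dockerfilePath)           // fails when given a String
  Files.exists(Paths.get(dockerfilePath))   // convert to a Path first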
Change-Id: Ib9ce7e15e4493c6e5bb01c413ce15b0ffcb06d12
Created a new `PatchSet` class that provides an interface to patches
being gated by WMF CI.
To begin with, a static function was implemented to help with
instantiation from the parameters that Zuul provides, along with an
instance method for getting an SCM mapping that the Jenkins built-in
`checkout` function understands. This allows the following simple
pattern to be used to clone a project repo and check out the patch set
currently being gated.

  stage('Get patch') { checkout(PatchSet.fromZuul(params).getSCM()) }
Future functions for getting information from or manipulating a patch
set may be added to this class.
Change-Id: I1490e0f98af1556f2c6d816b8b5c04853b6b7b19
Implemented a new `PipelineRunner` class and included functions for all
current operations of the service-pipeline script in integration/config.
Change-Id: I7614709126d29546a10c4fc7ebec5d61187a5d1d
Establish an `org.wikimedia.integration` package for housing
pipeline-related Groovy code, and a set of unit tests. Gradle
configuration is provided for running the tests either via
Blubber/Docker (see `.pipeline/blubber`) or directly (run `gradle
test`).
Bug: T196940
Change-Id: I0a72200b9e24f71a706718a107e4941c0e772af8