Since the WMF docker registry only allows pushes from our internal
network, it makes more sense to always use the internal discovery name
when tagging/registering images, and the public name when qualifying
images for reporting, etc.
This should simplify the logic in downstream jobs that have previously
set the registry value based on where they will execute (which node and
its access).
Removed the setting of the Docker registry from pipeline configuration,
as this should not be a user-configurable value, but retained properties
for both the public and internal registries so they may be overridden by
client code.
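The split described above can be sketched as a small helper (a Python
illustration, not the library's actual API; the two registry hostnames
are assumptions for the example):

```python
# Illustrative sketch: push/tag operations use the internal discovery
# name, while reporting uses the public name. The hostnames below are
# assumed for illustration and mirror the retained registry properties.
INTERNAL_REGISTRY = "docker-registry.discovery.wmnet"
PUBLIC_REGISTRY = "docker-registry.wikimedia.org"

def qualify_image(name, tag, for_push=False):
    """Return a fully qualified image ref using the appropriate registry."""
    registry = INTERNAL_REGISTRY if for_push else PUBLIC_REGISTRY
    return "{}/{}:{}".format(registry, name, tag)
```

With this split, downstream jobs no longer need to select a registry
based on which node they execute on.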
Change-Id: If0567703257296a72edc3807ecb18fdc866c1078
Provides `PipelineBuilder` for reading `.pipeline/config.yaml` and
mapping user-defined pipelines, stages, and execution graphs to actual
Jenkins Pipeline stage definitions.
Provides `Pipeline` class that constructs a "stack" of `PipelineStage`
objects from the user-provided configs, each with its own `NodeContext`
for binding output values to names and consuming bound values from
previous stages.
Provides `PipelineStage`, which contains core stage step implementations
based on the existing `service-pipeline` JJB job definition in
`integration/config`. Each stage returns a closure that the builder
passes off to a Jenkins Pipeline stage definition.
Steps have a fixed order within a given stage: build, run, publish,
deploy, exports. This allows for concise definition of a stage that
performs multiple steps, and deterministic behavior of default
configuration that references locally bound output values (e.g. the
default `image:` for a `publish: { type: image }` entry is
`${.imageID}`, referencing the image built in the current stage's
`build` step). If the user needs a different ordering, they can simply
break the stage out into multiple stages.
See the `Pipeline` class for currently supported configuration. Note
that the aforementioned context system allows users to make use of the
same value bindings that step implementations use internally. They can
also use the `exports` configuration field to bind new values.
To illustrate the minimally required configuration, the following would
approximate the current `service-pipeline-test-and-publish` JJB job for
a project named "foo".
  pipelines:
    foo:
      directory: src/foo
      stages:
        - name: test # builds/runs "test" variant
        - name: candidate
          build: production
          publish:
            image: true
          deploy: # currently only the "ci" cluster
            chart: https://releases.wikimedia.org/charts/foo-0.0.1.tgz
            test: true
And to illustrate how the "candidate" stage in this example could be
expressed as multiple stages using references to the output names that
steps bind/export:
  pipelines:
    foo:
      directory: src/foo
      stages:
        - name: tested
        - name: built
          build: production
        - name: published
          publish:
            image:
              id: '${built.imageID}'
          exports:
            image: '${.imageFullName}:${.imageTag}'
        - name: staged
          deploy:
            image: '${published.image}'
            chart: https://releases.wikimedia.org/charts/foo-0.0.1.tgz
            test: true
Bug: T210267
Change-Id: I5a41d0d33ed7e9174db6178ab7921f5143296c75
Simplified the `Blubber` client to handle only the transcompilation of
Blubber configuration into a Dockerfile, not the actual building of
images. Building was moved into `PipelineRunner`, where all other Docker
commands are invoked.
The `Blubber` client was also refactored to use the WMF production
deployment of Blubberoid instead of relying on a locally installed
binary of the `blubber` CLI.
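The service call can be sketched as follows (a Python illustration; the
endpoint layout, `POST /v1/<variant>` with the YAML config as the body,
is an assumption for the example, not a documented contract):

```python
# Illustrative sketch of requesting Dockerfile transcompilation from a
# Blubberoid-like service. Builds the request pieces without sending
# them, so the shape of the call is clear. The /v1/<variant> path and
# content type are assumptions for this example.
def blubberoid_request(base_url, variant, config_yaml):
    """Return (url, headers, body) for a Dockerfile generation request."""
    url = "{}/v1/{}".format(base_url.rstrip("/"), variant)
    headers = {"Content-Type": "application/yaml"}
    return url, headers, config_yaml
```

The returned Dockerfile text is then fed to the build step in
`PipelineRunner`.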
Bug: T212247
Change-Id: Ib403786af7af6ce9d469798452da512fa535f2b4
Implemented a method to remove images. Note that the `--force` flag is
used to allow removal by ID even when one or more tags exist for the
image locally.
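The invocation amounts to the following (a Python sketch of the command
construction; the function name is illustrative, not the library's API):

```python
# Illustrative sketch: build the `docker rmi` command described above.
# `--force` lets removal by ID succeed even when tags still reference
# the image locally.
def remove_image_command(image_id, force=True):
    cmd = ["docker", "rmi"]
    if force:
        cmd.append("--force")
    cmd.append(image_id)
    return cmd
```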
Change-Id: I4a09ab84fde76e5c91bb4d515b4af5f647b120b2
Implemented `kubeConfig` property for `PipelineRunner`, allowing a
specific Kubernetes config path to be set, and added
`--tiller-namespace` to the invocation of Helm using the same value as
for `--namespace`.
Added tests for the Helm-related methods of `PipelineRunner`.
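The resulting Helm invocation looks roughly like this (a Python sketch
of the argument construction; names are illustrative, not the library's
API):

```python
# Illustrative sketch: Helm 2 talks to Tiller, so --tiller-namespace is
# passed with the same value as --namespace, as described above.
def helm_install_args(chart, namespace):
    return [
        "helm", "install",
        "--namespace", namespace,
        "--tiller-namespace", namespace,
        chart,
    ]
```

The `kubeConfig` property would additionally point the process at a
specific Kubernetes config path (e.g. via the `KUBECONFIG` environment
variable).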
Bug: T204391
Change-Id: I16f4a1a3d1f8deedccdd0f24b8fcf2a6beca7e54
Additional `PipelineRunner` tests were also implemented to improve
coverage where the missing-method exception previously surfaced.
Change-Id: Ib10701dd33ee10b9127e44d056a62b84333b35d3
Since shared Groovy libraries (and workflow scripts) are compiled and
executed on the Jenkins master, they cannot use native functions to
check for things like file existence as they do not have direct access
to the job's workspace. They must instead use abstracted Jenkins
Pipeline steps.
Change-Id: I5f411f462ef862d6ed28922916b3b5694ef98a13
Requiring a list of "name=value" pairs seems odd when a Groovy `Map`
could be accepted instead. Refactored `Blubber.build` and
`PipelineRunner.build` to accept map arguments, and made the labels
argument optional.
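The conversion from a labels map to command-line flags can be sketched
as follows (a Python illustration; the function name is hypothetical,
and sorting is only for deterministic output in the example):

```python
# Illustrative sketch: turn a labels map into `--label name=value`
# flags for a docker build command, with an optional/empty map allowed.
def label_flags(labels=None):
    flags = []
    for name, value in sorted((labels or {}).items()):
        flags.extend(["--label", "{}={}".format(name, value)])
    return flags
```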
Change-Id: I948b0e9d9cdebb59098095730189c19f7b4e3cda
Fixes an exception due to `java.nio.file.Files` not accepting string
values. The string path is now converted to a `java.nio.file.Path`
first, and new test methods were implemented for `PipelineRunner.build`
to verify this and other functionality.
Change-Id: Ib9ce7e15e4493c6e5bb01c413ce15b0ffcb06d12
Implemented a new `PipelineRunner` class with functions for all current
operations of the `service-pipeline` script in `integration/config`.
Change-Id: I7614709126d29546a10c4fc7ebec5d61187a5d1d