
pipeline: Builder and stage implementation

Provides `PipelineBuilder` for reading `.pipeline/config.yaml` and
mapping user-defined pipelines, stages, and execution graphs to actual
Jenkins Pipeline stage definitions.

Provides `Pipeline` class that constructs a "stack" of `PipelineStage`
objects from the user-provided configs, each with its own `NodeContext`
for binding output values to names and consuming bound values from
previous stages.

Provides `PipelineStage` that contains core stage step implementations
based on the existing `service-pipeline` JJB job definition in
`integration/config`. Each stage returns a closure that the builder
hands off to Jenkins Pipeline stage definitions.

Steps have a fixed order within a given stage: build, run, publish,
deploy, exports. This allows for concise definition of a stage that
performs multiple steps, and for deterministic behavior of default
configuration that references locally bound output values (e.g. the
default `image:` configuration for a `publish: { type: image }` entry
is `${.imageID}`, referencing the image built in the current stage's
`build` step). If the user needs a different ordering, they can simply
break the stage out into multiple stages.

See the `Pipeline` class for currently supported configuration. Note
that the aforementioned context system allows users to make use of the
same value bindings that step implementations use internally. They can
also use the `exports` configuration field to bind new values.
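As a rough sketch of how those bindings behave (names here are
illustrative only; the real logic lives in `ExecutionContext.NodeContext`
and the `context % value` operator used by step implementations):

```groovy
// Minimal, hypothetical sketch of the "${...}" binding substitution
// described above; NOT the actual ExecutionContext implementation.
class ContextSketch {
  Map bindings = [:]

  // Substitute ${name} references with bound values. A leading "." refers
  // to the current stage's own bindings (e.g. ${.imageID}), while a
  // "stage.name" reference consumes a value bound by a previous stage.
  String interpolate(String template) {
    template.replaceAll('\\$\\{([\\w.]+)\\}') { match, name -> bindings[name] }
  }
}

def ctx = new ContextSketch()
ctx.bindings['built.imageID'] = 'sha256:abc123'
assert ctx.interpolate('${built.imageID}') == 'sha256:abc123'
```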

To illustrate the minimally required configuration, the following would
approximate the current `service-pipeline-test-and-publish` JJB job for
a project named "foo".

    pipelines:
      foo:
        directory: src/foo
        stages:
          - name: test           # builds/runs "test" variant
          - name: candidate
            build: production
            publish:
              image: true
            deploy:              # currently only the "ci" cluster
              chart: https://releases.wikimedia.org/charts/foo-0.0.1.tgz
              test: true
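Assuming pipelinelib is registered as a Jenkins shared library (the
library name below is an assumption; use whatever name your Jenkins
instance configures), a Jenkinsfile consuming this configuration reduces
to constructing a `PipelineBuilder` and handing it the workflow script:

```groovy
// Hypothetical Jenkinsfile; "pipelinelib" as the shared-library name is
// an assumption.
@Library('pipelinelib') import org.wikimedia.integration.PipelineBuilder

// Reads and validates .pipeline/config.yaml, then runs each configured
// pipeline's stages, parallelizing where the execution graph branches.
new PipelineBuilder('.pipeline/config.yaml').build(this)
```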

And to illustrate how the "candidate" stage in this example could be
expressed as multiple stages using references to the output names that
steps bind/export:

    pipelines:
      foo:
        directory: src/foo
        stages:
          - name: tested
          - name: built
            build: production
          - name: published
            publish:
              image:
                id: '${built.imageID}'
            exports:
              image: '${.imageFullName}:${.imageTag}'
          - name: staged
            deploy:
              image: '${published.image}'
              chart: https://releases.wikimedia.org/charts/foo-0.0.1.tgz
              test: true

Bug: T210267
Change-Id: I5a41d0d33ed7e9174db6178ab7921f5143296c75
master · Dan Duvall, 5 years ago · commit d5fedb1206
11 changed files with 1047 additions and 8 deletions
  1. .gitignore (+3 -0)
  2. Makefile (+8 -0)
  3. src/org/wikimedia/integration/Pipeline.groovy (+178 -0)
  4. src/org/wikimedia/integration/PipelineBuilder.groovy (+83 -0)
  5. src/org/wikimedia/integration/PipelineRunner.groovy (+83 -6)
  6. src/org/wikimedia/integration/PipelineStage.groovy (+525 -0)
  7. src/org/wikimedia/integration/Utility.groovy (+17 -0)
  8. test/org/wikimedia/integration/PipelineRunnerTest.groovy (+9 -2)
  9. test/org/wikimedia/integration/PipelineStageTest.groovy (+44 -0)
  10. test/org/wikimedia/integration/PipelineTest.groovy (+93 -0)
  11. test/org/wikimedia/integration/UtilityTest.groovy (+4 -0)

.gitignore (+3 -0)

@ -1,2 +1,5 @@
/.gradle
/.idea
/.project
/build
pipelinelib.iml

Makefile (+8 -0)

@ -7,6 +7,14 @@ DOCKER_TAG := piplinelib-tests-$(shell date -I)
.PHONY: test
clean:
ifneq (,$(DOCKER))
$(DOCKER_STOP_ALL) 2> /dev/null || true
$(DOCKER_RMI) 2> /dev/null || true
else
@echo "Not using Docker. Nothing to do."
endif
doc: docs
docs:
gradle groovydoc


src/org/wikimedia/integration/Pipeline.groovy (+178 -0)

@ -0,0 +1,178 @@
package org.wikimedia.integration
import org.codehaus.groovy.GroovyException
import static org.wikimedia.integration.PipelineStage.*
import org.wikimedia.integration.ExecutionContext
import org.wikimedia.integration.ExecutionGraph
import org.wikimedia.integration.PipelineStage
import org.wikimedia.integration.PipelineRunner
/**
* Defines a Jenkins Workflow based on a given configuration.
*
* The given configuration should look like this:
*
* <pre><code>
* pipelines:
* serviceOne:
* blubberfile: serviceOne/blubber.yaml # default based on service name for the dir
* directory: src/serviceOne
* execution: # directed graph of stages to run
* - [unit, candidate] # each arc is represented horizontally
* - [lint, candidate]
* - [candidate, staging, production] # common segments of arcs can be defined separately too
* stages: # stage definitions
* - name: unit # stage name (required)
* build: phpunit # build an image variant
* run: "${.imageID}" # run the built image
* publish:
* files: # publish select artifact files from the built/run image
* paths: ["foo/*", "bar"] # copy files {foo/*,bar} from the image fs to ./artifacts/{foo/*,bar}
* - name: lint # default (build/run "lint" variant, no artifacts, etc.)
* - name: candidate
* build: production
* publish:
* image: # publish built image to our docker registry
* id: "${.imageID}" # image reference
* name: "${setup.project}" # image name
* tag: "${setup.timestamp}-${.stage}" # primary tag
* tags: [candidate] # additional tags
* exports: # export stage values under new names
* image: "${.imageFullName}:${.imageTag}" # new variable name and interpolated value
* - name: staging
* deploy: # deploy image to a cluster
* image: "${candidate.image}" # image name:tag reference
* cluster: ci # default "ci" k8s cluster
* chart: http://helm/chart # helm chart to use for deployment
* test: true # run `helm test` on deployment
* - name: production
* deploy:
* cluster: production
* chart: http://helm/chart
* serviceTwo:
* directory: src/serviceTwo
* </code></pre>
*/
class Pipeline implements Serializable {
String name
String blubberfile
String directory
String dockerRegistry
private Map stagesConfig
private List<List> execution
/**
* Constructs a new pipeline with the given name and configuration.
*/
Pipeline(String pipelineName, Map config) {
name = pipelineName
blubberfile = config.blubberfile ?: "${name}/blubber.yaml"
directory = config.directory ?: "."
dockerRegistry = config.dockerRegistry
stagesConfig = config.stages.collectEntries{
[(it.name): PipelineStage.defaultConfig(it)]
}
execution = config.execution ?: [config.stages.collect { it.name }]
}
/**
* Returns a set of node labels that will be required for this pipeline to
* function correctly.
*/
Set getRequiredNodeLabels() {
def labels = [] as Set
for (def nodes in stack()) {
for (def node in nodes) {
labels += node.getRequiredNodeLabels()
}
}
labels
}
/**
* Returns the pipeline's stage stack bound with an execution context.
*/
List stack() {
def graph = setup() + (new ExecutionGraph(execution)) + teardown()
def context = new ExecutionContext(graph)
graph.stack().collect {
it.collect { stageName ->
createStage(stageName, context.ofNode(stageName))
}
}
}
/**
* Returns a {@link PipelineRunner} for this pipeline and the given workflow
* script object.
*/
PipelineRunner runner(ws) {
def runner = new PipelineRunner(ws,
blubberConfig: blubberfile,
kubeConfig: "/etc/kubernetes/ci-staging.config",
registry: dockerRegistry,
)
// make the PipelineRunner configPath relative to the pipeline's directory
def prefix = "../" * directory.split('/').count { !(it in ["", "."]) }
runner.configPath = prefix + runner.configPath
runner
}
/**
* Validates the pipeline configuration, throwing a {@link ValidationException}
* if anything is amiss.
*/
void validate() throws ValidationException {
def errors = []
// TODO expand validation
if (PipelineStage.SETUP in stagesConfig) {
errors += "${PipelineStage.SETUP} is a reserved stage name"
}
if (PipelineStage.TEARDOWN in stagesConfig) {
errors += "${PipelineStage.TEARDOWN} is a reserved stage name"
}
if (errors) {
throw new ValidationException(errors: errors)
}
}
private ExecutionGraph setup() {
new ExecutionGraph([[SETUP]])
}
private ExecutionGraph teardown() {
new ExecutionGraph([[TEARDOWN]])
}
private PipelineStage createStage(stageName, context) {
new PipelineStage(
this,
stageName,
stagesConfig[stageName] ? stagesConfig[stageName] : [:],
context,
)
}
class ValidationException extends GroovyException {
def errors
String getMessage() {
def msgs = errors.collect { " - ${it}" }.join("\n")
"Pipeline configuration validation failed:\n${msgs}"
}
}
}

src/org/wikimedia/integration/PipelineBuilder.groovy (+83 -0)

@ -0,0 +1,83 @@
package org.wikimedia.integration
import org.wikimedia.integration.ExecutionGraph
import org.wikimedia.integration.Pipeline
class PipelineBuilder implements Serializable {
String configPath
/**
* Constructs a new {@link PipelineBuilder} from the given YAML configuration path.
*/
PipelineBuilder(String pipelineConfigPath) {
configPath = pipelineConfigPath
}
/**
* Builds a single-node Jenkins workflow script for each of the configured
* pipelines.
*
* If a pipeline defines any branching arcs in its directed
* <code>execution</code> graph, they will be iterated over concurrently in
* the order that {@link ExecutionGraph#executions()} returns, and their
* stages defined as <code>parallel</code> stages in the workflow script.
*
* @param ws Jenkins Workflow Script (`this` when writing a Jenkinsfile)
*/
void build(ws) {
def config
ws.node {
ws.stage("configure") {
ws.checkout(ws.scm)
config = ws.readYaml(file: configPath)
}
}
for (def pline in pipelines(config)) {
def stack = pline.stack()
ws.node(pline.getRequiredNodeLabels().join(" && ")) {
try {
for (def stages in stack) {
if (stages.size() > 1) {
def stageClosures = [:]
for (def stage in stages) {
stageClosures[stage.name] = stage.closure(ws)
}
ws.stage("${pline.name}: [parallel]") {
ws.parallel(stageClosures)
}
} else {
def stage = stages[0]
ws.stage("${pline.name}: ${stage.name}", stage.closure(ws))
}
}
} catch (exception) {
ws.currentBuild.result = 'FAILURE'
// ensure teardown steps are always executed
for (def stage in stack.last()) {
if (stage.name == PipelineStage.TEARDOWN) {
stage.closure(ws)()
}
}
throw exception
}
}
}
}
/**
* Constructs and returns all pipelines from the given configuration.
*/
List pipelines(cfg) {
cfg.pipelines.collect { pname, pconfig ->
def pline = new Pipeline(pname, pconfig)
pline.validate()
pline
}
}
}

src/org/wikimedia/integration/PipelineRunner.groovy (+83 -6)

@ -4,6 +4,9 @@ import java.io.FileNotFoundException
import static org.wikimedia.integration.Utility.*
import org.wikimedia.integration.GerritPipelineComment
import org.wikimedia.integration.GerritReview
/**
* Provides an interface to common pipeline build/run/deploy functions.
*
@ -101,7 +104,7 @@ class PipelineRunner implements Serializable {
}
def blubber = new Blubber(workflowScript, cfg, blubberoidURL)
def dockerfile = getConfigFile("Dockerfile")
def dockerfile = getTempFile("Dockerfile.")
workflowScript.writeFile(text: blubber.generateDockerfile(variant), file: dockerfile)
@ -127,6 +130,19 @@ class PipelineRunner implements Serializable {
assert cfg instanceof Map && cfg.chart : "you must define 'chart: <helm chart url>' in ${cfg}"
deployWithChart(cfg.chart, imageName, imageTag, overrides)
}
/**
* Deploys the given registered image using the given Helm chart and returns
* the name of the release.
*
* @param chart Chart URL.
* @param imageName Name of the registered image to deploy.
* @param imageTag Tag of the registered image to use.
* @param overrides Additional Helm value overrides to set.
*/
String deployWithChart(String chart, String imageName, String imageTag, Map overrides = [:]) {
def values = [
"docker.registry": registry,
"docker.pull_policy": pullPolicy,
@ -139,7 +155,7 @@ class PipelineRunner implements Serializable {
def release = imageName + "-" + randomAlphanum(8)
helm("install --namespace=${arg(namespace)} --set ${values} -n ${arg(release)} " +
"--debug --wait --timeout ${timeout} ${arg(cfg.chart)}")
"--debug --wait --timeout ${timeout} ${arg(chart)}")
release
}
@ -153,13 +169,33 @@ class PipelineRunner implements Serializable {
[configPath, filePath].join("/")
}
/**
* Returns a path under configPath to a temp file with the given base name.
*
* @param baseName File base name.
*/
String getTempFile(String baseName) {
getConfigFile("${baseName}${randomAlphanum(8)}")
}
/**
* Deletes and purges the given Helm release.
*
* @param release Previously deployed release name.
*/
void purgeRelease(String release) {
helm("delete --purge ${arg(release)}")
purgeReleases([release])
}
/**
* Deletes and purges the given Helm releases.
*
* @param releases Previously deployed release names.
*/
void purgeReleases(List releases) {
if (releases.size() > 0) {
helm("delete --purge ${args(releases)}")
}
}
/**
@ -194,17 +230,58 @@ class PipelineRunner implements Serializable {
* @param imageID ID of the image to remove.
*/
void removeImage(String imageID) {
workflowScript.sh("docker rmi --force ${arg(imageID)}")
removeImages([imageID])
}
/**
* Removes the given images from the local cache.
*
* @param imageIDs IDs of images to remove.
*/
void removeImages(List imageIDs) {
if (imageIDs.size() > 0) {
workflowScript.sh("docker rmi --force ${args(imageIDs)}")
}
}
/**
* Submits a comment to Gerrit with the build result and links to published
* images.
*
* @param imageName Fully qualified name of published image.
* @param imageTags Image tags.
*/
void reportToGerrit(imageName, imageTags = []) {
def comment
if (workflowScript.currentBuild.result == 'SUCCESS' && imageName) {
comment = new GerritPipelineComment(
jobName: workflowScript.env.JOB_NAME,
buildNumber: workflowScript.env.BUILD_NUMBER,
jobStatus: workflowScript.currentBuild.result,
image: imageName,
tags: imageTags,
)
} else {
comment = new GerritPipelineComment(
jobName: workflowScript.env.JOB_NAME,
buildNumber: workflowScript.env.BUILD_NUMBER,
jobStatus: workflowScript.currentBuild.result,
)
}
GerritReview.post(workflowScript, comment)
}
/**
* Runs a container using the image specified by the given ID.
*
* @param imageID Image ID.
* @param arguments Entry-point arguments.
*/
void run(String imageID) {
void run(String imageID, List arguments = []) {
workflowScript.timeout(time: 20, unit: "MINUTES") {
workflowScript.sh("exec docker run --rm ${arg(imageID)}")
workflowScript.sh("exec docker run --rm ${args([imageID] + arguments)}")
}
}


src/org/wikimedia/integration/PipelineStage.groovy (+525 -0)

@ -0,0 +1,525 @@
package org.wikimedia.integration
import com.cloudbees.groovy.cps.NonCPS
import static org.wikimedia.integration.Utility.timestampLabel
import org.wikimedia.integration.ExecutionContext
import org.wikimedia.integration.PatchSet
import org.wikimedia.integration.Pipeline
class PipelineStage implements Serializable {
static final String SETUP = 'setup'
static final String TEARDOWN = 'teardown'
static final List STEPS = ['build', 'run', 'publish', 'deploy', 'exports']
Pipeline pipeline
String name
Map config
private ExecutionContext.NodeContext context
/**
* Returns a config based on the given one but with default values
* inserted.
*
* @example Shorthand stage config (providing only a stage name)
* <pre><code>
* def cfg = [name: "foo"]
*
* assert PipelineStage.defaultConfig(cfg) == [
* name: "foo",
* build: '${.stage}', // builds a variant by the same name
* run: [
* image: '${.imageID}', // runs the variant built by this stage
* arguments: [],
* ],
* ]
* </code></pre>
*
* @example Configuring `run: true` means run the variant built by this
* stage
* <pre><code>
* def cfg = [name: "foo", build: "foo", run: true]
*
* assert PipelineStage.defaultConfig(cfg) == [
* name: "foo",
* build: "foo",
* run: [
* image: '${.imageID}', // runs the variant built by this stage
* arguments: [],
* ],
* ]
* </code></pre>
*
* @example Publish image default configuration
* <pre><code>
* def cfg = [image: true]
* def defaults = PipelineStage.defaultConfig(cfg)
*
* // publish.image.id defaults to the previously built image
* assert defaults.publish.image.id == '${.imageID}'
*
* // publish.image.name defaults to the project name
* assert defaults.publish.image.name == '${setup.project}'
*
* // publish.image.tag defaults to {timestamp}-{stage name}
* assert defaults.publish.image.tag == '${setup.timestamp}-${.stage}'
* </code></pre>
*/
@NonCPS
static Map defaultConfig(Map cfg) {
Map dcfg
// shorthand with just name is: build and run a variant
if (cfg.size() == 1 && cfg["name"]) {
dcfg = cfg + [
build: '${.stage}',
run: [
image: '${.imageID}',
]
]
} else {
dcfg = cfg.clone()
}
if (dcfg.run) {
// run: true means run the built image
if (dcfg.run == true) {
dcfg.run = [
image: '${.imageID}',
]
} else {
dcfg.run = dcfg.run.clone()
}
// run.image defaults to previously built image
dcfg.run.image = dcfg.run.image ?: '${.imageID}'
// run.arguments defaults to []
dcfg.run.arguments = dcfg.run.arguments ?: []
}
if (dcfg.publish) {
def pcfg = dcfg.publish.clone()
if (pcfg.image) {
if (pcfg.image == true) {
pcfg.image = [:]
} else {
pcfg.image = pcfg.image.clone()
}
// publish.image.id defaults to the previously built image
pcfg.image.id = pcfg.image.id ?: '${.imageID}'
// publish.image.name defaults to the project name
pcfg.image.name = pcfg.image.name ?: "\${${SETUP}.project}"
// publish.image.tag defaults to {timestamp}-{stage name}
pcfg.image.tag = pcfg.image.tag ?: "\${${SETUP}.timestamp}-\${.stage}"
pcfg.image.tags = (pcfg.image.tags ?: []).clone()
}
if (pcfg.files) {
pcfg.files.paths = pcfg.files.paths.clone()
}
dcfg.publish = pcfg
}
if (dcfg.deploy) {
dcfg.deploy = dcfg.deploy.clone()
dcfg.deploy.image = dcfg.deploy.image ?: '${.publishedImage}'
dcfg.deploy.cluster = dcfg.deploy.cluster ?: "ci"
dcfg.deploy.test = dcfg.deploy.test == null ? true : dcfg.deploy.test
}
dcfg
}
PipelineStage(Pipeline pline, String stageName, Map stageConfig, nodeContext) {
pipeline = pline
name = stageName
config = stageConfig
context = nodeContext
}
/**
* Constructs and returns a closure for this pipeline stage using the given
* Jenkins workflow script object.
*/
Closure closure(ws) {
({
def runner = pipeline.runner(ws)
context["stage"] = name
switch (name) {
case SETUP:
setup(ws, runner)
break
case TEARDOWN:
teardown(ws, runner)
break
default:
ws.echo("running steps in ${pipeline.directory} with config: ${config.inspect()}")
ws.dir(pipeline.directory) {
for (def stageStep in STEPS) {
if (config[stageStep]) {
ws.echo("step: ${stageStep}")
this."${stageStep}"(ws, runner)
}
}
}
}
})
}
/**
* Returns a set of node labels that will be required for this stage to
* function correctly.
*/
Set getRequiredNodeLabels() {
def labels = [] as Set
if (config.build || config.run) {
labels.add("blubber")
}
if (config.publish) {
if (config.publish.files) {
labels.add("blubber")
}
if (config.publish.image) {
labels.add("dockerPublish")
}
}
labels
}
/**
* Performs setup steps, checking out the repo and binding useful values to
* be used by all other stages (default image labels, project identifier,
* timestamp, etc).
*
* <h3>Exports</h3>
* <dl>
* <dt><code>${setup.project}</code></dt>
* <dd>ZUUL_PROJECT parameter value if getting a patchset from Zuul.</dd>
* <dd>Jenkins JOB_NAME value otherwise.</dd>
*
* <dt><code>${setup.timestamp}</code></dt>
* <dd>Timestamp at the start of pipeline execution. Used in image tags, etc.</dd>
*
* <dt><code>${setup.imageLabels}</code></dt>
* <dd>Default set of image labels:
* <code>jenkins.job</code>,
* <code>jenkins.build</code>,
* <code>ci.project</code>,
* <code>ci.pipeline</code>
* </dd>
* </dl>
*/
void setup(ws, runner) {
def imageLabels = [
"jenkins.job": ws.env.JOB_NAME,
"jenkins.build": ws.env.BUILD_ID,
]
if (ws.params.ZUUL_REF) {
def patchset = PatchSet.fromZuul(ws.params)
ws.checkout(patchset.getSCM())
context["project"] = patchset.project.replaceAll('/', '-')
imageLabels["zuul.commit"] = patchset.commit
} else {
ws.checkout(ws.scm)
context["project"] = ws.env.JOB_NAME
}
imageLabels["ci.project"] = context['project']
imageLabels["ci.pipeline"] = pipeline.name
context["timestamp"] = timestampLabel()
context["imageLabels"] = imageLabels
}
/**
* Performs teardown steps, removing images and helm releases, and reporting
* back to Gerrit.
*/
void teardown(ws, runner) {
try {
runner.removeImages(context.getAll("imageID"))
} catch (all) {}
try {
runner.purgeReleases(context.getAll("releaseName"))
} catch (all) {}
for (def imageName in context.getAll("publishedImage")) {
runner.reportToGerrit(imageName)
}
}
/**
* Builds the configured Blubber variant.
*
* <h3>Configuration</h3>
* <dl>
* <dt><code>build</code></dt>
* <dd>Blubber variant name</dd>
* </dl>
*
* <h3>Example</h3>
* <pre><code>
* stages:
* - name: candidate
* build: production
* </code></pre>
*
* <h3>Exports</h3>
* <dl>
* <dt><code>${[stage].imageID}</code></dt>
* <dd>Image ID of built image.</dd>
* </dl>
*/
void build(ws, runner) {
def imageID = runner.build(context % config.build, context["setup.imageLabels"])
context["imageID"] = imageID
}
/**
* Runs the entry point of a built image variant.
*
* <h3>Configuration</h3>
* <dl>
* <dt><code>run</code></dt>
* <dd>Image to run and entry-point arguments</dd>
* <dd>Specifying <code>run: true</code> expands to
* <code>run: { image: '${.imageID}' }</code>
* (i.e. the image built in this stage)</dd>
* <dd>
* <dl>
* <dt><code>image</code></dt>
* <dd>An image to run</dd>
* <dd>Default: <code>${.imageID}</code></dd>
*
* <dt><code>arguments</code></dt>
* <dd>Entry-point arguments</dd>
* <dd>Default: <code>[]</code></dd>
* </dl>
* </dd>
* </dl>
*
* <h3>Example</h3>
* <pre><code>
* stages:
* - name: test
* build: test
* run: true
* </code></pre>
*
* <h3>Example</h3>
* <pre><code>
* stages:
* - name: built
* - name: lint
* run:
* image: '${built.imageID}'
* arguments: [lint]
* - name: test
* run:
* image: '${built.imageID}'
* arguments: [test]
* </code></pre>
*/
void run(ws, runner) {
runner.run(
context % config.run.image,
config.run.arguments.collect { context % it },
)
}
/**
* Publish artifacts, either files or a built image variant (pushed to the
* WMF Docker registry).
*
* <h3>Configuration</h3>
* <dl>
* <dt><code>publish</code></dt>
* <dd>
* <dl>
* <dt><code>image</code></dt>
* <dd>Publish an image to the WMF Docker registry</dd>
* <dd>
* <dl>
* <dt>id</dt>
* <dd>ID of a previously built image variant</dd>
* <dd>Default: <code>${.imageID}</code> (image built in this stage)</dd>
*
* <dt>name</dt>
* <dd>Published name of the image. Note that this base name will be
* prefixed with the globally configured registry/repository name
* before being pushed.</dd>
* <dd>Default: <code>${setup.project}</code> (project identifier;
* see {@link setup()})</dd>
*
* <dt>tag</dt>
* <dd>Primary tag under which the image is published</dd>
* <dd>Default: <code>${setup.timestamp}-${.stage}</code></dd>
*
* <dt>tags</dt>
* <dd>Additional tags under which to publish the image</dd>
* </dl>
* </dd>
* </dl>
* </dd>
* <dd>
* <dl>
* <dt><code>files</code></dt>
* <dd>Extract and save files from a previously built image variant</dd>
* <dd>
* <dl>
* <dt>paths</dt>
* <dd>Globbed file paths resolving any number of files under the
* image's root filesystem</dd>
* </dl>
* </dd>
* </dl>
* </dd>
* </dl>
*
* <h3>Exports</h3>
* <dl>
* <dt><code>${[stage].imageName}</code></dt>
* <dd>Short name under which the image was published</dd>
*
* <dt><code>${[stage].imageFullName}</code></dt>
* <dd>Fully qualified name (registry/repository/imageName) under which the
* image was published</dd>
*
* <dt><code>${[stage].imageTag}</code></dt>
* <dd>Primary tag under which the image was published</dd>
*
* <dt><code>${[stage].publishedImage}</code></dt>
* <dd>Fully qualified name and tag (<code>${.imageFullName}:${.imageTag}</code>)</dd>
* </dl>
*/
void publish(ws, runner) {
if (config.publish.image) {
def publisher = config.publish.image
def imageName = context % publisher.name
for (def tag in ([publisher.tag] + publisher.tags)) {
runner.registerAs(
context % publisher.image,
imageName,
context % tag,
)
}
context["imageName"] = imageName
context["imageFullName"] = runner.qualifyRegistryPath(imageName)
context["imageTag"] = context % publisher.tag
context["publishedImage"] = context % '${.imageFullName}:${.imageTag}'
}
if (config.publish.files) {
// TODO
}
}
/**
* Deploy a published image to a WMF k8s cluster. (Currently only the "ci"
* cluster is supported for testing.)
*
* <h3>Configuration</h3>
* <dl>
* <dt><code>deploy</code></dt>
* <dd>
* <dl>
* <dt>image</dt>
* <dd>Reference to a previously published image</dd>
* <dd>Default: <code>${.publishedImage}</code> (image published in the
* {@link publish() publish step} of this stage)</dd>
*
* <dt>cluster</dt>
* <dd>Cluster to target</dd>
* <dd>Default: <code>"ci"</code></dd>
* <dd>Currently only "ci" is supported and this configuration is
* effectively ignored</dd>
*
* <dt>chart</dt>
* <dd>URL of Helm chart to use for deployment</dd>
* <dd>Required</dd>
*
* <dt>test</dt>
* <dd>Whether to run <code>helm test</code> against this deployment</dd>
* <dd>Default: <code>true</code></dd>
* </dl>
* </dd>
* </dl>
*
* <h3>Exports</h3>
* <dl>
* <dt><code>${[stage].releaseName}</code></dt>
* <dd>Release name of new deployment</dd>
* </dl>
*/
void deploy(ws, runner) {
def release = runner.deployWithChart(
context % config.deploy.chart,
context % config.deploy.image,
context % config.deploy.tag,
)
context["releaseName"] = release
if (config.deploy.test) {
runner.testRelease(release)
}
}
/**
* Binds a number of new values for reference in subsequent stages.
*
* <h3>Configuration</h3>
* <dl>
* <dt><code>exports</code></dt>
* <dd>Name/value pairs for additional exports.</dd>
* </dl>
*
* <h3>Example</h3>
* <pre><code>
* stages:
* - name: candidate
* build: production
* exports:
* image: '${.imageID}'
* tag: '${.imageTag}-my-tag'
* - name: published
* publish:
* image:
* id: '${candidate.image}'
* tags: ['${candidate.tag}']
* </code></pre>
*
* <h3>Exports</h3>
* <dl>
* <dt><code>${[name].[value]}</code></dt>
* <dd>Each configured name/value pair.</dd>
* </dl>
*/
void exports(ws, runner) {
for (def name in config.exports.keySet()) {
context[name] = context % config.exports[name]
}
}
}

src/org/wikimedia/integration/Utility.groovy (+17 -0)

@ -24,6 +24,16 @@ class Utility {
"'" + argument.replace("'", "'\\''") + "'"
}
/**
* Quotes all given shell arguments.
*
* @param arguments Shell arguments.
* @return Quoted shell arguments.
*/
static String args(List arguments) {
arguments.collect { arg(it) }.join(" ")
}
/**
* Returns a random alpha-numeric string of the given length.
*
@ -32,4 +42,11 @@ class Utility {
static String randomAlphanum(length) {
(1..length).collect { alphanums[random.nextInt(alphanums.size())] }.join()
}
/**
* Returns a timestamp suitable for use in image names, tags, etc.
*/
static String timestampLabel() {
new Date().format("yyyy-MM-dd-HH-mmss", TimeZone.getTimeZone("UTC"))
}
}

test/org/wikimedia/integration/PipelineRunnerTest.groovy (+9 -2)

@ -35,6 +35,12 @@ class PipelineRunnerTest extends GroovyTestCase {
assert pipeline.getConfigFile("bar") == "foo/bar"
}
void testGetTempFile() {
def pipeline = new PipelineRunner(new WorkflowScript(), configPath: "foo")
assert pipeline.getTempFile("bar") ==~ /^foo\/bar[a-z0-9]+$/
}
void testQualifyRegistryPath() {
def pipeline = new PipelineRunner(new WorkflowScript())
@ -85,12 +91,13 @@ class PipelineRunnerTest extends GroovyTestCase {
mockWorkflow.demand.writeFile { args ->
assert args.text == "BASE: foo\n"
assert args.file == ".pipeline/Dockerfile"
assert args.file ==~ /^\.pipeline\/Dockerfile\.[a-z0-9]+$/
}
mockWorkflow.demand.sh { args ->
assert args.returnStdout
assert args.script == "docker build --pull --label 'foo=a' --label 'bar=b' --file '.pipeline/Dockerfile' ."
assert args.script ==~ (/^docker build --pull --label 'foo=a' --label 'bar=b' / +
/--file '\.pipeline\/Dockerfile\.[a-z0-9]+' \.$/)
// Mock `docker build` output to test that we correctly parse the image ID
return "Removing intermediate container foo\n" +


test/org/wikimedia/integration/PipelineStageTest.groovy (+44 -0)

@ -0,0 +1,44 @@
import groovy.mock.interceptor.MockFor
import static groovy.test.GroovyAssert.*
import groovy.util.GroovyTestCase
import org.wikimedia.integration.PipelineStage
import org.wikimedia.integration.ExecutionGraph
import org.wikimedia.integration.ExecutionContext
class PipelineStageTest extends GroovyTestCase {
void testPipelineStage_defaultConfig() {
// shorthand with just name is: build and run a variant
def shortHand = [name: "foo"]
assert PipelineStage.defaultConfig(shortHand) == [
name: "foo",
build: '${.stage}',
run: [
image: '${.imageID}',
arguments: [],
],
]
// run: true means run the built image
def runTrue = [name: "foo", build: "foo", run: true]
assert PipelineStage.defaultConfig(runTrue) == [
name: "foo",
build: "foo",
run: [
image: '${.imageID}',
arguments: [],
],
]
def defaultPublishImage = PipelineStage.defaultConfig([publish: [image: true]])
// publish.image.id defaults to the previously built image
assert defaultPublishImage.publish.image.id == '${.imageID}'
// publish.image.name defaults to the project name
assert defaultPublishImage.publish.image.name == '${setup.project}'
// publish.image.tag defaults to {timestamp}-{stage name}
assert defaultPublishImage.publish.image.tag == '${setup.timestamp}-${.stage}'
}
}

test/org/wikimedia/integration/PipelineTest.groovy (+93 -0)

@ -0,0 +1,93 @@
import groovy.mock.interceptor.MockFor
import static groovy.test.GroovyAssert.*
import groovy.util.GroovyTestCase
import org.wikimedia.integration.Pipeline
class PipelineTest extends GroovyTestCase {
void testConstructor() {
def pipeline = new Pipeline("foo", [
blubberfile: "bar/blubber.yaml",
directory: "src/foo",
stages: [
[name: "unit"],
[name: "lint"],
[name: "candidate"],
[name: "production"],
],
execution: [
["unit", "candidate", "production"],
["lint", "candidate", "production"],
],
])
assert pipeline.blubberfile == "bar/blubber.yaml"
assert pipeline.directory == "src/foo"
assert pipeline.execution == [
["unit", "candidate", "production"],
["lint", "candidate", "production"],
]
}
void testConstructor_defaults() {
def pipeline = new Pipeline("foo", [
directory: "src/foo",
stages: [
[name: "unit"],
[name: "lint"],
[name: "candidate"],
[name: "production"],
],
])
assert pipeline.blubberfile == "foo/blubber.yaml"
assert pipeline.execution == [
["unit", "lint", "candidate", "production"],
]
}
void testRunner() {
def pipeline = new Pipeline("foo", [
directory: "src/foo/",
stages: [],
])
assert pipeline.runner(new WorkflowScript()).configPath == "../../.pipeline"
}
void testRunner_currentDirectory() {
def pipeline = new Pipeline("foo", [
directory: ".",
stages: [],
])
assert pipeline.runner(new WorkflowScript()).configPath == ".pipeline"
}
void testValidate_setupReserved() {
def pipeline = new Pipeline("foo", [
stages: [[name: "setup"]],
])
def e = shouldFail(Pipeline.ValidationException) {
pipeline.validate()
}
assert e.errors.size() == 1
assert e.errors[0] == "setup is a reserved stage name"
}
void testValidate_teardownReserved() {
def pipeline = new Pipeline("foo", [
stages: [[name: "teardown"]],
])
def e = shouldFail(Pipeline.ValidationException) {
pipeline.validate()
}
assert e.errors.size() == 1
assert e.errors[0] == "teardown is a reserved stage name"
}
}

test/org/wikimedia/integration/UtilityTest.groovy (+4 -0)

@ -7,6 +7,10 @@ class UtilityTestCase extends GroovyTestCase {
assert arg("foo bar'\n baz") == """'foo bar'\\''\n baz'"""
}
void testArgs() {
assert args(["foo bar'\n baz", "qux"]) == """'foo bar'\\''\n baz' 'qux'"""
}
void testRandomAlphanum() {
def expectedChars = ('a'..'z') + ('0'..'9')
def alphanum = randomAlphanum(12)

