Advanced Spinnaker Pipelines: "for-loops" in Spinnaker

Deploying the Same Manifest to Multiple Namespaces

Intro

Have you ever wanted to deploy the same manifest to multiple clusters or namespaces? Or have you ever wanted to iterate through an array and take action against each of its values? These scenarios boil down to a simple question: how do you create a for-loop in Spinnaker? This article explores how to build a series of pipelines that iterate through a list and take action against each value in that array. Specifically, we're going to create a JSON array of namespace values and, for each namespace, create the namespace and deploy a manifest to it.

Overview

First, let's review the overall control structure at a high level. I have four pipelines: (A) one to kick-start the process, (B) one to act as the main control loop, (C) one to run the deployment, and (D) the last one to assist with the looping.* See the figure below for a diagram of the control loop.

Pipeline diagram of overall control structure
Pipeline control structure
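Before diving into the individual stages, it may help to see the loop in miniature. The following Python sketch is purely illustrative (the function names are hypothetical stand-ins for the four pipelines), but it mirrors the control flow described in this article:

```python
def deploy(namespace, log):
    """Stand-in for pipeline (C): create the namespace and deploy the manifest."""
    log.append(namespace)

def looper(index, namespace_list, log):
    """Stand-in for pipeline (D): re-trigger (B), since (B) cannot trigger itself."""
    deploy_helper(index, namespace_list, log)

def deploy_helper(index, namespace_list, log):
    """Stand-in for pipeline (B): deploy, bump the index, decide whether to loop."""
    namespaces = namespace_list["namespaces"]
    deploy(namespaces[index]["namespace"], log)  # call pipeline (C)
    new_index = index + 1                        # "Evaluate Variables" stage
    if len(namespaces) > new_index:              # Conditional on Expression
        looper(new_index, namespace_list, log)   # call pipeline (D)

def start(namespace_list):
    """Stand-in for pipeline (A): kick off the loop with an index of 0."""
    log = []
    deploy_helper(0, namespace_list, log)
    return log
```

In the real pipelines the "recursion" happens across pipeline executions rather than within one process, which is why the extra looper pipeline is needed.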

The Array of Values

One of the primary challenges in building this pipeline is figuring out how to pass an array of values to it.  Because Spinnaker has built-in functions to process JSON, a simple way to provide this list is through a .json file.
We can make the file available by placing it in a git repo and referencing it as an expected artifact.

Expected Artifact for .json file

Going forward, we can pass the contents of the file to other pipelines through a pipeline expression like this: ${#jsonFromUrl(trigger.artifacts[0].reference)}.
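The file's exact contents aren't reproduced here, but given the expressions used later in this article (e.g. parameters.namespace_list.namespaces[...].namespace), the .json file presumably has a shape like this hypothetical example:

```json
{
  "namespaces": [
    { "namespace": "dev" },
    { "namespace": "qa" },
    { "namespace": "prod" }
  ]
}
```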

Pipeline Design

This next section goes into detail about how the stages are constructed within each of the four pipelines.

A) Pipeline: "Start"

We start by creating pipeline (A).  The Configuration stage contains an expected artifact as mentioned above. This pipeline has just one stage, which kicks off a helper pipeline (B) and passes an "index" of 0 as well as the array of namespaces.

What? That's it. Yes, let's keep it simple.
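For reference, the Pipeline stage in (A) might be configured along these lines in the underlying pipeline JSON. The field names below are a sketch of Spinnaker's Pipeline stage config, not copied from the article, and the pipeline ID is a placeholder:

```json
{
  "type": "pipeline",
  "pipeline": "<deploy-helper-pipeline-id>",
  "pipelineParameters": {
    "index": "0",
    "namespace_list": "${#jsonFromUrl(trigger.artifacts[0].reference)}"
  },
  "waitForCompletion": false
}
```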

B) Pipeline: "Deploy Helper"

This is the main engine: it is in charge of calling the deployment pipeline, updating the index, and, most importantly, determining whether it needs to loop. The pipeline expects two parameters: index and namespace_list.  The values are initially supplied by pipeline (A). We call the deployment pipeline and pass the namespace as a parameter with this SpEL expression: namespace = ${parameters.namespace_list.namespaces[#toInt(parameters.index)].namespace}

Serial: Wait for results, Parallel: Don't wait for results.

We then update the index using an "Evaluate Variables" stage with newIndex = ${#toInt(parameters.index) + 1}

"Index++"

Lastly, we call pipeline (D) to help with the looping of this pipeline.  Most importantly, we add the following expression for Conditional on Expression under Execution Options: ${parameters.namespace_list.namespaces.size() > #toInt(#stage("update index").outputs.newIndex)}
If the condition evaluates to true, then the stage executes, and we continue the loop.
Do not forget to uncheck Wait for results so that this pipeline can complete, and a new one can start.

To loop or not to loop?

C) Pipeline: Deploy

We are going to create the namespace and then deploy a simple manifest to the namespace.  To help promote parallel deployments, uncheck "Disable concurrent pipeline executions (only run one at a time)" under the Configuration stage.

Let's avoid some deadlocks.

We will use two Deploy (Manifest) stages with the respective manifests below.  For the deployment, we will also override the namespace to ${parameters.namespace}.
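The article's original manifests aren't reproduced here; as a hypothetical illustration, the first stage's manifest could create the namespace (Spinnaker evaluates SpEL expressions in manifests before applying them), while the second deploys something minimal, relying on the stage's namespace override for targeting:

```yaml
# Stage 1 manifest (hypothetical): create the namespace
apiVersion: v1
kind: Namespace
metadata:
  name: '${parameters.namespace}'
---
# Stage 2 manifest (hypothetical): a minimal Deployment; its namespace
# is set by the stage's override rather than in the manifest itself
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx
```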

D) Pipeline: Looper*

Spinnaker does not allow a pipeline to trigger itself.  Instead, we use this pipeline to call pipeline (B): Deploy Helper again with the updated parameters. It takes in the same parameters and passes them on.
Remember to uncheck Wait for results.

Do you hear an echo?

Design Considerations

Theoretically, you could condense the entire control structure into two pipelines by combining the first three pipelines into one.  Here are some reasons I decided to split the pipelines into four:

  1. Clean and clear execution history.  By having a separate "Start" pipeline, your pipeline execution history does not get muddled with the calls by the loop.  You know the parameters (list) being used to trigger the loop (see below).
  2. A reusable deployment pipeline.  By separating out the actual deployment stages into a standalone pipeline, we could reuse the pipeline, and also make modifications to the deployment pipeline in one place.
  3. Unchecking Wait for Results for all stages calling pipelines. The typical workflow may be to wait for the pipeline to complete before continuing.  For our use case though, we actually want the loop to proceed so that we can deploy in parallel.  Feel free to change the behavior to your liking.
Pipeline A tracks history of executions with namespaces used.

Challenges

This design does not handle errors well, meaning failures in the downstream pipelines will not roll up to pipeline (A): Start.
Footnote: * An additional "looper" pipeline is required, since a Spinnaker pipeline cannot trigger itself.