Companies need to become increasingly efficient at what they do because all metrics and ROIs are ultimately measured in time. All good techniques, tools, and best practices are designed with time efficiency in mind - and the question that drives advancement is: Can this be automated? If so, why haven’t we done it yet?

The history of technology is a history of automation.

All jobs fall into four quadrants along two axes: cognitive versus manual, and routine versus non-routine (a formal version of this analysis was done by researchers at MIT). In the world of technology, the vast majority of work is cognitive - even the work that software developers consider “manual steps” - because every step requires an intelligent human with relevant knowledge to make a decision (this is why they’re paid the big bucks): testing, deploying, maintenance, and so on. However, technology has improved to the point where tasks developers consider “rote” work, such as canary deployments and automated pipelines, can be automated and overseen by non-human intelligence.

Kubernetes is a strong container orchestration tool (among others) that does its job well: making sure your applications are correctly siloed off and interacting in a way that doesn’t sink the ship. It’s great at distributing apps across a large number of instances while handling system failures. It also has deployment primitives, but they’re “dumb”: the onus is on the developer to supply the information Kubernetes needs to make decisions, and to write glue code defining how each individual pipeline should run.
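To make this concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the app name and image are hypothetical). Notice that every rollout decision - the replica count, the update strategy, the surge and unavailability limits - is supplied by the developer up front; Kubernetes executes those instructions but decides nothing itself:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical application name
spec:
  replicas: 3                  # the developer decides how many
  strategy:
    type: RollingUpdate        # the developer decides how to roll out
    rollingUpdate:
      maxUnavailable: 1        # the developer decides the risk tolerance
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.2.3   # hypothetical image
        ports:
        - containerPort: 8080
```

There is nothing here about when to promote, when to roll back, or what “healthy enough” means - those judgments stay with the human.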

This “rote” work can be moved: when the organizational mindset adopts a different workflow, developers are no longer singularly responsible for deployments and can instead tell an intelligent system to “get it done.” As intelligent systems become better at following non-linear flows of commands for mostly routine work, our pipeline executions can be built to take advantage of them. When developers can push their code to a system and trust that it will go through every step needed to reach production, their time is freed for further iteration.

“Automation” is simply the name we give to turning a routine task into something done with the least possible further human input: something as simple as making sure your sprinklers turn on at a specified time, or as complicated as running a pipeline with multiple conditional routes per stage as contingency plans. Current platform tools like Kubernetes and ECS were not designed with automating “execution” in mind - yes, they’re great systems for organizing your library of apps, but they’re hampered by the requirement of a human librarian making sure everything is placed where it should be.

Spinnaker’s biggest value on top of a platform like Kubernetes is its ability to “be” that human librarian. Deployments to Kubernetes and AWS can be automated and controlled at a granular level (individual stages) as well as at a macro level (multi-cloud, multi-region deployments). In addition, Kubernetes is incapable of seeing outside its own box (literally), whereas Spinnaker sees the whole picture of your cloud deployments.
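As a rough illustration, a Spinnaker pipeline is defined as a graph of stages. The simplified JSON below sketches a canary-then-promote flow (stage names and ordering are illustrative; a real pipeline definition also carries account, cluster, and manifest details omitted here for brevity):

```json
{
  "name": "Deploy to production",
  "stages": [
    {
      "refId": "1",
      "type": "deployManifest",
      "name": "Deploy canary"
    },
    {
      "refId": "2",
      "requisiteStageRefIds": ["1"],
      "type": "manualJudgment",
      "name": "Verify canary"
    },
    {
      "refId": "3",
      "requisiteStageRefIds": ["2"],
      "type": "deployManifest",
      "name": "Deploy to remaining regions"
    }
  ]
}
```

The point is the shape, not the specifics: the steps, their ordering, and their contingencies live in the pipeline definition, so the system - not the developer - shepherds each release through them.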

In short, Kubernetes alone isn’t always enough to give you a fully automated deployment flow from code to production. This is the gap Spinnaker fills: orchestrating multiple steps through defined deployment pipelines.

We’ve previously covered, with two Google engineers, the perfect blend of Spinnaker on top of Kubernetes - an integration Google has devoted engineers and time to. We’ve also covered how to deploy Kubernetes with Spinnaker, if you’d like to learn how to automate your Kubernetes deployments.