Armory sponsored this year's inaugural Bay Area Spinnaker Meetup on April 20 at Google's Launchpad space in San Francisco. The event covered the current highs and lows of Spinnaker and its future, including how it is expected to help businesses adopt Continuous Delivery (CD) as the project matures. It featured a production use-case presentation by Lookout and a panel of Spinnaker creators and users, who took questions from an audience evaluating Spinnaker for their future cloud deployment needs.

Watch the full video & embedded presentation here:

The meetup opened with Lookout's presentation of its multi-year journey to Continuous Delivery. Lookout is a cyber-security company that makes it possible for individuals and enterprises to be both mobile and secure. With 100 million mobile devices fueling a dataset of virtually all the mobile code in the world, Lookout can predict and stop mobile attacks before they do harm. Lookout runs 120+ services on 2,500+ hosts in Amazon Web Services and has analyzed more than 30 million iOS and Android apps, ingesting over 90 thousand new apps every day.

Brandon Leach, Engineering Manager of Lookout's Continuous Delivery team, described the challenges Lookout faced on the road to CD adoption. As it moved hundreds of services from data centers to a hybrid cloud and then entirely to Amazon Web Services, Lookout went through multiple iterations of custom-built software delivery toolchains. Many of these toolchains lost internal and industry support and ultimately stagnated, fragmenting engineering workflows and slowing the delivery of features, fixes, and services.

Top engineering pain points that Lookout identified were:

  • Unsupported, non-standard deployment tooling
  • Manual, failure-prone deployments
  • Difficult new-service deployment
  • Duplicated infrastructure code
  • Manual secrets management

Brandon identified Spinnaker as the solution, which Lookout adopted with Armory's help because its out-of-the-box features addressed these pain points directly. With Spinnaker, Lookout has made significant progress toward organization-wide CD adoption in just a few months. Of particular note was a chart in the presentation comparing key metrics before and after Spinnaker's deployment at Lookout:

Brandon was followed by the team's Lead Engineer, Rohit Rohitasva, who presented the internal technical changes Lookout made to facilitate Spinnaker adoption. Rohit then demonstrated Armory Spinnaker and the simplicity of its usage.

In the second half of the Meetup, Andy Glover, Manager of Delivery Engineering at Netflix, moderated a panel on Spinnaker that covered these topics:

  • What is the #1 missing feature in Spinnaker?
  • How has Spinnaker improved delivery at your company?
  • What advice do you have for someone considering using Spinnaker?
  • Thoughts on how to improve the Spinnaker community?
  • Viewpoint on a multi-cloud world? Pipe dream or reality?
  • How do you measure success of continuous delivery?
  • In general, where do you see features like Lambda/cloud functions going?
  • Containers, Kubernetes, Docker, Rocket, Swarm, Mesos, etc - where is all of this headed?

The panelists included (left to right):

  • Rohit Rohitasva, Lead Engineer at Lookout
  • Brandon Leach, Engineering Manager at Lookout
  • Heph Adams, Staff DevOps Engineer at Optimizely
  • Matt Duftler, Senior Software Engineer at Google
  • Lars Wander, Software Engineer III at Google
  • Andrew Backes, Principal Engineer at Armory
  • Kevin Woo, Software Engineer at Luxe
  • Ian Smith, Software Engineer at PlanGrid

Most importantly, the panel and audience members identified several top features they would like to see the community create and contribute to OSS Spinnaker, such as:

  • ECS Support
  • Character standardization/normalization
  • Lambda Support

We're excited to see the community evolve and to participate in Spinnaker's growth as it becomes a better deployment platform for everyone. Here are more pictures from the Meetup (you can find the full gallery here).

Tetra Transcript Below:

Brandon Leach:
... choosing Asgard in a previous job. As you can see, Spinnaker's out-of-the-box functionality would solve many of our customers' pain points, so we decided to build our CD solution around Spinnaker. Here are some of the other factors that went into our decision. There are many companies supporting the Spinnaker community: Netflix obviously, Google, Microsoft, Pivotal, Target, Veritas; all these companies have engineers dedicated to working on Spinnaker.

There are also many companies that already had successful Spinnaker implementations: Adobe, Cloudera, Symantec, Twitch, Lithium, Reuters, Optimizely ... that's, yeah ... Gogo Air. All these companies had successfully deployed Spinnaker and were using it in production. Spinnaker has also been proven to work at a larger scale (Netflix) than we're experiencing. Finally, much of our legacy tooling was actually set up to make an easy transition to immutable AMI-based deployments.

Once we decided to build our CD solution around Spinnaker, we started looking for industry experts who could help us quickly build a POC. We sought out Armory to help kickstart our Spinnaker efforts. Our project had high visibility with executive sponsorship, so it was important that we showed quick progress. Armory helped us go from zero to 60 very quickly, allowing us to quickly demonstrate value to our customers. Armory came in, set up, and operationalized Spinnaker in our environment so my team could immediately start focusing on working with customers to migrate their services.

With Armory as our partner, we laid out an aggressive timeline. By mid-November, we had identified our POC service, and we aimed to have it fully deployed to production using Spinnaker by December [30/3rd 00:01:46]. After completion of the POC, we moved towards beta. Our goal was to onboard five more services and expand our capabilities, which Rohit will talk about later. On February 28th, we achieved our beta milestone by having five services deployed to production with Spinnaker.

Currently we're working on our general availability milestone, which is set for June 30. At that time, our goal is to enable engineers to have self-service onboarding for their services at Spinnaker, and we'll support about 80% of [inaudible 00:02:18] services [inaudible 00:02:18].

After we completed beta, we gathered some metrics on before and after Spinnaker. We evaluated five success metrics. Steps to deploy service ... manual steps to deploy service went from 25 to one. This is the number of manual steps it takes, including validation, to deploy a service through all staging environments to production.

Each manual step introduced the potential for human error, which was responsible for many of our deployment failures and service outages at Lookout. Engineering time went from 60 minutes to less than a minute. This is the amount of time, on average, it takes an engineer to deploy through staging to production, and it represents a massive increase in engineering productivity. Automation time to deploy went from 60 minutes to 31 minutes. This is the amount of time it takes to get a release to production regardless of what methodology you're using. And finally, onboarding time went from three-plus days to 30 minutes. This is the amount of time it takes a new engineer to be proficient and comfortable deploying code to production.

So some things that we learned from beta. Customers: CD is a people and culture problem as much as it is a technology problem. Customers need to understand CD principles and concepts before they can be successful. The workflow is different from more manual, traditional deployment methodologies, and it's important that service owners understand these differences before they move to Spinnaker. Focus on automated tests to validate deployments; you'll only be able to move as fast as your automated testing. Be ready to have conversations with AWS about API rate limits. [crosstalk 00:04:07]

Spinnaker makes very liberal use of AWS APIs. I've not talked to one person who's using Spinnaker at scale who has not encountered this issue. And at Lookout, many legacy CD tools also use the same APIs, which made it difficult for us to control our usage. This makes our conversation with AWS about these limits very difficult. As we migrate more services to Spinnaker and retire legacy tooling, we hope to get more control over this issue. Now I'm gonna hand off to Rohit. He's gonna talk about some of the technical aspects of [inaudible 00:04:42].

Rohit R.:
Thanks, Brandon. So in order to support our beta customers, we had to add certain capabilities to our existing CD solution. Spinnaker provided great orchestration to execute the CD pipelines, but we needed to add more support, more capabilities, to onboard all the Lookout services.

Chef cookbook pipeline. Most services at Lookout use Chef for configuration management. Our [inaudible 00:05:13] Chef workflow is centered around pushing cookbooks, data bags, and environments to a specific Chef server, from which [nodes 00:05:23] are bootstrapped at launch time. Any update is managed by making changes to these Chef assets and converging on the actual nodes. We needed to adapt this workflow to work with Spinnaker's baking of immutable images. In order to support this pattern, we had to eliminate the Chef server and bring in a Jenkins pipeline, which bundles the cookbook into [inaudible 00:05:48] packages, which are downloaded and executed in the bake stage.

Application configuration. Most of the Lookout services, as I mentioned, use Chef. So obviously, with Chef come the Chef environments, which are where variables were injected at runtime during the Chef [inaudible 00:06:08]. With the Chef server gone, we couldn't do that anymore. We had to move these configurations to reside in the application [inaudible 00:06:18] itself, and based on the cloud stack variables, which I'm guessing most of you are familiar with in Spinnaker, we use these variables to inject environment names at runtime to select the application configuration.
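A minimal sketch of the pattern Rohit describes, assuming (hypothetically) that the baked image exposes the environment name through a CLOUD_STACK environment variable and that one config file per environment is bundled into the image:

```python
import os

def select_config_path(default_stack="dev"):
    # Spinnaker's naming conventions make the stack/environment name
    # available to the instance; here we assume it arrives as an
    # environment variable named CLOUD_STACK (an illustrative name).
    stack = os.environ.get("CLOUD_STACK", default_stack)
    # Every environment's config is baked into the image; pick the one
    # matching the environment this instance was deployed into.
    return f"config/application-{stack}.yml"
```

Because all environments' configurations ship inside the immutable image, the running instance simply picks the right one, and no Chef server is needed at runtime.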

Secret store. We encountered [90 servers?] at re:Invent last year, and we decided to utilize it for our secret store. We needed a place to store our application secrets that was easily accessible from [inaudible 00:06:49] with some form of authentication in case of servers [inaudible 00:06:55]. And because of the immutable nature of Spinnaker deployments, we couldn't put secrets in the bake stage, and given that different stages have different secrets, we decided we'd use it.

On top of it, we had certain requirements for a secret store: it should have regular backups and be highly available. It deploys all its instances behind an ASG across different AZs. The [inaudible 00:07:25] secrets are always backed up in S3 buckets. I'll explain what this is once I [inaudible 00:07:36]. To download these secrets, we created a tool called BAC, Bootstrap Application Configurator, which was developed in-house. To use this tool, you add a [YAML 00:07:48] file like this in your application [inaudible 00:07:50], and it will download all your secrets at runtime before your application starts. BAC gives you a couple of ways to inject secrets: you can inject a secret as a file on the filesystem, inject it as an environment variable, or even replace certain values if you have secrets in a properties file or something like that.
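The BAC manifest shown on screen is not reproduced in the transcript; since BAC is an in-house Lookout tool, the sketch below is entirely hypothetical, just to illustrate the three injection modes described (file, environment variable, and in-place replacement):

```yaml
# bac.yml - hypothetical Bootstrap Application Configurator manifest
secrets:
  - name: db_password
    inject: file          # write the secret to a path on the filesystem
    path: /etc/myservice/db_password
  - name: api_token
    inject: env           # export the secret as an environment variable
    env_var: API_TOKEN
  - name: keystore_pass
    inject: replace       # substitute a placeholder in a properties file
    target: /etc/myservice/app.properties
    key: keystore.password
```

BAC would read a file like this at instance startup, fetch each named secret from the secret store, and inject it before the application process starts.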

Service discovery. Most of the service discovery at Lookout was done with Consul and a template-driven proxy configuration, which was tightly coupled with our Chef cookbooks via an in-house utility ... print library. This setup was very complex and fairly [inaudible 00:08:31]. We decided to simplify this by using DNS naming conventions for the CNAMEs, which resolve to the ELB [inaudible 00:08:38]. This was advantageous for Spinnaker because Spinnaker uses the ELB to check the health status of the instances after [it hit wireless 00:08:49]. For example, if you're trying to look for service A, you would look for serviceA.environmentname.domainname. So [it gets 00:08:56] pretty easy for any service to locate any other service. Migrating a service to Spinnaker required all the client services, which were configured using the complicated [inaudible 00:09:09] proxy and [inaudible 00:09:10] method, to use our new [CNAME 00:09:13] records.
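The convention described above turns service discovery into a pure naming operation; a small sketch (function and domain names are illustrative):

```python
def service_fqdn(service: str, environment: str, domain: str) -> str:
    """Build the conventional CNAME for a service: <service>.<env>.<domain>.

    The CNAME resolves to the service's ELB, which Spinnaker also uses
    to check instance health after a deploy.
    """
    return f"{service.lower()}.{environment}.{domain}"

# Clients in any environment can find a service at one predictable name,
# e.g. service_fqdn("service-a", "staging", "lookout.example")
```

Because every service follows the same pattern, client configuration no longer needs per-service proxy templates; any caller can derive any dependency's address.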

So, [GA 00:09:19] capabilities. We finished beta at the end of February, and now we are moving to [GA 00:09:23]. These are some of the improvements and enhancements we decided to introduce for GA.

Configuration management. To accommodate our existing use of Chef cookbooks with Spinnaker, we ended up with two entry points to our pipeline: one for the code changes, another for the [inaudible 00:09:49] package for the cookbooks. In addition, to make them aware of each other, we had to include the cookbook name and version in the build properties for the application. And on top of it, as if things weren't complicated enough, these resided in totally different repos which don't have any correlation with each other.

To overcome this limitation, we introduced Debian packaging with a Gradle plugin released by Netflix called Nebula. Instead of having two different [repos 00:10:25] to store our configuration and application, Nebula enabled us to create a simple Gradle build file and store it in the same application repo. This lets us store the application and the OS [inaudible 00:10:36] configuration in the same repo, which made it very simple for anybody to go and have a look if they wanted to.

This also greatly simplifies the Spinnaker [bake 00:10:49] process. So this is a sample build file; it looks a little like Gradle, but it's a sample file which you can add to any application [inaudible 00:11:01] in two parts, and you'll create a Debian package of [inaudible 00:11:04] using [Docker 00:11:05], or if you prefer to use Gradle, you can use Gradle directly.
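The build file shown on screen is not reproduced in the transcript; a minimal sketch of Debian packaging with Netflix's Nebula ospackage Gradle plugin might look like this (package name, paths, and the plugin version are illustrative):

```groovy
// build.gradle - sketch of a Debian package built with Nebula's ospackage plugin
plugins {
    id 'nebula.ospackage' version '8.6.3'   // use the version appropriate for your Gradle
}

ospackage {
    packageName = 'myservice'   // illustrative name
    version = '1.0.0'

    // Ship the application artifact and its OS-level configuration together,
    // so both live in the same repo and land in the same .deb.
    from('build/libs') { into '/opt/myservice' }
    from('config')     { into '/etc/myservice' }
}

// `./gradlew buildDeb` produces the .deb that the bake stage installs.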

[inaudible 00:11:12] In beta we started with the Chef and application pipelines meeting at the [packer 00:11:19] stage and [deploying 00:11:22] to Spinnaker; with Debian it became a lot simpler: just one single pipeline with one entry point.

Pipeline creation. After beta, we realized that with more and more applications coming on board, manual creation of pipelines is not an option. We didn't want the [inaudible 00:11:41] to go wander around trying to figure out which checkbox to [click 00:11:46] to enable or disable any property or feature. On top of it, these pipeline changes are [inaudible 00:11:53] S3-backed, but I would not really expect somebody to go and read a [JSON 00:12:02] file and try to figure out what changed in the pipeline; they are pretty long pipelines. For this, we decided to go with Foremast, released [inaudible 00:12:11] by Gogo Air, to give us a jumpstart in creating automated pipelines. We had to make some modifications to the source, which we were planning to push upstream after [GA 00:12:22]. This ensured that the Spinnaker pipeline configuration lives in the same [repo 00:12:27] as the application [inaudible 00:12:27] and goes through the same change control process. This is a sample application configuration file; the values you see here are basically injected into the pipeline for that application end to end.
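The sample configuration file shown on screen isn't reproduced in the transcript. Foremast generates Spinnaker pipelines from JSON config checked into the application repo; a rough sketch follows (keys and values are illustrative, not Foremast's exact schema):

```json
{
  "deployment": "spinnaker",
  "type": "ec2",
  "env": ["staging", "production"],
  "notifications": { "slack": "#myservice-deploys" }
}
```

Keeping this file next to the application code means pipeline changes go through the same pull-request review and change-control process as the code itself.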

Authentication. When we went into beta, Spinnaker didn't have any authentication, so this was one of our highest priorities from a compliance perspective. For [GA 00:12:58] we introduced authentication using [inaudible 00:12:59]. Right now this [inaudible 00:13:00] to our Lookout.com domain using [Google OAuth 00:13:07].

So for [GA 00:13:10], this is what our entire CI/CD pipeline looks like: a push to the [inaudible 00:13:15] triggers Jenkins, which [inaudible 00:13:19] into Artifactory, and at the end of the build it runs Foremast to create or recreate the pipeline and triggers the Spinnaker pipeline.

Prometheus. Our CI/CD pipeline has many moving pieces, to name a few: Jenkins, Artifactory, Spinnaker, GitHub, networking, Debian packaging, Datadog, Splunk, and a few more. We needed a way to continuously test the functionality of our pipeline end to end. This is where our in-house application [called 00:13:56] Prometheus comes in, which actually builds, deploys, and tears down every couple of hours (every two hours, actually). In addition to that, it checks the health of every component that is part of CI/CD and alerts us if any of the components is failing.

This is a demo, [inaudible 00:14:23] [crosstalk 00:14:20]

Speaker 3:
You record your demo to make sure nothing goes wrong. [crosstalk 00:15:48]

Speaker 4:
So Brandon, who picked the name for Prometheus?

Brandon Leach:
The guy whose liver is eternally eaten by an eagle, and then it comes back every hour or so, yeah- [crosstalk 00:16:00]

Speaker 5:
It's a very fitting name.

Brandon Leach:
Good metaphor. How terribly erudite of you. [crosstalk 00:16:15]

Speaker 6:
Oh, there we go.

Rohit R.:
Sorry about that, guys; sometimes authentication is not helpful at all. So this is the Prometheus pipeline. For the sake of simplicity, I actually triggered it manually. Basically, [inaudible 00:16:35] it goes and finds the last daily [inaudible 00:16:35] and starts baking it. I hope I didn't make the bake step too long, but I was trying to cut it. [inaudible 00:16:50] After baking, it deploys to staging and production; this simply means it's time to deploy in our staging and production [inaudible 00:16:58]

At the end of the deployment, the way this works is it also checks Jenkins: attaching to Jenkins and being able to build anything on Jenkins. It attaches itself as a Jenkins slave, which is kind of the easiest way ... sorry, attaches itself as a Jenkins slave, so now it is building the new Debian package for Prometheus. Even if nothing has changed, it's made to force-build the package, so it will try to build it, and this time it'll try to push the package again to [Artifactory 00:17:40], so [inaudible 00:17:41] like the [inaudible 00:17:45] building the Debian using Nebula, and then it can push it to Artifactory. [The CI cluster 00:17:49].

The next step, which I think the stage display is kind of behind on [inaudible 00:17:55], is the validation stage. It basically has checks like: am I able to connect to Datadog? Am I able to connect to Splunk? Basically, it drops all of the configuration on the Jenkins slaves, then runs the [API 00:18:13] checks against Datadog and Splunk, plus a few more, and does the validation. At the end of it, it goes and just tears [inaudible 00:18:19] down, and you might have noticed there's another pipeline which just triggered. Basically this pipeline is on a [inaudible 00:18:29]; it triggers itself every ... hourly, I think? [Correct 00:18:32]? I changed this recently? Every hour it bakes it up, does everything again, and tears it down. It keeps on doing this, so if any of these steps fail, it'll send us a message to alert us that something is wrong.

Thanks.

So what's next for us? We are very close to [GA 00:19:06]. Containers: some of our bake steps take way too long. This is due to a combination of Chef [inaudible 00:19:10] or Debian [inaudible 00:19:12] going out there and downloading everything each time. Some of our application artifacts are also very big. That is why we are moving towards containers, which will reduce our bake times to [inaudible 00:19:25] way less than they are now.

[inaudible 00:19:27] validation. We already have [inaudible 00:19:32] which we use to create these pipelines, but we're not validating whether those AWS resources (ALBs, ACLs) are already there. We are planning to extend Foremast to do these validations before you actually go and create a pipeline.

Authorization. So we've got authentication in place, which is pretty good, but still, anybody can by accident go and delete somebody else's pipeline, or something which is not supposed to be touched. So we are planning to use either VIAG or Oban to introduce a more granular level of authorization on Spinnaker's different [inaudible 00:20:15].

One-click project creation. So for [inaudible 00:20:22] we have an onboarding guide which enables any application to onboard onto Spinnaker. This is good, but still error prone. We want to provide a simple one-click solution that will create your entire CI/CD pipeline and put all the hosts, permissions, everything in place, so as a developer you don't have to worry about what's [inaudible 00:20:39] in the onboarding guide.

[inaudible 00:20:44] So right now you can send Slack notifications from Spinnaker to your Slack channel. It says "[inaudible 00:20:52] your pipeline is waiting"; then you go back to Spinnaker and click yes or no. We would like to get to a place where I can click yes or no for that particular pipeline from my Slack channel itself.

Intelligent canaries. So right now Spinnaker doesn't support canary deployments. Armory is working on it, apparently, and we are hoping that in the near future we'll have canary deployments.

[inaudible 00:21:23] and reporting. Right now, Armory is building a feature that would give us a dashboard for auditing and reporting purposes.

Capacity and cost planning. We are trying to figure out how we can gather all this data from Spinnaker deployments and do cost analysis and reporting for future planning.

Big data and security pipelines. So right now our data teams have their own individual ways of deploying [inaudible 00:21:52] into different environments. We are hoping that in the near future they will somehow be able to leverage Spinnaker for their deployments.

That was it. It's time for Q and A.

Speaker 7:
What do you use to [inaudible 00:22:16] promotion from staging to production? Is that a manual step, or is that-

Rohit R.:
Yeah, for us we have mandated that anything that goes to production should have a manual step. This is due to compliance requirements, and in addition we send a Slack channel alert to our incident response team.

Speaker 7:
Yeah but we're ... [inaudible 00:22:42] your teams are starting to invest heavily, right, in this type of testing, and before you have a solution like this it's interesting people really don't think about it, because it's like a manual step here a manual step there, hand it off to the QE team, they might execute some tests, they might have some tests [inaudible 00:23:00] some manual validation, so [inaudible 00:23:03]really starting to rally around this and realizing that until they have this testing in place they're not [inaudible 00:23:08]

Speaker 8:
I used to work at Lookout and I recall there was a service there that was getting 50,000 [inaudible 00:23:32] per second, like [inaudible 00:23:40] a massive amount of continuous traffic. I'm curious what you've done with Spinnaker's deployment strategies in order to handle that with zero downtime between [inaudible 00:23:45].

Brandon Leach:
So I remember; you're talking about [AppIntell 00:23:49], right? And that was when you were at Lookout. That was actually the first application of beta design [inaudible 00:23:56] out of the US, and it was our first AMI-based deployment as well, right? We haven't moved that onto Spinnaker yet; that's gonna be part of our post-GA migration. But yeah, I'm interested in why you're asking that question. [crosstalk 00:24:13] I have a sense that there's always something that we should be looking out for, right?

Speaker 8:
I'm curious what other people's experience is [inaudible 00:24:20], but certainly on our side, RedBlack deploys will sometimes introduce some downtime in unexpected ways [inaudible 00:24:30] custom deployment strategies.

Brandon Leach:
So we're not using any custom deployment strategies; we're basically just using RedBlack for everything. I think we actually have some [Highlander 00:24:39] ones as well, but nothing at the level of the 50,000 requests per second that AppIntell can [inaudible 00:24:47] to, so I'd be interested in talking to you about that more [inaudible 00:24:47]

Speaker 8:
I can answer that for you: the [AWS ELB 00:24:56] has connection draining, [inaudible 00:25:00] I have a Jenkins job that I just completed and it turns it on for every single [inaudible 00:25:06], and then at that point you go RedBlack and it just drains while loading [inaudible 00:25:14].

Brandon Leach:
Okay so maybe we can take a five minute break, and then we can resume with a panel discussion. Unless there's more questions. Let's take a break. Thank you. [crosstalk 00:25:50]

Speaker 9:
We have a bunch of Spinnaker stickers here, if anybody wants Spinnaker stickers. [crosstalk 00:26:02]

Intermission for break

Andy Glover:
Aha. We have more than one mic. I thought it'd be interesting to hear, potentially, from other people on the panel here: why Spinnaker? The requirements are, you have to do it in two sentences and they have to rhyme.

Speaker 2 :
I can't do that. But I came to [inaudible 00:00:27] about two years ago, and we didn't have any structured story around how we were going to deliver our software, and we wanted to do this [inaudible 00:00:35] in a more uniform way. So you had some people deploying their stuff with Chef and [inaudible 00:00:43], people with [Elastic Beanstalk 00:00:44], and you had one person who spun up their own [tootem 00:00:48] account, which is kind of like a Heroku thing, but they were complaining that it didn't integrate with any of our stuff.

So I was working on an [EC2 00:00:59] Container Service-based deploy system for us, Empire, from Remind, and while I was setting this up, I went to [QCon 00:01:11] where I saw a Spinnaker demo for the first time, and I went "Wow! This uses all the right nouns and verbs; this is going to be the new Jenkins but for continuous delivery," and I just dropped that whole thing and started working on Spinnaker. It was pretty great.

Speaker 3:
So I'm sort of coming from very much the opposite experience from Lookout, in the sense that we were a smaller company with a smaller set of services, and right now we're doing [inaudible 00:01:43] on Heroku, which [inaudible 00:01:44] experience is hard to beat. But as you grow, as you have more and more services, as you have more and more developers, somehow you [inaudible 00:01:55] on Heroku: how do you keep all of your environments consistent? How do you keep all of your services consistent? [inaudible 00:02:04] your knowledge can prosper here. We started looking at Kubernetes for this, and decided that Kubernetes by itself wasn't quite up to the low-maintenance developer experience that you get from Heroku, and Spinnaker would help provide a similar set of tools.

Speaker 4:
We used to do deploys on [inaudible 00:02:37], and with [inaudible 00:02:38] we actually have a downtime window from after midnight to about 4:00 AM, and guess when we deploy? Midnight to 4:00 AM. After doing that for a couple of months, it really wears on you, especially if you do it in the middle of the week. We were trying to find some kind of solution to deploy in the middle of the day, so we moved to doing red-black deploys, but we saw a lot of [inaudible 00:03:04] incidents. When we discovered Spinnaker, really through a Google search, our eyes lit up; we spent a week integrating it, and we haven't looked back since.

Speaker 5:
We have a lot of companies using Spinnaker, so we get to see a lot of pretty neat use cases and reasons why people are using Spinnaker. A lot of it is very similar to Lookout: they want a single paved road [inaudible 00:03:33] that everyone can go on, everyone can plug into. That's pretty neat.

But I think the coolest thing I've seen so far is people using it to move from AWS to Kubernetes. That was something that a couple of years ago I thought was going to be a hard thing to do, but Spinnaker and [Stew 00:03:53] make it a lot easier.

Speaker 6:
Thanks. I work on Spinnaker at Google, [inaudible 00:04:02] working on Spinnaker, and I [inaudible 00:04:05].

Speaker 7:
I don't think I can top that. We work on Spinnaker, as opposed to being users of Spinnaker directly. Our initial interest was back close to three years ago now: we were part of a team that, sort of as side work inside of Google, was trying to make sure that open source packages could be viewed as top-tier, first-class workloads on Google's, at the time, fairly new public cloud offering. So we got involved with Andy and Steven, and we first added support for GCE, our Compute Engine, which is similar to any other [VM 00:04:48]-based platform, and quickly realized our interests aligned, the teams worked well together, and then it turned into a big full-time effort.

Lars joined the team and added all of the Kubernetes support; we've since added support for App Engine. We put tons of effort into figuring out how teams can stand up and operate Spinnaker from scratch. Inside of Netflix they keep a large Spinnaker deployment well-oiled and running all the time, but that wasn't what our users were doing; our users were starting from scratch and figuring everything out. We put quite a bit of effort into that, and now, since starting what became a pretty sizeable effort, we have quite a few internal teams using it, and we have some publicly visible teams using it, like Waze, which has fully active deployments across both AWS and GCE being 100% managed with Spinnaker.

Speaker 1:
In case you were all wondering, that mustache is real. [Laughs] He's serious, and if you want his attention, just go over and grab it and pull it toward you.

Speaker 2 :
He loves it!

Speaker 1:
[Laughs] Jerry tried it. So let's get to brass tacks. Number one missing feature in Spinnaker today is?

Speaker 4:
Codified pipelines. They're on their way.

Speaker 1:
I said missing.

Speaker 5:
Well, they're not there now, so all the users right now have to solve this. Lookout talked about using [Foremast 00:06:17]; a lot of other companies just roll something themselves. This is convenient if you have a lot of consistency inside your company and you can just take something that works for everybody, but you don't get a lot of variability with that. [Foremast 00:06:32] is kind of, you have to do things a certain way, so I'm really looking forward to the work from you, Rob.

Speaker 4:
Thank you. To tack onto that: not just the pipeline configuration, but also to codify autoscaling settings and every other setting that's "click click click click click." Like he said, we actually wrote our own thing to pull it, store it in [JSON 00:07:06], and then shove it back up, but that's really annoying.

The other thing, authentication, there's some stuff that we can do with that, but it would be nice to have it taken care of.

Speaker 3:
I was going to talk about [authorization 00:07:24]. Brandon mentioned very early on in the presentation that this move to CD is very much a culture problem as much as a technology one. The approach working right now is saying: you can all go look at Spinnaker, you can all go look at what it's doing and how it's configured and where something is [red-black 00:07:44], but please don't touch anything. We don't want the [inaudible 00:07:46] to touch any buttons. It could be [inaudible 00:07:51] locked down today: say here are the people who are allowed to make changes, here are the people who are allowed to deploy, and here are the people that just get [promoted 00:07:59]. Everybody gets to look.

Speaker 7:
That's a great point, and actually Travis on our team is the one who did Fiat, which we [jokingly say stands for 00:08:08] "fix it again, Travis." He has the read-only mode basically just about there; the story needs to be kicked across the finish line, but it does put you into a mode where you can't turn all the knobs, but you can--

Speaker 3:
I've seen it in the HQ, I've been looking forward to it, so.

Speaker 7:
It's just about there, he's obsessively focused right now, I'm redoing all of the docs, so we can bring this to... [trails off]

Speaker 3:
That's the important part right?

Speaker 7:
Yeah, so ask Travis where it is or something, but it's almost there. We could make a deployment.

Speaker 2 :
I just spent the last year setting up PCI compliance infrastructure [inaudible 00:08:55], that's... don't do it. But one of the biggest challenges that I had was figuring out how to do authentication in Spinnaker, and even asking Andy and his team, the answer was: you just launch another Spinnaker, you just launch a whole other stack. Fiat right now is just great for offloading authorization, but there's nothing in the code to really handle authentication past [inaudible 00:09:27] account role mapping, so it's hiding the [senge 00:09:30], hiding [emeralds 00:09:32], and permissions even attached to the Spinnaker box like this. That's... please, anything you can do to help.

Speaker 8:
You should email Travis.

Speaker 9:
The thing I found with Spinnaker, and he already mentioned one of them, is pipeline configuration. When I started using it, it's really hard to get going on the [GUI 00:09:58], and I don't want any developer up front to get frustrated and [unproductive 00:10:02]. And it would be really nice if we had authentication built in, so we don't have to decide: are we going to use [Okta 00:10:09], are we going to use Google, or Fiat or something. It would be really nice if it was part of the package.

Speaker 2 :
I plus-one the pipelines as well. The other thing is metrics. Intelligent canaries tied to application metrics, right; I know that you guys have this at Netflix. Yeah, that would be super useful, if we had something like that integrated with some monitoring thing, like a popular monitoring system like [Datadog 00:10:44] or whatnot. That would be a really killer feature. Can I make a new one? A new feature request?

Speaker 1:
You know I only said one, okay?

Speaker 2 :
One more. I was plus-one-ing his. A view into every running pipeline would be really useful, because it's impossible to know when it's safe to kill Spinnaker to do maintenance on it. You have to literally orchestrate a strategy against the pipeline execution API [inaudible 00:11:17]; there's no simple pane of glass for seeing what's running right now.

Speaker 6:
There are some endpoints on the orchestration engine, on Orca, that can produce [active 00:11:37] executions, and there are metrics published to reflect that same information, but it's not extremely obvious how to do that, so we could do a better job, and that's actually [inaudible 00:11:49].

Speaker 2 :
One of the behaviors that would make this really useful is similar to Jenkins [inaudible 00:11:56]: stop executing new jobs. That in Spinnaker would be amazing, because... we consistently have this problem of finding a pipeline that was running while we're doing the restart, and then coming to find out later it's still hanging out in [lettuce 00:12:09] and it's going to be there until you handle your...

Speaker 7:
I think what most are doing is they'll prevent new connections, new requests from reaching that node, and then monitor the exposed endpoints for any remaining running work. There are zombie agents and things now that will clean up those pipelines [inaudible 00:12:29]. That zombie logic was added by Rob Fletcher at Netflix, so that when you try to cancel a pipeline that's a zombie... first it wasn't clear to you that [inaudible 00:12:41] zombie from the UI, and then you would try to cancel it with, roughly like [inaudible 00:12:47]. Now, you can force it through, it will [inaudible 00:12:51]. So, yes. We agree.
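The drain problem the panel is describing, waiting until nothing is in flight before restarting Spinnaker, can be scripted once any endpoint or metric lists executions. A sketch of the decision loop; the status names and the caller-supplied fetch function are illustrative, not Orca's exact vocabulary or API:

```python
import time

def is_safe_to_restart(executions: list) -> bool:
    """True when no execution is still in flight. Status names here
    (RUNNING, PAUSED) are illustrative, not Orca's exact vocabulary."""
    in_flight = {"RUNNING", "PAUSED"}
    return not any(e.get("status") in in_flight for e in executions)

def wait_for_drain(fetch_executions, poll_seconds=10, timeout_seconds=600):
    """Poll a caller-supplied fetch_executions() callable until drained.
    Returns True if drained, False on timeout."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if is_safe_to_restart(fetch_executions()):
            return True
        time.sleep(poll_seconds)
    return False
```

Pair this with a front door that rejects new work, as described above, and you get a rough equivalent of Jenkins' prepare-for-shutdown behavior.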

In terms of talking to potential customers, most of what we're doing, at least at Google, is trying to figure out how to make Spinnaker satisfy our users, as opposed to making it satisfy... So of the number one things we hear from potential users most often, the first is the declarative, config-based approach. That's us.

The second thing is the canary work. Inside of Netflix they have automated canary analysis systems; there's [inaudible 00:13:26] integration with Spinnaker. This stuff really doesn't exist outside of Netflix. There is work underway to make it exist outside of Netflix. It's underway. We're trying to work together on it; the rough plan at the moment is that by the end of this quarter or thereabouts, we would have something in reasonable shape to try out that would support their in-house metric store and our metric store, which is Stackdriver, so that's the initial target. By the end of this quarter it would include a canary stage... it will provision baseline and canary server groups [inaudible 00:14:05].

It's a little early these days, but that's...

Speaker 1:
I think what's really compelling about the Spinnaker community is the innovation going on in various ecosystems, whether it be Netflix, Google, Lookout; everyone's adding value. And the auth as a missing feature, I'll plus-one it, because exactly what we do at Netflix is we have one stack that's open to anybody on the network, and we have a very high-trust culture, but we do have a PCI/SOX compliance stack that's closed. So Netflix is also really eager for Travis to finish Fiat, because we plan on leveraging Fiat as well. In fact, we're running it right now and putting it through its paces and making sure that it can meet our needs. But I think that's the success story of this community, in terms of everyone's adding value in different ways.

I want to ask the audience here, are there people here evaluating Spinnaker?

Okay, a few, good, because the next question I wanted to ask the panel is: what is your advice? Lookout gave some, which was really helpful. If you could distill it to a few sentences instead of a soap box. Matt.

Speaker 7:
Sorry.

Speaker 1:
[Laughs] What advice would you have for someone evaluating Spinnaker? What should they do at the enterprise, or what's the biggest thing they should be aware of before bringing Spinnaker in-house?

Speaker 4:
The amount of configuration. We started early on with a lot of integrations with Jenkins, so there is a lot of thinking that you have to really do. It took about a week to actually get proficient at clicking. Then after a while you're a click master. If you can map it out and figure out what you need to do, try to go from a basic model of a pipeline to what you want to orchestrate, and then go really wide for everything else.

We have one basic pipeline that starts off initializing the clusters, and then it just continues from there on. There's one little bug: if there's no existing cluster, some of the deployments will fail, because each one expects an existing cluster to start a new cluster from. Then, yeah.

Speaker 3:
I'd say that it's very much that model of click around and then see what happens. The end experience that we're trying to get to, the fine print, is very much a fully automated flow: you merge your check-in, it creates a Docker container, Spinnaker pulls the Docker image, it goes out to whatever environment. You never touch the deploy, never touch rollback, and so on and so forth. Really sort of the same thing as you'd hear in other people's talks.

It's super powerful. I found it really impressively easy to do something by hand in Spinnaker once, then say, "okay, what did [inaudible 00:17:20] leave behind, what did the pipeline generate, how is it using Kubernetes, look at the Kubernetes output it generated," and then say, "okay, I want to automate that. Can I dump that in, check that into Kubernetes, and let Spinnaker pick it up from there?" So that is a powerful way to approach what-can-I-do-with-this [massive 00:17:41] tool that has a whole bunch of knobs, aside from just reading through docs and search material, because there is a lot [inaudible 00:17:47] to go with it, and figure out how to get to where you want to go.

Speaker 2 :
I'll piggyback on what he said. I think I mentioned this a little bit in the presentation, but it's something that I overlooked: if people in your organization are going to be working on this, make sure that they really understand what CD is, what continuous delivery is, what continuous deployment is, and that they share that vision with you, right? Because until you get everybody excited about heading in that direction, you're going to hit some bumps in the road, whether you like it or not.

Speaker 6:
Before all of that: if you're trying to set up Spinnaker for the first time, don't be discouraged. It's pretty hard to stand up; it's, like, [inaudible 00:18:35] services, they all have configuration, they're hard to update.

We're trying to fix this with [Halyard 00:18:40]; I would check that out. It's a little self-serving, I'm working on it. It uses a bootstrapping version of Spinnaker to spin up another Spinnaker that red-blacks new versions. It versions [inaudible 00:18:52] config that you write, validates it to make sure you don't push a bad Spinnaker server. It really takes away a lot of the headaches that you probably experienced the first time you tried to set up Spinnaker, when you actually tried to [inaudible 00:19:00].

Speaker 7:
One other thing I would say is: roll it out early and set up monitoring dashboards and figure out how to use them. There are tons of interesting things being published relating to thread pools and circuit breakers and queue work and all of those things that are not obvious; you're not going to find them scraping through source code. When something goes wrong, the dashboard is almost always a better place to start than [inaudible 00:19:30] or source code.

Speaker 2 :
One thing that really surprised me about Spinnaker, going into this, was that as an old-school ops person, I thought I had a whole bunch of [inaudible 00:19:44] with launching instances, but with Spinnaker you really don't. Its basic concepts around continuous delivery require that you're going to make an [AMI 00:19:55] once, and then obviously you'll be staging it. It needs a way to configure itself to be staging, because later on you're going to take that [AMI 00:20:04], you're going to put it [in production 00:20:05]; you're not going to build it again and inject a new configuration into it. It needs to know how to live in places. Or you need to have a platform that allows it to be configured with minimal input, because really the only place you can decide what configuration you're going to load onto these immutable instances is that little text box field at the very bottom of the deploy panel that says "user data script, base64 encoded," which I really didn't want all of the developers to have to deal with.

Think early about how you can make your instances bootstrap themselves with a minimal amount of custom configuration, because you're not going to want to put a hundred-line user-data script that [bootstraps 00:20:46] Chef into that little text box every single time you want your modifications.
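One way to keep that text box small, sketched here under the assumption that you host the real bootstrap script at some internal URL of your own: generate a few-line stub that fetches and runs it, and base64-encode the stub the way the deploy panel expects. The URL and file paths below are hypothetical.

```python
import base64

def make_user_data(bootstrap_url: str) -> str:
    """Build a minimal user-data stub, base64-encoded as the deploy panel
    expects. The stub just fetches and runs the real bootstrap script;
    bootstrap_url is whatever internal location you maintain."""
    stub = "\n".join([
        "#!/bin/bash",
        "set -euo pipefail",
        f"curl -fsSL '{bootstrap_url}' -o /tmp/bootstrap.sh",
        "bash /tmp/bootstrap.sh",
    ])
    return base64.b64encode(stub.encode()).decode()
```

The hundred lines of Chef bootstrapping then live in version control behind that URL, not in the text box.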

Speaker 5:
Brandon talked about it and Nate talked about it, but getting your engineering culture ready for the changes that Spinnaker brings is pretty important. I've been surprised how many people don't really know what's going to happen: people go from using Chef to mutate their infrastructure, they mess around lately with Docker [images 00:21:15] and whatnot, and then they go to baking AMIs, which they've never worked with; it's kind of strange.

You've got to know what you're getting into, and sometimes you just don't know. When you're using Chef and you run Chef and it takes thirty seconds, that's a lot different than waiting fifteen minutes for an AMI to bake and then doing a very safe red-black deploy, which could take any number of minutes to hours, I guess. So some people say that's not worth it, that's a lot of time, but as Brandon pointed out in the presentation, it's really engineering time that you should be tallying. I think that was a really good way to count. With Spinnaker, you just do it, and then you don't have to worry about it. With Chef, you have to worry about it; it's probably not going to work right. It is important to get the culture right.

Another thing is we work with a lot of companies who brought in Spinnaker, and we actually wrote a public playbook on how to bring Spinnaker to your organization, if you want to check that out.

Speaker 1:
Containers have come up a few times. Ian talked about moving off [EC2 00:22:31] to Compute Engine to Kubernetes; we have Lars here, who wrote the Kubernetes implementation. The good folks at Lookout mentioned that coming soon, you're looking at ECS. Something that no one's mentioned yet with respect to what's missing in Spinnaker: [inaudible 00:22:50] Lambda, or Google's Cloud Functions. I'm giving you all an opportunity to have your soap boxes. Where are we going here? It doesn't have to be about Spinnaker, but where is the industry going with respect to serverless? Is it going to eclipse containers, or are containers going to make serverless pointless? Why aren't these features, let's say, native to Spinnaker yet, and are they coming? What are your thoughts here, feel free to just talk.

Speaker 2 :
I'm just going to start this by saying: you had Lambda on the roadmap two years ago.

Speaker 1:
Yes, it is still on the roadmap for the future! [crosstalk 00:23:32] We're waiting for someone in the community to build it for us!

Speaker 2 :
[crosstalk 00:23:31][inaudible 00:23:31]

Speaker 1:
That is a good point. At Netflix, we are using Lambda very limitedly; hence it's not a first-class citizen in Spinnaker. Until such a point as teams are running after us with pitchforks and torches, we're not going to build it. I thought by now the community would have added it, but no one has. I'm wondering out loud, and I want to get your thoughts here: is it just a non-starter for big enterprises? Is Kubernetes the future, and the serverless thing is just for the two-person startup?

Speaker 2 :
The only Lambda [implementations 00:24:14] that we have at Lookout are mainly related to compliance tools, reading off of [CloudTrail 00:24:26]. They're super useful, and I could see... there's a lot of talk and really interesting things happening in this space, but from the ones that I wrote trying to enforce tagging rules and stuff like that, I will say that managing these things, it sucks, right? It does. There's this one thing called [Taffa 00:24:48] that I found, that the [roboto 00:24:48] wrote, and I think that's what we use, that with Jenkins.

Still, I can't imagine how you would build any application at scale and manage it with the tools that exist today. If that tooling were built, I think that would open things up.

Speaker 1:
I do want to clarify the efforts. We are leveraging Lambda, kind of like you all, for small one-off management, auditing, checking; security is using it. Spinnaker does make some calls to Lambda functions internally, but at the end of the day, service teams are not building apps leveraging Lambda.

Speaker 2 :
At a certain scale, it's much more efficient to run an instance that's polling SQS for work to do, rather than having functions [inaudible 00:25:40] in the service. I think that, two years ago? Two and a half years ago: Docker, who needs Docker? Who doesn't have Docker installed on their laptop here? Who isn't using it in the last month or so? There's a certain amount... if Docker had been around, let's see, it had been around for four years before that. There's a certain critical mass that these things have to build up; developers have to learn how to write for it effectively, and then there have to be frameworks and tooling built for it.

Honestly, I'm kind of fine with Spinnaker not having [serverless support 00:26:18] right now, because it kind of does, in the fact that I can add a script stage that just executes serverless deploy. There's not a whole lot that I think you need to invest in right now, until it gets that critical mass, that developer-driven adoption that we've seen with Docker. That the marketing people are really hoping for.
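That script-stage workaround might look roughly like the following pipeline fragment, written as a Python dict for readability. The field names approximate a Spinnaker script stage but are illustrative, not the exact schema, and the repo URL and application name are hypothetical:

```python
import json

# Illustrative only: field names approximate Spinnaker's script stage,
# not its exact schema. The stage shells out to the Serverless CLI.
serverless_stage = {
    "type": "script",
    "name": "Deploy functions",
    "command": "serverless deploy --stage prod",
    "repoUrl": "https://example.com/team/functions.git",  # hypothetical
    "waitForCompletion": True,
}

pipeline = {
    "application": "myapp",          # hypothetical application name
    "name": "serverless-deploy",
    "stages": [serverless_stage],
}

# Render the fragment as the JSON a pipeline store would hold.
print(json.dumps(pipeline, indent=2))
```

The point of the workaround: Spinnaker orchestrates the when (triggers, ordering, approvals) while the Serverless CLI handles the what.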

Speaker 7:
The folks at [inaudible 00:26:49] have been building this generic stage, you called it a generic deploy stage, so it will be interesting to see how many folks in the community use that to deploy and roll back [inaudible 00:26:58]. There's a [inaudible 00:27:00] serverless [inaudible 00:27:01]. It'll be trivial to use that for most kinds of deployment; it'll be interesting to see if people do. [inaudible 00:27:08]

Speaker 6:
It's also [inaudible 00:27:15]... Spinnaker is a generation after Titus; it could totally replace [inaudible 00:27:22], or Lambda. It's not exactly clear how they pick the workloads. I think the dream is that with functions and Lambda you don't have to care about what resources you run on, or how much RAM or CPU to allocate; all that's figured out for you. But how that fits into a workload, how that fits into an ops flow, what it needs to deploy that, how I'd write my applications around that... So I think it's probably going to take integrations to really build that up, something that can support it.

Speaker 1:
So then are you saying it's something after containers, or it's just a separate [crosstalk 00:27:59]?

Speaker 6:
[inaudible 00:27:59] is after containers, or maybe container orchestrators could take advantage of systems built on top of them to run things, if they're running in Lambda. I would say [inaudible 00:28:10].

Speaker 5:
Most of the customers we see, they don't actually have a lot of Lambda. They talk a lot about Lambda, and they want to use a lot of Lambda, but they don't actually have a lot of Lambda yet. So Spinnaker doesn't really need to treat it as a first-class citizen yet, especially with the [inaudible 00:28:31] endpoint. You can pretty much get a deployment strategy in there; it's just not a first-class citizen. I think that will be good enough for a long time.

Also, it's not exactly clear [inaudible 00:28:46] what workloads you really want to be using Lambda for. There are some clear-cut cases, but it's not for everything, so I think everyone still needs to feel out a little bit where Lambda is going to land in all of this.

Speaker 4:
I've really got nothing. We don't really use Lambda. I've seen where the [inaudible 00:29:07], but then...

Speaker 3:
Likewise, I think there's a maturity of our culture around this: what do we actually do with this thing, and then how do we operate it, maintain it, manage it. I think there's also a problem of not knowing what this is for. Someone mentioned how it's [inaudible 00:29:30], and we look at [inaudible 00:29:33]. I remember that came up at Lookout, actually, when somebody pulled out how it's used in apps: this is going to make all of your local build problems go away, and definitely [inaudible 00:29:44]... Right now I'd be tying myself into Amazon APIs that are not yet mature and solidified, that are going to change, and that really seems like a lot of risk without any clear benefit so far.

Speaker 2 :
I want to point out that some of us are crotchety old ops people, and we have opinions about this sort of thing. We're not the only people who are using this. If you want Lambda support, you can totally hack it in. This stuff is very cleanly written code; it's super easy to extend. We've done some of that internally. The community, to Andy's point, is really open to suggestions of what direction Spinnaker needs to take, so we're all a little bit pooh-pooh on this, but if you've got a really great idea for this, come on. Just come talk about it. Let's crank it out.

Speaker 7:
I think one of the things that Spinnaker is really good at is that some of these things used to be pretty hard and manual, like making sure your new thing comes online healthy, serving requests. It makes those things easy. For something that has this scale-to-zero behavior, where it isn't really there until a request comes in, how do you know it's online, how do you know it's still there? A lot of that we don't know yet. We don't know if people are using those things yet; it's just not clear.

Speaker 1:
Okay. Today some of the Spinnaker community had a conversation with a company called Cerner. They're going to be submitting serious PRs to add Mesosphere DC/OS support, so another cloud provider, supported [inaudible 00:31:55] the end of the quarter.

More soap box questions, since we already said, hey, Lambdas are two years off, containers are here right now. So we've got Kubernetes, we've got ECS, we've got Mesos, we've got Swarm, we've got Docker, we've got Rocket. We've got Titus, which is a Netflix thing that I think is fairly public now; it hasn't been open sourced. [These are all 00:32:19] orchestration systems, and then you have different formats. Where are we going, what do you guys see happening in this space? Is Kubernetes going to rule them all? I'm sure the Googlers think so. Voice an opinion on something around that.

So yeah, where is this going? Anyone in here from Docker? Anyone here use Swarm? Thought so.

Speaker 7:
We often have to explain to management, which we have a lot of, why we're helping [direct competitors 00:32:45], and enjoy the party. From our point of view, at least the development team's point of view, we view Spinnaker as the future. The more platforms that are supported, the more potential customers for us, the easier it'll be for folks to convince their own management to adopt Spinnaker, because it reduces lock-in and those kinds of things. So from our point of view, the more the merrier, and it helps us prove out the interfaces as new providers come in.

A year ago it was difficult to add support for a platform; [inaudible 00:33:26] took what, four months for Kubernetes? But now Oracle and the Cerner folks have been able to add support for new platforms without even really talking to us. They show up and say, "hey, the whole thing works, how do we submit these 12,000 lines?"

Speaker 2 :
That's really what attracted me to Spinnaker: the robustness of its deployment model. At its core, when it was first released, the back end was AWS; that was all it supported. But it said, "all right, I'm going to bake an [AMI 00:34:00], I'm going to put it in an autoscaling group, I'm going to attach it [00:34:03], I'm going to health check it, it's going to do a rolling deploy... it's going to be a red-black deploy, and it's going to ensure that your system is never down while [inaudible 00:34:12] this thing. If there's a problem you can have it automatically roll back." It handles all of the little problems of continuous delivery really, really well. And then, it's slow.

A developer told me yesterday, "I don't see any reason why our deploy should be measured in minutes."

I said, "You mean like, twenty minutes?"

He said, "No, like minutes at all. Running a Docker container should take a few seconds; it should be like that."

We're looking at switching to Kubernetes because of that problem; switching to Kubernetes on top of Spinnaker, because we get the robustness of the deployment strategies under Spinnaker and the speed of deploying containers under Kubernetes. Most of my friends are in the [inaudible 00:35:03] space, they love this thing, and I'm really glad to see it fully supported in Spinnaker, and I think that's really the direction it's going to go.

Speaker 1:
I remember evaluating all of these orchestration engines a few times when they first all came out; [Mesos 00:35:21] was probably first. Then I looked at it all again when Kubernetes was about 1.0, and had to look at DC/OS, which I think was pre-1.0 at that time. And ECS. I remember ECS used this horrible... it was one of the worst things I've ever used. I don't know. And then I tried Kubernetes and it was really awesome. I tried Mesosphere next, and after trying Kubernetes before it, it didn't stand a chance.

For a long time, I thought Kubernetes was just going to win, it was going to beat everything, and then I looked at that graph a few months ago [00:36:21]: ECS's popularity was skyrocketing. I couldn't believe it, and I hadn't looked at it since it first came out. I looked at it; it was a lot better, but I think just the Amazon brand is so powerful. Now I'm wondering who's going to win it, Kubernetes or ECS. But functionally I like Kubernetes better, so thanks for making the integration.

Speaker 3:
I'm actually very much in the honeymoon period too, I think it's the coolest thing, but I think it's important to remember that we're not making, or trying to make, an all-or-nothing change. It's a big investment integrating Kubernetes, it requires a lot of work, but at the same time it's got support for secrets, it's got support for [ending 00:36:51]. We're not using any of that yet.

Will we down the road? Maybe. I think it's pretty cool stuff, but we don't have to make that switch all at once. There are a lot of mental models that we've built up support [inaudible 00:37:07] for years around, "oh, I have a setup that's separate from my deployment, it's separate from my orchestration," and we need to work with that, so we can say, "okay, what are your [inaudible 00:37:18]?" with the existing environment [inaudible 00:37:21], and maybe down the road we'll [inaudible 00:37:24]. But it's very much a process of saying, here's [inaudible 00:37:27], here's the piece that we [inaudible 00:37:30] commit to today. Next year we'll look at: do we really want to keep maintaining a system of our own that handles the environment we're in, or are we ready to look at doing that in Kubernetes? So it's not an all-or-nothing approach; it's a very gradual approach.

Speaker 2 :
One aspect that I really like about Spinnaker is that it uses consistent nouns and verbs. You've got clusters; everything is clusters. Every cloud is clusters. So, the email for this event asked, "how does Spinnaker allow multi-cloud setup?" In the same way that it allows multi-cluster: it allows you to transition between these systems and try them out, because you reuse the same deployment strategies, the same concepts in your head, of having the load balancer, having the security group, having the cluster, and having the deployment. Then if you want to change the back end, you don't have to change your deployment strategy very much. You just continue to think about the deployment, and the load balancer, and the cluster, and the security group.

So even if we get an all-new [inaudible 00:38:48], if Kubernetes dies next year, we can switch, which to me is the point: we can switch back to [inaudible 00:38:56], and all of the same workflow continues to work.
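The "consistent nouns and verbs" point can be sketched as a tiny provider-agnostic model: the deployment description keeps the same cluster, load balancer, security group, and strategy, and only the provider field changes when you move clouds. All names here are illustrative, not Spinnaker's internal types:

```python
from dataclasses import dataclass

# A sketch of Spinnaker's provider-agnostic vocabulary: the same
# description deploys to any backend; only the provider name changes.
@dataclass(frozen=True)
class Deployment:
    cluster: str
    load_balancer: str
    security_group: str
    strategy: str          # e.g. "red/black", "rolling"
    provider: str          # "aws", "kubernetes", "gce", ...

def retarget(d: Deployment, provider: str) -> Deployment:
    """Switching clouds changes only the provider; the deployment
    strategy and the mental model stay the same."""
    return Deployment(d.cluster, d.load_balancer, d.security_group,
                      d.strategy, provider)
```

For example, `retarget(deployment, "kubernetes")` leaves the strategy and all the nouns intact, which is the migration story the panel is describing.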

Speaker 5:
I love sitting on the sidelines and watching these giants duke it out and get really bloody, because in the end the consumer is going to win, right? Prices are going to drop; they're going to support all of these platforms. It will be so easy to switch, and you're just going to say, "Oh! I don't like AWS for X and Y and Z, so let's just go ahead and switch to Kubernetes or Mesosphere." You can do that pretty much at any time.

Speaker 1:
It's interesting. Actually, Matt mentioned earlier the way they use Spinnaker is active on GCP and AWS, and something that caught on a bunch early on when we open sourced Spinnaker was, hey, this thing supports multi-cloud. What I find interesting is you guys are talking about, say, moving from AWS to Kubernetes, right? You left out where Kubernetes is running in that case, which is really interesting.

The multi-cloud conversation is changing: is it, do I use infrastructure like GCP natively, or do I go on top of Mesos running on whatever, I don't care? Is that the way the conversation is moving? Or is it still a conversation about: I have assets in AWS, and I want to hedge that, so now I want to put some of my assets in GCP? Is that a pipe dream? Or is anyone here evaluating that strategy?

Speaker 3:
I have strong opinions about this.

Speaker 1:
Can't wait to hear them.

Speaker 3:
About two years ago, before I made the hop from [inaudible 00:40:41] over to doing more ops stuff, I was working on a single service that had been built up using DataMapper as our [ORM 00:40:42], and we decided for various reasons that we'd outgrown DataMapper; we needed to move some of this stuff to Active[Record 00:40:55], and I was the person that sat on that long-running branch for six fucking months. It was miserable. And I can't imagine what would happen if we also said, "oh, and we want to switch from [MySQL 00:41:09] to [Postgres 00:41:09]."

So the promise of ORMs... one of the promises of ORMs, and I think they're great tools in a lot of ways, is: oh yeah, if you want to switch from [MySQL 00:41:19] to SQLite to [inaudible 00:41:21] to whatever Amazon custom thing you want, you can do that. That has not turned out to be the truth. You can maybe build your side project in SQLite and deploy it across [inaudible 00:41:32]; that's still about it.

I think that's sort of what the multi-cloud thing is. If you're running your system on a big cloud provider, you can make that transition, and I think it's a lot cheaper using some of these middle layers like Kubernetes and Spinnaker. But you should not count on: oh yeah, I'm going to have some stuff running in AWS, and instances of the same service running elsewhere, just in case, you know, Amazon... God only knows what sort of natural disaster that would be. But I don't think multi-cloud is realistic for [inaudible 00:42:06], a small-to-mid-sized system.

Speaker 2 :
It's a huge operational burden, and we don't like to be woken up in the middle of the night. We don't like having to think about how you interface with multiple clouds. As operations people, we like our lives simple, as simple as we can make them. So while Spinnaker is actually great for migrating around to multiple different cloud providers, if you had asked me a month ago, I would've said absolutely no way would I ever run multiple clouds, because Amazon has never had a [inaudible 00:42:47] their services. And then S3.

So at a point you have to evaluate whether you want to take on that huge additional operations burden, because it's not just a matter of, well, it's just another cloud provider and they're all interchangeable. We've had to write some really explicit custom software to make them sort of act interchangeable, and it's still a huge operation. So I don't see small shops taking this on; I see bigger enterprises that absolutely have to contractually meet uptime requirements taking this on, pretty exclusively.

Speaker 5:
I think it really depends on the business and what their workloads are. So say you have a business that's mainly stateless -- what I mean by that is you've taken the state and pushed it to the edge of the infrastructure. Maybe it's not even in Amazon; maybe you use Redis Labs, or you're using another company to take care of your data for you. Then it's just the compute that moves around. So if you have that type of workload, I think you can move between clouds fairly easily. As soon as you have persistent disks per cloud, you have to have IDs for those baked into whatever manifest you're using -- so on Kubernetes with a Google persistent disk, you're going to have an ID, and that's something different than your EBS ID. So moving that around is not really that easy.

Also, it probably wouldn't be that easy to dump an entire database out of Amazon and into Google. I wouldn't want to do that. So it really depends. I think you can move compute around really easily though.

Speaker 1:
I'm looking at Lars and [Nodin 00:44:39] and... We are in a Google building with two Google individuals.

Speaker 7:
I didn't know this building existed until... I had to find it.

Yeah, it's an interesting problem on a bunch of levels. You have companies look at what it costs to lift and shift, and then that cost [inaudible 00:45:02] becomes part of their conversation with their provider, as the consumer of a public cloud. If you're talking about spreading compute across clouds, you're probably also talking about something like a Direct Connect or something between those data centers, so you can make them perform how you want.

There are also the things that I think are less obvious. Like when you try to recruit people, do they have the skills you want? If you operate at a higher level, where things look roughly the same, you've probably got a bigger pool of folks to pull from that use Spinnaker -- regardless of what you're using it against, those are still common skills. I should [inaudible 00:45:42], everybody has outages. It's probably best to spread out what you can, but it certainly requires a big investment. What we've seen is, even if you're using something like Spinnaker and you're using something like Kubernetes, the folks who succeed have expertise in the underlying platforms. It's not like you don't have to know [inaudible 00:46:03], that's just how it is.

Speaker 1:
So, interestingly, we've touched on some of the aspects of missing features in Spinnaker. We have a pretty big community here with different interests: you've got Google, and Netflix, and you've got Lookout, and [Armory 00:46:22]. There's a mix of companies here that have different incentives and motivations for using Spinnaker, or moving to Spinnaker for a particular cloud, or Kubernetes, or whatever they're moving to -- ECS.

There are many people in the room, some people evaluating Spinnaker, some people using it. So I'm curious from this group here. We think the community is awesome, and I think it's true, it is awesome; we're getting a lot of innovation out of it, a lot of cross-sharing. I can say specifically for Netflix, working with Google and Azure we've learned a ton about their clouds, and it's helped us mature our model and how we look at it in AWS. So I want to ask the panel, and then I'll open it up for questions from you all: what can we do as a community to make the community better, stronger, faster? Sorry, I'm messing around with you. So what can we do to make the community better? What's missing from the community? Please. Elaborate.

Speaker 5:
The question that we hear a lot is from people who want to contribute to the Spinnaker code base, or other parts of the community, and they don't really know how. I think what they're looking for is guidelines on how to participate in the community. That being said, it's pretty much: go to the Slack channel and start talking to everyone, and everyone's really friendly. I don't think it's that big of a deal, but it probably would make things a lot easier for people that are on the outside, trying to get on the inside.

Speaker 2:
I would love to see you toot your own horn a little more, in the form of really continuing to do release notes and changelogs, because there are nine, twelve, thirteen microservices that make up Spinnaker, and it's a little difficult to know what's changing underneath you all of the time during active development. Sometimes you want to update, and you just don't know what you're going to bring in.

Speaker 1:
Matt and Lars, did you have anything to add about the release notes?

Speaker 6:
We're formalizing Spinnaker's release process. We don't have a hard set date, but it's coming. You'd be at basically a [inaudible 00:48:43] version of Spinnaker, a top-level black box; you don't care about what your individual microservice versions are. You say, I want to deploy 1.0, here's a changelog of everything that has changed since 0.8 or whatever. All of your services are spun together, and when you want to do an update, we make sure that everything is running, everything is validated in that version, and if we have to apply a patch, we apply it to the top [inaudible 00:49:02]. Basically it changes how you think about upgrades: you don't have to worry about what changed in the UI versus [inaudible 00:49:10], but what changed in the [inaudible 00:49:11]

Speaker 1:
The good folks at Armory also have a similar process?

Speaker 5:
Yeah, we have a release process for our distribution, and our releases run from that also. It's very similar to what [inaudible 00:49:26].

Speaker 1:
So it sounds like we're doing an excellent job as a community. We've fixed the [inaudible 00:49:32], no problem; guidelines, we're all over that tonight. We're done?

Speaker 3:
I'd like to see some curation around third-party work -- not stuff that necessarily makes it into Spinnaker core, but it seems like there are a whole bunch of companies saying, "we've got something that's not quite in Spinnaker; it uses the [inaudible 00:49:57] that we'll build around, and we'll put it up on GitHub." It's still at that early stage where it's not clear what the tools are that everybody likes to use, which ones other developers are using, what's being brought into core or not. I think that ecosystem right now is in its very first steps, and I'd love to see it develop more.

Speaker 1:
Interesting. Very interesting. I hadn't heard of that before. That's good.

Speaker 5:
So I do have an Awesome Spinnaker repository on GitHub. It doesn't get a whole lot of contributions and it doesn't get updated very often, but I have started to scour GitHub and the rest of the internet to find third-party projects.

Speaker 1:
Is it actually called "awesome"?

Speaker 5:
Yeah! Awesome Spinnaker.

Speaker 1:
So you're not just using that adjective, it's actually called Awesome Spinnaker?

Speaker 5:
Yeah. Oh yeah. It's very awesome.

Speaker 1:
Is it awesome?

Speaker 5:
Kind of.

Speaker 1:
So, opening up questions for the audience. Actually, the audience questions for the panel.

Audience 1:
Names. All lowercase. Any forward progress on making it possible to name things other than really short, lowercase [inaudible 00:51:36] things -- no dashes, no underscores, no dots?

Speaker 1:
You're looking at me like it's my fault!

Audience 1:
You're the other guy with the mic.

Speaker 1:
Just the naming patterns, like for naming things?

Audience 1:
Yeah, just naming patterns, just when you name anything.

Speaker 7:
No, is the answer. There's been no progress to address that.

Speaker 1:
That would break the web!

Speaker 7:
There are a bunch of issues. One is, we haven't done anything about it. That's not a great answer. But the second really is that there's a naming library, Frigga, that we use... it's not that it's all over the place, but it's in enough places that it would be a real project to do it, including in the UI. Some of the underlying platforms have naming restrictions, so we have to put a fair amount of effort into being as [inaudible 00:52:21] as we can. But, no, is the answer. We haven't done anything. And we're not working on it.

Speaker 1:
So for what it's worth, Frigga is an open-source library that Netflix built many years ago; it predates me, and actually the rest of my team at Netflix. As Matt alluded to, a lot of Spinnaker relies on Frigga.

To put it into perspective, there's a lot of platform tooling built around Frigga, and we recently introduced a single-character change to our naming pattern that took a whole lot of conversations and hand-holding and heavy petting to make sure no one was going to get upset. What did we add? A caret? Yeah, a caret. And that's because we didn't want to break, in this case, Netflix's infrastructure, which would then break the web!
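
For readers unfamiliar with the convention being discussed: a hypothetical sketch of Frigga-style server-group name parsing, assuming the commonly described `<app>-<stack>-<detail>-v<sequence>` pattern (this is not Frigga's actual code). The dash is the delimiter, which is exactly why dashes, underscores, and dots can't appear inside the parts themselves:

```python
import re

# Hypothetical parser for a Frigga-style server group name.
# Assumed pattern: <app>-<stack>-<detail>-v<sequence>, where stack and
# detail are optional and parts may not contain the dash delimiter.
NAME_RE = re.compile(
    r"^(?P<app>[a-zA-Z0-9]+)"
    r"(?:-(?P<stack>[a-zA-Z0-9]+))?"
    r"(?:-(?P<detail>[a-zA-Z0-9]+))?"
    r"-v(?P<sequence>\d{3})$"
)

def parse_server_group(name):
    """Split a server group name into its components, or fail loudly."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a Frigga-style server group name: {name}")
    return m.groupdict()
```

Under this sketch, "api-prod-canary-v004" parses into app/stack/detail/sequence, while anything containing an extra dash or an underscore simply fails to parse -- which is roughly the restriction the audience member is asking about.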

Speaker 7:
It comes up maybe once a month; somebody asks in the chatroom, and pretty much each time somebody sketches out, here are roughly the spots you would have to touch. I think the changes would probably not be to Frigga, but to abstract what we're doing with Frigga and the similar work we're doing in the UI. It also feels like a pretty good new-person project to tackle, because it's not that deep, it's just in a bunch of spots. And that's the kind of thing people ask about: what does it take for me to get involved and start making contributions? The code base is broader than it is deep -- it's a bunch of services, none of them particularly deep, but it's a lot to wrap your arms around. Something like the naming thing spans pretty much all of them. It would be an interesting one to tackle to come up to speed on the code base, and the community would definitely welcome that.

Speaker 5:
It's important to note that the naming conventions are how Spinnaker groups all of its resources, so if it [inaudible 00:54:12] derived names for metadata, this would be a lot more approachable: you'd have a different way to name your services, and you wouldn't be getting into all of these platform-specific requirements. So we can think about it, if anyone wants to try to do this.

Speaker 7:
I noticed the Oracle guys who were adding support for their bare metal cloud service. Spinnaker has this notion of a server group, which is different things on different platforms: an autoscaling group [inaudible 00:54:40], a replica set for more and more of you folks, a [inaudible 00:54:45] group for hopefully some of you folks. But in their case, they don't really have such a thing as a server group yet, so they had to build this façade over the top where they say, this is a server group, and they manually curate which instances are in the group, and they do this by persisting data to whatever their block store is. So they could clearly leverage that mechanism to apply a different naming convention if they wanted, because they do have that in their storage, on [inaudible 00:55:13]. But nobody's done anything like that for the purposes of... [inaudible 00:55:18]

Speaker 1:
It's also key to point out why naming conventions are so important within Spinnaker and other tooling: the cloud, the endpoint you deploy to, is the source of truth, not some secondary system where we're storing, here's this app, here are all of its assets -- that way we don't have to worry about keeping that in sync. The system relies on naming conventions, so changing them internally at Netflix is somewhat difficult, I suspect; as an open-source effort it would be very interesting.

Audience 2:
I've got two questions. One for Matt: I was wondering if and how you've integrated with your internal infrastructure? Like Borg, or whatever you're using right now.

The second question for everyone is: who is building Spinnaker from scratch and then deploying it as microservices with Spinnaker, I guess? And who is having one or two boxes that run everything -- all of the services on one or two boxes?

Speaker 7:
I can say a little bit about the first piece there. It's a fairly regular conversation about using Spinnaker to manage or deploy services. We haven't done anything about it yet, but it's an ongoing conversation and has been for a long time. There are internal teams that use Spinnaker; they run it themselves, they deploy it themselves. The last couple that we spoke with are running it as [inaudible 00:56:36] services on Kubernetes, and they're deploying their services through cloud, though, not [inaudible 00:56:42].

There's one other team that runs Spinnaker split across -- there's some internal cloud stuff as well that looks like an older version of public cloud [inaudible 00:56:53] -- some corp VMs there, and then some cloud GCE VMs, and they had to split them between the two because of restrictions on where they can have internal source code.

But nothing to report, again -- an ongoing conversation; looks good to us. I mean, we'd love to. We don't get to entirely decide ourselves. It'd be cool.

Speaker 4:
We use one instance for Spinnaker, and everything is smashed into one EC2 instance. The only downside of that is we have one big point of failure. But we do try to back it up. In [inaudible 00:57:38] we put it behind a DNS name, so we can operate it and make sure it works appropriately before we switch it from the old box to the new one. We're looking to maybe make it more robust by splitting it up, but have been [inaudible 00:57:54] -- it performs just as well and does everything that we need.

Speaker 5:
So we have a few different configurations. The most common is one box having all of the sub-services, but behind an ASG. You have to get a little creative with how you do health checking, and also how the services communicate with each other. You don't want them communicating with the other services on the same box; you want to make sure they go through an ELB. So there are a few different approaches depending on the size of your load: you could have an ELB per service, or you could have just one internal ELB.

This model will work for quite a while, but what ends up happening is that as you get more and more load, the services don't take the load evenly. Some of them do a lot more work than others, so at that point you'll be forced into breaking them up.
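
The health-checking wrinkle described above -- the ASG sees one instance, but that instance is only healthy if every co-located service is -- can be sketched as follows. The service names are assumptions for illustration, not Spinnaker's actual health-check logic:

```python
# Hypothetical aggregate health check for Spinnaker services sharing a box.
# An ASG/ELB health check hits one endpoint, so that endpoint has to fold
# every co-located service's status into a single up/down answer.
def aggregate_health(statuses):
    """statuses: mapping of service name -> bool (is it up?).
    Returns (healthy, sorted list of services that are down)."""
    down = sorted(name for name, ok in statuses.items() if not ok)
    return (len(down) == 0, down)
```

With one internal ELB this single answer decides whether the whole box gets traffic, which is part of why the uneven per-service load eventually forces you to split them out.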

Speaker 2:
Just a tip for that: only run one [Igor 00:58:57] at a time and one Clouddriver, because of API limits -- you don't want extra instances hammering the API. We run just one instance, and it's scaled up pretty well. We deploy a couple dozen times a day, and there's no need to scale it up yet.

Speaker 5:
And only one Echo set to do scheduling, also.

Speaker 1:
I just want to throw out there, somewhat related to an earlier comment about API limits -- Matt was saying it's a good project for a new person to embark on if they want to get involved in the community. API limits are a constant source of pain for the community; they aren't really a pain at Netflix because we have this thing called Edda.

Edda is open source, but the Edda that's open-sourced is different than the Edda that Netflix uses. So I will get on my soapbox real quick to say it would be awesome for someone in the open-source community that has AWS rate-limiting problems to basically update open-source Edda and make it awesome, for what it's worth, because we can run more than one single Clouddriver -- I think the last time I looked we had fourteen. More now. We have many different drivers, and we rarely hit API limits, although we do have finely tuned limits with our good friends at AWS.

But it is a solvable problem. I just want to repeat that for the community. It's very solvable.

Speaker 7:
In addition to Edda, which I think mostly solves the polling calls, the caching calls, we have issues with mutating calls as well. Cameron and those folks did some initial work on flow control -- sort of client-side rate limiting within Clouddriver. It's not applied everywhere; it's not applied anywhere but AWS yet. That would be a very good thing for somebody to tackle.

We have slightly different issues. The GCE compute API doesn't have, like, a daily rate limit, so that's not an issue. You could exceed, like, a QPS-per-user limit if you were to accidentally run a bunch of extra Clouddriver nodes you didn't need, but that's still unlikely. But we do occasionally get a phone call like, "what are you guys doing?" Because of the user agent strings, they know that somebody's running Spinnaker somewhere -- it's not us. And in each case, you find out somebody spun up a bunch of extra nodes and forgot to turn them down; they see those numbers, and then they call us. We've done it ourselves as well.

But some sort of client-side flow control coordinated via Redis would be awesome, and an interesting project, and doable.
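
The client-side flow control being described could look something like a shared token bucket. Here is a minimal in-process sketch; in the multi-node setup Matt describes, the bucket state would live in Redis so every Clouddriver node draws from one budget. This is an illustration, not Spinnaker's actual implementation:

```python
import time

class TokenBucket:
    """Minimal client-side rate limiter sketch. Each cloud API call
    acquires a token first; tokens refill continuously at `rate` per
    second, up to a burst size of `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self, n=1):
        # Refill based on elapsed time, then spend if we have enough.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

A mutating call that fails to acquire would be queued or retried rather than sent, keeping the client under the provider's limit instead of discovering it via throttling errors.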

Audience 3:
Hey guys, thanks for doing this. This isn't a question; this is more of a comment on an earlier point about multi-cloud. I disagree with some of the panelists: I think multi-cloud is the future, and it's driven by market forces more than anything. It's forcing Google to actually make their API look like Amazon's API, and there's a reason for that, right?

The other point I wanted to make is that as a technology leader, the thing that you really care about, a lot of times when you're [inaudible 01:02:16] your e-staff, is improving margins, costs, right? And if you can switch from one cloud provider to another, and that's going to improve your margins by five basis points or whatever it is, that's huge. That's millions and millions of dollars.

Then the second case for multi-cloud is actually [de-risking 01:02:36], right? You don't want to be the VP of engineering or the CIO or the CTO that's talking to a customer and saying, "yeah, we lost all of your data because we were just on S3, sorry." Your customer is not going to care. So the reason why we're looking at Spinnaker, why we care about things like Kubernetes, is that none of this was possible before, but things like Spinnaker and Kubernetes finally give you the ability to switch between different cloud providers. We're just relying on the market forces to help us get to that point where, just like you can switch between Uber and Lyft, you should be able to switch between Amazon and Google, or [inaudible 01:03:14], or whoever it is.

We're a ways away from that, but I really see some strong trends in that direction. I'm just going to bring the mic over.

Speaker 1:
As you're bringing that over -- all right, I have a follow-up question for you. Do you see that switch as a large switch? In terms of, let's say I'm bringing all my stuff to AWS and then I want to switch it over to GCP, or I want to run them simultaneously -- do I want to make that decision on a quarterly basis, or do I want to make it on a per-deployment basis? The granularity of that switch.

Audience 3:
That's a really good question. So the reason I'm talking about it is, I was at Box.com before this, and this was exactly what we were trying to do -- run in multiple cloud providers. To answer that question: it depends. I think in the short term it's going to be, let's try to figure out if we can move large chunks of our infrastructure all at once, because Google and Amazon are trying to get as much of you as they can inside. So there's going to be a little bit of that, to get the big discounts and to do all of the minimum spends and the commits and all that kind of stuff.

But in the future, what I see is it at a per-deployment basis, depending on what's happening in the environment or the infrastructure. Or if Google suddenly spins up a new data center in a region where Amazon's not there, and we want to go after our customers over there [inaudible 01:04:29], it's just at a deployment level, and then it just becomes a much, much quicker [inaudible 01:04:34].

Speaker 2:
Just one thing that I'd mention, as I'm bringing the mic over to you, ma'am: at Armory, we believe clouds will become more specialized, not commodities. So as clouds become specialized, we'll want to deploy workloads to whichever [inaudible 01:04:51] is the cheapest or the best available, or whatever it is. That is a really great story for Spinnaker over time.

Audience 4:
Well, you already answered my question, actually. I guess I'll just bring up a point that we have just started tackling at our company. Because of security reasons, we have to actually be using Red Hat for everything. I was wondering if anyone in the community has started tackling bringing Spinnaker up on Red Hat versus [inaudible 01:05:26], and if not, well, you might get some [inaudible 01:05:29]. Very [inaudible 01:05:30].

Speaker 1:
Are you allowed to tell the audience where you guys are from?

Audience 4:
We're not allowed to tell the audience.

We're from Intuit.

Speaker 7:
That's cool. I'm kinda bad about taxes.

Audience 4:
We're in San Diego; we're in the small business group, products like QuickBooks. You could pull our, uh.

Speaker 7:
[inaudible 01:05:56] So that really happens?

Audience 4:
Yeah, that really happens.

Speaker 1:
Off the top of my head, I remember Stitch Fix is using Spinnaker, and there's a presentation on... I feel like they were the ones who posted it. The good folks at Stitch Fix made Spinnaker work with Red Hat [inaudible 01:06:14].

Diana. She posted slides, and they've got a lot of lessons learned, so that could be a good place to start. I don't know if anyone else on the panel wants to add anything.

Speaker 2:
The Docker Compose file has worked really well in production for quite a while.

Speaker 7:
Also, the Halyard project gives you a nice place to focus those kinds of efforts. There are interfaces and processes, and if you can satisfy them, it'll work.

The Halyard thing is kind of neat -- I don't think Lars was self-serving enough there. The same way Spinnaker came from the Netflix team internally witnessing how teams work, and then making [inaudible 01:06:55] operations for those things and automating them.

We've been working with users in the chatroom for a year and a half, and we see what the issues are -- especially on the side of standing the thing up and configuring it and making changes and figuring out what went wrong. Halyard is really the result of not wanting to do that anymore. Let's not do that. "Try changing this file, then restart it and see what happens" -- we don't want to do that. So Halyard front-loads all those kinds of validations and notifications. You put in credentials and endpoints, and it makes sure they work. Then it'll push those changes, and do it with, like, a headless Spinnaker that red-blacks the new component with the new config, at, like, an umbrella version point across all the microservices. So all of those things that are still painful to deal with, it addresses. So support for a new deployment target there would be awesome, and that's probably -- I don't know if you wanted to add something -- but that's probably the place to focus on.

Speaker 6:
Yeah, right now the [inaudible 01:07:53] is going to be big for the multi-VM Spinnaker deploy. [inaudible 01:07:58]. We've got Jacob -- I would talk to him, and we can start producing RPMs?

Audience 4:
Yeah, RPMs.

Speaker 6:
Okay. We have RPMs. I'll bake those into images for you guys.

Speaker 5:
We make RPMs and deploy to Red Hat. It's pretty easy. Nebula is a great tool: all you've got to do is change it from buildDeb to buildRpm, and it'll pop out an RPM. The only differences are -- keep in mind if you're coming from Upstart, they tend to [inaudible 01:08:30] stuff. And then, also, as of recently, Rosco does a really good job of baking RPMs. I think that was, what, two months ago -- the XI guys added some stuff to [inaudible 01:08:43] them, and it works very well.

Speaker 7:
Your services are all RPM based as well?

Audience 4:
Yeah, everything.

Speaker 7:
Yeah, so the bakery -- if you use the bakery, it should support that; there are folks using it.

Audience 5:
I heard some mention of ECS support. Can anyone elaborate on whether ECS support is coming in Clouddriver? Elastic [inaudible 01:09:07] Service for AWS?

Speaker 7:
We're probably not going to do it.

Speaker 1:
So, to kind of answer like Matt: our container strategy is on a platform called Titus, so that solves our problem. Netflix, at least in the near future, won't be adding ECS support. But as the good folks at Lookout mentioned, they're looking at ECS, so.

General:
Help them!

Speaker 1:
Help them! Help them? Help us!

Audience 5:
So, I'll just work on [inaudible 01:09:48]

Speaker 1:
Yes! Help the good folks at Lookout. They're right here.

Speaker 2:
It'll be about a quarter of full-time work for somebody up to speed to add a new provider. And then there'll be some support stuff to deal with, like you'll uncover problems. But it's about a quarter of full-time work for a person who knows what they're doing, who's familiar with the environment, who has some relationship with the people to ask questions.

Speaker 1:
I think what's really interesting, just to review what Matt said: yes, it's about a quarter of work. As a community -- we're talking today because we had a conversation with the folks at Cerner. I don't know how many folks are working on it, but I got the opinion of this one guy, and he added DC/OS, and did it with practically no hand-holding -- he just came out of nowhere and was like, here's DC/OS, fully supported in Spinnaker. I think that's a great success story, and the same with the Oracle guys: they added BMC, their bare metal cloud. ECS support isn't terribly difficult; in fact it's pretty easy now because of the good work that Lars did initially to make Kubernetes work. You're just kind of following what he did, and you're good to go.

Speaker 2:
At Lookout we haven't really reached this point; we've probably [inaudible 01:11:03] just thought about it a little bit theoretically, but I think one of the things we'll be looking at is whether it's easier to manage Kubernetes ourselves, or use ECS as a provider.

Speaker 7:
We're supposed to [inaudible 01:11:13] Kubernetes thing.

Speaker 1:
Awkward silence.

We did actually hear through the grapevine, too, that there was a person on the Slack channel who was threatening to add Swarm. And I asked today, has anyone seen that PR land? It hasn't landed yet. So there could be another example to follow with Swarm. And I don't mean to make fun of Swarm; I think it's awesome. I was hoping there were some Docker folks here, since they're, like, around the corner.

Speaker 7:
The cool thing is, now, just about any kind of deployment you want to do, you can find some example of in the code base. Some of the platform integrations run local processes through CLI tools, some generate templates and send those off to some service to be processed, and some make straight API calls -- all kinds of things. There are examples of everything in the code base.

Speaker 10:
Any other questions? Yup.

Audience 6:
A couple of people touched on this earlier. It has to do with sending application configuration or runtime configuration at deployment. If you're baking immutable images, you don't necessarily want to bake half a dozen for every single environment. So, how would you deal with that in terms of, say, baking one deployable artifact that can go to all your pre-production environments and production, while keeping that immutability as part of this?

Speaker 2:
I think there are a lot of ways to solve this problem. The way we decided to solve it at Lookout is that every application is shipped with the configuration for every environment. We wanted to keep it really simple: we just inject one environment variable in user data, which is basically what stage you're in. So if you're in stage five, that's the only environment variable that we load up in user data, and the applications read that and load the appropriate configuration and run with it.
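
The approach described above -- ship every environment's configuration with the artifact and select one with a single environment variable -- can be sketched like this. The variable name STAGE and the config keys are assumptions for illustration, not Lookout's actual scheme:

```python
import os

# Hypothetical per-environment configuration, shipped inside every build.
# Only the STAGE environment variable (injected via instance user data)
# decides which block the application loads at runtime.
CONFIGS = {
    "stage5": {"db_host": "db.stage5.internal", "debug": True},
    "production": {"db_host": "db.prod.internal", "debug": False},
}

def load_config(env=os.environ):
    """Pick the config block named by STAGE, defaulting to production."""
    stage = env.get("STAGE", "production")
    return CONFIGS[stage]
```

The same baked image can then go to every environment; promotion changes only the user data, never the artifact, which is what preserves immutability.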

Speaker 5:
We took a slightly different approach, where we focused a lot on having self-configuring systems: when the machine boots up, an Upstart script runs that checks its environment based on tags attached to the instance, and then reaches out for further bootstrapping based on that. This allows us to reload or update configuration on clusters without rebaking them, just by redeploying them -- basically doing a rolling restart of the service. Each machine is reconfigured when it comes online.

Audience 7:
How do you make sure these come up with the same config? Is the config versioned along with your binaries, or what do you do to make sure they all come up with the same [inaudible 01:14:34]?

General:
[inaudible 01:14:39]

Speaker 5:
Yeah, pretty much. That's a good answer.

Speaker 7:
What's the answer? Cross your fingers and hope?

Speaker 5:
We've not gotten into a situation where the configuration required that, and we've actually given our developers guidelines that say, "your configuration should be forward-compatible with the one released," and this works pretty well. That's actually a practice that we got from Lookout. As long as you're adding configuration as new keys, for example, or deprecating before you remove, it works pretty well. But that is a concern.

Audience 7:
Are any of you trying to make your configurations forward and backwards compatible?

Speaker 5:
Yeah. Mostly, the only time this would be an issue is when you're doing a major database rotation kind of thing. In that case, we use service discovery based on the VMs.

Speaker 10:
Any other questions? I know that Matt and Lars, I think you flew in today and you're flying out on a red-eye just to be here, so thank you. If you guys have questions, this is our shot to get these guys.

Audience 8:
I'd actually like to ask a question: is anybody here using Spring Cloud Config Server for Spinnaker? Or is everybody just using static hal files? Is there no better way?

Speaker 1:
Halyard [inaudible 01:16:04]

Speaker 7:
If you use hal, there's a validator that has a chance to say, "I'm afraid I can't do that" -- that's actually in there.

Speaker 6:
It reads your username out of the system.

Speaker 7:
It really does make it so you don't have to touch the [inaudible 01:16:17] -- you can if you want to, but you don't have to -- and then Halyard has some things that make it nice to use: config completion all the way down to the flags and attributes. It's pretty slick. So it really should work; it's just an issue of monkeying with the [inaudible 01:16:31].

Speaker 6:
But you can still do it.

Speaker 1:
All right. Well, I'll turn it over to Alex. Actually, before I turn it over: thank you very much for coming [inaudible 01:16:51], thank you very much to the panel, thank you everyone on the panel -- a round of applause.

I do know that all these people are in the Slack channel. Everyone's in Slack -- if you aren't on Slack, just go to Spinnaker.io or join.Spinnaker.io, and you can join the Slack channel. It's completely open, and people like this, and many, many more people in the audience, are there, and they're going to help you. I hope to see you there soon. So thank you, Alex.

Alex:
Thank you guys so much. So, we have a raffle -- everybody got their raffle tickets? Yes? All right. So. We have some Amazon gift cards, courtesy of Armory: some ten-dollar gift cards, twenty-dollar gift cards, and then one hundred-dollar gift card.

So, the first winner is 016034. You win! There you go, thank you.

Second number is, 016013. Who's the lucky winner? Oh, it's [inaudible 01:18:18], who is in my group. I mean, I mean.

Speaker 1:
Oh! Am I allowed to participate?

Alex:
Yes.

Okay, now we're getting to big numbers. Twenty dollar gift card. 016020. [inaudible 01:19:16]

Speaker 7:
It's orchestrated.

Alex:
Half of this audience is livid.

General:
You didn't even check though.

Alex:
016021.

Okay, and now, the grand prize. Hundred dollar gift card, courtesy of Armory. Thank you Armory. 016027. Congratulations.

Thank you guys so much for coming; we really appreciate it. Hope you learned something today and got some information about Spinnaker. We hope to do this again, maybe monthly.

Okay, we'll come to New York. Thanks to Google Launchpad for hosting this, and Armory for providing the video, as well as the gift cards for all of y'all. Thank you so much. And we've got a bunch of Spinnaker stickers up here if people want stickers -- there are like a hundred and fifty here, so take as many as you'd like.
