The Bay Area Spinnaker Meetup held a Spinnaker 1.0 Launch party on June 20th. It included a presentation by Chris Sanson, Spinnaker PM at Google, detailing the achievements of Spinnaker 1.0, including:
- More detail on Spinnaker's multi-cloud support, including Azure, OpenStack, AppEngine, DC/OS, and Oracle Bare Metal Cloud
- Semantic versioning, where each bundle of microservices is a unique snapshot
- Halyard, a CLI tool that sets up, manages, and configures the Spinnaker instance on Kubernetes and GCP. (Halyard does not yet support AWS; for AWS, Armory provides an installer.)
- Role-based access control: FIAT, the Spinnaker microservice that controls access, provides authentication and authorization, including features like setting permissions for approvals on manual judgment stages.
- New website and documentation.
Armory wrote about what the Spinnaker 1.0 launch means for Armory customers here.
Chris also covered:
- How Spinnaker enables multi-cloud strategies
- Details on CI/CD cloud initiatives and the expanding use of containers
- Thinking of Spinnaker as a platform:
Andy Glover of Netflix shared what's coming next for Spinnaker past 1.0, including:
- Automated Canarying: Netflix is open-sourcing its Automated Canary Analysis platform for use by Spinnaker
- Declarative Delivery: Infrastructure as code via managed pipeline templates. Giving an app a file that provides, traits (like what tier, what SLA, who owns it, KPIs), expectations and policies (how to deliver the app to production), which Spinnaker can execute on.
- Work going into Spinnaker for container support, including Kubernetes, Titus, ECS, DC/OS, and Mesos
- Making Spinnaker into an extensible platform that the community can plug into:
The Meetup also included lightning talks from the Spinnaker community on topics including:
- Best practices with configuration management tools like Chef, Puppet and Ansible when using Spinnaker. By Isaac Mosquera, CTO of Armory
- How Lookout is measuring SLAs for services with SLAyer, an internal Lookout tool. By Rohit Rohitasva, Lead Engineer, Delivery Engineering at Lookout
- How Halyard makes Spinnaker easier to deploy & update. By Lars Wander, Software Engineer at Google
- How to run Spinnaker as an enterprise in production on AWS. By Andrew Backes, Principal Engineer at Armory
Here's a transcript:
Speaker 1 [inaudible] which is around Spinnaker 1.0 [inaudible] to 2.0. [inaudible – 00:07] from Netflix and Christopher from Google [inaudible – 00:10]. So let’s give them a warm welcome.
Christopher I’m Christopher. I’m a product manager for Google. Spinnaker was open-sourced in November 2015, and it was a really interesting product when it was open-sourced. I call it the Benjamin Button problem. It was fully mature but it skipped adolescence. It was used in production by Netflix for one of the biggest properties on the Web but skipped some things that other people may care about, like authorization, onboarding, setup, sort of basic things. So I think a lot of people, the customers that we talked to, potential users, looked at it and said, “This is really cool, but it’s missing some basic functionality. [inaudible – 00:56] doesn’t really necessarily look like Netflix [inaudible – 00:58]. So we’re not sure how this is going to work for us. And we did [inaudible] concept, but it’s not quite there yet.” So our goal with this was to really make it enterprise-ready for [inaudible – 01:09].
Anyone recognize the image up there? That’s from Arrival, one of my favorite movies of the last couple of years. [inaudible – 01:16] Spinnaker kind of reminded me of that. Something cool was happening. There [inaudible – 01:21] pattern, but it’s hard to figure out exactly what it is, how do I use it. So we really wanted to make it more approachable.
First thing you do if you haven’t [inaudible – 01:30]: there’s a brand-new Spinnaker.io site. So we basically redid the [inaudible – 01:34] from scratch as part of this launch. So a lot of the screenshots [inaudible – 01:38] from there. It was a whole new design, courtesy of a designer from Netflix, Jeremy. And everything there is really oriented to people who are setting up [inaudible – 01:49].
If you’re going to be a multi-cloud platform, you should probably support multiple clouds. When we launched initially, it had support for Amazon EC2 and Google Compute Engine (GCE). So then over the last year and a half or so, [inaudible – 02:07] Microsoft [inaudible] Azure. Kubernetes was a big one [inaudible – 02:10] run a lot of different platforms [inaudible] OpenStack and App Engine, which is another interesting one that’s more platform as a service [inaudible – 02:20] interesting [inaudible].
Looking ahead a little bit, [inaudible – 02:27] is currently in flight by [inaudible] as well. So we’re really building out a pretty good representation [inaudible] major [inaudible] providers.
This is a big one [inaudible – 02:41]. Prior to version 1.0, [inaudible] service had to be individually managed and set up. So if you’ve gone through this already, you know how painful this was. Basically, every piece was independent. So we had [inaudible – 02:57]. But once we got it set up, [inaudible – 03:00] independently. And [inaudible – 03:04] microservices and for managing [inaudible] things like that. So with 1.0 now, each bundle of microservices is a unique snapshot and version. So from 1.0.0 to 1.0.1, we basically did a few [inaudible – 03:21] Spinnaker version. So now going forward, you don’t really have to concern yourself too much with the different microservices, what version they’re at. It’s really just [inaudible – 03:31] Spinnaker itself. So I think that makes it a lot easier to [inaudible – 03:35].
[inaudible], the other new tool is [inaudible], a CLI tool which basically sets up [inaudible] Spinnaker [inaudible – 03:47]. So it runs alongside, or sort of external, and is used basically to do [inaudible – 03:54]. So say you have a Spinnaker instance running. These [inaudible – 03:57] commands. [inaudible] available versions, configure your version. It’s a declarative model. So it’s config-based. So basically, you update your config file [inaudible – 04:06]. So I don’t know if anyone has tried some Spinnaker [inaudible – 04:13] beforehand. I don’t know how long it took you, whether it was hours or maybe days. But [inaudible – 04:19] how to do it in a couple of minutes. And then even as you go forward and add providers for multiple clouds and things like that, again, [inaudible – 04:27]. So [inaudible] App Engine is a relatively simple example. [inaudible – 04:33] App Engine. Save your environment variables, and off you go. The last [inaudible – 04:41] is role-based access control. Netflix is a pretty open organization, so [inaudible – 04:47] where before, [inaudible] Spinnaker, you could basically see everything that’s happening on that Spinnaker instance, all the applications, which for a lot of organizations doesn’t work. Some have regulations. Some [inaudible – 04:57] corporate [inaudible] that way. [inaudible] microservice [inaudible]. We added authentication and authorization. So [inaudible] recommended for authentication. We can [inaudible – 05:09] set it up [inaudible]. And then authorization, [inaudible] etc. And so that lets you gain access [inaudible – 05:19] those groups [inaudible]. And [inaudible] things like you can set permissions [inaudible] who can give you approvals [inaudible] so that only certain people can [inaudible – 05:32]. So [inaudible]. We’ve got [inaudible] role-based access control, more cloud providers, and [inaudible] CLI [inaudible].
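For example, once Halyard is installed, the declarative flow Christopher describes looks roughly like this. Treat this as a sketch: the exact provider flags depend on your environment, and the App Engine provider shown here is just one simple case.

```shell
hal version list                        # see the available Spinnaker versions
hal config version edit --version 1.0.0 # pin the versioned bundle of microservices
hal config provider appengine enable    # add a cloud provider to the config
hal deploy apply                        # apply the declarative config to the instance
```

Because Halyard is config-based, re-running `hal deploy apply` after editing the config is how you roll out changes, including adding more providers later.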
Before I hand it off, I just want to talk a little bit about open cloud and what we mean by that, because it’s a real passion of mine when I think about [inaudible – 05:53] software world, where it’s going, [inaudible] open-source projects [inaudible] most exciting ones out there. And it’s really cool how a project from Netflix, one of Amazon’s biggest customers, has Google and Microsoft as really big contributors too. I think it’s really interesting that that came together, whether it’s Visual Studio [Code – 06:16] from Microsoft or Kubernetes or [inaudible] or React, I think [inaudible].
This is from [inaudible – 06:29] survey. These are the top three concerns that IT managers have about moving to the cloud, and I have a big arrow there pointing to the one that I think is interesting. If you look at [inaudible – 06:41] ability to change vendors, that’s more than tripled from 2012 to 2015. If you look at the others, they’ve either gone down or [inaudible – 06:50]. Lock-in is really becoming a concern [inaudible – 06:53]. I think it’s kind of funny too. People get [inaudible] has really gone down. I think people went from uncertain to actually worried about it. [inaudible – 07:05] multi-cloud [inaudible] security. And then for CEOs, I think a lot of CEOs were drawn to the cloud. They got to basically switch high capital cost expenses to metered, ongoing, variable costs. But then they kind of realized that wow, this is actually a really [inaudible – 07:25] path that I have for my company, that’s sort of locked into an external provider; they may change rates, they may compete with us, who knows.
So what are people doing about it? 85 percent of enterprise companies larger than 1000 people have a multi-cloud strategy. It’s a little [inaudible – 07:43] hybrid cloud in there as well. Sort of what that means exactly in terms of multiple public clouds. But regardless, I think everybody [inaudible – 07:51] multi-cloud. A lot of the customers that we talk to, even if they’re not implementing multi-cloud, just the fact that they know it’s an option should they need to migrate without having to do a full rewrite [inaudible – 08:01] is really the feeling.
Last slide here with the charts [inaudible – 08:09]. So cloud initiatives, you can see implementing [inaudible – 08:13] in the cloud, 38 percent. That’s pretty high obviously. I don’t know what the other people are doing. They probably already did it, I assume. But you can see it’s [inaudible – 08:22] containers, which I think is sort of a known entity in the market [inaudible].
So the context of all this [inaudible – 08:36] how do we think about Spinnaker and Spinnaker as a platform. Typically, this really comes into play mostly with the providers. This is where we’ve seen the most [inaudible – 08:43] contributions and we’ve seen it’s the most [inaudible] part, I would say, of the [inaudible – 08:47]. So we talk about [inaudible] there. If you look at the [inaudible] Spinnaker platform, there’s a lot more going on. So we have first-class notification integration, Slack, [inaudible – 08:57] and Jenkins [inaudible] talked about a little bit. And we’re also [inaudible – 09:15]. So that’s [inaudible].
So you look at the whole ecosystem [inaudible – 09:24] talked about this a little bit [inaudible] some of the stuff we’re working on [inaudible] 2.0. 1.0 was about getting the core product enterprise-ready. And I think as we look ahead [inaudible – 09:33] features, [inaudible] more [inaudible] certain add-ons and [inaudible] a nice transition to looking ahead at 2.0 [inaudible].
Andy I just want to underscore the work that [inaudible – 10:01] to [inaudible] Spinnaker 1.0 with [inaudible] authorization, awesome stuff [inaudible] Netflix [inaudible] exciting. We already set up Spinnaker many years ago. [inaudible – 10:19]. So we look ahead to what’s coming down the [inaudible – 10:39]. One of the cool features of Spinnaker at Netflix is that this is particularly heavily used [inaudible] analysis [inaudible]. And its first-class integration with Spinnaker was actually kind of a watershed moment for [inaudible – 10:58] engineering team [inaudible] Spinnaker [inaudible] Spinnaker [inaudible] basically jumped on Spinnaker as a platform because they got this [inaudible – 11:08]. And so this is the idea of being able to say, “Hey, look, before I roll this service out to 100 percent [inaudible – 11:14] traffic, I want to have a small amount of traffic [inaudible – 11:17] and I want to do an apples-to-apples comparison [inaudible] of this version versus the old version.” Like I said, it’s [inaudible – 11:25] to delivery in Netflix. It’s very heavily used. And unfortunately, for the rest of the world, it’s not available to them. However, that’s changing. The [inaudible – 11:36] in Netflix has partnered again with [inaudible] Google. And they are working on open-sourcing the ACA platform, first-class integrated with Spinnaker. And it’ll encompass multiple [inaudible – 11:50], not just [inaudible] support [inaudible] really exciting. In fact, the [inaudible] Netflix is [inaudible] Netflix [inaudible].
One of the biggest features, aside from ACA and the community [inaudible – 12:27] this whole idea of [inaudible]. How can I define this stuff outside of Spinnaker? So when that effort is [inaudible] declarative delivery. And the first part of declarative delivery was this idea of managed pipeline templates. And managed pipeline templates are already out there [inaudible – 12:45] onboarding teams at Netflix already. But I suspect [inaudible – 12:53]. There are a number of efforts [inaudible] community [inaudible] some UI features but also [inaudible – 13:08] out some of the things we’re learning with [inaudible] templates. This effort again is leading to a larger effort called declarative delivery. And also [inaudible – 13:17] ultimately describes [inaudible] expectations, and some policies. And Spinnaker will infer ultimately the how to do all this in terms of delivery. [inaudible – 13:33] Spinnaker is very much [inaudible] I want to do this, this, this, and this. And what we’ve obviously seen in Netflix is that there are certain patterns that have come out of that. A lot of teams are pretty much doing the same patterns. So if we can pull back a little and say, “Hey, look, [inaudible – 13:50]. Who owns it? Maybe some other things. Give us some KPIs.” And [inaudible – 13:56] how this thing will be delivered into production. And we can infer a whole lot of aspects about that. And we can determine the how. And ultimately, that will provide the benefit and velocity for service teams using Spinnaker, but also will give us a whole lot of opportunities to have [inaudible – 14:18] to be gained [inaudible] figure out [inaudible] auto-deploy them onto Docker. It doesn’t matter. There are a whole lot of [inaudible – 14:25]. So this effort is well underway. If you are in the Spinnaker ecosystem, I suspect that you are probably aware [inaudible – 14:33]. But there is [inaudible] out there. So I look forward to [inaudible]. 
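To make the "traits, expectations, policies" idea concrete: a declarative app manifest along the lines Andy describes might look something like the sketch below. This is purely illustrative; every field name here is hypothetical and does not come from the actual managed-pipeline-template schema, which was still evolving at the time.

```yaml
# Hypothetical sketch only: the field names are invented for illustration.
application: checkout-service
traits:
  tier: critical            # what tier the app is in
  owner: payments-team      # who owns it
  sla: "99.95"              # what SLA it promises
  kpis: [error_rate, p99_latency]
expectations:
  environments: [test, prod]
policies:
  delivery: canary          # how to deliver it to production
  canaryWeight: 5           # percent of traffic the canary receives
```

The point of the model is that a file like this states the *what*, and Spinnaker infers the *how*: which pipeline shape, which stages, and which safeguards to execute.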
And that will give some more details in terms of how you can participate in the ecosystem. But one thing we want to be sure is that this is a platform that’s easy to plug into. [inaudible – 14:54] the good work that [inaudible] the work that Google is doing. One of the cool things about [inaudible – 15:00] coming out of the community. For example, [inaudible – 15:04] take advantage of those innovations. So we see [inaudible] Netflix that the more extensible this platform is, the more innovations we all get to take advantage of. So we are very much focused on figuring out how we can make this very, very [inaudible – 15:20] so that anyone can sit down, write a stage, write a whole UI component. In this quarter, at Netflix, we’re doing a lot of this work internally to ensure that service teams can build their own [inaudible – 15:33], even if it isn’t open-sourceable. The work that we’re doing [inaudible] for them. Coordination with Google [inaudible] will make it easy so that anyone can sit down and write a stage or even a whole, new UI that encompasses something with respect to [inaudible – 15:48].
Finally, if you aren’t aware of containers or [inaudible – 17:42] the world, the Kubernetes work that the good folks at Google and [inaudible – 17:47] participate a lot [inaudible]. There’s a ton of work going into Kubernetes. There’s some new stuff [inaudible – 17:54] touch on tonight. Netflix has its own internal container cloud called Titus, so we’re also investing heavily in containers. Titus will eventually be open-sourced. So there’s some [inaudible – 18:05] community that may enjoy Titus over Kubernetes perhaps. But ultimately, those innovations are going into Spinnaker, making it very, very easy for container developers to deliver those containers, whether it be to Kubernetes or Titus, and actually [inaudible – 18:20] conversations [inaudible]. So this is, again, [inaudible] going on here. [inaudible] delivery platform to keep it the compelling, awesome experience that it already is, and it will continue to get better. With that, thank you.
Matt Duftler I will point out that Christopher forgot to point out what FIAT stands for, which is something we’re the most proud of. It stands for Fix It Again, Travis. Anyway, so the idea with this talk… I don’t know. I have 15 minutes. I have no idea if this is more or less than that. We got some feedback that people who’d like to participate [inaudible – 19:06] start or [inaudible] intimidating or something. So the idea was to give some information on how to get involved, a good place to get started, how to approach it. My general opinion on Spinnaker is it’s broader than it is deep. It’s like a bunch of things spread out. So it is hard to find your way, rather than any one giant, deeply complex thing. So it requires a bit of navigating to figure out where you are. So in terms of the community, I needed slides, so I had to put numbers on them. I don’t know if the stuff makes sense, but these are forks from like an arbitrary sampling of services. So that top one, spinnaker/spinnaker, was like the umbrella project, which is less relevant now than we [inaudible – 19:50]. But it had all like the baseline configs that apply to all the services and a bunch of scripts and things, so a lot of forks there. Deck is the UI, [inaudible – 19:58] is the orchestration engine, and [inaudible – 20:00] is where all the meat of interacting with the platforms lives. So I don’t know. There are some number of hundreds of forks, so lots of people forking these repos.
Tons of Slack activity: a couple of thousand members in general, a bunch in dev, and somehow, 106 for halyard, and we think at least half of those are bots that [inaudible – 20:21]. So I don’t know, lots of companies and individuals contributing more every day. Andy and Christopher touched on a bunch of this, but lots of folks doing large-scale integrations with providers, small fixes, features, patches, everything you can imagine, so pretty active.
A couple of questions. Show of hands, who here has used Spinnaker, like as an end user? Okay, who here has installed Spinnaker? All right. Who here has forked one of the Spinnaker repos? And who here has contributed code back? Okay, pretty good numbers, not bad, all right. So just page down really quickly, all right. So in terms of finding something to contribute, right? We use spinnaker/spinnaker for the issue tracker. There’s a ton of stuff in there. Some of it is like nonsensical. Some of it is concrete bugs. Some of it is feature requests. We often, probably not as often as we should, go through there and try to triage things, pick out things we can sort of immediately address, things [inaudible – 21:28] longer-term efforts. And oftentimes, a lot of things that we think would be better handled in the Slack chat rooms. And we’ll tell people that and kind of direct them there. For somebody looking to get involved, that’s a good place to start: poke through those issues. See if there’s something that you’re interested in there. Same thing. What we typically do with those issues, if we don’t [inaudible – 21:48] out of the gate (which typically, we don’t), is try to reproduce whatever’s reported, do a little bit of sort of thinking on it, and then try to engage with whoever reported that issue. So anybody can do that. It’s a good way to identify an area of work where you can make some kind of meaningful contribution.
Audience Person [inaudible – 22:04] beginner friendly fight.
Matt Duftler There’s a beginner friendly fight? I was [inaudible – 22:09] we should have a beginner friendly fight. We have a beginner friendly [inaudible – 22:13] apparently. So I didn’t know it existed. What did you put that [inaudible – 22:17]?
Audience Person A lot of stuff.
Matt Duftler A lot of stuff. Quite a few of the items here are containable problems. It’s not that hard to figure out what people are getting into. But you have to dive into it. And the place where people ask the most questions is in the general channel. There are other channels. People ask questions all over the place. But general is the place where you find the most end-user questions. And the community is really active. We see lots of people pitching in and helping each other out, and that’s a good way to get involved: [inaudible – 22:46] question and try to help out. Docs: with the 1.0 refresh, we redid all the docs, we redid the landing page and the whole website, and it’s much better. But there’s always room for improvement with the docs. So if you see something missing, see something wrong, jump in. These are all meaningful contributions and really a great way to come up to speed on the project. And code labs, we have a bunch. They are pretty useful. We all typically run through them every now and then from scratch. We can always use more. If you’ve solved some interesting problem with Spinnaker and you see people kind of asking about it, write up how you did it, make it a code lab, [inaudible – 23:21] site, you’ll be famous. You won’t make any money off of it, but we’ll throw it up there. And we have a roadmap on that site, so Andy touched on a lot of the things on the roadmap. But get involved with those things if you have opinions, which I think most of us do. Share them. There’s plenty of room to sort of get more folks involved with those things. Thomas, I think you have a question. Thomas is saying there’s a blog on the site, blog.spinnaker.io. Write an article. We’ll absolutely add it to the blog, the more the merrier. There’s quite a bit of material there already. All right, so if you want to make a code contribution, they fall into two categories. If it’s something trivial, just submit a [inaudible – 24:02], use [inaudible], figure out who to address it to.
There are a bunch of microservices. We all have way too much email. You’re really better off identifying somebody to direct the PR to. [inaudible – 24:15] seen them in the chat room talking about things that are related. If it’s a non-trivial change, the best bet is to strike up a conversation with the folks most likely to be interested in that area. Just discuss it first. Come up with some proposal, some things required, design [inaudible – 24:31], but for non-trivial changes you really should sort of prime the pump: have a conversation up front, know who’s going to be interested, get them on board, sell it a little bit, and then start to work on it. Once you get a little bit of a meeting of the minds, create an issue on spinnaker/spinnaker, and then start to push code, which we’ll get into in a second. Make the PRs as small as possible. Tag that same issue on each PR so you build up a record of all these changes. It’s fine if things don’t turn out exactly the way you wanted. It’s quite easy to change things. It’s not like you push that PR and it’s there for the rest of time. So you’re better off having small changes that all build up to something rather than one giant monster of a change. And [inaudible – 25:16]. The code is changing. Follow the conventions for the commit messages, and you’ll be less likely to push breaking changes and describe them as patches, those kinds of things. So follow those conventions. A couple of examples: write tests, and they should be meaningful tests. But definitely write tests. The [inaudible – 25:34] verifying [inaudible] starts up. Surprisingly often, we do it to ourselves all the time. We push a change. Works fine for me. Someone else pulls it, it doesn’t work. Sometimes Travis, not that Travis but [inaudible – 25:46] Travis, chokes on it. But often, it’s the person reviewing it: pulls it down, it doesn’t start. It’s kind of like the first thing.
So I mean I’m now in the habit of provisioning a VM with nothing else on it. Check out the branches. Make sure it starts up. It’ll save a bunch of time. And again, assuming it’s a non-trivial change, you’ve probably been in conversation with somebody about this change, so just tag them on that PR; they’re unlikely to come across it by happenstance if you don’t point it out to them. So you’re going to get some comments, hopefully, on the pull request. Just push additional commits. There’s been a bit of back-and-forth discussion on this. Everybody has a preference. I mostly prefer to just add additional commits so you can see that the changes were in response to some comments. And then once you’re all the way through the whole thing and it’s approved and you’re ready to merge it, you can squash it, [inaudible – 26:37], and then force-push the whole thing back up. Don’t merge master [inaudible – 26:43] not to. We do it all the time. The middle one, for whatever reason: GitHub doesn’t notify that you pushed another commit. So typically, what will happen is you push that commit. It addresses all the comments. And then a week later, nothing has happened. And then you say, “Hey, I pushed this thing.” And then I go back and say you need a rebase. And then you rebase it. But you still don’t tell them, and then you just keep doing that. So Travis, not [Mel – 27:10] Travis, [inaudible – 27:11] Travis pointed out today that you can actually subscribe to a PR and then it will notify you of commits. And he showed me an actual email from GitHub that said, “Somebody pushed me,” which I’ve never seen before. So you can do that too. But I recommend just telling the person that you made the change. All right, so trying not to be snarky, but this is my snarky slide. So if what you like is getting a never-ending deluge of emails that start with [inaudible – 27:39] for the delay…
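The "push follow-up commits, then squash before merging" flow Matt describes can be sketched in a throwaway repo like this. The repo name and commit messages are made up for illustration; on a real PR branch the last step would be a force-push rather than a local log check.

```shell
# Sketch of the review-then-squash flow, in a disposable local repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"

echo base > app.txt
git add app.txt && git commit -qm "chore: initial commit"

# Feature commit, then a visible follow-up commit addressing review comments:
echo feature >> app.txt && git commit -qam "feat(provider): add credentials scaffolding"
echo review-fix >> app.txt && git commit -qam "address review comments"

# Once approved, squash the review fixup into the feature commit...
git reset --soft HEAD~2
git commit -qm "feat(provider): add credentials scaffolding"
# ...and on a real PR branch you would then force-push it back up, e.g.:
#   git push --force-with-lease origin my-branch

git log --oneline   # one feature commit now sits on top of the initial commit
```

Keeping the review fixups as separate commits during review (so reviewers can see what changed) and squashing only at merge time matches the preference Matt states above.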
The way to accomplish that is to submit huge [inaudible – 27:43], like thousands of lines, because you will get a lot of emails that start with “sorry for the delay.” If you don’t like getting that, then make the smallest PRs you possibly can. They don’t actually have to move the ball forward much. Like, creating a new provider: you can add the stuff that gets the credentials in place for [inaudible – 28:03] no operations, then add one operation at a time, small PRs. Because what happens is people open the email, they click on the link to the [pull – 28:11] request, and they see like 3400 lines of code, and then [inaudible – 28:15] email. And then they hope somebody else comes across it, and then you work on the [inaudible – 28:21] I didn’t see it [inaudible]. So small is better. I bet if you measured it, that’s a pretty accurate statement, the first one. It’s just going to take a while. Smaller is better. I mean that’s all I can say. Make them as small as possible. So in the same vein, right: when somebody is actually at the point of [inaudible – 28:44] reviewing a pull request, why this pull request exists should be obvious and pretty much self-contained within that pull request. Or it should link out to something that makes this obvious. Other people are reading all of these things and are just going to inject themselves if they can’t figure out why this pull request exists. If there are implications, they should be made explicit; people shouldn’t have to guess at it. And given that at some point, somebody is going to hit the Launch button, you want to make them comfortable with that. So do whatever you can to make them comfortable with the fact that this is a low-risk merge. That’s where to find us.
Let’s see, so the last disclaimer: I recognize that it’s a bunch of big companies involved, but it’s a pretty friendly group of people. Everybody is pretty much on a first-name basis and kind of chatty and jokey. It’s really a good bunch of people, and it’s not hard to get involved. There’s tons of stuff to do, more ideas than we have people to work on them. So really do get involved. Despite my inability to figure out how to use this thing, it’s not an intimidating bunch. It’s a pretty fun group of people, actually.
Any questions? Does it sound like crazy talk or not make sense? No, all right, go, thanks.
Isaac Mosquera All right, how’s it going? My name is Isaac Mosquera. That stache is epic. Don’t ever, ever take that off. [inaudible – 30:13] favorite thing of the Thursday meetings, just your face and the stache. All right, so I’m going to go over best practices with configuration management tools, things like Chef, Puppet, Ansible. And we see a lot of folks using them with Spinnaker.
I’m going to start with this email I literally received about 45-50 minutes ago. [inaudible – 30:35]. And [inaudible – 30:37] fighting fires for the past couple of days because [inaudible] Chef server went down. And now all the servers it configures are jammed up. Lol, unreal. What made this email amazing was that it was sandwiched between, “Hey, did you go drinking? Here’s my problem. And when you come back, we need to drink again.” Configuration management servers will cause you to drink. So with Spinnaker, I suggest, if you’re using a configuration management server, don’t. But you guys have invested a lot of time and energy and infrastructure already into these tools, and you want to make that work with Spinnaker. And it does work well with these tools. You just have to do it in the right way. So let’s go over what those right ways are. Spinnaker works out of the box with a few system packaging tools already, like Debian and RPM. And in order to make it work really well, you have to treat Chef and Puppet and Ansible, and whatever configuration management tools you’re using, in the same way. And what that means is creating deployable artifacts with Chef and Puppet and Ansible. So this means taking your recipes or your playbooks and creating a version of them at build time, as a build artifact out of Jenkins. And this is really critical for you to have things like version management throughout the Spinnaker deployment lifecycle. I don’t know what that [inaudible – 32:06], but I had to add it once I picked that one. So I don’t know what it really means. But the next thing is to remove the centralized configuration server. So back to that first slide, where the guy was saying his centralized servers were crashing and causing a lot of pain. You want to get out of that. And all of these tools, all of the configuration management tools, can run in a standalone mode.
And if you do the first part, just creating a deployable artifact, those two things mean that you actually don’t need to depend on some other centralized server that you then have to manage, deal with permissions, networking, all of the other things that can fail during build time. They will fail, and they will cause you to drink.
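As a sketch of the "versioned artifact at build time" idea, a Jenkins job could simply tar up the playbooks or recipes with a version stamp. All the paths, the project name, and the version scheme below are made up for illustration; the point is only that the output is an immutable, versioned artifact the bake can consume.

```shell
# Sketch: package configuration-management content as a versioned build artifact.
set -e
work=$(mktemp -d) && cd "$work"

# Stand-in for your checked-out Ansible (or Chef) content:
mkdir -p playbooks roles/common/tasks
printf -- "- hosts: all\n  roles: [common]\n" > playbooks/site.yml
printf -- "- name: noop\n  debug:\n    msg: hi\n" > roles/common/tasks/main.yml

# Version it at build time, e.g. from the Jenkins build number:
VERSION="1.4.${BUILD_NUMBER:-0}"
tar czf "myapp-config-${VERSION}.tar.gz" playbooks roles

ls myapp-config-*.tar.gz   # this tarball is what the bake stage pulls in
```

Because the artifact carries a version, you get the same traceability through the deployment lifecycle that Debian or RPM packages give you out of the box.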
Another thing: once you get out of using centralized configuration management, the first question that comes up is, what do I do with things like my data bags, or the secrets, and all the configuration that I had in that system before? Well, the thing to do with them is to move them to their proper tool. For things like secrets, there’s a ton of solutions out there. You can craft your own. Whatever you do, just get it out of the configuration management systems.
So then comes time to create Packer templates. And here the best thing to do is to try to minimize the number of Packer templates that you have, if possible to one, but try to reduce, reuse, and recycle templates. I see this a lot, where people will create a new Packer template per application. And if you’re doing this, it is a headache for the administrators of the Spinnaker instance. So ideally, what you want to do is extract all of the common variables and bubble them up to the bake stage using extended attributes. And I want to point this out because a lot of people don’t see this. When this isn’t checked, you don’t see any of this. So it looks like you can’t really extend the bake stage to do whatever you want to do. But this is really kind of the best way of doing it. Extend that out so that your application developers can pass in your package name. In this case, if you’re using Chef, it would be your recipe, or your Ansible playbook, and allow them to extend and customize the rest of the bake stage below. But do not create multiple templates for each individual application, because you’ll be [inaudible – 34:26].
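In pipeline JSON, that pattern looks roughly like the fragment below: one shared template, with per-application values passed through the bake stage's extended attributes. The keys inside `extendedAttributes` are whatever your shared Packer template expects; the ones shown here are hypothetical examples, not standard names.

```json
{
  "type": "bake",
  "name": "Bake",
  "baseOs": "trusty",
  "package": "myapp",
  "extendedAttributes": {
    "ansible_playbook": "playbooks/site.yml",
    "chef_recipe": "myapp::default"
  }
}
```

The administrator maintains the one template; each application team only supplies its package or recipe name in the stage.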
But at the same time, don't let the variables get way out of control. What we also see is dependency management being exposed in Spinnaker: you see the Tomcat version, the NGINX version, and then the same thing again in the configuration management system. I'm not sure why people decide to do this; I'm just telling you not to. The best thing is to keep your dependencies contained inside the dependency management tool you chose, like Chef or Ansible. Don't let that leak out into your Spinnaker pipelines or your bake stages. Make sure they're absolutely contained, and your life will be better for it. But if you do have a lot of common variables shared between groups or sets of applications, make use of Packer's var-file feature. It encapsulates all of these variables. For instance, if you have a particular region that you want to bake in, and the subnets and VPCs are all the same for that region, encapsulate that into a variable file and let your application developers know to use it when they bake in that region. This simplifies things so that every single pipeline doesn't repeat the same variables like VPC and subnet IDs. It's a feature I rarely see used, but it helps hide that complexity from your application developers so they can just supply the recipe name and go on with their day.
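A minimal sketch of the var-file idea, with made-up region values; Packer's `-var-file` flag merges these into the shared template at bake time:

```json
{
  "aws_region": "us-west-2",
  "vpc_id": "vpc-0example",
  "subnet_id": "subnet-0example"
}
```

With a single shared template, a developer would then run something like `packer build -var-file=us-west-2.json -var 'package_name=my-recipe' base-template.json`, passing in only the application-specific bits.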
One of the last things we'll go over is chaining pipelines to reduce total bake time. With configuration management at run time, you configured the whole machine at once: you started with a base image, ran your Chef or Puppet or Ansible scripts, and configured the entire machine in one shot. In some cases that could run for 30 or 40 minutes, or even up to an hour. When you move that into Spinnaker, the workflow engine lets you break things down into consumable, reusable chunks, and because Spinnaker caches bakes, you can reuse them for future bakes. If you're doing the security bake, tools bake, and application bake all at one time, then as the application developer it costs me that 30 minutes every single time I want to make an application change, even though what's actually changing is that last five minutes. Breaking them up and chaining them together provides a really good experience for the person using Spinnaker: they're iterating and baking five minutes at a time instead of 30, and that cycle time is really important for an application developer. And the last thing: don't use centralized configuration management servers. But if you have to, follow all those rules and you'll be better for it. That's it.
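Chained bakes can be wired together with Spinnaker's pipeline trigger type; this fragment is a hypothetical sketch of an application-bake pipeline that fires only after an upstream tools-bake pipeline succeeds (the names and pipeline ID are invented):

```json
{
  "name": "app-bake-and-deploy",
  "triggers": [
    {
      "type": "pipeline",
      "application": "myapp",
      "pipeline": "tools-bake-pipeline-id",
      "status": ["successful"],
      "enabled": true
    }
  ]
}
```

The upstream pipeline produces a cached tools image; this one only re-bakes the five-minute application layer on top of it.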
Alex Bello And so next up is Rohit.
Rohit So now that we have Spinnaker and all the services running [inaudible – 37:44] Beta [inaudible], what we really [inaudible] SLAs for all these services. For this, we actually [inaudible] service [inaudible – 37:56]. Basically, it's a simple [inaudible – 37:59] application which has a [inaudible] for all the other applications, like Jenkins, [inaudible – 38:06], and Spinnaker itself. It performs all the user actions, and we're extending it as we go [inaudible – 38:13]. So now you [inaudible – 38:15] and it does all these coverages: [inaudible – 38:19] GitHub, [inaudible] same [inaudible] all the Jenkins [inaudible] are up and running, and you can actually trigger jobs; log into servers [inaudible – 38:31] secret can be downloaded and used; and last but not least, log into Spinnaker [inaudible – 38:37] and trigger a pipeline. All of this [inaudible – 38:40] every 30 seconds, and the results end up in [inaudible – 38:42]. So now if [inaudible] other [inaudible], we can just go here and say, "Hey, GitHub's SLA was 97.88, and [inaudible – 38:53] for last month was 98.50." [inaudible] confused. And with all of these [inaudible – 39:08] in our service-generated pipeline, we needed a way to validate that every piece is working all the time, before the user actually gets to it. [inaudible – 39:18] hey, your [inaudible] but something doesn't work [inaudible]. For this, we created a simple Prometheus application, [inaudible – 39:28] application, nothing fancy about it. The idea was: with [inaudible – 39:33] multiple [inaudible], Amazon accounts, [inaudible] multiple masters, how should we test it? It basically starts with baking the application [inaudible – 39:42]; even if nothing has changed, it basically says [inaudible – 39:45] okay, I have a change, somebody [inaudible] GitHub, and I [inaudible] in multiple regions. After baking, right now we just use [inaudible].
So it requires that those two regions [inaudible – 39:56]. Hey, now [inaudible] what if I have to run some tests from Jenkins? [inaudible]. So there's a [inaudible] stage here, [inaudible] Jenkins [inaudible]. At this [inaudible] I have a [inaudible] Debian packages. After attaching, it actually builds itself without any change, and [inaudible – 40:18] for the next iteration, [inaudible] only in the production [inaudible]. After that, it [inaudible]. It checks: hey, is my [inaudible] application up and running? Am I able to reach [inaudible]? Am I able to reach [inaudible – 40:38]? After all that is the clear stage, which is encapsulated in the [inaudible]; it actually clears the [inaudible]. [inaudible] every hour, [inaudible] about 400 AMIs in a week, [inaudible] what are you doing. In the end, it destroys [inaudible] at the end of each run, and it runs every [inaudible – 40:59]. So this encapsulates everything [inaudible] on this one [inaudible] into RSGB platform. These are just a couple of [inaudible]: Jenkins has access to GitHub, GitHub is able to trigger [inaudible] Jenkins master, and Spinnaker can reach both Jenkins masters. [inaudible – 41:21] instances have access to [inaudible] on the deployed [inaudible] configured properly.
Christopher Raise your hands. I would actually love to talk to you or get your contact information to follow up, because right now we're trying to understand how you're using Spinnaker in self-hosted environments. Would you ever use a SaaS version? What about a SaaS version would draw you to it? The more we understand about that, the more progress we'll make toward that offering. So please find me, or I will find you. Sure [inaudible – 42:02]. The next question. Did that answer the question for whoever asked it? Okay, that was my sneaky way of finding that [inaudible – 42:14]. "Does Netflix use one big Spinnaker cluster, or does each microservice team have its own cluster?" Question mark.
Unknown Person Each microservice has its own… Thomas, please, yeah.
Thomas The best way to find the answer is to go to the blog. One of the first posts is actually a three-part series on how we run and manage Spinnaker at Netflix. The short answer is that we run one cluster for each microservice in Spinnaker, deployed independently. So at any point, you could have a hundred instances of Spinnaker services running around. And we even [inaudible – 42:56] larger clusters into read-only and write clusters. If you're interested, there's a blog post with much, much more detail; it's a three-part series, and I encourage you to look at it at blog.spinnaker.io.
Christopher Third one: any plans to support [inaudible – 43:12] cloud functions in a native way? I hope so. Yeah, I think so. So [inaudible – 43:18] a little bit about ubiquitous continuous delivery and different deployment workflows and scenarios, with CDNs, binaries, things like [inaudible – 43:27]. Cloud functions [inaudible] fall into that category, right?
Andy Glover At the last meetup we had here, there was a pretty good discussion about where lambda (I use lambda generically, not AWS Lambda, but [inaudible – 43:39] functions-as-a-service-type things) sits in the spectrum of adoption. So if that's something you're looking for within Spinnaker, as Matt pointed out, bring it up in Slack and let's talk through it. At Netflix we're evaluating lambda usage as well; there's a small portion of stuff being tested out. The use cases I'm aware of at this point, from an industry standpoint, are pretty nascent. So if you bring this to the community, let's talk through it, because we definitely plan to support lambda delivery through Spinnaker, whether it be cloud functions, AWS Lambda, or other platforms. We just want to understand how people are doing this.
Christopher Yeah. I think functions are a really interesting use case, because right now it seems like people are just playing around with them, using workarounds for their own webhooks they want to set up. I haven't really seen any actual production instances of what happens when you have 500 functions, you really run it as a serverless app, and you have to figure out how to deploy that. I don't know the answer, but when it gets to that point, which we think it will, then I think Spinnaker becomes really interesting.
Lars Wander All right, sorry for the wait. My name is Lars; I work on the Spinnaker team at Google. I've been working on this tool called [inaudible – 44:51] for a while now. The goal is to make Spinnaker easier to configure [inaudible – 44:55]. How many of you have tried to install Spinnaker before? [inaudible]. How many people have tried to deploy Spinnaker before? [inaudible – 45:02]. How many people did it successfully on the first try? Okay, it's a hard one to set up. We're looking at fixing this. In the year and a half since Spinnaker was open-sourced, we noticed there were two groups of problems that showed up as people tried to figure out how to deploy Spinnaker on their own, especially in a production environment. The first one, I think you'll agree with me, is that microservices are hard. They're hard in general. Spinnaker is made up of nine microservices; asking any DevOps team to take on their life cycle is quite a burden.
The second thing, and it sounds kind of obvious, is that there's a lot of Spinnaker-specific knowledge in Spinnaker. There are hundreds of configuration parameters. Some services need special update semantics if you want to roll out new versions. The tools Spinnaker was built with might not be familiar to you. All of these things together pose a pretty big hurdle early on for someone adopting Spinnaker.
Looking at the first problem: Spinnaker is basically made for deploying microservices, so why not take advantage of that? And for the second problem, a lot of the Spinnaker-specific knowledge really can be codified quite well. We can apply validation; we can make sure that however you define your Spinnaker cluster, it actually is valid [inaudible – 46:18] deploy. We'll work before [inaudible]. So, a quick demo of Halyard. Before the demo, I set up a quick Spinnaker instance with Halyard running on Kubernetes. It has a public domain name; you can try to log in if you want to. Hopefully it doesn't work; otherwise, if they call on Travis, that would be [inaudible – 46:40] Travis. Nothing fancy going on. So here you're looking at the different microservices running in Kubernetes that make up this cluster of Spinnaker. You have a [couple of Clouddrivers – 46:51], a couple of [inaudible]. These things don't make sense unless you're familiar with Spinnaker, and that's good: you shouldn't have to be familiar with all the internals all the time. So let's say for the sake of production [inaudible – 47:02] Spinnaker… it's going to be hard to type with one hand, one second. We're going to enable monitoring for our Spinnaker cluster. There's a monitoring daemon that ships with Spinnaker, and it's not easy to configure: you have to attach it to every single one of your microservices, provide service [inaudible – 47:20] configuration for each one of those, [inaudible] config files, and have some way of shipping secrets into that monitoring daemon [inaudible – 47:28] so it can push metric events to your metric back end. Once you've configured all of that, you probably have to cross your fingers and try to redeploy Spinnaker. With Halyard, this is really just a few commands now, so give me a sec. [inaudible – 47:57]. We're going to configure a metric store.
So we're going to say hal config metric-stores, we're going to do [inaudible – 48:04] for the purpose of the demonstration, and we're going to provide it with an API key. So it tells me [inaudible – 48:10]. I think the value is in [inaudible – 48:13]; it looks at it and says, hey, that looks okay, that's going to work for you. Now all we have to do is deploy that change. We're going to do [inaudible – 48:26] deploy apply, and for the sake of brevity, we're only going to specify the services we want to update. Like we said before, Halyard really just sits on top of Spinnaker and issues commands against it. So it's already deployed [inaudible – 48:39] looks like a bootstrapping profile, or bootstrapping Spinnaker, which is like the original Spinnaker but with all the parts you don't need to just talk to the API stripped out. Now it's sending commands against [inaudible – 48:52] that's been deployed there, telling it to red/black all of the different Spinnaker services. Red/black is like the canonical Spinnaker deployment strategy. We can see here it's basically just echoing what [inaudible – 49:02] monitor the deploy. If you're familiar with the Spinnaker UI, these messages will look familiar. And because it is a red/black, we can go back to Spinnaker and still use it normally while it's being updated in the background; we could run a pipeline. All the while, if you go back here and look at what's actually being deployed, we see [inaudible – 49:23]. We have [inaudible] until the old ones get [inaudible – 49:26] after they've passed their health checks. And going to Datadog: I've backfilled some of the metrics, but you'll see we actually have all these great Spinnaker metrics now populated for us. [inaudible – 49:38] see things specific to a microservice: you can scroll down and see how fast Google Compute Engine is, or how fast the [inaudible – 49:46] is at this moment.
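As a sketch of that demo flow (the exact store name and flags depend on your Halyard version; Datadog is assumed here because the demo's dashboard is Datadog):

```shell
# Point the monitoring daemon at a metric store and supply its credentials
hal config metric-stores datadog edit --api-key "$DATADOG_API_KEY"
hal config metric-stores datadog enable

# Roll out only the affected services; Halyard red/blacks them for you
hal deploy apply --service-names clouddriver deck
```

Halyard validates the configuration before applying it, which is the "looks at it and says that's going to work for you" step above.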
And I guess the idea is that all of these kinds of things become easy with Halyard, where they were reasonably complicated before. That's it for me.
That's right, it's on 14.04. Who wants more platforms? Yeah, 16.04 is right around the corner; it was just a matter of getting the nightly builds in place for that and the [inaudible – 50:18] unit files. Who here uses Red Hat? [inaudible] strong Red Hat contingent? I bet most people want to run it on their Mac. I don't really know much about Mac; I might need help from someone. If there's a willing community member, you can follow Matt's guidelines for contributing.
Andrew Hello. Ah, it works. Hi, I'm Andrew with Armory, and today I'm going to talk for a very short time about Spinnaker on AWS. [inaudible – 50:51] this was really vague, and I should've renamed it like [inaudible] on AWS, because running Spinnaker on AWS [inaudible – 50:59] Netflix is a little painful. So I'll talk about two pain points, how to solve them, and the things you have to do to run it as an enterprise in production. The one that everyone knows and loves is throttling. Everyone's chanting it in the audience as everyone else is talking. Amazon will throttle you on at least eight APIs, and it happens very quickly as soon as you go into production. There are two ways to mitigate this. One is to try to prevent it in the first place: it would be really cool if nothing else in your organization were [hitting these – 51:43] APIs. That would be best, but it's not going to happen. The other thing you can do is configure service limits in Clouddriver. I think there's actually a doc for this on spinnaker.io, but I couldn't find it very easily, so just go to that pull request, 1291; it's just documentation on how to do this, and it's pretty good. That will help you prevent the problem, but it doesn't necessarily solve it completely, because when you are throttled, your pipeline will fail. So even if you configure your service limits and you're trying to be very careful, you're still going to get throttled from time to time if you have other users in your organization hitting that API. I call that a reactionary response; we're working on that, and it should be ready soon, so that should help everybody with that [pain – 52:42]. The other pain point is IAM permissions. Clouddriver is the biggest one to talk about here, but [inaudible – 52:50] does need a certain set of permissions; you can just grab that from the Packer website.
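Going back to the throttling point: Clouddriver's service limits take roughly this shape in its YAML configuration. The numbers and account name here are invented, and the pull request he mentions is the authoritative source:

```yaml
serviceLimits:
  cloudProviderOverrides:
    aws:
      rateLimit: 10.0   # max requests per second against AWS APIs
  accountOverrides:
    my-prod-account:    # hypothetical account name
      rateLimit: 5.0
```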
They'll tell you the minimum set. For Front50, you really just need S3 access and you should be good. But for Clouddriver, if you want to find the minimum set and you don't want to just use power user (well, actually, power user isn't enough), and you don't want to grant full administrator, you're going to need to build the policy. I have a generator you can use; there's a link right there. It can generate a policy for every version of Clouddriver: you just point it at a Clouddriver checkout, and it generates a policy based on whatever is checked out. Also, there are several policies needed to actually run Clouddriver. The account that Spinnaker lives in, which we'll call the managing profile or managing role, doesn't really need very much. I think it needs two actions allowed, which are describe-VPCs and describe-regions, I think.
The main thing is that for any account you actually want to deploy to, you need to set up a special profile (that's the one generated by that script), which we'll call the managed profile or managed role. The last step is to set up a trust relationship between those two roles. If you do all of that, you'll have a minimum set of permissions to run Spinnaker on AWS. That's all.
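The trust relationship he describes is a standard cross-account IAM setup. A minimal sketch of the managed role's trust policy, with an invented account ID and role name, might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/SpinnakerManagingRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The managing role in the Spinnaker account then assumes this managed role via STS whenever Clouddriver operates on the target account.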
Brandon We actually set up a specific stage in Spinnaker called the executor stage. In this executor stage, we feed an environment variable to the application at boot time that decides whether or not to run migrations. The cloud detail environment variable defines a server group type: when an instance comes up, its behavior is determined by that cloud detail variable, and the deployment moves forward only if the migration logic concluded successfully. Here's some [inaudible – 55:01] that shows what happens: an instance comes up, and if cloud detail was executor, then it runs its migrations. Another question might be, well, how do we know whether the migrations have actually completed? We have a service we wrote called candle that comes up only if the migrations were successful; it passes the health check for the ELB, and thus the execution of the pipeline goes forward. If it doesn't succeed and it times out, meaning its grace period runs out, we assume the migrations have failed. Also, [inaudible – 55:39] is that we have [inaudible] for these migrations being sent to [inaudible – 55:43] engineers. Any questions?
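A minimal sketch of that boot-time gate, assuming the deploy stage injects a `CLOUD_DETAIL` environment variable into the instance as described; the function name and echoed messages are hypothetical stand-ins:

```shell
#!/bin/sh
# Only the "executor" server group runs schema migrations at boot;
# every other server group type skips them.
maybe_migrate() {
  if [ "${CLOUD_DETAIL:-}" = "executor" ]; then
    # placeholder for the real migration command, e.g. a framework's migrate task
    echo "running migrations"
  else
    echo "skipping migrations"
  fi
}

maybe_migrate
```

In their setup, a health-check service then only comes up if the migration succeeded, so the load balancer's health check doubles as the pipeline's success signal.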
Audience Person Have you considered using the Run Job stage, available as the [inaudible – 55:55], with a Kubernetes container that does this? Because then, based on its success or failure, you'd stop your pipeline.
Brandon Yeah, we did consider that. But we would still have ended up rolling out remote access into these nodes to be able to run it, and that's one thing we did not want to allow.
Audience Person Does user activity play a role in there?
Brandon Say it again?
Audience Person Does user activity play a role in there?
Brandon No. Thanks, guys.
Alex All right, so this concludes our lightning talks. Let's look at the questions that came up in Slido. I've removed some, so let's see. What is the website for SLAyer? There is no website for SLAyer; it's a Lookout-internal [inaudible – 57:04]. What's that? Okay, next one: can we use Spinnaker to deploy Spinnaker?
DROdio Hey, Alex. One quick second. There was a follow-up question, I think, to one of the questions up here.
Audience Person Earlier, I think a gentleman from Netflix was talking about how you have a Spinnaker instance for each microservice. I guess this kind of answers it: are you using Spinnaker to deploy those instances of Spinnaker, or Jenkins, or something else [inaudible – 57:36]? Okay, so you have Spinnaker-on-Spinnaker in the workflow.
Audience Person 2 Our production Spinnaker deploys our production.
Audience Person 3 So, can we use Spinnaker to deploy Spinnaker? Yes. Okay, more SLAyer.
Alex Well, you named SLAyer [inaudible – 57:54] for the album names here, yes.
Unknown Person Hold on, there’s one other thought coming back here.
Unknown Person So you said you use Spinnaker to deploy everything, so I assume you do that for Cassandra too. Do you have any other databases where you can just take an instance out, where you would use Spinnaker too? Like, let's say, I don't know, [inaudible – 58:19] or something like that.
Unknown Person The thing to realize is that Spinnaker has pipelines, but it also has a very robust API. Everything you do through the UI or when you're running [inaudible – 58:31] pipelines is also available to you via the API. So teams like Cassandra take the Spinnaker API [inaudible – 58:38] and write specific tooling that is better able to look at the state of masters, slave nodes, and that kind of stuff. Generally, there's a rolling push strategy that will allow you to [inaudible – 58:50] those nodes. So if you have [inaudible – 58:55], then always maintain [inaudible]; the rolling push strategy is used for those kinds of instances. And [inaudible – 59:04] will have [inaudible] either custom tooling or custom pipelines that then do additional checks in terms of [inaudible – 59:11] workload. I don't know [inaudible].
Unknown Person 2 I'll add real quick: at Netflix we use Spinnaker to deploy Jenkins, [inaudible – 59:21] Stash, [inaudible], Presto, Druid, [inaudible]; basically all infrastructure is deployed via Spinnaker. You can pretty much plan anything in [inaudible – 59:36]. [inaudible] for stateful services: how are you dealing with users deployed via Spinnaker?
Unknown Person [inaudible – 59:43] want to elaborate on the question?
Unknown Person Authentication to the servers. That's probably a better question for someone else [inaudible – 59:50]. At Netflix, we have a very, very liberal auth and kind of [inaudible] policy; basically, everyone has root to everything. So the answer at Netflix is that we don't really deal with it. I suspect other companies probably have a more detailed answer that people would want to hear, so I will defer to someone who's solved this problem elsewhere. Silence.
Audience Person [inaudible – 1:00:12].
Unknown Person The guy's name is Rohit. [inaudible – 1:00:17]. How does the community develop Spinnaker locally?
Matt I can say for Google, and I think for Netflix too, we mostly run the service you're working on [inaudible – 1:00:31] local development workstation, and sometimes the downstream services as well, if you're working on [inaudible – 1:00:38] bunch of things [inaudible] downstream services to run. Almost everybody uses VMs to run everything, and we'll forward ports and things to the services running on that VM that you're not developing locally. Any of the repos you check out use [inaudible – 1:00:55] build system, and by convention, it includes [inaudible] easy to start up. If you type [inaudible – 1:01:01] idea, it'll generate all the artifacts for [inaudible – 1:01:03]. It's pretty easy to get up and running, and you can use a debugger right out of the box, no issues there. I can answer the next question. [inaudible – 1:01:12].
Unknown Person For Netflix, the way that we work is we typically just run a single service or a couple of services locally. And then we have an internal certificate that allows us to just talk directly to our pre-production or production Spinnaker services. So we don’t need to run everything locally if we’re working on a larger feature that spans multiple applications.
Matt That's Rob from Netflix. This goes to what Andy said earlier: they keep a big Spinnaker instance or two up and running over time. Most of us aren't doing that, so we have to run the components ourselves; we each pretty much have our own VMs running whatever we need.
Audience Person To add to what Rob said, we've actually automated a lot of our checkout of repos and installation of services and things like that. A new developer coming in just needs to download this Spinnaker automation repository [inaudible – 1:02:11] internally. They run it, it downloads [inaudible – 1:02:14] repositories, [inaudible] and gives you a fairly robust local environment to work in. So if you never want to set up Spinnaker locally, just come work at Netflix.
Unknown Person Also, this is a very in-depth session, but if anybody has more general questions about Spinnaker, don't be afraid to ask those as well. I'm sure [inaudible – 1:02:34] showed up because they're interested in Spinnaker at a high level. So feel free to ask anything.
Unknown Person Is anyone working on ECS support?
Unknown Person Not that [inaudible – 1:02:41].
Unknown Person Spinnaker uses [inaudible – 1:02:48] 2.0 for [inaudible], for [inaudible] supposed to be [inaudible] framework [inaudible]. Is there a reason behind this choice? Hi, everyone, my name is Travis; I'm the [inaudible – 1:02:58]. What's happening behind the scenes is: yes, OAuth is technically an authorization framework. What we're doing is asking the user for permission to query GitHub or Google Groups or whatever your OAuth identity provider is. We ask them, "Can we get your email address?" and then we go query them through the OAuth flow, asking for your email address. That's how we authenticate that you are who you say you are: your identity provider says, "This user has successfully logged in, and they're giving you permission to go get their email address." On the other half of that question, [inaudible – 1:03:44] authorization provider, [inaudible] authentication provider, yes. SAML is also single sign-on authentication, and there are some SSO providers that also provide your group membership. So what Gate, the gateway API server, does when you're doing your single sign-on is actually return all of your group information, to say this user is a member of groups A, B, and C. We can use that group information to then enforce which applications or which accounts you have access to within Spinnaker, specifically FIAT and the authorization [inaudible – 1:04:29] a couple of times tonight. If you have any questions about that, come ask me.
Unknown Person Other questions? I'm surprised we haven't seen anything about Spinnaker versus Kubernetes or [inaudible – 1:04:42]. At Armory, we see those questions come up from customers all the time, so if anybody has questions like that, we can take those. Alex, do you want to take the ECS one?
Unknown Person So question to the audience, is anyone working on [inaudible – 1:05:15] support. Does anyone care? [inaudible – 1:05:25].
Unknown Person So at Lookout, we are fully on AWS, and we are looking into ECS support, just because we don't want to manage Kubernetes. If we were on Google Cloud, probably [inaudible – 1:05:42] support. So we're looking into it. We may run Kubernetes; we may run on ECS. We don't know yet. We do know that containers are our next step after we onboard all of our services onto Spinnaker, so we'll be looking at that in July, which is coming very soon.
Audience Person The question is for Andrew. But it's Edda, E-D-D-A. So, Edda, for the person asking that question: [inaudible – 1:06:15] on Netflix [inaudible]. There's a Netflix [inaudible] on GitHub, and Edda is in there. So if you're doing [inaudible – 1:06:28] config management tools [inaudible] look like. I think Isaac talked about configuration management tools. Do you want to talk about some of the…
Audience Person Yes. If you're not using configuration management (I believe everything at Netflix is Debian packages; you don't use any configuration management, right?), then through Debian and RPM, you can lay out the disk however you like. For any other kind of runtime configuration management you might need, you can use things like Vault, which obviously does change certain things on disk. You've got a [inaudible – 1:07:06] eventually on [inaudible] for the application to use it. But with those two systems alone, you can get rid of almost anything you're doing with Chef or Ansible, drop that complexity, simplify things, and really let your application developers have control of their applications. [inaudible – 1:07:26]. There are other tools that I think Netflix is using, like [inaudible – 1:07:31]; Consul is the open-source equivalent, I suppose, or the HashiCorp equivalent. That also allows you to put properties on the file system at runtime and change things at runtime, although you've got to be careful with that; it's a slippery slope. I think we've all seen people put entire shell scripts and Ruby scripts into a configuration management system, which is a pretty dangerous thing to do. But there are other solutions to get rid of the complexity and the nightmares of the configuration management tools of the past.
Unknown Person Some of the slides, depending on the authors' permission, will be available on meetup.com/Spinnaker, so you can go there and see photos and materials from all the recent meetups. I'm going to let [inaudible – 1:08:31] Prometheus. No comment. Any thoughts about [inaudible] Spinnaker?
Audience Person So internally at Netflix, we use Edda, and our thoughts are that it's great. But the Edda that's used internally is slightly different from the one in open source. At the last meetup I kind of put out the challenge of whether the community could take the open-source Edda and bring it more up to speed. There are, I believe, a couple of AWS object types cached in the Edda we use internally that aren't in the open-source version. But Andrew from Armory, you said that, I guess, you're aware of some people having success with it outside. That's great. Oh, we answered that one. So I think [inaudible – 1:09:23] from [inaudible], how do you deal with the…
Audience Person So I guess the best answer is: we don't. Most of the [inaudible – 1:09:34] that we use, if the migration does not complete, will automatically roll it back anyway. From that point forward, [inaudible – 1:09:43] engineer. The migration does not succeed, it fails, and it's automatically rolled back by the ORM. Then the only real remedy is to do another release: either drop the migration and move forward with a release without it, or fix it and figure out why it didn't complete.
DROdio Any other questions from the audience?
Unknown Speaker All right. Is there any [inaudible – 1:10:15] size using Spinnaker, and also team size managing it? I can speak for Lookout. We have a team of about six people, full-time, implementing Spinnaker, not really managing it, but it’s been about a six-month journey to convert four or five different deployment methodologies at Lookout [inaudible – 1:10:42]. So today, or at the end of this month, we will have a GA-quality Spinnaker deployment at Lookout, where we’ll be able to [inaudible – 1:10:52] 80 percent of our services.
DROdio And Ben, do you want to talk about some of the companies we surveyed that were using Spinnaker?
Matt I was just going to say that this is a really good question. We had one internal team that we spoke to maybe six months ago, maybe a little more, that was interested. I think they’re running on [inaudible – 1:11:14], maybe, and had asked us about running Spinnaker. And we had an hour-long conversation in [inaudible – 1:11:20] about their footprint, and came to the determination that they were probably not big enough to warrant taking on the burden of running Spinnaker themselves [inaudible – 1:11:30] themselves. So we were going to do it. So we sort of talked them out of it. But now with Halyard [inaudible – 1:11:37], we really do think the risk of taking on the burden of running Spinnaker is greatly reduced, and we’re speaking to that team again. And they’re likely going to run Spinnaker now but manage it with the new set of tools. So that determination looks quite a bit different than it did pre-Halyard, which is why we did Halyard.
Ben Yeah, this is Ben from Armory. We recently surveyed about eight companies that are running Spinnaker in production, and the average size of the engineering team managing it is eight full-time engineers. In terms of the other question, is there any [inaudible – 1:12:13] organization size, I don’t have any numbers, but I would say that the pain around deployments increases with the size of the organization, because even more engineering teams [inaudible – 1:12:25] deploying.
Unknown Speaker So at Lookout, we have a hundred-plus engineers. And we have a team of six that’s been working full-time for the last six months bringing Spinnaker in. So it’s everything: people, process, and technology. It’s not just "let’s install Spinnaker and let all the developers at Lookout use it." There’s a lot of handholding. We have five or six different [inaudible – 1:12:53] deployment methodologies. So there’s a lot of transition, a lot of education, documentation, handholding, getting people bought into continuous delivery. I think that was probably the most challenging thing.
Unknown Person Any other questions? All right, I think [inaudible – 1:14:14].
Unknown Person 2 One more in the back here.
Unknown Person One more? All right, cool.
Question from the Audience I think it’s just kind of a follow-up, which is: what’s the smallest number of services at which you’d start looking at Spinnaker for managing them?
Unknown Person Ben, do you have any stats, or like the smallest organization? At Lookout, we have 80-plus services [inaudible – 1:14:44] probably shouldn’t be…
Questioner We talk to a lot of companies that are evaluating Spinnaker. Now, I mean, this is a very rough estimate, but 100 engineers-ish, maybe 100 services, is the tipping point where something like Spinnaker really makes sense. To give you some perspective, my startup is three months old, so we got to start everything from scratch, which is nice. And I’m the only DevOps person. I’ve been doing [inaudible – 1:15:16] for 20 years. I’m the only DevOps person, and my goal is to not go super complex, because all the developers will look at me like, what the hell are you doing. So I deployed Spinnaker on my own [inaudible – 1:15:28] my own. We’re up to eight or nine applications, all with their own unique sets of stages and pipelines. And we’re only eight people. I mean, I’m the only DevOps person.
DROdio That’s awesome. All right, thank you, guys. [inaudible – 1:15:46].
Unknown Person Yeah, thank you, guys, for coming. Hopefully, this was a lot of good content for you, and we hope to have more of these meetups. Thanks to all the speakers and sponsors. We’re going to be moving this party to Louis Bar at 55 Stevenson Street, sponsored by Armory, so come and get a drink with us.
DROdio Come, drink.