Kubernetes co-founder on the container revolution and the future of VMs

Interviewed by Anthony James on 10/24/2018

Containers have exploded in popularity in recent years. To help with the deploying, scaling, and managing of containerized applications, Brendan Burns co-founded Kubernetes - a production-grade container orchestration system. In this episode, Brendan shares how he and his co-founders came up with the idea, how they got started, and what containers mean for the future of Virtual Machines.


Interview Snippets

[00:00:00] Anthony: Welcome, everybody. This is an interview with one of the co-founders of Kubernetes -- Brendan Burns. Brendan also currently works at Microsoft on the Kubernetes Azure team.


I'm really curious about your backstory. It looks like you were a student and research associate at the University of Massachusetts. Where did you go from there? What were you doing? How did you end up at Google, where you co-founded Kubernetes?


[00:00:34] Brendan: Sure. Yes. I did my PhD in robotics, actually. I haven't used it since, other than teaching and some hobby robotics. I grew up in Seattle and I really wanted to come back. It had been a while since I'd lived in Seattle, and I had an opportunity to come back and start working at the Seattle office of Google. I did that. I worked on web search. I worked on Twitter search and a bunch of other low-latency indexing in the web search stack. That was about 2008 through 2012.


At Google, you were working on the cloud team. Where did the idea of Kubernetes come from?


[00:01:23] Brendan: I like to say we were down in the bowels of the steamship. I was so far away from users. People use your software, but you're far away from them. I was thinking about how do I get back in touch with people who are actually using the software?


Cloud was growing, so I transitioned over into cloud. I like to say it was a shock to the system because I was used to a particular way of building and deploying applications. I stepped into this world where it's VMs, bash scripts and SSH. It was like a horror show.


At the same time, it was hard to see how it would transition forward, right? It was just such a different world that you didn't necessarily see how you could take someone along. You could sort of see shades of it. I mean, the Netflix example, the immutable architecture stuff where everybody was baking VM images, was gaining some degree of popularity, but baking VM images was slow and hard. Some people were using things like Salt or Chef to try and do similar things, but those are glorified scripts.


At some level, they have a lot of the same failure modes because you're still yanking down packages on the fly as you try and deploy a machine.


Then Docker came along, and I like to say what they really did is they mainstreamed containers, right? All the technology was there previously, basically, but they integrated it together into this perfect storm of experience, meeting people where they were, and all of the important bits when you want someone to adopt your technology, so that rocket ship took off.


We're looking at it in cloud, and what we saw as the real gap was like, "Great, you've packaged your application. Yes, you've deployed it to one machine. How do you actually deploy a complete application? How do you get traffic? How do you load balance traffic? How do you do storage?"


There are all these open questions around how you actually orchestrate an application and deploy the software. How do you deploy software from v1 to v2 to v3 safely and reliably?


You could start to see people hitting these problems and starting to solve orchestration. I don't know how many people really remember that period, but we were paying attention extraordinarily closely in the early 2014 time frame. It seemed like every week there was another pseudo-orchestration layer coming out on GitHub, right? People who were starting to use Docker had started to see that there were these problems, and they were trying to solve them.


That was really the impetus for creating Kubernetes with Craig and Joe: this sense that we really knew where it should go and what it needed to be. I like to say everybody had the puzzle pieces, but we had the puzzle box. Everyone was randomly trying to put the pieces together, figuring out how they would fit, but we had the picture, and that allowed us to create something that I think captured all of the pieces that were necessary.


Fortunately, people saw it, or were inspired by it, and decided to throw their weight behind that effort and that project. People from the likes of Red Hat and CoreOS and others jumped on it pretty early and really helped drive it, because I think connecting it to the real world and to real customers helped produce what is the de facto orchestration standard today.


Kubernetes obviously has gone mainstream. It's one of the biggest architectural paradigm shifts in our industry in the past 10 years. What's it like knowing that? What's it like being the co-founder of an industry game-changing product like Kubernetes?



[00:05:23] Brendan: Yes, that's a trip. It's funny to think it once was just more or less a mix of shell scripts and Java on my laptop, right? That's what it started as, basically, as a prototype. Like, "Here's an experience that we could do."


It was really pretty hacky. I actually went digging. I tried to go find the old source code because I thought that would be fun, but I think I re-imaged that laptop, so it's gone. I found this one laptop in my closet and I was like, "Maybe it's on that one." No, it's not. It's too bad.


I have to say that I really want to give credit to the community here. It's a group effort. I like to say I threw a pebble or a little snowball off the mountain and it turned into an avalanche, right? It's hard to take a ton of credit, because there's been so much work by so many people to get us to where we are. I think we planted the seeds and set the trajectory in the right way. I really wanted to have an open project, a project where people could come and feel like they were empowered and could really contribute. Having been a student of a lot of open source communities over the years, I think that, especially in the infrastructure space, the ones that make room for other people to be successful are the ones that ultimately win.


If you get too opinionated, if you get too dogmatic, or try and hold things too tightly, you just end up pushing people away, but the problems are still there, so they'll just go find another, more open, way of solving the problem. You really have to build this ecosystem, build this community. I think that's part of it. It was pretty trippy.


I went to KubeCon this year and gave a talk. I have to admit, I took some pictures of the empty seats. It's amazing to walk into an auditorium with 3,000 empty seats and realize they're going to be filled with people listening to you talk for an hour. I definitely took some pictures and sent them to the family.


You mentioned that the community was key, and is key, for building tools like Kubernetes. Now I'm really curious, and I know a lot of our listeners are as well: how do you go about fostering the kind of open community that supports the growth of something as amazing as Kubernetes?


[00:08:10] Brendan: Especially early on, I would say you have to treat every single user with the utmost respect and understand that they're coming to you-- even if it's a question, even if it's the thousandth time someone has asked that question, they're putting themselves out there to ask you.


If you alienate them, they're not coming back. If you do that too many times, then the network effect starts working against you, right? I think one of the hallmarks, one of the things that is really special about the community, has been the level of professionalism and respect that we expect of every single person who enters into it.


I think that's a part of it. I think it depends on the project. I think some projects can have a benevolent dictator and be successful. I don't think Kubernetes could have been successful with the benevolent dictator approach. I think having a really clear sense of why your project is useful and interesting is important. I started joking with someone that GitHub was social media for a certain generation, a generation that's younger than me. I think if you're launching a project because it's a social media thing, you're probably doing it wrong. You need to really understand why somebody would want to use the software that you're writing.


Just go into it with the idea that you're trying to build something. I view it like a fire. Early on, you have this kindling and you have to be careful. Every single piece of wood in there is something you want to treat carefully, right?


That's maybe the way to approach it. I don't know. A lot of feedback, too. We really proactively-- well, we got on IRC, and then Slack and Stack Overflow, and proactively tried to go and find people who were asking questions and engage with them in all sorts of different media, getting up on Twitter and all this other stuff, to try and find the people who were using it, regardless of where they were asking questions, and really get engaged with them. I think that helped a lot too.


You mentioned letting the community drive the direction of it, but as you were building this as co-founder, there had to have been something that you, or your team, were very opinionated on, that you really pushed and were insistent upon doing. What was one of those things as it relates to Kubernetes?


[00:10:47] Brendan: I think we had one, what I like to call our leaving-the-floppy-drive-out-of-the-iMac moment. I don't know if you remember, but when the iMac came out, there was this whole hullabaloo: "It doesn't have a floppy drive. What kind of computer doesn't have a floppy drive?" Six months later, no computers had floppy drives.


Similarly, when we came out, one of the really opinionated things we did-- I know I said mostly I try not to get that opinionated-- was to say every container, every pod, is going to have its own IP address.


We said that, and there was just this hue and cry amongst everybody: "How in the world are we going to create a network model where that works? How do you do the networking?" One of the big complexities early on for people installing a Kubernetes cluster was just making the networking work, but we really stuck to our guns. That was a place where we were like, "No, this is really important. We really, really, really have to do this." Then the people at CoreOS came out with Flannel, which made it a lot easier. Six months later, it was the accepted way of doing things. People were like, "Oh, right."


I think that's an example where leadership actually is important and having opinions is really important, because if we hadn't done that, I think we would be in a world of port remapping, and machines or processes running on random ports and stuff like that, and things like DNS wouldn't work. It would be a much worse world, but no one would have really known, because no one knew to say, "No, this is actually something we really need to do." People didn't have some of the experience that we had. That's been interesting. That's an example, I think, where we had an opinion, we really stuck to it, and it proved out.
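The IP-per-pod model he describes means every pod can listen on its natural port and be addressed directly, with no host-port remapping, and cluster DNS can resolve services to those IPs. As a rough illustration in today's Kubernetes API (the names and image here are hypothetical, not from the interview), that looks like:

```yaml
# Illustrative sketch: two replicas, each pod getting its own IP,
# both listening on port 80 with no host-port remapping,
# fronted by a Service that load-balances across them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80   # the pod's own IP owns this port outright
---
apiVersion: v1
kind: Service
metadata:
  name: web          # resolvable in-cluster via DNS as "web"
spec:
  selector:
    app: web
  ports:
  - port: 80
```

Because no two pods ever contend for a host port, the same manifest works whether the pods land on one node or many.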


That networking story is a great example of something that seemed really challenging. There had to have been more challenges in building something like Kubernetes. What were some of those that you encountered?


[00:12:56] Brendan: Early on, we had just homogeneous replication. You want to replicate containers; every single container is identical. If we're going to scale down, we're going to choose a container at random to destroy.


We really told the application developer, "They're all the same. You have to treat them all the same. If you want to treat them specially, you need to do that in your code. Kubernetes is not going to do anything about that. It's your problem." Then we saw people starting to try to deploy things that weren't necessarily written for a world like that.


The classic examples were things like Mongo or Redis, or some of these more stateful workloads. People just struggled and were doing these incredibly hacky scripting gymnastics to take an application that expected each member to be identifiable and bridge it into a world of Kubernetes where each member of a replica set was pretty much interchangeable. It was really ugly. That feedback from people about how bad it was led to the development of stateful sets, where we stepped back from our purism a little bit and said, "Okay, we're actually going to let Kubernetes know that these containers are individuals, that they have individual identity, and we'll give you some guarantees: we'll create the zeroth replica and wait for it to go healthy before we create the first replica. Then we'll create the second replica, and so on."


You look at the difference in complexity of deploying Mongo in a stateful set versus a replica set, and it's night and day. With a stateful set, it's boom, done, five minutes.


With a replica set, it's giant, flaky bash scripts. I think that's a really good example of where we had a strong opinion but were wrong, or at least it was an opinion that wouldn't work well for a large part of the community. We actually listened and did the work to develop a solution that worked a lot better. I think you've got to do both.
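The ordered, identity-preserving behavior he describes maps onto a StatefulSet manifest. As a minimal sketch (the names, image, and sizes are illustrative assumptions, not a production Mongo setup), pods are created in order as mongo-0, mongo-1, mongo-2, each waiting for its predecessor to be healthy, and each keeping a stable hostname and its own storage:

```yaml
# Illustrative sketch: a 3-member stateful Mongo deployment.
# Unlike interchangeable replica-set pods, each pod here has a
# stable, ordered identity (mongo-0, mongo-1, mongo-2) and is
# created only after the previous one is running and ready.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo          # headless Service giving each pod stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
  volumeClaimTemplates:       # each replica keeps its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

The per-pod identity (mongo-0.mongo, mongo-1.mongo, and so on) is exactly what lets a clustered database know who its peers are, which the interchangeable replica-set model could not express.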


Sometimes your ideas prove out and sometimes you were just wrong or it was just too hard, and you need to have some flexibility on either side. I think those help illustrate both sides of that coin.


Looking ahead, I'm curious what you see happening with virtual machines. Do you feel they're going to be completely replaced or disrupted? Where do you feel containers are heading, from a serverless standpoint, from a futurist standpoint? What are your thoughts on that?


[00:15:30] Brendan: I think one thing is for sure: the legacy will always be with us. I was just reading some article about how some nuclear power plant is planning on running a PDP-11 until 2050. The legacy is always going to be with us. I don't think you'll ever be in a world where there aren't virtual machines around, but I do think that most developers want to consume this as containers. I don't think they really care about the operating system or want to know about it.


I think you're starting to see that technological shift happen with the cloud. Outside of the cloud, it doesn't matter perhaps quite as much, because of course the machines have to be there, so somebody has to run them. But in the cloud, with things like Azure Container Instances, which we launched recently, you have serverless containers.


You give us a container, we run it for you, and if it fails, we restart it. You don't ever see the machine it's running on. I think that's an abstraction a lot of people want to consume. I do think that containers are going to become, increasingly, the runnable thing, especially in the cloud.


I hope that Kubernetes fades into the background as we see people build things on top. I view Kubernetes and the Kubernetes APIs as being like POSIX. Every program you ever run on a Linux system runs via POSIX APIs, but you don't really think about them very much. You learn them in an operating systems course, and maybe you do some Pthreads or whatever, but you don't think about them much. I hope that's where we end up with Kubernetes as well: it fades into the background. It's important, it's useful, and it's the backbone of everything you do, but you're thinking about higher-level abstractions.


I don't think the Kubernetes abstractions were built for developers, and you're starting to see that. You're seeing people layer functions-as-a-service on top, or layer package management on top. I think you're going to see more and more people building opinionated experiences on top of Kubernetes that attract particular subsections of the developer community.


I think the great thing with Kubernetes, though, is that you can mix and match. You want to use functions-as-a-service? Great. Install a functions-as-a-service framework on top of your cluster, and you can use that developer pattern.


You want to use package management? Great. You can use Helm to install Cassandra, or Helm to install MySQL. You need raw access to Kubernetes because you're doing advanced stuff? That API is still there, and the good thing is they all interoperate and all run on the same machines. I think it provides a nice abstraction layer, a foundation that you can build up from. As that happens, I hope Kubernetes fades into the background.


[00:18:40] Anthony: Brendan, this was absolutely amazing. It was a pleasure speaking with you, and I'd love to have you on again in the future. I really appreciate you taking the time to meet with us today.


[00:18:49] Brendan: Absolutely, would love to do it. Thank you so much for taking the time to chat.


[00:18:51] Anthony: Everybody, don't forget to subscribe for more amazing interviews from Scale Your Code, and also check out our other podcasts that are part of Linux Academy, such as Linux Action News, Linux Unplugged, Coder Radio, TechSNAP, and User Error. We look forward to seeing you in the future; subscribe to follow along.


