Working on the Red Hat Ansible Core team and the power of the open source community

Interviewed by Anthony James on 08/01/2018

Adam got a Red Hat tattoo on his forearm even before working there because of his passion for the open source community. This episode explores why he fell in love with the open source community, as well as the contributions that he and his team have made to the Fedora project, Ansible, and OpenShift. We discuss why and how these tools have changed the way people work, and how they can be used with container technologies such as Docker, Kubernetes, and more.


Interview Snippets

Welcome everybody. My name is Anthony James, Founder and CEO of Linux Academy and co-host of Scale Your Code. Today, I'm joined in the office by Adam Miller, who has been running the Ansible Fort Worth meet-up, and that's how we actually crossed paths.


Welcome, Adam. Tell us a little bit about yourself.


[00:00:17] Adam Miller: Well, thanks for having me. My name is Adam Miller. I work at Red Hat; I've been there a little over six years. I'm currently on the Ansible core engineering team working on the core execution runtime, as well as trying to do everything I can to foster the community, just because I'm both a developer and a giant fan of the technology, which is what led me to work there in the first place. It's also what led me to seek out a location to host the meet-up, and you all were gracious enough to do that here. Beyond that, I also work on various Ansible integrations, one of them being outreach to technologies such as OpenShift and containerized things, and other spaces where I can hopefully be helpful.


When did you join Fedora as a community contributor? Was it in 2007?


[00:01:23] Adam: Yes, I believe so. I probably am off a little bit on my date, but I'm almost positive it was sometime in late 2007 when I first started contributing. I began as a Red Hat Linux user in 2000, very early 2000; it was like January, February. I remember because I asked for a copy of Red Hat Linux for Christmas, which is a whole different story.


[00:01:46] Anthony: Did you get it?


[00:01:47] Adam: I did.


[00:01:48] Anthony: Nice.


[00:01:49] Adam: Yes, I got Red Hat 7 deluxe workstation. I literally wrote on the paper, "Go to Best Buy. Ask somebody where this is; if they don't know, tell them it's by Windows." Because at the time, you could go into Best Buy and there was Windows, there was Red Hat, there was SUSE. Nobody does box sets anymore except for Windows, because they still target the consumer market, but anyways, yes.


I started as a user, and then I just became very enthusiastic about the technology: what we could do with it, the power of open source, open source as a mantra, as a standing ovation to how software could or should be developed in the community space, such that the collective whole of our knowledge will always be exponentially larger than any one of us can possibly contain or know.


We only have so much brain capacity. It was really cool to get involved and meet people in the community, and all that stuff. As soon as I felt like I knew enough about computing, I started contributing back, and that was around 2007.


[00:02:49] Anthony: It's a true passion. It's a passion for the open-source community; it's a passion for the Linux community, and that's just awesome. In fact, let's talk a little bit about that passion by showing a picture. If you don't mind, just hold up your arm real quick, [Adam raises his arm to show a Red Hat tattoo] and let's talk about this Red Hat logo that you have. Jim Whitehurst actually refers to it in The Open Organization book; what page is that on?


[Photo: Adam Miller's Red Hat tattoo]


[00:03:23] Adam: I'm on page 41. I committed it to memory because I was just really excited about it. I communicated to my whole family; I called my grandma, the whole spiel.


At heart, I am a computer nerd. I will always be a computer nerd; that's why I'm here. Aside from my family, that's why I wake up in the morning. The open source movement has become a huge part of my life. I started in 2000 with Red Hat Linux, and then during the Red Hat Linux/Fedora split, I took a vacation, a hiatus of sorts, to other Linux distributions, because I didn't know enough about Linux to really debug the fact that Fedora Core 1 wouldn't boot on any computer I owned.


However, a few releases later, everything had stabilized. It was a whole new bring-up of a new project focused on an operating system, and it was a large undertaking. Once it stabilized, I made my way back, and I've been with Red Hat ever since; at least the Red Hat family of technologies ever since. What kept me with the Red Hat family of technologies, from a community perspective, is the community outreach, the embrace of the community, the upstream-first development model, the fact that 100% of the products' source code is open source, and that all of the projects the products are based on, we support and contribute back to, either through code or monetarily or both.


That ethos actually spoke to me, and guided and built my career without me even realizing it at the time it was happening. The tattoo marked 10 years of being a part of the Red Hat open-source community through Red Hat's various technologies, that kind of thing. I always have to tell that story, because a lot of people were like, "Oh, you're dedicated to the company, to the brand." There was an article in The Wall Street Journal about tattoos and companies, and that spawned another article about how companies are evil, and all of that stuff, and, no. You have to understand why I got it, why it matters to me, and how they shaped my life.


[00:05:30] Anthony: It's the mission.


[00:05:31] Adam: It's the mission—


[00:05:32] Anthony: It's the culture, it's—


[00:05:32] Adam: Yeah, it's the whole thing, in the sense that knowledge should be shared, not hoarded, and if we could ever get past the whole concept of paywalls for doctoral research, we would be a little bit better at it. The scientific community is very big on sharing that knowledge, whereas in software, historically, it's been, "No. This is my intellectual property."


As a side effect, Red Hat even embraces open intellectual property. I can't remember the appropriate name of the project, but there's actually a project where we've entered into a business venture with other companies to have these open patents that anyone can use, and if somebody tries to sue you over one, a collective of companies will defend one another; that kind of thing.


[00:06:18] Anthony: Kind of share the resources to help each other out.


[00:06:19] Adam: Absolutely, and that whole concept applies not only from the code perspective, because from the code perspective we've been doing that forever; that's not new. It's actually doing that from a business perspective and from a legal standpoint.


That ethos runs through the company, all the way through. I got the tattoo, and then about nine months after that, I finished up my graduate degree. Not long after I finished grad school, I had a very positive opportunity, something I'll be forever thankful for, to join Red Hat, and I've been there ever since. It's one of those things where, as an outsider, as a community member, you look at it and think, "Okay, I have this idea, this ideal in my head of what it is and how amazing it could be." There's always that fear of going in, like the Wizard of Oz, of finding the person behind the curtain; but it's been amazing. I've loved it.


Has joining Red Hat been everything you thought it would be?


[00:07:20] Adam: Yes. My favorite story to tell is our CEO has a computer science degree from Rice University and runs Fedora Linux on his laptop. We don't have a divide between the business and the technologists; we don't have a divide between non-developers and developers in terms of the ethos. Everybody lives this as a component of who we are. Ever since that, it has just been a reinforcement of the tattoo, and people are just like, "What if you get bought?" And I'm like, "Yeah, well, it'll be an homage to a better time."


Business schools will tell you to look after the shareholders as your main priority. Instead, what's really awesome about Red Hat is that Jim Whitehurst [the CEO] would say it's about the culture. He's not trying to change the Red Hat culture for shareholders. He recognizes the values of the open-source community, the values of what you're trying to do, what they're trying to do, what the community is trying to do, and really embraces that from a technological standpoint, versus dollars and cents. It's really great to see that, and I love that business model.


Real quick, before we go into fun technology stuff: did that [referencing the tattoo] help you get the job at Red Hat? You had it before, right?


[00:09:25] Adam: I had it before. I'm a remote employee; about half of our engineering department, company-wide, is remote, geographically dispersed to wherever they feel like working from. I was interviewed entirely over the phone. Nobody knew I had it.


Nobody knew I had it except for one person. I don't know if they disclosed it to others, but one person knew, because I had known them from the Fedora community for many years. However, nobody throughout the company, up the management chain, knew about it until after I came in for new-hire orientation, and that was a lot of fun, because I was the new person who had the tattoo, and that had never been done before. That was really cool.


[00:10:07] Anthony: Did you keep saying, "No, no, I had it before"?


[00:10:09] Adam: Yes.


[00:10:09] Anthony: "I didn’t do it because I got the job."


[00:10:11] Adam: Yeah, I was the third one to have it, that I know of. I don't know who was first between the other two, but there were three of us originally, and the three of us were in that photo that ran in The Wall Street Journal; there was a picture of us at Summit, and everyone was like, "Oh, we finally got the three together." But by that point in time, there were seven of us. As of today, I believe there are 15 or 16 of us that have the tattoo.


[00:10:33] Anthony: Oh, wow.


[00:10:34] Adam: It's just people who have been driven by that passion for some number of years, and it just kind of speaks to them.


[00:10:41] Anthony: It’s what it represents, right?


[00:10:43] Adam: Yes. For me, it’s similar to how people have symbols that identify things that are important to them in life, and that’s what it is to me.


[00:10:54] Anthony: It represents that community, it represents how you got involved with it and what it means to you.


[00:10:58] Adam: Absolutely.


Being on the Ansible core team, and knowing that OpenShift is a huge initiative at Red Hat for container deployment and orchestration and all that other great stuff, we're going to have some great conversations here. But first off, when you think about Ansible, what drew you to it? What's your role on the Ansible core team? What are some big problems that excite you that you've been able to work on or solve with Ansible, or in tandem with Ansible, on the core team?


[00:11:39] Adam: Oh man, that’s a big question to ask. I actually, I’m going to start answering it from before I was on the Ansible core team because-


[00:11:48] Anthony: But you were at Red Hat for this?


[00:11:49] Adam: Yes.


[00:11:50] Anthony: Okay, what team were you on there then?


[00:11:51] Adam: When Ansible came into existence, I was still on the OpenShift online team.


[00:11:55] Adam: I was part of the operations team that ran OpenShift online.


[00:11:58] Anthony: When did OpenShift first become available? What was the first release? What year was that?


[00:12:04] Adam: It was before I worked with the company. OpenShift existed before I worked there; openshift.com.


[00:12:09] Adam: The previous platform it was based on actually used cgroups, which is a technology that current generation containers use. Inside of the Cloud Native Computing Foundation, there is the OCI, the Open Container Initiative project. The OCI has a definition of what a container is, and of the container formats for the images. They're currently working on a registry standard for distribution of images, that kind of thing.
Cgroups is a core component of what creates a runtime for containers.


So runC, CoreOS's rkt, Docker; and on the back end of Docker, they actually use runC and containerd. All of these use cgroups to actually do resource constraint. OpenShift v1 was way back when; this was, I think, like seven years ago when it first launched. When I joined, it had been around for a little while, and then about a year into me being there, we actually launched the OpenShift Enterprise version.


[00:13:12] Anthony: Which is still available, right?


[00:13:13] Adam: It is. Yes, absolutely. It's now called OpenShift Container Platform, to designate the differentiation between the previous technology and the current generation technology. Obviously, the major version jump from 2.x to 3.x was to signify the architectural change. There's a lot of messaging around that. We can get into some of the history of that if you want to.


There are so many tangents and non-tangents, and it's like: rabbit hole, rabbit hole, and the next thing you know, you've chewed up a day. We worked with a lot of the same technologies that fed into the creation of the current generation container technologies. But we had this internal deployment mechanism, and the internal deployment mechanism was custom-written.


We, ironically enough, also chose YAML as our input format to say these are the steps that we want to take place: do the deploy of this, take these out of the load balancer, do the deploy of this, do the database schema migration; all that stuff, step by step. When we needed to do ad hoc tasks, we actually used PSSH.


Then Ansible came into existence, and once it reached a certain level of maturity, we saw the benefit of utilizing it for that piece of work. At the time, we were still using another configuration management utility for the actual configuration management, and that worked out really well. We had all this time invested in it; we already had all of our configuration there. There was no need to rip and replace. Ansible was able to augment that for us and just replace the custom thing that we'd created. The reason we wanted to do that is because, number one, what we had wasn't the most reliable thing. Every now and then we pushed a bug and said, "Crap."
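To make the shape of that concrete, here is a minimal sketch of what such a rolling deployment looks like as an Ansible playbook, mirroring the steps described above: pull a node from the load balancer, deploy, migrate, put it back. The host names, backend name, package, and migration command are all hypothetical, and the haproxy module invocation assumes a reachable stats socket on the balancer.

```yaml
---
# Hypothetical rolling-deploy playbook; every name below is illustrative.
- hosts: app_servers
  serial: 1                              # one node at a time
  tasks:
    - name: Take this node out of the HAProxy backend
      community.general.haproxy:
        state: disabled
        host: "{{ inventory_hostname }}"
        backend: app
      delegate_to: lb01.example.com      # run against the load balancer

    - name: Deploy the new application build
      ansible.builtin.package:
        name: ourapp
        state: latest

    - name: Run the database schema migration once per deploy
      ansible.builtin.command: /usr/local/bin/ourapp-migrate
      run_once: true

    - name: Put the node back into rotation
      community.general.haproxy:
        state: enabled
        host: "{{ inventory_hostname }}"
        backend: app
      delegate_to: lb01.example.com
```

A playbook like this replaces the custom runner, while the existing configuration management tool keeps owning the state it already manages. Reliability was only part of it, though.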


But also, we had a pile of parts that was developed by us and that we were left with maintaining. In line with the open source mantra and the open source way, there was this open source community; it was vibrant, and it was growing. It had this common toolset that did effectively exactly what we wanted, and even went way beyond the capacity or capability we needed. We joined in, and then, because of my drive and passion for the technology in Ansible, I started working on it a little bit in my free time. I joined the community. It became something I'd write code on periodically. It was so interesting to me because I have a system administration background. I went to school for computer science, and I was like, "I'm going to be a programmer for life," and then I fell into a system administration job running Linux, and I loved it. It was a lot of fun. I really enjoyed the work.


I loved infrastructure, and I was fortunate enough to work at a company where we racked and stacked our own servers, we ran network cables, we did everything from PDU management for power all the way up to system administration. You have root on everything. You're on call; if a server dies in the middle of the night, you're getting up, that kind of thing. That was an amazing learning experience.
We had a lot of challenges at the time that would have easily been solved by a technology like Ansible. When Ansible showed up, I was immediately excited about it. I was immediately drawn to it, and we were able to solve the problem of, "Okay, this thing that we have that we've custom-written has a capacity problem. Once we hit a certain node count, it starts having trouble."


Scaling was an issue?


[00:16:47] Adam: Scaling was an issue for us. Some of that might have been— we were using a message queue on the back end to do all of the back-end coordination. Some of that could have been our inexperience with the message queuing technology that we had chosen. However, I pretty much rule that out, because we actually called in some of the developers who wrote that message queue, like, "Hey, are we doing this right?" and they were like, "Yeah, that looks good." We were pretty sure it wasn't the message queue; it was something else. And it was just one of those things where it became a maintenance problem, where we had all these other operational workloads that we were busy with.


We had all this other work that we were trying to do to bring out services and maintain a presence for our customers and all those things, but there was always a problem. Deployments would take longer because we were fighting fires as we went. We slowly phased out our custom thing and brought in Ansible. It was great because we were able to augment the configuration management tool that we already had. Everything in /etc, the /etc directory, was managed by the configuration management tool. We didn't want to mess with that.


Maintaining your configuration state so if you change something somewhere you want it to change everywhere.


[00:17:53] Adam: Absolutely. The eventual consistency; the agent checks in. We also actually would do pre-builds of the configuration so that we didn't have a thundering herd problem on the management host. When we were doing a deploy, we would deploy to the entire infrastructure at once. With Ansible, we would just scale up our forks, and we would fire, and we would knock out deployments. We brought our deployment time down by a sizable amount, and we also had a more maintainable, more scalable solution, and with that, I was enamored and sold on Ansible. Then I went from OpenShift to Fedora, because I had been a Fedora contributor for a very long time; I was a Fedora contributor in college. To me, it was the summit you strive to reach. I went to work full time on the Fedora team because they are an amazing team. They do great work, and it was a place I wanted to be.


It’s a small team, isn’t it?


[00:18:50] Adam: It's a very small team. I don't think this is commonly known information.


The Fedora team is actually about 10 to 12 people, depending on how you count; the kernel team is technically part of the Fedora team, but they have a very focused job, and the kernel is a sizable mammoth of technology in itself to focus on. But they are integrated into the Fedora workflow. They keep up with current trends. They focus on initiatives that Fedora's chasing after, for example, power management on laptops, that kind of thing.


For mobile devices, they're like, okay, well, let's spend some cycles focusing on those kinds of features upstream, that kind of stuff. They're definitely tied in. But anyways: a relatively small team, a lot of work to do. I think, unbeknownst to the team themselves, they were mentors to me when I was in college, when I was an early contributor.


The experience I got working on open source, the experience I got in the infrastructure group, as well as in packaging and developing software, and, for the first time ever, really learning how to port software between hardware architectures;
that was just mind-boggling to me, because before that there was only one architecture, the one that I owned, and I'd never known anything else. In later years in college, you get introduced to comp-org, and they're like, "Oh, yes. There's this whole world of stuff out there." But by that point in time, I hadn't run into it. I learned probably almost as much, if not more, from the Fedora team, who became mentors to me, as I did at college. It's two sides of the same coin: the theoretical and the practical. It was amazing.


You immersed yourself in it. You followed them, and you looked at what they did, and that's one of the best ways that you learned.


[00:20:36] Adam: Absolutely. I wanted to be there. Well, I joined, and what's very interesting about the Fedora team is that two of its members actually contributed to the creation of Ansible. Some people might not know this, but Ansible is an evolutionary approach to a similar problem that a tool called Func, F-U-N-C, aimed to solve. Func was developed inside of Fedora.


Ansible solved a lot of Func's design problems. There were a couple of people within the Fedora community who actually contributed to Ansible's creation with Michael DeHaan, and because they had input into a lot of what it aimed to solve, they immediately started to pivot to it from their previous toolchain.


The Fedora infrastructure team was rapidly doing that, and then I joined in, and they had already done all of it. Now, everything's powered by Ansible. We went into this new workflow of trying to solve multi-phase infrastructure rollouts as well as a release engineering business workflow. I know business workflow is the wrong term, but basically a workflow for a stepwise procedure that's not necessarily thought of as infrastructure, because release engineering normally happens beforehand, and somebody throws some software over the fence, like, "Ops guys, go do that thing."


So you do that, and we had this really interesting inflection point where it's, "Okay, we all use the same toolset. We can very easily look at problems cross-team, and now we're one big team."


Because the Fedora release engineering team at the time was actually from a different group — for seven years, one guy just carrying it across the finish line — but over time, community members joined in. The community members outweighed the people who actually got paid full time to do all this. Now we have a similar toolchain.


I was advocating for this "Ansible everything" concept: every time you're going to write a custom script to do something, write an Ansible playbook, an Ansible role, or an Ansible module instead, and always start with the lowest barrier of entry first. Make a playbook; if that doesn't quite work, make a role; if that doesn't quite work and you actually need something programmatic, okay, write a module or a plug-in.


And then, by doing that, if you run it on one system, now we can run it on many. And if you wrote a module for some reason, okay, well, we can tie that into other playbooks; we can reuse that code, just as if you had written a library. Ansible became our API for that kind of stuff.
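As a hedged sketch of that lowest-barrier-first progression (all names hypothetical): start with a plain playbook, and only promote it into a role once it grows; a module enters the picture only when you genuinely need programmatic logic.

```yaml
---
# Step 1: the lowest barrier of entry, a plain playbook.
- hosts: builders
  tasks:
    - name: Ensure the build user exists
      ansible.builtin.user:
        name: builder
        state: present

# Step 2: once the task list grows, promote it into a reusable role
# (the tasks move to roles/build_host/tasks/main.yml).
- hosts: builders
  roles:
    - build_host
```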


You had this code that allowed you to scale and focus on new code rather than writing the same code over again.


[00:23:31] Adam: Absolutely. That was very powerful for our workflows, because in Fedora we have the unique opportunity of a unified message bus across the entire infrastructure. Anything happens in a web app, a message goes out; anything happens in the event log, a message goes out; anytime a build happens in our build system that feeds into the repositories that people install software from, a message goes out.


And it has all kinds of metadata in it. We can take action on those. We can actually say, "Okay, from a release engineering standpoint, at the point in time the CI job finishes, because this compose build occurred based on some criteria over here, now we can take action on that and start our series of tasks."


Whereas before, somebody would show up in the morning, check the logs, see what was good, and then start typing commands. Then over time, it's like, "Okay, well, let's make shell scripts." It's just this pile of shell scripts that's now a decade old, and anyone who's ever messed with decade-old shell scripts knows it's painful.


[00:24:37] Anthony: Super easy to use. That's what you meant to say.


[00:24:38] Adam: Yes, perfect. Then the question comes up: "Okay, well, is this portable, and by portable I mean, is this POSIX compliant?" And then you get into this whole world of POSIX compliance, and that's a problem, and it's like, "Why do we care about portability? We're only building on one distro." Well, the problem is that the builders are various releases of the distro.


You don't really know which shell features you're going to have available, because Bash 4 came out with associative arrays and that kind of stuff. During that transition period, if you tried to take advantage of something new, it would just be a whole thing. You still had to think about things like that, and over the course of a decade, you cross some of those bridges.


Anyways. We're now in the grand new world of Ansible, and we're able to use all these things where, really, what we care about is the version of Python installed. For current generation Linux distributions, Ansible covers all the versions of Python shipped by currently supported, non-end-of-life enterprise Linux distributions, as well as everything all the way up through 3.7, the latest Python.


It became this point of, "Okay, well, now we worry less about those compatibilities because Ansible abstracts that for us, and we have that power." It's like, "Okay, all right, what is our workflow? We need to provision a virtual machine; we need to give it networking. We need to then take action on the operating system inside that virtual machine, and then we need to do some kind of validation to make sure that the service we just deployed is up and functional. Now, maybe we need to put it into a load balancer behind the proxies, making an edit to HAProxy or something, or we need to add it to a cluster pool, and once it's in the cluster pool, we need to do some tasks to enable it, or do some kind of check, a health check."
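A hedged sketch of that end-to-end workflow follows. The cloud module varies by platform; this uses AWS's ec2_instance purely for illustration, and every name, the AMI, the health URL, and the HAProxy backend are hypothetical.

```yaml
---
# Provision, configure, validate, then put the node behind the proxy.
- hosts: localhost
  tasks:
    - name: Provision a virtual machine (cloud module varies by platform)
      amazon.aws.ec2_instance:
        name: app-new
        image_id: ami-12345678          # placeholder image
        instance_type: t3.small
        state: running
      register: vm

    - name: Add the new VM to the in-memory inventory
      ansible.builtin.add_host:
        name: "{{ vm.instances[0].public_ip_address }}"
        groups: new_nodes

- hosts: new_nodes
  tasks:
    - name: Configure the service inside the guest
      ansible.builtin.package:
        name: ourapp
        state: present

    - name: Validate the service answers before exposing it
      ansible.builtin.uri:
        url: http://localhost:8080/health
        status_code: 200
      register: health
      until: health.status == 200
      retries: 5
      delay: 10

    - name: Add the node to the load balancer pool
      community.general.haproxy:
        state: enabled
        host: "{{ inventory_hostname }}"
        backend: app
      delegate_to: lb01.example.com
```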


Being able to reach out and touch these completely disparate systems in different ways and command-and-control these different things, for our use case, transcended the concept of configuration management. And during the transition period, when they were transitioning over, they augmented what they had, because absolutely, why throw that away?


The Ansible core team has this joke or unofficial motto; it's not really a joke, it's a serious thing: we play well with others.


A lot of people in the marketplace like to compare us to certain other technologies, and for most of them, we think it's an odd comparison; it's like, "We work with that." And more often than not, it's, "We have a module. We actually have an Ansible module in Ansible core that we ship and distribute to our users so that you can command-and-control that thing, so that one of the tasks in your automation series triggers that other thing."


That's a tool in the toolbox, right? You get to use the best tool for the job.


[00:27:31] Adam: Absolutely. Yes, and for the concept of automation, if you want to talk about market trends, automation is becoming increasingly important because of things like immutable infrastructure. We're no longer mutating state on endpoint systems, and that leads into containers...


Let's talk about Containers


[00:27:47] Anthony: It's easier to throw it away and start over than it is to try to maintain that state.


[00:27:52] Anthony: Less resources too.


[00:27:54] Adam: Yes. A little more storage, but we have copy-on-write file systems. We have the ability to clean that up, but yes. The idea is that with containers, your application doesn't actually get configured in place; it gets configured at build time, because, for the use case of Docker, you give it a Dockerfile. Every time you make a modification, you build a new one; you have a new set of layers, or a whole new image, depending on how you're doing your layering and caching and that stuff.
And you can push that, and then you pull that new image, distribute and deploy it; done. If you need to make another change, push and deploy; if something goes wrong, roll back.
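In Ansible terms, and purely as a hedged sketch, that build-time flow can itself be automated, here using the community docker_image module; the registry, image name, tag, and path are all hypothetical.

```yaml
---
# Bake the change into a new image and push it; rolling back is just
# deploying the previous known-good tag, since nothing on the running
# hosts was mutated in place.
- hosts: localhost
  tasks:
    - name: Build a new image from the Dockerfile and push it
      community.docker.docker_image:
        name: registry.example.com/ourapp
        tag: v42
        source: build
        build:
          path: ./ourapp                # directory containing the Dockerfile
        push: true
```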


We're starting to see that happen more in the operating system space as well. CoreOS had CoreOS, which was later called Container Linux, and Red Hat had Red Hat Atomic Host, Project Atomic. Now, with CoreOS joining the Red Hat family, those are merged or in the process of merging.


Is it going to be Atomic or Core? Do you know?


[00:28:48] Adam: It got announced at Red Hat Summit. It is Red Hat CoreOS. Now, we have the ability to do that, and we're not mutating state on the system anymore. Maybe we get the state from a cluster environment like etcd, or maybe we're deploying new layered images, because at the core of CoreOS, you actually have an image-based update system.


Again, if you want to do that kind of thing, you can do your validation of the sum of the parts. I don't know if a lot of people realize this, but with software packages — RPMs, dpkgs — when you use those, there's a transaction that happens in place. There's actually a database on the back end being written to, and I always like to go through the fun exercise of: if you're updating glibc, or if you're in the middle of making an initrd for your kernel, and you kick the power out from underneath the machine, does it boot?


It's the Schrödinger's cat of infrastructure. We don't know. We don't know, because it depends on what point in the transaction it was at and which files made it in.


[00:29:55] Anthony: I'm in love with that analogy, by the way. That's the greatest example ever.


[00:30:00] Adam: Let's talk about that. That's the idea, and that's the promise, of this whole concept of immutable infrastructure. Instead of maintaining eventual consistency on the node, what do we need? Well, now we need an automation tool to automate the deployment of those artifacts.


Which is what you were referring to earlier as a process. Automating the process flow, which is different from configuration management.


[00:30:22] Adam: Absolutely. To me, we're addressing different problems. In the Venn diagram of capabilities, there is absolutely overlap, which is why I think you can replace a configuration management tool with Ansible, but you don't have to. That's powerful. That leads into the container tool methods and Kubernetes and Docker and OpenShift, all these things.


OpenShift specifically, because that's what I'm most experienced with and the technology I tend to evangelize when I talk about the container space, because I think a lot of the features it brings to the table on top of Kubernetes are very advantageous. The OpenShift installer is built on Ansible. It's a giant set of roles and plugins and modules and playbooks that you deploy OpenShift with, and it handles all sorts of infrastructure: on-premise, virtual machines, AWS, Azure, Google Compute; you have a lot of options. I'm sure there are other community-contributed ones; I'm thinking of the ones I know of that are certified.
That's really cool in the sense that it gives you a jump point, because if you're already familiar with that tooling, then when you're doing day-two operations, you're doing cluster-level lifecycle management outside of the cluster itself.


For your cluster administrators who are administering inside the cluster, it's a huge system. It's extremely powerful; it does a lot of great things and allows developers to push builds in. Those builds automatically turn into containers that go through the CI pipeline and automatically deploy. Load balancers do blue-green deployments of the containers inside. That's amazing; that's super cool.


For system administrators, and those of us who have sysadmin backgrounds, we're like, "Okay, wait a minute, how do I manage that thing? When an upgrade comes out, what do I do?" A very popular workflow for that, because a lot of people run this inside of a virtual environment or on infrastructure as a service, is to create a bunch of new nodes: "Okay, I'll bring a bunch of nodes up and I'll add them into the cluster. Because I have a three-node structure for the master nodes, I'll do those piece by piece, adding them back into the cluster; then I'll add my nodes into the cluster and evacuate all of my pods," the pod being the unit of compute that runs containers inside of Kubernetes. Also, fun fact: OpenShift is actually CNCF, Cloud Native Computing Foundation, Certified Kubernetes, so I'll sometimes use the terms interchangeably.


It will then evacuate those nodes and bring the workloads over, and then you can destroy the old nodes and take them out of the rotation. Verbally, that's relatively straightforward; it makes sense. It's a logical progression, and it's not a super complex task, except for the fact that you have to automate and orchestrate a lot of different pieces.
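As a hedged sketch of that evacuate-and-replace step, assuming the kubernetes.core collection is available (a newer convenience; at the time this was commonly done by shelling out to oc adm drain), with node names purely illustrative:

```yaml
---
# Drain the old node, then remove it from the cluster.
- hosts: localhost
  tasks:
    - name: Cordon the old node and evacuate its pods
      kubernetes.core.k8s_drain:
        name: old-node-01
        state: drain
        delete_options:
          ignore_daemonsets: true

    - name: Take the drained node out of the cluster
      kubernetes.core.k8s:
        state: absent
        api_version: v1
        kind: Node
        name: old-node-01
```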


You have to touch different technologies, and you have to say, "Okay, what tool will I use to bring up the virtual machines, and then add them to my VPC, or add storage to them, or provision resources and environments to test the upgrade, or do something with the load balancers to bring things in and out?" That last one probably isn't a big concern because of the multi-master setup, but it's a decently good practice, even with that failover, to take a node out of rotation so you're not having cache misses on your proxy.


Have the load balancer bring it up; do something on the operating system, even if it's just simply executing a command. Or, in Ansible, we actually have Kubernetes and OpenShift native modules that use the back-end API, so we don't have to run shell commands, because, just like decade-old shell scripts, they can get sticky over time. That's a liability.
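For instance, instead of shelling out to a CLI, a task can drive the cluster API through the k8s module; a minimal, hypothetical sketch (the Deployment manifest and all names are illustrative):

```yaml
---
# Apply a definition through the Kubernetes/OpenShift API, idempotently.
- hosts: localhost
  tasks:
    - name: Ensure the app's Deployment exists in the cluster
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: ourapp
            namespace: production
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: ourapp
            template:
              metadata:
                labels:
                  app: ourapp
              spec:
                containers:
                  - name: ourapp
                    image: registry.example.com/ourapp:v42
```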


Also, something with Ansible is that the majority of the modules are what we call idempotent, which means if you run it once, it will make the change; if you run it again, it won't make a change.


It's hard to do that with a shell script


[00:34:53] Adam: It's hard to do in a shell script. We have some patterns that we document for how to do it.
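Two of the commonly documented patterns look like this; a minimal sketch, with all paths and commands hypothetical: a `creates` guard skips a command whose artifact already exists, and `changed_when` makes a command report change honestly.

```yaml
---
- hosts: all
  tasks:
    - name: Native modules are idempotent by design
      ansible.builtin.file:
        path: /etc/ourapp
        state: directory

    - name: A command guarded by `creates` only runs the first time
      ansible.builtin.command: /usr/local/bin/ourapp-init
      args:
        creates: /etc/ourapp/.initialized

    - name: Report "changed" only when the command really changed something
      ansible.builtin.command: /usr/local/bin/ourapp-sync
      register: sync
      changed_when: "'updated' in sync.stdout"
```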


Anyway, we do that, and you have this tool with which you can do blue-green deployments with complete automation, or you can do day-to-day operations.


The idea — and this goes back to the idea of Ansible everything for the Fedora infrastructure — is that if you utilize the power of the tool as infrastructure glue, anything that you make for one machine can be done for all of them.


Talking about scale, scalability: we have the option of doing multiple forks, and if you use Ansible Tower, or AWX, the open source upstream of Tower, you actually have a clustered back end and you can schedule out groups of runners that will run and do the deployment and then aggregate up into the web console. We have a lot of options for scale. You're basically limited by how much you can throw behind it; it's kind of a design decision. How much capacity do you need? When you talk about scale, it's like, well, okay, "define scale": do you need to fire on tens of thousands of nodes at the exact same time, or do you just need all those tens of thousands of nodes to be on the same page within the window of an hour or something? Because then you talk about what your requirements are for handling those different use cases.
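Those two flavors of scale map to two different knobs; as a hedged sketch: `serial` batches a rollout so the whole fleet converges within a window, while forks (set in ansible.cfg or with -f on the command line) bounds how many hosts are touched at the exact same time. The group and package names are hypothetical.

```yaml
---
# Roll through the fleet in 10% batches; within each batch, parallelism
# is bounded by the configured number of forks.
- hosts: fleet
  serial: "10%"
  tasks:
    - name: Apply the update to this batch
      ansible.builtin.package:
        name: ourapp
        state: latest
```

Running it with something like ansible-playbook -f 500 update.yml raises the per-batch parallelism.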


Different use cases for scalability, what's important there?


[00:37:00] Adam: Within the parameters, within the set of variables, that you exist in. You have to define those things. That's the goal. That's day zero, and then for day one and day two, you have the integrations to be able to talk to OpenShift from Ansible.


Beyond that, even talking about other technologies and that kind of thing: the Open Service Broker API is another thing out of the CNCF; it's a standard API against which people can define and write services, which can talk to Amazon. AWS was actually showing a demo in their booth at Summit last week of how you can launch services inside of AWS from the OpenShift service catalog, so you can log in and click a button. If you're a developer, you can click a button or run a command, and then you can actually have a good development environment augmented by AWS.


There are others as well; that one is just fresh in my mind since I saw the demo last week. Another one is the Ansible Service Broker. You can actually take the knowledge that your operational team has developed deploying these things and working with these things in Ansible. You have to create a set of playbooks: provision and deprovision, bind and unbind; I can't remember the fifth one. You bundle in roles or modules or anything that you rely on, and that gets pushed up into OpenShift in a way that can be advertised to your users or your developers.


You can actually provision services and advertise them to the container environment through Ansible. Again, the goal is to not lose the time and effort you've invested in learning a technology and having it in your workflow, knowing standard practices and those kinds of things, and to be able to take advantage of that in other outlets. That's an integration between OpenShift and Ansible: bringing in that platform for your containers, but then being able to augment it with your operational automation. From there, you've got Red Hat Insights. That's a newer thing that a lot of people aren't super familiar with yet; hopefully, we're working on that.


Insights is actually a predictive analytics stack; it's basically a Software as a Service component. You install a couple of agents in your environment. This can be your standard set of hosts; this can be OpenShift, OpenStack; there are a few others. Basically, let's say a CVE comes out, and we have this knowledge base of all of the stuff that's going on in the products and everything. We notice that 10% of your nodes, for some reason, haven't gotten updated for that CVE. We can alert and say, hey, these machines aren't updated. Here's a knowledge base article on how to remediate it, and here's the Ansible playbook; and you can actually tie that into your Ansible Tower environment and just push a button to remediate.


Beyond that, to take it a step further, there's now an OpenShift container-native iteration of that which will actually do scans on your container images: all the images you have inside your environment that are being iterated on, that your developers are working on, that kind of stuff. Say, for some reason, somebody is not updating the base image, the base layer that they're building on top of; with OpenShift image streams, there really shouldn't be a reason for that.


But if for some reason they were pulling some component from some external source, we have the ability to scan those and, again, give that alert, give that introspection. Introspection is the word I was looking for. Give that introspection into the environment and allow the analysis to say, "Hey, you have this potential issue that you should probably look into." And that's not even just CVEs.


Let's say there's a bug in the kernel where, for some reason, on this specific CPU architecture, after 34 days of uptime, it kernel panics. We can give you that information ahead of time and say, "Hey, this is your runway; maybe update and reboot those within that runway." There are all kinds of use cases. It's not just for the CVEs that come out; it's also for any kind of bug that gets fixed in software. There's the ability to do the analysis and give you a predictive outlook on your entire environment, from the container component to the OpenShift cluster to the operating system.


And that's called Red Hat Insights.


[00:41:40] Adam: It's called Red Hat Insights. The reason I call it an Ansible integration is because, on the back end, it's actually all implemented in Ansible; it's powered by Ansible on the back end. And then you also have the integration where it offers up playbooks to remediate, that kind of stuff.


There are just a lot of cohesive technology integration points where Ansible and OpenShift come together, and that's so exciting to me because I'm a fan of both of those technologies. Sometimes, because I was previously on the OpenShift team, I get asked, "Why did you leave?" Well, because, gosh, at the time it had been something like 12 years that I'd wanted to work on the Fedora team, and when the opportunity came up, I couldn't say no. I still love the team.


It's like a dream come true.


[00:42:33] Adam: Yes, it was a dream, and I had this weird, esoteric, finding-myself problem leaving that team, because I thought that was going to be the team I stayed with forever. But Ansible became a driving passion because of what it could do and the kinds of capabilities we can help drive and pursue by offering system administrators — which is my background; that's kind of my tribe in the ways of operational components — the ability to automate all of that. That was a huge thing that spoke to me. That was my progression.


Anyways, I am such a big fan of OpenShift as a platform, of what the power of containers brings to the world, and of what we promise with immutable infrastructure and actually deliver; I've seen it work, and I've helped deploy it, those kinds of things. And seeing how we can really automate all of that and scale it all out — I just had a really good phrase that I'm forgetting all of a sudden. But basically, you allow your operational team, or your DevOps workflow practitioners, to increase their throughput capacity and scale themselves, as well as scale the work they do throughout their infrastructure, because we have tools, we have utilities, we have software that can accomplish that.


Or automate the easier things to free up your capacity to solve new and bigger problems.


[00:43:58] Adam: Yes, and allowing the computer to do work for us that we no longer have to do manually.


[00:44:08] Anthony: Now you're almost diving into AI. People fear AI; we don't have to dive in there, but it's really about freeing up capacity to work on solving bigger problems. I'll tell you what, I can talk about this stuff all day long. I think you can see we're probably going to need to do some more interviews. But I'll tell you what, Adam, I greatly appreciate you coming.


[00:44:23] Adam: Thanks for having me.


[00:44:23] Anthony: You're always welcome here. Thanks for everything you do, for your story, the community contributions, and your roles at Red Hat; the stories are great. We're going to have to do this again. Thanks again.


[00:44:31] Adam: Awesome, thank you.



How did this interview help you?

If you learned anything from this interview, please thank our guest for their time.
