Well, sure. Thank you very much for having me, João. My name is Axel Fontaine. I'm the creator of Flyway and the founder and CEO of Boxfuse. I have been in the software industry for quite a while, since the end of the '90s, always involved at the intersection between development and operations, and that's an area that has fascinated me ever since 2010. I have been quite active on the conference circuit as well. I did lots of presentations in that space. I'm a JavaOne Rock Star, and have just generally done a lot of stuff in that area.
Well, at Boxfuse, we're really a deployment tools company. We are about making deployments of JVM and Node.js-based applications to AWS effortless. How do we achieve that? We strongly believe in a couple of modern concepts like Immutable Infrastructure, Blue/Green Deployments and Zero Downtime updates, and we've applied all those concepts to those types of applications.
Well, I would say so, absolutely. We are using it today. Most of our users and customers are using it today, and we see many other people across the industry starting to adopt it as well. So I would certainly say it's gaining adoption and it's in use in production today.
Well, I would say that it depends on how far along the adoption curve you are and what you have already adopted from the set of best practices we see today. That will determine just how big a gap you have to cross to get to Immutable Infrastructure. Immutable Infrastructure really depends on practices like having a centralized logging server, dealing with state properly, and a number of other concerns, so you need to have those solved first before you are able to move in that direction. It's not something that is achievable for every company overnight, but if you have already progressed nicely along that journey, then it is definitely very attainable today.
Well, I think you have to look at your infrastructure as two major pieces. If you look at the compute piece, or compute tier, we are iterating rapidly there, with new versions of the code being released very often. So what we want is a very deterministic process to change that very quickly. And if you look at how persistent state fits into that, you have a bit of a conflict with the disposability of the instances, because you have to be able to throw them away. So you have to accept that persistent state does not belong on these instances anymore.
So you have to keep it external, and you have a bunch of options for that. You could use hosted services for object storage or for databases. You could run your own database server, but I think the key is that you have to separate persistent storage from compute. Even on the database server, if you were to run it as an immutable server, you would keep the data on a separate volume from the code volume, and then attach the data volume to the instance running the code.
João: You mentioned the adoption curve. So where do you think Immutable Infrastructure is right now? Is it in the early adopter stage, or…
I would say so. I think we are definitely past the innovators. At this point, it's really early adopters: companies that, as I said, have adopted a number of the prerequisite techniques are using Immutable Infrastructure today in production, and the other ones, well, they are still somewhere along that journey to be able to get there. But I would say today it's early adopters.
Oh, I think you need to have a couple of pieces in place. Some of them we already discussed: we talked about having state, or at least persistent state, removed from your instance. You should aim to store logs outside of your instance on a central logging server, for example. If you have large objects, you can send them to an object store like S3, and if you have a database, you should have that off the instance as well. Then there are other things to consider. It's okay, for example, to store temporary data as long as you are prepared to lose it. And you have to be able to assert the health of your instances as they come up, so you need some kind of mechanism in place for health checks that can then tie into other components to ensure a reliable deployment process.
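That health-check gate can be sketched as a simple poller that retries until the instance reports healthy or a timeout expires. This is an illustrative sketch, not Boxfuse's actual implementation; in practice the `check` callable would be an HTTP GET against your app's health endpoint, which is an assumption here.

```python
import time

def wait_until_healthy(check, timeout=300, interval=5):
    """Poll a health check until it passes or the timeout expires.

    `check` is any callable returning True when the instance is healthy,
    e.g. an HTTP GET against a hypothetical /health endpoint.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Demo: a fake check that passes on the third attempt.
attempts = {"n": 0}
def fake_check():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until_healthy(fake_check, timeout=10, interval=0.01))  # True
```

The deployment tooling would call something like this for each freshly started instance before routing any traffic to it.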
So a deployment process with Immutable Infrastructure is really about spinning up new infrastructure, asserting that it's working correctly, and then shutting down the old one. In a sense, we are expanding our infrastructure footprint and contracting it again, and along the way, we hope to make a Zero Downtime transition. How does that usually work? You have some central entry point for your application. You can think of it as a virtual IP, or an Elastic IP in AWS terms, and some kind of load balancer that keeps track of the instances running behind it. You have got a version up and running, and when you are ready to deploy the new version, you push out the code and that builds another image. You spin up a couple of instances and then verify whether they are healthy by checking the health checks. If that's okay, you then shut down the old ones and contract again.
So it's really a lot like Blue/Green Deployments. It works very well with the Cloud, where you are not constrained by a finite amount of capacity. You are living in a world of abundance where, for a short amount of time, it's okay to actually double the capacity you are using during the deployment, so you have the old and the new running at the same time. It will cost you a little bit more, but not very much for that one hour. And then when you are happy with the new version, you shut down the old one. Or if it doesn't work, you simply shut down the new one and keep the old one running. So you basically guarantee that you always have one version of the code that is working correctly at all times.
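The expand-and-contract flow described above can be sketched as logic against a toy in-memory "load balancer". The class and function names here are illustrative, not any real AWS or Boxfuse API:

```python
class LoadBalancer:
    """Toy stand-in for an elastic IP + load balancer: tracks which
    group of instances currently receives traffic."""
    def __init__(self):
        self.active = []

    def switch_to(self, instances):
        self.active = list(instances)

def blue_green_deploy(lb, old_instances, new_instances, is_healthy):
    """Expand: new instances run alongside the old ones (capacity briefly
    doubles). Contract: route traffic to whichever version is proven
    healthy, and return the group to terminate."""
    if all(is_healthy(i) for i in new_instances):
        lb.switch_to(new_instances)   # promote the new version
        return old_instances          # ...and retire the old one
    return new_instances              # roll back: keep the old version live

# Demo: a healthy v2 replaces v1.
lb = LoadBalancer()
lb.switch_to(["v1-a", "v1-b"])
retired = blue_green_deploy(lb, ["v1-a", "v1-b"], ["v2-a", "v2-b"],
                            is_healthy=lambda i: True)
print(lb.active)   # ['v2-a', 'v2-b']
print(retired)     # ['v1-a', 'v1-b']
```

Note the transactional property: whichever branch is taken, exactly one proven-healthy version ends up behind the entry point.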
Well, you certainly live within different constraints when you're on-prem, but I think it's absolutely possible. You do not really get the world of abundance that you have with the Cloud, so a pure Blue/Green deployment is doable but maybe financially difficult to achieve. What is possible is to use alternative scenarios, like rolling updates. You still work with images and still install them, but you shut down one server at a time, start a new one, ensure it's healthy, then shut down the next one, fire up its replacement, and so on. You don't get the full benefit of running the old version at full capacity, starting the new version at full capacity, ensuring that's healthy, and then doing the switch, but you can still get a good part of the way on-prem.
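The one-server-at-a-time loop can be sketched as follows. The `start`, `stop`, and `is_healthy` callables are placeholders for whatever provisioning and health-check tooling you actually use:

```python
def rolling_update(servers, new_version, start, stop, is_healthy):
    """Replace servers one at a time: stop one old server, start its
    replacement, and only move on once the replacement passes its
    health check. Total capacity never drops by more than one server."""
    updated = []
    for server in servers:
        stop(server)                      # take one old server out
        replacement = start(new_version)  # bring up its replacement
        if not is_healthy(replacement):
            raise RuntimeError(f"{replacement} failed its health check")
        updated.append(replacement)
    return updated

# Demo with fake provisioning callables.
counter = {"n": 0}
def fake_start(version):
    counter["n"] += 1
    return f"{version}-{counter['n']}"

fleet = rolling_update(["v1-a", "v1-b"], "v2", start=fake_start,
                       stop=lambda s: None, is_healthy=lambda s: True)
print(fleet)  # ['v2-1', 'v2-2']
```

A real implementation would also deregister each server from the load balancer before stopping it and re-register the replacement afterwards.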
João: Can you give some examples of those tools?
You can use whatever build tools you have been using before to build your artifacts. In the JVM world, that could be Maven, Gradle, or whatever your tool of choice is. The same goes for your CI server: you can use whatever CI server you have been using before. I'm a fan of the solutions we have available in the Cloud today, like Travis CI and others, but you can use Jenkins or whatever other tool you choose there as well. Then you need some packaging tool to take the application you built and convert it into an image format that you can use with your Cloud provider or your infrastructure. That's of course a space where we are active as well, with Boxfuse for AWS and JVM and Node applications, for example. Then you get to the deployment part, which is also something we cover, and finally you need your platform of choice where the stuff will actually be running.
Well, a bootable app is really… you can see it as distilling an application down to its bare essentials, all the way down to the hardware. It's really just the parts of your application and of your system that are actually adding business value. So it's the bare minimum. You can think of it as a full system where we have trimmed away all the unnecessary parts, all the cruft, and we are left with just a stack that is actually adding business value, what is really executing there. In our context, this will usually be your application, with an application server embedded or not, a runtime, the libraries that your runtime requires, and an OS kernel, and that's about it. When you bring all that together, you get a bootable app, which is really, in a sense, a very, very small hardened image.
João: What's the difference from a container?
Well, if you look at it, we include a little bit more: we include an OS kernel in this case. Conceptually it's not that far apart. A container is a bit of a general term; a container, like a VM, could include anything. With a bootable app, we have gone in the direction where you really reduce it to the bare minimum. And at this point, we package it as VMs.
Well, I think you have to put it into perspective, and it depends where you want to deploy. On a private Cloud, there are fantastic use cases for containers, because you do not need the isolation that VMs provide you, or at least you do not necessarily need it, and you have density opportunities you can exploit. That's all very fine. On the public Cloud, however, you are always running within a VM.
Whether you want to or not, every single Cloud provider out there today will only let you run within a VM, and then you have the choice to pack more stuff into that VM. You can pack in a container runtime and let it run containers, but the problem is you are then running a double virtualization infrastructure, where your Cloud provider already provides you with a number of things. If you are on AWS, for example, they will provide you with image management for your AMIs: they'll store them and deal with all that for you with a registry, and they'll do scheduling as well.
So you don't need to worry about which physical host your stuff will land on; they take care of that for you. You don't have to worry about capacity planning; it's their problem to acquire enough machines. They take care of storage as well and make sure there is enough of it. They take care of the software-defined network and the security, and that's all part of the base package you buy anyway when you choose to move to the Cloud. When you then decide to add containers to the mix, you have an entire other layer that you now need to manage yourself.
You can use some hosted services or run things yourself, but you need to start worrying about having a registry for your container images. You have to start worrying about having a scheduling solution across the different hosts in your cluster; you can use ECS, for example, or Kubernetes, or something else, but you need to worry about it. It's a problem you need to solve. The same goes for capacity: you need to have enough space in your cluster to be able to schedule things. You need a solution for your container volumes, for moving them around, and for how that all fits together. You need a networking solution.
You need to deal with all of these problems, and we just look at it from an economic perspective: it really depends what makes economic sense for you on the Cloud. When you look at it these days, a t2.nano on Amazon costs about five pounds for one and a half months of compute. If you bring that down to one hour, it's about half a penny per hour. So this is really the smallest unit of granularity you are working with for your budget when you are scaling your instances and dealing with all that.
And so the question is, can you afford that? Can you afford half a penny per hour as the smallest unit of granularity? If the answer is yes, I would say there is no reason to invest further engineering effort into subdividing that capacity. If the answer is no, then yes, it may make sense for you to move to containers and invest in solving all these problems again at the container level. So it's really pure economics. It depends on your business: is 0.5p per hour something you can afford as the finest granularity or not?
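The arithmetic behind that granularity argument works out roughly as follows. The figures are the approximate ones quoted in the discussion, not current AWS prices:

```python
# Approximate figures from the discussion: ~£5 buys about one and a half
# months of t2.nano compute.
price_gbp = 5.0
hours = 45 * 24  # one and a half months ≈ 1080 hours

cost_per_hour = price_gbp / hours
print(f"{cost_per_hour * 100:.2f}p per hour")  # 0.46p, i.e. about half a penny
```

If workloads smaller than that per-hour slice are common in your system, subdividing VMs with containers starts to pay for the extra operational machinery; otherwise it doesn't.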
Well, to a certain extent, yes, because it's moving further along the DevOps story we have been following so far. It's bringing more pain forward, towards the developers, because you are trying to achieve perfect environment parity, where every environment is exactly the same from development all the way to operations. Certain things that would only have been taken care of on the ops side before are now coming into dev territory. So you need to increase collaboration on that side and make sure all these different parties work together. I think there are certainly some changes ahead there.
Well, just like you would sell any other technology, or any other change really, you have to phrase it in terms of the benefits that person would enjoy. In terms of Immutable Infrastructure, the nice thing is what we just mentioned: environment parity. You don't get surprises late in the game; you get surprises early, where it's cheap to fix them and iterate quickly, so you can get much higher reliability. If you combine it with Blue/Green deployments, you can make deployments pretty much transactional: at the end of the day, there is always a running version, and if you attempt to replace it with another one that doesn't work, the new one gets destroyed. The old one only gets replaced once the new one has been proven correct. So that's a very, very nice benefit. And it simplifies the general idea you have of the system, because you reduce the number of moving parts. You don't have things that don't fit together. It just makes it easier to reason about, I find.
Joao: Okay. Thank you so much, Axel, for your time.
My pleasure, João. Thanks for having me.