Magnetic.io are creating a new open source microservice deployment platform named VAMP, or Very Awesome Microservices Platform, which offers a ‘platform-agnostic microservices DSL’ for deployment, A/B testing, canary releasing, autoscaling, and an integrated metrics and event engine.
VAMP can currently be deployed on top of Apache Mesos or Docker Swarm, and a VAMP microservice deployment consists of: breeds, static artifacts that describe single services and their dependencies; blueprints, which describe how breeds behave at runtime and what properties they should have; deployments, which are running blueprints; and gateways, stable routing endpoints defined by ports for incoming traffic and routes for outgoing traffic.
Routing defines a set of rules for routing traffic between different services within the same cluster. Rules can be constructed by specifying a weight as a percentage of traffic, by adding a filter condition to target specific traffic, or by setting a filter strength, the percentage of traffic matching the filter conditions that should be diverted.
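The interplay of weights, filter conditions and filter strength can be sketched in plain Python. This is an illustrative model only; the function name and dict-based rule format are assumptions for this sketch, not VAMP's actual API, which expresses these rules declaratively in its DSL:

```python
import random

def route(request, routes):
    """Pick a target service for a request.

    Each route is a dict with:
      - 'service': target service name
      - 'weight': percentage of unfiltered traffic (for default routes)
      - optional 'condition': predicate over the request
      - optional 'strength': percentage of condition-matching traffic to divert
    """
    # Filter rules take precedence: if a condition matches, divert
    # 'strength' percent of that matching traffic to the route.
    for r in routes:
        cond = r.get("condition")
        if cond and cond(request) and random.uniform(0, 100) < r.get("strength", 100):
            return r["service"]
    # Otherwise split the remaining traffic by weight.
    pick = random.uniform(0, 100)
    cumulative = 0
    for r in routes:
        if "condition" in r:
            continue
        cumulative += r["weight"]
        if pick < cumulative:
            return r["service"]
    return routes[-1]["service"]
```

For example, a canary rule sending all Chrome traffic to a new version while everyone else stays on the stable version would combine one conditional route with one weighted default route.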
Metrics and events can be collected by the platform, and corresponding service level agreements (SLA) and escalations can be specified. A ‘workflow’ can also be defined, much like a recipe, which allows the creation of a grouped series of automated changes to a running system and its deployments. This allows the implementation of a feedback loop within a microservices platform, for example, to deploy, measure and react.
InfoQ recently sat down with Olaf Molenveld, CEO and co-founder of magnetic.io (the company building VAMP), and asked about the hunt for the correct microservice platform abstractions, container scheduling and Platform as a Service (PaaS).
InfoQ: Hi Olaf, and thanks for joining InfoQ today. Could you introduce the VAMP microservices platform please?
Molenveld: Thank you! VAMP is an open source solution that delivers easy canary-testing & releasing features to container- and microservice-based systems. Additionally it offers powerful workflow features like autoscaling with correct draining of services.
To solve scaling and time-to-market challenges we see online organisations quickly adopting technologies like (Docker) containers, microservices, APIs, big-data and continuous-delivery. These separate technologies bring clear advantages, but to really unlock the business value potential these separate components need to work in tandem to deliver a continuous-improvement feedback-loop that fits in with methodologies like lean, agile and scrum.
To grow from continuous integration/delivery to a continuous-improvement loop it’s essential to have an experiment system available that acts as a safety net, and that can bring together IT and business. Successful online companies like Netflix, Airbnb and Facebook have shown that canary-patterns are a very good way to implement such a safety net and to make business and IT work together on delivering business value: experimenting online by constantly releasing new software versions to a small percentage of visitors matching specific criteria, measuring technical and business performance, improving and repairing where necessary, and then scaling up, in a continuous process.
Until recently it was quite complex and expensive to implement and apply these canary-patterns. The goal of VAMP is to deliver an open source solution to make it easy and straightforward to start working with canary-testing and -releasing features.
We’ve been working on VAMP since early 2014 with our Amsterdam-based team of experienced engineers and architects, and currently VAMP is being used or evaluated by organisations worldwide in industries ranging from finance, media, e-commerce to SaaS and hardware manufacturing.
InfoQ: There are currently many microservice platforms emerging. Where does VAMP fit in with the existing offerings, and can you offer any thoughts on choosing the correct abstraction for a platform?
Molenveld: VAMP is not a full microservice/container PaaS/stack on its own, but is focused on delivering specific higher-level canary-testing/releasing and autoscaling features for common microservices/container stacks. VAMP integrates with container and microservices platforms such as Mesosphere’s Open DC/OS, Apache Mesos / Mesosphere Marathon, Docker Swarm and (soon) Kubernetes, and also stacks like Cisco’s Mantl, CapGemini’s Apollo or Rancher Labs’ Rancher, or container-clouds like MS Azure Container Service that make use of one of the container orchestrators that VAMP integrates with.
The current crop of container and microservices platforms does not offer VAMP’s business-oriented canary-testing/releasing features, but instead has simpler technical load balancing and “rolling upgrade” features. Business people are used to A/B testing tools like ‘Visual Website Optimizer’ and ‘Optimizely’ and are looking for canary-testing features that enable them, for example, to expose a new version of the software to 2.5% of all visitors using Chrome on an Android device and coming from the UK. This kind of granularity is not possible without a lot of hacking and manual work with the current container/microservice stacks. VAMP integrates easily and adds these higher-level features.
Additionally, because VAMP makes it easy to canary-test/release your software, it’s also essential to have your services/containers scale automatically as the number of visitors, and thus the load on the software, increases. VAMP has built-in Service Level Agreement (SLA) and escalation workflows that deliver easy auto up- and down-scaling of containers based on user-definable metrics like response-time or requests per second.
VAMP aggregates metrics over clusters (as you’re typically not interested in the health of specific instances, but only of the total cluster of instances) and also makes sure when scaling down that services are correctly drained, taking into account sticky-sessions and time-to-live. However, autoscaling is just one implementation of a metrics-driven optimisation-workflow. VAMP’s workflow-engine and metrics and event API makes it very easy to create all kinds of custom event- and metric-driven optimisation workflows, like follow-the-sun, bin-packing, cloud bursting, cost/performance optimisation and brown-out scenarios.
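The SLA-driven escalation described above can be sketched as a simple decision function. This is an illustrative model under assumed names and thresholds, not VAMP's actual implementation: metrics are aggregated over the whole cluster, and the instance count is stepped up when the aggregate breaches the SLA and stepped down when there is ample headroom:

```python
def desired_instances(current, response_times_ms, sla_ms=500,
                      min_insts=2, max_insts=10):
    """Decide a new instance count from cluster-wide metrics.

    Aggregates response times over the cluster (not per instance),
    escalates up on an SLA breach, and de-escalates when the average
    is well under the SLA. Thresholds and step size are assumptions.
    """
    avg = sum(response_times_ms) / len(response_times_ms)
    if avg > sla_ms:                 # SLA breached: escalate up
        return min(current + 1, max_insts)
    if avg < sla_ms * 0.5:           # ample headroom: scale down
        return max(current - 1, min_insts)
    return current                   # within bounds: no change
```

A real workflow would additionally drain connections from instances before terminating them, honouring sticky sessions and time-to-live as described above.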
In summary, to deliver a continuous-improvement feedback loop, a system that orchestrates the three main domains of feedback-loops – deployment and scaling of services/containers, programmable routing/load-balancing, and big-data metrics-aggregation – must be put in place. We believe that VAMP meets these criteria.
Addressing the abstraction level we are focusing on, VAMP offers higher-level canary and optimisation-workflow features on top of “container clouds”, which you might also call container-PaaS or CaaS. The core programmable-routing and load-balancing features of VAMP (i.e. routing percentages of traffic with specific conditions to different versions of software) also work without using containers. But when you move into the real-time dynamic auto-scaling features of VAMP, we rely on the scaling features of containers and container-managers, and thus containers are needed for these scenarios.
InfoQ: VAMP supports running on Docker and Apache Mesos, with Kubernetes and Docker Swarm coming soon. What made you choose Docker and Mesos as the starting platforms?
Molenveld: When we started designing and developing VAMP in early 2014 the most mature container-cluster manager and orchestrator around was Mesos, and the common container-unit was Docker. So it made sense to first start working on integrations with these platforms. As Docker matured, adding Docker Swarm as a cluster-manager also made a lot of sense, which is helped by the fact that Docker and Docker Swarm share the same API.
Lately Kubernetes adoption has increased explosively, and Kubernetes shows very interesting potential. We strive to constantly add integrations with well-used and promising container-platforms. We are collaborating with several companies on delivering integrations as add-ons to VAMP, and for example one of these companies is currently working on a Rancher integration for VAMP.
InfoQ: Scheduling is a current hot topic in the container space. Can you share your thoughts on the future of scheduling, and is there potential for self-optimising microservice architectures that scale and arrange themselves depending on demand?
Molenveld: Yes, scheduling is very interesting to us, as it’s an important part of systems-optimisation. And as we are focusing on enabling continuous-improvement feedback loops, we are logically working on several features that enable self-optimising systems. One of the most common and well-known optimisation workflows is auto-scaling. But autoscaling is just one example of a continuous metric-driven optimisation workflow.
Because VAMP can orchestrate the three core areas of a continuous-improvement loop (deployment orchestration and scaling, programmable routing, aggregated metrics/big-data) it’s easy to create and deliver additional optimisation workflows. Examples of these are follow-the-sun scenarios (scale up resources during daytime, scale down at night), bin-packing problems (optimal utilisation of available infrastructure) and brown-out feedback loops (using throttling of traffic to slow down the entire system within acceptable ranges to avoid a total outage).
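Two of the workflows mentioned above reduce to very small decision functions once the metrics are available. The sketches below are minimal illustrations with assumed names, hours and thresholds, not VAMP workflow code:

```python
def follow_the_sun_scale(hour_utc, day_insts=8, night_insts=2):
    """Follow-the-sun: more instances during (assumed) business hours,
    fewer at night. The 07:00-21:00 window is an illustrative choice."""
    return day_insts if 7 <= hour_utc < 21 else night_insts

def brownout_fraction(load_rps, capacity_rps):
    """Brown-out: the fraction of incoming traffic to serve fully;
    the remainder is throttled or degraded to avoid a total outage."""
    if load_rps <= 0:
        return 1.0
    return min(1.0, capacity_rps / load_rps)
```

In practice such functions would be driven by VAMP's event and metrics APIs and would act by updating deployment scales and gateway routing weights.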
So yes, we firmly believe that sooner rather than later, the teams responsible for making the system perform correctly will focus much more on defining high-level bandwidths and rules for business and technical performance. The system will then make sure that the optimal balance between cost and performance within these rules and boundaries is found, while the teams focus on delivering business and consumer value. And of course this is a real-time process, and thus the system will constantly monitor, measure and correct/improve itself.
Deep-learning and AI will become essential components of these systems as huge amounts of data are generated and can be processed to learn, simulate and improve the performance of the system.
Of course this will need even more collaboration between development, operations and business than today. But smart companies are already doing this, and we expect to see big progress in this area, and of course this is also what we’re betting on with VAMP.
InfoQ: VAMP in some ways also looks similar to PaaS platforms, such as Cloud Foundry and Rancher Labs’ Rancher. Do you see VAMP evolving into a full PaaS offering?
Molenveld: Not at this point. When we started out in 2014 we had a much more generic focus, and it was also difficult to predict what was going to happen within the Cambrian explosion of container and microservices solutions. Right now the market has become clearer, and we know much better where our main focus and added value is. VAMP is not an alternative for, or competing with, PaaS or container-platforms, but a powerful addition when you want to start working with canary and optimisation-workflow patterns.
Because we integrate with the most used container-managers, VAMP’s higher-level features are also portable across different container-solutions, and thus can enable multi-container-cloud and multi-scheduler requirements, reducing dependency on a single container-cloud or vendor that way.
InfoQ: What is the best way for readers to get involved and contribute to VAMP?
Molenveld: We offer a single self-contained quickstart package that can be downloaded from our website http://vamp.io and that will install all necessary components with a single command. We also have a getting-started tutorial, which can be worked through with the quickstart, to get a feel for the power of the canary features.
For people working with the new open source Mesosphere Open DC/OS, VAMP is also available as a DC/OS universe-package, so you can quickly install VAMP on DC/OS from its dashboard.
If you like what we offer, and want to improve or contribute to VAMP, we welcome collaboration in any form! Our open source projects are available on GitHub, and we can be found on Gitter for realtime chat and help. And of course people can always send us direct email: email@example.com or firstname.lastname@example.org
InfoQ: Thanks again for speaking to InfoQ. Is there anything else you would like to share with the readers?
Molenveld: Thanks for the opportunity and the great questions! We really believe that the use of containers, microservices and continuous-improvement feedback-loops will move us to the next level, where teams can focus better on delivering real-world value to business and consumers, which works out better for all parties involved.
We love to collaborate on delivering this vision, so if you are interested in joining our team, want to participate by adding new cool features or generally improving our code-base, or want to integrate VAMP with your platform to deliver canary and optimisation features, we would love to talk! And of course we’re also very open to helping organisations to start leveraging the canary and optimisation-workflow powers of VAMP. Feel free to reach out to us!