One size does not fit all: there is no silver bullet that magically meets every need, and the same goes for cloud and infrastructure choices. The number of options keeps expanding to meet the unique business needs of all types of organizations. We see increasing numbers of hybrid cloud solutions, as well as various configurations and options within private and public cloud environments. With this choice comes risk: the wrong workload placement can degrade performance and scalability and ultimately make applications unusable.
When considering workload placement, you have to weigh a wide variety of factors, ranging from technical requirements to legal aspects:
- Performance requirements (e.g., I/O profile, latency, and jitter)
- Resiliency requirements
- Required underlying services (data, network, and security services)
- Data privacy considerations
- Software license terms
- Infrastructure management requirements
- Integration with existing systems
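One lightweight way to make these factors actionable is to record them per application in a structured form. The sketch below is purely illustrative; the field names are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch: one record per application capturing the
# placement factors listed above. Field names are assumptions.
@dataclass
class PlacementProfile:
    app_name: str
    max_latency_ms: float           # performance: latency requirement
    io_profile: str                 # e.g. "read-heavy", "write-heavy"
    availability_target: float      # resiliency, e.g. 0.999
    required_services: list = field(default_factory=list)  # data/network/security
    data_residency: str = "any"     # data privacy constraint, e.g. "EU"
    license_allows_cloud: bool = True
    integrates_with: list = field(default_factory=list)    # existing systems

profile = PlacementProfile(
    app_name="billing",
    max_latency_ms=50,
    io_profile="write-heavy",
    availability_target=0.999,
    required_services=["postgres", "vpn"],
    data_residency="EU",
)
print(profile.data_residency)  # "EU"
```

A record like this makes it obvious, for instance, that an application with an "EU" residency constraint cannot be placed in a region that violates it, regardless of how attractive the pricing is.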
Larger companies easily have hundreds, if not thousands, of applications. Despite this, most companies do not maintain an up-to-date catalog of those applications, the dependencies between them, and the infrastructure each one uses. That is a challenging starting point for any workload placement decision and leads to a trial-and-error approach. Knowing this, the foundational first step is to get a decent view of the application landscape.
While collecting this information, the key is to gather all the data needed throughout the workload placement decision-making process, not just in the first step. Understand which data is relevant, and do not overdesign and collect unnecessary data; that only creates complexity.
Excel is an easy way to start collecting data, but it quickly becomes a bottleneck. A proper tool helps collect and store the needed data, and it provides the power to analyze and view the data from multiple angles.
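Whatever tool you choose, the core of the catalog is a dependency graph: which application talks to which. A minimal sketch, assuming a simple adjacency-list model (a real CMDB or discovery tool would replace this), shows why the graph matters: moving one application pulls its transitive dependencies into the decision.

```python
# Hypothetical catalog: application -> where it runs and what it depends on.
catalog = {
    "web-shop": {"runs_on": "vm-cluster-1", "depends_on": ["orders", "auth"]},
    "orders":   {"runs_on": "vm-cluster-1", "depends_on": ["billing"]},
    "auth":     {"runs_on": "vm-cluster-2", "depends_on": []},
    "billing":  {"runs_on": "mainframe",    "depends_on": []},
}

def transitive_deps(app, catalog, seen=None):
    """All direct and indirect dependencies of an application."""
    seen = set() if seen is None else seen
    for dep in catalog[app]["depends_on"]:
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, catalog, seen)
    return seen

print(sorted(transitive_deps("web-shop", catalog)))
# ['auth', 'billing', 'orders']
```

Here, placing "web-shop" in a public cloud while "billing" stays on the mainframe immediately surfaces a latency and integration question that a flat spreadsheet would hide.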
Putting legal and other non-technical issues aside, the biggest question when placing workloads in a cloud environment is how cloud-native your applications are. People usually refer to The 12 Factor App methodology and microservices when defining apps as cloud-native. Keep in mind that applications do not need to be 100% cloud-native (with, say, horizontal scale-out) when considering a move to an IaaS cloud. The most critical aspects of being cloud-ready include assessing the following:
- Ability to run on standard, virtualized infrastructure
- Resiliency against failures in the underlying infrastructure
- The number of proprietary or hard-coded dependencies on underlying services
For the past 30 years, we have built applications on the "always-on infrastructure" principle. We expected infrastructure to provide stable and predictable services, so there was no need to code resiliency into the application itself. And this applies not just to the infrastructure, but to every service the application consumes. In the cloud, you need to cope with situations where latencies stretch from milliseconds to seconds, and with the occasional unavailability of a service.
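Coding that resiliency into the application typically means wrapping dependency calls in retries with a bounded budget. A minimal sketch, using exponential backoff with jitter (the function and parameters are illustrative, not from any particular library):

```python
import random
import time

def call_with_retries(fn, attempts=4, base_delay=0.1, max_delay=2.0):
    """Retry a flaky dependency call with exponential backoff and jitter.

    `fn` stands in for any service call that may raise transient errors.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # retry budget exhausted; surface the failure
            delay = min(max_delay, base_delay * 2 ** attempt)
            # Jitter spreads retries out so clients don't hammer a
            # recovering service in lockstep.
            time.sleep(delay + random.uniform(0, delay))
```

The point is not this particular policy, but that the application, rather than the infrastructure, now owns the decision of how long to wait and when to give up.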
You can also expect your applications to have a number of dependencies, requirements, and hard-coded configurations tied to the underlying infrastructure and services; it could be as simple as a specific version of an application server. Creating applications that are agnostic to the underlying infrastructure and services takes considerably more time and effort. Given the budget and time constraints everybody lives with, and frankly because we often get lazy, we take shortcuts and use proprietary or hard-coded services, which makes applications harder to move.
Achieving Price Benefit
Assuming your applications are cloud-ready, the next element to consider is cost optimization. Public cloud pricing models differ from what you are used to with internal IT or with outsourcers. For years, if not decades, we have optimized our workloads to squeeze the best results out of the existing models. Given the complexity, variety, and velocity of cloud pricing models, there will be a learning curve requiring new procurement skills in your organization. Getting the cost benefit can therefore require optimizing the applications for cloud pricing models. The bare minimum is to set policies and enforce timely decommissioning of services in order to avoid invoice surprises.
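That decommissioning policy can be as simple as tagging every provisioned resource with an expiry date and running a scheduled job that flags anything past due. A hedged sketch, assuming a flat inventory list with made-up tag names:

```python
from datetime import date

# Hypothetical inventory: each resource carries an owner and expiry tag.
inventory = [
    {"id": "vm-017", "owner": "team-a", "expires": date(2020, 3, 1)},
    {"id": "vm-042", "owner": "team-b", "expires": date(2030, 1, 1)},
]

def overdue(inventory, today):
    """Resources whose expiry date has passed and should be reviewed."""
    return [r["id"] for r in inventory if r["expires"] < today]

print(overdue(inventory, date(2024, 6, 1)))  # ['vm-017']
```

In practice the flagged list would feed a review or automated teardown step; the essential part is that every resource has an owner and an expiry from day one.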
Does Workload Placement Matter?
At the end of the day, workload placement can help drive down operating costs, create healthy competition, and serve as an alternative to the current infrastructure. A workload analysis also yields improved situational awareness (an application catalog with dependencies), along with the improvements made to application architecture and deployment processes (DevOps) to get applications cloud-ready. These will increase your agility and reduce incident resolution times and risks.