To Virtualize, Or Not To Virtualize
Despite the buzz that server virtualization has generated over the past decade, you may be surprised to hear that the decision about whether a workload should be virtualized is not a closed case. Even as IT organizations move into a post-virtualization era in which virtualized workloads are the norm, there's still plenty to weigh when deciding how to support any given workload.
Workload infrastructure is far more varied than many people realize. Although many organizations have made it their mission to eliminate physical workloads in favor of virtualized ones, that's not always the right answer. Further, there are emerging workload services to consider, particularly since they align so well with new software development methodologies.
If you thought the physical server was dead, think again! Even today, well over a decade since the inception of VMware, physical workloads remain common for a number of reasons. Some applications still demand the kind of performance that only a physical server can provide, and others carry licensing terms that are less than friendly toward virtualization. Although supremely frustrating, there are even vendors that still refuse to support their applications when installed on a virtual machine.
With those thoughts in mind, it's clear that physical servers will need continued support for the foreseeable future, but that doesn't mean they should be treated as an exception. In fact, new workload dynamics (for example, the I/O requirements of big data and analytics) may mean that these systems deserve to be just as well supported as everything else in your environment.
Today, standard mainstream workloads are generally virtualized. Over time, the percentage of systems that have made the jump from physical to virtual has steadily increased, particularly as hypervisor vendors have continued their efforts to make virtual machines first-class citizens. They have accomplished this by enabling what have become known as “monster VMs” and by continually adding more and more capability to their products.
It’s clear that today, virtual machines rule the roost and get most of the mindshare, but that doesn’t mean that they should get special treatment. After all, (almost) all your workloads are probably very important to you, no matter where they run. Your infrastructure should reflect that. And, over time, you should expect to see newer workloads begin to operate in newer environments.
Which brings us to containers: in some ways, the new kid on the block. Although container technology has existed in various forms for decades, in recent years it has gained a new lease on life as companies like Docker have made containers easy to use, bringing them to the masses. Whereas virtual machines force you to install a separate operating system instance for every single virtual machine, each of which consumes a baseline of resources just to run that operating system, containers share a single operating system instance (Figure 1). So, rather than running sixty copies of a Linux server, each supporting a discrete task, you can run those sixty services inside containers on a single operating system instance.
Containerized applications are packaged with their individual dependencies and configurations, making them eminently portable across systems. Moreover, because containerized workloads run atop binaries and libraries provided by the host operating system, containers can operate in both physical and virtual environments.
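To make the packaging idea concrete, here is a minimal Dockerfile sketch. The base image, file names, and service are illustrative assumptions rather than details from this article; the point is simply that the image bundles an application together with its dependencies, while the running container relies on the host's kernel rather than a full guest operating system.

```dockerfile
# Illustrative sketch: package a hypothetical Python service with its dependencies.
FROM python:3.11-slim

WORKDIR /app

# Copy the dependency manifest first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY app.py .

# The container starts as a single process sharing the host kernel.
CMD ["python", "app.py"]
```

Building the image (for example, `docker build -t web-service .`) and launching it with `docker run web-service` would then yield a portable unit that behaves the same whether the Docker host is a physical server or a virtual machine.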
As an aside, as you survey the world of containers, you will probably see people asking whether you should be doing containers or virtualization. That's a misleading question, since you can run containers on top of virtualization; both are simply workload foundations. The right choice really depends on the workload you're running. If it's a traditional application, run it in a virtual machine. If it's a microservices-style or containerized app, run it in a container (which, again, may itself be hosted on a virtual machine).
All things considered, there may be more than one correct choice for hosting a given workload. You'll need to weigh the organizational impacts of running it one way or another, the cost implications, and the performance ramifications. Beyond that, think about the future: which architecture offers the flexibility you'll need for the long run? The pace of IT is continually increasing, and you never know when requirements may change.