Editor’s Note: This blog was originally published in February 2020. It has been updated to include the latest information.
A server is a server—right? You install your applications and database onto a server, test and then migrate everything to production servers when it’s ready. Flip the switch and now your new service is live.
Actually, it’s not quite that simple—and it probably never was. Even when dedicated physical servers were the norm for most enterprises, there were still many details to consider. For example, admins needed to set up the server with the basic configuration and services the applications would require, connect it to the network, implement maintenance and backups, and much more. The applications and database also had to be configured to achieve optimal performance on the specific server they’d be installed on. If they were later migrated to a different type of server, or one with different software and network interfaces, the applications might not perform as well.
Virtualization changed a lot by decoupling the applications from the underlying hardware, allowing developers to concentrate on building applications without worrying about server or device configurations. And from there, approaches to virtualization have evolved rapidly, from virtual machines to containers, and then again from containers to “serverless” computing. The change has happened so quickly, it can be hard to keep up with all the different flavors.
This quick guide from our “How to Speak Like a Data Center Geek” series will help set things straight.
From bare metal to serverless: A quick primer
Before diving in, it’s helpful to level set on the basics. First, there’s no “right answer” when it comes to how to deploy an application. The best model will depend on your organization’s and your application’s requirements. Bare metal provides the most control and performance, but it also adds layers of complexity that you must manage yourself. As you move toward the other end of the spectrum, virtualized compute services offer increasing levels of convenience but less control. The less you have to do for yourself, the more you rely on the platform builders’ decisions to have taken care of those things for you.
While there are different ways to substitute and/or combine these models, the diagram below provides a point-by-point comparison of each approach.
- Bare metal is a physical server dedicated to one tenant.
- A virtual machine (VM) virtualizes the hardware.
- A container virtualizes the operating system (OS).
- Serverless virtualizes the runtime environment.
Sources: Arun Kottolli (average bare metal boot time compiled from multiple sources)
Bare metal provides maximum control and performance
Bare metal (dedicated servers): Bare metal refers to a physical server dedicated to one tenant, or customer, usually without any virtualization. At one point, all servers were bare metal, but virtualization and cloud technologies made it possible to share server resources across multiple tenants. A bare metal approach works well when you need maximum control and performance—for example, when running data-intensive workloads or processing sensitive information.
Bare Metal as a Service: Bare Metal as a Service (BMaaS) is no different from traditional bare metal, except that you’re renting the servers from a cloud provider or digital infrastructure company. Some of the benefits of cloud are available in this approach, including the ability to deploy storage, networking or other data center services on demand. With BMaaS, there is no multitenancy (server sharing) between different companies. A single customer can configure and use the server however they want, including running different processes or applications on different BMaaS servers.
Virtualization ushers in speed and efficiency
Virtualization: Virtualization lets you create a virtual replica of something physical, such as a server, storage or network device. The software hypervisor separates the physical hardware from the virtual environments and allocates system resources as needed.
Virtualization is used in both single-tenant and multitenant cases and has several benefits, including:
- More efficient use of server resources
- Faster provisioning of applications and resources
- Reduced cost
As an example, the three dedicated servers shown in the diagram below could be reduced to one server running three VMs. And, since the hypervisor is the intermediary between the VM and the underlying hardware, it makes it easy to take a complete snapshot of an application and copy it to a VM on another server.
Source: Red Hat
Containers: Software containers are another method for running multiple apps on the same physical hardware. Whereas VMs each have their own OS, containers share the same OS kernel, with each application isolated in its own runtime environment. Put simply, VMs virtualize the hardware while containers virtualize the OS. Containers are lightweight, fast to launch and consume less memory than VMs, making it easy to scale containers in and out as demand changes.
Serverless: Serverless computing is a cloud service model that removes the need for developers to manage server software and hardware. Instead of writing code to handle low-level infrastructure decisions, developers can focus on building application functions higher up the technical stack. There are four characteristics common to serverless architecture:
- Servers are provisioned and managed by the cloud provider.
- You only pay for the resources you consume.
- Autoscaling is built in.
- Availability and fault tolerance are managed by the cloud provider.
Function as a Service: Function as a Service (FaaS) is the development platform for a serverless approach. It’s similar to Platform as a Service (PaaS) but allows for more granularity. With PaaS, an application is deployed and scaled as a whole unit. In contrast, FaaS allows developers to break applications up into individual functions or modules that scale automatically. Functions are small pieces of code often designed to run on demand for short periods of time. This approach is useful for supporting event-driven workloads such as those that arise in the Internet of Things (IoT).
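To make the FaaS model concrete, here is a minimal sketch of an event-driven function in Python. The handler signature loosely follows the style popularized by AWS Lambda; the event fields (`temperature`) and the threshold are hypothetical, purely for illustration. The key point is what’s missing: there is no server setup, no process management and no scaling logic—the platform invokes the function on demand and handles the rest.

```python
import json


def handler(event, context=None):
    """Hypothetical FaaS handler for an IoT telemetry event.

    The serverless platform invokes this function once per event;
    it runs briefly, returns a response and then goes away. No
    infrastructure code appears anywhere in the function.
    """
    # Pull a sensor reading out of the incoming event payload.
    reading = event.get("temperature")

    # Trivial business logic: flag readings above a threshold.
    status = "alert" if reading is not None and reading > 30 else "ok"

    # Return an HTTP-style response, a common FaaS convention.
    return {"statusCode": 200, "body": json.dumps({"status": status})}
```

Because each invocation is small and self-contained, the platform can run as many copies in parallel as the event stream requires—which is exactly why this model suits bursty, event-driven workloads like IoT.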
What’s interconnection got to do with it? Everything!
Regardless of which approach you take to deploy IT resources, performance, security and scalability will always be top priorities. High-speed, low-latency connections between system components, networks and clouds are essential for applications to perform well, whether your bare metal or virtual IT infrastructure is on-premises, in the cloud or a combination of both.
Connecting different business systems over the public internet may not be practical due to latency and security challenges. Long-haul MPLS networks and dedicated point-to-point circuits can provide better performance, but are costly to maintain.
By leveraging a global interconnection platform such as Equinix Fabric®, any of these deployment approaches can be integrated into a globally distributed, hybrid IT infrastructure with lower latency and networking costs. Equinix Fabric connects on-premises infrastructure to multiple cloud platforms on Platform Equinix® in minutes over private, high-speed virtualized connections, providing on-demand access to physical and virtual resources wherever they reside.
Proximity to dense ecosystems of network and cloud service providers also makes it easy to dynamically migrate on-premises applications and data to more agile compute and storage resources for improved scalability, without affecting system performance or integration dependencies. For instance, Equinix Fabric integrates with Equinix Metal®, our Bare Metal as a Service solution.
With Equinix Metal, you can deploy high-performance, single-tenant bare metal servers in 30+ markets worldwide. You get the control and performance of bare metal with a flexible, cloud-like consumption model. By pairing Equinix Metal with Equinix Fabric, you can create a digital presence in a new market in minutes, and then interconnect that market with your existing global infrastructure.
To learn more about Equinix Metal or get started with your deployment, visit us today.