A server is a server – right? You install your applications and database onto a server, test, and then migrate everything to production servers when it’s ready. Flip the switch and now your new service is live. Actually, it’s not quite that simple and probably never was. Even when dedicated physical servers were the norm for most enterprises, there were many considerations to take into account. For example, you had to set up the server with the basic configuration and services the applications would need, connect it to the network, and implement maintenance and backups. The applications and database also needed to be configured to achieve optimal performance on the specific server they would be installed on. If they were later migrated to a different type of server, or one with different software and network interfaces, the applications might not perform as well.
Virtualization has changed everything by de-coupling the applications from the underlying hardware. Developers can concentrate on building the applications without worrying about server or device configurations. But virtualization approaches have evolved so fast, it can be hard to keep up with all the different flavors. This quick guide from our “How To Speak Like A Data Center Geek” series will help set things straight.
From bare metal to serverless – a quick primer
Before diving into each approach, it’s helpful to understand the basics. Choosing the right deployment model will depend on your organization’s requirements. Bare metal provides the most control and performance, but it also has the highest level of complexity. At the other end of the spectrum, virtual services offer the most convenience but less control. While these models can be combined in different ways, the comparison below summarizes the typical characteristics of each approach.
- Bare metal is a physical server dedicated to one tenant.
- A virtual machine (VM) virtualizes the hardware.
- A container virtualizes the operating system (OS).
- Serverless virtualizes the runtime environment.
Sources: Arun Kottolli; average bare metal boot time compiled from multiple sources.
Bare metal for maximum control and performance
Bare metal (dedicated servers): Bare metal refers to a physical server dedicated to one tenant, or customer, usually without any virtualization. At one point, all servers were bare metal but virtualization and cloud technologies made it possible to share server resources across multiple tenants. A bare metal approach works well when you need maximum control and performance – for example running data-intensive workloads or processing sensitive information.
Bare metal cloud: Bare metal cloud, or hosted bare metal, is essentially the same thing as bare metal except that you are renting the servers from a cloud provider. Some of the benefits of cloud are available in this approach, including the ability to deploy storage, networking or other data center services on an as-needed basis. With bare metal cloud, there is no multi-tenancy (server sharing) between different companies. But a single customer can configure and use the server however they want, including running different processes or applications on different bare metal cloud servers on demand. Some businesses use bare metal cloud as an interim step in moving workloads to the cloud.
Virtualization ushers in speed and efficiency
Virtualization: Virtualization enables the creation of a virtual replica of something physical, such as a server, storage or network device. Software called hypervisors separates the physical hardware from the virtual environments and allocates system resources as needed. Virtualization is used in both single tenant and multi-tenant cases and has several benefits including: more efficient usage of server resources, faster provisioning of applications and resources, reduced cost and more. As an example, the three dedicated servers shown in the diagram below could be reduced to one with three virtual machines. And, since the hypervisor is the intermediary between the virtual machine and underlying hardware, it makes it easy to take a complete snapshot of an application and copy it to a virtual machine on another server.
Containers: Software containers are another method used to run multiple apps on the same physical hardware. Whereas virtual machines each have their own OS, containers share the same OS kernel, and applications are isolated in their own runtime environments. Basically, virtual machines virtualize the hardware while containers virtualize the OS. Containers are lightweight, fast to launch and consume less memory than virtual machines, making it easy to scale container deployments in and out as needed.
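To make the container idea concrete, here is a minimal Dockerfile for packaging a small Python app. This is an illustrative sketch: Docker is assumed as the container runtime, and the file names (`requirements.txt`, `app.py`) and image contents are hypothetical.

```dockerfile
# The image supplies only the userland layers (Python runtime, app code),
# not a full guest operating system -- containers share the host OS kernel.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to start it.
COPY app.py .
CMD ["python", "app.py"]
```

Because containers built from an image like this share the host kernel, they start in seconds rather than the minutes a full VM boot can take, which is what makes rapid scaling practical. A typical workflow would be `docker build -t demo-app .` followed by `docker run demo-app` (names illustrative).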
Serverless: Serverless computing is a cloud service model that removes the need for developers to manage server software and hardware. Instead of writing code to handle low level infrastructure decisions, developers can focus on building application functions higher up in the technical stack. There are four characteristics common to serverless architecture:
- Servers are provisioned and managed by the cloud provider.
- You only pay for the resources you consume.
- Autoscaling is built in.
- Availability and fault tolerance are managed by the cloud provider.
FaaS (Function as a Service): FaaS is the development platform for a serverless approach. It is similar to Platform as a Service (PaaS) but allows for more granularity. With PaaS, an application is deployed and scaled as a whole unit. By contrast, FaaS allows developers to break applications up into individual functions or modules that scale automatically upon demand. Functions are small pieces of code often designed to run on demand for short periods of time. This approach is useful for supporting event-driven workloads such as those that arise in the internet of things (IoT).
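As a sketch of what such a function looks like, here is a small event handler in the style popularized by AWS Lambda. The `event`/`context` signature follows the Lambda convention, but the event shape (a hypothetical IoT temperature reading) and the alert threshold are illustrative assumptions, not part of any platform API.

```python
import json

def handler(event, context=None):
    """Handle one IoT-style event: a small, stateless function that the
    FaaS platform invokes on demand and scales automatically."""
    # The "temperature_c" field is a hypothetical event payload for this sketch.
    reading = event.get("temperature_c")
    if reading is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing reading"})}
    # Flag readings above an illustrative threshold.
    alert = reading > 30.0
    return {"statusCode": 200,
            "body": json.dumps({"temperature_c": reading, "alert": alert})}
```

Because each invocation handles a single event and holds no state between calls, the platform can run as many copies in parallel as the event stream requires and bill only for the execution time consumed.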
What’s interconnection got to do with it? Everything!
Regardless of which approach you take to deploying IT resources (physical or virtual), performance, security and scalability will always be top priorities. High-speed, low-latency connections between system components, networks and clouds are essential for applications to perform well, whether your bare metal or virtual IT infrastructure is on-premises, in the cloud or a combination of both. Connecting different business systems over the public internet may not be practical due to latency and security challenges. Long-haul MPLS networks and dedicated point-to-point circuits can provide better performance but are costly to maintain.
By leveraging a global interconnection platform such as Equinix Cloud Exchange Fabric™ (ECX Fabric™), any of these deployment approaches can be integrated into a globally distributed, hybrid IT infrastructure with lower latency and networking costs. ECX Fabric connects on-premises infrastructures to multiple cloud platforms on Platform Equinix® within minutes over private, high-speed, low-latency virtualized connections, providing on-demand access to physical and virtual resources wherever they may reside. Proximity to dense ecosystems of network and cloud service providers also makes it easy to dynamically migrate on-premises applications and data to more agile compute and storage resources for improved scalability, without affecting system performance or integration dependencies. In some cases, interconnection can deliver a 328% ROI to your company’s bottom line.
Learn more about ECX Fabric. Also read our announcement to acquire bare metal leader Packet to find out how we are helping enterprises more seamlessly deploy hybrid multicloud architectures on Platform Equinix®.