Connecting to the cloud via a private interconnection can be confusing, but it doesn’t have to be. In this blog, we will simplify the solution by abstracting the individual components that are common to interconnection with all major cloud service providers (CSPs). Once you understand the building blocks, you can reassemble them as needed for each respective CSP.
Private interconnection components
Please refer to Figure 1 below as we walk through the components. We will use the abstracted names at the top and you can map them to any CSP implementation below. Please note that each CSP has a similar implementation for private partner interconnection to their cloud platform – they are just called different names respective to each CSP’s brand (i.e., AWS Direct Connect, Microsoft Azure ExpressRoute, Oracle® Cloud Infrastructure (OCI) FastConnect, Google Cloud Interconnect).
A quick side-note regarding the private partner interconnection component: Private interconnection can be delivered in one of two ways: direct or hosted. A direct connection is a Layer 1 fiber connection between your customer premises equipment (CPE) node and the CSP edge node. In the case of a direct connection, your provider is responsible for providing the Layer 1 connection and then you build Layer 2 connections directly with the respective CSP. Conceptually, if you remove the Layer 2 interconnect from Figure 1, then that is what a direct connection looks like. A hosted or partner interconnection is when there is a switch (e.g., Equinix Cloud Exchange Fabric™) between your CPE and the CSP edge node as depicted under the L2 Interconnect in Figure 1.
The physical connection between the CSP and your CPE device runs between the partner interconnect switch and the respective CSP edge node. Multiple Layer 2 virtual connections can be established over this link. For each virtual connection, two VLANs are created: a buyer-side VLAN and a seller-side VLAN.
The buyer VLAN faces the buyer of the service, which in this case is the enterprise, and the seller VLAN faces the CSP. During the virtual circuit creation, a seller-side VLAN ID is dynamically assigned by the CSP and you will assign the buyer-side VLAN, which then gets configured as an 802.1Q trunk on your CPE device. 802.1Q is the IEEE standard that supports virtual LANs on an IEEE 802.3 Ethernet network. Figure 2 below shows an AWS Layer 2 connection using ECX Fabric and AWS Direct Connect. In this case, VLAN 308 was dynamically generated when the virtual circuit was created and then VLAN 3033 was assigned by the buyer. The ECX Fabric is a switch that will bidirectionally map the buyer-side VLAN 3033 with the seller-side VLAN 308 to establish Layer 2 connectivity.
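To make the bidirectional mapping concrete, here is a minimal sketch in Python that models how a fabric switch translates 802.1Q tags between the buyer side and the seller side of a virtual circuit. The class and method names are illustrative only; this is not an ECX Fabric API.

```python
# Hypothetical model of the buyer/seller VLAN translation a fabric
# switch performs for each virtual circuit. Illustrative names only.

class FabricVlanMap:
    def __init__(self):
        self.buyer_to_seller = {}
        self.seller_to_buyer = {}

    def add_virtual_circuit(self, buyer_vlan: int, seller_vlan: int) -> None:
        """Register a virtual circuit, mapping the two tags both ways."""
        self.buyer_to_seller[buyer_vlan] = seller_vlan
        self.seller_to_buyer[seller_vlan] = buyer_vlan

    def rewrite(self, vlan: int, direction: str) -> int:
        """Translate a frame's 802.1Q tag as it crosses the fabric."""
        table = self.buyer_to_seller if direction == "to_csp" else self.seller_to_buyer
        return table[vlan]

# The circuit from Figure 2: buyer VLAN 3033 mapped to seller VLAN 308.
fabric = FabricVlanMap()
fabric.add_virtual_circuit(buyer_vlan=3033, seller_vlan=308)
print(fabric.rewrite(3033, "to_csp"))    # 308
print(fabric.rewrite(308, "from_csp"))   # 3033
```

A frame leaving the enterprise tagged 3033 arrives at the CSP edge tagged 308, and vice versa, which is all the fabric needs to do to stitch the two sides into one Layer 2 segment.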
The private network (PN) is the starting point for instantiating private services in the public cloud. The PN is a logically defined section of the cloud that is isolated to your private environment and is completely controlled by you. In software-defined networking, it is an overlay network that is abstracted from the underlying hardware, so your architecture and design are specific to the overlay network and not defined by the underlying hardware. This gives you more freedom to cost-effectively set up and manage the network. IP addressing, summarization, managing route tables, segmentation and multi-tier designs are still vitally important to get right – the hardware is now just abstracted away from those processes.
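Summarization in particular is easy to demonstrate with Python's standard-library `ipaddress` module. The subnets below are example values for a hypothetical multi-tier PN, not tied to any specific CSP.

```python
# Summarize the subnets of a private network into the smallest set of
# covering prefixes before advertising them toward the enterprise.
# All address ranges are example values.
import ipaddress

subnets = [
    ipaddress.ip_network("10.20.0.0/24"),  # web tier
    ipaddress.ip_network("10.20.1.0/24"),  # app tier
    ipaddress.ip_network("10.20.2.0/24"),  # data tier
    ipaddress.ip_network("10.20.3.0/24"),  # management
]

# collapse_addresses merges contiguous networks into supernets.
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('10.20.0.0/22')]
```

Advertising the single /22 instead of four /24s keeps the enterprise route tables small, which matters even when the hardware underneath is abstracted away.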
Once you have created a PN, it must be attached to a private gateway (PG) so the rest of your enterprise can reach it. A gateway is a device that acts as a gate between two networks. In the case of cloud, it is a software-defined construct like the private network that acts as a router. The private gateway is attached to the PN and is then used to establish Border Gateway Protocol (BGP) peering with your on-premises router or CPE. BGP is the routing protocol of choice for the major CSPs and you must establish BGP peering sessions for reachability.
A private interface (PI) is essentially the Layer 3 point-to-point connection between the BGP peers. Depending on the respective CSP, the private interface will either be configured separately or as part of the BGP configuration. The key thing to remember is that the PI provides the Layer 3 connectivity for establishing BGP peering.
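A point-to-point link like this is typically addressed out of a small prefix such as a /30, with one usable host address per peer. The sketch below uses the stdlib `ipaddress` module to derive the two peer addresses; the prefix is an example value.

```python
# Derive the two usable host addresses of a /30 point-to-point link,
# one for each BGP peer on the private interface. Example prefix only.
import ipaddress

link = ipaddress.ip_network("169.254.100.0/30")
cpe_ip, csp_ip = list(link.hosts())  # a /30 has exactly two host addresses

print(cpe_ip)  # 169.254.100.1 - assigned to the CPE side
print(csp_ip)  # 169.254.100.2 - assigned to the CSP gateway side
```

Whether you enter these addresses yourself or the CSP assigns them during circuit creation varies by provider, but the result is the same: two directly connected Layer 3 endpoints over which the BGP session can form.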
AWS Direct Connect example
We can now put all the components together. Figure 3 below shows a connection to AWS. Each of the components is now stitched together to provide reachability between the AWS private network and the enterprise. The Layer 2 connectivity is provided by the ECX Fabric switch that is bidirectionally mapping VLAN 3033 and VLAN 308. This allows us to build a Layer 3 point-to-point connection for establishing BGP peering between the CPE router and the gateway. The gateway is then attached to the private network (in this case, an AWS Virtual Private Cloud (VPC)). The private network is advertised to the CPE router, which in turn advertises the enterprise routes to the cloud private network.
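The route exchange at the end of that chain can be sketched in a few lines. This models only the outcome of the BGP advertisements in Figure 3, not BGP itself, and all prefixes are example values.

```python
# Model the route exchange between the CPE and the private gateway:
# each peer installs the prefixes the other advertises. Example prefixes.

enterprise_routes = {"172.16.0.0/16", "192.168.10.0/24"}  # on-premises
vpc_routes = {"10.20.0.0/22"}                             # AWS private network

# The gateway advertises the VPC prefix to the CPE...
cpe_rib = enterprise_routes | vpc_routes
# ...and the CPE advertises the enterprise prefixes to the gateway.
gateway_rib = vpc_routes | enterprise_routes

print(sorted(cpe_rib))      # CPE now reaches the cloud private network
print(sorted(gateway_rib))  # gateway now reaches the enterprise
```

Once both routing tables contain both sets of prefixes, traffic can flow end to end across the interconnection.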
Hopefully, breaking down the components of cloud connectivity provides a simpler way of looking at cloud on-ramps. Instead of trying to memorize all the different CSP names for the components – just remember the abstracted names and then apply them to the respective CSP. For example, think “I need to connect to AWS, so what do they call the PN, PG and PI?” The answer would be “I need to create a VPC in AWS, connect that to a Virtual Private Gateway, and then create a private interface that is attached to Direct Connect.”
There is also the benefit of speed when ECX Fabric acts as an on-ramp to the various CSPs’ platforms. Setting up virtual connections to these CSPs with ECX Fabric via its API or portal only takes minutes, rather than days or weeks. So, not only is the interconnection process simpler and more cost-effective, it is a faster way to access not just one, but multiple cloud services to scale your business.
For more information on how to deploy hybrid or multicloud environments, read our ECX Fabric data sheet.