Running Hybrid and Multicloud Containers with Google Anthos

In recent years, enterprises have realised that not every workload should run in the public cloud. Google Anthos offers a platform to deploy and manage containerised workloads across different cloud infrastructures.

Authored by:
Kent Hua, Customer Engineer - App Modernization Specialist, Google
James McElvanna, Solutions Architect, Megaport
Nick De Cristofaro, Networking Specialist Customer Engineer, Google
James Ronneberg, Partner Development Manager (Cloud), Megaport

Introduction


In the early days of cloud computing, the narrative from the cloud service providers (CSPs) was that nearly everything would be run in the public cloud, but as Warren Buffett wisely said, “Don’t ask the barber if you need a haircut.” In recent years, enterprises have realised that not every workload can or should be run in the public cloud. Most have split services and applications between on-prem and private or public clouds based on criteria such as cost, latency, security, and compliance.

CSPs Move Towards On-prem


The major cloud providers acknowledged this trend, and in 2017, Microsoft introduced Azure Stack, which allowed customers to run Azure in their own data centres using approved partners and hardware (e.g. Dell, HPE, Lenovo, and others). In 2018, AWS announced Outposts, a similar solution that runs on AWS-owned hardware and is supported and managed by AWS directly. Around the same period, we also saw the traditional on-prem players such as Oracle, HPE, IBM, SAP, Nutanix, and NetApp all begin to either push their own cloud services or partner with the Big Three CSPs.

Anthos


In 2019, Google introduced Anthos, which at first glance may appear very similar to Azure Stack and Outposts; however, it’s fundamentally different. Anthos was designed to help customers modernise legacy applications with Kubernetes, the open-source container orchestration system that originated at Google and automates the deployment, scaling, and management of containerised applications. Rather than a pure on-prem hardware play, Anthos is a platform to deploy and manage containerised workloads across different cloud infrastructures.

Unlike Outposts (AWS-owned hardware) or Azure Stack (Azure-approved OEMs), Google Anthos can run on virtualised infrastructure – in public clouds or on hypervisor-based on-prem environments. Earlier this year, Google announced support for bare-metal deployments, removing the need for a hypervisor layer. This enables enterprises to improve performance – by running workloads closer to the hardware – and to take advantage of existing hardware investments.

Learn more about Google Anthos.

Containers


To fully understand Anthos, it’s helpful to go deeper into Kubernetes and the early days of containers. Virtualisation and hypervisors provide a way to abstract away from the hardware and quickly deploy multiple virtual machines (VMs). However, each VM still contains a complete operating system as well as the application code. Containers abstract away from the operating system, isolating just the elements you need to run an application.
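
To make the contrast concrete, here’s a minimal sketch using the Docker SDK for Python (pip install docker). It assumes a local Docker daemon is running; the image and command are purely illustrative.

```python
# A minimal sketch of the container idea, assuming a local Docker daemon.
# The python:3.11-slim image packages a Python runtime and little else --
# there is no full guest operating system to boot, unlike a VM.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container; it starts in seconds because it shares the
# host kernel and isolates only the application and its dependencies.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,  # delete the container once it exits
)
print(output.decode())
```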

Google Kubernetes Engine


Google faced immense challenges scaling infrastructure to support millions of users on applications such as Gmail and Google Maps. In 2004, Google began using containers to power the backends of these applications, orchestrated by Borg, its internal large-scale cluster management system. In 2013, Google open-sourced LMCTFY (Let Me Contain That For You), work that was later folded into libcontainer, the runtime underpinning Docker. In 2014, drawing on a decade of experience with Borg, Google open-sourced Kubernetes. To give you an idea of the scale involved, recent estimates suggest that Google launches over four billion containers a week to run its global services.

There are a number of key architectural components within Google Anthos:

  • Google Kubernetes Engine (commonly known as GKE): Deploys, manages, and scales containers across multiple Compute Engine instances pooled into a cluster.

  • Anthos clusters: Management of Kubernetes clusters residing outside Google Cloud Platform (GCP) – on-prem or on other cloud providers (AWS and Azure) – from a single pane of glass (see the sketch after this list).

  • Anthos Service Mesh (ASM): Simplifies service delivery – from traffic management, observability, and mesh telemetry to securing communications between services – across private and public clouds.

  • Anthos Config Management (ACM): Keeps cluster configuration uniformly consistent across multiple clouds. It’s a secured, version-controlled repository for everything related to administration and configuration. Policy controllers can check, audit, and enforce cluster compliance with policies related to security, regulations, or arbitrary business rules.

  • Attached Clusters: A single place to operate on clusters in other clouds, leveraging services such as Anthos Service Mesh and Anthos Config Management on EKS on AWS and AKS on Azure.
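
To illustrate the “single pane of glass” idea mentioned above, here’s a rough sketch using the official Kubernetes Python client (pip install kubernetes). It is not the Anthos fleet API itself – Anthos management runs through GCP – but it shows how one script can inventory workloads across every registered cluster, assuming your kubeconfig holds one context per cluster (e.g. GKE, EKS, and on-prem).

```python
# A rough sketch of multi-cluster visibility, not the Anthos fleet API.
# Assumes kubeconfig contexts exist for each cluster you want to inspect.
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()

for ctx in contexts:
    name = ctx["name"]
    config.load_kube_config(context=name)  # point the client at this cluster
    apps = client.AppsV1Api()
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{name}: {len(deployments.items)} deployments")
```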

Learn more about Kubernetes.

Designed for Multicloud


Portability and scalability are among the key appeals of containers and Kubernetes, and arguably Google Anthos’s most distinctive feature is its ability to support multiple cloud infrastructure combinations, including:

  • a single public cloud and on-prem
  • multiple public clouds and on-prem
  • multiple public clouds.

Anthos allows you to manage all these different scenarios as if they were a single cloud, and that’s where Megaport can help. Both Anthos and Megaport are cloud-agnostic, so customers can move their containers and workloads to whatever platform suits them. Similar to Megaport’s Network as a Service (NaaS), Anthos gives customers the ability to move workloads around in real time and scale up and down as needed.

Find out more about Megaport Cloud Router, our virtual routing service that provides private connectivity at Layer 3.

Megaport/Anthos Example 


The diagram below shows an enterprise with three GKE clusters: the first running in GCP, the second in AWS, and the third in the enterprise’s colocation facility. All three clusters are seamlessly managed using the Anthos platform hosted in GCP. Depending on customer requirements – whether it’s pure demand, low latency, or data residency – traffic can easily be diverted to any of the three clusters via Megaport’s high-speed Layer 2 backbone. This design also utilises Megaport Cloud Router (MCR) as the routing hub, exchanging full BGP routes between the three locations. This enables all the GKE clusters to reach each other (both the control and data planes) without having to hairpin back to the colocation facility or travel over the public internet via a VPN.
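
One practical prerequisite for this hub-and-spoke design is that the address ranges each cluster advertises to the MCR hub must not overlap. The sketch below, using only Python’s standard library, checks a route plan before peering; the CIDR values are hypothetical, not taken from the design above.

```python
# Validate that hypothetical node/pod ranges advertised by each GKE
# cluster to the MCR hub do not overlap before enabling BGP.
from ipaddress import ip_network
from itertools import combinations

advertised_routes = {
    "gke-gcp":  ["10.0.0.0/20", "10.4.0.0/14"],    # hypothetical ranges
    "gke-aws":  ["10.16.0.0/20", "10.20.0.0/14"],
    "gke-colo": ["10.32.0.0/20", "10.36.0.0/14"],
}

clean = True
for (site_a, nets_a), (site_b, nets_b) in combinations(advertised_routes.items(), 2):
    for a in nets_a:
        for b in nets_b:
            if ip_network(a).overlaps(ip_network(b)):
                print(f"conflict: {site_a} {a} overlaps {site_b} {b}")
                clean = False

print("route plan is clean" if clean else "fix overlaps before peering")
```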

For example, an application can have its front end hosted in GCP via an L7 ingress, while its backend components are hosted on-prem or in other clouds and managed with ASM. An alternative design could use replicated environments in GCP and AWS, with external traffic ingressing through GCP via Traffic Director and then being delivered to an AWS endpoint via MCR’s private connection.
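
The replicated design implies that a consumer of the application can fail over between the two environments. As a toy illustration only – in practice Traffic Director steers traffic server-side, and the endpoints below are hypothetical placeholders – a client-side health check might look like this:

```python
# Toy client-side failover between replicated front ends. The URLs are
# hypothetical placeholders; real steering happens in Traffic Director.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://app.gcp.example.com/healthz",  # primary ingress in GCP
    "https://app.aws.example.com/healthz",  # replica reached via MCR
]

def first_healthy(endpoints, timeout=2):
    """Return the first endpoint that answers its health check, else None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # unreachable; try the next replica
    return None

print(first_healthy(ENDPOINTS) or "no replica reachable")
```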

Summary


Gartner predicts that by the end of 2022, 75% of global organisations will be running containerised applications in production. Together, Google Anthos and Megaport can provide a solid foundation for the growing multicloud containerised world.

Megaport is one of Google’s leading global interconnect partners. No provider has more GCP on-ramps in North America than Megaport.

Learn more about Google Anthos and Megaport


To learn more about Google Anthos, visit them here. You can also read more about Megaport’s partnership with GCP here.
