Despite the hype, most companies are just getting started on their microservices and container adoption journeys. Throw in a little marketing hyperbole and some competitive FUD, and it’s no surprise that operations teams and executives alike are confused about which projects, products or services are the most appropriate to evaluate, given where they are in that journey.
Our goal in this post is to make it as easy as possible to understand what the main categories of products, projects and services are in today’s rapidly evolving microservices ecosystem and which of them lead each space. We’ll also cover how these technologies can be combined to create a more fully realized microservices architecture and highlight a few interesting and noteworthy technologies along the way.
We break the microservices ecosystem down into the following broad categories:
- Infrastructure: Where should I host my microservices?
- Orchestration: How should I manage the containers that host my microservices?
- API Gateways: How should I expose, secure and manage access to my microservices via APIs?
- Service Middleware: How should I route, proxy, cache, and secure outside (north-south) and inside (east-west) traffic to my microservices?
- Service Meshes: How should I manage the service-to-service communication between my microservices?
- Serverless: Should I consider consuming cloud resources in a “pay-as-you-go” model or pre-purchase the capacity I need?
- Security: How should I secure my containers and infrastructure?
- Monitoring: How can I monitor the health and performance of my microservices and hosts?
- Cloud Traffic Control: How can I control the complex emergent behaviors that an organically growing microservices architecture exhibits?
We then conclude with an outlook on the future of the microservices ecosystem.
Microservices have to run somewhere, and public cloud is often the most accessible place to host containers or virtual machines with just the right amount of resources you’ll need for each particular workload.
What makes AWS stand out is the sheer number of “lego blocks” Amazon offers to cloud architects and its relentless pace of innovation. As a result, there is little, if anything, you’ll find missing from AWS for a successful microservices deployment. The most popular services relevant to microservices are Amazon Elastic Container Service (ECS) and Amazon Elastic Container Service for Kubernetes (EKS) for container orchestration, Amazon API Gateway for creating and securing APIs, and the tried-and-tested Elastic Load Balancer (ELB) for routing traffic.
On the Azure side, Microsoft offers Azure Service Fabric. This open source project from Microsoft is designed to help developers build and run decomposed applications. The Service Fabric offering features built-in container and service orchestration, supports both stateful and stateless applications, and integrates with a variety of popular IDEs. In fact, Microsoft “eats its own dog food” and claims that Skype, Azure Cosmos DB, Dynamics 365, and about half a dozen other Microsoft products rely on the platform. Besides Service Fabric, Azure also provides Kubernetes for running container workloads as Azure Kubernetes Service (AKS).
Aside from the Amazon and Microsoft public clouds, there are additional public and on-premises hosting options available for microservices deployments, including Google Cloud Platform, IBM Cloud, Oracle Cloud, and Pivotal. As the creator of Kubernetes, Google prides itself on being the premier cloud for Kubernetes deployments.
Containers offer isolated slices of computing resources on the host operating system with far less overhead than full, hypervisor-based virtual machines. This makes containers the perfect environment on which to run elastic microservices. But of course, when an application is decomposed into many microservices, it is going to need many containers, and more containers mean more potential problems and new complexity. How does one go about automating the deployment, management, scaling, networking and availability of these containers?
Kubernetes, which was originally developed by Google, is the most popular container orchestration tool today and has the most robust ecosystem around it. It groups related containers logically into “pods” and then ensures that there are always enough pod replicas running for any particular service. One of the most attractive features of Kubernetes is its portability: when you build on top of Kubernetes, you can move workloads across infrastructure, as long as the infrastructure supports Kubernetes. Essentially, “write once, run everywhere.”
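The “ensure enough replicas are running” behavior is a reconciliation loop: the orchestrator continuously compares desired state to observed state and computes a corrective action. The following is a conceptual sketch of that idea in Python, not actual Kubernetes code; the function name and return shape are invented for illustration.

```python
# Conceptual sketch of an orchestrator's reconciliation loop:
# compare the desired replica count against the pods actually running
# and decide what corrective action (if any) to take.

def reconcile(desired_replicas: int, running_pods: list) -> dict:
    """Return the action needed to converge on the desired replica count."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return {"action": "create", "count": diff}   # scale up
    if diff < 0:
        return {"action": "delete", "count": -diff}  # scale down
    return {"action": "none", "count": 0}            # already converged
```

Kubernetes runs loops like this continuously, which is why a crashed container is replaced automatically: the observed state drifts from the desired state, and the controller corrects it.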
Aside from Kubernetes, there are other container orchestration tools, including Docker Swarm and Apache Mesos, as well as “Kubernetes-as-a-service” offerings from all the major public clouds, such as Amazon EKS, Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS).
An API Gateway is a reverse proxy that maps microservices to APIs and exposes them to the outside world. As the name implies, it acts as a “gatekeeper” between the clients and microservices. The basic features of a typical API Gateway include the ability to authenticate requests, enforce security policies, provide load balancing and throttle requests if necessary. For a more detailed discussion about API Gateways, check out our “What is an API Gateway?” post.
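Two of the basic gateway features mentioned above, request authentication and throttling, can be sketched in a few lines of Python. This is a toy model, not any real gateway’s API; the API keys, status codes as return values, and token-bucket parameters are assumptions for illustration.

```python
import time

class ApiGateway:
    """Toy gateway: authenticate a request by API key, then rate-limit it
    with a token bucket before (notionally) proxying it upstream."""

    def __init__(self, api_keys, rate=5, per_seconds=1.0):
        self.api_keys = set(api_keys)
        self.rate = rate                  # bucket capacity and refill rate
        self.per_seconds = per_seconds
        self.buckets = {}                 # key -> (tokens, last_refill_time)

    def handle(self, api_key, now=None):
        now = time.monotonic() if now is None else now
        if api_key not in self.api_keys:
            return 401                    # unauthenticated
        tokens, last = self.buckets.get(api_key, (self.rate, now))
        # Refill proportionally to elapsed time, capped at bucket capacity.
        tokens = min(self.rate, tokens + (now - last) * self.rate / self.per_seconds)
        if tokens < 1:
            self.buckets[api_key] = (tokens, now)
            return 429                    # throttled
        self.buckets[api_key] = (tokens - 1, now)
        return 200                        # a real gateway would proxy upstream here
```

Real gateways add load balancing, TLS termination and policy plugins on top, but the authenticate-then-throttle pipeline is the core shape.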
Amazon’s API Gateway is a fully managed service for developers to create, publish, maintain, monitor, and secure APIs. If you want to expose AWS workloads as APIs, it’s worth considering it due to its seamless integration into the AWS Management Console and other AWS services.
Ambassador is a Kubernetes-native API Gateway built on top of the Envoy proxy that serves as an ingress controller. Its easy integration with Kubernetes and simple configuration using YAML files makes it a good option for microservices running exclusively on a Kubernetes cluster.
Kong is a cloud-native API Gateway written mostly in Lua that is extensible through both open source and proprietary plugins. It is easily integrated with Kong’s Service Control Platform, which is an API management solution from the same vendor.
The “service middleware” category encapsulates the software that load-balances, caches, proxies and routes requests to microservices, and potentially secures incoming traffic. Although there is some overlap with the functionality of API Gateways, service middleware does not itself expose a microservice as an API.
NGINX is one of the most popular web servers on the planet and can also be used as a reverse proxy, load balancer and cache. In the context of a microservices architecture, organizations leverage NGINX to load-balance and route requests to microservices, perform service discovery, encrypt communications and cache data.
Envoy is a Layer 7 proxy often used as the data plane in service meshes. Envoy was developed at Lyft and quickly displaced other proxies thanks to its convenient configuration API, which allows control planes to adjust its behavior quickly and in real time.
First things first, a service mesh is not a “mesh of services.” It is a mesh of proxies that microservices can plug into to completely abstract the network away. In a typical service mesh, service deployments are modified to include a dedicated “sidecar” proxy. Instead of calling services directly over the network, services call their local sidecar proxies, which in turn encapsulate the complexities of the service-to-service exchange. For a deeper dive into service meshes, check out our “What is a Service Mesh?” blog post.
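The “call your local sidecar instead of the network” pattern described above can be illustrated with a small sketch. This is a conceptual model, not Envoy or any real mesh; the service names, registry and address are invented for the example.

```python
# Illustration of the sidecar pattern: the service only ever talks to its
# local proxy, which handles discovery, retries and TLS on its behalf.

class SidecarProxy:
    def __init__(self, registry):
        self.registry = registry          # service name -> upstream address

    def call(self, service, payload):
        upstream = self.registry.get(service)
        if upstream is None:
            return {"status": 503, "error": f"no endpoint for {service}"}
        # A real sidecar (e.g. Envoy) would open a mutual-TLS connection to
        # `upstream`, apply retry/timeout policy and emit telemetry here.
        return {"status": 200, "upstream": upstream, "echo": payload}

class OrderService:
    """Business logic stays network-unaware: it always calls the local
    sidecar, never a remote address directly."""

    def __init__(self, sidecar):
        self.sidecar = sidecar

    def charge(self, amount):
        return self.sidecar.call("payments", {"amount": amount})
```

The payoff is that discovery, encryption and retry policy live in the proxy layer, so they can be changed by the mesh’s control plane without touching application code.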
Istio is an open source service mesh designed to make it easier to connect, manage and secure traffic between microservices running in containers, and to obtain telemetry about them. Istio is a collaboration between IBM, Google and Lyft. It was originally announced in May 2017, and version 1.0 was released in July 2018. Due to its tight integration with Kubernetes, it is the service mesh with the most “buzz” as of this writing.
Linkerd (pronounced Linker-DEE) is an open source network proxy designed to be deployed as a service mesh. Linkerd was the first project to be referred to as a “service mesh” back in 2016 by the project’s sponsor, Buoyant.
With serverless computing, cloud resources are delivered in a “pay-as-you-go” model rather than by pre-purchasing capacity. Function-as-a-Service, or FaaS, is serverless computing that focuses on individual functions, actions or pieces of business logic. The expectation is that when a service calls a function, the process starts within milliseconds, does its work and then spins down its resources. In this model, computing resources end up being completely abstracted away from the developer. If a service’s needs are event-driven, or can be scheduled, queued or chained together, FaaS is worth considering for your microservices architecture. Use cases for FaaS include real-time file and stream processing, ETL and IoT.
AWS Lambda is hands down the most popular FaaS on the market today. It lets organizations run code without provisioning or managing servers. The fact that it also integrates with many popular AWS services makes it a natural choice for those looking to start experimenting with FaaS.
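To make the model concrete, here is a minimal Lambda-style function in Python: a stateless handler invoked once per event, with no server to provision. The `(event, context)` signature matches the AWS Lambda Python convention, but the event shape (an S3-style list of records with sizes) is an assumption made up for this example.

```python
import json

def handler(event, context=None):
    """Count bytes across the records of an (assumed) S3-style event.
    Stateless: everything the function needs arrives in `event`."""
    total = sum(record.get("size", 0) for record in event.get("records", []))
    return {"statusCode": 200, "body": json.dumps({"bytes": total})}
```

Because the handler is a plain function, it can be unit-tested locally by calling it with a sample event, which is one of the practical attractions of the FaaS model.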
To no one’s surprise, Microsoft has an answer to Amazon’s AWS Lambda. Azure Functions offers much of the functionality of Lambda, integrates tightly with other Azure services and can be used for similar use cases.
Securing microservices and the containerized environments they run in poses a unique set of security challenges for developers and operators alike. Although containers are not yet as ubiquitous as physical and virtual servers, the 2018 Cyberthreat Defense Report, which surveyed over 1,200 enterprise security professionals, already concluded that “application containers and mobile devices comprise the greatest security challenges to today’s organizations.” What types of challenges do organizations face when moving to containers?
Development and Deployment Speed: As build and deployment flows continue to become optimized within organizations, shipping more code means more potential vulnerabilities.
Traffic and Scale: As more applications are decomposed into smaller services, we can expect more data traffic between the services. This often results in having to create and maintain more complex access control rules.
Complexity: With hundreds or even thousands of interconnected services, it can be challenging to “see the big picture,” let alone secure it. This is especially true when emergent or unpredictable behaviors between the services spontaneously start to occur.
Shared Environments: Many organizations are running their microservices in public or hybrid clouds. This not only introduces potential security vulnerabilities from other services running on the same physical host, but also means that not all of the tooling used to secure an on-premises environment is available in the public cloud, and when it is, it may work in fundamentally different ways.
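The access-control growth described under “Traffic and Scale” can be sketched as a default-deny rule table over service-to-service calls. The service names and rules below are hypothetical; the point is that every new service multiplies the edges that must be explicitly reviewed and whitelisted.

```python
# Hypothetical service-to-service access-control rules. Each decomposition
# step adds edges to this table, which is why the rules grow so quickly.

ALLOWED_CALLS = {
    ("frontend", "orders"),
    ("orders", "payments"),
    ("orders", "inventory"),
}

def is_call_allowed(caller: str, callee: str) -> bool:
    """Default-deny policy: only explicitly whitelisted edges may talk."""
    return (caller, callee) in ALLOWED_CALLS
```

With hundreds of services, tables like this are no longer maintainable by hand, which is where the zero-trust products discussed next come in.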
Twistlock, founded in 2015, was one of the first container security companies to emerge on the scene. They offer container security and vulnerability monitoring that helps organizations go from build to production.
Tigera, founded in 2016, is focused on offering organizations zero trust network security and continuous compliance for Kubernetes.
Aporeto is a start-up that offers cloud native, zero-trust security for containers and microservices. Their solution uses identity context, vulnerability data, threat monitoring and behavior analysis to build and enforce authentication, authorization and encryption policies.
Aqua offers security for a range of microservices-centric technologies, including Kubernetes, Docker, OpenShift and AWS services like Lambda.
Monitoring a monolithic application and assessing its health differs fundamentally from monitoring the dozens of services that can make up a decomposed version of the same application. For example, while a monolithic application is in practice either “up” or “down,” a well-implemented microservices architecture may still deliver sufficient functionality even when parts of it are unavailable. This is why monitoring the health of a microservice is often more about understanding its impact on the system as a whole than about its own state. Another thing to consider is data volume: more containers mean more monitoring data to collect, analyze and act on. By now, most organizations have implemented some form of APM solution, so the question becomes: “Can my existing APM solution effectively handle monitoring and analyzing data from dozens or hundreds of services, and help me understand the impact of individual outages or slowdowns on the system as a whole?”
Prometheus is an incredibly popular open-source monitoring and alerting project that is hosted by the CNCF and used widely by both small startups and large enterprises. Prometheus’ popularity has a lot to do with its robust and extensible feature set, which includes things like a time-series data model, a flexible query language and a pull vs. push model of collecting data, along with a wide variety of 3rd-party graphing, dashboarding and alerting integrations.
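Prometheus’ pull model means the application simply exposes its current metric values in a plain-text format and the Prometheus server scrapes them on its own schedule. The sketch below hand-rolls a tiny counter in that text exposition format to show the idea; in practice you would use the official `prometheus_client` library, and the metric name here is just an example.

```python
# Minimal illustration of Prometheus' pull model: the app keeps counters
# in memory and renders them in the text exposition format that the
# Prometheus server parses when it scrapes the /metrics endpoint.

class Counter:
    def __init__(self, name, help_text):
        self.name, self.help_text, self.value = name, help_text, 0.0

    def inc(self, amount=1.0):
        self.value += amount

    def exposition(self):
        # HELP and TYPE comment lines, then "name value" samples.
        return (f"# HELP {self.name} {self.help_text}\n"
                f"# TYPE {self.name} counter\n"
                f"{self.name} {self.value}\n")

requests_total = Counter("http_requests_total", "Total HTTP requests served.")
requests_total.inc()
requests_total.inc()
```

Because the server pulls rather than the app pushing, a scrape failure is itself a strong health signal, and adding a new service to monitoring is just a matter of pointing Prometheus at its endpoint.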
Instana has differentiated itself from other APM vendors by treating microservices as “first-class” monitoring citizens instead of an afterthought. Instana’s platform helps organizations discover and map their container deployments, manage SLAs, identify the software composition of containers and trace requests from beginning to end.
Aside from Prometheus and Instana, there is also the option of using one of the many well-known APM vendors and testing the depth of their microservices support. These vendors include AppDynamics, New Relic, Datadog, Dynatrace, SignalFx and LightStep.
Glasnostic is a cloud traffic controller that lets digital enterprises detect and remediate the complex emergent behaviors that their connected Service Landscapes exhibit so they can innovate faster and with confidence.
As a cloud traffic controller, Glasnostic is a control plane for organic architectures. When an organization can control its microservice environment, it can grow and evolve its product portfolio in a more rapid and agile manner. At a more technical level, Glasnostic inserts itself cleanly into the network data plane without affecting developers, processes or stacks. It uses no agents, sidecars or similar voodoo. It plays nice, works with every platform, orchestrator, service mesh or technology stack, and runs everywhere.
At Glasnostic, we believe the continuing trend to “decouple everything” will drive developments in the microservices ecosystem that will make for some interesting changes in the market landscape we presented at the outset. For instance, if Istio, as we predict, becomes “the” service mesh and ultimately turns into a feature of Kubernetes, service mesh as a category may cease to exist. Similarly, if API gateways continue to manage access to applications from different groups across the enterprise, as we also predict, they may become a feature of API management.
As each category continues to jostle for position in the race to become the “most valuable category,” we’ll see increased functional overlap between them and a corresponding shift of the entire landscape. Watch this space!