It can be said that microservices have created a great disturbance in the force when it comes to building apps and the monolithic 3-tier development architecture that we’ve known so well for many years. In a microservice architecture (MSA), software modules can be developed, deployed and maintained independently and then re-used in multiple applications. Individual modules, or microservices, are well-defined, discrete capabilities that are available for use through simple, lightweight, language-neutral APIs. Each microservice is, in essence, a building block that can be used to assemble an application.
The promise of the microservices approach is compelling. Instead of large development teams building, integrating, testing and later maintaining a monolithic application, we can now have small, focused teams working on tightly defined functionality, testing, deploying and revising work independently of other teams. No integration nightmares, no finger-pointing, no months-long QA and revision cycles. Add all this up and the result is faster software development, tremendous efficiency, flexibility and agility.
Sounds fantastic! What could possibly be wrong with all this, you ask?
What we’ve learned from the first wave of early adopters is that there are some unexpected development and operational pitfalls with microservices that can dilute the expected gains and even endanger the entire project.
From a development perspective, there are three broad areas where most microservice initiatives risk running aground: unexpected complexity, technology shortcomings and execution honeypots.
First is the often-surprising level of complexity inherent to an MSA. While moving to more compact, granular modules does make individual microservices easier to write, the complexity of the monolithic codebase doesn’t disappear; it merely shifts to the network and into the laps of application developers tasked with composing the application from modules. Most MSA early adopters have found that coordinating microservices in application business logic requires, to put it mildly, a deft touch. Handling lookups, latency, fault tolerance, differing message formats, load balancing, caching and context coordination over the network, and across many microservices, is the stuff of nightmares for a developer. Unsurprisingly, the cyclomatic complexity of the application code skyrockets, making the app both fragile and hard to maintain. On top of that, more highly skilled developers are needed and project timelines become harder to predict. The outcome: cost overruns, poor product quality and missed deadlines. Not good.
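To make the shift concrete, here is a minimal sketch of what a single composed operation can look like once timeouts, retries and fallbacks move into application code. The service names (`fetch_order`, `fetch_prices`) and the retry policy are hypothetical, stand-ins for real remote calls:

```python
import time

def call_with_retries(remote_call, *, retries=3, backoff_s=0.01, fallback=None):
    """The kind of wrapper every service-to-service hop ends up needing:
    retry transient failures with exponential backoff, then fall back."""
    for attempt in range(retries):
        try:
            return remote_call()
        except ConnectionError:
            time.sleep(backoff_s * (2 ** attempt))
    if fallback is not None:
        return fallback          # degrade gracefully rather than fail the page
    raise RuntimeError("service unavailable after %d attempts" % retries)

# Stand-in for a flaky downstream service: fails twice, then succeeds.
_attempts = {"n": 0}
def fetch_order(order_id):
    _attempts["n"] += 1
    if _attempts["n"] < 3:
        raise ConnectionError("transient network error")
    return {"id": order_id, "items": ["sku-1"]}

# Stand-in for a downstream service that is down entirely.
def fetch_prices(items):
    raise ConnectionError("pricing service is down")

def get_order_page(order_id):
    # What used to be two in-process calls in the monolith now carries
    # retry, backoff and fallback policy for every hop.
    order = call_with_retries(lambda: fetch_order(order_id))
    prices = call_with_retries(lambda: fetch_prices(order["items"]),
                               fallback={"total": "unavailable"})
    return {"order": order, "prices": prices}
```

Multiply this by every interaction in the assembly, each with its own failure modes, and the source of the skyrocketing cyclomatic complexity becomes obvious.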
Second are the fundamental technical limitations of REST and HTTP/JSON transport when used for microservice interaction. HTTP was designed for text-based document retrieval in a browser and lacks the basic capabilities needed to build application protocols: there is no bi-directional interaction, flow control, exception handling, cancellation or load balancing … and the list goes on. Inevitably, most MSA implementations resort to a patchwork of external infrastructure (sidecars, message brokers, pub/sub caches and so on) to fill the gaps and get bi-directional, asynchronous interactions working. By the time all is said and done, most are left with a Rube Goldberg-like contraption that is often as difficult to configure and maintain as the monolithic application being replaced. Not good at all.
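The missing bi-directional interaction shows up most plainly as polling glue code. Since plain request/response HTTP gives the server no way to push a result, clients end up with loops like the hypothetical sketch below (names are illustrative, not a real API) — or bolt on brokers and sidecars to avoid writing them:

```python
import time

def poll_for_result(check_status, interval_s=0.01, max_polls=50):
    """With plain request/response HTTP there is no server push or
    bi-directional stream, so clients fall back to polling loops."""
    for _ in range(max_polls):
        status = check_status()          # one more GET per iteration
        if status["state"] == "done":
            return status["result"]
        time.sleep(interval_s)           # wasted wait between requests
    raise TimeoutError("gave up waiting for the downstream job")

# Stand-in for GET /jobs/<id>: the job finishes on the third poll.
_polls = {"n": 0}
def job_status():
    _polls["n"] += 1
    if _polls["n"] < 3:
        return {"state": "running"}
    return {"state": "done", "result": "report-7.pdf"}
```

Every polling loop like this trades latency against request volume, and neither choice is good; protocols with true streaming and flow control eliminate the loop entirely.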
Finally, while trying to overcome HTTP/JSON’s shortcomings, it is very easy to fall for a common, rather insidious honeypot. Since there is no natural way to specify a bi-directional, RPC-style interface in the REST/HTTP world, microservice developers resort to “shipping” their APIs as libraries. Worse, it is very tempting (and indeed very common) to add non-interface logic to these libraries: error handling, caching, defensive code and business logic frequently seep into the “API.” This introduces a hard coupling between application and microservice. If the microservice’s business logic changes, every consuming application must re-pull, re-test and re-verify, effectively making the deployment behave like a monolith … the very thing MSAs seek to eliminate. Unacceptable.
The first wave of MSA implementations has yielded many valuable lessons. The limitations of the technology, as well as the challenges application developers face, have become better understood. Thankfully, this has sparked fervent activity in the form of new technologies, startups and projects aiming to solve these problems.
My advice to anyone embarking on a new MSA implementation is to bake these new developments into their plan. Projects such as RSocket and gRPC are a good place to start. Others, such as Netifi Proteus, are simultaneously tackling both ease of development and performance at scale with an IDL compiler, application-centric transport and clever routing. Start with these new-generation, microservices-centric technologies as a base; don’t default to reinventing the wheel with REST/HTTP.
If your MSA project is already underway, be mindful of inadvertent coupling via shared libraries. Consider these rules of thumb offered by one company specializing in microservices:
- If it contains business logic, it shouldn’t be a library;
- If it changes frequently based on new requirements, it shouldn’t be a library;
- If it introduces coupling, it shouldn’t be a library.
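The rules of thumb above can be seen in code. Below is a hedged sketch contrasting the anti-pattern with the thin-interface alternative, using a hypothetical pricing service (the class names, the “gold tier” discount rule and the transport stub are all invented for illustration):

```python
# Anti-pattern: a "client library" for a hypothetical pricing service
# that smuggles a business rule into every consumer's deployment.
class PricingClientFat:
    def quote(self, amount, customer_tier):
        price = self._fetch_base_price(amount)   # the actual remote call
        if customer_tier == "gold":              # business logic: every consumer
            price *= 0.9                         # must re-ship, re-test and
        return round(price, 2)                   # redeploy when this changes

    def _fetch_base_price(self, amount):
        return amount  # stand-in for the remote call

# Preferred: the library only describes the interface; the discount
# rule lives (and changes) behind the service boundary.
class PricingClientThin:
    def quote(self, amount, customer_tier):
        # serialize the request and let the service apply its own rules
        return self._call("quote", {"amount": amount, "tier": customer_tier})

    def _call(self, method, payload):
        # stand-in for the transport; in reality this rule runs server-side,
        # invisible to (and decoupled from) the consuming application
        discount = 0.9 if payload["tier"] == "gold" else 1.0
        return round(payload["amount"] * discount, 2)
```

Both clients return the same answer today, but only the thin one lets the discount rule change without redeploying every application that quotes a price.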
Now, let’s look at runtime operational issues associated with the implementation of microservices. When deployed, a microservices assembly looks very different from a traditional, monolithic application. Instead of a handful of hosts running a well-contained set of application bits, we now have a large number of services running across potentially heterogeneous networks and on an unknown set of hosts (or no hosts at all, in the case of serverless components). What’s more, the topology isn’t static, as microservices are reusable and often participate in multiple application assemblies. This presents a huge challenge for operations teams tasked with keeping things running smoothly. The two biggest headaches are using the wrong automation tools and failing to appreciate how differently MSAs and monolithic applications behave. Let me explain what I mean by that.
Tools available to DevOps teams have made great strides in recent years. It’s safe to say we are now spoiled for choice when it comes to implementing the continuous integration, delivery and deployment pipeline. Monitoring, however, hasn’t moved at the same pace. Monitoring and diagnostics tools are still, by and large, stuck in the agent-based, on-host, white-box-metrics paradigm. MSAs, by their distributed and dynamic nature, need a fundamentally different approach to monitoring, and the same holds true for diagnostics and root-cause identification. Early MSA adopters attempting to use traditional monitoring tools have learned this the hard way.
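One way to picture the needed shift: instead of collecting CPU and memory per host, aggregate request metrics per caller-to-callee edge. The sketch below is illustrative only, not any real tool’s API, and the service names are invented:

```python
from collections import defaultdict

class EdgeMetrics:
    """Aggregate error rate and latency per caller->callee edge rather
    than per host: in an MSA it is the interactions between services,
    not the boxes they run on, that determine health."""
    def __init__(self):
        self._latencies = defaultdict(list)
        self._errors = defaultdict(int)

    def record(self, caller, callee, latency_ms, ok=True):
        edge = (caller, callee)
        self._latencies[edge].append(latency_ms)
        if not ok:
            self._errors[edge] += 1

    def error_rate(self, caller, callee):
        total = len(self._latencies[(caller, callee)])
        return self._errors[(caller, callee)] / total if total else 0.0

    def p95_latency(self, caller, callee):
        samples = sorted(self._latencies[(caller, callee)])
        if not samples:
            return None
        return samples[int(0.95 * (len(samples) - 1))]
```

A host-level agent would report both sides of a degraded edge as healthy machines; the edge view surfaces exactly the interaction that is failing.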
Additionally, the loosely coupled, distributed structure of a microservices assembly often causes trip-ups. As different microservices begin to act as a single entity in an application assembly, an additional layer of complexity is introduced. For instance, a performance degradation in one microservice may impact the other microservices in the application. Worse yet, a random traffic spike hitting a single microservice may cascade through the assembly and bring application response times to a crawl (much like a handful of rubberneckers can back up traffic for miles). By comparison, in a traditional, monolithic architecture, everything was integrated and tested as a unit, so such unpredictability was seldom an issue.
By now, it’s clear that microservices force us to think differently about how application deployments behave and the tools we need to manage them. Host-based monitoring is no longer sufficient because it’s the interactions between microservices that determine performance and availability. Our tools and remediation strategies must reflect this. Trailblazers in deploying microservices, such as Netflix, sensed this very quickly and created their own tools for dealing with this paradigm shift.
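Netflix’s Hystrix, for example, popularized the circuit-breaker pattern for containing exactly these cascades. Here is a minimal sketch of the idea (not Hystrix’s actual API; thresholds and names are illustrative):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive
    failures, stop calling the struggling service for `reset_s`
    seconds so one slow service cannot drag down the whole assembly."""
    def __init__(self, max_failures=3, reset_s=5.0):
        self.max_failures = max_failures
        self.reset_s = reset_s
        self.failures = 0
        self.opened_at = None            # None means the circuit is closed

    def call(self, remote_call, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_s:
                return fallback()        # open circuit: fail fast, shed load
            self.opened_at = None        # half-open: allow one probe
            self.failures = 0
        try:
            result = remote_call()
            self.failures = 0            # success closes the circuit
            return result
        except ConnectionError:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()   # trip the breaker
            return fallback()
```

By failing fast instead of queuing behind a struggling dependency, the breaker converts a potential assembly-wide slowdown into a localized, degraded-but-responsive experience.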
Fortunately, as with the development side of things, activity in the operations space has heated up, with emerging projects and vendors attempting to close the gap between traditional tools and the needs of MSA deployments. Projects such as Istio tackle traffic management, load balancing and policy control in microservice deployments. Others, like the pure-play startup Glasnostic, monitor microservice interactions directly and offer sophisticated remediation for microservice assemblies. There are several others. These exciting, and very necessary, developments augur the arrival of true MSA-native tools.
My advice to those deploying microservices is to avail themselves of these new tools today. Like the development challenges, operations issues with microservices are not insurmountable. It’s merely necessary to recognize that working with microservices is different from the traditional monolithic architectures we used for so many years, and to adjust accordingly.
We can consider these issues to be “growing pains” as the microservices architecture matures. I don’t believe there is any holding back the progress and increased adoption of microservices with its many advantages.
I’m not alone in my thinking. Industry analyst firm IDC recently published its “IT Industry 2018 Predictions,” which included:
“(Prediction 5:) By 2021, Enterprise Apps Will Shift Toward Hyperagile Architectures, with 80% of Application Development on Cloud Platforms (PaaS) Using Microservices and Cloud Functions (e.g., AWS Lambda and Azure Functions) and Over 95% of New Microservices Deployed in Containers (e.g., Docker)”
To summarize, there are tremendous benefits to deploying microservices, but we’ve learned some hard lessons about dealing with their complexities and what it takes to overcome them. The good news is that automation tools are emerging that will help get the job done.