The current shift to microservice architectures brings many benefits, but at the cost of increased complexity. In this post, I’m going to discuss some ways to mitigate that complexity in areas that everyone who adopts microservices has to face.

Defining an API

REST interfaces are a big step forward for web applications, but they can be difficult to define well. The following issues often arise.

Not all interactions map well to a resource-centered model

Just as some programs are more suited to a functional style than an object-oriented one (personally, I think “some” could be replaced with “most,” but that’s a discussion for another time), some web applications are inherently more verb-centric than others. An operation like “restart this server” or “transfer these funds” is fundamentally an action, and contorting it into a resource noun can feel forced.

Pure REST approaches can be chatty

For asynchronous operations, it’s often necessary to construct “task” resources that represent ongoing activities. The client creates a task, polls it until the work completes, and then has to send an additional delete request to clean it up (unless the server provides some kind of cleanup logic, which adds complexity of its own).
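Here’s a minimal sketch of that flow, assuming a hypothetical /tasks endpoint that returns the new task’s URL in a Location header, using Python’s requests library:

```python
import time
import requests

BASE = "https://api.example.com"  # hypothetical service

# Kick off the asynchronous operation; the server responds with a task resource.
resp = requests.post(f"{BASE}/tasks", json={"action": "reindex"})
task_url = resp.headers["Location"]  # assumes the server returns a Location header

# Poll the task resource until the work finishes.
while True:
    task = requests.get(task_url).json()
    if task["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)

# One more round trip just to delete the bookkeeping resource.
requests.delete(task_url)
```

Several requests (plus however many polls it takes) for one logical operation is exactly the chattiness in question.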

Complex resource graphs also require many round trips in a pure REST API, since each linked resource means another request. If you find yourself processing highly connected data, the accumulated latency of those requests can become a serious issue.

If you find yourself in either of these situations, you might want to consider a different model such as GraphQL, which lets a client request exactly the slice of the graph it needs in a single round trip. See this post for more discussion of GraphQL.
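To illustrate, here is a sketch of fetching a connected set of resources in one request, against a hypothetical /graphql endpoint and schema:

```python
import requests

# One POST replaces a whole series of REST round trips for the same data.
query = """
{
  author(id: "42") {
    name
    posts {
      title
      comments { body }
    }
  }
}
"""
resp = requests.post("https://api.example.com/graphql", json={"query": query})
print(resp.json()["data"]["author"])
```

A pure REST client would need one request for the author, one per post, and one per comment list to assemble the same result.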

Building and Testing

If you were an early adopter of the microservice pattern, you probably deployed each microservice in its own virtual machine. The resource requirements of VMs meant that you couldn’t run many of them at the same time on development hardware, so you mocked out the services that the one under test depended on. The complexity of this approach grows quickly with the number of microservices.

Today, with the wide adoption of containers, there is often no need to mock any services at all. You can simply run your entire stack on a developer’s laptop. Though you need to maintain multiple configurations for your orchestration environments (or use a tool that generates configurations from a common model), it’s well worth it to be able to run against the actual service code used in production.
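For example, a minimal docker-compose.yml along these lines (the service names and images are hypothetical) brings up a whole stack with a single docker-compose up:

```yaml
# docker-compose.yml — a sketch of a small three-service stack
version: "3"
services:
  web:
    image: example/web:latest
    ports:
      - "8080:8080"
    depends_on:
      - orders
  orders:
    image: example/orders:latest
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: dev-only-password
```

The same images that run in production run on the laptop; only the configuration differs.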

This is useful for manual testing and user experience vetting as well as automated testing. It’s also a great tool for sales to use in demonstrations.

Packaging

In the past, packaging involved elaborate bundling of application code and dependencies to run in a variety of different deployment environments. Packaging systems (InstallAnywhere and the like) were themselves complex applications and often required significant user interaction to successfully deploy software. Now, packaging for deployment can be almost as simple as packaging for testing. If you package your microservices as containers, you can simply push them to private registries (either self-managed or managed by a hosting provider like AWS) and run them in production via orchestration engines like Docker Compose / Swarm or Kubernetes. The only thing that needs to be packaged for your actual deployment is the configuration information for the orchestrator.
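As a sketch, containerizing a hypothetical Python microservice takes little more than a Dockerfile like this:

```dockerfile
# Package a hypothetical Python microservice as a container image.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "service.py"]
```

Building and publishing is then a matter of docker build -t registry.example.com/orders:1.0 . followed by docker push registry.example.com/orders:1.0 (the registry name here is made up).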

Maintaining a Production Environment

Production deployments have a reputation for being painful and expensive, requiring a lot of care and feeding. Ten to fifteen years ago, almost everything had to be done by hand unless you were a user of Cassatt Active Response (and, trust me, there weren’t very many of those). Auto-scaling and replacing physical hardware are difficult problems. Things became much better with the advent of VM orchestration environments like AWS. Such environments made it easy to configure scaling parameters and health monitoring, which was a boon to production operations. However, the cost of such a deployment made it expensive to duplicate for testing, and bringing up new VM instances was a relatively slow process, making it hard to respond to events in a timely fashion. We basically had to choose between slow response and wasted resources (keeping sufficient excess capacity running to absorb spikes).

Container-based deployments don’t suffer from these issues. Starting a new container is fast and cheap. The free container orchestration environments like Docker Swarm and Kubernetes now both provide health-check services, so we can test our deployment configurations locally at low cost. Auto-scaling is already supported in Kubernetes and will likely soon be supported in Docker Swarm. In the meantime, you can test entirely with Kubernetes locally, or run unit and integration tests against Swarm and switch to Kubernetes for pre-production testing. Again, all you need to change is your orchestrator configuration.
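As a sketch of what those health checks look like in Kubernetes (the service name, image, and /healthz endpoint are hypothetical):

```yaml
# Deployment with a liveness probe for a hypothetical orders service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example/orders:latest
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
```

Kubernetes restarts any container that fails the probe, and a command like kubectl autoscale deployment orders --min=2 --max=10 --cpu-percent=80 layers CPU-based auto-scaling on top of the same deployment.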

Summary

Containers and flexible client APIs have made microservice architectures much more accessible than in the past and have drastically reduced the cost and effort of deploying them. We’ve reached the point at which the only question you need to answer is whether it makes sense to structure your application as a collection of microservices, not whether you can afford to do so.


Interested in learning more about Yipee.io? Sign up for free to see how Yipee.io can help your team streamline their development process.