At the beginning we all start building an application. Then some part of it takes off, and before you know it you are throwing new pieces onto the original application until it becomes unwieldy. Thus is born the Monolith. In time your system gets large, cumbersome, and hard to maintain. Then you hear about microservices (or maybe you started that way because it was in fashion when you began development) and start building them. But in developing microservices we can often forget what they are and why we are doing it. It’s not because it’s the cool thing to do, nor is it the easiest thing to do (quite the opposite), but because it’s the smart thing to do.
Why Not a Monolith?
This is a very valid question. Many people will point to the ease of creating and maintaining a Monolithic application: in a way, you have just one thing to manage, and it may live on only a few servers. But as the Monolith grows it becomes larger and more internally complex. Soon your little baby monolith that was easy to manage and deploy becomes cumbersome and unwieldy.
Then you begin to long for the simple days, when a large regression suite didn’t need to run for the smallest change and you didn’t worry about whether your server would have enough power to handle the load. Maintenance becomes a nightmare, team management becomes a pain, and you move slowly.
The solution: break things up into independently releasable pieces of software. This is where microservices come in.
But what is a Microservice?
If you do a search on Google you will find a ton of different videos, articles, and tweets about microservices, what they are, and how to implement them. But I feel this quote from Sam Newman strikes at the core of what they are:
Microservices are small, autonomous services that work together
What I like about this definition is how broad it is. Most people assume a microservice is an API endpoint that communicates via HTTP, but viewing microservices through that lens is very limiting. A microservice could be a worker sitting on the end of a queue, an OTP process listening for events, or an AWS Lambda function watching for a file drop on S3.
Essentially it boils down to breaking functionality into smaller and smaller pieces that can be deployed without affecting other services. The autonomous part is key here: with autonomy, smaller groups of people can work on updates and deploy them on their own terms. Moving in this direction you can eventually reach a continuous delivery model, which allows patches and features to go live as fast as possible.
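To make the "not just an HTTP endpoint" point concrete, here is a minimal sketch of a microservice as a queue worker. The queue, message shape, and `handle` function are all invented for illustration; in production the queue would be a real broker such as SQS or RabbitMQ.

```python
import json
import queue

# A hypothetical in-process queue standing in for a real message broker
# (SQS, RabbitMQ, etc.); the names here are illustrative, not a real API.
jobs = queue.Queue()

def handle(message):
    """The worker's single responsibility: process one message."""
    payload = json.loads(message)
    return {"order_id": payload["order_id"], "status": "processed"}

# A producer service enqueues work...
jobs.put(json.dumps({"order_id": 42}))

# ...and the worker consumes it, independently of any HTTP endpoint.
results = []
while not jobs.empty():
    results.append(handle(jobs.get()))

print(results)  # [{'order_id': 42, 'status': 'processed'}]
```

The worker can be deployed, scaled, and rolled back on its own schedule, which is exactly the autonomy described above.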
Where To Next?
We are far from having a fully implemented microservice architecture. We have completed the first phase, which Martin Fowler calls the “Strangler Application” phase: wrapping an existing project in microservices and slowly replacing bits and pieces of it until the monolith is gone.
But how do you get started? First you need to think about what "domain" an application or service serves. A domain is quite simply a logical grouping of data that represents a specific function of a business. Domains can in turn have subdomains that help construct them: "Billing", for instance, can be considered a domain with "Payment Methods" and "Billing Addresses" as subdomains. By understanding what domain we are serving, we can better understand where to draw the lines between them. Drawing those lines starts the process of decoupling parts of the application from one another, which lets us release products independently without fear of impacting others (as long as your contract doesn’t change).
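The Billing example above can be sketched in a few lines. This is purely illustrative: the customer data, subdomain stores, and `billing_profile` function are invented. The point is that subdomains stay internal while other services only ever see the domain's public contract.

```python
# A hypothetical sketch: "Billing" as a domain whose subdomains
# ("Payment Methods", "Billing Addresses") are internal details,
# while the rest of the system only sees the domain's public contract.

_payment_methods = {7: {"type": "card"}}      # Payment Methods subdomain
_billing_addresses = {7: {"city": "Berlin"}}  # Billing Addresses subdomain

def billing_profile(customer_id):
    """Public contract: stays stable even if the subdomains change internally."""
    return {
        "payment_method": _payment_methods[customer_id]["type"],
        "billing_city": _billing_addresses[customer_id]["city"],
    }

print(billing_profile(7))  # {'payment_method': 'card', 'billing_city': 'Berlin'}
```

As long as `billing_profile` keeps its shape, the Billing team can reorganize the subdomains underneath it without impacting anyone else.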
In a monolith it is very easy to just join tables and jump across domains to get what you need. But in a microservice world you need to establish who owns what. A good exercise is to imagine you can’t do joins across tables: how do the services communicate? Who owns which data?
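Here is a hypothetical illustration of that exercise. Instead of `JOIN customers ON orders.customer_id`, the orders service asks the customer service through its API; the service names, data, and `get_customer` call are all invented for the sketch.

```python
# Each service owns its own data; the dicts below stand in for their databases.
CUSTOMERS = {7: {"id": 7, "name": "Ada"}}            # owned by the customer service
ORDERS = [{"id": 1, "customer_id": 7, "total": 30}]  # owned by the orders service

def get_customer(customer_id):
    # Stand-in for an HTTP call such as GET /customers/7.
    return CUSTOMERS[customer_id]

def order_summary(order):
    customer = get_customer(order["customer_id"])  # a cross-service call, not a join
    return {"order": order["id"], "customer": customer["name"], "total": order["total"]}

print(order_summary(ORDERS[0]))  # {'order': 1, 'customer': 'Ada', 'total': 30}
```

Asking "who would I have to call here?" every time you reach for a join quickly surfaces the ownership boundaries.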
Eventually you should be able to break up your monolithic database as well, so that each microservice has its own database that properly represents its data. This lets us use the “right tools for the right job” instead of sticking to specific languages all backed by a MySQL database. We could instead build an API whose job is to deliver highly available content through Redis, or a worker that pulls information from a Mongo document store.
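A hypothetical sketch of that "right tool for the right job" idea: each service hides its datastore behind its own interface, so one can be backed by a key-value store and another by documents. Plain dicts and lists stand in here for Redis and Mongo; none of this is a real client API.

```python
class ContentService:
    """Delivers highly available content from a key-value store (Redis-like)."""
    def __init__(self):
        self._kv = {}                # stand-in for Redis
    def put(self, key, value):
        self._kv[key] = value
    def get(self, key):
        return self._kv.get(key)

class CatalogService:
    """Pulls richer records from a document store (Mongo-like)."""
    def __init__(self):
        self._docs = []              # stand-in for a Mongo collection
    def insert(self, doc):
        self._docs.append(doc)
    def find(self, **query):
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in query.items())]

content = ContentService()
content.put("motd", "hello")
catalog = CatalogService()
catalog.insert({"sku": "A1", "price": 10})
print(content.get("motd"), catalog.find(sku="A1"))
```

Because callers only see each service's interface, either store can later be swapped for the real thing without touching any other service.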
How can we make it better?
One of the best ways to make microservices work is to build “shippable” code. What does this mean?
Well, for starters we need to make sure, through a series of tests, that the code does what it’s supposed to do. From unit tests to integration tests, we need confidence that it works. We also need to know if it breaks any “contracts”, the existing communication agreements with other products. This kind of contract verification can be done through testing frameworks like Pact.
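A minimal hand-rolled sketch of the contract idea; frameworks like Pact formalize and automate this, and the contract shape, provider endpoint, and helper below are all invented for illustration. The contract records the fields (and types) a consumer depends on, and the provider's build fails if its response stops satisfying them.

```python
CONTRACT = {"id": int, "status": str}  # the fields the consumer relies on

def provider_response():
    # Stand-in for calling the provider's real endpoint in a test.
    return {"id": 1, "status": "paid", "note": "extra fields are fine"}

def satisfies(contract, response):
    """Check that every contracted field is present with the expected type."""
    return all(key in response and isinstance(response[key], expected)
               for key, expected in contract.items())

print(satisfies(CONTRACT, provider_response()))  # True
```

Running a check like this in the provider's pipeline is what lets each team deploy on its own terms without silently breaking its consumers.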
Once everything builds and we are confident that the code works, we need to make it portable. In the industry this is most often done through containers, which provide a level of abstraction that allows our code to run the same on every machine. By building and pushing a container we can be confident that the API will work on a developer’s machine as well as in production, with only minor configuration changes for each environment.
Once this is in place you can trust that what you ship will work in production, so you can implement continuous delivery along with a fail-safe rollback mechanism. That lets you come in, write a bugfix in the morning, and push it before standup; and if something goes wrong, you just roll back to the previous version and still have it fixed before you go home.