You've developed a microservice. You know it is one because it does one thing well, can be independently versioned as well as deployed, and, best of all, the consultants you employed to help say it is too! Maybe you've even had it in production use for a while, receiving positive feedback on the benefits a service-oriented approach brings. Let's assume you developed the service so it can run within a Linux container (some other container technology, including one based on the JVM, would be just as suitable for this example). Hopefully you've embraced immutability and therefore take the approach of producing a new instance each time you need to make a modification. So far, so good.


As I've mentioned before, once you start down the microservices road, as with other service-oriented approaches dating back beyond even CORBA, you immediately enter the world of distributed computing, with all that entails. Therefore, it is inevitable that at some point you, your team, or some group of developers in the future will wonder what they can do to improve performance or reliability in the face of distributed invocations and partial, independent failures. Co-location of services will likely be close to, if not at the top of, the list of things to try. Let's face it, the ability to improve the networking interconnect is limited in any meaningful timeframe, as is finding the money to purchase machines with a higher MTTF and lower MTTR (plus entropy increases, so you're going to have failures eventually). That leaves moving services (physically) closer together to reduce network latency and increase the probability that they fail as a logical unit. OK, let's stop there for a second and back up a bit: just to be clear, I'm talking about services which are so closely related that they rely upon each other to work, though they can be invoked independently as well.


At some point some group or groups of developers will come (back around) to making microservices infrastructures dynamic, in the sense that individual placements of services are (initially) made based on heuristics derived from inter-service communications (interactions) to reduce network overhead. And these placements will (eventually) be recomputed frequently so that services can be redeployed if those usage patterns change and new clients come into play which need the services (or copies of them) placed closer to them. So it goes that eventually microservices will want to be placed within the same container. As I mentioned before, this could be the same Linux container, especially if each service is a separate process, or it could be the same language container, such as an OSGi container if each service is an OSGi bundle. And whilst these co-location deployments could initially be done in a volatile manner, such that a reboot of the container means the services are no longer co-located, it makes sense to create a new durable instance of the container if the updated configuration proves valuable.
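
To make the placement heuristic a little more concrete, here is a minimal sketch of the kind of calculation such an infrastructure might start with: count the observed interactions between services and flag the chattiest pairs as candidates for co-location. Everything here (the service names, the call log, the threshold) is hypothetical and purely illustrative; a real placement engine would obviously weigh far more than raw call counts.

```python
from collections import Counter

# Hypothetical log of observed inter-service calls, as (caller, callee) pairs.
call_log = [
    ("orders", "inventory"), ("orders", "inventory"), ("orders", "inventory"),
    ("orders", "billing"), ("billing", "audit"), ("billing", "audit"),
]

def co_location_candidates(calls, threshold=3):
    """Count interactions per unordered service pair and return the pairs
    whose call volume meets the threshold, i.e. candidates for placement
    in the same container (or at least on the same host)."""
    volume = Counter(frozenset(pair) for pair in calls)
    return [tuple(sorted(pair)) for pair, count in volume.items() if count >= threshold]

# Recomputing this as traffic patterns change is what would drive services
# to be (re)deployed closer together over time.
print(co_location_candidates(call_log))  # [('inventory', 'orders')]
```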


However, that then leads me finally to the question in the title: you had multiple microservices before they were co-located in the same container, but does that change now? Are they still microservices? Maybe not if they can't be redeployed independently, but as I mentioned earlier, maybe they aren't that independent anyway. In which case, maybe they should have been collapsed into a single microservice in the first place? Lots of questions!


To be perfectly honest, I'm not hung up on the "independently deployable" aspect of microservices in general. I think dependencies between components, objects, services etc. in distributed systems are things which may ebb and flow over time and with usage patterns. I think the more important aspects are the service-oriented nature, with a well-defined contract between user(s) and service, quick deployment ('quick' being a relative term), and well-defined APIs. Therefore, in my book these co-located microservices may still be microservices, or maybe composite microservices (what about a milliservice?). But one thing I'm sure of is that some people will disagree, and the "goodness" that is the lack of standards in this area will encourage these kinds of discussions for a while to come.

I've spent 30 years designing, developing, and otherwise being involved with distributed systems. I love the challenges they present, especially around my own speciality of fault tolerance. Whether it's different consensus models, the duality of orchestration and choreography, replication techniques or different transaction models, to name but a few, working with distributed systems is thought-provoking. And in today's world of ever-connected devices at scale, it's more so than at any time in the last few decades.


There are many good architectural reasons why you might want to, or need to, employ a distributed approach to your application. Code you rely on may be running somewhere other than where your own business logic runs, may be implemented in a different language, or may need to be replicated to improve availability, for instance. It may even be the case that your distributed system evolved over time from a more localised implementation, e.g., a capability you wrote now needs to be shared between groups, and it makes sense to place replicated copies physically closer to them.


Distributed systems make a lot of sense for many applications and developers. In the words of RFC 2119, they MAY help to solve some particularly tricky issues, but they SHALL cause other problems of which you MUST be aware. Distributed systems are great. But you know what? A centralised system may be far more appropriate for what you need. Why do I mention this, and why do I think it's important that developers and business owners realise it? Because if you listen to our industry at the moment, you'd be forgiven for believing that all applications need to be decomposed into (micro)services, each residing in its own (Linux) container and communicating using HTTP (hopefully at least using REST too).


If you've got a centralised system, that doesn't mean it's necessarily a monolith. Likewise, distributed systems aren't necessarily more agile, more lean, or less monolithic in nature. As a developer, architect or business owner you shouldn't feel ashamed to admit "I'm centralised and I'm proud!" Don't assume that microservices are going to solve architectural problems simply because they are distributed, and even if they do, they will certainly introduce challenges you don't have to worry about in a local environment. Now don't get me wrong, I appreciate the ideas around microservices, as they are influenced by SOA and other experiences over the years. Unfortunately, some of those who are pushing microservices strongly fall into one or more of the following categories: they don't care to learn about distributed systems; they don't believe they have the time to learn about the pitfalls of distributed systems (our industry moves at a pace); they have an agenda which isn't necessarily conducive to your productivity; or maybe they really do believe they're doing the right thing by adopting these newfangled ways. And of course there are proponents of microservices architectures who really do understand the trade-offs they represent and will faithfully present them to you so you can make an informed choice.


Furthermore, citing examples of successes such as Netflix or Amazon is hardly fair to the large number of applications and vendors who don't use distributed systems, let alone microservices, and would still consider themselves successful. Of course there are things we can learn from the likes of Netflix. Of course there are lessons we can apply when considering microservices. But just because you are developing a centralised system does not mean you are a failure or should be consigned to the garbage can of history!


Alright, if you've read this far you would be forgiven for thinking I don't like microservices. But you'd also be missing the point. Just as we've learned over the years that writing distributed systems is often necessary and a core requirement for some applications, so too is developing with a microservices architecture. What I'm trying to show, though, is that you'd better understand why you need to distribute your services, as well as the fundamental implications such an approach entails. And maybe, just maybe, going back to, or remaining, centralised is really the right thing for you.
