Yes, it's a rant and I intend to write a more level-headed follow-up on the new Red Hat Developer blog in the next few days. In the meantime, take it for what it's worth.
No one starts out wanting to build a monolith. There are no design meetings where the architect or developers say "you know what? I think it'd be a great thing if we built something that will be hard to evolve and maintain." I think it's also fair to say that conversations like "we've got a great architecture for our system that has helped us be successful, so we need to be sure that it evolves towards a big ball of mud" rarely happen!
And yet monolithic applications do exist. Probably fewer than many people might want to admit, but they are there nonetheless. So the question arises: why? I suppose there's another related question: how? As with so many things in this life, there's no one straight answer; it's a combination of things including:
- Expediency; far too often it's just too easy for developers to hack solutions into an otherwise good architecture without spending the time to understand whether that breaks the architecture. What starts as a simple, small hack can then grow, acting as a catalyst, and a small break in the architecture turns into a fracture.
- Lack of an architect (or of architectural leadership); the original architect(s) leave the project and those who come in to replace them (if they are replaced) can't control the developers, or perhaps don't understand the architecture well enough to ensure it remains "pure". Likewise, different developers coming into the project, either to add to or replace those already there, can dilute the group's knowledge and understanding of the architecture, leading to unforeseen and accidental divergence from the original plan.
- Natural evolution; any system that can be said to be architected has a point at which it's simply impossible to evolve it further and retain the original architecture. Take a look at any (historic) building which may once have been considered an architectural marvel: if it was left mainly alone (not extended) once complete then it's likely still something to behold and admire. But if it gained extensions, new wings etc. then it's likely to be a monstrous carbuncle, unless the original architect was involved, or someone who appreciated/understood the original. Sometimes it's just easier to start from scratch and approach the problem afresh than to try to tack on new features.
- Related to the above, sometimes people try to extend software systems (services) to do more than they really should and in doing so break the architecture or create monoliths.
- Poor tools with which to visualise the software system/architecture, making it harder to track changes and to ensure they don't move the system towards an unmanageable monolith.
Now nothing I've mentioned so far is specific to localised applications. It's just as applicable to distributed systems, and in fact in a distributed environment the architectural issues can become even more important to understand and track. If you've arrived at a monolith then trying to fix that may involve breaking it into components/services/microservices which reside in a distributed environment, but that's not necessarily the only way, or the best way, to resolve the monolith problem. In fact, if you don't understand the architectural issues which have resulted in the monolith then breaking it into components is more likely to result in a distributed monolith (or micromonoliths) than to fix the problem!
Yes, I mentioned the microservices word above for the first time, and this is really an article about them again. As I've mentioned elsewhere, I believe in and understand the need for distributed systems composed of (micro) services. However, what worries me about some of the current emphasis around microservices is the assumption that they will somehow naturally result in a better architecture. That's simply not the case. If you don't put in place the right processes, design reviews, architecture reviews, architects etc. to prevent or forestall a local monolith then you've no hope of achieving a good microservices architecture. And if you don't keep them in place then there's a good chance you'll evolve towards a distributed monolith.
You've developed a microservice. You know it is one because it does one thing well, can be independently versioned as well as deployed, and best of all the consultants you employed to help say it is too! Maybe you've even had it in production use for a while, receiving positive feedback on the benefits a service oriented approach brings. Let's assume you developed the service so it can run within a Linux container (some other container technology, including one based on the JVM, would be just as suitable for this example). Hopefully you've embraced immutability and therefore take the approach of producing a new instance each time you need to make a modification. So far, so good.
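As a concrete (and purely illustrative) sketch of the kind of service I have in mind, here's a single-purpose HTTP endpoint using nothing but the JDK's built-in com.sun.net.httpserver. The path, port and payload are arbitrary choices of mine, and in practice the whole thing would be baked into an immutable container image rather than patched in place.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical single-purpose service: one versioned, well-defined endpoint.
public class GreetingService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // One thing, done (hopefully) well.
        server.createContext("/v1/greeting", exchange -> {
            byte[] body = "{\"message\":\"hello\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        // Any modification means building and deploying a new image/instance,
        // not patching the running one (the immutability mentioned above).
    }
}
```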
As I've mentioned before, once you start down the microservices road, as with other service-oriented approaches dating back beyond even CORBA, you immediately enter the world of distributed computing, with all that entails. Therefore, it is inevitable that you, your team, or some group of developers in the future will wonder what they can do to improve performance or reliability in the face of distributed invocations and partial, independent failures. Co-location of services will likely be close to, if not at the top of, the list of things to try. Let's face it, the ability to improve the networking interconnect is limited in any meaningful timeframe, as is finding the money to purchase machines with higher MTTF and lower MTTR (plus entropy increases, so you're going to have failures eventually). That leaves moving services (physically) closer together to reduce the network latency and increase the probability that they fail as a logical unit. Ok, let's stop there for a second and back up a bit: just to be clear, I'm talking about services which are so closely related that they rely upon each other to work, though they can be invoked independently as well.
At some point some group or groups of developers will come (back around) to making microservices infrastructures dynamic, in the sense that individual placements of services are (initially) made based on heuristics derived from inter-service communications (interactions) to reduce network overhead. And these placements will (eventually) be computed frequently, to enable services to be redeployed if those usage patterns change and new clients come into play which need the services (or copies) placed closer to them. So it goes that eventually microservices will want to be placed within the same container. As I mentioned before, this could be the same Linux container, especially if each service is a separate process, or it could be the same language container, such as an OSGi container if each service is an OSGi bundle. And whilst these co-location deployments could be done in a volatile manner initially, such that a reboot of the container causes the services to no longer be co-located, it makes sense that a new durable instance of the container be created if the updated configuration proves valuable.
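To make the kind of heuristic I'm imagining a little more concrete, here's a minimal sketch in Java (my own illustration, not any existing scheduler's API): count the interactions observed between pairs of services and propose co-locating the chattiest pairs. All the names and the threshold are hypothetical.

```java
import java.util.*;

public class PlacementHeuristic {

    // Order-independent key for a pair of services.
    record Pair(String a, String b) {
        static Pair of(String x, String y) {
            return x.compareTo(y) <= 0 ? new Pair(x, y) : new Pair(y, x);
        }
    }

    private final Map<Pair, Long> interactions = new HashMap<>();

    // Fed from observed (or replayed) inter-service calls.
    public void recordCall(String caller, String callee) {
        interactions.merge(Pair.of(caller, callee), 1L, Long::sum);
    }

    // Pairs whose interaction count exceeds the threshold become candidates
    // for co-location in the same (Linux or language-level) container.
    public List<Pair> colocationCandidates(long threshold) {
        return interactions.entrySet().stream()
                .filter(e -> e.getValue() > threshold)
                .sorted(Map.Entry.<Pair, Long>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        PlacementHeuristic heuristic = new PlacementHeuristic();
        for (int i = 0; i < 500; i++) heuristic.recordCall("orders", "inventory");
        for (int i = 0; i < 20; i++)  heuristic.recordCall("orders", "reporting");
        // Re-run periodically: as usage patterns change, so do the candidates.
        System.out.println(heuristic.colocationCandidates(100)); // [Pair[a=inventory, b=orders]]
    }
}
```

In a real infrastructure the counts would come from tracing or metrics and the threshold would be something far more considered, but the shape of the decision is the same: usage patterns drive placement, and placement is recomputed as those patterns change.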
However, that then leads me finally to the question in the title: you had multiple microservices before they were co-located in the same container, but does that change now? Are they still microservices? Maybe not, if they can't be redeployed independently; but as I mentioned earlier, maybe they aren't that independent anyway. In which case, maybe they should have been collapsed into a single microservice in the first place? Lots of questions!
To be perfectly honest I'm not hung up on the "independently deployable" aspect of microservices in general. I think dependencies between components, objects, services etc. in distributed systems are things which may ebb and flow over time and with usage patterns. I think the more important aspects are the service-oriented nature, with a well-defined contract between user(s) and service, quick deployment ('quick' being a relative term), and well-defined APIs. Therefore, in my book these co-located microservices may still be microservices, or maybe composite microservices (what about a milliservice?). But one thing I'm sure of is that some people will disagree, and the "goodness" that is the lack of standards in this area will encourage these kinds of discussions for a while to come.
I've been designing, developing and otherwise involved in distributed systems for 30 years. I love the challenges they present, especially around my own speciality of fault tolerance. Whether it's different consensus models, the duality of orchestration and choreography, replication techniques or different transaction models, to name but a few, working with distributed systems is thought-provoking. And in today's world of ever-connected devices at scale, it's even more so than at any time in recent decades.
There are many good architectural reasons why you might want to, or need to, employ a distributed approach to your application. Code you rely on may be running somewhere other than your own business logic, may be implemented in a different language, or may need to be replicated to improve availability, for instance. It may even be the case that your distributed system evolved over time from a more localised implementation, e.g., a capability you wrote now needs to be shared between groups and it makes sense to replicate copies physically closer to them.
Distributed systems make a lot of sense for many applications and developers. In the spirit of RFC 2119, they MAY help to solve some particularly tricky issues, but they WILL cause other problems of which you MUST be aware. Distributed systems are great. But you know what? A centralised system may be far more appropriate for what you need. Why do I mention this and why do I think it's important that developers and business owners realise it? Because if you listen to our industry at the moment you'd be forgiven for believing that all applications need to be decomposed into (micro) services, each residing in its own (Linux) container and communicating using HTTP (hopefully at least using REST too).
If you've got a centralised system that doesn't mean it's necessarily a monolith. Likewise, distributed systems aren't necessarily more agile, lean or less monolithic in nature. As a developer, architect or business owner you shouldn't feel ashamed to admit "I'm centralised and I'm proud!" Don't assume that microservices are going to solve your architectural problems simply by virtue of being distributed; and even if they do, they will definitely introduce challenges you don't have to worry about in a local environment. Now don't get me wrong, I appreciate the ideas around microservices as they are influenced by SOA and other experiences over the years. Unfortunately some of those who are pushing microservices strongly fall into one or more of the following categories: they don't care to learn about distributed systems, they don't believe they have the time to learn about the pitfalls of distributed systems (our industry moves at a pace), they have an agenda which isn't necessarily conducive to your productivity, or maybe they really do believe they're doing the right thing by adopting these new-fangled ways. And of course there are proponents of microservices architectures who really do understand the trade-offs involved and will present them to you faithfully so you can make an informed choice.
Furthermore, citing examples of successes such as Netflix or Amazon is hardly being fair to the large numbers of applications and vendors who don't use distributed systems, let alone microservices, and would still consider themselves to be successful. Of course there are things we can learn from the likes of Netflix. Of course there are lessons we can apply when considering microservices. But just because you are developing a centralised system does not mean you are a failure or should be consigned to the garbage can of history!
Alright, if you've read this far you would be forgiven for thinking I don't like microservices. But you'd also be missing the point. Just as we've been shown over the years that writing distributed systems is often necessary and a core requirement for some applications, so too is developing using a microservices architecture. What I'm trying to show, though, is that you'd better understand why you need to distribute your services, as well as the fundamental implications that such an approach entails. And maybe, just maybe, going back to, or remaining, centralised is really the right thing for you.
Of all the open source projects that have impressed me over the years, Vert.x comes close to the top - probably at the top if you catch me on a good day. Its relative simplicity belies a combinatorial complexity that allows developers to go from zero to enterprise-ready in small steps or giant leaps. The fact it has been imitated in recent years in other languages is just another clear indication of the power of the project. And yes, it clearly takes a leaf or two from Node.js and others, but as has been said before: "Good Artists Copy; Great Artists Steal."
I've mentioned reactive, asynchronous patterns a few times, and how they're appropriate for things such as microservices. Vert.x is already a great way of developing microservices from scratch, but I also believe that the concept of the polyglot, non-blocking event bus is ideal for bridging and integrating with existing applications or services. There are going to be a variety of ways of creating (micro) services and applications, but I believe that Vert.x offers some building blocks they all need. I'm hoping to see us use it much more in the coming months.
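By way of illustration, here's a minimal sketch of that bridging idea using the Vert.x (4.x) event bus. The address name, the LegacyCatalogue class and its lookup method are hypothetical stand-ins for whatever existing code you'd actually be wrapping.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// Hypothetical verticle that fronts existing (possibly blocking) legacy code
// with an event bus address, so any other verticle - in any supported
// language - can call it without blocking its own event loop.
public class LegacyBridgeVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.eventBus().<String>consumer("legacy.lookup", message ->
            // Run the blocking legacy call on a worker thread, reply asynchronously.
            vertx.<String>executeBlocking(promise ->
                    promise.complete(LegacyCatalogue.lookup(message.body())))
                .onSuccess(message::reply)
                .onFailure(err -> message.fail(500, err.getMessage())));
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new LegacyBridgeVerticle())
             // Example (hypothetical) client request once the bridge is up.
             .onSuccess(id -> vertx.eventBus().<String>request("legacy.lookup", "item-42",
                 reply -> System.out.println(reply.result().body())));
    }

    // Stand-in for the existing code being bridged; purely illustrative.
    static class LegacyCatalogue {
        static String lookup(String id) {
            return "details for " + id;
        }
    }
}
```

The point is simply that the existing code doesn't need rewriting; the event bus gives it an asynchronous, location-transparent front door that other services can use regardless of the language they're written in.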