
Mark Little's Blog

Mark Little

We are moving ...

Posted by Mark Little Aug 31, 2017

JBoss and Fuse have always been about developer focus. Whether you're someone who builds applications with our projects and products, someone who contributes to the construction of those projects, or a combination of the two, over the years we've been pretty successful at appealing to the Java and JVM communities. With initiatives like JBoss Everywhere and acquisitions such as FuseSource or FeedHenry, we've constantly grown our developer footprint and appeal. How we reach those developers, educate them on what we're doing, explain where we need their help etc. has always been a multi-faceted effort, with JBoss.org playing a central role. If you've been with us long enough then you'll remember that JBoss.org has gone through a number of personality changes: from driving interest in JBossAS as a project alongside other commercial activities, through the introduction of JBoss.com and the refocusing of JBoss.org as a purely project-oriented site centred on our many upstream projects and JBoss Labs, then back to a more product-oriented focus coinciding with our move to give developers free access to our products, and more recently once again adding in more community efforts.

 

In the past few years we introduced the Red Hat Developer Program, which is meant to appeal to a wider community of developers than just JBoss or Fuse. The long term aim has always been to cater to contributor developers (those who help us build our projects) and user/builder developers (those who use our projects and products to build their own applications). Slowly but surely we have moved closer towards that plan, and now we are at a point where we have to consider how JBoss.org and developers.redhat.com can work better together. If you hadn't noticed, much of the JBoss developer work has already moved to the developer.jboss.org location, leaving the main www.jboss.org page to focus on products (of course there are exceptions, including the various microsites like our research or IoT pages). The eventual plan is to fold the JBoss developer content into appropriate pages within the main Red Hat developer site; that's going to take a while though, due to the other services and sites which are hosted from that site. For now the product-oriented pages need to move over to developers.redhat.com, which leaves us with the question of where www.jboss.org should point. There are two obvious options:

 

  • It points to developer.jboss.org and in essence returns www.jboss.org to the community focus it had a few years ago.
  • It points to developers.redhat.com, giving www.jboss.org a feel closer to that which existed at the start of the JBoss adventure.

 

Throughout our history, JBoss and Red Hat have had an enviable track record of looking for input from our wider communities on the wide range of things we are contemplating. Probably one of the biggest examples I can recall in recent years was the JBossAS rename. It's for that reason I'm writing this blog entry: to inform our communities that we're going to be making a change. I value your input, and although I and the team have our own thoughts on the right answer, I don't want to just drop this on everyone without some consultation. After all, www.jboss.org is not a site used only by Red Hat employees! One easy way for us to decide is by tracking your usage, so www.jboss.org will soon present two options: when you land on the homepage you'll be able to either go to developers.redhat.com and find information on product downloads, tutorials etc., or continue to developer.jboss.org and locate your favourite community project and associated information. Let's see how this works and then we'll report back after a meaningful period of time.

 

Onward!

Mark Little

Java EE and open source

Posted by Mark Little Aug 18, 2017

By now I'm hoping that most people will have seen the announcement from Oracle around Java EE and possible moves towards open source foundations. Of course there are a few media articles on the topic now, because if this does happen it's pretty significant. I haven't got much more I can add at this time from Red Hat that hasn't been said by John Clingan, but I did want to echo the sentiments: I think this is a very positive thing to do and it likely sits up there alongside Sun's open sourcing of Java as one of the most significant events to happen to the wider Java ecosystem. Of course the devil's in the detail and those details are few and far between at this time, but Red Hat is very happy to support this effort in whatever way we can to help ensure a positive outcome and future for Java EE and its enterprise components. Clearly I also see this as beneficial to our collective MicroProfile efforts, and we will have to see how both of these things evolve over time. Onward!!

Mark Little

Why we voted NO on JSR 376

Posted by Mark Little May 11, 2017

Now that the vote has passed and the EC face-to-face meeting in Austin on May 8th and 9th has also concluded, I wanted to write down a little about why we voted NO on JSR 376. Scott has already posted an article on the concerns from the Red Hat Middleware teams and other members of the Expert Group and wider Java community (note that at least 50% of the EG contributed to the document - it is NOT just from Red Hat). This has never been about Java EE versus Jigsaw as some might think; it has always been about the benefits for the wider Java communities and making Java 9 a success. Red Hat, and JBoss before it, has invested a lot in the Java ecosystem over the years, and our input on JSR 376 has always been driven by wanting Java and the JVM to continue to thrive; we understand it needs to evolve and that sometimes this might mean breaking backwards compatibility. However, there's a huge JVM community out there which has developed over two decades, and we have to be cognisant of the need to bring as many of them with us as possible when we do release Java 9, rather than risk driving them to other, newer languages. Unfortunately the majority of those community members don't participate in JSR Expert Groups, so their feedback, both positive and negative, often comes after the fact. With something as invasive as Jigsaw, where reversing it out of the JVM if it were to break too much is probably impossible, it is therefore imperative that the representatives on the EG are confident that as much as possible has been done to make moving to Java 9 easy and natural.

 

It's a very complex situation we found ourselves in, and I've spent months listening to the arguments from all sides, not least of which are our own OpenJDK and middleware teams. I flip-flopped between an abstain vote and a no vote, trying to remain objective about the outcome. On the whole, what swayed me to vote no was not the arguments for why WildFly or any specific module system, such as OSGi, might not work well on Jigsaw: those were important concerns, but so were some counter-arguments from our OpenJDK team. What did it for me was the belief that Jigsaw as it currently stands still needs modifications: in a world as polyglot as the one we find ourselves in today, the general lack of consensus in the EG and beyond makes it important that we stop and consider the potential impact, as I mentioned above. If there are smaller changes which could be made to mitigate the issues raised by us and by others in the wider Java community beyond just the EG, then I think it's worth spending a bit more time to make them.

 

None of us have a time machine to see what will happen or predict the future. I believe I remained objective and I believe I made the right decision. However, I want to make it clear that whilst the Red Hat OpenJDK team, the WildFly team, Hibernate, Drools and many other groups gave input for and against Jigsaw, the buck stops with me: I made the decision and whether anyone believes it is right or wrong, I stand by it. At this stage we all need to come together and work together to make Java 9 a success.

 

I want to finish by returning to the idea of who was concerned about Jigsaw. Whilst the focus seems to fall only upon Red Hat and IBM having concerns, the document Scott posted has wider representation than that. Other discussions around Jigsaw, such as on InfoQ and social media, are full of similar concerns from individuals and companies beyond just ourselves. I can't speak for IBM, but I can say that this is not a vote we wanted to take in the way we did and it's certainly not a vote we entered into lightly. Whatever the outcome of the vote, I hope that the Jigsaw EG, the OpenJDK teams and Red Hat can move forward positively to continue to ensure Java and the JVM are relevant to a wide range of enterprises. That's certainly our intent and we won't be putting roadblocks in the way of collaboration; if everyone can take what has been said and done on all sides to date from the perspective of assuming positive intent, then I'm sure we can make this work!

Back at Red Hat Summit and DevNation we announced that we were working with IBM, TomiTribe, Payara and the LJC (and now SouJava!) in an upstream community effort to gain experience on best practices for developing microservices using Java EE, which we called the MicroProfile. Obviously our own efforts around WildFly Swarm fed into these discussions and we've been actively participating in the forum discussions ever since. At DevNation we also committed to agreeing, before JavaOne, on a 1.0 version of our project, i.e., what the baseline (minimum) EE components would be for developing meaningful microservices. At the time we thought it would be JAX-RS, JSON-P and CDI, but we really wanted wider input from users and developers. We got a lot of input, and I encourage anyone who hasn't joined the group yet to do so if you want to help influence the collaborative efforts.
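To make that baseline concrete, here is a minimal sketch of the kind of service those three components enable: a CDI bean injected into a JAX-RS resource which returns a JSON-P object. The class names and path are purely illustrative, and a real deployment would also need a JAX-RS Application subclass or equivalent activation.

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // CDI-managed bean supplying the data (illustrative).
    @ApplicationScoped
    class GreetingService {
        String message() {
            return "Hello from the MicroProfile baseline";
        }
    }

    // JAX-RS resource exposing the bean over HTTP as JSON-P.
    @Path("/greeting")
    public class GreetingResource {

        @Inject
        GreetingService service;

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public JsonObject greeting() {
            return Json.createObjectBuilder()
                       .add("message", service.message())
                       .build();
        }
    }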

 


 

Well, the good news is that we finished 1.0 a few weeks ahead of schedule, with 6 different implementations! You can read more about the announcement, but the obvious question is: what next? Technically Java EE still has a lot more to offer microservices and we've been discussing these options on the mailing list, including JPA, JTA (yes, I think transactions have a role to play!), Bean Validation and Concurrency Utilities, to name but four. We also need to look at extensions to these efforts, or things which go beyond where they currently sit. For example, CQRS is important for more advanced microservices developers. Monitoring, logging and tracing in a distributed system are critical on a number of fronts, not least of which is debugging performance problems and errors, so we need to do more here. One of my pet favourites around microservices is the move towards reactive and asynchronous approaches, as epitomised by projects like Vert.x. It's much more than just making JAX-RS a bit more asynchronous, so we have a bit of a slog here to make improvements in Java EE components/standards; maybe we'll decide it just isn't worth it and we need to look at defining new things from scratch.
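For context on the "a bit more asynchronous" comment, this is roughly as far as stock JAX-RS 2.0 gets you today: the request thread is suspended and resumed when the work completes, but the model is still request/response rather than reactive end to end. The resource and the stubbed-out workload below are illustrative only.

    import java.util.concurrent.CompletableFuture;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.container.AsyncResponse;
    import javax.ws.rs.container.Suspended;

    @Path("/orders")
    public class OrderResource {

        @GET
        public void allOrders(@Suspended AsyncResponse response) {
            CompletableFuture
                .supplyAsync(this::loadOrders)              // do the work off the request thread
                .thenAccept(response::resume)               // resume the suspended request with the result
                .exceptionally(t -> { response.resume(t); return null; });
        }

        private String loadOrders() {
            return "[]"; // illustrative stub for a slow backend call
        }
    }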

 

I could go on and on with some of the technical things I believe we need to look into next, including service discovery, components from Netflix OSS, CDI annotations for things like circuit breakers and of course language level features such as JDK 8 Lambdas and Streams, or Java 9 modularity, but you can find more details at the official announcement blog. I will point out one other thing though: we've got to move away from just assuming HTTP is the only way microservices communicate. Yes, HTTP is convenient, but a text-based protocol is not the way to go if you want high performance. HTTP/2 helps, as will WebSockets (yes, more benefit from Java EE) but we need to look at messaging solutions (not just JMS). Oh and fault tolerance, reliability and security are other areas we need to focus on - it's going to be busy so get involved and help us break this down into smaller challenges!
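As a small illustration of the "more benefit from Java EE" point about WebSockets, the standard javax.websocket API already gives you a bidirectional channel that isn't tied to the request/response cycle; the endpoint path and echo behaviour here are just placeholders.

    import javax.websocket.OnMessage;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    // A Java EE 7 WebSocket endpoint: messages can flow in either direction
    // once the connection is established, independently of any HTTP request.
    @ServerEndpoint("/events")
    public class EventEndpoint {

        @OnMessage
        public void onMessage(String message, Session session) {
            // Reply asynchronously rather than blocking the container thread.
            session.getAsyncRemote().sendText("ack: " + message);
        }
    }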

 

Of course there's more than the technical aspects to consider. We now believe we have something worthy and useful in MicroProfile 1.0. We've always talked about standardisation once we have gained enough experience - that's where standards bodies work really well. This is something we need to discuss in the community, as with everything else we've done so far, but my personal opinion is that we are ready to move MicroProfile 1.0 to a standards body. Which standards body is again open to discussion, but it would seem logical for the JCP to play a role here since at the moment we're based entirely on Java EE. Get involved! Give your own opinion. Finally, we're going to move the existing MicroProfile effort to a Foundation. Discussions are ongoing (yes, again in the upstream forum) but I think we're close to a decision and should be able to announce something real soon now! Stay tuned!

 

OK, that's it for now. It has been a great community effort around MicroProfile since we announced it only a few short months ago. My thanks go out to everyone who has contributed in a small or large way, no matter which company you work for or whether you're an individual contributor. It all helps us to understand where enterprise Java microservices need to head. Please stay involved and help us shape the future of the industry! And if you're around at JavaOne ...

 


 

Onward!

Mark Little

The MicroProfile

Posted by Mark Little Jul 4, 2016

Last week at DevNation we announced the MicroProfile, which is work we're doing with IBM, TomiTribe, Payara and the London Java Community, amongst others. Since then I've seen a few people write articles or talk about it on social media and there appear to be a few things we definitely need to clarify.

 

For a start, the work we're doing is not a standard. As I mentioned during the keynote, we may eventually take it to a standards body (more on that later), but at this stage the world of microservices is relatively new and evolving, let alone the world of enterprise Java-based microservices. In general, standardising too early results in ineffective or otherwise irrelevant standards, so we don't want to consider that until we're further down the line. Now that doesn't mean we won't be using standards to develop. Far from it: as I mentioned, we're initially thinking about a minimum profile based on Java EE (probably 7, since 8 isn't yet finalised). Portability and interoperability are key things we want to achieve with this work. We may never be able to get everyone to agree on a single implementation, but at least if they can port their applications or communicate within heterogeneous deployments, then that's a much more realistic goal. After all, microservices, and SOA before them, aren't prescriptive about an implementation, and probably never should be.

 

Although we're starting with Java EE as a basis, we're not going to tie ourselves to that. If you look at some of the other approaches to microservices, such as Netflix OSS or OpenShift, there are features such as logging, events or even asynchrony which aren't currently available as part of EE. Again, I mentioned this during the announcement, but we all expect this work to evolve enterprise Java in these and other areas as we progress. Java EE represents an evolution of enterprise middleware, and we all believe that enterprise Java has to evolve beyond where it is today. Maybe we'll take these evolutions to a standards body too, but once again it's way too early to commit to any of that.

 

Another thing which we brought out during the announcement was that we want this work to be driven through good open source principles. We're working in the open, with a public repository and mailing list for collaboration. We're also not restricting the people or companies that can be involved. In fact we want as wide a participation as possible, something which we have seen grow since the original announcement, which is good! This means that our initial thoughts on what constitutes the minimum profile are also open for discussion: we had to put a stake in the ground for the announcement, but we're willing to change our position based on the community collaboration. We've placed few limitations on ourselves other than the fact that we feel it important to get an agreed initial (final) profile out by around September 2016.

 

I think this leaves me with just one other thing to address: which standards body? The obvious candidate would be the JCP, given that we're starting with Java EE. However, as I mentioned earlier, we may find that we need to evolve the approach to incorporate things which go way beyond the existing standard, which may make a different standards body more appropriate. We simply don't know at this stage and certainly don't want to rule anything in or out. There's enough time for us to think on that without rushing to a decision.

Last week at Red Hat Summit several of us, myself included, paid tribute to the fact that June 2016 was the 10 year anniversary of the closing of the deal which brought JBoss into Red Hat. I had a few things to say about this during my portion of the DevNation keynote, highlighting the CDI work we've done, the various acquisitions such as FuseSource and FeedHenry, and the innovation around projects such as Vert.x and WildFly Swarm. As I mentioned at the time, we've accomplished so many things over the last decade that I really couldn't hope to do them justice in a single keynote. It truly has been a defining decade for enterprise Java within Red Hat.

 

It was also an emotional anniversary for me, which didn't really hit until Summit, or maybe it was because of Summit. While there I met and interacted with people I've known for many of those 10 years in Red Hat, people who have worked as part of the middleware efforts but some of whom have now moved on to other areas of Red Hat. It was great to see them doing so well, but slightly sad that we're not working as closely these days. Another change was that the JBoss/JUDCon keynote and associated demo, which started out years ago playing to a packed room of a hundred or so, has now become part of the main keynote stage in front of thousands: wonderful to see!

 

10 years of JBoss in Red Hat has been a great ride. We've been able to act as a catalyst for innovation both inside and outside of Red Hat. The business has grown from only a couple of products to a dozen or so. Revenues are also up significantly. And the teams, including engineering, QE, docs, product management etc., have also exploded in size. I look forward to what the next 10 years will bring! Onward! And We Love You!

I was at DevoxxUK last week on a panel session about the future of Java EE (my thanks to Antonio for inviting me). I suppose it wasn't surprising that all of the panel members were united in their view that not only has Java EE been good for the enterprise middleware space, but it also has a future role to play in areas such as cloud and IoT. It wouldn't be too hard to suggest that the panelists weren't objective in their assessment, but that would be too easy and would overlook the reality: Java EE has been the most successful cross-vendor enterprise middleware standard we've ever known. Of course it's not perfect; of course there have been a number of iterations required to improve its applicability to an ever evolving set of use cases. But warts and all, Java EE implementations such as EAP have been deployed widely across the globe and sit at the heart of many of the mission critical environments we take for granted, such as banks, hospitals and air traffic control systems.

 

I've been around the proverbial block long enough to appreciate the incredible amount of technical work that has gone into Java EE and J2EE before it. But what I appreciate more, and what some people are too quick to ignore, is the incredible amount of cross-vendor/cross-community agreement that has gone into these standards. We often take for granted how well, on the whole, open source developers can collaborate under the banner of a single open source project. However, until relatively recently open source was not a main source of collaboration for Java EE. Even then, working within standards bodies can be a long, slow process, fraught with the usual tensions of getting agreement on interfaces, behaviours etc. when there are already multiple competing implementations for this or that feature. And the amount of time and effort this takes is often proportional to the number of participants. Take a look at how many people have worked on Java EE over the years and marvel at what they managed to achieve, whether or not you agree with it all.

 

As I've said many times in the past, the principles on which Java EE is based are pretty common to distributed systems in general. CORBA had them. DCE before it. Countless bespoke implementations, past, present and future, need some or all of the same capabilities you find within Java EE. With the new generation of EE implementations, such as EAP, which are agile, small and extremely fast, getting access to high performance messaging, bullet-proof transactions or something else by using EE is easier than many people believe, and often easier than with some of the new generation of frameworks in Java or other languages.
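As a rough illustration of how little ceremony "bullet-proof transactions" need in a modern EE stack, here is a minimal sketch using CDI, JPA and the JTA 1.2 @Transactional annotation; the entity and service names are invented for the example.

    import javax.enterprise.context.ApplicationScoped;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;
    import javax.transaction.Transactional;

    // Minimal entity so the example is self-contained.
    @Entity
    class Account {
        @Id Long id;
        long balance;
    }

    // A plain CDI bean; the container begins, commits or rolls back the JTA
    // transaction around each call to transfer().
    @ApplicationScoped
    public class TransferService {

        @PersistenceContext
        private EntityManager em;

        @Transactional
        public void transfer(Long fromId, Long toId, long amount) {
            Account from = em.find(Account.class, fromId);
            Account to = em.find(Account.class, toId);
            from.balance -= amount;
            to.balance += amount;
            // An unchecked exception here rolls the whole transfer back.
        }
    }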

 

Back to the DevoxxUK panel: during the event several people in the audience expressed their agreement with the assessment that Java EE has a future, or at least should be allowed to aim for one, but said that they were surprised this was the first time they were hearing from the vendors and communities represented by the panel that they were willing to work together to try to assure that future. I suppose on the one hand we've all felt that actions speak louder than words, and the fact that we've all been working together for years on the betterment of Java EE was assumed to be sufficient evidence that we'd continue to do so - are continuing to do so. However, given the rumours and other concerns about the future of Java EE, I can certainly empathise with developers who want to hear that the major vendors are standing behind it. Well, Red Hat and those on the panel at DevoxxUK hopefully made it clear: we are prepared to continue innovating with and on Java EE, and it's a key part of our strategy.

Mark Little

A REST+microservices rant

Posted by Mark Little Apr 21, 2016

Yes, it's a rant and I intend to write a more level-headed followup on the new Red Hat Developer blog in the next few days. In the meantime take it for what it's worth.

No one starts out wanting to build a monolith. There are no design meetings where the architect or developers say "you know what? I think it'd be a great thing if we built something that will be hard to evolve and maintain." I think it's also fair to say that conversations like "we've got a great architecture for our system that has helped us be successful, so we need to be sure that it evolves towards a big ball of mud" rarely happen!

 

And yet monolithic applications do exist. Probably fewer than many people might want to admit, but they are there nonetheless. So the question has to arise: why? I suppose there's another related question: how? As with so many things in this life, there's no one straight answer; it's a combination of things including:

 

- Expediency; far too often it's just too easy for developers to hack solutions into an otherwise good architecture without spending time to understand whether that breaks the architecture. What starts as a simple, small hack can also then grow, acting as a catalyst, and a small break in the architecture then turns into a fracture.

 

- Lack of architectural leadership; the original architect(s) leave the project and those who come in to replace them (if they are replaced) can't control the developers, or perhaps don't understand the architecture enough to ensure it remains "pure". Likewise, different developers coming into the project to either add to or replace those there already can dilute the group knowledge and understanding of the architecture, leading to unforeseen and accidental divergence from the original plan.

 

- Natural evolution; any system that can be said to be architected has a point at which it's simply impossible to evolve it and retain the original architecture. Take a look at any (historic) building which may have once been considered an architectural marvel: if it was left mainly alone (not extended) once complete then it's likely still something to behold and admire. But if it had extensions, new wings etc. then it's likely to be a monstrous carbuncle, unless the original architect was involved, or someone who appreciated/understood the original. Sometimes it's just easier to start from scratch and approach the problem afresh than to try to tack on new features.

 

- Related to the above, sometimes people try to extend software systems (services) to do more than they really should and in doing so break the architecture or create monoliths.

 

- Poor tools with which to visualise the software system/architecture, making it harder to track changes and ensure they don't move the system towards an unmanageable monolith.

 

Now nothing I've mentioned so far has been specific to localised applications. It's just as applicable to distributed systems and in fact in a distributed environment the architectural issues can become even more important to understand and track. If you've arrived at a monolith then trying to fix that may involve breaking it into components/services/microservices which reside in a distributed environment, but that's not necessarily the only way, or the best way, in which to resolve the monolith problem. In fact if you don't understand the architectural issues which have resulted in the monolith then breaking it into components is more likely to result in a distributed monolith (or micromonoliths) than to fix the problem!

 

Yes, I mentioned the microservices word above for the first time, and this is really an article about them again. As I've mentioned elsewhere, I believe in and understand the need for distributed systems composed of (micro) services. However, what worries me about some of the current emphasis around microservices is the assumption that they will naturally result in a better architecture. That's simply not the case. If you don't put in place the right processes, design reviews, architecture reviews, architects etc. to prevent or forestall a local monolith, then you've no hope of achieving a good microservices architecture. And if you don't keep them in place then there's a good chance you'll evolve towards a distributed monolith.

You've developed a microservice. You know it is because it does one thing well, can be independently versioned as well as deployed, and best of all the consultants you employed to help say it is too! Maybe you've even had it in production use for a while, receiving positive feedback on the benefits a service oriented approach brings. Let's assume you developed the service so it can run within a Linux container (some other container technology, including one based on the JVM, would be just as suitable for this example.) Hopefully you've embraced immutability and therefore take the approach of producing a new instance each time you need to make a modification. So far, so good.

 

As I've mentioned before, once you start down the microservices road, as with other service-oriented approaches dating back beyond even CORBA, you immediately enter the world of distributed computing, with all that entails. Therefore, it is inevitable that you, your team, or some group of developers in the future will wonder what they can do to improve performance or reliability in the face of distributed invocations and partial, independent failures. Co-location of services will likely be close to, if not at the top of, the list of things to try. Let's face it, the ability to improve the networking interconnect is limited in any meaningful timeframe, as is finding money to purchase machines with higher MTTF and lower MTTR (plus entropy increases, so you're going to have failures eventually). That leaves moving services (physically) closer together to reduce the network latency and increase the probability that they fail as a logical unit. OK, let's stop there for a second and back up a bit: just to be clear, I'm talking about services which are so closely related that they rely upon each other to work, though they can be invoked independently as well.

 

At some point some group or groups of developers will come (back around) to making microservices infrastructures dynamic, in the sense that individual placements of services are (initially) made based on heuristics from inter-service communications (interactions) to reduce network overhead. And these placements will (eventually) be computed frequently, to enable services to be redeployed if those usage patterns change and new clients come into play which need the services (or copies) placed closer to them. So it goes that eventually microservices will want to be placed within the same container. As I mentioned before, this could be the same Linux container, especially if each service is a separate process, or it could be the same language container, such as an OSGi container if each service is an OSGi bundle. And whilst these co-location deployments could be done in a volatile manner initially, such that the reboot of the container causes the services to no longer be co-located, it makes sense that a new durable instance of the container be created if the updated configuration proves valuable.

 

However, that then leads me finally to the question in the title: you had multiple microservices before they were co-located in the same container, but does that change now? Are they still microservices? Maybe not if they can't be redeployed independently, but as I mentioned earlier, maybe they aren't that independent anyway. In which case maybe they should have been collapsed into a single microservice in the first place? Lots of questions!

 

To be perfectly honest, I'm not hung up on the "independently deployable" aspect of microservices in general. I think dependencies between components, objects, services etc. in distributed systems are things which may ebb and flow over time and usage patterns. I think the more important aspects are the service-oriented nature, with a well defined contract between user(s) and service, quick deployment ('quick' being a relative term), and well defined APIs. Therefore, in my book these co-located microservices may still be microservices, or maybe composite microservices (what about a milliservice?) But one thing I'm sure of is that some people will disagree, and the "goodness" that is the lack of standards in this area will encourage these kinds of discussions for a while to come.

I've been designing, developing and otherwise involved in distributed systems for 30 years. I love the challenges they present, especially around my own speciality of fault tolerance. Whether it's different consensus models, the duality of orchestration and choreography, replication techniques or different transaction models, to name but a few, working with distributed systems is thought provoking. And in today's world of ever connected devices at scale, it's even more so than at any time in the last decades.

 

There are many good architectural reasons why you might want to, or need to, employ a distributed approach to your application. Code you rely on may be running elsewhere from your own business logic, may be implemented in a different language, or may need to be replicated to improve availability, for instance. It may even be the case that your distributed system evolved over time from a more localised implementation, e.g., a capability you wrote now needs to be shared between groups and it makes sense to replicate copies physically closer to them.

 

Distributed systems make a lot of sense for many applications and developers. In the words of RFC2119 they MAY help to solve some particularly tricky issues but they WILL cause other problems of which you MUST be aware. Distributed systems are great. But you know what? A centralised system may be far more appropriate for what you need. Why do I mention this and why do I think it's important that developers and business owners realise this? Because if you listen to our industry at the moment you'd be forgiven for believing that all applications need to be decomposed into (micro) services, each residing in its own (Linux) container and communicating using HTTP (hopefully at least using REST too).

 

If you've got a centralised system, that doesn't mean it's necessarily a monolith. Likewise, distributed systems aren't necessarily more agile, lean or less monolithic in nature. As a developer, architect or business owner you shouldn't feel ashamed to admit "I'm centralised and I'm proud!" Don't assume that microservices are going to solve architectural problems simply by virtue of their distributed nature; even where they do, they will definitely introduce challenges you don't have to worry about in a local environment. Now don't get me wrong, I appreciate the ideas around microservices as they are influenced by SOA and other experiences over the years. Unfortunately some of those who are pushing microservices strongly fall into one or more of the following categories: they don't care to learn about distributed systems, they don't believe they have the time to learn about the pitfalls of distributed systems (our industry moves at a pace), they have an agenda which isn't necessarily conducive to your productivity, or maybe they really do believe they're doing the right thing by adopting these newfangled ways. And of course there are proponents of microservices architectures who really do understand the trade-offs they represent and will faithfully represent them to you so you can make an informed choice.

 

Furthermore, citing examples of successes such as Netflix or Amazon is hardly being fair to the large numbers of applications and vendors who don't use distributed systems, let alone microservices, and would still consider themselves to be successful. Of course there are things we can learn from the likes of Netflix. Of course there are lessons we can apply when considering microservices. But just because you are developing a centralised system does not mean you are a failure or should be consigned to the garbage can of history!

 

Alright, if you've read this far you would be forgiven for thinking I don't like microservices. But you'd also be missing the point. Just as we've been shown over the years that writing distributed systems is often necessary and a core requirement for some applications, so too is developing using a microservices architecture. What I'm trying to show though, is that you'd better understand why you need to distribute your services as well as the fundamental implications that such an approach entails. And maybe, just maybe, going back to, or remaining, centralised is really the right thing for you.

Mark Little

Vert.x impressions ...

Posted by Mark Little Apr 3, 2016

Of all the open source projects that have impressed me over the years, Vert.x comes close to the top - probably at the top if you catch me on a good day. Its relative simplicity belies a combinatorial power that allows developers to go from zero to enterprise-ready in small steps or giant leaps. The fact it has been imitated in recent years in other languages is just another clear indication of the strength of the project. And yes, it clearly takes a leaf or two from Node.js and others, but as has been said before: "Good Artists Copy; Great Artists Steal."

 

I've mentioned reactive, asynchronous patterns a few times and how they're appropriate for things such as microservices. Vert.x is already a great way of developing microservices from scratch, but I also believe that the concept of a polyglot, non-blocking event bus is ideal for bridging and integrating with existing applications or services. There are going to be a variety of ways of creating (micro) services and applications, but I believe that Vert.x offers some building blocks they all need. I'm hoping to see us use it much more in the coming months.
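For anyone who hasn't tried it, here is a minimal sketch (against the Vert.x 3.x Java API) of the kind of bridging I mean: a consumer on the event bus standing in for an existing service, plus an HTTP endpoint that forwards requests onto the bus. The address, port and messages are illustrative, and error handling is elided.

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Vertx;

    public class BridgeVerticle extends AbstractVerticle {

        @Override
        public void start() {
            // Anything on the JVM (or, via bridges, outside it) can handle or reply on this address.
            vertx.eventBus().consumer("legacy.orders", message ->
                message.reply("processed: " + message.body()));

            // Expose the same capability over HTTP without blocking the event loop.
            vertx.createHttpServer()
                 .requestHandler(req ->
                     vertx.eventBus().send("legacy.orders", req.getParam("id"), reply ->
                         req.response().end(String.valueOf(reply.result().body()))))
                 .listen(8080);
        }

        public static void main(String[] args) {
            Vertx.vertx().deployVerticle(new BridgeVerticle());
        }
    }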

Mark Little

Frameworks versus stacks?

Posted by Mark Little Jan 26, 2016

For many, many years the Java world has popularised frameworks for developers. Most developers understand that frameworks sit on top of stacks and typically act as a way of simplifying interaction with the complexities that exist within. Experienced developers have long been able to build complex systems without frameworks, but it has often been at the expense of time and reliability (some definition of efficiency). The best frameworks are extremely prescriptive, taking the plethora of configurable options which are presented to the developer by the combination of software components that make up the stack and narrowing them down to a very specific subset. They take the complexity and flexibility and condense it down to something which more than 80% of developers will find useful or easy to get to grips with.

 

Now that's not to suggest that this complexity and flexibility is bad. Far from it: one size rarely fits all, and whilst one developer may not understand why it's even necessary to have an option to tweak Widget XYZ, another developer would find it impossible for that not to be available. Component developers have always tried to provide 1001 options for the wide range of users they can perceive, and that's the right thing for them to do. However, many component developers are unable to consider how their implementations may be best tailored for the majority: in fact it may be anathema to them to do so. In some ways I speak from experience here, given our history with Arjuna! It's only in recent years, with the STM implementation, that I think we've managed to provide a framework that hits most of the user needs.

 

But I digress. This is not meant to pit framework developers (or users) against component/stack developers (or users). As software developers we need them both. However, in some areas, and it seems specifically in the Java space, we often hear about the frameworks more than the software entities upon which they rely. I suppose it's similar to movies: we hear more about the actors on screen than the plethora of people behind the camera who actually make the films possible. Another analogy might be the iceberg, where the majority lies below the ocean surface! This is a problem I've run into many times over the years and written about a few times. I'd put myself more firmly into the component/stack category of developers, though I've tried to learn from some of the more successful framework implementations and developers when coming up with the Narayana STM solution. Having many of those framework developers on my staff has helped, so it's probably an unfair advantage I have over others! However, my main point in writing this entry isn't about me or what I've done; it's about how frameworks and components/stacks exist in a symbiotic relationship: frameworks need the stacks on which they are deployed, or they have nothing to make them work, and likewise the stacks need frameworks if their capabilities are to be made readily available to a wider range of developers.

 

What "gets my goat", as we're used to saying in the UK, is that some framework developers are often less than up front with the fact that their offerings are critically dependant on the stacks "below". Don't get me wrong: I'm not making any judgement about the relative importance of the framework versus the stack in the final solution. However, I do believe in credit where credit is due. Now the users of frameworks are excused from knowing too much about what is happening under the covers: it's like driving a performance car and not knowing the details of who designed and built the engine; but at some point you do need to know that what makes a car perform is at least as important as how it looks! Despite the fact I work for Red Hat and am JBoss CTO I'm trying to be objective here. We've made some really good frameworks that build on our stacks. We've also been less than successful on others. Likewise some of our partners or competitors have made some great frameworks that also rely on JBoss components. The right framework can make you incredibly productive. But the right framework also needs the right support from the components and stack upon which it is deployed. If you don't have a good framework but do have a good stack then a good developer can still be productive but not necessarily efficient. If you have a good framework but a poor stack (e.g., not reliable or scalable) then a good developer is going to realise quickly that all that glisters is not gold!

 

OK so you may well ask what's the conclusion? First of all I'm not sure there's just one conclusion. If you're a developer then of course you're going to look for the framework that makes you the most productive (definition: get done what's needed in the shortest period of time, but ensuring when you move on to your next company or project you don't leave a steaming pile of poop for someone else to pick up). These days we're a wider Java community that focusses a lot on the implementation language and the framework, since they typically go hand in hand. Projects often have tight deadlines and aggressive milestones making it easy to justify selecting your approach based on the experiences of others who may also have been subject to tight deadlines. But ultimately we need to take the time to investigate the entire solution not just the framework because one really can't succeed without the other.

Mark Little

Is Java EE Still Relevant?

Posted by Mark Little Nov 15, 2015

I had a great time at JavaOne this year and Red Hat had a fantastic showing, with many sessions at the event, many more booth sessions (which were packed out as usual) and of course I made a brief appearance during the kickoff keynote. There had been a lot of interest in this event due to some recent events and rumours about what Oracle would or wouldn't say, specifically about the future of Java EE. This was even a topic of conversation during the JCP EC face-to-face meeting preceding JavaOne. But nothing much really happened and life, it seemed, would go on as usual. However, there was still a lot of discussion during several sessions and through the usual chatter on the floors or parties about the relevancy of Java EE these days.

 

But let's face it, this isn't a question that is only being asked in 2015; I've been involved in enterprise Java since before it was called J2EE, and within a few years of it being created people were asking the same question. As I've mentioned many times over the past few years, people have asked the question when Web Services came along, then REST, PaaS/Cloud, mobile and now IoT. Each time the concept of middleware has evolved, and Java EE, and its associated implementations, with it. People get hung up on the idea of Java EE and application servers as bloated, monolithic entities without actually looking at the reality of today's implementations (well, at least one). However, I'm not here to repeat what I've said time and time again (remember this from 2011?), most recently at HPTS 2015. Approaches such as WildFly Swarm, and the massive interest it has seen, show that there's still plenty of appetite. And I've already suggested how Java EE can fit into the next generation platforms.

 

Rather than revisit this question, which I think I've answered sufficiently, I wanted to link to Ian's entry on the JavaOne session he did, which I attended. It's always good to hear from someone else on the topic, though I'm sure some will argue he's not objective about this and neither am I. If you're still not convinced, think of the core capabilities within any Java EE application server, such as transactions, messaging, security, caching etc.; services/capabilities that are needed way beyond Java and Java EE, that pre-date Java by decades and that are used together or independently in many applications. You can consider Java EE as a way of packaging these things together into a convenient bundle, where they are guaranteed to work well together or independently, and in most cases you probably don't even know they're there. Over the years, from before Java was created to way beyond the point where it is just a mention in history books, that packaging will keep changing, but you'll still have something recognisable within middleware implementations, perhaps not co-locating services, perhaps not all implemented in the same language etc. And your future applications will probably not be able to tell the difference or know they're there ... again.

 

So whilst I think the original question is an important one to ask, I think a much better question is "Where should I use the core capabilities within Java EE?" And if you've got a suitably flexible and agile application server implementation, your application won't need to care that Java EE is, or is not, under the covers providing the desired dependability and reliability.

I've written before about the way I think the next generation platform will evolve. Here I want to take a moment to suggest how this might be approached in the future within Red Hat middleware, using JBoss, Fuse and other technologies including Vert.x. I suppose if I were working for any other company than Red Hat I might have to say that there are some disclaimers about my opinions being my own etc. However, whilst they certainly are to a degree, because we're doing this in open source you can see what we're doing and even get involved yourself!

 

Of course microservices are the future. OK, maybe there was a hint of sarcasm in that last sentence! Microservices have a role to play, just as SOA does (yes, I still believe the two are closely tied). There is some truth in there though: more streamlined, agile and dedicated services will be the basis of future application development, whether using (immutable) containers such as Docker or just the standard JVM, perhaps with fatjars. However, anyone who believes that the future of software (middleware) will appear instantaneously has obviously not looked back at other transitions, such as bespoke-to-CORBA or CORBA-to-J2EE. These things take time, and evolution rather than revolution is the natural approach. Even if you've not been involved in middleware there are similar examples elsewhere in our industry: COBOL really is still in deployment today! Look at the interest we have around Blacktie!

 

Therefore, the future will evolve. Yes people will want to develop new applications (so called greenfield sites) using the latest and greatest framework or stack. But they'll also want to integrate with existing business logic and services written in a variety of present day technologies. So there'll still be Java EE application servers (e.g., EAP) with business logic within them, some of it legacy, some written from scratch today and into the future, despite what some may believe.

 

I believe that because Java EE has been the dominant non-Microsoft development and deployment platform for well over a decade, there are a great many developers out there who are comfortable with it. Yes, some may complain about the apparent bloat of implementations, but the reality is that it's still very easy to develop against and use from a variety of different programming languages. That's why the evolution towards any new paradigm is going to be heavily influenced by it, if not driven directly by those developers. So yes, I believe that microservices and Java EE go hand in hand for a large percentage of developers. Approaches such as WildFly Swarm offer precisely what I'm describing: a comfortable entry point for developers and even existing applications, yet the power to move to a more flexible DevOps driven paradigm. WildFly, when used correctly, offers a mature and easy to use platform that has a minimal footprint and faster boot times than the most popular web servers around! And don't forget that Swarm builds on WildFly, so we immediately get the maturity of its implementation(s) from it.
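To give a feel for that entry point, here is a sketch of a custom main class, assuming the org.wildfly.swarm.Swarm API from the 2016-era WildFly Swarm releases (earlier versions used a Container class instead): the application pulls in only the fractions it depends on and boots as a self-contained fatjar.

    import org.wildfly.swarm.Swarm;

    public class Main {
        public static void main(String[] args) throws Exception {
            Swarm swarm = new Swarm(args);
            swarm.start();   // boot just the fractions this application uses
            swarm.deploy();  // deploy the default (auto-detected) deployment
        }
    }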

 

However, this mad rush towards microservices, trimming of application servers, creation of applications from fatjars etc. needs to be approached with caution. Our industry is renowned for offering panaceas to problems that require throwing away all we've done before and relearning all of those hard-earned lessons! We've got to break that cycle, and in Red Hat we've got the pieces to do so. With so many open source projects out there purporting to be right for enterprise applications, and new ones springing into life almost on a daily basis, it's easy to understand why people believe every new project is good enough for their requirements. Although I believe open source is a superior development methodology, it takes time and effort to build enterprise-ready components such as transaction servers, messaging brokers, etc. They don't just spring into life ready formed and fully capable. "Good enough" is rarely sufficient for enterprises. It's the edge cases like management, reliability, recoverability, scalability and bullet-proof security that are hard to do and get right, yet it's precisely those edge cases that matter time and again. Through core development or acquisition, we've built up a stack that is mature and capable. Whether deployed as a stack or as individual pieces, it's what we should be building the next generation of middleware solutions upon. A strong base exists today and we need to reuse as much of that as possible rather than rewrite from scratch in some new popular programming language.

 

Now as I hinted above, maybe we don't package our future stack or platform in quite the same way as we do today. Microservices offers an approach that is in line with the kinds of trimming we've been seeing anyway. Bundling individual components from the application server as easily deployed (container based) services that can then be exposed to other programming languages, frameworks, solutions etc. is definitely part of the overall solution space. Those core services, such as transactions or storage, could be deployed as individual services or, as is more typical with something like Swarm, deployed with the business logic that uses them. I keep coming back to the JBossEverywhere initiative we had a few years ago - ahead of its time!

 

OK, so we've looked at microservices and how Java EE fits in. But that can't be the entire answer - and it isn't. As I've mentioned before, at least for a very important set of applications and use cases the future is reactive and event driven. Now that could mean Node.js, but just as likely in the Java world it probably means Vert.x. Note that since many of the Java EE APIs aren't reactive or asynchronous in nature, we'll need to evolve them if we wish to tie them in to Vert.x, and I do think we need to do that. Whilst some people will want to develop their applications and microservices in Vert.x from scratch, others will want to tie in legacy systems or have access to some of the core services I mentioned earlier. I see Vert.x as the ideal backbone or glue that brings all of these things together. The mature core services that we've got are precisely the sorts of things that enterprise developers will need for their applications as they grow in complexity - and let's face it, eventually many applications are going to need security, transactions, high performance messaging etc.

 

In the Java world the unit of containerisation is essentially the JVM. However, most Java developers realise that unless you ship a fatjar, which contains everything you need to run your application/service, it's typical to find that changes in third-party jars downloaded at deployment time can result in the application or service failing to run first time. This is where operating system containers, such as Docker, really come into their own. The ability to create a deployment unit which runs first time and every time is crucial! Container orchestration technologies, such as Kubernetes, are likewise important if you want to deploy services (via Containers) which are highly available, load balanced etc. Therefore, hopefully it's not too difficult to see where Containers will fit into the future architecture - not mandatory by any means, but definitely a piece of the puzzle which should be considered from the outset.

 

The combination of OpenShift for Container deployment and management, with Fabric8 for developer experience with CI and CD, provides a compelling hybrid cloud environment, especially once you consider all of the JBoss/Fuse middleware integrated, i.e., xPaaS. As I've mentioned before though, xPaaS isn't about simply adding the middleware products to OpenShift; we're also going to make them much more cloud-aware/cloud-enabled. This has a number of implications, but the one I want to mention specifically is that the core capabilities will be made available to developers in a more cloud-natural manner, e.g., users who want reliable messaging won't need to understand the various intricacies of JMS to use A-MQ and in fact won't even have to know A-MQ is working under the covers. And yes, for those of you still paying attention, those core capabilities I mentioned are precisely the same core services we covered earlier on. See the connection?
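To make the JMS point concrete, here is a minimal sketch of what sending a message looks like even with the simplified JMS 2.0 API that A-MQ supports: injected contexts, looked-up destinations and producers are exactly the kind of detail a cloud-natural messaging capability would hide. The class and queue name are illustrative.

    import javax.annotation.Resource;
    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    @ApplicationScoped
    public class NotificationSender {

        @Inject
        private JMSContext context; // container-managed JMS context

        @Resource(lookup = "java:/jms/queue/notifications") // illustrative JNDI name
        private Queue queue;

        public void send(String payload) {
            context.createProducer().send(queue, payload);
        }
    }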

 

Up to this point we've really been playing in the traditional enterprise deployment arena: clients, middle tiers and servers. The cloud comes into play here at the back tier (servers), but what about mobile, IoT and ubiquitous computing? I've discussed this a few times before so won't repeat much here except that I think everything we've discussed so far has immediate applicability for ubiquitous computing and that mobile, as well as IoT, are just limited aspects of it. In fact as I showed separately, if the cloud is to scale, mobile/IoT needs to take on a more fat-client approach - anyone remember what I wrote about Shannon's Limit over 4 years ago? Mobile, which really means developing applications for phones that tend to rely upon backend services, is a specific implementation of IoT, which really means developing applications for a range of devices that tend to rely upon backend services (ok, with some gateway technologies in there for good measure.) See what I mean?

 

If you follow that assertion that everything we need to do going forward is some aspect of ubiquitous computing, then it follows from what we discussed earlier that the new stack approach of core services, Containers, management etc. all come into play and across a variety of different languages and frameworks. Whether you're developing enterprise applications for mobile devices, clouds, involving sensors, or traditional mainframes, you need a stack that is mature, rich, scalable, reliable, trustworthy and open. The Red Hat stack, which has evolved over the last decade and is continuing to evolve, is the only one that matches all of the requirements!

 

Below is a hand-drawn outline of where I see these things going. Apologies that it's not a nice block diagram, and for my handwriting.

 

[Image: hand-drawn outline of the next-generation platform described above]
