
In the words of Rod Serling and the Twilight Zone: "Submitted for your approval". Consider a world where you're a developer and you've specced out a large project you need to develop, either alone or with a group, and you know you need integration, a web server and definitely some enterprise capabilities such as reliability, transactions, security etc. And you'll be using good SOA practices throughout, of course! You sit down at your laptop, which doesn't have anything installed on it yet, and start to leaf through an online repository of software (app-store/service-store) that will help you achieve your goals. There's a core development environment (implemented on the JVM of course, though perhaps you want to use a combination of languages so maybe Java isn't your first choice) and for the sake of argument let's assume you're developing locally initially, so no Web-based IDE. Your initially selected packages download and off you go in a matter of minutes, with your friendly neighbourhood source code repository either created as part of this process or one you've got to hand already.


Now whilst it's nice to work alone at times, this project really requires more than one person and it's unlikely to ever be deployed on your laptop. So … to the cloud! All of the software components you need are either already deployed or deployed automatically as you migrate your requirements. As with the local (original) configuration, they're also integrated seamlessly with each other and with your IDE, so the only differences you might notice will be due to the interconnect between yourself and the cloud (though if you have redundant connectivity options then even this may be temporary and perhaps not even noticed by you). There are also core services deployed into the cloud, such as messaging (Messaging-as-a-Service), data (Database-as-a-Service), and others that you can use directly where necessary. As the team of developers continue to work, using their shared code repository and infrastructure, you may start to think about various non-functional aspects of the finished application which are critical when it comes to deployment. These would include QoS requirements, such as average uptime, peak load etc. All you have to do is define the QoS, safe in the knowledge that the Cloud on which the application will be deployed knows how to distribute your application, its state, and even non-functional components/services you may not be using yet but which are required to meet that QoS, to whatever machines are needed, automatically and opaquely, i.e., without human intervention.
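
To make the idea concrete, declaring QoS might amount to nothing more than a small descriptor, as in the minimal sketch below. Everything here is hypothetical: QosPolicy and the commented-out Cloud.deploy() call are invented for illustration, not an existing API.

    // Hypothetical sketch: declare what the application needs, never
    // where or how it runs. The cloud derives placement from the policy.
    record QosPolicy(double averageUptime, int peakRequestsPerSecond, int maxResponseMillis) { }

    public class DeclareQos {
        public static void main(String[] args) {
            QosPolicy qos = new QosPolicy(0.9995, 10_000, 200);

            // Hypothetical deployment hook: the cloud would work out
            // replication, failover services and state distribution here.
            // Cloud.deploy("order-processing", qos);
            System.out.println("Declared QoS: " + qos);
        }
    }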


As the application development goes on you need more capabilities, such as caching or access to a large-scale in-memory data grid; you may even need to move up the stack and pull in requirements from different categories of users. For instance, your development team may create a range of services which can be used directly by users or composed together into other services to provide a composite capability/service. To do this it may be easier to think in terms of workflows or task flows; in fact the infrastructure on which you are developing may even pop up and suggest this to you (hopefully not as annoying as Clippy!) Further up the application stack the user may move from being a developer who thinks in terms of code, interfaces, exceptions, messaging etc. to being someone who works with concepts such as choreography, orchestration and swim lanes. This is precisely where Business Process Management tools come into play, and fortunately your Cloud has that service to offer on demand when you or others in the team need it; just select it and it is deployed into your development environment, automatically searching for the services you've created and making them available for further use.
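
As a toy illustration of what "composed together" means, the sketch below chains two services into a composite one. The services themselves are invented stand-ins, and a real BPM tool would express the flow graphically rather than in code.

    import java.util.function.Function;

    public class CompositeService {
        public static void main(String[] args) {
            // Stand-ins for two services the team has already built.
            Function<String, String> creditCheck = order -> order + " [credit ok]";
            Function<String, String> fulfilment  = order -> order + " [shipped]";

            // The composite service is simply the two chained together; a
            // BPM tool would present this as an orchestration instead.
            Function<String, String> processOrder = creditCheck.andThen(fulfilment);

            System.out.println(processOrder.apply("order-42")); // order-42 [credit ok] [shipped]
        }
    }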


Nearing completion of the application, you decide that whilst you want this application to run on the Cloud most of the time, you don't want to rely on a public Cloud entirely. Maybe you have some data that cannot be pushed to the public Cloud due to its sheer size or for regulatory reasons (fortunately your development environment also has data virtualization techniques available which allow you to partition the data for your application and associate metadata with it, such that the underlying Cloud knows where it can and cannot migrate the data while meeting other QoS requirements). Maybe for peak periods you want to cloud burst to a private data centre (or vice versa). Whatever the reasons, you need to ensure that there's a similar infrastructure deployed on to these private machines (let's call it a private cloud for now) and tied into the public cloud. Again, a few clicks in the application repository (or administration menu) and everything is selected; downloads begin in the background, or even on demand, i.e., these machines are only encumbered when there's a need.
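
One guess at what that data-placement metadata could look like is sketched below. The DataResidency annotation is purely illustrative, invented here to show the idea of constraining where the underlying Cloud may migrate a partition.

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    // Invented annotation: tells a (hypothetical) placement engine where a
    // dataset may legally live, so QoS-driven migration can respect it.
    @Retention(RetentionPolicy.RUNTIME)
    @interface DataResidency {
        String[] allowedRegions();
        boolean publicCloudPermitted();
    }

    @DataResidency(allowedRegions = {"EU"}, publicCloudPermitted = false)
    class CustomerRecords {
        // This partition stays on private machines; only work that doesn't
        // touch it would be burst to the public cloud.
    }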


That's it. You step back and take a look at what the team have developed and how. In fact you can view all of this at various levels of granularity, from individual lines of code to entire service interfaces and contracts, using the same toolchain you've been using. All of the interconnects between services that your team have created are shown, and once it's running you'll be able to drill down into specific machines, services and users to determine hotspots, gauge how well the application is running etc. All of this has been done by your team with no help from external service providers, IT teams or others not core to developing the application. Anything that has needed to be deployed, either to assist in the development or in the deployment, has been triggered by your team and fulfilled by the Cloud infrastructure itself. Running the application requires very little (hopefully no) administration by you or the team, since the infrastructure is self-monitoring, adapts to changes in requirements and fault hotspots, and patches itself when needed (maybe ignoring some patches that don't fix issues relevant to the application).


Perhaps over the weeks and months that follow, your team or the client for whom you're working buy more hardware (laptops, desktops, even IoT devices) that can be provisioned for use by the Cloud and this application: all you do is register them with the Cloud you're using and it downloads some basic infrastructure (fabric); perhaps you add metadata to tell the Cloud when (time of day) these devices can be used or under what conditions (e.g., load is below 20%). Again, no programming/development is required: you "just" click-and-go! The development infrastructure may even be able to predict what you need before you do and install components so they are there when needed. Not so unreasonable given techniques such as Bayesian inference networks, or the fact that an algorithm is now a company board member!
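
Registering such a device might then be as small as the sketch below. The DeviceRegistry call is invented for this example, but the metadata mirrors the conditions just mentioned (a time-of-day window and a load threshold).

    import java.time.LocalTime;

    // Invented sketch: metadata describing when spare hardware may be used.
    record AvailabilityWindow(LocalTime from, LocalTime to, double maxLoad) { }

    public class RegisterDevice {
        public static void main(String[] args) {
            // Evenings only, and only while the device is under 20% load.
            AvailabilityWindow window =
                    new AvailabilityWindow(LocalTime.of(19, 0), LocalTime.of(7, 0), 0.20);

            // Hypothetical call: the cloud would push its fabric to the
            // device and schedule work onto it only within this window.
            // DeviceRegistry.register("alices-laptop", window);
            System.out.println("Would register device with window: " + window);
        }
    }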


Hopefully this doesn't sound too far-fetched. And hopefully it sounds like something which makes sense to you, because this is the kind of thing we're trying to accomplish with xPaaS. Some of this isn't too far from what we demonstrated earlier this year. And whilst there's still some way to go before we have all of this, we're making great progress with efforts like Fabric8. However, a lot of the work we need to do is presentation-related: how we make these technologies available so it really is a lot more point-and-click and a lot less writing of code. Don't get me wrong: I've never been a believer in "zero coding" development tools aimed at programmers, but for certain categories of developer it's possible to get close (but no closer).


Adding capabilities to your development (and deployment) environment as you need them is something we've become used to over the years with the likes of Maven, but what I've mentioned above goes much further. It needs to be as natural and opaque as selecting an app on a tablet or mobile phone; any dependencies (such as other apps) are downloaded automatically. As you download capabilities (augmenting your environment), so too will they be downloaded to your co-workers on the same project. Perhaps there'd be a notion of developer groups, so that you receive this auto-update only if you're in the same group as the colleague who realised they needed some new capability. And like I said several years ago, how this happens under the covers is not necessarily something that is exposed to the users (the opaque references I keep making), but it could be based upon dynamically updating containers with capabilities as and when they're required.


Finally, another key component to this Utopian Future is the typical cloud billing mechanism: pay for what you need when you need it. The developers don't have to buy development licences or support ad infinitum but only for the duration of the development process (though some are obviously needed for support and maintenance later). The customer need only buy deployment/runtime licences or support for the duration of the application's lifetime, which could be measured in months or years, but which typically still only gets billed per minute of execution. So if the application gets shut down for specific periods of the week or year, there's no need to pay for those periods: an application that only runs during business hours, say 50 of the 168 hours in a week, incurs less than a third of the cost of one billed around the clock.


Maybe this all seems far-fetched. Maybe it's obvious to you. After all, we and others have been talking about things like this for a while. We're continuing to head in these directions with xPaaS, Fabric8, WildFly, OpenShift and a large range of other efforts. Working with upstream communities, partners and customers, this is an exciting time!
