This page lists projects related to Infinispan that students may be interested in. They are mostly standalone or only minimally dependent on core Infinispan internals, and are often research-oriented or experimental in nature. Contributors interested in working on any of these should get in touch with the Infinispan developer community via IRC or via the developer mailing list to suggest proposals and discuss solutions.
- ISPN-127 Ability to bring up/take down nodes based on SLAs
- ISPN-57 Support Google App Engine
- ISPN-462 ISPN-463 ISPN-464 ISPN-465 ISPN-466 Handle HTTP and EJB session state for other commercial and open source app servers
- Implement topology change updates in Infinispan Hot Rod Python and Ruby clients
- Visualization and tracing of messages between nodes
- Proof of correctness for complex distributed patterns
ISPN-127 Ability to bring up/take down nodes based on SLAs
Infinispan currently uses JOPR - an open source management console - to visualize and present statistics. The same statistics are also available via JMX. This project is about using a rules engine - such as Drools - to capture these statistics and allow users to express thresholds and service level agreements as rules, which may trigger firing up more Infinispan nodes - or even taking some down.
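As a minimal sketch of the idea - using the platform MBeanServer and a hypothetical `CacheStats` MBean with an `AverageReadTime` attribute, rather than Drools or Infinispan's real statistics MBeans - an SLA rule could watch a JMX attribute and decide when to ask for another node:

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import java.lang.management.ManagementFactory;

public class SlaRuleSketch {

    // Hypothetical statistics MBean standing in for Infinispan's real cache statistics.
    public interface CacheStatsMBean {
        long getAverageReadTime();
    }

    public static class CacheStats implements CacheStatsMBean {
        private final long averageReadTime;
        public CacheStats(long averageReadTime) { this.averageReadTime = averageReadTime; }
        public long getAverageReadTime() { return averageReadTime; }
    }

    // An "SLA rule": if the average read time exceeds the threshold, ask for another node.
    public static String evaluate(MBeanServer server, ObjectName name, long thresholdMillis)
            throws Exception {
        long avg = (Long) server.getAttribute(name, "AverageReadTime");
        return avg > thresholdMillis ? "SCALE_UP" : "OK";
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=CacheStats");
        server.registerMBean(new CacheStats(250), name);

        // SLA: average read time must stay below 100 ms.
        System.out.println(evaluate(server, name, 100)); // SCALE_UP
    }
}
```

A real solution would feed such attribute values into Drools working memory as facts and let user-supplied rules decide on scaling actions, instead of hard-coding the threshold comparison.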
ISPN-57 Support Google App Engine
Google's App Engine for Java imposes several restrictions on what can be run. This project is to investigate what needs to happen to make Infinispan work on Google App Engine.
ISPN-462 ISPN-463 ISPN-464 ISPN-465 ISPN-466 Handle HTTP and EJB session state for other commercial and open source app servers
Infinispan will be the central clustering library to handle HTTP and EJB session state for JBoss AS 6. The same pattern can be applied to other app servers, and the JIRAs above pertain to WebSphere, WebLogic, Glassfish, Tomcat and Jetty respectively.
Implement topology change updates in Infinispan Hot Rod Python and Ruby clients
Hot Rod is a binary, platform-independent protocol created to enable clients to communicate with Infinispan servers. Hot Rod clients can receive, as part of operation responses, cluster topology update information. The aim of this task is to implement this logic in Hot Rod's Python client and Ruby client. See issue 3 and issue 4 for more info.
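To illustrate the kind of logic a client needs - using a deliberately simplified, hypothetical encoding, NOT the actual Hot Rod wire format - a topology update boils down to decoding a topology id plus a list of member addresses and replacing the client's cluster view:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class TopologySketch {

    // Hypothetical, simplified layout (not the real Hot Rod encoding):
    // topologyId (int), memberCount (int), then host (UTF string) + port (int) per member.
    public static List<String> readTopology(DataInputStream in) throws IOException {
        int topologyId = in.readInt();
        int count = in.readInt();
        List<String> members = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            String host = in.readUTF();
            int port = in.readInt();
            members.add(host + ":" + port);
        }
        return members;
    }

    public static void main(String[] args) throws IOException {
        // Encode a fake topology update with two members.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(42);                       // topology id
        out.writeInt(2);                        // member count
        out.writeUTF("nodeA"); out.writeInt(11222);
        out.writeUTF("nodeB"); out.writeInt(11222);

        List<String> view = readTopology(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(view);               // [nodeA:11222, nodeB:11222]
    }
}
```

The Python and Ruby implementations would do the same against the real protocol: track the last-seen topology id, and when a response carries a newer one, decode the member list and atomically swap the set of servers the client load-balances requests over.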
Visualization and tracing of messages between nodes
We know - in theory - what kind of messages should be generated between nodes to perform specific operations. Still, to debug configuration or implementation problems across the whole stack (application + Infinispan + JGroups) we often need to dig through logs containing thousands of trace lines, even when sampling only small periods of time.
It would be very useful to have a way to automatically extract the interesting patterns from a running system. Reliable information could be collected, for example, using (just ideas):
- A custom JGroups protocol
- Byteman to instrument JGroups for specific events (like network socket usage, or thresholds being reached in internal structures like resend tables or threadpool sizes)
- Simple log file parsing
The collected information could then be used to generate condensed reports highlighting the patterns being used in practice to compare them with expected patterns.
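The simplest of the collection approaches - log file parsing - could start as small as this sketch, which assumes a hypothetical trace line format ("sender -> receiver : command") and condenses raw lines into per-link message counts:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TraceReportSketch {

    // Hypothetical trace format, e.g. "NodeA -> NodeB : PutKeyValueCommand".
    private static final Pattern LINE = Pattern.compile("(\\S+) -> (\\S+) : (\\S+)");

    // Condense raw trace lines into message counts per (sender, receiver, command).
    public static Map<String, Integer> condense(List<String> lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            Matcher m = LINE.matcher(line);
            if (m.matches()) {
                String key = m.group(1) + "->" + m.group(2) + " " + m.group(3);
                counts.merge(key, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> trace = Arrays.asList(
                "NodeA -> NodeB : PutKeyValueCommand",
                "NodeA -> NodeB : PutKeyValueCommand",
                "NodeB -> NodeA : ClusteredGetCommand");
        condense(trace).forEach((k, v) -> System.out.println(k + " x" + v));
    }
}
```

The condensed counts are exactly the kind of "observed pattern" that could then be diffed against the expected message pattern for a given operation.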
I have two different kinds of output in mind:
- A graphical visualization, showing the cluster nodes and a sequence of colored arrows showcasing what is being done
- A short text representation, to be used by:
- automated tests to verify invariant expectations are not broken by code changes
- a possible future tool to formally prove correctness or detect race conditions
Proof of correctness for complex distributed patterns
The core of Infinispan can be represented in very simple "primitives": a set of nodes send messages to each other. The fundamental rules are also relatively simple:
- a message can't be received before it's sent
- a message could be lost
- a node could be killed at any time
From these basic building blocks one can start building some logical consequences:
- messages sent to multiple nodes might arrive at different times
- multiple messages sent to multiple nodes might be delivered in different order
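The second consequence can be made concrete with a tiny sketch: it enumerates every combination of delivery orders of two messages at two nodes and shows that, without an ordering protocol, the nodes can end up with different final values for the same key (the basic problem JGroups' ordering protocols address):

```java
import java.util.Arrays;
import java.util.List;

public class ReorderingSketch {

    // Apply messages in the given delivery order; the last write wins.
    static String apply(List<String> order) {
        String value = null;
        for (String msg : order) value = msg;
        return value;
    }

    public static void main(String[] args) {
        // Two concurrent writes to the same key, and the two possible delivery orders.
        List<List<String>> orders = Arrays.asList(
                Arrays.asList("x=1", "x=2"),
                Arrays.asList("x=2", "x=1"));

        // Each node may independently observe either delivery order.
        boolean divergence = false;
        for (List<String> nodeA : orders)
            for (List<String> nodeB : orders)
                if (!apply(nodeA).equals(apply(nodeB)))
                    divergence = true;

        System.out.println("nodes can diverge without ordering: " + divergence); // true
    }
}
```

A proof of correctness for a distribution pattern is essentially this enumeration done exhaustively and symbolically, over all message interleavings, losses, and node failures.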
Some of these problems are solved by JGroups - but even then it must be configured accordingly, meaning Infinispan's usage of it is sensitive to using the wrong method or flags.
While the Java/Scala code in Infinispan is not overly complex, it is not well suited for reasoning about the consequences of often-needed changes in the codebase. It would be very useful to be able to define patterns in an ad-hoc meta language, and to provide a proof of correctness for the patterns it uses - or at least prove that the events it should avoid cannot happen.
A great help for the project would be to sketch such a language and try it out on some of the distribution schemes Infinispan uses, to prove they are correct or to identify correctness flaws. In a second (optional) step one could build some tooling around this to provide automatic demonstration / simulation for proposed changes expressed in this meta language.
The Promela language could be used for this purpose; one could build tooling around it, try to apply it to Infinispan, and possibly work on ad-hoc extensions.
See also the previous proposal, "Visualization and tracing of messages between nodes": that would make it possible to trace the real Infinispan behaviour and then demonstrate its correctness, or identify problems before they happen.