-
1. Re: Teiid and JMS
rareddy Aug 27, 2012 7:31 PM (in response to rokhmanov)Andriy,
There are various reasons why we abandoned the JMS support from legacy versions. The direction is that you can easily integrate with JMS using the JBoss ESB project and Teiid; I believe they have a JDBC adapter. However, this is about responding to incoming events from JMS. There should be an existing example somewhere; I will see if I can dig it up.
As for consuming data from JMS, we have not seen much demand on that side. What you mentioned above seems feasible for your use case of continuous queries.
Ramesh..
-
2. Re: Teiid and JMS
rokhmanov Aug 27, 2012 10:26 PM (in response to rareddy)Thanks Ramesh,
The JBoss ESB project does not support JBoss AS 7, if I am not mistaken.
-
3. Re: Teiid and JMS
rareddy Aug 28, 2012 10:38 AM (in response to rokhmanov)Take a look at SwitchYard project.
-
4. Re: Teiid and JMS
rokhmanov Aug 28, 2012 11:26 AM (in response to rareddy)How can Teiid be integrated with SwitchYard? I do not see any mention of a JDBC adapter in the SwitchYard docs.
-
5. Re: Teiid and JMS
markaddleman Aug 28, 2012 12:08 PM (in response to rareddy)We were talking about this yesterday, and I think there is a missing bridge between messaging semantics and Teiid's relational semantics. Messages are streams of data and, in general, processing requires a set of messages before you can perform most relational operations. To take an example from our domain: a continuous query that reports a stream of SNMP traps. Traps have fire-and-forget semantics, so any new query directly against the translator would naturally block until the next trap is delivered. The client needs to display all active traps, join them with other data, etc. This implies we need to buffer the stream of traps in a table that supports efficient continuous query semantics. Temp tables provide some of the solution, but not everything. I think there is a basic operation that Teiid could provide hiding in here somewhere.
I think of this operation as a buffering or data-windowing service. I think the parameters to this service are:
- a set of continuous execution source queries that fill the buffer,
- merge logic that takes rows from the source queries and fills/updates the buffer,
- a set of continuous execution queries that perform non-destructive reads from the buffer, and
- a cleanup operation that performs arbitrary maintenance on the buffer.
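[Editor's note: purely hypothetical sketch to concretize the four parameters above; no such statement exists in Teiid, and every name here is invented.]

```sql
-- Hypothetical windowing-service DDL (invented syntax, for discussion only).
CREATE BUFFER trap_window (
  SOURCE  (SELECT trap_id, device, severity FROM snmp.traps),  -- continuous source query filling the buffer
  MERGE   (UPSERT ON trap_id),                                 -- merge logic for incoming rows
  READERS CONTINUOUS,                                          -- non-destructive continuous reads
  CLEANUP (DELETE WHERE severity = 0)                          -- arbitrary cleanup operation
);
```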
From what I'm discovering in our use cases, we would want to create the buffer then later, at arbitrary points in our application, add and remove source queries. The buffer and all source queries must have compatible schemas. In our application, the queries that read the buffer may be either continuous or non-continuous. Under continuous execution, the clients should receive a result set immediately after executing the first query and, subsequently, only when the data in the buffer changes (note: for us, it's ok if the client receives the same result twice but there's no need to bombard the client with results if no source query has provided new data to the buffer).
The cleanup operation and merge logic are where things get complex, I think. In our simplest use case, we aren't dealing with stream data at all. Instead, we just want to maintain fresh data in the buffer. In this case, there is a single source query. Before each of the query's result sets, we would delete all the data in the buffer. The merge logic would insert each row from the source query into the buffer. In this case, each row in the buffer is atomic, so there is no need to ensure read consistency and no complex transaction semantics.
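[Editor's note: a minimal sketch of that simplest refresh pattern, in Teiid-style SQL; the table names buf_traps and src_traps are hypothetical.]

```sql
-- Before each new result set from the single continuous source query,
-- clear the buffer, then reload it. Each row is atomic, so no extra
-- read-consistency or transaction machinery is needed.
DELETE FROM buf_traps;

INSERT INTO buf_traps (trap_id, device, severity)
  SELECT trap_id, device, severity
  FROM src_traps;
```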
In our more complex use cases, we have multiple source queries whose data must be merged (in the H2 MERGE sense) into the buffer. At some point the cleanup logic must run and consolidate and/or delete rows in the buffer. Here, the data atoms are a bit more complex and perhaps require explicit locking operations. If we allow the buffer to be backed by the translator system rather than Teiid temp tables, then I would say let the translators handle any read-consistency issues.
As I write this, I'm realizing that we could simplify this service by requiring a single source query which is specified at the time the buffer is created. If the client wants new source queries, we must create new buffers. This would proliferate the number of buffers but we could simply UNION all the buffers together when we want to query them. I'm not sure if this is a good approach or not.
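[Editor's note: the one-buffer-per-source-query idea could be sketched as below; all table and column names are hypothetical, with each temp table filled by its own continuous source query.]

```sql
-- One temp-table buffer per source query, all with compatible schemas.
CREATE LOCAL TEMPORARY TABLE buf_snmp   (trap_id integer, device string, severity integer);
CREATE LOCAL TEMPORARY TABLE buf_syslog (trap_id integer, device string, severity integer);

-- Readers see one logical buffer by UNION-ing the pieces together;
-- UNION ALL avoids a distinct sort and keeps the branches simple.
SELECT trap_id, device, severity FROM buf_snmp
UNION ALL
SELECT trap_id, device, severity FROM buf_syslog;
```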
-
6. Re: Teiid and JMS
rareddy Aug 29, 2012 10:24 AM (in response to rokhmanov)SwitchYard supports HornetQ; they do have integration with it. I do not know enough details about the project, but maybe you can ask their community for examples.
Also note that this is from the client perspective, not from the consuming perspective in the translator. Based on some of the same issues raised by Mark, we do not provide built-in support for this in the translator layer.
-
7. Re: Teiid and JMS
rareddy Aug 29, 2012 10:32 AM (in response to markaddleman)Mark Addleman wrote:
As I write this, I'm realizing that we could simplify this service by requiring a single source query which is specified at the time the buffer is created. If the client wants new source queries, we must create new buffers. This would proliferate the number of buffers but we could simply UNION all the buffers together when we want to query them. I'm not sure if this is a good approach or not.
I think this will simplify your use case. As for the UNION case, you can also take a look at multi-source models or the partitioned union optimization.
-
8. Re: Teiid and JMS
markaddleman Aug 30, 2012 10:42 AM (in response to rareddy)Thanks, Ramesh. Actually, I'm *relying* on the Partitioned Union optimizations to make our solution work. A couple of questions:
- We'll likely be using partitioned unions as inline views rather than explicit, first-class Teiid views. I assume the optimizer will still recognize this case?
- I can imagine a hundred or so branches in the inline view union. Are there any limits we should be aware of?
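[Editor's note: for readers unfamiliar with the technique, a partitioned union as an inline view might look like the sketch below; the source and column names are hypothetical. The distinct constant in each branch is what makes the union partitioning-eligible.]

```sql
-- Each branch targets one source and tags its rows with a distinct
-- constant. A predicate on that constant lets the planner prune the
-- union down to only the matching branch(es).
SELECT v.device, v.severity
FROM (
  SELECT 'snmp'   AS src, device, severity FROM snmp_source.traps
  UNION ALL
  SELECT 'syslog' AS src, device, severity FROM syslog_source.events
) AS v
WHERE v.src = 'snmp';
```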
-
9. Re: Teiid and JMS
shawkins Aug 30, 2012 12:47 PM (in response to markaddleman)1) Yes, the optimizer will treat inline views as eligible for partitioning.
2) Not explicitly. We have run into situations such as https://issues.jboss.org/browse/TEIID-2039 where, due to view unnesting and liberal use of subqueries, the resulting pushdown SQL is too large. The NO_UNNEST hint and other workarounds can be employed in those situations.
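[Editor's note: the hint is applied as a comment on the subquery, along the lines of the sketch below; table names are hypothetical, and the exact hint spelling and placement should be verified against the Teiid reference guide for your version.]

```sql
-- NO_UNNEST asks the planner to leave the subquery in place rather
-- than merging it into the outer query, which keeps the generated
-- pushdown SQL from growing too large.
SELECT t.device
FROM traps t
WHERE t.trap_id IN /*+ NO_UNNEST */ (SELECT trap_id FROM active_traps);
```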
-
10. Re: Teiid and JMS
markaddleman Aug 30, 2012 12:54 PM (in response to shawkins)Thanks, Steven.
Any thoughts on the windowing service?
-
11. Re: Teiid and JMS
shawkins Sep 14, 2012 9:26 AM (in response to markaddleman)Sorry Mark, I missed your question here. In scanning your post, it seems like the biggest takeaway is the need for temp table creation on a source. Can you add a vote for https://issues.jboss.org/browse/TEIID-196? We'll likely move it up from 9 into 8.3, since it is also of interest for built-in handling of data-shipment joins.
Steve
-
12. Re: Teiid and JMS
markaddleman Sep 20, 2012 10:27 AM (in response to shawkins)Do you consider global temp tables in scope for TEIID-196? I can't find the specific JIRA right now.
-
13. Re: Teiid and JMS
shawkins Oct 8, 2012 12:17 PM (in response to markaddleman)Sorry for the delayed reply. TEIID-2067 can be related to TEIID-196, but it needs to be its own issue.