The ExecutionFactory.setSourceRequired(false) helped - good to know; I hadn't seen this feature before.
For the continuous execution: I ran some tests today passing a Date to DataNotAvailableException, and it works as I expect. Having a date in the exception constructor is actually a very good feature - it gives us a couple of new business cases (for example, easily scheduling a continuous execution to run at a defined interval for some period of time in the future) and also resolves some synchronization headaches. Thank you very much!
Just a little update, hope it will be useful: I noticed that in some rare cases (1 in 100 or more) throwing DataNotAvailableException.NO_POLLING has no effect - the execute() repeats. Probably some concurrency issue.
In my particular case I want the continuous execution to stop producing results, but to keep the Statement open for some time.
Throwing new DataNotAvailableException(new Date(Long.MAX_VALUE)) seems more stable - I haven't seen a repeated execute() in the last couple of thousand executions so far.
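To make the behavior above concrete, here is a minimal, self-contained sketch of the date-based contract: the engine should not invoke execute() again before the instant carried by the exception. This is a simplified stand-in, not the real Teiid engine (the class and method names here are hypothetical); it just shows why new Date(Long.MAX_VALUE) effectively means "never poll again" while the Statement stays open.

```java
import java.util.Date;

public class NoMorePollsSketch {
    // Simplified model of the date-based DataNotAvailableException contract:
    // the next execute() may not happen before the instant in the exception.
    static boolean pollDue(Date waitUntil, long now) {
        return now >= waitUntil.getTime();
    }

    public static void main(String[] args) {
        Date never = new Date(Long.MAX_VALUE); // roughly 292 million years from epoch
        // The poll is never due, now or at any reachable future instant,
        // so execute() is not repeated while the Statement stays open:
        System.out.println(pollDue(never, System.currentTimeMillis())); // false
        System.out.println(pollDue(never, Long.MAX_VALUE - 1));         // false
    }
}
```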
See the DataNotAvailableException strict setting. NO_POLLING is not strict (as it was in prior releases), so it is quite possible for another poll for results to happen depending on other factors, such as client requests, other sources, prefetch timing, etc. For your needs you probably always want a strict DataNotAvailableException, which can also be set when using the delay constructor. You can also remove most, if not all, of the timing issues in your case by setting ExecutionFactory.isForkable to false. This makes the processing thread run your translator execution as well - but of course it should only be done for low-overhead sources.
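The strict/non-strict distinction can be sketched as follows. This is a hypothetical, simplified model (the real class is org.teiid.translator.DataNotAvailableException; the names below are local stand-ins): a non-strict delay is a hint the engine may cut short when some other event fires, while a strict delay is a hard lower bound.

```java
import java.util.Date;

public class StrictDelaySketch {
    // Hypothetical stand-in for the exception discussed above.
    static class DataNotAvailable extends RuntimeException {
        final Date waitUntil;
        final boolean strict;
        DataNotAvailable(Date waitUntil, boolean strict) {
            this.waitUntil = waitUntil;
            this.strict = strict;
        }
    }

    // The engine may poll early (client request, prefetch, etc.) only when
    // the exception is non-strict; a strict delay must be waited out.
    static boolean pollAllowed(DataNotAvailable e, long now, boolean otherEventFired) {
        if (now >= e.waitUntil.getTime()) {
            return true;                         // delay elapsed either way
        }
        return !e.strict && otherEventFired;     // early poll only if non-strict
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        Date inFiveSec = new Date(now + 5000);
        // non-strict: a prefetch or client event can trigger a repeat execute()
        System.out.println(pollAllowed(new DataNotAvailable(inFiveSec, false), now, true));
        // strict: the engine must wait out the delay
        System.out.println(pollAllowed(new DataNotAvailable(inFiveSec, true), now, true));
    }
}
```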
Thanks Steven. I set isForkable() to return false in my translator, but did not notice any difference. Do I understand correctly that setting isForkable to false enforces stricter transaction brackets?
Forkable just indicates whether another thread can be used to interact with the translator. Forkable false means that the engine thread will always be used, thus removing any timing issues between the engine thread and the connector work-item thread. But I'm using the term timing issue loosely here. If the DataNotAvailableException is not strict, then we do not make a guarantee on a minimum delay.
Looks like this discussion will never end!
Another interesting behavior I'd like to share. A small reminder: we have a view (backed by a custom translator) which continuously generates data at specified time intervals (every 5 seconds, for example). I join this view in queries with other (regular) views and tables.
select t.* , P.symbol, P.COMPANY_NAME
from timetable t , Accounts.PRODUCT P
where t.ts_start = now()
and t.ts_stop = timestampadd(SQL_TSI_SECOND, 20, now())
and t.period = 5000
and P.symbol = 'TS'
The resulting query runs continuously, and results from both tables are returned at the specified time period (5000 milliseconds, for example). This is exactly what we need, and we love this feature. The thing that bothers me: you might notice that the tables in the query above are not explicitly joined together. In other words, there is no "t.field1 = P.field2" or the like. Per my understanding, Teiid should split such queries and execute them independently (the part related to Accounts.PRODUCT running non-continuously, since it is a regular H2 model table). I tend to think that Teiid is smart enough to see that this is a continuous query and treats the whole query as atomic. And it is very flexible this way - I can run any query with time delays without bothering to join it properly with our timetable view.
Anyway, I'd like to have your opinion on that behavior. If one day it is considered a defect, we might have to do something on our side to restore the functionality. I can provide logs and an execution plan if needed.
There is no explicit need to join the results. The issues to consider are the order of execution and whether you want the non-continuous results cached across repeated executions. Without a join, the two sides may be executed in either order, and possibly in parallel, depending on your other settings. As long as that meets your needs, then you're OK. If you need to control which side gets consulted first, a join can simplify that situation. You could then potentially use an access pattern, a makedep/makeind hint, or a nested table / lateral join. All of these would force one side to be consulted first.
As for caching across repeated executions, there is now the connector caching facility, which can be used to create session/user/vdb scoped entries in the Teiid result set cache. Alternatively, if you expect results to be limited, you could use a delegating translator to wrap your non-continuous sources such that they also return reusable executions, which would simply hold the relevant results. There will likely be some new mechanisms that can assist here in 8.2 or later.
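The "reusable execution that simply holds the relevant results" idea can be sketched in plain Java. This is not the Teiid delegating-translator API itself - it is a hypothetical, self-contained illustration of the caching pattern: the wrapped source is consulted once, and every later continuous cycle replays the held rows.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class ReusableResultsSketch {
    // Hypothetical stand-in for a delegating execution: run the underlying
    // source once, then replay the cached rows on every repeated execution.
    static <T> Supplier<T> memoize(Supplier<T> delegate) {
        return new Supplier<T>() {
            private T cached;
            private boolean done;
            @Override public synchronized T get() {
                if (!done) {
                    cached = delegate.get();
                    done = true;
                }
                return cached;
            }
        };
    }

    public static void main(String[] args) {
        AtomicInteger sourceCalls = new AtomicInteger();
        Supplier<List<String>> source = () -> {
            sourceCalls.incrementAndGet();           // real source consulted here
            return Arrays.asList("TS", "Acme Corp"); // pretend PRODUCT row
        };
        Supplier<List<String>> cachedExecution = memoize(source);
        cachedExecution.get(); // first continuous cycle hits the source
        cachedExecution.get(); // later cycles replay the held results
        System.out.println(sourceCalls.get()); // the source was hit only once
    }
}
```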