> Is there a way to find out what is the cause of this, perhaps through extended logging?
If you have a primary key on the view, we expect it to be unique and create a corresponding index on the materialization table. Is there a primary key on the view?
> I would have expected the materialized view to fail on external materialization as well?
Is there a similar primary/unique key constraint on the external table?
There is no key on the view currently. I did have one at one point but removed it when trying to get past the error. There is no key on the external table either.
Should I try putting a key on the view and retrying?
I was also wondering whether there is any extended logging that might give a bit more insight before I try again, because the federated query (from Oracle, using MAKEDEP, to Teradata to filter the results) takes about 2 hours to run.
> There is no key on the view currently. I did have one at one point but removed it when trying to get past the error.
And you got past the error, correct? If not, then we don't quite have a common understanding of what is going on.
> There is no key on the external table as well.
If you put one on there, I would expect that you would see a similar error.
> Should I try to put a key on the view, and retry?
That should just produce the same error for the internal case.
> I was also wondering whether there is any extended logging that might give a bit more insight before I try again, because the federated query (from Oracle, using MAKEDEP, to Teradata to filter the results) takes about 2 hours to run.
You can always enable trace logging, but I'm not sure at this point what that will tell you beyond what we're discussing here.
On the time involved it may be good to review the query plan. How large of a materialization are you creating? And is this a situation where incremental loading would make more sense?
Yes - I got past the error. It worked fine when materialized in PostgreSQL. It was a materialization of ~200,000 rows.
Incremental loading would probably make more sense. How would I approach it?
> Incremental loading would probably make more sense. How would I approach it?
Unfortunately that takes a manual approach. You would create a set of materialized views partitioned by primary key, with a unifying view on top. Each partition is then loaded only as it is accessed and can be managed independently.
Steven - this is very helpful information! Thanks so much for your help!
I have not attempted to partition with a primary key before.
I'm a bit clueless about where I would actually do the partitioning - would it be in the transforms, or would I need to do it in the DDL on the database itself? Such as:
PostgreSQL: Documentation: 9.3: Partitioning
I've been looking for examples of partitioned queries, but have not found much besides ROW_NUMBER(). Is ROW_NUMBER the primary way to partition, or are there other options with Teiid? Could we do this in SQL only, or would we need to write some Java code?
A partitioning approach can be done entirely in the VDB. What you have right now is probably a single materialized view:
create view x as (select ... );
Instead you would define the top-level view as a union of smaller materialized views:
create view x as (
  select ... from x_1 where id <= val1
  union all
  select ... from x_2 where id > val1 and id <= val2
  union all ...
)
The predicates can be anything that conveys what is expected in the subview. The predicates then allow the planner to prune union branches. Anything that is effectively a full table scan against x will cause all of the smaller materialized views to load.
This is not the same as a true incremental load, where only what you request is loaded on demand, but for common usage patterns it can help make large materializations more manageable.
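To make the shape concrete, here is a minimal sketch in Teiid-style view DDL. The source table `src.orders`, the partition views `orders_p1`/`orders_p2`, the columns, and the boundary value 100000 are all hypothetical stand-ins for your own schema:

```sql
-- Hypothetical sketch: two materialized partitions over a source table,
-- split on a primary-key boundary, plus a unifying view on top.
create view orders_p1 (
    id integer primary key,
    amount decimal
) options (materialized true)
as select id, amount from src.orders where id <= 100000;

create view orders_p2 (
    id integer primary key,
    amount decimal
) options (materialized true)
as select id, amount from src.orders where id > 100000;

-- The unifying view repeats the partition predicates so the planner
-- can prune branches when a query restricts id; a query without a
-- predicate on id will still touch (and load) every partition.
create view orders as
    select id, amount from orders_p1 where id <= 100000
    union all
    select id, amount from orders_p2 where id > 100000;
```

A query such as `select * from orders where id = 42` would then only need the first partition's materialization to be loaded.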