If I am understanding your objective, I think there are several ways to incorporate Teiid into a semantic technology stack. I can't really tell which of these is closest to what you're trying to achieve.
1. Use semantic tech on top of Teiid. The stack would access data via JDBC, ODBC, or any other API, and the data would be treated just as if it were actually in a relational database. This is exactly how any application uses Teiid, except that the semantic technology would be translating the relational data exposed through Teiid into the form (e.g., RDF triples) used within the semantic stack. Teiid can be used out of the box, and all of the interface logic lives in the semantic technology layers that access Teiid (via JDBC, ODBC, or any of the other public Teiid APIs).
2. Use semantic tech under Teiid. The stack would enable any application accessing data via Teiid to also access and integrate semantic data (e.g., triples in a triple store, triples coming out of a reasoning/inference engine, etc.) with non-semantic data. This would require creating a custom Teiid translator that can convert the statements Teiid wants to issue against the data source into requests against the underlying semantic data (e.g., a SPARQL query against an RDF triple store). If the translator also described the semantic model (e.g., OWL, or whatever the source uses) via the relational metadata that Teiid expects, then Teiid can automatically process this relational model to create the necessary information for a VDB. If the semantic model changed frequently, then the Teiid VDB in which this source is used/exposed might need to be rebuilt frequently. (Generally, changes in any relational database's metadata are infrequent and often require corresponding changes in the consuming applications. This is one reason why Teiid doesn't work terribly well with constantly changing data source schemas.)
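For what it's worth, here is a rough sketch of the core rewrite such a translator would perform: a relational projection plus an equality filter becomes a SPARQL SELECT. This is just the translation idea in Python, not the actual Teiid translator SPI (which is Java-based); the column names and predicate URIs are made up for illustration.

```python
# Sketch: turn a relational-style request (columns + equality filters)
# into a SPARQL SELECT against an RDF source. A real Teiid translator
# would build this from Teiid's command objects instead of plain lists.

def to_sparql(columns, predicates, filters=None):
    """Build a SPARQL query for one subject with the given predicates.

    columns    -> SPARQL variable names, one per predicate
    predicates -> full predicate URIs, in the same order as columns
    filters    -> optional {variable: literal} equality constraints
    """
    patterns = [
        f"?s <{pred}> ?{col} ." for col, pred in zip(columns, predicates)
    ]
    filter_clauses = [
        f'FILTER (?{var} = "{val}")' for var, val in (filters or {}).items()
    ]
    body = "\n  ".join(patterns + filter_clauses)
    select = " ".join(f"?{c}" for c in columns)
    return f"SELECT ?s {select}\nWHERE {{\n  {body}\n}}"

# Example: SELECT name, dept FROM employees WHERE dept = 'Sales'
query = to_sparql(
    ["name", "dept"],
    ["http://example.org/name", "http://example.org/dept"],
    filters={"dept": "Sales"},
)
print(query)
```

A real translator would also have to handle joins, pushdown capabilities, and type conversion, but the shape of the mapping is the same.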
Teiid doesn't really do inference or reasoning in the way those terms are used in semantic technology, so it's not clear to me how you might think it can. Sure, if you're using semantic technology to integrate data (e.g., integrate triples coming from various triple stores), then you might be able to use Teiid instead to create a virtual (relational) database that unifies the data. However, Teiid does this integration by a priori specification/declaration, not via runtime inference. And as I said earlier, Teiid was never really designed for the relational model to change very frequently, as it often does in a semantic technology stack.
You could consider using Teiid models that expose basic triples, but at that point you're really just using Teiid to implement some basic plumbing that could otherwise be done with some extremely trivial code. Such a usage would also ignore all of the core features and benefits of Teiid (which are quite amazing, really).
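To show what I mean by "extremely trivial code": exposing triples as a single three-column table and filtering it takes only a few lines, no Teiid required. The URIs below are made up.

```python
# A triple table as plain data: (subject, predicate, object) tuples.
triples = [
    ("ex:alice", "ex:worksIn", "ex:sales"),
    ("ex:bob", "ex:worksIn", "ex:support"),
    ("ex:alice", "ex:knows", "ex:bob"),
]

def select(s=None, p=None, o=None):
    """Return triples matching whichever of subject/predicate/object are set."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

print(select(p="ex:worksIn"))  # everyone with a worksIn statement
```

That's the whole "triple view": a table scan with optional equality predicates, which is why routing it through Teiid buys you little on its own.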
I hope this helps.
Considering the options you mention, I'm inclined toward the second one. I'm not sure, but it may also be the case that I'll be consuming Teiid rather than providing it. Schemas in RDF change, and the tool being built is meant to handle complex inference scenarios without a fixed schema.
After a while, I've come up with this mix. I handle transformation using RDF as an underlying unifying model for, for example (but not limited to), tabular, XML, JSON, and even OLAP data sources as input. Then I perform an 'ETL' inference step in a loader layer, where I can infer types and so on, and then populate a semantic graph. The idea is that the graph is flexible enough to be viewed through any of the APIs mentioned in the document (tabular, Neo4j, XML, JSON, etc.). Any of these APIs can be implemented in an ad-hoc manner, so there is no limit if you need another format. I try to explain the benefits of doing things this way in the document; apologies in advance, and let me know if I'm wrong.
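To make the loader step concrete, here is a minimal sketch of the kind of type inference I mean: guess an XSD-style datatype for each raw tabular value and emit typed triples into an in-memory graph. The subject scheme and datatype tags are illustrative assumptions, not from any particular tool.

```python
# Sketch of the "ETL inference" loader: infer a datatype per value,
# then load rows into a graph of (subject, predicate, object, datatype).

def infer_type(value):
    """Infer an XSD-style datatype tag from a raw string value."""
    try:
        int(value)
        return "xsd:integer"
    except ValueError:
        pass
    try:
        float(value)
        return "xsd:decimal"
    except ValueError:
        pass
    if value.lower() in ("true", "false"):
        return "xsd:boolean"
    return "xsd:string"

def load(rows, subject_prefix="urn:row:"):
    """Populate a simple in-memory semantic graph from tabular rows."""
    graph = []
    for i, row in enumerate(rows):
        for column, value in row.items():
            graph.append((f"{subject_prefix}{i}", column, value, infer_type(value)))
    return graph

graph = load([{"age": "42", "name": "Alice", "active": "true"}])
print(graph)
```

The same graph could then be projected back out as tabular rows, JSON, or a property graph, which is the "viewed as any of the APIs" part of the design.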
Link is dead.. any new/updated link?