
Virtual Thoughts


Red Hat Raffle Powered by OpenShift and SAP HANA



As a developer/solution architect/technical marketing engineer/whatever (red) hat du jour, I attend a lot of conferences. I mostly give presentations about Red Hat with SAP and other partners, and I work the booth on the show floor. In doing so, I get to see the infrastructure of many different venues and events. There is little consistency, except that the wifi is usually pretty sketchy. I also see a lot of different marketing approaches from the various Red Hat marketing groups: giveaways, booth layout and location, and so on. In an effort to utilize the valuable products that Red Hat has to offer and to show integration between OpenShift and SAP HANA, we created a Spring Boot service, a mobile iOS scanner application, and a web app to support event raffle drawings. It is available here: .


Red Hat Raffle


Red Hat Raffle started with PostgreSQL as the backend database. This worked well at several events, but with SAP TechEd in Las Vegas looming, we thought it would be cool to show how an OpenShift instance could host a containerized Spring Boot application backed by an external SAP HANA instance, using a combination of interesting and powerful technologies. Let's take a look.
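The drawing itself boils down to picking a random entry from the recorded badge scans. A minimal sketch of that logic (illustrative only: the function and field names here are mine, not the actual Red Hat Raffle code):

```python
import random

def draw_winner(entries, seed=None):
    """Pick one raffle winner from the scanned badge entries.

    entries: list of dicts, one per unique badge scan.
    seed: optional seed so a drawing can be reproduced and audited.
    """
    if not entries:
        raise ValueError("no entries to draw from")
    rng = random.Random(seed)
    return rng.choice(entries)

entries = [
    {"badge_id": "A100", "name": "Ada"},
    {"badge_id": "B200", "name": "Grace"},
    {"badge_id": "C300", "name": "Linus"},
]
print(draw_winner(entries, seed=42)["badge_id"])
```

Seeding the generator is optional; it simply makes a drawing repeatable if anyone asks to verify it afterward.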


The Technology


There are various projects used in this architecture. This diagram illustrates how they work together, and I discuss each of them below.

Red Hat Raffle Architecture




OpenShift is Red Hat's platform as a service (PaaS), built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux. For the Red Hat Raffle architecture, I use minishift on my laptop, connected to Ethernet at the booth. The Ethernet access eliminates the issues with unreliable wifi. Minishift is a single-node cluster with very minimal resource requirements. It also comes with its own Docker registry, so it is easy to install and set up. It's ideal for a single event where we may have 6,000-20,000 scans.

Spring Boot with Hibernate Running in an Embedded Tomcat

Spring Boot is one of the leading frameworks for building web applications and REST APIs, and it even includes Hibernate, the leading open source ORM (Object/Relational Mapping) framework, to make database persistence a snap. You can read more about ORMs and how they can help expedite the development process here. The Spring Boot application runs in a lightweight embedded Tomcat Servlet Container for a powerful, low-footprint runtime environment.
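To see what the ORM saves you from writing, here is the by-hand object-to-row mapping that Hibernate automates, sketched in Python with SQLite purely for brevity (the Entry entity and its fields are made up; the real application is Java with Hibernate against SAP HANA):

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Entry:
    badge_id: str
    name: str

# This translation between objects and rows is exactly what an ORM
# like Hibernate generates for you from the entity mapping.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (badge_id TEXT PRIMARY KEY, name TEXT)")

def save(e: Entry) -> None:
    conn.execute("INSERT INTO entry VALUES (?, ?)", (e.badge_id, e.name))

def find(badge_id: str) -> Entry:
    row = conn.execute(
        "SELECT badge_id, name FROM entry WHERE badge_id = ?",
        (badge_id,),
    ).fetchone()
    return Entry(*row)

save(Entry("A100", "Ada"))
print(find("A100"))
```

With Hibernate, the `save` and `find` boilerplate (and the SQL dialect differences between databases) disappear behind the entity annotations.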




There has been limited support for SAP HANA in Hibernate for some time via the SAP HANA Hibernate dialect (a dialect is the component that helps generate source-specific SQL), but recently SAP engineers improved the dialect by fixing bugs and adding additional functionality supported in SAP HANA. There are dialects for both row-store and column-store HANA tables.


Fabric8 and Maven


Fabric8 is a powerful platform for CI/CD on OpenShift. It allows you to quickly make and test changes. Fabric8 uses s2i (source-to-image) to create a container from source and deploy it to OpenShift. This allowed me to rapidly develop Red Hat Raffle. It is truly a game changer in the DevOps world. Fabric8 has a handy Maven plugin, which I used for my development.


Call to Action


So what's next? I recommend the following:


I am having so much fun developing with OpenShift (specifically minishift). It is so cool to be able to change my code, build and deploy with one simple command.


$ oc start-build MyProject --from-dir=local/git/project.git


This command will pull my local changes, build them, and deploy to OpenShift using s2i, allowing for immediate testing. If you aren't familiar with s2i, it is a tool that takes your source code and creates a Docker image automatically. Think about all the time I wasted over the years writing code, compiling and building a deployable artifact (like a WAR), and re-deploying to my server (sometimes this even required a server restart).


Upgrade your development experience with minishift! I am even helping a coding instructor set this up for his coding class to give his students a better learning experience. They will spend their time more effectively learning how to code instead of wasting days setting up and debugging environment issues.


Have fun being productive!


Docker Reference Guide

Posted by tejones Mar 31, 2017

This blog will capture and document common Docker commands. I'll try to keep them in a logical flow so you don't have to jump around. This is for me as much as it is for you.


If you have suggestions for other commands, please leave them in the comments. Also, if I am wrong about something, please point it out and I will correct it. If you have a lot of really good info, I may even give you edit rights.




How to get an image locally?

docker pull {dockerimage} - pulls a Docker image from a registry into your local image cache.


How to add an insecure registry to docker?

vi /etc/sysconfig/docker - opens docker configuration for editing (may need sudo).

Add your registry:

# If you have a registry secured with https but do not have proper certs

# distributed, you can tell docker to not look for full authorization by

# adding the registry to the INSECURE_REGISTRY line and uncommenting it.

INSECURE_REGISTRY='--insecure-registry localhost:5000'


How to run an image?


There are a lot of options available; here is the example I use most often.

docker run -it -p 8080:8080 -p 8443:8443 -p 9990:9990 -p 31000:31000 {docker_image}


How to get IPs for Docker containers?

There are a few ways to go about this, but this seems the easiest, assuming your Docker is using a network bridge (the default):

docker network inspect bridge

will output something like this:



[
    {
        "Name": "bridge",
        "Id": "101764200e68e573306bf799e0b7ed04ca03db8a4bb4c5f9a98bbf8f8a6de8dc",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "",
                    "Gateway": ""
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "280062f9a863983516ebe80b9aba9e507351cb0c85bb7b5d53dfc0473df8247b": {
                "Name": "naughty_liskov",
                "EndpointID": "91b6c2b270d72b2e48255d66580aaecfd3029dd7dc7fab4e52b369f412e9c634",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

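Rather than eyeballing that JSON, you can pull out the name-to-IP mapping programmatically. A small Python sketch (the abbreviated sample below stands in for real `docker network inspect bridge` output, and the address shown is just a typical default-bridge IP used for illustration):

```python
import json

def container_ips(inspect_json):
    """Map container name -> IPv4 from `docker network inspect` output."""
    result = {}
    for network in json.loads(inspect_json):
        for cid, c in network.get("Containers", {}).items():
            result[c["Name"]] = c["IPv4Address"]
    return result

# Abbreviated sample of `docker network inspect bridge` output.
sample = """
[
  {
    "Name": "bridge",
    "Containers": {
      "280062f9a863": {
        "Name": "naughty_liskov",
        "IPv4Address": "172.17.0.2/16"
      }
    }
  }
]
"""
print(container_ips(sample))
```

In practice you would feed it the command's real output, e.g. `container_ips(subprocess.check_output(["docker", "network", "inspect", "bridge"]))`.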

More to come...



This demo illustrates a retail store scenario using a mobile app to track customer movements while shopping at a store. There are many parts to the demo, and I will step through each one in detail. The demo is hosted on GitHub, so you can download it and try it for yourself. Also, a short video of the demo can be found here.


The Demo Components and Roles


Beacon Data


The beacon data is generated by a Python script to emulate customers moving through a store. The script sends MQTT messages to Red Hat JBoss A-MQ, Red Hat's messaging platform, reflecting customer movements (Enter, Move, Exit).


There are two subscribers to the topics on the message broker: a web UI running a node.js MQTT client, and a Red Hat JBoss BRMS instance. BRMS is Red Hat's Business Rules Management System. It serves as the decision point in our demo and has rules defined to determine whether customers are "focused" (in the same department for x number of moves) or "roaming" (moving through the store). Both of these scenarios indicate that the customer may need assistance, so a message is added to A-MQ with the customer's id and location in the store. This message is intended to alert sales personnel in the store.
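The decision logic can be sketched as follows. This is a Python approximation of the idea, not the actual BRMS rules, and the threshold is arbitrary:

```python
def classify(moves, focus_threshold=3):
    """Classify a customer from their recent department history.

    moves: list of department names, most recent last.
    Returns "focused", "roaming", or None (no alert needed).
    """
    if len(moves) < focus_threshold:
        return None
    recent = moves[-focus_threshold:]
    if len(set(recent)) == 1:
        return "focused"          # lingering in one department
    if len(set(recent)) == len(recent):
        return "roaming"          # a different department every move
    return None

print(classify(["shoes", "shoes", "shoes"]))        # focused
print(classify(["shoes", "toys", "electronics"]))   # roaming
```

In the demo, either classification would trigger an alert message on the A-MQ topic; anything in between produces no alert.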


Web UI


The web UI is listening for both customer movements and sales alerts and uses this data to show customer movements in the store and display the alerts.




SAP SQL Anywhere


In addition to categorizing customers and adding alerts to A-MQ, BRMS is also responsible for inserting data into SQL Anywhere. The SQL Anywhere instance provides web services that are called by BRMS to insert the data. When network bandwidth or connectivity is available, the data is then pushed to the HANA Cloud Platform. This is the beauty of the Intelligent Edge: the data continues to be gathered at the gateway and is made available to the data center when possible.
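The store-and-forward behavior at the edge can be sketched like this. Note this is an illustration of the pattern only; the real demo relies on SQL Anywhere's synchronization facilities rather than hand-written code:

```python
from collections import deque

class EdgeBuffer:
    """Buffer rows locally; flush upstream when connectivity returns."""

    def __init__(self, upload):
        self.pending = deque()
        self.upload = upload      # callable that pushes one row upstream

    def record(self, row):
        self.pending.append(row)  # always succeeds locally at the gateway

    def flush(self, connected):
        if not connected:
            return 0              # keep gathering at the edge
        sent = 0
        while self.pending:
            self.upload(self.pending.popleft())
            sent += 1
        return sent

cloud = []                         # stands in for the cloud-side database
buf = EdgeBuffer(cloud.append)
buf.record({"cust": "c1", "event": "Enter"})
buf.record({"cust": "c1", "event": "Move"})
buf.flush(connected=False)         # network down: nothing leaves the edge
buf.flush(connected=True)          # back online: both rows reach the cloud
print(len(cloud))
```

The key property is that `record` never depends on connectivity; only `flush` does, which is what lets the gateway keep collecting beacon data through an outage.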




The HANA instance in our architecture also pulls data from a second HANA database using HANA Smart Data Access (SDA). This second database contains the inventory of all stores in our fictitious corporation; the list of stores is kept there as well.


Red Hat JBoss Data Virtualization


The data virtualization layer federates all related but disparate sources to provide a central connection point for the native Android app. This app has been downloaded by the store's customers and allows us to track their movements in the remote store. We can use this information, along with the data in HANA, Salesforce customer data, PostgreSQL customer sales history, and promotional data from Amazon Redshift, to target the precise promotional notification to send to the customer's mobile app. We can tell what department the customer is currently in and what their previous purchases were, and match a promotion that will most likely result in a sale. This is in addition to all of the other data we can use in our app through the unified view of all of our enterprise and beacon data.






Reference Architecture





Please check out our github repository and try the demo out for yourself! As usual, please don't hesitate to ping me with any questions you may have!

I attended another SAP Sapphire and (as usual) was blown away by the size and content-richness of the event. There is always a nice mix of business and tech users, which creates a great opportunity for expanding your comfort zone. I met many more open-source proponents than I have at previous conferences, which tells me that SAP's efforts to embrace OSS developers are working, further enhancing our relationship.


I gave a presentation each day on Red Hat JBoss Middleware solutions for SAP that was well-attended by techies as well as high-level management. I also met many prospective SI partners interested in working with us to meet customer needs, as they usually run into more issues than they can conquer when implementing SAP systems.




I gave a talk and demo with Sid Sipes from SAP around a joint effort we are leading regarding an IoT scenario using technologies from both Red Hat and SAP. The use case is a retail store where customer movements are tracked and Red Hat JBoss Middleware components (A-MQ and BRMS) process the beacon data and make decisions based on each customer's actions. The data is sent to HANA from SQL Anywhere using RDSync and then federated by Red Hat JBoss Data Virtualization, along with related data from a Postgres database, to be served up to a native Android app via OData URLs. I will write a separate blog to document the demo in detail (so stay tuned), but in the meantime have a look at the following architecture diagram that shows the flow of data.


SAP Retail 051616.png

I am looking forward to adding some more features to this demo as the world of IoT has really opened up a huge world of possibilities. The demo will be further enhanced at Red Hat Summit, so be sure to check out our session or visit the SAP or Red Hat booth if you will be in attendance. I hope to see you there.

Connecting to OLAP-based data sources is something Red Hat JBoss Data Virtualization (JDV) has supported for some time. The invokeMdx() function of JDV allows a user to connect to and execute Cube or BEx queries defined in SAP BW. The following steps show you how to create a connection, generate your MDX query, and model your JDV procedure to pull back your SAP BW data. This example pulls data from a BEx query, but of course you can execute directly against a Cube if required. All you need is the MDX for the Cube.


Setting Up Your Test Environment


1. Creating an InfoCube and BEx query in SAP BW.


If you don't already have an InfoCube and BEx query, here are some ways to create sample data. I created a cube with 1,000,000 rows and added a BEx query on top of it using these steps.


a. Create an InfoCube. I created a Cube matching the prerequisite InfoCube specified for the test-data generator in step b. Basically, it requires an InfoCube that looks like this:




b. Generate test data. Using CUBE_SAMPLE_CREATE in the SAP UI, I was able to generate a cube with 1,000,000 rows. You can specify whatever number of rows you would like though.


2. Create BEx query. This is a really nice video that walks you through creating a BEx query.


3. Get the MDX for your BEx query. You can do this using SAP transaction MDXTEST or RSCRM_BAPI. For RSCRM_BAPI, you need to enable debug. To do that, go to transaction RSCRMDEBUG. Enter your SAP username and the Debugging Options as mdx_gen. Choose Insert and execute. Then exit this screen and go to transaction RSCRM_BAPI.


Accessing from Red Hat JBoss Data Virtualization


  1. Setup a JDV server and connect from JBDS.
  2. Add the driver directly into the server configuration file with an entry as shown below:

<driver name="olap" module="org.olap4j"><driver-class>org.olap4j.driver.xmla.XmlaOlap4jDriver</driver-class></driver>

Run the following CLI command in $JDV_HOME/bin folder:

./jboss-cli.sh -c --command="/subsystem=datasources/jdbc-driver=olap:add(driver-datasource-class-name=org.olap4j.driver.xmla.XmlaOlap4jDriver,driver-module-name=org.olap4j,driver-name=olap)"
  3. Import the source connection in JBDS (Import->Teiid Connection->Source Model).
  4. Create the view model and add your virtual table.
    • The format for the JDV virtual query is:
      • SELECT x.column1 AS column1, x.column2 AS column2, x.column3 AS column3 FROM (EXEC {source_model}.invokeMDX('{MDX_FOR_CUBE_OR_BEx_REPORT}')) AS w, ARRAYTABLE(w.tuple COLUMNS column1 type, column2 type, column3 type) AS x
    • If you are using my example, your SQL will look something like the following. Notice the root hierarchy value has been changed to "Measures"; this is because the enterprise ids used in SAP for the root are not resolvable from JDV. Also note that the enterprise ids used in your SAP system will be different:

SELECT x.zcustid AS cust_id, x.zprdid AS prod_id, x.price AS price, x.quantity AS quantity
FROM (EXEC {source_model}.invokeMDX('SELECT
  {[Measures].[058YFS98ML8W8QUZ11JQ6UHM1],[Measures].[058YFS98ML8W8QUZ11JQ6UNXL]} ON COLUMNS,
  ...
FROM [ZIC_TEST/Z_CUST_ORDER_LIST]')) AS w, ARRAYTABLE(w.tuple COLUMNS zcustid string, zprdid string, price decimal, quantity decimal) AS x

  5. Save, then right-click on your table and preview the data.
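Because the wrapping pattern for invokeMDX is mechanical, it is easy to generate. Here is a small helper (illustrative Python, not part of JDV; the source model name and MDX are placeholders) that produces the virtual query from an MDX string and a list of column definitions:

```python
def jdv_mdx_query(source_model, mdx, columns):
    """Build the JDV virtual query wrapping an MDX statement.

    columns: list of (name, teiid_type) pairs, in tuple order.
    """
    select = ", ".join(f"x.{name} AS {name}" for name, _ in columns)
    arraytable = ", ".join(f"{name} {typ}" for name, typ in columns)
    return (
        f"SELECT {select} "
        f"FROM (EXEC {source_model}.invokeMDX('{mdx}')) AS w, "
        f"ARRAYTABLE(w.tuple COLUMNS {arraytable}) AS x"
    )

# "SapBW" and the abbreviated MDX below are placeholders for your own
# source model name and the MDX generated for your BEx query.
q = jdv_mdx_query(
    "SapBW",
    "SELECT ... FROM [ZIC_TEST/Z_CUST_ORDER_LIST]",
    [("zcustid", "string"), ("zprdid", "string"),
     ("price", "decimal"), ("quantity", "decimal")],
)
print(q)
```

The ARRAYTABLE column list must match the order of the fields in the MDX tuple, which is the main thing this helper keeps consistent.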


That's it! You have successfully consumed SAP BW data in JDV!

Red Hat's partnership with SAP goes way back and has never been stronger than it is today. Whether it is certifying RHEL for SAP applications, creating joint demos and webinars, or certifying Red Hat JBoss Middleware products with SAP technologies, we are constantly improving and expanding our integration story with SAP. This article will give you an up-to-date view of our solutions with SAP. Note that this is not an "official" document of supported solutions, but a document to create awareness of the possible integration scenarios with Red Hat Middleware and SAP.


OpenShift: Container Application Platform by Red Hat, Built on Docker and Kubernetes 


SAP Vora

SAP Data Hub

SAP HANA Express Edition

SAP SQL Anywhere


Red Hat JBoss Data Virtualization (JDV) as of Version 6.2



A Translator provides an abstraction layer between the JDV query engine and a physical data source; it knows how to convert JDV-issued query commands into source-specific commands and execute them using the Resource Adaptor. It also has the intelligence to convert the result data coming back from the physical source into the form the JDV query engine is expecting. JDV provides various pre-built translators for sources like SAP HANA, Sybase, SAP Gateway, Oracle, DB2, SQL Server, MySQL, PostgreSQL, XML, File, etc. A Translator also defines the capabilities of a particular source, like whether it can natively support query joins (inner joins, cross joins, etc.) or support criteria.


SAP Gateway - The SAP Gateway translator allows users to consume and update SAP data via the SAP Gateway server using the OData protocol. JDV is both a producer and a consumer of OData, which means not only can we consume SAP OData services, we can also expose federated OData services out of the top. The US Army is using this capability, in a solution implemented by Northrop Grumman.

SAP HANA - New in JDV 6.2, the SAP HANA translator allows users to create abstraction layers over all SAP data stored in SAP HANA. Our translator is currently in the process of attaining SAP certification for SAP HANA, and we will release a whitepaper about the solution around the same time.

SAP Services registry - Although this may be a less common use case for customers given the complexity usually associated with SOAP services generated from SAP, it is still worth noting. We support the consumption of SAP services as a source.

Other SAP data source support via translators - SAP BW via the OLAP translator, Sybase, Sybase IQ, Sybase ASE, SQLAnywhere


SAP HANA as a consumer of JDV - SAP HANA has its own data virtualization capabilities, which we can leverage and expand upon using an ODBC interface. The HANA DV capabilities (SDA - Smart Data Access) are somewhat limited in breadth of source support and connectivity options, but we can immediately increase that support tenfold by connecting JDV to HANA via SDA.

SAP HANA as a data cache - JDV can use HANA as a cache to increase JDV performance as well as increase value in a company's HANA investment.

Fuse/Camel integration - There is a Camel component (Olingo) that allows consumption and updating of SAP data using OData URLs via JDV for those scenarios that may require an ESB or an EIP platform.



GitHub - tejones/jboss-sap-integration


Red Hat JBoss Fuse as of Version 6.2




SAP Systems via JCo  - The SAP component is a package consisting of a suite of ten different SAP components. There are remote function call (RFC) components that support the sRFC, tRFC, and qRFC protocols; and there are IDoc components that facilitate communication using messages in IDoc format. The component uses the SAP Java Connector (SAP JCo) library to facilitate bidirectional communication with SAP and the SAP IDoc library to facilitate the transmission of documents in the Intermediate Document (IDoc) format.

See: Red Hat JBoss Fuse a Certified Enterprise Integration Solution for SAP - Additional Topics

SAP Gateway - There is a Camel component that integrates with SAP Gateway (currently supports producer endpoints only).

SAP Gateway OData Client - The Olingo2 component utilizes Apache Olingo version 2.0 APIs to interact with OData 2.0 and 3.0 compliant services. A number of popular commercial and enterprise vendors and products support the OData protocol. A sample list of supporting products can be found on the OData website. The Olingo2 component supports reading feeds, delta feeds, entities, simple and complex properties, links, counts, using custom and OData system query parameters. It supports updating entities, properties, and association links. It also supports submitting queries and change requests as a single OData batch operation. The component supports configuring HTTP connection parameters and headers for OData service connection. This allows configuring use of SSL, OAuth2.0, etc. as required by the target OData service. This component can also call JDV using the OData server capabilities that come out of the box with JDV.


Red Hat JBoss Hibernate


* SAP developers have recently contributed to the SAP HANA dialect in Hibernate to fix bugs and add previously missing functionality.


Dialect Support


SAP DB: org.hibernate.dialect.SAPDBDialect
SAP HANA (column store): org.hibernate.dialect.HANAColumnStoreDialect
SAP HANA (row store): org.hibernate.dialect.HANARowStoreDialect
Sybase 11: org.hibernate.dialect.Sybase11Dialect
Sybase ASE 15.5: org.hibernate.dialect.SybaseASE15Dialect
Sybase ASE 15.7: org.hibernate.dialect.SybaseASE157Dialect
Sybase Anywhere: org.hibernate.dialect.SybaseAnywhereDialect



Red Hat Mobile Application Platform


SAP Netweaver Connector - The Red Hat Mobile Application Platform allows mobile connectivity to SAP Netweaver via RFCs using the FeedHenry SAP Netweaver connector. This is implemented on a per customer basis.




This is a living blog that I will update as technologies and use cases evolve. As more integration scenarios arise, I will add them here. If there is an integration you are interested in, please feel free to reach out to me!

What It Is


With the release of Teiid 8.11.Final, we have introduced support for SAP HANA as a data source. Teiid is the project used for creating the Red Hat JBoss Data Virtualization (JDV) product, and Teiid 8.11.Final will be part of the upcoming Data Virtualization 6.2 release. SAP HANA is an in-memory data platform that is deployable as an on-premise appliance or in the cloud. It is a revolutionary platform that's best suited for performing real-time analytics and developing and deploying real-time applications. At the core of this real-time data platform is the SAP HANA database, which is fundamentally different from any other database engine on the market today.


Image reference from SAP Community Network


JDV overview

How the SAP HANA Translator Works

Using the SAP HANA JDBC driver with Teiid, you can connect to SAP HANA, compose your views, and consume data from SAP HANA and other disparate data sources to create a single view of all of your SAP-related information. JDV can expose this single source via JDBC, ODBC, REST, OData, or SOAP services. This allows for consumption of SAP HANA data alongside web services, relational databases, application systems, cloud sources, etc., by enterprise applications, mobile clients, reporting tools, or whatever client needs access. In addition, with JDV you can create custom views for each end user of the data, with full CRUD capabilities, custom security, and caching, for a secure and performant solution to data integration with SAP.
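Once JDV exposes a view over OData, clients query it with ordinary OData system query options. As an illustration, here is how such a URL might be assembled (the host, service, entity, and property names below are made up, not from the actual demo):

```python
from urllib.parse import urlencode

# Hypothetical OData endpoint exposed by a JDV virtual database.
base = "http://jdv.example.com:8080/odata/FlightView/Flights"

# Standard OData system query options: filter, projection, paging, format.
params = {
    "$filter": "carrier eq 'AA'",
    "$select": "carrier,connid,flightDate,price",
    "$top": "10",
    "$format": "json",
}
url = base + "?" + urlencode(params)
print(url)
```

Any HTTP client (a mobile app, a reporting tool, curl) can then issue a GET against that URL; no JDBC or ODBC driver is needed on the consuming side.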

A Typical Use Case

While there are many possibilities for solving SAP HANA data integration challenges using JDV, some scenarios are more common across all enterprises. A somewhat common problem users encounter is getting disparate data from SAP and other systems or web services and easily consuming that federated data in the field within mobile applications. JDV can easily accomplish this using the SAP HANA translator as well as its full suite of translators and connectors to create a single view and point of connection for your mobile applications. Furthermore, since the data from JDV can be exposed as web services (OData, REST, or SOAP), JDBC, or ODBC, the implementation can be tailored to be application-specific. The following diagram shows what an environment like this might look like, integrating SAP HANA with a SQL Anywhere data source and exposing as an OData service from a JDV instance running in the cloud.


Try it out... for free!

You can download Teiid 8.11.Final now which includes the SAP HANA translator. You will also need the Teiid Designer to create your models and views. Feedback is always welcome and please let me know if you have any questions!


In Teiid Designer, we can expose Virtual Databases (VDBs) as RESTful services by generating a REST WAR containing REST web service procedures. We have enhanced that capability in the 9.1 release (in beta as of this writing) by adding Swagger annotations that expose the service metadata and allow you to view and execute the procedures in the generated WAR. Swagger is an open-source framework that allows you to document your REST APIs.

Here is a video that illustrates the feature:


First Impressions


I have attended multiple SAP TechEd and d-code conferences over the years, but this was my first Sapphire. As anticipated, it was a different perspective in terms of audience. At TechEd and d-code, most attendees are developers with an in-depth knowledge of SAP ABAP, development, data integration, open standards, and other technical topics. At Sapphire, the average attendee was more business-minded, but many came from a technical background. There were many architects and product managers, for example.


My previous webinars and talks on SAP integration with Red Hat JBoss Middleware were all heavily slanted toward the technical details. This time around I gave talks discussing our Data Virtualization integration with SAP HANA. The fact that my session was devoid of the technical overtones of previous presentations gave me the chance to appeal to the business case for our Data Virtualization solution for SAP HANA. I was able to take the time to discuss the issues with enterprise data management, the weaknesses of existing integration technologies, and where Red Hat JBoss Data Virtualization fills the gaps. I then followed the discussion with a drag-and-drop demo that federated data from SAP HANA and a MySQL instance in Teiid Designer, the tooling for Data Virtualization. The talk and demo were well-received. I think explaining enterprise data needs, why current solutions fall short, and how data virtualization can do what no other technology can to fulfill real-time data federation requirements was a huge eye-opener. Some attendees expressed that they had no idea this technology was even available, and noted the many man-hours it will save. This conference has given me a new perspective. I may re-purpose this presentation for a future, more technical conference. Developers will certainly appreciate the business value as well as the technical value of Data Virtualization.





Every time I attend a conference and pull booth-duty, I learn so much from my colleagues regarding other Red Hat products. This time I got a full serving of GSS (Global Support Services) knowledge and fully understand why we have the best customer support organization in the industry. We never rest on our laurels and are always looking at ways to improve and grow. Not only are we the most compelling and cost-effective technical option with our full stack of solutions, we ice the cake with our best-in-class services and support.


Not to be lost in all my Red Hat cheerleading (sorry... just calling it like I see it) is the fact that our partnership with SAP is stronger than it has ever been. I have been working with my SAP colleagues for so long, it feels like we are one big company. The value of an SAP-plus-Red-Hat integration strategy in the enterprise is as compelling as it is due to a level of cooperation and synergy seldom seen in technology today. The beneficiaries are our customers, who are reaping the rewards through cost savings, efficiencies, and productivity never realized before.


Please feel free to reach out to me if you have any questions about our SAP integration solutions with Red Hat JBoss Middleware, and follow me on Twitter @jonested. I look forward to SAP d-code in November, where we have big plans for unveiling more collaboration between SAP and Red Hat. Hope to see you there!


I attended SAP d-code Las Vegas and Berlin representing Red Hat at the Open Source bar of the Hacker's Lounge. The Open Source bar is a new concept for SAP and shows how far they have come in embracing and seeing the value of open source. In addition to Red Hat, GitHub, CloudFoundry, OpenStack, and SUSE participated in the bar.




Although we had a Red Hat booth (which was well-attended), the intention of the Open Source bar was to encourage conversation between OSS companies and SAP developers. There were many who were not familiar with OSS, or who knew about it but did not realize the enterprise-class value it offers. We featured a demo using Data Virtualization running in OpenShift, consuming data from an SAP SQL Anywhere OpenShift instance and an external SAP Gateway instance. The unified view was then consumed by a native Android application.


We also had a couple of talks at both events that covered mobile development in the cloud using open-source technologies (PaaS, Data Virtualization, Android). The talks were recorded:

Las Vegas Tech Talk

Berlin Tech Talk

The beer and light-up pilsner glasses were a hit in Las Vegas, while the red fedoras and a challenging puzzle in the Berlin Red Hat booth kept traffic flowing. There were many great conversations on RHEL with SAP, HANA, OpenShift, Data Virtualization, and many other hot topics. Although I seldom (read: never) get a chance to attend any of the sessions due to booth work, talks, etc., SAP TechEd/d-code is always one of my favorite conferences. They do everything first class and make it a valuable experience for all in attendance. I hope to go to the Shanghai d-code in March to spread some OSS love there as well!

Red Hat JBoss Data Virtualization allows you to expose multiple data sources as a single virtual database. Imagine pulling in all of your various sources together into a single, updatable view or series of views using a single point of connection. Now imagine doing that in the cloud using OpenShift! This article will walk through the steps to accomplish just that. The only prerequisite is that you have an SAP Gateway instance available. The demo uses the SAP Gateway demo system described here:


Here are the steps:


1. Setting Up a Data Virtualization instance on OpenShift.

2. Adding a SQL Anywhere instance in OpenShift.

3. Importing from SQL Anywhere and SAP Gateway.

4. Federating those SAP sources, in JBoss Developer Studio (JBDS), into a single virtualized view that will be consumed by an Android application.

5. Running the Android application.


Here is a diagram that illustrates the architecture of this example:

sap gateway and sql anywhere dv architecture.png


1. Setting up Data Virtualization in OpenShift


The first step will be to provision Data Virtualization on OpenShift and install JBDS with Teiid Designer, the design tool for Data Virtualization. See this article for details: Provision Data Virtualization on OpenShift and Connect from Teiid Designer.


We will be working with an Android application, so you will need to install the Android plugin into JBDS. You will also need to clone the Android application and artifacts from tejones/jboss-sap-integration · GitHub.


2. Adding a SQL Anywhere Instance in OpenShift


Now that we have our tooling and our OpenShift Data Virtualization instance running, we are now ready to create our SQL Anywhere database on OpenShift.

From a command prompt, type:


# rhc cartridge add -a <your app>


After adding the cartridge, you will have to accept the license agreement; instructions are generated by the installation. After starting the ASA instance, refresh the OpenShift instance in JBDS to pick up the ASA, and you can then begin port forwarding in JBDS.


We will need to add our data to the SQL Anywhere instance. Any JDBC-capable SQL tool will suffice, but I use and love SQuirreL.

The DDL is located in {local_git_repo_location}/jboss-sap-integration/android-dv-sap/demo resources/ddl/flight_sqlana.ddl. I've also included a SQL Anywhere driver for use in the demo located here: {local_git_repo_location}/jboss-sap-integration/android-dv-sap/demo resources/driver/jconn4-26502.jar. You will need that to create the driver setup and connection in SQuirreL. You will use your local host IP in the connection URL since you are using port forwarding. The default credentials are username=dba and password=sql. Here is what your connection in SQuirreL should look like:



You are now ready to connect to the SQL Anywhere source in OpenShift and execute the DDL to create your objects and load your data. Note that the demo data is specific to the SAP Gateway demo system. You may need to modify the information in liveFlightFeed to match the test data coming from the Gateway FlightCollection. I am targeting Nov 26, 2014 for this demo, but you can change the data to whatever you want. The liveFlightFeed is really meant to simulate a real-time flight data source such as FlightStats.


3. Importing from SQL Anywhere and SAP Gateway


As I have stated, I will connect to the SAP Gateway demo system and use the Flight data for this example. To access this data, you need an account. To create relational objects that map to the collections and functions in the SAP Gateway service, we need to create a data source for the service and then import it into Teiid Designer using the Teiid Connection import option. We will do the same for the SQL Anywhere instance, so let's begin there.


1. Launch JBoss Developer Studio and switch to the Teiid Designer perspective.  The perspective options are shown in the upper right corner of JBDS.

2. Connect to the Data Virtualization instance in Teiid Designer as demonstrated in step 1.

3. Create a Model Project (or you can use any existing project).

  • In ModelExplorer, Right-click > New > Teiid Model Project
  • In the Model Project wizard, enter Demo for the Project name.  Click Finish to create the empty project.

4. Select the project, then Right-click > Import... You will see the import dialog (shown below). Select Teiid Connection >> Source Model, then click Next. We are using the Teiid Connection Importer since we cannot directly connect to the DB port on OpenShift; this importer allows us to retrieve the DB schema through Teiid.

5. On the first page of the import wizard, click the New button in the DataSource section.  Click Add Driver to select the jconn4-26502.jar file from the driver directory of your local git project. Name the datasource Flight_Sybase_ASA and enter jdbc:sybase:Tds:{IP for OpenShift ASA instance}:2638/demo?ServiceName=demo. The username and password are dba and sql. Click Apply and then click OK. Click Next on the importer wizard.


6. On the next page of the wizard, set the Translator type to Sybase and add an optional source import property with the name importer.tableTypes and the value TABLE. This will avoid importing system tables from the SQL Anywhere instance. Click Next.


7. On the next page, select a project and give the model name myflight. Click Next. This will deploy a temporary dynamic VDB that enables the metadata mapping from the SQL Anywhere objects to the Data Virtualization relational objects by creating DDL.


8. You will now see the DDL that will be used to create the Data Virtualization relational objects. Click Next to see the objects that will be created, then click Finish.


This will create the objects for us in the ModelExplorer.


Creating the SAP Gateway Relational Objects

The process for importing the SAP Gateway source is a lot like importing SQL Anywhere except the connection information is different and we will use a different translator.

1. Repeat step 4 from above. When creating the datasource for SAP Gateway as in step 5 above, name the datasource SAPGatewayFlight and select the webservice driver. Enter the Gateway service URL for both the URL and End Point, and HTTPBasic for security. The username and password are your credentials for the SAP Gateway service you are using. Click Apply and then click OK. Click Next on the importer wizard.


2. On the next page, select the sap-nw-gateway translator and click Next. Enter SAPGatewayFlight for the model name and click Next.

3. Complete step 8 above and the relational objects that map to the SAP Gateway service will be generated for you in JBDS.

Note: With the current version of the Data Virtualization cartridge for OpenShift, there is an error in the mapping of the "byte" data type. The flightDetails_period field of the FlightCollection table should be changed to "short". This will be corrected in the next release of the cartridge. To change the type, select the field and click on the ellipsis of the "datatype" property in the properties panel below. A data type dialog will pop up allowing you to select the "short" type.


4. Federating SAP Sources into a Single Virtualized View


Now that you have imported the SQL Anywhere source from OpenShift and the SAP Gateway OData service, you are ready to join the data into a single view. To start, we will need to create a virtual model and table that will be used to federate the data from our SAP Gateway Flight service and our SQL Anywhere database running in OpenShift.


1. Right-click on the Demo project and select New->Teiid Metadata Model. For Model Name enter AllFlightDataModel. Set the Model Type as View Model and click Finish.

2. Right-click on the new model and select New Child->Table. Enter AllFlightDataTable for the Name and click OK. Double-click on the generated model to open the modeling diagram.


3. Now expand the SAPGatewayFlight model and drag the FlightCollection table to the arrow with the T in the diagram. This will add the FlightCollection table to the transformation. Now do the same with the liveFlightFeed table from the myflight model to complete the transformation. Save your changes.


4. In order to expose the view table as an OData service in Data Virtualization, we need to add a primary key. Right-click on the AllFlightDataTable and select New Child->Primary Key. Name the new Primary Key allFlightData_PK and click the Ellipsis for the Columns property in the properties tab below the Model Explorer. Select the airline_iata, depart_time, arriveTime, flightDate, fromAirport and toAirport columns and click OK. Save your changes.
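Behind the diagram, the transformation for AllFlightDataTable boils down to a federating query across the two sources. A rough sketch of what it might look like is below; the join columns are assumptions based on the key columns listed above, so align them with the schemas you actually imported:

```sql
-- Sketch only: federate SAP Gateway flights with the live feed.
-- Join-column names are assumptions; match them to your imported models.
SELECT fc.*, lff.*
  FROM SAPGatewayFlight.FlightCollection AS fc,
       myflight.liveFlightFeed AS lff
 WHERE fc.flightDate  = lff.flightDate
   AND fc.fromAirport = lff.fromAirport
   AND fc.toAirport   = lff.toAirport
```

Teiid Designer generates and maintains this transformation SQL for you as you drag tables into the diagram; you can also edit it by hand in the transformation editor.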


We are now ready to create and deploy our virtual database.


1. Right-click on the Demo model project and click New->Teiid VDB and add your models.


Click Finish.

2. To deploy your new VDB, right-click on the Flight VDB in the Model Explorer and select Modeling->Deploy. A new datasource will be created for you and your Flight VDB will now be available in OpenShift as an OData service. You could also create a JDBC connection to the VDB if you wanted to, but we will use the OData capabilities of Data Virtualization for our Android application.


Testing Your New Service


Let's test our new service running in OpenShift. The URL pattern for an OData service running in Data Virtualization is http://{host}:{port}/odata/{vdbName}/{modelName}.{tableName}, so we will use http://localhost:8080/odata/flight/AllFlightDataModel.AllFlightDataTable?$format=json to see our federated SAP data.
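Scripting against the service is just string assembly over that URL pattern. Here is a minimal Java sketch; the localhost:8080 host reflects the port-forwarded setup, and the fromAirport filter column is an assumption based on the view's key columns:

```java
// Sketch: building OData query URLs for the Flight VDB on Data Virtualization.
// Host/port assume port forwarding to the OpenShift instance is active.
public class FlightODataUrls {
    static final String BASE = "http://localhost:8080/odata/flight/";

    // {modelName}.{tableName} plus OData query options
    static String query(String table, String options) {
        return BASE + table + "?" + options;
    }

    public static void main(String[] args) {
        // Full federated view as JSON
        System.out.println(query("AllFlightDataModel.AllFlightDataTable", "$format=json"));
        // Filtered by departure airport (fromAirport is one of the view's key columns)
        System.out.println(query("AllFlightDataModel.AllFlightDataTable",
                "$filter=fromAirport eq 'JFK'&$format=json"));
    }
}
```

Standard OData query options such as $filter, $top and $orderby work against the view exactly as they would against any other OData service.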

Note: Make sure you have port forwarding running in order to execute the service with the above URL.

5. Run the Android application


Running the Android app in JBDS using the Android emulator, you can access your data in DV on OpenShift using OData URLs.

I recently upgraded my Mac to Mavericks and, of course, MySQL was hosed afterwards. I use MySQL a lot for development and demos and dreaded going through the pain of getting it running again. I recalled the initial installation was fairly unpleasant, with a bit of trial and error before I finally had my test databases the way I wanted them. When I upgraded to Mavericks, I panicked when my tests started failing with errors connecting to MySQL. Alas, it was broken. A bit of Googling indicated that it needed to be reinstalled on Mavericks, which was not actually true. Had I seen this first, I would have known better. Anyway, I came across this Apple User Tip that not only made installation and setup a snap, but made integration with OS X even better. Here is a summary from the article:


1. Download MySQL, if you haven't already. The article indicates to download the OS X 10.6 pkg within the dmg. I wanted the latest, so I got mysql-5.6.17-osx10.7-x86_64.tar.gz and extracted it to /usr/local/mysql. Either way seems fine though.

2. Run sudo vi /Library/LaunchDaemons/com.mysql.mysql.plist, add and save the following:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.mysql.mysql</string>
    <key>KeepAlive</key>
    <true/>
    <key>RunAtLoad</key>
    <true/>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/mysql/bin/mysqld_safe</string>
        <string>--user=mysql</string>
    </array>
    <key>WorkingDirectory</key>
    <string>/usr/local/mysql</string>
</dict>
</plist>

(This is a typical minimal daemon definition for a /usr/local/mysql install; adjust the paths if you put MySQL elsewhere.)

3. Run sudo vi /etc/my.cnf, add and save the following (a minimal configuration; again, adjust the paths if you installed MySQL somewhere other than /usr/local/mysql):

[mysqld]
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
4. Run sudo launchctl load -w /Library/LaunchDaemons/com.mysql.mysql.plist and MySQL will start. You are then ready to set the root password and add any users.
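For example, with the 5.6 tarball layout above, the root password can be set with the bundled mysqladmin client (the password shown is a placeholder):

```
$ /usr/local/mysql/bin/mysqladmin -u root password 'my-new-password'
```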

Hopefully this will save you some time. There are lots of things like this that can be a pain to do (or remember what you did last time). As I come across those, I will add them here.


There are so many things to love about Red Hat Summit: learning about new features and technologies, meeting current and prospective customers, seeing colleagues face-to-face after only knowing them virtually for years, the keynotes (especially the middleware keynote and demo), and presenting topics I am passionate about to people that are eager to listen and learn. That last one is my favorite. This year I spent a lot of time at the Red Hat JBoss Integration pod. There I could educate people on the power and value of Data Virtualization, where we had looping demos on Data Virtualization (DV) with big data and SAP NetWeaver Gateway. Attendees were drawn by the lure of big data and intrigued by the versatility of Data Virtualization, which could not only consume Hadoop and other big data sources, but also integrate them with other sources for a single centralized view of all their analytical data.



For the last few years, I have been involved with SAP integration with JBoss Middleware. Summit gave me a platform to show off consuming SAP NetWeaver Gateway services into DV and exposing them as a relational source for federation with other disparate but related data. Not only does this demo well, it illustrates the ability to easily consume SAP data and combine it with other important data to create a single unified view that can be consumed by enterprise applications via JDBC, ODBC, SOAP, REST or OData. Here is a recording of the demo:



I also gave a joint talk with my Red Hat colleagues Kenny Peeples and William Collins that covered all connectivity options for SAP with Red Hat JBoss Middleware. I gave a brief history of solutions, starting with the SAP Enterprise Services Registry, which led to the introduction of SAP NetWeaver Gateway in October 2011. I also talked about OData, which is now an OASIS spec with the recent approval of v4, and showed a demo of Data Virtualization with SAP. Kenny covered JBoss Fuse with the new SAP NetWeaver Camel component, and William discussed and demo'd his new JCo Camel component. There was a lot of interest in the talk and many great questions. I have attached the slide deck from the presentation to this blog.


I'm looking forward to next June in Boston for the next Summit! Hope to see you there!


OData v4 - The Journey

Posted by tejones Mar 17, 2014

The OASIS OData TC Experience

I was privileged to represent Red Hat on the OASIS OData Technical Committee. Thanks to the tireless efforts of committee members from Microsoft, SAP and others, the end result is a powerful and flexible specification that will revolutionize REST services. There is much more work to do as the committee turns its focus to v4.1 and v5, but now is a good time to reflect on the awesome job done by the group!


Red Hat has been a proponent of OData and we have incorporated v2/v3 support in our projects and products. Our Data Virtualization product (based on Teiid), has extensive support for OData. Inspired by the SAP NetWeaver Gateway server, we added the ability to import Gateway services into a virtualized view for seamless integration of SAP data with other disparate data sources.


This led to full support for generic OData services as well as creating an OData provider for all data deployed to the Data Virtualization environment. See here for an example of how to consume and produce OData services in Data Virtualization.


So what's new in v4?

Lots! This document does a great job in identifying deletions, additions and improvements in v4 compared to v3:

New in OData v4


Will Red Hat Support v4?

You bet! We are excited to add the capabilities of the v4 spec into our tooling and Data Virtualization engine. There are two open source projects put forward by SAP which we plan to contribute to and support. Apache Olingo and Eclipse Ogee will fit nicely within our projects and Eclipse tooling.


Want to Contribute?

Are you or your company interested in shaping the next version of OData? You are more than welcome to join the committee and help us make a great spec even greater!
