19 Replies Latest reply on Jul 25, 2017 12:52 PM by vikrant02

    Openshift - Infinispan uses pod ip address in cluster instead of hostname

    vikrant02

      Hi,

       

We have deployed an Infinispan 9.x cluster in an OpenShift environment. Once the Infinispan nodes join the cluster they start communicating over IP addresses instead of hostnames, even though the JGroups stack in use is TCP with TCPPING configured with hostnames.

       

We are using this Infinispan with Keycloak as an external cache. Keycloak's embedded Infinispan connects to the external Infinispan using a remote-store.

Below is the configuration for the Infinispan remote-store in Keycloak:

      <local-cache name="sessions">
          <remote-store passivation="false" fetch-state="false" purge="false" preload="false" shared="true" cache="sessions" remote-servers="remote-cache">
              <property name="rawValues">true</property>
              <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
          </remote-store>
      </local-cache>
      <outbound-socket-binding name="remote-cache">
          <remote-destination host="${env.INFINISPAN_HOST}" port="${env.INFINISPAN_PORT:11222}"/>
      </outbound-socket-binding>

      The external Infinispan cluster is fronted by a load balancer (an OpenShift service) that provides a static hostname for Infinispan; this hostname is configured in the Keycloak remote-store for Keycloak-to-Infinispan communication.

       

      The setup works fine until all instances (pods) of the external Infinispan cluster go down and we bring the cluster up again. At that point Keycloak cannot reach the new Infinispan instances: it keeps retrying the old IP addresses, which are dynamic in OpenShift and change on each restart.

       

      Can we configure Infinispan to communicate over hostnames instead of IP addresses?

       

      Thanks,

      Vikrant

        • 1. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
          sebastian.laskawiec

          The short answer here is yes, you can easily use hostnames. You need to use the Downward API to inject the hostname into an environment variable and then use it as the "jboss.node.name" parameter. Here's an example.
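To illustrate the idea, here is a minimal Java sketch. `POD_HOSTNAME` is a hypothetical variable name you would populate from the Downward API in your pod spec; in WildFly the value is normally passed as `-Djboss.node.name=...` on startup, so this is just the equivalent logic spelled out:

```java
import java.net.InetAddress;

public class NodeName {

    // Pick up the node name from an environment variable injected via the
    // Kubernetes Downward API (POD_HOSTNAME is a hypothetical name here),
    // falling back to the JVM's view of the local hostname.
    public static String resolve() throws Exception {
        String fromEnv = System.getenv("POD_HOSTNAME");
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv;
        }
        return InetAddress.getLocalHost().getHostName();
    }

    public static void main(String[] args) throws Exception {
        // In WildFly this is normally done with -Djboss.node.name on the
        // command line rather than programmatically.
        System.setProperty("jboss.node.name", resolve());
        System.out.println("jboss.node.name = " + System.getProperty("jboss.node.name"));
    }
}
```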

           

          Although I think your architecture could be enhanced. I advise you to use KUBE_PING for cluster discovery, which queries the Kubernetes API and gathers information about the cluster. You can find more information on how to use it in this blog post. I would also recommend deploying Infinispan using StatefulSets, which ensures stable hostnames. Recently we implemented Infinispan OpenShift Templates, and our persistent template does exactly what you need. Perhaps you'd like to give it a go?

          • 2. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
            nadirx

            Quoting the Infinispan 9.1 User Guide documentation:

            • Hot Rod Server Module - This module is an implementation of the Hot Rod binary protocol backed by Infinispan which allows clients to do dynamic load balancing and failover and smart routing.
              • A variety of clients exist for this protocol.
  • If your clients are running Java, this should be your de facto server module choice because it allows for dynamic load balancing and failover. This means that Hot Rod clients can dynamically detect changes in the topology of Hot Rod servers as long as these are clustered, so when new nodes join or leave, clients update their Hot Rod server topology view. On top of that, when Hot Rod servers are configured with distribution, clients can detect where a particular key resides and so they can route requests smartly.
              • Load balancing and failover is dynamically provided by Hot Rod client implementations using information provided by the server.

            What is happening here is that the client connects to the server, which sends back a list of cluster member addresses. To ensure that the "public" address is sent, you should use the external-host/external-port attributes:

             

            <hotrod-connector name="hotrod3" socket-binding="hotrod" cache-container="default" idle-timeout="100" tcp-nodelay="true" worker-threads="5" receive-buffer-size="10000" send-buffer-size="10000">
              <topology-state-transfer external-host="publichost" external-port="1234" />
            </hotrod-connector>

             

            Alternatively, configure the client to use BASIC intelligence via the ConfigurationBuilder (Infinispan JavaDoc All 9.0.3.Final API) or by passing in the property:

             

            infinispan.client.hotrod.client_intelligence=BASIC
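For reference, a minimal sketch of wiring that property into a Hot Rod client programmatically. The `RemoteCacheManager` call is shown only as a comment because it requires infinispan-client-hotrod on the classpath, and `infinispan-service:11222` is a hypothetical service address:

```java
import java.util.Properties;

public class BasicIntelligence {

    public static Properties clientProps() {
        Properties props = new Properties();
        // BASIC intelligence disables topology updates: the client keeps
        // talking to the configured address(es) instead of the cluster view
        // the server pushes back.
        props.setProperty("infinispan.client.hotrod.client_intelligence", "BASIC");
        props.setProperty("infinispan.client.hotrod.server_list", "infinispan-service:11222");
        return props;
    }

    public static void main(String[] args) {
        Properties props = clientProps();
        // With infinispan-client-hotrod on the classpath this would be used as:
        // new RemoteCacheManager(
        //     new org.infinispan.client.hotrod.configuration.ConfigurationBuilder()
        //         .withProperties(props).build());
        System.out.println(props.getProperty("infinispan.client.hotrod.client_intelligence"));
    }
}
```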

            • 3. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
              vikrant02

              Thanks sebastian.laskawiec and nadirx for the quick replies.

               

              sebastian.laskawiec, we are currently using a PetSet and the kubernetes JGroups stack, and we don't have a problem getting the Infinispan cluster itself to work. The issue is the integration between Keycloak and an external Infinispan.

               

              I will try your suggestions and will let you know the results.

              • 4. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                vikrant02

                Hi nadirx/ sebastian.laskawiec,

                 

                I tried both suggestions, but the issue still persists.

                 

                I added the jboss.node.name argument with each pod's hostname and also updated 'external-host' in the hotrod-connector:

                <hotrod-connector socket-binding="hotrod" cache-container="clustered" worker-threads="100">
                    <topology-state-transfer external-host="${jboss.node.name}" lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
                </hotrod-connector>

                I see the following in Keycloak when it connects to the external Infinispan through the remote-store (where I have provided an OpenShift service hostname pointing to the Infinispan pods):

                14:41:23,731 INFO  [org.infinispan.client.hotrod.RemoteCacheManager] (ServerService Thread Pool -- 54) ISPN004021: Infinispan version: 8.1.0.Final

                14:41:23,749 INFO  [org.infinispan.client.hotrod.impl.protocol.Codec21] (ServerService Thread Pool -- 52) ISPN004006: /10.0.34.242:11222 sent new topology view (id=5, age=0) containing 3 addresses: [cache-am-0.node.poc.coi/10.0.35.28:11222, cache-am-2.node.poc.coi/10.0.33.144:11222, cache-am-1.node.poc.coi/10.0.34.242:11222]

                But if the Infinispan cluster goes completely down and then recovers, I see the exception below in Keycloak:

                14:51:45,312 ERROR [io.undertow.request] (default task-10) UT005023: Exception handling request to /auth/realms/master/protocol/openid-connect/auth: org.jboss.resteasy.spi.UnhandledException: org.infinispan.client.hotrod.exceptions.TransportException:: Could not fetch transport
                  at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:247)
                  at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:168)
                  at org.jboss.resteasy.core.SynchronousDispatcher.writeResponse(SynchronousDispatcher.java:471)
                  at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:415)
                  at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:202)
                  at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:221)
                  at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
                  at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
                  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
                  at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
                  at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
                  at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:90)
                  at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
                  at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
                  at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
                  at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
                  at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
                  at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
                  at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
                  at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
                  at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
                  at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
                  at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
                  at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
                  at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
                  at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
                  at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
                  at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
                  at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
                  at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
                  at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
                  at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
                  at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
                  at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
                  at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
                  at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
                  at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
                  at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
                  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                  at java.lang.Thread.run(Thread.java:748)
                Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not fetch transport
                  at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:405)
                  at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:244)
                  at org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.getTransport(AbstractKeyOperation.java:42)
                  at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:53)
                  at org.infinispan.client.hotrod.impl.RemoteCacheImpl.getWithMetadata(RemoteCacheImpl.java:215)
                  at org.infinispan.persistence.remote.RemoteStore.load(RemoteStore.java:109)
                  at org.infinispan.persistence.manager.PersistenceManagerImpl.loadFromAllStores(PersistenceManagerImpl.java:463)
                  at org.infinispan.persistence.PersistenceUtil.loadAndCheckExpiration(PersistenceUtil.java:113)
                  at org.infinispan.persistence.PersistenceUtil.lambda$loadAndStoreInDataContainer$1(PersistenceUtil.java:98)
                  at org.infinispan.container.DefaultDataContainer.lambda$compute$194(DefaultDataContainer.java:324)
                  at org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8.compute(EquivalentConcurrentHashMapV8.java:1873)
                  at org.infinispan.container.DefaultDataContainer.compute(DefaultDataContainer.java:323)
                  at org.infinispan.persistence.PersistenceUtil.loadAndStoreInDataContainer(PersistenceUtil.java:91)
                  at org.infinispan.interceptors.CacheLoaderInterceptor.loadInContext(CacheLoaderInterceptor.java:367)
                  at org.infinispan.interceptors.CacheLoaderInterceptor.loadIfNeeded(CacheLoaderInterceptor.java:362)
                  at org.infinispan.interceptors.CacheLoaderInterceptor.visitDataCommand(CacheLoaderInterceptor.java:183)
                  at org.infinispan.interceptors.CacheLoaderInterceptor.visitPutKeyValueCommand(CacheLoaderInterceptor.java:132)
                  at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:74)
                  at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
                  at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:495)
                  at org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:560)
                  at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:198)
                  at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:74)
                  at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
                  at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitNonTxDataWriteCommand(AbstractLockingInterceptor.java:96)
                  at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitDataWriteCommand(NonTransactionalLockingInterceptor.java:40)
                  at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:62)
                  at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:74)
                  at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
                  at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:107)
                  at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:76)
                  at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:43)
                  at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:74)
                  at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
                  at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1672)
                  at org.infinispan.cache.impl.CacheImpl.putInternal(CacheImpl.java:1121)
                  at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1111)
                  at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1742)
                  at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:248)
                  at org.infinispan.cache.impl.AbstractDelegatingCache.put(AbstractDelegatingCache.java:291)
                  at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProvider$InfinispanKeycloakTransaction$CacheTask.execute(InfinispanUserSessionProvider.java:889)
                  at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProvider$InfinispanKeycloakTransaction.commit(InfinispanUserSessionProvider.java:785)
                  at org.keycloak.services.DefaultKeycloakTransactionManager.commit(DefaultKeycloakTransactionManager.java:136)
                  at org.keycloak.services.filters.KeycloakTransactionCommitter.filter(KeycloakTransactionCommitter.java:43)
                  at org.jboss.resteasy.core.ServerResponseWriter.executeFilters(ServerResponseWriter.java:121)
                  at org.jboss.resteasy.core.ServerResponseWriter.writeNomapResponse(ServerResponseWriter.java:48)
                  at org.jboss.resteasy.core.SynchronousDispatcher.writeResponse(SynchronousDispatcher.java:466)
                  ... 38 more
                Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: /10.0.34.242:11222
                  at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:78)
                  at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:37)
                  at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:16)
                  at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
                  at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:400)
                  ... 84 more
                Caused by: java.net.NoRouteToHostException: No route to host
                  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
                  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
                  at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:111)
                  at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:68)
                  ... 88 more

                 

                Any further insight would be very helpful.

                 

                Thanks,

                Vikrant

                • 5. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                  nadirx

                  Vikrant,

                  The Hot Rod server sends the topology using either the node address (the address/port on which the endpoint is bound) or the external address (as specified by external-host and external-port). The client then maps the topology to an array of InetSocketAddress, which are resolved eagerly. When the server goes down and comes back up, the client doesn't re-resolve the addresses. I've created a Jira: [ISPN-7955] Hot Rod client needs to re-resolve topology addresses after failure to connect - JBoss Issue Tracker
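The underlying JDK behaviour can be seen directly: a resolved InetSocketAddress performs its DNS lookup once, at construction, and keeps that IP, which is why a client that caches such addresses keeps dialing stale pod IPs. A small demo (the hostname in the second call is just an illustrative string):

```java
import java.net.InetSocketAddress;

public class ResolutionDemo {

    // The resolving constructor does the DNS lookup immediately and the
    // resulting object keeps that IP for its lifetime.
    public static boolean resolvedEagerly(String host, int port) {
        return !new InetSocketAddress(host, port).isUnresolved();
    }

    // The unresolved form keeps only the hostname; resolution is deferred
    // until something actually connects with it.
    public static boolean deferred(String host, int port) {
        return InetSocketAddress.createUnresolved(host, port).isUnresolved();
    }

    public static void main(String[] args) {
        System.out.println("resolved at construction: " + resolvedEagerly("localhost", 11222));
        System.out.println("still just a hostname: " + deferred("cache-am-0.node.poc.coi", 11222));
    }
}
```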

                  • 6. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                    vikrant02

                    Thanks Tristan for opening the Jira.

                    • 7. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                      nadirx

                      Hey Vikrant, which version of Keycloak is that?

                      • 8. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                        vikrant02

                        Tristan Tarrant wrote:

                         

                        Hey Vikrant, which version of Keycloak is that?

                        Using Keycloak 3.1.0.Final.

                        • 9. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                          nadirx

                          Can you please replace the infinispan-client-hotrod jar with this one? http://dataforte.net/software/infinispan/infinispan-hotrod-client-8.1.9-SNAPSHOT.jar

                           

                          The easiest way is to overwrite modules/system/layers/base/org/infinispan/client/hotrod/main/infinispan-client-hotrod-8.1.0.Final.jar

                           

                          Let me know if this works for you.

                          • 10. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                            vikrant02

                            Tristan Tarrant wrote:

                             

                            Can you please replace the infinispan-client-hotrod jar with this one ? http://dataforte.net/software/infinispan/infinispan-hotrod-client-8.1.9-SNAPSHOT.jar

                             

                            The easiest way is to overwrite modules/system/layers/base/org/infinispan/client/hotrod/main/infinispan-client-hotrod-8.1.0.Final.jar

                             

                            Let me know if this works for you.

                            Hi,

                            The jar doesn't exist at the given link; I'm getting a 404: "The requested URL /software/infinispan/infinispan-hotrod-client-8.1.9-SNAPSHOT.jar was not found on this server."

                            Can you please check the link?

                             

                            Thanks.

                            • 11. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                              nadirx
                              • 12. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                                vikrant02

                                Got it now, I'll give it a try and let you know the results.

                                • 13. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                                  vikrant02

                                  Hi Tristan,

                                   

                                  The issue is resolved with this client. The new Hot Rod client is able to re-resolve the Infinispan cluster addresses even after the whole cluster goes down and recovers.

                                   

                                  It would be helpful if you could provide the release date of the final version of this client, and when we can expect this change in a Keycloak GA release. This info will help us plan our release cycle.

                                   

                                  Really appreciate your help on this.

                                   

                                  Thanks.

                                  • 14. Re: Openshift - Infinispan uses pod ip address in cluster instead of hostname
                                    nadirx

                                    Great!

                                    I can release an 8.1.9.Final of Infinispan that you can use to replace the one bundled in your Keycloak server. As for the release cycle of other projects, that's out of my control.

                                    I see that Keycloak 3.2.0 will be based on WildFly 11, and therefore Infinispan 9.1 should be in there (we will release on July 14th).
