Wednesday, March 28, 2018

RadarGun and performance testing in OpenShift

RadarGun has a brand new feature that allows for performance testing of distributed caches in OpenShift. There's a helper script called "openshift" and a new OpenShift plugin. Together they make it possible to test both library and client-server mode. In library mode the cache and the loader thread run in the same JVM process. In client-server mode the clients and servers run in separate processes (and in this case even in separate Pods).

I'll describe the process of running performance tests in OpenShift with Infinispan server and HotRod clients in client-server mode.

The process is as follows:

1.   Check out the RadarGun master branch from https://github.com/radargun/radargun

2.   Build RadarGun from the top-level directory using Maven. This will produce the  target/distribution/RadarGun-3.0.0-SNAPSHOT directory that will be used in the resulting Docker image.

    mvn clean install

3.   Export the DOCKER_REGISTRY address where RadarGun images will be pushed for consumption by OpenShift deployments.

    export DOCKER_REGISTRY=<OpenShift's docker registry address>
 
    The registry address can usually be obtained by opening the "About" link in the top-right corner of your OpenShift project's web UI.
 
4.   Go to the openshift sub-directory. Pass the login command with the correct parameters specific to the OpenShift provider:

    ./openshift -L "oc login https://api.rh-us-xxx-1.openshift.com --token=l5gGjuKOPAWsdf6564sdfeOI7qhmQCGhJLvyj4oa4" login
 
    Note: The login command can usually be obtained by opening the "Command line tools" link in the top-right corner of your OpenShift project's web UI.

5.   Create your project! It is named "myproject" by default; customizing the name is only possible by changing the openshift script.

    ./openshift newproject

6.   Build the RadarGun image and push it to the remote Docker registry.

    ./openshift build
 
    This command will take the target/distribution/RadarGun-3.0.0-SNAPSHOT directory from previous steps and create a Docker image based on it.

7.   Create a sub-directory named "configs" containing the RadarGun benchmark, the config files required by individual RadarGun plugins, and the config file for the Infinispan server. This directory will be mounted in the master and slave Pods in OpenShift as /opt/radargun-configs. As you'll see, the Infinispan server template also mounts this directory under ${INFINISPAN_SERVER_HOME}/standalone/configuration/custom so that the configuration file you place there can be used by the server.

8.   Create the RadarGun deployment.

    ./openshift -cd "configs/" -cf "benchmark-openshift-hotrod.xml" -s 2 deploy
 
    The example uses a RadarGun config file named "benchmark-openshift-hotrod.xml". It needs to be placed in the configs/ sub-directory before running the command. The -s parameter determines the number of RadarGun slaves; each slave will run in a separate Pod in OpenShift, and the RadarGun master will run in its own Pod. In summary, this command will spin up one Pod for the master and two Pods for the slaves.
 
    Here's a snippet from the RadarGun benchmark. Comments in the file will shed some light on it.
         
    A full config with an artificial test scenario is here.
 
    Overall, this benchmark expects two RadarGun slaves to be running. One of the slaves will then deploy the template which results in two additional Pods for Infinispan servers. The other slave then holds the HotRod clients which will be sending requests to the server.

Here's a screenshot from the OpenShift GUI at this point:
 

 
    The OpenShift template for Infinispan server from the benchmark specifies several objects that will be created in OpenShift:

  • HotRod service available on port 11222
  • REST service available on port 8080
  • JGroups Ping service which is later referenced by the JGroups DNS_PING protocol
  • Two Infinispan server instances based on the jboss/infinispan-server Docker image
Below is a short snippet that shows a trick to pass the correct address of the DNS server to JGroups. JGroups then uses this DNS server to find the address of the aforementioned JGroups Ping service, which forwards discovery messages from server nodes. This way Infinispan/JGroups get to know the other cluster members.
   
   
     Full Infinispan Server template here.
   
   
     The following is a short snippet that shows the DNS_PING protocol configuration in JGroups subsystem of Infinispan servers. Other parts of the config file are not specific to OpenShift.
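The inlined snippet isn't reproduced in this copy; as a rough, hypothetical sketch (assuming JGroups' dns.DNS_PING protocol; the query name and expression are placeholders, not the author's actual values), such a protocol entry in the jgroups subsystem could look like:

```xml
<!-- hypothetical sketch: DNS_PING inside the jgroups subsystem -->
<protocol type="dns.DNS_PING">
    <!-- DNS name of the JGroups Ping service created by the template (placeholder) -->
    <property name="dns_query">infinispan-ping.myproject.svc.cluster.local</property>
    <!-- DNS server address passed in from the environment -->
    <property name="dns_address">${env.DNS_SERVER_IP}</property>
</protocol>
```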
   
   
     Full Infinispan Server config available here.

9.    Alright, now that the tests have finished, we should be able to download the test results and logs.

    ./openshift results

And we're done! We can now analyze the results, which have been stored in the radargun-data sub-directory.

Tuesday, February 13, 2018

Arquillian in-container tests with WildFly, Infinispan Server and remote OpenShift

The Arquillian Cube project provides a nice API and everything you need for running standalone tests against a local OpenShift instance (e.g. started by oc cluster up). However, running in-container tests against a remote OpenShift instance requires a few tricks. This is a tutorial on how to execute in-container tests on WildFly 11 running in a remote OpenShift instance. Throughout this tutorial we utilise our own instance of OpenShift, but we could also have used the OpenShift Online Starter edition, which is available for free.
  • First we must login to the remote OpenShift cluster as recommended by the provider. For example:
          oc login <openshift address> --token=<token>

Next, it is necessary to note the remote OpenShift's Docker registry address. We need it to push our images to the remote registry so that other services deployed in OpenShift can consume them. In our case it will be registry.online-int.openshift.com.
  •  Create an OpenShift project:
     oc new-project myproject
  • Switch to the project:
           oc project myproject

  • Build the image that you'd like to test in remote OpenShift. I've used the Infinispan server Docker image, which is extended in our project.
           sudo docker build -t infinispan-server-dev ./infinispan-server

    Our Dockerfile looks like this. It doesn't really modify any behaviour; it just shows that we can create our own image based on another image and push it to the remote OpenShift's Docker registry. Should you be interested in how to build a Docker image from a snapshot of Infinispan server, refer to an older article: Bleeding edge on Docker
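The Dockerfile itself was embedded in the original post; given the description above, a hypothetical minimal version (the base-image tag is an assumption) is essentially just:

```dockerfile
# Hypothetical sketch: build our own image on top of the public Infinispan server image.
# The tag is an assumption - use whichever version you want to test.
FROM jboss/infinispan-server:9.1.0.Final
# No behavioural changes; the point is only to have an image of our own
# to tag and push to the remote OpenShift registry.
```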
 

  • Build the remote container's image where tests will be run. In our case WildFly with a few modifications. 
           sudo docker build -t wildfly11-testrunner ./wildfly11-testrunner    

  The Dockerfile for a modified WildFly server looks like this:
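The embedded Dockerfile isn't reproduced in this copy; a hypothetical sketch matching the description below (paths follow the official jboss/wildfly image layout; the credentials and tag are made up) might be:

```dockerfile
# Hypothetical sketch of the wildfly11-testrunner image.
FROM jboss/wildfly:11.0.0.Final
# Management user that Arquillian will use to connect and deploy (example credentials).
RUN /opt/jboss/wildfly/bin/add-user.sh admin Admin#12345 --silent
# 8080 = application port, 9990 = management port used by Arquillian.
EXPOSE 8080 9990
# Bind the public and management interfaces to 0.0.0.0 so the ports can be forwarded.
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
```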

The Dockerfile above exposes port 9990; this is required so that Arquillian is able to perform a remote deployment via the management port. Furthermore, we expose the management interface on address 0.0.0.0 so that the port can be forwarded to localhost. Finally, we add a user who will later connect and log in via Arquillian.
  • Login to remote docker registry:
          sudo docker login -u $(oc whoami) -p $(oc whoami -t) registry.online-int.openshift.com

  • Tag your images and push them to the remote docker registry.
   The images must be pushed under this scheme: <registry_address>/<project_name>/<image_name>

          sudo docker tag infinispan-server-dev registry.online-int.openshift.com/myproject/infinispan-server-dev

    sudo docker push registry.online-int.openshift.com/myproject/infinispan-server-dev

    sudo docker tag wildfly11-testrunner registry.online-int.openshift.com/myproject/wildfly11-testrunner

    sudo docker push registry.online-int.openshift.com/myproject/wildfly11-testrunner

  • Enable image stream lookups for our images so that we can use them from our application.
          oc set image-lookup infinispan-server-dev
    oc set image-lookup wildfly11-testrunner
  • Create an application that will be using our infinispan-server-dev image:
         oc new-app infinispan-server-dev -e "APP_USER=user" -e "APP_PASS=changeme"
  • Run the functional tests from Maven:
          mvn -Dkubernetes.auth.token=kkF_eC3ZXcsddf5RUeZFXw4Ce9rC_uVVZt875mLsQ clean test -f functional-tests/pom.xml

    This Maven command specifies a property named kubernetes.auth.token. The token is used to create an OpenShiftClient instance within the WildFly container.
 
    The arquillian.xml file follows.
Notice the definitionsFile property which specifies the WildFly server's pod. We previously pushed the image and created the image stream that is now available in OpenShift. The wildfly11-testrunner consumes the WildFly image by pointing to the image stream called "wildfly11-testrunner:latest".
 
    Here's the json definition

The test class is less interesting. It basically needs a method annotated with @Deployment. This method tells Arquillian which classes/libraries to bundle within the archive which is deployed in a remote container (in this case, WildFly 11).
     Another interesting point is that Arquillian Cube's annotations such as @Named or @RouteURL don't work in-container as of now. The same applies to injecting OpenShiftClient via @ArquillianResource. So we have to create our own instances manually.

There you go. The full working example is available at https://github.com/mgencur/arquillian-remote-openshift-example . The project has a convenient Makefile which can do all the steps by simply invoking `make test-remote`. The readme file provides more info on how to run the tests.





Saturday, January 12, 2013

Infinispan memory overhead

Have you ever wondered how much Java heap memory is actually consumed when data is stored in Infinispan cache? Let's look at some numbers obtained through real measurement.

The strategy was the following:

1) Start Infinispan server in local mode (only one server instance, eviction disabled)
2) Keep calling full garbage collection (via JMX or directly via System.gc() when Infinispan is deployed as a library) until the difference in consumed memory by the running server gets under 100kB between two consecutive runs of GC
3) Load the cache with 100MB of data via respective client (or directly store in the cache when Infinispan is deployed as a library)
4) Keep calling the GC until the used memory is stabilised
5) Measure the difference between the final values of consumed memory after the first and second cycle of GC runs
6) Repeat steps 3) 4) 5) four times to get an average value (first iteration ignored)

The amount of consumed memory was obtained from a verbose GC log (related JVM options: -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log)
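For the library mode, the stabilisation loop from steps 2) and 4) can be sketched in plain Java. This is a simplification of the methodology above; the real measurement read the numbers from the verbose GC log rather than from Runtime:

```java
// Sketch of the GC-stabilisation loop: force full GCs until two consecutive
// readings of used heap memory differ by less than 100kB.
class GcStabilize {

    static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    static long stabilize() {
        long prev = Long.MAX_VALUE;
        long curr = usedMemory();
        while (Math.abs(prev - curr) >= 100 * 1024) {
            System.gc(); // request a full collection (step 2 of the methodology)
            prev = curr;
            curr = usedMemory();
        }
        return curr; // stabilised value of consumed memory
    }

    public static void main(String[] args) {
        System.out.println("Stabilised used memory: " + stabilize() + " bytes");
    }
}
```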

The test output looks like this: https://gist.github.com/4512589

The operating system (Ubuntu) as well as the JVM (Oracle JDK 1.6) were 64-bit. The Infinispan version was 5.2.0.Beta6. Keys were kept intentionally small (a 10-character String). Values are byte arrays. The target entry size is the sum of key size and value size.


Memory overhead of Infinispan accessed through clients


HotRod client

entry size -> overall memory
512B       -> 137144kB
1kB        -> 120184kB
10kB       -> 104145kB
1MB        -> 102424kB

So how much additional memory is consumed on top of each entry?

entry size/actual memory per entry -> overhead per entry
512B/686B                -> ~174B
1kB(1024B)/1202B         -> ~178B
10kB(10240B)/10414B      -> ~176B
1MB(1048576B)/1048821B   -> ~245B
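The per-entry figures above can be recomputed from the overall memory column, assuming the 100MB payload from the methodology means 100·1024·1024 bytes:

```java
// Recompute per-entry memory from the measured overall figures.
class OverheadCalc {
    // total payload loaded into the cache in step 3 of the methodology (assumed 100 MiB)
    static final long TOTAL_PAYLOAD = 100L * 1024 * 1024;

    // memory consumed per entry = overall memory / number of entries
    static double perEntryBytes(long entrySizeBytes, long overallKB) {
        long entries = TOTAL_PAYLOAD / entrySizeBytes;
        return overallKB * 1024.0 / entries;
    }

    public static void main(String[] args) {
        // 512B entries, 137144kB overall -> ~686B per entry, i.e. ~174B of overhead
        System.out.println(perEntryBytes(512, 137144) - 512);
        // 1kB entries, 120184kB overall -> ~1202B per entry, i.e. ~178B of overhead
        System.out.println(perEntryBytes(1024, 120184) - 1024);
    }
}
```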

MemCached client (text protocol, SpyMemcached client) 

entry size -> overall memory
512B       -> 139197kB
1kB        -> 120517kB
10kB       -> 104226kB
1MB        -> N/A (SpyMemcached allows max. 20kB per entry)

entry size/actual memory per entry -> overhead per entry
512B/696B               -> ~184B
1kB(1024B)/1205B        -> ~181B
10kB(10240B)/10422B     -> ~182B

REST client (Content-Type: application/octet-stream)

entry size -> overall memory
512B       -> 143998kB
1kB        -> 122909kB
10kB       -> 104466kB
1MB        -> 102412kB

entry size/actual memory per entry -> overhead per entry
512B/720B               -> ~208B
1kB(1024B)/1229B        -> ~205B
10kB(10240B)/10446B     -> ~206B
1MB(1048576B)/1048698B  -> ~123B

The memory overhead for individual entries seems to be more or less constant across different cache entry sizes.

Memory overhead of Infinispan deployed as a library


Infinispan was deployed to JBoss Application Server 7 using Arquillian.

entry size -> overall memory/overall with storeAsBinary
512B       -> 132736kB / 132733kB
1kB        -> 117568kB / 117568kB
10kB       -> 103953kB / 103950kB
1MB        -> 102414kB / 102415kB

There was almost no difference in overall consumed memory with the storeAsBinary attribute and without it.

entry size/actual memory per entry-> overhead per entry (w/o storeAsBinary)
512B/663B               -> ~151B
1kB(1024B)/1175B        -> ~151B
10kB(10240B)/10395B     -> ~155B
1MB(1048576B)/1048719B  -> ~143B

As you can see, the overhead per entry is roughly constant across different entry sizes, at ~151 bytes.

Conclusion


The memory overhead is slightly more than 150 bytes per entry when storing data into the cache locally. When accessing the cache via remote clients, the memory overhead is a little bit higher and ranges from ~170 to ~250 bytes, depending on remote client type and cache entry size. If we ignored the statistics for 1MB entries, which could be affected by a small number of entries (100) stored in the cache, the range would have been even narrower.

Wednesday, September 5, 2012

Infinispan Arquillian Container 1.0.0.CR1 released

Infinispan Arquillian Container is an extension to Arquillian that provides several ways of interacting with Infinispan, either with a standalone Infinispan server or just with Infinispan libraries. This extension can also communicate with the JBoss Data Grid server via JMX.

It was released as Maven artifacts in the JBoss Maven Repository, located at http://repository.jboss.org/nexus/content/groups/public-jboss/ . More information on how to set up and use the repo can be found at https://community.jboss.org/wiki/MavenGettingStarted-Users

What does this Arquillian extension offer to you? Let me describe all aspects of this extension one by one.

Developing tests with standalone Infinispan server


When testing, you might want to automatically start the Infinispan server before the test and stop it afterwards. This can be achieved by configuring infinispan-arquillian-container via Arquillian's configuration file. The following is a subset of attributes that can be specified and thus passed to the Infinispan server during startup: masterThreads, workerThreads, cacheConfig, jmxPort, ... The complete list can be found at bit.ly/R7j4d1 (all private fields).


NOTE: Examples are not a part of the release, only libraries are. In order to check out the examples provided with the project, one has to clone the project's repository: https://github.com/mgencur/infinispan-arquillian-container . Examples are located in the respective sub-directory.

The configuration file then looks similar to the following:

Complete example: bit.ly/RkrpEE

When we tell Arquillian to work with Infinispan server, we can inject RemoteInfinispanServer object into our test. Such an object provides various information about the running Infinispan server. For example, we can retrieve a hostname and HotRod port and use these pieces of information to create a RemoteCacheManager instance. Besides that users are allowed to retrieve information available via JMX from the server like cluster size, number of entries in the cache, number of cache hits and many more.

Complete example: http://bit.ly/OaCw8q

Vital dependencies required for the test to run are:

org.infinispan.arquillian.container:infinispan-arquillian-container-managed:jar:1.0.0.CR1:test
org.infinispan.arquillian.container:infinispan-arquillian-impl:jar:1.0.0.CR1:test

The Infinispan Arquillian extension can work with more than just the standalone Infinispan server, though.

Developing tests with JBoss Data Grid (JDG) server


This time, the properties in Arquillian's configuration file are different and correspond to properties of JBoss Application Server 7. The most important property is again the path to the server (jbossHome).

Are you interested in what the test looks like? It looks exactly the same as a test for the standalone Infinispan server, you just have a few more attributes available. The JDG server usually starts all three endpoints (HotRod, Memcached, REST) at the same time, while for the Infinispan server you have to specify which endpoint should be started. Furthermore, the Infinispan server does not have the REST endpoint available out-of-the-box.

As a result, you can call the following methods with JDG in one single test.

server1.getMemcachedEndpoint().getPort();
server1.getRESTEndpoint().getContextPath();
server1.getHotRodEndpoint().getPort();

The difference is, of course, in the dependencies. Instead of a handler for the standalone Infinispan server, one has to use a handler for JBoss AS 7. The dependencies then look like this:

org.jboss.as:jboss-as-arquillian-container-managed:jar:7.1.2.Final:test
org.infinispan.arquillian.container:infinispan-arquillian-impl:jar:1.0.0.CR1:test


Testing Infinispan libraries


Sometimes we don't want to use a standalone server. Sometimes we want to test just Infinispan in its basic form - Java libraries. Infinispan has been under development for years and during that time, lots of tests were developed. With tests come utility methods. Infinispan Arquillian Container enables you to leverage these utility methods and call them via an instance of DatagridManager. This instance can be easily injected into a test, no matter which test framework (TestNG, JUnit) you use.

DatagridManager class can be found at http://bit.ly/Q0a7ki

Can you see the advantage? No? Let me point out some useful methods available in the manager.

List<Cache<K, V>> createClusteredCaches(int numMembersInCluster, String cacheName, ConfigurationBuilder builder)

- creates a cluster of caches with the given name and a pre-defined configuration

void waitForClusterToForm()

- helps to wait until the cluster is up and running

Cache<A, B> cache(int index)

- retrieves a cache from the node with the given index

Cache<A, B> cache(int managerIndex, String cacheName)

- retrieves the cache with the given name from the node with index managerIndex

void killMember(int cacheIndex)

- kills the cluster member with index cacheIndex

AdvancedCache advancedCache(int i)

- retrieves an advanced cache from node i

Transaction tx(int i)

- retrieves a transaction from node i

TransactionManager tm(int i)

- retrieves a transaction manager from node i

...and much more.


The following test can be found among other examples in the Git repository.

Required dependencies:

org.infinispan:infinispan-core:jar:5.1.5.FINAL:test  -  users should replace this version with the one they want to test
org.infinispan.arquillian.container:infinispan-arquillian-impl:jar:1.0.0.CR1:test

Infinispan Arquillian Container was tested with Infinispan 5.1.5.FINAL and JDG 6.0.0.GA. Nevertheless, it should also work smoothly with other not-too-different versions. I'll be updating the project to work with newer versions of both Infinispan and JBoss Data Grid.
 

Thursday, June 21, 2012

Fine-grained replication in Infinispan


Sometimes we have a large object, possibly with lots of attributes or holding some binary data, and we would like to tell Infinispan to replicate only a certain part of the object across the cluster. Typically, we want to replicate only the part which we've just updated. This is where the DeltaAware and Delta interfaces come into play. By providing implementations of these interfaces we can define fine-grained replication. When we put some effort into such an enhancement, we would also like to speed up object marshalling and unmarshalling. Therefore, we're going to define our own externalizers - to avoid slow default Java serialization.

The following code snippets are gathered in a complete example at https://github.com/mgencur/infinispan-examples/tree/master/partial-state-transfer . This project contains a readme file with instructions on how to build and run the example. It is based on the clustered-cache quickstart in Infinispan.

Implementing DeltaAware interface


So let's look at our main object. For the purpose of this exercise, I defined a Bicycle class that consists of many components like frame, fork, rearShock, etc. This object is stored in a cache as a value under a certain (unimportant) key. It might happen in our scenario that we update only certain components of the bike, and in that case we want to replicate just those component changes.

Important methods here are (description taken from javadocs):

commit() - Indicates that all deltas collected to date have been extracted (via a
                 call to delta()) and can be discarded. Often used as an optimization if
                 the delta isn't really needed, but the cleaning and resetting of
                 internal state is desirable.

delta() - Extracts changes made to implementations, in an efficient format that
             can easily and cheaply be serialized and deserialized.  This method will
             only be called once for each changeset as it is assumed that any
             implementation's internal changelog is wiped and reset after generating
             and submitting the delta to the caller.
         
We also need to define setters and getters for our members. Setter methods are, among other things, responsible for registering changes in the changelog that will later be used to reconstruct the object's state. The externalizer for this class is only needed when cache stores are used. For the sake of simplicity, I don't mention it here.



Implementing Delta interface


The actual object that will be replicated across the cluster is the implementation of the Delta interface. Let's look at the class. First, we need a field that will hold the changes - changeLog. Second, we need to define a merge() method. This method must be implemented so that Infinispan knows how to merge an existing object with incoming changes. The parameter of this method represents an object that is already stored in the cache; incoming changes will be applied to this object. We're using reflection here to apply the changes to the actual object, but it is not necessary. We could easily call setter methods. The advantage of using reflection is that we can set those fields in a loop.

Another piece is the registerComponentChange() method. This is called by an object of the Bicycle class - to record changes to that object. The name of this method is not important.
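Since the original embedded sources aren't reproduced in this copy, here is a condensed, self-contained sketch of the Bicycle/BicycleDelta pair. The Delta and DeltaAware interfaces are reproduced locally in minimal form so the snippet compiles on its own; in the real example you implement org.infinispan.atomic.DeltaAware and org.infinispan.atomic.Delta instead.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal local stand-ins for org.infinispan.atomic.Delta/DeltaAware,
// included only to keep the sketch self-contained.
interface DeltaAware { Delta delta(); void commit(); }
interface Delta { DeltaAware merge(DeltaAware other); }

// Condensed sketch of the Bicycle class described above.
class Bicycle implements DeltaAware {
    String frame;
    String fork;
    private BicycleDelta changes = new BicycleDelta();

    // setters register each change in the changelog for later replication
    void setFrame(String frame) {
        this.frame = frame;
        changes.registerComponentChange("frame", frame);
    }

    @Override
    public Delta delta() {
        BicycleDelta d = changes;
        changes = new BicycleDelta(); // the changelog is wiped after extraction
        return d;
    }

    @Override
    public void commit() {
        changes = new BicycleDelta(); // deltas collected so far can be discarded
    }

    public static void main(String[] args) {
        Bicycle b = new Bicycle();
        b.setFrame("carbon");
        Delta d = b.delta(); // only this object would travel over the wire
        Bicycle merged = (Bicycle) d.merge(new Bicycle());
        System.out.println(merged.frame); // prints "carbon"
    }
}

// Condensed sketch of the BicycleDelta class described above.
class BicycleDelta implements Delta {
    final Map<String, Object> changeLog = new HashMap<>();

    void registerComponentChange(String component, Object value) {
        changeLog.put(component, value);
    }

    @Override
    public DeltaAware merge(DeltaAware other) {
        // 'other' is the object already stored in the cache (may be null)
        Bicycle bike = (other instanceof Bicycle) ? (Bicycle) other : new Bicycle();
        for (Map.Entry<String, Object> e : changeLog.entrySet()) {
            try { // reflection lets us apply all changed fields in one loop
                java.lang.reflect.Field f = Bicycle.class.getDeclaredField(e.getKey());
                f.setAccessible(true);
                f.set(bike, e.getValue());
            } catch (ReflectiveOperationException ex) {
                throw new IllegalStateException(ex);
            }
        }
        return bike;
    }
}
```

In the real classes every setter registers its change the same way, and the externalizer marshalls only the changeLog map.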

Defining our own externalizer


Alright, so what remains is the externalizer definition for the Delta implementation. We implement the AdvancedExternalizer interface and say that only the changeLog object should be marshalled and unmarshalled when transferring data over the wire. An (almost) complete implementation of the Delta interface is the following.


Tell Infinispan about the extra externalizer


We also need to configure Infinispan to use our special externalizer to marshall/unmarshall our objects. We can do it e.g. programmatically by calling .addAdvancedExternalizer() on the serialization configuration builder.

You can see we're also configuring transactions here. This is not necessary, though. We just aim to provide a richer example, and removing the transactional behavior is truly easy.

And here comes the "usage" part. Enclose cache calls by a transaction, retrieve a bicycle object from the cache, do some changes and commit them.

That's it. What is eventually transferred over the wire is just the changeLog object. The actual bicycle object is reconstructed from incoming updates.

If all of this seems too complex to you, I have good news. Infinispan provides one implementation of the DeltaAware interface which is called AtomicHashMap (package org.infinispan.atomic). If this map is used as a value in key/value pairs stored in the cache, only the puts/gets/removes performed on this map during a transaction are replicated to other nodes. Classes like Bicycle and BicycleDelta are not needed then. Even registering the externalizer for AtomicHashMap is not needed; this is done automatically during registration of internal externalizers. However, one might want a class emulating a real-world object, not just a map. That's the case when your own implementations of the DeltaAware and Delta interfaces are the only way.

Wednesday, May 23, 2012

How to configure Infinispan with transactions, backed by relational DB on JBoss AS 7 vs. Tomcat 7


Migrating projects from one container to another is often problematic. Not so with Infinispan. This article is about configuring Infinispan, using a Transaction Manager for demarcating transaction boundaries, while keeping the data both in memory and in a relational database - stored via a JDBC cache store. I'll demonstrate all the features on code snippets.


A complete application is located at https://github.com/mgencur/infinispan-examples and is called carmart-tx-jdbc. It's a web application based on JSF 2, Seam 3 and Infinispan 5.1.4.FINAL; it is fully working, tested with JBoss Application Server 7.1.1.Final and Tomcat 7.0.27. There is one prerequisite, though: it needs an installed and working MySQL database in your system. The database name should be carmartdb, accessible by a user with carmart/carmart username/password.
 

First, look at what we need to configure for JBoss Application Server 7. 

Configuring transactions and JDBC cache store on JBoss AS 7


Infinispan will be configured via the new fluent API using builders, hence the call to the .build() method at the end. We need to configure aspects related to transactions and cache loaders. The configuration API for cache loaders is likely going to change in the not-so-distant future. It should become fluent and more intuitive, generally easier to use than the current one.

I purposely do not show the XML configuration. Configuration examples can be found at https://github.com/infinispan/infinispan/blob/master/core/src/main/resources/config-samples/sample.xml . In order to configure transactions and cache loaders, look for the tags called <transaction> and <loaders> and modify that sample file according to the configuration below. Tag names and attribute names are very similar for both the XML and Java configuration. If that is not enough, there is always a schema in the Infinispan distribution.

The configuration of Infinispan is as follows: 


GlobalConfiguration glob = new GlobalConfigurationBuilder()
   .nonClusteredDefault().build();

Configuration loc = new ConfigurationBuilder()
   .clustering().cacheMode(CacheMode.LOCAL)
   .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
   .autoCommit(false)
   .transactionManagerLookup(new GenericTransactionManagerLookup())
   .loaders().passivation(false).preload(false).shared(false)
   .addCacheLoader().cacheLoader(new JdbcStringBasedCacheStore())
   .fetchPersistentState(false).purgeOnStartup(true)
   .addProperty("stringsTableNamePrefix", "carmart_table")
   .addProperty("idColumnName", "ID_COLUMN")
   .addProperty("dataColumnName", "DATA_COLUMN")
   .addProperty("timestampColumnName", "TIMESTAMP_COLUMN")
   //for a different DB, use a different type
   .addProperty("timestampColumnType", "BIGINT")
   .addProperty("connectionFactoryClass",
        "org.infinispan.loaders.jdbc.connectionfactory.ManagedConnectionFactory")
   .addProperty("connectionUrl", "jdbc:mysql://localhost:3306/carmartdb")
   .addProperty("driverClass", "com.mysql.jdbc.Driver")
   //for a different DB, use a different type
   .addProperty("idColumnType", "VARCHAR(255)")
   //for a different DB, use a different type
   .addProperty("dataColumnType", "VARBINARY(1000)")
   .addProperty("dropTableOnExit", "false")
   .addProperty("createTableOnStart", "true")
   .addProperty("databaseType", "MYSQL")
   .addProperty("datasourceJndiLocation", "java:jboss/datasources/ExampleDS")
   .build();

BasicCacheContainer manager = new DefaultCacheManager(glob, loc, true);
.... = manager.getCache()
 
The transactionManagerLookup and connectionFactoryClass settings are the parts that differ in other containers/configurations, as you'll see in a minute. The code above implies that we need to specify the proper TransactionManagerLookup implementation, which is, in this case, GenericTransactionManagerLookup. We also need to say: "Hey, I wanna use ManagedConnectionFactory as a connectionFactoryClass". OK, here we go.
I should, as well, explain how to configure a datasource properly, right? In JBoss AS 7, this is configured as a subsystem in $JBOSS_HOME/standalone/configuration/standalone.xml:


<subsystem xmlns="urn:jboss:domain:datasources:1.0">
    <datasources>
        <datasource jndi-name="java:jboss/datasources/ExampleDS"
                    pool-name="ExampleDS" enabled="true"
                    use-java-context="true">
            <connection-url>jdbc:mysql://localhost:3306/carmartdb</connection-url>
            <driver>mysql-connector-java-5.1.17-bin.jar</driver>
            <security>
                <user-name>carmart</user-name>
                <password>carmart</password>
            </security>
        </datasource>
    </datasources> 

</subsystem>

The usage of transactions is very simple as we can obtain a transaction object by injection.


@Inject 
private javax.transaction.UserTransaction utx;
     
try {
    utx.begin();
    cache.put(...) //store/load keys to/from the cache
    utx.commit(); 
} catch (Exception e) {
    if (utx != null) {
        try {
            utx.rollback();
        } catch (Exception e1) {}
    } 
}
   
Sources: https://github.com/mgencur/infinispan-examples/blob/master/carmart-tx-jdbc/src/jbossas/java/org/infinispan/examples/carmart/session/CarManager.java


Quite easy, isn't it... if you know how to do it. The only problem is that it does not work (at least not completely) :-) If you deploy the app, you find out that when storing a key-value pair in the cache, an exception is thrown. This exception indicates that the operation with the DB (and JDBC cache store) failed. The exception says:

Error while processing a commit in a two-phase transaction:
org.infinispan.CacheException:
org.infinispan.loaders.CacheLoaderException:
This might be related to https://jira.jboss.org/browse/ISPN-604

A complete stack trace looks similar to https://gist.github.com/2777348 .
There's still an open issue for this in JIRA (ISPN-604) and it is being worked on.

Configuring transactions and JDBC cache store on JBoss AS 7 - c3p0


But how do we cope with this inconvenience for now? By not using a managed datasource but rather a third-party library called c3p0 (JDBC3 Connection and Statement Pooling, more information at http://www.mchange.com/projects/c3p0/index.html). Infinispan allows you to use this library for connecting to the database. If you want to use it, you need to choose a different connectionFactoryClass which is, in this case, PooledConnectionFactory.

Infinispan configuration looks like this:


GlobalConfiguration glob = new GlobalConfigurationBuilder()
    .nonClusteredDefault().build();

Configuration loc = new ConfigurationBuilder()
    .clustering().cacheMode(CacheMode.LOCAL)
    .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
    .autoCommit(false)
    .transactionManagerLookup(new GenericTransactionManagerLookup())
    .loaders().passivation(false).preload(false).shared(false)
    .addCacheLoader().cacheLoader(new JdbcStringBasedCacheStore())
    .fetchPersistentState(false).purgeOnStartup(true)
    .addProperty("stringsTableNamePrefix", "carmart_table")
    .addProperty("idColumnName", "ID_COLUMN")
    .addProperty("dataColumnName", "DATA_COLUMN")
    .addProperty("timestampColumnName", "TIMESTAMP_COLUMN")
    .addProperty("timestampColumnType", "BIGINT")
    .addProperty("connectionFactoryClass",
        "org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory")
    .addProperty("connectionUrl", "jdbc:mysql://localhost:3306/carmartdb")
    //we do not have a managed datasource -> specify credentials here
    .addProperty("userName", "carmart")
    .addProperty("password", "carmart")
    .addProperty("driverClass", "com.mysql.jdbc.Driver")
    .addProperty("idColumnType", "VARCHAR(255)")
    .addProperty("dataColumnType", "VARBINARY(1000)")
    .addProperty("dropTableOnExit", "false")
    .addProperty("createTableOnStart", "true")
    .addProperty("databaseType", "MYSQL")
    //.addProperty("datasourceJndiLocation", "java:jboss/datasources/ExampleDS")
    //oh, yes, we do not use JNDI now
    .build();
Transactions are accessible in the same way as in the previous use case. Now let's look at the configuration for the Tomcat servlet container.
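The transactional access pattern itself can be sketched without a container. The real application obtains a javax.transaction.TransactionManager from the cache (shown for Tomcat later in this post); here FakeTxCache is a hypothetical in-memory stand-in, not Infinispan's API, used only so the begin/commit/rollback flow can be run standalone:

```java
import java.util.HashMap;
import java.util.Map;

public class TxPatternDemo {

    // Minimal stand-in for a transactional cache: writes are buffered
    // between begin() and commit(), and thrown away on rollback().
    // This is NOT the Infinispan API, just an illustration of the flow.
    static class FakeTxCache {
        private final Map<String, String> committed = new HashMap<>();
        private Map<String, String> buffer;

        void begin() { buffer = new HashMap<>(); }
        void put(String k, String v) { buffer.put(k, v); }
        void commit() { committed.putAll(buffer); buffer = null; }
        void rollback() { buffer = null; }
        String get(String k) { return committed.get(k); }
    }

    // The same control flow the carmart application uses around cache.put():
    // begin a transaction, write, commit, and roll back on any failure.
    static void store(FakeTxCache cache, String key, String value, boolean fail) {
        cache.begin();
        try {
            cache.put(key, value);
            if (fail) {
                throw new IllegalStateException("simulated store failure");
            }
            cache.commit();
        } catch (Exception e) {
            cache.rollback();
        }
    }

    public static void main(String[] args) {
        FakeTxCache cache = new FakeTxCache();
        store(cache, "car1", "Ford Focus", false);   // commits
        store(cache, "car2", "Skoda Octavia", true); // rolls back
        System.out.println(cache.get("car1")); // Ford Focus
        System.out.println(cache.get("car2")); // null
    }
}
```

The only committed entry is the one whose transaction completed; the failed write leaves no trace, which is exactly what the JDBC cache store guarantees once the transaction setup works.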

Configuring transactions and JDBC cache store on Tomcat 7


Tomcat does not ship with a Transaction Manager, so we have to bundle one with the application. For the purpose of this exercise, we choose JBoss Transactions (http://www.jboss.org/jbosstm). See the dependencies at the end.

The cache manager and cache configuration look like this:


GlobalConfiguration glob = new GlobalConfigurationBuilder()
    .nonClusteredDefault().build();

Configuration loc = new ConfigurationBuilder()
    .clustering().cacheMode(CacheMode.LOCAL)
    .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
    .autoCommit(false)
    .transactionManagerLookup(new JBossStandaloneJTAManagerLookup())
    .loaders().passivation(false).preload(false).shared(false)
    .addCacheLoader().cacheLoader(new JdbcStringBasedCacheStore())
    .fetchPersistentState(false).purgeOnStartup(true)
    .addProperty("stringsTableNamePrefix", "carmart_table")
    .addProperty("idColumnName", "ID_COLUMN")
    .addProperty("dataColumnName", "DATA_COLUMN")
    .addProperty("timestampColumnName", "TIMESTAMP_COLUMN")
    .addProperty("timestampColumnType", "BIGINT")
    .addProperty("connectionFactoryClass",
        "org.infinispan.loaders.jdbc.connectionfactory.ManagedConnectionFactory")
    .addProperty("connectionUrl", "jdbc:mysql://localhost:3306/carmartdb")
    .addProperty("userName", "carmart")
    .addProperty("driverClass", "com.mysql.jdbc.Driver")
    .addProperty("idColumnType", "VARCHAR(255)")
    .addProperty("dataColumnType", "VARBINARY(1000)")
    .addProperty("dropTableOnExit", "false")
    .addProperty("createTableOnStart", "true")
    .addProperty("databaseType", "MYSQL")
    .addProperty("datasourceJndiLocation", "java:comp/env/jdbc/ExampleDB")
    .build();

       
For Tomcat, we need to specify a different transactionManagerLookup implementation and datasourceJndiLocation. Tomcat simply places objects under slightly different JNDI locations. The datasource is defined in the context.xml file, which has to be on the classpath. This file might look like this:


<Context>
    <Resource name="jdbc/ExampleDB"
        auth="Container"
        type="javax.sql.DataSource"
        factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
        maxActive="100"
        minIdle="30"
        maxWait="10000"
        jmxEnabled="true"
        username="carmart"
        password="carmart"
        driverClassName="com.mysql.jdbc.Driver"
        url="jdbc:mysql://localhost:3306/carmartdb"/>
</Context>

How do we get the transaction manager in the application, then? Let's obtain it directly from the cache.


Infinispan knows how to find the manager and we need to know how to obtain it from Infinispan.


javax.transaction.TransactionManager tm =
    ((CacheImpl) myCache).getAdvancedCache().getTransactionManager();
try {
    tm.begin();
    myCache.put(...);
    tm.commit();
} catch (Exception e) {
    if (tm != null) {
        try {
            tm.rollback();
        } catch (Exception e1) {}
    }
}
   
Sources: https://github.com/mgencur/infinispan-examples/blob/master/carmart-tx-jdbc/src/tomcat/java/org/infinispan/examples/carmart/session/CarManager.java

The transaction manager provides the standard methods for transactions, such as begin(), commit() and rollback().


Now it's time for the dependencies.


So... which dependencies do we always need when using Infinispan with JDBC cache stores and transactions? These are infinispan-core, infinispan-cachestore-jdbc and javax.transaction.jta. The Maven scope of the jta dependency is different for JBoss AS and Tomcat.

Common dependencies for JBossAS and Tomcat


    <dependency>
        <groupId>org.infinispan</groupId>
        <artifactId>infinispan-core</artifactId>
        <version>5.1.4.FINAL</version>
    </dependency>
   
    <dependency>
        <groupId>org.infinispan</groupId>
        <artifactId>infinispan-cachestore-jdbc</artifactId>
        <version>5.1.4.FINAL</version>
    </dependency>


Of course, our application needs a few more dependencies, but these are not directly related to Infinispan, so let's ignore them in this article. JBoss AS 7 provides a managed datasource that is accessible from Infinispan. The only specific dependency (related to transactions or Infinispan) is JTA.


Dependencies specific to JBossAS - using managed Datasource (managed by the server)


    <dependency>
        <groupId>javax.transaction</groupId>
        <artifactId>jta</artifactId>
        <version>1.1</version>
        <scope>provided</scope>
    </dependency>


Dependencies specific to JBossAS - using c3p0


    <dependency>
        <groupId>javax.transaction</groupId>
        <artifactId>jta</artifactId>
        <version>1.1</version>
        <scope>provided</scope>
    </dependency>
   
    <dependency>
        <groupId>c3p0</groupId>
        <artifactId>c3p0</artifactId>
        <version>0.9.1.2</version>
    </dependency>
   
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.17</version>
    </dependency>
   
Yes, you also need to bundle the MySQL connector. On the other hand, for the Tomcat use case and for JBoss AS with a managed datasource, this jar file needs to be deployed to the server separately. For Tomcat, simply copy the jar file to $TOMCAT_HOME/lib. For JBoss AS 7, copy it into $JBOSS_HOME/standalone/deployments.


Dependencies specific to Tomcat - using JBoss Transactions


    <dependency>
        <groupId>javax.transaction</groupId>
        <artifactId>jta</artifactId>
        <version>1.1</version>
        <scope>runtime</scope>
    </dependency>
   
    <dependency>
        <groupId>org.jboss.jbossts</groupId>
        <artifactId>jbossjta</artifactId>
        <version>4.16.4.Final</version>
    </dependency>


That's it. I hope you've found this article helpful. Any feedback is welcome, especially the positive kind :-) If you find any problem with the application, feel free to comment here or participate in the Infinispan forums (http://www.jboss.org/infinispan/forums).