Wednesday, March 28, 2018

RadarGun and performance testing in OpenShift

RadarGun has a brand new feature that allows for performance testing of distributed caches in OpenShift. There's a helper script called "openshift" and a new OpenShift plugin; together they make it possible to test both library and client-server mode. In library mode, the cache and the loader thread run in the same JVM process. In client-server mode, the clients and servers run in separate processes (and, in this case, even in separate Pods).

I'll describe the process of running performance tests in OpenShift with Infinispan server and HotRod clients in client-server mode.

The process is as follows:

1.   Check out the RadarGun master branch from https://github.com/radargun/radargun

2.   Build RadarGun from the top-level directory using Maven. This will produce the target/distribution/RadarGun-3.0.0-SNAPSHOT directory, which will be used in the resulting Docker image.

    mvn clean install

3.   Export the DOCKER_REGISTRY address where RadarGun images will be pushed for consumption by OpenShift deployments.

    export DOCKER_REGISTRY=<OpenShift's docker registry address>
 
    The registry address can usually be obtained by opening the "About" link in the top-right corner of your OpenShift project's web UI.
 
4.   Go to the openshift sub-directory. Pass the login command, with the correct parameters specific to your OpenShift provider, to the script:

    ./openshift -L "oc login https://api.rh-us-xxx-1.openshift.com --token=l5gGjuKOPAWsdf6564sdfeOI7qhmQCGhJLvyj4oa4" login
 
    Note: The login command can usually be obtained by opening the "Command line tools" link in the top-right corner of your OpenShift project's web UI.

5.   Create your project! It is named "myproject" by default; the name can only be customized by changing the openshift script.

    ./openshift newproject

6.   Build the RadarGun image and push it to the remote Docker registry.

    ./openshift build
 
    This command takes the target/distribution/RadarGun-3.0.0-SNAPSHOT directory from the previous steps and creates a Docker image based on it.

7.   Create a sub-directory named "configs" containing the RadarGun benchmark, the config files required by individual RadarGun plugins, and the config file for Infinispan server. This directory will be mounted in the master and slave Pods in OpenShift as /opt/radargun-configs. As you'll see, the Infinispan server template also mounts this directory under ${INFINISPAN_SERVER_HOME}/standalone/configuration/custom so that the configuration file you place there can be used by the server.
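    For illustration, the directory layout might look like this (the server config file name here is just an example):

        configs/
        ├── benchmark-openshift-hotrod.xml    # RadarGun benchmark used in the next step
        └── clustered-openshift.xml           # Infinispan server config (example name)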

8.   Create the RadarGun deployment.

    ./openshift -cd "configs/" -cf "benchmark-openshift-hotrod.xml" -s 2 deploy
 
    The example uses a RadarGun config file named "benchmark-openshift-hotrod.xml". It needs to be placed in the configs/ sub-directory before running the command. The -s parameter determines the number of RadarGun slaves. Each slave runs in a separate Pod in OpenShift, and the RadarGun master runs in its own Pod. In summary, this command will spin up one Pod for the master and two Pods for the slaves.
 
    Here's a snippet from the RadarGun benchmark. Comments in the file will shed some light on it.
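    A minimal sketch of such a benchmark's structure, assuming two slaves (the plugin-specific setup and the concrete stages are elided; names here are illustrative):

        <benchmark xmlns="urn:radargun:benchmark:3.0">
            <!-- Address and port where the RadarGun master listens; slaves connect to it -->
            <master bindAddress="${master.address:127.0.0.1}" port="${master.port:2103}"/>
            <!-- Two slaves: one deploys the Infinispan server template, the other runs HotRod clients -->
            <clusters>
                <cluster size="2"/>
            </clusters>
            <configurations>
                <config name="Infinispan server in OpenShift">
                    <!-- setup elements for the openshift and infinispan plugins would go here -->
                </config>
            </configurations>
            <scenario>
                <!-- stages such as service-start, load-data and basic-operations-test would go here -->
            </scenario>
            <reports>
                <reporter type="csv"/>
            </reports>
        </benchmark>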
         
    A full config with an artificial test scenario is here.
 
    Overall, this benchmark expects two RadarGun slaves to be running. One of the slaves deploys the template, which results in two additional Pods for the Infinispan servers. The other slave holds the HotRod clients, which send requests to the servers.

Here's a screenshot from the OpenShift GUI at this point:
    The OpenShift template for Infinispan server from the benchmark specifies several objects that will be created in OpenShift:

  • HotRod service available on port 11222
  • REST service available on port 8080
  • JGroups Ping service, which is later referenced by the JGroups DNS_PING protocol
  • Two Infinispan server instances based on the jboss/infinispan-server Docker image
Below is a short snippet that shows a trick to pass the correct address of the DNS server to JGroups. JGroups then uses this DNS server to find the address of the aforementioned JGroups Ping service, which forwards discovery messages from the server nodes. This way, Infinispan/JGroups learn about the other cluster members.
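    The idea can be sketched like this, assuming the template overrides the container command (the property name jgroups.dns_address, the config file name and the paths are assumptions, not taken from the actual template):

        # Read the cluster's DNS server from /etc/resolv.conf and hand it to JGroups
        command:
          - sh
          - -c
          - DNS_IP=$(grep -m1 nameserver /etc/resolv.conf | awk '{print $2}') &&
            exec /opt/jboss/infinispan-server/bin/standalone.sh
            -c clustered-openshift.xml -Djgroups.dns_address=$DNS_IP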
   
   
     Full Infinispan Server template here.
   
   
     The following is a short snippet that shows the DNS_PING protocol configuration in the JGroups subsystem of the Infinispan servers. Other parts of the config file are not specific to OpenShift.
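     A sketch of what this might look like inside the server's JGroups stack (the ping service DNS name is made up; jgroups.dns_address matches the property passed in by the template above):

        <stack name="dns-ping">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <!-- DNS_PING resolves the ping service name via the DNS server from -Djgroups.dns_address -->
            <protocol type="dns.DNS_PING">
                <property name="dns_query">datagrid-ping.myproject.svc.cluster.local</property>
                <property name="dns_address">${jgroups.dns_address:}</property>
            </protocol>
            <!-- the remaining protocols in the stack are not OpenShift-specific -->
        </stack>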
   
   
     Full Infinispan Server config available here.

9.   Alright, now that the tests have finished, we can download the test results and logs.

    ./openshift results

And we're done! We can now analyze the results, which are stored in the radargun-data sub-directory.

Tuesday, February 13, 2018

Arquillian in-container tests with WildFly, Infinispan Server and remote OpenShift

The Arquillian Cube project provides a nice API and everything you need for running standalone tests against a local OpenShift instance (e.g. one started by oc cluster up). However, running in-container tests against a remote OpenShift instance requires a few tricks. This is a tutorial on how to execute in-container tests on WildFly 11 running in a remote OpenShift instance. Throughout this tutorial we use our own instance of OpenShift; however, we could also have used the OpenShift Online Starter edition, which is available for free.
  • First, we must log in to the remote OpenShift cluster as recommended by the provider. For example:
          oc login <openshift address> --token=<token>

Next, it is necessary to note the remote OpenShift's Docker registry address. We will push our images to this registry so that other services deployed in OpenShift can consume them. In our case it will be registry.online-int.openshift.com.
  •  Create an OpenShift project:
     oc new-project myproject
  • Switch to the project:
           oc project myproject

  • Build the image that you'd like to test in the remote OpenShift. I've used the Infinispan server Docker image, which is extended in our project.
           sudo docker build -t infinispan-server-dev ./infinispan-server

    Our Dockerfile looks like this. It doesn't really modify any behaviour; it just shows that we can create our own image based on another image and push it to the remote OpenShift's Docker registry. Should you be interested in how to build a Docker image from a snapshot of Infinispan server, refer to an older article: Bleeding edge on Docker.
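    Since it doesn't change any behaviour, a sketch can be as small as this (the base image tag is an assumption):

          # Extend the official Infinispan server image; project-specific
          # customizations (extra configs, libraries) would be added here.
          FROM jboss/infinispan-server:9.1.0.Final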
 

  • Build the image for the remote container where the tests will run; in our case, WildFly with a few modifications.
           sudo docker build -t wildfly11-testrunner ./wildfly11-testrunner    

  The Dockerfile for a modified WildFly server looks like this:
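    A sketch matching the description below (the base image tag and the user credentials are placeholders):

          FROM jboss/wildfly:11.0.0.Final
          # Add a management user that Arquillian will use to connect and log in
          RUN /opt/jboss/wildfly/bin/add-user.sh admin changeme --silent
          # Management port used by Arquillian for remote deployments
          EXPOSE 9990
          # Bind the public and management interfaces to 0.0.0.0 so the ports can be forwarded
          CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]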

The Dockerfile above exposes port 9990, which is required so that Arquillian can perform a remote deployment via the management port. Furthermore, we expose the management interface on address 0.0.0.0 so that the port can be forwarded to localhost. Finally, we add a user who will later connect and log in via Arquillian.
  • Log in to the remote Docker registry:
          sudo docker login -u $(oc whoami) -p $(oc whoami -t) registry.online-int.openshift.com

  • Tag your images and push them to the remote Docker registry.
    The images must be pushed under this schema: <registry_address>/<project_name>/<image_name>

          sudo docker tag infinispan-server-dev registry.online-int.openshift.com/myproject/infinispan-server-dev

          sudo docker push registry.online-int.openshift.com/myproject/infinispan-server-dev

          sudo docker tag wildfly11-testrunner registry.online-int.openshift.com/myproject/wildfly11-testrunner

          sudo docker push registry.online-int.openshift.com/myproject/wildfly11-testrunner

  • Enable image stream lookups for our images so that we can use them from our application.
          oc set image-lookup infinispan-server-dev
          oc set image-lookup wildfly11-testrunner
  • Create an application that will use our infinispan-server-dev image:
         oc new-app infinispan-server-dev -e "APP_USER=user" -e "APP_PASS=changeme"
  • Run the functional tests from Maven:
          mvn -Dkubernetes.auth.token=kkF_eC3ZXcsddf5RUeZFXw4Ce9rC_uVVZt875mLsQ clean test -f functional-tests/pom.xml

    This Maven command specifies a property named kubernetes.auth.token. The token is used to create an OpenShiftClient instance within the WildFly container.
 
    The arquillian.xml file follows.
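    A minimal sketch; apart from definitionsFile, which is discussed below, the property names and values are typical Arquillian Cube/WildFly settings rather than the original file's contents:

          <arquillian xmlns="http://jboss.org/schema/arquillian"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <extension qualifier="openshift">
                  <!-- JSON definition of the WildFly testrunner pod, shown below -->
                  <property name="definitionsFile">src/test/resources/wildfly-testrunner.json</property>
              </extension>
              <container qualifier="wildfly11-testrunner" default="true">
                  <configuration>
                      <!-- Credentials of the management user added in the Dockerfile -->
                      <property name="username">admin</property>
                      <property name="password">changeme</property>
                      <property name="managementPort">9990</property>
                  </configuration>
              </container>
          </arquillian>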
Notice the definitionsFile property, which specifies the WildFly server's pod. We previously pushed the image and created the image stream that is now available in OpenShift, and the wildfly11-testrunner pod consumes the WildFly image by pointing to the image stream called "wildfly11-testrunner:latest".
 
    Here's the JSON definition:
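    Along these lines (the ports are assumptions; the image reference resolves through the image stream thanks to the image-lookup setting above):

          {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
              "name": "wildfly11-testrunner"
            },
            "spec": {
              "containers": [
                {
                  "name": "wildfly11-testrunner",
                  "image": "wildfly11-testrunner:latest",
                  "ports": [
                    { "containerPort": 8080 },
                    { "containerPort": 9990 }
                  ]
                }
              ]
            }
          }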

The test class is less interesting. It basically needs a method annotated with @Deployment. This method tells Arquillian which classes/libraries to bundle within the archive that is deployed in the remote container (in this case, WildFly 11).
     Another interesting point is that Arquillian Cube's annotations such as @Named or @RouteURL don't work in-container as of now, and the same applies to injecting an OpenShiftClient via @ArquillianResource. So we have to create our own instances manually, as sketched below.
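     A hypothetical skeleton of such a test (class and method names are made up; the fabric8 ConfigBuilder/DefaultOpenShiftClient calls are the standard way to build a client from a token):

          import io.fabric8.kubernetes.client.Config;
          import io.fabric8.kubernetes.client.ConfigBuilder;
          import io.fabric8.openshift.client.DefaultOpenShiftClient;
          import io.fabric8.openshift.client.OpenShiftClient;
          import org.jboss.arquillian.container.test.api.Deployment;
          import org.jboss.arquillian.junit.Arquillian;
          import org.jboss.shrinkwrap.api.ShrinkWrap;
          import org.jboss.shrinkwrap.api.spec.WebArchive;
          import org.junit.Test;
          import org.junit.runner.RunWith;

          @RunWith(Arquillian.class)
          public class OpenShiftIT {

              // Tells Arquillian which classes/libraries to bundle in the archive
              // that is deployed to the remote WildFly 11 container.
              @Deployment
              public static WebArchive deployment() {
                  return ShrinkWrap.create(WebArchive.class, "test.war")
                          .addPackage(OpenShiftIT.class.getPackage());
              }

              @Test
              public void testAgainstOpenShift() {
                  // In-container injection of OpenShiftClient doesn't work, so we
                  // build the client manually from the kubernetes.auth.token property.
                  Config config = new ConfigBuilder()
                          .withOauthToken(System.getProperty("kubernetes.auth.token"))
                          .build();
                  try (OpenShiftClient client = new DefaultOpenShiftClient(config)) {
                      // ... interact with OpenShift and the deployed services here ...
                  }
              }
          }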

There you go. The full working example is available at https://github.com/mgencur/arquillian-remote-openshift-example . The project has a convenient Makefile which can perform all the steps by simply invoking `make test-remote`. The readme file provides more info on how to run the tests.