Wednesday, March 28, 2018

RadarGun and performance testing in OpenShift

RadarGun has a brand new feature that allows for performance testing of distributed caches in OpenShift. There's a helper script called "openshift" and a new OpenShift plugin; together they make it possible to test both library and client-server modes. In library mode the cache and the loader thread run in the same JVM process. In client-server mode the clients and servers run in separate processes (and, in this case, even in separate Pods).

I'll describe the process of running performance tests in OpenShift with Infinispan server and HotRod clients in client-server mode.

The process is as follows:

1.   Check out the RadarGun master branch from https://github.com/radargun/radargun

2.   Build RadarGun from the top-level directory using Maven. This will produce the target/distribution/RadarGun-3.0.0-SNAPSHOT directory, which will be used in the resulting Docker image.

    mvn clean install

3.   Export the DOCKER_REGISTRY address where RadarGun images will be pushed for consumption by OpenShift deployments.

    export DOCKER_REGISTRY=<OpenShift's docker registry address>
 
    The registry address can usually be obtained by opening the "About" link in the top-right corner of your OpenShift project's web UI.
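 
    For illustration, the exported value is a plain registry hostname; the address below is made up:
 
    export DOCKER_REGISTRY=registry.rh-us-east-1.openshift.com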
 
4.   Go to the openshift sub-directory and pass the login command, with the parameters specific to your OpenShift provider, to the helper script:

    ./openshift -L "oc login https://api.rh-us-xxx-1.openshift.com --token=l5gGjuKOPAWsdf6564sdfeOI7qhmQCGhJLvyj4oa4" login
 
    Note: The login command can usually be obtained by opening the "Command line tools" link in the top-right corner of your OpenShift project's web UI.

5.   Create your project. It is named "myproject" by default; the name can only be customized by editing the openshift script.

    ./openshift newproject
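 
    The helper is a thin wrapper around the standard oc client here; creating the project manually would presumably amount to something like:
 
    oc new-project myproject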

6.   Build the RadarGun image and push it to the remote Docker registry.

    ./openshift build
 
    This command will take the target/distribution/RadarGun-3.0.0-SNAPSHOT directory built in step 2 and create a Docker image based on it.
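 
    If you'd rather see roughly what the helper does, a manual Docker equivalent is sketched below; the image name and tag are illustrative assumptions, not necessarily what the script actually uses:
 
    # build an image from the RadarGun distribution, then tag and push it
    # to the registry exported in step 3 (names here are hypothetical)
    docker build -t radargun:3.0.0-SNAPSHOT .
    docker tag radargun:3.0.0-SNAPSHOT ${DOCKER_REGISTRY}/myproject/radargun:3.0.0-SNAPSHOT
    docker push ${DOCKER_REGISTRY}/myproject/radargun:3.0.0-SNAPSHOT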

7.   Create a sub-directory named "configs" containing the RadarGun benchmark, the config files required by individual RadarGun plugins, and the config file for the Infinispan server. This directory will be mounted in the master and slave Pods in OpenShift as /opt/radargun-configs. As you'll see, the Infinispan server template will also mount this directory under ${INFINISPAN_SERVER_HOME}/standalone/configuration/custom, so the configuration file you place in this directory can be used by the server. An example layout is shown below.
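 
    For example, using the file names that appear later in this post:
 
    configs/
    ├── benchmark-openshift-hotrod.xml      # RadarGun benchmark
    ├── infinispan-server-template.json     # OpenShift template for the Infinispan servers
    └── infinispan-server-cloud.xml         # Infinispan server configuration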

8.   Create the RadarGun deployment.

    ./openshift -cd "configs/" -cf "benchmark-openshift-hotrod.xml" -s 2 deploy
 
    The example uses a RadarGun config file named "benchmark-openshift-hotrod.xml". It needs to be placed in the configs/ sub-directory before running the command. The -s parameter determines the number of RadarGun slaves. Each slave will run in a separate Pod in OpenShift, and the RadarGun master will run in its own Pod. In summary, this command will spin up one Pod for the master and two Pods for the slaves.
 
    Here's a snippet from the RadarGun benchmark. Comments in the file will shed some light on it.
<benchmark xmlns="urn:radargun:benchmark:3.0">
    <clusters>
        <cluster>
            <group name="openshift" size="1"/>
            <group name="client" size="1"/>
        </cluster>
    </clusters>
    <configurations>
        <config name="OpenShift Client Server Test">
            <setup group="openshift" plugin="openshift" lazy-init="false">
                <!-- The openshift plugin will deploy the OpenShift template specified by the template-file parameter -->
                <openshift xmlns="urn:radargun:plugins:openshift:3.0"
                           template-file="/opt/radargun-configs/infinispan-server-template.json"
                           master-url="https://api.rh-us-east-1.openshift.com:443"
                           oauth-token="..."
                           namespace="myproject"
                           cleanup="false">
                    <!-- Parameters for the template. NUMBER_OF_INSTANCES specifies the number of Infinispan
                         server nodes. Individual parameters are specific to the given template. -->
                    <params>
                        APPLICATION_USER=user
                        APPLICATION_USER_PASSWORD=changeme
                        NUMBER_OF_INSTANCES=2
                    </params>
                    <!-- Wait for Pods to be ready before proceeding with the real test -->
                    <pods-selector>
                        deploymentConfig=infinispan-server-app
                    </pods-selector>
                    <!-- Resolve Pod and Service IP addresses and save them in the RadarGun service context.
                         They can later be referenced on other slaves as shown below. -->
                    <resolve-pod-addresses>
                        infinispan-server-app-0
                        infinispan-server-app-1
                    </resolve-pod-addresses>
                    <resolve-service-addresses>
                        infinispan-server-app-hotrod
                    </resolve-service-addresses>
                </openshift>
            </setup>
            <setup group="client" plugin="infinispan91" lazy-init="true">
                <hotrod xmlns="urn:radargun:plugins:infinispan91:3.0" cache="default">
                    <!-- We reference Pods from the other plugin through this convention: ${groupName.slaveIndex.podName}
                         We could reference an OpenShift Service in a similar way. -->
                    <servers>${openshift.0.infinispan-server-app-0}:11222;${openshift.0.infinispan-server-app-1}:11222</servers>
                </hotrod>
            </setup>
        </config>
    </configurations>
    ...
    <rg:scenario xmlns:rg="urn:radargun:benchmark:3.0"
                 xmlns="urn:radargun:stages:core:3.0"
                 xmlns:cache="urn:radargun:stages:cache:3.0"
                 xmlns:l="urn:radargun:stages:legacy:3.0">
        <!-- The openshift plugin group has only one member, but it spins up two Pods
             in OpenShift, so we need to set a specific expect-num-slaves -->
        <service-start groups="openshift" validate-cluster="true" expect-num-slaves="2"/>
        <!-- Once the openshift group is ready we start the client group -->
        <service-start groups="client" validate-cluster="false"/>
        .... test ....
    </rg:scenario>
</benchmark>
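 
    The ".... test ...." part above is elided. As an illustration, it could be a simple data load followed by a stress test driven by the client group. Here's a minimal sketch modeled on RadarGun's stock example benchmarks; the exact stage names and attribute values are assumptions that may need adjusting for your RadarGun version:
 
<!-- Load 10,000 entries into the cache from the client group -->
<cache:load groups="client" num-entries="10000"/>
<!-- Run mixed gets/puts for one minute with 10 threads per node -->
<l:basic-operations-test groups="client" test-name="stress-test"
                         duration="1m" num-threads-per-node="10">
    <l:key-selector>
        <l:concurrent-keys total-entries="10000"/>
    </l:key-selector>
</l:basic-operations-test>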
         
    A full config with an artificial test scenario is here.
 
    Overall, this benchmark expects two RadarGun slaves to be running. One of the slaves deploys the template, which results in two additional Pods for the Infinispan servers. The other slave hosts the HotRod clients, which send requests to the servers.

Here's a screenshot from the OpenShift GUI at this point:
 
[screenshot: OpenShift web console showing the RadarGun master, slave, and Infinispan server Pods]
 
    The OpenShift template for the Infinispan server referenced from the benchmark specifies several objects that will be created in OpenShift:

  • A HotRod service available on port 11222
  • A REST service available on port 8080
  • A JGroups Ping service, later referenced by the JGroups DNS_PING protocol (a sketch of this object follows below)
  • Two Infinispan server instances based on the jboss/infinispan-server Docker image
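 
The JGroups Ping service is typically a headless Service that exposes the Pods even before they become ready, so that discovery can work during startup. Here's a hedged sketch of what such an object might look like in the template; the field values are inferred from the names used elsewhere in this post, not copied from the actual template:
 
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "infinispan-server-app-ping",
        "annotations": {
            "service.alpha.kubernetes.io/tolerate-unready-endpoints": "true"
        }
    },
    "spec": {
        "clusterIP": "None",
        "ports": [
            { "name": "ping", "port": 8888 }
        ],
        "selector": {
            "deploymentConfig": "infinispan-server-app"
        }
    }
}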
Below is a short snippet that shows a trick to pass the correct address of the DNS server to JGroups. JGroups then uses this DNS server to find the address of the aforementioned JGroups Ping service, which forwards discovery messages from server nodes. This way Infinispan/JGroups gets to know the other cluster members.
   
....
"spec": {
    "containers": [
        {
            "image": "jboss/infinispan-server",
            "command": [ "/bin/bash" ],
            "args": [
                "-c",
                "export DNS=`cat /etc/resolv.conf | grep nameserver | awk '{print $2}'` && docker-entrypoint.sh custom/infinispan-server-cloud.xml -Djboss.default.jgroups.stack=dnsping -Djgroups.dns_address=${DNS}"
            ],
            ....
        }
    ]
}
....
   
     Full Infinispan Server template here.
   
   
     The following is a short snippet that shows the DNS_PING protocol configuration in the JGroups subsystem of the Infinispan servers. Other parts of the config file are not specific to OpenShift.
<subsystem xmlns="urn:infinispan:server:jgroups:9.2">
    <channels default="cluster">
        <channel name="cluster"/>
    </channels>
    <stacks default="${jboss.default.jgroups.stack:dnsping}">
        <stack name="dnsping">
            <transport type="TCP" socket-binding="jgroups-tcp">
                <property name="logical_addr_cache_expiration">360000</property>
            </transport>
            <protocol type="dns.DNS_PING">
                <property name="dns_address">${jgroups.dns_address:127.0.0.1}</property>
                <property name="dns_query">${jgroups.dns_query:infinispan-server-app-ping.myproject.svc.cluster.local}</property>
            </protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2">
                <property name="use_mcast_xmit">false</property>
            </protocol>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG3"/>
        </stack>
    </stacks>
</subsystem>
   
   
     Full Infinispan Server config available here.

9.   Alright, now that the tests have finished, we should be able to download the test results and logs.

    ./openshift results

And we're done! We can now analyze the results, which have been stored in the radargun-data sub-directory.
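 
The exact contents depend on the reporters configured in the benchmark, but with HTML/CSV reporters you can expect something along these lines (an illustrative sketch, not a verbatim listing):
 
    radargun-data/
    ├── report/      # HTML/CSV reports produced by the configured RadarGun reporters
    └── *.log        # logs from the master and slave Pods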