Code coverage is like compiling

Several months ago, I began a concerted effort at work to get our code coverage numbers up. This was prompted by an upper management target of 85% code coverage by a certain date, which I initially saw as unrealistic within any timeframe. I hadn't done much work with code coverage, but I did know its primary drawback: most tools only show that code was executed, not that all paths through the code were executed (branch coverage). Any simple metric has the potential to be abused by naive management, since it's easier to measure code coverage than to measure whether the code is actually being tested correctly and fulfills the desired use cases (assuming those even exist!).
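
To make the line-versus-branch distinction concrete, here's a minimal made-up sketch: a single test executes every line of this method, so line coverage reads 100%, yet only one of its two branches is ever exercised.

// Hypothetical method: one test gives 100% line coverage but only 50% branch coverage.
int clamp(int value, int max) {
    if (value > max) value = max   // the "value <= max" branch is never taken below
    return value
}

assert clamp(10, 5) == 5   // every line runs, but the untested branch could still be wrong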

Several months later, now at 85%, I have a more positive and specific view:

Coverage is to testing as compiling is to coding.

That is, it doesn't ensure that your testing is complete, or correct, or anything of the sort, but it does make sure that nothing is completely wrong. If code is never even executed, you have zero assurance that it is correct, just as code that won't compile gives you zero assurance of correctness. That doesn't mean the code is correct, just that it has a non-zero probability of being correct. The realities of software development mean you can only ever increase the probability of correctness, and this is one more tool for doing that.

I found large blocks of code that weren't being run at all, for various reasons. There were a few methods that were intended to override superclass methods (and weren't annotated with @Override because the code was originally 1.4 based), but had subtle name or signature typos. Some code had subtle logic errors in branches that prevented one branch from ever executing.
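
Here's a hypothetical sketch of that first failure mode (class and method names invented): the subclass method has a typo in its name, so it never overrides anything and never executes. An @Override annotation would have made it a compile error; failing that, a coverage report at least shows the method as never run.

class Validator {
    boolean isValid(String input) { return input != null }
}

class StrictValidator extends Validator {
    // Intended to override isValid, but the name is subtly wrong, so the
    // superclass version is silently used everywhere and this never runs.
    boolean isValidated(String input) { return input != null && !input.isEmpty() }
}

assert new StrictValidator().isValid("")   // passes: the strict check was never invoked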

Code coverage metrics were particularly useful in my situation, where I had inherited a large amount of complex code from another developer who hadn't provided the most thorough set of tests. I could easily see what code wasn't being executed, and then devise test cases to cover it. One has to be very careful when doing this, because you only get one chance to test that section of code well: once the first test covers it, you no longer have the obvious warning of uncovered code. This presents the developer with a moral hazard: write the simple test that gets the coverage numbers up, or write exhaustive tests that correctly exercise the code and contribute to genuine quality. You only get one chance to do the right thing.

Aggregated code coverage with Emma and Groovy

This post describes a script I wrote to take XML Emma output and produce multi-package aggregated statistics. One of the drawbacks of Emma's HTML reporting is that it does not allow you to get aggregated coverage information across packages. For instance, if I have packages "com.foobar.sdk.interface" and "com.foobar.sdk.impl", there's no automated way to get coverage information for all packages starting with "com.foobar.sdk". Most larger projects are logically grouped like this, so having these "superpackage" groupings is useful. My previous method of getting them was to cut-and-paste the HTML from a browser into a text file, run a Ruby script on it to convert it to CSV, import the CSV into Excel, and add the necessary formulas to the sheet to get the measurements I wanted. Having the numbers simply printed out at the end of the Emma run is much nicer.

First, the setup of Emma. Inside the <report> tag, I put the following output descriptions:

<html outfile="${emma.coverage.dir}/foobar/coverage.html"
    columns="name,class,method,block,line"
     sort="+name,+class,+method,+block,+line" depth="method"/>
<xml outfile="${emma.coverage.dir}/foobar_coverage.xml"
    columns="name,class,method,block,line"
     sort="+name,+class,+method,+block,+line" depth="method"/>

These create both the full Emma HTML report and an XML document with the same results. After calling the report target that includes this, I then use the <groovy> Ant task to call a script which parses the Emma XML and produces some output.

<echo message="------------EMMA Summary----------------" />
<groovy src="${test.scripts.dir}/EmmaParser.groovy">
  <arg value="${emma.coverage.dir}/foobar_coverage.xml" />
  <arg value="com.foobar.sdk:SDK,com.foobar.tools:SDK,com.foobar.engine:ENGINE,com.thirdparty:ENGINE"/>
</groovy>
<echo message="----------------------------------------" />

The second argument is a comma-delimited list of Java package prefix and "superpackage" name pairs (separated by a colon) for which we want aggregate coverage. In the above example, all packages that start with "com.foobar.sdk" and "com.foobar.tools" are grouped into the "SDK" aggregate, and "com.foobar.engine" and "com.thirdparty" are grouped into "ENGINE". For each superpackage, the total number of lines, number of lines covered, and percentage covered are printed.
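
For reference, the first few lines of the script below split that argument into two maps; with the example argument above, they would come out roughly like this (a sketch of the intermediate state, not script output):

// prefix -> superpackage
pkgmap = ['com.foobar.sdk':'SDK', 'com.foobar.tools':'SDK',
          'com.foobar.engine':'ENGINE', 'com.thirdparty':'ENGINE']
// distinct superpackage names
spkgs  = ['SDK':'', 'ENGINE':'']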

Below is the groovy script which does the EMMA XML work. A few comments on it:

  • The Groovy XmlParser class was a joy to use and vastly simplified accessing the XML document.
  • The regex was the hardest part to get right. I most commonly write regexes in vim, which requires different escaping than Groovy. The expression needs both capture groups and literal parentheses: in Groovy regexes you escape the literal parens and leave the capture parens unescaped, which is the reverse of vim. This tripped me up again on my next Groovy project, where I reversed the meaning after looking back at this regex. (See the sketch after this list.)
  • Closures are such a nice feature to have when parsing with XmlParser like this. Their use in iteration and assignment of local variables makes the code much shorter and easier to read and understand.
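
Here's a small sketch of the escaping difference (the sample value string is my own, based on the format the regex below expects): in Groovy, bare parentheses capture and literal parentheses are escaped, while vim flips that around.

def value = '81%  (85.2/105)'                        // hypothetical Emma-style value
def m = (value =~ /\d+%\s+\((\d+\.*\d*)\/(\d+)\)/)   // \( \) are literal, ( ) capture
assert m[0][1] == '85.2' && m[0][2] == '105'

// The roughly equivalent vim pattern reverses the escaping:
//   \d\+%\s\+(\(\d\+\.*\d*\)\/\(\d\+\))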

The script:

def filename = args[0]
def config = args[1]

def pkgmap = [:]
def spkgs = [:]
def cmap = [:]
def tmap = [:]

// split the config string by comma, then by colon
config.split(',').each { entry ->
  (entry =~ /(.+):(.+)/).each { all, pkg, spkg ->
      pkgmap[pkg] = spkg
      spkgs[spkg] = ''
  }
}

// init the coverage counters (covered, total) for each superpackage
pkgmap.each { k, v -> cmap[v] = 0; tmap[v] = 0; }

// parse the report
def report = new XmlParser().parse(new File(filename))

// get the stats for the "line" coverage for each package
// packages are non-bundling, so pkg.foo does not contain stats for pkg.foo.bar
report.data[0].all[0].'package'.each() { pkg ->
  pkgmap.each { pkgname, sname ->
      if ((pkg.'@name').startsWith(pkgname) ) {

          (pkg.coverage[3].'@value' =~ /\d+%\s+\((\d+\.*\d*)\/(\d+)\)/ ).each {
              all, cov, total ->
                  cmap[sname] += Float.valueOf(cov)
                  tmap[sname] += Integer.valueOf(total)
          }
      }
  }
}

// print summary stats for each super-package
spkgs.each { sname,x ->
  if (tmap[sname] > 0) println "," + sname + "," +
     String.format("%.2f",cmap[sname]*100/tmap[sname]) + "%," +
        cmap[sname] + "," + tmap[sname]
  else println "," + sname + ",0%,0,0"
}

Testing inside a servlet with Ant, TestNG, and Groovy

In a previous post, I talked about how I run my TestNG unit/integration tests from within an EJB. The EJB implemented the old 2.0 standard, which meant that maintaining all of the configuration metadata was a continual effort sink. I recently moved it to simply use a servlet, which I should have done from the beginning. Since I could no longer use the EJB client code, I had to also write an HTTP client to invoke the servlet and process the results. This post describes the Java code invoking the TestNG tests from within a servlet, the Groovy HTTP client that invokes this servlet, and the Groovy Ant task configuration and code to invoke the client script.

First, I created the test servlet class, which extends HttpServlet and implements the following doGet method (this is nearly the same as the previously posted code in the EJB):

    public void doGet(HttpServletRequest request, HttpServletResponse response)
                                        throws ServletException, IOException {

        // Sets the content type of the response
        response.setContentType("text/html");
        ServletOutputStream out = response.getOutputStream();

        try {

            TestNG tng = new TestNG();

            tng.setTestClasses( new Class[] {
                OneTestCase.class,
                TwoTestCase.class
            } );

            final StringBuilder sb = new StringBuilder();

            tng.addListener(
                new TestListenerAdapter() {
                    @Override public void onTestFailure(ITestResult tr) { 
                        sb.append("F{" + tr.getTestClass().getName() + "." + 
                                          tr.getMethod().getMethodName() +"}"); }
                    @Override public void onTestSkipped(ITestResult tr) { 
                        sb.append("S{" + tr.getTestClass().getName() + "." +
                                          tr.getMethod().getMethodName() +"}"); }
                    @Override public void onTestSuccess(ITestResult tr) { 
                        sb.append("."); }
                 }
            );

            tng.setGroups("srg");
            tng.run();

            out.println(sb.toString());

        } catch (Exception e){
            StringBuilder ex = new StringBuilder();
            for (StackTraceElement ste : e.getStackTrace()){
                ex.append(ste.toString() + "\n");
            }
            out.println("Aack! " + e.toString() +"  " +  ex);
        } finally {
            out.close();
        }
    }

I find the Groovy Ant task to be an excellent way of enhancing the power of Ant. Because of its XML structure, there are many things that are difficult or impossible to do in an Ant task; even a simple if/then is awkward. This makes complex Ant code not only difficult to write but, more importantly, difficult to read. Groovy lets you easily break out of the Ant jail and write like a Real Programmer. The concision of the Groovy language, closures, and built-in data structure syntax lets you express in a couple of lines within the Ant file something that would take a page of Ant code or several lines of Java. Most importantly when working with many other developers, Groovy is enough like Java that most programmers can infer the meaning of most Groovy code even if they have no experience with the language.
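
As a tiny illustration of the kind of thing that's painful in raw Ant, this is the sort of snippet you might drop between <groovy> tags (the property names here are invented): a plain if/else over Ant properties, written back via the 'properties' map described later in this post.

// sits between <groovy> and </groovy> in the build file
// set one property based on another, with no ant-contrib <if> gymnastics
if (properties['build.mode'] == 'release') {
    properties['optimize.flag'] = 'on'
} else {
    properties['optimize.flag'] = 'off'
}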

I couldn't find any instructions with "cut and paste this text into your Ant build file to use the Groovy task", so here's what I use:

<property name="ant.home.dir" value="" />
<property name="junit.jar" value="" />
<property name="tools.dir" value="" />

<property name="groovy.dir" value="${tools.dir}/groovy" />
<property name="groovy.lib.dir" value="${groovy.dir}/lib" />
    
<path id="groovy.lib">
    <pathelement location="${ant.home.dir}/lib/ant-1.7.0.jar" />
    <pathelement location="${ant.home.dir}/lib/ant-junit-1.7.0.jar"/>
    <pathelement location="${ant.home.dir}/lib/ant-launcher.jar"/>
    <pathelement location="${junit.jar}"/>
    <pathelement location="${groovy.lib.dir}/antlr-2.7.6.jar"/>
    <pathelement location="${groovy.lib.dir}/asm-2.2.jar"/>
    <pathelement location="${groovy.lib.dir}/asm-analysis-2.2.jar"/>
    <pathelement location="${groovy.lib.dir}/asm-tree-2.2.jar"/>
    <pathelement location="${groovy.lib.dir}/asm-util-2.2.jar"/>
    <pathelement location="${groovy.lib.dir}/bsf-2.4.0.jar"/>
    <pathelement location="${groovy.lib.dir}/commons-cli-1.0.jar"/>
    <pathelement location="${groovy.lib.dir}/commons-logging-1.1.jar"/>
    <pathelement location="${groovy.lib.dir}/groovy-1.5.4.jar"/>
    <pathelement location="${groovy.lib.dir}/jline-0.9.93.jar"/>
    <pathelement location="${groovy.lib.dir}/jsp-api-2.0.jar"/>
    <pathelement location="${groovy.lib.dir}/mockobjects-core-0.09.jar"/>
    <pathelement location="${groovy.lib.dir}/mx4j-3.0.2.jar"/>
    <pathelement location="${groovy.lib.dir}/servlet-api-2.4.jar"/>
    <pathelement location="${groovy.lib.dir}/xpp3_min-1.1.3.4.O.jar"/>
    <pathelement location="${groovy.lib.dir}/xstream-1.2.2.jar"/>
</path>

<property name="groovy.lib" refid="groovy.lib" />

<taskdef name="groovy" classname="org.codehaus.groovy.ant.Groovy"
         classpathref="groovy.lib"/>

I think you can exclude the junit jar if you're not using it, and if you don't have one handy, there's a version included with the Groovy distro.

The Groovy Ant task allows you to either call a separate Groovy file or embed Groovy code directly between the <groovy> tags. Below we'll call a separate Groovy script, and in a future post I'll do some embedded Groovy (specifically, querying the server to set the http.port property dynamically).


  <property name="http.port" value="" />
  <property name="app.name" value="" />

  <target name="run-tests" >

      <groovy src="${test.scripts.dir}/servletclient.groovy" >
          <arg value="http://localhost:${http.port}/${app.name}/"/>
      </groovy>

      <echo message="result running test: ${test.result}"/>

      <if>
          <equals arg1="${test.result}" arg2="true" />
          <then>
              <touch file="${test.results.dir}/t.foo.servlet.suc" />
              <delete file="${test.results.dir}/t.foo.servlet.dif" />
          </then>
          <else>
              <touch file="${test.results.dir}/t.foo.servlet.dif" />
              <delete file="${test.results.dir}/t.foo.servlet.suc" />
          </else>
      </if>
  </target>

The <arg> tag values are accessed in the Groovy script via the array "args", with the indexing beginning at 0. Values are returned to the Ant context by setting entries in the map named 'properties'. All properties in the Ant context are passed into the Groovy script in this variable, and new entries are automatically marshalled back to Ant (since Ant properties are immutable, you can change the values of existing keys inside the script, but those changes won't be retained).

The Groovy HTTP client I use to access the servlet is below. It prints the content retrieved from the servlet to standard out (so it appears in the Ant log) and sets the value of the Ant property test.result via the 'properties' map. If the output contains at least one '.' (meaning a test ran) and no instances of 'F' or 'S' (indicating a failed or skipped test), the result is 'true'. I'm certain this code could be shorter, but it's based on the longer DEWD client from Tony Landis, and I only stripped it down as far as I needed for my purposes.

// Run tests within a J2EE container by calling a servlet.
// print the retrieved output to standard out

uri = new URI(args[0])
method='GET'
socket = new Socket(uri.getHost(), uri.getPort())

contentLen = 0
writedata = "GET " + uri.getPath() + " HTTP/1.0\r\n" +
"Host: " + uri.getHost() + "\r\n" +
"Content-Type: application/x-www-form-urlencoded\r\n" +
"Content-Length: " + contentLen + "\r\n" +
"Connection: close\r\n\r\n"
writer = new PrintWriter(socket.getOutputStream(), true)
writer.write(writedata)
writer.flush()

// read and throwaway header
reader = new DataInputStream(socket.getInputStream())
c = null
while (null != ((c = reader.readLine()))) if(c=='') break

// read content
def row
content = ''
while (null != ((row = reader.readLine()))) content += row + "\n"  
// Response from the servlet should be a string of periods (one per passing test),
// with F{...} or S{...} entries for any failed or skipped tests.
properties["test.result"] = (! (content =~ /[FS]/)) && (content =~ /\./)

println content

reader.close()
writer.close()
socket.close()

Ant, TestNG, and Groovy — such a powerful trio!

TestNG, part 2

Since migrating from JUnit 3, TestNG has been wonderful. Groups are the killer feature of TestNG that really make it worth the migration cost. When I want to test a single method, I no longer need to manually comment or uncomment method names in the suite() method; I can just add a new group and run it from the command line (well, from Ant; see below). Annotation-based test methods are much nicer, and have a much lower risk of accidentally being left out of the suite.
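
For example, a single test method can carry several group names at once (class, method, and assertion below are made up; the syntax here is Groovy, but the Java form is nearly identical):

import org.testng.annotations.Test
import static org.testng.Assert.assertEquals

class BazTestCase {
    @Test(groups = ['srg', 'phil'])     // picked up by either "srg" or "phil" runs
    void testSomething() {
        assertEquals(1 + 1, 2)
    }
}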

Just a few caveats before I show the Ant/TestNG setup we're currently using.

  • Merely moving to a new framework exposed several unintended test dependencies: tests failed because they now ran after different tests. With the suite() method they always ran in the same order, so these dependencies were never found. None of ours were important, but there could have been ones that masked bugs.
  • Only void methods with names starting with "test" get annotated with @Test by the converter. This may seem obvious, but a developer auxiliary to our main team had written a few tests that weren't prefixed properly yet still ran because they were listed in the suite() method. The JUnitConverter class should probably try to parse the suite() method to find problems like this (maybe I'll do a patch for it); see the sketch after this list.
  • Only void-returning methods annotated with @Test are run. Having a test method return a value doesn't generally make sense (it didn't in this case, either), but it can be confusing when your test isn't running even though it's annotated.
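
To make that second point concrete, here's a hypothetical JUnit 3 test case (names invented) of the kind that bit us: the mis-prefixed method runs fine under JUnit because suite() adds it explicitly, but it gets no @Test annotation after conversion and silently stops running.

import junit.framework.Test
import junit.framework.TestCase
import junit.framework.TestSuite

class FooTestCase extends TestCase {
    FooTestCase(String name) { super(name) }

    void testParser()   { assert true }   // converter annotates it: name starts with "test"
    void checkEncoder() { assert true }   // runs under JUnit 3 only because suite() adds it;
                                          // JUnitConverter won't annotate it, so TestNG drops it

    static Test suite() {
        def suite = new TestSuite()
        suite.addTest(new FooTestCase('testParser'))
        suite.addTest(new FooTestCase('checkEncoder'))
        return suite
    }
}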

Okay, so now on to the good stuff: our Ant/TestNG configuration. In these examples, I've replaced the name of my actual project with "foobar", and the prefix "sdk" indicates that it's the SDK part of the project.

First, in our common.xml file that is imported by all of our individual build.xml files, I added these lines to define the location of the jar, add it to the common classpath, define the Ant task, and define the directory where the reports go (${twork} is set to a temporary directory for the build):

<property name="testng.jar" value="${test.src.dir}/lib/testng-5.7-jdk15.jar" /
>

<path id="foobar.common.class.path">
  ....
  <pathelement location="${test.src.dir}/lib/testng-5.7-jdk15.jar"/>
  ....
</path>

    <taskdef name="testng" classpathref="foobar.common.class.path"
          classname="org.testng.TestNGAntTask" />

    <property name="testng.report.dir" value="${twork}/testng-report" />

Then in the build.xml for the specific tests, I added these targets. To clean, I added a target to delete old results:

    <target name="clean">
        <delete failonerror="false" quiet="false" includeemptydirs="true">
            <fileset dir="${testng.report.dir}" includes="**/*"/>
        </delete>
    </target>

Then I added a couple of targets to produce either a single "suc" (success) or "dif" (failure) file based on the results of the run (these files are used by the continuous build system to report the results of running the tests on a new build).

UPDATE: See this post for an updated version of the following targets.

    <target name="process-results" depends="copy-failure, copy-success" />

    <target name="copy-failure" if="has.failure">
        <copy file="${testng.report.dir}/testng-failed.xml"
                tofile="${T_WORK}/foobar.sdk.${infix}.dif"
                failonerror="false" overwrite="true" />
    </target>

    <target name="copy-success" if="has.success">
        <copy file="${testng.report.dir}/testng-results.xml"
                tofile="${T_WORK}/foobar.sdk.${infix}.suc"
                failonerror="false" overwrite="true" />
    </target>

Then, we have the target that actually calls the testng task. This target is never called directly, only through helper targets. Notice that it adds two listeners: one that gives us intermediate results on the command line as the tests are running, and one that gives us a summary report at the end. After running, it then calls the previously mentioned targets. One thing I missed at first was that the test class files must be included in both the classpath (so the JVM can find them) and the classfileset element (so that TestNG knows which classes to use for the tests).

    <target name="run-testng" depends="" >
          <property name="excluded-groups" value=""/>
        <testng groups="${groups}" outputDir="${testng.report.dir}"
listeners="foobar.test.sdk.TestListener,foobar.test.sdk.SDKReporter" 
excludedgroups="${excluded-groups}" >
            <jvmarg value="-ea"/> <!-- enable assertions -->
            <classpath>
                <pathelement path="${twork.sdk}"/>
                <pathelement path="${foobar.common.class.path}"/>
            </classpath>
            <classfileset dir="${twork.sdk}" includes="foobar/test/**/${t
estcase}.class"/>
        </testng>
        <condition property="has.failure" value="true" >
            <available file="${testng.report.dir}/testng-failed.xml" />
        </condition>

        <condition property="has.success" value="true" >
            <available file="${testng.report.dir}/testng-results.xml" />
        </condition>

        <antcall target="process-results" >
          <param name="infix" value="${groups}"/>
        </antcall>

    </target>

The next thing was to set up a few helper targets to call the run-testng target. The first is "run-testcase", which lets you run a group from only a specific TestCase class, for a feel similar to JUnit. It's run with the command line 'ant run-testcase -Dgroups=srg -Dtestcase=BazTestCase'. Note that the group "broken" is excluded by default. If you actually want to run the broken group, you call it with 'ant run-testcase -Dgroups=broken -Dtestcase=BazTestCase -Dexcluded-groups=""'; because command-line properties win, the later redefinition of excluded-groups is ignored. We also add the most-used command-line target, rung. This is called with "ant rung -Dgroups=srg", or more commonly when I'm using it, "ant rung -Dgroups=phil". I can just add my name to the groups for a test case and easily run only that one while debugging code or writing new tests. This alone was worth the migration to TestNG; it's liberating when writing tests.


    <target name="run-testcase" depends="setup">
        <property name="groups" value=""/>
        <property name="exclude" value="broken"/>
        <antcall target="run-testng">
          <param name="testcase" value="${name}"/>
          <param name="groups" value="${groups}"/>
          <param name="excluded-groups" value="${exclude}"/>
        </antcall>
    </target>


    <target name="rung" depends="setup">
        <antcall target="run-testng">
          <param name="testcase" value="*"/>
          <param name="groups" value="${groups}"/>
        </antcall>
    </target>

This is the listener that reports the intermediate results from running each test method. The front of the test class name is chopped off so most names fit in an 80-character column, and it also prints a running count of the tests (to gauge how far along the run is) and the run time of each test (to help spot any high-runtime/low-value tests out there). The one thing I would like to add but haven't looked at yet is printing the actual assertion failure rather than just the stack trace of where it occurred; I currently just look in the HTML report at the end for this.


package foobar.test.sdk;

import org.testng.*;

public class TestListener extends TestListenerAdapter {
    private int m_count = 0;

    private String name(ITestResult tr){
        return tr.getTestClass().getName().replaceAll("foobar\\.test\\.","") +
            "." + tr.getMethod().getMethodName();
    }

    @Override
    public void onTestFailure(ITestResult tr) {
        log("[FAILED " + (m_count++) + "] => " + name(tr) );
    }

    @Override
    public void onTestSkipped(ITestResult tr) {
        log("[SKIPPED " + (m_count++) + "] => " + name(tr) );
    }

    @Override
    public void onTestSuccess(ITestResult tr) {
        log("[" + (m_count++) + "] => "+ name(tr) + " " + (tr.getEndMillis()-t
r.getStartMillis()) + "ms");
    }

    private void log(String string) {
        System.out.println(string);
    }
}

This is the reporter that I use for the summary report at the end of running all the tests:

package foobar.test.sdk;

import org.testng.*;
import java.util.*;

import static java.util.Arrays.asList;

public class SDKReporter implements IReporter {

    private String name(ITestResult tr){
        return tr.getTestClass().getName() + "." + tr.getMethod().getMethodNam
e();
    }

    public void generateReport(List<org.testng.xml.XmlSuite> xmlsuites ,List<o
rg.testng.ISuite> suites,String c) {

        for (ISuite suite : suites){
            Map<String,ISuiteResult> results  = suite.getResults();
            for(Map.Entry<String,ISuiteResult> entry : results.entrySet()){
                ITestContext itc =   entry.getValue().getTestContext();
                for (ITestResult tr : itc.getFailedConfigurations().getAllResults()){
                    log ("Failed Config: " + name(tr));
                    log (asList(tr.getThrowable().getStackTrace()));
                }

                for (ITestResult tr : itc.getFailedTests().getAllResults()){
                    log ("Failed Test: " + name(tr));
                    log (asList(tr.getThrowable().getStackTrace()));
                }

                for (ITestResult tr : itc.getSkippedConfigurations().getAllResults()){
                    log ("Skipped Config: " + name(tr));
                    log (asList(tr.getThrowable().getStackTrace()));
                }

                for (ITestResult tr : itc.getSkippedTests().getAllResults()){
                    log ("Skipped Test: " + name(tr));
                    log (asList(tr.getThrowable().getStackTrace()));
                }

            }
        }
    }

    public void log(java.util.List<java.lang.StackTraceElement> trace){
        for (StackTraceElement ste : trace){
            String s = ste.toString();
            if (s.startsWith("sun.reflect.NativeMethodAccessorImpl")) {
                log("\n-------------------------------------\n");
                return ;
            }
            log("\t" + s);
        }
    }


    public void log(String s) {
        System.out.println(s);
    }
}

TestNG Migration

The past couple days at work, I've been migrating all of our JUnit 3 tests to TestNG. The main motivation was the ability to easily create arbitrary collections of tests. When working on a single bug or feature, it's common to write a test that only exercises the code you're working on, so it doesn't take minutes between test runs. I'd previously been manually editing the suite() method to comment out all but the test I wanted, but a couple of times recently I'd forgotten to uncomment them before running all of the tests and merging to source control. The other, related problem is test methods that accidentally get removed from the suite() method, so they're never run and it's not obvious that they aren't running. Beyond the greater feature set and flexibility of TestNG, this was enough to motivate a migration.

Also, I recommend the book Next Generation Java Testing by Cédric Beust and Hani Suleiman. I haven't read the entire book yet, but the parts I have read are very good, much better than any other testing book I know of.

The migration was simple using the JUnitConverter utility class included with TestNG. The main issues encountered during migration were:

  • Indent for the "@Test" annotations is set at at 2 spaces, and our codebase is 4
  • Any "assert(String message, String, String)" calls need to be reversed. Fortunately, I was lazy when I wrote most of the tests, so they didn't have messages. And, I wrote most of the two-arg asserts in the wrong order, so now they're correct.
  • The "assert" methods are static in Assert, so they need to be qualified with the class name or static imported in every class. Most of our tests extended a base class that extended TestCase, so I copied all of the methods in Assert into this base class, and via the magic of vim and regex capture, created a method for each that would just pass through to the equivalent Assert method. I then used a grep/sed script to change the few classes that inherited TestCase directly so they inherited my base class. An alternate solution would have been to do replace of all of the "import org.junit.*" with "import static org.testng.Assert.*;". I left all of the junit imports in so that the now-unused suite() methods would continue to compile, and I'm going to go back later and remove all of them.

After migrating the tests, I started to set up the Ant targets to call them.
First I tried:

   <target name="run-testng" depends="init,compile" >
       <testng classpathref="common.class.path" groups="fast">
           <classfileset dir="${twork.sdk2}" includes="test/**/*TestCase.class"/>
       </testng>
   </target>

However, this gave the error:

run-testng:
  [testng] Exception in thread "main" org.testng.TestNGException:
  [testng] Cannot load class from file: /scratch/FirstTestCase.class
  [testng]     at org.testng.TestNGCommandLineArgs.fileToClass(TestNGCommandLineArgs.java:691)
  [testng]     at org.testng.TestNGCommandLineArgs.parseCommandLine(TestNGCommandLineArgs.java:232)
  [testng]     at org.testng.TestNG.privateMain(TestNG.java:831)
  [testng]     at org.testng.TestNG.main(TestNG.java:818)

This was caused by my not having included the compiled test case class files on the classpath the testng task was called with. Adding the classes using the classpath tag (same as in the junit Ant task) fixed it:

   <target name="run-testng" depends="init,compile" >
       <testng groups="srg">
           <classpath>
                               <pathelement path="${twork.sdk2}"/>
                               <pathelement path="${common.class.path}"/>
                               <pathelement location="${sdk2.tsrc.dir}"/>
           </classpath>
           <classfileset dir="${twork.sdk2}" includes="tests/**/*TestCase.class"/>
       </testng>
   </target>

The component I work on is a library that is intended to be used inside a JEE container, so we have an EJB that runs all of the tests inside it. The tests depend on some external files, so the EJB gets passed a map with all of these variables in it, which it copies into system properties so the tests can get to them. (Yes, there is probably a better way to do this, but I wrote it about 2 1/2 years ago and it works.) I changed our EJB method 'runTests' to use the programmatic TestNG interface and a custom TestListenerAdapter that returns the EJB client a string of dots, 'F's, and 'S's:

   public String runTests(Map map){

           // the map comes through the raw EJB 2.0 interface, so cast the entries
           for (Map.Entry entry : map.entrySet()){
               System.setProperty((String) entry.getKey(), (String) entry.getValue());
           }

           TestNG tng = new TestNG();

           tng.setTestClasses( new Class[] {
               com.foo.FirstTestCase.class,
               com.foo.SecondTestCase.class,
           } );

           final StringBuilder sb = new StringBuilder();

           tng.addListener(
               new TestListenerAdapter() {
                   @Override public void onTestFailure(ITestResult tr) { sb.append("F{" + tr.getName() +"}"); }
                   @Override public void onTestSkipped(ITestResult tr) { sb.append("S{" + tr.getName() +"}"); }
                   @Override public void onTestSuccess(ITestResult tr) { sb.append("."); }
               }
           );

           tng.setGroups("srg");
           tng.run();

           return sb.toString();
   }

Finally, someone else had modified several test files since I had started the conversion, so I needed to find any test methods missing the @Test annotation. I updated from the source control system and then ran this:

find . -name '*.java' | xargs grep -A 2 "@Test" | grep "public void" | sed -e s/java-/java:/g | sort > out
find . -name '*.java' | xargs grep "void test" | sort > out2
diff out out2

The result was several lines long, since it included a few tests that had been entirely commented out and therefore weren't annotated, but more importantly it included the one test method that had been added.

Overall, the process was smooth and I'm quite happy with how it went.