Bad UI: Avery DesignPro

Avery is mainly known for its blank printer labels. Many people use Word to create labels, but Avery also has its own program, Avery DesignPro. I used it last week to print some Christmas card labels, and found an excellent example of poor UI component design.

Look at these two screenshots: can you tell which means "print a full sheet of this label" and which means "print different individual labels"? The relevant part of the UI is the All Same "button".

[Screenshot 1: the All Same "button" in one of its two states]

[Screenshot 2: the All Same "button" in the other state]

The answer is actually neither. The entire "All Same"/"On"/"Off" area is one big button, rather than a label and a toggle, and clicking it changes the value rather than performing the action the button indicates. This is very confusing, since it's rare that you click an "Off" button to turn something on. A better design would be two radio buttons labeled "all labels the same" and "different individual labels", or some such text, so that it's clear which option is currently selected and which one you're switching to.
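
Something like this Swing sketch shows the pattern (purely illustrative; the class name and labels are mine): each option states its meaning in full, and the current selection is always visible.

import javax.swing.ButtonGroup;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.JRadioButton;

public class LabelModeChooser {
    public static void main(String[] args) {
        // Each option says what it does, and the selected one is always
        // visibly marked, so there is no on/off toggle to misread.
        JRadioButton allSame =
            new JRadioButton("Print all labels the same", true);
        JRadioButton individual =
            new JRadioButton("Print different individual labels");

        // The ButtonGroup makes the two options mutually exclusive.
        ButtonGroup group = new ButtonGroup();
        group.add(allSame);
        group.add(individual);

        JPanel panel = new JPanel();
        panel.add(allSame);
        panel.add(individual);

        JFrame frame = new JFrame("Print options");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(panel);
        frame.pack();
        frame.setVisible(true);
    }
}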

Code coverage is like compiling

Several months ago, I began a concerted effort at work to get our code coverage numbers up. This was prompted by an upper-management target of 85% code coverage by a certain date, which I initially saw as unrealistic within any timeframe. I hadn't done much work with code coverage, but I did know its primary drawback: most tools simply show that a line of code was executed, not that all paths through the code were executed (branch coverage). Any simple metric has the potential to be abused by naive management, since it's easier to measure code coverage than to measure whether the code is actually being tested correctly and fulfills the desired use cases (assuming those even exist!).
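
To make that drawback concrete, here's a contrived Java sketch (the class and numbers are mine, purely for illustration). A single test calling applyDiscount(100.0, true) executes every line, so line coverage reports 100%, yet the non-member branch is never exercised.

public class Pricing {
    // One call with member == true touches every line of this method,
    // giving 100% line coverage. But the ternary has two branches, and
    // the member == false path never runs, so a bug there (say, a wrong
    // default rate) sails through with a green coverage report.
    public static double applyDiscount(double price, boolean member) {
        double rate = member ? 0.9 : 1.0;
        return price * rate;
    }
}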

Several months later, now at 85%, I have a more positive and specific view:

Coverage is to testing as compiling is to coding.

That is, it doesn't ensure that your testing is complete, or correct, or anything, but it does make sure that nothing is completely wrong. If code is never even executed, you have zero assurance that it is correct, just as code that won't compile has zero assurance of being correct. Coverage doesn't mean the code is correct, just that it has a non-zero probability of being correct. The realities of software development only let you increase the probability of correctness, and this is one more tool for doing that.

I found large blocks of code that weren't being run at all, for various reasons. There were a few methods that were intended to override superclass methods (and weren't annotated with @Override, because the code was originally 1.4-based) but had subtle name or signature typos. Some code had subtle logic errors in its branching that prevented one path from ever executing.
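
Here's a hypothetical sketch of the kind of typo I mean (the names are made up): the subclass method compiles fine but is a new overload, not an override, and coverage flags it because nothing ever calls it.

class Connection {
    public void close(boolean force) { /* ... */ }
}

class PooledConnection extends Connection {
    // Intended to override close(boolean), but the boxed parameter type
    // makes this a brand-new overload instead. @Override would have
    // turned this into a compile error; without it, the method is just
    // dead code that shows up as uncovered.
    public void close(Boolean force) { /* never called polymorphically */ }
}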

Code coverage metrics were particularly useful in my situation: I had inherited a large amount of complex code from another developer who hadn't provided the most thorough set of tests. I could easily see what code wasn't being executed, and then devise test cases to cover it. One has to be very careful when doing this, because after the first test that covers a section of code, you no longer have the obvious warning of uncovered lines. This presents the developer with a moral hazard: they can write the simple test that gets the coverage numbers up, or they can write exhaustive tests that correctly test the code and contribute to genuine quality. You only get one chance to do the right thing.
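
To illustrate the moral hazard with the hypothetical Pricing class from the earlier sketch: both of these TestNG tests turn its lines green in a coverage report, but only the second would ever catch a regression.

import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;

public class PricingTest {
    // Gets the coverage number up: executes the method but asserts
    // nothing, so it passes even if the method is completely wrong.
    @Test
    public void coverageOnly() {
        Pricing.applyDiscount(100.0, true);
    }

    // Actually tests the behavior, and covers both branches.
    @Test
    public void memberDiscountApplied() {
        assertEquals(Pricing.applyDiscount(100.0, true), 90.0, 0.0001);
        assertEquals(Pricing.applyDiscount(100.0, false), 100.0, 0.0001);
    }
}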

Double dereferenced properties in Ant with Groovy

The TestNG report system is nice, but sometimes you need to integrate the TestNG output with some other test reporting system. At work, this other system requires that a single ".suc" or ".dif" file be written to a common directory for each "test" you run, where "test" is defined however you want. In our case, we do one test for each TestNG group we run.

Due to Ant's immutable properties, I found it difficult to get exactly the behavior I wanted. The simple way was to set a property "has.failure" if the testng-failed.xml file existed, but that had the effect of making it appear that a bunch of tests had failed rather than a single one. It wasn't that important an issue, since it was easy to see afterward which test had actually failed.

I finally got around to fixing this today. I screwed around with Ant tasks for a while, but finally decided to use Groovy, which I'll choose first next time. The main issue was the double dereference: I wanted to create a property named after the groups the test was running, and then reference it the same way, e.g., create the property ${foo}.failed and then access it as "${${foo}.failed}" (which doesn't work). Ant lets you create this property, but then requires you to jump through some as-yet-unknown-to-me hoops to actually reference it. In Groovy, however, this is simple, as shown below.

This basically replaces the "condition" tasks in my description of calling the testng task in this post.

<target name="run-testng" depends="" >
    ... call testng here
 
       <condition property="${groups}.has.failure" value="true" else="false">
          <available file="${testng.report.dir}/${groups}/testng-failed.xml"/>
       </condition>
 
       <antcall target="process-results">
          <param name="infix" value="${groups}"/>
       </antcall>
</target>
 
<target name="process-results">
       <groovy>
          def infix = properties['infix']
          def dest = properties['RESULTS_DIR']
          def suc = new File("${dest}/product.name.${infix}.suc")
          def dif = new File("${dest}/product.name.${infix}.dif")
          if (properties["${infix}.has.failure"] == "true"){
            dif.write('Pass')
            suc.delete()
          } else {
            suc.write('Pass')
            dif.delete()
          }
    </groovy>
</target>

And, yes, the framework we have requires "Pass" in the dif file. Don't ask me.

Java is dead/alive

Funniest thing I read today:

"Autoboxing was a misguided effort to paper over Java’s early decision to have a segregated type system for primitives and objects. It was Java’s Plessy v. Ferguson decision that pretended primitives and objects were separate but equal; but the claim was no more true in Java than it was in American jurisprudence."

I do love a good Supreme Court ruling analogy.

From one of my favorites, Elliotte Rusty Harold, in Java is Dead! Long Live Python!
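
For anyone who hasn't hit the seams that quote alludes to, here's the classic demonstration (all standard Java behavior; the JLS mandates Integer caching only for -128 to 127):

public class AutoboxingSeams {
    public static void main(String[] args) {
        // == compares references on Integer, and small values come from
        // a cache while larger ones do not, so "equal" numbers disagree.
        Integer a = 127, b = 127;
        Integer c = 1000, d = 1000;
        System.out.println(a == b); // true  (both from the cache)
        System.out.println(c == d); // false (two distinct objects on a default JVM)

        // And unboxing a null reference throws at runtime.
        Integer maybe = null;
        int boom = maybe; // NullPointerException
    }
}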