Showing posts with label testing.

Thursday, November 08, 2007

Some test code smells

More of my writeup on the XUnit session I went to nearly two months ago. I hope someday to make all of my note-taking blog entries coherent to a larger audience :)

There are a couple of competing dynamics you run into when writing tests. The first is that, in general, less code is better than more code. You want the code to express the concepts you need without any extra cruft, without huge globs of copy/pasted code here, there, and everywhere that are a nightmare to maintain. But you need tests as well, and tests are either more code or more people, and people are one heck of a lot more expensive than code. So it's not at all unlikely that you'll have as much test code as production code.

But those tests have to be maintained, so it behooves us to figure out the best way to do that, bearing in mind that the goals of test code are not the same as those of production code. So, here are some problems, or code smells, specific to test code that you might run into:

1. Conditional test logic

A lot of people like to say that one assertion per test is plenty. That seems like unreasonable test gold-plating to me, but if you're going so far as to put an if statement in the middle of a test, you really need multiple tests, one for each branch of the statement. Or the condition might simply be standing in for an assertion, where you're saying if (x) keep testing; else assert false. There's no point in that: just assert x at the beginning and let the test blow up there if it needs to. This really helps with the readability of the test report, too, especially if all the report says at the end is "The assertion FALSE occurred". Not helpful, whereas knowing directly from the report that x failed is much more useful. A quick sketch of the difference is below.
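
Here's a minimal sketch of what I mean in NUnit, using a hypothetical CreateOrder() fixture method and Order class (the names and numbers are made up for illustration):

// Smell: conditional logic inside the test. If the order is null, all the
// report says is that the test failed, not why.
[Test]
public void Total_IncludesTax_Conditional()
{
    Order order = CreateOrder();
    if ( order != null )
        Assert.AreEqual( 107.00m, order.Total );
    else
        Assert.Fail();
}

// Better: assert the precondition up front; a failure now names the real problem.
[Test]
public void Total_IncludesTax()
{
    Order order = CreateOrder();
    Assert.IsNotNull( order );
    Assert.AreEqual( 107.00m, order.Total );
}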


2. Hardcoded test data

This problem is related to a principle that computer science profs have been tossing around since the beginning of time: never put literal numbers or strings in code; always define them somewhere as constants instead, so they can be easily changed. Not a bad principle; certainly it's ideal that anything displayed to the user can be easily switched to another language, so you don't want a bunch of MessageBox.Show( "Are you sure you want to do this?" ) statements scattered through the code. For numbers, though, my general rule is that a value doesn't need to be a constant unless it shows up more than once.

But in a sense, just about every number shows up more than once if written properly: at least once in the code, and at least once in the test for that code. Say for example you're testing a Price object with this line of code:

Assert.AreEqual( 14.00m, CreatePrice().Retail );  // expecting $14

CreatePrice() is part of your fixture, and it sets the list price to $20. Your Price object knocks off 30% to come up with the Retail number.

But now you've got the same number in there twice! See it? $14 is in there, and so is 70% of $20. The same number.

One fix is to move everything to constants. Presumably the Price object has a GetDiscount() method, so you could make the $20 into a constant ListPriceInDollars and compute the expected amount from ListPriceInDollars and GetDiscount(). Still pretty verbose, but not really bad for this small example. A better solution can be to create an Expected Object to compare against what comes back from CreatePrice(). In the ideal case, your test would then simplify to

Assert.AreEqual( ExpectedPrice, CreatePrice() );

which would cover a boatload of other comparisons as well as your Retail value. A rough sketch of both fixes is below.
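
Here's a minimal sketch of both fixes in NUnit, assuming a Price constructor that takes the list price, a GetDiscount() method that returns the 30% discount as a decimal fraction, and an Equals override on Price that compares the interesting fields; everything beyond the names mentioned above is made up for illustration:

private const decimal ListPriceInDollars = 20.00m;

// Fixture method: sets the list price to $20.
private Price CreatePrice()
{
    return new Price( ListPriceInDollars );
}

// Fix 1: derive the expected value instead of hardcoding $14.
[Test]
public void Retail_IsListPriceLessDiscount()
{
    Price price = CreatePrice();
    Assert.AreEqual( ListPriceInDollars * ( 1 - price.GetDiscount() ), price.Retail );
}

// Fix 2: compare against an Expected Object built from the same constant;
// relies on Price overriding Equals.
[Test]
public void CreatePrice_MatchesExpectedPrice()
{
    Price ExpectedPrice = new Price( ListPriceInDollars );
    Assert.AreEqual( ExpectedPrice, CreatePrice() );
}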

Tuesday, September 18, 2007

xUnit Test Patterns and Smells

This comes from a really good session by Gerard Meszaros on Test Patterns at SD Best Practices 2007.

Here's my history on test-driven development: Back in the nineties, I first read Martin Fowler's Refactoring. I thought it was a good idea, and attempted several refactorings on the code base I was working on, with good success. I think it was one of the better-coded applications to come out of that company. But I was always annoyed, because the instructions for each refactoring would always say something like "make your changes, and test." Testing is hard, man! Especially when you're testing a bit of the application that takes two minutes to reach from application launch and relies on a Direct3D driver to do the right thing.

So I added refactoring to my arsenal but didn't think too much more about it, until about five years ago, when I ran across an article on TDD in, I think, Dr. Dobbs, but it may not have been. The article mentioned some ideas about testing and mock objects, which turned out to be exactly what I needed for the project I was working on then, which was a business-level client API with a wrapper lib for calls to the server - the ideal thing for a mock. I played with it for a while, and it worked beautifully! Pretty soon I presented a proposal for moving to TDD to the team I was working with.

There were a couple of quotes that I put in my presentation (probably from the magazine article) that I really liked:

  • Tests must be easy to run. If they aren't, people won't run them.
  • Tests must be easy to write. If they aren't, people won't write them.
This session was all about the second quote.

The problem is, tests are easy to skip. Comment out. Ignore. If you do that, your code isn't being tested. But the client doesn't care about that...at least in the beginning. Later on, if your code isn't being tested, bugs will start to crop up. You'll make a change in one area that you never in a million years thought would affect this bit of code over there. But it does, and you've introduced a bug. The client will sure care about that! So you really have to put the effort in to write tests.

But at the same time, you're selling the production code, not the tests. If your team is spending more time on the tests than on the code itself, your velocity is sure to suffer.

So what's the solution? Go back and look at the second quote again. Tests must be easy to write. How do we make them that way?

The first thing to notice is that your objectives for test code are probably going to be a little different than those for the production code. For example, execution speed is crucial for production code. You can't have your users twiddling their thumbs while they wait for your web page to load. But for test code, not so much. Go ahead and add ten seconds' worth of tests to your build; think anyone will notice? Or add four hours' worth of tests. Sounds good! Just make sure to run them overnight when no one needs to watch them.

On the other hand, is simplicity important for production code? Well...it can't hurt, of course. The smaller and cleaner you can get the code, the better. But sometimes there's nothing you can do about it: you have to add that cache for speed, or denormalize the database so you don't have to make queries across a dozen tables. But for test code? Let's say it again: Tests must be easy to write.

What else? Is correctness important for production code? Of course...but users will put up with small bugs. But correct test code is an absolute requirement. If you don't have the tests right, you'll be writing incorrect production code to satisfy the bad tests. What about flexibility? Code should be flexible, right? Not really, not test code. In fact, there will probably be enough hard-coded test values to make it hardly flexible at all.

This is getting long. I'll add more later.

Thursday, July 05, 2007

Evaluating Javascript in an NUnit test

Adam Esterline posted his solution to Javascript testing. He uses WatiN to run the tests, which I wasn't excited about; I was hoping for a way to test without having to install any more software anywhere. Here's the solution I came up with:


// Compiles a snippet of JScript.NET in memory and runs it. Needs references to
// Microsoft.JScript and System, plus using directives for Microsoft.JScript,
// System.CodeDom.Compiler, and System.Reflection.
static object Evaluator( string code )
{
    ICodeCompiler compiler = new JScriptCodeProvider().CreateCompiler();

    CompilerParameters parameters = new CompilerParameters();
    parameters.GenerateInMemory = true;
    parameters.GenerateExecutable = true;

    // Reference the test assembly so the script can call back into it (e.g. Context).
    parameters.ReferencedAssemblies.Add( Assembly.GetExecutingAssembly().Location );

    CompilerResults results = compiler.CompileAssemblyFromSource( parameters, code );
    if ( results.Errors.Count > 0 )
        throw new Exception( results.Errors[0].ErrorText );

    Assembly assembly = results.CompiledAssembly;

    // Global JScript code gets wrapped in a generated entry point; invoke it.
    MethodInfo entryPoint = assembly.EntryPoint;
    return entryPoint.Invoke( null, new object[] { null } );
}

[Test]
public void EvaluatorTest()
{
    Evaluator( "Context.Current.Data = \"Craig\"" );
    Assert.AreEqual( "Craig", Context.Current.Data );
}


This seems elegant, and it should handle things like side effects. Unfortunately I've been back on the server side for the most part and haven't really put this code through its paces.

Monday, April 02, 2007

Javascript testing with NUnit

I've been looking around for a good way to test Javascript functions.

There are at least two versions of JSUnit, one here and the other here. But there is one fundamental requirement I have for any unit testing framework: it has to integrate into an automated build script. For example, suppose you're using CruiseControl. It has an NUnit step built in; once you write up your tests, it's a matter of a few minutes' configuration to get them running as part of the build, and it's very satisfying to watch the test counts grow as more builds are done. 117 tests run, no failures. 123 tests run, no failures. 135 tests run, no failures.

So if a framework doesn't work with automated builds, it's no good to me. Do these? I'm not sure. Edward Hieatt's version seems primarily to require a browser, although he does provide a JSUnit Server, which appears to be designed to work from Ant or Java but doesn't have any particular support for NAnt or ASP.Net that I could find. Jörg Schaible's version looks even less likely to work on Windows, starting with the download, which is only provided in tar.gz format. The documentation states that it can be run from the command line; if so, that's easily adaptable to an automated build, but I didn't even take the trouble to download it, suspecting that it wouldn't run on Windows.

So I was looking around for other alternatives, and I ran across this post. I'm sure that not everything you can write in Javascript can be evaluated by the .Net Javascript evaluator, but when you write a lot of tests you get used to keeping functionality nicely isolated.

I'm not sure what the best way to use this is. My first couple of tests have the Javascript in the ASP.Net codebehind file, where it can be unit tested at test time and emitted with Response.Write at runtime. But there are a few other possibilities, such as keeping all the Javascript in a separate file that is read in at test time and included as a script file at runtime; a rough sketch of that approach is below.
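
Here's a rough sketch of that separate-file approach, assuming an Evaluator helper like the one in the July post above, plus a hypothetical Scripts\validation.js file containing an isValidZipCode function (both names are made up for illustration); File.ReadAllText comes from System.IO:

[Test]
public void IsValidZipCode_AcceptsFiveDigitCodes()
{
    // Read the production Javascript, then append a line that stores the
    // result somewhere the test can see it.
    string library = File.ReadAllText( @"Scripts\validation.js" );
    string script = library + "\nContext.Current.Data = String( isValidZipCode( \"30350\" ) );";

    Evaluator( script );

    Assert.AreEqual( "true", Context.Current.Data );
}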

So I have a lot of work to do on this technique. But it seems promising!