Monthly Archives: January 2012

CS371p Week 1

A month ago I listened to a podcast interview with Kent Beck, father of Extreme Programming, creator of JUnit, and evangelist of Test-Driven Development.  In the context of CS371p, Collatz is as much about learning to test as it is about shaving off seconds in Sphere, so I decided to give the podcast another listen for something to write about.

The interview began with some discussion of JUnit’s implementation.  I couldn’t follow all of it, but I digested several high-level points.  JUnit is a language in itself, with different semantics from Java and a different system of inheritance.  As I understand it, the framework generates objects to do different things for you, such as test loops.  Beck also discussed the unusual problem of having to test a testing tool, which I shudder at the thought of.
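I can’t inspect JUnit’s internals from a podcast alone, but the xUnit design Beck originated survives almost verbatim in Python’s standard unittest module, which makes the “framework generates objects for you” idea concrete: the framework, not the programmer, discovers methods named `test_*`, constructs one fresh test object per method, and drives setUp, test, and tearDown on each.  The `cycle_length` helper here is my own hypothetical stand-in, not course code:

```python
import unittest

def cycle_length(n):
    # Hypothetical helper: length of the Collatz sequence starting at n.
    count = 1
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        count += 1
    return count

# unittest is Python's port of the xUnit design Beck originated. The
# framework scans this subclass for methods whose names start with
# "test", builds one brand-new CycleLengthTest instance per method,
# and runs setUp -> test method -> tearDown on each instance.
class CycleLengthTest(unittest.TestCase):
    def setUp(self):
        # Runs before every test method, on a fresh instance each time.
        self.known = {1: 1, 7: 17}

    def test_one(self):
        self.assertEqual(cycle_length(1), self.known[1])

    def test_seven(self):
        self.assertEqual(cycle_length(7), self.known[7])
```

Run with `python -m unittest` and the framework does the object construction and dispatch itself, which is exactly the inversion of control the interview was gesturing at.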

When the conversation turned to general testing philosophy, Beck stressed that each test should “tell a story” and “have a moral.”  For example, a Collatz test named testEvalMax tells a better story than something generic such as testEval7.  He described testing with JUnit as a tool of abstraction: solving a problem and testing the components of that solution are two separate but related perspectives on the same work.  Dividing testing and development and focusing on each individually gives the programmer a more holistic view.  Writing tests first can also be a way of getting acclimated to the task at hand, in contrast to the celebrated practice of hacking together the first thing that comes to mind, no matter the size of the problem.
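As a sketch of the naming advice, here is a Python unittest version using a hypothetical `collatz_eval` (max cycle length over an inclusive range, the same role the course’s eval plays): the test name states the moral, not just an input.

```python
import unittest

def collatz_eval(i, j):
    # Hypothetical solution under test: max Collatz cycle length
    # over the inclusive range [i, j].
    def cycle_length(n):
        count = 1
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            count += 1
        return count
    return max(cycle_length(n) for n in range(i, j + 1))

class EvalTest(unittest.TestCase):
    # The name tells the story: the maximum can come from inside the
    # range, not from either endpoint.
    def test_eval_max_inside_range(self):
        # Over 1..10 the max cycle length, 20, belongs to 9,
        # not to the endpoints 1 or 10.
        self.assertEqual(collatz_eval(1, 10), 20)
```

A reader who sees test_eval_max_inside_range fail knows immediately which behavior broke; a failing testEval7 says nothing without opening the file.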

According to Beck, a series of unit tests can be likened to a doctor’s differential diagnosis, a process of elimination in which the most probable causes of error (or malady) are scrutinized first.  He suggested thinking of a test file as whittling down the space of all programs in the universe to the few that pass your tests.  He questioned DRY (don’t repeat yourself) as a doctrine for testing, saying that two tests will rarely exercise exactly the same functionality of a program.  DRY is crucial in development code because a method replicated with nearly the same behavior can complicate the entire source and balloon it substantially.  If I make a Collatz test for (“7 9”) and another for (“8 9”), I haven’t really complicated anything, and the latter test is easily dispensable.
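The “7 9” / “8 9” point can be sketched directly (again with a hypothetical `collatz_eval` standing in for the real solution): the two tests are nearly identical, and that repetition is harmless in test code in a way it would not be in production code.

```python
import unittest

def collatz_eval(i, j):
    # Hypothetical solution under test: max Collatz cycle length
    # over the inclusive range [i, j].
    def cycle_length(n):
        count = 1
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            count += 1
        return count
    return max(cycle_length(n) for n in range(i, j + 1))

class RepeatedTests(unittest.TestCase):
    # Two nearly identical tests. In production code this repetition
    # would beg for a shared helper; as tests, each one is cheap,
    # independent, and easily deleted without touching the other.
    def test_eval_7_to_9(self):
        self.assertEqual(collatz_eval(7, 9), 20)

    def test_eval_8_to_9(self):
        self.assertEqual(collatz_eval(8, 9), 20)
```

Deleting test_eval_8_to_9 costs nothing and changes nothing else, which is exactly why DRY carries less force here than in the code under test.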

On the topic of testing projects that grow quickly, and which have 2-3 lines of test code for every line of production code, Beck implied that quality and responsiveness must scale with volume, and that dutiful testing does not slow down development if best practices are adhered to.  “The person I learned the most about testing from was a compiler writer who wrote five lines of tests for every line of compiler code, and he was the most productive programmer I’ve ever seen.”  One crucial message was that once a test starts working, it should, and probably will, work for the rest of the project’s life.  By contrast, he has seen many instances where a project team foolishly discards an old test that suddenly starts failing rather than tracking down the source of the failure at the expense of some development speed.

For those who may blog contemptuously of Collatz’s large test quota, Beck advised using test-dogma as a learning tool.  When in doubt, test.  He spoke in awe of a certain presence of mind that one reaches after enough testing, where the programmer thinks of how to test his or her software from a high-level perspective almost immediately after thinking of how to design it.  This ability makes it easier to dive into small projects haphazardly and without testing, so that when a project occasionally matures into something large, the programmer finds that his or her code is already copacetic for writing tests.

During the closing segment of the episode, the host asked for any interesting things about testing that had not been touched on in the conversation.  Beck said that the current hastening of release cycles (which is perhaps largely due to the agile methodology he has promoted) portends an increasing reliance on testing.  Beck also pointed out an interesting pattern he has noticed over the years: a program’s tests usually follow a power-law distribution.  More often than not, there are many tests that take a short time to finish and few tests that take a long time to finish.  I have trouble finding significant utility in knowing this fact, but I nonetheless find it interesting.

That podcast, as well as others with more ostensibly thrilling topics, can be found at