The Skeptical Methodologist

Software, Rants and Management

More Java Style Classes in C++

I’ll refer you to this explanation for the benefits of Java style classes in C++.  The gist of it is that the dividing line between interface and implementation ought to be drawn with abstract base classes and inheritance rather than with header and cpp files.  It’s how a good chunk of the STL works, and it’s how a good chunk of Boost works.  To me, it eases compilation woes since you no longer have to worry about static versus dynamic linking, or whether your library was built with debug symbols on.  Don’t get me wrong, there is still a place for compiled libraries, just a smaller one than there was before.
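
To make that concrete, here’s a minimal sketch of the idea (a toy example of my own, not taken from the linked explanation): the interface is an abstract base class, the implementation is a derived class, and both can live header-only so there’s nothing separate to link against.

// logger.h, a hypothetical header-only library
#ifndef LOGGER_H
#define LOGGER_H

#include <iostream>
#include <string>

class Logger {                          // the interface
public:
    virtual ~Logger() {}
    virtual void log(const std::string& msg) = 0;
};

class ConsoleLogger : public Logger {   // the implementation
public:
    virtual void log(const std::string& msg) { std::cout << msg << '\n'; }
};

#endif

Client code holds a Logger pointer or reference and never cares which implementation sits behind it, or how that implementation was built.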

Anyway, and perhaps this is just me being rusty with my knowledge of the C++ preprocessor, but I find the author’s reasoning sharp yet his implementation rusty.  Hiding everything inside a global class seems like it’s just asking for trouble; I recall inner templated classes acting differently than normal global classes.  I instead prefer this solution: at the top of every header file you have, BEFORE your preprocessor firewall (the include guard), you ‘export’ the names of the types you’re defining, much as .h files have always exported names.  Turn this:

#ifndef MYHEADER_H
#define MYHEADER_H

class foo {};

#endif

into this:

class foo;

#ifndef MYHEADER_H
#define MYHEADER_H

class foo {};

#endif

That way, each time you include the header, only the definition itself sits behind the preprocessor firewall; the name is always exported and can be used.  The same problems with circular types still need to be resolved, namely by having one type hold the other by pointer or reference, but that isn’t anything new.  I haven’t used this technique in practice, so I throw it out to the community as a potential piece of not very well thought out shit; it may only work for a few trophy cases.  But it seems to solve the same problems the author is having, and it seems a more elegant solution than a global class with #includes inside of it.
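
For instance, here’s how two headers whose types refer to each other would look under this scheme (again, a toy example of my own, so take it with salt):

// a.h
class A;            // name exported BEFORE the guard

#ifndef A_H
#define A_H

#include "b.h"      // even mid-cycle, b.h still exports the name B

class A {
public:
    B* partner;     // held by pointer, as usual for circular types
};

#endif

// b.h
class B;            // name exported BEFORE the guard

#ifndef B_H
#define B_H

#include "a.h"

class B {
public:
    A* partner;
};

#endif

Whichever header gets included first, its guard is already defined by the time the cycle comes back around, so the second pass sees only the forward declaration, which is exactly what the pointer members need.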


March 27, 2009 Posted by | Uncategorized | 2 Comments

Has TDD jumped the shark?

Thesis + Antithesis = Synthesis.

In this entry, Ovid states:

One problem I have with the testing world is that many “best practices” are backed up with anecdotes (“when I write my tests first…”). Plenty of questionable assertions (ha!) are made about tests and some of these assertions are just plain wrong. For example, tests are not documentation. Tests show what the code does, not what it’s supposed to do. More importantly, they don’t show why your code is doing stuff and if the business has a different idea of what your code should do, your passing tests might just plain be wrong.

I think the author very clearly blows open the dirty secret of TDD: that it has nothing at all to do with testing.  A few people have stated this before, some even going so far as to reinvent the acronym as “BDD”, or Behavior-Driven Development.  The dictum “test first” simply can’t be supported any more; there’s too much counter-evidence.  So the TDD movement needs to refine its approach and its arguments.

To do so, let’s take our intellectual scalpel and also analyze this passage from the same post:

A number of years ago at OSCON I was attending a talk on testing when Mark Jason Dominus started asking some questions about testing. He was arguing that he couldn’t have written Higher Order Perl (a magnificent book which I highly recommend you buy) with test-driven development (TDD). If I recall his comments correctly (and my apologies if I misrepresent him), he was exploring many new ideas, playing around with interfaces, and basically didn’t know what the final code was going to look like until it was the final code. In short, he felt TDD would have been an obstacle to his exploratory programming as he would have had to continually rewrite the tests to address his evolving code.

There’s a phrase in there that should immediately leap out at you: exploratory programming.  Mark Dominus was arguing that TDD would be an obstacle when used as an exploratory programming method.  In this case, he is entirely right.  TDD is not for exploration, and there is that infamous attempt to solve Sudoku that exemplifies the failures of exploring with tests.  As inane as those poor BDD folks seemed, simply changing a few letters around and calling tests “specs” instead of tests, they really are on to something.  What is the purpose of specifications?

To communicate.

Specs have been around since the beginning of software, and are probably the closest thing we have to a claim on being engineers.  The brilliance of “Test-Driven Development” is that many, if not all, of our specifications can just as easily be written down in code as in a requirements document.

Upon the receipt of message X, the system shall respond with message Y.

Heretofore, specifications had only interfaces to communicate them; hereafter, they also have ‘tests’: examples of behavior.  Tests, in this case, need to derive from what the system is supposed to do.  They need to define the system, while the implementation behind the scenes does the dirty work.  Defining systems is a way to communicate between human programmers, and any quality increase thereof comes from people having a better understanding of what the system is supposed to do.
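
As a sketch (a toy of my own; System and respond() are invented for illustration), the requirement above drops straight into code:

// spec_test.cpp: "upon the receipt of message X, the system shall
// respond with message Y", written as code instead of prose
#include <cassert>
#include <string>
#include "system.h"   // assumed to declare a System with a respond() member

int main() {
    System sys;
    assert(sys.respond("X") == "Y");   // states WHAT, says nothing about HOW
    return 0;
}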

So let’s go back to our original quote from Ovid.  He cites problems with tests: that the assertions they make can be just plain wrong, that tests do not document code, and that they don’t document why a piece of code is doing what it does, so that the code may fall out of alignment with your business goals.  He is entirely right.  Tests written in the style of unit tests, tests written to search for defects, a) are more prone to breaking as code changes, b) don’t really document code all that well and c) can fall out of alignment with the business goals.  You cannot write your TDD style tests as if you were checking for defects.

Just as interfaces and abstract base classes let you ‘throw some code over the wall’ at another developer and have them fill out that interface, meeting compiler-enforced specifications on types, a set of tests that define your code lets you work with others much more effectively in attacking a project.  And of course, there is always the anecdote that it better allows you to work with yourself three months later, when you’ve forgotten everything.  Since unit tests do not normally fulfill the goals Ovid laid out above, let’s be explicit about what makes a good TDD style test, a test-enforced specification.
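
Here’s a rough sketch of both walls at once (names invented for illustration): the abstract base class is the compiler-enforced specification, the test is the behavior-enforced one, and whoever fills in the implementation must satisfy both.

#include <cassert>
#include <vector>

class Stack {                       // compiler-enforced spec: the types
public:
    virtual ~Stack() {}
    virtual void push(int value) = 0;
    virtual int pop() = 0;
};

void testLastInFirstOut(Stack& s) { // test-enforced spec: the behavior
    s.push(1);
    s.push(2);
    assert(s.pop() == 2);
    assert(s.pop() == 1);
}

class VectorStack : public Stack {  // one implementation thrown back over the wall
    std::vector<int> data;
public:
    virtual void push(int value) { data.push_back(value); }
    virtual int pop() { int v = data.back(); data.pop_back(); return v; }
};

int main() {
    VectorStack s;
    testLastInFirstOut(s);
    return 0;
}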

Do not test too deeply.

If the underlying specification you are attempting to capture in a test says object X shall respond with message Y when probed with message Z, then your test should say nothing about HOW object X does that.  You may know that object X does so by storing the message ID in a map and then correlating the two, blah blah blah, so you want to check that object X is building its map correctly.  That is a unit test style test, a defect-hunt test, not a specification test.  Check only for things that can be traced directly to formal requirements.  This has the dual effect of enforcing requirements on code at near-compile time AND serving as a tool to let you know when you have crappy requirements.  If you ever find yourself thinking “How can I test this requirement without testing implementation?”, then you’ve got a crappy requirement, and it’s a sign you need to talk more with your customer to get these things ironed out.  If you test just the interfaces, then you are far less prone to breaking tests by changing implementation.
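
To put the rule in code (a contrived sketch; Responder and its members are my inventions):

#include <cassert>
#include <string>
#include "responder.h"   // assumed to declare a Responder with a respond() member

void testRespondsToZWithY(Responder& r) {
    assert(r.respond("Z") == "Y");   // traces straight to the requirement
}

// The defect-hunt version would reach inside instead:
//   assert(r.messageMap().count("Z") == 1);   // tests the map, i.e. HOW
// The moment Responder stops using a map, that test breaks, even though
// the behavior the requirement actually cares about is unchanged.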

Tests should document code behavior.

If you’re expecting your tests to document how your code does what it does, then you’re setting yourself up for a world of hurt.  Self-documenting code is code that uses intelligent variable names, takes advantage of the language for DSL style constructs, and clearly comments anything out of the ordinary.  Tests, or tests that enforce specifications, need to document what code is supposed to do, not how it does it.  This is similar to the rule above; in fact, all three of these problems stem from the same central cause: using tests to test implementation, not interfaces.  Unit tests test implementation; TDD tests test behavior.  If you have a test that does not clearly show what behavior it’s enforcing, you have a problem.  Either you’ve crept back into implementation, or you have a test that’s actually testing multiple specified behaviors at once.  Tests can be coupled and cohesive just like code: break them apart, test one behavior at a time.  Name your tests intelligently, and use your test framework in a consistent manner to show what code must do to fulfill the test, not how it must do it.  Oddly enough, when it comes to documentation, the less you say the better: document only the essential problems, not the accidental ones.
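
A sketch of the naming and granularity I have in mind (Cart is a hypothetical class, defined here only so the tests stand alone):

#include <cassert>

class Cart {
    int sum;
public:
    Cart() : sum(0) {}
    int total() const { return sum; }
    void add(int price) { sum += price; }
};

// One behavior per test, named for the behavior it enforces.
void testEmptyCartTotalsToZero() {
    Cart c;
    assert(c.total() == 0);
}

void testAddingAnItemRaisesTheTotalByItsPrice() {
    Cart c;
    c.add(10);
    assert(c.total() == 10);
}

int main() {
    testEmptyCartTotalsToZero();
    testAddingAnItemRaisesTheTotalByItsPrice();
    return 0;
}

Not one testCartWorks() with twenty asserts about totals, items and the internal list all tangled together.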

Tests should always trace back to clearly defined business goals.

This is the main goal of test driven development: to enforce requirements and specifications automatically, and to document those requirements in the code so that the code is never out of tune with them.  If you then let the tests themselves drift out of tune with the requirements, you may as well go shopping.  Specifications are hard.  Since we’ve already emphasized that tests should be small, simple and self-documenting, they make great artifacts to show your stakeholders.  Before you laugh me off stage, realize that 99% of the time your stakeholder is not some clueless customer but another developer.  Developers split up work among themselves and communicate as if the developer who handed you work were your customer.  Tests should be the only documentation of these requirements; then they’ll never get out of sync.  Otherwise you’re documenting the same thing in two places, which is never a good idea.  If tests don’t trace back to clearly defined business goals, that means for some reason you’re keeping track of your business goals in a place different from your tests (your specification style tests).  It can also mean your teams aren’t communicating.  All of these are symptoms of bigger, worse problems.
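
One lightweight way to keep that trace, sketched with an invented requirement ID (this slots into the hypothetical spec_test.cpp from earlier, reusing its System):

// REQ-042: "Upon the receipt of message X, the system shall respond
// with message Y."  The ID lives in the test name, so the trace back
// to the business goal survives every refactoring.
void test_REQ_042_respondsToXWithY(System& sys) {
    assert(sys.respond("X") == "Y");
}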

Specification tests, BDD or TDD or whatever you want to call them, are communication tools.  They communicate requirements to the implementer (even if that’s the same person).  They are a wonderful way to nail down exactly what a system is supposed to do.  Many times you’ve got a few talented developers who all have different ideas on how to accomplish something; interfaces and test-enforced specifications are how they communicate.  Exploratory programming is for when you don’t even know how to accomplish something.  Tests help you pick the next step out of a dozen possible next steps; exploration helps you find a next step when there are no clear next steps.  Don’t confuse the two and think you can solve Sudoku by testing your way into the problem.  There’s a whole host of techniques for exploring a problem domain that I’m not going to touch in this post, but TDD is not one of them.  Hopefully, though, I’ve shown that while TDD can’t help you figure out the how, it can help you communicate the what to your peers.

March 8, 2009 Posted by | Uncategorized | Leave a comment