The Skeptical Methodologist

Software, Rants and Management

Object Oriented Management

It’s both surprising and not surprising at all just how alike software is to business structure in general. It’s surprising in that someone unfamiliar with software wouldn’t expect the parallel, and someone familiar with software rarely thinks to look for it. But it’s not surprising in that both modern software and business structures are systems built from interactions and behaviors defined for some purpose.

The benefit of this is that you can frequently spot, out in the ‘wild’, anti-patterns that would normally smell in software. Think of the committee that must approve everything (sounds like a ‘God Object’ to me), or the engineer who has to handle his own schedule reporting rather than deferring it to an expert (sounds like someone handed out responsibilities to the wrong objects).

The fact that so many business policies are so convoluted and hard to follow in the first place smacks of spaghetti code. Business policies are nothing but a bunch of ‘patches’ on a flawed base, each policy aimed at one specific problem while the foundation beneath is in sore need of a rewrite. And when this happens, the same symptoms of a fragile, unmaintainable architecture arise: more managers (bodies) have to be thrown at it, more money has to be thrown at it, and ultimately bugs (and potential for fraud) creep in.

Hilariously enough, you can even see the same approaches to ‘business engineering’. Who doesn’t know that one guy who couldn’t write a for loop to save his life, yet marks up your code in peer reviews complaining about names or style? The same thing happens when vice presidents earn their wings by simply shuffling acronyms around!

To build an efficient, easily understandable business system (and believe me, easily understandable is the way to go: it saves you tons of money in training and maintenance (fewer middle managers), and it’s more transparent across the board, reducing the risk of fraud or negligence), you need the same object-oriented techniques: create as clean a design up front as possible, and upon any rework, be willing to ‘refactor’ your business to take on responsibilities you didn’t originally foresee.

Here’s an open question, though: do other software techniques apply to the business world? I can easily see ‘aspects’ having a role in business engineering. But functional or procedural styles really don’t seem to make sense there. I might just not be thinking hard enough about that.

June 29, 2008 Posted by | Uncategorized | , , | 1 Comment

Forth and ‘fluent’ programming

I was reading a rant from a Forther about how Java didn’t invent the idea of stringing method calls together to create what looks like a mini-DSL. I’ve seen this technique before, used to fake named arguments to a C++ constructor. First you break object creation away from the object itself, à la the factory pattern. Then you add a method to your factory object for every argument you’d like to name. Overload the assignment operator on the receiving object and voilà:

tree g = tree_factory().height(20).leaf_color("green");

But seeing it spelled out in the Forth conversation makes me wonder: are there applications of this to testing object-oriented systems? If we treat all objects as simply local stacks (which, basically, they are), then we can divide state-changing methods into stack-pushing methods, and use stack computing to automate testing for these kinds of methods.

Obviously, the usual functional-style method that lacks side effects is still the way to go to keep testing and design easy. But obviously that can’t always happen, at least not easily. If, however, you ALWAYS make side-effecting methods strictly return the object itself, then I do wonder whether stack-based thinking won’t help your design and testing strategy.

What about methods that have side effects AND return values? Well, if we break things apart along the lines above, perhaps it is these methods that are to be avoided. In a way, methods that return values without side effects are similar to ‘getters’, except in a more mature sense (since they are actually doing some domain calculation). And methods that mutate an underlying object (whose entire existence can simply be thought of as a local stack) should strictly return that object to allow chaining. Can you think of a method that MUST do both? (Barring cross-cutting cases such as caching, which should ‘invisibly’ change object state anyway.)
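
A minimal Python sketch of this discipline (the Tally class and all its names are my own invention, purely illustrative): mutators behave like stack pushes and return the object itself so calls chain, while queries are side-effect-free.

```python
class Tally:
    """Illustrative example: mutators return self, queries are pure."""
    def __init__(self):
        self._values = []

    # 'Stack pushing' methods: mutate state, return the object itself.
    def push(self, x):
        self._values.append(x)
        return self

    def scale(self, factor):
        self._values = [v * factor for v in self._values]
        return self

    # 'Getter-like' method: computes from state, no side effects.
    def total(self):
        return sum(self._values)

# Chained side-effecting calls read like a sequence of stack pushes:
t = Tally().push(2).push(3).scale(10)
print(t.total())  # → 50
```

Because every mutator returns the object, testing the mutators reduces to replaying a sequence of ‘pushes’ and then checking the pure queries afterwards.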

Since a stack, if I remember my computing theory correctly, combined with a finite state machine gives you a pushdown automaton, I imagine some static analysis could easily extract the functions that strictly mutate the underlying object into an easy-to-read diagram and, again, derive the test scenarios mentioned above.

I’ll think further on this.

June 16, 2008 Posted by | Uncategorized | Leave a comment

Testing and Language Design

Design patterns are held by some, including me, to be evidence of ‘code smell’. That is, many so-called patterns exist because a language makes it hard to do something that ought to be easy. A prime example would be the strategy and command patterns, both of which virtually disappear in a language with first-class functions.

These patterns hit Java hard, but they strike C++ too. The reason C++ gets away a little better than Java is that C++ lets you ‘hack in’ a few things, like passing around function pointers, that Java disallows. This alone, I think, is primarily responsible for Java’s verbosity.

Anywho, that’s neither here nor there.  The point is that many things we do in software over and over again are evidence of a missing abstraction.  If you ever have to write anything twice that could have been written once and reused, and you are using the language correctly, then you’ve hit a limit of that language.  The language is not doing the work for you that it should, and now the ‘work’ is back on the programmer rather than the compiler.

This is bad. But we have to deal with it. Ideally, if we used “the one perfect language”, all work would be design; there would be no “busy work” of implementing a design that’s already been implemented once. We’d just reuse the other design, swapping out the parts that vary. Object orientation was supposed to do this for us, but it obviously has limits. What’s more likely is that we’ll just keep finding new things to abstract as language design progresses: from routines, to objects, to functions, to aspects, and so on.

What does this have to do with testing? A lot of people HATE testing. Just check out a unit-testing or mocking framework in Java or C++, and you’ll see that while those frameworks help a little, there’s still a GREAT deal of code repetition. Compare these unit-test frameworks to Python’s doctest. In one, we need all the syntax of starting up a new test, associating it with a class, giving the inputs, the outputs, the assertions, etc. In the other, I just pretend I’m working with an interactive shell, with a method I’ve already implemented. I pretend the work is done: if the work were done, how would the function behave? Well, I call it with certain inputs, and I expect the shell to spit out certain outputs.
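
For contrast, here is the doctest style in practice; mean is just a stand-in example of my own:

```python
def mean(xs):
    """Arithmetic mean of a non-empty sequence.

    The test is nothing but a transcript of an interactive session:

    >>> mean([1, 2, 3])
    2.0
    >>> mean([10])
    10.0
    """
    return sum(xs) / len(xs)

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # silent when every transcript line matches
```

There is no test class, no fixture registration, no assertion syntax; the ‘framework’ is the shell session you would have typed anyway.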

This type of testing isn’t perfect, but it does show something in Python that is outright missing in other languages. It’s implemented completely outside the language design, though, via the doctest library and a prebuilt parser. What language features would we need to implement this style of testing in the language itself?

Another feature, one that is completely a part of the language in Eiffel, is Design by Contract, and it is very similar to this style of testing. In Design by Contract, I enforce, in the language, certain assumptions about my inputs, my outputs, and the invariants of my class. This can of course be done via libraries (and the assertions those libraries host) in other languages, but the point is that one language includes these kinds of tests as ‘first-class’ members while another just ‘gets away’ with a library-based implementation. Nothing is wrong with libraries, but if you implement something as cross-cutting and global as testing via a library, you make life very difficult for features like reflection and introspection. If we had an introspective version of Eiffel, I would expect it to be able to tell me what assumptions any particular method or class made. In C++, even if we had introspection, it would be much harder for an introspective system to figure out what assertions I’m making and why.
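
The library-based flavor might look like this in Python (the contract decorator and its parameter names are my own sketch, not any real library’s API):

```python
import functools

def contract(pre=None, post=None):
    """Sketch of DbC-as-a-library: check a precondition on the
    arguments and a postcondition on the result, Eiffel-style."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "precondition violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def isqrt(x):
    return int(x ** 0.5)

print(isqrt(9))  # → 3
```

And this illustrates the complaint: nothing here is first-class, so an introspective tool sees only anonymous assertions, never ‘a precondition’ or ‘a postcondition’ as such.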

This is because assertions can be used for all sorts of things, but a contract, as used by Eiffel, means ONE thing, and can always be assumed to mean one thing. Just as in mathematics, when we constrain certain things, proofs become much easier, and the constraint actually gives us MORE freedom.

I imagine, for instance, that the inclusion of ‘aspects’ in a language (that is, the ability to wrap any function or method after the fact, so that code runs before it, after it, or both) would make testing far easier. Or, alternatively: Python uses its introspection to do doctests; what if we removed the need for a parser and made tests a first-class member of the language?

There’s an idea called predicate dispatch, where I don’t just call a function based on its name (as in Python), or its number and types of arguments (as in C), or its encapsulating class (as in C++ and Java). Instead, I can dispatch on the most specific interpretation of all the polymorphic types (so-called multimethods) AND on certain predicates over those types!

A better example comes from the world of Haskell, where multiple definitions can be given for one function name, and the language does a lookup based on the pattern of the arguments. That is, I can declare a recursive function for the Fibonacci sequence not by building a single function with an if statement inside checking whether my base condition is met, but by declaring two definitions: one for the base case and one for the recursive case. Then, on each function call, the language itself looks up which case to run, the recursive or the base.
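
The Haskell idea can be imitated in Python with a toy predicate dispatcher; this is a hand-rolled sketch of the concept, not PyProtocols’ actual API, and every name in it is mine:

```python
def generic(fn):
    """Toy predicate dispatch: try registered (predicate, impl) pairs
    in order; fall through to the original function."""
    cases = []
    def dispatcher(*args):
        for predicate, impl in cases:
            if predicate(*args):
                return impl(*args)
        return fn(*args)
    def when(predicate):
        def register(impl):
            cases.append((predicate, impl))
            return impl
        return register
    dispatcher.when = when
    return dispatcher

@generic
def fib(n):                       # recursive case is the default
    return fib(n - 1) + fib(n - 2)

@fib.when(lambda n: n in (0, 1))  # base case declared separately
def fib_base(n):
    return n

print(fib(10))  # → 55
```

The branch that would have been an if statement inside fib is now a declaration the dispatcher (and, in principle, a static analyzer) can see.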

This moves branch testing up to the language level, again giving introspective frameworks like static analyzers much more information and making it easier for them to find bugs. Furthermore, it simplifies the code and makes testing easier: now a single function does not need to ensure all of its branches are exercised; instead, a multitude of smaller functions simply need to ensure their own tests are satisfied, separating concerns and easing reading.

Certainly, if we can store all of this predicate information, we’ve almost re-implemented Design by Contract (now as a pattern-matching mechanism, so it’s not just doing debugging for us but also speeding up our development!). Is it that much of a stretch to use a similar mechanism for testing? Of course, with predicate dispatch, our actual NEED for testing might in fact go down. Since it’s so often border conditions we test for, in a predicate dispatch system we’re actually going to write an entirely separate function for the border cases (for instance, the base cases of the recursive Fibonacci sequence, f(0) and f(1)).

But we’re still likely to have testing situations, even once we’ve moved a good chunk of testing out to static analysis and compile-time exceptions (“You don’t have a function defined for f when the argument is -1!”). These tests would probably be best put, doctest style, in the ‘function declaration’ of most of these predicate dispatch frameworks. That is, predicate dispatch frameworks like PyProtocols still ask you to define a new ‘generic function’ that can then be overloaded. It is in these generic functions that we can put expected inputs, expected outputs, and checks for invariants.

In a way, it’s like class inheritance. By defining the tests at the function level, we are enforcing a mechanism to say, “Whatever overloads this function must not violate f(1) = 1 and f(0) = 0, etc.” (Of course, as I mentioned before, this particular example might be completely handled by predicate dispatch in the first place.) These tests can be run at compile time to make sure that, for each input I expect, there is in fact a function that not only resolves to handle that argument, but whose output is what I expect.

Putting these input/output-style doctests at the generic-function level will not only help documentation (just as doctest already does), since the user won’t have to delve into all the overloaded members to get a good understanding of the function, but also aids design, since the programmer will first have to ask, “What are the use cases of this method/class?” at the declaration level, which probably gains more quality than any other step in the software design process. The validation tests at compile time are just icing on the cake and can provide coverage, profiling and validation.

Furthermore, the definition of these generic functions’ inputs and outputs ALSO gives us mock objects, for free! If I can define a function’s expected inputs and outputs, then I can simply mock that function based on them, using pattern matching on the inputs. If I declare a function but do not define it, yet that declaration has tests, then of course I cannot use the function, except as a mock. If you think of an object like a photograph, the fixture is the positive and the mock object is the negative. Both are really defined in a test: testing a class by plugging in its own mock self is fruitless, since you just get back exactly what you wrote down. But you can use the tests you provide to test the real class, or use the ‘mock’ version of the class to test other objects that rely on functions you haven’t even written yet, testing those objects in isolation.
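
A sketch of the ‘mocks for free’ idea (the helper and every name in it are hypothetical): a declared-but-undefined function answers by pattern matching its recorded examples, letting its collaborators be tested in isolation.

```python
def mock_from_examples(examples):
    """A declared-but-unimplemented function, simulated purely from its
    declared (args, output) examples via pattern matching."""
    table = {args: out for args, out in examples}
    def mock(*args):
        if args in table:
            return table[args]
        raise NotImplementedError(f"no example declared for {args!r}")
    return mock

# The real word_len isn't written yet; only its contract exists.
word_len = mock_from_examples([(("cat",), 3), (("horse",), 5)])

def longest(words, length=word_len):
    """Code under test, exercised against the mock collaborator."""
    return max(words, key=length)

print(longest(["cat", "horse"]))  # → 'horse'
```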

IDE support can and should be given for this sort of thing, again using introspection, to recognize when a test is not fulfilled and give the designer the option to ‘capture’ results; i.e., if the setup for a specific test might be time-consuming, just run the program and capture the test on the fly. This would work for webpage parsers, for example. If I want to test a parser, but I don’t want to go through the trouble of actually building a webpage to test it against, I should be able to do a dry run, and then, when the test fails, capture whatever webpage I’ve already downloaded and write my tests against that.
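
The capture step might be sketched like this (a hypothetical helper of mine; the IDE would presumably automate the recording):

```python
import json, os

def cached_fixture(path, fetch):
    """Capture-on-first-run: if the fixture file exists, replay it;
    otherwise run the expensive step once and record the result."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    data = fetch()          # e.g. actually download the web page
    with open(path, "w") as f:
        json.dump(data, f)
    return data

# First run captures; later test runs replay without the slow fetch.
page = cached_fixture("fixture_page.json",
                      lambda: {"title": "Example", "links": 3})
assert page["links"] == 3
```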

There are a lot of ideas here, all of which, I believe, can make testing easier to do and more effective. We’ve got a bunch of different but similar pictures of testing, from DbC to doctests to unit-testing frameworks, and it reminds me of the joke about the blind men and the elephant. We’ve all got different ideas, and no one of them fulfills all of our needs, because they are all different parts of the same big beast. Putting testing into a language itself, rather than leaving it an afterthought, helps us recognize how key testing is to the design process, pushes even more bug-catching onto static analysis, and makes it as easy as possible for the developer to create high-quality code, fast. Unfortunately, most of these things simply can’t be done with current languages for lack of features: not just pushing tests into code, but the features needed to use those tests, like introspection and aspects. As languages mature, I believe it will become easier and easier to make tests, as we know them today, not an optional thing a good designer does to find bugs but an integral part of the design and coding process, shrinking development times and letting us designers spend more time doing what we love rather than writing out testMyClass inherits TESTCLASS yet one more time.

June 13, 2008 Posted by | Uncategorized | , | Leave a comment

A new, er…, ‘method’

A Synthesis Concept

Let’s invent a new methodology. No, wait: if there’s any buzzword used by Pointy-Haired Bosses (PHBs) that really does have a repeatable connotation, it’s ‘methodology’. Methodologies always mean a cross between religious-fundamentalist zeal and a slightly good idea, and they are inherently good at hiding the good idea far in the back of the book. Let’s instead invent a new, er, approach that attempts to synthesize some of the better ideas of the better ‘methodologies’ around. We’ll call it PNAM, for “PNAM is Not A Methodology”, to remind ourselves it’s just an approach. By the way, the P is silent.

I’ve mentioned combining MDA with something like test-driven development before, and we’ll use that as the basis for our new approach. Many ‘agile’ methods are good at the key getting-it-done part, but not so good at the anticipating-change part. That’s not to say agile methods cannot react to change; quite the contrary, the whole basis of agile is to expect random change to be thrown at you and to have processes that deal best with that environment. Which changes actually get thrown at you is more nebulous: requirements, interfaces, this or that. Traditional stodgy methods like waterfall are good at anticipating these changes but terrible at dealing with them. Ask any software engineer from the ’80s what costs the most in software development and he’ll say changes to requirements, and he could rattle off a whole bunch of examples besides. Waterfall’s huge arrogance about its ability to predict change is that it believes prediction == avoidance. Agile’s huge arrogance is believing that as long as it stays as light on its feet as possible, it will always outperform a project that attempts some ‘formal design’.

June 2, 2008 Posted by | Uncategorized | Leave a comment

MDA is dead

MDA is dead, and IBM killed it.

Model Driven Architecture is dead, and the so-called champion of MDA is its killer. A craftsman is only as good as his tools, and thanks to the utter incompetence of the chief toolmaker, an entire way of developing and designing software is being relegated to the same dustbin as buzzwords like the “Rational Unified Process”.

“As a great and wonderful agile developer,” you say, “how come I’ve never heard of this MDA? If it’s so great, why haven’t I used it, since I am always using the newest thing™?” Well, the answer is that you probably have been exposed to it, and its exposure sent you into knee-jerk convulsions of architecture astronomy. Find your nearest ‘Software Architect’ and ask him to explain to you the wonder that is UML. Apparently, if we draw our pretty pictures just right, the software design will simply fall out. That’s MDA: using UML to create models of your software as a part of the design and development process. And despite your justified sense of nausea upon understanding it, it does hold a great deal of promise which, unfortunately for you, oh great agile developer, will never be shared with you.

If you want to know how MDA (or MDD, depending on which Pointy-Haired-Boss buzzword you want to use today) works, and be happy with it, then good God, never, EVER look at the UML standard. UML just happens to be MDA’s hapless younger brother who got the job because his mother made him feel guilty. No: if you want to understand MDA, look at Lisp. Or C++ template metaprogramming. Or any sort of programs-that-build-programs approach. UML just happens to be the lingua franca of MDA, but the heart of any solid MDA approach is using the models you develop to auto-generate code. Some Architecture Astronomers out there might try to convince you that the models have worth in and of themselves, but I’m sure by now you know better. As a documentation tool alone, they are sub-par, but workable. What really makes models shine is their ability to generate code, turning our pretty pictures into an exact replica in actual, fieldable code.

This could work amazingly well with many other modern ‘methodologies’, like agile or TDD. UML basically represents the code base as structured text (and pictures!). It allows comically easy major refactorings, frees your coders’ time from skeleton/boilerplate work, and provides pretty pictures to speed up a new guy’s familiarization with the system (as well as automatic documentation and something to impress your customer with). Auto-generation alone won’t get you integrated with modern approaches like TDD; for that you need both forward and reverse engineering capabilities: transforms that take a model and turn it into code, and transforms that take code and turn it into a model. This lets your architects and coders work in the same code base, the coder looking at code and the architect looking at pretty pictures, and neither is any the wiser.
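
A toy illustration of the forward transform at MDA’s heart, with a plain dictionary standing in for the UML model (this is a simplification of mine, not any real tool’s model format):

```python
def generate_class(model):
    """Toy forward transform: a tiny 'model' (one class, its fields)
    becomes compilable Python source. Real MDA tools do this from UML."""
    name, fields = model["name"], model["fields"]
    lines = [f"class {name}:"]
    params = ", ".join(fields)
    lines.append(f"    def __init__(self, {params}):")
    for field in fields:
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines)

model = {"name": "Invoice", "fields": ["number", "total"]}
source = generate_class(model)
print(source)

namespace = {}
exec(source, namespace)          # the generated code really runs
inv = namespace["Invoice"]("A-1", 99.0)
print(inv.total)  # → 99.0
```

The reverse transform (code back to model) is what the paragraph above argues is equally essential, and it is exactly the part the toy omits.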

But therein lies the problem: hackers and architects do not want to be any the wiser. Even if you can show both camps how they can easily work together, respect each other, and not be so damned arrogant about their differing approaches, they won’t do it. Instead we run again into one of the deepest and ugliest underlying human anti-patterns: people would rather be right than get things done. If a product doesn’t ship, but the designers can claim their process was flawless, most people would be happier than if the product did ship but those people’s feelings were hurt. Hence we get these huge rifts between the designing and developing communities, and just as huge amounts of rework, as architects attempt to accomplish in pictures what ought to be in code, and coders, completely unaware of good design practices, trip over themselves as a result.

For some reason, we coders let MDA get gobbled up by the architects, and since the architects try their best never to code, code auto-generation (that is, the magic beans that make the whole endeavor worthwhile) is stuck between two divorced parents. And since Mom thinks we’re over at Dad’s, and Dad thinks we’re over at Mom’s, and we (code auto-generation) are actually out smoking pot with our friends, we’re the ones who suffer. Architects keep drawing pretty pictures that have virtually no worth, and hackers keep barely scraping by on any major system.

When architects go to their architecture conferences, they hear sales pitches for tools like IBM’s wares. RSD, RDA, R-something-something: basically all the same package, and since IBM is the biggest pusher behind MDA, you’d think they’d be a top-notch shop. But they also know they don’t have to sell their tools to the coders; they just have to sell them to the architects. So as long as the pictures look prettier than everyone else’s, it doesn’t matter WHAT the tools actually produce. Besides, isn’t it the coder’s job to manually and demeaningly translate the architect’s design into code? Who needs code auto-generation when you have cheap new hires?

June 2, 2008 Posted by | Software Methodologies | , , , , | 3 Comments