The Skeptical Methodologist

Software, Rants and Management

Read, don’t reason, about the code.

I recently read this post on software exceptions, and the ensuing debate over whether exceptions or return codes are the better way to handle errors. A common critique of exceptions – and of nearly every high-level programming construct since the introduction of the object – is that they are hard to ‘reason’ about. Their logic is hard to follow for a myriad of reasons, and this makes them hard to debug and dangerous in the hands of novices.

I couldn’t disagree more with this criticism. There are many counterpoints to be made, but the main one is that we, as software engineers, need to begin to read code instead of reasoning about it. Ultimately, whether code is easy to reason about matters less than how easy that code is to read.

Reading, versus reasoning about, code can also be put another way – like our justice system, we should assume code is innocent until proven guilty. We need to be able to ‘trust’ code, even code that we ourselves did not write, to be doing what it claims to be doing. Is this because most code is good code? Or because, as designers, we’re inherently skeptical of anyone else’s output? Not at all – the main reason we need to ‘reason’ about as little code as possible is that we’re so damned bad at it. Human beings are good at reading, good at interpreting, good at working with fuzzy knowledge and coming to conclusions. Insofar as mechanical reasoning goes, though, we’re terrible at it.

The idea of ‘trusting’ code to be right until we have reason to believe it’s wrong will likely strike many of you as wrongheaded. After all, we’ve all read absolutely TERRIBLE code, code we have no reason to trust. But that’s not the point – we don’t trust code because we believe the original designer was talented and experienced. We trust the code because we have to – we simply don’t have enough RAM in our own skulls to keep large portions of code in there at once. We have to trust code because if we don’t, we hinder our own ability to track down the portions of code that actually contain defects and shouldn’t be trusted.

This means that, to have any chance at all of ‘reasoning’ about any code – say, when the code is not doing what we expect it to – we need to reason about as little of the code as possible. We need to keep as many cross-cutting concerns out of sight and out of mind as possible, to give our puny meat brains a chance to actually understand the mechanical problem that’s staring us right in the face but that we aren’t seeing. Many modern programming ‘paradigms’ have this in mind, most notably object-oriented and aspect-oriented programming. Both attempt to put like logic with like logic and to split apart as many responsibilities as possible. When I zero in, as a programmer, on a single method on a single object, I should like it to do one thing and one thing only. That way, if there is a defect, it will a) jump out at me, since there’s little code around it, and b) need to be fixed in one place and one place only.

These language constructs that OOP and AOP have introduced, as well as those introduced by functional programming, have all been an effort to raise the abstraction level so that we reason less and less and read more and more. “Reading” a heavily nested for loop in C, dereferencing pointers all over the place, is nigh impossible. Such things require us to reason about them (and reason poorly at that). Replacing such for loops with list comprehensions, or with progressive calls to map, filter and reduce, uses higher-level constructs that make certain guarantees to the user and reduce the cognitive load on the reader. The less code there is, and the higher level it is, the less we need to worry about reasoning and the more we can worry about reading.

For instance, if we are worried about a defect in a for loop, we need to go through it line by line, examining each pointer dereference and each operation. By contrast, if we are looking at a few applications of map and filter, we are assured of where the defect is not – it is NOT in the looping logic, which is stored away in the map and filter higher-order functions.
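
To make that concrete, here is a small sketch in Python (the task and names are invented for illustration): the hand-rolled loop forces you to reason about indices and mutation, while the comprehension lets you trust the looping machinery and simply read the intent.

```python
# Hypothetical task: square the even numbers in a list.

# The "reason about it" version: indices, mutation and off-by-one risk all live here.
def squares_of_evens_loop(numbers):
    result = []
    i = 0
    while i < len(numbers):
        if numbers[i] % 2 == 0:
            result.append(numbers[i] * numbers[i])
        i += 1
    return result

# The "read it" version: the looping logic is trusted machinery; only the intent remains.
def squares_of_evens(numbers):
    return [n * n for n in numbers if n % 2 == 0]
    # equivalently: list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

print(squares_of_evens_loop([1, 2, 3, 4]))  # [4, 16]
print(squares_of_evens([1, 2, 3, 4]))       # [4, 16]
```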

In the exception versus return code debate, the problems are similar – if I have a bug and have narrowed it down to a certain section of code, and I realize that code involves checking return codes from other methods, I need to figure out how to interpret those return codes (which are most likely poorly documented), and then probably drill down further into the function calls that return those codes to find out what’s going on. I am forced more and more to reason about the code, rather than just read it. In fact, that I found this section of code buggy at all is a miracle, probably requiring a few hours of sprinkling in print statements or disciplined use of a debugger to find out what was going on.

In the case of the exception, any uncaught exception bubbles to the top of the program. Generally it not only gives a better description of what went wrong but also of where it went wrong, immediately helping me figure out where the defect is. Furthermore, return codes must be reasoned about, while exceptions can be read. When I see a try/catch block, I can read it to say “there might be an exception thrown in there”. I don’t necessarily have to ask what exception, although that’s usually self-documented at the catch block, and I don’t need to ask why this exception might be thrown. If I read in code a statement that raises or throws an exception, the designer who wrote that statement has embedded the semantic knowledge that the control flow leading to it is, in fact, exceptional.

If I’m reading error codes, I’m having to lower my level of abstraction. In the world of exceptions, I have exceptions and I have return values – I can stick closer to the metaphor of a mathematical function, and closer to the idea of simply ‘reading’ what a function does from its name, not how it does it from its actual implementation. When I have to check return codes, I’ve completely lost the metaphor of the mathematical function – now I no longer know for sure which arguments a function might be modifying, and I have to go do some investigative work to look up what each ‘error’ condition actually means. In other words, when I’m forced to reason about a function, I’m forced to figure out everything in order to figure out anything. The ‘price’ of fixing that defect is higher, since it imposes more cognitive load on me. When I can read a function, and I trust that it does what it implies it does, either through its name or something like a docstring, I only need to concentrate on understanding the one thing about the function I’d like to understand – rather than the entire function itself.
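
As a rough illustration of the difference (the function names, error codes and messages here are invented, not taken from any particular library): with return codes every caller has to re-derive what each code means and remember to check it, while with exceptions the unusual path names itself and can simply be read.

```python
# Return-code style: the caller must know that -1 means 'out of range' and -2
# means 'malformed', and must remember to check at every single call site.
def parse_port_rc(text):
    if not text.isdigit():
        return -2
    port = int(text)
    if not (0 < port < 65536):
        return -1
    return port

code = parse_port_rc("80x0")
if code < 0:
    print("some error happened; now go look up what", code, "means")

# Exception style: the exceptional path announces itself; a reader of the caller
# only needs to see that a ValueError may escape here.
def parse_port(text):
    port = int(text)          # raises ValueError on malformed input
    if not (0 < port < 65536):
        raise ValueError(f"port out of range: {port}")
    return port

try:
    parse_port("80x0")
except ValueError as exc:
    print("bad port:", exc)
```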

Both with high-level list comprehensions and with exception handling, just reading what the code is telling you – “I throw an exception” or “I am a mapping from one collection to another” – is easy and gives the reader conventions he or she can rely on: namely, the bug is not in the map function itself. Likewise, with exceptions, I know that the bug is not in the exception-handling mechanism itself – unlike with return codes, I can’t forget a check or translate an error into the wrong code. These constructs are, like everything else, “no silver bullet”, but they do reduce potential errors and do reduce the cognitive load on the reader by allowing the reader to simply ‘read’ them and move on, rather than trying to ‘reason’ about them and figure out what the hell the original designer was trying to do.

(A side note to this whole conversation is what counts as ‘readable’ and what does not. Certainly, misusing features and simple lack of talent can produce unreadable code. But likewise, more advanced idioms and techniques might appear ‘unreadable’ to someone not familiar with them. When something is ‘unreadable’, it means there is a miscommunication between the original designer of the code and the current reader of the code – but how do we decide when unreadable code is the originator’s fault, via a misunderstanding of concepts or techniques, and when it is the current reader’s fault, via the same misunderstanding of concepts and techniques?

Some heavily templated C++ code is derided as unreadable. Is that because the originator has abused the templating features of the language? Or is it because the reader never understood those templating features in the first place? I bring this up not at all to imply that Joel doesn’t understand exception handling – he seems as talented a designer as any (many would say more so). But he does keep odd company, as I suspect many of the most vocal proponents of a return to error codes don’t understand exceptions in the first place.)

August 25, 2008 | Software Design | 1 Comment

Innumeracy in Academia

I have nothing but respect for Academics, but I do have a problem when a person from one field decides to enter another without the greatest humility.  In this case, I’m referring to this series of videos.  The author makes many good points, but unfortunately, the classic Malthusian catastrophe that each generation foresees seems to come more from a general, subconscious and primitive fear of large things than from any real gut understanding of what’s at stake.

Zero population growth advocates seem to constantly ignore the fact that most Western countries are nearing flat growth as it is.  If a country has three things – decent access to health care, relatively good wealth, and women’s access to family planning – then you will see growth decline to a steady-state, if not shrinking, population.  I do not take that from theory, but from empirical evidence.

Ironically, it’s usually the poorest among us who will be most affected by these zero-growth policies.  After all, which demographics are growing the fastest?  Which social class tends to have the most children?  Militant social Darwinists – the ones who still think that whole Eugenics thing sounded good if we’d honestly just given it a good shot – are uneasy about this.  Many ‘educated’ people don’t like to be reminded of the mathematical fact that unless they have more children, their genes are going to be replaced by the ‘morlocks’.  So we end up with these zero-growth advocacy groups who, instead of deciding that hey, maybe having kids wouldn’t be so bad after all, decide to push their austere lifestyle on the rest of us – most notably the underclass, whose only joy might be their children.

Consider this the alarm bell that rings “It’s all going to be OK.”  Each generation doesn’t understand how we could possibly survive another doubling, and yet we keep doing just that.  Across nearly all ways of measuring society, things are getting better.  Worldwide, wealth is better distributed than ever before.  Fewer people are being oppressed than ever before.  Disease, starvation and war are affecting fewer people than ever before.

It may not seem like it, but by empirical measures it’s true.  The main reason many of us think this generation is the worst yet is the overblown sensationalism of our modern media.  Salmonella outbreaks sicken thousands and kill a few dozen, yet more than thirty thousand people die on our roadways each year and we don’t bat an eye.  Why do we treat sharks as more dangerous than cars?  Why are we more afraid of food poisoning than we are of heart disease?  This is the innumeracy that Academia should be attacking – not finding something else to scare us all about.

I think ultimately there are people out there who have to have a crisis – they have to believe in doom and gloom.  They’re the type of people who remind you, when you’re eating dessert, that it’ll do nothing but make you a diabetic.  Or who explain that they can’t come meet you at your favorite bar because second-hand smoke causes cancer.

Why can’t we celebrate the great days we are living in?  When a war with only a few thousand casualties draws millions to the streets in protest, when we are more conscious of our environmental impact than ever before, and when things like the Internet give any sentient being anywhere a voice on the world stage?

Doom-and-gloomers get excited when the price of oil spikes, when housing prices crash, and when former superpowers invade their neighbors.  After all, we’re one step closer to proving them RIGHT.  “Things are more out of control than ever!” they’ll exclaim.  But any student of history should see our time as perhaps the most boring.  In the grand scheme of things, there really is nothing major going on – just slow, steady, exponential progress: towards wealth, health and environmental stewardship.

August 17, 2008 | Social Commentary | 2 Comments

Beyond the Machine Gun Nest: Thoughts on the IT workplace and a path forward

I don’t want to produce a wall of text here, but there is quite a lot to say and very little space to do it.  The space being, of course, how long this little charlatan can keep your attention before you drift back to whence you came, be it Digg or Reddit or watching the Olympics.

The machine gun nest metaphor was, make no mistake about it, inspired by the horrors of game development and of application development at the big three (Apple, Microsoft and Google). In all of these companies you see the same classic pattern: designers competing over how dedicated they can be to the product (measured by how many hours they put in), with little to no real compensation for it. The worst of the perpetrators would be the game developers, where 80-hour work weeks are the actual, expected process.

French Guy’s original post and its follow-up merely reminded me of these facts and finally pushed me over the edge, to the point where I felt that a counter-culture to this industrialization of software needed to form. I’m under no illusion that one little blog entry was going to start a counter-culture. But it was all I could be bothered to do 🙂

YC was just the instigator, not the focus, of the Machine Gun Nest metaphor. And while I agree that entrepreneurship and motivating oneself to do something different is categorically different from working a corporate job 9-5 every day – and, in fact, I believe such entrepreneurship should be encouraged – the same old anti-pattern of believing hours equal productivity can creep in. We’d be naive to think it couldn’t.

The problem, as much as we’d like to blame it on corporate culture, was never actually corporate culture’s fault (although corporations have certainly taken advantage of it). The criticism was always fairly leveled at us: the designers, hackers and programmers. I said we had a frat-boy hazing mentality over who could put in the most hours. And if the problem is with software culture, then why wouldn’t it also exist at startups?

The point of this post is not to go into the specific problems that crop up again and again, so often that they have their own acronyms like KISS and YAGNI. The point is that we all know a guy who’s ten times more productive than us and gets done in half the time. That’s the image we should celebrate – not the martyr who stays up into the wee hours of the morning fixing a bug that his own fatigue introduced, but the guy who can make it seem so easy and effortless that we finally get the point: we’re doing it completely wrong.  Locate these people and learn from them. Don’t just keep putting in the grind work thinking that you too will finally get the Zen of design and code and be as productive as he (or she) is. The productive programmer did not get to 4-hour work days by working 12-hour work days, any more than he got to five thousand lines of code performing functionality that would otherwise take five hundred thousand by first writing the five hundred thousand. Why would writing more buggy code help someone become the type of programmer who writes less code overall? Indeed, since lines of code seem to be our easiest metric, it’s counter-intuitive to accept that the fewer lines of code we write, the more productive we actually are (barring obvious abuses of this system).

This is an example of advice that we should all be skeptical of. Everyone in software is selling their own methodology, their own system, their own technique, and while there are some gems in there, it’s mostly coal. Including this particular methodologist. Including Paul Graham.

I won’t claim to fully understand the inner workings of YC. I’m not vying for sponsorship, nor am I really keen to get involved in their culture. But given what I know of business, I can point out a few things that – while they may be true – I find, on the face of it, unbelievable.

The average equity stake YC takes in its child projects is supposedly around 7%. But what does that really mean? Does it mean that, should I have an idea, pitch it to old Paulie, and get some funding, they are going to ask for their standard 7% stake in the company right then and there, and forever thereafter sit on the sidelines as a disinterested party?

Probably not. They will probably give me a little seed money to get started, to see if I have potential. In fact, that’s what start-up investing is all about – you have to feed the creation just enough for it to survive and grow, but not so much that it grows into a bubble. It’s a tricky art. YC can’t be any different.

They will provide me seed money, and the more promise and potential I come back to them with, the more they will be willing to invest in me – and the more they will ask in return. Equity stakes and investment most likely grow geometrically, meaning the 7% ‘average’ grossly underestimates what YC will eventually want of your company. At first it will be a tiny sliver of equity, and many of these start-ups will fail. Then in round two we have some good ideas, so we grow our investment – providing more money and asking for more equity.  Quite a few of these startups fail too. But the cycle continues until we have one or two really good ideas, really far along, and predominantly owned by YC.  Because there are SO FEW that make it, the mass of failures and their small equity stakes pull the average down to something that doesn’t seem scary at all. Say 7%.
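
To see how that arithmetic can work out, here is a toy model – the round sizes and stakes below are invented purely for illustration and are not YC’s actual terms – showing how small stakes in the many failures average out with a large stake in the rare winners to a number that sounds harmless.

```python
# Toy model: how a low "average" stake can hide large stakes in the winners.
# All numbers are made up for illustration; they are not YC's actual terms.

portfolio = []

# 100 companies take seed money for a 6% stake; 90 of them go no further.
portfolio += [0.06] * 90

# 10 raise a follow-on round, giving up more equity (16% total); 8 stall out there.
portfolio += [0.16] * 8

# 2 survivors raise a big late round and end up 40% investor-owned.
portfolio += [0.40] * 2

average_stake = sum(portfolio) / len(portfolio)
print(f"average stake across all 100 companies: {average_stake:.1%}")  # ~7.5%
print(f"stake in a company that actually succeeds: {max(portfolio):.1%}")
```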

Other fields have sayings too. Economists say there is no such thing as a free lunch. A smart guy promising to turn your start up into the next Google for only 7% equity is a free lunch.

The true betrayal of YC’s intentions is no further away than the management process: build to flip. This is not at all a new idea – I know of two companies, Enron and WorldCom, that were definitely “built to flip”. Whether you intend to flip the company over to some potential buyer, or flip it to the public with millions of buyers of company stock, the motivations and the means are the same: short-term thinking, and the invitation of a less-than-honest culture.

Our “build-to-flip” CEO just wants to make last quarter’s earnings look as good as possible because he has options he wants to exercise.  The key phrase there is look as good, not be as good.  Looking good is a hell of a lot easier than being good. This encourages sloppy accounting, this encourages big promises, and one groundbreaking news conference (with nothing to show for it) after another. Sure enough, the price goes up, the management secure their retirement, and then they waltz out of the room.

Build to flip is short-term thinking – it’s boosting last quarter’s earnings to make the stock blip upwards before you sell. It’s an immoral way to run a company, and it leaves many others who believed in you (your shareholders) out in the cold.

I’ll bet you were thinking, though, that it’d be pretty sweet to be that CEO.  And I wouldn’t be screwing over a bunch of shareholders – I’d be screwing over Microsoft, who’ll be buying me up. And who likes them anyway?

That’s not the case, sadly.  Read back up a few paragraphs and remind yourself that when you are ready to flip, you, the creator of the enterprise, most likely will not be the prime shareholder. You’ve vacated the CEO’s position and become just another shareholder. “A shareholder?” you ask. “But they’re the ones who get screwed!”

Creating value is hard, and there is no shame in putting in the hard work to create your own value and then selling that value, at a fair price, to a potential buyer. In fact, it’s rewarding – it’s proof that all your work meant something, that it got you somewhere. But putting in that hard work only because your project is kept on life support by angel investors who require more equity each and every time you visit is selling your idea, your vision and your work short.

Keep the music industry in mind. Many bands sign record deals and receive what amounts to a ‘loan’ from the record company to produce a CD. They are required to use the company’s own studios, the company’s own equipment and the company’s own personnel. In fact, they usually end up paying the company back nearly the whole ‘loan’ once over just by purchasing services from it. Then, when the CD is made, the actual loan is paid back a second time out of sales. The artist, generally, gets left out to dry. We might see celebrities and rock stars who seem to be living fantastically wealthy lives, but they are not the majority, and even those who are tend to be living beyond their means.

Being given seed money for equity is a loan (of potential future dividends). And it’s a good way to start a business – a tried and true way. But you must keep a watchful eye that you don’t end up producing the CD and being asked to leave, because it took everything you made from your start-up to pay off that loan.

Enough of that. What’s the solution? How do we fix this – how do we stop working these long hours for no reason? I admit, my last post came up suspiciously long on accusations and suspiciously short on solutions.  And this post, too, will not offer any ‘solution’ per se, but perhaps a few ideas.

I’m a big fan of capitalism. And any time I see something mispriced – say, for instance, our labor – I think: well, how can a market fix this?  Any true defender of the capitalist faith should find mispricing blasphemous, something that needs to be fixed immediately.

We must then better the market by redefining what it is we programmers sell. Frankly, we’ve done a very bad job of figuring out what our actual value-added artifacts are – what someone would actually ‘buy’ from us, other than a full-fledged finished product (which can never be defined up front!).

If we were to create a market – like an auction, where buyers of software and sellers of software openly bid on new functionality, new applications, new algorithms – what would it look like? We don’t all have to become freelance designers; these markets could operate inside corporate intranets too.

If I open up a bid on this market, and the bid asks for a piece of software – imagine with me – what will I be reading? What will you see as the semi-perfect specification that defines the contract between me, the supplier, and them, the customer?

Our craft has always benefited from breaking problems down into simpler parts, and certainly, I’d say, many of our ‘parts’ already have good specifications. If I bid on a contract that had me develop a class in C++, where the class interface and documentation were already given to me, and the customer simply said “make it do that”, we’d be closer to buying and selling based on value rather than hours. The collective wisdom and experience of years, and of hundreds of designers and coders all bidding on different projects, will price the difficulty of a specific problem or application more accurately than our current metrics of deriving SLOC counts and man-months.
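
As a rough sketch of what such a contract might look like – written here in Python rather than C++, with an invented interface purely for illustration – the buyer publishes the interface and its documented behavior, says “make it do that”, and bidders price the work rather than the hours.

```python
# A hypothetical "contract" as it might appear on such a market: the buyer
# publishes the interface and its documented behavior; the winning bidder
# delivers a concrete implementation plus tests proving that behavior.
from abc import ABC, abstractmethod


class RateLimiter(ABC):
    """Contract: allow at most `limit` calls per key per rolling `window_seconds`."""

    @abstractmethod
    def allow(self, key: str, now: float) -> bool:
        """Return True if the caller identified by `key` may proceed at time
        `now` (in seconds), False if it has exceeded the limit."""

    @abstractmethod
    def reset(self, key: str) -> None:
        """Forget all recorded history for `key`."""
```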

What of underbidding? Just like on eBay, our reputations will follow us around – in fact, in a much more useful way than our resumes ever could – and any potential customer can see our rating: “AAAAAAA!!! Would buy again!”

A simple mechanism – hell, it could even be web-based – for a market of components, of classes, of functions and algorithms would both solve the issue of more properly valuing a talented programmer’s time, and also more properly police our craft against, shall we say, less talented individuals’ contributions.

You can already see prototypes of this sort of thing on ‘rent-a-coder’ sites, but the classic web 2.0 question is: will it scale? Could open source, for example, begin to offer bids on patches for known issues that no one seems to want to work on? Would compensation from open source then be monetary, or something else – prestige? Or is this just perhaps a better way to organize what needs fixing, what needs adding, and what needs ignoring?

I was inspired by this idea when reading more about ROWE, the results-only work environment. But it’s the problem of defining what counts as ‘results’ that needs solving for this to work. I do not think it’s intractable, although some parts are harder than others. Being paid for results is a hell of a lot better than being paid for effort – it optimizes for results rather than effort. Say goodbye to the over-budget, past-schedule projects we’re essentially optimizing for now by rewarding effort. “Seriously guys, if we just worked a little harder, we could get this done.” That kind of thinking could pass into antiquity.

I offer up this market/auction driven approach to the critical masses.  May it be shot down and descend in flames gloriously.

August 10, 2008 | Software Culture | 1 Comment

New hire cannon fodder

I recently read Some French Guy’s lambasting of YCombinator, the start-up-firm firm that doles out angel investment to young kids out of college hoping to be the next Google.  Unfortunately, I can’t say I’m surprised at the state of the ‘ideal’ career in software, but I can again reiterate how appalled I am at this frat-boy mentality that has infected our culture.

It probably began with Microsoft, but you can see its effect at Apple (notably Steve Jobs’ notorious temper) and I’m sure it’s at Google too (except it’s far more sinister there).  If you have recently graduated from college with a CS degree, congratulations: your stock options are just behind that machine gun nest.

Why these large firms, and now even places like YCombinator, continually think the best way to move forward in software is to hire as many gullible, naive young programmers as possible and work them to death is beyond me.  It’s pretty well known that 80-hour work weeks and inexperience are a guarantee of continually making the same damn mistakes over and over again.  It’s also an open question why new hires let these companies take advantage of them so badly.  Paul Graham had a start-up, he begged for angel investment, and his life should show you – what does he do now?  Well, he learned from his experience that designing and building is for chumps; to make the big bucks and sit on your ass, you become an angel investor.

Kids will work for pennies.  You can continue to fill their heads with dreams of having the next big idea, even though they are carrying all the risk for you.  Junior developers, whether entrepreneurs or otherwise, are being asked to give up their 20s – probably the best, most energetic years of their lives – for a chance at making a dent in someone else’s bottom line.  (Make note, the one exception here I’ve seen is 37 Signals 🙂 )

Is it our culture?  A friend told me after visiting Japan that it was a workaholic culture.  It was considered rude to leave the office before your boss did, and the only way your boss got to be the boss is that he stayed the longest.  So they go in at six in the morning, leave at twelve at night, get drunk at the bar (because you’re expected to go hang out with your friends after work to network), and do it all over again the next day.  The culture sounded familiar.

Junior designers see lack of sleep as a ‘badge of honor’; they see long hours as proof of their worth.  If another developer stays later than you, drinks those extra cans of Red Bull while developing, well then, by God, he’s the best coder alive.  Only he isn’t.  But he is spreading a lifestyle, by adopting it himself, that is counterproductive to real progress at the company and, let’s face it, to quality of life.  If Steve works harder than you, then your PHB (Pointy-Haired Boss) is going to expect his influence to make you work longer and longer hours.  This looks good on paper, despite the psychologists all telling us that we get no more work done.

We nerds didn’t really grow up playing sports, but we shouldn’t be surprised to find out we’re probably the most competitive people alive.  We’ll constantly try to outdo each other in the quantity of hours we put in, in an effort to see who truly is the best hacker.  But have we ever stopped to think about who is really benefiting from all these hours?  Do we get paid more?  No.  In fact, because many of us are salaried, we’re effectively paid less.  Are we compensated with faster promotions?  Possibly – but don’t forget about that silicon ceiling.  The only person who knows how many hours you’re putting in is probably just the guy above you – but he makes sure to show everyone just how productive his department is (via your hard work).  He will always get the spoils.  Who will really end up getting the spoils out of any of YCombinator’s work?  Paul Graham.

We’re the infantry and this is World War I.  The officers, the generals, are actually less knowledgeable about modern warfare than we are, because they haven’t seen it.  We see it every day.  We know that hours don’t turn into value, and we know that faster means slower.  Yet, if we’re lucky, they might afford us a moment’s rest before they order us to charge, bayonets mounted, towards that next machine gun nest.  Our wiser calls to bombard it with artillery, or charge it with tanks, go unheeded.  Those are untested technologies, after all, and anyone who doesn’t charge a machine gun nest is a coward and doesn’t deserve to be called a hacker.

It’s a class war between the people who know how to get something done and the people who are slowly realizing they no longer play any role in modern development.  And I really, honestly, want no part of it.

We mere coders are artists, dreamers, idealists.  But we must face facts – we are unnaturally naive; we believe that one day, somehow, our talent will be recognized and rewarded by some benevolent, all-powerful manager.  We refuse to work together because we refuse to acknowledge just how bad it’s gotten.  We refuse to ask for more because we refuse to acknowledge how little we’re getting.  We refuse to stand up for ourselves because we refuse to acknowledge how screwed over we are.  We are the talent, the knowledge workers in today’s economy, and many of us are fearful of losing our livelihoods to some cheaper person overseas.

Some escape, some make it.  Make no mistake – a coder CAN change the world.  But I have yet to see a coder escape and not turn around and punish the next generation.  It seems that once a person realizes just how screwed over they’ve been, they can’t wait to screw over the next guy.  Like hazing in college.

There’s little I can do to convince you.  Little I can do to change the culture, really, because we are that competitive, and we are that arrogant – arrogant enough to believe that we would never be screwed over like this.  We’ll never admit that working long hours has sacrificed relationships, family, entertainment, career development, education.  It’s just ‘a part of the job’.  I can, however, make fun of you for it.  And I think you should too.

So if you take anything else from this, junior developer, it’s that you don’t have to put in that extra 10%.  You don’t have to stay the extra hours to get ahead.  But you can make a snide comment while walking past the cube of the guy who refuses to leave work.  Maybe one day he’ll get it, but until then, he’s machine gun fodder.

[edit: Fixed typos.  Keep my feet to the fire or I’ll never learn.  Thanks Michael and Joe.]

August 2, 2008 | Software Culture | 30 Comments

Unit Tests as a Negative

There’s a constant debate between hacker types and PHB (Pointy-Haired Boss) types over what exactly it means to ‘design’ software.  While many things in software that we consider ‘design’ are helpful in one way or another, they don’t amount to a design like you might find in other engineering disciplines.

For example, if I gave you a ‘design’ for a bridge – a blueprint for a bridge – it would then be a completely mechanical effort to build the bridge.  The design is a piece of knowledge that captures all the decisions that must be made to build a bridge, leaving only the physical work to be done.  (I realize I’m simplifying bridge work here, but stay with me…)

In software there’s no such thing, and we can prove it by contradiction.  If you were to give me something that could be turned into ‘software’ with only mechanical effort – i.e., some sort of ‘design’ similar to designs in other fields – what would you have actually given me?  You would have given me the CODE for that software.  Compiling IS the mechanical work of software, because our field is entirely knowledge-based.  There are no materials to put together once the ‘design’ is complete, and removing all decisions means you’ve specified your software so completely you might as well have coded it.

Programming languages are, after all, our best attempt at a language that allows the unambiguous description and determination of a system.

In other words, as the hackers have always said, “The Design is the Code”.  This point of view is very attractive, but I want to add a nuance to it that might show room for compromise between hackers and PHBs.  Software starts out at a high level – if you’re doing waterfall, you start out by building requirements; if you’re doing agile, you start out by gathering user stories and cases.  Most of us probably do a little of both – we need to start with a very high-level description of the system we’d like to build.

This would be like being asked by a city to build a bridge over some river.  You still need to scout for a location, secure funding, go through designs given to you by architects, etc.  The use-case phase is similar.  As we drill down, we turn use cases into smaller and smaller sequences of behavior the customer wants, or into more and more detailed requirements on different parts of the system.  We do this, ideally, until we reach the point where it’s more effective to use code to describe what the system should do than to use high-level abstractions like sequences and stories.

But our requirements and user stories are not just a means of drilling down into what our system is supposed to do – they are levying TESTS on our system.  At the highest level we can call this verification testing, but ultimately, for every use case, one should imagine that there ought to be an automated way to test that the system we are building fulfills that case.

Like plaster being poured into a mould, our software is ‘poured into’ these implicit tests.  The mould defines where our system stops.  It’s similar to photography – when we create a photograph, we have not only the picture but the negative.  The negative is the complete opposite of the picture, and combined with the picture it would simply look like a meaningless gray.  It is through the difference between the negative and the picture that form takes place.  Likewise, it is not just in the code, but also in our means of testing that code, that our true design forms.

Specifications, requirements and use cases are all simply high-level views of test-driven development.  A test is just the negative of the software that fulfills it, and together the test and the software form a true design.  If we focused more on continually refining our requirements and use cases into actual automated test cases, down to the lowest level, then we could take advantage of TDD from the beginning.
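
As a rough sketch of that refinement (the user story, class and method names here are invented for illustration; in practice the test would be written before any implementation exists, and a bare-bones implementation is included only so the sketch runs end to end), a story like “a visitor who mistypes their password three times is locked out” becomes an automated test that acts as the mould the code gets poured into.

```python
# Hypothetical user story refined into an automated test:
# "A visitor who mistypes their password three times is locked out."
import unittest


class AccountService:
    """Bare-bones implementation, included only so the sketch is runnable."""

    def __init__(self):
        self._passwords = {}
        self._failures = {}

    def register(self, user, password):
        self._passwords[user] = password
        self._failures[user] = 0

    def login(self, user, password):
        if self._failures.get(user, 0) >= 3:
            return False                      # account is locked
        if password == self._passwords.get(user):
            return True
        self._failures[user] = self._failures.get(user, 0) + 1
        return False


class LockoutStory(unittest.TestCase):
    def test_three_failed_logins_lock_the_account(self):
        service = AccountService()
        service.register("ada", password="correct horse")
        for _ in range(3):
            self.assertFalse(service.login("ada", password="wrong"))
        # Even the right password is refused once the account is locked.
        self.assertFalse(service.login("ada", password="correct horse"))


if __name__ == "__main__":
    unittest.main()
```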

After all, for anyone who’s done TDD, what’s the first thing you do when you start out with a blank slate?  You decide what it is you want your new object to do, and then you write a test for it.  A specification can be seen as a test (unfortunately specifications don’t exist in that form very often today).  A specification can be seen as a negative of the software it produces.

For every object in your software there should be a negative – a thing that describes the exact opposite – partnered with that object.  If your software provides a function into which you plug 3 and get out 6, then you should have a negative that plugs a 3 into some nameless thing and expects a 6 out.  These are two ways of describing the same thing, but as arts like photography and sculpture show, you need both to move forward.
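
A minimal sketch of that pairing, with an invented `double` function standing in for the “nameless thing”:

```python
# The 'negative', written first: plug 3 into some not-yet-named thing, expect 6.
def test_double():
    assert double(3) == 6

# The 'positive' that the negative gives shape to, written only afterwards.
def double(x):
    return 2 * x

test_double()
print("negative and positive agree")
```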

August 2, 2008 | Testing | 2 Comments