The Skeptical Methodologist

Software, Rants and Management

Let me show you the door…

Coding Horror and Joel both have things to say about leaving the field of software development.  Jeff hits the nail on the head,

“I mean this in the nicest possible way, but not everyone should be a programmer. How often have you wished that a certain coworker of yours would suddenly have an epiphany one day and decide that this whole software engineering thing just isn’t working out for them? How do you tell someone that the quality of their work is terrible and they’ll never be good at their job?”

I’m not one of the lucky ones who gets to work at the Googles, Microsofts or Amazons, and thus never has to work with someone completely inadequate for this field, so I can tell you how hard it is working with someone who’s obviously not here for the enjoyment of it.  I also question whether those fabled firms actually escape the hopelessly and forever lost developer, since it’s so hard to spot them in the first place.  I hear Google uses riddles and brain teasers.  Yeah, that doesn’t work.

Anyways, I would LOVE for anyone who’s thinking about leaving the industry to go ahead and do it.  Like Jeff, I think if you don’t love it, you should probably get out.  Loving development has no bearing on your character; it doesn’t make you a worse or better person.  But there is absolutely no reason you should do anything that you don’t enjoy doing.  This is not only for your own sake, it’s for the rest of our sakes too.  This is because 90% of the defects we have to fix, 90% of the horrible designs we have to work through, 90% of the shit we have to shovel is left behind by you.  Someone just in software for the pay, who doesn’t love it, doesn’t care whether or not they do a good job.

I remember I was taking some drawing classes a while back, and all my stuff was just wholeheartedly uninspired and lacking any talent whatsoever.  I wondered whether it was a lack of experience, or maybe I just hated my own stuff.  I sat myself down one night and drew, as best as I could, a still life of a Coke can.  That was all.  But I made sure to spend the time to do it right.  You know what I learned?  I learned that when I put my mind to it, it wasn’t a lack of skill on my part.  I could draw reasonably well.  But I also learned I just didn’t care about drawing enough to put in the effort to do it right.  I don’t get a thrill out of it like others do.

Programming is an art form too, and if you don’t want to put in the time and effort to do it right, then you’ll produce crap.  And the only way you will ever put in the time and effort in this field is if you love it – you can’t bondage-and-leather your way into it.  You can have an IQ of 160, a degree from MIT and the whole canon (The Art of Computer Programming, etc.) under your belt, but if you don’t love it, you will produce crap.  Crap the rest of us have to clean up.

So, if you really find yourself wondering whether starting your own business, or working on cars, or something else is for you, it’s a sign you don’t love programming.  Hell, this lesson applies to every field – the best mechanic will be the one who loves working on cars.  Anything you find yourself longing to do other than coding, any time you hate the actual process of development (not all the shit that comes with it), you honestly should think about stepping aside and contributing to society in some other way.  You are probably a much better artist than you are a developer, and you’ll help the development community, and the art community, if you just let me show you the door.


December 29, 2008 Posted by | Social Commentary, Software Culture

Hard work is for suckers

Mike Elgan claims that we’ve moved away from the ‘hard work’ ethic of the industrial age and are moving to something he snarkily calls ‘work ethic 2.0’, probably because his editor was paying him by the buzzword.  While I believe there’s an ounce of truth in this, there are a few misconceptions in the article I’d like to deal with first.

I’m no historian, but I highly doubt that the Protestant Work Ethic had anything to do with the industrial revolution; it just happened to appear at the same time.  Certainly, the myths it perpetuates, namely that with hard work you can achieve anything, had more to do with the birth and evolution of modern Capitalism than with the industrial revolution.  This is because hard work tells a person to just pick cotton faster, rather than telling him to invent the cotton gin.  Hard work frowns on innovation that makes work easier – after all, if you make work easier, then you’re really just acting out of laziness!

Secondly, while Edison did say something along the lines of 1% inspiration and 99% perspiration, that 1% inspiration was the most important part.  The flash of genius, the realization that there is a better way of doing things, is not substitutable.  Replacing that 1% inspiration with even a doubling of perspiration – working twice as hard – won’t get you anywhere but on a factory assembly line.  As a counter to Edison’s famous quote, there’s Robert Frost’s, which I’m sure to butcher: “If you work hard eight hours every day, one day you’ll get to be the boss and work hard twelve hours every day.”

Indeed, most of the perspiration that goes along with our inspiration, in the case that our inspiration amounts to anything, is in convincing others that we truly have found a better way!  A better mousetrap is traditionally disdained by any established mousetrap makers, since novelty requires them to try new marketing schemes, new sales pitches, new manufacturing floors, and on and on and on.  It’s overcoming the static friction of your current environment that saps most of that perspiration.  Anaesthesia was, for a time, thought of by surgeons as ‘cheating’, after all.  Anything that makes our lives easier must somehow cost us something else.  This is the real lesson of the ‘Protestant’ Work Ethic – the same ‘Protestants’ who gave us feeling guilty after sex or after eating good food.  Given that theme, it’s no wonder we have been taught to feel guilty for not putting in as much effort as others in doing this job or that.

Hard work has always been a bit of a bait-and-switch run by the Entitled Class on the Working Class.  Now, don’t think I’m getting all Marxist on you here, but it’s a fact that most of the bankers and auto execs getting bailed out right now really have no skill, talent or ability that separates them from others.  Either they were lucky, born into good families, or cheated.  That isn’t to say that there aren’t good, talented people out there in positions of high responsibility – after all, I did say most.  And, to confirm my Libertarian leanings, it is solely because of our free market that even those few got into the positions they did.  A more Victorian age would have ensured that in fact ALL people in positions of responsibility were idiots. 🙂

In the article, the author mentions how we instill the love and appreciation of ‘hard work’ in our children, as we value it so much.  Perhaps this is a clue as to where we should attack.  The epitome of modern education takes place at the University.  While I very much appreciated my University, and hardly consider it ‘schooling’ compared to ‘education’, I’ve now had a good run through some state-run public schools and am having second thoughts.  My University was very much about education, about each person pursuing their interests and self-fulfillment.  I guess I was lucky.  After a stint taking a few courses at a local State University, I have to say it reminded me a lot more of high school than of real College.  The professor was, in fact, simply reading out of the book (which we all had), and then put so little effort into the tests he created for us that they lacked any real statistical validity at all.

All to get a little piece of paper at the end of it that basically allows me to have a job.  Well, not really, as I already have that little piece of paper from another University, but you get what I’m saying.  People put their nose to the grindstone and put up with instructors waiting out their tenure just to get a paper that lets them have a good-paying job.  When you start thinking about this racket as applied to the Ivy Leagues, it really starts to incense you.  Just because they went to a more prestigious university, they are basically taken care of for the rest of their lives, even though the quality of our education was the same? (I’d argue mine was better, in fact.)  It’s an entitlement program for the born rich – and the pinnacle of an education process that so lauds ‘hard work’.  A bit of an irony that the top of such a program doesn’t seem to treat hard work as a thing of value at all?

Now that we’ve established that hard work and its proponents are for suckers – hard work for hard work’s sake, anyway – how does the Information Age change this?

Any Linux fan is about to be greatly offended, but one of the best poster children of the new way to work is Bill Gates.  A college dropout, he lost all interest in simply collecting entitlements.  Instead, he had that 1% inspiration and filled in the rest with perspiration.  Did he work hard?  Of course he did – but was that the REASON for his success?  Not at all.  He was successful because he was inspired to see a better way of doing things and worked hard to change the world to fit that better way.  (As an olive branch to the Linux fans out there, I don’t know Linus’s particular views on formal education; however, his own inspiration and perspiration after the fact are also great examples of the real new work ethic.)

How can we better formulate this?  If you were to go back in time and catch Bill working in his garage on some new piece of software, and you asked “Are you working really hard?”, he’d probably say no.  You see, culturally, hard work is not just putting in a lot of effort – it is putting in a lot of effort that, in other circumstances, you normally wouldn’t.  Another education example: if a student is failing a course and simply doesn’t ‘get it’, but then really puts their nose to the grindstone and pulls out a passing grade, they were working hard.  If another student is simply interested in the subject material, and ‘got it’, but still put in effort, they were not ‘working hard’.  Work, in other words, is commonly thought of as something we don’t want to do for its own sake.  We only work to get the things work brings us, namely compensation.  If we enjoy what we’re doing, then it’s not work.

Bill enjoyed what he was doing.  Therefore, even if he was putting in crazy hours on his own projects, it was not work.  It couldn’t be.  And if it’s not work, then it couldn’t be hard work either.  In other words, nothing in life that’s worth anything comes without effort.  But effort for its own sake is just called work.  If someone tries to tell you that effort, and only effort, will get you where you want to go, they’re trying to convince you to put in effort for them.

The article goes on to explain that the new ‘hard work’ is really ‘focus’.  But, really, isn’t that just another way to say hard work?  After all, if I have all these distractions, then doesn’t ignoring them simply take effort?  I’m not working hard to do some job any more, I’m working hard to ignore the distractions.  It’s still hard work!  It still rewards discipline for discipline’s sake!  It’s still industrial-age thinking.  If you honestly believe that discipline – goalless, directionless discipline – will get you what you want, then Sisyphus has a boulder he wants to talk to you about.

This reminds me of an online discussion on different work styles.  It was a question posed to potential managers.  If you were interviewing Bob, who had a stellar resume, references, and a whole portfolio of open source code he’s shown you, and he said “I want to be paid full time, but only work half time”, what would you do?  To elaborate, you know his twenty hours will be worth your average developer’s 60.  You know this is a good deal rationally; it mathematically makes sense.  Of course, we’re glossing over the difficulties of actually deciding whether or not Bob is lying and really has the skills.  We’re assuming he has them, but he’s demanding, basically, to be paid twice as much as your other developers.

Any reasonable person, any real business oriented person, would take this guy on at the drop of a hat.  Why not? What is there to lose?  But that’s not what our potential managers thought.

The responses went from bad:

“Well, if Bob could do all that production in 20 hours, imagine what he’d do in 40 hours!  I wouldn’t hire him unless he was willing to put in those 40 hours.  It’d be my job to make sure everyone reached their full potential.”

To worse:

“How dare any developer think they could skirt around what the rest of us have to do.  I’d show him the door that instant!”

You see, their job, or so they thought, was to make sure Bob worked hard, not to make sure the business was getting a good deal from working with him.  We all know for a fact that Bob’s 40th hour is going to be a lot less productive than his first, and moreover, that letting him go after 20 will allow him to recuperate that much faster.  But damnit, what a punk!  Who does he think he is?  I don’t care if Bob is tired and worthless at 40 hours, I care that he does what I say.  I’m his manager, for Christ’s sake!

Bob is the epitome of the new economy.  Bob puts in effort, but that’s not why he’s so productive.  Why then, is he so productive?  Why are there Bobs out there that still seem to be so much more useful than all the Jims and Berrys?  Because Bob enjoys what he does.  He enjoys design and development.  And if he enjoys it, he’s automatically self-optimizing.  He will only work so far as he’s getting things productively done, because as soon as he isn’t, he isn’t having fun any more.

Unlike the factory floor, the cubicle farm is generally littered with people who, given the problem and the resources, and no paycheck, would try and solve it anyway!  They’d get a kick out of it.  The fact that you’re willing to pay them is just that much better.  But the problem is, in our Protestant Ethic economy, we believe that if someone looks like they’re enjoying what they’re doing, then they aren’t working, and thus are a bad worker.  So we pile on the hours, we make conditions terrible, and we become assholes as people to show them ‘who’s boss’.

The new work ethic isn’t a work ethic at all.  It’s the idea that someone can love what they do, and thus not need to be pushed into doing it.  In software this is a double whammy, because if you love what you do you also do it better – code and design from people who enjoy it are done quicker, with fewer resources, and fewer defects than entire teams of people working like slaves on a slave ship.  Hard work is not enjoyable, by definition.  Work is something we only do for the consequences.  But we can do things that are enjoyable AND help out our businesses and the bottom line.  This is the new breakthrough in the ‘new’ creative economy.  It’s the realization that when it comes to being inspired, no amount of perspiration will ever get you that flash of genius.

December 21, 2008 Posted by | Uncategorized

C++ is a horrible language

The title of this post was shamelessly ripped off from this wonderful diatribe.  But first, some context.  I work with a bunch of electrical engineers and other guys whose sole knowledge of programming is Fortran.  Speaking of which, I am excited about the Fortran-inspired language, Fortress, which Sun is developing, I think.  Fortran was, and still is, good at what it originally was meant to do – translate mathematical formulas (FORmula TRANslator).  Tacking on objects in the ’90s was like hooking up those two friends of yours who are just SO alike, they’d be, like, PERFECT for each other.  And sure enough, the few dates they went on were boring, awkward, and you got all the blame.  Sorry, Grant.

Anyway, where was I?  Oh yeah, Fortran.  Fortran is incredibly simple, and if it’s your only language, simplicity is the goal.  You begin to learn code as being nothing more than certain steps, taken in an established order.  Practically the only real metric of how well you code in Fortran is hacks and tricks for efficiency, all made worse by the language being so low-level in the first place.  To sum it up, the double E’s I work with are obsessed with efficiency.  They couldn’t tell Big-O notation from Asshole notation, but if you accidentally make an extra copy of an integer, boy, they’ll be explaining to you how you don’t know shit about software.  In short, I’m used to having conversations explaining the importance of object orientation, generic programming, ‘scripting’ languages, and the like to people whose only way to judge code is how fast it runs.

I was surprised to hear this kind of talk from someone I regard as a pure software person.  There is no doubt about it – efficiency is incredibly important in programming.  Speed is important.  But we now realize that it isn’t the most important thing – maintainability is important.  Reducing defects is important.  Rapid delivery is important.  To deal with these things, we’ve demanded from languages more and more complicated constructs like objects, modules, first-class functions, and the like.  That’s why when someone condemns C++, even implicitly, because it’s a hair’s-width slower than C, I feel like they’ve missed the point.  But when they start claiming that a rat’s nest of function pointers is more maintainable and elegant than an object model, I’m really confused.

Don’t get me wrong.  If you were to propose writing an OS in C++, I’d think it was an interesting research project, but I wouldn’t expect delivery any time soon.  C is, was, and always will be the language for building kernels.  If you’ve read anything about domain-specific languages, you’ll agree that most problem domains are better solved using a domain-specific vocabulary, and it’s the developer’s job to turn that domain language into actual executable code.  Well, the DSL for kernels is C.  Its domain is interfacing with computer hardware and memory, and it does that incredibly well.

But a revision control system?  This is something I still don’t understand.  I’m impressed by git, but only because it was the first time I had heard of a distributed version control model.  I guess I’m on airplanes an awful lot, huh, and just HAVE to get to work at 40,000 feet?  Not really, but the distributed model simply seems more elegant given the use cases for a version control system, and quite honestly I think we’d all do a bit better if we branched and merged a little more.  Linus can implement git in whatever language he wants to.  He can do it in brainfuck for all I care; I’m just a user.  But he’s lying to himself if he believes he chose C and Perl because they were the best tools for the job, and not just because they’re the tools he’s most used to.

I don’t want my DVCS running ass slow, but quite honestly, speed isn’t what I’m after either.  I know about Git, and am interested in it, but I still use subversion for my own personal projects.  I use it because it’s integrated with the tools I use, and frankly, I don’t want to screw around with having to learn the ins and outs of yet another tool.  I believe in time tool support for various distributed systems will be more widespread, but until then I’m stuck with subversion.  And despite all my complaints, I’ve never once even considered the speed.

Believe it or not, speed is not my number one concern with version control.  That concern is, and I suspect it’s everyone else’s too, correctness.  My version control better NEVER, EVER lose information I’ve committed to it.  That’s the point.  So is speed important after correctness?  No, usability is what is important to me after correctness, as I’ve described above.   Is speed third?  No, cost is third.  Thankfully, there’s a plethora of free options.  Speed and efficiency are number 4 in my list of things I need in a DVCS.

So if correctness is my most important requirement, which language would I prefer to code in – C or C++?  C++, hands down.  It’s more type-safe via its better casting mechanisms and templates, RAII is a godsend, and exceptions are a better way to ensure errors are detected and dealt with than return codes that people can ignore.  What about usability?  Well, the two languages are on more even ground here, since just about EVERYTHING has an interface to C code, but luckily most of those interfaces work with C++ with very little fuss.  In addition, while most people criticize C++ for letting programmers overdesign too easily and produce an API when one isn’t needed, I suspect that this is actually useful for extensibility, and therefore for usability with third-party tools.  The two languages add equal cost (free) in and of themselves.  Finally, when it comes to speed, one might be tempted to finally pronounce C the winner.  But given that C++ isn’t, honestly, that much slower than C, and that your greatest bottleneck in a version control system tends to be the network, I’d say there’s no clear winner.

In other words, if I were to write a DVCS, I’d come armed with C++ and Python, not C and Perl.  But I’m not criticizing.  Linus is the one who’s thrown the first stone here, and although I’ve made a point of saying he’s lying to himself if he claims speed was the real reason he chose C for git, for good measure let’s see if he has any real criticism of C++ worth acknowledging.

Firstly, he says C++ sucks because he doesn’t like the programmers who use it.  Fair enough.  C++ is too ‘expert friendly’, as Bjarne himself has said, and many of those who think they can code in it probably shouldn’t.  But you also have to take into account that C is becoming one of those ‘revered’ languages, à la Lisp, that only attract people who are pretty smart in the first place.  In other words, a decent C++ programmer can perform as well as or better than a decent C programmer, and there are more decent C++ programmers – but also far more terrible C++ programmers to sample from.  Either way, this doesn’t really amount to a good criticism of the language itself, and moreover, Java has, thankfully, been stealing most of our terrible C++ programmers.

Secondly, he argues that C++ programmers overly rely on the ‘crappy’ STL and Boost libraries, which are apparently neither portable nor efficient.  If we’re talking raw memory management, nothing is faster than a primitive array.  But given the correctness guarantees I can much more easily get with the STL, and the fact that it isn’t that much slower, the STL is a better bet.  Good programmers write, great programmers steal.  It’s a fallacy to believe that you can build a better vector, even if you’re the creator of Linux.  If you can, please do, and share it with the rest of us.  Most of the efficiency gains one gets from rolling one’s own list, hash and algorithm code are lost in the world of templating anyway.  It’s just not worth your time.  Furthermore, insofar as portability is concerned, I’m not sure if we’re in the early ’90s or not.  Templated code has been easy to compile since at LEAST the turn of the century.  Boost may not be portable to everything, but it’s at least as easy to port as C – for example, I don’t have to switch threading libraries with Boost::Thread.  Portability was the point.

Finally, he brings up the efficiency argument, which frankly I just don’t understand.  Why is he considering git something that requires system-level efficiency down to the very last bit?  Why didn’t he just write the damn thing in assembly, if he’s that obsessed?

It all boils down to this: git was never designed with users’ needs in mind.  It was designed to download Linux source as fast as possible, and I’m sure it does that well.  The problem is, if git fanboys and Linus want to go around advertising their system as the superior one, they might have to actually start listening to these ‘users’.  If he wants to build something for himself, as his own personal tool, then why not use the language he’s most familiar with?  But if he wants to build something that the programming community as a whole should use and enjoy, then he needs to stop pretending he magically shits bricks and every design decision he makes is the best one.  Choosing C for a project he expects others to adopt, maintain and extend was a bad decision – one he made for himself, and which he’s continuing to justify by holding on to shreds of an early-’90s fear of a new language.

December 17, 2008 Posted by | Uncategorized

Tying your hands behind your back and loving it

Small post today, but a thought.

Sometimes, for many problems, the best way to approach the programmatic solution is to code or specify the solution in the language you wish you had, and then implement that language.  This is the DSL approach to programming.  The best poster children for this sort of programming are Lisp and Forth, primarily because they are so easily amenable to the meta-programming required.

As a side note, it’s ironic that the DSL approach to programming isn’t more widely adopted in corporate and engineering culture.  Just about every other day, some new company is promising a new way to codify system requirements to help in the software engineering process.  It seems like the solution they’re approaching asymptotically in the ‘enterprise’ field is to code up user requirements in a programming language itself.  But I digress.

DSLs are incredibly useful from a usability perspective – frequently, a domain-specific declarative language can specify something in far fewer lines than the equivalent fully fledged programming language can.  I recall a presentation (which, unfortunately, I don’t have the time to link to) that described Adobe Photoshop’s code as being 90% publisher-subscriber patterns trying to run an overall MVC, such that any new command you use is instantly reflected in the underlying picture.  That means, as a good estimate, 90% of their bugs exist in that code too.  The speaker then presented a small, declarative language he developed to capture most of the same usage of Photoshop.  The code was far smaller, more maintainable and more readable.

But a frequently overlooked side of DSLs is that they limit you.  In many cases, DSLs are not Turing-complete, meaning there are tasks that any particular DSL simply can’t express.  Believe it or not, though, this is the best thing they have going for them.  A language that’s Turing-complete can express any computable algorithm, but it also suffers from some drawbacks, not the least of which is the Halting Problem.  The idea is that, for any Turing-complete language, there is no general procedure that can decide whether an arbitrary program in that language halts.  In layman’s terms, we can’t always detect infinite loops.  That’s not all, though, as many problems can be reformulated in terms of the Halting Problem, meaning that they too are undecidable.

Do we need a Turing-complete language, though, to describe a GUI?  Or an MVC framework?  For most of our work, we can sacrifice a tiny bit of power to step back into weaker computation mechanisms.  In fact, when you think about it, the only problem we truly need a Turing-complete language for is describing which non-Turing-complete language we ought to be using.  I’ll get back to this in a second.

If a language is not Turing-complete, we can prove far more useful things about programs written in that language.  That means program-proving mechanisms become a much more useful way to eliminate and track down bugs and defects.  None of this is to say that we can’t prove a lot, or at least assume a lot, about programs we write today – it’s very impressive just how far static analysis can go based mostly on heuristics.  But the fact of the matter is, with simpler constructs, we can prove far more.  And when I say prove, I mean prove.  Unit tests, while very valuable for defect detection and development, can never prove bugs’ absence, just their existence.  Via proofs, in a language limited enough to be constrained in this way, we can prove the absence of certain bugs.

What does this mean insofar as development is concerned?  It means that, perhaps, one size should not fit all.  It means that we can tie our hands behind our backs with DSLs and love it, as long as we have a variety of DSLs to do every small job for us.  We will have constrained where defects can occur: to the Turing-complete languages we must use to implement our DSLs, and to the glue code we must write to have our DSLs interact.  That’s what I was talking about before – perhaps the best role for our powerful, Turing-complete languages is not to solve problems, but to describe less powerful languages that solve those problems, and to prove things about those languages.

A language for GUIs, a language for events, a language for this and that.  We are already used to this sort of thing; we just call these things libraries.  Learning a new DSL, since it does not have to be anywhere near as expressive as a fully fledged programming language, need not be any more difficult than learning a new API.  Using this approach to development, not only can we begin to reap the huge productivity benefits of using declarative, specific languages to solve specific problems, but we can also reap the huge quality benefits of relying more on automated proving mechanisms rather than just tests.  This quality benefit itself turns into a boost in productivity, since most of our time gets eaten up chasing down defects anyway.

This sort of separation of concerns can already be seen in a language like Haskell.  While Haskell has not demarcated a certain part of its core vocabulary as Turing-complete and another part as somehow less powerful, it has taken the important step of separating pure computation from more dangerous things like I/O via its monad mechanism.  As an example, I imagine it’d be far easier to prove something about pure functional Haskell code than about code in even another functional language.  This drastically reduces the amount of program testing and manual verification we humans have to do, and the language already tells us where to focus our effort – the state-driven I/O code.  This sort of separation of concerns can continue, such that each time we delve into a specific way to solve a known problem, we already have a huge safety net ensuring quality, even before tests.  Combine this with the expressiveness of the same specific language on the specific problem, usually reducing the amount of code by as much as a factor of ten, and you can almost see, if not a silver bullet, a good, solid lead cannonball for development and developers.

December 14, 2008 Posted by | Software Methodologies