The Skeptical Methodologist

Software, Rants and Management

Conway’s Corollary

Conway’s Law states that a piece of software’s architecture will inevitably resemble the organization that produced it. The classic example: if you have four groups of people building a compiler, you’ll get a four-pass compiler. Well, I posit that the opposite is true as well: given any software architecture, there is an optimal social organization to build it. This seems trivially true, but at the same time it gives us some insight into the software ‘engineering’ process, or project management.

Right now, there is a strong inclination in software development organizations to have two chains of command: a ‘people’ chain and a ‘technical’ chain. The ‘people’ chain, commonly derided as ‘management’, tends to deal with contracts, hiring, firing, personnel reviews and other ‘business’ stuff. The technical chain decides how to architect a product, what technologies to use, how to implement it, and other nitty-gritty details. This split exists not because separating people management from technical management is a good idea, but because it is so incredibly hard to find one person talented in both areas. Your most technical people tend not to get along with others, while your most social people tend to balk at technical work.

The problem is that work is broken down by the management side, based not on any architectural guideline but on the manpower and resources available. The resulting organizational structure is driven far more by office politics, historical relationships and other business partnerships than by what is ideal for the product. And thanks to Conway’s Law, we can expect that organizational structure to have more impact on the product and the project than any other decision, technical or managerial.

Software development is, poetically, much like software itself. The Mythical Man-Month tells us that one of the inherent falsehoods repeated in software engineering circles is the idea that effort equals productivity, and that measuring effort in man-months confusedly implies that men and months are interchangeable. This is altogether too similar to another problem currently faced by the computer science world: concurrent programming. Throwing another processor at a program will not necessarily double its speed. In fact, attempting to make some things concurrent can slow you down when the problem is inherently sequential in nature. The trick is decoupling processes that can be effectively completed in parallel.
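The limit being described here is usually stated as Amdahl’s law: if only a fraction of the work can run in parallel, the sequential remainder caps the speedup no matter how many processors you add. A quick sketch of the arithmetic (the formula is the standard one; the example fractions are mine, purely for illustration):

```python
# Amdahl's law: speedup from n processors when only a fraction p of the
# work can be parallelized. The fractions below are illustrative, not
# taken from any real program.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"parallel fraction {p:.2f}: "
          f"2 cores -> {speedup(p, 2):.2f}x, "
          f"16 cores -> {speedup(p, 16):.2f}x, "
          f"upper bound -> {1 / (1 - p):.0f}x")
```

Even with 99% of the work parallelizable, sixteen processors fall short of a sixteen-fold speedup, and with half the work sequential they barely help at all.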

The same can be said of software development. There are some inherently sequential tasks, tasks that must be done step by step, and thus really can only be done by one person. Splitting up these tasks results in less work done: two people can, at best, go no faster than one (if they were both to simply solve the problem independently) and, at worst, go slower due to communication overhead. The trick to utilizing the most manpower on any project is to find the optimal number of ‘threads’ that run concurrently throughout the project, the decomposition that naturally requires the least communication overhead. That is the number of developers you can use. If you try to squeeze out any more concurrency than that, you’re going to be drowning your developers in meetings and other communication overhead. The project will cost more, and take longer.
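Brooks makes the communication cost concrete: n people working on coupled tasks have n(n-1)/2 pairwise channels to maintain. Here is a deliberately crude model of the trade-off; the per-channel cost is a made-up number, there only to show the shape of the curve:

```python
# Crude model: each developer adds one unit of work, but every pairwise
# communication channel eats a fixed slice of the team's output.
# CHANNEL_COST is invented for illustration.
CHANNEL_COST = 0.15

def net_output(n, channel_cost=CHANNEL_COST):
    channels = n * (n - 1) / 2          # pairwise communication paths
    return max(0.0, n - channel_cost * channels)

for n in (1, 3, 5, 7, 10, 15):
    print(f"{n:>2} developers -> ~{net_output(n):.1f} units of useful work")
```

The exact numbers mean nothing; the shape is the point. Past some team size, each extra developer costs more in coordination than they add in work, which is exactly the ‘costs more and takes longer’ outcome above.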

If we take Conway’s Corollary, that the best software architecture necessarily has a best organization to develop it, then we realize that project management cannot begin until the primary architecture is in place. This architecture must be done entirely from a technical point of view, since any manpower concerns will trick us into thinking we’re optimizing for our resources; Conway’s Corollary says we are not. The ‘best’ architecture can be split into so many components, and those subdivided further, until we reach the point where the sub-components are too coupled with each other to yield any more real concurrent-development gain. That decomposition, then, should be used to develop the project plan and estimate manpower and resource needs, rather than the other way around.

Conway’s Law says that organization drives architecture. If we turn it on its head, and let architecture drive organization, we might find out why what took the generation before us ten people and a time-share mainframe now takes teams of dozens of developers with the best equipment.


January 17, 2009 Posted by | Social Commentary, Software Culture, Software Methodologies | 1 Comment

Let me show you the door…

Coding Horror and Joel both have things to say about leaving the field of software development. Jeff hits the nail on the head:

I mean this in the nicest possible way, but not everyone should be a programmer. How often have you wished that a certain coworker of yours would suddenly have an epiphany one day and decide that this whole software engineering thing just isn’t working out for them? How do you tell someone that the quality of their work is terrible and they’ll never be good at their job?

I’m not one of the lucky ones who gets to work at the Googles, Microsofts or Amazons and thus never has to work with someone completely inadequate for this field, so I can tell you how hard it is to work with someone who is obviously not here for the enjoyment of it. I also question whether those fabled firms actually escape the hopelessly and forever lost developer, since it’s so hard to spot them in the first place. I hear Google uses riddles and brain teasers. Yeah, that doesn’t work.

Anyways, I would LOVE for anyone who’s thinking about leaving the industry to go ahead and do it. Like Jeff says, if you don’t love it, you should probably get out. Loving development, or not, has no bearing on your character; it doesn’t make you a worse or better person. But there is absolutely no reason you should do anything that you don’t enjoy doing. This is not only for your own sake, it’s for the rest of our sakes too, because 90% of the defects we have to fix, 90% of the horrible designs we have to work through, 90% of the shit we have to shovel is left behind by you. Someone who’s just in software for the pay, who doesn’t love it, doesn’t care whether they do a good job.

I remember taking some drawing classes a while back, and all my stuff was just wholeheartedly uninspired and lacking any talent whatsoever. I wondered whether it was a lack of experience, or whether I just hated my own stuff. So I sat myself down one night and drew, as best I could, a still life of a Coke can. That was all. But I made sure to spend the time to do it right. You know what I learned? I learned that when I put my mind to it, it wasn’t a lack of skill on my part. I could draw reasonably well. But I also learned I just didn’t care about drawing enough to put in the effort to do it right. I don’t get a thrill out of it like others do.

Programming is an art form too, and if you don’t want to put in the time and effort to do it right, then you’ll produce crap. And the only way you will ever put in the time and effort in this field is if you love it; you can’t bondage-and-leather your way into it. You can have an IQ of 160, a degree from MIT and the whole canon (The Art of Computer Programming, etc.) under your belt, but if you don’t love it, you will produce crap. Crap the rest of us have to clean up.

So, if you really find yourself wondering whether starting your own business, or working on cars, or something else entirely is for you, it’s a sign you don’t love programming. Hell, this lesson applies to every field: the best mechanic will be the one who loves working on cars. Anything you find yourself longing to do other than coding, any time you hate the actual process of development (not all the shit that comes with it), you honestly should think about stepping aside and contributing to society in some other way. You are probably a much better artist than you are a developer, and you’ll help the development community, and the art community, if you just let me show you the door.

December 29, 2008 Posted by | Social Commentary, Software Culture | Leave a comment

Convergent and Divergent Thinking

I read this article in the Times today and began thinking about how apt a metaphor it is for software.

Not to put words in the author’s mouth, but to sum up, human thinking seems to fall into two discrete camps, what they call divergent and convergent thinking.

Convergent thinking is basically attentive, focused thinking, where the mind applies reduction rules to a problem. Doing a math problem is a good example of convergent thinking: the brain is applying a mechanized system, where the answer is just some number of steps away from the problem, and we know how to get to B from A, and to C from B, and so on, until we arrive at the answer. It’s just a matter of doing the work.

Divergent thinking is what we do when we don’t know the answer, when we don’t know the next step, when the problem is completely open-ended. A common problem you solve using divergent thinking is a word puzzle like Wheel of Fortune. How do you solve those puzzles? Do you methodically plug every letter of the alphabet into the puzzle until one fits? Obviously, even a trivial puzzle could have you calculating for days with that method, since it explodes in complexity. Instead, you stare at it, relax a little, and the answer occurs to you. It’s a eureka moment.
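To put a number on how hopeless the brute-force route is: the search space for even a modest puzzle is astronomical. A back-of-the-envelope sketch, with an arbitrary puzzle size and checking rate:

```python
# Naive brute force on a word puzzle: try every possible letter for every
# blank. Twelve blanks and a million guesses per second are arbitrary
# numbers chosen only to show the scale.
blanks = 12
candidates = 26 ** blanks
seconds = candidates / 1_000_000        # at a million guesses per second
years = seconds / (60 * 60 * 24 * 365)
print(f"{candidates:,} candidate fillings -> about {years:,.0f} years to try them all")
```

About three thousand years for a twelve-letter phrase, which is why nobody actually solves these puzzles that way.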

We’ve all heard stories of spending all day working tirelessly, and fruitlessly, on a project, only to have the answer occur to us right as we drift off to sleep.

As software developers, we’ve basically come to believe the false notion that what we do is convergent thinking. We’re led to believe that, because what we’re ultimately working on is algorithms and numbers and bits and bytes, we need to use the same strategies we’ve always used for those technical problems. Indeed, the vain hope behind each new software methodology is that the ‘one true process’ will emerge and allow anyone to produce whatever they want given some specified requirements.

We don’t just need experience to tell us the folly of this expedition; now we can look to human psychology.

I would posit that, if anything, the arts, like sculpture, painting and drawing, are fundamentally divergent activities. Certainly an artist must have a measure of ‘technical’ skill; she ought to be able to draw well. But just drawing well does not a work of art make: she also needs a flash of insight, she needs to be inspired. Otherwise what she produces will be drab and lacking in quality. It will be unmotivated. Quality isn’t specifically tied to divergent thinking, and it’s something I’d like to say more about, but at a later time.

Anyways, to produce a work of art, one must have the technical skill to do so, but also the flash of insight to know what to produce in the first place.

Software, while it has some ‘engineering’ aspects, is fundamentally a creative pursuit. This is not some wishy-washy statement, but a simple conclusion from logical deduction. Yes, it’s ironic that we’re using logic to show that logic alone won’t suffice in the end, but that’s beside the point.

The fundamental problem-solving technique we developers use is abstraction. Upon finding a solution to a problem, we identify what can vary, abstract out the solution, and reuse it. This is the basis of libraries and all other code ‘reuse’. The point being: if there is ever an ‘algorithm’, an actual pattern of work, some sort of mechanical process, it is within our power to abstract that pattern out and reuse it as a primitive. Despite what the GoF will tell you, patterns CAN be generalized and pushed into languages and libraries. Hence, in the ideal development environment, you won’t be faced with reimplementing some already-known technique or solution; those solutions will already be at hand. What you will be doing is using those known solutions in a completely unique way that’s never been tried exactly like this before. You now have an open-ended problem, for the simple reason that all the problems convergent thinking could solve have already been solved for you by the people before you. And once your solution is complete, it too will fall into the hall of convergent solutions and become reusable. Hence, the only real work in idealized software development is the creative work of taking the solutions we already have at hand and putting them together in a completely new and unique way.
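One concrete illustration of patterns getting absorbed into languages and libraries (my example, not the author’s): the GoF Strategy pattern, a small class hierarchy in the languages it was written for, collapses into an ordinary function argument once first-class functions are a built-in primitive.

```python
# The Strategy pattern without the ceremony: the varying part (how to
# order items) is passed in as a plain function, and the stable part
# becomes a reusable library primitive. The invoice data is invented.
def sort_by(items, key):
    """Tiny 'library' routine: the caller supplies the varying strategy."""
    return sorted(items, key=key)

invoices = [{"id": 3, "total": 120.0}, {"id": 1, "total": 75.5}]

by_total = sort_by(invoices, key=lambda inv: inv["total"])   # one 'strategy'
by_id = sort_by(invoices, key=lambda inv: inv["id"])         # another, no subclassing
print(by_id)
print(by_total)
```

Once the pattern lives in the language (or the standard library, as `sorted` itself does), the next developer never reimplements it; they just combine it with everything else in a new way.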

That ‘proof’ required a lot of hand-waving, but I think you get the point. Common sayings in software like DRY and KISS exist as words of wisdom because we should not be doing mechanical work as software developers; that’s what the computer is for. We should focus on the creative work and let the computer do the rote stuff. Most problems in software, I suspect, come from either the false belief that creative work can be turned into mechanical work (our endless supply of methodologies is evidence of that) or the belief that mechanical work should be redone as creative work (reinventing the wheel and not-invented-here syndrome are evidence of that).

As the article at the top implies, creative work is best done divergently. And creative problems are best solved by not trying to solve them at all. Know the problem, understand it in detail and research it. Investigate other attempts at solutions and have discussions, but never think you can just sit down and mechanically grind out some sort of solution. Inevitably, the ‘ideal’ solution that ought to be a one-liner will blow up into thousands of lines of case statements as you learn more and more about the problem. As we know, the best predictor of the number of bugs is the number of lines of code. Less is definitely more.

Back to my post on new hires being driven by company culture to simply throw as many hours at a problem as possible: we can now see why mere hours won’t amount to much. Psychology has shown that while our ability to solve problems requiring convergent thinking decreases as fatigue increases, our ability to solve divergent problems decreases much faster. Sleep deprivation saps creativity, gumption and the quality of one’s work.

In fact, any sort of ‘drive’ to find solutions mechanically will cause the work to suffer. Many times, when trying to figure out how to design some extensible framework, the BEST thing we can do for ourselves is to stop working on it. Sure, you can take the ‘moral’ breaks, like going for a walk or a corporate-approved power nap. But the ‘immoral’ things work too: surfing the web, reading blogs, chatting, playing video games. All of these improve morale and give your divergent mind a chance to churn away at the real solution.

I know it’s against our Protestant culture and mindset to ever believe there’s virtue in laziness, but I maintain that all great insights occur when we’re not even looking for them. So to increase your chances of finding those insights, you’d best stop looking for them as soon as possible and let them creep up on you. You may end up ‘working’ less, but getting a lot more done.

November 16, 2008 Posted by | Social Commentary, Software Culture, Software Methodologies | 2 Comments

Innumeracy in Academia

I have nothing but respect for academics, but I do have a problem when a person from one field decides to enter another without the greatest humility. In this case, I’m referring to this series of videos. The author makes many good points, but unfortunately the classic Malthusian catastrophe that each generation foresees seems to come more from a general, subconscious, primitive fear of large things than from any real gut understanding of what’s at stake.

Zero-population-growth advocates seem to constantly ignore the fact that most Western countries are nearing flat growth as it is. If you have three things, decent access to health care, relatively good wealth, and women’s access to family planning, then you will see growth decline to a steady-state, if not shrinking, population. I take that not from theory, but from empirical evidence.

Ironically, it’s usually the poorest among us who will be the most affected by these zero-growth policies. After all, which demographics are growing the fastest? Which social class tends to have the most children? Militant social Darwinists, who still think that whole eugenics thing would have worked if we’d honestly just given it a good shot, are uneasy about this. Many ‘educated’ people don’t like to be reminded of the mathematical fact that unless they have more children, their genes are going to be replaced by the ‘Morlocks’. So we end up with these zero-growth advocacy groups who, instead of deciding that, hey, maybe having kids wouldn’t be so bad after all, decide to push their austere lifestyle on the rest of us, most notably the underclass, whose only joy might be their children.

Consider this the alarm bell that rings, “It’s all going to be okay.” Each generation doesn’t understand how we could possibly survive another doubling, and yet we keep doing just that. Across nearly every way of measuring society, things are getting better. Worldwide, wealth is better distributed than ever before. Fewer people are being oppressed than ever before. Disease, starvation and war affect fewer people than ever before.

It may not seem like it, but by empirical measures it’s true. The main reason many of us think this generation is the worst yet is the overblown sensationalism of our modern media. Salmonella outbreaks sicken thousands and kill a few dozen, yet more than thirty thousand people die on our roadways each year and we don’t bat an eye. Why do we fear sharks more than cars? Why are we more afraid of food poisoning than of heart disease? This is the innumeracy that academia should be attacking, not finding something else to scare us all about.

I think ultimately there are people out there who have to have a crisis, who have to believe in doom and gloom. They’re the type of people who remind you, when you’re eating dessert, that it’ll do nothing but make you diabetic, or explain that they can’t come meet you at your favorite bar because second-hand smoke causes cancer.

Why can’t we celebrate the great days we are living in, when a war with only a few thousand casualties draws millions to the streets in protest, when we are more conscious of our environmental impact than ever before, and when things like the Internet give any sentient being anywhere a voice on the world stage?

Doom-and-gloomers get excited when the price of oil spikes, when housing prices crash, and when former superpowers invade their neighbors. After all, we’re one step closer to proving them RIGHT. “Things are more out of control than ever!” they’ll exclaim. But any student of history should see our time as, perhaps, the most boring. In the grand scheme of things, there really is nothing major going on, just slow, steady, exponential progress: towards wealth, health and environmental stewardship.

August 17, 2008 Posted by | Social Commentary | 2 Comments