The Skeptical Methodologist

Software, Rants and Management

The Essence of Complexity

You’re reviewing source code. It is a relatively small ‘for’ loop in a procedural language, and it uses the ‘continue’ and ‘break’ statements. Suddenly one of your colleagues makes a critique – “Continue and break are tantamount to gotos, and therefore are hard to understand. You can simplify this loop by adding a few flags and only returning in one place.”

The author is visibly flustered. “Adding in those variables will make this loop less efficient! Plus it will add needless lines of code, and could introduce errors if we don’t check the flags correctly,” he responds.

“Your solution is too complex!” the critic might say.

“But it’s elegant!” the author defends.
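
Neither side’s code appears in the story, so here is a minimal sketch of the two styles under dispute. The task – find the first negative entry in a list that may contain None – is invented purely for illustration:

    def first_negative(numbers):
        """The author's version: continue and break state the control flow directly."""
        result = None
        for n in numbers:
            if n is None:          # irrelevant entry: move on to the next
                continue
            if n < 0:              # found what we came for: stop looking
                result = n
                break
        return result

    def first_negative_flags(numbers):
        """The critic's version: flags and a single exit point, no continue or break."""
        result = None
        searching = True
        index = 0
        while searching and index < len(numbers):
            n = numbers[index]
            if n is not None and n < 0:    # the 'skip' test folded into the condition
                result = n
                searching = False
            index += 1
        return result

Both return the same answer; the argument is over which one costs the reader less effort to verify.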

What is complexity? When is a solution complex, and when is it simple? When is it understandable? In the context of yesterday’s conversation, when does a language feature – continuations – make a language too complex?

By complex, we usually mean ‘hard to understand’. But as we’ll see, this is hardly an objective measure. We’ve seen one example of two different ideas of complexity in the argument above, but there are others. Some of these have been settled, so we’ll look at what Python has considered complex.

Nowadays, if you code up a huge for loop that operates on each item in a collection and builds a new collection, it’s generally recognized that a list comprehension is less complex. A comprehension is also less complex if you’re building a new collection from an old one but removing some members based on a predicate. Why?
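
Before answering, a minimal illustration of the two forms (the collection and the predicate here are invented):

    # The loop version: build a new collection, keeping items that pass a predicate.
    squares = []
    for n in range(10):
        if n % 2 == 0:
            squares.append(n * n)

    # The comprehension: the same map-and-filter, stated in one line.
    squares = [n * n for n in range(10) if n % 2 == 0]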

On the other hand, using another functional construct, ‘reduce’ (fold), is considered MORE complex than the equivalent for loop that operates over the collection, except in certain cases.
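
The same comparison for ‘reduce’, again with invented data. The last line hints at one of those “certain cases”: a well-named combining function can make the fold read naturally.

    from functools import reduce
    import operator

    # The fold: collapses a collection to a single value via a combining function.
    total = reduce(lambda acc, n: acc + n, [1, 2, 3, 4], 0)

    # The equivalent for loop, which most Python programmers find easier to follow.
    total = 0
    for n in [1, 2, 3, 4]:
        total += n

    # The exception: a named operator makes the fold read almost like prose.
    product = reduce(operator.mul, [1, 2, 3, 4], 1)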

Another example would be the general notion that simple logic and shallow nesting are almost always less complex than intricate logic and heavy nesting. Should you find these things in your code, you’re supposed to break them up into smaller chunks. Why would breaking something up somehow make it less complex – you still have to check all your flags and do all your loops? In fact, isn’t it even more complex now that everything is spread out all over the place?
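
Concretely, a contrived order-handling routine (the domain is invented for illustration):

    # Heavy nesting: the reader carries every enclosing condition at once.
    def describe_order(order):
        if order is not None:
            if order["paid"]:
                if order["in_stock"]:
                    return "shipping"
                else:
                    return "backordered"
            else:
                return "awaiting payment"
        return "no order"

    # Broken up with guard clauses: each line can be checked on its own.
    def describe_order_flat(order):
        if order is None:
            return "no order"
        if not order["paid"]:
            return "awaiting payment"
        if not order["in_stock"]:
            return "backordered"
        return "shipping"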

So breaking things down, even pulling things away from their context, is, counter-intuitively, a way to make a program LESS complex. What gives? You’re giving a person LESS information about the problem, yet they understand the whole problem better?

The fewer things we have to think about, the less complex they are! The less we have to keep in our head, the less complex it all is!

Think of your mind as having, like a computer, RAM and a hard drive. What happens when you fill up RAM? The OS steps in, carves out a place on the hard drive, and swaps out some virtual memory. We, like computers, only have so much RAM we can use as our working memory. The more we have to hit our hard drive, the longer it takes us to do something. Plus, unlike computers, we’re hardly infallible when it comes to remembering things. We may look up a certain problem on our hard drives today and come up with one answer, and then tomorrow come up with another. Uh oh, programming bug!

How do things like list comprehensions and loop control statements help us not hit our hard drives when we’re thinking about a problem? How does breaking a problem up keep things all in working memory?

The second statement should be obvious – the smaller our focus, the fewer things we have to keep in our heads. We divide one large problem up into multiple small ones, plus one more. The small ones are the modules we use to decompose the problem, while the plus one problem is the far simpler ‘big picture’ of how these small modules work together to solve our original problem.

The first statement, though, reduces complexity in another way. Think of breaking a problem up as ‘optimizing’ the problem for our mind’s cache. We think much more quickly that way, and can keep everything in fast, working memory. There is another option we can take when we try to shrink a problem to fit in memory, and that’s compression.

We like to zip up files because transmitting data over the Internet is slow, and we want to send as little of it as possible to the other side. Compression takes data and eliminates as much redundancy as possible to shrink it to its smallest ‘essential’ size. This identification of the ‘essential’ also applies to our problem solving and programming. We can compress ideas by identifying their ‘essential’ structure and ignoring all the extra details. This is the process of ‘abstraction’.

You see, ‘continue’ and ‘break’ abstract away a very common loop control structure. That means they allow a more complex loop to be stated in a simpler way. Instead of keeping the details of exactly how the loop will continue or break in our heads, we ‘compress’ that information away, marking those spots with a single keyword and leaving only the smaller, ‘essential’ nature of the problem.

List comprehensions do the same thing: they boil a big problem – a loop over a collection – down into a smaller one that consists only of the ‘essential’ complexity of the problem.

But what about the critic in our example? Surely he knows that continues and breaks reduce complexity via abstraction. Er, don’t they?

Not necessarily. Compression in your mind, just like on a computer, takes time. It’s called ‘learning’. We see patterns, over and over again, and only after a while do we begin to replace the pattern with its ‘essential’ nature (the parts that change). Only after we’ve seen something many times do we begin to think about that thing as an object in itself, and then begin to try to shrink that object into its most compact form.

The critic probably was not familiar with the use of continue and break, and thus saw the continue-and-break solution as inherently more complex. To him, continue and break didn’t boil the problem down into smaller ones, but instead made the original problem even bigger. Now, in addition to understanding what the loop DOES, the reader has to understand these esoteric control flow keywords. That’s even more stuff to keep in your head!

And now comes the rub. I call this the Teacher’s Fallacy: you tend to project your level of competence onto other people. In other words, if you understand a subject, you frequently over-estimate how much others know. Indeed, we’ve all been there, talking to a professor or local guru: we ask a question and they start answering with jargon about three levels above the question we asked. They don’t realize they’re being incomprehensible. They don’t know what patterns you have or have not abstracted, and the problem is, once you’ve abstracted a pattern, you become blind to it. After all, how else is your mind going to keep all that information from clouding your RAM?

Similarly, if you do not understand something, you tend to project that level of misunderstanding onto others. The developer skeptical of ‘continue’ and ‘break’ sincerely believed that everyone else in the room also found their use as complex as he (or she) did. He (or she) most likely believed that the author of the code was simply trying to be ‘clever’, and obfuscate things in an effort to show how smart they were.

So, when is a potential language feature too complex to add to the language? People who aren’t familiar with the feature, but ARE familiar with the language, are likely to greatly over-estimate the complexity of the proposed feature. After all, we all get complacent with our level of mastery over things. Too often, we see all knowledge as a single vast mountain, rather than a mountain range. In other words, we think “Well, I understand so much about programming, and this concept is complex to even me, therefore, it must be VERY complex to someone new to the language!” In actuality, you’re not looking at a new steep cliff near the top of your own mountain of knowledge, but rather a shallow grade at the bottom of someone else’s. You’re on equal footing with many beginners, but are so used to the thin air of solving your own advanced problems that you don’t recognize where you are.

Indeed, if continuations and functional programming were, in fact, more difficult to understand than imperative or procedural programming, you’d see a trend of students taking far longer to master the former than they do the latter. But we don’t. There have been projects to teach Haskell, generally considered one of the more abstruse academic languages out there, to high schoolers. And they were successful.

Functional programming seems hard because we’re already imperative programmers. Good ones too! We forgot how hard it originally was to wrap our minds around objects, around function calls, around variables and references versus values. When we approach functional programming, we generally do so at the level of an amateur (at least, those of us new to it), but we still believe ourselves to be experts. It will take your average smart imperative programmer about as much time as it would someone completely new to programming to ‘get’ Lisp, Scheme or Haskell. It’s starting all over again.

So, we’ve established that it’s very hard to judge the complexity of a new idea. We’re so used to ‘getting’ ideas in our own domain quickly that when a new idea is hard for us – and we’re, after all, the experts – we over-estimate how hard it must be for anyone else to grasp.

For someone arguing that continuations are too complex to put in a learning language like Python, the onus is really on them to show that learning a stack-based function call mechanism is somehow intrinsically simpler for a person completely new to programming than a simple continuation-based flow. Until then, I’d argue that it is simple enough for anyone to grasp. Even we expert imperative programmers 🙂

Another major question looms, though, and that is – whatever the complexity is, is a feature or solution useful enough to justify it? If, in our original story, we lived in an alternate dimension where loops were written so differently that ‘continue’ and ‘break’ only made sense in one or two cases throughout all code, then it would not be worth introducing those control flow mechanisms.

Learning is hard, recognizing new patterns takes time. I don’t want to learn a new language construct if I’m never going to use it. Then, even if it only adds a little complexity, it’s entirely needless complexity. Even easy things should be justified.

There are two kinds of abstractions, though. The first kind of abstraction is ‘evolved’. It comes from repeated exposure to a pattern until we shrink that pattern into its essential form. Once we have shrunk a few patterns, there emerge new patterns – patterns of patterns, and we continue to shrink entire systems of patterns down into their essential form. This process continues, and these abstractions build off of each other.

The second kind of abstraction is different. It comes out of the blue at us, from something completely unrelated. A flash of insight, maybe. These abstractions are ‘orthogonal’. They come at us from right angles, and are usually taught rather than learned through our environment. They are also used in a different way. As you recall, evolved abstractions come from obvious patterns – in other words, the use of them precedes their existence. We have a problem, and we find a solution, and now we can solve that problem again and again. Orthogonal abstractions go the other way, presenting us with a solution. We frequently see the problem they solve as contrived or unrelated to what we need.

This type of abstraction arises frequently in mathematics. We frequently prove theorems well before we find applications for them. Math is in constant supply of solutions looking for problems, and the most frequent way to apply these new solutions is the transformation. Instead of seeing a pattern in our world and reducing it to its essential form, we see instead raw data. No patterns. But we can transform this raw data, this accidental complexity, into a form we already know how to solve. Mathematicians frequently make progress by stepping back and restating the problem.

We take a problem we don’t know how to solve, or one that has a messy solution, and we turn it into another problem that has an easy solution. But the only way we can do this is if we steadily expand our solutions, our ‘bag of tricks’, through means other than evolution alone. Evolution is never going to give you a solution you don’t already need, and as such, your problems will always grow linearly with your solutions. Orthogonal abstractions are the only way to really grow your bag of tricks.

Continuations have a few uses that have been stated multiple times. But they still may seem like they solve a problem that’s almost as easily solved in some other way. Are we just tacking on something to the language that doesn’t NEED to be there? The Zen of Python holds that there should be one – and preferably only one – obvious way to do things. This helps the learner of the language, because when they discover a new feature, they should immediately be able to find applications. If there is one way to do something, and a feature is in the language, then its abilities aren’t replicated by any other, and when you run into a problem it solves, you’ll need it and only it.

But just as we said the imperative expert will misjudge the complexity of learning functional style for the newcomer, I think we’ve shown that we are all going to underestimate the usefulness of orthogonal abstractions. They’re solutions looking for problems – why should we tack that onto the language? In fact, such things may even be risky. A solution with no problem today may, tomorrow, turn out to solve the same problem as something we already have. It could be merely needless today, but tomorrow it could be downright redundant. That is not Pythonic.

There is a debate in economics over whether demand drives the economy or supply. Demand siders say we should give people what they want. They are clearly indicating what problems they’d like solved, and we evolve solutions to those problems from stuff we already have. Supply siders say that if you build it, they will come. People frequently don’t know what they’re missing, or what they need. Take the introduction of the personal computer, for example, the archetype for a completely orthogonal solution. No one demanded a computer, no one even knew what they were. Many were skeptical of their usefulness, yet today, obviously, they are ubiquitous.

In terms of continuations, an extended continue syntax, as an orthogonal abstraction, could very well solve problems we don’t even know we have. Stackless Python has shown the potential of a Python with no stack, while continuations may be able to work that power into a language with the safety of a stack. Native thread advocates constantly bemoan the GIL, yet continuations may have the potential to make efficient, expressive green threads a reality in Python, with very little legwork. We will learn to solve many problems today by transforming them into problems continuations solve far more simply. They are coming at us at right angles, and we never know where that will lead.
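
To make the green-thread claim concrete, here is a minimal sketch using generators – the limited, continuation-like feature Python already has (PEP 342) – to round-robin tasks on a single OS thread. The scheduler and the names here are invented for illustration; this is nothing like Stackless’s actual API, only a taste of the style of problem continuations would solve more generally:

    from collections import deque

    def worker(name, steps):
        for i in range(steps):
            print(name, "step", i)
            yield                      # suspend here; the scheduler picks the next task

    def run(tasks):
        ready = deque(tasks)
        while ready:
            task = ready.popleft()
            try:
                next(task)             # resume the task until its next yield
                ready.append(task)     # still alive: put it back in the queue
            except StopIteration:
                pass                   # task finished; drop it

    run([worker("a", 3), worker("b", 2)])
    # prints a's and b's steps interleaved: cooperative multitasking, no native threads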

May 1, 2009 | Uncategorized

2 Comments

  1. Stackless Python still uses the stack. The original implementation, which was intended to actually be stackless and actually had continuations, was the opposite of very little legwork. The green threading Stackless has now is directly stack-based, and has very little legwork involved in its implementation.

    Comment by Richard | May 2, 2009 | Reply

  2. Richard,

    Hence, a combination of stack-eliding and stack-dependent ways to control the flow of a program gives a lot of flexibility 🙂

    Comment by austinwiltshire | May 2, 2009 | Reply

