The Skeptical Methodologist

Software, Rants and Management

SYWTLTC: (AB) Chapter 3.2 Quality : Static Analysis

Go here if you want the prologue and table of contents to the SYWTLTC series!

Per the 5 pillars of quality, up next is static analysis. As always, treat all links as required reading unless stated otherwise.

What is Static Analysis?

Static analysis is a broad term used to categorize all tools that you can run on code to tell you whether it’s correct or not. It’s a program that you run on your program which tells you whether you’re making mistakes.

It’s called a static analyzer because it doesn’t run your code to figure out what’s wrong – it analyzes it “statically”, that is, while the code sits unchanged.

What kinds of Static Analysis are there?

There are three broad categories of static analysis tools out there: linters, static analyzers proper, and model checkers / theorem provers.

The most prevalent, and the one you’ll be using from here on out, is called a linter. Linters “remove lint” from programs. They operate primarily on the text of the program itself, looking for simple stylistic mistakes. Think of them like spellcheckers. They more or less look at your program line by line and warn you if, for example, you use a variable name that is hard to understand, or if you switch between spaces and tabs.

The other categories (static analyzers, model checkers, theorem provers) can eliminate progressively harder-to-find bugs, but they require substantially more work. Python is a ‘dynamic language’, which means the entire program isn’t really defined until it’s running, so ‘static’ analysis of the code itself tends to have too many unknowns to be worthwhile.

We’ll be investigating Pylint in particular, which is primarily a linter but also performs some deeper checks by attempting to interpret your Python without actually running it.
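
To make that concrete, here’s a tiny made-up snippet of the kind of thing a linter catches without ever executing the code (the function and variable names are mine, purely for illustration):

    def greet(name):
        """Build a greeting string."""
        message = "Hello, " + name
        # Typo below: Pylint reports 'mesage' as an undefined variable (E0602),
        # and warns that 'message' is assigned but never used, before this
        # code ever runs and crashes with a NameError.
        return mesage

At runtime, Python would only complain once greet() is actually called; the linter flags the problem the moment you save the file.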

Benefits of Static Analysis

There are a number of benefits to static analysis as well as drawbacks, but most important to note is that many of these benefits and drawbacks tend to be complementary to testing, peer review, types / contracts and design (our other pillars). Static analysis in and of itself is of limited power, but combined with the other pillars of quality it can be very powerful.

Consistent style

First off, stylistic checking ensures a code base has a single, consistent style. This helps maintainers: they can expect consistent patterns in the whitespace, variable names and other parts of the code, which makes it easier to read.

It also helps peer reviewers since, again, a single way to handle whitespace, variable names, and other stylistic concerns makes code easier to read than many different styles.

Absolute removal of certain kinds of bugs

Some bugs, such as variable misspellings that would end up crashing your program at runtime, can be absolutely eliminated from your code base.

This is in contrast to testing. Testing can only show that the one path through the code that the test executes does not fail in any way that the test doesn’t expect. In other words, you can never really prove your program works via tests alone, since each test only proves that that one, single scenario worked.

Linters can prove that your program is free of certain kinds of bugs, completely and absolutely.

Very low cost in terms of time; quick turnaround

Compared to contracts, peer reviews or tests, linting takes nearly no time at all to run. Tests take a lot of time to write, and later, to maintain. Peer reviews can involve multiple person hours as other developers look at your code.

Linting takes, usually, on the order of seconds. This is great for two reasons.

First, it means that the level of effort it takes to get a clean lint is minimal compared to testing. You can squash a lot of bugs very quickly with linting, a lot more than you would via testing.

Second, it means you can lint often. In the previous chapter, I showed you how to automatically run your tests as files change. This is a great productivity tool as you can find out if you broke a test very early.

Trying to make sure that “bad thing” gets feedback ASAP is a key to learning, and it’s also a key to fixing “bad thing” fast. The mistake you just made is still fresh in your mind, so getting feedback on it means you don’t have to go looking for the bug – it’s right there, right where you were already working.

Tests and testing still require some care – tests can easily take minutes or hours, which means you have to start splitting up what tests run when. Usually, we like ‘unit’ tests to be our fast tests, the ones we can run automatically on changes, whereas other tests we may run nightly.

Linters, however, are super fast. They can be run faster than unit tests even. Many linters are actually built into text editors and IDEs so that when you save your file, the linter automatically runs and tells you what errors it has found (again, like spellcheck).

For static languages like C++ or Java, it’s often said that just getting your program to compile is like one big test. We don’t get that luxury in python – however, we can get most of it back by linting early and often. A clean lint is like a version of a test that runs quickly and eradicates many kinds of errors.

Can’t test your tests

Speaking of testing, it’s hard to test your tests, and it’s not always value-added to do so. TDD ensures you do some minimal testing of your tests – this is why you make sure the test fails first and then passes when you make code changes. All too often I have written tests after the fact, and then, when a bug crept in, realized that the way I wrote my test would never have found it because I screwed up writing the test.

How do you ensure your test code is high quality then? With the other pillars – static analysis in particular. Ensuring your test code has a clean lint gives some assurance that your tests are maintainable and readable, as well as free of certain kinds of errors. This, in turn, makes your tests easier to peer review for other issues.

It’s a virtuous cycle!

Drawbacks

There are some downsides to expect from linting.

High false positive rate

Linters are going to find a lot of issues that just aren’t that important. Whitespace issues may hurt readability, but they’ll never crash a program. Variable names are nice to get consistent, but the interpreter doesn’t care.

Most of what you’ll be fixing will be things that may have never ended up crashing your program.

Fixing them, though, is often very simple. And you’ll get into a habit of breathing a sigh of relief when the linter runs and finds no issues in your code. You’ll become more confident as a coder, and be much more willing to take risks.

Types of bugs found usually aren’t that nefarious

Along with the above, the worst bugs are often those hardest to catch via linters. If you’re handling credit cards, making sure you debit the right account isn’t going to be something a linter can help you with. Making sure you don’t leak personally identifiable information is something linters would struggle to help you with too.

Often the bugs found are simpler readability and maintenance errors, along with some actual defects that are pretty quick to learn how to avoid. On the other hand, linters prepare the code for the people who can find those bugs in peer review, and give more assurance that your test code is correctly exercising your credit card and PII functionality.

Hard to do in dynamic languages

One final drawback is that linting is hard to do in dynamic languages, as discussed above. This means that things some languages can spot via static analysis alone – like resource leaks (you grabbed memory from the operating system and forgot to give it back) – aren’t things Pylint can find.

On the other hand, linters end up being of about equivalent power to the compiler in dynamic languages – which is a great first step towards ensuring your program works. If another program reads it and says “I don’t see anything obviously wrong with this”, that’s some assurance.

Smelly Code

Despite the drawbacks mentioned above, often we go ahead and fix all the false positives anyway, since you don’t really know whether something is wrong until you try to fix it.

Code with lots of Pylint errors can be said to be ‘smelly’ code – we don’t know something is wrong for sure, but we need to check it out. Check out the write up here, and then skim a few of the code smells classified on C2.

Often you might fix one or two Pylint errors and three more will pop up. This is a sign that there’s actually a fundamental design flaw that leads the code to be brittle and hard to understand – even if on the surface it just seems like a few small warnings from Pylint.

If we keep the code squeaky clean, we’ll avoid any smell.

Pylint

Pylint is pretty much the industry standard linter for Python. It does a lot of stylistic checking based on what’s considered idiomatic Python (codified in PEP 8) as well as some deeper analysis.

Go download it here.

Challenge

We’re going to loosely follow its tutorial, which involves the Caesar cipher, which you’ll want to read up on.

First, fork this repo. Then create a branch in your forked repo where we’re going to do some work.

Then, go ahead and clean up the ceaser_script.py file using Pylint according to this tutorial.


When it’s clean, commit.

That is a workflow you might use if you inherit some code and want to clean it up – often running a linter on inherited code is a good way to both improve its readability as well as get familiar with it.

Next, we’ll work on a workflow that combines both linting and test driven development. In the future, you’ll be required to use the workflow practiced below!

The next step will be a little more difficult – create two new files, ceaser.py and ceaser_test.py – we’re going to refactor, or change, the script you worked on before to be more reusable.


  1. Write a test in ceaser_test.py for a function you haven’t written yet. The function will have the following signature: encode(message, offset), so you can call it like this, encode(“beware the ides of march”, 3), and get back a message encoded with the Caesar cipher at offset 3. (You’ll probably have to create the expected message by hand.)

  2. Ensure this test fails. You may have to put an empty function in ceaser.py that does nothing.

  3. Ensure pylint is clean.
  4. Commit
  5. Using ceaser_script.py, copy and paste some of the functionality into your encode function in ceaser.py

  6. Debug it until your test passes.
  7. Ensure pylint is clean.
  8. Commit
  9. Write a test in ceaser_test.py for another function you haven’t written yet. The function will have the following signature: decode(encoded_message, offset), so you can call it like this, decode(“jewlrp ajk ippf kl aqjrk”, 9), and get back a decoded, English message using the Caesar cipher. (Again, you’ll have to create the message by hand – the above is just random letters I made up, not an actual message.)

  10. Ensure pylint is clean.
  11. Commit
  12. Using ceaser_script.py, copy and paste some of the functionality into your decode function.

  13. Debug until your test passes.
  14. Ensure pylint is clean.
  15. Commit
  16. Open a pull request on your branch.

The above illustrates a pattern – in Test Driven Development with Static Analysis, every commit should add either a test or code. Every commit scores 10/10 on Pylint and has 100% coverage.
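
To make the first couple of steps concrete, here is a rough sketch of what a first failing test and an empty stub might look like. The expected ciphertext is one I worked out by hand for offset 3 on a single lowercase word – your own test message will differ, and until the stub is filled in Pylint may grumble about the unused arguments, which is exactly the sort of thing step 3 makes you confront.

    # ceaser_test.py -- step 1: a test for a function that doesn't exist yet
    from ceaser import encode

    def test_encode_shifts_each_letter_by_the_offset():
        # "march" shifted forward by 3 becomes "pdufk"
        assert encode("march", 3) == "pdufk"


    # ceaser.py -- step 2: an empty stub, just enough for the import to work,
    # so the test fails for the right reason (it returns None, not ciphertext)
    def encode(message, offset):
        """Encode message using a Caesar cipher shifted by offset."""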

When a test fails and you can’t figure out why, break out the debugger. Running the debugger the first time you walk through a piece of code can also be a good practice.

Hook up Pylint to your Text Editor

Fixing things as soon as they happen creates a tight feedback loop that both makes you more productive and accelerates learning. It’s easiest to see during testing.

If you make a change to your code, and your tests fail, you know what you just changed. All the context is still in your head and you’re much more quickly able to debug the code and get the test passing again. Moreover, you learn that the changes you made ended up affecting tests you may not have predicted. You learned something about the code.

Compare that to making a lot of code changes, then days later, running the tests. A few fail. You have no idea what changes are tied to which failures. You can try taking a debugging approach, and you can look at your git diffs to see what’s changed, but this is a much more complex problem than above. You’ve already moved on, mentally, to other things. Debugging the same issues could take two to ten times longer.

The lesson? Debug as close as possible to when you added the bug.

Linters work like fast unit tests – something that can run in the background of your editor and let you know about issues as soon as possible. Again, since they work like a compiler for a dynamic language, they’re a single large global test for misspellings, syntax errors and other things you’d otherwise have to wait for your tests to catch. Catching them as soon as possible speeds you up, and allows you to focus your testing efforts on things the linter can’t catch – actual logic errors rather than mere syntax problems.

Go ahead and use the instructions below to hook up Pylint to your editor of choice:

Pylint for Sublime

Pylint for Vim

Pylint for PyCharm

Hook up Pylint to Git

Another approach is to have git automatically reject any commit that doesn’t score 10/10 from Pylint.

When running from inside a text editor, Pylint decorates the current file. If you make changes to that file, and Pylint gives you a clean bill of health, that doesn’t mean that your changes didn’t suddenly break other files.

For example, you may rename a function, and forget to rename other places it is used. Pylint would report your current file as clean, but the other files where that function is used would now have errors.

Putting a Pylint check on commit lets you run a whole-project lint at the last moment and prevent any erroneous code from entering the repo.
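
I don’t know exactly what the linked repo below provides, but a minimal sketch of such a hook – assuming pylint is on your PATH and is recent enough to support --fail-under – could be a small Python script saved as .git/hooks/pre-commit and marked executable:

    #!/usr/bin/env python3
    """Reject the commit unless the whole project lints cleanly."""
    import subprocess
    import sys

    # Lint the project's files (hardcoded here for the Caesar exercise);
    # --fail-under=10 turns anything short of a perfect score into a failure.
    result = subprocess.run(["pylint", "--fail-under=10", "ceaser.py", "ceaser_test.py"])

    # A non-zero exit code aborts the commit.
    sys.exit(result.returncode)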

Check out this repo and add it to your own fork of the Ceaser project above.

Some additional resources can be found here.

What about false positives?

For the duration of these chapters, we’ll treat every Pylint error as a real error. You’ll be expected to fix every one, whether you agree with it or not, unless your mentor explicitly tells you to ignore it.

That being said, in the real world, often you have to make compromises. For that purpose, there are configuration files to turn off families of checks, suppression files to suppress warnings line by line, as well as in-line suppressions. No task is tied to these, but go ahead and skim these links so you have a cursory understanding of how to squelch a Pylint error.

Example of inline suppressions

How to add suppressions to the config file

Example config file
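
For a rough idea of what an in-line suppression looks like – invalid-name is one real Pylint check (C0103) that fires on names that don’t match the expected style:

    # The trailing pragma silences that one check on this one line only;
    # every other check, on every other line, is still enforced.
    x = 42  # pylint: disable=invalid-name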

To move on…

When you’re done, you’ll need to provide your mentor with the following…

  • show your mentor a 100% coverage report
  • show your mentor a 10/10 Pylint report
  • open a pull request on your code, and clean up any comments your mentor has.
  • show your mentor that you have pylint installed in your text editor
  • show your mentor that you have a pylint hook in your git repo

For Mentors…

  • In addition to the above, check out each commit and ensure that each one is pylint clean.

November 11, 2016 Posted by | Uncategorized | 1 Comment

SYWTLTC: Novice Chapter 2: Effective Hacking

Go here if you want the prologue and table of contents to the SYWTLTC series!

The Novice section of SYWTLTC is intentionally pretty sparse – Chapter 1 gives you all the tools you need to get started in Code Combat.

However, there are often meta-lessons to be learned even as early as Code Combat. We’ll go over one of those today.

Hacking?

So I’m using this term loosely. But often we hear the term ‘hack’ in the context of programming to mean that someone doesn’t fully understand what they’re doing and is just trying to ‘make it work’.

The usual workflow here is to make a sometimes not-so-educated guess about what might be wrong, change that thing, then run your program and see if it works.

Senior coders tend to have many tools in their toolbelts; however, we never completely drop hacking as a means of understanding things. There will always be times when you have code you inherited, a library you don’t understand, or even code you wrote yourself that you no longer remember how it works – times when all you can do is ‘fiddle with it’ until it does what you want it to do.

Still, there are tips for more rigorous hacking.

1. Scientific Hacking

Change one variable at a time

I don’t mean actual variables in a program, although that may be the case as well. What I mean here, in the scientific sense, is that we try to isolate only one ‘theory’ of why it’s not working at a time.

If it may be X, Y or Z, you don’t change X, Y or Z all at the same time. Change one and see if it worked, back that change out, change the next and see if it worked, and so on.

This may feel like you’re going slower, but you’re actually going faster. This is because your ability to mentally understand what’s changing in the system goes out the window after a certain (very small) level of complexity. So you may be able to “change all the things!” once or twice on toy programs you’re working on, and things will appear to work.

But in larger programs, many bad things happen when you do this.

  1. Your program can suddenly appear to work. But it’s all in appearances.
  2. You may fix your thing and break something else.
  3. You may not even fix your thing, break something else, and not understand what you changed well enough to unbreak it.

Three is usually the most common.

There is actually an advanced way to change “all the things” though, and you’ll need to combine it with tip 7 at the bottom – always leave breadcrumbs (i.e., lots of git commits for every change you can back out, or comments in code that you can back out, ways to easily undo what you’ve done.)

Backwards Science

This is to basically do science in reverse – change all the things. Then run your program – does it work? If so, undo half of the things. Does it still work? Then you know it wasn’t that half. Undo half of what’s remaining – does it still work? If it doesn’t, you’ll want to turn those fixes back on and turn the other half off.

This is akin to the ‘binary search’ algorithm, which you may become more familiar with later. And it’s a good complement to the traditional change-one-thing-at-a-time technique described above. This is because some issues may be interactions of multiple fixes, i.e., you may need to make more than one fix to the code to get things in ship-shape. The turn-it-all-on, then binary-search-downwards approach can find these more easily than the one-at-a-time approach. The one-at-a-time approach, though, is usually faster since it requires less work to set up and back out.

Keep a Journal

You can do this in a documentation tool, in comments in the code, or just in a spiral notebook at your desk. Often it’s good to write down what you’re doing, and what the results were, again in a scientific manner. Each change you make to the code is a little ‘experiment’, and you need to write down what you did and what the results were for each experiment.

This helps with tip 7 below – keeping a journal complements other techniques of ‘backing things out’. It also prevents trying the same experiment twice – which may happen if you’re struggling with a bug for months at a time. When you start forgetting what you’ve already tried, that’s when you truly begin to spin in place and become completely unproductive.

Finally, a journal can help with hypothesis generation. As I stated above, each fix is an experiment. Your mind’s ability to come up with a hypothesis for any given event is nearly infinite (given enough time). But you’ll come up with better hypotheses the more information you have.

A hypothesis is valuable insofar as it explains the given data. Your initial bug is one data point – the program currently does X when it should do Y. Many hypotheses can fit this, and your job is to methodically step through them one by one until you find the one that’s correct.

However, each time you do an experiment, you narrow the solution space. If your program prints “Hello WOrld!” when it should print “Hello World!” and you perform an experiment to lowercase all O’s in the program and it fails… your real problem just got constrained. Now your problem is:

  1. Program prints “Hello WOrld!” when it should print “Hello World!” AND
  2. When lower casing all O’s at line 13, the program continued to malfunction.

A journal helps keep these thoughts all in order and allows each of your experiments to gather more data.

2. 90% of Programming is Knowing What to Google

Most of coding is research.


But what to google and what sites to go to first is something you learn over time. This series will have a particular module dedicated to research, but until then, understand that if you have to search for something on the internet, that doesn’t mean you aren’t coding right.

Most of coding is googling for APIs, code snippets, blog opinions about tool X versus tool Y, and looking for others who have had your same problem and fixed it.

3. Don’t Grind

There are a lot of times when you’ll be struggling to make your program work and you’ll choose to … struggle more.

Bayesian reasoning is a kind of statistical reasoning that asks, “What should we expect, given what we’ve seen?” Take all the data into account, including new data, and ask what we should expect going forward.

In other words, given that you’ve already struggled for 3 days with this bug, how likely are you going to solve it by struggling for 3 more days?

Not very likely.

This is called grinding. And it’s a technique that may leave you with the answer after maybe thirty more days, or may drive you to completely change your approach (which is bad – if you wanted to design it in a certain way, it’s probably because that certain way was good. Changing to another way means you’re sacrificing quality because you couldn’t make it work).

Or it may leave you quitting your job. I’ve seen all three happen.

When you find yourself grinding, your hypothesis-generating engine slows down, and you have trouble coming up with new ideas for why your bug is occurring. You either rehash old ones – a waste of time that a journal would have spared you – or you come up with increasingly bizarre theories on why your program may not be working, which isn’t the best use of your time.

The best thing to do when you realize you’re grinding is to give up and work on something else. Your subconscious will be busy grinding away at the problem for you, and you’ll be greeted with an especially good idea right when you’re falling asleep, or when you’re showering, or otherwise occupied. These are the insightful ideas that have lots of promise, whereas the bizarre ideas you come up with staring at the code are almost always bad.

Walk away, play a game, read a book, talk about your problem with someone else, or talk about anything but your problem with someone else. Insight will strike.

4. Play

When you’re coding, you’re not always stuck on something. Sometimes, things are going just fine, swimmingly actually. This is when you should try to make your own problems to get stuck on.

If you’re trying some tutorial and you can get a button to show up on your screen where you want it – what happens when you move it? What happens when you set certain things to negative numbers? What happens when you try and push it off the screen?

These are experiments, like the above, but rather than trying to prove or disprove a theory about what’s causing a failure, they add data to your understanding of “how buttons work” or “how strings work” or just about anything else. They’re a form of play – exploration for its own sake – and they’re an incredibly valuable form of “hacking”.

Again, as with tip 7, leave yourself a way to back out. But rather than trying things to fix your program, you’re more or less trying ways to break it – or maybe not. You’re just trying things on a completely fine program, and checking what you think will happen against what actually does.

Along with tip 5 below, playing is the best way to get the most out of something you’ve already done – if you already implemented some widget, what are a few changes you could make whose effects you can’t predict? That ensures you get the most out of every project and exercise.

5. Make it Work; Then, Make it Pretty

Before we get further into this tip, let me make one thing clear –

You are not done with your code until it works and is pretty.

There’s nothing more demoralizing than sitting in a peer review with some recalcitrant coder who refuses to change what they have done because “it works, doesn’t it?”.

Working code is the bare minimum of what you’re expected to produce.

However, when trying to prioritize what to do first – getting things working is often the hardest part. Finding one solution to your problem is hard – there’s an infinite variety of solutions, but a much larger infinity of non-solutions. It’s ‘sparse’.

However, once you do have a working solution, it’s usually far easier to make slow incremental changes to that working solution to make it more pretty.

What I’m not saying here is that you should hack a large project together with no regard for readability and then clean it up later as an afterthought. What I am saying is that sometimes you’ll get stuck – it’s at these times that it’s okay to leave some sawdust in various places, so long as you can follow tip 6 below and keep it isolated to a certain area.

The fact is, writing a test for each solution is going to be cumbersome if 99 of your potential solutions don’t work and the 100th does. Sometimes you get the benefit of a single test telling you whether or not your solution works at all – this is when you’re lucky. But when you’re designing a new feature and you don’t know how it should work yet – you want to play in the design space and see what feels right – letting things get slightly dirty in isolated parts of code is fine, so long as you follow through and get them cleaned up before any peer review.

6. The Surgical Curtain

In surgery, surgeons often lay down cloth around the incision site to block out everything except the area they’re going to be working with. This more or less shrinks the problem size and focuses all attention on the surgical area alone.

Similarly, when trying to ‘hack’, you want to shrink the problem by as much as possible, and only work on the area that is problematic.

Remember in scientific hacking, we talked about ‘reverse science’, where you change everything to see if your issue is still there?

There’s a similar technique to shrink the problem space, where you turn off (by removing or commenting out code) large swaths at a time and see if the problem is still there. As you turn things off and the problem remains, you can be confident (not sure, but confident) that your problem is not in that area of code.

Often you can shrink things down into a small toy program where your problem lies, and it becomes much easier and faster to try different experiments out on it.

This is one benefit of well-factored / well-designed code: it’s usually easy to isolate parts of the code and write small ‘unit’ tests around where your issue lies, rather than having to run your entire program to see if it works or not. The curtain is easy to lay down in well-designed code.

If you’re stuck, and there are lines you can comment out without affecting your problem, do so – this reduces the chances of accidentally breaking other things or introducing interactions, and keeps the problem small enough that you can hold it all in your head.

7. Leave, and use, Breadcrumbs

Finally, leave yourself a way out.

Hacking can often mean many changes to your code – if you’re making them methodically as illustrated in tip 1, you also need a methodical way of backing them all out. This is what source control is often used for – try an experiment and commit it to the repo. If that experiment doesn’t fix your problem, roll back the commit and your code will be as it was before you did anything.

Often, even with the best rigor, we find the code base to be an unintelligible mess after some hacking around. It’s sometimes best to start all the way over, and leaving yourself breadcrumbs allows you to do that.

You really don’t want to find yourself trying to fix a problem where you have a code base that is so heavily hacked that it’s unrecognizable compared to how you found it. It means that you’ll basically have to debug your way out, which is never fun.

Get in, change only what you need to in a methodical fashion, and get out, leaving the code as clean as possible.

Leaving breadcrumbs like git commits that are very granular also allows you to easily back out scaffolding code like print statements and other things that help you debug.

Finally, backing out fixes that don’t work is incredibly important. If we write for readability first and performance second (which you should), then you should assume that the code base is as readable as it can be. Any change you make that’s not a refactoring to make it more so must, by default, make it less so. In other words, any change you make that’s not explicitly made to improve readability is most likely harming it. No change should be left in that doesn’t do something – like fix a bug. If it doesn’t fix your bug, you need to take it out.

There are often thousands of ways to code something. Your fix may not have fixed your original problem, but it may also not have introduced new ones, meaning you could potentially leave it in and the code base would work as it always did. Don’t do this – you’ve harmed readability by letting in code that had no reason to be there. Back that code out and start anew on a new experiment.

Conclusion

Hopefully these tips and early insights into how coders actually code are good to have. I know a few of them – like knowing that people spend most of their time debugging and googling – have helped people feel like they aren’t utter failures when they’re working through Code Combat.

It’s okay to hack, it’s okay to research.

But it’s also good to practice hacking and researching the right way, so that you can speed yourself up and be ready for some more robust tools to put in your toolchest.

November 1, 2016 Posted by | Uncategorized | 1 Comment

SYWTLTC: (AB) Chapter 3.1 Quality: Test Driven Design

Go here if you want the prologue and table of contents to the SYWTLTC series!

In this and future chapters, treat all links as required reading.

Five Pillars of Quality

I claim that there are five pillars of quality in software.

  1. Testing and Dynamic Analysis
  2. Static Analysis
  3. Peer Review
  4. Contracts and Types
  5. Design

You can find people who will swear by one, or even two. Some folks will say all you need is tests, others will say that types are the only way to prove there are no bugs in your code.

Each of these pillars has strengths and weaknesses, but they all tend to be very complementary. That means they work best as a team.

But first…

Why Quality? And Why so Early?

We all want to be ‘good’ at what we do, we all want to produce ‘quality’ work, sure. But I’ve got to get this script out by tomorrow, so we can keep all the nice and pretty stuff like testing for tomorrow, I have real work to do today!

This is very myopic thinking!

Ultimately, we are looking at how to be productive coders. Quality is part of productivity, it’s part of being fast – it is not the opposite of being fast! The fastest coders out there code with quality, and the reason is that the number one thing that’s going to slow you down is rework. After all, we never seem to have time to do things ‘right’, but we always seem to have time to ‘do them over’. The previous sentence should bother your logic center, as clearly – we have time to do things right if we have time to do them over.

And if we do them right in the first place, we don’t have to redo them later, and we go much faster.

Software maintenance accounts for about 60% of the total effort we put into software, with the rest going to requirements, design and coding. That means that our coding time – the part you think you’re ‘speeding up’ by not doing quality work – accounts for a very small part of our overall effort. You may be penny wise but pound foolish.

What is maintenance? It’s anything you do to code after it’s already written. That’s going to be the lion’s share of your work – and you probably have already noticed that you’ve done a lot of maintenance. You attempt to write out some code to solve a problem, and it doesn’t work exactly right. Everything you do after that point is maintenance. When you hack on your program to attempt to make it work, that’s maintenance.

It’s most of what you do as a developer.

When we forego quality, we rack up what’s called ‘Technical Debt‘. We call it debt like credit card debt because, from the time we take it on to the time we pay it off, we have to pay ‘interest’ in terms of effort. We go slower and slower in future projects, spending more and more time hacking through our low-quality code to get things done.

Debt is generally a bad thing until you know how to deal with it. So for now, it’s best to learn how to never go into debt, as well as if you find yourself in debt, how to dig yourself out.

Why so early, though? Why do you need to learn about quality, now? You barely know how to do string manipulation or arithmetic in Python. Why are you having to worry about getting things perfect now?

There are two main reasons.

What did I JUST SAY about productivity?

Do you want to learn how to code faster? I just said that quality is the same thing as productivity because it prevents rework. Why wouldn’t you learn quality as soon as possible so that you can blow through the rest of these lessons as fast as possible?

Learn Good Habits Early

You’re going to run into a lot of coders who refuse to test. Who refuse peer review. Who think static analysis is a waste of time. They never really bit the bullet to learn the right way to do things, and they don’t like how testing, peer review, and other processes make them feel dumb.

We end up justifying a lot of stuff to ourselves to avoid psychological pain. Tests are painful – they’re going to tell you where you screwed up. But they don’t have to be – if you learn good testing discipline early, you’ll realize that tests are just a part of the process, not a tool designed to make you feel dumb. You’ll realize that everyone makes mistakes – a lot of mistakes – and the earlier we catch them, the better for everyone. Macho coders who refuse tests, reviews and other help aren’t all that good, they just don’t want to be ‘exposed’ as imposters.

Learn these things now so you never really learn how to ‘code’ without them – an early realization of their value means you never have to think “I can either implement this thing my Boss wants by tomorrow, or learn to test.” You’ll already know how to test.

You’ve Already Been Doing It

Code Combat is a test driven system – you had to keep trying out your code on the right to see if you passed the challenge on the left. Each stage is a test, with certain requirements you had to meet to move on to the next stage. That’s the exact same thing as a test.

Of course, Code Combat also gave you a very nice visual debugger too – you got to see where your character was. So keep in mind, testing and using the debugger go hand in hand.

On to Testing!

Early on, I advised learning how to do what’s traditionally called unit testing. Traditional unit testing is when you write some code, and then write tests that exercise that code. We’re going to be doing the opposite, though, and focus on Test-Driven-Development, or TDD.

What is TDD? What are its benefits?

TDD is when you write the tests first, then you write code that passes that test.

The benefits of TDD are as follows:

  • In the traditional approach, you can’t guarantee that you’ll write code that’s easily testable. As you get more experience coding and testing, you’ll realize that some code is hard to test, while other code seems easier to test. If you start with the tests first, you’ll almost always, instinctively, write code that’s easy to test.
  • You reduce pressure from management. If you write code first, bad managers might be tempted to ask you to deploy what you have and explain that “we can always test later.” If you start with the tests first, you can never be pressured into delivering before things are up to quality and sliding into technical debt.
  • TDD is also sometimes called ‘Test Driven Design’. This is because sometimes starting with the tests helps us think about our code as already complete and well designed – how would you like to interact with the module you’re writing? If you test against that design, then you’ll be forced to code to that design. Too often, if we try and build prototypes first, we end up testing whatever design we get that works. We don’t think about how we want our code to look from the outside and make sure it looks like that. With TDD, we get those benefits.
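
Before the video and article below spell it out properly, here is a minimal sketch of the rhythm – the file and function names are made up purely for illustration:

    # test_parity.py -- written FIRST; it fails because is_even doesn't exist yet
    from parity import is_even

    def test_is_even():
        assert is_even(4)
        assert not is_even(7)


    # parity.py -- written SECOND; just enough code to make the test pass
    def is_even(number):
        """Return True when number is divisible by two."""
        return number % 2 == 0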

How do I do TDD in Python?

I’m glad you asked!

First, watch this video.

Then, read this article.

We’ll be using py.test from here on out, but it’s useful to see other testing frameworks like nosetest and unittest to see similarities.

How do I know when I’m done, or if I’ve done a good job?

Testing and TDD have a great initial metric of ‘goodness of testing’ called coverage. Coverage is a measure of how many lines of your program were executed by your tests.

Generally speaking, higher coverage is better, and if you can get 100% that’s great, although some functions are intrinsically harder to test. There are many kinds of bugs that can sneak through 100% test coverage. Coverage is a good first metric to watch, though.

You can get a plugin for py.test that adds coverage reporting here. Read through the overview to see how to use it, specifically how to generate an HTML report! When you generate an HTML report, you should be able to open the index.html file it generates (in a directory it makes) in your browser to see a very nice, colorful coverage report built for you automatically.

Once you’re done with the videos, reading and tools above, you’re ready for this module’s code challenge.

Code Challenge

You need to fork this repo on GitHub.

I’ve started a simple calculator module and tests, with the “calculator_add” function, and “test_calculator_add” test.
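
I can’t reproduce the repo here, but judging from the names in the text, the starting point presumably looks something along these lines (the file names and bodies are my guess – trust what’s actually in your fork):

    # calculator.py (file name assumed)
    def calculator_add(first, second):
        """Return the sum of two numbers."""
        return first + second


    # test_calculator.py (file name assumed) -- py.test picks up test_* functions
    from calculator import calculator_add

    def test_calculator_add():
        assert calculator_add(2, 3) == 5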

I want you to fully implement the simple arithmetic for a calculator:

  • Add “calculator_subtract”, “calculator_multiply” and “calculator_divide” functions.
  • Add appropriate tests.
  • Do it all TDD style

For the purposes of this exercise, you should be able to reach 100% code coverage easily.

To ensure you’re following TDD, please use the following Github workflow:

  1. Fork the source repo
  2. Create a branch where you’ll do your work
  3. Add a test THAT FAILS, commit your work.
  4. Add code that makes the test pass, commit your work.
  5. Go to 3 until all functions are added.
  6. When you’re done, open a pull request on your branch (how to open a pull request)

What you are doing, and what TDD emphasizes, is also known as unit testing. There are many other forms of testing, but no one seems to agree on hard and fast definitions so we’re gonna skip over them for now.

The important thing to note is that unit tests and tests you write need to be reasonably fast. This is so you can run them again and again and again – even after every file change. A good habit to get into is every time you save your file, you should run the test.

There’s actually a helper function in py.test that will automatically rerun your tests for you as you make changes to a file. It’s a great idea to use this plugin and keep at least two windows open: one where your tests run constantly, and one for your text editor. (You may even want three – one text editor, one test window, and finally a window with IPython running where you can interactively prototype your program.)

Finally, there is a py.test option to drop into the debugger on test failure. Try it out.

Mentors

For this challenge, please confirm that your mentee has built the above-mentioned functionality, that they can generate an HTML coverage report and that they’ve reached 100% test coverage. You can confirm this in person.

You’ll also need to confirm they know how to use the looponfail feature and the debug on fail feature of py.test.

You need to also review the commit history in their repo to make sure they’re following TDD.

Onward and Upward!

First, the requirements imposed above – that you’ll need 100% test coverage, or as high as you can get it, and that you’ll commit after every test and after every passing test – are requirements of all future challenges. Your mentor will double check that you are keeping test coverage high and working in a TDD fashion!

October 21, 2016 Posted by | Uncategorized | 2 Comments

Don’t Mistake Engagement for Passion

Passion is used as a code word nowadays by many hiring managers. It means “will work a lot of extra hours for free”. Rightfully so, many people are turned off by this idea about passion, about passionate developers, about looking for passionate hires.

But you have to give the misled hiring managers a little slack. After all, they’ve been asked to hire good talent, so they look around at the company and they ask “well, who’s good here, and how do we hire more of them?”

They see some of your best people putting in hours on the weekend, staying late, and taking their work very seriously. And they – without much critical thought – think, “Ah, yes, it’s those type of people we want. I’ll just hire more of them!”

But here’s the problem – those people aren’t passionate, they’re engaged. And there’s a huge difference.

Passion is hired. Engagement is earned.

In fact, even the above is a simplification, since ‘passion’ is so hard to find. You really have to find boy-geniuses (and they’re almost always males) fresh out of school who seem to know the latest technology and be willing to work all hours of the night because they don’t know any better.

Then they churn out some hacked together crap as fast as possible, and seem to provide evidence that you have to hire passionate people who are “smart and get things done”. People who are worried about “work-life balance” or “working smarter, not harder” aren’t going to fit into your “workplace culture” – you hire passionate people and expect the best!

But you have no diversity, which means your innovation sucks. And you have no senior developers, which means your quality and repeatability suck. You have software that sometimes works and is a few months away from crumbling under its own weight. You wrote Facebook in PHP and have no idea how to add new features to stay competitive – everything seems to take forever to change.

What gives?

Engagement is frequently confused for passion. Engagement is “I believe in this company and this project.” Can you hire engaged people? No – how would you? You can’t truly believe in a group of people or what they work on until you’ve… worked with them. Usually for a while.

There’s no question you can ask in the interview process that’s going to determine whether someone will be engaged or not. That’s because everyone is capable of it. Everyone can become engaged – they can believe in the people they work with, and the project they work on, and when they do that they become internally motivated to push hard for those two things.

So instead of hiring for ‘passion’, try hiring good all around talent and build a team that people are proud to be a part of, and a project that they’re proud to work on.

October 17, 2016 Posted by | Uncategorized | Leave a comment

Alan Kay did not invent OOP

I could have sworn I already had an article here.

There’s a story that’s occasionally pulled up from the ditches about some demo of technology that was supposed to be object-oriented, and some guy raises his hand and says “That’s not object oriented!”

“Well, what would you know about object orientation?” the demonstrator replies.

“I invented the term,” the interloper protests.

And scene.

This may have happened at Xerox PARC some years ago. And the guy who claimed to have invented the term probably was Alan Kay. But just because Alan Kay says he invented object oriented programming, that doesn’t actually mean he did.

First, let’s look at the common sense argument. The dominant languages that are classified as object oriented today are some static languages like C++, C#, and Java, as well as dynamic ones like Python and Ruby.

Alan Kay is also known for having said: “I invented OOP and C++ was not what I had in mind”. Well, if an inventor is saying he created X, and the most popular version of X is C++, and he’s saying C++ isn’t X… one of two things may be true:

  • We’ve somehow come very far from the inventor’s original vision (which I believe Mr. Kay is implying here)
  • Or the guy claiming to have invented X didn’t actually invent X

C++, in particular, is a good example in this common sense argument. It takes its roots from a very old language, Simula, which Wikipedia claims to be the first object-oriented language.

To sum up the story so far, one of Alan Kay’s targets here – C++ – that he’s argued is not really object oriented, apparently derives quite closely from the first object-oriented language. Huh. So if C++ isn’t object oriented, neither was Simula?

That leads us into the second, historical argument. Of course, that’s weird, because Simula was one of Alan Kay’s own inspirations for Smalltalk – the language he bet his hat on, which never really took off, and whose lack of adoption he’s now so cranky about.

So there’s basically two different interpretations of what an object-oriented language is – one that includes C++, Simula and Smalltalk, and one that only includes Smalltalk. And of course, the guy who writes mostly in Smalltalk is quite happy to keep telling you he invented OOP and thus should be listened to.

Let’s talk about one final argument here – what if Alan Kay was merely saying he coined the term ‘object oriented’? So in this interpretation, Alan Kay isn’t going around trying to convince everyone he’s a genius and invented OOP and why don’t we all use Smalltalk, but rather, he merely coined a clever phrase to help other people.

Well, this runs afoul of the common sense argument again – what’s the use of coining a term if no one actually uses it that way? If most people think of C++, Java, and Smalltalk when they think of OOP, then claiming you coined the term to mean only Smalltalk again runs afoul of one of two conclusions:

  • It either doesn’t matter what you said because most people have used a substituted definition anyway
  • Or you’re lying

In fact, doesn’t it all beg the question? If Alan Kay meant Object Oriented to mean Smalltalk, yet everyone else thinks it means C++ as well… well, who’s going around and spreading that ugly rumor?

In this world that Alan Kay is trying to present to us, he cleverly defines a term, then goes out to tell everyone what great work he’s done. And scoundrels like the people who designed C++ and Java try to ride his success. What ungrateful wretched tricksters, convincing us all that Object Orientation may not mean what Alan Kay is so sure it means!

And somehow, they were all so successful in convincing us that Object Orientation was something Alan Kay says it’s not, and Alan Kay was so unsuccessful, that that is why today we’re all confused.

That may be how things have happened. But let’s see what’s more likely – is it more likely that tricksters emerged to try and get their own languages adopted by hopping on the OO bandwagon “in name only”, while the one true and good language Smalltalk languishes… or is it more likely that OO was tied at the time more to what the inventors of C++ and Java claim it was tied to: Simula?

What we’re asking here is what the word meant in the ’80s, and while Alan Kay has made quite a fuss in the late ’90s and ’00s about what he meant then, he didn’t seem to be making much of a fuss back when C++ and, later, Java were being written up.

So let’s look at this version of events – here we have a whole bunch of languages being inspired by Simula: Smalltalk, Java, C++ and others. They all adopt a very similar view of the term ‘object oriented’ in that they have classes, objects, inheritance and the like. So far, so good.

Then some guy, Alan Kay, starts getting cranky that Smalltalk isn’t getting adopted and so starts trying to use what fame he had from Xerox to rewrite history.

Frankly, I believe the latter if only because it requires fewer tricksters. The first version of history requires everyone but Alan Kay trying to trick us, while the second only requires Alan Kay trying to trick us.

This is what frustrates me so much about this – we have some guy, using the fact that most other inventors are dead or in other non-English speaking countries, running around claiming his off the wall interpretation of OO is the one true way.

It’s like the ultimate archetypal machismo asshole programmer, willing to ruin others’ careers and say terrible things to get his own time in the spotlight. He knows he isn’t going to be challenged because Simula was invented somewhere else, and he has a whole bunch of kids to try and teach his own version of history to. He exemplifies the lone genius programmer who is “smart and gets things done”, and perpetuates that myth in the middle of programming’s diversity and teamwork crisis.

Every time I hear the Alan Kay story, I think – someone else has just learned not to listen to others, not to work with others, not to give credit to others, to believe that everything’s been built by themselves and that every miscommunication is someone else’s fault. Culture is wrong, not me, and that’s why I don’t get along with everyone.

And that’s just fucking dangerous.

September 28, 2016 Posted by | Uncategorized | Leave a comment