Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.

See http://docs.google.com/View?id=dcxddbtr_23cg5thdfj for photo credits.

Tuesday, April 24, 2012

Cultural Difference

Here's a cultural difference:

When I am doing test-first programming (more formally known as test-driven development, or TDD), the code I write has usually been tested by the time I check it in.  To some degree.  With an automated test checked in alongside it.

But people who don't do test-first or TDD sometimes write great galumphing gobs of code. And check it in.  No automated test. Possibly they have tested it manually.  Or perhaps not.  Possibly they have no expectation that the code will work at all.

Now, what happens when I start using that code, expecting it to have been tested and to work?  If I'm lucky, it will break right away.  If I'm unlucky, it will almost work... until, a good while later, I figure out that there are bugs in code I thought was working.

Ideally, there would have been some communication that "this code is not expected to work".  Verbally?  In comments in the code?

Automated tests are, in some ways, just a form of communication.  A form of communication that is in your face and hard to ignore.
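
For example, here is a minimal sketch of that kind of communication, using Python's built-in unittest (the message_parser module and parse_header function are hypothetical stand-ins for whatever is actually under test):

    import unittest

    from message_parser import parse_header  # hypothetical module under test

    class TestParseHeader(unittest.TestCase):
        def test_rejects_empty_input(self):
            # Anyone who breaks this expectation sees a failure immediately.
            with self.assertRaises(ValueError):
                parse_header(b"")

    if __name__ == "__main__":
        unittest.main()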

Should you ever write code that isn't tested?  Sure, maybe: to sketch out an idea.  But it should be clearly documented as such.  Ideally automatically:  if code that you don't expect to work is executed, perhaps it should log or otherwise advertise that it is not expected to work.
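
For instance, a minimal Python sketch of what I mean (the decorator is my own invention, not a standard library feature):

    import functools
    import warnings

    def not_expected_to_work(func):
        """Mark sketch code: loudly advertise that it is untested."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(f"{func.__name__} is an untested sketch; it is not expected to work")
            return func(*args, **kwargs)
        return wrapper

    @not_expected_to_work
    def summarize_log(path):
        ...  # idea sketched out; no tests yet

Then anyone who actually runs the sketch gets told so, automatically, rather than finding out the hard way.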

I encountered a similar problem in another tool, one that processed a class of input messages.  In theory it should have handled every message in that class, and it could recognize them all.  But for whatever reason only one particular example of the class was actually handled.  The rest were ignored. Silently.  No warnings.  Since I expected the tool to handle the whole class of input messages, not just one particular type, it took me a while to peel the onion and find the problem.

Now, I did not fix the tool to handle the whole class of input messages.  I just added one more instance.

But I could have made it warn. Roughly, in Python (the helper and handler names here are hypothetical, sketching the structure of the real tool):

    import warnings

    def process(msg):
        if in_message_class(msg):
            if is_example1(msg):
                handle_example1(msg)
            elif is_example2(msg):
                handle_example2(msg)
            else:
                warnings.warn(f"supposed to handle all of this class, but ignoring: {msg!r}")

