Quality control in Excel spreadsheets- a serious and universal issue!

Via Paul Krugman’s blog, I’ve been following a fascinating online discussion about the importance of spreadsheet error-checking and independent replication of modeling results in the economics research community.

In a nutshell, when an independent research team finally succeeded in reproducing the results of a controversial recent paper that had big implications for the policy response to the recession, they found, in addition to a series of questionable assumptions and weighting schemes, a basic Excel spreadsheet error that many of us are familiar with:

I just hate it when I add extra rows of data and forget to update my equation arguments accordingly- if only Excel would warn me about that!
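
If you’d rather not rely on eyeballs alone, this kind of stale-range bug is also easy to hunt for programmatically.  Here’s a minimal sketch using the openpyxl Python library- the filename is hypothetical, and it only understands simple single-column SUM/AVERAGE formulas, but it flags any aggregate whose range stops short of non-empty data in the same column:

    import re
    import openpyxl

    # Load with data_only=False so cell.value holds the formula strings.
    wb = openpyxl.load_workbook("model.xlsx", data_only=False)  # hypothetical filename
    ws = wb.active

    # Matches single-column aggregates like =SUM(B3:B45) or =AVERAGE(L30:L44).
    RANGE_RE = re.compile(r"(SUM|AVERAGE)\(([A-Z]+)(\d+):\2(\d+)\)", re.IGNORECASE)

    for row in ws.iter_rows():
        for cell in row:
            if not (isinstance(cell.value, str) and cell.value.startswith("=")):
                continue
            m = RANGE_RE.search(cell.value)
            if not m:
                continue
            func, col, end = m.group(1), m.group(2), int(m.group(4))
            # Non-empty cells in the same column below the range (other than
            # the formula cell itself) are the rows the aggregate is missing.
            skipped = [r for r in range(end + 1, ws.max_row + 1)
                       if r != cell.row and ws[f"{col}{r}"].value is not None]
            if skipped:
                print(f"{cell.coordinate}: {func} stops at row {end}, "
                      f"but column {col} has data in rows {skipped}")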

Economic policy implications of the RR paper aside, I’m personally very interested in the best practices associated with Excel modeling.  Many lifecycle assessment (LCA) tools are Excel-based since the program makes it so easy to represent and document the input-output relationships inherent in that kind of modeling.  In fact, much of my own recent work has been built around big Excel spreadsheets that I put together from scratch.  I first started using Excel when I worked in R&D prior to coming back to grad school.  The company I worked for did most of their general product design calculations in Excel, and had developed a set of best practices for model structure and documentation – practices that I still try to use today in my own work.  Beyond my own professional experience, Excel is an indispensable tool across many industries, particularly finance, yet it seems that everyone struggles with the same issues and is prone to the same types of mistakes, in most cases with much bigger repercussions.

Since Excel is such a useful and universal tool, how can we develop best practices to identify and avoid these user errors to which it is so prone?  Here are a few suggestions that I have started to implement in my own work:

Built-in error-checking-  PAY ATTENTION WHEN THE PROGRAM FLAGS A POTENTIAL PROBLEM!  The version of Excel I’m most familiar with (2011 for Mac) has a feature where, if it senses inconsistencies between adjacent cells (e.g. if you do a calculation using a series of cells but exclude some adjacent non-blank cells, or if you have two adjacent cells with the same equation, followed by a third with a different one), it flags the cell with a little green triangle.  It can actually be pretty annoying, since 95% of the time it’s flagging things that you did intentionally.  But for that other 5% the feature can be a lifesaver, and it was designed for exactly the type of error identified in the RR paper.

… oh wait, it does warn me!
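
You can approximate that consistency check outside of Excel, too.  The sketch below (openpyxl again, same hypothetical filename, and assuming a reasonably recent version where cell.column is an integer) strips the row numbers out of each formula and compares the resulting ‘skeleton’ against the cell directly above it- a crude stand-in for whatever heuristic Excel actually uses, but it catches the same broken-pattern situation:

    import re
    import openpyxl

    wb = openpyxl.load_workbook("model.xlsx", data_only=False)  # hypothetical filename
    ws = wb.active

    def skeleton(formula):
        """Strip row numbers so '=A2+B2' and '=A3+B3' compare as the same pattern."""
        return re.sub(r"(?<=[A-Z])\d+", "", formula)

    # Compare each formula to the cell directly above it; a differing skeleton
    # is (roughly) the situation that earns a green triangle in Excel.
    for row in ws.iter_rows(min_row=2):
        for cell in row:
            above = ws.cell(row=cell.row - 1, column=cell.column)
            if (isinstance(cell.value, str) and cell.value.startswith("=")
                    and isinstance(above.value, str) and above.value.startswith("=")
                    and skeleton(cell.value) != skeleton(above.value)):
                print(f"{cell.coordinate}: formula pattern differs from {above.coordinate}")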

Spreadsheet structure- Even though a lot of spreadsheets out there are extremely difficult to follow, it’s actually very easy to keep them well-organized.  I typically use a four-column ‘name-value-units-notes’ structure to ensure that the proper variable names and units follow each calculation.  I always keep unit conversions separate from actual computations (lumping it all together gets really confusing), and sometimes break big ugly computations into two parts so you can still see what’s going on with just a glance at the equation bar.  When the analysis gets big, I color-code the different sheet tabs, and then change the color of individual cells that contain important inputs from other sheets to match, to make it easier to trace the underlying dataflow.  Any input values based on assumptions get highlighted in bright red.  All of this takes a little time to implement, but in my experience it’s an investment that pays for itself the first time you come back to the spreadsheet and try to remember what you were doing.  None of these little tips guarantee that you won’t make mistakes, but they greatly increase the readability of the finished product and the likelihood that you (or someone else) will catch little errors.
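
As a concrete illustration, here’s a minimal openpyxl sketch that writes out that four-column structure and flags an assumption cell in red- the variable names and values are made-up placeholders, not anything from a real model:

    import openpyxl
    from openpyxl.styles import Font, PatternFill

    RED = PatternFill(start_color="FFFF0000", end_color="FFFF0000", fill_type="solid")

    wb = openpyxl.Workbook()
    ws = wb.active

    # Header row for the name-value-units-notes structure.
    ws.append(["name", "value", "units", "notes"])
    for cell in ws[1]:
        cell.font = Font(bold=True)

    # Example rows- the values are placeholders, not real model inputs.
    ws.append(["panel_area", 1.6, "m^2", "manufacturer datasheet"])
    ws.append(["lifetime", 25, "years", "ASSUMPTION- no field data yet"])

    # Highlight the assumption cell so it can't hide in a wall of numbers.
    ws["B3"].fill = RED

    wb.save("structured_model.xlsx")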

Sensitivity analysis-  This is modeling 101.  If your result is dependent on any uncertain inputs or basic assumptions (and what model isn’t?), at a minimum you should be perturbing those values and assumptions, observing how much your overall result changes, and reporting on your model’s sensitivity to those factors (it’s even better to do a full, rigorous uncertainty estimation, though in many cases that can be very challenging).  I’m personally very suspicious/dismissive of any published LCA results that don’t include sensitivity or uncertainty analysis; though it happens not infrequently, it reeks of the authors having a hidden agenda.  Note that the only-3000-some-word RR paper admits that their result ‘merit(s) further sensitivity analysis.’ Indeed.
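
Here’s what that minimum bar looks like in a few lines of plain Python- the model function and baseline numbers below are stand-ins for whatever your spreadsheet actually computes, and the ±10% perturbation is an arbitrary choice:

    def model(inputs):
        """Stand-in for the spreadsheet calculation- replace with the real one."""
        return inputs["price"] * inputs["volume"] / inputs["efficiency"]

    baseline = {"price": 42.0, "volume": 1000.0, "efficiency": 0.85}  # placeholders
    base_result = model(baseline)

    # Perturb each input +/-10%, one at a time, and report the swing in the result.
    for name in baseline:
        for factor in (0.9, 1.1):
            perturbed = dict(baseline, **{name: baseline[name] * factor})
            change = 100 * (model(perturbed) - base_result) / base_result
            print(f"{name} x{factor}: result changes {change:+.1f}%")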

Publishing spreadsheets- This is a tough one, and expectations seem to vary a lot by discipline.  While sharing your underlying spreadsheet models is a great way to get feedback from your peers and engage the research community, it’s not without thorny IP and proper-use issues.  If I share my model, am I limiting my ability to use it for future publications?  Should I be going through some sort of copyrighting exercise?  Will others modify it in a way that I disagree with, and then publish contrary results?  All of this aside from the fact that you look somewhat foolish if someone does identify an error in it (even though this is a good thing from a scientific method perspective)!  Full disclosure- I have not yet shared the spreadsheet model underlying the publication that I linked to above.  I plan on doing so, but I haven’t yet made the time to clean up the file or go through the copyrightit-makeitreadonly-wheretohostit thought process…

Does anyone have any other tips to add to the list?

Postscript (for all the other doctoral students out there)-  While Excel is recognized as a potentially powerful platform for building simple models or doing basic data analysis, it’s a pretty crappy tool for data visualization, and it’s considered Busch-league to submit manuscripts with plots using the default Excel color schemes and formatting.  You can spot them from a mile away, and it gives some readers a bad first impression of the sophistication of your work (‘if they couldn’t even be bothered to make a nice figure in SigmaPlot or R, how good could the analysis be?’).
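
For what it’s worth, if you’re headed toward Python rather than SigmaPlot or R, even a few lines of matplotlib styling are enough to shed the default-Excel look- the data below is obviously just filler:

    import matplotlib.pyplot as plt

    # Placeholder data- swap in the real series.
    years = [2009, 2010, 2011, 2012, 2013]
    values = [1.2, 1.8, 2.1, 2.9, 3.4]

    fig, ax = plt.subplots(figsize=(4, 3))
    ax.plot(years, values, marker="o", color="0.2", linewidth=1.5)

    # Drop the chart-junk that screams 'default Excel export'.
    ax.spines["top"].set_visible(False)
    ax.spines["right"].set_visible(False)
    ax.set_xlabel("Year")
    ax.set_ylabel("Result (arbitrary units)")

    fig.tight_layout()
    fig.savefig("figure.pdf")  # vector output for the manuscript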


13 Responses to Quality control in Excel spreadsheets- a serious and universal issue!

  1. Paul says:

    Wow! Good advice – thanks for sharing!
    That’s an interesting question about Excel copyright issues… I might ask some people about that.
    I was curious about the phrase “busch-league” so I looked it up… pretty interesting!
    http://boards.straightdope.com/sdmb/showthread.php?t=548790

  2. John says:

    A very nice short summary of the policy context of all of this:
    http://www.nytimes.com/2013/04/19/opinion/krugman-the-excel-depression.html

  3. Paul says:

    Go UMass Amherst! (my alma mater)

  4. John says:

    More quality research from the economics community:
    http://www.washingtonpost.com/blogs/wonkblog/wp/2013/04/30/the-studies-behind-austerity-are-weak-the-study-behind-uncertainty-is-worse/?wprss=rss_ezra-klein
    This is the same kitschy method that I and many of my colleagues use in the introductions to our dissertations to construct a “my research topic is relevant because the number of publications is increasing” plot- it’s a bit silly, and certainly not very useful in any quantitative sense.  In this case, using media reports as an economic indicator, when you can’t even tell whether “uncertainty” is being reported as increasing or decreasing, is beyond absurd.  What are the odds that this paper was peer reviewed?

  5. Pingback: Programming skills, and scientific inspiration vs. computational firepower | Energy and the Future

  6. Pingback: Repeatable and transparent data analysis: making the leap from Excel to Python (with tutorial) | EcoPress

  7. Excel says:

    Very true! We often forget this. I’ll try to put some focus on quality control in my courses too.

  8. Pingback: Repeatable and transparent data analysis: making the leap from Excel to Python (with tutorial) – NREL CMS
