#13 from R&D Innovator, Volume 1, Number 4, November 1992

The Value of External Review
by Craig Loehle, Ph.D. 

Dr. Loehle is a mathematical ecologist at the Environmental Research Division, Argonne National Laboratory.  He is writing a book, Chaotic Science:  The Search for Pattern in Ecology. 

Large research-and-development projects are a mixed blessing:  They're essential to an increasing number of investigations, but they're hard to review and control.  Specialists within the project may have the most technical expertise, but they typically lack perspective on the whole process.  They may be prohibited from criticizing the project--or their criticism may be ignored.  Thus the Hubble Space Telescope is in orbit with a serious optical aberration because NASA experts never checked the test protocols and test equipment used to manufacture the mirror.  In other words, NASA failed to review the project adequately.

Many projects are vulnerable to failure because of inadequate review of process and design.  And when I speak of review, I'm not talking about the researchers involved chatting formally or informally about the project, nor about the project leader giving a presentation to senior managers.  I'm speaking of hiring outside experts to examine a project in detail.

External reviewers--whether they come from another department or another organization--have several key advantages.  They bring diverse backgrounds to the process and are not committed to a particular plan.  Perhaps most important, they can't be fired or ostracized for pointing out a fatal flaw.

Internal safeguards are no substitute:  No amount of vigilance, managerial oversight or quality assurance can prevent problems on major undertakings.  It's just too easy, and utterly natural, for people who are intensively engaged in a project to develop blinders.  (More on this later.)

The most familiar type of review is what I call the "dog-and-pony show."  In this exercise, the latest research results or invention are presented with slick graphics to an audience of top brass, some of whom are expected to ask "tough" questions.  This type of review is good for the presenting scientists or engineers because it gives them exposure.   

It also makes managers feel they are keeping in touch.  The fact is that an internal reviewer is unlikely to report a major flaw during a dog-and-pony show.  And the executives aren't likely to find flaws in the presenter's experiments.  The purpose of such an exercise is actually communication.  Important as that may be, it's not a review.

What Review Does Work?

The most effective review is a comprehensive look by outside peers at a new plan or design.  Its purpose is neither to rubber stamp nor to make yes-or-no decisions, but to improve the design or process.

A good review should take place at an intermediate phase of development, when the project is at the prototype or rough-draft stage.  This timing allows concrete criticism to be made early enough so mistakes can be corrected at minimal cost.

Participants must be chosen carefully.  Peer reviewers are most effective if they're actually peers--of similar professional rank--to those whose work is under review.  Eminent or senior reviewers may not be up-to-date on crucial technical details.  In addition, even minor criticism from a famous scientist or a high-level R&D administrator can overwhelm a junior researcher.  On the other hand, if those whose work is under review are technical big shots, they are bound to resent reviewers who are wet behind the ears.

The Reviewer’s Personality

Once these criteria are met, the key factor to consider is the reviewer's personality.  By this I do not mean extroversion, charm, or wittiness (though having a few wits can help).   Rather, I'm talking about discerning, and avoiding, personality traits that are destructive to the review process: particularly pessimism, a tendency to nitpick, perfectionism, a one-track mind, and a need to dominate.

Pessimists can be extremely discouraging in any innovative context.  Certainly, a review must provide a reality check.  But while pessimists typically claim that they are "just being realistic," no pessimist has ever supported a new product or believed that a proposed design was useful, feasible, necessary or cost-effective.  The meeting chair must firmly control pessimists and remind them that their inherent negativity does not prove flaws in the subject matter.

The nitpicker may have an admirable concern for detail, and in the lab this personality can be quite useful.  But excessive focus on detail can derail the review process:  Why discuss which brand of capacitor to use in a circuit until you are sure the overall circuit is functional?

Perfectionists are nitpickers with a broader mission.  They want absolute proof that a new product or process will sell or work, or that a theory is true, even though such proof is never available until far too late.  To get unimpeachable results, they insist on huge sample sizes.  These problems can be avoided if the meeting chair firmly keeps nitpickers and perfectionists on track and, if necessary, silences them.  (If potential reviewers with these tendencies happen to be famous, I wouldn't invite them in the first place--they could dominate the proceedings.)

Derailed or on Track?

What about people with a one-track mind, the annoying type who tends to sing the same chorus over and over?  Perhaps these people lack a short-term memory (or assume everyone else does), but they never seem to mind their insufferable redundancy.  The chair must control windy one-trackers, perhaps by pointing out that their comments have already been noted in the minutes and asking if they have anything new to contribute.

Reviewers with a need to dominate can subvert the review process to gratify their egos.  These reviewers seek fatal flaws in the product or process but seem bored at the prospect of fixing them.  Dominators are more interested in being right (and proving others wrong) than in being helpful.  Such people are difficult to control and should be excluded from the outset.

Overall, the choice of participants is key to a successful review.  While the chair can control some behaviors, extreme cases can irreparably damage the process and should be excluded.

A Review Workshop

Meeting logistics are the final key to success.  The best review I've attended was a workshop held at a remote conference center, which immediately eliminated interruptions.  During the first half-day, an overview of the components of the proposed project was given, with no questions from reviewers.  Then a full day was devoted to detailed presentations, with regular questions and discussion.  Review teams spent several hours on various technical aspects of the project and composed comments, questions and recommendations.  During the final half-day these remarks were presented, and project personnel explained how they might modify their design to take the comments into account.  Thus a full cycle of review, right through to revisions, was completed at one workshop.

This workshop violated several "rules" of meeting behavior.  First, rather than having a secretary take notes, scientists were designated as scribes for specific topics.  For example, one scientist was asked to note any comments on measurement methods.  Reviewers' technical comments are often incomprehensible to outsiders, and minutes taken by secretaries can produce lots of literal transcript but little sense.

Second, we omitted flip charts.  Complex technical discussions are difficult to summarize in short phrases and hence can rarely be captured on flip charts.  Trying to summarize a discussion on a chart disrupts the flow of discussion without producing much of value.

If the above guidelines are followed, the reward is a profitable review that can greatly improve the chances of achieving a project's goals--and save time and money as well.


©2006 Winston J. Brill & Associates. All rights reserved.