from R&D Innovator Volume 1, Number 4
The Value of External Review
Loehle is a mathematical ecologist at the Environmental Research
Division, Argonne National Laboratory.
He is writing a book, Chaotic Science:
The Search for Pattern in Ecology.
Research-and-development projects are a mixed blessing:
they're essential to an increasing number of
investigations, but they're hard to review and control.
Specialists within the project may have the most technical
expertise, but they typically lack perspective on the whole
process. They may be
prohibited from criticizing the project--or their criticism may
be ignored. Thus the
Hubble space telescope is in orbit with a serious optical
aberration because NASA experts never checked the test protocols
and test equipment used to manufacture the mirror.
In other words, NASA failed to review the project.
Projects are vulnerable to failure because of inadequate review of
process and design. And
when I speak of review, I'm not talking about the researchers
involved chatting formally or informally about the project, nor
about the project leader giving a presentation to senior managers.
I'm speaking of hiring outside experts to examine a project.
Outside reviewers--whether they come from another department or another
organization--have several key advantages.
They bring diverse backgrounds to the process and are not
committed to a particular plan. Perhaps most important, they can't be fired or ostracized for
pointing out a fatal flaw.
On the other hand, no amount of internal paranoia, managerial
oversight, or quality assurance can prevent problems on major
projects. It's just too easy, and utterly natural, for people who are intensively
engaged in a project to develop blinders.
(More on this later.)
The most familiar type of review is what I call the "dog-and-pony
show." In this
exercise, the latest research results or invention are presented with
slick graphics to an audience of top brass, some of whom are
expected to ask "tough" questions.
This type of review is good for the presenting scientists
or engineers because it gives them exposure.
It also makes managers feel they are keeping in touch.
The fact is that an internal reviewer is unlikely to report
a major flaw during a dog-and-pony show.
And the executives aren't likely to find flaws in the
presenter's experiments. The
purpose of such an exercise is actually communication.
Important as that may be, it's not a review.
The most effective review is a comprehensive look by outside peers at
a new plan or design. Its
purpose is neither to rubber-stamp nor to make yes-or-no
decisions, but to improve the design or process.
A good review should take place at an intermediate phase of
development, when the project is at the prototype or rough-draft
stage. This timing
allows concrete criticism to be made early enough so mistakes can
be corrected at minimal cost.
Reviewers must be chosen carefully. Peer
reviewers are most effective if they're actually peers of those
whose work is under review--that is, of similar professional rank.
Eminent or senior reviewers may not be up-to-date on
crucial technical details. In addition, even minor criticism from a famous scientist or
a high-level R&D administrator can overwhelm a junior
researcher. On the
other hand, if those whose work is under review are technical big
shots, they are bound to resent reviewers who are wet behind the ears.
Once these criteria are met, the key factor to consider is the
reviewer's personality. By
this I do not mean extroversion, charm, or wittiness (though
having a few wits can help).
Rather, I'm talking about discerning, and avoiding,
personality traits that are destructive to the review process:
particularly pessimism, a tendency to nitpick, perfectionism, a
one-track mind, and a need to dominate.
Pessimism can be extremely discouraging in any innovative context.
Certainly, a review must provide a reality check.
But while pessimists typically claim that they are "just being
realistic," no pessimist has ever supported a new product or
believed that a proposed design was useful, feasible, or necessary.
The meeting chair must firmly control pessimists and remind them that
their inherent negativity does not prove flaws in the subject under review.
A nitpicker may have an admirable concern for detail, and in the lab
this personality can be quite useful.
But excessive focus on detail can derail the review
process: Why discuss
which brand of capacitor to use in a circuit until you are sure
the overall circuit is functional?
Perfectionists are nitpickers with a broader mission.
They want absolute proof that a new product or process will
sell or work, or that a theory is true, even though such proof is
never available until far too late.
To get unimpeachable results, they insist on huge sample
sizes. These problems
can be avoided if the meeting chair firmly keeps nitpickers and
perfectionists on track and, if necessary, silences them.
(If potential reviewers with these tendencies happen to be
famous, I wouldn't invite them in the first place--they could
dominate the proceedings.)
One Track or on Track?
What about people with a one-track mind, the annoying type who tends to
sing the same chorus over and over? Perhaps these people lack a short-term memory (or assume
everyone else does), but they never seem to mind their
insufferable redundancy. The
chair must control windy one-trackers, perhaps by pointing out
that their comments have already been noted in the minutes and
asking if they have anything new to contribute.
People with a need to dominate can subvert the review process to gratify
their egos. These
reviewers seek fatal flaws in the product or process but seem
bored at the prospect of fixing them.
Dominators are more interested in being right (and proving
others wrong) than in being helpful.
Such people are difficult to control and should be excluded
from the outset.
The choice of participants is key to a successful review.
While the chair can control some behaviors, extreme cases
can irreparably damage the process and should be excluded.
Meeting logistics are the final key to success.
The best review I've attended was a workshop held at a
remote conference center, which right away prevented interruptions.
During the first half-day, an overview of the components of the proposed
project was given, with no questions from reviewers.
Then a full day was devoted to detailed presentations, with
regular questions and discussion.
Review teams spent several hours on various technical
aspects of the project and composed comments, questions, and suggestions.
On the final half-day, these remarks were presented and project
personnel explained how they might modify their design to take the
comments into account. Thus
a complete cycle of review was completed, right through to
revisions, at one workshop.
This workshop violated several "rules" of meeting behavior.
First, rather than having a secretary take notes,
scientists were designated as scribes for specific topics.
For example, one scientist was asked to note any comments
on measurement methods. Reviewers' technical comments are often incomprehensible to
outsiders, and minutes taken by secretaries can produce lots of
literal transcript but little sense.
Second, we omitted flip charts. Complex
technical discussions are difficult to summarize in short phrases
and hence can rarely be captured on flip charts.
Trying to summarize a discussion on a chart disrupts the
flow of discussion without producing much of value.
If the above guidelines are followed, the reward is a profitable
review that can
greatly improve the chances of achieving a project's goals, and
also save time and money.