Intellectual Imperialism
   (This is a slightly revised version of an essay that appeared in Dialogue, the newsletter of the
                    Society for Personality and Social Psychology)
                                            posted 11/27/02

                                                      Lee Jussim

Agricultural Imperialism
    A few years ago, while casually skimming through some social science
journals, I came across an article on "agricultural imperialism."  I almost lost it
right there.  Talk about taking a reasonable idea (imperialism) to a bizarre, exaggerated
extreme.  I had visions of vast fields of wheat, armed to the teeth, prepared to wage war on
defenseless fields of barley, soy, and rice.
     Until I started reading the article.  The author's point was that agricultural production
was becoming so standardized and excessively focused around a relatively small number of
crops (such as corn, rice, soy, and wheat), that many local, unique, and indigenous products
were being squeezed out of the marketplace and, functionally, out of production.  And the
point was not that this was, by itself, intrinsically bad.  Instead, over-reliance on a fairly small
number of crops would seem to put much of the human race at excessive risk should an act
of god (drought, disease, etc.) decimate one or two particular crops.  Although the author did
not quite put it this way, just as it is important to diversify your stock portfolio, it is
important for us, both as individuals and as a species, to diversify our food sources.  And the
creeping Westernization of agriculture threatened to undermine the diversity of those food
sources.

What is Intellectual Imperialism?
    I use the term "intellectual imperialism" to refer to the unjustified and
ultimately counterproductive tendency in intellectual/scholarly circles to denigrate,
dismiss, and attempt to quash alternative theories, perspectives, or methodologies.
Within American psychology, for example, behaviorism from the 1920s through the 1960s is
one of the best examples of intellectual imperialism.  Behaviorists often characterized
researchers taking other (non-behaviorist) approaches to psychology as "nonscientific" (see,
e.g., Skinner, 1990).  And, although other forms of psychology did not die out, behaviorism
dominated empirical, experimental American psychology for four decades.  Although
behaviorism undoubtedly provided major contributions to psychology, to the extent that the
scientific study of intra-psychic phenomena (attitudes, self, decisions, beliefs, emotions, etc.)
was dismissed, ridiculed, or suppressed, behaviorism also impeded progress in psychology.

     Unjustified rejection of failures to replicate.  Intellectual imperialism emerges in all
sorts of ways.  One common manifestation is reviewers' tendency to reject articles because
they fail to find what (the reviewer believes) someone else has found.  Such studies seem to me to
have unusual potential to be particularly informative and intriguing.  They raise all sorts of
possibilities, such as: the original finding or phenomenon is not as powerful or widespread as
the initial studies seemed to suggest; the new pattern may be as common as, or more common than,
the original finding; or there may be conditions under which one or the other is more likely to hold.
But a common knee-jerk sort of reaction is "There must be something wrong with the study if
pattern X failed to replicate."  Certainly, this is possible.  But, it is also possible that there
was something wrong (or limited or left unarticulated) in the original study or studies
demonstrating pattern X.

     Just because researcher Smith published pattern X first, does that necessarily mean that
a subsequent study by researcher Jones, who found pattern not X, is fatally flawed?  I do not
see it -- there is no logical or philosophical reason to ascribe higher quality to a study just
because it was performed first.  Doing so constitutes intellectual imperialism -- unjustifiably
presuming one study's findings are superior to another's.

     The un- (or at least rarely) questioned superiority of the experiment.  Correlation does
not mean causality.  That is a knee-jerk reaction we have all been taught since our first statistics
class, and maybe even our first psychology class.  But it is wrong.  Correlation does mean
causality.  If we discover that A is correlated with B, then we now know either that: 1) A
causes B; 2) B causes A; 3) C (or some set of C's) causes both A and B; or 4) some
combination of 1, 2, and 3 is true.  This is not nothing -- indeed, although we do not know
the precise direction or set of directions in which causality flows, we know a lot more about
causality than we did before we obtained the correlation.
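
     A minimal simulation sketch makes the point (this is only an illustration, not part of the
original argument; it assumes Python with numpy, and the variable names and coefficients are
arbitrary).  Data are generated under three of the causal structures listed above -- A causes B, B
causes A, and C causes both -- and every one of them produces a real correlation between A and B:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Structure 1: A causes B
    A1 = rng.normal(size=n)
    B1 = 0.5 * A1 + rng.normal(size=n)

    # Structure 2: B causes A
    B2 = rng.normal(size=n)
    A2 = 0.5 * B2 + rng.normal(size=n)

    # Structure 3: C causes both A and B (no direct A-B link at all)
    C = rng.normal(size=n)
    A3 = 0.5 * C + rng.normal(size=n)
    B3 = 0.5 * C + rng.normal(size=n)

    for label, A, B in [("A -> B", A1, B1), ("B -> A", A2, B2), ("C -> A and B", A3, B3)]:
        print(label, round(np.corrcoef(A, B)[0, 1], 2))
    # Prints roughly .45, .45, and .20: each structure yields a genuine correlation, and
    # the correlation by itself narrows the possibilities without choosing among them.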

     As far as I can tell, it has been overwhelmingly, and perhaps exclusively,
experimentalists who have touted the absolute superiority of the experiment.  Researchers who
routinely engage in both experimental and nonexperimental work rarely make this claim.

     The alleged superiority of the experiment has been greatly exaggerated.  Whole fields
with considerably more scientific status and recognition than social psychology, such as
astronomy, paleontology, and evolutionary biology, do not rely primarily on experiments for
building theory and discovering new knowledge.

     Of course, if we compare a perfect experiment (i.e., one whose procedures are fully
articulated and implemented flawlessly, which leaves open no alternative explanations, and which
involves no measurement error) to a realistic naturalistic study, the experiment is superior.  But not
if we compare a perfect experiment to a perfect naturalistic study.  Our hypothetical perfect
naturalistic study is also executed flawlessly, is longitudinal (thereby ruling out the possibility that
B, measured at Time 2, caused A, measured at Time 1), includes measures of all possible
alternative explanations (all possible "C's" in the C-causes-A-and-B sense), and is free of
measurement error.  In such a case, the experiment and the naturalistic study are equally
capable of assessing causal relations between A and B.

     What about a realistically good experiment and a realistically good naturalistic study
(which, of course, is the bottom line issue)?  Because this issue is too complex to deal with in
this type of short essay, I will make only a few brief points here.  Although there may be
some net advantage of experiments over naturalistic studies, that advantage is small and
quantitative, rather than an absolute quantum leap.  Both rule out B causing A (at least if the
naturalistic study is longitudinal).  This leaves one major ground for comparison regarding
quality of causal inferences: their ability to rule out C's.  Experiments do not necessarily rule
out all C's.  They only rule out those C's that are uncorrelated with the manipulation.  An
obvious case is demand characteristics (though the set of possible C's correlated with the
manipulation is unlimited, just as it is in naturalistic studies).  Some studies may produce
differences between conditions, not because the manipulation worked, but because participants
figure out what responses the experimenter wants them to provide.

     Naturalistic studies nonetheless do have a harder time ruling out those pesky C's.  But,
if there is any prior empirical work in the area, any theory, or even any related theories, the
researcher may often have a good idea of just which C's are the most likely contenders.
These can then be measured and controlled.  Not necessarily as good as an experiment, but not
a sloppy second, either, at least not if those C's are reasonably well measured.  Indeed,
because researchers using naturalistic designs may be more sensitive to C's than many
experimentalists, they may often make more of an effort to include, measure, and control
those C's in their designs.  If so, at least some naturalistic studies may do a better job of
ruling out C's than some experiments.
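
     As a concrete sketch of that strategy (again, only an illustration with hypothetical variable
names and arbitrary coefficients, assuming Python with numpy), the simulation below builds a
naturalistic-style data set in which a measured third variable C drives part of the A-B association.
The raw slope of B on A is badly inflated, but once the measured C is entered as a covariate, the
estimate lands back near the true effect:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    C = rng.normal(size=n)                      # the measured "C" (potential confound)
    A = 0.8 * C + rng.normal(size=n)            # A is partly driven by C
    B = 0.3 * A + 0.8 * C + rng.normal(size=n)  # the true effect of A on B is 0.3

    # Naive estimate: slope of B on A, ignoring C
    naive_slope = np.polyfit(A, B, 1)[0]

    # Controlled estimate: multiple regression of B on A and the measured C
    X = np.column_stack([np.ones(n), A, C])
    controlled_slope = np.linalg.lstsq(X, B, rcond=None)[0][1]

    print(round(naive_slope, 2), round(controlled_slope, 2))
    # Roughly 0.69 versus 0.30: controlling the measured C recovers the true effect.

     The sketch assumes, unrealistically, that C is measured without error; it is the "reasonably
well measured" caveat above that keeps this strategy from being a free lunch.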

     Furthermore, even if the causal inferences derivable from a typical naturalistic study
are not quite as convincing as those derived from a typical experiment, the naturalistic study
will often provide more information about naturally-occurring relationships than will an
experiment.  To the extent that we are trying to understand basic processes, therefore, I would
give the edge to the experiment.  But to the extent that we are trying to understand the role of
those processes in everyday life, I would give the edge to the naturalistic study.  Whether
there is any greater net increase in scientific knowledge, even of causal relationships, resulting
from experiments than from naturalistic studies is, therefore, primarily a matter of opinion,
perspective, and context.

     Of course, as a field, we do not really need to choose.  Both experiments and
naturalistic studies are extremely important, precisely because they complement each other so
well.  Put this way, it probably seems obvious.  If so, then you are already agreeing with me
that any tendency toward methodological imperialism (dismissing, derogating, or giving less
credence to naturalistic studies than to experiments) is not a healthy thing for our field.

     The curious case of (in)accuracy.  For years, social psychologists, especially those
with a social cognition orientation, have waxed enthusiastic over error and bias research and
rejected accuracy research almost out of hand.  Consider the following:

"It does seem, in fact, that several decades of experimental research in social psychology have
been devoted to demonstrating the depths and patterns of inaccuracy in social perception ...
This applies ... to most empirical work in social cognition ... The thrust of dozens of
experiments on the self-fulfilling prophecy and expectancy-confirmation processes, for
example, is that erroneous impressions tend to be perpetuated rather than supplanted because
of the impressive extent to which people see what they want to see and act as others want
them to act ..." (Jost & Kruglanski, 2002, pp. 172-173).

"Despite the obvious importance to social psychology of knowledge about person perception
processes, the development of such knowledge was delayed by a preoccupation with the
accuracy of judgments about personality ... The naivete of this early assessment research was
ultimately exposed by Cronbach's elegant critique in 1955.  Cronbach showed that accuracy
criteria are elusive and that the determinants of rating responses are psychometrically
complex" (Jones, 1985, p. 87).

"The accuracy issue has all but faded from view in recent years ... On the other hand, in
recent years, there has been a renewed interest in how, why, and in what circumstances
people are inaccurate" (Schneider, Hastorf, & Ellsworth, 1979).

     Despite spending pages and pages on inaccuracy, error, and bias, both the
recent round of handbook chapters and most undergraduate texts hardly
discuss accuracy at all.  The reasons for social psychology's rejection of
accuracy research are too long and involved for this essay; two short points, however,
highlight the intellectual imperiousness of attempts to denigrate or dismiss accuracy
research.  First, how can we possibly reach conclusions about inaccuracy unless we can also
reach conclusions about accuracy?  This question is mostly rhetorical, because on its face, the
question seems ludicrous.  It is not completely ludicrous, primarily because research on errors
can provide insights into processes, but whether those processes typically lead to accurate or
inaccurate perceptions and judgments is a separate question that rarely can be addressed by
process research.  Furthermore, some biases (which are not necessarily the same thing as
errors or inaccuracy) actually enhance accuracy (Jussim, 1991).  All this is very rich and
interesting, at least to some of us.  The entire analysis, however, could not occur at all unless
at least some researchers studied accuracy.  This suggests that attempts to dismiss accuracy do
us all a disservice by attempting to clamp theoretical and empirical blinders on the field.

     Second, there is the supposed "criterion problem" in accuracy research (highlighted in
the Jones quote).  This criticism is so common that it has been known to evoke paroxysms of
sweat, angst, and even self-flagellation from people engaged in actual accuracy research.
Aren't the criteria for evaluating the validity of social beliefs so vague and fuzzy as to render
attempts to assess accuracy meaningless?

      I have never seen criticisms of the criteria used to establish self-fulfilling prophecies
that remotely resemble those leveled at accuracy research.  I find this peculiarly ironic
because, of course, although the processes by which a perceiver's beliefs become true are
different, the criteria for establishing that they have become true are (or, at least, should be) identical.
Social psychology cannot have it both ways.  It cannot be tortuously difficult to identify
criteria for establishing accuracy unless it is equally tortuously difficult to identify criteria for
establishing self-fulfilling prophecy.  Conversely, it cannot possibly be unproblematic to
identify criteria for establishing self-fulfilling prophecy unless it is equally unproblematic to
identify criteria for establishing accuracy.
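
     A toy computation illustrates why the criteria coincide (again, an illustration of my own, with
made-up variable names and numbers, assuming Python with numpy).  In both scenarios below, the
perceiver's belief is compared to exactly the same kind of criterion -- what the target actually does
later -- and the correspondence index is computed in exactly the same way; only the causal story
behind the correspondence differs:

    import numpy as np

    def correspondence(belief, criterion):
        """One possible correspondence index: the belief-criterion correlation."""
        return round(float(np.corrcoef(belief, criterion)[0, 1]), 2)

    rng = np.random.default_rng(2)
    n = 1_000

    # Accuracy scenario: the belief tracks a real target attribute, and the later
    # outcome is produced by that attribute, not by the belief.
    attribute = rng.normal(size=n)
    belief_accurate = 0.7 * attribute + rng.normal(scale=0.7, size=n)
    outcome_accurate = 0.8 * attribute + rng.normal(scale=0.6, size=n)

    # Self-fulfilling prophecy scenario: the belief has no initial basis, and the later
    # outcome is produced by the belief itself.
    belief_sfp = rng.normal(size=n)
    outcome_sfp = 0.5 * belief_sfp + rng.normal(size=n)

    print(correspondence(belief_accurate, outcome_accurate))   # roughly .57
    print(correspondence(belief_sfp, outcome_sfp))             # roughly .45
    # The criterion comparison is identical in both cases; if its criteria are too
    # "elusive" for accuracy, they are equally elusive for self-fulfilling prophecy.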

Some Scientific Claims Really are Just Plain Wrong
     Do not get me wrong.  Sometimes mountains of data really do say "X is true and Y
isn't."  The end (at least until someone comes up with new data saying Y could be true
sometimes after all).  When there is sufficient research to document the falsity of Y, so be it,
and we should all feel free to say that Y just ain't true.  But the criterion should be the data --
not our own preferences for one view over another.  And, the entire point of this essay is that
premature denigration or dismissal of an area of research restricts our data, thereby reducing
the quality of the science produced by our field.  It is one thing if we have tons of data that
Y isn't true.  But it is another thing entirely if there is just no evidence that Y is true, because
research on Y has been prematurely stigmatized or trivialized.  In such a case, the value and
credibility of our field, and our ability to both understand human nature and to improve the
social condition, have been sorely limited.

Intellectual Affirmative Action
     Is there a solution?  Well, one of the best solutions I know of to bias and
discrimination remains affirmative action.  Intellectual affirmative action would involve both
reviewers and, especially, editors adopting a stance of being especially favorably predisposed
to publishing intellectually diverse (i.e., different perspectives, different results) research.  To
get concrete, the next time you come across a study that fails to find stereotype threat effects,
or a priming effect, or that finds people have extraordinarily good access to their own
cognitive processes, or that conscious controlled processes seem to dominate over automatic
ones -- to overcome your own predisposition to reject such papers, set what may feel
to you like a lower, not higher, theoretical and methodological bar for acceptance.  This will
merely compensate for your own predisposition to look negatively on such papers, thereby
giving them a fair chance.  Let the result get out there, so the rest of us can do our work trying to
sort it all out.

                                                      References

Jones, E. E. (1985).  Major developments in social psychology during the past five decades.
In G. Lindzey & E. Aronson (Eds.), The handbook of social psychology (3rd ed., Vol. 1,
pp. 47-107).  New York: Random House.

Jost, J. T., & Kruglanski, A. W. (2002).  The estrangement of social constructivism and
experimental social psychology: History of the rift and prospects for reconciliation.
Personality and Social Psychology Review, 6, 168-187.

Jussim, L.  (1991).  Social perception and social reality: A reflection-construction model.
Psychological Review, 98, 54-73.

Schneider, D. J., Hastorf, A. H., & Ellsworth, P. C. (1979).  Person perception (2nd ed.).
Reading, Massachusetts: Addison-Wesley.

Skinner, B. F. (1990).  Can psychology be a science of mind?  American Psychologist, 45,
1206-1210.