William S. Verplanck, PhD
Professor of Psychology, Emeritus
University of Tennessee, Knoxville
Knoxville, TN 37916
NHSTP is embedded in the research of ‘cognitive science’. Its use rests on unstated assumptions about the practices of sampling, of “operationalizing,” and of using group data. NHSTP has facilitated both research and theorizing: research findings of limited interest, and diverse theories that seldom complement one another. Alternative methods are available for data acquisition and analysis, and for assessing the “truth-value” of generalizations.
Since 1955, the cumulative number of papers cited by the author is a linear function of the year. A brief fall-off in the rate of citations per year in the early seventies is made up for by an increased rate in 1990, when “questions began to be asked.” The few references prior to 1955 are spread over a period of years: one in 1763, the rest from 1927 (Bridgman) to 1954. These are “basic” works on the philosophy of science, in books and at work.
George Miller’s 1956 paper, entitled “The magical number seven, plus or minus two: some limits on our capacity for processing information,” often taken as the beginning of the “cognitive revolution,” introduced “information processing” into psychology. “Information processing” became the subject for theorizing in “cognitive science.”
In due course, first Popper and then Kuhn confirmed and endorsed, at least by implication, the ‘cognitive revolution’, the new paradigm, ensuring that the theory of the ‘scientific empiricists’ about theories and theory-testing would be wedded, by still further theories (e.g., about “truth” and “falsification”), to NHSTP.
Dr. Chow’s citations may reflect only his own decisions about what to cite. They nevertheless clarify the historical development of the wedding of ‘cognitive science’ to inferential statistics.
Back in the forties, somebody measured the Hullian ‘e-bar-dot’ by dint of 26 or so assumptions made about data from rats. Ever since, this reviewer has been suspicious of “assumptions” and is inclined to hunt them out, and to take alarm when they remain hidden.
Dr. Chow states the assumptions upon which NHSTP is (are?) based. In a complex series of arguments and presentations, he enables the reader to identify the kinds of data, and the guesses (hypotheses, theories) based on them, that are suited to treatment by NHSTP.
Chow does not consider (a) the samples of individuals whose behaviors provide the data used with NHSTP for the development and testing of theories on the cognitive functions of ‘mind’ or brain of the human (and some other) species, (b) evaluation of the concept ‘operationalize’, (c) the relationship of statistical measures made on group data to the behavior of any one individual in the group, or (d) the implications for both research and theory in psychology of NHSTP methodology for rejecting ‘untruth’.
Sets of data used in the cognitive sciences are most often derived from the behavior of experimental groups made up of students at college/university A in the year X. Most often, these are students in psychology courses who were required to serve as subjects, or who served as subjects to make up for an exam they missed, or who were paid to serve, or who volunteered. Many of these may have liked, disliked, or not known the individual who “ran” them.
Can findings on such a sample be replicated using a sample of students at college B, C, or D, in years X+10, X+20, or X+30? Are these appropriate samples of the human species, or even of young Americans? (Time for a t-test!)
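The parenthetical jab can be made concrete. A minimal sketch, with wholly invented scores for hypothetical samples at colleges A and B, shows what such a t-test actually addresses: a difference between two accidental samples, not a fact about the species.

```python
# Hypothetical illustration: Welch's two-sample t statistic comparing a
# "finding" at college A (year X) with an attempted replication at
# college B (year X+10). All numbers are invented for the sketch.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = mean(a), mean(b)
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se

college_a = [12, 15, 11, 14, 13, 16, 12, 15]   # invented scores, year X
college_b = [10, 13, 12, 11, 14, 12, 11, 13]   # invented scores, year X+10

t = welch_t(college_a, college_b)
print(f"t = {t:.2f}")
# Whatever the t, it says nothing about college D in year X+30,
# nor about "young Americans," still less about the human species.
```

Whether the statistic clears a significance threshold is beside the reviewer's point: the population to which the inference is supposed to generalize was never sampled.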
Are such samples appropriate for generating theories purporting to find out about “cognitive structures” of the brain or ‘mind’?
In “operationalizing” theoretical terms and statements, Boring and Pratt stood ‘operationism’ on its head. One published study (reprinted in a book of readings) “operationalized” the Freudian “identification with the father-figure” (or some such). Its author measured this by a count of spools packed in boxes by Stanford undergraduates in 195x, following the single instruction “pack the spools in the boxes” (with no further instruction or ‘feedback’) until they stopped. How many other theoretical entities can this procedure “operationalize”?
“Operationalization” produces garbage; most psychologists have failed to note that Bridgman’s ‘operationism’ developed from the methods used in measuring ‘time’ and ‘space’, before “Big Bang” theory.
Theorists use group-data from samples, using “operationalized concepts” to construct falsifiable (seldom-falsified) theories about the structures of the human mind or brain that “process information.” What structures? Where? In the mind or brain of that .56 infant of the 1.56 infants that statistics tell us is/are born to the N female graduates of Z University in the 25 years following graduation?
What are the assumptions underlying the application of probability theory about the distribution of errors to such group-data?
In analyzing group data, one adds a datum (information; 0, 1) on each identifiable thing a single subject does, first to another such datum, then to data on the other things that this individual does; these new ‘data’ are then added to the equivalent new ‘data’ of every other subject, producing newer “data”. Such a procedure seldom fails to produce normally distributed “data”, suitable for NHSTP. That the occurrences of each specific action (response) of each subject might show orderliness (“lawfulness”) not suited to NHSTP methodology is ignored, even though this is easily demonstrated by research in both ‘psychophysics’ and ‘learning’.
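The aggregation just described can be simulated. In the sketch below (all parameters invented), each simulated subject's trial-by-trial behavior follows its own perfectly orderly rule, a straight line over trials, yet pooling one summary number per subject yields a bell-shaped pile of “group data” ready-made for NHSTP, with every trace of the individual lawfulness gone.

```python
# Hypothetical sketch: lawful individual behavior, normal-looking group data.
import random
from statistics import mean, stdev

random.seed(1)

def subject_scores(slope, n_trials=20):
    # Perfectly lawful individual behavior: a deterministic straight
    # line over trials -- nothing normally distributed about it.
    return [slope * t for t in range(n_trials)]

# 200 simulated subjects, each with a different slope drawn at random;
# each contributes one mean score to the "group data."
group = [mean(subject_scores(random.gauss(1.0, 0.2))) for _ in range(200)]

print(f"group mean = {mean(group):.2f}")
print(f"group s.d. = {stdev(group):.2f}")
# The pooled numbers look ripe for a t-test, while no single subject's
# trial-by-trial record resembles a normal distribution at all.
```

The point of the sketch is the reviewer's: the near-normality of the group distribution is manufactured by the pooling, not discovered in any individual's behavior.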
The wedding of NHSTP with cognitive science, with the blessing of ‘theory-construction’, has been successful: Count the number, since 1955, of papers given at meetings and published in refereed journals, then duly summarized in “secondary sources.” Count the number of kinds of memory discovered by “operationalizing.”
NHSTP has enabled research to be carried out easily; computer programs can both produce and analyze data, all but untouched by human hands, or thought. Doing such research is easier than observing, counting, and classifying.
That most findings are trivial, that the theories are all but irreconcilable, that answers to most questions lie buried under ten to the nth bytes of “information” is becoming evident. A cognitive scientist now wonders publicly whether we have been “spinning our wheels” for the past thirty years or so.
Behavioral science needs data on the individual behaviors of individual organisms, each finding ‘verified’ (replicated) by data taken from a number of other individuals, one by one. The visual methods introduced by Tufte, non-parametric “quick and dirty” tests, and descriptive statistics (excluding means and standard deviations) suffice for testing generalizations, confirming or disconfirming them.
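A “quick and dirty” check in this spirit can be sketched with a sign test: replicate the finding individual by individual, count successes, and summarize with a median rather than a mean. The data below are invented for illustration.

```python
# Hypothetical sketch: individual-by-individual replication, summarized
# nonparametrically. All data are invented.
from math import comb
from statistics import median

def sign_test_p(successes, n):
    """One-sided binomial (sign-test) p-value: P(X >= successes | p = 1/2)."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

# Each entry: did the predicted effect appear for this individual? (1 = yes)
replications = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
p = sign_test_p(sum(replications), len(replications))

per_subject_scores = [3, 5, 4, 6, 5, 4, 7, 3, 5, 6]   # invented measures
print(f"effect replicated in {sum(replications)}/{len(replications)} individuals")
print(f"sign-test p = {p:.3f}")
print(f"median score = {median(per_subject_scores)}")
```

The count of individual replications, not a group mean, carries the evidential weight here; the binomial p-value is exact and requires no assumption of normality.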
*No references are cited; this reviewer assumes that the cutting-edge readers of BBS are already familiar with the publications alluded to.