Department of Psychology, Harvard University, Cambridge, Mass.
All of us, whether psychologists or not, observe people acting. We learn rules of “practical psychology.” Some of us, especially the novelists and playwrights, do a remarkably good job of giving plausible accounts of behavior, often in terms that seem pertinent. These writers, however, do not employ the language used by psychologists at either end of the spectrum. They describe ordinary, everyday behavior, and describe it well, but not by using the conceptualizations that psychologists seem to have found useful, nor even terms that can be readily translated into such conceptualizations.
The psychologist’s efforts tend to be limited in their usefulness to the description and prediction of the behavior of people whose behavior is awry, or of people who are engaged in the strange and unusual activities demanded of them in a laboratory of experimental psychology. Dale Carnegie, practical politicians, and, perhaps, everybody but psychologists, concern themselves with simple, ordinary, everyday behavior. One reason for this situation is, perhaps, the lack of methods of conceptualizing behavior, or of abstracting relevant aspects of behavior for study that are not clinic- or laboratory-bound. This lack of methods is due, perhaps, to a conviction that ordinary behavior is too complex and is determined by too many variables to make possible the discovery of any order except by the application of theory.
What I desire to do here is to introduce some concepts, and to describe some experiments derived from them, that suggest that the orderliness of human behavior may be more accessible than has been hitherto assumed. These experiments may accordingly suggest new directions for research on human behavior.
A number of years ago, in a series of articles that were summarized in his book, The Behavior of Organisms (1938), B. F. Skinner introduced two new concepts into the behavioral sciences. These concepts had familiar names, but their experimental and theoretical content represented a sharp break with the past, a break that had been foreshadowed only in the writings of Kantor (1924).
The first of these concepts is that of the operant response, defined as a part of behavior (a) that is recurrently identifiable and hence enumerable; and (b) whose rate of occurrence can be determined as a systematic function of certain classes of environmental variables. These parts of behavior, or actions, are what we can see an animal perform repeatedly. They are not simple muscle twitches or limb movements. Rather, they are meaningful, repeated actions. They constitute bar presses rather than leg extensions, the picking up of food rather than digital flexions, the speaking of words rather than laryngeal contractions. One class of environmental variable of which such responses are functions represents the second new concept. This is the reinforcing stimulus, defined as a recurrently identifiable and experimentally manipulable part of the environment that has the property of modifying the rate of occurrence of those operant responses that have produced it. Like responses, they are parts of the environment that are meaningful to the animal, not abstract physical events. They are doors, spoken words, food, not the energy patterns that concern sense physiologists. The two concepts are closely related to one another, so much so, in fact, that many writers, including Skinner himself, often state that operant responses are defined by their consequences, that is, by their production of a given reinforcement. Without going into detail on the experimental and theoretical questions involved in this statement, let me single out for elaboration certain properties of these concepts that distinguish them sharply from those other concepts of stimulus and response that have been widely understood–and even more widely misunderstood–by both psychologists and laymen.
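The defining relation between the two concepts (a reinforcing stimulus is one that raises the future rate of the responses that produce it) can be illustrated with a toy simulation. The sketch below is not a model the paper proposes; it simply assumes, for illustration, that the probability of emitting the target response drifts upward whenever the response is reinforced, and shows that the resulting rate change is the operational test for a reinforcer.

```python
import random

def emitted_rate(reinforced, trials=1000, p=0.10, delta=0.02):
    """Toy simulation: on each trial the subject emits the target
    response with probability p. If responses are reinforced, p
    drifts upward after each reinforced response; otherwise p is
    unchanged. Returns responses per trial (the observed rate)."""
    rng = random.Random(42)  # same stream for both conditions
    count = 0
    for _ in range(trials):
        if rng.random() < p:
            count += 1
            if reinforced:
                p = min(1.0, p + delta)  # reinforcement raises future rate
    return count / trials

baseline = emitted_rate(reinforced=False)
conditioned = emitted_rate(reinforced=True)
assert conditioned > baseline  # the operational test for a reinforcer
```

Because both conditions draw from the same random stream, the only difference between them is the reinforcement contingency itself.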
The first of these properties is this: it is not possible to determine arbitrarily and a priori what recurrently identifiable parts of behavior will prove to be responses–that is, what parts of behavior will obey the empirically established laws of behavior. True, within limits, an experimenter may “shape” a part of behavior by differential reinforcement, and thus may be able to introduce a new response into the animal’s repertoire, but his ability to do so is sharply limited by the animal itself. Animals of each species, each with its own individual history, come to the experimenter with a repertoire of operant responses that the experimenter can analyze and, within limits, modify. By and large, however, the experimenter must work with those responses that he finds in the animal’s behavior. Not all recurrently identifiable behaviors are responses. After observation, the sophisticated experimenter will often be able to guess which identifiable parts of the animal’s behavior will prove to be responses, that is, will vary as particular functions of reinforcing stimuli.
Similarly, the experimenter does not have unlimited latitude in determining a priori what environmental events he can use as reinforcing stimuli for an animal. As with responses, he can convert, to a limited degree, originally neutral events into reinforcing stimuli. To do this, he must follow certain experimental procedures. There still remains, however, the empirical problem of determining the whole class of events that will reinforce an already identified response of his experimental subject. Again, as with response, the sophisticated experimenter often will be able to guess which of the environmental events that he can make dependent on a response will produce changes in the rate of occurrence of that response in the period following the occurrence of the environmental event.
With some reason, operant responses have been characterized as “spontaneous” or “voluntary.” They are what the animal does “by himself.” This is not to say that operants are independent of the ordinary laws of elicitation (and hence of our usual conceptions of causality in behavior) but rather that, for all practical purposes, the only control the experimenter has over them is the one he exerts by the operation of reinforcement, that is, by presenting the animal with a reinforcing stimulus following the occurrence of the response.
In tackling any new piece of behavior experimentally, or in starting to study the behavior of a previously uninvestigated species, the experimenter has a double problem. He must find responses, and he must find reinforcing stimuli that control their occurrence. More often than not, he designs experiments that involve the simultaneous identification both of the operant responses and of the reinforcing stimuli by demonstrating orderly changes in the rate of some part of behavior that are contingent upon the association between that part of behavior and a specific environmental change. A bar press is a response, and food is a reinforcing stimulus, because when bar presses produce food, the rat presses the bar more frequently. By the same rules, water is a reinforcing stimulus for a thirsty rat, whose response may be the sticking of its nose in one corner of the cage.
In view of these systematic restrictions, one is not surprised that almost all studies of operant conditioning have been made on two responses and on two species. The bar pressing of hungry rats and the key-pecking of hungry pigeons, both with reinforcement by food, have been experimentally studied on a scale that, to put it mildly, is intensive. A few other responses from a few other species, involving a few other reinforcing stimuli, have been studied, but not extensively. In any case, a very large body of experimental evidence has been amassed on the operant (Keller and Schoenfeld, 1950; Skinner, 1938; Skinner, 1953b), and behavioral psychologists may take some pride in the number of stimulus-response laws that have been found.
The problem arises, however, of the relationship of this body of experimental data, and the laws derived from it, to human behavior. For those investigators interested in men, wherein lies the relevance of these laws? Several kinds of attempts have been made to exhibit such relevance. The first of these may be termed “extrapolation by analogy.” The theorist hypothesizes that certain kinds of human behavior are responses and that certain kinds of events are reinforcing stimuli for humans. The rat- and pigeon-derived laws are then assumed to apply, and a set of statements about human behavior is generated that satisfies the writer and his friends, and that horrifies others or (what is worse) leaves them cold. This has been the history of Skinner’s Walden Two (1948), and of his Science and Human Behavior (1953a).
The second tactic is to elaborate a rather complex theoretical structure and then to spell out predictions of human behavior that may or may not be experimentally testable. This, too, has produced books (e.g., Miller and Dollard, 1947) that edify or horrify, depending on the reader.
A third procedure is an experimental one. In this, one studies human behavior in the laboratory under conditions that parallel as closely as possible the experimental procedures followed for the lower animals. Very simple responses are studied, and very simple reinforcers are used. In 1942, Warren and Brown (1943) conditioned children to press a lever, using candy as a reinforcing stimulus. Since then, a number of responses have been conditioned, usually involving tapping a telegraph key or the like as response and, with food, the registration of a number, or the playing of music as reinforcers (e.g., Lindsley, 1954; Green, 1955). The studies have shown clearly the reproducibility in human subjects of the laws of operant conditioning found to obtain in rats. These methods, however, leave something to be desired. The relevance of pressing-a-key-for-a-piece-of-candy to most everyday human behavior may be questioned, and the vexing problem of “awareness” enters, so that a variety of “interpretations of the results” are possible. It may reasonably be questioned whether a human subject, under observation and engaged in unfamiliar activities in a laboratory, will behave as he would if he were not in such a special situation.
The fourth tactic is the one that is the subject of this paper. This is the identification of responses and of reinforcing stimuli, and the verification and elucidation of laws relating them to one another in human behavior under conditions where the subject is acting as naturally as possible, and where, insofar as possible, he is not “aware” of what is going on. The approach is characterized by a broad increase in the classes of both responses and reinforcing stimuli investigated, and by a controlled relaxation of the rigorous (and very probably irrelevant) environmental controls exerted in the laboratory.
The first such investigation was that of Greenspoon (1950). From his observations of “nondirective” therapy, Greenspoon suspected that the therapist’s “mmm-hmm” was a reinforcing stimulus and, hence, that it modified the verbal behavior of the client undergoing therapy. He proceeded to test this notion in the laboratory by instructing each of a large number of separately run subjects to say as many different words as they could. He further assumed that saying plural nouns was a response, and proceeded to show that the relative frequency of plural nouns in the subject’s verbal behavior was a function of the experimenter’s manipulations in reinforcing the subject by casually murmuring “mmm-hmm” each time he said one. Greenspoon went on to discover several other reinforcing stimuli. In the data he reported, none of his subjects was aware that his behavior had changed as a function of reinforcement. None seemed to notice the experimenter’s “mmm-hmm’s,” even though his behavior changed as a function of them.
In 1951, we tried to repeat the Greenspoon experiment in a class at Harvard. The experimenters had had very little previous experience with human conditioning. They were game, however, and cornered their subjects in a variety of places. The results were interesting and instructive. A few experimenters obtained unequivocally positive results, but not always without the subject becoming aware of the reinforcements. The successful experimenters were the most prestigeful, socially adept individuals, and the unsuccessful ones tended to be those of what might be termed lower prestige. At the same time, attempts were made to condition simple motor behavior, using reinforcers such as “mmm-hmms,” smiles, and “good.” The results, as with verbal behavior, were indifferent.
Therefore, a new tack in the research was taken. In an attempt to pin down the relevant variables, we reverted to the study of simple motor behavior and modified the procedure to ensure that subjects responded to the reinforcing stimuli (the experiments described here are more fully reported in a paper now in press: Verplanck, 1956a). Subjects were instructed explicitly as to the environmental changes that the experimenter would manipulate, although no information was given them as to the behavior the experimenter would reinforce. After finding a fellow student who was willing to be a subject, the experimenter instructed him as follows: “Your job is to work for points. You get a point every time I tap the table with my pencil. As soon as you get a point, record it on your sheet of paper. Keep track of your own points.” With these instructions, it seemed likely that a pencil tap, a “point,” would prove to be a reinforcing stimulus. The method worked very well. Indeed, the experimenters were now able to condition a wide variety of simple motor behaviors, such as slapping the ankle, tapping the chin, raising an arm, picking up a fountain pen, and so on. They were further able to differentiate out, or shape, more complex parts of behavior and then to manipulate them as responses. The data they obtained included the results on the manipulation of many of the variables whose effects were familiar in operant conditioning of rats and pigeons. Despite the fact that the experiments were carried out in a variety of situations, the experimenters were able to obtain graphical functions that could not be distinguished from functions obtained on the rat or the pigeon in a Skinner box.
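The contingency those instructions set up, a pencil tap (a “point”) delivered whenever the chosen behavior occurs, can be sketched as a simple loop over an observed event log. The event names and timings below are invented for illustration; only the contingency itself comes from the procedure described above.

```python
# Hypothetical event log: (minute, observed behavior). The behaviors
# and times are invented; the contingency is the one described in the text.
log = [
    (0, "tap_chin"), (1, "raise_arm"), (2, "tap_chin"),
    (3, "pick_up_pen"), (4, "tap_chin"), (5, "slap_ankle"),
    (6, "tap_chin"),
]

TARGET = "tap_chin"  # the behavior the experimenter elects to reinforce

points = 0
for minute, behavior in log:
    if behavior == TARGET:
        points += 1  # pencil tap; the subject records a point

print(points)  # → 4
```

The subject is told only about the points, never which behavior earns them; the experimenter’s sole lever is when to tap.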
To be sure, the responses studied in humans were very different from the key pecking of a pigeon, or the bar pressing of a rat, and a point is a very different event from the arrival of a pellet of food in a food hopper. But the general laws relating those parts of behavior demonstrated to be responses to those environmental events shown to be reinforcing stimuli were the same.
More interesting, there appeared to be no fixed relationship between the conditioning effects and the subject’s ability to state verbally the response that the experimenter was reinforcing. That is, the subjects were responding in a lawful and orderly way without necessarily being aware of what they were doing. They were not necessarily “figuring it out.”
With these results, we were encouraged to proceed in two directions. First, we returned to the Greenspoon experiment and repeated it with certain refinements of design, and with experimenters who had acquired considerable experience in the conditioning of human motor behavior. These results are reported elsewhere (Wilson and Verplanck, 1955), but they may be summarized briefly here: experienced experimenters had no difficulty in reproducing the changes in rate of saying words of particular classes that Greenspoon reported. The experimenters’ “mmm-hmm,” although uttered without instructions and introduced as if unintentionally, modified the subjects’ behavior, whether or not the subject noticed it. Subjects usually were aware of the reinforcing stimuli, however, and, at the end of the experiment, would tend to report something such as, “I noticed that you seemed to like nouns for a while, and that then you didn’t care.”
The second direction was forward, and it took a long step beyond the chin tappings and word sayings that had been shown to be responses. For a number of reasons (not the least of which was a set of encouraging results from an exploratory experiment), we hypothesized that in an ordinary conversation, saying sentences of a particular class would act as a response, and that agreement by a hearer, or paraphrasing back to the speaker by a hearer, would prove to be reinforcing stimuli to the speaker.
Two experiments were done. In the first, the response reinforced was stating opinions, where opinions were defined as sentences beginning “I think that,” “I believe that,” and the like. In the second, the response reinforced was making any statement on a preselected topic, the topic being chosen by the experimenter and introduced into the conversation by him in a sentence requiring no answer.
The experiments were carried out in a series of 44 conversations that took place on a wide variety of topics and in a variety of circumstances. The sole restriction on the experimenter was that he carry out the experiment with only himself and the subject present, and under conditions where he could keep accurate but camouflaged records of the subject’s behavior. He was also under instructions to terminate the experiment if the subject gave any indication that he suspected that this was anything other than an ordinary conversation.
The results of these experiments were unequivocal. In the first experiment, on opinion-statements, every one of 23 subjects showed a higher rate of giving opinion-statements during the 10-minute period when the experimenter reinforced each opinion-statement by agreeing with it, or by paraphrasing it back to the speaker, than he showed in the first 10-minute period, when the experimenter did not reinforce. In a final 10-minute period, 21 of 23 subjects showed the converse effect, termed extinction, that is, they decreased in their rate of stating opinions when reinforcement was withdrawn. Irrespective of topic of conversation, or of the situation in which the conversation took place, the expected changes in rate occurred, under conditions where not one subject gave any indication, at any time, that he was “aware” of the experimental manipulation. The subjects’ behavior, an orderly function of the experimenters’ actions, followed the laws of reinforcement without any awareness by the subject, even of the fact that he was in an experiment.
In the second experiment, the experimenter introduced the experimental topic at the end of the first 10 minutes of conversation. Some subjects (N = 6) were controls and were not reinforced through the following 10-minute period. The others (N = 15) were regularly reinforced. Under these conditions, every subject but one replied to the experimenter’s sentence introducing the topic. Those who were not reinforced dropped the topic quickly (within 2 or 3 minutes), whereas those who were reinforced shifted their speech so that almost everything they said through the next 10 minutes of conversation fell into the specified response-class. In all cases, on the withdrawal of reinforcement, and without respect to whether statements on other topics were reinforced, the subjects dropped to a rate of zero on the previously introduced and reinforced topic.
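The rate comparisons reported for the three 10-minute periods (baseline, reinforcement, extinction) amount to counting responses in each period and dividing by its length. A minimal sketch, with timestamps invented for illustration:

```python
# Minutes (invented for illustration) at which the subject emitted a
# response of the reinforced class during a 30-minute conversation.
times = [1.0, 3.5, 6.0, 9.0,                 # 0-10 min: no reinforcement
         10.5, 11.0, 12.2, 13.0, 14.1,       # 10-20 min: responses reinforced
         15.5, 16.0, 17.8, 19.0,
         21.0, 25.0, 29.0]                   # 20-30 min: reinforcement withdrawn

def rate_per_minute(times, start, end):
    """Responses per minute within the half-open interval [start, end)."""
    return sum(start <= t < end for t in times) / (end - start)

baseline   = rate_per_minute(times, 0, 10)   # 4 responses  -> 0.4/min
reinforced = rate_per_minute(times, 10, 20)  # 9 responses  -> 0.9/min
extinction = rate_per_minute(times, 20, 30)  # 3 responses  -> 0.3/min
assert reinforced > baseline > extinction
```

The ordering asserted at the end is the pattern the experiments report: rate rises under reinforcement and falls back when reinforcement is withdrawn.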
Again, no subject gave evidence of being aware in any way that this was not an ordinary conversation.
These experimental results indicate that our hypotheses with respect to the analysis of conversational speech into response and reinforcing stimuli were justified. One example of complicated, superficially variable human behavior proves simple under experimental analysis. Order is readily demonstrable. The time may hence be ripe for an extension of experimental analysis into other areas of human behavior that have been considered unsusceptible to direct experimental investigation.
Orderly, significant data can be collected in situations devoid of the elaborate environmental controls exerted in the laboratory if significant variables are manipulated, if measures of that behavior are possible, and if experimental designs are clear-cut. Experiments can be performed on human subjects “in the field” and under conditions where the subject will not be able to discriminate either his own or the experimenter’s behavior as dependent on an experimental procedure. Hence we can study human behavior with good reason to hope that the mere fact that it is being investigated will not modify it. Such modification has seriously limited the generality of many results of laboratory experiments.
One can now conceive of a broad experimental program designed to determine what other parts of behavior have the functional properties of responses, and how broad a class of environmental events will prove to be reinforcing stimuli for one or another subject. The conceptual tools are at hand. The experimental methodology, involving a number of sophisticated and skilled experimenters, is at hand. A very rapid development in this field may be expected. We may hope for an experimental, rather than an ‘intuitive,’ understanding of many of the things that people do.
GREENSPOON, J. 1950. The effect of verbal and non-verbal stimuli on the frequency of members of two verbal response classes. Unpublished doctoral dissertation, Indiana Univ., Bloomington, Ind.
KANTOR, J. R. 1924-1926. Principles of Psychology. 2 vols. Knopf. New York, N.Y.
KELLER, F. S. & SCHOENFELD, W. N. 1950. Principles of Psychology. Appleton-Century-Crofts. New York, N.Y.
LINDSLEY, O. R. 1954. A method for the experimental analysis of the behavior of psychotic patients. American Psychologist, (Abst.) 9, 419-420.
MILLER, N. E. & DOLLARD, J. 1947. Social Learning and Imitation. Yale Univ. Press. New Haven, Conn.
SKINNER, B. F. 1938. The Behavior of Organisms. Appleton-Century. New York, N.Y.
SKINNER, B. F. 1948. Walden Two. Macmillan. New York, N.Y.
SKINNER, B. F. 1953a. Science and Human Behavior. Macmillan. New York, N.Y.
SKINNER, B. F. 1953b. Some contributions of an experimental analysis of behavior to psychology as a whole. American Psychologist, 8, 69-78.
VERPLANCK, W. S. 1956a. The operant conditioning of human motor behavior. Psychological Bulletin, 53, 70-83.
VERPLANCK, W. S. 1956b. The control of the content of conversation by reinforcement: topic of conversation. In preparation.
VERPLANCK, W. S. 1956c. The control of the content of conversation: reinforcement of statements of opinion. Submitted for publication.
WARREN, A. B. & BROWN, R. H. 1943. Conditioned operant response phenomena in children. Journal of General Psychology, 28, 181-207.
WILSON, W. C. & VERPLANCK, W. S. 1955. Some observations on the Greenspoon effect. Annals of the New York Academy of Sciences. Submitted for publication.
* This paper, illustrated with lantern slides, was presented at a meeting of the Section on May 16, 1955.