
 



History and Theory of Psychology: An early 21st century student's perspective

Paul F. Ballantyne, Ph.D. © 2008.
pballan@comnet.ca


Section 5:

Wax and Wane of American General Psychology (1920-1990s): S-O-R, the Operationist Variable model, and the Crisis of Relevance.

In this Section we'll consider three widespread disciplinary aspects of 20th century American General psychology: the successive expansion of its supposed subject matter (from an admittedly narrow "S-R account" of observable behavior on up to various wider S-O-R accounts of animal or human action and mentality); the gradual adoption of an operationist "Variable model" of empirical research (in its combined experimental and psychometric manifestations); and the ensuing "Crisis of relevance" once the limited descriptive value or methodological confines of the first two aspects began to be recognized.

R.S. Woodworth's initial expansion of the Stimulus-Response account of psychology's subject matter to include dynamic, goal-directed, and interacting Stimulus-Organism-Response processes represented a progressive step, as did his (and E.C. Tolman's) contemporaneous efforts to outline an independent-dependent variable model of empirical research. Similarly, S.S. Stevens (1935a, 1935b, 1939) and E.G. Boring et al. (1945) relied upon the tradition of Logical positivism to advocate the adoption of clearly stated operational definitions as a means of subjecting formerly amorphous psychological concepts to empirical inquiry.

Woodworth (1926, 1934) cautioned psychologists not to interpret his S-O-R account of subject matter as a rationale for advocating a strictly linear three-moment view of empirical method, but this finer point of procedure was neither consistently adhered to by Woodworth himself nor widely appreciated by other general psychologists. Similarly, critiques of the rather fundamental philosophical and methodological faults residing in the 1930s-era Stevens version of operationism were put forward (Waters & Pennington, 1938; Roback, 1952; Ritchie, 1953; and even Stevens, 1951b), but they too failed to sustain the attention of empirical psychologists.

Instead of meeting the central aspects of these cautions and critiques head-on, the discipline carried out a discursive-procedural flanking maneuver. The initial variable model of experimental research and the emphasis upon operational definitions (the two respective efforts to encapsulate how the science of psychology ought to be conducted) were gradually melded with the ongoing, ontologically agnostic tradition of individual differences research. This softened form of "convergent" operationism (a.k.a., the convergent validity of indirect measures approach) was intended to allow empirical research of both the correlational and experimental stripes to carry on without becoming bogged down in acrimonious ontological (a.k.a., theoretical) or epistemological debates (e.g., realism vs. anti-realism).

During this mid-20th century period of scientistic hubris, the most striking characteristic of General psychology is a tacit professional agreement to remain more or less metaphysically neutral: to abandon the former talk about mental or psychological processes in favor of talk about psychological variables, operational definitions of hypothetical constructs, or the convergent validity of multiple measures (see MacCorquodale & Meehl, 1948; Tolman, 1949; M. Marx, 1951; Cronbach & Meehl, 1955; Garner et al., 1956; Cronbach, 1957; Campbell & Fiske, 1959).

Furthermore, the popularized version of this "combined" operationalized variable model of psychological research which appeared in mid-1960s through 1970s textbook depictions of empirical procedure (e.g., Munn et al., 1969, 1972; Evans & Murdoff, 1978) remained within the confines of the scientistic mode of assessment. The simple intent of these depictions was to provide students with an admittedly "eclectic" introductory portrayal of the assumed disciplinary relationship between our empirical techniques and the so-called organismic or internal psychological "variables" (like motivation, personality, and intelligence) they were designed to measure. The structure of the said diagrams, however, was undeniably linear and static, with no depiction of the development of psychological processes per se being provided. It was thereby revealed that the combined variable model, like the preceding strictly behaviorist or operationist traditions, constituted a problematic retreat from the very subject matter of psychology itself.
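
To make the above critique more concrete for present-day readers, here is a minimal illustrative sketch, written in Python, of what the combined operationalized variable model amounts to in practice. It is not drawn from any of the textbooks cited above, and the variable names and data are entirely hypothetical. Notice that the psychological construct ("motivation") exists in the analysis only as the number its measurement operation yields, and that no account of the development of the underlying psychological process appears anywhere in the procedure:

# A minimal sketch (hypothetical names and data) of the "combined"
# operationalized variable model: a construct enters the analysis only
# as the number yielded by its measurement operation, and inference
# proceeds purely through statistical relations among such numbers.
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

# Operational definitions (all invented for illustration):
#   independent variable .... incentive condition (0 = none, 1 = reward offered)
#   "organismic" variable ... motivation, operationalized as a questionnaire score
#   dependent variable ...... number of anagrams solved in ten minutes
participants = []
for i in range(40):
    incentive = i % 2
    motivation_score = random.gauss(50 + 10 * incentive, 8)
    anagrams_solved = random.gauss(5 + 0.1 * motivation_score, 2)
    participants.append((incentive, motivation_score, anagrams_solved))

# "Experimental" question: does the manipulated condition shift the dependent variable?
group_means = {g: statistics.mean(p[2] for p in participants if p[0] == g)
               for g in (0, 1)}

# "Correlational" question: does the operationalized construct covary with performance?
r = statistics.correlation([p[1] for p in participants],
                           [p[2] for p in participants])

print(group_means, round(r, 2))
# Note what never appears above: any model of the psychological process by
# which "motivation" develops or operates; that absence is the linear,
# static confinement criticized in the surrounding text.

The experimental question (do the group means differ by condition?) and the correlational question (does the operationalized construct covary with performance?) exhaust what such a model can ask, and that exhaustion is precisely the point being made here.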

What seems relatively modern or progressive at one historical juncture of the discipline can also be found to be holding it back at a later juncture; hence the need to periodically reconsider and weigh the available methodological options in order to bring about successive advancements in the discipline. The period of reassessment we are labeling here as the Crisis of Relevance was one such attempt at: (1) reconsidering the Behaviorist and Logical positivist assumptions of general psychological research methods; and (2) broadening our understanding of empirical practice in psychology so that it remains at least potentially relevant to questions of human affairs. The proposed disciplinary remedy (namely, a revised but highly generalized "S-M-R variable" definition of subject matter, with its specific theoretical constituents being both empirically assessed along Popperian "falsificationist" lines and propped up by a rickety "dynamic interactionist" metaphor of nature vs. nurture) was itself eventually found wanting in certain respects, and the disciplinary fallout from that realization is still underway in mainstream psychology.

The fundamental methodological disjunction between the traditionally quantitative-mechanical focus of variable model psychology in all its forms and the qualitative-developmental nature of ape, primitive, child, and modern adult mentality was not resolved by that era's reassessment of the issues (Koch, 1959-63; Cronbach, 1975; Guthrie, 1976; Kendler, 1981a, 1987; Koch & Leary, 1985; Hilgard, 1987, 1988; Kline, 1988; Royce, 1988; Staats, 1983, 1991). Not only did the disjunction remain intact, it was further exacerbated by professional infighting between psychological subdisciplines and was even abandoned as "irresolvable" by some of the high-profile metatheoreticians of that extended period (Koch, 1981; Gergen, 1981, 1984; Toulmin & Leary, 1985; Wertheimer, 1988; Leary, 1990; Danziger, 1990, 1997).

There are surely numerous reasons why the relatively progressive intent and potential disciplinary impact of that reassessment era were ultimately undermined in practice. Some of these quite clearly include the institutional or administrative vested interests of its main participants, but anyone looking back at that era's assessment of the issues cannot help also noting the decidedly narrow Amero-centric scope of its efforts to consider the existing methodological alternatives. In any case, there is simply no denying that late-20th century General Psychology, in not only its mainstream (experimental, psychometric, or clinical-developmental) manifestations but also its fringe (historical or theoretical) manifestations, had become a thoroughly middle-class, Amero-centric discipline with little serious consideration of the existing methodological alternatives which fell outside its immediate self-serving purview.

The historiographic object lesson we will be working toward from here on in is as follows: the progressive intentions or critical viewpoint of any given psychologist, or group thereof, is not alone sufficient to bring about the required changes in the discipline. What is needed is the "right kind of psychology." Stating the argument to be made in this Section rather negatively at the outset might help you better recognize and weigh the respective importance of the various remedial arguments as they are presented. The right kind of psychology is one that does not constantly sabotage theoretically progressive and sincerely democratic intentions by: (1) its constrained assumptions about psychological subject matter; (2) its stubborn analytical adherence to the merely mechanical structure of statistical methods; or (3) its self-serving motives for carrying out periodic though merely tactical, esoteric, and face-saving professional adjustments (rather than strategic disciplinary transformations) once these proclivities become a matter of public concern.

The present rather damning critique of the discipline between 1920 and the 1990s will be interspersed with (and ultimately followed up by) some rather specific suggestions as to how the past constrictive methodological assumptions, empirical practices, etc., might still be remedied (revised, augmented, reconstructed) without falling prey to either positivistic scientism or to the outright anti-empirical positions of the Neo-Kantian "Constructivist" camp of psychological metatheory. So hold onto your hats, it is going to be a bumpy ride!

From Watson's S-R to Woodworth's middle-of-the-road S-O-R

In order to outline the early 20th century disciplinary strivings toward a moderate account of psychological subject matter (one that would allow a continuance of research into both observable behavior and what James called conscious mental life), we'll start by contrasting the career of J.B. Watson with the initial part of R.S. Woodworth's considerably longer career in this regard. The contemporaneous debates between E.C. Tolman's moderate "methodological" behaviorism and the respective radical behaviorisms of Watson and C.L. Hull (up to 1948) will also be drawn upon to indicate the built-in confines (in terms of accepted empirical practice and possible theoretical advance) of that early "molar S-O-R" vs. "molecular S-R" disciplinary divide.

These initial considerations regarding the rationale for the rise of a molar S-O-R account of psychological subject matter will set the stage for our subsequent coverage of the independent-dependent variable approach, operationism, and the gradual implicit acceptance of a combined operationalized variable model of research. While our initial goal in these considerations will be to note the disciplinary antecedents and the overlap in timing between the rise of the S-O-R account and the early variable model approaches to research (in Woodworth and Tolman respectively), the overall historiographical motive will be to understand their deeper disciplinary relationship with what followed thereafter. In other words, the eventual "combined" variable model of empirical research logically follows from the earlier S-O-R account of psychological subject matter. The strengths and problems which reside in that combined variable model spring from those inherent in the S-O-R account itself.

J.B. Watson and the disciplinary context for his Behaviorist Manifesto

John Broadus Watson (1878-1958) was raised in a poor, rural South Carolina family by his mother (a pious Baptist) and, at least partially, in the absence of his carousing father, who left the family when John was about 13 years old. Up to that time John seemed destined to follow his father's unruly example, but by age 16 he was off to Furman University (Greenville, SC) for a traditional education in the classical curriculum, in keeping with his mother's aim of making him a minister. In his senior year, John applied to Princeton Theological Seminary but ended up staying on at Furman for an extra year to receive a Master's degree in 1900 (see Fancher, 1990). During that pivotal year, his mother passed away and Watson was thus freed from all family expectations.

He initially enrolled in the then-combined University of Chicago graduate philosophy and psychology curriculum but quickly became disenchanted with John Dewey's rambling teaching style. He then switched over to doctoral research in the new field of animal psychology (under J.R. Angell and the physiologist Henry Donaldson), receiving a Ph.D. in 1903 (see Watson, 1936).

It was customary for American graduate students of this period to participate as subjects in the empirical studies run by their professors and by other students. This was a role which Watson found rather frustrating: "I hated to serve as a subject. I didn't like the stuffy, artificial instructions given... I was always uncomfortable and acted unnaturally" (Watson, 1936, pp. 274-276).

One of the most notable studies in which Watson participated was Angell's study of the localization of sound (Angell, 1903b). The blindfolded "observer" was seated in a chair at the center of a circular device that could be tilted or rotated to generate a tone at any point in the surrounding space. While subjects were asked to provide post hoc introspective reports of their "conscious experience" of the experimental procedure, the major empirical focus of the study was on the "accuracy" of the elicited pointing (the measured correlation between the position of the generated tone and the observer's pointing).

Watson's participation in this experiment is not only notable because he would eventually propose that "observable behavior" is the only proper subject matter for experimental psychology, but also because it is indicative of the immediate antecedent context of Americanized research within which that proposition would be advocated. As Thomas Leahey puts it: "[In] the experiments of this entire period [roughly 1900-1912] one finds, with the exception [of those] from Titchener's laboratory, introspective reports being first isolated from the primary objective results and then shortened or removed altogether" (1991, p. 157).

One can well imagine Watson's reasoning during his early graduate-school days in this regard: If introspective reports from human subjects are so ancillary to the central concerns of even so-called "functional" psychology research, why not do away with them altogether? If reference to consciousness, in either its structuralist "content" sense or its functionalist "utility" sense, does not serve as a reliable source of publicly verifiable data, why blame it on the inadequate training of those providing such reports? Why not attempt to adopt a third approach, one that abolishes introspection as a method for experimental psychology?

Watson's Animal Research

Rather conveniently for Watson, however, his doctoral research (published as Animal Education, 1904) utilized animal subjects. It investigated the effect of various surgical interventions on the ability of albino rats to gain entry into a specially constructed wire box containing food (see Chapter 2 of Ballantyne, 2002 for a picture and further elaboration). This new laboratory animal had first become available to American researchers in 1896, when Adolf Meyer, a young Swiss neurologist, convinced Donaldson at Chicago to use the albino rat for his studies on nervous system development (Boakes, 1984; Demarest, 1987). Donaldson, in turn, lent Watson the necessary cash to publish his dissertation in exchange for Watson's continued help in maintaining the University of Chicago animal laboratory. Watson also served out a part-time instructorship there (1904-1908) before managing to negotiate a position as full professor of "Experimental and Comparative Psychology" at Johns Hopkins (in Baltimore).

Watson's next notable Chicago-era work (published in Psychological Monographs, 1907a) attempted to answer the question of how rats learn mazes. It portrayed the rat's maze performance as an additive chain of discretely learned responses, controlled by kinesthetic feedback, which presumably become increasingly integrated as training continues. A related study with his first University of Chicago graduate student, Harvey Carr (known as the Kerplunk experiment), lent even more empirical weight to this "chain of responses" hypothesis. Once the rats were extensively trained to retrieve food at the end of a long arm of a maze, that arm was shortened by placing a barrier about halfway along it. When released into this shortened arm, the rats ran squarely into the barrier (making a "Kerplunk" sound) and seemed to ignore the food located there (Watson & Carr, 1908; cf. MacFarlane, 1930 below).

Throughout his time at Chicago, as well as during the initial phase of his teaching career at Johns Hopkins University (up to 1912), Watson maintained that the behavior of rats and other lower organisms warranted scientific investigation "regardless of their generality" (Watson, 1906, 1907b, 1908a&b). He even conceded that humans in the same situation probably use "ideational" means and "visual imagery" to navigate mazes (see Watson, 1907a). He was, however, becoming bored with the seemingly unresolvable debates between Jennings, Loeb, Pfungst, and Yerkes over the proper criterion for mental phenomena and over human vs. animal thought (Watson 1907a&b, 1908a, 1909). Accordingly, his subsequent works increasingly portrayed even human learning as the additive development of simple into more "complex motor habits" (Watson, 1913, 1914, 1919a, 1924a, 1924b, 1930).

The Behaviorist Manifesto

Once free from the early moderating influence of Angell's Chicago functionalism, Watson took the first explicit step along this new argumentative path when he produced his behaviorist manifesto, aimed at the systematic ousting of appeals to unobservable "consciousness" from psychology. This manifesto initially appeared in 1913 and was then slightly revised as the first chapter of his Behavior: An introduction to comparative psychology (1914). In both versions Watson asserts that thoughts and images are sensations arising from events outside the brain. Since these events are "habits" identical to other bodily actions, except that they are more difficult to observe, he applies the label of "implicit behavior" to them and suggests, further, that what we usually call "thinking" in human beings is really subvocal speech: "Now [if] it is admitted... that words spoken... belong really in the realm of behavior as do movements of the arms and legs.... the behavior of the human being as a whole is as open to objective control as the behavior of the lowest organism" (Watson, 1914, p. 21).

In order to support this new behaviorist argument, Watson (1914) begins with a summary account of animal sensory research to date, concentrating specifically on the experimental hardware and techniques (such as delayed reaction situations) developed by American psychologists since the turn of the century. Secondly, he outlines various techniques of observational field work, including his own work with noddy terns (1908b), followed by an account of maze learning and other "acquired habits" in rats. This, in turn, is followed by a brief report on measured tongue movements during experimentally derived thought tasks, as an example of "language habits in human beings." This latter evidence, while providing empirical "support" for his subvocal speech hypothesis, was also the most speculatively driven part of Watson's (1914) book. To his credit, however, he openly admitted the limitations of contemporaneous knowledge about such language habits (see also Watson, 1920).

As indicated in Section 4, the professional reception of Watson's behaviorist manifesto was rather muted. Angell (1913), for instance, openly acknowledged the possible usefulness of "behavior as a category" for the description of the objectively observable aspects of psychological phenomena, and Carr eventually adopted a similar conciliatory stance. There were two notable exceptions to this rule, however. Firstly, Titchener's (1914) rather defensive reply was that Watson's position poses no threat to introspective psychology because it is "not psychology" at all but belongs more properly to the class of biological inquiry. Secondly, and perhaps more importantly, the particularly "extravagant" motor theory of thought proposed by Watson (the equation of thought with surreptitious movements in the throat and larynx) was readily questioned by McComas (1916), who called attention to the continuance of thought in persons whose throats had been destroyed by disease (Samelson, 1981).

Before revising (though not abandoning) these initial overstatements regarding the subvocal speech hypothesis (1920), Watson had already turned his research interests and activities decidedly toward carrying out a series of systematic investigations with human subjects. His joint report with Lashley on their investigations into the homing "activities" of birds (Watson & Lashley, 1915) constitutes the last time Watson would spend any considerable time on animal research. Do see Boakes (1984), however, for an excellent account of Watson's early and last research into birds!

Watson's human research

Watson's shift toward human research was most certainly driven by his ongoing system-building ambitions. It was also, however, considerably aided by new and partially serendipitous institutional circumstances which Watson began utilizing toward this end.

The theoretical-technical aspect of the shift came in late 1914, when Watson read the French translation of Bekhterev's Objective Psychology (1907-12). While Pavlov's research on the conditional salivary reflex in dogs had been brought to the attention of American psychologists by Yerkes and Morgulis (1909), it was Bekhterev's laboratory techniques (such as the withdrawal of a paw when electric shock was administered as the "unconditioned stimulus") which struck Watson as both easily applicable to human beings and as one empirical means by which he could support his own claim that "no new principle is needed in passing from the unicellular to man" (Watson, 1914, p. 318).

Accordingly, Watson (with the help of graduate student Karl Lashley) set out immediately to construct his "finger withdrawal reflex" apparatus and then to interpret the forthcoming "conditioned" tone-withdrawal responses in a thoroughly additive-mechanical (a.k.a., molecular) fashion, as if they resided merely at the level of the muscular motor reflex. We will return to this issue later (see Wickens, 1938 below), so let's simply note that Watson was now emphasizing the "conditioned reflex" as a possible basis for the empirical prediction and control of behavior in animals and man, and that some of these empirical forays had already progressed sufficiently far to become the main topic of Watson's 1915 presidential address to the APA (Watson, 1916a). Here Watson described initial experiments (done with Lashley) on humans, dogs, and owls, and suggested the new technique might well prove to have wide generality (see also Cason, 1922a&b).

Watson's other 1916 article, Behavior and the concept of mental disease, further clarified his intention to gradually expand the application of behaviorist techniques to topics outside of animal psychology. He paid tribute to Freud's insight that early childhood experiences might have a pervasive influence on later adult life, but also suggested that "objective" terminology based on the concept of "habit" would be a more useful tool to employ if we are ever to understand the early origins of neurotic behavior.

The partly serendipitous aspect of Watson's shift toward human research also came in 1916, when a number of Johns Hopkins departments were relocated to a new site outside the central Baltimore city core. Here, unfortunately, the facilities for animal research were inferior to the older ones (Boakes, 1984). Although Watson's new office, located in a Psychiatric Institute, afforded him easy access to a possible study population of adult neurotic subjects, he did not have the inclination to pursue that research. Instead, Watson opted to carry out research on infants in a nearby maternity hospital. Infants, like rats, after all, don't talk back and provide no messy introspective reports to cloud the rather fundamental issues which had captured his immediate interest.

As Watson understood them, these fundamental issues included: (i) distinguishing "learned" from "unlearned" behavior (hence his interest in the infant grip reflex, strength of grasp, and the possible environmental origin of hand preference); as well as (ii) investigating the initial scope and possible malleability of so-called "emotional reactions." As first reported in Watson & Morgan (1917), and as later graphically depicted in silent film footage (1919c), Watson evoked the emotional reactions of "fear, rage, and love" in infants (within one month of birth) by a variety of rather crude means. "Fear" was evoked by sudden loss of support, a sudden shake or pull of the blanket, and loud sounds. "Rage" was generated by hampering the infant's movements (by holding the head or constricting the arms and legs). "Love" reactions were obtained when the babies were tickled, rocked, or given some other form of stroking, including "manipulation of some erogenous zone" (Watson, 1919a, p. 200; see also Watson, 1919c). Watson (1919a) also reports data on the grasping strength of infants collected at this time and makes further comments on the possible environmental origin of hand preference, to the great disdain, it should be added, of subsequent developmental psychologists.

The most notable and infamous study arising from Watson's academic career, however, is the one now known as the Little Albert experiment (reported in Watson & Rayner, 1920). Here Watson unequivocally demonstrated the viability of classical conditioning as an agent of behavioral change and also provided possible evidence of an environmental cause for phobic mental disorders (see also Watson, 1919c, 1926).

Was Watson's research unethical?

While the ethical protocols, measurement procedures, and reductive analysis of this early human research are clearly out of step with our current standards of empirical research conduct and theory (see Harris, Whatever happened to Little Albert?, 1979, and also below), they were nonetheless important and require that we put them into some sort of historical-disciplinary context.

On the theory side, Watson seemed to have empirically demonstrated not only that the stimuli which provoke "emotional reactions" in infants "prior to learning" are decidedly "limited," but also that they are easily manipulated through the technique of classical conditioning. This was at least his view (see Watson, 1919a, 1926). Although the additive associationist assumptions (e.g., that adult human emotion is a mere build-up of evoked feeling responses learned by habit from infancy) and the mechanical S-R terminology of Watson's analysis are easily recognized today as too confining, let's at least recognize that he had very good disciplinary and empirical reasons for sticking steadfastly to an environmental interpretation of psychology. The only well recognized disciplinary alternative during that period of American psychology (Eugenics) was unconscionable to him, especially because his own infant studies indicated no difference between the occasional "Black" and the more usual "White" babies in the admittedly crude and fundamentally biological observations made therein (Watson & J. Morgan, 1917; Watson, 1919a; Watson & Watson, 1928; Watson, 1924, 1930).

It is this wider historiographical context of Eugenics-inspired, outright racist psychological research and the ongoing rise of psychometrically-guided administrative school sorting technologies (see Ballantyne, 2002, Chapters 1-4) that sheds the best light on both Watson's oft-quoted (1924, 1930) hyper-environmental overstatement and the candid (less often quoted) tone of the passage which immediately follows it:

"Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist... doctor, lawyer, artist, merchant-chief and, yes even beggar-man and thief, regardless of his talents, abilities, vocations, and race of his ancestors. I am going beyond my facts and I admit it, but so have the advocates of the contrary and they have been doing it for many thousands of years" (Watson, Behaviorism, 1924, p. 82; 1930, p. 104).

Within the context of the times, therefore, the ethics of Watson's empirical human research, as well as his overzealous additive environmentalism, are not as appalling or outrageous as they might appear to the modern reader at first glance. Those labels belong to the theorists and psychometric practitioners in the inheritance camp whom Watson was arguing against. From their blatant or assumed hereditary viewpoint, these figures (including Henry Goddard, C.B. Davenport, R.M. Yerkes, Lewis M. Terman, Ellwood Cubberley, and even Henry Chauncey, the first head of the Educational Testing Service) used their own version of an additive mental ladder, as well as the new individual and group mental testing technology, to do more overall harm to potential immigrants, visible minorities, lower-class job applicants, and generations of both school children and hopeful university applicants than Watson ever might have done to subjects like Little Albert.

Despite knowing when their subject was scheduled to leave the hospital, Watson & Rayner, it is true, did not decondition Little Albert. This, as Harris (1979) points out, is reprehensible according to our current ethical standards of research. They did, however, outline a reasonable procedure for the "'Detachment' or removal of conditioned emotional responses," and this procedure was later used by Mary Cover Jones (1924) to "uncondition" her subject "Peter." The ethical treatment lesson, I guess, had been learned.

Watson, like all of his notable contemporaries (including R.S. Woodworth, E.C. Tolman, and later E.G. Boring, described below), was struggling to do his best within the context of that early era of psychological knowledge. The same cannot be said for figures like Lewis Terman (the progenitor of American "Intelligence" and "Achievement" tests), who took full advantage of a rather repressive sociological context to further his own professional career ends (see Minton, 1988; Guthrie, 1976, 1996).

That repressive sociological context, incidentally, had a rather direct personal impact on Watson's career too. Like his Johns Hopkins predecessor James Mark Baldwin, Watson was forced to resign his chair because of a "sex scandal." He had not only become sexually involved with his research assistant Rosalie Rayner, but also had the audacity to carry out physiological measurements of that "involvement," a fact which apparently came out as evidence during the process of a rather messy divorce. As noted above, Watson continued to publish books on psychology, including Behaviorism (1924; rev. 2nd ed., 1930) and The Psychological Care of Infant and Child (1928), but by the 1930s his main career energies had shifted to the advertising business.

Woodworth's "Dynamic" psychology and the S-O-R formula

As for the ongoing disciplinary standing of Watson's S-R account of psychological subject matter (1920-1938), R.S. Woodworth (1924) had already pointed out that there were at least three other, less radical varieties of behaviorism in existence (including that of E.C. Tolman, 1922; see also Lashley, 1923). We'll start with a generalized overview of Woodworth's consensus-building discourse regarding "dynamic psychology" because it sets the stage rather nicely for our subsequent consideration of the strengths and limitations of both his ensuing and constantly amended Stimulus-Organism-Response account of psychological subject matter and the related early "variable" model of experimental research which Woodworth and others outlined.

Influenced at Harvard by William James, Robert Sessions Woodworth (1869-1962) did doctoral research in psychology at Columbia University under James McKeen Cattell, where he then taught from 1903 to 1942. His major works include: Dynamic Psychology (1918); Psychology (1921, 1929, 1934); Contemporary Schools of Psychology (1931); Experimental Psychology (1938); and Dynamics of Behavior (1958). In all of these, he advocated the eclectic use of behavioral, physiological, introspective, or psychometric methods, depending on which method best fits the situation of empirical interest.

The task which Woodworth set himself was that of drawing together the divergent empirical concerns and theoretical claims of Structuralism, Behaviorism, and comparative psychology (including mental testing) into a "middle of the road" psychology that nearly everyone could accept. In this effort, he can be said to have represented the strivings and sentiments of most general psychologists of that era.

Dynamic Psychology: "activity," motivation, and the study of mental life

While trying to define his own "Dynamic Psychology" approach to General psychology, Woodworth was wrestling with the three-way disciplinary standoff between Watsonian behaviorism, Titchenerian introspectionism, and William McDougall's "Hormic" psychology (a form of motivational psychology appealing to so-called social instincts). The term "dynamic" was employed by Woodworth (1918, 1926, 1930a) to emphasize the urgent disciplinary need to avoid the unnecessary methodological prescriptions and the unwarranted exclusion of relevant empirical research which characterized these former psychological traditions.

Similarly, the first three editions of his introductory text (1921, 1929, 1934) also employed a clever consensus-building strategy and thereby managed (along with his Experimental Psychology, 1938) to become by far the most widely used texts for undergraduate and graduate courses in psychology (Winston, 1988, 1990). In these, as well as in his 1931 historical text, Woodworth adopts the argument that it is in the best interest of all contemporary psychologists to play down their preconceived metatheoretical differences and concentrate upon the actual empirical practices and results of the discipline; for it is in this way that a relatively noncontentious middle of the road psychology might eventually be worked out.

Thus, while the subtitle of his Psychology: A study of mental life (1921) was most certainly a snub to Watson's radical behaviorism, the opening definition of psychology contained therein utilizes the behavior-friendly concept of mental "activity," which includes analysis of the action of bodily organs:

"We conclude, then: psychology is a part of the scientific study of life, being the science of mental life. Life consisting in process or action, psychology is the scientific study of mental processes or activities. A mental activity is typically, ... conscious and we can roughly designate as mental those activities... that are either conscious themselves or closely akin to those that are conscious. Further, any mental activity can also be regarded as a physiological activity, in which case it is analyzed into the action of bodily organs, whereas as 'mental' it simply comes from the organism or individual as a whole. Psychology, in a word, is the science of the conscious and near-conscious activities of living individuals" (Woodworth, 1921, p. 17, emphasis added).

This broad "activity" concept struck a congenial cord with moderate behaviorists including Harvey Carr (1925) who (as indicated in Section 4) adopted it though in an albeit less effective manner as a theoretical foundation-stone for his own introductory text. Both Woodworth and Carr are in agreement that the theoretical concept of activity is one in which the long-standing methodological dichotomies (subjective-objective; internal-external; vital-mechanical; whole-part; biological-individual, etc.) might be captured and made amenable to study in experimental settings.

"Activity" itself, Woodworth later defines as "any process which depends upon the life of the organism and which can be viewed as dependent upon the organism as a whole" (1930a, p. 328). In other words, it is a broad concept which includes (subsumes) not only physiological processes in the brain and spinal cord of an intact organism; but also the bodily movements, unconditioned and conditioned reflexes, and observably goal-directed actions or consciously motivated behaviors of those living organisms.

"Behavior" he suggests rather early on in the 1921 text, "would be a very suitable [stand-alone] term, if only it had not become so closely identified with the [Watsonian] 'behaviorist movement'... which urges that consciousness should be entirely left out of psychology, or at least disregarded" (p. 2). Reiterating the argument he writes:

"What the behaviorists have accomplished is the... overthrow of the doctrine... that introspection is the only real method of observation in psychology.... But we should be going too far... to exclude introspection altogether.... Let us accumulate psychological facts by any method that will give the facts" (Woodworth, 1921, p. 13).

Likewise, while Woodworth (1930a) argues that psychology ought to include introspective analysis (it should utilize experiential, phenomenological methods where applicable), he also explicitly states that there is "nothing in that requirement" which limits psychology to "the study of [Titchenerian] sensations." In his opinion, both experiencing and acting (a.k.a., behaving) should be accounted for in the new experimental psychology, and Woodworth is careful to point out why this is so:

"With the advent of laboratories and groups of psychologists the subject of an experiment became typically someone other than the investigator himself, and psychology became in practice the 'psychology of the other one,' to use a pregnant phrase of Max Meyer [1921]. But if we are studying the 'other one,' there is no excuse for limiting the study to his 'experiences' [a la Titchener]; we should study his behavior as well, if only to round out our study and to see things in their relations.... for neither behavior... nor experience... is anything but a fragment when taken alone" (Woodworth, 1930a, p. 330, emphasis added).

Woodworth's (1930a) climactic argument for the inclusiveness of psychology's subject matter and its place in a hierarchy of science runs as follows: since both introspective experience and behavior are dynamic and goal-directed rather than passive, "we can combine experience and behavior under the inclusive term, 'activity,' and say that psychology is the study of the activities of the individual as an individual" (Woodworth, 1930a, p. 331). In turn, with physiology as the study of the activities of parts of the organism and sociology as the study of groups of individuals, psychology so defined would cover all the positive findings of behaviorists as well as introspectionists, while abandoning their problematic "taboos". Exactly who initially set up these taboos, and what they were, is stated rather clearly in Woodworth's autobiographical statement, also from 1930:

"My bogey men -the men who most irritated me, and from whose domination I was most anxious to keep free- were those who assumed to prescribe in advance what type of results a psychologist must find, and within what limits he must remain. Münsterberg was such a one, with his assertion that a scientific psychology could never envisage real life. Titchener was such a one, in insisting that all the genuine findings of psychology must consist of sensations. Watson was such a one, when he announced that introspection must not be employed, and that only motor (and glandular) activities must be discovered. I always rebelled at any such... [a priori] table of commandments" (Woodworth, 1930b, p. 376).

Though in need of further elaboration, Woodworth's general arguments seem perfectly admissible, and they raise a host of historiographic questions, including: How far did Woodworth himself proceed along the dynamic psychology path outlined above? Were his arguments and elaborations adopted by the subsequent experimental psychology tradition? And if they were not sufficiently adopted, what disciplinary forces held such an adoption back? While bearing these questions in mind, let's take special notice of two central points: (1) why Woodworth believed that "activity" (acting, doing, performing), along with its "ends" or "motives," was so important to study; and (2) that it was in the hope of studying both that his S-O-R formula for psychology was put forward.

"Motivation has always seemed to me a field of study worthy to be placed alongside of performance [behaving].... We need a study of motivation in order to understand the selectivity of behavior and its varying energy. In my books I have sought repeatedly for a formula that should bring motives right down into the midst of performance instead of leaving them to float in a transcendental sphere" (Woodworth, 1930b, p. 371).

In utilizing the "mental activity" concept and in emphasizing the importance of studying "motives," Woodworth's intent is to provide a means by which the discipline can use the objective notions of cause and effect in an internally dynamic and developmental manner, rather than in the merely external mechanical manner that was characteristic of Watson's behaviorism. As outlined below, the "formula" Woodworth came up with, while initially stated in an expanded S-R plus "central tendencies" manner (1921), was amended "repeatedly" to become various versions of Stimulus-Organism-Response. Although these new formulas addressed certain deficiencies of the discipline, they also contained their own difficulties, which, in my opinion, served in combination with other disciplinary forces to preclude the adequate fruition of Woodworth's above-named intent and to forestall their adequate uptake into the subsequent tradition of empirical psychological practice.

Well, that's the generalized overview of our initial considerations. What now follows is a detailed account of Woodworth's all-important S-O-R hinge argument, upon which the remainder of Section 5 and much of the remainder of this course will depend. If I can convince you of the soundness of Woodworth's intent (to provide a "dynamic" S-O-R account), and if I can convince you of the disciplinary relevance of E.C. Tolman's "molar" behaviorist approach, as well as indicate the inherent limitations which resided in them both, then our joint retrospective passage through the problematic "operationist era" of psychological research and the ensuing disciplinary "crisis of relevance" will not be burdened with conceptual or contentual difficulty but will constitute a valuable object lesson in what to avoid in your own subsequent career. So listen up, because this part of the Section is really important! It will also be told in a way that you may not be accustomed to, or even encounter again for some time, if at all.

Rise of the "molar" S-O-R formulas and early Variable models (Woodworth and E.C. Tolman)

While Woodworth was sifting through the existing intellectual products of the whole discipline (and presenting various versions of "S-O-R" to encapsulate them), E.C. Tolman's systematic energies were being focused upon the manner in which key terms in the learning subdiscipline might be brought under experimental scrutiny. The two efforts overlapped considerably when it came to portraying psychological experiments as the empirical investigation of connections between "independent and dependent variables." Although their respective early variable models were intended to support similar molar S-O-R rather than molecular S-R accounts of psychological subject matter, the ontological status of the connecting middle term in each ("organismic" vs. "intervening" variables) differed considerably. The combined significance of these overlaps and differences will be highlighted in our consideration of their respective uptake into what would become the fully operationalized variable model of subsequent empirical psychological research.

Woodworth: From augmented S-R to successive S-O-R accounts

By way of describing the troubled state of the discipline which he was now setting out to improve upon, Woodworth (1921) provides the following rather pithy footnote: "First psychology lost its soul, then it lost its mind, then it lost consciousness; it still has behavior, of a kind" (p. 2). In order to better reflect the broad practical and empirical concerns of contemporary psychologists, the definition of subject matter needed to be revised away from the former false choice of either introspective content or observable behavior toward the more inclusive new concept of mental activity.

Woodworth recognized rather early on that Watson's radical behaviorist system was the mirror image of Titchener's introspectionist system. Structuralism focused on the "How and What" of consciously experienced content; Watson's system focused on the "What and How" of overt behavior. It was just as mechanistic, elementistic, and associationist as structuralism but concentrated upon a different level of psychological subject matter. The notions of stimulus and response now constituted the fundamental units of such analysis, just as the elements of "sensations, images, and feelings" did in Titchener's system. The behaviorist's program to date was simply one of translating the old mental mechanism into stimulus-response terminology (after Hillner, 1984).

In contradistinction to these, Woodworth's "dynamic" psychology (1918 onwards) "refuses to be a party to any such mutilation" of psychological subject matter (1930a, p. 333). Stated plainly, any psychology that does not set out to answer all of the journalistic "W-Fives" (Who, What, Where, When, and Why) has no real chance of answering the closely related "How" question! Such approaches to psychology were intellectually and empirically shackled from the outset. Hence Woodworth embarks upon a joint emphasis on motivation (the functional aspect of internal conscious or "near conscious" content) as well as on the externally observable (behavioral-performance) aspects of mental activity, the latter of which Watson (1914) himself had recognized as also "functional" in the physiological process sense of the term.

As that era's chief metatheoretician, Woodworth produced a succession of works to remedy what he viewed as a falsely constrained disciplinary situation. He begins in 1921 with an expanded S-R analysis, where he argues, like Dewey (1896), that these functional categories of analysis should not be regarded as separate, disjointed events occurring in a sequence; and his efforts culminate in 1934 with a full-fledged swing away from S-R toward a "dynamic" S-O-R account accompanied by a complementary experimental methods rationale.

Woodworth's early 'expanded S-R' account

The following two diagrams and argumentative examples from Woodworth's 1921 account will serve to indicate the intellectual baseline of his revisionist efforts in this regard. Although Woodworth carefully prepares the way for his readers by providing various informative ground-up statements in the preceding chapters ("Reactions" and "Reactions of Different Levels"), the crucial 1921 chapter for our present concern is "Tendencies to Reaction" (pp. 68-88). It is here that he suggests that "no violence has been done to the general conception of a [Stimulus-Response] reaction" by the "addition" of seemingly teleological concepts like "motive, directive tendencies and preparatory reactions" (p. 84). So, let's go through this chapter to find out what Woodworth is on about.

One advantage of basing our psychology on reactions, he suggests, is that it keeps us close to the ground (within the realm of concrete, specific observations rather than "fanciful" speculation). "Whenever we have any human action before us for explanation," we have to ask "what the stimulus is that arouses the individual to activity and how he responds" (p. 68). Stimulus-response psychology is "solid," for if it can "establish the laws of reaction," so as to predict what response will be made to a given stimulus, it furnishes the 'knowledge that is power'. "Perhaps no more suitable motto could be inscribed over the door of a psychological laboratory than these two words, 'Stimulus-Response'" (p. 68).

"But," he writes -and this is the crucial part- "we must not allow it to blind our eyes to any of the real facts of mental life; and at first... it seems as if motives, interests and purposes [do] not fit into the stimulus-response program" (p. 69). To emphasize this point, and in order to prepare us for what he believes is the proper resolution to issue at hand, Woodworth makes the following contrast: Suppose we are looking out on a city street during the noon hour. We see numbers of people standing or walking about, looking at anything that chances to catch their eye, waving their hands to friends across the street, whistling to a stray dog that comes past, etc. These people are responding to stimuli and there is no difficulty in fitting their behavior into the stimulus-response scheme. But here comes someone who pays little attention to the sights and sounds of the street, simply keeping his eyes open enough to avoid colliding with anyone else. He seems in a hurry. He is not simply responding to stimuli, but has some purpose of his own that directs his movements. Here is another who seems to be looking for someone in particular, making him extra responsive to certain sorts of stimuli.

"Now it would be a great mistake to rule these purposeful individuals out of our psychology" (p. 70). We wish to understand busy people as well as idlers:

"To complete the foundations of our psychology, then, we need to fit purpose into the general plan of stimulus and response. At first thought, purpose seems a misfit here... But if we could show that a purpose is itself an inner response to some external stimulus, and acts in its turn as a 'central stimulus' to further reactions, this difficulty would disappear" (Woodworth, 1921, p. 70, emphasis added).

The purposeful person, Woodworth suggests, "wants something he has not yet got, and is striving towards some future result" (p. 71). Whereas a stimulus pushes him from behind, a goal beckons to him from ahead. This "element of action directed towards some end" is absent from the "simple" (physiological reflexive or casual) response to a stimulus. Thus Woodworth presents the following diagram to allow for "action steered in a certain direction by some cause acting from within the individual" (p. 71).

Woodworth's aim here is to find a way around what would later be called (by A.N. Leontiev, 1979) the "postulate of environmental immediacy," which was inherent in the behaviorist S-R scheme of John B. Watson. To this extent we can be sympathetic to Woodworth's aim, but we should also be somewhat wary of his proposed solution as depicted above because, among other things, he would later revise and then partially repudiate it on the grounds that it remained too "linear," static, and mechanical overall. We will indicate why shortly.

For the time being, however, Woodworth accepted this depiction and sought to indicate the specific mechanisms by which purpose acts as a "central stimulus." Even at this time he is careful to point out a vertical (evolutionary) dimension to the scheme he is presenting. "Purpose" is not the best general term to cover all the "internal factors" that direct activity, he argues, since this word implies conscious foresight of the goal. This "highest level of inner control" over behavior differs from the "two levels" below it which, while being carried out "towards a certain result," do not involve "conscious foresight of that result" (p. 71):

"The lowest level, that of organic states, is typified by [physiological] fatigue. The middle level, that of [individual] internal steer, is typified by the hunting dog, striving towards his prey, though not, as far as we know, having any clear idea of the result at which his actions are aimed. The highest level, that of conscious purpose, is represented by any one who knows exactly what he wants and means to get it" (Woodworth, 1921, p. 72).

While Woodworth admits that no single word (including motives) is proper to cover all three levels of directedness toward ends, he nonetheless suggests that "Motives" will serve, as long as we agree that a motive is not always clearly conscious or definite "but may be any inner state or force that drives the individual in a given direction" (p. 72). In my opinion, this over-generalization of a perfectly useful colloquial term beyond its commonsense meaning is both uncharacteristic of Woodworth and extremely unfortunate. If there are indeed three levels of analysis to be differentiated here, why not explicitly outline them by providing different terms for the kinds of activity being carried out and the conditions under which (or ends toward which) they are being carried out? It should be mentioned that it would be a long while before A.N. Leontiev's Activity Theory did precisely that!

Thus, from our current historically retrospective vantage point, when Woodworth (p. 74) asks whether "the facts already cited compel us to enlarge somewhat" the conception of an S-R reaction, we can answer a qualified 'No' to his definitive 'Yes.' Recall that Dewey (1896) had already suggested that while S-R analysis works for automatic physiological reflexes, it fails to capture the intentional aspects of even the simplest forms of individual voluntary acts (let alone the more complex ones). Instead of expanding the scope of a fanciful analytical metaphor, why not just give it up? Dewey, in his own arcane prose, tried to argue this point by rejecting both the "reflex arc" and the "reflex circuit" views of intentional human action (see Section 4). Woodworth, like Dewey and Carr too, seems to know "what he wants" (a levels approach to mental activity) but can't exactly find the conceptual means to get it!

Meanwhile, having answered "yes" to the above question, Woodworth proceeds to expand his S-R account still further in an attempt to address the issue of the "selectivity" of these directive tendencies towards ends (which tendencies will actually result in the carrying out of a response and which will not). To address this selectivity issue, Woodworth utilizes the concept of preparatory reactions (labeled as "P").

It is here that we start to get a glimpse of the linear-mechanical nature of his initial (1921) expanded S-R scheme. The more terms he plugs into the above schematic space between Stimulus and Response, the more mechanical the account appears. The immediate question raised by Woodworth himself is one of the nature, scope, and basis of these preparatory "reactions". As in the case of "motives" (as defined by him), these "reactions" are said to operate on three distinct levels. They seemingly depend upon "organic states" (e.g., physiological reflexes, hunger, muscle fatigue), prior "conditioning," or "learning," depending upon the related level of mental activity being considered. The examples of the functioning of preparatory reactions provided by Woodworth (pp. 74-82) likewise nearly run the gamut of conceivable generality: from "neural" preparation, to the body orientation of rats towards the designated box of a delayed reaction experiment, to the seeking of food or drink by needy animals, to a baby's refusal of a bottle, to Woodworth's own rising from his chair in order to turn on a light for reading in the late afternoon.

I say "nearly" above because there is no reason why -according to Woodworth's presented scheme- that a college or university education itself should not also be termed a "preparatory reaction." Yet, as in the case of his overgeneralized"motive" definition, one is left with the question why there are not different terms being utilized for each of these seemingly very diverse albeit "preparatory" processes. As we will show below, the organismic and individualized analysis exemplified here is a recurrent theme in all of Woodworth's successive works. He never really manages to move beyond that level of analysis and this is a very important point to note for future reference.

Woodworth's shift to S-O-R

On the more progressive side of things, however, Woodworth does eventually recognize that his early "expanded S-R account" (which he originally claims will "do justice to all of human behavior"; 1921, p. 84) is not, in fact, adequate to the job. This transition toward a relatively fuller S-O-R account takes place gradually for him between 1921 and 1934.

In 1926, for instance, while still proposing an expanded S-R account, he is careful to address the issue of the apparent linearity of the early account in particular. After opening the paper with a direct reference back to Dewey and James, Woodworth (p. 122) first reiterates his 1921 point that a stimulus is not to be identified with cause; and then suggests too that the often assumed "serial order" of events (S-R) is also a mistake because "very seldom does a stimulus find the organism in a completely resting, ... unpreoccupied state" (p. 124).

"Ordinarily, a stimulus breaks in upon some activity in progress... This activity has a trend towards some goal, immediate or remote. We have, then, not first stimulus, then activity of the organism; but first an activity going on, next an intercurrent [disrupting] stimulus, and then the activity modified in response to the stimulus.... What we see is an activity going forward in a definite direction and rendering the organism unresponsive to certain stimuli, while unusually responsive to others" (Woodworth, 1926, pp. 124-125).

This progressive (though still organismic and individualized) theme is further elaborated in the 1929 text, where, -after mentioning that the old S-R account is "sometimes interpreted to mean that if you know the stimulus, you can predict what the response will be"- Woodworth presents the adjacent straight-line Stimulus-Organism-Response diagram (p. 226) to distinguish his position from that older view.

Woodworth's new S-O-R account, however, reaches its height in the 1934 edition with an explicitly stated, environmentally interactive, and dynamic S-O-R account of subject matter. In the two following diagrams (from Woodworth, 1934, pp. 8-9), W=world; S=stimulus; O=organism; and R=response:

While this new account was most certainly a progressive step at the time it was proposed, the main question which concerns us at our present historical juncture is whether the "continually interacting" (reciprocal) relationships depicted in these S-O-R diagrams really solved the problems indicated above, or whether they were just as inadequate to that task as Woodworth's initial expanded S-R account had been. A definitive historical judgment on the methodological viability of the S-O-R account, however, will have to await our fuller assessment of the two most cogent ways that the new formula was applied by Woodworth (1934): (i) to define the very structure of psychological experiments; and (ii) to make pronouncements on the heredity vs. environment debate. To gain a better grasp of the disciplinary setting in which Woodworth's descriptive S-O-R formula was initially put forward, however, we must turn to the roughly contemporaneous efforts of E.C. Tolman.

E.C. Tolman and the early application of 'molar' Behaviorism

Unlike Woodworth (whose disciplinary contributions were ones of methodological, historical, and metatheoretical analysis), Edward Chace Tolman (1886-1959) was primarily an experimental psychologist who oversaw a rather circumscribed, hard-nosed empirical research program at the University of California (Berkeley). Tolman utilized the forthcoming evidence in support of a set of argumentative propositions which he successively called: "purposive" (1925, 1932), "molar" (1926, 1932), "operational" (1936), and "cognitive" behaviorism (1948).

While the structure of his research program and subsequent analysis adhered roughly to the molar S-O-R account outlined above, Tolman himself refrained from making pronouncements on the ontological status of what he called "intervening variables" (those that reside between observable Stimulus and Response variables). In his 1925 critique of "mentalistic" positions (like McDougall, 1912; and Woodworth, 1918), for instance, Tolman argued that "purpose" need not be inferred as an intentional power of the mind but was a mere matter of referring to the goal-directedness of observable behavior (see also Tolman, 1920, 1923, 1935). Accordingly, he has subsequently been classed by historians of psychology as a "methodological" rather than a "metaphysical" (mentally eliminative) behaviorist.

From the time of his earliest argumentative and empirical review articles (1922, 1932) through to his major book-length or summary reviews of the molar behaviorist maze learning experimental tradition (1938, 1948), Tolman argued that early S-R theorists, by adopting the language of reflexes, had misled themselves into a reductive molecular account. They had portrayed observable behavioral responses as merely equivalent to what is meant in physiology by a muscular reflex. Watson, in particular, he suggests, had vacillated between those two different notions of behavior -psychological and physiological- without ever understanding how they differ. For Tolman, behavioral responses are more than the sum of their physiological parts. Behavior, as such, is an "emergent" phenomenon which has descriptive and definitional properties of its own: "We shall designate this as a molar definition of behavior" (Tolman, 1932, p. 6).

Given the bearing of Tolman's research and reviews on many of the contemporaneous disciplinary debates already mentioned, we will briefly summarize a few of the most relevant empirical studies carried out in this tradition (including Simmons, 1924; Elliot, 1928; Blodgett, 1929; MacFarlane, 1930; Tolman & Honzik, 1930a&b; Wickens, 1938; Tolman, Ritchie, & Kalish, 1946). The essential tension or strained inconsistency between Tolman's initially assumed ontologically agnostic stance toward mental processes and his eventual recognition of goal-expectancy, global avoidance responses, place learning, and cognitive maps as psychological "entities" will be highlighted along the way.

 

Context and debates of the Berkeley research program

Although Tolman (1932) would eventually overstate the case for the generality of the methods and the conclusions drawn from these experiments (as being suitable for investigating "everything in psychology save language and society"), they did collectively constitute a significant contribution to the reorientation of early behavioral research away from merely physiological and towards at least rudimentary psychological questions. They warrant our historical attention because it was by way of reference to them that Tolman was first able to abandon Watson's muscle twitchism, and then counter Clark L. Hull's newer, though equally molecular, "reinforcement gradients" form of behaviorism.

The widest debate entered into by Tolman's Berkeley lab concerned which (or how many) of the available associationist laws of conditioning to accept: recency and frequency alone, or the "law of effect" too? In some respects this was the most general debate they entered, but it was also the easiest for them to resolve definitively. Here, Tolman took up the middle ground between Watson and Thorndike, so some initial commentary on their respective points of view is necessary before wading into the empirical evidence mobilized in this regard.

Watson (1914) had proclaimed that all learning in rats, cats, and humans is dependent upon and requires immediate reinforcement. He rejected Thorndike's (1890s era) law of effect (regarding pleasant or unpleasant results) and maintained that the principles of recency and frequency of reward are the only ones needed to explain how learned habits in animals and man occur.

Thorndike's account of his cats (see Section 4) was one of "trial-and-success" learning, with the neurological "stamping in" of the habits which immediately precede escape from the puzzle box -without reference to use of "ideas." Watson further radicalized this rejection of ideation by suggesting that there was no room in psychology for even such a backwards looking neurological account of new habit formation because not only did it seem to assume conscious awareness in animals, it also smacked of "teleology" which he viewed as hopelessly metaphysical. All accounts of cause and effect for Watson followed the linear mechanical (efficient) causal S-R chain pattern. Both Woodworth's continuously interactive S-O-R and Tolman's purposive behaviorism would take exception to this one-way linear aspect of Watson's account.

In Tolman's 1915 Harvard dissertation, he compared the memory of nonsense syllables learned in the presence of noxious and pleasant odors as presented to the nostrils of human subjects by way of an olfactory apparatus. Likewise, one of his first few publications (Tolman, 1917) reported a retroactive inhibition effect under such conditions.

Upon reaching Berkeley, Tolman, like many others of the era, made simultaneous use of texts by Thorndike (1911) and Watson (1914) to teach classes in comparative psychology. So, it should not be surprising that while he agreed with Watson and Thorndike regarding the bracketing of reference to introspective conscious states in animals and man, he also sought to temper the radicalism of Watson's complete rejection of goal-directed behavior.

On the whole, the research program of the Berkeley lab was aimed at drumming up empirical evidence for the latter options of the following five interrelated issues: (I) Passive peripheralism or active expectancy of reward?; (II) On Laws of association: mere recency and frequency or effect too?; (III) On the perceptual processes used by rats to run mazes: additive concatenation of molecular kinesthetic responses or molar place learning?; (IV) In humans: muscle twitches or global response learning?; and (V) Countering Hull: numerical response gradients at maze choice points or cognitive maps? So, let's take a look at the evidence.

(I) Passive peripheralism or active expectancy of reward?

Watson had put forward a thoroughly molecular and largely passive-mechanical account of behavior. Observable behaviors of all sorts are triggered by immediately present environmental stimuli on the basis of recency and frequency. When carefully studied in the laboratory, they can be rendered physiological by analyzing them into the complexes of smaller muscular or glandular components from which they are made. Tolman understood behavior, however, as purposive. For him, the aim of laboratory study was to analyze the goal-directed nature of integrated molar acts.

An initial series of investigations presented by two of Tolman's students (R. Simmons, 1924; and M.H. Elliot, 1928) constitute an important early phase of the empirical attack on Watson's above stated "peripheralisms." Simmons reasoned that if Watson was correct, there should be no measurable difference in the influence of different sorts of reward on the maze performance of rats because, after all, a maze was an apparatus in which an experimenter could render the recency and frequency for each of various kinds of rewards the same. Simmons' 1924 work is notable, therefore, because it was the first to clearly indicate that a hierarchy of reward preferences exists for rats in a maze.

Simmons ran separate groups of equally food-deprived rats through a simple alley maze and measured the average time it took each group to master the maze for different food "incentives". With the time from start to food box being defined as a "run," a successful run being defined as an error-free (direct route to the food) run, and three consecutive successful runs being defined as "mastery," Simmons found that: (i) on average, rats rewarded with bread and milk ran fastest; (ii) those rewarded with sunflower seeds ran the next fastest; and (iii) those that were simply removed from the goal box after each successful maze trial mastered the maze the slowest. Certain rewards, Simmons concluded, were more "demanding" (motivating) of maze performance than others.
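Simmons' criteria translate readily into explicit terms. The following minimal Python sketch merely encodes the definitions given above (a trial counts toward "mastery" only as part of three consecutive error-free runs); the trial records and group labels are invented for illustration, not Simmons' actual data.

    # A minimal sketch of Simmons' mastery criterion, using invented trial data.
    # Each trial is recorded as the number of errors (wrong turns) made on that run.
    def mastery_trial(errors_per_trial, criterion=3):
        """Return the 1-based trial at which mastery (three consecutive
        error-free runs) is reached, or None if it never is."""
        streak = 0
        for trial, errors in enumerate(errors_per_trial, start=1):
            streak = streak + 1 if errors == 0 else 0
            if streak == criterion:
                return trial
        return None

    # Hypothetical groups: bread-and-milk, sunflower-seed, and removal only (no food).
    groups = {
        "bread_and_milk": [4, 2, 1, 0, 0, 0],
        "sunflower_seed": [5, 3, 2, 1, 0, 0, 0],
        "no_food_reward": [6, 5, 4, 4, 3, 2, 1, 0, 0, 0],
    }
    for name, errors in groups.items():
        print(name, "mastered at trial", mastery_trial(errors))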

Elliot (1928) also weighed in on this issue of "reward demand." When rats that had been trained (had already mastered the alley maze -shown right) with a "highly demanded" reward encountered a less demanded reward on later trials, they ran the maze more slowly and made more "errors" on those subsequent trials. Alternately, rats trained first with a "less demanded" reward improved their average run performance when the higher demand reward was substituted. Elliot concluded that maze performance was not a direct result of mere reinforcement; it was also a function of the kind of reward.

For Tolman (1932, 1948), the Simmons research (showing the relative effectiveness of various incentives on maze running performance) and the Elliot research (showing the effect on maze running of changing from high to low demand incentives) were clear evidence that rats acquire specific expectancies about the goal to which their behavior is directed. Another Tolman student, O.L. Tinklepaugh, we should also note, drew similar conclusions from his 1928 study with monkeys. In other words, the former linear mechanical S-R unit of Watson had now become expanded to a forward-looking "Stimulus--Intervening Variable--Response" analysis in Tolman and his students.

(II) Laws of association: Recency and frequency or effect too?

In a related set of findings, it was H.C. Blodgett (1929) who challenged Watson's most generalized assumption that learning could not occur in the absence of reinforcement. Just how simple it was to resolve this issue is indicated by the rather uncomplicated structure of the Alley Maze used by Blodgett. Starting at the bottom left, the rats proceeded through various one-way doors (D) to the Food box (top right).

Three groups of rats were trained to run this Alley maze. Group 1 (the control group) was rewarded with food every time they reached the goal box. Group 2 (the first experimental group) did not find food for the first six days of training, but were merely removed once they reached the goal box. Group 3, (the second experimental group) ran without food reward for two days, found food on the third day, and continued to find it for the rest of the experiment.

Both experimental groups showed a marked reduction in the number of errors the day after the initial transition from nonfood to food reward conditions and continued this improved performance thereafter. Tolman & Honzik (1930b) repeated the experiment with some variations to produce the following table (reproduced from Hilgard, 1987 in its entirety).
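The logic of Blodgett's design can be restated schematically. The sketch below is a hedged illustration only: the group labels and day counts follow the description above, but no real error scores are included; it simply lays out when each group first encountered food, which is the point after which the latent-learning reading predicts a sharp error drop.

    # Blodgett-style latent-learning schedule (illustrative only; no real data).
    # For each group, record the first training day on which food was present.
    reward_schedules = {
        "group_1_control": 1,        # food on every trial from day 1
        "group_2_experimental": 7,   # no food for the first six days
        "group_3_experimental": 3,   # food first found on day 3
    }
    for group, first_food_day in reward_schedules.items():
        # On the latent-learning interpretation, errors should drop sharply
        # on the day *after* food is first encountered.
        print(group, "-> expected error drop on day", first_food_day + 1)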

Clearly these rats had "learned" something during the nonfood trials, and Tolman suspected that they had become aware of the surface layout of the maze (see also Tolman & Brunswick, 1935). Tolman (1932) called the initial learning occurring during the non-reward trials "latent learning" and suggested that latent learning was a pervasive aspect of everyday experience for both rats and human beings (p. 343). Since rats did in fact learn in the absence of food reward, Tolman also made the explicit distinction between "learning" (which can occur without reward) and "performance" (which is heavily dependent upon reward) at this time.

(III) Perceptual processes used by rats to run mazes: Additive concatenation of molecular kinesthetic responses or molar place learning?

Similarly, with regard to the debate over the actual perceptual processes utilized by rats to run mazes, recall that Watson (1907a), as well as Watson & Carr (1908) in the analysis of their "Kerplunk" experiment, had argued that kinesthesia (the bodily-muscular sense) was the predominant, if not the only, means used. The combined results of various experiments presented by D.A. MacFarlane (1930), however, served to counter that hypothesis.

MacFarlane first trained rats to swim a maze in order to obtain food placed on a raised central goal platform. When the rats had learned their way through the maze, a false bottom was inserted so that they could now wade towards the food platform. After a transient period of disruption, it was found that the rats soon made no more errors than in the original mastered training trials. Further, once all the water was drained from the maze (so that the rats were now required to run through the maze), the same effect was found. These studies, therefore, provided evidence of the flexibility of goal oriented behavior.

The main empirical finding -that subsequent success was not affected by a change in the mode of locomotion- strongly suggested that what was being learned was where the reward was located rather than what motor responses were needed in order to reach the goal. In other words, the rats were not as mindless as Watson and Carr had made them out to be. Whatever they had originally learned, it could not have been the mere response of performing some specific kinesthetic swimming motion associated with the stimuli at each choice point in the maze.

According to Tolman's (1932) coverage of these experiments, the rats had not learned a mere series of responses but instead the spatial layout of the maze (pp. 77-82). This "cognitive map," as he called it, could then be used to get from the start to the goal in any of a number of ways (swimming, wading, running). While these experiments certainly show that something more than a mere concatenation of discrete or successive kinesthetic S-R associations was learned, they did not on their own resolve the question as to whether it was anything like a cognitive map or not. For this, more evidence was required, and Tolman would have to utilize that evidence repeatedly to counter the views of both Watson and Clark L. Hull (see that evidence presented under "V" below).

(IV) Human research: Muscle twitches or global response learning?

For now, however, we should probably follow the order of the above Watson account, which moved into the arena of human research by way of utilizing the technique of shock avoidance with the finger reflex apparatus (1916, shown above). In Watson's view, behavior in human subjects could still be defined as molecular muscular responses caused by the specific stimuli with which they had become mechanically associated by way of recency and frequency. In Tolman's view, however, what the organism (human or animal) associated with a given stimulus situation was a molar response category.

For example, if a person learned to withdraw their finger from an electrode when a warning signal preceded an electric shock, then the molecularist would say that a specific conditioned muscular reflex has been learned. By contrast, a molar behaviorist would claim that a global avoidance response had been learned. It was D.D. Wickens (1938) who did the study to test the respective veracity of these two accounts for this particular experimental situation. He first taught the above "Watson-Lashley" finger withdrawal response to his human subjects, and then turned the subjects' hands over to see what would happen next. Since this experimental procedure necessitates that an anatomically "antagonistic" muscle group now be utilized to carry out finger withdrawal, the Watsonian position predicts that a new muscular reflex will have to be learned (as the original one will drive the finger into the electrode). Tolman's position, however, predicts that the subject will immediately avoid the shock since they have already learned a global (molar) shock-avoidance response (not a specific muscular reflex). The Wickens results supported Tolman's prediction rather decisively.
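The two competing predictions can be stated almost mechanically. In the sketch below (my own schematic rendering, not Wickens' procedure or data), the learned response is represented either as a specific trained contraction (the molecular reading) or as the outcome "finger leaves the electrode" (the molar reading); turning the hand over flips what the trained contraction accomplishes.

    # Schematic contrast of molecular vs. molar readings of the Wickens situation.
    def molecular_prediction(hand_palm_down: bool) -> str:
        # The trained contraction lifts the finger only in the trained posture.
        return "avoids shock" if hand_palm_down else "presses into electrode"

    def molar_prediction(hand_palm_down: bool) -> str:
        # What was learned is the goal 'get the finger off the electrode',
        # so any available muscle group is recruited to achieve it.
        return "avoids shock"

    print("hand turned over:", molecular_prediction(False),
          "vs.", molar_prediction(False))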

(V) Response gradients or Cognitive Maps?

Finally, when the Yale-based Clark L. Hull came on the scene with his molecular response gradients hypothesis, Tolman's Berkeley lab already had most of the empirical ammunition readily at hand to cut his hypothesis down. Tolman's conception of "purpose" had expanded over the years -moving from the status of an hypothetical intervening variable to a near cognitive power- but he had always suggested that goal-oriented responses were a real and important aspect of observable behavior for psychology to study. Hull, on the other hand, set out to explain purpose and cognition away by working out an hypothetical-mathematical account of the mindless mechanical processes upon which our belief in those intervening entities might be based. Just as Newton had derived the motions of the planets from a small set of physical laws, so Hull (1934a&b; 1935; 1943a&b) proposed to predict the behavioral motions of organisms from a set of mathematically describable molecular quantitative response gradient laws.

One of the classic studies Tolman utilized to emphasize that rats learned the layout of a maze was first described in Tolman & Honzik's article "'Insight' in rats" (1930a). Although it predates Hull's reinforcement gradients hypothesis, it is still instructive in various ways. First of all, this study utilized what is called an elevated maze (similar to that later used by N.R.F. Maier -shown right- but different in actual layout). Note that these elevated mazes contain no walls and, at least potentially, allow the rats to take visual stock of the layout of the maze structure. In Tolman & Honzik's study, however, it was the rather clever layout and "path-blocking" procedures used by the experimenters that seemed to provide decisive evidence for the use of cognitive maps by rats.

After allowing the rats to explore the elevated maze freely, a food reward was then placed in the goal box and the animals quickly began to favor the shortest (straight) route from the start box. Once this habit was formed, however, the experimenters blocked the shortest route at block point 1, to see what the rats would do. Upon encountering block point 1, the rats tended to backtrack and take the next shortest (leftward) route and this new habit rapidly became more frequent on subsequent trials.

Note that this initial backtracking behavior (on its own) is consistent not only with the specific response learning hypothesis (a la Watson) but also with both Tolman's cognitive maps hypothesis and Hull's numerical reinforcement gradients hypothesis. According to the latter, the rats' favoring of the second shortest route is a wholly mechanical affair involving the bodily computations of previous route-taking behavior preceding the blockage. No matter, however, because the next phase of the experiment decisively ruled both Watson's and Hull's hypotheses out of contention.

 

When the initial block point was removed, and a new block placed at block point 2, both Watson's and Hull's hypotheses would predict that the rats would again backtrack to attempt the leftward second route, but the data did not bear this sort of prediction out. Instead, they backtracked to take the rightward (long) route directly to the food box.

Tolman & Honzik (1930a), it is true, overstated their case, for they suggested that this latter behavior was proof of "insight" in rats on par with that found in Kohler's ape experiments -an issue we will come back to later- but again no matter. The data themselves had shown rather clearly that rats could utilize some sort of awareness of maze layout to reach a goal. Tolman's view that maze learning is the acquisition of knowledge about the environment had borne rather pungent experimental fruit which Hull and his supporters never really managed to swallow.

To this ready arsenal of prior findings were added a few notable new findings too. Some of these were summarized in Tolman's 1937 APA presidential address, "The determiners of behavior at a choice point" (Tolman, 1938), but it must be mentioned that no death blow was struck there to the Hullian hypothesis. This would only come later, when it was finally recognized that one way to assess the likely relative importance (preponderance) of place versus response learning is to assess the relative ease with which each is learned.

Tolman, Ritchie, & Kalish (1946), therefore, compared place learning versus response learning in two different groups of rats by utilizing the ingeniously simple elevated maze situation (shown below). One group of rats (the "place" learners) always found food at the same place, even though depending upon where they started the maze on any given trial -S1 or S2- they might be required to turn either left or right at the choice point (C) to obtain food. The motor responses in this place learners group differed, but the food location was always the same (F1).

For the other group (the "response" learners), it was the food that was shifted so that no matter where the rats were started -S1 or S2- in any given trial they were always required to turn in the same single direction (e.g., left) to obtain the food.

The results showed that the place learners performed significantly better than the response learning group and five of the latter group did not master the maze situation even after seventy-two trials.
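The contrast between the two training conditions is simply a matter of which feature of the situation stays constant. The following sketch is only an illustration (the particular left/right assignments are invented; the arm labels S1, S2, and F1 follow the description above): the correct turn for a "response" learner is fixed, while the correct turn for a "place" learner depends on the start arm.

    # Place vs. response learning on an elevated maze (schematic illustration).
    def correct_turn_place_learner(start_arm: str) -> str:
        # Food is always at the same location (F1), so the required turn
        # depends on where the rat starts.
        return "right" if start_arm == "S1" else "left"

    def correct_turn_response_learner(start_arm: str) -> str:
        # Food is moved so that the same motor response is always rewarded.
        return "left"

    for start in ("S1", "S2"):
        print(start, "| place learner turns:", correct_turn_place_learner(start),
              "| response learner turns:", correct_turn_response_learner(start))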

Despite various subsequent equivocations from within the Hullian camp (including Kanner, 1954), it should be pretty clear even from this somewhat impoverished sampling of a very rich and intricate tradition of research that Hull's position was made untenable. Thus Tolman (1948), in looking back over nearly 20 years of research, could say with some confidence that maze learning in rats "consists not in stimulus-response connections but in the building up in the nervous system of sets which function like cognitive maps" (p. 193).

We have noted above that Woodworth advocated the use of experiments as one of the empirical methods of psychology, and that Tolman both carried them out and oversaw an active lab at Berkeley in the tradition of molar behaviorist experimentation. We should now turn to a closer account of their respective early "variable" models of experimental research because this comparison will be useful in our consideration of the subsequent "operationalist" and "crisis of relevance" eras of the discipline.

How experimental psychology got its "variables"

Having made the above contrasts (regarding both Woodworth's early S-R vs. later S-O-R versions of psychological subject matter; and Tolman's molar behaviorism vs. Watson and Hull), a further comparison can now be made between Woodworth's professional strivings to popularize the exact nature of psychological experimentation with the contemporaneous use of the term "variables" by E.C. Tolman and others. We will then make a few comments on their respective uptake into subsequent General experimental psychology.

The now standard North American psychology textbook definition of an experiment as the systematic assessment of the mathematical relationship between "independent and dependent variables" (the IV-DV model), was a disciplinary product of the early 1930s-1970s (Winston & Blais, 1996). The term "variable" as it related to empirical psychology was first introduced sporadically in E.G. Boring's The physical dimensions of consciousness (1933). Tolman, Skinner, and others also began to use the term at this time but it was Robert S. Woodworth (1934, 1938) who popularized and formalized the "IV-DV" terminology through his widely read introductory text, Psychology, and his 1938 "Columbia Bible," Experimental Psychology (see Winston, 1988, 1990).

Independent, Organismic, and Dependent variables (Woodworth)

The first two editions of his Psychology utilized a rather loose and indefinite conception of psychological experimentation, with systematic variation of conditions in the laboratory being viewed as one kind of experiment and the giving of a mental test to compare individuals as another kind. But by the third edition, a sharper and more particular definition of experiment was presented.

As indicated in the figure (right) from Woodworth's (1934) text, it is here that he first suggests that only those studies that manipulate one condition (the Independent variable under consideration) while holding all others constant are properly called experimental ("I" = independent variable; "C" = held constant; and "D" = dependent variable). Experimental research is here depicted as the active manipulation of an independent variable to discover the preexisting cause of the resulting dependent variable under study.

"The rule for an ideal experiment is to control all the factors or conditions, to keep all of them constant except a single one -which is then the independent variable- and to vary this one systematically and observe the results" (Woodworth, 1934, p. 19).

While Danziger & Dzinas (1997) -as well as Danziger (1997)- have suggested that reference to Galtonian Anthropometric statistics played a major role in Boring's initial adoption of the concept of "variables" (as that which is studied in psychological inquiry), Woodworth's 1934 account is most notable in that it seems to be demarcating a disciplinary divide between "experimentalist" and "correlationist" variable research with one branch being university based and the other applied.

This implied demarcation is made explicit in Woodworth (1938) when he reiterates his earlier sharpened definition of experimentation and distinguishes it as different from correlational research in that it seeks out causes rather than mere interrelations between effects, thereby explicitly excluding mental testing from the province of experimental psychology.

"To be distinguished from the experimental method, and standing on a par with it in value, rather than above or below, is the comparative and correlational method. It takes its start form individual differences. By use of suitable tests, it measures the individuals in a sample of some population, distributes these measures, and finds their average, scatter, etc. Measuring two or more characteristics of the same individuals it computes the correlation of these... and goes on to factor analysis. This method does not introduce an 'experimental factor'; it has no 'independent variable' but treats all the measured variables alike. It does not directly study cause and effect. The experimentalist's independent variable is antecedent to his dependent variable; one is cause (or part of the cause) and the other effect. The correlationist studies the interrelation of different effects" (Woodworth, 1938, p. 3).

The ongoing professional divergence between these two groups of variable psychologists was initially manifested in the formation of the American Association for Applied Psychology (est. 1938), but was suspended by mutual consent during W.W.II, and in 1945 the AAAP was amalgamated back into an expanded APA Divisional structure (see Hilgard, 1987, pp. 758-761). Despite these postwar efforts at affiliational solidarity, the divides between those practicing primarily experimental and correlational methods remained to such an extent that L.J. Cronbach (1957) would eventually describe "The two disciplines of scientific psychology," with one being the domain of the university laboratory and the other being that of applied or clinical psychometrics (in therapeutic assessment, military, educational, or higher educational entrance exam settings). Likewise, on December 31, 1959, the Psychonomic Society, just one of a succession of experimental psychologist splinter groups, was officially formed.

Noting these affiliational rifts between the mid-20th century experimental and correlational subdisciplines is quite helpful for our present historical purposes. First, it allows us a distinct retrospective advantage over early psychological writers such as Boring, Tolman, S.S. Stevens, and Woodworth who could only have guessed at the eventual combative professional relations between these two divergent empirical research traditions. Secondly, it allows us to recognize the import of contextualizing and analytically unpacking the varied assumptions of successive versions of psychological operationism. Two of these have been labeled by Tim Rogers (1989, 1991) as experimental and correlational respectively. A third version, which attempted to smooth over the above disciplinary rifts, has more recently been labeled as "convergent" operationism by Randolph Grace (2001 a & b).

 

Stimulus, Intervening, and Response variables (E.C. Tolman)

As mentioned above, Woodworth (1929, 1934) abandoned his own initial S-R viewpoint to adopt an S-O-R account of subject matter in which it is necessary to know one's organism. Tolman, using a slightly different perspective, gradually came to a similar conclusion. He is credited with having been the first to clearly formulate the concept of the "intervening variable" within the behavioral tradition (1935; 1938; 1948) -a discursive concept which is still utilized in much of the contemporary experimental and cognitive psychology literature.

The linear (passive, mechanical) S-R unit of Watson had become expanded to a forward-looking Stimulus-Intervening variable-Response analysis in Tolman and his students. Like Woodworth (1921), Tolman was attempting to reject Watson's postulate of environmental immediacy. Something is going on inside the organism that mediates the link between what is learned (or perceived) from the stimulus environment, and what particular behavioral response is "performed" and thereby observed by the experimental researcher. The observable properties of behavior included the goal-directedness of that behavior. Recall that for Tolman, it was a matter of observable empirical fact that rats could utilize some sort of awareness of maze layout to reach a goal. Further, the comparison of groups of rats under conditions of place learning versus response learning indicated that some sort of building up in the nervous system of sets functioning like cognitive maps was at work.

By eventually replacing the passive slot-machine view of Watson with an adjustive mechanism ("central control room") analogy, Tolman (1949a) is often said to have anticipated the later information processing approach of cognitive psychology (see Hilgard, 1987; Leahey, 1991).

"I do not hold, as do most behaviorists, that all learning is, as such, the attachment of responses to stimuli. Cathexes, equivalence beliefs, field expectancies, field-cognition modes and drive discriminations are not, as I define them, stimulus-response connections. They are central phenomena, each of which may be expressed by a variety of responses" (Tolman, 1949a, p. 146).

Methodologically speaking, however, his long-standing equivocal stance on the ontological status of such "central phenomena" (a.k.a., intervening variables) contributed to the ongoing uptake of experimental operationism into General experimental psychology texts as well as into the cognitive psychology movement which followed thereafter. As Tolman (1959) put it himself:

"Although I was sold on objectivism and behaviorism as the method in psychology, the only categorizing rubrics which I had at hand were mentalistic ones. So when I began to develop a behavioristic system of my own, what I really was doing was trying to rewrite a commonsense mentalistic psychology... in operational behavioristic terms" (Tolman, 1959, p. 146).

While it is true that Tolman's early conception of "purpose" had expanded over the years -moving from the status of merely descriptive intervening variable to a near cognitive power (see Tolman, 1949b)- the above quotation exposes the commonly-held and highly problematic "operationist" underbelly of both Tolman's neobehaviorist scheme of experimental research and the later so-called Cognitive Psychology tradition. Much of what would later be labeled as early manifestations of cognitive psychology is indeed a "commonsense mentalistic psychology framed in operational terms" hence the glowing references to Tolman as one plank in their initial disciplinary platform. In the sense that cognitive psychology continues to adopt Tolman's roughly mechanical 'Stimulus-Intervening variable-Response' analysis, it is merely an updated form of neobehaviorism albeit framed in "hypothetical construct" or "information processing" language.

Examples of their respective uptake into General-experimental psychology

Winston & Blais (1996) have carefully outlined the rise of Woodworth's IV-DV experiment definition in psychology over three decades (1930-39, 1950-59, and 1970-79); which as they say nicely "encompass" the respective introduction, dominance, and beginnings of doubt about the variable model of experimentation. Woodworth's initial use of the "independent and dependent variable" terminology to define experimentation increased dramatically in psychology texts from the 1930s to the 1970s:

As partially indicated in the parallelogram, the use of Woodworth's "manipulation of an independent variable" definition jumped in psychology from 5% of the sample in the 1930s (one text, Woodworth's Psychology) to 95% of the sample from the 1970s. Winston & Blais also indicate that the adoption of this "variable" terminology appears to a relatively lesser degree in sociology texts after its adoption in psychology, and to a markedly lesser degree in biology and physics texts too.

"In sum, introductory psychology textbooks gradually adopted a highly uniform view of experiment as defined by manipulation of an independent variable. This uniformity was achieved relatively recently (between the 1950s and 1970s). By the 1970s, psychology texts imply that experiment is... superior to other methods. This view was not borrowed from the textbooks of other disciplines, although other disciplines may have recently begun to borrow the construction of method used by psychology texts. In physics, which psychologists traditionally take to be the model science, discussions of research method and definitions of experiment are generally absent. When physics texts define experiment, they generally do so in a much broader manner than psychology texts and with a different meaning accorded to manipulation of a variable" (Winston & Blais, 1996).

Given the influence and so-called ascendancy of experimental cognitive psychology in the decades following the 1970s, I thought it would be advisable to provide you with some specific examples (explicit textbook depictions) of experimental variables to illustrate the respective uptake of both the Woodworth and the Tolman variable models of research. Surprisingly, after having made a fairly extensive search for such specific examples in introductory texts (dating from the 1930s to the 1990s), I found only a few texts willing to externalize their experimental-methodological assumptions in this manner. Perhaps less surprising (though nonetheless vital to note) is the fact that none of these depictions conform completely to either Woodworth's (IV-DV) or Tolman's (intervening variable) schemes. Instead, they all to varying degrees maintain a decidedly mixed (eclectic) flavor.

For the purposes of the present discussion, two of the better diagrams have been selected. So, let's look at both to see what conclusions might be drawn out from them respectively. First up is an example from Munn et al. (1969), which seems to have taken up a mixed version of Woodworth's views on S-O-R (subject matter) and the manipulation of "variables" as the definition of psychological experimentation:

"Variables in a Psychological Experiment. As shown in this simplified schema, response (dependent) variables are influenced by both stimulus conditions and characteristics of the organism" (From: Munn, Fernald, & Fernald, Basic Psychology, 1969, p. 24).

It is highly appropriate to find such an explicit figure in this particular text because the authors are all major figures in mid-through-late 20th century experimental psychology. Norman L. Munn (1902-1993) was initially trained in experimental animal psychology under W.S. Hunter at Clark University to receive both an MA and PhD there in 1928 and 1930 respectively (see Munn, 1980). He authored a number of textbooks with an emphasis on experimental (and then evolutionary or developmental) methods including: An introduction to animal psychology (1933); Psychology (1946); Handbook of Psychological Research on the Rat (1950); The Evolution and Growth of Human Behavior (1955); Introduction to Psychology (1962); and The Evolution of the Human Mind (1971). As the titles of his works indicate, Munn's unit of psychological analysis expanded over the years to culminate in the 1971 work -which included chapters on "Cultural Evolution" and "The Shaping of Modern Minds." It is important to recognize, therefore, that it was during the penultimate transitional phase of Munn's career (subsequent to his 1963 departure to Adelaide University -Australia- but prior to his final reconsideration of his own mid-career viewpoint on psychology), that he took on a junior team of brothers as coauthors including L. Dodge Fernald (PhD Cornell University, 1961) and Peter S. Fernald (PhD Purdue University, 1963). These junior authors then went on to edit subsequent editions of the 1969 text for the American undergraduate marketplace.

The initial step in considering the strengths and limitations of the 1969 diagram is to recognize that it was an advance over the one depicted in Munn's 1962 edition of the same textbook (p. 46) -which, when taken at face value, was more S-R in structure (a la Woodworth, 1921) than S-O-R in flavor (a la Woodworth, 1934). Recall that Woodworth's revisionist efforts regarding psychological subject matter began in 1921 with an expanded S-R analysis -where he argues (like Dewey, 1896) that these functional categories of analysis should not be regarded as separate disjointed events occurring in a unidirectional sequence. His efforts then culminated in 1934 with a full-fledged swing away from S-R toward a dynamic interactionist (bi-directional) S-O-R account of subject matter accompanied by a complementary (IV-DV) experimental methods rationale (see also Woodworth, 1938).

Given that the subtitle of the 1969 (Munn, et al.) text was "an adaptation of Introduction to Psychology" and given that a similarly revised explicitly S-O-R diagram is to be found in the subsequent joint authored 1972 edition too (p. 37), it is a fair guess that all three coauthors considered this particular adaptation away from mere S-R portrayals to be of lasting significance and value to the discipline. Our main concern, however, is whether the combined unit of analysis and experimental rationale being depicted in the above 1969 diagram is indicative of the best aspects of Woodworth's overall analysis; in any way an improvement over Woodworth's analysis; and a sufficient encapsulation of what experimental methods are intended to do.

On the first count, the 1969 diagram comes close to the mark but also misses the bull's-eye in various respects. For one thing, it fails to pick up (bring forward) the bi-directional and self-looping aspect of S-O-R subject matter as conceived by Woodworth (see figures 1 & 2, 1934). What we find in the 1969 depiction is an array of unidirectional arrows passing from stimulus conditions through a seemingly static organism to produce observable (physiological, verbal, and emotive) responses. As such, both the 1962 and 1969 diagrams have more affinities with the intellectual baseline of Woodworth's (1921) revisionist efforts than with his later efforts.

To be fair to Munn and his later coauthors, however, we should mention that the textual content of the 1962 book contained the following 'Woodworthesque' dynamic interactionist statement (p. 38) which was then reiterated in the 1969 edition too (p. 22):

"In the most general sense, psychologists deal with responses of organisms to stimulation. This fact is usually represented by the symbols S->O->R.... Sometimes a formula like R= f (O,S) is used, meaning that response is a joint function of the organism and stimulation" (Munn, 1962, p. 38).

Let's also recognize that the 1972 edition (of Munn et al.) attempted to remedy the above depicted staticness by portraying the central male figure as walking away (back turned) from the reader. But this addition of external spatial-mechanical movement on the part of the organism is by no means a substantive improvement because it in no way shows that "response variables are influenced by both stimulus conditions and characteristics of the organism" (1972, p. 37).

If we were to confine ourselves to combining the progressive aspects of the Woodworth and Munn, et al. depictions, the best we could do with even the 1972 diagram is to overlay Woodworth's (1934) rolling-hills W-S-O-R-W diagram (see figure 2, 1934) -intended to indicate temporal shifts (change)- along the axis of spatial movement of the central male figure. Yet, as we have already argued above, even this effort on the part of Woodworth fell short of the mark of depicting truly developmental analysis. A revised diagram along this combined line would certainly imply that experiments provide empirical snapshots of a shifting succession of S-O-R loops (a la Woodworth's 1934 argumentation), but no depiction of mental development per se would be provided (a la the intent of Woodworth's dynamic psychology). In other words, Woodworth fails us on this count and so do the depictions contained in Munn et al., 1969; 1972. Woodworth's rolling-hills diagram was at least evocative of change in an active organism, however, and the addition of change over time was surely the intent of the 1972 diagram as well.

Therefore, while we can be empathetic to the likely intent of the successive Munn et al. diagrams and textual content, we are forced to the conclusion that there is no substantive improvement to be had over Woodworth's analysis by merely referring to those sources. The analysis is stuck in a rut of its own problematic assumptions (not the least of which is the assumption of organism-environmental "interaction" as opposed to the transformative mental development across the normal life course of human beings). Woodworth's S-O-R diagrams (despite their recognition of central processes, final causality, and reciprocity with the environment), are too individual or social rather than socio-historical in their analysis of human psychological functions. His dynamic interactionist approach to psychology had been designed conceptually to capture and empirically to measure the growth (numerical transitions) and environmental reciprocity of "an organism's" psychological functions without sufficiently recognizing the further empirical imperative -to acknowledge and measure the normal collective development (qualitative transformations) of psychological processes specific to human beings.

Fortunately, we are not in fact limited to merely combining Woodworth with the Munn et al., dynamic interactionist accounts. We can turn to the developmental or educational psychology tradition for depictions of the horizontal and vertical mental developmental ingredient which is missing from all of the above diagrams and textual analysis:

This figure from the developmental psychology literature explicitly depicts both the horizontal (scope) and vertical (levels) aspects of "mental growth" (a.k.a., mental development) as an increasing spiral, successively encompassing the individual, social, and semi-societal realms of meaning (From Lindgren, Educational Psychology in the Classroom, 1956; after the fashion of Arnold Gesell's 1925-29 era "mental growth" cycle).

I hope that you are beginning to entertain the possibility that this ontogenetic and sociohistorical account of mental processes constitutes a necessary though missing ingredient in the recipes provided by Woodworth (1934, 1938) and successive experimental psychology texts. If experimental methods work together with other empirical methods, and if we are to avoid the mistakes and overstatements of Woodworth himself, then what should really be depicted in such albeit "simplified" diagrams is the way in which experimental or empirical methods are used to study the horizontal scope, ontogenetic timing, and vertical levels of various dynamic mental processes.

If we were to superimpose this sort of truly developmental diagram onto the middle portion of the "simplified" Woodworth or Munn et al. diagrams, then we would have a clearer depiction of what experimental methods are designed to do. They are designed to take empirically static snapshots of ongoing, dynamic, and upwardly mobile mental processes. Having searched for such a diagram in the existing literature, however, I've come up empty handed. So we will eventually have to provide one of our own. Our task in producing such a diagram will involve carefully combining the theoretical insights of Dewey (1896) with the disciplinary intent of Woodworth (1934) as well as with the sincere efforts at depiction of empirical methods made by successive generations of both experimentally minded textbook writers (including Munn et al., 1969; 1972) and developmentally minded textbook writers (e.g., Lindgren, 1956) respectively.

It may seem to you, however, that I am merely splitting intellectual hairs with past authors in what is often called an unfair, historically presentist manner (judging past efforts in light of current standards or requirements). My main reply to that impression or accusation is simply to ask: Is this not the best possible use of our historical vantage-point over past authors who were struggling with issues which still affect our current practices?

As long as it is now recognized -as it was not recognized in the past- that all we do with the experimental or correlational techniques is to take linear unidirectional snapshots of active, qualitatively changing (and potentially transformative) mental processes, then all is fine and good with the overly simplified depictions of the past. But as soon as we let our habitual use of these snapshot methods confine our understanding of those processes, and as soon as we begin to reify analytical variables themselves as the subject matter of empirical investigation, a host of theoretical and practical problems (with implications both within and outside the discipline) are encountered. Munn (1971) seems to have reached the same conclusion but, alas, having taken up residence in Australia, he was now an outsider (both intellectually and physically) to the ongoing American tradition of introductory texts -which then carried on without any serious reference to or acknowledgment of those insights. Relatedly, the timing of Munn's departure from the North American scene coincided with Woodworth's 1962 passing. A full-fledged, long-standing affair with statistical "cookbook training" for undergraduates and then graduates too -one involving the teaching of empirical techniques separate from content- has also become notable, and within it the issues raised above were glossed over or ignored altogether (cf. Psyc. Monitor, Dec. 1999).

Our second exemplar of introductory textbook depictions (from Evans & Murdoff, 1978) is indicative of the contemporaneous disciplinary (theoretically noncommittal) glossing over of formerly divisive contentual and methodological differences. This diagram of "the experimental method" seems to conform to both Woodworth's IV-DV classification and E.C. Tolman's (intervening variable) account as well.

Let's note that in this particular text, the diagram (and its accompanying commentary) was intended merely as an instructional orientation device for student readers rather than as a confining set of prescriptions to be followed doggedly. The main emphasis of the text resides in outlining the contemporary applications and sociological implications of empirical psychological knowledge. It should not be surprising, therefore, that the authors made no bones about diverging from the confining methodological prescriptions of any prior psychological theorist.

Relationship of independent, internal, and dependent variables in a psychological experiment (From: Evans & Murdoff, Psychology for a Changing World, 1978, p. 14).

Tolman's ontologically equivocal "intervening variables" have been re-labeled as "internal variables," thereby implying a degree of confidence in the status of motivation, personality, and intelligence as referring to existing mental processes which in practice must be studied by way of a variety of methods (as suggested by Woodworth). Similarly, although Woodworth's IV-DV model is retained, albeit in a more S-M-R (M = mental) version, the appeal to operational definitions of hypothetical constructs indicative of mid-20th century General psychology has been replaced by an updated appeal to the falsification of empirical hypotheses along Popperian lines.

The senior author Idella M. Evans (PhD University of Oregon, 1955) and Ronald Murdoff (a long-serving teacher at San Joaquin Delta College, Stockton, CA) state these initial instructional preliminaries rather succinctly as follows:

"The experimental method is the most definitive and rigorous of all the methods available to psychologist. It offers an approach to behavior study that is systematic and explicit. Experimenters begin with an idea, or hypothesis, about some particular aspect of behavior they wish to study. They make a tentative assumption about it that will serve as a guide for their investigation, then design an experiment that will provide specific information to confirm or reject their hypothesis.... The experimental method involves the control, manipulation, and measurement of experimental variables. A variable is anything that can vary or take on different values. A variable that is manipulated by the experimenter and is independent of the subject's control is called an independent variable. The change in behavior that results from or is caused by the independent variable is called the dependent variable. In our example a moderate amount of the active chemical of marijuana (presence or absence) is the independent variable, and the resulting score on reading comprehension is the dependent variable" (Evans & Murdoff, 1978, pp. 12-13, emphasis original).

Although this preliminary statement seems to afford the experimental method a higher status, and is framed in terms which are stronger than Woodworth's (1938) "on par with correlational methods" position, the authors go on to cover correlational and developmental methods later on. In practice, they combine these methods rather eclectically (after the fashion of Munn, et al., 1969). Similarly, even though unidirectional arrows appear in the diagram, the authors are quick to suggest that "internal variables" can serve as independent or dependent variables in experiments. The examples they use throughout the text are as timely and relevant to contemporaneous student interests and concerns as their opening "marijuana & learning" example. The increasing use of intellectual achievement tests, for instance, as supposed predictors (independent variables) of early higher educational performance is covered in some detail. This is to be expected given that the overall viewpoint of the authors was not merely psychological but also sociological-historical as well. Their final chapter "Psychology for a Changing World" (from which the book gets its title), includes subsections on seemingly diverse topics like: the population explosion; urbanization; science and technology; global organization; world peace; changing occupational roles; education in transition; and assuming responsibility for the future. In other words, it provides a form of sociological analysis missing from most introductory texts which both preceded and followed it.

In this respect, a comparison between this text and that of Hilgard et al. (1979) is fairly enlightening. While both texts are "eclectic" in terms of the empirical methods they outline, one contextualizes psychological knowledge and urges social action; the other demarcates the limits of an academic discipline and promotes nothing but intellectual equivocation and sociological impotence. The Evans & Murdoff text is most notable in this last societal relevance respect because it was an exception to the rule. Yet -and here is the point to note for future reference- even the progressive societal aspect of their text can be said to have been present "despite" and not as a result of the details of their explicitly outlined stand on experimental-empirical methods.

By 1979, the discipline of psychology was on the cusp of a major methodological crisis in which all of the assumptions and empirical applications of its combined experimental or correlational research enterprise would be drawn into open societal debate. It was in 1980, for instance, that Columbia University student Allan Nairn and his associates (working under the auspices of consumer advocate Ralph Nader) published The Reign of ETS [Educational Testing Service]. In this "Nader Report" criticisms traditionally reserved for old-time correlational intelligence scales and vocational tests were now directed at tests being used in higher education admissions (including the SAT, LSAT, MCAT, and GRE). Similarly, it would be in the 1980s that theoretically minded psychologists would begin taking North American General-experimental psychology to task for its "irrelevance" to the realm of meaningful human existence. For example, the British psychologist Paul Kline turned the tables on his own profession by coming out with Psychology Exposed (1988). It argued that unless the discipline of psychology addressed the radical "disjunction" between its methods and the "nature of man," it would soon be relegated to the class of "intellectual rubbish" along with phrenology. Far too much of the academic research in memory, personality, attitudes, and perception is a mere collection of abstracted empirical results "lacking anything resembling practical significance" for human affairs. The discipline, he argues, is lost in a "flow of trivia" that is of interest "merely to other academic psychologists." Likewise, the fact that psychometrics has been applied widely and mobilized as evidence on important disciplinary issues "is no guarantee that it is good psychology" (p. 45).

To assess the veracity of these accusations and if need be to propose disciplinary remedies, it is incumbent upon us to take a step backwards in our historical narrative. We must delve more deeply into "the operationist era" of psychological thought which falls between Woodworth's (1921) hopes for the discipline and the eventual uptake of the combined operationalized variable model into General psychology texts. This will be done, not only so that we can personally avoid becoming trapped in its rather narrow intellectual confines, but also so that we can establish a firm methodological basis for promoting progressive change within and outside the discipline.

Rationale and limits of the combined operationalized variable model

This subsection provides a brief account of: (1) the varied disciplinary origins and eventual downfall of "operationism" as a stand-alone movement; (2) the rationale behind the continued appeal to "operational definitions" and to the "convergent validity" of empirical measures as a rhetorical methodological device; and (3) the rather fundamental explanatory limitations of the resulting "combined" operationalized variable model of research -a problematic yet persistent feature of General-experimental psychology. It is hoped that these considerations will provide the kind of common historical footing needed to outline and assess the ultimate results of the so-called "crisis of relevance" era of General psychology.

For every complex methodological problem there is a solution that is simple, neat, and wrong. Operationism, as adapted to psychology by S.S. Stevens (1935a&b, 1939) from the work of physicist Percy Bridgman (1927, 1936), was one of these so-called solutions (see also Hardcastle, 1995; Nicholson, 2005). Stevens (1935a) opened with the argument that the "instability" of the psychological discipline was caused by an uncritical acceptance of nonempirical "a priori postulations" and appeal to "reified entities" (1935a, p. 323). In the 1935b article, he called for the "examination of psychology's conceptual heritage under the search-light of operationism" (p. 515) and further specified the latter reified entities as including: "the existence of mind in animals, consciousness in man, emotion in cats, instincts in infants, and space perception in the blind" (pp. 518-519).

In order to remain "scientific," psychologists would now have to abandon appeal to all theoretical terms which could not be readily translated into empirically demonstrable, publicly observable events (operational definitions). According to Stevens, all these formerly assumed entities could rather conveniently be defined so as to allow the "translation" of claims about them into hypotheses regarding measurable and observable performances. This translation would "rid our most fundamental concepts of metaphysical excess" (1935b, p. 523). He argued successively, therefore, that the adoption of this "straightforward procedure" (1935a, p. 323) would not only help solve our disciplinary woes by way of diminishing theoretical "controversy" (1935b), but would also be the "revolution which put an end to the possibility of revolutions" in psychology (1935a, p. 323).

As such, the Stevens version of operationism used in the psychology of the late 1930s and 1940s drew upon the tradition of Logical positivist philosophy of science, which admitted only empirical, formal-logical, or linguistic statements into its purview -eschewing all others as metaphysics. It also drew, however, upon a shared preexisting "ontologically reticent attitude" toward appeal to psychological processes characteristic of the methodological behaviorist and intelligence testing traditions from about 1921 onwards (see Leahey, 1991; Rogers, 1989, 1991; Mills, 1991).

This initial operationist movement, consisting of Stevens, Boring, Tolman, and Skinner, gained some early converts (e.g., McGeoch, 1935; McGregor, 1935; Pratt, 1939; Bergmann & Spence, 1941). It also received occasional early critiques on grounds of its own procedural instability or philosophical inadequacy (Waters & Pennington, 1938; Israel & Goldstein, 1944). Various 1950s era psychologists also recognized operationism as: (i) a new rationale for continuing the narrow focus of comparative animal psychology on the albino rat (Beach, 1950); (ii) an unwarranted exclusion of topics like personality and of the empirical aspects of the introspective tradition in General psychology (Roback, 1952); or (iii) simply another problematic "circumnavigation of cognition" (Ritchie, 1953). It was already clear by 1948, however, that the early proponents of operationism had failed -even amongst themselves- to reach the kind of "unanimity of agreement" regarding basic definitions or general psychological methods which Stevens and Boring were hoping to achieve (see Boring, 1953; Green, 1992).

Around 1956, the initial radical form of operationism in psychology began to be quietly replaced with a softer version sometimes called "convergent" operationism (see Grace, 2001a). The disciplinary aim of this milder operationism was to allow empirical research (in both the individual differences and experimental manifestations of General psychology) to go forward unimpeded by the snares of philosophical or ontological debate. Accordingly, it was from within this convergent operationist tradition of glossing over disciplinary divisions and avoiding acrimonious debates regarding psychological entities that the 1960s-era "combined operationalized variable model of research" initially sprang. It, in turn, became the dominant vehicle for both the disciplinary initiation of students and the more public rationalization for continuing existing applications of psychological measurement technologies in society at large.
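
To make concrete what "convergent" operationism asks of empirical practice, the following minimal sketch (my own illustration, not anything drawn from the sources discussed here; the subject counts, error magnitudes, and measure names are arbitrary assumptions) simulates two different operational measures of a single unobserved construct and reports their correlation -the sort of "convergence" that came to stand in for validity:

```python
# A minimal sketch of "convergent validity": two distinct operational measures of
# one unobserved construct should correlate, even though neither observes the
# construct directly. All names and numbers below are illustrative assumptions.
import random

random.seed(0)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# An unobserved "construct" score for each of 200 hypothetical subjects.
construct = [random.gauss(0, 1) for _ in range(200)]

# Two different operationalizations, each tapping the construct imperfectly
# (independent measurement error added to each).
measure_a = [c + random.gauss(0, 0.7) for c in construct]  # e.g., a timed task
measure_b = [c + random.gauss(0, 0.7) for c in construct]  # e.g., a rating scale

# Convergent-operationist reasoning: a sizable correlation between the two
# imperfect measures is taken as evidence that both index "the same" variable.
print(f"r(measure_a, measure_b) = {pearson_r(measure_a, measure_b):.2f}")
```

The point of the sketch is simply that agreement between imperfect measures can be produced, quantified, and reported without any commitment to what (if anything) those measures measure -which is precisely the ontological reticence described above.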

But, as someone once said, a hidden metaphysic is often a bad metaphysic. General psychology as thus constituted and applied would eventually be forced into a prolonged, public, and rather embarrassing reassessment of its own most fundamental beliefs. Although we will develop this latter theme more fully later on, we should mention -here and now- that this reassessment was not carried out as a merely self-motivated matter of disciplinary relevance to human affairs, but was also forced upon the discipline as a matter of public trust.

Thus, while I concur with Randolph Grace (2001a) that the convergent operationist or "convergent validity of measures" approaches, as respectively proposed for experimental (Garner et al., 1956) and individual difference psychology (Cronbach & Meehl, 1955; Campbell & Fiske, 1959), became "the main methodological foundation" upon which contemporary General psychology still rests, I must demur from his conclusion that it "provides the only potentially valid method of making inferences... on the basis of empirical data" (p. 26). Grace is flat-out wrong on that count, and the details which follow (about the rise, fall, and critique of that methodological foundation) are aimed at demonstrating why.

Let's not forget too that the point of taking you through these historiographical twists and turns is to allow you a safe third-hand opportunity to consider the limitations embedded within the eventual 1960s-era operationalized variable model of empirical research. Along the way we will want to form some sort of firmer methodological foundation by which we can face the rather anti-empirical disciplinary proposals put forward during the "crisis of relevance" era head on. Simply putting our methodological blinders on, as Grace seems to be suggesting, will not be a sufficient response to the critiques leveled against the discipline because it falls into the "them or us" trap set by those who leveled the critique in the first place. We want a third option which avoids the foibles of both the combatant scientistic and anti-empirical camps.

A Downhill Slide: From "psychological states" and "intervening variables" to "operational definitions" and the purging of "hypothetical constructs"

When we consider the disciplinary conditions under which operationism -as a self-contained movement rather than a mere rhetorical precaution against fuzzy thinking- was proposed, applied, debated, and exposed as insufficient, the most central point to note up front is that it underwent a rather predictable argumentative degeneration of the sort we have encountered in preceding Sections. Although operationism was proposed by Stevens as a "straightforward" methodological procedure to guide empirical method and theoretical discourse in psychological science, it was also tied to an underlying, problematic, and under-recognized idealist metaphysic (which starts from individual human "experience" -however defined- and works outward). Consequently, the formerly observed degenerative argumentative series (best seen in Section 2) of representationalist realism (e.g., Locke), followed by skepticism or agnosticism regarding any ontological claims (e.g., Berkeley, Hume), was now repeated in an albeit modernized, psychologically informed manner.

In this particular early 20th century case, Boring's (1923; 1930) preexisting as well as contemporaneous (1942) indirect realist arguments regarding psychological states, and Tolman's somewhat ontologically equivocal appeal to the empirically demonstrable aspects of "intervening variables" as indirect measures of those psychological entities (Tolman, 1926, 1935, 1936), were, with Stevens (1935a&b, 1939), turned into an ontologically skeptical anti-realist appeal to "formal or empirical" operational definitions as the only sound basis for carrying out scientific investigation or psychological discourse.

This skeptical starting point allows very little room to maneuver, so in answering the rather serious critiques leveled against his approach, Stevens tended to skirt rather than resolve the issues as they were raised up to 1939. It fell to a mixed set of younger individuals to further explicate the inherent assumptions lying in wait beneath the straightforward surface layer of the Stevens account. First came MacCorquodale & Meehl (1948), who distinguished between "intervening variables" and "hypothetical constructs" and suggested that both might have a role in producing empirically rigorous theory. Then Melvin Marx (1951) accepted this distinction but favored the former over the latter because intervening variables make no extra-operationalized ontological claims. Ultimately, after another round of debates (which included a rejection of Stevens' own 1951b new naive realist position), the task of elaboration fell to Howard Kendler (1952 onward), who, during the course of persistent attempts to make operationism more "internally consistent" and applicable to psychology, ended up exposing it as a mere modernized form of "nominalism" (naming without claiming).

Around 1956, these elaborations and radicalizations of the operationist movement started to become watered down. The succeeding era of "convergent" experimental and correlational research, once one cuts through the official science-speak of "critical realism, multiple variables," etc., gradually returned to the sort of representational indirect realism of Tolman's (1949) or Boring's (1953) positions. In other words, the argumentative cycle had started to repeat itself.

By the late 1970s, however, it became clear that while data gathering in experimental and correlational research settings had continued and the statistical sophistication of the discipline as a whole had increased, the theoretical-explanatory knowledge products which were supposed to be produced by such intense scientific activity had not shown much progress. To the extent that General psychology had long since settled into a given set of contradictory theoretical perspectives marched out in a fairly uniform set of introductory textbook chapters, each with its own set of "reference" experiments -and to which was now simply added a neo-mechanical "cognitive" approach- Stevens (1935a) seems to have been correct that operationism, even in the milder form in which it was in fact adopted, had "put an end to revolutions" in psychology.

Gregory Kimble (1981), in his review of the remarkable stability of text chapters over the preceding years, seems to have been making this very point and was also completely comfortable with it. Others, including myself, were very uncomfortable about such a static state of disciplinary affairs. Something, apparently the same sort of something which dated all the way back to the time when Woodworth set out to improve the discipline, was still wrong with our basic methodological assumptions and explicitly held views on methods or subject matter.

Antecedents (Correlational research and Stevens vs. Boring and Woodworth)

It is impossible to appreciate how the initial rise of operationism came about without noting that the 1930s Stevens version fed directly into the preexisting status of debates in the area of correlational research after (as opposed to before) 1928. Prior to this date, debates in this area over the proper methodology of testing predominated. As Rogers (1989, 1991) points out, these methodological debates were concerned with the issue of what should come first: tests (J. Mck. Cattell; Spearman; Terman; Yerkes) or some sort of understanding of that which the tests are supposed to measure (Galton; Binet; Scott). From 1928 onward, however, it was the statistically guided approach to ability testing -marked not by ontological concerns (regarding the nature of human intelligence) but by concerns over the administrative sorting of school children, standardized test sales, and other market-oriented or professional issues (such as the proper training of psychometricians)- that began to gain the subdisciplinary upper hand.

Elsewhere (see Ballantyne, 2002) I've elaborated in depth on the sociohistorical context of this subdisciplinary shift including the ongoing expansion of the "consolidated" K-12 school system and the concurrent increase in high school graduation numbers. These produced a market for administrative assessment devices like the Stanford Achievement test batteries (Terman, Ruch, & Kelly, 1923), the Scholastic Aptitude Test (developed by a committee including Yerkes in 1926), and ultimately the Graduate Record Examination (1936).

Just as important for our present purposes, however, is the related internal subdisciplinary theoretical debate between adherents of hereditarian versus environmental interpretations of test results, which (at least according to some of the participants) came to an insoluble draw in 1928. The pivotal period (1921-1928) opened on the one hand with the seemingly amiable 1921 formation of the Psychological Corporation -a nonprofit organization made up of 12 academics dedicated to improving mental tests and funding testing research (see Cattell, 1923; Achilles, 1937; Sokal, 1981)- and on the other with the publication of an explicitly racist account of W.W.I Army testing data (Brigham, 1923). These were, in turn, met with an embarrassing series of exposés, in the rather public forum of the New Republic magazine, regarding the subdiscipline's fundamental misconceptions about the hereditarian source or lack of malleability of human intellect (Lippmann, 1922-1923).

It was into this transitional period of acrimonious argument that E.G. Boring (1886-1968) ventured by making a few statements regarding the contemporary status of "intelligence" tests. His little article "Intelligence as the tests test it" (1923) appeared in the same New Republic issue as some of Walter Lippmann's harshest critiques and is generally acknowledged as the best available contemporaneous statement regarding the structural process of test production and the accompanying (resulting) limits of interpretation of existing testing data. Much has also been written about Boring's article being "foundational to" rather than "merely anticipatory of" the later Stevens form of operationism, but a careful reading of the following quotations marks off his attitude as that of an indirect realist regarding mental measurement (as contrasted with Stevens' skepticism). In commenting on the rough state of contemporary knowledge, data gathering methods, and interpretation of existing IQ tests, Boring writes:

"They mean in the first place that intelligence as a measurable capacity must at the start be defined as the capacity to do well in an intelligence test. Intelligence is what the tests test.... [Despite the inherent dangers of misinterpretation]... no harm need result if we but remember that measurable intelligence is simply what the tests test, until further scientific observation allows us to extend the definition" (1923, p. 35).

"It is high time for a change of words [in the subdiscipline]. The present usage requires us to say that the average adult has a 'mental age' of about fourteen and that 'mental growth' on average stops at fourteen. Nothing could be more untrue. The statement can be true only of intelligence as the tests test it" (Boring, 1923, p. 36).

Boring's intent was clearly to demarcate the contemporaneous structural limits of existing tests and indicate the need for further research regarding the correlates of intelligence test scores (like socioeconomic status, subsequent academic achievement, and mean annual incomes) before jumping to "harmful" conclusions.

In 1924, Lewis Terman, a longtime adherent to the hypothesis of genetic constriction, confidently predicted that a series of ongoing empirical investigations would fairly well settle the outstanding question of how much heredity influences intelligence. Over the next four years, Terman chaired the National Society for the Study of Education committee on "nature and nurture," a group whose members represented a broad cross-section of opinion on the matter: W.C. Bagley, L.B. Baldwin, C.C. Brigham, F.N. Freeman, and R. Pintner. 

In the introduction to the 1928 report, however, Terman admits that even he had been forced to modify the boldness of his former claims because the collected evidence was in no way favorable to his personal bias toward nativist interpretation. Given that his own initial follow-up studies on the academic achievement of "gifted" children, as well as the sort of research produced by environmentally leaning investigators -e.g., the soon to be published B. Baldwin, et al., 1930 study comparing not only the test performance of rural vs. city, but also common vs. consolidated rural school subjects- indicated that test scores were more malleable than formerly thought, Terman was forced to concede the point that IQ scores may not be a measure of genetic endowment per se. "It is conceivable," he wrote, that the "elusive nature of the problem is such as to preclude for a long time to come, if not forever, a complete and final solution."

Terman's later efforts would be aimed at improving the psychometric "reliability" (repeatability of respective rankings) and "predictive validity" of test scores (criterion validity) so that they could still be used for administrative "sorting" purposes -all the while bracketing the thorny ontological issue of what (exactly) the tests actually measure. As Henry Minton's (1988) critical biography of Terman put it, he thus became less of a researcher into human intellect and successively more of a purveyor of a psychometrically stratified society.
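
As a purely illustrative aside (my own sketch, with invented numbers and variable names; nothing here is taken from Terman or Minton), the following few lines show what these two psychometric properties amount to procedurally: "reliability" as the repeatability of subjects' relative rankings across two administrations of a test, and "predictive (criterion) validity" as the correlation of scores with a later outcome -neither of which requires any statement about what the test actually measures:

```python
# Illustrative sketch only: rank-order reliability and criterion validity
# computed for simulated test scores. All quantities are made-up assumptions.
import random

random.seed(1)

def rank(values):
    """Return the rank order (1 = lowest) of each value."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical stable ability scores plus two noisy test administrations.
ability = [random.gauss(100, 15) for _ in range(150)]
test_time1 = [a + random.gauss(0, 5) for a in ability]
test_time2 = [a + random.gauss(0, 5) for a in ability]
later_grades = [0.05 * a + random.gauss(0, 1) for a in ability]  # the "criterion"

# Reliability: do the two administrations rank the same people the same way?
print("rank-order reliability:", round(pearson_r(rank(test_time1), rank(test_time2)), 2))
# Criterion validity: do scores predict the later outcome?
print("predictive validity:   ", round(pearson_r(test_time1, later_grades), 2))
```

Both figures can be respectably high, and administratively useful for "sorting," while the ontological question of what has been measured is left entirely open.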

Many of Terman's psychometric colleagues followed his example to some extent. So, when the Stevens form of ontologically agnostic operationism came along, it provided a ready-made rationale for continuing what they were already doing. As Rogers (1991) put it, operationism "emasculated" correlational researchers from the former requirement of making ontological claims about "elusive" entities like personality or intelligence. Other psychologists, however -particularly those whose initial works predated the rather cautious indirect realist account of testing provided by Boring (1923), Terman's (1928) deadlock argument, or the Stevens (1935a&b) operationist articles- would continue rather unabashedly to seek out better ways to understand the source and observed developmental pattern of human intellect. One of the most influential of these was R.S. Woodworth, so we should make the respective contrast between the pre-1928 Terman and Woodworth (1929 onward) at this point.

As part of the early "school efficiency" movement, Terman (1916a, 1919, 1922a, 1922b, 1924) had utilized an additive "container of liquid" analogy to argue that pouring an equal portion of educational water into the precast container of each child's particular genetic endowment would produce differential results which could be measured by way of "intelligence and achievement" tests. Further, he suggested that it would be unfair (and even "undemocratic," 1924) to both poorly endowed and "gifted" individuals to expose them to the same schooling experiences. In the former case, it would be a waste of societal resources (for the extra educational content would merely spill over the edges of the container), and in the latter case it would be a waste of potential talent (for the container would remain only partially full).

Woodworth's Psychology (1929), however, was the first to employ the now standard multiplicative "rectangular" interactionist metaphor (heredity x environment) for considering the nature vs. nurture debate. This new metaphor was explicitly intended as a disciplinary corrective for the older additive container of liquid analogy and was subsequently reified in a pedagogical diagram in the 1934 edition (p. 140).
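
For readers who find the two metaphors hard to keep apart, the following toy sketch (my own illustration; the numbers and function names are arbitrary assumptions, not anything in Terman or Woodworth) contrasts an additive with a multiplicative combination of heredity (h) and environment (e); the multiplicative version is "interactionist" in the sense that the payoff of an environmental improvement itself depends on heredity:

```python
# A toy contrast (illustrative assumptions only) between the additive
# "container" picture and the multiplicative "rectangle" picture of
# heredity (h) and environment (e).

def additive(h, e):
    # The "additive" picture: the two contributions simply sum.
    return h + e

def multiplicative(h, e):
    # Woodworth's "rectangle": the outcome is the area, heredity x environment.
    return h * e

# Raise everyone's environment from 1.0 to 2.0 and compare the resulting gain.
for h in (0.5, 1.0, 2.0):
    gain_add = additive(h, 2.0) - additive(h, 1.0)               # always 1.0, regardless of h
    gain_mult = multiplicative(h, 2.0) - multiplicative(h, 1.0)  # equals h: the factors interact
    print(f"h={h}: additive gain={gain_add}, multiplicative gain={gain_mult}")

# On the rectangle logic, the same "area" can arise from different combinations,
# provided differences in one factor are compensated by reverse differences in the other.
print(multiplicative(2, 6), multiplicative(3, 4))  # both 12
```

The final line also shows why, on the rectangle logic, a uniform outcome does not by itself tell us that either factor was uniform -only that differences in one were compensated by reverse differences in the other.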

What was the net intellectual-explanatory gain and practical implication of advocating this multiplicative "interactionist" account of heredity and environment? The intellectual gain, I suggest, was rather scanty and the practical implication (as stated by Woodworth himself) was not only self-contradictory but also undemocratic.

First of all, this "rectangular capacity model of human intellect" leads Woodworth (in his consideration of the origin of "uniformity of behavior" seen in paired twin data) to pull back from the seemingly commonsense view that "Biological uniformities are... transmitted by the chromosomes, and cultural uniformities by the social environment" (1934, p. 154). His argument for this pullback not only makes specific reference to the rectangular metaphor but also provides a valuable exemplar of how the reification of such an abstraction can lead even well-meaning systematic thinkers like Woodworth astray:

"We easily fall into the error of attributing certain uniformities to heredity and others to environment. Uniformity in breathing is said to be handed down by heredity, and uniformity in language by tradition... But this way of thinking runs counter to our previous decision that all behavior depends on both heredity and environment. Let us go back once more to our rectangle. Two or more rectangles cannot have the same area just because they are alike in one dimension; they must be alike in both, or else differences in one dimension must be compensated by reverse differences in the other. By the same logic, if two identical twins develop alike, they must have been exposed to practically equal environments. Uniformity in development or behavior [according to the logic of the metaphor] means, not that either heredity or environment has been uniform for all the like individuals, but that both... have been uniform.... This is sound logic, but the conclusion does not appear entirely convincing.... [because]... individuals of differing heredity become more and more alike when kept together in the same environment.... The heredity of all normal individuals can be regarded as the same, so far as simple speech is concerned. Therefore, having sufficiently uniform heredity, they develop alike with respect to the rudiments of a language when exposed to the same environment.... A similar example is afforded by the use of the bow and arrow in nearly all tribes and peoples, the world over.... None but human beings have the hereditary capacity for this mode of behavior, and even the chimpanzees have not picked it up from their human neighbors" (Woodworth, 1934, pp. 155-156).

Although the above quotation, taken on its own, may suggest to the modern reader that Woodworth is arguing for the abandonment of reference to "heredity" by psychologists (in which case I would agree with that suggestion), he is not in fact doing so. Exactly what Woodworth is and isn't saying here becomes abundantly clear in the next few pages under the heading of "Compensating or standardizing influences." There he starts by noting that while normal environmental conditions certainly make people "more alike than their [differing] heredities would lead us to expect," he also emphasizes, by way of the example of school children, that it is "unequal environmental pressure" on the part of teachers that "compensates" for these "native differences" by bringing differing individuals "to a uniform standard" (1934, p. 156).

One might well ask how Woodworth's reliance upon an abstract analogy to the "area of rectangles" is helpful. I suggest that, despite the prolific subsequent use of this analogy in General psychology, it is in fact as problematic as the additive container of liquid analogy which it replaced. If the transmission of language in fact requires only the "common heredity of species specificity" (as Woodworth himself recognized), why not adopt a more transformative account (a la G.H. Lewes, John Dewey, C.L. Morgan) that truly recognizes that cultural uniformities are indeed formed and transmitted by way of the sociohistorical environment?

Woodworth's own answer to this is one of advocating a seemingly logical "precautionary principle," but the most telling part of his having done so is indicated by which side of this false genes vs. environment dichotomy he ends up leaning toward. Read it for yourself! While doing so, be cognizant of the worldwide economic depression and pre-W.W.II era in which it was written:

"While awaiting the final decision of science on the question of human heredity and environment, the safest assumption to act upon is that heredity is an important factor. For if we should act on the assumption that everything could be accomplished by environmental control -by good cultivation of our human garden- and should allow breeding to proceed with no effort to perpetuate what appears to be the best stocks, we should lose those stocks and never be able to replace them if and when heredity should later prove to be important. If we should act on the plan of improving the human stock, and if this dependence on heredity should later prove to be misplaced, we should have lost some time but done no irreparable damage to the future of mankind.... [The] best members of each generation should become as far as possible the parents of the next generation" (Woodworth, 1934, pp. 156-157).

This concluding comment exposes just how archaic even Woodworth's "multiplicative" views on the heredity and environment debate were. We should note here too that, by 1934, Adolph Hitler was already in power in Germany and his Nazi party would soon claim that their successive sterilization, eugenics (State-sponsored selective breeding), and euthanasia programs were merely carrying out the practical implications of the logic used by North American psychology (see Guthrie, 1976). On the same point, let's not fail to mention that the ostensible (explicitly utilized) W.W.II rationale for the attempted systematic extermination of European Jewry was also a more or less direct implication of holding to the logic of a multiplicative "interactionist" rather than a transformative sociocultural approach to the nature vs. nurture debate. There were, of course, more demonstrable ideological, economic, and idiosyncratic reasons for the latter Nazi program, but the rhetoric of "promoting genetic fitness and purging unfit genes" provided the primary vehicle for both its propaganda and its crimes against humanity.

Symposium on operationism 1945

Given the inadequacy of the pre-W.W.II intellectual tools of General psychology, is it surprising that experimental psychologists -once faced with the rather grotesque practical implications of those tools- might become very wary (like the correlational researchers preceding them) of making any bold statements about the ontological status or developmental underpinnings of psychological processes? Whether or not the ideological context of the war per se played any part in Boring's decision to hold a "Symposium on operationism" (1945), I cannot say for sure. Nor have I yet checked this issue further by thumbing through his (1961) autobiography. All that I can tell you is that by the Spring of 1945, Boring was both aware of the urgent need to revise General psychology and willing to act upon his beliefs regarding which of the rather fundamental ideological issues needed to be excised from the realm of acceptable professional conduct; for (as the following account by an eyewitness indicates) it was at this point that he had an acrimonious face-to-face professional run-in with R.S. Woodworth:

"Partly because he was so long lived, Woodworth served as a gentle reminder of the persistence and power of discriminatory attitudes and practices even at the end of WWII. At the annual Eastern Psychological Association in the Spring of 1945, Woodworth publicly endorsed a scurrilous, anti-Semitic editorial by F.C.Thorne in the then new journal, Clinical Psychology, that Thorne edited.... In the midst of a large gathering of faculty and students Woodworth argued Thorne's case on humanitarian concerns for the state of mental health of the rest of America. The gathering had numerous faculty, researchers, and students who had come to this country as prewar émigrés, refugees, and survivors from Hitler's Europe. They and many others were very excited and loud in demanding that Thorne be condemned and that he be removed from the journal. Despite Woodworth's mild manner and well-known sweet, paternal[ism] he did not succeed in mollifying the audience. Suddenly, the crowd around him parted and a short, rather plump man I didn't know bustled to the center and said loudly to Woodworth and the crowd something like "Nonsense! Of course you can't let Thorne get away with something like that. We're just finishing a war to settle stuff like that." It was E. G. Boring and he got a tremendous hooray and clapping. That ended it for the moment. Woodworth took it in stride; no argument; he just looked at the younger Boring and then walked away to, it has to be said, some jeers. Yes! I was there and it was very exciting indeed, even exhilarating, this confrontation with what seemed to most of us to be evil in the heart of psychology and academia" (Richard A. Littman, December 02, 2002).

This is all well and good as far as it goes. But is it not also possible that this mid-20th century generation of experimental psychologists, in their operationist zeal to free psychology from "groundless metaphysics" and unsound theory, may have retreated too far from the well-founded aspects of developmental analysis and ontological claims regarding psychological processes made by older, albeit imperfect, figures like Woodworth, Dewey, and even William James for that matter? I think a case can be made for this argument by looking at the rather disappointing anti-realist starting point, linguistic-centered (rather than object- or process-oriented) content, and divisive (rather than unifying) outcome of the 1945 Symposium -as well as the equivocal (rather than decisive) disciplinary aftermath which operationism in general helped to bring about.

The anti-realist starting point of the 1945 Symposium (and the linguistic-centered content which follows from it) was set up successively by Stevens (1935a&b, 1936, 1939), so some elaboration on those two counts is necessary. In these articles, Stevens makes it perfectly clear that he considers the persistent attempts of past empiricists and early positivists to maintain a "realist" epistemology to be an exercise in futility. The rather seductive argument used by Stevens on this point runs as follows. Since operationism is aimed at purging psychology of "a priori postulates" (1935a, p. 323), and since realism (the suggestion that we have access to objects and events in the world) is a "metaphysical doctrine" rather than an empirical or logical position (1939, p. 231), it -like other such questions (e.g., mind-body dualism vs. monism)- must be purged from the scope of operationist discourse. Furthermore, although operationism requires appeal to the "reportable [public] aspects of experience" (1935a, p. 327), that experience per se is not to be understood as the sort of fundamental "given" that past realist accounts had suggested it to be:

"We can only 'examine' (react to) the [infinite] system [of nature] at some particular time and place and then try to order our reactions into a scheme which we call science. Consequently, when we try to go back to experience, as positivists have sought to do, we never reach a last and final given out of which constructs are generated.... That is to say, even the most elementary experience, such as seeing a color and recognizing it (naming it, let us say), is conditioned upon the subject's previous history and present attitude, and therefore exhibits the 'inferential' character that is essential to constructs.... [W]e are forced to conclude that there is no... 'summum genus of which everything must have been a part' before it was a construct. There are only constructs, first, last, and always" (Stevens, 1935b, p. 521).

"In [1935b] I... tried to demonstrate that an empirical (operational) definition of immediate experience is possible provided we note precisely what its advocates do when we ask them to indicate an example of it. Almost invariably they point to a situation involving an elementary discrimination such as: 'I see red.' Elementary discriminations, then, are what is meant by the immediately given and discriminatory reactions, of course, are public and communicable" (Stevens, 1939, p. 239).

Let's try to understand why this sort of apparently circular and self-contradictory anti-realist argumentation was so seductive to Stevens and other mid-20th century "operationist" psychologists. First of all, acceptance of anti-realism had long been recognized as the only possible outcome of holding to the sort of indirect theory of perception shared by nearly all physiological investigators (e.g., Helmholtz), as well as by Critical (Mach, Avenarius) or Logical (Schlick, Neurath, Carnap) positivists, since the period in which Wundt had attempted to mark off psychology as a distinct discipline (see Section 3). Recall, however, that Mach (the physicist) and Wundt had squared off on the seemingly irresolvable question as to which discipline held the proper claim to immediate vs. mediate (inferential) experience.

The Stevens argument, which appealed to operationally defined "discriminatory reactions" of organisms (as the immediately "given" of experience), seemed to tip the balance of this interdisciplinary argument in favor of psychology. If the "sole business of psychology" is to "test and measure the discriminatory capacities" of organisms (Stevens, 1935a, p. 325), and if "all sciences... are founded upon human experience" (1935b, p. 520) as so defined operationally, then psychology is the most fundamental science of all. Like Hume and Wundt respectively, Stevens considered psychology to be the most fundamental ("propaedeutic") science (see Stevens, 1936). The initial elation and disciplinary pride in this regard, however, is rather short-lived once one starts to consider the constrained knowledge claims and implied theory of truth which accompany such a proposition.

This brings us to the second problematic aspect of operationism set up by Stevens and carried over into the 1945 Symposium: a circuitous overemphasis upon analysis of psychological "language," upon the nature of operational definitions, and upon how psychologists might come to "agreement" about "definitions, concepts, or constructs" in the absence of any possible appeal to their correspondence with psychological entities (acts, events, or processes). In other words, having cut himself off from realist epistemology and from the traditional ontologically materialist appeal to a correspondence theory of truth, Stevens, like the Logical positivists before him, was forced to come up with some other criterion for establishing the truthfulness of knowledge claims. The respective contrasts on this rather central point can be made easily by referring to the following table and by elaborating the procedural implications of each position with respect to the illustrative "I see red" example used by Stevens (1939).

Position | Criterion of truth | Primary referent
Realist-materialist science | Correspondence between empirical statements and the world | objects, events, processes
Logical positivism | Coherence between empirical or logical statements | sensory experience
Operationism | Agreement between disputants | concepts

Under the traditional realist-materialist account of science (a.k.a., the Standard view), the truth of any empirical statement, observational generality, or theory can in principle be ascertained by referring to its "correspondence" with the nature of the object, event, or process under study. Under this account, the presentation of a "stimulus card" to an experimental subject (as in the "I see red" example) is a rather straightforward affair. When asked to report to the experimenter, the subject says "I see a red card" or some such statement. Regularities and shifts in these reports (under various lighting conditions, at different times of day, and for single or multiple subjects) can then be collected to yield observational laws, and these observed regularities can in turn lead to a theory about the perceptual process of vision. Note that the primary "given" in this realist account of science is the object, event, or process under study and that the truth or falsity of various -even contradictory- statements about them can be ascertained by virtue of our direct access to these primary referents.

It is a great historical irony, however, that the most prevalent theory of perception to come out of the mid-19th century physiological ferment was the indirect (three-moment) theory proposed by Helmholtz among others. The Critical positivists of that era suggested that if the indirect theory of perception is held to consistently, all that should be reported in such an experimental situation is "I have the sensation of red." Quite obviously, if one attempts to carry out research within this restricted Humean framework of "sense data" (where appeal to objects is ruled out of bounds), truth has to be defined in some other way (e.g., as the mere regularity of observed stimuli).

When faced with the task of reconciling this supposed "barrier of the senses" with their desire to make scientific statements about the world, the Logical positivists (of the 1920s-1950s) suggested that the correspondence theory of truth must be replaced with a comparative analysis of the "coherence" between various empirical or logical statements such as: "I have the sensation of red; it is deeper than the prior stimulus by approximately twice the magnitude; and it is lighter than the initial stimulus by about the same order; and I am now inferring (anticipating) that the next stimulus will be deeper still if the current pattern of stimulation continues; etc." For them, it was the overall "coherence" (observable pattern or logical consistency) between these varied statements which became the criterion for making truth claims not only about the presented stimuli, but also about the perceptual processes which allowed those statements to be made in the first place.

Let's be quick to note that under the realist-materialist point of view what we would expect (deductively) is that if a proposition is true by correspondence, it will also be coherent with other true propositions. Why? Because each of these true propositions reflects the coherence of the world. Coherence with other true propositions is the "outcome" of a proposition's truth; it is not the "criterion" of its being true. It was this very realization that had previously led evolutionary thinkers like James and then Dewey away from the indirect theory of perception and toward various attempts to reassert direct realist epistemologies (Section 4).

In contrast, since the Logical positivist movement was premised upon an under-recognized Humean epistemology (now dressed up in "physicalist" language or physiological guise), it could no longer appeal to correspondence and therefore retreated to the position that coherence between empirical or logical statements might be a "workable criterion" for truth (see Passmore, 1967). The same thing happened in operationism with its appeal to "agreement between disputants" as a criterion of truth. For instance, if I look at the presented stimulus card and say "it is red," and you concur with my statement, then that statement is deemed to be true because it has been publicly agreed upon. Yet, under the materialist account, such agreement follows from the truth of the proposition. The truth of that proposition does not depend upon such "intersubjective agreement," but rather the other way around!

These two rather fundamental methodological errors (anti-realism and the agreement theory of truth) constitute the respective raison d'être and sine qua non of the operationist movement in psychology as originally proposed by Stevens. It is now left for us to consider whether the 1945 Symposium made significant headway in recognizing or improving upon these errors.

Given that Boring was the figure who both proposed holding the "Symposium on operationism" and wrote up the initial list of questions to be handed out for discussion (by the Editor of Psyc. Rev., H.S. Langfeld), my personal expectation was that the indirect realist position of his 1923 article, as well as that outlined in his more recent major work Sensation and Perception in the History of Experimental Psychology (1942), would in turn be somehow reflected in either the "complete list" of 11 symposium questions (see Langfeld, 1945) or in his personal written contribution (Boring, 1945). The likely candidates in this first regard seem to be Question 8 (which mentions "intelligence") or Question 10 (which suggests that it is important to know whether one is "presupposing" or attempting, albeit indirectly, to "justify" a "logical apparatus for dealing with the language of science"). I've played with the order of wording in that latter question, but it is fairly clear, if one looks through Boring's selected writings on science and operationism (see Boring, 1963), that he is attempting successively to assert the latter "indirect measurement or justification" argument.

As for Boring's personal (1945) contribution, he concurs with the Stevens view in various respects, including: the definition of experience as "differential reaction;" the potential usefulness of physicalist language to unify science; and the ability of psychologists to decide on good versus poor operational definitions depending upon whether or not they lead to agreement -"univocality"- between disputants. Boring also, however, seems to demur from the overly skeptical Stevens account on other points.

The main point of dissension from Stevens is seen in Boring's comments on Question 8, where he reasserts his earlier indirect realist position regarding intelligence testing: "If intelligence is what the tests test, it is still possible to ask whether what the tests test is neural speed or normal education or something else" (Boring, 1945, p. 244). Quite predictably, this position raised the hackles of more skeptical members of the symposium. Bridgman, for instance, points out that "The assertion that the intelligence test tests a 'what' implies... that the results of the test have the properties of a 'what'" (pp. 248-249). This implication, he suggests, is "question begging," and he moves on to a more skeptical (and self-contradictory) assertion that "the actual situation here is one of spiral approximation" of successive tests and conceptions to "aspects of... behavior" which by convention "we will be willing to call intelligence" (p. 249).

As indicated in the "Rejoinders" section of the symposium, however, Boring was certainly not surprised with this particular form of critique. Regarding Bridgman, Boring writes: "On certain... matters he has said exactly what I hoped he would say in answer to some of these questions" (Boring, et al., 1945, p. 278), and "has... [overall] modified his more extreme position about the pluralism of constructs..." (p. 278). Here too, Boring is quick to sing the praises of Herbert Feigl's contribution "Operationism and scientific method" which (albeit somewhat dogmatically and sporadically) at least referred to the ultimate "cash value" of respective operational terms as residing in their "factual reference" to "items of direct observation" (see pp. 251-253).

Ironically, the only symposium member not to have some sort of hang-up regarding our epistemological access to the objects and events around us was B.F. Skinner, who (if we are going to pigeonhole him anywhere) took a naive realist position. This point can best be made by simply stringing his various Rejoinders comments together as follows:

"In spite of the present symposium.... I believe that the data of a science of psychology can be defined or denoted unequivocally.... The position [of Boring and Stevens] is not genuinely operational because it shows an unwillingness to abandon [subjectivist] fictions [like the supposed distinction between 'discrimination' (public) and 'sensation' (private)].... What is lacking is the bold and exciting... hypothesis that what one observes and talks about is always the 'real' or 'physical' world (or at least the 'one' world) and that 'experience' is a derived construct to be understood... through an analysis of verbal... processes.... One can see why the subjective psychologist makes so much of agreement.... The agreement is likely to be shattered when someone discovers that a set of terms will not... work... in some... neglected field, but this does not make [inter-subjective] agreement the key to workability. On the contrary, it is the other way round" (Skinner, 1945, pp. 292-294).

Accordingly, Skinner was the lone symposium participant to unapologetically "presuppose" not only a logical apparatus for dealing with the language of science (see Question 10) but also our unimpeded access to the world to which such an apparatus refers. It was, however, Boring's indirect realist position -i.e., one of "justifying such an apparatus"- that was picked up during the subsequent 1950s-1970s general psychological research tradition.

We are now in a position to make our historical pronouncement about the headway made by the 1945 symposium. While none of the participants managed to resolve outright the two above-stated methodological errors residing in the Stevens approach (anti-realism and appeal to the agreement theory of truth), Boring most certainly came close to doing so. In other words, without quite coming out and saying it, Boring's indirect realist position seems to have allowed him just enough room to recognize that the coherence between empirical or logical statements (striven for by the Logical positivists), the intersubjective agreement of belief between disputants (striven for by Bridgman and Stevens), and the "convergence of operational results" (appealed to by Feigl, 1945, p. 255) are all outcomes of the correspondence of those statements, beliefs, or results to the "what" to which they refer. Furthermore, it was but a small step from this apparent recognition on Boring's part to the more explicitly held "indirect measurement" viewpoint of subsequent "convergent" operationism. Yet, before this revised methodological position could be adopted unequivocally, certain transitional metatheoretical hurdles (like the 1948 kerfuffle over "surplus meaning") would have to be either surmounted or sidestepped.

The kerfuffle over "surplus meaning" (1948 and beyond)

Just as the Stevens form of operationism was beginning to be adopted into psychological discourse, the Logical positivist philosophy upon which it was based was fast losing its footing. During the 1930s, the Logical positivists found that as they tried to force the objectivist Standard view of science (that there is a reality there and we engage in practice to learn about it, that we create hypotheses and try to verify our views against the external world) into a Humean framework of sense data, it became successively more arid and impractical (see Passmore, 1967). The initial 1920s appeal to the "verifiability principle" (Carnap, 1928), which, as Stevens himself admitted, found a "twin" in his concept of operational definition, was losing discursive ground to a new weaker principle of "testability" (Carnap, 1936; 1937). After all, how do we "verify" that a statement is correct if all we know is dependent upon our sense experience?

Within Logical positivist circles it was gradually realized that the "verifiability principle" -the proposition that the "meaning of a statement is given by the methods of verifying it"- was itself not an empirical proposition but rather a methodological one which depended upon the very assumed metaphysical positions which Logical positivism had hoped to avoid:

"The course taken by the subsequent history of logical positivism was determined by its attempts to solve a set of problems set for it... by its reliance on the verifiability principle. The status of that principle was by no means clear, for 'The meaning of a proposition is the method of its verification' is not a scientific proposition. Should it therefore be rejected as meaningless? Faced with this difficulty, the logical positivists argued that it ought to be read not as a statement but as, a proposal, a recommendation that propositions should not be accepted as meaningful unless they are verifiable. But this was an uneasy conclusion. For the positivists had set out to destroy metaphysics; now it appeared that the metaphysician could escape their criticism simply by refusing to accept their recommendations" (Passmore, 1967, p. 54).

Stevens (1935b, 1939) attempted to escape the solipsistic fate of Logical positivism by asserting (rather dogmatically) that if we limit our definition of perception to that of "discriminatory reactions," human "experience" is thereby rendered a publicly observable affair. But the same rhetorical counter-argument (the "metaphysician's loophole" outlined in the above quotation) applies equally well to his operationism. For, if the meaning of a psychological concept is given by an appeal to operations with respect to it (after Stevens), the operational definition turned back upon itself is not an empirical proposition either, but merely a recommendation.

Furthermore, its method of generalization -explicitly portrayed by Stevens as one of mere "classification" of observed operational regularities- would take psychological inquiry only so far. Recall that in Section 2, I provided a fairly extensive subsection on Kurt Lewin's (1931/1935) distinction between "Galilean vs. Aristotelian" approaches to lawfulness. The Aristotelian approach defined lawfulness abstractly in terms of amount (frequency or regularity of occurrence), while the Galilean claims that everything is lawful and seeks not an abstract but a concrete conception of the events (objects or processes) under investigation -including peculiar or occasional cases thereof. In the 1930s era Stevens account of psychological generalization, we are sent hurtling back to Aristotle's merely categorical or classificatory position (see Stevens, 1939, p. 234). It was precisely this sort of operationist undercutting and restriction of the traditionally open and productive materialist (Galilean-Baconian) approach to scientific method and generalization -including its hypothetical-deductive, descriptive-inductive, and experimental aspects- that Lewin (1935) was arguing against.

Two of the most hotly debated topics of the late 1940s through 1950s, therefore, were just how far appeals to such operations could take psychological inquiry, and what sorts of generalizations should be the goal of psychological inquiry. It is these sorts of procedural and knowledge-claim questions that provided the proximate disciplinary backdrop for the post-W.W.II consideration of the difference between "intervening variables" and "hypothetical constructs" (MacCorquodale & Meehl, 1948; Tolman, 1949; M. Marx, 1951; Roback, 1952; Kendler 1952, Ritchie, 1953; Boring, 1953).

In their attempt both to highlight the past inconsistent usage of, and to draw a more workable distinction between, the terms "hypothetical construct" and "intervening variable," MacCorquodale & Meehl (1948) open by noting the disparity between two tendencies in psychological discourse. On the one hand, "tough-minded" empirical psychologists tend to be wary of unobservable or hypothetical terms and exhibit "an almost compulsive fear" of passing beyond safe reference to observable data as anything more than tentatively defined intervening variables. Hypothetical entities like emotions or motives are introduced by them with a "degree of trepidation and apology quite unlike the freedom with which physicists talk about atoms, mesons, fields, and the like" (p. 95). On the other hand, there is the soft-minded "theoretical" approach, which argues that if neutrons are admissible in physics, it must be just as admissible for us to talk about the psychoanalytic damming up of libido and its reversion to earlier channels, and so on.

Clearly, the first disciplinary tendency is too confining because it allows no theorizing per se about the nature of the underlying psychological processes which cause the observational generalities described by empirical inquiry. The second tendency is too lax because it implies that all possible hypothetical constructs can be treated as if they are on the same sound empirical footing, and it provides no way of deciding upon the veracity of one construct over another.

MacCorquodale & Meehl argue for the virtues of a theoretically inclusionary though still empirically rigorous middle disciplinary path. This path, they suggest, is to be had by way of explicitly distinguishing between the differential methodological utility of intervening variables (after the fashion of Tolman, 1938) which have a purely summary descriptive character (an operationally defined observational reference), and hypothetical constructs which contain "surplus meaning" (after Reichenbach, 1938) since the latter "concepts" refer to non-observable entities "residing in the organism" and make semi-ontological truth claims.

"The view which theoretical psychologists take toward intervening variables and hypothetical constructs will... profoundly influence the direction of theoretical thought. Furthermore, what kinds of hypothetical constructs we become accustomed to thinking about will have a considerable impact upon theory creation. The present paper aims to present what seems to us a major problem in the conceptualization of intervening variables, without claiming to offer a wholly satisfactory solution. Chiefly, it is our aim... to make a distinction between two subclasses of intervening variables, or we prefer to say, between 'intervening variables' and 'hypothetical constructs' which we feel is fundamental but is currently being neglected" (MacCorquodale & Meehl, 1948, p. 95).

While reading through their article, one begins to feel that MacCorquodale & Meehl are suggesting that the former term (intervening variables) may have a prominent role in early descriptive phases of investigation while the latter term (hypothetical constructs) has a role in later explanatory phases of investigation regarding some particular domain of subject matter (e.g., learning, memory, motivation). This point, however, is not drawn out in high relief by them and we should at least attempt to understand why.

Long story short, the most lucid few lines of the article suggest that if the above "linguistic conventions" are adopted it may be fair to "demand" that the hypothetical terms used by any "theory" in a given research domain (say human learning) should have some "probability of being in correspondence with the actual events" underlying the observable phenomena -"i.e., that the assertions about hypothetical [entities] be true" (p. 105). Yet it is just here that the major stumbling block of MacCorquodale & Meehl's position is encountered. That is, even while making these sorts of important definitional distinctions, they attempt to remain "metaphysically neutral" on the issue of realism versus anti-realism:

"It is perhaps unnecessary to add that... we do not mean to defend any form of metaphysical realist thesis. The ultimate 'reality' of the world in general is not the issue here; the point is merely that the reality of [unobservable] hypothetical constructs like the atom... is not essentially different from that attributed to [observable] stones, chairs, other people and the like.... The present discussion... is intended to be metaphysically neutral" (Footnote 1 in MacCorquodale & Meehl, 1948).

It is clear that MacCorquodale & Meehl have resurrected the old derogatory Logical positivist (straw man) definition of "metaphysics" (see Appendix 2 under Definitions) and are therefore attempting to avoid siding with either realism or anti-realism accordingly. Yet, surely, what we have seen time and again in the history of the psychological questions we have covered thus far is that attempting to remain "neutral" on such rather fundamental de facto philosophical or methodological issues doesn't work! If one attempts to remain "agnostic" about, or gives up on, the realist-materialist correspondence view of truth, one is eventually forced back -by virtue of the practicalities of everyday life and of the requirements of carrying out scientific investigation itself- to choose some other sort of criterion for making truth claims.

In this particular case, the adopted criterion for the "truth" of a given "hypothetical construct" is one of "compatibility with... general [commonsense, ordinary] knowledge and... with whatever relevant knowledge exists at the next lower level in the explanatory hierarchy" (p. 107). To utilize their examples: Some of the hypothetical terms used in Hull's (1943) response gradient formulas may have a "real although [as yet] undetermined neuromuscular locus" (p. 100). In contrast, since we already know from anatomy and physiology that the nervous system "does not... contain pipes or tubes with fluid in them" -to which the supposed "hydraulic properties of libido could correspond"- that particular hypothetical construct "is likely to remain [merely] metaphorical" (p. 106).

MacCorquodale & Meehl's careful equivocations on the issue of realism constitute a mere methodological sidestep of the "surplus meaning" (ontological claims) aspect of hypothetical constructs. It was such a small step that Tolman (1949) felt comfortable enough with it to expand his previous severely constricted view of "theory" -as equivalent to a set of intervening variables (1938) which we "the theorists" use to break down observable events (p. 9)- and to embrace the new appeal to hypothetical constructs as necessary parts of "more general models of behavior" (1949, p. 49). We might simply mention in this regard that Tolman was arguing from a position of theoretical strength here because the Tolman, Ritchie, & Kalish experiment (1946) had already obliterated the rival Hullian hypothesis in favor of a cognitive map account.

Other, more conservative operationist figures like Melvin Marx (1951), however, refused to go along with this flirtation with scientific realism. He considered Tolman's (1949) shift toward openly embracing hypothetical constructs a "dangerous" affront to the ultimate disciplinary task of ensuring "operationally sound" research. Hypothetical constructs were, for him, merely "temporary expedients" used during the early (speculative) phase of research: "If psychological theories are to be placed on a sound scientific basis, logical constructs of the more distinctly operational type must first supplement and eventually replace those of the hypothetical construct type" (1951, p. 235). Disciplinary progress under this conservative procedural scheme would be reduced to the mere extent to which early ontologically loaded "hypothetical" (theoretical) terms could be converted into measurable "operationally valid" (observational) terms, "which are the only kinds of constructs ultimately admissible in sound scientific theory" (p. 246).

One is left with the question whether any scientific proposition ("construct" in Marx's terminology), having been thereby stripped of its "surplus" ontological meaning and rendered measurable ("operationally valid"), could ever be rightfully called a "theory." I would suggest not, and we should note too that Skinner's article "Are theories of learning necessary?" (1950) had already suggested the same sort of point. For if there has indeed been a tendency to inadequately distinguish between measurable events in the world (facts) and ontologically loaded statements about the psychological processes they are attempting to measure (theories), or if we are indeed unable "in principle" to do so, then what procedural options are left open for the discipline to follow? Either a mere "state of the art" appeal to temporarily concentrate our disciplinary efforts on observationally derived mathematical generalizations (a la Skinner, 1950) or an appeal to some sort of long-standing "nominalist" (naming without claiming) position such as that presented in Kendler's "What is learned? -A theoretical blind alley" (1952).

Benbow Ritchie (1953), as one of the moderate figures of the period, attempted to provide a whimsical reductio of Kendler's nominalist arguments. In the final analysis, however, Ritchie's critique fell flat because, like most of his contemporaries, his consideration of the issue remained within the confines of appeals to "concepts" about worldly events (indirect realism) rather than to the observable facts, empirical generalizations, and eventually explanatory theories about processes allowed by direct realism (a la James or Dewey) or naive realism (a la Skinner). What Ritchie seems to want to say is that for psychological scientists, "cognition" is like the shape of the earth for those who carried out historical voyages of exploration. In both cases, these are the material subject matter (the stuff) about which empirical statements and ontologically loaded theories must be made. To doubt their existence is to undermine the very raison d'être of exploratory investigation, scientific measurement, and theorizing. What he actually said, however, was: "The failure of Hume and others [including Stevens, and Kendler] to recognize the usefulness [indeed necessity] of such [metaphysically loaded] nonsense was due to their misunderstanding of the [procedural] relation between thinking [, measurement] and theory" (p. 220). But these two arguments are hardly the same! The first argument relies upon direct or naive realism by asserting our undeniable "objective" access to the basal factual level of inquiry while the latter argument relies upon indirect realism and assumes only that we have access to concepts about that basal factual level of inquiry.

Another moderate voice was that of E.G. Boring (1953). Yet even here we see the pervasive force of indirect realist arguments of the period reasserting themselves in a counterproductive manner. Boring suggests, in "The role of theory in experimental psychology" (1953), that the major task for "theoretical psychology" is now to consider whether "the new concepts,... hypothetical constructs, [or] ...intervening variables, [are] getting to thinghood fast enough?" But, given that his indirect realist understanding of the "real world" to which these hypothetical or intervening "terms" refer is one of "constructs" alone, Boring ultimately remains as stuck within the confines of his own analysis as were Stevens, Kendler, and Ritchie!

Exemplars of "convergent operationist" positions (experimental and correlational)

Due to its anti-realist assumptions, the Stevens account of operationism had painted 1930s experimental psychology into a confined corner of its own self-looping discourse from which neither MacCorquodale & Meehl's (1948) metaphysically neutral "linguistic convention" (between intervening variables and hypothetical constructs) nor Boring's (1945, 1953) indirect realist appeal to the "thinghood" of empirically valid psychological constructs was able to escape.

Given the methodological limitations of the above equivocal epistemologies, it should not be surprising that attempts were made in psychology to reintroduce some form of naive or "critical" realist account into the discipline (see Stevens, 1951b; Garner, 1954a&b onwards, respectively). For along with those attempts came an empirical rationale which promised to allow research to continue within, and perhaps even resolve, the dichotomy between correlational and experimental research. Such were the respective and then combined intents of the varied so-called "construct validity," "converging-operations," and "multiple-measures" positions proposed during the late 1950s (Cronbach & Meehl, 1955; Garner, et al., 1956; Cronbach, 1957; Campbell & Fiske, 1959; Cronbach, 1975).

Stevens adopts Naive Realism

At some point during the W.W.II-1952 period, in which Stevens was assuming the reins of Harvard's Psycho-Acoustic Laboratory, writing a contribution to Boring et al.'s Foundations of Psychology (1948), and carrying out his own editorial duties for the Handbook of Experimental Psychology (1951a), it appears to have dawned on him that his former ontologically agnostic (operationist) position was far too confining.

Accordingly in "Mathematics, measurement, and psychophysics" (which constitutes the opening article of the Handbook) Stevens writes:

"The stature of a science is commonly measured by the degree to which it makes use of mathematics. Yet.... at no place is there perfect correspondence between the [utilized] mathematical model and the empirical variables of the material universe.... Measurement is possible in the first place only because there is a kind of isomorphism between (1) the empirical relations among objects and events and (2) the properties of the formal game [e.g., correlation] in which numerals are the pawns. When this correspondence between the formal model and its empirical counterpart is close and tight, we find ourselves able to discover truths about matters of fact..." (Stevens, 1951b, 1-2).

This talk by Stevens (1951b) of a "material universe" and the "correspondence" of our measurements to it marks a clearly progressive shift in his position toward a naive realist epistemology similar to that proposed in Skinner's (1945) contribution to the "Symposium on Operationism". As stated in Appendix 2, this sort of naive realism is one rather particular and conservative form of "direct realism" (see figure 1 therein) which, although retaining the representationalist theory of indirect perception, is unlike the indirect realism of its contemporaries (e.g., Boring, 1942, 1945, 1953). The difference lies in its albeit somewhat dogmatic assertion that scientific objectivity (correspondence of our investigations to nature) is possible because empirical measurements are considered as "scaled value[s] of the phenomena itself" (Stevens, 1951b, p. 48).

Stevens (1951b) retains a "procedural" appeal to the use of "operational definitions" but gives up the formerly problematic Humean epistemology and ontological agnosticism that went along with his 1939 argument in favor of a naive realist account. To paraphrase his new argument: Although the "individual terms" used in psychology should "satisfy operational criteria" by being stated as "empirical propositions" (operational definitions) that are potentially "confirmable" by means of mathematical measurement, correlation, or experimental contingency, we should also be careful to consider the "validity" of the "mathematical formulas" or "models" utilized to make such assessments (e.g., correlation or a telephone exchange respectively) because they often "fit" the empirical world "as poorly as a borrowed hat" (p. 3).

Further evidence that Stevens had adopted this somewhat conservative form of naive realism can be obtained in his "final" (1951b) distinction between empirical "measurements" and "indicants" which may be worth quoting here at length before moving on to what followed thereafter:

"One final point and we are through.

Although psychologists devote much of their enthusiasm to the measurement of the psychological dimensions of people, they squander more of it in an effort to assess the various aspects of behavior by means of what we may call indicants. These are effects or correlates related to psychological dimensions by unknown laws. This process is inevitable in the present stage of our progress, and it is not to be counted a blemish. We know about psychological phenomena only through effects, and the measuring of the effects themselves is a first trudge on the road to understanding.

The end of the trail is measurement, which we reach when we solve the relation between our fortuitous indicants and the proper dimensions of the thing in question.

In the meantime we take hold of our problems by whatever handles nature provides. We count the number of pellets hoarded by a rat in order to assess its hoarding drive. We count the number of trials required for a man to learn a task, and use this number as an index of his ability. We measure changes in the resistance of the skin and call it an indicant of emotion. In short, we are far more frequently engaged, as the following chapters will demonstrate, in the measurement of indicants than we are in devising scales for the direct assessment of physiological and psychological phenomena, or of 'intervening variables,' as they are sometimes called.

Occasionally the measurement of an indicant is sufficient for the task at hand, e.g., when we gauge a worker's ability by his productivity we may be interested in no more than the relation between his production and that of his neighbor. But more often we would like to measure his ability, intelligence, drive, emotion, hunger, etc., on a scale of the attribute in question rather than by effects that bear a dubious relation to it.

The difference, then, between an indicant and a measure is just this: the indicant is a presumed effect or correlate bearing an unknown... relation to some underlying phenomenon, whereas a measure is a scaled value of the phenomenon itself. Indicants have the advantage of convenience. Measures have the advantage of validity. We aspire to measures, but we are often forced to settle for less.

This distinction between measures and indicants disappears... as soon as we learn the quantitative relation between the indicant and the object of our interest, for then the indicant can be calibrated and used to measure the phenomenon at issue.... We measure the psychological pitch with a frequency meter after we have established a scale relating pitch in mels to frequency in cycles per second. The more mature a science, the more it uses calibrated indicants" (Stevens, 1951b, pp. 47-48).

One of the argumentative points being made by Stevens is that the more we know about "the object of our interest" the more precise our "quantitative" assessment of it becomes; and furthermore, the more precise that assessment becomes, the more frequently we can shift from merely "convenient" indicants (indirect indicators) to "valid" (direct) measures thereof. Let's be careful, therefore, to highlight two important methodological aspects of that argument. Firstly, the word "operation" does not appear anywhere. Secondly, the "validity" of the "direct" measures referred to therein resides in their "correspondence" to the actual (real) nature of the subject matter under investigation. That second methodological aspect is the very definition of direct realism -whether or not the proponent thereof also holds an indirect or direct theory of perception to back it up- and such direct realism is the necessary though not sufficient foundation of scientific objectivity. By virtue of adopting naive realism, Stevens (1951b) had charted a navigable course out of the epistemological corner he had painted himself into during his earlier "operationist" articles (1935a&b, 1936, 1939).
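For the present-day reader, Stevens' indicant-versus-measure distinction can be fixed with a minimal Python sketch. The calibration points below are generated from one commonly cited modern approximation to the mel scale, not from Stevens' own psychophysical data, and the whole listing is offered only as an illustration of his "calibrated indicant" idea: a frequency reading is a mere indicant of pitch until a scale relating frequency to mels has been established, after which the same reading can be converted into a measure.

    import numpy as np

    # Hypothetical calibration data: pitch judgments (in mels) gathered at a
    # handful of reference frequencies. Here they are simulated from one
    # commonly cited approximation to the mel scale (an assumption made for
    # illustration only, not Stevens' own scaling results).
    ref_freqs_hz = np.array([125, 250, 500, 1000, 2000, 4000, 8000], dtype=float)
    ref_pitch_mels = 2595.0 * np.log10(1.0 + ref_freqs_hz / 700.0)

    def pitch_in_mels(frequency_hz: float) -> float:
        """Convert a frequency reading (an 'indicant') into a pitch measure (mels)
        by interpolating along the previously established calibration scale."""
        return float(np.interp(frequency_hz, ref_freqs_hz, ref_pitch_mels))

    # Before calibration, 440 Hz is only an indicant of pitch; after calibration
    # it can be read off as a scaled value of the phenomenon itself.
    print(f"440 Hz  ->  {pitch_in_mels(440.0):.0f} mels (approx.)")

The design point is simply that the same instrument reading changes status, from indicant to measure, once the quantitative relation to the attribute has been worked out.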

Accordingly we will now consider whether the above naive realist position was adopted or in any way improved upon by the varied contemporary figures who wrote under the headings of "converging-operations"; "construct validity"; or "multiple-measures." I will suggest that while there was a shared intent at work in those positions (to escape the anti-realist confines of the 1930s era operationism), there was no such adoption or improvement over the Stevens (1951b) account achieved. Instead, there was a fair bit of methodological backsliding. Furthermore, it will be pointed out that this lack of disciplinary advancement is just one of the many historical rubs counting against Grace's (2001a&b) portrayal of "convergent operationism" as the mid-century savior of psychological discourse and empirical practice.

Wendell Garner adopts Critical Realism

Wendell Richard Garner (1921-), who studied directly under Stevens, took issue with the anti-realist aspects of his mentor's original 1930s era operationist account, and also explicitly rejected the eventual 1950s era "simple, or naive, realism" position of Stevens. Instead of following either, Garner asserted what he called a "critical realist" path of indirect indicators or correlates of psychological "processes" (especially perception) and, between 1954 and 1966, encapsulated his procedural approach to the empirical investigation of such processes under the heading of "convergent operationism."

Since Garner's efforts were aimed rather sincerely at counteracting the undeniably problematic ontologically agnostic reticence of 1940s-1950s era research, they should be applauded before being dissected and assessed in terms of their own internal inconsistency, practical outcome, and disciplinary influence. As we have seen in the case of many other disciplinary figures (including Woodworth, Boring, and Stevens), however, sincere intent is one thing, and achieved methodological progress or disciplinary advancement is quite another.

In this particular case, the philosophical and personal origins as well as the progressive intent of Garner's critical realism and convergent operationist positions are easy to establish because they are both highlighted in his (1972) autobiographical statement which makes most of the relevant points rather candidly in his own words:

"The epistemological position which seems to me valid both for the scientists and the ordinary person perceiving is that of critical realism. Realism, as contrasted with idealism, is the epistemological position that knowing is of a real world. There is a reality which is independent of the perceiver, be he scientist or layman. So to know is to know reality. But one can come to know reality in different ways. The simple, or naive, realist believes that we come to know reality directly.... The critical realist believes that we come to know reality by relating various types of knowledge so as to form a correct construction of the real world, but that knowledge of reality is denied us as directly given....

A... disagreement I had... with S.S. Stevens can illustrate this issue very nicely. Professor Stevens' role in this argument has always intrigued me because he... played an important part in my development as a critical realist, and yet he ended up arguing for the simple realist position.

Professor Stevens was... my supervisor... during World War II. One day I showed him some data I had collected, a graph that indicated an apparent anomaly under one condition. He asked me what I thought about the graph, and I replied to the effect that that's what the data showed. He then said, 'yes, but what's the truth?' This.... distinction between data and truth or as I would put it today, the reality we as scientists are trying to construct, is the distinction between simple realism and critical realism.

The disagreement mentioned above concerned the nature of a sensory scale of loudness. I (1954a) had argued for a scale which was based both on direct estimates of the ratio of loudnesses and on interval scaling of loudnesses, feeling that a scale must satisfy both of these scaling data to be valid. Stevens, on the other hand, argued in a series of papers (e.g. 1957) that the loudness scales based on direct estimates of loudness were the valid ones. That is to say, Stevens wanted to accept the naive realist position that verbal statements by an observer directly gave evidence concerning the magnitude of a sensation. My own position... [from 1954b, was that]:

'Direct validation of the numerical responses is impossible, because we have no independent measure of the sensory process itself... [However]... if two or more... sets of data, involving basically different indicators of the nature of the sensory process, lead to the same sensory scale, then we have a [converging operations] form of validation'" (Garner, 1972, pp. 75-77).

When we compare Garner's opening argument (that "Realism, as contrasted with idealism..." etc.) with the basic philosophical decision tree we have been utilizing throughout this course (see figure 1 of Appendix 2), it becomes rather apparent that he has made a fundamental philosophical error. He has confused epistemology with ontology. In point of fact, the indirect form of realism which Garner is clearly adopting (residing in the right side of figure 1) does not actually "contrast with" but rather relies upon an "Objective idealist" ontology (residing in the left side of figure 1). In any case, this appeal to indirect realism was the major methodological commonality between Garner's "critical" realism and Boring's more explicitly held representationalist (indirect perceptionist) viewpoint during the era under consideration. I would hazard a guess too that Stevens adopted naive realism -a view which Garner correctly points out is "akin" to the direct realist position (depicted in figure 1)- as a last-ditch effort to avoid the slippery slope presented by both indirect realism and his own former anti-realist position. Yet only by looking further into their respective procedural implications for the "practice" of empirical research can we truly appreciate the overlaps and differences between those three "epistemological" positions. So, let's turn to those now.

Aside from rejecting the unnamed "naive realist" account of the later Stevens on seemingly erroneous philosophical grounds, Garner also leveled two "procedural complaints" against the 1930s-1940s era of operationism: (1) that perception was "mistakenly" defined by Stevens (1935b, 1939) as synonymous with observed or measured discriminatory reactions (dial turning, verbal statements about observed luminance, etc.); and (2) that early empirical studies had too often appealed to convenient "singular" operational definitions (instead of to various empirically "converging" operations) as sufficient to define any psychological concept which they chose to study (speed of maze mastery for learning; or score on a paper test for intelligence, etc.).

The first of these procedural complaints on the part of Garner was by far the stronger of the two; for with regard to the second (weaker) complaint, Stevens (1951b) had already openly recognized the difference between early state-of-the-art appeals to convenient empirical "indicants" (indicators) and well-substantiated (externally "valid") "measurements" of psychological processes. But this rather important naive realist self-correction on the part of Stevens, the intellectual seeds of which seem to date right back to the time when Garner was studying under him, was rejected on philosophical grounds by Garner and was then overlooked completely by Grace (2001a) as well.

Regarding the stronger procedural complaint -made successively in Garner et al.'s "Operationism and the concept of perception" (1956) as well as in Garner's follow-up article "To perceive is to know" (1966)- Garner's most impressive accomplishment resides in recognizing and reasserting that perception is an "act of knowing" and not merely equivalent to a passively evoked though immediately observable response, reaction, or statement produced in a laboratory. Furthermore, as indicated by the following (1972) extract, it is clear that Garner's efforts in this regard constitute an attempt to free the perceptual research subdiscipline from its ongoing anti-realist confines:

"A statement of this position was... elaborated in a paper I wrote with Harold Hake and Charles Eriksen (1956), at that time colleagues of mine at Johns Hopkins University. In that period, empiricism and a simple form of operationism were quite accepted. It was widely felt that a concept was no more than the operations on which it was based. I had heard psychologists say such things as 'We cannot differentiate between a perceptual process and a response process because the response is our observable, and thus the concept of perception is synonymous with the responses which constitute our data.' This attitude... caused all three of us certain difficulties. We all agreed that it is not only permissible to consider perception as a concept independent of the operations which indicate its nature, but that as psychologists it was imperative that we do so. Editors felt differently, so we had some difficulty getting papers published when we tried to differentiate response and perceptual processes. So we wrote a complete statement of the position we shared, elaborated on the idea of converging operations as... a method, and illustrated its use in several research areas.... Fundamentally, the [1956] paper... is a statement of a critical realist position for science.... the converging-operations [technique] requires that we attempt to understand a process by using a variety of experimental or observational procedures, with sufficient variation that we can critically construct reality, i.e., can converge on the appropriate construct..." (Garner, 1972, pp. 77-78).

While the above message is somewhat mixed in with his appeal to "critical realism," let's attempt to appreciate what is being said here by Garner about the era to which it refers. It boils down to a statement on the part of an empirically-minded psychologist who wants to talk about psychological "processes" but is encountering a profound reticence on the part of his contemporaries to do so! Thus, despite the internal logical inconsistency between his chosen critical realist philosophy and the practical research implications of his perception-as-an-act argument, this attempt to refer to "psychological processes" rather than merely to operational definitions in the "simple" anti-realist or merely representationalist sense marks Garner off, as a theoretician, from contemporary figures writing under the banners of "construct validity" or "multiple-measures." Under all these contemporary accounts, the properties of the investigatory system which are being converged upon are the properties of a concept or construct rather than the properties of psychological processes. Although these contemporaries shared the intent of escaping the confined early operationist account of experimentation or individual differences research, Garner's appeal for the "permissibility" and professional "imperative" of investigating the properties of psychological processes per se works better against their respective Boring-style indirect realist or Logical positivist indirect measurement approaches than it ever did against the albeit "naive" direct realism of the later Stevens.
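The general logic of Garner's converging-operations requirement can also be illustrated with a toy simulation. The listing below is my own construction, not Garner's procedure or data: an unobservable "sensory process" is stipulated, two basically different indicators of it are generated with their own response biases and noise, and the agreement between the two independently recovered scales is then read, on the converging-operations view, as validation of the inferred scale.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "true" sensory magnitudes for 20 stimuli (unobservable in principle).
    true_scale = np.linspace(1.0, 10.0, 20)

    # Two basically different indicators of the same process, each filtered through
    # its own monotone response distortion plus measurement noise.
    ratio_estimates = true_scale ** 1.1 + rng.normal(0, 0.3, true_scale.size)
    interval_judgments = 3.0 * np.log(true_scale) + rng.normal(0, 0.1, true_scale.size)

    def ranks(x: np.ndarray) -> np.ndarray:
        """Rank-order the stimuli on a recovered scale (0 = lowest)."""
        return np.argsort(np.argsort(x)).astype(float)

    # Convergence check: do the two independently obtained scales order the
    # stimuli in essentially the same way?  (Rank correlation is used because
    # the two procedures yield differently shaped, but monotone, scales.)
    rho = np.corrcoef(ranks(ratio_estimates), ranks(interval_judgments))[0, 1]
    print(f"rank correlation between the two recovered scales: {rho:.2f}")
    # High agreement is taken as convergent validation of the inferred sensory
    # scale; low agreement as a failure of the operations to converge.

Nothing in such a sketch settles the realism debate discussed above; it only shows the procedural sense in which "basically different indicators... lead to the same sensory scale."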

Cronbach & Meehl on Construct validity

For example, in Cronbach & Meehl's "Construct validity in psychological tests" (1955), Garner's call for convergence of measures is first echoed with regard to the ability testing area. The difference from Garner's approach can be appreciated by noting that their discussion becomes bogged down in, and largely confined to, a consideration of "constructs" or "assumptive networks" rather than the nature of personality or other such "elusive" psychological processes per se. "Construct validity" itself, they suggest, comes into play "whenever a test is to be interpreted as a measure of some attribute or quality [presumably in the test taker] which is not [sufficiently] 'operationally defined'" (p. 282). By this, they mean when no single (or immediately observable) operationally defined "criterion" is "entirely adequate" (p. 282) to contain the varied aspects of the construct under consideration -e.g., time lapse between feeding vs. energy expenditure of the animal for relating "behavior to 'hunger'" (p. 284); or IQ score vs. teacher judgments vs. school achievement for relating a new test to "general learning ability" (pp. 286-287). Further, they hold that while the "best constructs" are ones "around which we can build the greatest number of inferences, in the most direct fashion" (p. 288), these are, as such and at best, "adopted, never demonstrated to be 'correct'" (p. 294).

So, even though Cronbach & Meehl's opening argument claims that the empirical problem faced by the investigator is to ascertain which constructs best "account for observed variance in test [or experimental] performance" (p. 282), and their final argument recognizes an available range of "employable" constructs which "vary in definiteness" (p. 300) from "pure description... to [those] ... involving hypothesized entities and processes" (p. 300), their middle argument undermines these commonsense positions by suggesting that the main (perhaps only) scientific role of our efforts to produce such converging measurements is to force presumably theoretical "adjustments" or "adaptations" in our "nomological net" (pp. 290-292) or "assumptive network" (pp. 292-297) from which we never really escape. This indirect measurement of constructs aspect constitutes the weakest, most abstract and inconsistent aspect of their article. In terms of epistemology, it adheres stubbornly to the unsatisfying middle ground between MacCorquodale & Meehl's (1948) agnosticism and the Logical positivist convergence of operational results argument put forward by Herbert Feigl (1945) with whom the second author was working at the time it was written.

There is a more progressive aspect of that jointly authored (1955) article which bears mentioning because it is picked up in two subsequently influential articles authored by Cronbach alone (1957, 1975). The common discipline-building intent among all three articles is to smooth over the supposed method gap between experimental and individual difference research; to indicate that below the surface layer of their conflicting research goals there is some sort of underlying commonality at work in the empirical practices of both subdisciplinary specialties. Cronbach & Meehl, for instance, maintain that the "investigation of a test's construct validity is not essentially different from the general scientific procedures for developing and confirming theories" (p. 300). They provide an analysis of the nature of the constructs to which not only personality or ability tests but also experimental research are said to refer, and end the article by implying that recognizing this overlap of method "would be preferable to the widespread... tendency to engage in what... amounts to construct validation research... while talking an 'operational' methodology which, ... force[s] research into a mold it does not fit" (p. 300). Ironically, this latter point holds equally well for their own jointly authored account, as will become clear in our review of Cronbach's solo attempts.

In Cronbach's (1957, 1975) articles, the conciliatory disciplinary message is advanced more clearly as well as successively updated. There is more mention in each of psychological processes, evolution, development, and the historically sensitive context of data collection which must be somehow included within our "combined" forms of research. They are still shackled by an overall indirect realist account of experimental or individual differences research (as converging on constructs of varying exactitude) but to a notably lesser extent. Like Boring (1953), Cronbach, from 1957 through to 1975, seems to want something more than the ontologically dismissive and epistemologically neutral stance of the past but just can't seem to get at it! His efforts to bridge the experimental versus correlational method gap are rather illuminating because they constitute a perfect exemplar of the limiting effects of two pervasive mid-20th century methodological assumptions -namely indirect perception rather than naive or direct realism, and an interactionist rather than transformative view of mental development- on a sincere, sustained, and well-intentioned effort to promote disciplinary progress.

Lee Cronbach on resolving the correlational versus experimental research divide

In his most famous article "The two disciplines of scientific psychology" (1957), Cronbach laments the estrangement of experimental and correlational (individual differences) research subdisciplines which occurred around 1923. "While the experimenter is interested only in the variation he himself creates, the correlator finds his interest in the already existing variation between individuals, social groups, and species" (p. 671). Cronbach's rationale for attempting to draft an outline of a combined approach which utilizes the respective strengths of each within what he calls an "integrated" discipline is as follows: If the "job of science is to ask questions of Nature" (p. 671), it is "shortsighted" to have one empirical branch of psychology devoted to discovering the "general laws of mind or behavior" and another "separate enterprise" concerned with measuring "individual minds" (p. 673).

Any true "federation" of these two branches of empirical "method," however, would have to recognize how the "divergent" philosophical positions, "scientific values," research "interests," and perhaps personal proclivities of those adopting each seemingly "independent" analytical technique, necessarily "converge" when it comes to addressing "certain important problems" to which their respective techniques when utilized separately, "give only wrong answers or no answers at all" (p. 673). As Cronbach points out, the "important" and necessarily related problems which neither technique handle very well on their own are ones of the normal or natural conditions of simultaneous "situational" and "mental" transition, change, or development. After describing the respective inconsistencies of each technique in this regard, he then proposes the discipline adopt a combined "Aptitude X Treatment interaction" (ATI) approach to handle those problems.

"In this common labor, [applied and general scientific work] will almost certainly become one, with a common theory, a common method, and common recommendations for social betterment. In the search for interactions we will invent new treatment dimensions and discover new dimensions of the organism. We will come to realize that organism and treatment are an inseparable pair and that no psychologist can dismiss one or the other as error variance" (Cronbach, 1957, p. 683).

Before getting into the details, let's pause to acknowledge that Cronbach is putting forward a rather amiable Baconian argument. Science looks to nature. If we find in nature that situations change and mental processes develop, but our two main empirical techniques of measurement attempt to hold either one or the other statically in place in order to carry out their analysis, then we are impelled to question their internal logic and set "boundary conditions" for the applicability of those techniques. The main historiographic issues for us to consider are: How far does Cronbach run with this initial Baconian argument over the course of his two solo articles? Will the conceptual tools which he has at hand hold up under the strain of closer examination? I don't mind telling you beforehand that the answer to the latter question is a qualified rather than a definitive: "No."

All the same, Cronbach does a superb job of stating the respective boundary conditions for the applicability of the two broad empirical methods under consideration. Furthermore, since his ATI approach -which was intended to extend the analytical boundaries of the discipline- was indeed adopted by the better part of subsequent "variable psychology," let's endeavor to critically analyze its main structural features and consider the extent to which each feature succeeded or failed. Regarding the contrasting research goals and "between treatments" versus "within treatment" emphasis of each empirical technique, Cronbach writes:

"Individual differences have been an annoy rather than a challenge to the experimenter. His goal is to control behavior, and variation within treatments is proof that he has not succeeded. Individual variation is cast into that outer darkness known as 'error variance.' For reasons both statistical and philosophical, error variance is to be reduced by any possible device. You turn to animals of a cheap and short-lived species, so that you can use subjects with controlled heredity and controlled experience. You select human subjects from a narrow subculture. You decorticate your subjects by cutting neurons or by giving him an environment so meaningless that his unique responses disappear... You increase the number of cases to obtain stable averages, or you reduce N to 1, as Skinner does. But whatever your device, your goal in the experimental tradition is to get those embarrassing differential variables out of sight.
The correlational psychologist is in love with just those variables the experimenter left home to forget. He regards individual and group variation as important effects of biological and social causes. All organisms adapt to their environments, but not equally well. His question is: what present characteristics of the organism determine its mode and degree of adaptation?
Just as individual variation is a source of embarrassment to the experimenter, so treatment variation attenuates the results of the correlator. His goal is to predict variation within a treatment. His experimental designs demand uniform treatment for every case contributing to a correlation, and treatment variance means only error variance to him" (Cronbach, 1957, p. 674).

While the "typical experiment" of that era still manipulated a single "dependent" variable along the line of Woodworth's univariate IV-DV model (1934), Cronbach is careful to point out that multivariate manipulations (after R.A. Fisher's 1925; 1930; 1935 approach) were starting to be utilized too. But whether the experimenter manipulates a single variable or multiple ones, the main concern is "between treatment" variance -to assess the "average" resulting effect or change brought about by treatments. Conversely, with individual difference research, which relies heavily upon the statistical correlation of multiple test "dimensions" (subtests or test items), the main concern is "within treatment" variance -to assess respective performances within a given standardized situation.

Cronbach's main point above is that the investigator's notion of what is considered as "error" -messiness in the data analysis- differs between these two empirical techniques because one is attempting to estimate the "central tendency" -the "averages" (mean, mode, median)- common among all in that population before versus after a given treatment, while the other technique is attempting to assess or rank order the performance of particular individuals according to a previously estimated "standard deviation" of performances on various subtests for that population under a similar condition of testing. The first technique requires that the situation of measurement be changed in order to obtain accurate averages but the second technique requires the situation be held as still as possible in order to obtain accurate rankings.
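Cronbach's contrast between the two notions of "error" can be made concrete with a small simulated data set (wholly hypothetical numbers, not Cronbach's). The same scores are looked at twice: once from the experimenter's standpoint, for whom only the between-treatment difference in means matters and within-treatment spread is "error variance," and once from the correlator's standpoint, who holds the treatment fixed and asks how well preexisting aptitude predicts outcome.

    import numpy as np

    rng = np.random.default_rng(1)

    n = 100                                 # persons per treatment (hypothetical)
    aptitude = rng.normal(50, 10, (2, n))   # preexisting individual differences

    # Hypothetical outcome: a treatment effect plus an aptitude effect plus noise.
    treatment_effect = np.array([[0.0], [5.0]])
    outcome = treatment_effect + 0.4 * aptitude + rng.normal(0, 3, (2, n))

    # Experimenter's view: compare treatment means; the spread within each
    # treatment (largely produced by individual differences) is written off
    # as error variance.
    between_means = outcome.mean(axis=1)
    within_sds = outcome.std(axis=1)
    print("treatment means:", np.round(between_means, 1),
          " within-treatment SDs:", np.round(within_sds, 1))

    # Correlator's view: hold the treatment fixed (first row only) and ask how
    # well aptitude predicts outcome; treatment variation would now be noise.
    r = np.corrcoef(aptitude[0], outcome[0])[0, 1]
    print("aptitude-outcome correlation within treatment A:", round(r, 2))

The point of the sketch is only that the very same within-treatment variation is "error" for one technique and the object of study for the other.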

These somewhat contradictory stipulations constitute the "empirical" boundary conditions upon which the internal logic of each of these techniques depends. But with regard to their respective actual-historical versus proper application in psychology, there are also somewhat broader "theoretical" boundary conditions to consider. As a first step in outlining these respective theoretical boundaries, Cronbach highlights the empirical utility to date of adopting factor analysis in both experimental and individual differences research. Here he employs a colorful metaphor to emphasize how that statistical tool is used by each technique, and he also starts to hint at what he views as a potential theoretical upside of factor analysis:

"Factor analysis is rapidly being perfected into a rigorous method of clarifying multivariate relationships. Fisher made the experimentalist an expert puppeteer, able to keep untangled the strands to half-a-dozen independent variables. The correlational psychologist is a mere observer of a play where Nature pulls a thousand strings, but his multivariate [computations] make him equally an expert, an expert in figuring out where to look for the hidden strings.

His sophistication in data analysis has not been matched by sophistication in theory. The correlational psychologist was led into temptation by his own success, losing himself first in practical prediction, then in a narcissistic program of studying his tests as an end in themselves. A naive operationism enthroned theory of test performance in the place of theory of mental processes. And premature enthusiasm exalted a few measurements chosen almost by accident from the tester's stock as the ruling forces of the mental universe" (Cronbach, 1957, p. 675).

Someone once said, "There are three kinds of people in the [mental] universe: Those who can count and those who can't." Lee J. Cronbach (1916-2001) was certainly someone who could count better than I can. Yet, in addition to his ATI argument -which, as outlined below, was a progressive disciplinary position to take at the time- he also makes various statements which suggest that he views the adoption of multivariate techniques themselves as a way to firm up our theories in both experimental and individual differences research. But whether we are talking about experimental manipulations, individual differences research, or a combined ATI version of both, I still can't see such statistical techniques as anything other than a "rough data grouping tool" to be utilized in the early stages (though not the initial stage) of empirical investigation. So, together, let's look at these ATI and theory improvement features of his argument to see if they add up or not.

In order to set the stage for the ATI and theory improvement features of his argument Cronbach first highlights the past foibles and current assets of both empirical subdisciplines. Individual differences researchers, being absorbed in the "complexities" of their own test application endeavors initially felt no urgent need to offer up "theories to organize the facts" they were collecting and even showed a "tendency" to ignore experimental findings collected elsewhere in the discipline. The failure of mere test "criterion validation" on its own to evaluate claims that a given test measured a given "psychological trait or state" (e.g., anxiety), however, eventually necessitated their acknowledgment of Meehl's emphasis on "construct validity" which advocated the use of experimental investigations as one of the proper means of validating tests. The "most valuable trading good" of correlational research, though, is its "multivariate conception of the world," and its related "discovery" that the "simultaneous consideration of many criteria is needed for a satisfactory evaluation of performance" (pp. 675-677). Conversely, since the typical experimental design of the era was still utilizing a single dependent variable (a single measure of response), "theoretical progress" was being "obstructed" and the "experimenter has no systematic way to classify and integrate results from different tasks or different reinforcers" beyond "simple inspection" or the "creative flair" of the theorist. But the "multivariate techniques of psychometrics," Cronbach suggests, "are suited for precisely this task of grouping complex events into homogeneous classes or organizing them along major dimensions" (p. 677).
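A bare-bones illustration of what such borrowing might look like is given below. It is offered as a present-day sketch with invented data, not as anything Cronbach computed: the correlation matrix among several experimental response measures is decomposed into its leading principal axes, which here stand in (in only the loosest sense) for the psychometrician's factor-analytic "major dimensions" along which the measures group.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical data: 200 subjects, five response measures generated by two
    # latent dimensions (call them "speed" and "persistence" for illustration).
    speed = rng.normal(size=200)
    persistence = rng.normal(size=200)
    measures = np.column_stack([
        speed + 0.3 * rng.normal(size=200),        # reaction-time task A
        speed + 0.3 * rng.normal(size=200),        # reaction-time task B
        persistence + 0.3 * rng.normal(size=200),  # trials to extinction
        persistence + 0.3 * rng.normal(size=200),  # time on task
        0.7 * speed + 0.7 * persistence + 0.3 * rng.normal(size=200),  # mixed measure
    ])

    # Principal axes of the correlation matrix: a crude stand-in for factor analysis.
    corr = np.corrcoef(measures, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)        # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]

    print("proportion of variance on the two leading axes:",
          np.round(eigvals[order][:2] / corr.shape[0], 2))
    print("loadings of the five measures on those axes:\n",
          np.round(eigvecs[:, order[:2]], 2))

Whether such a formal grouping of measures amounts to anything more than a descriptive convenience is, of course, exactly the question taken up below.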

Testing method --> Psychological Construct and its theoretical network <-- Experimental method

The double-barreled empirical approach being advocated by Cronbach (1957) is one in which the testing and experimental communities work together to "converge" upon a given psychological "construct" and its "theoretical network" (which generates predictions about observations). Furthermore, the "potential contributions" of each respective empirical subdiscipline will only be realized in practice if such a twofold research goal is explicitly recognized and this requires some fundamental readjustments of previous positions in each research community.

First of all, E.A. Fleishman's work on changes in motor skills "as a function of practice," as well as the views of G.A. Ferguson (1954, 1956), forces upon us "a theory which treats abilities as a product of learning, and a theory of learning in which previously acquired abilities play a major role" (p. 676).

"We may expect the test literature of the future to be far less saturated with correlations of tests with psychologically enigmatic criteria, and far richer in studies which define test variables by their responsiveness to practice at different ages, to drugs, to altered instructions, and other experimentally manipulated variables" (Cronbach, 1957, p. 676).

Similarly, experimental researchers must take note of Egon Brunswik's (1955, 1956) critique of their past "ad hoc" (improvised) selection and classification of treatment variables. Rather than proposing informal or idiosyncratic groupings of such treatments as an analytical afterthought, experimenters should utilize a representative "sampling" procedure to select the treatment situations to be manipulated and, having done so, then seek out the "organization" residing among their effects in a more formalized manner too. Cronbach's reasoning in this regard is that since "factor analysis" substitutes "formal" (logical-mathematical) methods for "intuitive methods" of data grouping, and since it has been "of great help in locating constructs" within the ability testing domain, it can be "expected" that the adoption of "multivariate [analysis] of response measures" would serve a similar function in experimental psychology (pp. 676-677).

To illustrate this point, Cronbach provides a correlational reanalysis of an experimental study which manipulated four "stress conditions" -mental arithmetic, letter association test, hyperventilation, and a cold water immersion "pressor"- with human subjects (Wenger, Clemens, & Engel, 1957). Such a reanalysis, Cronbach suggests, would group those treatments which have similar physiological "effects" and "permit us to locate" each treatment within a "continuous multidimensional structure having constructs [like motivation to complete the task perhaps] as reference axes" (p. 677). These statistical techniques of abstracting further information from the collected data are eminently preferable, he suggests, to merely attempting to describe or classify such stressors "superficially, by inspection," because they can yield surprising findings: "According to these data, a mental test seems to induce the same physiological state as plunging one's foot into ice water!" (p. 678).
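The kind of reanalysis Cronbach gestures at can be sketched as follows, with wholly invented numbers standing in for the Wenger, Clemens, & Engel measurements: each stress condition is represented by its profile of average effects across several physiological indicators, and the correlations among those profiles show which treatments pattern alike physiologically.

    import numpy as np

    # Hypothetical mean effects (standardized change scores) of four stressors on
    # four physiological indicators: heart rate, skin conductance, respiration,
    # and blood pressure.  The numbers are illustrative only.
    effects = np.array([
        [0.8, 0.9, 0.2, 0.7],   # mental arithmetic
        [0.7, 0.8, 0.3, 0.6],   # letter association test
        [0.1, 0.2, 1.2, 0.3],   # hyperventilation
        [0.9, 0.8, 0.1, 0.8],   # cold water "pressor"
    ])
    labels = ["arithmetic", "letters", "hypervent.", "cold pressor"]

    # Correlate the treatments' effect profiles (rows) with one another; highly
    # correlated rows are treatments that "induce the same physiological state."
    profile_corr = np.corrcoef(effects)
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            print(f"{labels[i]:>12} vs {labels[j]:<12} r = {profile_corr[i, j]:+.2f}")

With numbers chosen this way, the mental tasks and the cold pressor show highly similar profiles while hyperventilation stands apart, which is the sort of "surprising finding" Cronbach has in mind.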

Can such multifactorial statistical techniques alone actually help the investigator "carve nature better at its joints," as Cronbach seems to be suggesting? Being linear-mechanical as well as quantitatively additive (or multiplicative), they may -when used properly- be suggestive or even somewhat descriptive of the qualitatively developing and ontologically embedded psychological processes toward which they are aimed, but they do not actually match up very well with (correspond to) those processes. For our own delimited empirical-descriptive purposes, these techniques help us select out abstract groupings of variables (orthogonal factors) from a data set with a "minimum of redundancy," but nature in its concrete form is highly redundant. Conversely, although there is a "continuous structure" of quantity in nature, it also contains qualitative, discontinuous structure. Formal logic, on its own, has no way of resolving these objective contradictions of nature or human nature.

So, while I agree with Cronbach that such statistical techniques can be utilized as a rough data grouping tool, I would also caution that, when it comes to producing "theories" -the task of ordering those empirically notable groups of variables into a developmental sequence as well as into an emergent ontological hierarchy of their causal relevance to the psychological process under study- we need to move well beyond the formal logical confines of these tools.

Cronbach's high opinion of multifactorial techniques, however, turns out to be rather central to his particular (1957) definition of how the "conflicting principles of the tester and the experimenter can be fused into a new and integrated applied [ATI] psychology" (p. 678). His appeal to them as analytical tools by which we can firm up our theoretical constructs about psychological processes leads me to suspect that mental development is being conceived here as merely a continuous change in the amount of some "indicative" quantity (to use Stevens' 1951b term) rather than as the qualitatively shifting transformative process we already know it to be (see Section 4). As we proceed, let's be careful to consider whether Cronbach ever diverges sufficiently from this apparently quantitative portrayal of mental development in his respective (1957, 1975) accounts of the ATI approach.

In the next part of his 1957 article, Cronbach considers the testing versus experiment "schism" in applied psychology. He traces their conflicting conservative versus liberal recommendations about "social betterment" back to their respective historical roots in 19th century Social or Mental Darwinism:

"The program of applied experimental psychology is to modify treatments so as to obtain the highest average performance when all persons are treated alike -- a search, that is, for "the one best way." The program of applied correlational psychology is to raise average performance by treating persons differently -- different job assignments, different therapies, different disciplinary methods. The correlationist is utterly antagonistic to a doctrine of "the one best way," whether it be the heartless robot-making of Frederick Taylor or a doctrinaire permissiveness which tries to give identical encouragement to every individual. The ideal of the engineering psychologist, I am told, is to simplify jobs so that every individual in the working population will be able to perform them satisfactorily, i.e., so that differentiation of treatment will be unnecessary....

To understand the present conflict in purposes we must look again at historical antecedents. Pastore [1949] argues... that the testers and classifiers have been political conservatives, while those who try to find the best common treatment for all --particularly in education-- have been the liberals. This essential conservatism of personnel psychology traces back to the days of Darwin and Spencer.

The theory of evolution inspired two antagonistic movements in social thought.... [p. 679].... To Spencer, to Galton, and to their successors down to the present day, the successful are those who have the greatest adjustive capacity. The psychologist's job, in this tradition, is to facilitate or anticipate natural selection. He seeks only to reduce its cruelty and wastage by predicting who will survive in schools and other institutions as they are. He takes the system for granted and tries to identify who will fit into it. His devices have a conservative influence because they identify persons who will succeed in the existing institution. By reducing failures, they remove a challenge which might otherwise force the institution to change.

The experimental scientist inherits an interpretation of evolution associated with the names of Ward, James, and Dewey. For them, man's progress rests on his intelligence; the great struggle for survival is a struggle against environment, not against competitors. Intelligent man must reshape his environment, not merely conform to it. This spirit, the very antithesis of Spencerian laissez-faire, bred today's experimental social science which accepts no institution and no tradition as sacred. The individual is seen as inherently self-directing and creative. One can not hope to predict how he will meet his problems, and applied differential psychology is therefore pointless.
Thus we come to have one psychology which accepts the institution, its treatment, and its criterion and finds men to fit the institution's needs. The other psychology takes man --generalized man-- as given and challenges any institution which does not conform to the measure of this standard man" (Cronbach, 1957, pp. 678-679).

A "clearer view" of evolution, he suggests, removes the apparent "paradox" between these antagonistic positions. In Dewey's (1903) argument that "evolutionary theory" necessitates we recognize every organ, bodily structure, or mental function as an "instrument of adjustment or adaptation" to "particular" and "specific" situations, Cronbach believes he has found an adequate organism-environment "interaction" rationale for keeping the job description of a "joint" applied psychology on track:

"We are not on the right track when we conceive of adjustment or adjustive capacity in the abstract. It is always a capacity to respond to a particular treatment. The organism which adapts well under one condition would not survive under another. If for each environment there is a best organism, for every organism there is a best environment. The job of applied psychology is to improve decisions about people. The greatest social benefit will come from applied psychology if we can find for each individual the treatment to which he can most easily adapt. This calls for the joint application of experimental and correlational methods" (Cronbach, 1957, p. 679).

The historiographic hitch here is that the interactionist view of mental evolution (along with its rectangular mental capacity metaphor) had already failed the functional psychologists (including Angell, Carr, and Woodworth) when it came to dealing adequately with the transitional continuity-discontinuity aspects of human mental development in general as well as with the nature-nurture controversy in particular. It does not, on its own, provide a "clear" enough view of the mental evolution of "people" in those important respects. Mental adjustment and adaptation just don't "cut it" as explanatory concepts when it comes to human beings, and any psychology built around them alone is bound to be artificial and ultimately unsatisfactory.

Given the central role of the interactionist view of mental "adjustive capacity" in Cronbach's ATI approach, it should not be surprising that we will now find the same conceptual limitations reappearing in an albeit more empirically sophisticated manner. Our task in considering the ATI approach is to appreciate the empirical-procedural points it made but also to bear in mind that some further notion of culture, and of the orderly -characteristically human- pick-up of that historically situated culture, is necessary to keep psychology on the "right track."

As an empirical-procedural tool, the explicit intent of the ATI approach is to improve the "practical predictions" and applied policy "decisions" made by psychologists. Psychological data, for example, were now being utilized to "help a college" select and train up students to become scientists. Initial student intake decisions were being carried out by utilizing individual differences techniques (such as the SAT and GRE) in the Search for Talent tradition of that overzealous Cold War era (see Chapter 6 of Ballantyne, 2002), and experimental techniques were typically being used to compare the effectiveness of various educational training programs (treatments). Yet there still seemed to be a radical disconnect between the actual assumptions, as well as the resulting institutional proposals, made by these two subdisciplines in this applied context. Hence, Cronbach's concern is to show how a combined "Aptitude x Treatment interaction" approach might serve to simultaneously "maximize" the selection-based investment "payoff" for the institution as well as the treatment-based (failure reduction, life achievement) "outcome" for every student:

"The aim of any decision maker is to maximize expected payoff [e.g., student graduations, or subsequent achievement in science].... The experimentalist assumes a fixed population and hunts for the treatment with the highest average [e.g., student graduations] and the least variability. The correlationist assumes a fixed treatment [p. 680] and hunts for [preexisting student] aptitudes which maximize the slope of the payoff function [e.g. graduation rates]. In academic selection, he advises admission of students with high scores on a relevant aptitude and thus raises payoff for the institution (Figure 5).

Pure selection, however, almost never occurs. The college aptitude test may seem to be intended for a selection decision; and, insofar as the individual college is concerned only with those it accepts, the conventional validity coefficient does indicate the best test. But from a societal point of view, the rejects will also go on into other social institutions, and their profit from this treatment must be weighed in the balance along with the profit or social contribution from the ones who enter college. Every decision is really a choice between treatments. Predicting outcome has no social value unless the psychologist or the subject himself can use the information to make better choices of treatment. The prediction must help to determine a treatment for every individual.

....

Assigning everyone to the treatment with the highest average, as the experimentalist tends to recommend, is rarely the best decision. In Figure 8, Treatment C has the best average, and we might assign everyone to it. The outcome [in overall graduations] is greater, however, if we assign some persons to each treatment. The psychologist making an experimental comparison arrives at the wrong conclusion if he ignores the aptitude variable and recommends C as a standard treatment.

Applied psychologists should deal with treatments and persons simultaneously. Treatments are characterized by many dimensions; so are persons. The two sets of dimensions together determine a [overall] payoff surface. For any practical problem, there is some best group of treatments to use and some best allocation of persons to treatments. We can expect some attributes of persons to have strong interactions with treatment variables. These attributes have far greater practical importance [for prediction of life success] than the attributes which have little or no interaction. In dividing pupils between college preparatory and non-college studies, for example, a general intelligence test is probably the wrong thing to use. This test, [p. 681] being general, predicts success in all subjects [for a limited domain of academic tasks and], therefore tends [has been designed] to have little interaction with treatment, and if so is not the best guide to differential treatment. We require a [wider, broader,] measure of aptitude which predicts who will learn better from one curriculum than from the other; but this aptitude remains to be discovered. Ultimately we should design treatments, not to fit the average person, but to fit [specific] groups of students with particular aptitude patterns. Conversely, we should seek out the aptitudes which correspond to (interact with) modifiable aspects of the treatment" (Cronbach, 1957, pp. 679-681).

Even though the above example, if read in isolation, could be misinterpreted as a call for a psychometrically guided educational system, the actual intent is to indicate that the practicalities of carrying out applied research in that area require the joint application of the two subdisciplines. The further example Cronbach utilizes while outlining his ATI approach is highly illustrative of this point:

"The studies showing interaction between personality and conditions of learning have burgeoned in the past few years.... Wolfgang Böhm at Vienna.... showed his experimental groups a sound film about the adventures of a small boy and his toy elephant at the zoo. At each age level, a matched control group read a verbatim text of the sound track. The differences in average comprehension between the audiovisual and the text presentations were trivial. There was, however, a marked interaction. For some reason yet unexplained, a general mental test correlated only .30 with text learning, but it predicted film learning with an average correlation of .77. The difference was consistent at all ages.

Such findings as this, when replicated and explained, will carry us into an educational psychology which measures readiness for different types of teaching and which invents teaching methods to fit different types of readiness. In general, unless one treatment is clearly best for everyone, treatments should be differentiated in such a way as to maximize their interaction with aptitude variables. Conversely, persons should be allocated on the basis of those aptitudes which have the greatest interaction with treatment variables. I believe we will find these aptitudes to be quite unlike our present aptitude measures chosen to predict differences within highly correlated treatments" (Cronbach, 1957, p. 681).
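Before turning to Cronbach's own caveats, it may help to make the quantitative logic of the two passages just quoted concrete. The following minimal Python sketch uses entirely hypothetical numbers (it reproduces neither Cronbach's figures nor Böhm's data); it simply shows that when two treatments have payoff functions with different aptitude slopes, allocating each person to the treatment predicted to suit him yields a higher overall payoff than assigning everyone to the treatment with the best average.

# Illustrative sketch only: simulated, hypothetical data; not Cronbach's (1957) figures or Böhm's results.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
aptitude = rng.normal(0, 1, n)                  # standardized aptitude scores (hypothetical)

# Hypothetical payoff functions: Treatment C has the higher average payoff,
# Treatment D has the steeper aptitude slope (a crossing interaction).
payoff_C = 0.60 + 0.10 * aptitude + rng.normal(0, 0.1, n)
payoff_D = 0.50 + 0.40 * aptitude + rng.normal(0, 0.1, n)

print("Mean payoff, everyone assigned to C:", round(payoff_C.mean(), 3))
print("Mean payoff, everyone assigned to D:", round(payoff_D.mean(), 3))

# ATI-style allocation: send each person to the treatment with the higher
# predicted payoff for his aptitude (using the linear rules assumed above).
predicted_C = 0.60 + 0.10 * aptitude
predicted_D = 0.50 + 0.40 * aptitude
allocated_payoff = np.where(predicted_D > predicted_C, payoff_D, payoff_C)
print("Mean payoff, allocation by aptitude:", round(allocated_payoff.mean(), 3))

The crossing of the two predicted payoff lines is the "interaction" Cronbach is after; if the slopes were equal there would be no interaction, and the experimentalist's blanket recommendation of the best-average treatment would indeed be the best decision.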

Cronbach's willingness to acknowledge the contemporary limits of empirical or theoretical knowledge (regarding which kinds of "readiness for different types of teaching" to study, as well as what the "relevant" aptitude measures used might "be like") provides a refreshing contrast to other psychometric writings of that era -which I have described elsewhere as constituting a disingenuous "Cold War confidence game" designed to sell testing technologies to all levels of the American educational system (Ballantyne, 2002).

Cronbach's candor is also carried over into the final "shape of a united discipline" part of his 1957 article where he states: "Clearly, we have much to learn about the most suitable way to develop a united theory, but we have no lack of exciting possibilities" (p. 682). Among the most notable possibilities are Jean Piaget's The child's conception of causality (1930) and Psychology of Intelligence (1950) which, together with Harry Harlow's "The formation of learning sets" (1949), might "ultimately... unite the psychology of intelligence with the psychology of learning" (p. 682). Cronbach also mentions R.B. Cattell's Factor Analysis (1952) and even puts it in its proper place -i.e., "as only one of many choices... which modern statistics offers" us to "organize data" about treatment and organism simultaneously.

Even though, in this particular instance, Cronbach is more reserved with respect to the disciplinary role of these statistical tools, his high opinion of them still colors his portrayal and assessment of other approaches considerably, as well as his (1957) proposals regarding how a united discipline is to proceed. For instance, to portray Piaget as someone who "correlated" reasoning processes "with age" in order to "discover" a developmental sequence of schemata "whose emergence permits operational thought" is to misrepresent a productive rationalist approach to the study of psychological processes and to imply that its richness might be retained by (or somehow translated into) a variable psychology approach. Parenthetically, Charles J. Brainerd attempted just such a translation of Piagetian conservation tasks into variable psychology during the 1970s.

It is in this final part of the article, too, that the old positivist fact-theory confusion crops up in Cronbach's argumentation. In its updated though still disappointing convergent operationist form, however, it is more properly called a methods-theory confusion. In any case, this confusion can best be seen while Cronbach is (quite rightly) demurring from Woodworth & Schlosberg's (1954) "S-A-R" unit of psychological analysis and attempting to suggest his own so-called "theory" as follows:

"Woodworth once described psychological laws in terms of the S-O-R formula which specifically recognizes the individual [organism]. The revised version of his Experimental Psychology [1954], however, advocates an S-A-R formula, where A stands for "antecedent conditions." This formulation, which is generally congenial to experimenters, reduces the present state of the organism to an intervening variable.

....

The theory psychology really requires is a redundant network like Figure 11. This network permits us to predict from the past experience or present characteristics of the organism, or a combination of the two, depending on what is known. Filling in such a network is clearly a task for the joint efforts of experimental and correlational psychology" (Cronbach, 1957, pp. 682-3).

The terminology being used here by Cronbach is so loose and inconsistent that it might be hard to tell what he is getting at. However, let's attempt to make two corrective points regarding it anyway.

First of all, Woodworth's S-O-R "formula" was never actually a "description of psychological laws," nor even a "theory," but rather an ontological statement regarding the appropriate "dynamic unit of analysis" toward which the various empirical and rational tools of the discipline were to be aimed. For Woodworth (1934, 1938) the empirical tools (including experimentation and correlation) were to work together with the wider rational investigatory tools (including introspection and developmental observation) to produce descriptive observational laws and eventually explanatory theories of psychological processes. These investigatory tools of the trade were the means by which we gather the kinds of empirical and theoretical knowledge that enable us to "discover" the general and more specific laws of psychological processes. Despite our various quibbles with details of Woodworth's approach, he should be recognized as someone who at least stayed within the context of a Standard view of Science. The same cannot be said of Cronbach in particular, nor of the convergent operationist movement in general.

Secondly, Cronbach's "theory" is merely a procedural outline or flow diagram for carrying out empirical ATI research and not a theory at all. Although this point might seem obvious to the present reader, it was far from obvious to Cronbach's contemporaries, who went on to produce a host of so-called cognitive attribution "theories" or intuitive statistics "models" according to the implied neopositivist empirical "tools to theory" principle. As Gerd Gigerenzer (1991) put it:

"After the institutionalization of inferential statistics [roughly 1954 onward (see Sterling, 1959; Edgington, 1974, Rucci, & Tweney, 1980)], a broad range of cognitive processes, conscious and unconscious, elementary and complex, were reinterpreted as involving "intuitive statistics." ... Tanner and Swets (1954) assumed in their theory of signal delectability that the mind "decides" whether there is a stimulus or only noise, just as a statistician of the Neyman-Pearson school decides between two hypotheses. In his causal attribution theory, Harold H. Kelley (1967) postulated that the mind attributes a cause to an effect in the same way as behavioral scientists have come to do, namely by performing an ANOVA and testing null hypotheses..." (Gigerenzer, 1991, p. 255).

In the follow-up article "Beyond the Two Disciplines of Scientific Psychology" (1975), Cronbach opens with an obligatory, celebratory comment that the ATI "hybrid discipline is now flourishing" but defers any "comprehensive review" of its applications in "learning and motivation" to the upcoming Cronbach & Snow handbook, Aptitudes and instructional methods (1977). Instead, Cronbach (1975) highlights the growing awareness -even amongst disciplinary conservatives- that the experimental, individual differences, and combined ATI research varieties of American General psychology had not made sufficient "progress" in achieving the kinds of "theoretical" generality formerly expected of them:

"Some... years ago, research in psychology became dedicated to the quest for nomothetic theory... Model building and hypothesis testing become the ruling ideal, and [empirical] research problems were increasingly chosen to fit that mode. Taking stock today I think most of us judge theoretical progress to have been disappointing. Many are uneasy with the intellectual [abstract] style of psychological research... Here I shall cut short my comments on ATIs as such, in order to join in that discussion. I shall express some pessimism about our predominant norms and strategies and offer tentative thoughts about an alternative [concrete] style of work" (Cronbach, 1975, p. 116).

So, after providing a procedural example of ATI assessment -comparing class grade outcomes under conditions in which the instructor's teaching style either matched or was discordant with student Ai (Achievement via Independence) versus Ac (Achievement via Conformance) individual difference scores- Cronbach then turns to other, wider disciplinary issues. We will concentrate on three of these latter aspects of the 1975 article because they are rather indicative of the methodological (assumptive-practical) strengths and weaknesses of the ATI approach as a movement.

In our first selection from Cronbach (1975), he makes a few key retrospective and corrective comments on the assumptions made in Cronbach & Meehl (1955) with regard to how they bear on various "open" or "closed" systems of psychological subject matter:

".... The half-life of an empirical proposition may be great or small. The more open a system, the shorter the half-life of relations within it are likely to be.

This puts construct validation (Cronbach, 1971; Cronbach & Meehl, 1955) in a new light. Because Meehl and I were importing into psychology a rationale [from] physical science, we spoke as if a fixed reality is to be accounted for.... Propositions describing atoms and electrons have a long half-life, and the physical theorist can regard the processes in his world as steady. Rarely is a social or behavioral phenomenon isolated enough to have this steady-process property. Hence the explanations we live by will perhaps always remain partial, ... distant from real events..., and rather short lived....

Our troubles do not arise because human events are in principle unlawful; man and his creations are part of the natural world. The trouble, ... is that we cannot store up generalizations and constructs for ultimate assembly into a [long-lasting theoretical] network. It is as if we need a gross of dry cells to power an engine and could only make one a month. The energy would leak out of the first cells before we had half the battery completed. So it is with the potency of our [statistical] generalizations. If the effect of a treatment changes over a few decades, that inconsistency is an effect, a Treatment X Decade interaction that must itself be regulated by whatever laws there be. Such [changes] frustrate any would-be theorist who mixes data from several decades indiscriminately into the phenomenal picture he tries to explain.

The obvious example of success in coming to explanatory grips with interactions involving time is evolutionary theory in biology... Darwin considered observations on species against the background of ecologies... in Galapagos as only the latest snapshot of an ever-changing ecology. The positivistic strategy of fixing conditions in order to reach strong generalizations... fits with [a different view] that processes are steady and can be fragmented into nearly independent systems. Psychologists toward the physiological end of our investigative range probably can live with that as their principal strategy. Those of us toward the social end of the range cannot" (Cronbach, 1975, p. 123).

Although this passage is indicative of a subtle shift (since 1955) in the methodological nuance of Cronbach's views away from an ahistorical "physicalist" approach to the validation of so-called psychological constructs, it also indicates that his ATI approach will continue to emphasize mathematical interactions rather than the discovery of the developmental aspects of psychological processes. While human nature is "lawful," Treatment x Decade "change" is considered by him to constitute a logical confound or "paradox" for our scientific observations, which are themselves portrayed as a succession of static empirical-mathematical snapshots. Ironically, Cronbach seems to miss the point that even though Darwin's observations were "snapshots" of an ongoing ever-changing process, the enduring explanatory usefulness of organic evolutionary theory is its recognition of the internal objective contradictory (continuity-discontinuity) aspects of the transmutation of one species into another. In the final analysis then, Cronbach's corrective comments do not resolve the existing methodological standoff between the lower and upper limits of psychological inquiry, but leave it intact.

Our second, more successful, selection from Cronbach (1975) is intended to call into question the typically assumed boundaries and appropriate starting points for theory and observation respectively. Here Cronbach highlights a rather telling and problematic disciplinary dichotomy between concrete empirical "Interpretation in Context" and abstract statistical "Generalization," and even comes close to proposing a rough solution to it.

"Social science has been dedicate to formal testing of nomothetic propositions. Given the difficulties [changing] interactions create... what might a better strategy be?....

.... Advocates of "theory" mean many things. I am as prepared as anyone to endorse the value of such model building as we see... Jack Atkinson [(1974)] doing..... But a point of view [regarding the data at hand] is not a theory, capable of sharp predictions to new conditions.

....When the system of interest cannot be constrained to fit a limited model, the function of [empirical] research... is primarily to identify pertinent variables and to suggest possible [ways] to study [them] in more natural situations....

....

.... Originally, the psychologist saw his role as the scientific observation of human behavior. When hypothesis testing became paramount, observation was neglected, and even actively discouraged by editorial policies of journals. Some authors now report nothing save F ratios.... Let the author [now] file descriptive information, at least in an archive, instead of reporting only those selected differences and correlations that are nominally "greater than chance." Descriptions encourage us to think constructively about results..., whereas the dichotomy significant/nonsignificant implies only a hopeless inconsistency.

....There are more things in heaven and earth than are dreamt of in our hypotheses, and our observations should be open to them.... The [typical] theorist performs a dramatist's function; if a plot with a few characters will tell the story, it is more satisfying than one with a crowded stage. But the [empirical] observer should be a journalist, not a dramatist. To suppress a variation that might not recur is bad observing.

Correlational research is distinguished from manipulative research in that it accepts the natural range of variables, instead of shaping conditions to represent a hypothesis. By sampling ... a domain of situations in the Brunswikian sense, one puts himself in a somewhat better position to generalize....

.... I am sure we can make better use of Brunswikian... analysis. But I believe that in past research the psychologist has been too willing to stop as soon as he has calculated the statistics stating the strength of the relationships he specified a priori. The experimenter or the correlational researcher can and should look within his data for local effects arising from uncontrolled conditions and intermediate responses... He can do so, of course, only if he collected adequate [descriptive] protocols from the start.

Instead of making [statistical] generalization the ruling consideration in our research, I suggest that we reverse our priorities. An observer collecting data in one particular situation is in a position to appraise a practice or proposition in that setting, observing effects in context.... As he goes from situation to situation, his first task is to describe and interpret the effect anew in each locale, perhaps taking into account factors unique to that locale... As results accumulate, a person who seeks understanding will do his best to trace how the uncontrolled factors could have caused local departures from the modal effect. That is, generalization [of this descriptive sort] comes late and the exception is taken as seriously as the rule...

When we give proper weight to local conditions any [statistical] generalization is a working hypothesis, not a conclusion. The personnel tester, for example, long ago discovered the hazard in generalizing about predictive validity, because test validity varies with the labor pool, the conditions of the job, and the criterion... Hence [they] are taught to collect local data before putting a selection scheme into operation, and periodically thereafter....

....

The two scientific disciplines, experimental control and systematic correlation, answer formal [hypothetical] questions stated in advance. Intensive local observation [, however, ]... goes beyond ... an open-eyed, open-minded appreciation of the surprises nature deposits in the investigative net. This kind of interpretation is historical more than [merely empirical]. I suspect that if the psychologist were to read more widely in history, ethnology, and the centuries of humanistic writings on man and society, he would be better prepared for this part of his work" (Cronbach, 1975, pp. 123-125).

Quite clearly, this is the most methodologically progressive segment of Cronbach's 1975 article. The above-noted qualification -regarding whether the intellectual tools at Cronbach's disposal will hold up or not- applies especially to this selected segment, for it is here that Cronbach has refashioned and extended those tools to their maximum utility.

In our third selection, however, Cronbach turns more directly to setting out what he views as the resulting implications of each of the above selections for "Realizable [theoretical] Aspirations" in "Social inquiry." Here we see that the sharpened intellectual tools Cronbach has on hand have limited durability and should ultimately be replaced; otherwise, the outlook for psychology as an explanatory discipline remains rather delimited and even bleak.

"Social scientists... and psychologists... have modeled their work on physical science aspiring to amass empirical generalizations, to restructure them into more general [observational] laws, and to weld these scattered laws into coherent theory. That lofty aspiration is far from realization.... Theorist are reminded from time to time that the person who states a principle must also state the boundary conditions that limit its application....

The forecast of Y from A, B, and C will be valid enough, if conditions D, E, F, etc. are held constant in establishing and in applying the law. It will be actuarially valid, valid on the average if it was established in a representative sample from a universe of situations, as long as the universe remains constant. When the universe changes, we have to go beyond our actuarial rule....

.... [A] college counselor might classify instructors as pressing for conformity or individuality, and might then advise the high-Ac student as to which course section to enroll in. But the [empirical] generalization gives no guarantee of his individual success because it ignores additional variables... Hence the [statistical] generalization ought not to control irreversible assignments... Short-run [ATI] empiricism is "response sensitive"...; one monitors responses to the treatment and adjusts it, instead of prescribing a fixed treatment on the basis of a [statistical] generalization from ... other persons or... other locales.

.... The ... constructs that we ... combine into a [general] view of man, his institutions, and his behavior.... will not necessarily progress from hazy vision to crude sketch to articulate [theoretical] blueprint. A general [theory] can be highly accurate only if it specifies interactive effects.... [some of which] will change in form, in a span of one or two generations.... [Thus, even though] our sketch of man may become more elaborate, it will remain a sketch.

....[Statistical technique] is what we uniquely add to the time-honored ways of studying man. Too narrow an identification with [physical] science, however, has fixed our eyes upon an inappropriate goal. The goal of our work... is not to amass [statistical] generalizations atop which an [enduring] theoretical tower can someday be erected... The special task of the social scientist in each generation is to pin down the contemporary facts. Beyond that, he shares with the humanistic scholar and the artist ... the effort... to realign the culture's view of man with present realities" (Cronbach, 1975, p. 126).

The ATI movement was a progressive disciplinary half-step beyond the former "two disciplines" tradition of stopping inquiry once one had produced either a statistically significant empirical-experimental potshot (an operationally defined measurement of a given psychological process like learning) on the one hand, or an abstract (ahistorical) statistical generalization based upon the application of some ability testing or IQ battery on the other.

The language, investigatory scope, and methodological basis of Cronbach's successive 1957 and 1975 analyses, however, remain artificially circumscribed in various ways. Cronbach (1957), for instance, points out that to conceive of mental development "in the abstract" is the wrong way to go and that such development must be understood in the concrete. But any interactionist view of mental development, whether in its original rectangular mental capacity (genes x environment) manifestation (Woodworth; and Arthur Jensen later on), or in its ATI versions (of aptitude x treatment, or aptitude x decade), is still an abstraction. No matter how concretely the empirical aspects of "adaptive or adjustive" capacity are laid out, these concepts alone are inadequate to encapsulate the fuller meaning or intent of Dewey's emphasis on the role of characteristically "human intellect" in "reshaping" the environment. Mere interactionist analysis, no matter how empirically detailed, will remain abstract in this respect.

In 1957, Cronbach proposes a rough procedural outline for capturing empirical-mathematical ATI interactions, but as a methodological basis for social psychological science, interactionism itself will only take us so far. By 1975, for example, Cronbach is making specific reference to successive "change" from one generation to the next, which is portrayed as gumming up the otherwise adequate ATI work of concrete statistical "description" of "local" effects (p. 125), but no adequate theory of mental development per se is put forward to account for that sort of "historically" embedded change. He is eventually forced to conclude that statistics can give successive static "snapshots" (p. 123) of each generation's performance, but that no enduring theories about human nature can ever be produced and that the effort to produce them is an "inappropriate" aspiration for social science (p. 126).

To avoid the intrinsic artificiality of the interactionist approach, one must recognize the notion of "appropriation" of culture (a.k.a., cultural evolution) and its qualitatively "transformative" effect on the mental development of human beings. This is the fuller, clearer, emergent view of mental evolution that is missing in Cronbach's analysis. In order to get beyond the merely descriptive level of measuring generalized "organisms" in context, and move the discipline toward an explanatory level of theory regarding specifically human psychological processes, we need to adopt such a transformative view of mental development. As Cronbach (1975) has indicated, and as other disciplinary figures like Piaget, Vygotsky, Luria, or Leontiev have shown, this particular kind of "concrete" scientific analysis of psychological processes will be "sociohistorical" as well as phylogenetic and ontogenetic (individual).

Variable psychology and the anthropology of the abstract individual

Even though the Stevens version of operationism came under immediate critical attack and essentially self-destructed as a stand-alone movement by 1954, appeals to construct validity in individual differences research or to convergent operationism in experimental research were quickly put forward in its place. In turn, the well-intentioned efforts of Cronbach (1957) to highlight the divergent procedural requirements of experimental versus correlational research and to combine these into a "single method" of descriptive empirical psychology served (along with the older traditions of Tolman or Woodworth) to promote an unofficially adopted "combined" variable model of research which became popular during the mid-1960s. Like its predecessors, however, this new rationale for empirical research came with its own rather exacting price. It set down delimited (descriptive rather than explanatory) "boundary conditions" for the kind of "theory" which such a combined discipline might "aspire" to produce. It is within this somewhat circumscribed, combined rationale of descriptive "variable psychology" that the selected diagrams from Munn et al. (1969) and Evans & Murdoff (1978) fall. It is also this variable psychology rationale which has remained with us up to the relative present as the disciplinary norm.

By way of drawing together our coverage of this large umbrella of varied disciplinary positions (from the 1930s on up to 1978), it should now be clear that they each contained a graded series of procedural-epistemological commitments to either anti-realism, dogmatic agnosticism, indirect realism, critical realism, or at best naive realism. Furthermore, it should also be becoming apparent that the American General psychology of even the later "modern" part of this period still lacked any firm grasp of how evolutionary theory applies to psychological subject matter. Like the earlier functionalist school (of Angell, Carr, and Woodworth) the best they ever came up with in that regard was a "socially" embedded "interactionist" view of mental development.

These methodological shortcomings played a large part in confining the analyses of mid-through-late 20th century variable psychology to issues of procedural validation, convergence of measurements, identification of attainable applied empirical research goals, or to maintaining a noncommittal textbook account of the admittedly conflicting biological, individual, or social-historical aspects of the discipline. The wider disciplinary issues -especially those regarding the "relevance" of the data being produced to everyday human existence, or of how the descriptive "empirical generalizations" being discovered "correspond" to psychological processes per se- were either ruled out of bounds due to their metaphysical content or were eventually dealt with in a manner that is discouraging to anyone seeking an explanatory-theoretical (rather than merely a descriptive-empirical) approach to any aspect of psychology.

Whether by intentional "scientistic" artifice, or by way of the more subtle combined influence of their own formal logical assumptions and the procedural requirements of their favored empirical research tools, a rather fundamental methodological question was being avoided by each of these successive empirical psychology rationales: What actually constitutes the ontological (evolutionary and developmental) basis for the proposed procedural-epistemological "convergence" of the varied empirical and rational research methods in psychology?

Cronbach is a wonderful exemplar of this point, and his views were carried forward en masse into the subsequent tradition of combined variable psychology. In Cronbach's writings, "scientific" psychology tended to be equated with the adoption of empirical method. As a consequence, his major concern was with how best to combine two particular empirical subdisciplines (which contain their own formal logical assumptions and requirements) into an applied ATI form of empirical research, rather than with finding ways to unify or utilize the best aspects of the empirical and rational methods of the whole discipline so that we might produce psychological knowledge that corresponds better with the nature of the development of the ontological processes under study. To a greater or lesser extent, all of the required aspects of a sound methodological-assumptive basis for a systematic convergence of empirical and rational research methods are missing from his 1957 and 1975 articles. Those required methodological aspects are as follows: (1) an unequivocal direct realist appeal to nature; (2) an emergent "theory of levels" approach to comparative mental evolution (including adjustment, adaptation, and human appropriation); and (3) a dialectical "transformative" (rather than a formal logical interactionist) understanding of mental development.

The main methodological object lesson we should take away with us from our consideration of the ATI movement is this: instead of merely seeking out situation x treatment "interactions" (defined quantitatively) we should be seeking out higher-order culturally embedded mental "transformations" as well. Unfortunately, there are but few indications as to how to proceed in accomplishing such a revised disciplinary goal in the ATI movement itself or in the combined variable model version of General psychology which followed.

Well into the 1990s, the day-to-day practice of empirical psychology was still governed by what has come to be called the combined "variable model" of empirical research (C.W. Tolman, 1994a). Under this empirical research tradition, the subject matter of psychology was conceived of as a universe of potentially measurable variables, the statistical relations among which form the basis for all the discipline's "scientific" propositions and lawful "generalizations." The disciplinary heyday of this variable psychology approach, however, can be pinned down historically to the 1970-1987 period, within which it could still be portrayed convincingly as the epitome of the only possible scientific psychology. The professional bias that "if one is not doing psychology in that way, one is not doing scientific psychology at all" still lingers among some segments of the discipline, but the practical, theoretical, and ideological outcomes of that tradition reveal this particular bias to be both unsound and even dangerous.

So, after making some preliminary comments on the respective merits of formal and dialectical logical assumptions for producing concrete empirical generalizations and for guiding comparative-developmental investigations of psychological processes, we will then consider a few pertinent variable model research examples from the 1970-1987 period in this respect. By pointing out that we are constantly forced to maneuver around or beyond the analytical confines of the variable model of research to answer the kinds of theoretical-developmental questions we really want answered, it is hoped that you might better appreciate what the net outcome of that tradition was and how we might possibly avoid or overcome its shortcomings in the future.

Statistical generalization and the Platonic-Aristotelian aspects of the variable model

As our consideration of Cronbach (1975) has already indicated, most of what passes itself off as empirical methodology in psychology is aimed at producing abstract statistical generalization. Cronbach suggested various remedies, but the practice of relying on statistical tools of analysis right at the beginning of our investigation of psychological processes is even more problematic than he ever let on or perhaps realized.

According to the procedure of the variable model of investigation, the first thing we do is avoid dealing with individual cases -i.e., with the concrete qualitative distinctness of the individual animals, human research subjects, or the particular ontologically graded level of the process under study. Instead, we assume that a quantitative continuity exists among such individual cases or processes and we take a "set of numerical scores" from the "research dimension" under study (first abstraction). Second, we take those scores and calculate a "descriptive statistical" mean, mode, and median (second abstraction). Then, in the hope of reaching some statistically significant "inferential" conclusions regarding a hypothesis we have set (presumably before collecting the data), we calculate the difference of that descriptive mean from another mean or set of representative means (third abstraction). Very soon thereafter we are so far into the realm of abstract statistical analysis and generalization that we no longer recognize the original data sources -the human beings, animals, or psychological processes- in the statistical knowledge we have acquired.
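For readers who have not walked through this procedure recently, here is a minimal sketch (in Python, with purely hypothetical scores) of the three abstractions just described; notice how quickly the individual cases drop out of view.

# Hypothetical scores on some "research dimension" for two groups (first abstraction:
# individuals are already reduced to points on a presumed quantitative continuum).
from statistics import mean, median, mode
from scipy import stats

group_a = [12, 15, 11, 14, 15, 13, 16, 15]
group_b = [10, 12, 9, 11, 13, 10, 12, 10]

# Second abstraction: the individuals are replaced by summary numbers.
print(mean(group_a), median(group_a), mode(group_a))
print(mean(group_b), median(group_b), mode(group_b))

# Third abstraction: the comparison is now between two means, not between people.
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.3f}")

From this point on, everything the researcher reports -the t value, the p value, and the "generalization" built upon them- refers to relations among the summary numbers, not to any of the original individuals.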

Cronbach (1975), in considering some of the aspects of this tradition, laments the "waste" of effort (and research funds) that is entailed in not having reached a statistically significant result. He suggests that it is time to "exorcise" (excise) null hypothesis testing from its central role in guiding the structure of research and from its status as a requirement in deciding which empirical studies are to be published.

"It is time to exorcise the null hypothesis. We cannot afford to pour costly data down the drain whenever effects present in the sample 'fail to reach significance.'.... Descriptions encourage us to think constructively about results....

The canon of [statistical] parsimony, misinterpreted, has led us into the habit of accepting Type II errors [(saying that a measured relationship is not significant when it is)] at every turn, for the sake of holding Type I errors [(saying that a measured relationship is significant when it is not)] in check. There are more things in heaven and earth than are dreamt of in our hypotheses, and our observations should be open to them..." (Cronbach, 1975, p. 124).

Similar (albeit brief) provisos are found in the typical psychological statistics textbook regarding the distinction between the "meaningfulness" (non-triviality) of obtained empirical results and their mere statistical significance per se. Relatedly, we might add, such texts have always duly raised the important distinction between obtained "correlations" and judgments of causality but have rarely (if ever) indicated how one might go about making such decisions in a principled fashion. Furthermore, from the early 1980s onward, successively more of these texts began introducing the concept of "power" and power estimates:

"Speaking in terms of Type II errors is a negative way of approaching the problem, since it keeps reminding us that we might make a mistake. The more positive approach is to speak in terms of power, which is defined as the probability of correctly rejecting a false [Null Hypothesis]...." (see Howell, 1985, pp. 165-181; After Cohen, 1977; Welkowitz, Ewen, & Cohen, 1982).

The intent of introducing power estimates into our psychometric toolbox was that we might somehow rest assured that statistical generalizations about "small populations" are easier (likely to be more accurate), while alternately -when attempting to make generalizations about large populations- increasing the "N" will likewise "increase" the statistical power. As an undergraduate I was not reassured by these claims in the least and have since found others who remain uneasy about the statistical cookbook approach to the issues of causality or relevance of research. Beyond the narrowly circumscribed statistical technique concerns covered therein lies a world of ontological and practical relevance concerns regarding the knowledge produced when carrying out such quantitatively guided investigations.
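To see what the technical claim about "N" amounts to, here is a rough simulation-based sketch of power as defined in the quotation above -the probability of correctly rejecting a false null hypothesis. All of the numbers are hypothetical; the only point being illustrated is that, for a fixed true effect, the estimated power climbs as the sample size grows.

# Rough Monte Carlo sketch of statistical power (hypothetical effect size and groups).
import numpy as np
from scipy import stats

def estimated_power(n, true_effect=0.5, alpha=0.05, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)            # no-effect group
        treated = rng.normal(true_effect, 1.0, n)    # group with a real (simulated) effect
        _, p = stats.ttest_ind(control, treated)
        if p < alpha:                                # the false null hypothesis is rejected
            rejections += 1
    return rejections / reps

for n in (10, 30, 100):
    print(f"n per group = {n:>3}, estimated power = {estimated_power(n):.2f}")

Nothing in such a calculation, of course, speaks to the ontological or practical relevance concerns just raised; it only quantifies how often a given sampling procedure will flag a presumed effect of a presumed size.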

As indicated by the "Crisis of Relevance" subsection below, the widespread flirtation with Platonism contained in the combined variable model of research procedure became a matter of disciplinary embarrassment. Like Plato, that tradition seems to assume the existence of real abstract universals which might be identified by way of mathematics and which presumably penetrate the observable (but merely apparent) individuality of our data sources. This Platonism, along with its routine psychometric combination with an Aristotelian notion of "lawfulness" -as that which is "exceptionless"- accounts for the fact that many of the so-called laws of experimental psychology (e.g., the Yerkes-Dobson law which is simply one version of the quantitative physiological balance view of motivation) strike the uninitiated as both intuitively obvious and as instrumentally (practicably) lacking because they do not shed much light on individual cases. They are merely overgeneralized statements of empirical regularities which have systematically relegated individuality (the actual societally contextualized human psychological experience or even species differences) into the "shadowy" realm of surface irrelevance and exceptional cases into the realm of "statistical outliers" (After C.W. Tolman, 1994 a & b).

It was the abstractness of the statistical generalizations it produced which led to calls for disciplinary alternatives to mid-century "scientistic" psychology. Such alternatives were aimed at somehow reconnecting our disciplinary efforts with individual experience, personal meaning, social context, and societal values. Relatedly -given that we now understand the Platonic and Positivist metaphysical basis of American General psychology- we can add that it was their underlying commitment to an ultimately unworkable indirect realist epistemology, rather than their overall attempt to obtain objective knowledge about psychological processes, that made variable psychology seem so methodologically vulnerable to various "anti-objectivist" positions during the 1960s-1990s. Having not been sufficiently resolved at that time, these aspects of the "Crisis" era continue to dog us up to the present.

Cronbach (1975), who is writing prior to the peak of the disciplinary crisis, suggests that psychologists should obtain some sort of wider "historical" grounding to augment or diminish the analytical impact of this unfortunate (abstracting) tendency of standardized empirical procedure. Cronbach's suggestion was certainly a progressive half-step in the right direction. But, having proposed the ameliorative remedy of his ATI approach (along with its "socially" embedded "interactionist" view of mental development), his position thereafter seems to be: Well, that's about it, we'll just have to live with these discrepancies indefinitely.

As we have already tried to indicate, the adoption of these abstract and indirect realist traditions of research was motivated by the new demands of the discipline-building and administrative (sorting) requirements of the early-to-mid-20th century rather than by any lack of results from the older Baconian-Jamesian tradition of "concrete" descriptive research and direct realism (see also Ballantyne, 2002 for a detailed elaboration of the "ability testing" subdiscipline in this regard). The actual historiographical and methodological dilemma we face is not that the discipline lacks isolated contributors who have produced the kinds of concrete knowledge we require, but that the distinction between "abstract and concrete" empirical generalization has been largely absent from the research training tools (statistical methods texts) we typically use and -as a consequence- they provide no grounds for taking the next step: the step of working out an explicit methodology for intentionally producing "concrete concepts" as a new starting point for empirical psychological research (After C.W. Tolman, 1994a).

The proposition being put forward here and now is that in order to overcome the procedural shortcomings of the typical stand-alone formal logical statistical techniques still utilized widely in the discipline, an analytical dialectical logic is required to guide our assumptions, investigations, and conclusions. By way of highlighting the potential utility of this somewhat neglected approach, we will gain not only a better understanding of what the proper subject matter of psychology itself is, but also a valuable rational-intellectual tool for recognizing, conducting, interpreting, and promoting relevant empirical research in psychology.

Contrast between variable model research and a materialist dialectics in psychology

One of the defining characteristics of materialist methodology throughout all the eras we have covered has been that its point of departure is the nature of the "thing" (object, process, or event) under study. The "dialectical" materialist takes this defining characteristic one step farther by suggesting that it is the nature of the development of the object, event, or process under study that dictates the methods of investigation.

I trust that this simply appeals to commonsense. If we are trying to discover the cause or treatment for a disease, what we do depends a lot on the nature of the disease organism, the nature of the diseased organism, as well as the respective developmental relations between those biological organisms. We cannot, for instance, take a course on the structural mechanics of bridge-building and expect to discover the causes of airborne microbiological diseases, because that requires another kind of investigatory method altogether.

This seems commonsensical, yet it is violated all the time in psychology. We have been trained up to assume, for instance, that analysis of variance (ANOVA) and other such multifactorial methods of quantitative data analysis are the ways to investigate all psychological topics. Yet, again, if you reflect on it a bit, there is a lot of discrepancy between the nature of the development of psychological processes and the statistical-mechanical assumptions of ANOVA.
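One concrete way that discrepancy can show up is worth sketching with purely hypothetical numbers: when individuals develop in systematically different directions, the occasion-by-occasion group means that ANOVA compares can remain perfectly flat, and the analysis reports that "nothing happened" even though every single individual changed.

# Hypothetical sketch: every individual changes across three occasions, but the
# group means stay flat, so a one-way ANOVA over occasions finds "no effect."
import numpy as np
from scipy import stats

increasers = np.array([[10, 12, 14]] * 5, dtype=float)   # 5 people gaining 2 points per occasion
decreasers = np.array([[14, 12, 10]] * 5, dtype=float)   # 5 people losing 2 points per occasion
scores = np.vstack([increasers, decreasers])             # rows = people, columns = occasions

print("Occasion means:", scores.mean(axis=0))            # identical means: the change is invisible
f, p = stats.f_oneway(scores[:, 0], scores[:, 1], scores[:, 2])
print(f"F = {f:.2f}, p = {p:.3f}")                        # F near zero: "nothing happened"

The statistical machinery is not at fault on its own terms; the point is that a tool built around comparing static group-level aggregates cannot, by itself, be trusted to reveal the developmental character of the processes being averaged over.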

In fact what has happened historically is that we have brought the ANOVA to the object, and our "understanding" of psychological subject matter has become shaped considerably by that statistical tool. Put simply, that understanding in some respects (not all respects) is clearly wrong. What distinguishes the classic 20th century dialectical psychologists (e.g., Piaget, Vygotsky, Luria, Leontiev) from their contemporaries is that they were going back to the object. What is appealing about Vygotsky is that here is someone who is really very open to the "processes" he is studying. That is rather refreshing in contrast to the past and current psychological literature, in which this attitude is still hardly ever encountered.

The three laws of dialectics, themselves, are simply concrete generalizations regarding what we can expect to find out about processes when we consider them in motion as opposed to considering them merely by way of a mechanical empirical procedure that takes static "snapshots" of that motion. They are general rules intended to guide thought while dealing with events in motion, change, and development. I've tried to illustrate up to now why this is important. In the history of North American "General psychology," our methods of investigation have been guided by a lot of principles which are undeniably static and mechanical. If we are not aware of that fact, and we continue to assume that these methods (ANOVA for instance) are the end-all and be-all of empirical procedure, we may fail to grasp the essentially dynamic nature of the processes we have chosen for investigation.

In our daily lives and in our professional interpersonal exchanges, most of us act as if we are not satisfied with the merely static, mechanistic view of the universe implied by the variable model of investigation. Instead, we act as though the dialectical view is true. So, what is at stake here is the struggle to make these implied beliefs explicit in the accepted procedural practices and standards of the discipline -to bring our empirical investigatory procedures and our theoretical conclusions back in line with our awareness of the "dynamic nature" of our subject matter.

Good theory, as we have argued throughout this work, tends to facilitate effective action. All we are doing with dialectics is moving away from a form of thinking and investigative procedure which tends to stand in the way of understanding processes in motion, toward one that grasps such motion -i.e., away from a procedure that (on its own) does not adequately conform to reality toward one that does. The intent here is to remove the existing disciplinary barriers to the understanding we already have about the motion, change, and development of psychological processes.

In terms of professional psychological practice, this all boils down to finding a way of somehow getting back to the nature of the development of the particular object or process under study. Both Woodworth (with S-O-R) and then Cronbach (with ATI) tried to do this but failed. The combined variable model of research that became popular thereafter, however, seems to have given up on that admirable project altogether. The IV-DV variable model (and especially ANOVA) as a guiding and dominant procedural technique has now reached such a hegemonic status in psychology that, when we are faced with any particular empirical psychological question, we immediately look to find what numerical variables we can identify and measure and how these variables can be organized in an orthogonal manner so that we can conduct a statistical analysis of variance. In other words, what tends to dictate our typical method is not the particular object or process per se, but rather some abstracted a priori notion of what constitutes proper statistical procedure.

The object lesson to note in this regard is that by emphasizing dialectical logic as a guiding principle, we are not talking about assuming some a priori procedural schema or automatic list of logical contradictions (e.g., thesis-antithesis) to be applied invariably to all psychological questions, but simply a return to the nature of the development of the object or process under study. While setting our empirical sights on the nature of some particular psychological subject matter, however, we do want to be prepared to appreciate certain generalized features of developmental processes -i.e., their internal objective contradictions; their particular pattern of transformations of quality and quantity; and the ongoing stage-like retentive though progressive succession (negation of negation) implied by observing those contradictions and transformations carefully.

Whether you are interested in perception, learning, memory, motivation, personality or some other process, materialist dialectics provides a flexible strategy of approach to empirical investigation and not some guaranteed recipe for success in any of these specific empirical endeavors. What the specific contradictions residing in any particular psychological process (or graded level of that process) are, as well as how they are resolved in individual cases thereof, are both questions for empirical scientists in the various fields of the discipline to discover.

At this point, some of you more pragmatically-minded students might still be inclined to goad the dialectical psychologist into giving a quick and ready answer to a seemingly pressing question like: "Give me a formula, how do I do science?" But for the dialectical psychologist there is no such stand-alone formula or recipe. In contrast, let's note that the variable psychologist does provide a ready answer as follows: "Well, first you have to take some basic courses in research design, report writing, t tests and ANOVA. Then, if you are smart enough, you go on to take some upper-level courses in more fancy statistical techniques." The text I used as an impressionable undergraduate, for instance, provided a wonderful encapsulation of this latter statistical cookbook approach to data analysis:

Decision tree for "selecting among available statistical procedures" according to "the type of [numeric] data," the "question" of interest (relationships versus differences between groups), the "number of groups," and "whether the variables" were obtained from "independent or dependent [related]" samples (From Howell, 1985).

Thus, despite all the abstractness of its knowledge products and despite its demonstrable irrelevance to concrete human affairs, the variable model of research seems -at face value- to be seductively simple to those who just want to get on with the job of collecting numerical data and a steady paycheck. In this regard though, each of you might want to consider whether you are aiming to become merely a number crunching experimental or psychometric technician, on the one hand, or aspiring to actually understand the nature of psychological processes and thereby utilize or apply that knowledge on the other; for it is at such tempting early ethical junctures that these career paths begin diverging and only rarely converge again thereafter. Each path, of course, has its own rewards and consequences for psychology and for society.

Exemplars of the contrast: Piaget, Vygotsky, Skinner, Brainerd and others

Assuming that you are still with us, let's briefly entertain the possibility that a dialectical approach to psychology might serve to help us avoid some past disciplinary errors, reinterpret previous empirical results and perhaps even guide the structure of subsequent empirical research. The most expedient way to address these related methodological considerations is by contrasting the efforts of those who have taken a dialectical approach to mental development seriously with a few high-profile exemplars from the variable psychology tradition.

Piaget versus Skinner on empirical procedure and Brainerd on "conservation tasks"

Jean Piaget (1896-1980) was a Swiss biologist whose career interests gradually shifted beyond the organismic purview of his doctorate-level observations (on the way mollusks "adapt" to various environments) toward successive psycho-social attempts to describe and explain the normal pattern of progressively more "adequate" cognitive structures which mediate the intellectual development of individual human beings. In psychology (and sociology), Piaget is best known for his four-stage theory of cognitive development (from Sensorimotor through to Formal operations) which usually graces our introductory textbooks in the following hierarchical manner.

Formal operations (beginning at ages 11-15): Child's knowledge base and cognitive structures are much more similar to those of an adult. Ability for abstract thought increases markedly.
Concrete operations (ages 7-11): Child is developing a considerable knowledge base from physical experiences. Child begins to draw on this concrete knowledge base to make more sophisticated explanations and predictions. Begins to do some abstract problem solving such as mental math, etc. Still understands best when educational material refers to real life situations.
Preoperational stage (ages 2-7): Child is not yet able to form abstract conceptions, and must have concrete hands-on experiences and visual representations in order to form basic conclusions. Typically, experiences must occur repeatedly before the child grasps the cause and effect connection.
Sensorimotor stage (birth - 2 years old): Child interacts with environment through physical actions (sucking, pushing, grabbing, shaking, etc.). Object permanence is discovered early on (things still exist while out of view). These interactions continue to build the child's cognitive structures about the world and dictate how it functions in or responds to changes in that world.

This stage theory was initially worked out by Piaget by way of carrying out successive rational case-study method investigations on a limited sample of Parisian and Swiss children (including his own). It was introduced gradually to the North American audience through various English translations (Piaget 1926 through to 1950), each of which received a round of disciplinary critique. As Ernst Hilgard puts it:

"The cycle of acceptance, rejection, and then a second acceptance of... Piaget... provides a historical lesson about the climate of readiness for a particular kind of theory. When Piaget's work first came to the attention of American psychologists, [introspective] psychology was at a low ebb, with behaviorism at its height [and the mental testing industry still in its early years]. This does not mean that Piaget was not read when his books first appeared in translation (Piaget, 1926, 1928, 1929, 1930, 1932), and his importance was soon recognized by the inclusion of his chapter on 'Children's Philosophies' in the first Murchison Handbook (Piaget, 1931).

[An initial round of disciplinary critique was also carried out regarding some of the weaker features of Piaget's proposed stage theory and investigatory technique to date (Mead, 1931; Huang, 1930; Deutsche, 1937).] Whether or not these studies captured the [enduring] essence of Piaget's theories, for nearly two decades there was little interest in his [approach] and Piaget's chapter was omitted from the first two editions of Carmichael's Manual of Child Psychology (1946/1954), which followed upon the Murchison volume.

Piaget continued his productive work with.... The Psychology of Intelligence (Piaget, 1950).... [and] by the time Mussen edited the third edition of Carmichael's Manual (1970), Piaget's chapter was again prominent (Piaget, 1970a), as one of a 12-chapter section entitled 'Cognitive Psychology.' Piaget's views had of course matured in the meantime, .... but I believe that the readiness to take Piaget seriously again was determined more by the independent rise of [American] cognitive psychology than by changes in Piaget's theory. To put it another way, developmental psychology needed an intellectual leader to symbolize and give encouragement to the direction in which it was moving; psychology needed Piaget, more than it was converted by the intrinsic persuasiveness of his theory" (Hilgard, 1987, pp. 565-566).

We are interested in expanding upon two of the points alluded to by Hilgard. First, that disciplinary critiques of the Piagetian approach (up to 1950 and onwards) tended to miss the mark with regard to the more "essential" features of his initial stage theory or investigatory methodology. Second, that the "intrinsic persuasiveness" of Piaget's views improved in the 1970 through 1980 period, but this was not picked up by the North American tradition of variable psychology, which was already moving in its own (overly operationalized) direction.

Our introductory texts dutifully mention that in Piaget's initial accounts of the advancement from less to more complex stages of thinking, he appeals to a modified process of organismic adaptation -more specifically to the forward-reaching experiential "tension" between previously "assimilated" information about the world and subsequently required "accommodation" to newly acquired information- as the means of transition between cognitive stages. Some of the later 20th century texts even provide cumulative critiques of Piaget's initial theory as understood by their era of General experimental psychology (see Baron, et al., 1995).

The vast majority of these, however, fail to mention that Piaget's later works (1970 onward) moved beyond mere appeal to adaptive assimilation and accommodation to include a potentially more psycho-social "equilibration" account of the transition between cognitive stages in the particular case of human beings. In these later works Piaget recognized that adaptive assimilation and accommodation -by virtue of the fact that they are aimed at reaching an "equilibrium" with changing environmental conditions- have the descriptive downside of being common not only to all novel species-specific physiological or psychological adaptations but also to the merely cyclical mechanical adjustments present in cybernetic feedback loops. It is true that, from within the confines of the later Piagetian account, all the equilibration concept really does is distinguish the two types of systems (novel versus recurrent-cyclical) rather than explain in detail how such novel constructs are formed. We are not, however, justified in condemning or ignoring this central insight of the later Piagetian project simply because its elaboration was lacking.

Although we'll eventually utilize Brainerd as a convenient exemplar of someone who has missed out on the stronger aspects of the Piagetian project by following the variable measurement model of research too closely, we'll also mention various legitimate disciplinary concerns over the descriptive limits of even Piaget's later works as well as suggest how the complementary efforts of Vygotsky and Leontiev can be utilized to mitigate those concerns. In particular, the related issues of the transition between successive stages and the transformative ontological status of each stage will feature prominently in these albeit brief considerations. I'll save you the suspense right now, however, by simply stating outright that both are observable as well as measurable. Do remember too that our aim all along has been to improve the professional working relationship between empirical method and the methodological assumptions underlying psychological science.

During the early years of his psychological career Piaget's investigatory interests and argumentative efforts were directed at upholding a methodological middle ground between mentalistic empiricism (e.g., Titchener) on the one hand and any form of implied intellectual nativism (e.g., Terman's mental testing approach) on the other. He wanted to address issues of mental development and intellectual growth which seemed to be either left out or actively avoided by both sorts of early 20th century psychology. Similarly, his mid-career efforts would not fall neatly into the widespread operational definition format of American General psychology in either its Stevens-through-Boring (hypothetical mental construct) variety or its somewhat anti-mental Skinnerian (input-output) variety (a la Skinner, 1950). These successive strivings for a workable middle ground surely weighed against the possibilities for a favorable disciplinary reception and influence of Piaget's mid-career approach to the investigation of mental processes (which was again refined during the latter years of his career).

Instead of falling prey to any of these successive methodological or procedural dichotomies, Piaget adopts a rough though workable third approach to the empirical investigation of so-called "hidden" cognitive regularities and stages. Despite its procedural roughness and the admitted inadequacies of its eventual empirical or theoretical knowledge products, this middle-ground approach was still consistent with the Standard view of how science works:

"Piaget begins with the plausible assumption that what accounts for the transformations of experiential input into behavioral output are cognitive processes. Next he observes children in problem-solving situations, and notices similarities in the task-specific response patterns of children of similar ages. He then conjectures on what mental structures might account for these behavioral response patterns and tests his conjectures in a wide variety of .... situations.... It is conceivable, of course, that all of Piaget's specific empirical claims are false..... [but] the hypothetico-deductive procedures that Piaget deploys are precisely the same ones that legitimize physicists' right to claim explanatory power for hidden entities and processes like electrons, .... and electromagnetic fields" (Flanagan, 1984, p. 131).

Piaget's emphasis on the "cognitive" (like that of James, Dewey, and even Angell or Woodworth) is a commitment to the need to include intentional mental states in the explanation of mental processes. More specifically, the "developmental" side of Piaget's stage theory itself is a recognition that the way in which we humans process experience changes in an orderly, increasingly adequate, and species-specific fashion. In postulating the developmental stages from Sensorimotor to Formal operations, Piaget recognized qualitative differences between them while remaining within an (albeit implied) realist metaphysic which he eventually called "genetic epistemology". The contrast with the objectivist though often anti-realist accounts of operationism (as defined variously above) is marked in this respect. There is also an implicit dialectical unity of opposites contained within his stage theory which both allows for and attempts to describe transitions from lower to higher mental abilities. In other words, in terms of methodological assumptions, there is a great deal of overlap between Piaget and the more progressive (enduring) aspects of the functionalist approach to mental processes (see Section 4), but in some ways (see below) he eventually surpassed even that tradition.

This does not mean, however, that Piaget's portrayal of intellectual stages was ever fully satisfactory, either as initially stated or even after it was refined up to 1980. As originally proposed there are three main features of Piaget's hierarchical stage theory that we can consider in this regard: (1) that the stages occur in an invariant sequence (children may vary somewhat as to how long they are in each stage, but they progress through them in the same order and do not skip a stage); (2) that the stages are universal (they do not vary from one culture to the next); and (3) that each stage is a logically organized homogeneous whole (each occurs in an all-or-none fashion and, when present, is generalizable to all relevant tasks). As we'll see, the latter two of these features did not hold up as well under the scrutiny of subsequent empirical research as the first feature did.

Probably the earliest and most apt disciplinary critique leveled against Piaget's initial account of his theory was the cultural egocentrism critique. This was aimed specifically against feature 2 above regarding the cultural universality of cognitive stages. Mead (1931), for instance, found little evidence of preoperational thinking (of the animistic, superstitious sort) among even relatively young Samoan children. Instead, there seemed to be a marked and early preference for concrete operational thinking as indicated by their ability to grasp cause and effect relationships. She concluded that mental stage transitions were likely less universal as well as more culturally embedded than Piaget believed and that his account of them was tied too closely to one particular culture. While such observations do not discount feature 1 of Piaget's theory (regarding the orderly transition from less to more adequate mental stages), they do seem to suggest the possibility that the acceleration of occurrence as well as the relative preponderance of a given mental stage within a given culture might be closely tied to the respective adequacy of those stages for success within that particular culture.

In this connection it should be mentioned that during the consecutive summers of 1931 and 1932, A.R. Luria's Moscow-based group of researchers investigated the historically unique circumstances of the ongoing shift from individual peasant farming practices to collectivized farming in the countryside villages of Uzbekistan. Their particular focus was on establishing a descriptive outline of the observable pattern of changes to the predominant mental tools used by these villagers resulting from this technological shift. By utilizing a probing but conversational method of leading questions (rather than simply applying standardized mental tests), Luria's researchers managed to assess the way these villagers understood and approached the world (Luria, 1976). Initially, it was found that Uzbek children and adults were either unwilling or unable to form abstract concepts. The following summer it was found that the preliterate members of the Uzbekistan villages were still using the traditional, primarily functional-descriptive reflection of reality (the very form of thinking which had been successful for their survival for centuries) while those with even a modicum of formal schooling were now using (to various degrees) the characteristically modern abstract-conceptual approach to reality. As the educational level of these subgroups increased, so did the appearance of distinctly modern forms of "conceptual" (abstract relational) thinking. Through basic education and on-the-job experience, modern society was truly rearming the minds of these villagers to deal with the contingencies of the technological (economic and political) shift from individual farming to collective farming practices.

The observational data obtained supported their earlier contention (based on anecdotal anthropological evidence) that the adoption of literacy in any preliterate culture resulted in a transformative effect on the speech, counting techniques, and modes of memory of those involved (Vygotsky & Luria, 1930/1992). Luria argued that what they were observing in these outlying Soviet villages was a condensed form of the very historical shifts in mentality "over a brief period," which under ordinary circumstances had "required centuries" in other locations (Luria, 1976, p. 164). The new demands of that societal transition had changed what was adequate and thereby changed the preponderance of the highest reached (or most predominant) stage within that culture. Unfortunately, this fascinating research was not made known to a North American audience until much later (see Luria, 1976, 1979; Vygotsky & Luria, 1930/1992).

Returning to our narrative, Huang (1930) found little evidence for childhood animism as young children attempted to explain what happened in a magician's trick. Even very young children recognized intentional sleight-of-hand when they saw it. In considering these sorts of results, Huang and others began pointing out various deficiencies in Piaget's original empirical technique by suggesting that the very design of some of the Piagetian tasks may have thrown younger children off the scent of a solution. For instance, while assessing a child for conservation of substance the Piagetian investigator instructs the child to pay attention to his (or her) actions as a ball of clay is rolled out or flattened into a pancake. This sort of instruction may incline a younger child to think that the solution to the task involves figuring out what the experimenter is doing. Success on this task, however, actually demands that the child discount the investigator's actions. The claim can be made that in some instances the procedures used may not have been measuring cognitive differences so much as the ability of younger versus older children to understand the requirements of the task at hand.

This bad experiments critique has recurred numerous times over the years (see Flavell, 1963). It boils down to an issue of whether we have or can ever design our investigatory procedures tightly enough so that we can answer the questions we are setting out to answer. But this is surely not the kind of objection that can't be overcome. A review of successive studies aimed at tightening up on such procedural shortcomings indicates that when tasks are modified so as to allow younger children to better understand the problem to be solved, they show far less egocentrism and greater competence on conservation and causality tasks at younger ages than the albeit rough Piagetian age-riders predict (Gelman, 1978; see also Donaldson, 1978).

In sum, early American researchers first devoted themselves to verifying the existence of the developmentally differential pattern of approach to physical reality that Piaget described (e.g., that most 5-year-olds believe that the quantity of water changes when it is poured into a container of a different shape while older children do not). Mid-through-late 20th century American researchers then tried to challenge Piaget's theory by demonstrating that children can advance beyond such preoperational thinking at earlier ages. Piaget jokingly referred to such efforts to rush individual development on particular tasks as "the American question" because he was not concerned with age-riders so much as establishing an account of the normal sequence of advancement of thinking across a broad set of representative tasks. In short, Piaget regarded the age ranges associated with his proposed cognitive stages as only crude guidelines and made many attempts to be clear on that point. One of the sterner efforts to do so can be found in a footnote to the chapter on the sensorimotor stage in Piaget & Inhelder (1969): "It should be noted once and for all that age information in this book refers only to an average and still-approximate age" (p. 119).

Even though the American preoccupation with age-riders -which Piaget himself never took very seriously- is indicative of their preexisting concentration on individual performance as a source of numerical data, there is still a degree of overlap in this regard to be recognized with Piaget's approach up to 1970 and even afterwards. Although Piaget occasionally noted that the cognitive advances of the individual involve adaptations to the social environment (see Piaget, 1950, on peers and parents), his primary empirical focus remained within the confines of observing the pattern of how the individual child invents mental constructs to understand circumscribed aspects of physical reality (e.g., conservation tasks) rather than on actually observing the wider relationships or cultural contexts that contribute to such individual development per se. We'll return to this social relationships aspect of the issue shortly, but we should note that there was (even from the earliest era of Piaget's career) an important disciplinary exemplar to be had in the figure of Lev Vygotsky, who did concentrate on that wider unit of analysis. Once again, however, this Vygotskian approach was not well known or taken seriously in North American circles until after 1978, when an English translation of Vygotsky's Mind in Society became available.

Meanwhile, Deutsche (1937) not only questioned the homogeneity/cohesiveness feature of Piagetian stage theory -by way of noting that mental growth as indicated by "children's concepts of causal relations" was more gradual and less cohesive than first believed- but is also said to have sided against the very existence of such stages. In this respect, Deutsche was one of the earliest of many such anti-Piagetian stage deniers to be encountered over the years (see also Brainerd, 1970, 1973, 1978; Fodor, 1980; Carey, 1983 for more exemplars). We'll return to that thorny ontological "stage" question shortly and concentrate on the "cohesiveness" aspect of feature 3 for now because it has in some respects already been resolved by subsequent research. It has now been established that children typically understand conservation of substance well before they understand conservation of weight, and they understand conservation of weight before they understand conservation of volume. Piaget clearly overstated his case for the all-or-none occurrence and cohesive generalizability of conservation.

This evidence for what Piaget himself called "décalage" (a heterogeneity of achievement on problems which seem to require the same sort of mental operations) in no way, however, dislodges feature 1 of his theory regarding the ontological existence of an orderly progression of competence and mental stages. All it really requires is that our description or definition of such stages become more sophisticated. It does not even, in fact, undermine the notion of homogeneity per se. For the sake of clarity, let's consider this point. There is certainly a point at which, say, the 8- or 9-year-old child normally has fully mastered conservation in all its aspects while the 3- or 4-year-old clearly has not. If a stage is viewed as the range of competency between qualitatively unable through to qualitatively able -regardless of whether it has a very long complex transition, a medium transition period, or a very short and simple transition- that really does not change the stage nature of the transition being described. At some point, too, the new competency really does become homogeneous. So in actuality, even the homogeneity aspect of feature 3 is not threatened by the recognition of décalage, but merely the presumed suddenness of the occurrence of homogeneity. The proper allegation against Piaget here is that to him the onset of each stage had looked more sudden and its contents more cohesive than subsequent research has supported.

The cultural egocentrism, bad experiments, and décalage critiques have succeeded in forcing careful refinements to the initial statement of Piagetian stage theory (in particular to features 2 & 3 therein). In this regard Piaget can now be said to have adopted an indefensible cultural universality position and to have overemphasized the speed of onset as well as the initial (all-or-none) cohesiveness of each eventually homogeneous cognitive stage. Recognizing the existence of décalage does not in itself count against Piaget's belief in the existence or causal efficacy of mental stages (feature 1). On the contrary, in combination with the other critiques it has opened up a host of research opportunities to investigate the ontogenetic timing of onset and progression of ease of completion, as well as the comparative social-cultural contexts and sources of those mental abilities which comprise the contents of such cognitive stages.

None of the above disciplinary critiques threaten the overall Piagetian project of outlining the normal pattern of progressively more adequate cognitive structures which mediate intellectual development, nor do they undermine Piaget's fundamentally dialectical position that each cognitive stage represents qualitative developmental differences in modes of thinking rather than merely quantitative growth in the amount, efficiency, or scope of thinking. At face value, however, an altogether more serious critique of Piaget's project is encountered in the somewhat complex charge that he is merely "describing" behavior rather than actually producing "explanatory" propositions about the hidden mental processes which lie between observable input and output variables or between so-called cognitive stages. It is here that, in one version of this describing versus explaining critique, the contrast between Brainerd's input-output variety of variable psychology and the Piagetian inferred cognitive structures approach comes into high relief. It is here too that, in a second version, the contrast between the early Piaget and the later Piaget (regarding transition between stages) comes into its own as an overlooked aspect of the debate between Piagetian stage supporters and detractors.

One easily dismissed version of this describing versus explaining critique is the suggestion that Piaget is guilty of the old mentalistic naming fallacy, similar to that of Johannes Müller who referred to a vitalistic aliveness principle to distinguish between rocks and living organisms (see Section 3). It is sometimes insinuated that when Piaget claims that the child has acquired the "conservation principle" (as indicated by the child's ability to perform conservation tasks), he is just describing a change in the child's behavior and ascribing to that set of observable changes a fancy "mentalistic label" which adds nothing of explanatory value to our overall descriptive analysis. If this proviso against mentalism and advocacy of descriptive behavioral analysis sounds familiar, it is probably because this is the sort of procedural limitation that Skinner (1950) placed on all scientific psychology.

For the sake of clarity on the disciplinary context for this procedural point of view, let's recall that Skinner (1945) was the only member of the "Symposium on operationism" to reject operationism of the early-Stevens through Boring variety. His rejection of operationism hinged on Question 10 of the Symposium where he was the lone participant to adopt a direct realist position that "presupposed" not only a logical apparatus for dealing with the language of science but also our unimpeded access to the world to which such an apparatus refers. In contrast to Boring, Skinner felt no philosophical compunction to (strictly speaking) "justify such an apparatus" -as the indirect realist is always obligated to do- but merely took its existence for granted as the undeniable epistemological circumstance within which we are able to carry out empirically guided scientific research.

Even though Skinner remained (from 1945 onward) a naive direct realist who recognized the ontological existence of psychological processes, he still felt compelled to advocate a "temporary" procedural avoidance of reference to those intentional mental states in favor of deriving mathematical generalizations regarding input-output relationships. Skinner (1950, 1956, 1963, 1971, 1984), in short, remained pessimistic about the possibility of ever explaining the outcome of S-M-R relationships by way of referring to intervening (middle) "mentalistic" terminology. For him, as long as we have some way of connecting input and output variables it doesn't really matter what is going on "inside" the particular organism under study.

According to Piaget, however, we can describe these hidden mental structures or processes, and moreover it is important that we do so because differential behavioral consequences would be expected with developmental changes in those intervening (middle) processes. Relatedly, if one is to pin down Piaget's "evolutionary epistemology" in the terminology we have been using throughout this work, he must be labeled (like Skinner) as a naive direct realist who likewise believed in and outlined a procedural method that is conversant with a "correspondence" theory of truth. We will return to that related epistemological point shortly, but for now let's encapsulate the accompanying practical-procedural divide between Skinner and Piaget.

In brief, this seemingly complex methodological issue boils down to a rather simple procedural divide regarding proper empirical method of the following sort: When we are looking for a satisfactory disciplinary account of psychological processes, are we looking merely for an account which allows us to predict and control behavior or performance accurately, or are we also seeking a description and explanation of the mental processing that is going on inside the organism? Skinner's procedural limitation on proper empirical method suggests that all we can ever do is produce predictive models (repeatable numerical or graphically depicted predictions which correspond to the nature of input-output relations). Piaget's position, however, suggests that we can go further than this by producing psychological theories (empirically grounded descriptions and explanations which can be confirmed to correspond to the nature of the mental processes to which they refer). Piaget, in short, was much more concerned with the disciplinary import of discovering what is going on inside the organism because we will not be able to adequately predict output without knowing what the intervening intentional mental states are actually like.

On the one hand, there is a notable procedural-methodological overlap in this regard between Piaget and the respective traditions of both the American functional psychology movement and the Vygotsky-through-Leontiev tradition of research (more on that later). On the other hand, the procedural overlap between the Skinnerian approach and what mid-through-late 20th century Boring-style (operational definition) research was doing is equally important to recognize. During this latter period it was easy for the traditionally trained empirical psychologist to understand the graphical displays of reinforcement schedules and the between-treatment-group correlation coefficients being produced by Skinnerian researchers. Even though there may have been a difference of opinion between them regarding the importance of appealing to "intervening variable" terminology, their shared reliance upon statistical analysis and numerically defined data provided a common frame of reference that reassured both traditions that they were carrying out "hard" statistical science rather than "soft" descriptive, clinical, or merely mentalistic style psychology. It was quite understandably difficult for an empirically trained psychologist of this era, however, to go back and read Piaget. Doing so would often produce the following sort of quandary: "I've read the book and feel like I've learned something about kids, but where is the data, where is the statistical analysis?" That's because Piaget was interested in the way kids think and how that differs from the way adults think. He observed kids, asked them leading questions, set out specific problems for them to solve, and whatever he did was always dictated by this open curiosity about the shifting two-sided dynamics between older and newer cognitive capabilities.

In Piaget and others like him, what we find is a marked and refreshing object-centered approach to psychological investigation. This is indicated by both the dialectical methodological assumptions he made about mental processes and the context-sensitive observational methods he utilized. Piaget was not somebody with an abstracted, decontextualized set of statistical measurement techniques to be applied to all objects like some magic preestablished formula. This was a person who probably never ran a correlation coefficient or t-test in his life, yet one can come away from reading his works with the feeling of having learned something about the development of the cognitive aspects of human nature itself.

In the broadest possible terms, we have in Piaget somebody who is making a methodological claim that the nature of the middle term in the S-M-R relation is not just discoverable, but that direct reference to it is important from an explanatory point of view. In other words, that there is an orderly development of successively more adequate cognitive structures and that this is what accounts for changes in behavior (task performance). Given the disciplinary context of the rise of operationism (which, as mentioned above, relies upon indirect realism and formal logic alone), this dialectical approach to psychological investigation was not a particularly welcome methodological stance to take at the time. In historical hindsight, however, the fact that subsequent statistical research has refined our understanding of the normal onset of specific varieties of conservation does not dislodge our confidence in these broader procedural aspects of the Piagetian approach. On the contrary, it provides further grounds for optimism that our successive conjectures on the shifting two-sided nature of mental structures might continue to be tested and refined still further under a wide variety of investigatory situations. As we will see shortly, this wide variety of situations aspect of Piaget's optimistic view plays a rather key role.

The General psychology of the mid-through-late 20th century remained for all practical purposes an empirical-method centered approach which took its subject matter as numerically defined variables. Whenever one gets into a methodological rut like that, one tends to put one's subdisciplinary blinders on and begin either actively avoiding other ways of doing psychology or attempting to translate them into one's own. In fact, some folks were so puzzled by Piaget that they sought to redo the whole project in terms of variables. So it was that Charles Brainerd claimed that at last psychologists could understand Piaget because his views were now expressed in terms of numerically defined variables, correlation coefficients, and ANOVA. But when you look at the results produced by this translation of Piaget's rationally informed object-centered investigations into an empirically operationalized variable model approach, it becomes evident that Brainerd was forced into denying a lot of Piaget's basic theoretical assertions. Chief among these was Piaget's assertion that conservation is not something that can be learned at any time in the child's development because it requires a higher stage of cognitive structure if it is to be mastered fully.

Brainerd and his coworkers trained up young children using reinforcement principles to behave in a conservation-like manner. They took children who should not be able to conserve, reinforced them for giving conserving answers and, sure enough, these subjects now began behaving as if they were conserving. In such research there is always a convenient supposition that there is no distinction to be made between what these subjects are doing and the normal process of conservation studied by Piaget and other researchers. But surely to make this supposition is to confuse a merely operationally defined and experimentally produced abstraction with a concrete reality. To put it more plainly, one can produce the appearance of conservation without the presence of real conservation -i.e., the implied cognitive mastery of the physical relationships that goes along with such changes in performance. What Piaget was talking about was not just the fact that what children say or how they perform within the context of a given investigatory situation changes over time, but that the structure of the way they are thinking is now different and there is some sort of new understanding being expressed in these observable changes in performance.

Having made the questionable equation between mere operationally defined performance on a set of target tasks and the deeper intellectual mastery of them, Brainerd concludes in the Skinnerian manner with the further claim that we need not appeal to mental structures to explain these data; we need only appeal to external contingencies for explanation. Accordingly, the ensuing theoretical debate between Piagetian researchers and Brainerd became one of whether conservation could in fact be accounted for simply by appealing to such external contingencies.

Well, now, were the Piagetian researchers able to provide any way by which we could discriminate between the two cases? As I recall, the experiments which followed soon thereafter very clearly discriminated between the two cases, and between the two theoretical accounts as well. This was done by way of setting up a variety of situations in which the behavioral implications of the inferred mental structure could manifest themselves. Piaget, himself, had already put forward some rough empirical propositions about the kinds of prerequisite experiences and transitional capabilities that were necessary to lead up to the mastery of conservation. In doing so he recognized that since these transitional capabilities reside within a wider ongoing dialectical process of development, they were bound to pose a temporarily limiting influence on the success of observable performance until such time as they were resolved by the intellectual mastery of conservation.

For Piaget this simply meant that under normal everyday circumstances children in the concrete mode of thought would constantly be foiled by tasks requiring them to consider the more abstract relationships involved in conservation. It is by way of observing these temporary behavioral contradictions, he suggests, that we can trace out the span of transition from less adequate to more adequate mental structures. For subsequent Piagetian researchers, however -who were now armed with a more sophisticated understanding of the extended transition between these two particular cognitive stages- this meant that they could rather comfortably counter Brainerd's position. This was done not only by pointing out that kids who had already mastered conservation of volume under normal developmental circumstances were always able to perform tasks requiring conservation of weight and substance, but also by demonstrating empirically that younger kids who had been trained up on conservation of volume tasks were not likewise able to generalize backwards along the ontogenetic trail of development to successfully handle conservation of weight tasks. This was considered proof that even though the training up of this latter group allowed them to master certain aspects of conservation tasks, their actual mental structure was still qualitatively different from, and less developed than, that of those who mastered such tasks under normal circumstances.

So, if what I'm describing to you is a more or less accurate account of what happened in these subsequent experiments, then clearly in the principle of conservation we are dealing with the makings of an explanatory principle; something that differentiates between one type of theoretical explanation and another. It produces certain kinds of predictions that, when they are confirmed to occur, are contrary and superior to those predictions made by another theory (e.g., external contingencies); and after all, that's a pretty good indication of whether it is an explanation or just a "vague" description.

In any case, our central concern is not with the details of such experiments but rather with noting the broader disciplinary implications of their procedural contrast with the approach of variable model research to developmental subject matter per se. In having recognized the continuous and discontinuous aspects of cognitive development Piaget provided these researchers with a useful disciplinary exemplar of the kind of approach that can be utilized in our analysis of other capacities like learning to speak, read, etc.

This dialectical approach to mental development, however, encountered hostile resistance throughout the 1960s-1970s period. It was suggested, for instance, that the appeal to an invariant sequence and a logically integrative succession of qualitative stages looked like interesting empirical claims but are not, and that the Piagetian approach to the study of mental development is in fact empirically empty (Peters, 1966; Flavell & Wohlwill, 1969) or even "trivial" because it is hard to imagine the described process turning out any other way (Brainerd, 1978a). But surely this is no objection at all; it is instead a statement of the virtue of Piaget's theory.

When considering any developmental process, what you look for and what you expect to find are internal necessities and contradictions which are somehow resolved as the process moves forward (horizontally) and upward (vertically). Once one has revealed the nature of such a developmental process, what you will have revealed is something that has a logic -a set of internal necessities- of its own. Now, if the Piagetian approach has outlined such a necessary ontological set of vertical intellectual milestones as well as refined the horizontal timeline for their occurrence, and then somebody complains that it is logically inconceivable that the process being thus described could have come out any other way, that should not be considered a complaint at all but rather a statement of the highest possible praise.

By way of contrast, let's consider the inability of reinforcement theory to produce an equally logically resonant or even intuitively obvious account of other concrete learning milestones like learning to speak, to read, or to write. Surely here is a thoroughly nondialectical theory which makes very little headway in this regard. Within reinforcement theory itself, the whole process of such change in performance is portrayed as a set of accidents leading up to the child's learning to read. You don't see anything in that theoretical account which reveals or even suggests any kind of internal necessity -that starting a child off in some particular direction will necessarily lead the child along the path of obtaining the novel capacity under study.

Reinforcement theory is both nondialectical and anti-dynamic in various respects. First of all, contingency and necessity are assumed to be and are also portrayed as formal logical opposites (see Sections 1-4 for accounts of the limitations of adopting that false opposition). Secondly, regarding the dynamic and directional (forward-reaching) aspects of all such learning processes, what strikes us immediately when we actually look at the way children learn to read is that the vast majority of kids, with very little attention from their parents and teachers, learn to read. But if we limit ourselves to the inherently mechanistic reinforcement theory in attempting to account for this fact, the chances that successive accidents and contingencies (1, 2, 1, 2, etc., all the way through) would come out just that way nearly every time are so slim that it becomes truly difficult to imagine how children could learn to read in the way we know they do. Such discrepancies between theory and observable fact indicate that the reinforcement account of learning is thoroughly inadequate. It does not tell us something very important -that there is something about children and about the wider context in which they live that necessitates their learning to speak, to read, to conserve, etc. In short, there is something going on in the developmentally directional (forward reaching horizontal and vertical) process under study itself that is left out of our account when we appeal to successive though purely accidental contingency or even to the cumulative effects of structured though mechanical schedules of reinforcement.
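
The improbability point above can be made concrete with a purely illustrative back-of-the-envelope sketch (written here in Python; the probabilities and chain lengths are assumptions chosen only for illustration, not data of any kind). If learning to read depended on a chain of n independent contingencies, each of which had to fall the "right" way with probability p, the chance of the whole chain coming out right would be p raised to the power n, a quantity that collapses rapidly as the chain lengthens even for generous values of p:

def chance_of_lucky_chain(p, n):
    """Probability that n independent contingencies all fall the 'right' way."""
    return p ** n

# Even with each contingency very likely to go "right" (p = 0.9), a chain of
# 100 such accidents succeeds only about 0.003% of the time -yet nearly all
# children learn to read.
for p in (0.9, 0.7, 0.5):
    for n in (10, 50, 100):
        print(f"p={p}, n={n}: chance = {chance_of_lucky_chain(p, n):.3e}")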

Finally, as O.J. Flanagan points out rather nicely, there is a severe and probably intentional "failure of imagination" at work on the part of those who make the triviality complaint against the theoretical product of Piaget's investigative methodology: "It is simply not true, as Brainerd [1978], for example, says, that 'it is rarely possible to imagine [the] predicted sequences [of competencies] turning out any other way'" (Flanagan, 1984, p. 132). The normal order of occurrence may be that competency "A" is mastered and then somehow followed by competency "B", but maybe someone might learn certain aspects of competency B first. After all, this was precisely what Brainerd and his colleagues did in their "conservation training" experiments! They set up the abnormal conditions by which an albeit limited violation of the normal temporal (horizontal) pattern of skill acquisition was brought about.

What strikes me most about this type of complaint against Piaget is that it was raised by people who were not yet attuned to a dialectical mode of thought (to questions of internal necessity and objective contradiction in development), nor to the rather fundamental questions of qualitative transformation in the development of higher mental processes from lower ones (e.g., Fodor, 1980). Brainerd in particular -at this time at least- was someone who just didn't appreciate what Piaget was doing for us. The accompanying procedural contrast between Brainerd and Piaget can be stated rather simply as follows: Piaget approached the issue of intellectual development openly. He was interested in children, and he let them dictate what was important -i.e., the ontological structure of thought, how that changes over time, and its central ontological dynamics, contradictions, or transformations. His investigation was entirely dictated by those sorts of considerations. Brainerd, in contrast, brought with him a good strong training in ANOVA and so on, and didn't quite see that there is anything else to be drawn into the investigation of higher mental processes.

Even though the specific intent of their ancillary complaint was to highlight the fact that Piaget's empirical procedure of discovery and dialectical means of analysis was not "data driven" in the usual numerical sense of the word, it should be apparent by now that it was still quite reasonable for Piaget to appeal to the observable sequence and timing of milestone events as a working hypothesis: being a direct realist, he recognized that the observable logic of the development of the cognitive processes under study is necessarily a reflection and a product of the way the world is. The logically resonant and intuitively obvious necessity of Piaget's hierarchical stage theory was an outcome of remaining sensitive and open to observing the normal development of cognitive processes on their own terms. Stated yet another way, an account of the rise of successively more adequate cognitive structures that is grounded by (situated within) the concrete ontogenetic development of the infant into adulthood was the best possible guess at the internal dialectical logic of the process.

If one simply takes a few hours to contrast Piaget's (ontogenetically contextualized and dialectical) approach with the contemporaneous rise of a decontextualized and numerically driven "intelligence testing" movement, our procedural counterpoints are again driven home rather soundly (see Chapters 3 & 4 of Ballantyne, 2002). Despite all its mountains of data, that latter movement has been naming without claiming and describing without explaining for some time. Furthermore, the expected nontrivial theoretical knowledge products that were supposed to come about as a result of all that empirical activity have not been forthcoming in that subdiscipline. The theories at work there are virtually the same as those of 1929 and have proceeded no further. In other words, the ongoing interpretive theoretical debates between interactionist and innatist mental testers -let alone the wider ontological "what" question about the nature of human intelligence- will never be resolved from within the methodological confines of the standardized testing subdiscipline because the mechanical and formal logical nature of their measurement devices is at variance with the dialectical nature of the development of human intellect itself (see Chapters 7 & 8 of Ballantyne, 2002).

The "naming fallacy" version of the describing versus explaining critique, along with its ancillary "triviality" complaint against the Piagetian approach, has therefore been overcome to one extent or another. For any number of reasons, what we can come away with from these considerations is that Piaget was producing falsifiable propositions and objective (object-oriented, empirically grounded) hypotheses, and that he was generating potentially explanatory theory about the nature and development of higher mental processes. Piaget's project also provides a valuable historical exemplar which indicates that we are not locked into adopting the operationalized (hypothetical constructs) position of Boring, nor the Skinnerian de facto anti-mentalistic position of appealing to mere environmental contingencies or schedules of reinforcement. His approach can be seen in historical hindsight to have utilized the better aspects of each of those traditions -i.e., an attempt to outline the import of the middle term in shifting S-M-R relationships, and an implied direct realism, respectively.

Personally, I am quite enamored with the disciplinary possibilities of the Piagetian approach, and may thus far appear to have been less critical than usual in my coverage thereof. Our following joint consideration of the next version of the describing versus explaining critique, however, may allay some of your fears in this regard.

Our second more legitimate version of the critique points out that the mechanisms of transition between stages are never fully explained from within the confines of Piaget's particular account of human intellectual development. Stated another way, the critique is that Piaget fails to tell us exactly how vertical cognitive development comes about. This issue of the "transition" between stages -of what pulls the infant's mentality forward and upward into becoming its adult form- was tackled successively by Piaget. Our task will be to consider carefully the strengths and limitations of his early and later career attempts to do so.

Some of the more amiable characteristics of Piaget's approach warrant reiteration before tackling this pivotal issue of transition per se. Firstly, let's recall that various realist-materialist assumptions are implied in Piaget's genetic epistemology, adequacy thesis, and object-oriented empirical method. Piaget talks a lot about children "constructing" the external world, but the only way in which this poetic license makes sense is when we interpret his views in a naive direct realist manner. When Piaget seeks to describe the child's acquisition of increasingly more powerful and "adequate" cognitive structures, toward what are those successive structures more adequate? They are more adequate to the world within which that child lives and, more specifically, to the particular physical relations the child is exposed to over the course of its intellectual development. This cognitive adequacy thesis makes specific reference to the child's intimate though changing connection with the world and, more generally, it assumes the knowability of various aspects of that world. By describing his position as "constructivist," then, Piaget is merely indicating that the child's cognitive "schema" have to be acquired over a long period of active experience with the world and that successively more adequate schema are somehow developed (constructed) in order to function adequately in that world. I reiterate this point because those who interpret Piaget in the phenomenalist manner and who thereby advocate a "radical departure" from the Standard view of science (e.g., von Glasersfeld, 1979, 1989) are hopelessly trapped in a problematic self-contained loop of their own sensory experience from which they (and their approach to education or psychological analysis) cannot escape. Furthermore, in having adopted that problematic position, they share much in common with others during the disciplinary Crisis era to be covered below.

Secondly, let's recall that Piaget's proposed "stage" theory meets all of the requirements of a "dialectical" theory because it is a nonreductive, emergent-integrative evolutionary approach to the ontology of cognitive processes which recognizes that higher mental processes develop from lower ones. Those who emphasize the admitted inadequacies of Piaget's ontological account of stage transition sometimes take those shortcomings as license to adopt a reductive and implied nativist account (Fodor, 1980) where certain "potentialities" or aspects of the later "higher states" are considered to be somehow present in the lower ones according to the formal logical principle of "ex nihilo nihil fit" (out of nothing comes nothing). The dialectician, however, who has been exposed to other examples of emergent ontological processes both within the discipline and outside its confines, comes prepared to see what Piaget is emphasizing (see Bidell, 1988; Lawler, 1975). This is why we designate Piaget as a dialectical thinker, as someone who recognizes developmental processes as creating new qualities (ex nihilo if you wish).

The open-armed acceptance of these amiable characteristics of Piaget's approach should not be taken to mean that he has provided us with a completely satisfactory theory of transition from stage to stage. It is one thing to acknowledge that there are such qualitative emergent transformations in cognitive development -and doing so is better than denying their existence- but it is quite another to come up with a sufficient account of how those transformations come about. We will aim to acknowledge the inadequacies of Piaget's successive accounts of mental transition but also avoid becoming locked into taking the reductive methodological path characteristic of figures like Fodor or the equally reductive account of figures like the Churchlands with their mechanical-computational view of mind (1988, 1989, 1992). What we require is an integrative view of psychological development that is closer to that proposed by Dewey and by Piaget too -one in which successively wider envelopes of experience are encountered and successively broader intellectual competencies attained by the individual. It is by facing up to (rather than shying away from) this complex continuity and discontinuity of its varied subject matter that both a vertical as well as a horizontal account of such mentality will ultimately be attained by psychological science.

By way of preliminaries, then, let's consider some of the commonalities between Piaget's efforts in this regard and other developmental transition theories like Darwin's theory of "natural selection" or the functional psychology school's use of concepts like organismic "adaptation" and "adjustment" covered in Section 4. Like Darwin, Piaget realized that there were two broad tasks he needed to perform: identify the stages in the developmental process under study and propose a theory of transition between one stage and another. The goal of any truly developmental account is to reveal the necessary order and interconnection of its stages. One can start motivated to do this and make mistakes along the way both in identifying stages and in supplying a theory of the transition between such stages. As we know, for instance, biologists have debated for years about what a "species" is and, relatedly, they are now agreed that natural selection (as Darwin's proposed means of transition between one species and another) is not sufficient on its own. Reference to population genetics, mitochondrial mutations, recombinant DNA, etc., is needed to round out the account (see Mayr, 1982, 1991). This does not mean that natural selection was wrong (as the Creationists long held) but merely that it was not entirely sufficient on its own to account for everything we have subsequently learned about such quantitative change and qualitative transformations. So, while looking for the psychological analog of Darwin's natural selection theory in Piaget's position, we should not be surprised to find that his initial account of stage transition was considerably limited by the contemporaneous intellectual context of the times in the discipline, and was improved upon thereafter once these limitations became manifest to both Piaget and others.

Within psychology, one of the most important aspects of that disciplinary intellectual context was the "functional school" of the late 1890s to early 1930s. That school started out as an effort to emphasize the developmental and emergent aspects of "conscious mental life" (James, 1890; Dewey, 1896) but gradually retreated from questions of mind, of consciousness, and even of mental activity toward a "biologized" account of adjustive movement (observable behavior), with the latter being understood in largely passive terms resulting from bodily or environmental contingency. Angell (1920), for instance, sometimes portrays the human condition in the Jamesian manner as a "balance" between the "individual" and "society" (p. 221), but the conceptual particularities of his overall psychological account more often remain an account of socially modified organismic functions. He never quite adopts the three-phased Dewey-style account of human mentality (as the successive qualitative transformation of individual internally dynamic functions into socialized functions and then into distinctly cultural human competencies). In Carr's (1925) text the conceptual foothold of functional psychology slips still further toward a biologized-behaviorist account. He focused on the "adjustive activity" of organisms rather than on the "adaptive acts" (e.g., of attention) we saw in Angell (1904, 1918, 1920). The functional psychologist, says Carr (1930), is concerned with the "biological process of adjustment" and regards mental processes as a means for an individual organism to "adapt" itself to its environment "so as to satisfy its biological needs" (p. 61).

This is precisely where my personal view of what happened to functional psychology appears to differ somewhat from that of C.D. Green (2005, 2007). In my opinion, the most promising nonreductive and dialectical emphasis on the development of intentional mental processes that was characteristic of the founding functionalist figures (James, Dewey, Baldwin) was not sufficiently carried over nor "absorbed" into the rise of "mainstream" general psychological practice -which as we have seen in detail went on to become a psychology of "variables" (in Woodworth, 1934, 1940; Boring et al., 1948, and many others) thereafter.

In any case, throughout much of Piaget's career, he (like Angell and Carr) simply appeals to a slightly modified process of individual mental adaptation. In Piaget's particular account he emphasizes the forward-reaching experiential "tension" between previously "assimilated" information about the world and subsequently required "accommodation" to newly acquired information as the means of transition between cognitive stages. Early on at least, this was a reasonable position to hold because it has the advantage of stressing the two-sided nature of such ontogenetic mental shifts. The combined concepts of assimilation and accommodation seem to capture the ability of the individual organism (or child) to adjust and adapt to objective changes in the world around it. Both the Jamesian emphasis on "adjustment of inner and outer relations" and Baldwin's effort to "ground psychology in a natural selectionist framework" are thereby retained in Piaget's early account of mental transition. Furthermore, at first glance, reference to these Piagetian concepts seems to acknowledge the active psychical effort of the organism in carrying out such mental adjustments.

But when we consider this Piagetian position on mental transition a little more carefully and in light of the dialectical-emergent points made by Dewey (from 1896 onward), certain shortcomings begin to be revealed. First of all, the outcome of any "adjustment" or "adaptation" is optimum environmental fit -equilibrium or harmony with the environment. Accordingly there is an ultimate passivity implied in the concept of mental adaptation (however described) which should be noted. In the most simplified biological case of a quadrupedal animal wandering into a new and significantly different ecosystem or likewise being cut off from its normal habitat by climatic change, it is the animal that adapts (by way of its adjustive movements or actions), but the initiation for such adaptive adjustment is always conceived of as coming from outside (by the demands of the environment). In a slightly more psychological example, that same animal pursuing live prey and devouring it is surely indicative of a somewhat wider psychical reflection of the environment than the amoeba passively absorbing proteins from its immediate aqueous surroundings, or even the daphnia actively adjusting its location in that aqueous medium toward light and thereby facilitating such vital absorption. But what concepts shall we utilize to distinguish or explain the discontinuous transition between those sorts (or grades) of obviously qualitatively different vital relations to and psychical reflections of those surroundings? Furthermore, how do these relations and reflections differ from those of human beings? The answers to these central explanatory questions (as I tried to show in Section 4) were never provided by the functional psychology tradition, nor will they be provided sufficiently by Piaget.

Given that Piaget's concepts of assimilation and accommodation are an extension of the adaptive emphasis in the functional psychology tradition, it is understandable that we don't see any significant departure from that physiological-individual mode of analysis in his initial account of human mental transition. The functionalists squeezed everything they could out of their two main analytical concepts, and although Piaget adds to them with his double "a" elaboration -an albeit amiable attempt to describe still further the internal dynamics of individual human mental adaptation- these ancillary concepts are not specific enough to do the job they are intended to do. Let's consider this point for a moment.

Within both accounts, animals adapt and human beings adapt, but the discontinuous difference between the way they adapt remains clouded over. This shared shortcoming becomes manifest when one recognizes that the physiological stomach functions of mammals adjust and adapt to different food sources (milk, grass, grains) as well as assimilate and accommodate to that food. That functioning, however, always remains a thoroughly physiological affair. The stomach functioning of such organisms does not develop qualitatively in the same sense as we already know that their mental processes do across their lifetime.

During its period of den life the individual wolf pup, for instance, has to learn not only its place within the primarily physiologically given hierarchy of its siblings but also a great many other vital preadaptive (preparatory) skills which will subsequently aid its ability to adequately carry out successively wider social-psychological strategies in order to establish, maintain, or improve its place in the pack. Since this is true for such animals, and since it is this latter sort of qualitatively integrated expanding psychological development that Piaget is seeking to get at in his analysis of human cognition we are surely in need of a clearer account of these vertical aspects of such mental development. His double "a" concepts do describe two somewhat overgeneralized horizontal aspects of internally dynamic mental activity -the individual mental recognition of the demands of the environment (conscious or otherwise) and the psychical effort exerted to carry out such mental adaptations (intentional or otherwise) respectively. Let's note, however, that these two respective aspects of horizontal internal mental activity seem to fall within the realm of a merely biological-phylogenetic or individual-ontogenetic developmental context and, as we already know, any satisfactory account of even individual human mental functioning requires something more.

The overgeneralized functionalist descriptions of how animals adapt or adjust do not adequately encapsulate how different animals adapt or adjust. This shortcoming is only compounded once Piaget applies his horizontal double "a" elaboration of individual mental adaptation to the issue of how human infants come to obtain successively more adequate cognitive structures. Assimilation and accommodation are not sufficient by themselves because they retain the older (somewhat passive and continuous) "adaptive" emphasis on harmony with the environment and are also far too individualistic. Piaget's vertical "stage" theory itself -when considered in the light of all of the subsequent empirical data collected with respect to it- emphasizes that the phylogenetically preadapted psychological structures with which human infants come into the world do not adequately match the richness or complexity of their postnatal individualized-ontogenetic experience with physical objects, nor the wider social and then cultural-historical relations they will encounter and strive to function in later on. Piaget's initial adaptive concepts of assimilation, accommodation, and restored psychological "equilibrium" are simply not up to the task of showing how the transition from one cognitive stage to the next qualitatively higher one comes about. We need a better set of dialectical (two-sided) concepts to provide a vertical as well as a horizontal-individual account of the development of such mental transformations.

In his later works, Piaget (1970 through to 1980) began explicitly differentiating between "nonconstructive" and "constructive" autoregulatory systems. As with the older functionalist tradition (which emphasized adaptation, adjustment, and environmental equilibrium) the problem Piaget eventually ran into was how to distinguish a human being who assimilates and accommodates from anything else. In an effort to resolve this conundrum, he proposes "equilibration" as a unique constructive form of autoregulation to account for that difference. So let's briefly consider this effort before passing any hasty judgments on Piaget's overall account of stage transition. It should be stressed at the outset of this coverage, however, that as was the case with Piaget's use of the double "a" elaboration, etc., the emphasis here will be to draw some order out of his euphonious and somewhat imprecise usage of these analytical categories. Our immediate goal is to evaluate and place emphasis on its potential practical-theoretical utility rather than to provide a straightforward noncommittal historical account as such.

When we consider a household thermostat as part of a mechanical autoregulatory system like a furnace or central air conditioner, it is easily recognized that the function of that hardware is to set limits on the work output of the system so as to maintain the desired internal conditions of the home. Once the system is powered up it can be said to assimilate all the thermal input within a certain specified limited temperature range and to accommodate to such input by producing heat or cold only when the household varies from those preset limits. The use of these descriptive euphemisms is, of course, in this particular case simply a matter of convenience and substitutes for a more detailed technical explanation that would more properly be posed in blunt mechanical terminology.

Piaget's point with respect to this particular sort of autoregulation is that "disequilibrium" in such mechanical or cybernetic feedback systems will always lead back to the preset limit and that in no case does the system vary its preestablished specified output (unless it is turned off or otherwise breaks down). When the thermostat helps perform this autoregulatory household function day after day, the only new occurrence is mechanical wear; overlooking that, it will be doing exactly the same thing today as it did ten years ago. In short, nothing new comes up.

In contrast, the autoregulatory mental activity of a child born on the same day we installed the thermostat will have -some ten years later- become so noticeably different that the whole mode of the child's engagement with the environment can be said to have been transformed qualitatively. The thermostat, suggests Piaget, is part of a "nonconstructive" system but the child's mentality is a shining exemplar of a "constructive" autoregulatory system in this sense.

For Piaget to have raised this distinction in this way was rather clever because it pinpoints the issue under consideration in the latter case as one of psychological development (or at least novelty) rather than mere cyclical change, motion, or even growth over time. In a constructive system such as animal mentality or human cognition, disequilibria do not necessarily lead back to a previous form or equilibrium but often to a better form of engagement with the environment. When mental assimilation and accommodation occur in these sorts of systems, they result in some kind of psychical move forward that is recognized as development and not simply the reinstatement of a stable state.
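For readers who find a concrete (if admittedly anachronistic) illustration helpful, the contrast can be sketched in a few lines of Python. The sketch is mine, not Piaget's: the class names, temperature thresholds, and the little "scheme" dictionary are invented for illustration only.

    # A toy contrast between "nonconstructive" and "constructive" autoregulation.
    # Names, thresholds, and the scheme repertoire are illustrative inventions.

    class Thermostat:
        """Nonconstructive: disequilibrium always leads back to the preset limits."""
        def __init__(self, low=19.0, high=22.0):
            self.low, self.high = low, high

        def regulate(self, temp):
            if temp < self.low:
                return "heat"      # restore the preset equilibrium
            if temp > self.high:
                return "cool"
            return "idle"          # nothing new ever comes up

    class ConstructiveLearner:
        """Constructive: disequilibrium can yield a new scheme, not just a reset."""
        def __init__(self):
            self.schemes = {"grasp": {"rattle"}}   # object kinds each scheme handles

        def encounter(self, scheme, obj):
            known = self.schemes[scheme]
            if obj in known:
                return f"assimilated {obj} to {scheme}"
            # accommodation: the repertoire itself is reorganized
            new_scheme = f"{scheme}-{obj}"
            self.schemes[new_scheme] = known | {obj}
            return f"accommodated: constructed {new_scheme}"

    if __name__ == "__main__":
        t = Thermostat()
        print([t.regulate(x) for x in (17.0, 20.5, 25.0)])   # heat / idle / cool
        child = ConstructiveLearner()
        print(child.encounter("grasp", "rattle"))   # assimilation only
        print(child.encounter("grasp", "ball"))     # accommodation -> new scheme
        print(sorted(child.schemes))                # the repertoire has grown

The thermostat's output space is fixed in advance; the learner's repertoire of schemes is itself revised by the encounter, which is the sense in which "something new comes up."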

So, to further emphasize the qualitative difference between nonconstructive and constructive autoregulatory systems Piaget introduces the so-called principle of equilibration. "The difference is this: the principle of equilibrium is the principle whereby a system seeks harmony with its environment; ... the principle of equilibration, on the other hand, is the principle whereby a system strives for maximal control over the environment" (Flanagan, 1984, p. 136). Within Piaget's account, the equilibration concept is also intended to augment the formerly merely descriptive and somewhat overgeneralized double "a" nature of ontogenetic mental evolution (Piaget, 1971, pp. 1-13; 1980, p. 25). As we will see, however, it too will not quite fill the bill as a theoretical explanation of transition between cognitive stages.

But let's be careful to consider the merits as well as the limitations of this later account of mental transition. We have, on the one hand, an amiable recognition of the disciplinary need to distinguish a developing autoregulatory system from one that does not develop. To this developing constructive autoregulatory system Piaget applies a rough label (equilibrating) by which he implies that the organism or child possesses some sort of ongoing drive for optimal environmental or self control. Given that for Piaget what is being constructed in such developing systems are the mental structures required to guide or regulate effective action in the world, he then suggests that it is the equilibrating drive that constantly motivates the organism or child to create (construct) new mental schemas out of its successively expanding experiences and thus develop still further intellectually.

With regard to the merits of this account, I think we have to acknowledge it is an important contribution on Piaget's part to have recognized that the adaptation-equilibrium approach of the functionalist school was problematic. Appeal to adaptation and harmony with the environment may be necessary but it is not sufficient to account for the complex changes in the mental development of human beings and perhaps of many other animals too. Adaptation itself is a passive and perhaps even a reductive concept which had for many years the same kind of seductive quality as "instinct" once had because wherever we look we do see animals and people adapting. Piaget comes along and says, yes, but that's not all that happens. We need, he suggests, a new concept "equilibration" not equilibrium. And again, giving him full credit, it is sometimes the recognition of such a methodological-assumptive flaw in the discipline which is far more important historically than even having worked out a fully formed and proper solution.

You'd think that if we were only in the business of adaptation (maintaining an environmental equilibrium) we would avoid novel experiences whenever possible -that it would be better to stay where we are than to go out and find new intellectual challenges. In fact, however, children don't run into new challenges accidentally; they are constantly seeking them out. The same goes (as far as I can tell) for a broad range of higher animals ranging from rats through to chimpanzees. There are surely qualitative grades of mentality to be recognized in the manner by which those representative species set out to achieve maximal control over the environment (or self) but it can be argued that Piaget's theoretical point still holds. There is something more going on there than those organisms simply attempting to maintain an equilibrium with their environment.

Well, he's made the recognition and this is historically very important, but has he succeeded in outlining the new concept enough to account sufficiently for the transitional aspects of our mental development? I think it must be granted here that Piaget has not gone much beyond recognizing the problem and describing the process to be explained. The typical criticism leveled against his account is that equilibration is not a good explanatory principle because it simply describes what is going on without actually telling us how it occurs per se. So let's consider this point for a moment. In my opinion, the main shortcoming of the equilibration concept is that it is still too vague or overgeneralized. In particular, one of the rather central explanatory issues that even Piaget's later accounts gloss over is how to distinguish human equilibration from that of animals. In his Biology and Knowledge (1971), for instance, animal intellect is sometimes assigned to the realm of biological or cybernetic autoregulatory systems and sometimes not. As mentioned previously, imprecision or inconsistency on this sort of point can be fatal to the future reception of any psychological system builder's account.

In the Piagetian account, all the new "equilibration" concept really does is help distinguish the two overgeneralized types of autoregulatory systems rather than explain how novel psychological processes or cognitive constructs (respectively) are formed in any particular human being or species for that matter. If taken on its own confined merits then, Piaget's use of equilibration can indeed be legitimately accused of "circularity." It attempts to account for the development of intelligence by citing the mere existence of intelligence itself. When considered in conjunction with all of his other efforts, however, the equilibration concept can be viewed more favorably as referring to something rather important -that it is through active and somewhat species specific mental accommodation that new psychological or cognitive structures are ultimately developed (somehow) and that this is what makes an equilibrating system (wherever it is found) different from a nonconstructive mechanical or cybernetic system like a thermostat on the wall or even a computer for that matter.

What we must not lose sight of after having recognized the legitimacy of these collective criticisms, is that Piaget is on to something that needs to be done. We need something like the equilibration principle to tell us how such mental transformations are brought about in the case of animals and human beings. The developmental process of intellectual expansion and transformation is there to be accounted for in this sort of manner. The human child really does develop intellectually. It is able later to do and understand certain things that it could not do or understand earlier. Moreover, the fact is that children eventually learn intellectual strategies by which they can ask questions or tackle problems that are truly novel not only to them, but after a while, to anyone. Let's consider this merely implied aspect of Piaget's combined adequacy thesis and equilibration concept briefly before winding down our account of him per se.

The development of a given child's basic intellectual skills, interpersonal competencies, as well as the maintenance of their curious nature are surely for a time sufficiently guided, molded, or addressed by family, peers, and formal educational institutions. It is out of this typical context of intellectual development, however, that they may eventually begin posing novel questions or encountering new problems to be solved which require answers or solutions for which they can turn to no other living person. Hopefully by then, however, they will have already learned the problem solving, investigatory, or analytical skills by which those answers or solutions can be discovered. Quite often, those latter-mentioned analytical skills require finding out how others in the past dealt with similar questions or problems. This sort of thing must go on, of course, because we hear of new knowledge and novel or reworked ideas all the time; we are exposed to new technologies that expand our individual and collective intellectual capacity; we are inundated with novel jingoisms, advertising blitzes, entertainment systems, and so forth ad nauseam. All of this indicates that the production of intellectual novelty is a rather fundamental aspect of our historicized human condition and should be accounted for in any satisfactory developmental approach to General psychology.

For a long while now the disciplinary rub of General psychology has been how to achieve or maintain a descriptive and explanatory account of what we already know is going on. If we are going to maintain such a distinction between description and explanation, let's remember that whatever explanation we are looking for depends on accurate description. Piaget seems to offer us a merely "descriptive" account of mental transition, but even if that's all he did, he still provides us with a valuable service which went well beyond that provided by many of his 20th century disciplinary contemporaries or successors. It is simply not the case, however, that the equilibration principle is inherently inadequate; rather, it is not adequately elaborated in detail by Piaget himself. I am highlighting this argumentative point because (among other things) Piaget's collective descriptive efforts provide some cogent hints as to how we might proceed onward toward a truly developmental and thus explanatory approach to studying psychological processes.

What is really interesting to me is that many of the conceptual limitations in Piaget's equilibration principle and overall account of mental stage transition seem to be overcome in the work of figures like Vygotsky and Leontiev (Leontyev). Both of these figures highlight that which is only implied in Piaget's concept of equilibration. They put forward an account of human intellectual development which suggests that the social and cultural-historical context of that intellect not only places external demands on the individual child but also provides the new expansive possibilities for and the mechanisms by which the drawing forward of that child's ontogenetic intellectual development is brought about. In particular, what I'm suggesting is that by carefully combining Piaget's overall account with Vygotsky's "Zone of Proximal Development" concept, and Leontyev's concept of distinctly human "appropriation," we might begin to produce a theory of equilibration (if you wish) that is more adequate and which meets the criticisms which have been brought against it above.

Since these latter works only came to the attention of North American psychologists during the mid-1970s through late-1990s period some elaboration on them is now in order. Along the way, and in the hope of establishing some useful common methodological grounding, we will also make a concerted effort to compare and contrast their approach with the empirical practices of the Life-Span Developmental area of psychological research.

On the overlap between Piaget, Vygotsky, and Neo-Vygotskian accounts

In Vygotsky's Mind in Society (1978), his chapter on the relation between "learning and development" can be read as being concerned directly with the equilibration issue -of how to draw children forward in their intellectual development. What Vygotsky points out there is that children do not develop in an individualized vacuum. They grow up with other children and in a socialized context in which demands by parents, peers, and adults are constantly being placed upon their actions. Furthermore, the wider culture itself provides specific mechanisms that are utilized and successively taken up in different ways by children of different ages.

Here, Vygotsky takes issue with three older views of education: (1) the purely maturational view which suggests that a child's individual level of mental development constitutes a tightly restrictive prerequisite or precondition for efficient learning to take place (e.g., Binet and the so-called Piagetian readiness-to-learn approach which feared premature instruction); (2) the somewhat looser (older) view that learning and mental development are synonymous (as in James' account of education as the mere acquirement of habits of conduct or tendencies toward desirable behavior); and (3) the view of mind as a homogeneous network of generalized rather than specific capabilities (as in Koffka's Gestalt approach or the Classics training tradition where the mind was assumed to be a muscle that, when exercised in any given domain of knowledge, would produce eminently transferable learning elsewhere). In the alternative Vygotsky presents, it is the activity of encountering and engaging in new intellectual tasks that leads the child's understanding and mastery of those tasks. Effective teaching, he suggests, is always ahead of the student's attained mental development. Such learning takes place in a Zone of Proximal Development (ZPD) which Vygotsky defines as the: "distance between the actual developmental level as determined by independent [individual] problem solving and the levels of potential development as determined through problem solving under adult guidance or collaboration with more capable peers" (Vygotsky, 1978, p. 86).

Vygotsky realized that describing a merely individualized sequence or progression of intellectual performance (as done by the mental testing tradition and Piaget) is but a starting point for tackling a deeper, more central, question that verges on explanation. That central question is one of carefully outlining the wider social-cultural context and mechanisms of individual intellectual development itself. In the case of human infants and younger children, such an outline requires us to note the reciprocal social context in which new learning is forthcoming. In the case of older human children and adults such an outline requires us to note the equally reciprocal cultural context in which new learning is forthcoming. He is careful to highlight the point, therefore, that learning is not synonymous with mental development but that properly organized learning results in mental development. By properly organized, he means directed towards (aimed at) the specific form of mental development that is currently in progress rather than at that which has already occurred. He advocates not only a constant guiding of infants and children to do that which at present they are not yet able to do on their own but also implies that the very form of that guidance must change as those children move into adolescence and adulthood.

In his opinion, teaching that is directed toward the actual (achieved) developmental level is ineffective in promoting further horizontal or vertical mental development because it is aimed at an already ended and relatively static product. Effective instruction is always carried out in the zone of insufficiently mastered capacities which lies just ahead of (or above) the "crystallized" knowledge and present capabilities of the child or adult. The guidance of a skillful teacher helps the child or student attend to the relevant aspects of a new task, stay on track, so to speak, toward the successful completion of the required task, and thereby become acquainted with the sorts of competencies which can be used the next time similar tasks are encountered.

In Vygotsky's overall account of learning and development, the ability of the infant, child, or adult to truly understand what they have just done is dragged in well after having performed a new task. He highlights this point early on in the 1978 work as a generalized law of "internalization" -that all novel human capacities start out in the joint interpersonal realm and are only then internalized in the individual intrapersonal realm (p. 57). Accordingly, the Vygotskian concept of the ZPD is a thoroughly dialectical (two-sided, horizontally and vertically mobile) description of the learning process. From the teaching side of this reciprocal relationship it is a process of leading actions on the part of the teacher. From the student's side, the process is one of what might best be called guided participation in an unfamiliar task (see Rogoff, 1990). A good teacher in the elementary schools, for example, is one who is moderately demanding. Such teachers are always pulling their students just a little bit farther along in their mental development than they would be willing to go by themselves. Of course the child may not be able to carry out these intellectually challenging school assignments by themselves but becomes able to do so once help is provided by a more competent peer or an adult.

So, what I'm getting at here again, is that maybe part of the equilibration process should be understood socially. We don't want to limit our investigation to the internal dynamics of what is going on in the mind of the isolated individual child, adolescent, or adult. Rather, given that even a very young child is in a social context at all times, the explanation of the development of these new and expanding mental processes will not be gained by looking inward only but also by looking outward to the larger social and then cultural unit of analysis as well.

During the 1980s and 1990s the message about the "social" unit of ontogenetic intellectual analysis was being broadcast loud and clear by the Neo-Vygotskian movement, but it took some time to work its way into our mainstream introductory textbook coverage of the discipline. For the most part, what introductory coverage of Vygotsky did exist during this era concerned his disagreement with Piaget regarding how to interpret the functional status of egocentric speech. Vygotsky's point was that such self-directed speech is social communication turned inwards. It is utilized by children during their efforts at self-guidance and is not merely a symptom of cognitive immaturity per se but a valuable indication of the leading edge of their current individual mental development.

In any case, the Neo-Vygotskian movement did produce some useful encapsulations and even expansions of the Vygotskian account of ontogenetic intellectual development. For instance, in the following diagram of "progression through the Zone of Proximal Development" (from Gallimore & Tharp, 1990) most of the core ontogenetic Vygotskian concepts are laid out rather nicely. In some ways this generic ZPD diagram is too generic. In other ways it is not generic enough. Let me specify why by stating its descriptive advantages and disadvantages.

First of all, it has the advantage of emphasizing the typically consecutive horizontal (albeit stage-like) pattern of transfer of dominance from "assisted" social-regulation (by teachers, etc.) during the early part of learning to "perform" a new task toward self-regulation (by egocentric speech, internalization) during the latter part of learning -where the task is mastered. To some extent, all that stages I-III above really say is that what children can do today with assistance they can perform independently and competently sometime down the road.

Gallimore & Tharp dutifully mention that during stage I the "amount and kind" of required social assistance depends on the "age of the child and the nature of the task at hand" but their inclusion of the terms "fossilization" and "automatization" (automation) in stage III above -though not necessarily problematic- can tend to be misleading. Put simply, not all attained human learning -in the sense that Vygotsky meant by "crystallized knowledge"- actually becomes automatic or "fossilized". This quibble may seem pedantic but its importance will become clear once we consider the vertical aspects of the account of such human learning activity by Vygotsky and his colleagues. All I'm suggesting here and now, however, is that in order to be generic enough, stage III above should read: Internalization, Crystallization and/or Fossilization (automation).

A second advantage of the diagram is that it attempts to explicitly extend the horizontal aspects of Vygotsky's ontogenetic account slightly to include so-called "recursions" through the ZPD. In the light of what we already know about the lack of longevity of much of our new learning, this makes perfect sense. After all, anything worth doing in the wider human sense of the term is often an uphill slog on muddy ground with many slips backward and downward too. Hence the popular colloquialism: Two steps forward, one step back. The maintenance of excellence in any human endeavor requires persistence, practice, and review. In other words, we must occasionally retrace the ontogenetic path that led us to such mastery. Without these, one soon finds that what one formerly could do one can no longer do. Hence another colloquialism: Use it or lose it. This is the point of the recursive loop depicted above, and of course although a good elementary teacher is one who is willing to repeat an earlier lesson, we each do this for ourselves later on.
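The horizontal progression and its recursive loop can likewise be paraphrased as a short toy walk-through in Python. The stage labels follow the Gallimore & Tharp scheme just described; the numeric "competence" values, gains, and decay rate are invented purely for illustration.

    # Toy walk-through of progression (and recursion) through an ontogenetic ZPD.
    # Stage labels follow the Gallimore & Tharp scheme described in the text;
    # the competence numbers, gains, and decay rate are illustrative inventions.

    def stage(competence):
        if competence < 0.4:
            return "I: performance assisted by more capable others"
        if competence < 0.8:
            return "II: self-assistance (e.g., egocentric/private speech)"
        return "III: internalized, crystallized, and/or automatized"

    def practice(competence, assisted):
        gain = 0.25 if assisted else 0.15   # assistance pulls the learner ahead
        return min(1.0, competence + gain)

    def neglect(competence, decay=0.3):
        # "use it or lose it": a lapse in practice forces a recursion through the ZPD
        return max(0.0, competence - decay)

    if __name__ == "__main__":
        c = 0.1
        for session in range(4):
            print(f"session {session}: {c:.2f} -> {stage(c)}")
            c = practice(c, assisted=(c < 0.4))
        c = neglect(neglect(c))              # a long lapse
        print(f"after lapse: {c:.2f} -> {stage(c)}  (recursion needed)")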

Finally, there is one rather significant disadvantage to consider. This diagram lacks any vertical dimension at all. It is simply a generic depiction of an ontogenetic ZPD which in its particulars is most applicable to the internalization of repetitive and easily automated practical actions (like shooting basketball hoops, playing a musical scale, or, to use one of their examples, assembling puzzle pieces). It is certainly not a depiction of the one and only ZPD. To be clear on this point is vital because Vygotsky's work was very much concerned with outlining the upwardly mobile aspects of human intellectual development.

Mediational form | Realm of relations
Symbolic use of language (externalized writing systems, books, libraries, etc.) used for the communication of meanings and recording of historical events both within and beyond a given generation | Societal, Cultural, and Historical relations
Sign usage (use and internalization of verbal sentences or sign language) utilized instrumentally within a given generation for communication and fulfillment of needs, desires, and meanings | Social relations and communication with others
Referential pointing (preverbal instrumental communication) used for the indication and fulfillment of immediate needs or desires | Natural (biological) relations and communication

For example, according to Vygotsky, referential pointing in the late infant is indicative of its movement out of the realm of merely biological ("natural") relations and into the higher and wider "social" (interpersonal) realm of relations. Similarly, the successive adoption of verbal "signs" (e.g., telegraphic word use, sentence structure) and then "symbols" (e.g., alphabet, and written communication) by preschool and school-aged children is indicative of their movement from the realm of merely social to truly cultural relations. Furthermore, above and beyond all of these, there are the historically specific processes of ongoing lifelong internalization or utilization of externally mediated forms of human knowledge (written personal notes, reference books, archival or filmed documentary evidence, alphanumeric databases, etc.) to be considered and drawn into our psychological understanding of distinctively human ZPDs.

In particular, these latter (higher) "externalized" forms of human knowledge and meaning are specifically cultural as well as historically relative. In literate cultures it is from late adolescence onward through adulthood and old age that they effectively take the place of former merely "social" recursions through the ZPD (which characterize the relearning efforts of the child, early adolescent, or illiterate adult). Vygotsky & Luria (1930/1993) made significant contributions to our understanding of these "culturally mediated" forms of linguistic human intellect by portraying them as cross-historical rather than merely cross-culturally relative. They emphasized too that the same comparative hierarchical analysis could be applied to "numerical operations" within and between cultures as well as across the history of a given culture. Thus "perceptual numerical complexes" (visual estimates) which are highly developed and which predominate in preliterate cultures, are successively replaced by "tally systems" (knot tying, stick notching), and then formalized into written "number systems." These fall into the same sort of natural relations, social relations, and societal/cultural relations hierarchy of functionality respectively. It was Leontyev (1981), however, who highlights the distinctiveness of the higher societally mediated manifestations of such "exteriorized" processes from the comparatively transitional "interiorization" of social relations -which exist in their rudimentary form in ape groups as well as in the interpersonal relations of preliterate children- by way of giving the successive uptake and utilization of them their own descriptive terminological designation: "appropriation" of culture.

Don't each of these vertically transformative aspects of human mental development require their own specific form of social or societal assistance to occur -i.e., their own form of ZPD? I think they do. So, we will most certainly return to this issue again. For now let's simply acknowledge that like Piaget, the Neo-Vygotskian movement surely made important argumentative and descriptive contributions to the formerly individualized ontogenetic sphere of the discipline. It is vital to note too that in the overall Vygotskian account there is an explicit recognition that whenever one is investigating human mentality one must consider the intersection between three lines of mental development (phylogenetic, ontogenetic, and socio-historical). The value of adopting this successively wider unit of psychological analysis never really becomes apparent until -as we have encountered many times above- the discipline reaches descriptive or explanatory roadblocks after having not done so.

In this regard, although the interpersonal social relations aspect of Vygotsky's account is beginning to be picked up by our introductory psychology texts (which occasionally refer to considering a child's ZPD as a viable adjunct to merely individualized intellectual assessment), they still seem to have missed a more central feature of Vygotsky's theoretical account: During the course of each child's ontogenetic development their psychological processes are transformed twice over, first by social relations and then by cultural-historical (societal) relations. So, for the sake of clarity on this important methodological point let's look a little more closely at what Vygotsky and his colleagues had to say about the intersection between these three vertically situated lines of intellectual development.

The key to understanding what Vygotsky and his colleagues were on about lies not only in appreciating the difference between the old S-R disciplinary scheme and the "S-X-R" account he was advocating (where X stands for "mediating" signs or symbols), but also in how this account differs in turn from the S-O-R scheme we are all so very used to using. Stated plainly, when we consider the anthropological history of human beings we can't help but notice that the adoption of successive technological advances in the form and kinds of material tools utilized (mechanical, steam, electronic, etc.) has transformed the labor relations of such societies. That's why anthropologists distinguish between the types of culture they are investigating (hunter-gatherer, agricultural, preindustrial, industrial, postindustrial, and so-called knowledge economies). Vygotsky realized that as these changes in the structure of "labor activity" occurred, so too did changes in the predominant structure and pattern of the development of human intellect.

According to Vygotsky (1978), in the case of "modernized" human beings, what starts out as a quantitative expansion of "unmediated natural" capacities in the human infant somehow becomes qualitatively transformed (by social and then cultural-historical forces) into new forms of so-called "mediated" intellectual engagement with the world. This S-X-R account is very similar to the S-O-R we all know and love but it has the advantage of specifying a little more carefully the difference between us and even our closest animal cousins. Knowing one's organism as Woodworth once put it, is in this case stepped up a notch to knowing the ontogenetic, social, and cultural-historical situation of one's human subjects. After all, isn't that what the vast majority of us are actually interested in? Thus Vygotsky argued for the existence of unmediated natural processes which are transformed both socially and then culturally across the life-course of individual human beings. His approach to developmental intellectual analysis is concerned with pinpointing the critical turning points from one mode of psychological functioning to the next vertically higher one.

The term mediation simply means to serve as go-between and it applies equally to both material tools (hammers, pencils, etc.) and to the intellectual tools (such as signs and symbols) which serve as psychological go-betweens and which are likewise provided successively by the social or societal relations of the developing human being. One might quite rightly ask, therefore, what the difference between a material tool and a sign or a symbol might be. To this question Vygotsky provides a ready answer. He distinguishes between material tools and signs according to the direction of their functional utility. While material tools are directed outward and are utilized as means of control over environment, signs or symbols are directed inward and are utilized as vehicles (means) of control over oneself.

The overlap between Vygotsky's account of mediated human activity and the Piagetian concept of equilibration (control over self) finally becomes manifest in this interesting distinction. Whereas a material tool is something that goes between us and the object upon which we are working, a sign or symbol is something which goes between the object of our interest and our understanding of it. Material tools function as mediating vehicles by which we attain control over nature. Signs and symbols function as mediating psychological vehicles by which we gain control over ourselves.
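Purely by way of illustration (this is my toy paraphrase, not anything Vygotsky himself set down in code-like form), the S-R versus S-X-R contrast might be sketched as follows; the knotted-handkerchief reminder and the function names are supplied for the example.

    # Toy contrast between an unmediated S-R mapping and a sign-mediated S-X-R
    # response. The reminder example and the dictionaries are invented illustrations.

    def s_r(stimulus, reflexes):
        """Direct stimulus-response lookup: no mediating link."""
        return reflexes.get(stimulus, "no response")

    def s_x_r(stimulus, sign_system, actions):
        """The response is routed through a sign (X) that stands between
        the stimulus and the act, and is itself socially provided."""
        sign = sign_system.get(stimulus)      # e.g., a word, a knot, a written note
        return actions.get(sign, "no response")

    if __name__ == "__main__":
        reflexes = {"loud noise": "startle"}
        print(s_r("loud noise", reflexes))

        # An external reminder (a knot tied in a handkerchief) mediates remembering.
        sign_system = {"knot in handkerchief": "buy bread"}
        actions = {"buy bread": "go to the bakery"}
        print(s_x_r("knot in handkerchief", sign_system, actions))

The point of the sketch is only that in the second case the decisive link is the sign directed at oneself, not the stimulus as such.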

Vygotsky extended the dialectical materialist concept of mediated tool use (its transformative role in the human-environment relationship) to the use of signs and symbol systems. Like material tools, signs and symbol systems are created by societies over the course of human history and change the form of that society. The internalization of these brings about observable transformative shifts in the early and later forms of individual mental development. Got it? Good. Now, let's consider the disciplinary implications of accepting such distinctions.

There are important conceptual and empirical method implications to be drawn out of Vygotsky's account. Before moving on to the latter in its own subsection, we should first finish up with the former. In this regard, the "Activity Theory" work of A.N. Leontyev is of specific interest because in having expanded the Vygotskian style of analysis both downward (to the phylogenetic "origins of the psyche") and upwards (to "exteriorized" forms of cultural-historical "appropriation") he reveals many of the apparent differences between Piaget and Vygotsky to be mere matters of emphasis or nuance instead of unresolvable dichotomies.

Conceptual Implications: Leontyev's Activity Theory

Like Vygotsky, Leontyev explicitly acknowledges that activity (doing) leads reflection or understanding in all three of the phylogenetic, social, and cultural-historical lines of psychological development. His approach has therefore come to be called Activity Theory. Vygotsky's work introduced almost all the issues to be raised in this activity theory approach but it was Leontyev who created the viable and comprehensive vocabulary that had been missing in the discipline for so very long.

In the sense that his comparative psychological efforts were aimed at elaborating both what we share with various other representative species and what is the distinctive province of human intellect, his approach should be understood as not only an elaboration of the methodology of other Soviet psychological figures (such as Vygotsky and Luria) but also an extension of emergent evolutionary themes found in the works of earlier "Western" figures such as George Romanes and C. Lloyd Morgan. Romanes and Morgan (working within the early years of comparative psychological analysis), encountered considerable philosophical and empirical limitations when attempting to both name and elaborate the exact nature of the serial mental evolutionary transmutation of the levels they proposed (see Tolman, 1987a). In contrast, Leontiev's direct historical connection with the Vygotskian tradition and his knowledge of the subsequent cumulative comparative psychology record of empirical results allowed him to do so in great detail.

In reading through his Problems of the Development of the Mind (1959/1981) one starts to get a glimmering of what a general theory of mental evolution (comparative in its explanatory power with Darwin's theory of organic evolution) might look like. Not only does his account provide a developmental stage theory of expanding qualitatively different capacities but it indicates what those capacities are directed at, and still more importantly it specifies the respective means of transition between those stages.

Chapter 1 "The problem of the origin of sensation" is intended to be read in conjunction with and as a background to his "An outline of the evolution of the psyche" which appeared together with it in the 1981 English translation. The first section outlines why "the problem" of the origin of psyche (locating the lower limit of mind) has been a persistent methodological issue in the history of philosophical, physiological, and psychological research. A dualistic (subject vs. object and/or mind vs. matter) approach to the study of psyche, he argues, has been shared by almost all post-Cartesian psychology (p. 17). Instead of persisting with any of these former one-sidedness "comparisons of subjective and objective data," Leontyev advocates adopting a monistic approach to psyche which recognizes the "unity of the subject's mind and activity" and empirically investigates "their internal reciprocal connections and transformations" (p. 26; see also Tolman & Robinson, 1997).

By way of building an ontological foundation for his monistic account of mind and matter, Leontyev starts with the typical dialectical materialist emphasis on the "activeness" of all physical substance (matter in general) itself. He then, however, utilizes this notion to further differentiate the active properties of inorganic chemical reactions (in which the elemental participants are obliterated) from those of living matter, in which such reciprocal relations -a.k.a. interactions- with the environment lead not to obliteration but rather to the survival of the organism. Here, the long-awaited answer to Johannes Müller's old vitalism principle is spelled out in some refreshing detail (pp. 27-32).

The second section, called "Hypothesis", sets out Leontyev's fundamental methodological distinction between a pre-psychical stage called "simple irritability" (in cells or in the simplest viable organisms, p. 38), and a subsequent evolutionary stage of "sensitivity proper" (p. 42) in various other lower organisms. These first two "sections" of Chapter 1 are followed by a rather detailed presentation of the author's experimental investigations into the sensory psychical level.

This basic distinction between simple cellular "irritability" and "sensory psyche" provides the theoretical launching point for his wide-sweeping Chapter 2 "An outline of the evolution of the psyche" in which the various forms of such active vital relations (organism-environment reciprocity) are traced on up through the phylogenetic scale by utilizing data from representative species collected from varied traditions and disciplines of research.

Here, after considering the phylogenetic emergence of irritability (chemical processes on cell membranes) and "sensory psyche" (in which the properties of objects but not objects per se are reflected), Leontyev moves on to the level of "perceptual psyche" which entails a perception of objects (p. 182). The psyche of most mammals remains at this stage. But there is one other higher stage of animal psyche called "animal intellect." He then goes on to outline its nature and the implications for our understanding of the learning process of apes (see also C.W. Tolman, 1987b for a contextualized summary of that outline; and Tolman, 1988b for more on the basic vocabulary of Activity Theory).

Further elaboration of his combined phylogenetic, ontogenetic, and socio-historical approach to comparative psychology would require a course of its own. It can be said, however, that in this part of the work, Leontyev (1981) makes many important distinctions, not the least of which are: (1) a "general law of animal psyche" which states that animal actions always remain within the realm of biological and/or bio-social relations (p. 197); and (2) a distinction between animal adaptation and human appropriation. The latter entails the individual human's use of and/or reproduction of historically provided material and intellectual tools of thought (p. 296).

Stage | Governing Aspect of Reality | Structural Units
Human Intellect | Concrete and Abstract Relations, Meaning (personal, social, and societal) | Operations, actions, activity
Animal Intellect | Concrete Relations between objects (biological sense, social sense) | Operations, Individual Actions; Leading and Joint (social) Actions
Perceptive Psyche | Objects and conditions | Operations; Individual Actions (with respect to objects in the environment -including other organisms)
Sensory Psyche | Properties of objects | Operations (with respect to aspects of the environment)
Irritability | Conditions (e.g., water salinity or acidity; light level; cell polarization or permeability) | Assimilation/Metabolism
Physico-chemical | Physical, chemical, and protein gradients | Activeness and reactiveness of matter

 

The manner in which the "structural unit" aspect of his activity theory analysis applies to human beings is laid out a little more succinctly in one of Leontyev's summary articles (Leontiev, 1979). Here, he starts by criticizing the so-called "postulate of immediacy" (pp. 42-46) that was indicative of early 20th century S-R accounts, and which, as we have already seen, R.S. Woodworth struggled so long and hard to find a way out of. Leontyev's means of escape falls squarely within the Vygotskian "mediated" (S-X-R) approach without actually citing it per se. All the same, Leontyev makes it clear from the outset that he is rejecting both past merely introspective and behaviorist descriptions of psychological subject matter and proposing the category of objective "activity" as the key to overcoming those disciplinary difficulties (pp. 46-69).

Let's consider this point for a moment. The first two traditional definitions of psychological subject matter were mind (which concentrated upon the inside) and behavior (the outside), so that most texts define it as the science of behavior and/or mind. There are, of course, all kinds of reasons why a science built upon the mere appeal to observable behavior (to the exclusion of so-called mentalistic topics) would be inappropriate, and given these limitations it is no wonder that Skinner (1971) would eventually suggest that freedom and dignity are superfluous concepts. By contrast, if we switch to some sort of concept of action and/or activity, as was indeed tried out near the end of the American functionalist movement (by which they meant the "behavior of an organism with respect to something"), then these intentional-mental aspects of our subject matter seem once again to be within the "province" of psychology. Actions -as even the functional psychologists knew- are always inherently object or goal oriented. They are teleological in the best possible sense of the term. An appeal to them as subject matter seems like a subtle shift away from the stark behaviorist or the ontologically unequivocal operationist position, but it is really quite profound. The common allure for Woodworth, Angell, Carr, and Leontyev of action or activity as subject matter is that it includes the possibility of a contextualized, functional, and object oriented (objective) concept of mind. You don't discard mind, but rather redefine its scope and contextualize it so to speak.

The advantage of adopting action or activity as a unit of analysis for psychology is that these units embrace both subject and object. This is a means of retaining within our analysis the concrete unity of subject and object which had been lost (or at least confounded) by the former merely mind or behavior definitions of subject matter. Furthermore, once one does adopt this more concrete monistic unit of analysis, it becomes clear that the myriad of "eclectic" dualistic mixes that have been attempted (e.g., behavior plus mental states or cognition) are merely disciplinary holdovers of the abstractness of these former partial definitions of psychological subject matter.

Leontyev's activity analysis is an attempt to reunite the inner and outer relations of our subject matter; or to state it another way, it is an attempt to "demystify" the mind (1979, p. 53). In its particulars, the Activity Theory approach does this by indicating the vertically downward and upward transformations between its proposed structural analytical units -i.e., by indicating the typical functional transitions between action and operation, as well as between action and activities too (see the accompanying diagram).

Although the specific means of vertical transition are not indicated in this particular diagram, it should be noted -for now- that each of these somehow takes place with respect to the environmental conditions, the goals of the organism, and the motives of adult human mentality, respectively. This provisional point can be readily appreciated by simply reading the following three quotations from the 1979 paper and carefully noting how they relate to the accompanying summary diagram.

"Activity [in its generic sense] is the nonadditive, molar unit of life for the material, corporeal subject. In a narrower sense (on the psychological level) it is the unit of life that is mediated by mental reflection. The real function of this unit is to orient the subject in the world of objects. In other words, activity is not a reaction or aggregate of [S-R] reactions, but a system with its own structure, its own internal transformations, and its own development" (Leontiev, 1979, p. 46).

"[In] the general flow of activity that makes up the higher, psychologically mediated aspects of human life, our analysis distinguishes, first, separate (particular) activities, using their energizing motives as the criterion. Second, we distinguish actions - the processes subordinated to conscious goals. Finally, we distinguish the operation, which depends directly on the conditions under which a concrete goal is attained" (Leontiev, 1979, p. 65; emphasis added).

"Thus, systematic analysis of human activity is also, of necessity, analysis by levels. It is precisely such an analysis that allows us to overcome the opposition of social, psychological, and physiological phenomena, and the reduction of one to another" (Leontiev, 1979, p. 69).

Since Leontyev systematically outlines the vertical relationships between these structural aspects of an Activity approach to psychological subject matter, he provides that which was lacking during the latter phase of the functional psychology movement and thereby highlights and revives a valuable methodological underpinning of most of the productive empirical, analytical and theoretical work that has been done ever since that time. In short, by way of distinguishing between the various forms of activeness, operations, actions, and human activity (proper), Leontyev has outlined the general qualitative differences between sensory or perceptual adaptation to environmental conditions, individual adjustment to biological or socially significant relations, and distinctly human appropriation of cultural-historical tools of thought. The crucial quote regarding this latter capacity from the 1981 work runs as follows:

"From its very birth a child is surrounded by the objective world created by people, namely everyday objects, clothes, very simple instruments, and language and the notions, concepts, and ideals reflected in language. A child even encounters natural phenomena in conditions created by men; clothing protects it from exposure and artificial light dispels the gloom of night. The child, it can be said, begins its psychic development in a human world.

Does the child's development, however, proceed as a process of its adaptation to this world? It does not; in spite of the widely held opinion to the contrary, the concept of adaptation by no means expresses the essential in child's psychic development. The child does not adapt itself to the world of human objects and phenomena around it, but makes it its own, i.e., appropriates it.

The difference between adaptation in the sense that the term is used in regard to animals, and appropriation, is as follows: biological adaptation is change of the subject's species properties and capacities and of its congenital behaviour caused by the requirements of the environment. Appropriation is another matter. It is a process that has as its end result the individual's reproduction of historically formed human properties, capacities, and modes of behaviour. In other words it is a process through which what is achieved in animals by the action of heredity, namely the transmission of advances in the species' development to the individual, takes place in the child.

Let us take a simple example. In the world around it the child comes up against the existence of language, which is an objective product of the activity of preceding human generations. In the course of its development the child makes this its language. And this means that such specifically human capacities and function are molded in it as the capacity to understand speech and to talk, and the functions of hearing speech and articulation" (Leontyev, 1981, pp. 422-423).

Even at this very moment each of you is engaged in this sort of self-improving process of distinctively human appropriation. That is, whether reading or listening to this account of the discipline's history, you are hoping to come away from it with a set of guidelines or strategies that (in the Piagetian sense) are more adequate to the world and which provide you with a greater competence in dealing with the kinds of methodological issues that are being raised herein. Some of these will already be familiar, others quite novel; some may seem inane, others profound; some will be superfluous to your personal interests, others rather central; and yes, I'll just admit this openly, some will appear dated while others remain applicable.

Having recognized this latter specific point about the overlap between appropriation and the so-called equilibration principle, we can finally turn to the more general task of reconnecting the Leontyev narrative with the respective Vygotskian (ZPD) and Piagetian (adequacy thesis) accounts of education. This can be done quite easily by way of considering a few "simple examples" of our own in which both the vertical transitions taking place and their means of transition become manifest. I am supposing here that it is usually easier and more succinct to go with what you know, so we will utilize two of the examples mentioned earlier in our initial account of the Neo-Vygotskian ZPD diagram (above) and expand upon that analysis somewhat to drive the newer points made therein home a little more firmly.

In order to facilitate this combined sort of analysis, however, one must first become acquainted with the following diagrammatic encapsulation of the respective placement of the terms utilized by Piaget, Vygotsky, and Leontyev. Note, for instance, that the Piagetian terms "accommodation, adaptation, adjustment, and social cooperation" are all mentioned but are also placed in their proper vertically subordinate location to "Societal Appropriation" (or equilibration if you wish).

Transformative Levels of Animal and Human Mentality. The left panel, called STRUCTURAL/ONTOLOGICAL ASPECTS (a.k.a. "What" is being done), describes various Levels of Mentality and Levels of Learning (a.k.a. means of environmental reciprocity). The right panel, called FUNCTIONAL/ACTIVITY ASPECTS (a.k.a. "How" and "Why" it is being done), covers the "Highest functional attainments" (including their *Means of Transformation) as well as the various Motivational Levels needed to explain the "Why" of what is being done. The vertical aspect is to be read as a description of the typical pattern of transformations from lower to higher levels of mentality. The overall pattern being depicted, however, is one of an upwardly mobile spiral of capacities expanding in scope to include individual, social, and then societal-cultural realms of motivational "drive, need, and meaning." These transitions are not strictly upwardly linear (additive) but are vertically and horizontally integrated as well.

Relatedly, and most importantly, note that the specific "Means of Transformation" between Leontyev's "Individual Actions, Joint Actions, and Joint Activity" are all indicated as being particular kinds of ZPDs (Zones of Proximal Development). The intent of making these latter ZPD designations explicit is to address one of the main Piagetian critiques of the Neo-Vygotskian tradition, which (as the Piagetians quite rightly point out) has been slow to specify what sorts of "interpsychological assistance" are required during different phases of the learning or educational process (see DeVries, 2000, for a particularly pointed instance of this critique).

These "ZPD" designations have been expanded vertically downward beyond Vygotsky's original solely human (societal-cultural) usage to better recognize the important role of "guiding or leading actions" in the case of animal caregivers with their young, as well as the similar role of "joint actions" among older and younger members of a merely social group -i.e., be that group animal or human or both in terms of its membership (see also M. Cole, 1996, for someone else who allows for such animal and cross-species ZPDs). These different sorts of ZPDs (respectively designated as "guiding actions, leading actions, and leading -or teaching- activity") constitute the long sought-after particular forms of interpsychological means of transition from one vertically situated "level of learning" to the next higher one (say from "guided orientation" to "individual adjustment" as well as from that to "social cooperation" on up to "societal appropriation" with only the latter being the place where cultural-historically contextualized educational forms of "joint activity" -teaching and leading in the specifically human form- takes over.

The recognition of cross-species ZPDs goes a long way in accounting for not only how Dr. Loh Seng Tsai (1963) was able to teach his animal subjects to act jointly in order to obtain a food reward, but also why it was so very hard to teach these "natural enemies" to do so. As shown, the cat and rat must press their respective foot pedals simultaneously to raise a separating screen and thereby gain joint access to the food. The accomplishment of this unusual social ZPD between the "rat-killing" cat and cat-shy rat took some 700 trials to become operationalized. Quite obviously it is within-species social ZPDs that are much more common and vital to the survival of each of these animals in normal naturalistic settings. Abnormal settings such as Tsai's, by contrast, require each organism to adjust and adapt its respective repertoire of species-specific actions according to the novel environmental conditions set up in the experimenter's laboratory.

Although the naturalistic within-species social cooperative relations characteristic of "higher animal intellect" (e.g., in apes) have been recognized for some time, their exact similarities with and differences from the sort of societal appropriation characteristic of "human intellect" remained a rather contentious issue throughout the latter half of the 20th century. The proclivity of experimenters to reduce the higher to the lower or to elevate the lower to the higher remains a prevalent methodological shortcoming of General psychology right up to the present. So, we may want to keep this methodological point in mind as we work through our selected examples outlining the respective proper placement of the terms utilized by Piaget, Vygotsky, and Leontyev.

In the shooting basketball hoops example, that which starts out as an individual practical "action" (requiring intentional "goal-oriented" conscious attention) can with practice become automated into an internalized "operation" or set of operations which will be utilized depending upon the "conditions" for carrying out subsequent shots (whether the hoop is to be shot while standing still or on the move, etc.). Above and beyond this confined individualized assessment of the act itself, however, lies the wider interpersonal "activity" context within which that act is practiced, mastered, and operationalized. Here there is surely a significant difference in terms of the "motives" at work in the various sorts of activities which function as a context for the learning of such an act -such as: minimal compliance during a mandatory gymnasium class activity; the seeking of friendly camaraderie or bragging rights during a recreational game; all-out competitive besting during formally supervised tryouts for the school team; or the somewhat more sophisticated school spirit motives involved in preparing for an upcoming big game against an opposing school side.

The list regarding possible activity settings for the performance and progress of this simple "individual act" into a set of generalizable operations could go on and on. I do hope that you can appreciate even from this somewhat contrived example, however, that the activity theory approach is not merely a recognition of the importance of drawing in motives for our account of human activity but is, in fact, a way to do that which Woodworth (1930b) said he wanted to do: "bring [psychological] motives right down into the midst of [observable] performance instead of leaving them to float in a [detached] transcendental sphere." For one thing, knowing the motives at work in a given activity context for the action under consideration is highly instructive in terms of deciding what sorts of interpersonal assistance, if any, are needed.

Somewhat relatedly, since one could certainly train up an ape to shoot hoops (either for reward or even as a matter of mere social mimicry), this particular example raises the issue of appreciating the difference between social cooperation carried out by way of "joint action" (on the one hand) and societal appropriation which is carried out by way of "joint activity" on the other. We will return to this issue below but you will probably already be beginning to recognize that the key to zeroing in on that difference lies in pinpointing the forms of ZPDs that function as the means of transition both within and between those levels of mentality.

The ape with a basketball is merely "adjusting" its natural species-specific capabilities for "action toward a reflected goal" with respect to the "conditional" requirements of a new artificial object. In this instance, its preadaptive capacity to carry out a one-armed underhand throw of a natural object is transferred over and adjusted to a new, though still "individually significant" reflected goal (to shoot the hoop using a two-hand underhand throw). These sorts of combined practical and reflective adjustments are always carried out by the ape within the context of "individual or joint action" and for the purposes of satisfying its current biological and social "needs" (such as receiving a food reward or praise from the handler respectively). It is the chimp's own immediate end of satisfying those needs that dictates the "form" (content, structure, and upper limits) of any cross-species ZPD "assistance," restricting it to ones of mere reward and leading actions within the "social" realm (e.g., tactile or vocal praise, and manual hand correction or overall throw demonstration).

In contrast, even the young child with a basketball is voluntarily (though initially somewhat unreflectively) participating in the context of a wider and higher realm of societal "meaning" by way of adhering to the confines of rule-dominated play (e.g., using one's hands instead of feet to move that particular ball around the court). This is a form of specifically human activity which, like the social realm of mentality, expands horizontally in breadth (e.g., in terms of the shooter's performance accuracy) but also vertically in terms of the player's "appropriation" of successively higher-order rule-oriented aspects of the game being played (e.g., the current score of the home team versus the visitors, as well as the past record of wins and losses to that opposing team, etc.). In contrast to the child, the ape (no matter how accurate its shot may eventually become) will never share in these higher-order "societal" aspects of the game. It is this distinctively human growth in and shift toward reflective awareness of such societal aspects of their current collective activity that provides new possibilities for the structure of the ZPD used by the coach to help lead and "motivate" the team toward victory (e.g., by providing flexible offensive strategies to help break down the opposing team's standard defensive play, etc.).

Our second example is a little more instructive in this latter social versus societal contrast because, after all, a musical instrument (violin for instance) is a distinctive product of human labor activity. In the confined individualized assessment of the external practical act of playing a musical scale on such a stringed instrument, we can indeed follow the young child's progress through an interpersonal ZPD: from playing that scale (say in the key of C) under the guidance of a teacher and as an initially attention demanding action through to its automated form as an operation -which in turn allows the child to later play a whole musical piece in that particular key as a new wider attention demanding action. Initially, many corrections of finger placement and bow elevation are required on the part of the teacher but these basic mechanical aspects of the task at hand eventually become second nature to the child. This allows their "joint" attention to shift to the musical tonality of the scale being played and then later to the piece, which has now become the new attention demanding action for the child.

But having reached this point in the student's musical training there is once again certainly more to consider than was present in the initial somewhat social interpersonal ZPD described above. In other words, if we are to understand the child's further progress toward musicianship per se, the successive interpersonal means by which they are helped to begin reading musical notation -not merely as "signs" indicating a given mechanical finger placement and/or bowing technique but also as a higher-order "symbolic" language which communicates nuanced meaning- must be drawn into our account of their musical development. Surely, then, other more vertically oriented ZPD distinctions must be made depending upon the particular societal context within which such higher-order participation in musical practice is being carried out.

Relatedly, since this latter aspect of our example is one which outstrips the capacity of even the most highly trained ape, we are likewise getting a little closer to the well-rounded kind of expanding vertical analysis we require to understand the development of such higher capabilities -an analysis that emphasizes qualitative shifts in the form and vertically situated local "adequacy" of such individualized practice to successively wider envelopes of capacity for performance, virtuosity, and historical appreciation of music in the cultural-historical sense. Do not each of these wider analytical envelopes constitute a new set of expanding "activity contexts" which can be differentiated according to the "motives" they are respectively directed at fulfilling? I think they do. Does not even the young child's pleasure from participating in musical practice spring from their "individual or personal sense" of accomplishment in having done so, or likewise a little later from the more "social" need (or desire) to do mommy and daddy proud at the class concert? Still later, the motives for continued participation may be an upcoming audition for the school band; or beyond all of these, the more distinctively societal motive of obtaining a musical scholarship for higher education, etc.

By combining the concise vocabulary of activity theory with such straightforward concrete Vygotskian examples as well as with the Piagetian concept of adequacy, it is possible to come up with the makings of what might be called a generalized "form law of ZPD" which can be stated as follows: "As the capacity of the student to reflect what is going on within diverse human activity contexts changes qualitatively (from a mere matter of individual/social significance upward toward personal/historical meaning), so too does the most adequate form of assistance by the mentor (from leading/joint actions toward teaching/joint activity proper); and it is these successive reciprocal (two-sided) interpsychological relations that bring about the student's own transformative intrapsychological shift (from being dominated by the directional pull of biological/social needs toward control over the selection of motives in the proper sense of those words). Consequently, the higher this dialectical dance of assistance and intellectual effort progresses vertically (with regard to that orderly needs/motive structure) the more the transfer of control of recursions through the ontogenetic ZPD is given over from the master to the apprentice."

Although this generalized form law of ZPD is applicable to our analysis of learning activity in preliterate cultures, it is especially applicable to that of literate, modern, and postmodern society. For instance, with regard to our music learning activity example, the "consequential aspect" of the form law is applicable because audiovisual recordings of the style and technique of former masters (on film, record, tape, or digital media) can be utilized by an ambitious student to achieve a degree of virtuosity that outstrips even the capacity of their own teacher. Here too, for those wishing to pursue a career in music, an appreciation not only of music history but also of the current trends, technological advances, and occupational or business prospects for such a career comes into the analysis and ultimate adequacy of such wider activity.

The above account has been put forward as a set of suggestions as to how we might proceed to produce a more adequate concept of animal intellect and distinctly human equilibration. I'd hazard a guess that each of you can utilize what has already been said to, for instance, parse out the respective levels of mentality and the means of transition being shown in any nature program you wish to analyze. Trying this out can be quite revealing when one compares the relative inadequacy and looseness of the means of analysis being utilized either by the show's narrator, by those interviewed during the course of such a program, or -as is most often the case- by both sorts of participants.

There are, to be sure, occasional valuable insights being shared as well as advances that have been made in this analytical respect during more recent years. Sue Blackmore's (1999) popularization of the concept of "Memetic" cultural evolution (the passing down of social gestures and societal technologies) and the more ontogenetically specialized work of Whiten concerning the emergence (around age four) of "the attribution of false belief" (Byrne & Whiten, 1988; Whiten, 1991; Whiten & Byrne, 1997) are two notable examples. But so far as I have been able to ascertain, nobody has yet explicitly adopted this combined systematic account of the transformative levels of animal and human mentality, and that means that there is still a whole lot of work to be done by those interested in pursuing such a so-called metatheoretical career path. I want to leave you with the impression that there might be rather firm theoretical and empirical grounds for optimism for further disciplinary progress in this regard.

Consider how far we have already come from the stark prospects of the behaviorist position covered at the outset of this section. For instance, we have been arguing all along for the possibility of mental processes and the need to study them. Like Piaget, most of us now find ourselves attempting to adhere to a rational-methodological middle ground between any extreme forms of empiricism and nativism. The cognitive changes we humans undergo developmentally speaking surely involve not only the content of our thoughts but the mode of thinking as well. These new modes of thought are acquired in an environmentally sensitive way but are not caused in any deterministic way by the immediate effect of environmental contingency. We no longer accept the position that the content of our minds is just thrown at us in a passive fashion (simply received, registered and that's all), nor do we accept the position that we are prepared at birth (by our genes) to carry out all of the sorts of mental activity that are involved in adult human relations with the world. There has to be an orderly and extended period of development of these qualitatively distinct intellectual competencies. Furthermore, although human cognition is empirical in the sense of being sensitive or responsive to the environment, it is responsive in particularly human ways. Accordingly, some sort of weak preformationist position (in the sense of phylogenetic preadaptation) is in order. You simply can't take a kitten, give it a specific set of ongoing individual or social experience and have it eventually come out in the same place as your ten-year-old child. Even the smartest chimpanzees don't come out in the same place; only human children going through this kind of culturally embedded apprenticeship in thinking will develop into competent, psychologically well-rounded adults.

So, what I'm getting at is that if we combine the concepts of equilibration, ZPD, appropriation and perhaps a few other concepts as well -which are all obviously zeroing in on the same issues- then we might begin to gain a glimpse of a truly generalized account of human mental development in all its varied forms. I'm urging the conclusion that we could work with this sort of approach to mental development -one that utilizes a two-sided theory of intellectual continuity and novelty, and which understands human teaching activity as a process of assisting children to engage in successively wider scopes of social and societal relations. This is the dialectical approach to mental development.

Leontyev (1979, 1981) may have carried off this further elaboration a little better than Piaget or even Vygotsky because his particular account was explicitly informed by other bodies of knowledge and distinctions like the one between biological and cultural evolution. Leontyev extends the social aspects of psychical guidance downward from that covered by Vygotsky and even Piaget into the realm of animal intellect, but he also provides a refreshingly clear account of how this preadaptive role of social guidance is transformed within the context of human societal-historical relations into a two-sided matter of transmission and appropriation of culture. By recognizing that the outer and inner aspects of psychological subject matter are inherently linked through an organism's externally observable activity (including the respective needs and motives that provide a direction to that activity) he was able to argue convincingly that the form (or mode) of that inter-organismic reciprocity changes qualitatively across three overlapping grades of developmental processes. There is, he suggests, a quantitative expansion and a qualitative integrative aspect to the phylogenetic, social, and cultural-historical aspects of psychical development.

When we consider the probable evolution of those characteristics of the human species which make us distinct from animals, what becomes clear is that activity leads reflective capacity and conscious understanding. So, in investigating any psychological process (which has developed evolutionarily in response to this activity), it seems very important to carefully assess the respective type of activity that is at work in its initial occurrence as well as in its ongoing expansion and transformations.

We have a very good exemplar of this sort of analysis in Vygotsky's (1978) consideration of play as the leading activity in the psychological development of preschool children. The reason we want to focus on play is because play is the context in which so many important intellectual developments occur during that formative period of the child's life. Not only the development of explicit rule orientation and abstract thinking, but also (as later investigations have shown) very complex social relations are all developed within the context of such play. In short, ontogenetically speaking, the intellectual characteristics of the older child are generated in the activity of play. So it is that Vygotsky talks about play as a leading activity for the intellectual development of preschool children.

That's not to deny, however, the important ontogenetic role of abstract thinking in the older child's play, but what is revealed in Vygotsky's account is that play has an ontogenetic-temporal priority to the occurrence of abstract thinking. It is play activity that determines the emergence of abstract thinking, not the other way around. Again, that's not to say that later on there is not a reciprocal causal influence at work -for example in the play activity of the adolescent- so that abstract thinking comes back to influence, say, the selection or the relative appeal of various so-called leisure activities. That's not being denied here at all. What we are doing is simply identifying a certain ontogenetic priority (a leading role) of play in the overall process of such ongoing intellectual development.

The central task of such intellectual analysis is one of specifying which activity is leading at different points across not only given concrete experimentally assigned tasks, but also the life-span of the individual and the society within which that individual is developing. This is why the Neo-Vygotskian works of Doise & Mugny (1984) and Rogoff (1990) respectively take an extra-individual unit of analysis as their analytical starting point and describe the intellectual development of human beings as a culturally specific "apprenticeship" in thinking.

However, even their admittedly social and transformative analyses can be considerably sharpened by explicitly adopting the activity theory vocabulary. For example, the two children depicted right (from Doise & Mugny, 1984) are using social cooperation and joint actions toward their shared "goal" of winning a prize for completing the researcher assigned "pulley task" -as can apes given the right kind of task- but these children (as human beings) are also utilizing higher levels of mentality -i.e., societally "appropriated" language tools- to do so. Each of these individuals makes many false starts and mistakes along the way. Only when the so-called practice-effect takes place (more specifically the downward movement of intentional individual actions to form automatic operations for each child -in part by way of their guiding of each other's actions but also by way of the leading actions of the researcher) can the assigned collective group task (the joint activity) be completed smoothly.

It should be starting to become obvious that although it might be possible to describe the time of onset or other such outward quantitative features of a given psychological process (e.g., rule-governed play or abstract thinking) in the absence of such analysis of activity, it remains impossible to truly explain the concrete developmental particulars (means of initial occurrence, internal contradictions arising in transitional forms, or the reasons for subsequent transformations) of any such process without situating that process in the context of the wider sociohistorical milieu in which it occurs and develops qualitatively. Accordingly, Leontyev (1979, 1981) points out that material-economic production constitutes the leading activity on the cultural-historical level of analysis. Such production is the wider societal context within which all the other forms of human activity occur.

I hope that it doesn't seem somehow arbitrary to claim that human social relations turn around production (collective labor processes). In a society like ours, however, where we have a very complex division of labor and within which most of us are fairly removed from material production in its classic sense, it is sometimes difficult to see the centrality of production. Thus, when certain segments of the so-called working class rise up to remind us they are awfully important, it might be tempting to think that they couldn't really be (because, for instance, we intellectuals most certainly are more important than, say, people who just nail things together). On the other hand, we all keep sitting inside, on, and around those things they nail together; putting the clothes on our backs that they produce; eating the food they grow; etc. So, in the final analysis there is simply no denying that productive human labor in all its forms is important. When we consider historical change in the cultural evolutionary sense, what evolves is precisely the dominant mode of production. Leontyev's point is simply that it is ill-advised to attempt to understand interpersonal (social) relations in the abstract (independent from the societal-cultural productive activity within which they are embedded).

We should say quickly too (I suppose), that what characterizes any such human form or mode of production is the use of tools. Some animals may be said to produce things, that's true, but if you observe how they do that it is always with the instruments provided to them by nature (hands, teeth, feet, stones, rocks, sticks, twigs, etc.). We tend very seldom to produce anything that way. In fact, it's so unusual these days that we put a label on such items "made by hand" and even that label is most often merely an indication that the item was produced by way of the mediation of some "hand tool" as opposed to having been produced outright by a machine. So the use of mediating tools (however complex or simple) is a fundamental characteristic of human production. Whether it be of the simple screwdriver sort or those huge computerized machines found in a modern production facility, we tend to put something between us and the natural material we are trying to transform.

What I'm trying to indicate here is that the questions raised by historical materialism -regarding the nature of social development, the nature and relations within human societal structure, etc.- are all extremely important from a psychological point of view. Such relations are not static just as psyche itself (e.g., the ontogenetic transition from temperamental individuality to distinctly human personality) is not static. Such relations change historically, and the processes of such change -i.e., qualitative shifts taking place according to the emergence of internal tensions or contradictions which exist within the social-societal relations- are very similar.

No matter what your previous ideological background on such issues might be, I do take it that you can at least appreciate one related point: That this emphasis on the centrality of productive "labor" activity is just a special human case of the centrality of activity (activeness) in all its varied evolutionary and developmental forms. The marvelous thing about adopting the activity theory position is that one begins to see how biological evolution relates to cultural-historical evolution, and how these tie back into an observable pattern of transformative shifts in ontogenetic individual development. In other words, all three lines of mental development fit into one broad picture so that one begins to get the glimmering of a possibility of a general social science of human practice, discourse, and existence.

Having covered all of these conceptual issues in some detail, it is finally time to revisit the thorny issue of what the pragmatic implications are for the practice of empirical research. Here we will make a concerted effort to compare and contrast the empirical approach of Vygotsky and his colleagues with that of the Life-Span Developmental specialty of General psychology. By doing so, it is hoped that we will establish some useful common methodological grounding by which the professional standards of General psychology can be improved upon.

Empirical Implications: Vygotsky on Method versus the Life-span Developmental approach

Everybody knows that people and psychological processes develop. Babies start out as infants with a circumscribed sphere of action and reflective capacity which expands outward as they become children, adolescents, adults and so on. For psychology, however, the long-standing disciplinary challenge has been how to conceptualize and empirically investigate such development in a way which reveals what's going on. Addressing this challenge entails reassessing the logical tools we use not only in daily life but also when we take up the task of empirically investigating psychological processes developmentally.

As we have been arguing throughout this work, the whole notion of process is one that resists analysis in terms of static components, additive mechanical elements or mere quantitatively defined variables. In fact, when you translate such dynamic processes into the formal logical language of mathematical variables, measurements, and analysis of variance you lose something that is really important in the process itself. That "something" can be variously described as necessary internal motion, qualitative change, and development.

Vygotsky on Method

In Vygotsky's Mind in Society (1978), there is one quite valuable chapter devoted to "Problems of Method" (pp. 58-75) in which he tries to articulate some of the main features of an empirical-developmental approach to psychological research. Given that his account of proper empirical method is based upon a generalized dialectical materialist approach to psychological subject matter, I want to call your attention to a few of the points he makes. In this regard, there are two rough "principles" that he states and one "problematic" proclivity that he suggests avoiding whenever possible.

The first principle is to "analyze processes not objects" (p. 61). Vygotsky is interested in studying psychological processes on their own terms and not by way of appeal to inherently static mental elements, to relatively static instances (snapshots), nor to the physiological prerequisites of such processes. Quite clearly, his intent here is to pointedly criticize the mechanical mode of analysis in all its disciplinary forms and begin laying out what he calls a "new" approach to the empirical investigation of psychological development per se. In this regard he cites various examples of the former approach drawn from the Wundtian tradition of reaction time experiments as well as the behaviorist (S-R) tradition as it stood to that date. Let's briefly review this contrast between a mechanical and a dialectical approach on our own before delving deeper into what Vygotsky makes out of it.

When you look back at the empirical work of various mid-to-late-19th century figures like Wundt, Mach, or Helmholtz, you see evidence of a mechanistic mode of thought being utilized. They all assumed that the world of observable objects, events, and psychological processes was to be analytically decomposed into elements and that the rules for putting them back together were to be sought by science. The early disciplinary textbooks of Wundt and Titchener state this explicitly during their portrayals of the "experimental" investigation of psychological "experience" (see Sections 3 & 4). The task for the psychological investigator, they suggest, is to analyze the elements of experience and seek the rules by which they recombine in a largely additive fashion. As indicated previously, this early (essentially pre-evolutionary and anti-teleological) outline of the discipline was a modernized (experimental-mathematical) elaboration on the predisciplinary forms of associationism which had been around for at least 150 years.

In other disciplines, Hegel (rather early on) and then Marx and Darwin all played their respective part in countering the formerly hegemonic mechanical mode of thought with their own forms of dialectical or evolutionary analyses. In psychology, it was James, the early functionalist tradition, and then later figures like Piaget as well as Vygotsky and his colleagues that attempted to do the same. In countering a tradition of mechanical psychology (characterized by an appeal to external cause and reductive elementism) each suggested their own remedies. For James and Dewey the solution was to readmit the "teleological" (functional and final cause) aspects of psychological process back into the purview of "natural scientific" psychology (see Section 4). For Vygotsky and his colleagues the same functionalist tactic was adopted but was now also explicitly informed by a generalized dialectical and three-tiered unit of analysis strategy of approach to the investigation of psychological processes.

To make a long story short, the trouble with attempting to investigate any developmental process mechanically is that it leads to an overemphasis on: externally imparted motion rather than internal dynamics; accidental contingency rather than necessity; and quantitative rather than qualitative change. Alternately, the great value of adopting a combined dialectical and levels of analysis approach is that it now becomes possible to distinguish between what is "necessary" and what is "accidental" to the particular level of the developmental process we are investigating.

To illustrate these contrasts let's paraphrase a few earthy examples from Georges Politzer's Elementary Principles of Philosophy (1936/1976), plug in assorted corrections, and then relate them back to the empirical approach to psychological processes being promoted by Vygotsky and his colleagues. In attempting to distinguish between developmental process and mere mechanical change, Politzer proposes a distinction between what he views as the necessary history of an apple becoming ripe and the apparently accidental history of a pencil being produced and worn down:

The ripe apple has not always been what it is. Before that, it was a green apple. Before being a flower it was a bud. In this way, we shall go back to the condition of the apple tree in spring. The apple has not always been an apple; it has a history. Likewise, it will not remain what it is. If it falls, it will rot, decompose, and scatter its seeds, which will, if all goes well, produce a shoot and then a tree. Hence neither has the apple always been what it is nor will it remain what it is. This is what is called studying things from the point of view of motion.

....

Let us now take a look at a pencil, which has its own history too. This pencil which is worn down today was once new. The wood from which it is made came from a board and this board came from a tree. We see that both the apple and the board have a history and neither one has always been what it is. But there's a difference between these histories. The green apple became ripe. When it was green, could it (if all went well) not become ripe? No, it had to ripen just as (if it falls to the ground) it has to rot, decompose and scatter its seeds. Whereas the tree from which the pencil comes may not become a board, this board may not become a pencil, and the pencil itself can always remain whole and not be sharpened.

Hence we notice a difference between these two histories. In the case of the apple, if nothing abnormal occurs, the flower becomes an apple, and the green apple becomes ripe. Thus, given one stage, the other stage necessarily and inevitably follows (After Politzer, 1936/1976).

What Politzer goes on to conclude is, I think, incorrect, and so rather than follow with his conclusion let me just give you my own. Clearly these are different kinds of processes. As he points out, both the apple and the pencil have their own sorts of histories, and in the case of the pencil's history "necessity" appears to be missing. He ends up concluding, then, that some change is dialectical and other change is not. But that creates a serious problem by implying a dual universe in which we have both dialectical motion and nondialectical motion. For example, if we consider a child, that child is bound to change into something else. If allowed to grow, it changes into an adolescent, adult and so on. If, on the other hand, that child gets stepped on by an elephant, it also changes, and what we want to do is -in some way- make a distinction between those respective kinds of change.

The motion that we are interested in when we are trying to understand the child or the apple is not the motion that we see when the apple rolls down a hill or when the child is stepped on by the elephant. The motion we are concerned with is the apple or the child's own motion (that which is somehow inherent in the process of its own development). Things can happen to the apple. You can, for instance, step on it and squash it on the ground, but that's motion being brought to it from outside. Similarly, the production of the pencil is brought about by motion not inherent in the wood from which it is made but brought to those materials by human beings for a particular purpose. That motion was brought to the tree to produce the wood and so on. So there appears, at least within these special cases (the apple rolling down a hill, etc.), a mechanical form of motion imparted to an ongoing process that is not inherent in that process itself, but which is -with respect to that particular process- accidental or "contingent."

Now let's pose the crucial question as follows: If internal necessity is a characteristic of dialectical motion, in what way can we conceive of all these sorts of motion as dialectical? I think the way we do this is by bringing to bear the notion of the relevant unit of analysis. That is, what is the unit of analysis we are interested in investigating? When I refer to the apple ripening, that constitutes a unit of analysis in itself. We are interested in investigating the motion inherent in that particular process of development. In order to consider the pencil developmentally, however, I must adopt a wider unit of analysis which includes the relationship of that product with the human beings that produced it. Within this wider unit there is a notable subject-object relationship in which, given that a human being does such and such, it is absolutely necessary that this wood must become a pencil. If that were not the case, there would be no technology of pencil manufacture. There must be a necessity, but the necessity is not found by referring solely to the pencil as an individualized "object" but rather by recognizing that pencil as part of a larger unit in which that necessity is found.

Notice that when we adopt this layered view of objects, events, or processes, we can begin to appreciate that what appears to be accidental at one level is actually necessary at another level. So, we have an application of dialectics here once again. From the point of view of the pencil as an isolated unit it is quite accidental that it wears down. It may not be used, it need not be used. But in terms of the writer as forming a unit with the pencil, I can't be a writer without a pencil or some kind of writing instrument. In terms of that larger unit, it is absolutely necessary that if I continue using the pencil it will wear down. At one level the wear is accidental or contingent and at another level it is necessary.

What Vygotsky and his colleagues realized is that adopting this kind of layered analysis (which allows us to see the necessity at one level and accident at another) goes a long way to helping us understand the three-tiered internally contradictory relationships between societal, social, and individual existence. A lot of things seem totally accidental to us as individuals but they are necessary at the level of social existence because by virtue of the fact of entering into interpersonal relations with others to form a whole (which we call social existence) there are certain necessities which occur. We see these social necessities manifested in societal-cultural history too so that such history has an orderly appearance. Surely, whether I decide to carry out a given task in one manner or another seems totally arbitrary but we know that on the average people within a given society will tend to deport themselves in a given way and those are the forces which collectively bring about the sorts of changes that we sometimes call human history or cultural evolution.

What I'm getting at here is that in order to discover internal necessity we sometimes have to first determine what the relevant unit and level of analysis is for the particular developmental process we are interested in investigating. This point brings us back to the motives for Vygotsky's critique of former mechanical approaches to psychology. The Wundtian tradition's emphasis on individually experienced "elementary mental events" was far too narrow a unit of analysis. Likewise, the early behaviorist emphasis on the "physiological components" of individual actions was not only too narrow a unit of analysis, but was also a confusion of what constitutes the relevant level of analysis for the investigation of distinctly "psychological" processes. If psychology was to recognize its extra-individual content, avoid reducing its subject matter to physiological components, as well as seek out internally dynamic necessity rather than mere external contingency, a new nonreductive approach to the empirical investigation of psychological processes would have to be adopted.

As Vygotsky (1978) puts it, this new sort of analysis of psychological processes requires a "dynamic display" of the origins and main transition points making up the "history" of the particular process under study. "Consequently, developmental psychology, not experimental psychology [in the classic mechanical or reductive sense], provides the new approach to analysis we require" (p. 61). His main argumentative point in this regard is that if we stop studying static mental elements or mere physiological prerequisites to psychological processes, then all psychology will become developmental psychology. There is no other proper form of psychology because everything within its purview has to be understood as a process and that sort of analysis requires a developmental empirical method be adopted.

"Our method may be called experimental-developmental" in the sense that its aim is to intentionally provoke the occurrence of psychological development that can be analyzed (p. 61). Let's consider the empirical implications of this particular statement briefly. When you look at the work of figures like Piaget and Vygotsky, you don't find them saddled with a lot of a priori procedural rules or mathematical formula. What they are doing is returning the study of the nature of the development of the psychological process under study. It is the developmental process that is important and interesting for them and it is that which dictates the procedures they will use.

Basically, when you look at examples of their empirical work, what Vygotsky and his colleagues do is either zero in on a naturalistic setting in which a developmental shift is likely to occur, or they set up a situation in which the developmental process they are interested in observing is allowed or helped to occur. So, as Vygotsky suggests, the basic task of such research becomes a "reconstruction of each stage in the development of the process" under study (p. 62).

For instance, building on the cultural-historical evolution hypotheses set out by Vygotsky & Luria (1930), Luria took advantage of the rapidly changing societal circumstances in isolated Uzbek villages to study the relative shift in the dominance of concrete vs. abstract thought which occurred in the context of implementing a literacy program and collective farming practices (see Luria, 1976 for a full account of this interesting research). Vygotsky's chapter on "The Role of Play in Development" likewise indicates quite clearly how, by way of careful observation, conversation and active engagement with his young subjects, he was able to sketch an outline of the developmental transition from thoroughly contextually dependent "concrete" thought through to the beginnings of decontextualized "abstract" thought. That is, he provides a theory of play as a leading activity in the ongoing development of higher mental processes (1978, pp. 92-104).

Alternately, these researchers might select a group of preliterate kids who haven't quite attained the ability to deal with symbolic writing or number systems and set up situations in which these subjects discover for themselves that their performance can be improved by adopting the rudimentary aspects of such abilities. An instance of this sort of research is provided by Vygotsky later in the Problems of Method chapter, to illustrate a "causal-dynamic approach" to the study of choice reaction time which stands in fairly obvious contrast to the usual method of studying such processes postmortem (pp. 65-75). Here, the emphasis is on investigating the whole process of choice reaction time performance -from its initial phase where many errors are made through to its ongoing middle and later phases which are respectively characterized by increased accuracy and then smooth and automatic responding. Furthermore, this is done in a way that reveals what is going on. That is, after these young subjects encounter great difficulty carrying out the required choice reaction responses to the varied presented stimuli, auxiliary pictures are provided as external performance aids. These aids function initially as signs (signifiers) to remember to carry out particular actions (thereby increasing the accuracy of the subjects' ongoing performance though not necessarily its speed), but with further practice they become internalized (increasing speed of response too) and finally become altogether unnecessary because the actions which those signs once signified have now also become operationalized.

Let's notice that this first guiding principle (to study process) is not hard and fast. It requires some ingenuity on the part of the investigator as well as a basic familiarity with and openness to the particular psychological process that is to be studied. The problem with setting up a priori quantitative-mechanical formulae as invariant procedures to follow during the collection of any data set (see the cookbook diagram provided above) is that you run the risk of remaining forever distanced from the process under study. You might become an expert in carrying out an analysis of variance (ANOVA), for instance, but you still don't know very much about kids, people, perception, learning, memory, personality, motivation, and so on.

The second principle is "explanation versus description" (p. 62). By definition mere description does not reveal the actual causal dynamic relations within a given process. Since what Vygotsky means by a "developmental" study of such processes is the disclosure of its genesis or its causal dynamic basis, it follows that such research will attempt to explain rather than merely describe. What Vygotsky wants to do is "lay bare" the essence (the necessity, origins, and internal dynamics) rather than simply enumerating the surface features or recording the external manifestations of the process under study (p. 63).

In contrast, if you consider the so-called learning curves of the behaviorist reinforcement tradition or even the age-performance curves of the later operationalized (S-O-R) experimental tradition from this point of view you will see that these are really just empirical descriptions. You really don't know from such accounts what has gone on in the process under study. In the behaviorist case, the ontological "what" question is not even addressed, only the "how much" question. In the age-graded curve case, it is the "how" question that is being avoided almost entirely.

For Vygotsky and his colleagues, the central investigatory question is not merely a matter of demonstrating that the child (or generalized organismic subject) got from here to there in terms of operationally defined performance, but rather how and why the subject got from here to there. This "causal genesis" aspect of psychological analysis is only very seldom revealed by the kind of descriptive numerical analysis that many psychologists are so satisfied with. Like Piaget (and unlike the strict behaviorist or the merely descriptive operationist account), Vygotsky wants to know why a reinforcement works and how a performance task is mastered. The goal is not to merely predict or control external behavior nor to merely record a quantitative increase in performance with age, but to understand the origins and internal dynamics of the process that are indicated by such observable and predictable performance. In other words, -if we are going to use the behaviorist lingo at all- he seeks to understand the changing forms of the developmental links between external stimuli and internal responses that underlie both the lower and higher mental processes under study.

Vygotsky's point here is that the descriptive recognition of age-graded idiosyncrasies is but one small aspect of our overall empirical task. The greater part, to which such mere descriptive differentia are "subordinated," is that of discovering their "actual origin" (p. 63). So for Vygotsky, the question that must be attached to any merely descriptive recognition of differences in performance is the question of its origins and ends: Where did it come from; how did it get there; what is its functional utility; and -for human beings- toward what intentional societal motive is it directed?

Finally, the problematic proclivity of studying "fossilized behavior" is raised by Vygotsky (p. 63). His point is that what we tend to study is the finalized "product" of development rather than the process of development itself. To have recognized this proclivity as a "problem" follows from what he has already said about proper developmental method. For instance, to utilize his example, if we want to study language, one might attempt to assess it in its already developed form. Vygotsky suggests, however, that we should not have linguistics dealing with language as already developed and developmental psychology dealing with the rise of language capabilities in children separately. As he says, to study something "historically" means to study it in the process of change, that's "the dialectical method's basic demand" (p. 65). This is especially important, he suggests, when we are attempting to understand uniquely human forms of psychological activity.

This particular example always bothered me because it seems to assume a false interdisciplinary dichotomy with early developmental psychologists being portrayed in the good guy camp and linguists in the other camp. As far as I can tell, it was more often the case in psychology as well as elsewhere that the quantitative enumerative aspects of linguistic development at different ages were being addressed, albeit to the exclusion of the qualitative aspects that might also have been addressed in such assessment. E.L. Thorndike (1921), for example, carried out a statistical-numerical analysis of the most common words used at varying ages but never really quite got around to answering how or why these differences exist (in any reasonable semblance of a satisfactory manner), and that is really the developmental point of such investigations. As already mentioned above, however, Vygotsky does a better job in addressing this particular issue in his chapter on the relationship between "Learning and Development" where the ZPD is introduced and the necessity of adopting an extra-individual unit of analysis in such investigations is emphasized.

In any case, one of the most infamous examples we have of "studying fossilized behavior" is the tradition of collecting IQ scores as an indirect indication of human intelligence. We know, for instance, what kind of scores kids attain at different ages and we know that individual kids differ in such performance, but little research has been devoted -in fact, and despite many claims to the contrary- to the issue of how such performance develops. The reason that we don't have that research is that General psychology has long assumed (along with Terman, Woodworth, Jensen and others) that such differences are to some degree determined by one's genetic endowment and that assumption has created all kinds of mischief. But that is just begging the question, isn't it? You assume there is a genetic component, and hence any forthcoming so-called developmental research becomes a matter of mere numerical "interactive" correlation between nature (which includes such variables as "race", etc.) and nurture (educational opportunity, etc.).

Early psychometric figures like Binet, Terman, Wechsler, etc., hurriedly produced mental tests, correlated them with age as well as with other quantitatively defined characteristics (e.g., SES, formal education, race, etc.) and passed that off as an operationalized definition of human intelligence. Is there such a thing as human intelligence? I don't think there is any doubt, but to claim that intelligence is what the Stanford-Binet or a Wechsler test battery measures is quite frankly stupid because, if we look back at how and why those mental measures were produced, these folks really did not have any firm developmental grasp of the process they were trying to study (see Ballantyne, 2002). Instead, they simply produced quantitative measures and asserted dogmatically that these measures were operational definitions of human intelligence.

The IQ as a stand-alone assessment tool is not informative because it does not tell us about the real "living intellect" of the child, adolescent, adult, and so on. The developmental process of intellectual expansion (in the qualitative stage-like sense that Piaget and Vygotsky understood it) is being almost completely ignored. In other words, we started out with an operationally defined abstraction of intelligence (the individualized IQ measure) before even taking the time to investigate what the concrete characteristics of human intellect are, and then proceeded to expend our disciplinary energy on studying IQs (e.g., how they correlate with this, that, and the other thing; how they change quantitatively over time; under what externally contingent circumstances). We thereby created the illusion of understanding intelligence without actually doing so.

If you consider, for instance, how much emphasis has gone into the factor analysis of test scores, in terms of both producing and analyzing test batteries, it is staggering to find that the actual theoretical knowledge products that have been forthcoming are rather sparse. What do we know from factor analysis about the difference between, say, talent, abilities, or aptitudes? Very little. We certainly do not learn, for instance, how or why a child comes to behave or perform in a given manner at that particular point in their life. The disciplinary investment in such multivariate techniques never quite lived up to the hopes of Cronbach, who suggested that they might help psychologists carve up nature closer to the bone. I'm not the only one who makes that claim. There are a host of other sources, predominantly from the tail end of the 20th century, that either come to the same conclusion or allude to it (see especially Lawler, 1978; Hunt, 1995; Neisser et. al., 1996; Woodward & Goodstein, 1996; Daniel, 1997).
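To make the point a little more tangible, here is a minimal sketch (in Python) of what a factor analysis of a test battery actually delivers. Every number and subtest name in it is invented for illustration; nothing is drawn from any real dataset or from the sources cited above.

```python
# A minimal sketch of what a factor analysis of a test battery yields.
# All scores are simulated; the subtest names are purely illustrative.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_children = 500

# Simulate one underlying source of shared variance plus noise
# for four hypothetical subtests.
g = rng.normal(size=n_children)
scores = np.column_stack([
    g + rng.normal(scale=0.8, size=n_children),   # "vocabulary"
    g + rng.normal(scale=0.9, size=n_children),   # "block design"
    g + rng.normal(scale=1.0, size=n_children),   # "digit span"
    g + rng.normal(scale=1.1, size=n_children),   # "similarities"
])

fa = FactorAnalysis(n_components=1).fit(scores)
print(fa.components_)   # loadings of each subtest on the single extracted factor
```

The output is a row of loadings: a compact redescription of the correlation matrix among the subtests. Nothing in it says how or why a child comes to perform this way at a particular point in development.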

At long last, it is now getting successively harder for your friendly departmental psychometrician to claim that any given mental test measures the interaction between nature & nurture (heredity and environment; aptitude and achievement; etc.) without being charged with bald-faced dogmatism. In this regard I want to close our coverage of IQ as an exemplar of studying "fossilized behavior" by mentioning that "interactionism" -whether in its early additive or multiplicative form (see Woodworth's 1929 mental capacity metaphor diagram above) or in its subsequent so-called "dynamic" or "revised interactionist" form- is and always has been just political spin. Just like operationism itself, interactionism is one of those disciplinary problems that presents itself as a solution.

What then is the way forward for that subdiscipline and for the discipline as a whole? I think the only way forward is to adopt an updated "transformative" approach of the sort advocated by Vygotsky and his colleagues. One small instance of the usefulness of revisiting past results in the light of such a transformative view of our subject matter is the ongoing reanalysis of the so-called "Flynn effect" of rising test scores that I have provided elsewhere (see Chapter 8 of Ballantyne, 2002 under that heading). Let us now, however, endeavor to render these seemingly lofty Vygotskian empirical principles a little more concrete by considering an illustrative contrast between his approach to the study of play and the empirical method advocated by the Life-Span Developmental specialty of the late-1980s onwards.

On avoiding the "Baltes trap"

So far I've put forward Vygotsky as somebody who adopted a dialectical approach to the developmental investigation of psychological processes. Being a relatively early disciplinary figure he did not, of course, have the last word on any of the psychological topics he covered. Our concern has been to outline the basic features of his empirical procedure and to indicate its potential for revealing the kind of information it does about mental development. I've also hinted that the particular version of developmental research method utilized by Vygotsky (and his colleagues) seems to sit a little more comfortably alongside that practiced by the other social sciences than does the explicitly mechanistic IV-DV model of mid-20th century General psychology.

Lest we become overly confident that all of this must surely have become obvious by the late-1980s, I now want to indicate what actually happened: although the discipline began acknowledging the need for some sort of developmental analysis, it also implicitly retained the mechanical method of accident in its particular account of that developmental analysis. This historiographic point can be made by considering the goals, content, and limitations of one of the most definitive books of the era, Baltes, et al., Life-span developmental psychology: Introduction to research methods (1988). In historical hindsight it might be tempting to downplay the important methodological strivings of this particular work, so some supportive commentary in this regard is in order.

In the sense that Paul Baltes and his coauthors aim to address an ongoing disciplinary conundrum about exactly how best to lay out a coherent account of developmental psychology research procedure, their methodological strivings can be considered as both the upper edge (or endpoint) of one tradition and the start of (or turning point towards) something new. On the one hand, the actually presented outcome of their effort is a discouraging though perfect exemplar of the carryover of mechanical method into "Life-span developmental" psychology's subdisciplinary discourse (hence the "Baltes trap" eponym used above). When we carefully consider the examples of so-called "developmental" analysis they propose, what we discover is that they are not really developmental at all (in the dialectical, internal necessity, and transformative sense of that word). On the other hand, they are openly grappling with, and are beginning to see through, many of the merely assumed methodological shortcomings of the former explicitly operationist tradition. They know something is wrong with the manner in which prior empirical research in psychology has been conducted but can't quite put their finger on what the central problems might be, nor come up with any unequivocal remedies.

What we will see is that Baltes, et al., are not getting at the same sort of developmental account that Vygotsky was getting at. This is because they implicitly adopt a traditional mechanical view (of external causality and quantitative empirical measurement) which goes back to the 17th century while Vygotsky (and his colleagues) were taking a dialectical view that first appears in the 19th century and is only later elaborated further. What should become apparent is that, despite the overlapping good intentions of improving the discipline, we are faced with two entirely different empirical approaches to the investigation of psychological processes. One seems to be answering a certain kind of developmental question (about say the essential role of play in the normal qualitative transformative transition from concrete thought to abstract thought in children); the other one I think remains helpless in producing that kind of understanding on its own.

I believe that wrestling with these methodological issues is very important because they have been well articulated neither in the dialectical literature nor in the statistical cookbook account of empirical psychological methods. Our immediate task will be not only to provide an historically informed caution about the limitations of adopting the isolated statistical account of empirical method in developmental psychology, but also to make a rational argument about what the respective roles and proper timing of use of these two contrasting empirical approaches might be. Ultimately the central question will become one of how to combine these approaches so that so-called "systematic naturalistic" qualitative investigation and statistical quantification might be utilized together to produce the concrete and relevant theories of psychological processes we require.

In the first two Chapters of their work, Baltes, et al., try to lay down the ground rules for their account of developmental method in psychology by suggesting what development is, by stating the rationale for developmental sciences, by providing a basic example of the empirical developmental method they advocate, and so on. After various intermediary chapters their overall account culminates in Chapter 19, which offers a competent state-of-the-art summary of Life-span developmental research procedure. We'll zero in on these three chapters to make our arguments as succinct as possible. Given that we are dealing here with an admittedly transitional disciplinary position, there are surely both limitations and strengths in it to be recognized. I'll try to point out both, but eventually what I'm going to suggest is that the language of development they use is actually inappropriate to the mechanical forms of empirical practice they are advocating.

In Chapter 1, they discuss the difference between description and explanation. On the positive side, they seem to recognize this as a rather vital issue that requires careful preliminary consideration if we are to avoid the foibles of prior empirical positions in philosophy, science, and psychology. Three somewhat conjoined statements about the distinction are made and can be responded to consecutively from the perspective of the arguments already laid out in the present work.

For openers, they suggest: 'The aims of developmental psychology include the pursuit of knowledge about determinants and mechanisms that help us understand the how and why of development. What causes the change. This aspect of knowledge-building is often called explicative, explanatory, or analytic because its goal is to find causal type relationships and thus go beyond descriptive predictions in the nature of behavioral development.' Okay, so far, so good. The only potential red flag raised is one regarding how they might ultimately deal with the nature and type of the "causal" relations they mention.

Moving on, we are presented with a second, more clearly problematic statement: 'The decision as to where description ends and where explanation starts and which form of explanation is acceptable to a given scientist [however] will always be an arbitrary one. As a matter of fact, philosophers of science question the logical merit of such a distinction.' Wow! Since they have started out their book by telling us that the distinction between description and explanation is arbitrary, we may already be in trouble. What they are overlooking here is the history of philosophy that tells us where this false dilemma comes from. It comes precisely from the failure to distinguish between appearance and essence, a failure we first encounter with the British empiricists (see Section 3), who attempted to appeal to nothing other than what was given in immediate experience, and which was then carried over into the various positivist positions in which anything beyond apparent-descriptive judgments is considered metaphysical. That is, if we take Baltes, et al., at their word here, we would be forced back into the worst form of sterile positivism (in which all one has is a set of descriptions which we might put together in various assorted ways). These particular authors, however, are more pragmatically savvy than these initial statements indicate. They are explicitly attempting to distance themselves from the former merely descriptive-operationist tradition in psychology and thus recognize that the argument as presented so far will not stand on its own merits.

So, having just knocked the historical foundation for making essential-explanatory judgments out from under their own feet, they turn around and say: 'but for didactic [instructional] purposes the distinction is useful.' In other words, they know there is a disciplinary need for making a distinction between description and explanation but don't actually present any firmer basis for it other than the rather dogmatic suggestion that it tends to promote informative discussion. Well, we needn't delay any further to quibble with them on that specific point but need only note this as the first clear manifestation of the methodological quagmire that Baltes is setting out for us to circumvent. We should and can, however, ask what exactly -in terms of its actual (practical) usage- they appear to mean by "explanation" in other parts of the book.

Very quickly, in Chapter 2 their illustrative (and presumably defining) example of the developmental empirical approach being advocated is one of hearing loss. Is that really a developmental phenomenon in the sense that we have previously been talking about? Is this the sort of instructive example that tells us what aging is about or what distinguishes the adult from the child? It may be something that goes along with it, but how close to the essence of development are we getting with the example of hearing loss? Let's leave those questions aside for the moment and concentrate on what Baltes et al., say about their example.

Firstly, they suggest that 'the example of auditory sensitivity provides a good example of how descriptive developmental changes come to be explained in terms of age correlated mechanisms without using age per se as the final explanatory principle.' In plainer, more matter-of-fact language, however, all they have done is to show us that hearing loss is correlated with external exposure to noise.

Reading a little further on they suggest again that: 'The cumulative effect of noise input on auditory sensitivity is an example of such a [developmental] paradigm. Explaining the cumulative linkage of causative chains, making for developmental change is at the heart of developmental theory.' Yet when you take these latter statements seriously as an encapsulation of "developmental" research and "explanatory" theory, then what they seem to be saying is that all developmental change is the result of external influences. In other words, the "causal" or "explanatory" basis being appealed to by Baltes is really some sort of external antecedent event that impacts or happens to a person. This is the 17th century mechanical model.

Here is our second clear indication that we are dealing with a very different account of empirical developmental method and explanation than that supplied by Vygotsky and his colleagues. In Vygotsky the external influences on -as well as their manifestations in- changes in performance are most certainly recognized as being important but the main emphasis of the developmental method he advocates is one of disclosing (discovering) the internal dynamics of the developmental psychological processes under study.

For didactic purposes one could simplify and reiterate the dispute by taking up the old example of the seed and its development into a plant. You can measure its quantitative growth all you want and also describe the particularities of the minerals, water, and sunshine it is exposed to; but the real developmental-explanatory question is what the seed as a process does with the minerals, water, etc. In other words, any analysis which appeals merely to external events or antecedent (efficient) causes is mechanical and it doesn't actually come close to the teleological "essence" (as it were) of the developmental process under study.

To take another more psychological instance of the differential emphasis under consideration, it could be said that for Vygotsky the real question of reinforcement is one of explaining why the young child (as opposed to say an adolescent or adult) is so susceptible to it rather than merely one of a descriptive acknowledgment that such reinforcement is characteristically followed by a (quantitative or qualitative) change of behavior. Stated in another way, Vygotsky seeks to not only describe the apparent idiosyncrasies of an observed event or process but to discover the actual origin of those idiosyncrasies.

Hence, by disclosure of a causal dynamic basis of a developmental change Vygotsky means something a little different than what Baltes, et al., mean by their appeal to a 'cumulative linkage of causative chains.' In order to drive this point home, however, let's take up the manner by which these issues are dealt with in Chapter 19 of Baltes, et al., and contrast them more specifically to the way they are dealt with in Vygotsky's study of play.

The central instructional example utilized in Chapter 19 is one of 'dart-throwing accuracy.' The ostensible rationale for presenting such a "simulation" study is that it is "non-controversial in that it is unrelated to an existing theory or body of data" (p. 180). This is not an uncommon sort of example for statistical methods texts to use, and what we can say about it in this particular case can often be said about most of the others. To speak plainly, what would potentially give informative meat to the developmental position being attempted by Baltes is totally lacking in this sort of example. It is tailored from its very inception to lend itself precisely to the old mechanical form of external and merely quantitative analysis.

For one thing, we are presented here again with an apparently purely quantitative initial difference in performance between three age groups to investigate. One could easily, for instance, operationalize the dart-throwing difference under consideration as 'average deviation from the target' and measure it in the three age groups yielding a numerical pattern to analyze and explain: At 10 years of age it is 10 inches; at 20 it is 2 inches; and then it becomes 8 inches at 50 years of age.

In contemplating what the reasons might be for this sort of initial observed pattern of difference between age groups, the kinds of possible explanations appealed to by Baltes, et al., are ones of the influence of external contingencies like 'practice and anxiety' on performance. So, basically, what has been presented to us in Chapter 19 is an observed pattern of quantitative change over age with explanation being sought in terms of other activities or states (practice and anxiousness) which may be correlated with that change.

Having guessed that it is these factors that might influence performance, the investigators then carry out an intervention in which the subjects are given practice and their anxiety is reduced. This intervention is found to yield an increase in dart-throwing accuracy in the younger and older groups that rivals the performance of the 20-year-old group.
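To see concretely how little such a design delivers, consider the following minimal sketch in Python. The group sizes, score distributions, and the use of a one-way ANOVA are my own illustrative stand-ins for the kind of "simulation" Baltes, et al., describe; none of it is their actual data or code.

```python
# A minimal sketch of the dart-throwing "simulation" design; the numbers
# below are invented stand-ins, not data or code from Baltes, et al.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

def deviations(mean_inches, n=30):
    """Simulated 'average deviation from the target' scores for one age group."""
    return np.clip(rng.normal(loc=mean_inches, scale=1.5, size=n), 0, None)

# Before the intervention: 10-, 20-, and 50-year-old groups.
age10, age20, age50 = deviations(10), deviations(2), deviations(8)
print(f_oneway(age10, age20, age50))            # large F, tiny p: the means differ

# After practice and anxiety reduction, the younger and older groups
# approach the accuracy of the 20-year-olds.
age10_post, age50_post = deviations(2.5), deviations(2.5)
print(f_oneway(age10_post, age20, age50_post))  # F drops toward chance levels

# The F values certify that the group means differ (or no longer do);
# they say nothing about how practice or anxiety actually operate.
```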

Yet in having brought about this statistically significant increase in dart-throwing accuracy for the older and younger groups, have Baltes et al., actually provided us with an example of a developmental process that has been explained? Not really. For one thing, they never ask (let alone answer) the kinds of internal validity questions that would bring us a little closer to explaining even this seriously impoverished exemplar of quantitative performance change: How exactly does anxiety interfere with performance in dart-throwing? Why might an older or younger person be more anxious than, say, a 20 year old? Even if they did answer those questions, this is hardly a very interesting or exciting example of a developmental process, and surely there are other candidates to pick, especially when it comes to highlighting a methodology that an entire subdiscipline is to follow.

In short, this example is about as exciting and as informative as the botany of apples rolling down hills. End of story, or nearly so, for the authors themselves admit that there are a host of internal and external validity questions raised by conducting research in this way which will (according to them) 'never be completely resolved' except perhaps through further appeals to an ever more precise measurement of 'multitudinous variables and causes' ad infinitum. Here the implied argument seems to be that explanation is a matter of tapping the correct variables. Yet is this not merely begging the question? How exactly might one zero in on and decide what the relevant variables might be? This question is never quite dealt with in the Baltes, et al., account of developmental research, and it is this question -in fact- that is at 'the heart of developmental theory.'

Sooner or later, however, even Baltes et al., recognize that we have to go back to the concrete reality of the wider learning (or perhaps hand-eye coordination) process being alluded to by this particular illustrative experiment. In other words, the discussion they provide about their results indicates that they recognize this problem, but it also shows that they do not see any way around the problem. The one shimmering sign of hope in all of this comes when it is finally admitted that "only systematic natural observation" can provide the necessary evidence (p. 183) to not only check the 'external validity' of a given numerical analysis but also (presumably) to guide the very utilization of such empirical techniques in the first place. Is it not just here that the sort of internal dynamic analysis of psychological processes advocated by Vygotsky and his colleagues might come in very handy? I am inclined to think so.

The great methodological weakness in the Baltes, et al., account of proper empirical procedure with respect to developmental questions is that they are still stuck in the static way of thinking which we find in the ANOVA -an empirical mathematical tool geared to tell us not about change but about correlations between static variables (plain and simple). You can call one of your quantitative variables "time" and make it look like you are capturing developmental change in your empirical net, but that's not the same thing. If you consider what goes on in an ANOVA, you abstract, you measure, you correlate, you get a result in terms of an F or an R(squared), and that is a long way indeed from the phenomenon which you are trying to study.

Quite often it is suggested that the purpose of adopting this rigid variable approach to empirical method is to produce reliability (e.g., repeatability of results). But what is ultimately more important in statistics than reliability? Validity. Yet we have already found that in the Baltes, et al., account the issue of external validity remains unresolved throughout! They don't tell you how to achieve it. Variable psychologists tend to do a lot of talking about reliability and validity, but have they not achieved reliability at the expense of validity?

The key challenge in this regard that we might pose to Baltes and other similarly well-intentioned Life-span Developmental researchers is: How might one apply their simulation and quantitative empirical techniques to children and discover what Vygotsky appears to have learned about the nature of play and its leading role in the developmental transformation from concrete to abstract thought? Now, if your empirical question is how does abstract thought develop in the child, I don't quite know how your purely quantitative F or R(squared) values are ever going to tie back into that wider qualitative question very easily or in any sort of truly informative manner.

One could, I suppose, attempt to address this empirical question quantitatively by coming up with a measure of abstract thought (scored from 0-10), giving this measure to children from the age of 3 to 10, and plotting a curve of the obtained data showing a relatively low starting point that increases with age. But having done so, will I have necessarily explained anything by following this particular empirical procedure? Not really. I have demonstrated that the process under consideration happens, but I already knew that before collecting the data. I've also produced an empirical description of the quantitative degree of change with age, and perhaps even hinted at how the process unfolds quantitatively across age, but in doing so I've also incurred a few risks along the way. Most specifically, when I produced the original measure of what I have called abstract thought, I was presuming that I already knew enough about the process under study to guarantee that this particular measure has a certain degree of validity (that it is in some way representative of the entity that I have called abstract thought).
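For what it is worth, here is a minimal sketch, in Python, of that purely quantitative route; the 0-10 scores, sample sizes, and fitted line are hypothetical throughout.

```python
# Sketch of the purely quantitative route: an invented 0-10 "abstract
# thought" score collected from children aged 3 to 10, then curve-fitted.
import numpy as np

rng = np.random.default_rng(2)
ages = np.repeat(np.arange(3, 11), 25)        # 25 hypothetical children per age
scores = np.clip(1.0 + 0.9 * (ages - 3)
                 + rng.normal(scale=1.2, size=ages.size), 0, 10)

slope, intercept = np.polyfit(ages, scores, deg=1)      # fitted growth line
predicted = slope * ages + intercept
r_squared = 1 - (np.sum((scores - predicted) ** 2)
                 / np.sum((scores - scores.mean()) ** 2))

print(f"score ~ {slope:.2f} * age + {intercept:.2f}, R-squared = {r_squared:.2f}")

# The rising curve confirms that scores increase with age -- which we knew
# before collecting the data -- and it presupposes, rather than establishes,
# that the 0-10 measure validly represents abstract thought.
```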

Vygotsky, in contrast, discovered some important points about how abstract thought develops -points one would never get had he started by producing a mental measure, proceeded to collect data, plotted that data over time, and correlated it with other events. Yet that's just what Baltes, et al., suggest we do. They are locked into the tradition of starting their work with the production of measures, the running of correlations, etc., before asking the rather fundamental orienting concrete descriptive question as to the nature of the process to be measured (the "what" question). Admittedly, their definition of development is not the correlation of such measures with time or age itself, but with other events that change over time. Yet that, again, is not telling us about the process of development under study itself; it's only telling us which observable events correlate with the particular mental measures selected. We need only note the overlap of their suggested method with the whole problematic tradition of IQ to recognize that this is still a highly problematic approach. Vygotsky's point is that we have to understand the "what" before we start to measure anything. He is indicating to us that we won't know enough about the process under study to measure anything until we know how it develops. In rushing to measurement, the chance I'm running is that I will end up with a misrepresentation of the process under study, so that whatever such an analysis does, it may in fact mislead me rather than assist me.

I'm suggesting that what we detect in the Baltes et al., account of developmental research is the mechanical variable method. That's what's there, that's what has been overcome by dialectics, and given this fact it is vital to recognize that their approach has certain built-in limitations. It is good for what it is doing (summarizing and slightly extending the quantitative tradition within which it falls) but it is also delimited in its applicability. If we accept the implied mechanical model -the only one presented in their book which represents itself as developmental- then we will not only be dissuaded from bothering to carry out the kinds of taxing initial concrete observational analysis we require at the front end of our research, but also necessarily be excluded from recognizing the sorts of objective contradictions we encounter along the way and from retaining them during the latter conceptual stage of our analysis -which is (after all) the important part of theorizing developmentally. We might obtain some degree of abstract description but will always fall far short of explanation in this wider sense of that term.

The present critique does not constitute a blanket condemnation of applying quantitative analysis to any sort of empirical psychological issue. It is merely a call for caution about relying too heavily on mere quantification, and a prescriptive emphasis on the potential disciplinary value of the sort of empirical-developmental approach being advocated by Vygotsky and his colleagues. Historically speaking, the discipline has tended to take refuge in the mathematical-mechanical statement as if that were the be-all and end-all of empirical method, and I think that tendency has to be resisted utterly. We would be mistaken, however, to attempt to throw such quantitative methods out and adopt merely qualitative methods as a replacement. I'm really just arguing for putting these two broad types of complementary investigative methods into perspective, as well as for obtaining and retaining a firmer grasp on the descriptive disciplinary function of quantitative measurement or model-building rather than taking them up in the sort of unreflective manner which might tempt one to learn a set of mathematical formulas and go out to challenge the world like Don Quixote.

Crisis of Relevance

Thus far we have attempted to cast off the scientistic trappings of psychological operationism (including those contained in its "convergent validity of measures" variety) by way of sketching out the possibility of a seemingly workable combined "direct realist, emergent materialist, and dialectical-empirical alternative" to stand-alone variable psychology. Ultimately, an Epilog volume to this course will have to be produced. It will have to be aimed at demonstrating that this is the sort of methodological footing that 21st century General psychology should begin adopting as part of its ongoing effort to produce adequate theoretical accounts of specific psychological processes (like perception, learning, memory, motivation, and personality). More immediately, however, given that you have now been exposed to the basic notion of such a methodology, we are in a much better position to deal efficiently with the so-called "Crisis of relevance" era of General psychology, which reached its peak during the late-1970s through late-1980s decade.

Revisiting this era of disciplinary divisiveness and open subdisciplinary acrimony will be useful in various respects. Some of these will only become apparent once we have worked our way through that material, but two of them should be mentioned upfront. First of all, it will provide you with a practical exercise in consolidating the kind of tactical argumentative arsenal required when squaring off with those who (although now in the admitted disciplinary minority) still explicitly abandon the "Standard view" of science and adopt anti-objectivist argumentation. Secondly, it will help you avoid being accused of holding those views yourself when squaring off with those who still have the strategic advantage of being in the (albeit merely neopositivist) convergent operationist or variable psychology majority.

Let there be no misunderstanding (between us at least) about the vital importance of those two upfront points. Although it can be argued that psychology is now in a "postpositivist" reconstruction period (see Table 2 of the Introduction), such disciplinary periods are only roughly successive, and each of you is faced with a difficult war on two fronts because there is (in certain circles) considerable intransigence on many of the methodological issues we have already been dealing with. On the more cheery side of things, though, you should also recognize that you are already better off (methodologically speaking) than any previous generation of students in the history of the discipline and, if you play your professional cards right, psychology as well as society itself will be better off for your efforts than it is at present.

Theoretical Psychology's curious lapse into anti-objectivism (1964-1990s)

We have spent a fair bit of time outlining how and why positivist psychology -especially in its operationist variable model variety- has failed us. In particular, we have suggested that not only a direct realist appeal to nature, but also a recognition of emergent evolution (including its appeal to organismic adjustment or adaptation and human appropriation), as well as a transformative (rather than an interactionist) understanding of mental development were all -to a greater or lesser extent- missing from the largely numerical-empirical concerns or practices of most General psychology.

These methodological shortcomings all combined to produce a mid-through-late 20th century General psychology that was characterized by: a shrinking away from making ontological truth claims and from theory production; an overemphasis on indirect measures of static aspects of given psychological processes; a fixation on the mechanical collection of correlations (rather than the ascertainment of dynamic causal genesis); an inordinate concern about the repeatability or reliability of numerically defined variable sets or test-factor extractions (rather than over the issue of their ultimate validity); and -it must be admitted- a generalized inability to indicate how an astute investigator might zero in on the "relevant" aspects of whatever psychological process might be under study.

The importance of providing that long-winded outline will now become rather apparent because, when we look back at the highly critical "crisis of relevance era" reactions to such positivistic (or neopositivist) psychology, what we will notice is that the actual underlying methodological problems of the discipline have been misunderstood by a sizable minority of theoretical psychologists. To put things plainly, it was within the above methodological lacunae (holes, gaps) in the general psychological account that an assorted "anti-objectivist" movement gained an initial foothold and was eventually further formalized under what came to be called the "constructivist" or "constructionist" banner.

Proponents of constructivism claimed (quite correctly) that the positivist-operationist account of General psychology was riddled with scientistic hubris, reductionism, methodolatry, and dogmatism. They sought to attack such psychological positivism but also misinterpreted its main methodological error as being the effort to maintain scientific "objectivity" itself. Ironically, this mixed group of critical psychological theorists began openly adopting the very same persistent though implied positivist assumption (namely its underlying Humean or Kantian epistemology) which had already led to the argumentative bankruptcy of mid-20th century operationism and which was now thwarting the consensus-building efforts of contemporaneous variable psychologists, who (precisely because they too assumed that epistemology) believed that the only way to obtain valid psychological theories was through the merely indirect route of applying their empirical-numerical measurement technologies and seeing what came out the other end.

The constructivists recognized that the Achilles heel of contemporaneous empirical psychology resided in that assumed indirect realist epistemology, but instead of abandoning it, they embraced it openly. Their main argumentative stance was that if the quest for scientific objectivity had led to so many disciplinary difficulties (including the failure of the so-called "empirical tools to theory" approach of the most recent cohort of variable psychologists), maybe it was about time that we try giving it up.

Three prominent disciplinary figures (Sigmund Koch, Michael Wertheimer, and Kenneth Gergen) stand out as representative exemplars of this curious group of intellectual gadflies. If a collective affirmative motive can be ascribed to their individualized energetic efforts at all, it might be that they were seeking to escape from the previous era's inappropriately confined account of how psychological science works and its narrowly defined scope of inquiry. While taking up their arguments successively we'll endeavor to acknowledge the most cogent observations they make about the disciplinary difficulties of the "crisis era" they are criticizing. We'll also, however, distinguish such observations from the more questionable (sometimes self-contradictory) conceptual pluralism or outright relativist positions these figures propose adopting, as well as from the pessimistic conclusions they ultimately reach regarding: the contemporaneous status of psychological science; the overall coherence of psychological subject matter; and the future prospects of psychological theory building.

Objectivity and the "context of justification vs discovery" indicator

In 20th century psychology the successive failure of Logical positivism, of operationism, and of a peculiarly circumscribed neopositivist doctrine of empirical "falsifiability" (see below) was felt in a number of different ways. We have suggested almost as a matter of descriptive historical convenience that all of this somehow came to a head in the "Crisis of relevance" era of our discipline which peaked during the late-1970s through late-1980s. This descriptive term, however, is intended to encompass a whole slew of mini-crises stretching back to at least 1964 and lasting right up to the relative present as well. Westland (1978), for instance, lists most of these as his chapter headings: The Usefulness Crisis; the Laboratory Crisis; the Statistical Crisis; the Science Crisis; the Philosophical Crisis; the Professional Crisis; the Publication Crisis; and the Ethical Crisis.

In the current subsection we'll be touching on all of these mini-crises somewhat simultaneously rather than attempting to deal with them as separate or even distinct entities. The historical premise is that one of the most disconcerting ways in which the collective disciplinary "crisis" manifested itself among certain theoretical psychologists was in their explicit rejection of both scientific objectivity and the Standard view of scientific advancement in favor of an anti-objectivist, anti-realist, and shifting paradigm account of theoretical change.

So, before launching headlong into our consideration of the specifically psychological anti-objectivist views we have selected out for debate, let's take a few moments to: remind ourselves why the definition of "objectivity" contained within that era's updated Standard view requires us to assume at least some form of a realist epistemology; and emphasize that the admittedly problematic falsification doctrine being utilized by contemporaneous empirical (variable) psychology was consistent with only one delimited version of a wider set of competing opinions about what the actual relationship is between the so-called "justification and discovery" contexts of scientific practice. This can all be done succinctly by raising a few of the central points contained in the introductory chapter of Israel Scheffler's (1967/1982) rather masterful work on the philosophy of science and relating what he says to the particular situation of mid-to-late 20th century psychology.

What I like most about Scheffler's account of the Standard view of science is the way he defines scientific objectivity. A distinctly postpositivist flavor is present in his introductory chapter because Scheffler indicates quite clearly that it is not personal "detachment" from things that is being sought by science but "responsible control over assertion" (p. 7).

First of all, let's notice that when we begin to understand objectivity in this way, the often assumed sharp distinction between science and other human activities breaks down. Not only do we begin to see that responsible assertion is something that we should be striving for in all aspects of our lives, we can also begin to appreciate that objectivity in science has a distinct ethical (a.k.a., "moral") character because responsibility itself is an ethical issue. We want to be held as accountable for what we assert or do as scientists as we are in any other aspect of our life.

Similarly, let's note the realist tradition which is implied by this opening definition of objectivity. That is, to have "control" over one's assertions in this manner requires that we have some basic and shared access to the objects, events, or processes under investigation. Here, Scheffler mobilizes quotations from both Charles Peirce (1878) as well as C.I. Lewis (1929/1956) to suggest that the "purpose of knowledge" is to be true to something which is "beyond" the person asserting it and that the "intent [of our knowledge] is to be governed and dictated to" by something "independent" of our mind.

Scheffler's account makes perfectly good sense for both the simple assessment of commonplace assertions right on up through to the settling of more complex empirical or theoretical scientific disputes. Objectivity is responsible assertion and such objectivity requires an object outside the asserter to which we have access and toward which we must refer or somehow appeal in order to ascertain the truth of -as well as to maintain control over- our assertions.

If, for instance, I suggest that you are currently reading or listening to this sentence while it is raining outside your present location, you'll likely have no trouble checking out the veracity or falsity of that assertion by looking outside or listening at the nearest window ledge. The truth or falsity of my assertion resides in its correspondence with (or lack of correspondence to) the objects, events, or processes to which it refers. This is a simplified empirical-observational example, but the same goes for more complex theoretical propositions or laws. To be considered scientific, they too must be asserted "responsibly" -i.e., in such a way that we can check them out against the facts. It is in this manner that their truth or falsity (whether or not they correspond to any given aspect of nature, and how far they can be generalized), becomes a matter for empirical investigation and further theoretical speculation.

Scheffler's genius resides not in saying anything new as such about this "Standard" three-tiered view of science (facts, observational laws, and theoretical laws) but rather in stating explicitly something which had long lain merely implicit in past traditions of scientific practice. The use of at least some such form of "responsible assertion" constitutes not only one of the most prominent positional overlaps between neopositivist and postpositivist accounts of scientific objectivity, it is also a jointly shared methodological demarcation line between them (as a realist group) and various recurring anti-objectivist, anti-realist positions (e.g., Kuhn or Hanson in philosophy of science; and Koch, Gergen, or Wertheimer in psychology).

As Scheffler points out, however, it is always easier to simply reject the unpromising implications of blatant anti-objectivist philosophy than it is to put one's finger on exactly where the problems of its subtler (implicitly anti-realist) forms reside or to likewise produce the alternative views required:

"I cannot, myself, believe that this bleak [anti-objectivist] picture, representing an extravagant idealism, is true. In fact, it seems to me a reductio ad absurdum of the [anti-realist] reasonings from which it flows. But it is easier of course, to say this than to pinpoint the places at which these reasonings go astray" (1967/1982, p. 19).

In the chapters following his introductory comments Scheffler goes on to make his own efforts in these latter regards but we have a considerable historical advantage over him on this count because virtually all of the required positional (objective, realist, and materialist) alternatives have now also been put forward and elaborated on by others. Our ongoing task will be, therefore, to gain some practice in recognizing and countering the "bleak" aspects of anti-objectivist views in psychology as well as asserting the existing objectivist alternatives and highlighting their comparatively more promising disciplinary implications.

Finally, in the interest of closing our précis of Scheffler's discussion and beginning to relate it more specifically to our own disciplinary concerns, let's note that he periodically mentions the distinction between the "context of discovery" and the "context of justification" as an apt description of two distinguishable but related aspects of scientific practice or discourse. As Scheffler mentions, it was in 1938 that Hans Reichenbach (1891-1953) put forward an explicit and reasonably balanced distinction between these two "contexts" of scientific activity:

"The .... well-known difference between the thinker's way of finding [a] theorem and his way of presenting it before a public may illustrate the difference in question. I shall introduce the terms context of discovery and context of justification to mark this distinction" (Reichenbach, 1938, Chapter 1, sec. 1).

Reichenbach's point was simply that we should remain cognizant of the fact that there is a difference between "justification" (the manner by which we go about empirically testing the veracity of our hypotheses or likewise communicating our theories with others) and "discovery" (the manner by which those hypotheses or theories were produced in the first place).

A decade earlier, Reichenbach (1928) had acknowledged that working physical scientists (whose primary professional activity was concerned with the "technical" collection and numerical analysis of observational data) did not have time to delve into such issues too deeply. He not only began arguing that a specialized area of philosophy of the physical sciences should be promoted, he also did something about it by founding (along with Carnap) a highly influential periodical, Erkenntnis (1930–40), as a forum for that new scientific philosophy to be worked out and brought to the attention of frontline scientists.

By 1935, however, Reichenbach was distancing himself from some of the earlier more problematic aspects of Logical positivism. Most notably he abandoned what he called "absolute verificationism" in favor of a carefully qualified "probabilistic" variety: under which, even though one might not be able to formally "prove" a hypothesis (in any absolute sense), it was still possible to "verify" its observable consequences; test (or corroborate) the occurrence of its forecasts; and eventually assess the "weight" of the empirical evidence in its favor (see Reichenbach, et al., 1935/1949).

Reichenbach's explicit intention in Chapter 1 of his 1938 work is to likewise refine or extend ("further develop") other former Logical positivist views too. In the first few sections he is especially concerned with that movement's tendency to mistakenly emphasize mere justification at the expense of discovery, simply because the latter issues were previously considered too "metaphysical." The lesson here is that to confuse one aspect for the other, or likewise to collapse one into the other, can lead only to an artificial, narrow, and inadequate account of how science actually works. The justification and discovery aspects of science form a two-sided whole, a unity. They are not synonymous, but nor do their different contributory roles constitute an impenetrable intellectual dichotomy (in the Kantian sense) or a genuine professional dividing line (in the sense of one aspect being the sole concern of the scientist and the other being the special province of concern for a metaphysician).

Parenthetically, we should mention that Reichenbach did posit a division of labor (or interest) between psychology and philosophy of science. The psychologist, he suggests, is interested in the "actual way in which thinking processes are performed" while the philosopher of science is interested in working out a "rational reconstruction" of the "way in which" such thinking "ought to occur if they are to be ranged in a consistent system [of scientific justification and discovery]..." (Reichenbach, 1938, Chapter 1, sec. 1).

Reichenbach's impressively balanced account of these two aspects of scientific practice, however, received various subsequent lopsided machinations of emphasis. The most notable and influential of these were produced by Karl Popper (1902–1994) and then Thomas Kuhn (1922–1996) respectively. Popper, a neopositivist philosopher who emphasized the virtues of empirical "falsification," explicitly bracketed the question of the discovery aspect of science by suggesting that it was a merely "psychological" rather than a logical or philosophical question. In contrast, Kuhn's undeniably reactionary anti-empirical and anti-realist "paradigm" stance on scientific discovery absolutized the issue in the opposite direction. He ended up ruling out the possibility of any "coherent" account of the justification aspect. Let's keep our immediate discussion about these two highly related positions as brief as possible by simply indicating how the 'reasonings' contained in Popper's view ultimately lead one to Kuhn's view.

Like the positivists, Popper was concerned with distinguishing "pseudo-science" (or metaphysics) from proper science. Both types of activity profit from the use of "corroboration" (the marshaling of supportive argumentative or empirical evidence), so this could not be utilized as a procedural demarcation criterion between them. He eventually settled upon a "falsification principle" for such a demarcation. Scientific theories were to state the conditions under which they will be counted as having failed. Scientific hypotheses should be stated in falsifiable terms, with the best theories being likewise stated in a precise enough fashion so as to be more "vulnerable" to the outcome of a "crucial experiment" designed as a test. Although a theory cannot be proved to be true (verified), it can be shown to have survived a number of serious and relevant attempts at falsification. Such trial, error, and mutation of theories in response to attempts at falsification were said to demonstrate the "evolutionary" nature of scientific knowledge. So, even in his markedly neopositivist version of the Standard view, science still accumulates knowledge, providing an overall and notable historical continuity between the succeeding theories in any given discipline.

From the beginning, however -and certainly by the 1960s when his falsificationist views were routinely being appealed to in psychology- Popper seems to have posited an altogether too sharp demarcation line between the context of justification and the context of discovery. For Popper, the generation of theories and the testing of them are separate questions. He tends to treat the procedure of discovery (the issue of where hypotheses or theories come from in the first place) as if it were a matter both beyond the mandate of the philosopher of science and perhaps even beyond the bounds of reasonable rational consideration. Strictly speaking, such "reason" is only properly applied by the philosopher of science to post hoc matters of how existing theories might be falsified. The generalized message one comes away with is that the testing of theories against empirical fact is one kind of method (a necessarily scientific and rational one) while the discovery of things is not only different but somehow separate from that (because it can be and often is an irrational affair). Stated a little more specifically, Popper insinuates that it is the empirical-falsification procedure of science, and not its procedure of theory generation, which is rational.

This justification-heavy position is, I think, an unnecessary as well as unwarranted over-extension of Reichenbach's mere distinction between the justification and discovery aspects of scientific inquiry. Reichenbach was only really saying that the two aspects of scientific procedure are different. For him, scientists and philosophers of science alike must maintain some degree of purchase on the respective "critical" (empirical-numerical) and "descriptive" (analytical-historical) roles of these two aspects if they are to "work [well] together" in producing a "rational reconstruction" of a given domain of scientific investigation.

Popper also diverges considerably from Reichenbach's earlier balanced account regarding the respective interests of philosophers of science versus psychology. So, given that we have some vested disciplinary interests in this issue, let's take careful note of what is being done to the discovery aspect as well as to psychology itself if we take Popper's view to heart. What, exactly, is the disciplinary implication of portraying the discovery aspect as a merely "psychological" affair? Simply put, if we likewise oppose psychology to such rational ("logical and philosophical") method, it relegates our discipline to mere non-logical, non-methodological, and perhaps even irrational (e.g., psychopathological) concerns!

Ultimately, Popper's overemphasis on justification is not good for science or for psychology. It is really an implied appeal to the non-rational as a source for discovery in science. I hope that you can get a little tingle of intuition (if not a clear hint) as to what sort of problems that is going to lead to down the historical road. It's a very problematic position that lends itself all too easily to still further overstatement and radicalization.

It was in the Kuhnian notion of the "paradigmatic" succession of scientific theories that this second, diametrically opposed, phase of radicalization was brought about. The reactionary nature of this notion (as well as of the anti-realist movement it ended up promoting) can be appreciated by noting that it is explicitly discovery-heavy. We need not go into too much detail but I do want to draw your attention to how a few of the closing paragraphs of Scheffler's introductory chapter bear down on this issue. For instance, as Scheffler points out:

"The general conclusion to which we appear to be driven [by these anti-objectivist accounts] is that adoption of a new scientific theory is an intuitive or mystical affair, a matter for psychological description primarily, rather than for logical and methodological codification" (p. 18, emphasis added).

Kuhn's account of theory production (or succession) is both a radicalized version of Popper's particular account of "discovery" and -as a wider positional stance on the respective weight of justification vs. discovery- its mirror image. In short, Kuhn's stance is as lopsided as Popper's but its main emphasis is oriented in the opposite direction. Instead of bracketing the discovery aspect of scientific endeavor (as Popper had done), Kuhn embraces and attempts to liberate it from what he considers to be unwarrantable and dogmatic positivist constraints. Yet in doing so, Kuhn ends up collapsing the Popperian overemphasis on the empirical context of justification into a mere affair of relativized discovery. As Scheffler mentions, this stance (whether intentionally or otherwise) leads ultimately to the abandonment of any discernible objective constraints on scientific endeavor:

"Finally, with cumulativeness [of knowledge] gone the concept of convergence of belief fails, and with it the Peircean notion of reality as progressively revealed through scientific advance. For there is no scientific advance by standard criteria only the rivalry of theoretical viewpoints and the replacement of some by others. Reality is gone as an independent factor; each viewpoint creates its own reality. Paradigms for Kuhn, are not only 'constitutive of science'; there is a [vague] sense, he argues, 'in which they are constitutive of nature as well.'

But now see how far we have come from the standard view. Independent and public controls are no more, communication has failed, the common universe of things is a delusion, reality itself is made by the scientist rather than [actually] discovered by him. In place of a community of rational men following objective procedures in the pursuit of truth we have a set of isolated [individual] monads within each of which belief forms without systematic constraints" (Scheffler, 1967/82; p. 19).

The Logical positivist movement remained open to the charge of "dogmatism" by way of its implicit acceptance of Humean epistemology (which had been passed down from their predecessors the British Empiricists). Popper in turn, argued that science is not concerned with verification (or corroboration) so much as it is with falsification. He attempted to sidestep the epistemological problems of his predecessors (including Reichenbach) by charting out a tight-rope-walking empirical justification-heavy approach (sometimes referred to as Neopositivism). But in that rather fundamental epistemological respect, there is no essential difference between the Logical positivist (absolute or probabilistic) verification position; the Popperian negation doctrine (a.k.a., falsifiability); and the Kuhnian notion of relativized paradigms. They are all simply twists upon the same basic Humean theme. In other words, given that they are all either mere indirect realist or outright anti-realist positions, they also rule out the decidedly postpositivist direct realist possibility -contained in the revised Standard view's definition of objectivity as "responsible assertion"- of eventually evaluating the correspondence of theories to whatever particular aspects of the world are under study (e.g., physical, chemical, biological, or psychological processes).

For us, the implication should be clear. If (as Scheffler seems to suggest) the "context of justification versus discovery" indicator is a central dividing line (between neopositivist, merely reactionary, and truly postpositivist views), the pattern of emphasis can be expected to have been repeated in the history of 20th century psychology and might even be quite helpful in teasing out the differential emphases of operationism, constructivism, and the sort of postpositivist psychology being advocated (here and now) for adoption. In the interest of brevity, the following self-explanatory table indicates how the three main positional stances we are considering (operationism, constructivism, and postpositivist psychology) vary with respect to their emphasis on the justification or discovery aspects of scientific practice, as well as to the assumptive (methodological) platform they provide for that emphasis:

Position: Operationism
- View of Scientific Progress (epistemology): Constrained Standard view (Indirect Realism at best)
- View of Truth (criterion of proof): Consensus (falsifiability but not corroboration)
- Fact-Theory Emphasis: Facts or empirical tools only and a shying away from theory (room for variable psychology alone)
- Discovery-Justification Emphasis: Discovery aspect abandoned in favor of Justification

Position: Constructivism
- View of Scientific Progress (epistemology): Paradigms (Anti-Realism)
- View of Truth (criterion of proof): Agreement (persuasion but not corroboration or falsifiability)
- Fact-Theory Emphasis: Theories only and a shying away from empirical inquiry (no basis for variable psychology)
- Discovery-Justification Emphasis: Justification collapsed into Discovery

Position: Postpositivist Psychology
- View of Scientific Progress (epistemology): Updated Standard view (Direct Realism, Direct Perception)
- View of Truth (criterion of proof): Correspondence (corroboration, falsifiability and persuasion)
- Fact-Theory Emphasis: Reciprocal relation between facts and theories (room for variable psychology, Vygotskian approaches, etc.)
- Discovery-Justification Emphasis: Discovery and Justification

At this point I want to gently caution those readers who might be much less interested in tracking down such historiographic-methodological machinations than getting on with the job of doing sound scientific psychology. Some degree of familiarity with this particular "justification vs discovery" distinction is a vital requirement of embarking on just such an amiable and important empirical career path! Why? Because this distinction provides just the sort of key descriptive vocabulary for resolving concerns (which I am sure have already been forming in your mind) regarding the utility, strengths, or potential complementary nature of the "variable psychology" and "Vygotskian methods" traditions outlined above.

While variable psychology is almost entirely oriented toward the justification aspect (quantifiability, accuracy, replicability, etc.), the Vygotskian tradition is more geared toward the discovery aspect of psychological inquiry (novelty, parsimony, internal-developmental genesis, validity, etc.). The short and skinny of the caution being made is that any psychological position or movement which brackets or collapses one of these two aspects of psychological practice into the other (as done under operationism and as we will soon see under constructivism too) is bound to lead to an unsatisfactory result (for you, for psychology, and for society).

It was only at the tail end of the "crisis era" that a few notable theoretically minded psychologists started to deal a little more adequately with the preparatory methodological issues we have been considering above. By doing so, they started a continuing disciplinary momentum in the generalized progressive direction of the current "Postpositivist period of Reconstruction, Integration, and Reformulation." A.W. Staats (1983) gave the "crisis of disunity" a name, lamented its distracting influence on data assessment, and attempted to chart out a possible exit strategy by proposing a neopositivist doctrine of basic methodological assumptions called "uninomic positivism" (see also Staats, 1987, 1991). J.R. Royce (1988), for his part, noted that the traditional empirical criteria of variable psychology are justification-oriented only and suggested that the solution to psychology's problem of theoretical indeterminacy would come from obtaining "the same kind of deep and penetrating analyses of the discovery aspect" of our science (p. 63). That is, we must outline the "careers" of psychological positions as they advance from "weak to strong." This central point was followed up in a very important article by Tolman & Lemery (1990), which likewise answered Staats by arguing that neither the application of a "monolithic unifying principle" (e.g., logical or uninomic positivism) nor the collection of more data in and of itself will be helpful. The required antidote, they suggest, is to outline the successive types of positions in each professional psychological domain of interest in order to ultimately evaluate the theories contained within that domain for their correspondence to various aspects of the developmental processes under study.

As I have previously tried to indicate (see Ballantyne, 1995) we now have a useful skeleton outline of these combined justification and discovery aspects at our immediate disposal (see also Figure 3 of Appendix 2 in this regard). A working knowledge of this proposed "empirical and theoretical assessment methodology" could potentially allow you to get on with the job of carrying out markedly postpositivist psychological research without necessarily participating (personally) in the sorts of distracting crisis era debates which we are about to cover. A similar familiarity with the assumptive methodological basis which allowed that assessment tool to be produced in the first place would likewise allow you to rest assured that any anti-objectivist views you do encounter do not have to be taken seriously per se. As will be argued below, however, the proviso in this regard is that the sociological and political implications of the stubborn persistence or periodic recurrence of both operationist and anti-objectivist views in our discipline should remain a matter of grave concern.

Assuming that you do enter any one of the research fields of our discipline, it is a given that much of your career activity will be predominantly data driven. Would it not be advantageous, however, to be ready with the requisite counter-arguments or tactics to utterly devastate both operationist and anti-objectivist views when they are encountered? Well, I'll leave that decision up to you and press on with a roughly chronological account of the crisis era views of Koch, Wertheimer, and Gergen in the hope that it will come in handy for at least some of you.

Koch's critique of crisis era psychology

Since it is probably hard for any of you to imagine the appalling state of our discipline between 1960 and 1979, we might as well hear it straight from the horse's mouth by starting our account off with three of Sigmund Koch's most biting critical review articles (from 1964, 1969, and 1981). Ultimately we will want to make our own historically informed assessment of Koch's somewhat shifting position during these two decades, but we are beginning with these articles because they parse out his early views on our disciplinary crisis quite nicely and set the stage for considering the merits and shortcomings of the more explicitly relativized perspectivism stance he eventually adopts (see Koch, 1984; Koch & Leary, 1985; Koch 1992a&b, 1993, 1999).

By 1964, Koch was already well known for his active participation in mid-century psychological debate and subdisciplinary analysis. Initially, he produced a few notable but comparatively mild critical reviews (1951a&b, 1954) and in 1956 he likewise contributed to the Nebraska Symposium on Motivation (which was one of the more prominent consensus-building efforts of that era). More recently, however, he had just finished editing a wide-sweeping six-volume anthology of the discipline (Psychology: A study of science, 1959-63) and it was this collaborative venture that raised his professional stature to new heights.

This intellectual baseline of Koch's early position maintains an admirable demeanor of professional candor and inclusiveness. Consider, for instance, the balanced tone of his 1962 "Introduction to Study II" (volumes 4-6) dealing with the rather touchy and often poorly handled so-called "interrelational" issues of the discipline:

"Among the many types of questions that can be asked about a science none could well be more important than those concerning the relations among its chief fields of inquiry and its interpenetrations with other disciplines concerned with overlapping objects of study. Yet in psychology few questions have been pursued with less vigor...." (p. xvi).

".... The rather crassly defined fields into which convention parcels psychological knowledge could be distinctions of convenience, yet not of mere convenience. Certainly if we trace fields like perception, learning, cognition, motivation, emotion, into the history of psychology, near or remote, it becomes clear that such fields were premised on analyses of psychological phenomena meant to have systematic and even a crude ontological significance. The late nineteenth-century psychologists... thought they were talking about dimensions of analysis which in some sense fitted psychological phenomena.... The ... recent psychology prefers to see its subareas as collections of functional [mathematical] relationships among ... classes of variables. It is perhaps this nominalism which, .... has supported the rather nonchalant attitude toward interrelational problems. But, whether these collections of ... relationships are seen as having 'functional,' systematic, or even ontological force or not, their relation must be taken seriously if we are to have a meaningful science. Plural inquiries must stand in some kind of relationship if we are to have a science. If they stand in none, we have no science" (p. xvii).

"Words like 'fields,' 'areas,' 'research clusters,' 'disciplines,' are deceptive. The terms in which we talk about the architecture of knowledge inevitably suggest knowledge to be more architectonic than it is. Study II supposes that more explicit and intensive interest in the emerging structure of psychology is desirable. But ... [there are] ... limits upon any such enterprise.

The study does not, for instance, suppose that everything that has been or is being done in psychology can fall into place, or in some way be 'salvaged'... Far from it -much that happens in a science is expendable (would that we knew precisely what!).... The study does not presume that what is currently called psychology is best regarded as a single cohesive field of knowledge; rather, it stresses the importance of asking ... penetrating questions about the degree of integration or fractionation of the field relative to prevailing definitions...." (p. xx).

".... Though it has assembled the views of many... it has no intention of fusing them into some single... view of ... our science. On the contrary, it has sought to ensure against the emergence of an 'official' map by arranging that most sectors of the terrain... be inspected by a plurality of viewers. In the end, what matters... is that the individual reader enrich his own view of the science..." ( p. xxi).

"Field names are labels, variably applied, to what is seen by men as related clusters of inquiry. The flux of history, the variability of individual vision, and the unsystematic variety of senses in which a field itself can be defined, inevitably makes these labels highly ambiguous. Different men will -and should- continue to see fields differently relative to their own systematic beliefs and options.

Any new classification arrived at must be an organization dictated by the terms of some systematic view or theory. Until a 'theory' sufficiently compelling to command general acceptance comes along there can be no breakdown of fields any more serviceable than the present one. A theory of the requisite scope, analytic power, and adequacy to warrant any extensive realignment of fields is not exactly imminent and, indeed, may be unachievable in principle" (Koch, 1962, p. xxxviii).

Given that these initial congenial introductory overtones were eventually openly abandoned by Koch, it might be tempting to assume they were merely a manipulative though politically astute (foot in the door, slap in the face) technique exhibiting some sort of disingenuous (malice aforethought) motives on his part. I suspect they were more sincere because indications of the gradual shift away from his initial merely doubting Thomas position regarding the "imminent" prospect of "theoretical integration" in psychology toward an outright and dismissive constructivism are present throughout the articles we will be considering together.

In any case, the particular point being raised by providing the above baseline quotations is somewhat more contextual or biographical-career oriented than argumentative per se. It is simply this: With the one exception of Lee Cronbach (a lackluster neopositivist who in any event tended to overemphasize the potential role of multivariate techniques as a way to firm up theories or bridge subdisciplinary divides), Sigmund Koch (1917-1996) was now probably the best-placed professional personage of that time to begin conducting "his own" extended and critical summary assessment of the contemporaneous status of psychological science.

From about 1964 onward, instead of adopting Cronbach's style of smoothing over subdisciplinary acrimony or attempting to portray our discipline in the best possible and least controversial light, the highly charismatic Koch set out on an entirely different tack. By way of successively highlighting existing subdisciplinary differences as well as zeroing in on and fostering controversy about some of the most sensitive methodological issues of the crisis era, he can be credited with helping to wake general psychologists from their dogmatic slumbers; but it should also be admitted that he is guilty of throwing a lot of unnecessary rhetorical sand in their eyes too. So, while sorting our way through the strengths and weaknesses of his successive arguments we will have to remain aware of this intentionally provocative function of Koch's subsequent work.

Reply to Koch (1964)

In "Psychology and emerging conceptions of knowledge as unitary" (1964), Koch opens his "Introduction" by calling his reader's attention to the ongoing self-reflective "revolution" afoot in the humanities as well as in the philosophy of science. He suggests this "reappraisal" of the "rationales" and constrictive presumptions of prior science has not as yet been incorporated into psychology as our discipline "has long been hamstrung" by an inadequate and outdated (Logical positivist) conception of the "nature of knowledge" and of its "own subject matter." This is ironic because, should psychology break out of the "strange circularity" just described, it could "assume leadership" in the ongoing humanization of science; and one of the most "immediate" steps that can be taken in this direction is to abandon our disciplinary "reliance" on "the incubus of behaviorism" (pp. 1-6).

"Behaviorism has been given a hearing for fifty years. I think this generous. I shall urge that it is essentially a role-playing position which has outlived whatever usefulness its role might once have had... I suspect that there is a class of positions that are wrong but not refutable and that behaviorism may be in such a class.... If behaviorism is advanced as a metaphysical [ontological] thesis, I do not see what, in final analysis, can be done for a truly obstinate disbeliever in mind or experience, even by way of therapy. If it is advanced as a methodological [procedural] thesis, I think it can be shown (a) the conception of science which it presupposes (especially of concept definition and application and of verification) does not accord with practice even in those sciences which the position most wishes to emulate, and (b) that its methodic proposals have had extremely restrictive consequences for empirical problem selection and a trivializing effect upon the character of what are accepted as 'solutions' by a large segment of the psychological community. More than this, I think that for both metaphysical and methodological variants of behaviorism ..., the following can be aid: These are essentially irrational positions (like e.g., solipsism) which start with a denial of something much like a foundation-tenet of common sense, which can, in the abstract, be "rationally" defended for however long one wishes to persist in one's superordinate irrationality, but which cannot be implemented without brooking self-contradiction. The exhibition of such self-contradiction is I think, as close to a 'refutation' of behaviorism as one can reasonably get" (Koch, 1964, p. 6).

After an obligatory historical excursion through the unquestionably untenable reductive assumptions and confined (S-R) learning focus of "classical behaviorism" (pp. 7-9), the main argumentative thrust of Koch's critique is first aimed at exposing the more recent proclivity of particular "neobehaviorists" to adopt "watered down" and overextended (S-Intervening variable-R; or IV-Intervening variable-DV) versions of their approach so as to address "such formerly eschewed areas as perception, language behavior, and mediational processes in general." Koch suggests that these tenuous empirical extensions of former explicitly mechanical "connectionism" are engaged in so as to serve the researcher's own needs for professional "comfort and security" rather than motivated by "a passion for knowledge" (pp. 9-21). The critique is then retargeted toward cutting short the duration of disciplinary acceptability for such "unfruitful" extensions by way of questioning the "rule-saturated" and "mildly camouflaged" mechanical nature of the "intervening variable strategy" upon which neobehaviorism relies so heavily, as well as by pointing out its necessarily "restrictive impact on problem selection and (what in effect can amount to the same thing) problem treatment" (pp. 21-34). Koch also, however, provides a notable (albeit brief) criticism of the "regrettable" subdisciplinary rise of "existentialism" in the American context (pp. 34-37) and this is a pivotal theme he will return to again in 1969.

Three interrelated features of the 1964 article warrant our careful attention. Two of them stand out rather explicitly and are indicative of Koch's position to date. The third, although somewhat more implicit, is none the less central because it is symptomatic of the generalized direction his arguments will take thereafter.

The first notable feature is that Koch's rejection of contemporaneous "neobehaviorism" seems to be roughly appropriate as well as genuinely progressive. For the most part, what he is lamenting is its Logical positivist roots, restrictive operationist methods, and dehumanizing effects on psychological inquiry. Koch is quite correct, of course, that when early through mid-century psychologists attempted to be "scientific" according to the "strict rules" of operational definition or the intervening variable strategy, they ended up constraining their approach so severely that all sorts of interesting and important psychological questions (about mind, perception, emotion, language, etc.) remained almost "untouched."

Can human perception, for instance, be adequately investigated by treating it as a set of discriminative responses to presented stimuli? Should human emotion really be treated as being analytically equivalent to a set of publicly observable (though merely descriptive) operationally defined bodily postures or physiological measurement variables? Surely such respective "classical behaviorist" and "operationist" research strategies lose something important along the way. Is the actual nature of human perception or emotion further elucidated by adopting the "neobehaviorist" stance that hypothetical "intervening variables" occur between Independent Variable inputs and Dependent Variable outputs? Well, you can go a ways with that sort of "hyphen psychology" approach but sooner or later you'll have to admit that placing such "intervening" mechanisms into the brain of the organism -(e.g., a set of Tolmanesque purposively guided "drive discriminations" for perception; or some sort of Hullian socially contingent "rational equation" for emotion)- is not all that different from claiming that thinking is surreptitious movements in the throat (and we know who said that: Watson). This is, for the most part, what Koch means by his assertion that the optimistic "Age [of S-R] Theory" is over; and further, that what was being passed off as "theorizing" in that long drawn out tradition was merely a mildly-camouflaged reference to mechanical combinations between two or three classes of surreptitiously "agreed upon variables" (1964, p. 34; see also Koch, 1959, 1961, 1963).

The second notable feature of the 1964 article is simply that Koch is still writing from the unmistakable perspective of a concerned psychological scientist. This is made evident in a number of ways. For instance, even though he is appalled by the "lack of respect for ontology" (p. 32) being exhibited by contemporaneous neobehaviorists, he retains a faith that (whatever the disciplinary difficulties) "the subject matter will eventually reassert itself" (p. 31) and that empirical "experiential" investigation into central processes can be picked up in earnest (pp. 34-35). Likewise, when it comes to assessing the Existential movement, he at least mentions in passing that they "do not seem to think like scientists" (p. 37); and by 1969 he stops just short of calling its ensuing spurious therapeutic applications an abomination (see pp. 67-68).

Koch (1964) was proving himself to be the odd man out in the play-safe discipline of academic General psychology, but he most certainly still considered himself a member of that wider scientific community. The motive for emphasizing this biographical feature of the 1964 article is that despite many claims (in some quarters) to the contrary, it is in no way necessarily exclusionary with respect to the first feature. In other words, one can indeed reject the "hegemonic" roots, methods, and effects of positivism on our discipline while still remaining a psychological scientist. In this regard, however, we have to bear in mind the further biographical fact that Koch cannot be said to have remained an empirically-grounded psychological scientist per se throughout the entire remainder of his subsequent career. Somewhere along the line the peculiarities of his own intellectual orienteering efforts, the formidably challenging disciplinary "terrain" he was facing, or both, led him to stray from that initial path; and, perhaps, if we remain hypersensitive to any early indications of such divergence we might avoid a similar fate.

The third notable feature, therefore, involves a peculiarity of Koch's analysis of positivism that is present (albeit merely implicitly) in the 1964 article but which, -by pulling it out into the open now- will be quite helpful in countering some of his later views. This peculiarity can be encapsulated as an over-identification between positivist inspired empirical procedure and scientific "objectivity" per se. The positivists themselves had put forward a propagandistic identification of objective science with positivism and Koch seems to increasingly buy into this identification as his career moves forward. As our above analysis has indicated, however, the problem with positivism was not its attempt to be objective (to produce responsible assertions), but rather its underlying Humean epistemology which undermined that attempt by cutting off the researcher's access to objects, events, or processes and thereby necessitating the mere rule-based use of reductive physicalist (and ultimately sensationist) language.

More specifically, when Koch (1964) undertakes to attack positivistic psychology (whether in its classical behaviorist, operationist, or neobehaviorist manifestations), what he's responding to are the strict procedural rules they imposed on empirical inquiry; and he's right: they don't work. His critique of the reductive and restrictive procedural aspects of such mid-century scientism is very useful, but what becomes increasingly evident (not only in 1964, but especially thereafter) is that he is going to attempt to solve psychology's problems by not only throwing off those particular procedural shackles but by also falling back on an implied subjectivist position.

There is, in short, already an implied non sequitur at work in Koch's (1964) analysis or treatment of "positivist science" which becomes increasingly pronounced later on. One early indication, however, can be found while Koch is attempting to defend his brief suggestion that the promotion of various disciplinary "language communities" might be one way to overcome past positivist versions of research procedure in psychology. Here, he openly adopts the mixed historical argument and epistemological position that "the history of mankind and of science gives overwhelming evidence that high degrees of inter-observer agreement are attainable" (p. 30). Whatever his actual intent was in utilizing that particular phraseology, it does not strictly follow at all as an attempt to break away from positivist assumptions (or even definitively positivist procedures in psychology for that matter) because positivism itself was very heavily invested in just such an "inter-observer agreement" rather than correspondence view of truth. Section 3 of the current work outlined various 19th century positivist manifestations of that assumption, and the above major subsections of Section 5 have elaborated (in great detail) its underlying presence in early through late 20th century psychological research procedures.

Before moving on, we need only mention one further 1964 instance in which a suspicion is raised that even Koch's early critique of positivist psychology (itself) might be working incorrectly. When Koch refers to the "absurdities" of the behaviorist's "quest for conceptual homogeneity and unbridled generality" (p. 34), should we be taking him to mean that any quest for homogeneity and generality is ultimately doomed to failure? In this particular early instance, Koch allows this central methodological issue of our discipline to remain open to debate, but later in his career (in fact very soon thereafter) he adopts an increasingly dismal view.

It might be claimed that, early on, Koch simply failed to appreciate the deeper (subjective) methodological aspect of the positivist movement. Certainly in the 1964 article, his critical attention is focused merely on the dogmatically objectivist surface features of positivism as they were being played out in the discipline of psychology. In any case, it is his ongoing stubborn adherence to the fundamentally positivist "inter-observer agreement" theory of truth that becomes not only a significant contributory factor to his personal disillusionment with the discipline as a "coherent science" (see Koch 1969, 1971, 1973, 1981) but also -rather ironically- an inherently vulnerable methodological mainstay of his eventual efforts to promote an apparently ameliorative "psychological studies" account of disciplinary method (see Koch, 1984; Koch & Leary, 1985; Koch, 1993).

I do not take it as in any way credible, however, that Koch remained forever oblivious to this significant and continuing methodological overlap between his position and that of the positivists. Why? Because during the interim years -between his disillusionment and his attempts to promote ameliorative (albeit "asystematic" -see Koch, 1984) disciplinary intervention,- it was just this shared methodological mainstay that presented itself as an irresistible and explicit target of attack for more radical disciplinary figures like Michael Wertheimer or Kenneth Gergen. In other words, just as in the case of positivism itself, it takes but a few swift hacks to set Koch adrift in a sea of solipsistic subjectivity. So, while attempting to ponder and critique the ultimate outcome of Koch's mid-through-late career analysis a little more fully, let's briefly intersperse the contemporaneous radicalized "constructivist" positions of Wertheimer and Gergen along the way because (in their own peculiar manner) they do a lot of our arguing for us.

Koch, Wertheimer, Gergen, and the dialectics of disciplinary disagreements

While presenting the possible methodological options that crisis era psychologists might adopt, Koch (1969, 1981, 1984, 1993), Wertheimer (1972, 1988), and Gergen (1978, 1981, 1984) each argue as if there are but two mutually exclusive general positions available: The overly optimistic, reductive, and demanding Logical positivist view of unified science which wanted to be objectivist but failed; and the pursuance of some kind of more modest (as yet unnamed) subjectivist position which does not make any such excessive demands on our discipline. Ironically, in a reactionary bid to avoid past dogmatic procedural scientism in psychology, each figure (in their own way) adopts as a starting point for their particular analysis the philosophical endpoint of the Logical positivist movement (that objectivity is not possible) and attempts to proceed onward from there. As an anti-objectivist group, they gravitate successively toward the use of some form of anti-realist theoretical or conceptual pluralism in psychology. There is, in effect, a shared and central non sequitur running through their varied subjectivist accounts because each of their highly relativized positions was being portrayed not merely as an accurate description of the contemporaneous state of theoretical indeterminacy in the discipline, but as an indication that this is the necessary, ongoing, and perpetual state of our disciplinary affairs.

What I hope you can already appreciate is that psychology was not, and is not, in fact limited to any such forced either/or (dogmatically objective positivism vs subjective relativism) choice. There was, and is, a third relatively neglected methodological option available for psychology to adopt: The decisively nondogmatic and objective postpositivist one which is not only epistemologically connected to the objects or processes under study (by way of a direct realism) but also ontologically nonreductive too (by way of an emergent-dialectical materialist integrative levels theory). I am asserting openly here that this third methodological alternative is both supportive of scientific objectivity (in the sense of responsible assertion) and optimistic with regard to the possibility of theoretical integration and further empirical progress in our own discipline.

With specific regard to Koch, Wertheimer, and Gergen, the ways their particular positions differ are highly instructive for us. For one thing, they each serve as cogent disciplinary exemplars of the kinds of methodological positions to avoid in future. Just like the relativized and reactionary philosophies of science upon which they explicitly rely (those of Kuhn, Hanson, and/or Feyerabend), each of these disciplinary figures bought into one or another of the more problematic aspects of the positivist account of science and attempted to present their own positions as something radically different. But they're not! Secondly, and perhaps more importantly, they function as successive pragmatic checks on the methodological robustness of a truly postpositivist Standard view of science and thereby provide us with hints as to the manner by which we might go about utilizing that updated view of science to reform or improve contemporary psychology.

Reply to Koch (1969)

In considering Koch's "Psychology cannot be a coherent science" (1969), let's begin by acknowledging that the strange pleasure one gets from looking back on this particular article springs (I believe) not only from the fact that Koch seems genuinely and justifiably concerned about the lack of theoretical progress made in such areas as "learning" (which he picks out specifically for mention); but beyond that still, that he is quite rightly appalled by the current state of the therapeutic aspects of crisis era psychology. The wacky and weird goings-on in that latter "professional" psychological application are of particular note in attempting to understand the motives behind Koch's nearly complete disillusionment with the current state or imminent prospects of such a confused discipline.

We must also notice, however, that the two main suspicions raised by his 1964 article -i.e., he is not in fact departing from the old positivist identification of "science" with a mere objective procedure; and might be therefore drifting off in an albeit qualified subjectivist methodological direction- are in no way assuaged in 1969. Instead, they are further exacerbated. There is, in short, a marked discordance between the observational-historical points he scores near the beginning of the article (regarding, for instance, the persistence of divisive theoretical "sects" in psychology, p. 64; or the "lack of agreement on the empirical conditions under which learning takes place," p. 66) and the overly-subjectivized (perspectival) methodological conclusions he eventually draws from them near the end (in particular on p. 67 as quoted further below).

The somewhat rhetorical historiographic hinge argument he utilizes to link up such cogent observations and thoroughly questionable conclusions is instructive itself. He starts this argument off rather early on by abandoning his former illusions about the humanities acting as a model for psychology, because they too (he suggests) have now been infected with the "scientism" bug (p. 14). He then proceeds to contrast the "justly discriminated fields" (p. 64) of the "natural sciences" (namely physics and biology) with psychology. While the natural sciences "won their way to independence by achieving enough knowledge to become sciences;" psychology "was stipulated [by 19th century edict rather than content] into life" (p. 64) and its 100-year history "can now be seen to be a succession of changing doctrines about what to emulate in the natural sciences" (p. 64).

But it is just at this rather crucial and central point that he erects a severely confined straw man account of the de facto breadth of past "natural" scientific investigatory "methods." This is done by way of defining "science" itself as "a special analytical pattern" consisting of presumably empirical-mechanical-reductive procedures (p. 66). He then completes the hinge argument by suggesting -not surprisingly- that such a procedurally homogeneous view of scientific method is too narrow for the "social sciences" and for most of "psychology" to abide by:

"A century and a quarter ago, John Stuart Mill argued that the backward state of the social sciences could be remedied only by applying to them the methods of physical science, 'duly extended, and generalized.' His strategy has now been applied in billions of man-hours of research, ardent theoretical thinking, scholarship, writing, planning and administration in hundreds of laboratories by thousands of investigators.... In my estimation, the hypothesis has been fulsomely disconfirmed. I think it by this time utterly and finally clear that psychology cannot be a coherent [procedurally unified] science, or indeed a coherent field of scholarship, in any specifiable sense of coherence that can bear upon a field of inquiry. It can certainly not expect to become theoretically coherent; in fact, it is now clear that no large subdivision of inquiry, including physics, can be" (p. 66).

Koch is clearly wielding a double-edged rhetorical sword here because in attempting to cast aside our own past "scientistic" pretenses he seemingly ends up explicitly rejecting not only the application of (albeit carefully confined) natural scientific "methods" to most of psychology but also the very possibility of any "theoretically" consistent science or field of scholarship itself. But does the one point actually follow from the other? Let's simply note that question for now, because it is one that we'll return to again while considering Koch's 1984 paper. A more immediate concern is to simply counter the 1969 straw man account of past natural scientific procedure upon which Koch's further blanket condemnation of any possible theoretical-methodological overview of a given "field of inquiry" relies all too heavily.

Stated simply, even if we temporarily grant one implied part of Koch's first point, -i.e., that many of the past "methods" utilized by "natural science" (under which he apparently does not include the study of humans as part of the nature of things) were indeed mechanical and reductive,- does this mean that the "special analytical pattern" of science itself is necessarily mechanical-reductive? Section 4 of the present work, which carefully outlined the successive adoption of organic as well as mental and cultural evolutionary theory in the very era of biological and sociopolitical investigation that helped "spawn" the discipline of psychology, begs to differ! So do many of the above subsections of the present section; especially those outlining not only Woodworth's efforts to produce a dynamic S-O-R psychology (which Koch fails to mention in 1964) but also those outlining the further efforts of Piaget as well as Vygotsky and his colleagues. Despite the specific differences between them, the explicit aim of all of these successive efforts (both within psychology and outside its immediate purview) was to come up with nonreductive and non-mechanical methods of scientific investigation.

Do not each of these admirable historical exemplars meet the (albeit loosened) criterion Koch mentions for the "application" of "duly extended, and generalized" natural science methods to "social scientific" and "psychological" concerns? If so, -and this is really the only argumentative counterpoint I want to leave you with so far- then perhaps Koch might be drifting a bit off course with any further conclusions which utilize as an historical premise the supposed "disconfirmed" status of the old "Millian hypothesis" (p. 68).

Returning to Koch's ongoing 1969 account, we find him suggesting (again rather questionably) that the "analytic pattern" of research used by the so-called "established" natural sciences seems applicable to "at least one" of our disciplinary subfields; namely "biological psychology" (p. 67). This means, he goes on to imply a little later, that it might just as well be "incorporated" into biology (p. 67). How magnanimous of Koch to give us hope that someday this troublesome (and often highly reductive) subdiscipline of psychology might be taken off our weary hands by simply offloading it elsewhere! Yet, would this not be tantamount to what the positivists attempted to do with so-called "metaphysics"? I am inclined to the view that any psychology which attempts to hack off either of these two legs and still remain standing will at the very least become extremely unstable.

Finally, in his similarly discordant argumentative crescendo, Koch suggests that our collective hitherto disconcerting cognitive dissonance about the "scientific status" of "Psychology" can be alleviated by simply casting aside such erstwhile disciplinary pretense and adopting a more modest "Psychological Studies" label:

"The 100-year history of 'scientific psychology' has proved that most other domains that psychologists have sought to order in the name of 'science,' and through simulations of the analytic pattern definitive of science, simply do not and can not meet the conditions for the meaningful application of this analytic pattern.

When I say that designation as 'science' only vitiates and distorts many legitimate and important domains of psychological study, it is well to understand what I am not saying. I am not saying that psychological studies should not be empirical, should not strive towards the rational classification of observed events, should not essay shrewd, tough-minded, and differentiated analyses of the interdependence among significant events. I am not saying that statistical and mathematical methods are inapplicable everywhere. I am not saying that no subfields of psychology can be regarded as parts of science.

I am saying that in many fields close to the heart of the psychological studies, such concepts as 'law,' 'experiment,' 'measurement,' 'variable,' 'control,' and 'theory' do not behave as their homonyms do in the established sciences. Thus the term 'science' cannot properly be applied to perception, cognition, motivation, learning, social psychology, psychopathology, personology, esthetics, the study of creativity or the empirical study of phenomena relevant to domains of the extant humanities. To persist in applying this highly charged metaphor is to shackle these fields with highly unrealistic expectations; the inevitable heuristic effect is the enaction of imitation science.

As the beginning of a therapeutic humility, we might re-christen psychology and speak instead of the psychological studies. The current Departments of Psychology should be called Departments of Psychological Studies. Students should no longer be tricked by a terminological rhetoric into belief that they are studying a single discipline or any set of specialties that can be rendered coherent, even in principle" (Koch, 1969, p. 67).

The implied theoretical perspectivism and potentially destabilizing disciplinary consequences of attempting to adopt Koch's "psychological studies" remedy are not drawn out into the open by 1969. This would have to wait until his subsequent writings were produced (especially: Koch, 1981, 1984, 1993).

Reply to Koch (1984)

The most expedient place for us to turn in order to assess those implications and consequences is his International Congress of Psychology address "Psychology versus the Psychological Studies" (1984). That single-page paper was intended by Koch as a succinct summary statement of his cumulative position to date. For us, it will serve as a representative indication of the position he will retain throughout the latter part of his career and of the particular arguments which make up that position (see also Koch, 1992a & 1992b; 1993, 1999).

Given that we are primarily interested in where Koch is going with his 1984 paper, we need only note about the first few paragraphs that he emphasizes the scarcity of "significant" psychological knowledge resulting from our use of "natural science" methods (including experimentation) even in areas such as learning or motivation; questions their very applicability to "history-dependent fields like social psychology;" and makes a point of mentioning that psychology is "the most philosophy-sensitive field in the entire gamut of disciplines that claim empirical status."

The basic argumentative rationale for the remedial disciplinary position he will be laying out for us to adopt starts to take shape thereafter so let's concentrate on how he proceeds with it:

"The 19th-century belief that psychology can be an integral discipline motivated its baptism as an independent science. But both a priori and empirico-historical considerations prove that belief to be autistic. On the a priori basis nothing so awesome as the total domain comprised by the functioning of all organisms (not to mention persons) could possibly be the subject matter of a coherent discipline."

Versions of these two lines appear frequently in Koch's writing (from as early as 1962 onward), but what exactly do they mean? For no matter how broadly we might cast our collective investigatory net, why should we say a priori (beforehand) that our understanding of psychology's complex subject matter cannot remain -in some sense- "coherent"? Similarly, what are the "empirico-historical considerations" he has in mind when suggesting that it is now unsuitable to believe the discipline can somehow hold together as an "integrated" scientific endeavor which studies albeit diverse psychological processes?

Let's take up the apparently more difficult of these two questions first. When Koch appeals to "a priori" considerations which might interfere with our understanding of the breadth of psychological "subject matter," all he is actually referring to is his own personal investment in an (albeit objective idealist) Neo-Kantian epistemology. This only became clear to me while reading his rather tedious 1981 article where he explicitly resurrects the old Kantian "antinomies" of human experience argument and (after numerous pages of near-nonsensical circumlocution) ends up marveling at the "human being's capacity for discerning islands of order within the [uncertain] antinomal ocean in which we swim" (emphasis added, p. 265).

Obviously, if one has presupposed such an indirect realist barrier of the senses, then surely the prospect of even attempting to comprehend something so complex "as the total domain" of psychology becomes an "awesome" (and perhaps even impossible) task indeed. So why even bother to make such a presupposition in the first place? Why not simply assume that we have some sort of direct realist access to material bodies and to the events or processes which arise from such bodies -be they biological, psychological, sociopolitical, or whatever? That would (at the very least) save me the compound headache incurred while trying to explain to students that what Koch (1981) means by "meaningful [clear-headed] thinking" is actually an intellectual and emotional openness to the "antinomal [mutually contradictory, dichotomous] complexities" of the experienced object (the phenomena) rather than an openness to the dialectical properties of the ontological object itself (the noumenon); and that, therefore, what he means by disciplinary "coherence" (in 1984) is mere argumentative coherence about the thing-as-known because -after all (in his Neo-Kantian view)- our contact with the thing-in-itself (psychological subject matter) is merely inferential, etc.

With regard to the "empirico-historical considerations" which seem for Koch to count against the possibility of attaining an integrated scientific approach to psychology, we can safely assume that he has in mind the failed attempt on the part of the behaviorists to impose a crude procedural uniformity on the discipline. Koch is correct about one thing here: The use of any form of analytical reductionism (be it physiological, behavioral, mechanical operational, or statistical) is the wrong way of going about the quest for generality of results or procedures of investigation.

But does the rejection of such past reductive attempts automatically rule out the possibility of obtaining a coherent account of or investigatory approach to the ontologically integrated relations between various aspects of that subject matter? No, we don't have to tie these two things together. We can have a coherent account of the whole which is nonreductive. There is likewise no reason to doubt that a suitable set of scientific procedures can be worked out to reflect that integrative complexity, nor to expect that when they are applied they will produce anything less than generalizable results.

Yet because Koch seems to possess no overt knowledge of, or purchase on, the nonreductive methods of analysis and empirical investigation we already have at our disposal; and given that he has previous anti-positivist reasons for resisting any such unifying efforts (as well as a proclivity to view them as inherently dogmatic on "a priori" epistemological grounds), we'll see his 1984 argument gravitating successively toward a kind of procedural and conceptual pluralism (which ostensibly rules out -"in principle" as he is fond of saying- any kind of integrative disciplinary status for psychology).

"If theoretical integration be the objective, such a condition has never been attained by any large subdivision of inquiry --including physics. When the details of psychology's 100-year history are consulted, the patent tendency is towards theoretical and substantive fractionation (and increasing insularity among the "specialties"), not integration. As for the larger quasi-theoretical "paradigms" of psychology, history shows that the hard knowledge accrued in one generation typically disenfranchises the regnant analytical frameworks of the last --and that any new framework this knowledge is believed to suggest typically survives only until the next."

Here, Koch is taking the early historical expansion of psychological subdisciplines as well as the current theoretical "fractionation" of insular crisis era psychology "specialties" as evidence for the likely impossibility of disciplinary integration. But is the mere fact that "theoretical integration" had not been "attained" by 1984 really a sufficiently good reason to assume that we should give up on that goal?

We can certainly empathize with his anti-positivist reasons for being against past reductive approaches to procedural homogeneity, but in science and life there are lots of things we have never done before that we might yet expect to do once we find a way, or achieve the skill and ability to do them. Koch would most certainly call such an expectation quaint though naive. For as he put it in 1969, he too once expected that our disciplinary difficulties could be resolved by the careful application of a little clear thinking. Okay, so let's be naively optimistic until convinced otherwise. After all, isn't that better than simply giving up? I think we'll see shortly that it is.

In this regard, let's emphasize the point that the most disconcerting aspect of the above extract is contained in the last line. Here, the view of intellectual history being expressed by Koch is one which relies very heavily upon Thomas Kuhn's relativized notion of theoretical "paradigms" -where one "analytical framework" is disenfranchised by the next (simply different) one due to shifts in taste and political popularity. This is not the view of intellectual history contained in Scheffler's account of the Standard view of science -which recognizes cumulativeness at the empirical level and a subsumptive forward-moving succession of theories at the theoretical level. So, we have to be very careful about what we take away from the above extract.

Clearly, Koch is a very astute observer. Is he describing what is going on during crisis era psychology? Yes. The discipline is indeed theoretically fractionated and insular. What he takes that state of the discipline to imply, however, is that we will never achieve any theoretically coherent account of the whole, and that we can only expect a succession of incommensurable paradigms which will displace each other according to the new analytic frameworks of each generation.

Having thus completed his basic argumentative rationale, Koch now proceeds to reiterate the remedial "psychological studies" position (mentioned in 1969; 1981) and starts laying out what the disciplinary implications of adopting it might be:

"My position suggests that the non-cohesiveness of psychology finally be acknowledged by replacing 'psychology' with some such locution as 'the psychological studies.' Those 'studies,' if they are seriously to address the historically constituted objectives of psychological thought, must range over an immense and disorderly spectrum of human activity and experience. If significant knowledge be the desideratum, problems must be approached with humility, methods must be contextual and flexible, and anticipations of synoptic breakthrough held in check."

What I hope you can all already see is that Koch has carefully maneuvered his arguments in place and tailored his disciplinary remedy with a particular goal in mind. That goal is to begin espousing a sophisticated form of conceptual pluralism. If this is not clear to you so far, it soon will become pointedly obvious. At any rate, various cruder forms of conceptual pluralism began appearing in psychology during the 1960s and, as we saw previously, Koch (1969) was quite rightly appalled by them. But such views did not go away and they had become fairly sophisticated throughout the mid-through-late 1970s on up into the 1980s context of theoretical psychology within which Koch is an active participant. Subsequent versions also continued into the 1990s too (see Danziger, 1990, 1994, 1997).

By 1984, it is clear that what Koch appreciates is that if we actually do accept the subjectivized epistemological premises and methodological endpoint of Logical positivism (that all we really have contact with is sense data, and that scientific objectivity seems to be unattainable), then there's no way for him to convince me of his position (nor for me to convince you of my position), except through some sort of coercion or subtle persuasion. Formally speaking, if we are confined to such a conceptual pluralism viewpoint, I can't actually appeal to any shared material object, observable event, let alone the actual objective nature of any given psychological process as such; I have to appeal to you (your psychical experience, beliefs, etc.). It's your psyche against mine. My subjectivism against your subjectivism.

Being a reasonable fellow, Koch (1981, 1984) reasons that if this is indeed the case, then why should we even try to keep forcing everybody into the same mold? If these three colleagues want to have a particular theory of the psychological universe, let them. If those three want to have another theory, let them. Koch suggests, therefore, that the "non-cohesiveness of psychology" be acknowledged by adopting the "psychological studies." At face value, this takes the pressure off of us. So we three can now set up Psychological Study A, those three study B, etc.; and all this will be done in comparative "humility," because any pretentious "anticipations of synoptic breakthrough" or generality of results will be "held in check".

What Koch is alluding to in the above extract is the influence of positivism on our prior approaches to formalizing empirical psychological method. The Positivists were not humble! They were militant and acted as if they had the answers. In psychology especially, they attempted to make everybody conform to their analytical rules -including physiological reduction, the appeal to observable behavior, the use of operational definition, and/or proper statistical techniques.

So, we're going to be more "humble," he suggests. Methods must be "contextual and flexible" not dictated dogmatically as the positivists were doing. Let me do it the way I want to do it, and I'll let you do it the way you want to do it; and neither of us will kid ourselves into figuring that we'll come up with any sort of "synoptic" overview of whatever it is that we are studying. In other words, give up, let's just all do our little things!

"Moreover, the conceptual ordering devices, technical languages ('paradigms,' if you prefer) open to the various psychological studies are --like all human modes of cognitive organization-- perspectival, sensibility-dependent relative to the inquirer and often non-commensurable."

What's a "conceptual ordering" device? My language, my attitudes, my ethnic identity, my historical or cultural context and so on. All these, Koch suggests are "perspectival." They depend on your point of view or (as Koch mentioned earlier) on the "analytical frameworks" you are utilizing. Such perspectivism allows the behaviorist and the humanistic (e.g., Existential) psychologist to coexist with each other. Well, the Existentialist is investigating subject matter from an inside-outward perspective, the behaviorist from perhaps an outside-outside perspective; these are different perspectives, they are irreconcilable, incommensurable; there's no reason for one to claim the other is wrong, they should all live together in a kind of happy harmony. Such harmony is not, of course, based on any kind of theoretical integrity. That's thrown out! Toleration is what it comes down to; that's the key to such professional coexistence.

"Such conceptual incommensurabilities will often obtain [[sic. occur, -P.B.]] not only between contentually different psychological studies but between perspectivally different orderings of the 'same' domain."

You may be claiming to be dealing with behavior and so might I, but, if you are a follower of Skinner while I'm utilizing some more phenomenological view of behavior we'll both have to simply agree to disagree about these different perspectives in the same domain. It isn't like you're talking about behavior and somebody else is talking about emotions. No, we're both talking about behavior. It's the same domain of interest, but even in that same domain we have different perspectives on the thing being studied and we'll just have to get used to living in this pluralistic way indefinitely.

"Characteristically, psychological events are multiply-determined, ambiguous in their human meaning, polymorphous, contextually environed or embedded in complex and vaguely bounded ways, evanescent and labile in the extreme."

All of the claims made in the above extracted line are true and it was these mere surface features of the perspectivist position being laid out by Koch which achieved the status of popular appeal during the 1980s. For instance, even Ernest Hilgard's widely read Psychology in America (1987) adopted a weak version of such perspectivism while wrapping up his long and detailed work with a sweeping little subsection heading called "pluralism may be the answer" (p. 803). But what we want to consider very carefully indeed is whether the immediately above truths warrant the pointedly subjectivized conclusions Koch (and others during the crisis era) were drawing from them -i.e., that it is really naive or hopeless to expect to achieve any kind of a unified discipline which gets at fundamental truths about its psychological subject matter.

The issue under contention was whether the "free-market interplay of theories" advocated by Hilgard and other moderate figures would be a free-floating one which is detached at its observational base (as Koch seems to argue from here onward) or one that is somehow still firmly grounded in the world. That is, whether we are going to simply recognize a plurality of viewpoints and an overall unity within the diversity of the discipline, or adopt a conceptual pluralism per se. Whichever way you put it, what's at stake is the difference between a unified "psychological" discipline and an in principle disunited collection of "psychological studies."

In the interest of focusing your attention as sharply as possible at this critical argumentative juncture, I just want to alert you to the fact that we are about to begin dealing openly with some of the most confining (epistemological, logical, and practical) aspects of a movement which usually presents itself in public as liberating. Most immediately we'll be taking up that argumentative theme with regard to the endpoint of Koch's 1984 paper, but we'll return to it again a little later while considering the views of Wertheimer and Gergen too. Early on, Koch had no small part to play in this cheery public relations aspect of the anti-objectivist movement in psychology. But as we'll see momentarily, this pretense was dropped by him in 1984.

In 1964 Koch was announcing that psychology could now be freed from the procedural "constraints" of neobehaviorism. In 1969 the message was that if we stopped worrying about being a "science" and adopted his "psychological studies" position we at least had some "hope" of surmounting a "grave disciplinary impasse." Even his 1981 article, which can otherwise be encapsulated as a confrontational abreaction to the concurrent celebratory backslapping of APA 1979 (where it was first presented), had such a message implanted near the end: "You will be surprised perhaps," he writes, "to discover that my proposals are liberation ones and not devoid of hope" (p. 268). By 1984, however, Koch no longer has the stomach for such callow optimism. The message of hope is gone and a somewhat gloomier one of voluntary limitations on our former disciplinary "beliefs" or in principle "constraints" takes over.

"This entails some obvious constraints upon the task of the inquirer and limits upon the knowledge he or she can hope to unearth. Different theorists will --relative to their different analytical purposes, predictive or practical aims, perceptual sensitivities, metaphor-forming capacities, preexisting discrimination repertoires-- make asystematically different perceptual cuts upon the same domain. They will identify variables of markedly different grain and meaning contour, selected and linked on different principles of grouping. The cuts, variables, concepts, that is, will in all likelihood establish different universes of discourse, even if loose ones."

If we are to provide an adequate counter to the endpoint of his 1984 position, one crucial step is to appreciate the depth of the "constraints" Koch is enunciating in the above extract and in the one immediately following. In the above, there is an inherent anti-system-building constraint being implied. By way of using the term "asystematically different perceptual cuts of the same domain" Koch is emphasizing that different theorists cannot tie their respective "perceptual cuts" together in any way. You are simply going to do it differently than I do. If, perchance, the variables I consider "meaningful" are different from the ones you consider meaningful, we are simply stuck.

According to him, when two colleagues meet to hammer out their theoretical differences of opinion on, say, behaviorist therapy, they cannot know if they are even talking about the same thing, and that's why I can't say (for instance) that my phenomenological (field) theory of behavioral therapy for autistic children is better than your Skinnerian (operant) theory. For one theory to be better, the two theories would have to be referring to the same thing, and since we can't be sure of this on a priori epistemological grounds, all we can do (as he goes on to explain) is consider them as "different universes of discourse," optional conceptual frameworks, or "alternate organizations" which can in no way "preempt or preclude" one another (see also Gergen, 1984, p. 30).

"Corollary to such considerations, it should be emphasized that "paradigms," theories, models can never prove preemptive or preclusive of alternate organizations. That is so for any field of inquiry, but conspicuously so in relation to the psychological and social studies. The presumption on the part of their promulgators that the gappy, sensibility-dependent, and often arbitrary paradigms of psychology do encapsulate preemptive truths raises a grave issue reflective of a widespread moral bankruptcy within psychology."

Notice he's emphasizing above that to "presume" there is truth encapsulated in our theories represents "moral bankruptcy." Exactly what does he have in mind here? It is the positivists. They thought they were on the track of some kind of object-oriented truth about natural scientific and psychological processes, but when you look at their underlying philosophical (methodological) position you see that the "sensibility-dependence" they started out with undermines any possibility of asserting truth. For them to go ahead and assert it anyway (e.g., by way of appeal to operational definition, etc.) was clearly dogmatic (to say the least) and perhaps even, as Koch suggests, unethical too.

Yet even if we grant Koch's point regarding the dogmatism of past positivist positions, there is still the little problem of the fact that he has left himself no room to tie his (or our own) views back to practice. In 1984, for instance, there is no longer any basis for making his former cogent observation of 1969 that:

"If one is drawn by unassailable scientific argument to the conclusion that man is cockroach, rat or dog, that makes a difference. It also makes a difference when one achieves ultimate certitude that man is a telephone exchange, a servo-mechanism, a binary digital computer, a reward-seeking-vector, a hyphen within a S-R process, a stimulation-maximizer, a food, sex, or libido energy-converter, a utilities-maximizing game player, a status-seeker, a mutual ego-titillator, a mutual emotional (or actual) masturbator. And on and on" (Koch, 1969, p. 14).

Such a tie to practice (the "difference" made by believing one theory over another) seems to have broken down for Koch in 1984 and mere appeal to the whims of discourse has taken over. He's given up on the possibility of "preemptive" truth and suggests that things will go a lot easier for each of us if we do the same.

Once again, is not the conclusion implied here too strong? Isn't it possible that "preemptive" truth may be encapsulated in some assumptive methodologies and in some theories but not in those utilized by past positivist psychology? I'm suggesting that there is a kind of non sequitur at work in this endpoint of his 1984 position. The conclusions really don't follow because if we take Koch's 1984 conclusions seriously (instead of at mere face value), we end up in exactly the same position as the Logical positivists. That is, we will have no way of deciding whether our assertions are right or wrong, and as I've pointed out numerous times already, the argument falls all too easily into solipsism. If this is the case, we might just as well all stay home to watch television instead of bothering to attend International Congress of Psychology meetings, etc., because there really is no point in our coming together to resolve such technical-observational, theoretical, or disciplinary problems.

Being a philosophical sophisticate, however, Koch adopts (at face value) a Neo-Kantianism, with all of its inherent complications and circumlocutions, over the old Humean solipsism. This can be noted in the Kantian flavor of his specific critique of the so-called "cognitive science" (information-processing or artificial-intelligence) approach near the bottom of the paper.

The cognitive scientists were an important part of psychology's postpositivist ferment too. Like Koch, they recognized that the positivists led the behaviorists to deny that there are real cognitive functions. Instead of assenting to that dogma, they set themselves the task of studying such processes. Their overall research rationale was that if we focus our immediate empirical investigations in on intentional mental, cognitive, and reasoning functions we might eventually find some way of tying the wider subject matter of psychology together. It is around those intentional mental functions that we'll eventually draw in emotions, behavior, language, and all the rest of the related processes we are interested in understanding.

Having failed to adequately distinguish between the merely mechanical aspect and the more intentional-functional aspect of the ongoing cognitive science movement (see Flanagan, 1984), Koch then goes about expressing, on largely Neo-Kantian (epistemological) grounds, his cynicism about such an ambitious research rationale. What exactly is Koch saying to these cognitive scientists? Forget it, you guys; our investigations will always remain local and qualified because we will never reach any wider agreements about how such processes tie in with each other. Instead, Koch advocates "the modest, particulate, untidy, pluralistic" study of "local" language communities over any such search for a wider "unifying paradigm for psychology." But doesn't Koch's proposed corrective downsizing of our collective disciplinary aims itself fail to acknowledge that -according to his own previous argumentation- such miniature research communities would run into the same problem of establishing inter-subjective agreement (no matter how local)?

Well, that's the reductio of Koch's (1984) position. Koch (like Kant himself) knows that if he adopts such solipsism per se he is dead in the water -i.e., in the sensory qualities which we, for the sake of convention and by the mechanical force of habit, call "water." Other psychological metatheoreticians during this crisis of relevance era, however, were much less sophisticated than Koch in asserting their Humean or Neo-Kantian views, so the shortcomings of their positions are more transparent. We'll briefly mention two such figures (Wertheimer and Gergen) because their respective positions provide instructive contrasts not only with Koch himself but also with the neopositivist tradition of empirical-operationist psychology they set out to critique. For your convenience, these contrasts are encapsulated in the following table:

Position | Epistemological stance | Access to object | Ontology | Criterion of proof
Empirical-Operationist psychology | Neopositivist | inconsistent on issue of access to object (but usually not accessible) | objective idealist and/or reductive materialist | falsification criterion of proof
Koch | Neo-Kantian | object not accessible | inconsistent objective idealism | qualified inter-subjective agreement
Wertheimer | "Neosolipsism" | object not accessible | subjective idealism | personal persuasion only
Gergen | inconsistent Neo-Kantian | object accessible but not important | relativism | qualified inter-subjective meanings; sociopolitical persuasion (but not corroboration or falsifiability)

On the implications of Wertheimer's "Neosolipsism"

Despite all the carefully placed Neo-Kantian qualifiers Koch utilizes in his writings from 1964 onward (not the least of which is the 1984 one suggesting that competing views of a given domain of interest are "often incommensurable" -emphasis added), I've suggested that his position ultimately slides into solipsism. This is because, instead of adopting a postpositivist and direct realist correspondence view of truth, it relies upon an implied appeal to "inter-subjective agreement" (albeit within localized language communities) as a criterion of proof or truth.

Although Koch may have been unaware of this methodological shortcoming up to 1971, there was one man within the anti-objectivist movement itself who raised this critical reductio counterpoint explicitly. So, turning to that part of Michael Wertheimer's (1972) account might be doubly helpful in our present efforts to see through the pleasant facade of the anti-objectivist movement in psychology (to its deeper problematic assumptive foundations).

In his terrible little book Fundamental Issues in Psychology (1972), Wertheimer includes an informative chapter called "Subjectivity versus Objectivity" within which a convincing reductio is carried out on both the "interobserver" and "intersubjective" agreement criteria assumed by positivism and by Koch (respectively). This, of course, puts the onus on Wertheimer to come up with some other such criterion for making his own claims and the way he goes about it speaks volumes about the self-imposed methodological limitations this movement brings along with it when approaching important disciplinary topics like why psychological theories differ, what theoretical integration might mean, or whether it is even desirable (Wertheimer, 1988).

After opening the chapter by pointing out (quite correctly) that the positivist variety of "interobserver agreement" often served as an unquestioned "criterion and assurance for scientific validity" (p. 111) in the mid-century tradition of empirical-operationist psychology (including neobehaviorism), Wertheimer eventually follows this up with an equally cogent attack on the more Kantian variety of this assumed agreement criterion called "intersubjective agreement" (pp. 117-118). Under that heading he writes:

"Some have tried to escape what they consider the specter of [all-out] subjectivity in the scientific endeavor by proposing what one might call a democratic criterion of truth. The criterion is based on a modified [intellectual] democracy in that only a select few have a vote, and some people's votes may count more than others' [(e.g., a full-professor instead of an assistant, while a student's views would count hardly at all)]; but the idea that interobserver [or intersubjective] agreement can [p. 118] somehow transcend individual subjective experience seems to me nothing more than wishful thinking. While I am passionately committed to the validity of democratic processes in political matters, I am equally intensely opposed to a democratic criterion of scientific validity or 'truth'" (Wertheimer, 1972, pp. 117-118).

Now, if one were to read this particular extract in isolation, it might sound as though Wertheimer may be leaning toward a more workable direct realist and correspondence view of truth in order to replace the problematic agreement view shared by both positivism and Neo-Kantian psychology (including Koch). Alas, as we'll see, Wertheimer does not pursue this option. He does, however, immediately launch into a convincing reductio of that agreement view as follows:

"But be this as it may, consider the prior question of whether there is or is not agreement about a particular event among relevant [(qualified)] members of the scientific community. The only way in which one can tell whether there is agreement or not is for someone to observe this agreement. Hence, the criterion operation for the presence or absence of agreement is someone's (ultimately, my) cognition of the presence or absence of the agreement. Thus, making science public by striving for intersubjective agreement, can in no way yield the desired result of objectivity in the sense of independence from the subjective experience of scientists or a scientist -or, better, knowers or a particular knower" (Wertheimer, 1972, pp. 118).

The positivists and the Kantians turned to agreement between experts as a criterion for truth because they either assumed a fallibility of the senses outright or were not quite sure of the reliability of the senses as a basis for knowing physical-material objects, mental processes, etc. What Wertheimer recognizes (above) is that adopting such agreement criteria still leaves open a further tricky methodological issue regarding their jointly unquestioned assumption that they have access to that very agreement itself. In other words, to remain consistent in their argumentation, their epistemological doubts about their access to the object under investigation should extend to the communicative act of agreement too. The point is that, once you have cut yourself off from the object under investigation, other people are objects too. So, as Wertheimer suggests, interobserver or intersubjective agreement is not a viable solution to the problem!

Having recognized this inherent shared reductio of the positivist and Kantian position, what kind of solution does Wertheimer suggest? He suggests one which is (in its broadest outline) very similar to that implied by Koch (1984, 1993) and adopted more openly by Gergen (1981, 1984). That is, our discipline should take a far-flung flight off into total subjectivity by allowing individual psychologists to do whatever they might think is right or wrong according to their own idiosyncratic criteria.

As stated thus far, Wertheimer (1972) has arrived at a crucial analytical insight: that if we carefully follow the argument for an "agreement" theory of truth, we necessarily end up in solipsism (the view that I am the only reality I have access to, and that each of us lives in a kind of singular epistemological vacuum). Such a reductio ad absurdum is typically used as a form of negative counter-argument against a position (to reduce it to absurdity). In a roughly contemporaneous work, for instance, the philosopher of science Frank Cunningham (1973) carries out his own reductio on past positivist as well as more current anti-objectivist views (showing, as Wertheimer does, that they lead to solipsism) and then moves forward from there to outline an updated objective rationale for the "social sciences" -just as Scheffler (1967) had done a little earlier. So, one might expect Wertheimer (1972) to likewise consider the above convincing reductio as an indication that there is something wrong with the former premises being assumed (that the epistemological position of the positivists and the Kantians within psychology must in some way be wrong). You don't simply accept the absurd outcome of a given position itself as a "fundamental" and necessary truth. Wertheimer, however, goes on to conclude that solipsism must be right.

Since this particular absurdity was (in a loose way) part of popular 1960s culture and early 1970s psychology, let's at least attempt to understand what leads Wertheimer to adopt it formally. As far as I can tell, it is the combination of a traditional idealist ontology going right back to Descartes (the belief in the primacy of ideas over matter, which itself necessitates that an inside-outward method of analysis must be followed), with both a widely prevalent indirect (representationist) theory of perception (where the source of all knowledge is viewed as springing from "experience") and a peculiarly individualized analysis of the issues or potential consequences at stake, that leads Wertheimer, step by step, into adopting his so-called "neosolipsistic" position. This is all made particularly evident both under Wertheimer's preceding chapter heading called "the futile attempt to be objective" (pp. 116-117) and under the chapter heading called "affective consequences of neosolipsism" (pp. 122-123) which follows shortly after the intermediary reductio segment (mentioned above).

Under "the futile attempt to be objective" heading, Wertheimer starts off with the assertion that "it seems impossible to avoid the epistemological primacy and irreducibility of my own cognition" (p. 116), -which means (in essence) that despite minor equivocations mentioned later in his chapter he has bought into the traditional idealist ontology of yesteryear. Having done so, he then goes on to reason his way into a seemingly disconcerting epistemological corner where the "first person singular" is taken as the ultimate criterion of knowledge:

"Let us at this point attempt briefly to reexamine some perennial ancient issues in epistemology, the questions of how we know what we know and of the criteria for the validity of knowledge....

Since I am claiming that the first person singular is the ultimate criterion of knowledge.... : Knowledge that other people have is inaccessible or meaningless to me unless that knowledge exists... in my own experience" (p. 116).

If one follows this sort of idealist, individual, and representationist analysis to its logical endpoint, as Wertheimer does, one cannot know that any other person knows anything. He is also forced (and he recognizes this explicitly) into rejecting the existence of "reality" itself:

"The assertion that the tree falling in the forest makes a crash even though there is no one there to hear it and no device there for recording the sound, like the statement that a physical reality exists 'out there' independent of my experience of it, seems to be an arbitrary unnecessary postulation" (p. 117).

As Wertheimer mentions (quite correctly) along the way, such a "neosolipsistic position seems to be the logical end point of the epistemology of operationism" (p. 117). Curiously enough, though, he's going to be advocating neosolipsism himself. Why? In part, it is because (just like Hume, see Section 2) Wertheimer is compelled by the logic of the argument. Don't you feel the compulsion of the argument? If we've wiped out direct knowledge of the objects or processes under study, and of the communication of agreement regarding them, what's left? There's just one thing, a very important word in the vocabulary of these discussions: solipsism. Wertheimer is correct in one crucial argumentative regard: if you adopt the premises of both classical (pre-Darwinian) empiricism (that all knowledge starts with "experience") and representationism (that there is a barrier of the senses), it necessarily brings you to solipsism. I don't quite know why Wertheimer calls it "neosolipsism;" it's the same old solipsism that Hume faced and which the Logical positivists eventually fell into by way of making very similar assumptions.

What would you do if you had been led down such a seemingly logical path of analysis to a position as blatantly incredible and impracticable as this? As practicing scientists, the Logical positivists themselves, from at least Reichenbach (1938) onward, recognized that there must be something wrong with their own philosophy. This, I think, is the right way to go. Just one of the outcomes of their reanalysis was Popper's (albeit flawed) neopositivist position on falsifiability (see above and also below where those flaws are revisited again in our coverage of Gergen), but there were other outcomes too. These include not only a new nonreductive ontological stance called the integrative theory of levels analysis (see Novikoff, 1945; Herrick, 1949) and an updated methodological stance called the Standard view of science (see Scheffler, 1967/1982), but also a reassertion of both a direct realist epistemology (Cunningham, 1973; Wilcox & Katz, 1984) and a new direct perception stance (see Gibson, 1966; Shaw & Bransford, 1977; Gibson, 1979).

Wertheimer (1972), however, for seemingly personal-individual reasons, chooses to adopt the absurd anti-objective outcome of past indirect realist epistemological assumptions wholeheartedly. Under the heading of "affective consequences of neosolipsism" he shares his early misgivings but also implies they are surmountable by bracketing or simply shrugging one's shoulders with regard to any practical, social, and disciplinary concerns that might be raised:

"Some may feel that such a philosophy of convincingness is a philosophy of defeat, of pessimism, perhaps even of nihilism. I must confess that when I first started thinking about these issues, I was still grasping for the straw of absolutism, and this position struck me the same way; I struggled to avoid the solipsistic box into which logic seemed to stuff me. But now I think that the quest for, and insistence upon, absolutes over and beyond one's own experience is at best an unnecessary crutch, or at worst a set of goggles that blind or distort rather than correct. Why despair just because it is ultimately impossible to transcend individual cognition? Why need one try to prove or assume the existence of a reality independent of experience?" (p. 122).

Wertheimer seems to feel that it would be far more dogmatic of him (personally) to postulate an objective "absolute" existence outside his own immediate experience than to adopt a philosophy which is quite clearly "nihilistic" (ontologically speaking) and which is both impractical to maintain in everyday life and potentially socially irresponsible. It is quite easy for us to develop these two latter counter-argumentative themes. We need only notice that the above extract from Wertheimer is highly reminiscent of Hume's reflective cogitations on adopting his own philosophy -as covered under our Section 2 subheading called "Implications: Hume versus everyday life and science". One of the issues raised there was that the practical requirements of one's own everyday existence -as Hume (1739) points out himself- are at variance with the solipsistic philosophy being proposed.

That personal practicality counter would be one answer to the question posed by Wertheimer (in the last line of the above extract) about why we need to assume a reality outside our immediate experience. Another answer to the posed question, however, goes far beyond even such fundamentally individualistic concerns, proclivities, or practicalities. It resides within the wider extra-individual realm of social practice and scientific responsibility apparently bracketed by Wertheimer's position. For instance, if one can't even be bothered to postulate anything outside one's "individual cognition," why bother likewise "despairing" about any aspect of a world, a society, or a discipline of study which doesn't immediately impact one's own personal here-and-now narcissistic concerns?

Such a philosophy might be suitable for disingenuous intellectual gadflies to have fun playing with in the conference-hall, but even they can't use it when it comes time to plan or pay for the trip home. Beyond that, what about the rest of us who are sincerely interested in understanding some part of psychological existence or in mobilizing historically progressive change in some segment of wider society? I'm inclined to the view that there's nothing there for us.

More importantly, and aside from all the ethical undertones of the above three paragraphs, what exactly are the ultimate prospects for either sociohistorical or disciplinary "progress" (in psychology or elsewhere) if this solipsistic position is adopted? According to Wertheimer himself, there is no such thing! "Progress, then, may be illusory," he says; "all we have is change" (p. 122), and that change (we must assume from the argument as presented) either affects me immediately or it doesn't exist.

This implied connection with Hume's old viewpoints (not only his solipsistic manner of analysis but also his opinion that philosophy and science are but amusing pastimes) is eventually made completely explicit when Wertheimer mentions that all of this was "already clear in David Hume" (p. 128). The ultimate philosophical authority Wertheimer (1972) is appealing to is Hume, while for Koch (1981) it is Kant.

What we have in Wertheimer and Koch is a modernized psychological reenactment of the tail end of 18th century empiricism. In this particular case, Wertheimer plays 'Hume' to Koch's 'Kant.' The commonality between them is that they both adopt one or another of the worst aspects of Logical positivism, and the main difference between them is simply how far each is willing to go with those aspects. Koch refuses to adopt solipsism because he already knows that if he does, his argument is sunk. Wertheimer doesn't seem to grasp that, so the failings of his personalized philosophy are more transparent.

In the final analysis, we find that these two slightly varying so-called alternatives to Logical positivism are not alternatives at all. Koch and Wertheimer provision themselves by taking on board some basic positivist premises (chief among these being the doctrine that all knowledge springs from experience). They then cast off by unshackling our discipline from its seemingly ancillary (object-oriented) procedural moorings. The tide of their argumentative discourse is itself directional (instead of merely free-floating); but as Wertheimer (1972) so amply demonstrates, it leads inevitably to subjectivism, pluralism, and even solipsism if followed to its logical endpoint.

Suffice it to say that while I'm envious of the fun they must have had while engaging in such modernized sophistic pursuits, I resent both the time it takes to indicate their respective positional shortcomings and the potentially deleterious effects those positions have on the discipline of psychology. Some of the most disconcerting of these were already indicated by Koch's highly pessimistic (1984) analysis of the future prospects of psychology as a "scientific" endeavor. His swan-song article for American Psychologist, called "Psychology or the psychological studies?", says much the same thing, and I often wonder how many productive young minds he might have dissuaded from psychology over the years by way of his published works alone.

Koch, of course, was the raucous disciplinary headliner for this sort of de facto nihilistic and divisive position, but from time to time, Wertheimer too chimed in with his own more ostensibly diffident tonality on such matters. For instance, given his long-standing neosolipsistic beliefs and ancillary anti-progress viewpoint, it is no surprise to find Wertheimer (1988) writing on "Obstacles to the integration of competing theories in psychology."

I'll leave it for you to look over the details of that article on your own. While doing so, however, you might want to consider carefully the way Wertheimer presents the complex disciplinary and historical issues under debate as if they were simply a set of all-or-none logical either/or propositions (between, say, unity vs. disunity; a "multiverse" vs. a "universe"; a diversified "pluralism" vs. a monolithic holism). Is this not in itself a continuation by Wertheimer of the nondialectical absolutism of early Logical positivist argumentation?

Likewise, you might want to consider the obviously delimited straw man definition of so-called "theoretical integration" being utilized by that particular author. Wertheimer himself admits that to define such integration as the "translation" of one theory (or research program) into another "may not be integration in more than a rather trivial sense" (p. 134). So, what exactly stops him from utilizing a more adequate definition? Surely such definitions are out there to be had! It seems to me that there is no genuine intent on Wertheimer's part to provide a fair hearing for the actual issues under debate, because that would require him to refer to ontological integration on the one hand (see Novikoff, 1945) and to the possibility of subsumptive theoretical progress on the other (see Scheffler, 1967/1982). Both his communicative discourse and his historical analysis are carefully tailored to avoid those sensitive methodological issues.

Wertheimer (1988) mentions theoretical diversity, varying methodological assumptions, and differing research cultures or traditions, but never quite gets around to talking in any semblance of a normal manner about a given psychological process per se -in that article alone or indeed elsewhere (see Wertheimer et al., 1978; Wertheimer, 1980, 1984; Wertheimer & Wertheimer, 1996). For these reasons, and many more, there are indeed serious misgivings raised by his conclusion that: "One can favor general psychology, and careful theoretical and philosophical work, without endorsing the possibly unrealistic and perhaps even undesirable goal of unifying all of psychological theory" (1988, p. 136). As far as the current ongoing historical narrative is concerned, however, we had best move on to cover the somewhat related anti-objectivist approach of Kenneth Gergen.

On Gergen's "Valuational Advocacy"

Logical positivism was characterized by both a commitment to the Standard view of science and a highly problematic underlying (primarily Humean) epistemology which ultimately undermined its attempt to be objective. As we've already seen, Koch and Wertheimer throw out the science side, and what is left? The problematic epistemology. When they reject the object-oriented rules of positivism they end up in one or another kind of relativism (an overt subjectivism), and their outlined programs for psychology to follow are, to say the least, not very satisfying. In Koch (1981, 1984) the remaining overly subjectivized philosophical feature is highly Kantianized, and it ultimately stays afloat on no other grounds than his purely personal and dogmatic refusal to adopt solipsism (its logical endpoint).

This recognition of the failure of Koch's position raises the further issue of what the similarities and differences might be with Gergen's (1978a&b, 1981, 1984) own somewhat modified neo-Kantian "constructivist" position, so let's turn to that now. Since our current analysis is not merely aimed at exposing the highly problematic methodological position Gergen actually adopts and "advocates" for psychology, but is motivated by a wider desire to both understand his reasons for adopting it and to consider what the more viable alternatives might be, we'll have to be careful to mention some of the more cogent argumentative historical-disciplinary points Gergen makes (especially in 1984) along the way.

From the mid-1970s onwards, Kenneth Gergen (a prolific and influential social psychologist) set out to critique the contemporaneous neopositivist tradition of empirical-operationist psychology. According to Gergen (1973, 1976, 1978a&b, 1981), social psychology (and presumably General psychology itself) is not concerned primarily with unadorned or bare empirical events per se but rather with attributed meanings. Gergen (especially 1978b, 1981, and 1984) attacks the classical empiricist Achilles heel of neopositivist psychology by presenting a cleverly modified form of Kantianism which doesn't deny that we can know empirical objects or events, but which (instead) argues that they are not in and of themselves very interesting.

For Gergen (1981, 1984, 1985, 1987) all psychological knowledge is a "construction" of the human mind, a "social fact" is an attributed meaning thrust on an empirical event of interest, and a "theory" is an expressed interpretation of those social meanings. Theoretical knowledge, therefore, is simply a "system of interpretation" with which one may "choose to agree or disagree because one employs a different system of interpretation, but one may not empirically falsify a theoretical competitor" (1981, p. 335). In other words, if such a modified neo-Kantian position is adopted, then ordinary empirical evidence can no longer count in the acceptance or rejection of psychological theories. But if we don't check our meaning-laden "theories" about social psychological events by referring to things outside ourselves, where do we go?

Although Gergen himself (see 1978b, 1981, 1984) speaks of adopting a seemingly reasonable (though potentially combative) procedure of "valuational advocacy," where competing scientific theories are to be judged by their degree of achieved consensus within the community of scientists, others (including Sandra Scarr, 1985) were much more explicit about the methodological implications of adopting such a "constructivist" approach. They suggest that it is the "persuasive" power of given theories (both within and beyond the discipline) that counts. In other words, when you check my empirical or theoretical assertions out and disagree, the issue at hand is no longer one of truth; it is an issue of whether I can convince you of my point of view.

Both of these proposed notions of theoretical determinacy differ only in degree and not in kind. They stem from, and are dependent upon, a view of truth as consensus. Accordingly, I can make my theoretical statements true by gaining such consensus through advocacy or persuasion, and this is quite clearly a very different approach to scientific methodology than the one that says to check the object, event, or process under study.

Let's take a brief moment to consider what the wider implications are in terms of our previously outlined Standard view notion of responsible assertion. I now have no responsibility beyond my power to persuade. And where is such power to be found? It is political power. If you agree with my particular theoretical "system of interpretation," good; but if you disagree, tough! There isn't much more to be said. From the arena of reason and pragmatically guided empirical evidence, we have been moved into the arena of individualized will or negotiated affirmation (see Gergen, 2001). And in that arena what counts is clear to all: political power. Truth will be determined by those who can successfully persuade -in short, by those who have the 'clout' (After Tolman, 1986, pp. 2-3).

Well, that's one sort of bare-boned and rhetorical reductio of Gergen's overall position. But what exactly led him to adopt it in the first place and what does the late-20th century disciplinary appeal of his particular position have to say about the state of psychology at that time? Moreover, how might we retain the most germane methodological points Gergen is trying to make while simultaneously avoiding the pitfalls of his position per se? Fortunately, those questions can be easily addressed by not only considering the details of Gergen's (1984) arguments against prior psychology but also by zeroing in on the nature of a rather informative 1981 encounter Gergen engaged in with his neopositivist critics.

In Gergen's "Experimentation and the Myth of the Incorrigible" (1984) he presents the contemporaneous disciplinary situation as though there were only two methodological options available to adopt. On the one hand there is the "positivist-empiricist line" (which he views as untenable), while on the other hand there is some form of as yet untried "constructivist" relativism. In order to better understand his rationale for being so convinced that these were the only choices available, we need to note one of the more subtle features of this seemingly simple forced-choice style of presentation. That is, Gergen's account of the specific failures of early-through-mid 20th century philosophy of science and psychology differs a little from our own account (as outlined above).

In particular, whenever he talks about positivist-empiricism, it is almost always in terms of its intention to come up with generalizable theoretical laws and to produce objective or long-lasting empirical knowledge about the world. Yet are these goals not more characteristic of the Standard view of science itself rather than of the positivist-empiricist line per se?

Is not Gergen's argumentative quibble more properly directed (as his own title suggests) at the old (largely Platonic) positivist-empiricist search for "incorrigible" (unchanging, everlasting, or absolute) knowledge? I think it is, and we could easily assent to such a critical rejection of that unrealistic goal too. But like Koch, Gergen clearly throws the baby out with the bathwater on this rather central methodological issue of the proper knowledge-goals of postpositivist social or psychological science.

In brashly rejecting the unnamed Standard view of science at the outset of the article, what Gergen ends up with is an inordinately ontologically purified constructivist argument with no overt or practicable basis for deciding what is true, or right, or wrong at all. We are soon informed, for instance, that due to the combined effect of "phenomenal instability" (change across time) in the events we observe and the socially constructed "language conventions" we utilize as researchers, even our efforts to gain or maintain an "objective" empirical-descriptive account of seemingly "referential" mechanical or behavioral events are inherently and perpetually doomed to remain at the highly questionable status of mere "pseudo objectivity" (pp. 31-36).

Given that we reside at a somewhat later historical-disciplinary juncture, at which it is already quite clear that both neopositivism and constructivism have essentially played themselves out, we can easily demur from Gergen's false either/or methodological choice as presented back in 1984. Yet let's at least attempt to recognize a few of the more germane historical-disciplinary points that Gergen does indeed score (against the neopositivist tradition of empirical research) in this article.

First of all, when Gergen talks about the failures of attempting to apply Popper's Logic of Scientific Discovery to our own discipline -i.e., that competing theories within given domains of psychology (e.g., learning) have not been found to be so easily "vanquished" by way of empirical falsification as this overly simplified neopositivist argument suggests they should be- he is quite right. It is likewise correct for Gergen to assert that there has been a "full-scale deterioration of the metatheoretical launching pad from which experimental psychology was thrust into the world" (p. 29). We do indeed want to recognize there has been such a "deterioration" and we have just covered its complicated history in some detail.

The key, however, is to recall the exact (epistemological and ontological) nature of the deterioration in 20th century "experimental psychology" so that, when we attempt to replace the neopositivist tradition of empirical research (residing within it) with something else, we come up with an (assumptive methodological) approach that is going to do the job that the neopositivist tradition set out to do and failed to do. We don't want to simply throw up our hands and abandon its overall scientific project (of gaining object-oriented knowledge of psychological processes) or its ongoing goal (of resolving theoretical differences), as Gergen and his colleagues have done in their opting for a de facto disciplinarily nihilistic position.

The main point is this: since the Standard view of science is such a good one, should we psychologists really give it up so easily? More specifically, what I've been encouraging you to do is opt for a third methodological approach to empirical practice and theorizing that is carried out in accordance with a suitably updated (direct realist and integrative levels of analysis) Standard view -under which the collection of empirical facts is carried out on various ontological levels of psychological processes, and from which we form various grades of descriptive observational laws, until we produce explanatory theories which overarch those laws. This would be a methodological approach to General psychology that would allow for the possibility that at least some of the existing theories in the same domains of interest (e.g., perception, learning, memory, motivation, personality, etc.) might ultimately be reconcilable because they are being derived from the same (albeit complex and ontologically integrative) psychological aspect of objective existence.

To this end, I believe it is necessary to emphasize the marked intellectual opportunism at work in even Gergen's crisis era (conceptual pluralism) take on the issue of so-called "theoretical indeterminacy" in psychology. At this time, philosophers of science like Hanson, Kuhn, and Feyerabend were just waiting in the wings for somebody like Gergen to come along and stubbornly propound their views for what has turned out now to be a period of some 30-odd years. It was and still is my impression that Gergen wanted to sway his argument in a given relativistic direction and hence tends to cite only those philosophers who apparently back up that argument (see 1984, pp. 29-31). There was already, of course, an equally wide literature in defence of the Standard view, yet that literature was (and continues to be) conveniently ignored by Gergen and his dwindling number of constructivist colleagues.

There are, in short, a host of tactics for counter-arguing the constructivist position, not the least of which is pointing out that it is inherently anti-evolutionary. One could, for instance, confront Gergen with the evolutionary argument as follows: if a human being is only engaged in creating interpretations which have no necessary truth value or relation to the world, and this is a cognitive activity, then it is hard to understand how or why such an organism would have evolved, because evolution unfolds in a context of adaptation in which the activity of an organism has to fit the way the world is (otherwise it does not survive). In other words, if cognition is an evolved human activity then it must in some rather fundamental way reflect the demands of the environment, and any position which denies that necessary, vital connection with the world must reject the theory of evolution too.

Has not the theory of evolution already contributed too much to our knowledge of biological processes as well as the individual, social, and cultural development of human beings to treat it as though it is something that we can reject on the basis of the kinds of epistemological considerations that are laid before us in this or any other constructivist article? To me, that seems like a pretty good counter to raise with your ordinary run-of-the-mill constructivist colleague.

Yet it is just here that one might run into another troublesome characteristic of the constructivist argument. Just like solipsism, it is self-protecting. As long as a stubborn adherent is willing to persist with their superordinate obstinacy that everything is a construction of the human mind, then even evolution becomes a construction and no significant headway will be made on this count.

Perhaps because Gergen has subsequently held onto his self-refuting constructivist views for so very long, I find it more disturbing than amusing to note how reminiscent his (1984) argumentative tactics are of those utilized by Wertheimer in 1972 and 1988. In particular, Gergen (1984) ostensibly bemoans the failures of past positivist psychology while pairing that up with an implied straw man account of what the so-called "common faith" in our scientific assumptions might in fact be:

"Let the common faith not be shaken, it may be argued; if we but continue along the route of rigorous, critical probing of human activity, we may ultimately hope for results of more telling significance.

....

There are few who would care to play out the heroic role implied by these latter sentiments more than the present author. There is a certain existential enchantment in launching oneself into the open space of a groundless faith. Yet, several concerns that have emerged within my own work over the past decade militate against such blind commitment" (1984, p. 31).

Yet when we talk about our "commitment" to the Standard view of science, are we not referring to a little more than just "blind" neopositivist dogma or "groundless" faith? The Standard view makes sense because of the kinds of practical constraints we encounter every day, and because of the fact that we have each learned things about the world in the way the Standard view suggests we should. Haven't we all accepted theories of some minor variety and rejected others on the basis of some kind of result of practice? I think that inescapably pragmatic relation with the world around us lends a lot more than "groundless faith" to that Standard view.

Such a pragmatic imperative is just one of many motives for at least attempting to zero in a little more closely on the exact historical and methodological reasons for the failure of the neopositivist version of the Standard view in 20th century psychology. There are theoretical, disciplinary, and ethical imperatives at stake too. I think we can indeed find the reasons for that failure, and when we understand them well enough, we will each be able to avoid falling into the same errors in future.

One of the most deleterious methodological shortcomings of neopositivist psychology has indeed been its absolutized (and ultimately Platonic) notion of lawfulness. Although Gergen (1984) touches upon this in a somewhat roundabout way, I think it is rather crucial that we (at long last) draw this out a little more openly than he does.

In particular, when Gergen raises the implications of "phenomenal instability" and "historical change" for "scientific research" (pp. 31-34) he is touching on this important issue of law formation and draws his own conclusions. He notes (along with others) that "the complexity of human change and the unpredictable character of the ultimate consequences of any given act or decision of men [is] its most conspicuous feature" but concludes that: "If such characterizations seem compelling [i.e., of unpredictability, constant change, etc.], then the traditional belief that, when properly conducted, scientific research enables us to proceed ineluctably toward the truth is rendered problematic" (p. 32).

Now, the specific position Gergen seems to be rejecting here is the old static-mechanical materialist position. One of its assumptions was that since the world is fixed and immutable, our scientific laws about it would likewise have to be immutable and constant (see Section 2). In other words, embedded within both the mechanical materialist and the neopositivist approach to scientific lawfulness was an inherently problematic (ultimately Platonic) knowledge-goal of coming up with such "incorrigible" and fixed laws.

What Gergen is trying to say in 1984 is that things don't seem to work this way because both mechanical events and human beings are "changing" all the time, and therefore there cannot be any incorrigible laws like that. He is completely right on that score, but it does not follow (as he seems to imply) that we are therefore barred from coming up with scientific laws which "truly" reflect the nature of such a changing world. All it means is that our static neopositivist notion of laws was wrong to begin with! We need a better (postpositivist) conception of lawfulness which reflects the "changing" nature of things.

It is just here that the emergent aspect of evolutionary theory (see Section 4) comes to the rescue once again. If we recognize that evolution is in the business of producing new qualities (e.g., there are human qualities that did not exist in prehuman organisms, primate qualities that did not exist in pre-primate organisms, animal qualities that did not exist in merely chemical or physical processes, etc.), and if such new qualities are governed by principles which are distinct from those which governed earlier qualities, then new laws are produced along with such emerging ontological qualities.

We mustn't think of generalized laws as residing up there in some eternal absolute or mathematical realm while the qualities of individual objects or events reside down here in the ephemeral corporeal realm. That's thinking like Plato; and that's the way the neopositivist tradition of variable psychology has operated, and it's the way constructivist figures like Gergen think too. In not having adequately followed through with his tentative questioning of the neopositivist notion of lawfulness, Gergen is forced into eventually concluding that we psychologists cannot come up with any lawful truth claims at all.

It is only by way of abandoning both those approaches that we can appreciate that the law is the way things work. When we look at an observable phenomenon (an event, etc.) and abstract a description (which we now call an observational law), that law does indeed exist as a result of the abstraction ability of our cognition. I have extracted it from observable events, formulated it in language or mathematics, and I call it a law. But where did I get it? I got it from the event.

So, we now have an apt and workable reply to Gergen's final (rather off-base) summary line on this issue:

"The essential problem is the extent to which it is fruitful to adopt a theory of knowledge based on the stability or reliability of events in confronting creatures whose activities demonstrate precious little of either" (1984, p. 32).

To put the point plainly, if there is little stability and little reliability in concrete psychological events, then clearly the static neopositivist approach to scientific lawfulness is wrong; but all this means in turn is that we need to produce different kinds of scientific laws to describe and explain such dynamic events.

Given the "full-scale deterioration" of the neopositivist and constructivist attempts to deal with this central methodological issue, the disciplinary imperative we are faced with is to start interpreting and empirically investigating psychological processes in a way that better reflects the way they really are. Is that going to be impossible or fruitless? I don't think so; and as I've indicated (above), both the materialist form of dialectical logic as well as the process oriented approach to empirical method in psychology (adopted by Vygotsky and his colleagues) already go a long way in helping with this task.

Aside from the issue of lawfulness, there are a few other instructive methodological comparisons between Gergen's "constructivist" position and the neopositivist psychology tradition which warrant mention. These are revealed quite nicely in Gergen's "The meagre voice of empiricist affirmation" (1981), where he is answering an attempted neopositivist refutation of the methodological issues raised by his relativistic "generative theory" position (1978b).

The first aspect of this particular exchange of views is that Gergen (1981), by way of adopting a modified Neo-Kantian stance on epistemological matters, retains an argumentative upper hand against his neopositivist critics. This is because those critics, even while valiantly attempting to defend the empirical object-oriented intent of the Standard view of science, were still relying on the inherently vulnerable indirect realist epistemological assumptions of Classical empiricism (which as we saw previously got us into the overly subjective trap in the first place).

Classical empiricism (which goes back to the 17th-18th century Locke vs. Berkeley and Hume debate) assumed a representationist theory of perception under which all knowledge was considered to be somehow built up indirectly from the experience of sensory inputs (rather than derived directly from nature). How eminently reasonable; but if you examine that theory closely and consider the question historically as well as ontogenetically, it was totally false (see Sections 2-5).

Historically, it was this theory that led both to Hume's empirical form of "phenomenalist" skepticism (with visual perception being portrayed as the experience of sensory images and not truly representative of any "thing" per se) and to the succeeding degeneration of even Kant's "inferential" transcendental metaphysics into a somewhat similar rationalist form of phenomenalism (see Section 2). It also eventually undermined the objectivist intentions of various 19th-20th century positivists who were forced by its seemingly inescapable logic to conclude (because we can't prove there is that object on the other side of our sensory apparatus) that the various sciences differed only with respect to which forms of sensory experience are considered to be of central import (see Section 3).

Most notably, Logical positivism was the last of a long line of formalized and explicit movements in philosophy of science to attempt to make such classical empiricism work for science. As mentioned by Passmore (1967) it failed completely, and it should by now be readily apparent to you that the ensuing neopositivist tradition within psychology (see above subsections on operationism, convergent operationism, variable psychology) was -with regard to its perception theory at least- simply an archaic holdover from that positivist tradition (see also C.W. Tolman, 1991c).

Hence, if one neglects to adequately distinguish between Classical (philosophical) empiricism and the wider Standard view's use of empirical-observational techniques of investigation, then one remains vulnerable to the various Neo-Kantian (though ultimately phenomenalist and self-defeating) epistemological counters as presented by Gergen or Koch. This is just what happened in this particular exchange. Once Gergen came along to answer the neopositivist charges levied against his "generative theory" position, he laid waste to those critics by simply exposing the methodological disjunction between their objective "affirmations" and the dogmatic circularity which resides in their naive or indirect realist epistemological assumptions.

Whatever the weaknesses of his own position might be, Gergen (1981) is correct about one thing: his neopositivist opponents erred in attempting to defend the classical empiricist line. We don't want to defend classical empiricism (in the epistemological sense) either, but does this actually mean that we necessarily have to abandon the empirical-observational techniques of the Standard view? As we'll see shortly, Gergen appears to conclude that it does, and it is just this sort of conclusion that I'm suggesting we can not only question on various pragmatic or disciplinary grounds but also counter with a fairly well worked out direct realist epistemological argument in support of the Standard view (see Wilcox & Katz, 1984). Before turning specifically to the part played by psychology in that distinctly postpositivist affirmative argument, however, we should probably outline another aspect of the 1981 exchange because it conveniently indicates how and why Gergen's constructivist methodology is untenable.

The second aspect of this particular exchange is that, since Gergen was so comfortable in defending himself against neopositivist reassertions of the so-called classical "empiricist line," his guard was down. That is, he says a lot of things more openly and makes his own methodological assumptions clearer than he tends to do elsewhere. It is here, for instance, that (while writing on "fact and verification") he not only puts forward a thoroughly idealist notion of social history but ultimately goes on to inadvertently underscore that his own views on the relation between theory and facts are as confused as the neopositivist views he is attempting to counter.

In discussing the past influential European and American social theorists he somehow wishes to emulate, Gergen opens his idealist line of reasoning by saying: "The central question in such cases is how the investigator views his or her relationship to what pass muster as 'the facts.'" It appears, he suggests, "that these theorists recognized the primacy of the significant idea" (p. 334).

"Ideas about social life are not driven by observation, nor can they be invalidated through observation: They essentially organize the process of observation itself. It is my fear and belief that in the contemporary fetish for rigorous [empirical] method, the concern with revolutionary ideas has been woefully neglected" (p. 334).

Gergen immediately provides a spurious movement of the arm example to illustrate what a "social fact" might be and implies that the empirical description of the event is of no particular interest. It is the "attributed meaning," he suggests, that is important to the social psychologist.

"The critics offer a variety of desultory doubts regarding my views of fact and verification. Most of these doubts can be laid to rest by bringing the argument back to center course. What is a social fact that it may be verified or falsified? Consider the human arm moving in an upward direction by 120 degrees at 22 miles per hour, pausing for 2.5 seconds and then returning to its resting place at 12 mph. Is this a social fact? To be sure, with proper measuring instruments we can verify that this public event did occur. Yet, is it a fact in which social psychologists are likely to take interest? Would we wish to develop theories of such movements, and to assess the various variables (such as wind velocity, time of day, size of arm, and so on) that might be systematically related to the direction and speed for the arm's movement? This form of social psychology is hardly imaginable; it is implausible both on theoretical and practical grounds. Observable movements of the arm, or of other features of the human body, are [p. 335] not in themselves of concern to the social psychologist. What transforms the observation into a social fact is its meaning, to the individual, to others, or to the theorist. Did the movement of the arm reflect the individual's attempt to salute, gain attention, beckon a friend, or insult an enemy? When we move to the level of the human meaning of the action, the social psychologist's interest is kindled. One can build theories concerning the giving of esteem through salutations, the search for attention, friendship, or aggression. Yet, the focus in such instances is on the attributed meaning of the action. Theories concerning such meaning cannot be derived from the behavior itself -the direction and speed of movement. Theories represent interpretations of the action's social meaning, and this meaning cannot itself be observed (Gergen, 1981, pp. 334-335).

Basically, he's granting the objective existence of the action but arguing that since you may give it a different meaning than I do, our "theories" must have to do with these "meanings" rather than with the observed act per se. The misleading features of this example and its accompanying overly subjectivized argument regarding "meaning" may not jump out at you right away, so it is probably best that we make them pointedly clear. There is, in short, a long tradition of idealist methodology in psychology which essentially predisposes us to assent all too easily (without deeper consideration of the matter) to Gergen's view that "meanings" are brought or "attributed" (ascribed, assigned) to, rather than inherently present (residing) in an object; existing within an observable event; or signified by an intentional action. This latter set of unusual materialist views on meaning, however, becomes much more appealing when we step back from our inadequately explored idealist notions to consider exactly what is being proposed by Gergen.

What I'm trying to call your attention to is that Gergen's (1981) implied constructivist theory of meaning is not merely idealist but idealist with a vengeance. It assumes an inescapable and complete methodological dualism between meanings (in our heads) and empirical events (objects, observed acts, etc.). According to him, observable events aren't meaningful in themselves; they are "rendered" meaningful by a thoroughly subjective psychical act on our part, just as we are said (under representationalism) to impart secondary qualities (e.g., color) to the objects of perception.

In a way, Gergen (1981) is simply drawing out the implications of the neopositivists' own implied idealist stance on where meanings reside, and in this respect he has actually done us a favor. Under Gergen's specific account, however, the "construction" of meaning is portrayed as a process involving an arbitrary, willy-nilly thrusting of meaning down onto the Rorschach blot that he calls "behavior." I hope that you can already recognize how very Kantian this proposed theory of meaning is. Gergen's main intent here is to set his readers up for an unquestioned acceptance of a Neo-Kantian interpretive psychology in which there is no necessary connection allowed for between "meaning" and "the behavior itself" -except perhaps through the individual and historically relativized "conceptual systems" (the Kantian categories) we each bring to our interpretations of such human actions (see Gergen, 1984).

Where this unnamed and unattributed Neo-Kantian theory of meaning leads Gergen in 1981 is itself highly instructive. It leads him to a marked theoretical pluralism position where everybody can have their own "theory" (say, my meaning versus your meaning) and where there is no way to reconcile these differences. More specifically, what Gergen (1981) is going to argue (both explicitly and seriously) is that psychological theories (being such subjectivized modes of interpretation) cannot be "verified or falsified":

"The result of this state of affairs should now be clear. The symbolic meaning of observables is, either on the level of mundane discourse or on the broad theoretical level, not open to objective verification or falsification. There is no observable referent to which the investigator can reliably point. The meaning of human action is dependent on the observer's system of interpretation. The observer must bring to the event a conceptual system through which behavioral observations may be rendered meaningful. There is no means of verifying or falsifying a 'mode of interpretation.' One may choose to agree or disagree because one employs a different system of interpretation, but one may not empirically falsify a theoretical competitor" (Gergen, 1981, p. 335).

But the conclusions Gergen is drawing in this particular paragraph do not follow in any clear way from what he called our attention to in previous paragraphs -i.e., from the contemporaneous empirical "method fetish" of neopositivism, the particularized interests of "social psychology", or the requirement to somehow "build" theories regarding the meaning of comparatively mundane observable events. Nor are his conclusions merely overstated, misleading, or misguided in a harmless-oversight sort of way. In fact, they seemingly prove decisive in leading Gergen into a further, extremely awkward and fundamentally unworkable position that observational data may be "inimical" to theory building:

"It is in this light that the substantive criticisms levied against generative theory drop away. Data and observation may be inimical to the development of theory because naive observation is already constrained by preexisting conceptual schemata. The problem for the theorist is not to observe more carefully or broadly. In doing so he or she can only serve to sustain existing forms of interpretation. Rather, the major task is that of developing alternative schemata for interpretation. To be sure, American social psychologists may have had some impact outside the field. However, because of their assumption of naive inductivism they have too frequently engaged in mere recapitulation of preexisting assumptions held by members of the culture. From this standpoint, data cannot be anomalous or theory-threatening unless, perhaps, the theorist possesses a rigid, inflexible, or conceptually barren framework of interpretation. In no sense can hypothesis testing rule between competing theories, and the proliferation of cross-cultural research cannot restore objectivity to the judgmental process. Nor am I arguing in any way that theorists forfeit their place within their culture. They must participate fully in order to gain those communication skills which enable them to make intelligent alternative views" (Gergen, 1981, p. 335).

We could, of course, conceivably think up some delimited situations in which one has an overload of data. I think that is possible. In fact, it was one of the main historically situated disciplinary characteristics of not only the "Discipline building era" (1870s-1920) and operationalized "General psychology era" (1940s through the 1970s), but also the "Crisis of relevance era" in which Gergen is writing. But Gergen does not quite qualify his position in this reasonable (historically situated) manner. Instead, the point (as he is stating it) is that the "constraints" of "preexisting conceptual schemata" on observation are viewed by him as having a much more profoundly disruptive role. This is pointedly underscored by his suggestion (above) that what we should be doing is devising new "alternative schemata" for interpreting observational data. The "major task" for the theorist, he suggests, "is not to observe more carefully or broadly." But how exactly shall we take this conjoined statement? If we should not observe more carefully, shall we observe less carefully? If not more broadly, shall we observe less broadly?

Moreover, if theorists are to start grinding out such "alternative schemata," how will that in turn resolve itself? Are there any principles by which theorists might ultimately decide which ones are better than the others? Gergen (1981, 1984) suggests that a procedure of valuational advocacy be adopted in social psychology but it seems to me that what is being implied by it is a return to just the sort of unconstrained "riotous thinking" and "mere speculation" which Edna Heidbreder was warning us against back in 1933 (pp. 3-17). Admittedly, Gergen may not have intended to say exactly that, but that is literally what his argument as presented says.

On the one hand, Heidbreder (1933), being already under the influence of Logical positivism and situated within the midst of the so-called "Schools and Systems era" (1921-1939), was emphasizing the disciplinary scarcity of "sober" (descriptively contextualized) empirical facts relative to the existing historical plenitude of under-supported theories of human nature. On the other hand, Gergen (1981, 1984) seems to be attempting to call our attention to a profound shortcoming over the interim 50-odd years of neopositivist inspired General psychology -i.e., its overemphasis on the operationally defined factual level of analysis and its shying away from theory production. In his overly absolutized means of doing so, however, Gergen provides us with yet another lesson in the problematic assumptive overlap between neopositivism and constructivism. By denying that theories are derived from and dependent upon observations, Gergen is (in effect) cutting off psychological theorizing from its formerly overemphasized observational base. That is, he simply postulates the reciprocally opposite implications of their shared assumption about there being some sort of underlying adversarial relationship between theory and facts.

This dramatic turnaround, of replacing the old neopositivist fact-theory confusion with its mirror image (a constructivist theory-fact confusion), once again raises grave doubts about whether constructivism was ever actually a truly postpositivist movement. Instead, constructivist psychology seems to be simply a reactionary variant on the same sort of basic methodological misunderstandings residing in explicitly positivist psychology itself.

There was, of course, never really any ensuing danger that the constructivist position would ever be taken seriously enough to forestall the ongoing efforts of those practicing empirical or applied psychologists who still pressed ahead sincerely in an effort to understand cognitive processes, human development, etc. My feeling at the time, however, was not only one of incredulity about the fact that this particular group of theoretical psychologists -whose business it is to write about and sort through the theoretical strivings of the discipline- were not even attempting to support such frontline psychologists; but also an ongoing unease about why these particular figures were being afforded such personal or professional prestige in certain other circles.

The root of their popularity, I believe, went far deeper than the mere entertainment value of attending one of their talks. Koch and Wertheimer (on the one hand) and Gergen (on the other) found within their respective disciplinary age-cohorts a willing audience of well-placed psychologists with vested and entrenched interests in appealing whenever needed to views which allowed so-called creative (institutional or intellectual) inertia.

Some of these intellectual opportunists didn't want their own theories and so-called models or interpretive systems to be challenged. So the simplest way to avoid that fate was to appeal to Gergen (a modern day Protagoras) who asserts openly that psychological theories are not empirically challengeable anyway. Others didn't want to bother updating (let alone abandoning) their own cherished empirical research projects to accord with successive counter-arguments or even mounting unfavorable evidence. So it was likewise convenient to appeal to someone like Koch or Wertheimer who assert that we should all just learn to get along in a happy pluralistic harmony. After all, if one claims from the start (as the constructivists do) that it is institutional and disciplinary influence or personal persuasion that counts, and one already has such coveted power, does anything else matter? Not at all.

Direct perception as part of postpositivist psychology

Whether conceptual pluralism is presented as a necessary methodological position to be held or put forward more cynically as a delaying tactic in the face of wider disciplinary change, it is just these sorts of counterproductive arguments that I'm suggesting we can resist in various ways. In this regard, there are at least two sides (lines, or aspects) to such resistance.

One line of resistance involves the somewhat negative, discouraging and off-putting task of critically analyzing what is wrong with those arguments as well as locating exactly where they go wrong. This is what has been done above by indicating where the non sequiturs and reductios are in the respective radicalized arguments of Koch, Wertheimer, and Gergen.

I think we have at least established that the disciplinary cachet their philosophically relativist positions once held was based on false pretenses or promises as to what they might eventually imply for psychology. They do not lead to a freeing up of the discipline but to the impossibility of inter-area communication, theoretical agreement within the same domain of empirical interest, and even effective personal or professional action. If the more moderate empirically leaning, historical, or theoretical psychologists of the late-20th century had stopped to consider such implications before jumping on the trendy methodological bandwagon of conceptual pluralism (e.g., Dixon, 1983; Hilgard, 1987; Danziger, 1990, 1997) we'd now be wasting a lot less of our disciplinary energies on countering such positions.

A second aspect to such resistance involves the more affirmative task of somehow coming up with an explanatory theory of perception to support the realist-materialist methods of investigation which science typically utilizes. Happily, psychology has already played an important historical role in providing such a theory.

More specifically, as indicated in our introduction to the Crisis of Relevance era, there is a notable postpositivist flavor in the implied direct realist strivings of philosophers of science like Scheffler (1967/82) or Cunningham (1973), and it is this aspect of objective discourse or practice that we want to help shore up. So, it is about time that we begin driving home our former point regarding the productive working relationship between their efforts to update the Standard view of science and similar advances made in the area of direct perception theory by James J. Gibson (1966, 1972, 1979), as well as by various second-generation "Gibsonians" (Shaw & Bransford, 1977; Michaels & Carello, 1981; Reed & Jones, 1982; Lombardo, 1987; Reed, 1987, 1988a&b, 1996a).

Here, we'll be moving beyond our previous mere acknowledgment that there is such an "Ecological theory of direct perception" (Appendix 2) and our emphasis on its assumptive features (Section 3), to consider some of the main empirical-procedural aspects of that alternative to indirect perception research. Some of the wider disciplinary or interdisciplinary implications of such an ecological approach will be mentioned along the way. This will be done by noting both its procedural overlaps and differences in emphasis with the other favorably reviewed psychological approaches outlined above. Finally, in a brief bid to argue that its proper disciplinary place is best understood as an essential part of a combined postpositivist general psychology (rather than a stand-alone movement), we'll mention its utility in accounting for the practical success of a few high-profile therapeutic interventions made over the past few years (though not necessarily in helping to interpret all of the empirical results).

On Gibson's "ecological approach"

Gibson's The Senses Considered as Perceptual Systems (1966) presents an evolutionarily contextualized, comparative psychological, and ultimately explanatory theory of direct perception for five active "modes of attention" (General orientation, Listening, Touching, Smelling/Tasting, and Looking). For those seeking a quick entrée into the approach being proposed therein, the most germane selections are the Introduction (pp. 1-6); "Chapter III: The Perceptual Systems" (pp. 47-58); "Chapter XIII: The Theory of Information Pickup" (pp. 266-286); and "Chapter XIV: The Causes of Deficient Perception" (pp. 287-318). The first three selections lay out the basic historical rationale for, and theoretical features of, his new ecological approach quite nicely, while the last concerns its immediate implications for the age-old problem of so-called perceptual error.

Even though the impressive accomplishments of that work remained under-recognized in our own discipline for some two decades, they were a turning point in the former relationship between scientific psychology and philosophy of science. Before 1966, our discipline was routinely forced to beg, borrow, or steal various aspects from realist-materialist philosophy in order to provide a reasonable (direct realist) rationale for appealing to its observational base. Forever after, such direct realist philosophers would have to acknowledge that psychology has something to offer them too. At long last, here was a compelling scientific theory (grounded in both phylogenetic and ontogenetic evidence) which provided a workable alternative to the indirect theory they had been collectively attempting to counter (on both pragmatic and other methodological grounds) for some 250-odd years.

As outlined previously, however, psychology at that time was in shambles. Add such bad timing to the fact that Gibson came to this new theoretical understanding of perceptual processes only near the tail end of a busy career -in which he was both an active producer of "traditional" indirect perception research and the author of a celebrated vision textbook (Gibson, 1950)- and we can begin to appreciate the circumstances working against a welcome reception (see Reed, 1988b, 1996b).

"In my book, The Perception of the Visual World (1950), I took the retinal image to be the stimulus for an eye [along the traditional lines of Helmholtz]. In this book I will assume that it is only the stimulus for a retina and that ambient light is the stimulus for the visual system. This circumstance, the fact of information in the light falling upon an organism, is the situation to which animals have adapted in the evolution of ocular systems. The visual organs of the spider, the bee, the octopus, the rabbit, and man are so different from one another that it is a question whether they should all be called eyes, but they share in common the ability to perceive certain features of the surrounding world when it is illuminated. The realization that eyes have evolved to permit perception, not to induce sensation, is the clue to a new understanding of human vision itself" (Gibson, 1966, p. 155).

Subsequent to the 1966 work, Gibson did manage to produce a notable summary article called "A theory of Direct Visual Perception" (1972) as well as a further book-length work called The Ecological Approach to Visual Perception (1979), but he passed away in December of that year. It is rather easy, though, to zero in on the main investigatory emphasis and procedural aspects of this "ecological approach" by turning to Gibson and the rules of thumb presented by some second-generation figures in this area (Shaw & Bransford, 1977; Michaels & Carello, 1981; Reed, 1987, 1988, 1996a). These will come in very handy when attempting to evaluate the potential and limitations of this particular departure from traditional research.

As Gibson put it, the "new departure" (or shift of investigatory emphasis) being called for by the ecological approach is one from "sensation-based" to "information-based" analysis of "perceptual" processes (1966, p. 266). Whichever mode of perceptual attention is being discussed, a careful distinction is maintained between two ontological levels of analysis. On the one hand, "perception" is treated as a psychological act of the intact organism involving the direct pickup of structured information from the environment over time. On the other, "sensation" is treated as a set of physiological processes belonging to the various sorts of receptors. That is, a perceptual act does not typically involve the experiencing of receptor stimulations and is more characteristically understandable as an active seeking out of relevant information from the environment.

Despite the particular point made in the quotation above (regarding eyes being evolved to permit perception), Gibson (1966, 1979) clearly recognizes that what is biologically significant for one organism may not be relevant for another. Evolution, that is, has differentially selected the structure and functioning of both sensory receptors and perceptual systems. Although a better feel for the qualitative shifts (in structure and function) such selection brought about can be gained by turning to the careful (phylogenetic, social, and societal) analysis of Leontyev's activity theory account (1981), the main commonality between their accounts is the evolutionary one: For the organism to survive there must be a direct "reciprocity" between that organism and the environment in which it is active (see also Lombardo, 1987).

In a very fundamental way then, all such reciprocity with, or knowledge of, that environment does not begin with "sensory experience" (as the representationalists long claimed); it begins with objects. Gibson (1977) points out that structured objects afford (provide) the organism with a chance to survive or reproduce and suggests it is those "affordances" that organisms actively seek out in various ways. Leontyev (1981) clarifies for us exactly what those various active ways of seeking are. Together, I believe, these two approaches present a pretty good case: There is no "barrier of the senses" because (strictly speaking) the epistemological act is neither reducible nor equivalent to the particular physiological mechanisms which it utilizes; it is something more.

Shortly, we will be considering a few recent serendipitous empirical tests of this jointly held claim of the direct perceptionist and activity theory movements by way of so-called "stimulus substitution experiments" (where the presentation of structured stimulation to the tongue has been utilized so as to allow visual perception by otherwise blind subjects). We'll be indicating that the results of those experiments have been misinterpreted even by some of those who carried them out. For now, however, let's simply set the stage for that consideration by getting a better taste for the procedural aspects of ecological research. That is, given the ecological psychologist's shift of emphasis (towards perception instead of the traditional investigation of physiological sensation or neural pathways), one would expect there would be procedural implications for the kinds of empirical questions asked in their research or the order in which it is carried out.

Shaw & Bransford (1977) recognize that if perception is an active process of information pickup by an organism (rather than a passive reception or physiological transduction of sensations by receptors) then the focus of research should be on what is perceived. Any attempts to reduce the question of what information is being picked up by an organism from the environment to questions of how it is processed by the mechanical or physiological mechanisms inside that organism (a retinal image being transduced into nerve impulses which travel through the optic nerves to the occipital lobes; etc.) are "doomed to failure" because the two sorts of questions fail to address the same issue (p. 4).

Any motile multicellular organism (let alone a human subject) is not a collection of passive mechanical serial receivers waiting for wave emanations to impinge on it like a radio antenna or satellite dish. Nor is it merely a set of physiological receptors. It is an active (and often intentional) seeker of information about the relevant affordances -invariants (stable features), variants (changing features), or higher-order invariants (the typical way things change)- of the environment it is moving through and living in. Exactly what is relevant may differ from one species to another or from one circumstance to another, but such information is always gained by engaging objects or venues in practice so as to pick it up over time.

The main "How" question raised in ecological perception research is one of how information "stored in the world" is obtained by such active investigative creatures. The organism's perceptual investigation of its environment takes time to occur as well as space for the organism to maneuver in. Similarly, the dominant perceptual system utilized by a given higher organism will typically differ not only from those utilized by another species of organism (say Looking or Touching vs. Smelling/Tasting), but also ontogenetically (across its own life course) as well as according to the immediate surrounding circumstance (say diurnally). On the latter count, for instance, fetching a sandwich from the kitchen at lunchtime is different (for most of us) than groping in the dark on the way to the bathroom at night. All of these contexts of a given perceptual act are considered fair game for observational study or experimental manipulation by ecological psychology. They are considered as important as, if not more important than, either individualized introspective analysis or generalized knowledge about the normative supporting sensory-physiological receptors and neural pathways "inside" those organisms -i.e., the traditional narrower "how" questions.

To encapsulate these sorts of procedural points, Shaw & Bransford offer the following aphorism: "Ask not what's inside your head but what your head is inside of" (p. 6). This is highly indicative of a shift in the orientation and procedure of their perceptual research. Like Leontyev's activity theory approach, the ecological psychologists are attempting to put perception back into its rightful wider context of evolutionary activeness and adaptive human action.

Similarly, Michaels & Carello (1981) provide a notable "What-How-Who" procedural rule of thumb for ecological research. The researcher should start with analysis of the information available to the organism in the environment, and once that is established observe "how" it is actively picked up by the organism (as well as processed within the organism). Only once these "What and How" questions are answered, they suggest, does one stand any chance of achieving a clearer understanding of the perceiving subject itself.

Such a procedure contrasts markedly with the old Cartesian inside-outward method of epistemological analysis, which starts with the "Who" (I myself), moves outward to the mechanical-physiological "How" (in the receptors), and typically gets blocked at that point of its analysis before even reaching the "What." Instead of this old idealist methodology, Michaels & Carello imply a materialist methodology which shows every promise of being successful.

In other words, Ecological psychology not only adopts a materialist and nonreductive (levels of analysis) methodology, it utilizes a form of outside-inward procedure for its epistemological analysis which avoids encountering the so-called barrier of the senses that has always engendered skepticism about objects. If we adopt such a direct perception procedure of investigation, then a possible lapse into phenomenalism is no longer a problem. On the one hand, the individualized nature of one's human introspective ability is placed into its wider evolutionary or developmental context, and on the other hand the excitation of receptors by "stimulus energy" (as well as the inward route of nervous impulses to the working brain) is no longer treated as an issue of knowledge about objects but simply constitutes the underlying "sensory" basis of the active "perceptual systems" being utilized by the organism to pick up "information" from the environment.

The various sensory receptors (e.g., mechanoreceptors, chemoreceptors, olfactory buds, photoreceptors) of an organism are threshold exhibiting units functioning at the physical, chemical, and biological levels of existence. The perceptual systems of an organism (utilized for touching, tasting, smelling, hearing, seeing, or general orientation in space) are wider entities which show greater plasticity and which function at a higher psychological-ecological level of existence. As Gibson points out: "One sees the environment not with the eyes but with the eyes-in-the-head-on-the-body-resting-on-the-ground…. The perceptual capacities of the organism do not lie in discrete anatomical parts of the body, but lie in systems with nested functions" (1979, p. 205).

Although this Gibsonian aphorism was initially tabled against the Helmholtzian tradition of retinal image research (see Section 3), and therefore overlaps with the claim of the stimulus substitution researchers that one does not see with "the eyes," it applies equally against Bach-y-Rita's further claim that one "sees with the brain" (Weiss, 2001). That, again, is a confusion of the relevant level of analysis. Moreover, it dredges up a host of outdated and highly problematic "homunculus theories" which I don't want to even bother getting into (see Shaw & Bransford, 1977, pp. 8-10). In other words (as Shaw & Bransford once put it): "It is the perceiver who knows and the knower who perceives" -i.e., not the retina, eye, tongue, occipital lobes, or brain (p. 10). This applies equally to the blind subjects of a stimulus substitution experiment (who are now provided with the opportunity to utilize their tongue-in-their-head-on-their-body-resting-on-the-ground in order to actively pick up visual information) as it does to the rest of us.

In the particular case of the research in question, the following can now be said: Structured stimulus energy in the environment, which affords visual information to sighted subjects, was given a culturally provided platform (the rehabilitative apparatus) and an alternative sensory means of delivery (the tongue) by which to reach the evolutionarily preadapted (though comparatively dormant) visual perceptual system -which includes not only specialized neural pathways and the occipital lobes but also intentionally directed head or body movements- and thereby to be actively attended to and picked up as a limited sort of (black and white) visual information by otherwise blind subjects.

Such clarification doesn't take anything away from the remarkable achievements that group of rehabilitative scientists and "neural plasticians" have already made (see Doidge, 2007). It simply provides them with a better theoretical understanding of what they have been doing in practice as well as with some intriguing empirical and societal-ethical questions to investigate or consider. For instance: Can color be somehow included in the content of the stimulation provided to the tongue, and if so, would that help or hinder the observed learning curve of such subjects in their efforts to differentiate the information they are picking up? Are retinal implants on the horizon, and if so, will they ever afford the perceiver with visual information which reaches or surpasses the sophistication of "normal" (unassisted) sight? And ultimately, whether or not those empirical-functional milestones are reached, what are the ethical (legal, political, and military) implications of this sort of research? Will, for instance, such research be met with a political backlash from blind culture similar to the ongoing clash between deaf culture and the cochlear implant industry? Will the individual soldiers of the future reserve the right to refuse augmentative surgery designed to increase combat effectiveness?

On E.S. Reed and "cognition"

We have indicated that the Ecological approach deals very well with the ontological distinction between sensation and perception. But what about the distinction between perception and so-called "cognition"? Here it would probably be quixotic to pretend this issue has received a definitive answer within that movement itself, and since my opinion is that such answers are to be found in the overlap between Gibson, Piaget, Vygotsky, and Leontyev, I would be the wrong person to make a survey of how far "ecological psychology" has recently progressed with regard to where perception ends and higher cognitive processes begin. There were, however, some reasonable basic positions put forward early on which will suffice for our present purposes.

During his short but productive career Edward S. Reed (1954-1997) made various sporadic attempts to put cognition into its proper place. In particular, his "James Gibson's Ecological Approach to Cognition" (1987) seems to acknowledge that the levels of analysis approach utilized by Gibson to shift "perception" upward in the ontological hierarchy of psychological processes (beyond mere sensation) requires the bar to be raised for both social and uniquely human cultural forms of "cognition" as well. Here, Reed sketches out a reasoned argument for the existence of a developmentally transformative sequence in humans from direct individualized "perceptual cognition" (knowing of the environment for oneself), through a transitional Vygotskian-like "social" cognition stage (of communication about and cooperation within that socialized environment), on up to a culturally historicized and so-called "symbolic cognition" stage (replete with materially mediated pictorial "representations" and formalized linguistic or writing systems). He is careful to demarcate off an "ecological" understanding of each from the traditional "mental construction" position of the "cognitivists" as he proceeds, but the following quote should state the case well enough:

"Whereas cognitivists see language-learning as the child's construction of a meaningful world out of stored 'rules' for the interpretation of not yet meaningful inputs, Gibson saw language as a process of stabilizing, making explicit and sharing the meaningful information we all use in our direct perception of the world. Direct perception is thus our basic mode of cognitive contact with the environment, and indirect [representative] modes of awareness extend and amplify this... contact, but do not alter it. Thus, as opposed to the theory of a mental construction of a meaningful environment, Gibson proposed a process of discovery: meanings and values are available to observers in their environment. There is always more meaning implicit in perception than has been made explicit by words, pictures, maps, or other symbols..." (Reed, 1987, p. 158).

Reed has made some good initial headway, for here again we have another link-up with the Standard view of science and its realist theory of "meaning" (in contradistinction to the operationist and constructivist camps outlined above). Alan Costall (1999) suggests that a carefully worked out realignment of higher cognitive processes relative to basal perceptual processes was never quite forthcoming in Reed's subsequent three major works (1988a, 1996a, 1997), in part because the latter stopped too short in its historical coverage. Yet it seems to me that the last chapter of the 1988a work (called "New Vistas for Psychology") provides a fairly precise reiteration and partial extension of Reed's initial account (see pp. 298-300; 305-309). Reed also returns to this issue in a lesser known (1993) book chapter where he distinguishes between perception, memory, insight/anticipation, and planning as different sorts of evolutionarily contextualized as well as socio-culturally embedded "cognitive" processes (pp. 46-47). The point to emphasize about the latter account is that Reed explicitly links up his call for an "ecology of human behavior and knowledge" (p. 48) with the social guidance of cognition (see pp. 70-72) approach of the Neo-Vygotskian tradition (Rogoff & Lave, 1984; Lave, 1988; Rogoff, 1990; Valsiner, 1987, 1991; Lave & Wenger, 1991).

Although I am not personally convinced that we need even keep this seemingly problematic and acrimonious "cognition" label at all, Reed (1987, 1988a, 1991, 1993) was certainly correct that some sort of "radical" definitional realignment was required by the discipline to help counter the inexcusably reductive, mechanical, and one-sided "Cartesian" versions of "cognitive science" being practiced during that era (see also Flanagan, 1984; Costall & Still, 1987; Claxton, 1988; Reed, 1988b, 1990, 1997; Velichkovsky, 1987, 1990; Tolman & Robinson, 1997; Still & Costall, 1991; Costall, 2004). It is indeed unfortunate, then, that his ongoing project to combine ecological psychology with other progressive (postpositivist) movements in 20th century general psychology received the unexpected blow of Reed's early passing (see Mace, 1997). But I believe that project is eminently salvageable.

The key to understanding the uniquely human forms of higher "cognitive" processes for what they are, seems to be to ask the very "ecological" question of what they are part of (phylogenetically, ontogenetically, individually, and socio-historically). Such higher mental processes are not present in lower organisms (below the level of mentality that Leontyev called "animal intellect"), nor even in human infants or toddlers who must first undergo further ontogenetic and social psychological development (as both Piaget and Vygotsky pointed out). Further, once the capacity to begin utilizing these ontologically superordinate higher mental processes is achieved by the now intentionally socializing organism or actively appropriating individual child (Leontyev), the occurrence of such novel forms of "cognitive" activity is always (as Shaw & Bransford once put it) both "parasitic on" and "logically posterior to" direct perception rather than anterior to or part of it (p. 17). In short, such higher (social or culturally mediated) "cognitive" processes occur subsequent to perceptual ones in all these varied ways. Moreover, in the specific case of modern human beings, the actual functional border (though not the ontological priority) between perceptual and "symbolic cognitive" processes has been demonstrated to have shifted historically too by way of the formalized adoption of number systems and literacy (Vygotsky & Luria; Leontyev; Gibson; Reed).

In other words, our very means of picking up information and appropriating culture has changed socio-historically. Just as the direct perceptual pickup of information can be extended beyond our finite biological limitations by way of culturally provided tools (e.g., telescopes, electron microscopes, or infrared goggles for vision) so too can individual human cognition (in its internalized "mental" or interpersonally "verbal" communicative forms) be extended beyond that individual's finite lifetime. I'm externalizing my so-called "higher cognitive functions" right now for you (at a plodding pace of about 100 words per day) and with any luck that externalized form of "mediated memory" will outlast the both of us. Assuming it succeeds at all, what does such laborious and formalized erudition do? It broadens your personal horizons. It provides you with an explicit historically situated point of entry into the ongoing collective pickup of information over time as well as with a starting point for your own further inquiries.

It is unlikely that Gibson or Reed would have any unresolvable objections to the combined disciplinary account implied in the above few paragraphs, and neither should any contemporary "cognitive psychologist" who is simply willing to give up the erstwhile tradition of Cartesian mentalism for something altogether better. As Reed (1990) once put it, Descartes' view of the "mind as encased in the cerebrum" still "haunts our thinking about [both animal and human] action, like a bad dream" (p. 119). Reed's professional efforts were directed at moving the discipline away from a psychology of 'behavior (out there) versus mental knowledge (in here),' and towards a psychology of nested "perception action cycles" (PACs) -with perception being conceived of as merely the most basic of various higher "cognitive" functions. When Reed (1993) starts his chapter conclusion by saying that "Cognition, like all forms of animate activity, begins with use values, with the affordances of the environment for action" (p. 71), his intention should be made clear to all: The one-sided or inconsistently dualistic (inner-outer) accounts utilized in Behaviorism, operationalized S-O-R research, and Cognitivism have been overcome by a new form of reciprocal monism (see also Reed, 1989, 1990). Further, for those of us already schooled in a basic understanding of "Activity Theory," that particular statement reveals that the early similarities between Reed and Leontyev have now become an identity. For all practical intents and purposes, Reed (1991, 1993) is proposing that the discipline adopt the same essentially dialectical-developmental approach to psychological analysis as utilized by Leontyev (1978, 1979, 1981).

Whether or not you are as immediately excited about such theoretical-methodological convergence as I am is hard to gauge so please bear with me for a moment while I spell this point out in black and white. I've already attempted to indicate to you that there is a considerable methodological overlap between Piaget, Vygotsky, Leontyev, and the "ecological approach" of Gibson in certain respects (e.g., their emphasis on development, on non-reductive levels of analysis, and their overall compatibility with the realist-materialist Standard view of science). They are each progressive for psychology in their own way, yet only when these converging lines of theoretical argumentation and empirical investigation are combined do they become a generalizable discipline that is more than the sum of such efforts.

For his part, Edward Reed understood all of this very well. In 1993 he went out of his way to demonstrate that by combining the more sound empirical-developmental results of "cognitive psychology" not only with Gibson's account of "affordance" but also with Jaan Valsiner's "social" learning concepts -Field of Promoted Action (FPA) and Field of Free [or attained] Action (FFA)- one could produce an updated as well as more refined notion of Vygotsky's original "Zone of Proximal Development" concept (Reed, 1993). I find this theoretical refinement quite helpful in considering the situationally "specific" aspects of social guidance and it, of course, links up well with the horizontal (shifting didactic) aspects of the so-called generalized "form law of ZPD" we proposed above. That is, the situationally specific role of the teacher in guiding the attention of the student or apprentice is different (both quantitatively and qualitatively) at the beginning, middle, and end of that student's transition through the ZPD for a given task (as, for instance, Gallimore & Tharp's 1990 diagram attempts to indicate). The vertical (transformative) aspects of the issue (and of the form law) are another matter entirely, and here Leontyev's work which emphasizes upwardly mobile phylogenetic, ontogenetic, and social-societal transitions in different qualitative kinds (or orders) of ZPDs is invaluable (as my own table of "Transformative Levels" attempts to indicate; see also M. Cole, 1996 and Ballantyne, 2002).

Concluding Remarks for Section 5:

Near the dawn of the 20th century, William James (1904) expressed optimism that "functional psychology" under Dewey at Chicago held great promise for dealing with the complexities of psychological processes in a systematic and ultimately explanatory manner. But by the mid-1920s Angell had encountered difficulty dealing adequately with the qualitatively discontinuous aspects of mental evolution or development, and Carr's particular account of "mental activity" had as much in common with Watson's behaviorism as it did with either James or Dewey (see Section 4). Meanwhile Watson (1914, 1919a, 1924a, 1924b, 1930) was narrowing the scope of empirical research to observable (situationally specific) "behavior" and reductive physiological analysis of so-called complex motor habits. In North America at least, it eventually fell to Robert Woodworth to attempt to counter Watson's mechanical and additive S-R psychology with various generalized "organismic" and so-called "dynamic" S-O-R accounts of psychological subject matter.

In his middle works, Woodworth (1929, 1931, 1934, 1938) advocated an eclectic use of behavioral, physiological, introspective, psychometric, or experimental methods depending on which method best fits the situation of investigatory interest. Within the context of the times, there was nothing wrong with that, and these efforts helped bring about a brief period of procedural and assumptive eclecticism in so-called General psychology (one that would allow a continuance of research into both observable behavior and what James called conscious mental life). But it was also during this interim Great Depression period that psychology got its empirical-numerical "variables." While Boring (1933) and others had a hand in this, it was Woodworth (1934, 1938) who popularized and formalized the "IV-DV" terminology of experimental research.

Between 1929 and 1939 these three emphases -methodological eclecticism, empirical-numerical "variables," and the IV-DV terminology of experimental research- combined to produce an initially subtle disciplinary movement away from the former "debates of the schools" (e.g., Functionalism or Structuralism versus Behaviorism) toward the careful collection of empirical data pure and simple. The two major historically oriented training vehicles for Depression-era psychologists provide evidence of this shift in emphasis. Woodworth's Contemporary Schools of Psychology (1931) advocates a 'middle of the road' psychology and Edna Heidbreder's Seven Psychologies (1933) suggests (along Logical positivist lines) that psychology has enough "theory" and must get on with the task of collecting more data if it is ever to become a real "science." Further, under the positivist ideal of value neutral science, interest in tackling the larger problems of psychological methodology or the production of explanatory theory as such was constantly pushed to the background while the sorting out of smaller-scale descriptive and empirically defined questions under the new operationist tradition (S.S. Stevens, 1935a&b, 1939) was moved to the foreground. By the time of the "Symposium on operationism" (1945) the disciplinary bracketing of schools, methodological systems, and even theories of psychological processes in such so-called "General psychology" was almost complete.

This mid-century operationist stance, however, created as many problems as it set out to solve because the wider interdisciplinary "Standard view of science" seems to require that psychology produce theories about the nature and development of psychological processes (e.g., perception, learning, memory, motivation, and personality as such). Operationism offered only empirical-numerical descriptions of each subject area without any possibility of explanation or theoretical determinacy.

The first indications that this operationist stance was a mistake came between 1948 and 1956 with the start of a notable argumentative slide from the initial discussion of "psychological states" or "intervening variables" toward mere "operational definitions" and a contemporaneous (minority report) call for the "purging" of so-called "hypothetical constructs" (entities like emotions or motives) which seemed to be too ontologically loaded. On the more progressive side of those discussions, MacCorquodale & Meehl (1948) implied that the former term (intervening variables) may have a prominent role in early descriptive phases of investigation while the latter term (hypothetical constructs) has a role in later explanatory phases of investigation regarding some particular "domain" of subject matter (e.g., learning, memory, motivation). Yet even while making these important procedural distinctions, they attempted to remain "metaphysically neutral" on the issue of realism versus anti-realism. Both Skinner (1950) and Melvin Marx (1951), however, refused to go along, arguing (respectively) that there was no need for "psychological theories" per se, or that to attempt to produce such theories (which after all move beyond merely descriptive data analysis) was dogmatic. Finally, Kendler (1952 onward) exposed the fact that a consistently held psychological operationism is a mere modernized form of "nominalism" (naming without claiming). Thus was mid-20th century psychology purged of its "metaphysical" content.

It is also important to appreciate how the issue of perception theory fits into the disciplinary conundrum being faced by mid-century General psychology. MacCorquodale & Meehl (1948) refused to take a stand on this issue, but that refusal just made them more vulnerable to anti-realist arguments (e.g., Stevens, 1935a&b, 1939). Similarly, Boring (1953) and Cronbach & Meehl (1955) attempted to argue that hypothetical constructs (involving hypothesized entities and processes) are either an end-goal or might have a place in the later parts of ongoing research projects (respectively), but these seemingly reasonable positions were completely undermined by their indirect theory of perception (which asserted that all we really have contact with are constructs built up from sense data).

Furthermore, as we saw, the situation was not helped in the slightest by the energetic discussions (from 1956 onward) of so-called "convergent validity" or "convergent operationism"; nor even by Cronbach's (1957, 1975) attempts to promote "Aptitude X Treatment interaction" (ATI) research as a means of potentially resolving the outstanding procedural disjunction between experimentalist and individual differences research. Even Cronbach's (1975) admirable appeal to the "evolutionary" context and "historical" content of such derived data sets fell flat because it was undermined by his inordinate faith that the empirical-numerical technology of the "combined" variable model of research would help produce adequate psychological "concepts" (by carving nature closer to the bone). To put the point plainly, the scientistic hubris of this latter indirect or naive realist group doesn't reside in their realism: the belief that the discipline can reach an objective (veridical) account of psychological processes. It resides in their methodolatry: the faith that quantitative procedures will be the primary (or indeed the one and only) means by which this veridical account would come about.

Ultimately, the mid-century operationist and combined "variable psychology" traditions found out the hard way that attempting to produce psychological "concepts" or "constructs" (a.k.a., theories) empirically -strictly on the basis of derived data sets- went largely nowhere. By the late 1970s, it was becoming clear to all that the recently sought after theoretical-explanatory knowledge products which were supposed to be the end-goal of such intense scientific activity had not yet appeared. It is only now becoming clearer (to some of us psychologists), however, that in order to produce such sound theories (explanations of psychological processes), or likewise resolve differences between existing theories, we can't take the indirect neopositivist route of the "empirical tools to theories" approach (Gigerenzer, 1991).

To attempt to bring about mere statistical refinements of numerical measurement techniques, or to carry out so-called crucial experiments designed to choose definitively between logically exclusionary and operationally defined empirical hypotheses, is the wrong way of going about addressing such wider theoretical issues (Tolman & Lemery, 1990; Tolman, 1991a). We have to first carry out some rather basic observational analysis of that which is to be measured. We have to answer the ontological and developmental "what" question before starting to measure. As we saw, that was the very starting point for Piaget and Vygotsky who, in comparison to most of their North American contemporaries, went a whole lot further in coming up with "theories" as such. They did this not through the collection of empirical-numerical data, but by carefully considering the evolutionary, ontogenetic, and socio-historical contexts that are the very genesis of human intellect.

In order to drive home these points of proper psychological procedure, we initially made mention of some built-in "formal logical," mechanical, and Platonic limitations of "variable psychology techniques" (like Factor analysis or ANOVA) as contrasted to a "dialectical account of developmental processes" in general and its more Galilean view of lawfulness (see also Lewin's Chapter 1, 1931/1935; Novikoff, 1945; Tolman, 1987c, 1991a&b; Holzkamp, 1991a&b). We then considered a more specifically psychological contrast between Brainerd and Piaget on "conservation tasks." In turn this was followed up by yet another contrast between Vygotsky's views on empirical-developmental procedure and the analysis of variance approach utilized in the area of so-called Life-Span Developmental research (specifically by Baltes et al., between the 1970s and late-1980s). We even threw in the occasional mention of so-called individual differences research (a.k.a., I.Q. or ability testing) for the sake of semi-completeness on such issues (see also Ballantyne, 2002).

The argumentative point of the immediately above considerations was that, on occasion, some important disciplinary headway has been made with regard to ascertaining exactly what we can and can't do, as well as what we should or shouldn't attempt to do, with our empirical-numerical measurement techniques (see Wilcox & Katz, 1984 for one central instance of such headway). Yet all of these respective hard-won lessons regarding the working relationship between statistical-empirical techniques (methods) and the more rational techniques (allowed by the above mentioned dynamic or dialectical methodological assumptions) were very slow to take hold in so-called General psychology. In fact, they were so slow to take hold that a full-scale "Crisis of relevance" had to run its course (from the mid-1960s well into the 1990s) before they would at least begin to be disseminated in late-20th century psychology.

Part of the reason for this slow uptake was that, in psychology, the rejection of so-called "positivist scientism" has tended to be a wholesale rejection of both materialism and realism. But as we saw in preceding Sections, those respective ontological and epistemological positions are essential parts of the Standard view of scientific objectivity. So, in order to indicate why defending and updating the Standard view is important, we outlined and critiqued the views of three prominent late-20th century anti-objectivist figures (Koch, Wertheimer, Gergen) as exemplars whose form of ready remedy was in effect worse than the scientistic disease it tried to cure. As a thoroughly reactionary movement, such anti-objectivist "constructivism" misunderstood what positivism was and retained its most problematic aspect: Humean or Kantian epistemology.

Various negative reductios (based on the anti-objectivists' own problematic epistemological assumptions) were initially outlined in detail. In one of those we raised the old pragmatic triad argument (relating action, belief, and truth) as follows: In order for the anti-objectivists to actively advocate their position they must believe it is true. If they don't believe it is true, why do they advocate it? Yet, since their position is that no such truth is possible, their own argumentative actions are at variance with their position, and in any case no such action (either for or against any position) would be possible if their position were indeed true (see Foster, 1987). In short, one cannot seriously put forward an anti-objectivist position without running into such an inherently self-contradictory pragmatic situation (see Cunningham, 1973).

We raised the related "dialectics of disagreement" argument here and there too. It presents the same sort of counter to anti-objectivist views in a slightly different way: In order to disagree on anything, two disputants must first agree on at least something (e.g., the language to be used; the topics to be raised; the institutional venue, location, or date; the journal, book, or Internet site to be used for dissemination of the debate; the duration or word count for each contribution, etc.), yet to do so seems to undercut the anti-objectivist argument from the very start. Again, the point is that all it takes to turn the argumentative tables on the anti-objectivist is to insist that they openly own up to the fact that (in order to even engage in such debate) they are necessarily assuming some sort of direct and shared access to any one of these.

Other more affirmative tactics were also utilized. First of all, so as to shore up the updated Standard view of "objectivity" -a.k.a., "responsible assertion"- (as proposed by Scheffler and Cunningham), we outlined the "direct perceptionist" position of J.J. Gibson (1966, 1979). That unique differentiation (rather than enrichment) theory of information pickup not only shatters the formerly seemingly insoluble dichotomy between Classical Empiricist and Rationalist accounts of depth perception (see Section 2), it also provides a missing methodological ingredient for overcoming the long-standing and most insidious Cartesian subject-object dualism. Secondly, we indicated how Gibson's "ecological approach" to the study of psychological processes has been extended upward to the consideration of higher order "cognitive" processes by Edward Reed.

Of note in this latter regard is Reed's (1987) suggestion that the so-called "cognitive revolution" did not challenge the basic assumptions of S-R psychology so much as add a new set of "extra" problematic mechanical assumptions to it. As with Woodworth's S-O-R and other neopositivist versions of "S-M-R" psychology (e.g., Cronbach), the cognitivist position is that S-R psychology can be made more complete by adding an internal domain of "mental representations" (a.k.a., the information-processing model). According to Reed (1987, 1988a, 1991, 1993), this mental-mechanical and "decontextualized" internal processing view was merely an effort to avoid the obligation to study a larger unit of psychological analysis (one which includes the shifting and "contextualized" yet directly reciprocal relationships between the subjectivity of the active organism or person and its environment). The ecological approach of the Gibsonians is an attempt to show how adopting that larger unit of analysis can help make a science of the subject more objective -i.e., how it can be reoriented toward the investigation of such qualitatively developing reciprocal relations at different levels of individual, social, and societal existence.

As we tried to show, this "ecological approach" has much in common with the position of Vygotsky and his colleagues (especially with Leontyev's "Activity Theory" account, which utilizes a unit of analysis that embraces both subject and object as indeed distinct, but only analytically so). Others, most particularly Charles Tolman, have done the same elsewhere with respect to the late-20th century tradition of German "Critical Psychology" as well (see Tolman, 1989c; Tolman & Maiers, 1991; Holzkamp, 1991a&b, 1992; Tolman, 1994a, 2008; Teo, 1995, 1998, 1999a&b).

When it comes to adopting an assumptive methodology, selecting appropriate investigatory research procedures, and choosing between rival theories, one should not be too quick to buy (or buy into) the first lemon which presents itself. In other words, before getting locked into a given course of research that will decide your career path for years to come, you should pause to consider carefully whether or not you actually want to devote your life (or likewise seal the fate of your patients, clients, research subjects, etc.) to assumption x, procedure y, or theory z. But this sort of consideration, of course, requires that you have some firm and well-informed notion of what the contemporaneous or historical alternatives are. I hope the above historical narrative has been clear and engaging enough to afford you some of that required knowledge.

For example, alone among your current student cohort (and unlike even most of the previously trained-up working professionals in our discipline), you should now know the difference between: psychological methods and methodology as such; description and explanation; methodolatry and the selective application of relevant rational or empirical methods; qualitative and quantitative change; interactionism and a transformative approach to mental development; etc. (see Table 1 of the Introduction).

Yet along with such newly attained knowledge, and the expanded field of free action it affords you, comes a responsibility to act -i.e., not merely for one's own sake but also so as to promote the attainment or facilitate the use of such abilities in others. This is the very definition of our currently shared didactic imperative, but my part (as author and teacher) in this duty is presently coming to an end while yours (as reader and apprentice) is just beginning. My sincere hope is that each of you will not only already appreciate the necessity of reform in the discipline but also begin rather immediately to participate actively in such reform efforts.

For instance, since about 2004, the call for "multiple levels of investigation" has become a prominent mantra of most progressive departments of psychology. This is surely a potentially progressive occurrence, for it seems very similar to the call for an "evolutionary-functional account" of psychological processes some 100 years ago. I'd encourage you, however, to test the current limits of your student cohort's and your professor's exact understanding of such a slogan by raising some of the methodological issues we have covered above. When that methodologically loaded string of words is used, does the user actually mean what the dialectician or ecological psychologist means by multiple levels of analysis? Or is it perhaps just being utilized as a fashionable buzz term or, what is worse, a mere administrative steppingstone toward ultimately re-christening the department of psychology as the "department of psychological sciences"?

There are, of course, inherent dangers in either of those two more casual and unsystematic usages. On the first count, it would be advantageous to remember the eventual disciplinary predicament faced by the early functional psychologists, who started with a potentially progressive yet inadequately worked out emergent evolutionary account and ended up being absorbed into a General psychology full of behaviorism on the one hand and eugenics (a.k.a., I.Q. or standardized testing) on the other. The same sort of fate can easily befall the current multiple levels of investigation bandwagon if it is not persistently steered in the proper anti-reductive and professionally inclusive direction. On the second count, it is probably not necessary to belabor the point, because you will already appreciate the historical parallel between Koch's call for an inordinately unscientific "psychological studies" label and the reciprocally opposite potential dangers of an inordinately scientistic re-christening of our departments with some such locution as the "psychological sciences."

In past eras of psychology, by the time a given student or working professional figured out what was going on in the discipline, or even within their own department, it was for the most part too late to do much of anything about it. They were, for instance, already respectively engulfed by the entrenched departmental system of grade seeking and the publish-or-perish treadmill, the seemingly highly selective but in fact mediocratic hiring process and its crony-laden tenure process, the tradition-laden organizational or governing body, the committee-based editorial board, etc. I've made every effort above to help you avoid this sort of dismal, mind-numbing situation by providing a safe third-hand venue in which to think for yourself about some of the fundamental issues affecting the establishment, current trends, and potential future course of our discipline. So, having afforded you that opportunity, I'm now urging you to also get out there and start making your own voice heard! In spite of the potential interpersonal ruffles or professional risks incurred by participating in such direct remedial action, the personal and societal price of inaction would ultimately be worse. The right kind of psychology will never occur as a result of inaction, nor by way of seemingly self-preserving tactical deference to the ongoing perseveration of older methodologies or even to the outmoded tradition of overly conservative and top-down information technologies (e.g., vetted peer review).

So-called creative administrative inertia alone is enough to stop any potentially progressive disciplinary momentum (e.g., away from neopositivism or constructivism and toward a third metatheoretical alternative) which is not openly seen to have renewable ranks and a persistent political presence. What this means is that the institutional funding or professional position opportunities for each of you will not be there later on if you do not act now, as an organized and collectively progressive cohort, to bring about the necessary disciplinary changes in some sort of bottom-up revolution. So don't be shy about putting your professors or Departmental Heads (etc.) on the spot about what they are doing, how or why they are doing it, and ultimately who stands to benefit from their particular theory, empirical research procedures, assumed wider disciplinary methodology, or seemingly narrowly circumscribed administrative policies. Ask them, in public, who is funding the research being carried out by departmental members, visiting scholars, or short-listed job candidates; who will benefit from the research being carried out or the information being collected; and who owns or controls the results. Such societal-ethical questions are completely legitimate, and those who are not part of the problem will be willing to answer readily, while those who are still part of the ongoing problem will be quite rightly embarrassed into taking a more responsible tack in future.

I've suggested as unequivocally as possible that both the neopositivist and the constructivist traditions of psychological research are currently holding the discipline back. They cannot endure philosophically and have in effect been an inefficient (and sometimes unethical) waste of lives, potential, resources, and time. Neopositivist psychology is a methodological "lemon," and the empirical-numerical ("variable"-centered) tools of investigation it utilizes cannot stand on their own merits -i.e., without being guided at the front and back end by more rational dialectical-qualitative (development-centered) methods. Constructivism, on the other hand, is quite clearly a disciplinary dead-end.

In the long run, top-down delaying tactics by those currently in positions of power will not hold back the tide of disciplinary change indefinitely. It would be ill-advised as well as self-defeating, therefore, for any of you to be a willing part of such delaying tactics. If change is inevitable, predictable, and beneficial, doesn't it make pragmatic sense to become an active agent of such change?

Stated plainly, much of the future fate of our discipline lies in the success or failure of your ongoing collective activities to reform and improve its professional practices, training standards, and institutional or publication base. The greatest obstacle of all to such reform is cynicism with respect to your collective ability to change the status quo. One student cohort cannot summon the future alone, but it can make its own progressive changes to the present state of disciplinary affairs; and each of you is better prepared to do so than most. So what will that future look like? Will psychology continue to be a cyclical succession of lapses into the mistakes of the past, or will it be a discipline which abides by the hard-won lessons of the past? It's up to you. Alas, it is now time to close out the present work by simply wishing you the best of good-byes and luck in such future endeavors.

Posted while in progress: March, 2004-January, 2008; Minor Grammatical changes: April, 2008.
