Whose name is usually associated with developing the earliest intelligence test?

Eugenics

Garland E. Allen, in Encyclopedia of Social Measurement, 2005

Measuring the Trait of Intelligence

One trait that could be expressed quantitatively was intelligence, tests for which were developed particularly in the United States. In 1912, Davenport arranged for his long-time friend, Henry H. Goddard (1866–1957), then Director of the Training School for Feebleminded Boys and Girls at Vineland, New Jersey, to administer versions of the French Binet-Simon test to immigrants arriving at Ellis Island. Although the Binet-Simon test was intended to measure only an individual's mental functioning at a given point in time, Goddard and a host of American psychometricians considered that it also measured innate, or genetically determined, intelligence. Goddard used the term feeblemindedness to refer to those people who scored below 70 on his tests and claimed that it “was a condition of the mind or brain which is transmitted as regularly and surely as color of hair or eyes.” Because Goddard was convinced that feeblemindedness was a recessive Mendelian trait, he reformulated the concept of intelligence from a continuous character to a discrete one. And it was Goddard who carried out the famous study demonstrating the supposed inheritance of mental deficiency in a New Jersey family known by the pseudonym Kallikak.

For psychometricians and eugenicists, the belief that their tests measured innate capacity rather than merely accumulated knowledge meant that the tests could be used as an instrument for carrying out educational and social policy, not merely as a measure of an individual's progress at a specific point in time. For eugenicists, the new mental tests, especially the Stanford-Binet test first published in 1916, were seen as a precise, quantitative tool for measuring an otherwise elusive, but fundamental human trait. The fact that much of the material, including terminology, on which the tests were based was culture-bound did not deter psychometricians or eugenicists from claiming that the tests measured only innate learning capacity. Even when results from the U.S. Army tests during World War I showed that the longer recruits from immigrant families had lived in the United States, the better they did on the tests, Carl C. Brigham (1890–1943), a Princeton psychologist who analyzed the data, argued that the trends showed a decline in the quality of immigrants over time, not their degree of familiarity with the cultural content of the tests.


URL: https://www.sciencedirect.com/science/article/pii/B0123693985005272

Sociocultural and Individual Differences

Cecil R. Reynolds, in Comprehensive Clinical Psychology, 1998

10.03.8 Cross-Cultural Testing when Translation is Required

When a test is translated from one language to another, the research findings discussed thus far do not hold. It is inappropriate simply to translate a test and apply it in a different linguistic culture. A test must essentially be redeveloped from scratch (although constructs may be retained) before any such application is appropriate. New items, new normative data, and new scaling would all be required. This has been known since the early days of psychological assessment and testing. In the early 1900s, when the Binet–Simon tests were brought to the USA from France, approximately 30 different versions of the test were developed in the USA by various researchers. However, most of these were mere translations or contained minor modifications to adapt to American culture. The Stanford–Binet Intelligence Scale, in its various incarnations, however, became the standard bearer for the measurement of intelligence for nearly 60 years and was at one time even more popular in France than the original French Binet–Simon scales. The reason for the domination of the Stanford–Binet series was Lewis Terman's insight and tenacity in redeveloping the test in the USA. After determining that Binet's theory of intelligence applied, Terman wrote new items, tried them out, and devised a newly normed scale that was conceptually consistent with the Binet–Simon scales but was, in its practical application, a new and different test.

The problems of translating verbal and nonverbal concepts across linguistic cultures are difficult, but in any event the redevelopment of tests in such circumstances seems required. Cronbach and Drenth (1972) provide a book-length treatment of these problems and of various experiences with proposed solutions to the cross-cultural adaptation of psychological tests, drawn from some 30 nations throughout the world. The various contributors describe both the strengths and limitations of adapting tests cross-culturally from one country to another, providing perspectives from such diverse disciplines as psychometrics, cognitive development, psychology, and anthropology. More recent guidelines and reviews of the issues involved in the cross-cultural adaptation of psychological and educational tests can be found in Hambleton (1994), Hambleton and Kanjee (1995), and Van de Vijver and Hambleton (1996).


URL: https://www.sciencedirect.com/science/article/pii/B008042707300105X

THE HISTORY OF DEVELOPMENTAL-BEHAVIORAL PEDIATRICS

Heidi M. Feldman, Trenna L. Sutcliffe, in Developmental-Behavioral Pediatrics (Fourth Edition), 2009

DEVELOPMENT OF PSYCHOLOGY: THE NINETEENTH AND TWENTIETH CENTURIES

The core concepts and approaches of developmental-behavioral pediatrics are as solidly rooted in psychology as they are in pediatrics. The following brief summary highlights major developments in psychology that were particularly relevant to current practice and research.

Charles Darwin (1809-1882) has been credited with introducing the study of human behavioral development, which evolved into the psychology of children (Kessen, 1999). His essay, entitled “A Biographical Sketch of an Infant,” was a meticulous account of the capacities of his infant son. He carefully described developments in a variety of domains—movement, vision, emotions (anger, fear, and pleasure), reasoning, moral sense, and communication. This inventory presaged the domains of functioning further described and studied by subsequent contributors and formed the basis of how we view child development in the current era.

Francis Galton (1822-1911), Darwin's cousin, launched the study of human intelligence. He was particularly interested in the variation among individuals. Galton's legacy is developmental and intelligence testing, a foundation of current developmental-behavioral pediatric practice (Kessen, 1999). Alfred Binet (1857-1911) collaborated with Theodore Simon in designing a carefully constructed scale that could be used to differentiate children who were developing typically from children who required special education because of slow development. The Binet-Simon test was first published in 1905. Lewis Terman (1877-1956) standardized the Binet-Simon test on a large sample of U.S. children, creating the Stanford-Binet test of intelligence. Arnold Gesell (1880-1961) used a similar empirical approach to create an evaluation of the development of young children. His book, entitled An Atlas of Infant Behavior and published in 1934, described the typical developmental milestones. Although the developers of these assessments were clear about the limitations of the quantitative approach to measuring intelligence, the Eugenics Movement used the work of Galton and the results of intelligence testing to support its claims about the superiority of the white race and the inferiority of African Americans, immigrants, and individuals with disabilities and mental health disorders. The movement advocated for improvement of the human race through selective breeding, prenatal testing, birth control, sterilization, and euthanasia (Kanner, 1964). This history emphasizes the ethical obligations of professionals in assessing the capacities of young children.

In a concurrent but independent tradition of psychology, Sigmund Freud (1856-1939) described the development of emotions and emotional disorders (Kessen, 1999). Freud proposed a three-part structure of the mind: the id, the ego, and the superego. He described five stages of psychosexual development: the oral, anal, phallic, latency, and genital stages. Freud also articulated the concept of the unconscious. Psychoanalysis became the method for helping patients acquire insight into the unconscious conflicts in their upbringing that caused emotional disorders. Most of these concepts have been severely criticized or reworked throughout the 20th and 21st centuries. Erik Erikson (1902-1994) later reconceptualized Freudian stages in psychosocial rather than psychosexual terms. The major tasks that children face at various points in development are still described in Erikson's terms.

James Mark Baldwin (1861-1934) was a leading figure in the study of sensation and perception. His experimental work on infant development strongly influenced Jean Piaget (1896-1980), whose intense observation of his three children formed the foundation of an integrated theory of cognitive development. In Piaget's theory, the sensorimotor stage of development precedes the preoperational, concrete operational, and formal operational stages. Children progress through these stages by assimilating environmental experiences and accommodating to those experiences. These concepts remain a foundation of experimental cognitive development.

Another influential tradition within psychology that emerged in the 19th century was the study of learning. Ivan Pavlov (1849-1936), a Russian physiologist, psychologist, and physician, described what he called the “conditioned reflex”: the ability of a once-neutral stimulus, such as a bell, to cause a physiologic reaction, such as salivation, in an animal or human after pairings of the neutral stimulus with a motivating stimulus, such as food. These concepts remain current in areas such as the causes and treatments of phobias. In the United States, John B. Watson (1878-1958) was an early behaviorist who argued for cutting out consciousness and other intangibles from the dialogue of psychology. His hope was to control children's emotions through conditioning. B. F. Skinner (1904-1990) elaborated on operant conditioning, the ability of a reinforcing stimulus to change the probability of the appearance of behaviors. Operant conditioning still plays a central role in the behavior management of children developing typically and children with disabilities. Following Skinner, behavioral approaches scrutinize antecedent conditions, behaviors, and consequences in the search for reinforcers. In addition, the frequency and pattern of reinforcement are still considered important to the maintenance of behavior change.


URL: https://www.sciencedirect.com/science/article/pii/B9781416033707000018

Special Education

Stephen D. Truscott, ... Kelvin Lee, in Encyclopedia of Applied Psychology, 2004

2 History and Theoretical Foundations: From Institutionalization to Inclusion

People with disabilities have always been part of society. What has changed is how society defines and explains these differences and how people with special needs are treated. Early explanations of disabilities were grounded in superstitious belief systems, and treatments included abandonment and extermination. By the 1800s, these gave way to quasi-scientific explanations and institutionalization of people with disabilities. Early efforts to educate children with disabilities were often conducted by physicians (e.g., Itard, Seguin, Howe, Gallaudet) whose work established the foundation for current practices such as individualized instruction and an emphasis on functional skills. Because societal demands were different (e.g., for literacy), early efforts to provide special education were focused on people with severe disabilities such as mental retardation and psychosis. During this period, asylums for the treatment of individuals with disabilities began to appear. Such institutions were prevalent in the United States until the relatively recent past and still exist widely in many other countries. Although asylums were originally intended to protect people with disabilities from cruel treatment in the outside world, patients were sometimes abused and neglected.

2.1 The Child Study Clinic Movement

Witmer’s psychological clinic was part of a movement that featured the application of scientific methods to problems of living, an emphasis on public health, and attention to social justice. At the turn of the 20th century, such “child study clinics,” often associated with public schools, sprouted up in metropolitan areas throughout the United States. These clinics focused on using the new science of psychology to address the problems of children. The clinics served students with a range of disabilities, including mental retardation, learning problems, and behavior disorders. In addition to prescribing treatments and educational interventions, these pioneering applied psychologists viewed schools as places to provide things such as nutrition, parent training, and healthy environments when these were missing at home. It is no coincidence that child study clinics were established at approximately the same time as the rapid expansion of public education and the implementation of child labor laws.

The clinics focused on applying the new science of psychology to the needs of individual children, so it is not surprising that the Binet–Simon test (translated in 1910) found widespread acceptance among early clinicians. The new test provided an objective and scientific way in which to classify students and played a significant role in the rise of special programs for students with disabilities in metropolitan areas served by clinics. During the 1920s, special programs for students with disabilities (notably mental retardation) were widespread in cities but nearly absent in rural America. The Binet test, administered by clinicians in child study clinics, played a critical role in these placement decisions.

2.2 The Rise of the Medical Model and the “Within Child” Deficit Approach to Special Education

Throughout the 20th century, psychological and medical technologies were increasingly evident in special education diagnoses and programming. “Within child” and biological explanations of disabilities (e.g., minimal brain dysfunction, dyslexia, Down syndrome) became more and more dominant after 1950. Following the medical model, disabilities were thought to originate within the child and were most often attributed to manifestations of underlying biological problems. Special education treatment followed along with these explanations, and the idea of diagnostic prescriptive teaching came to the forefront. This model is predicated on the idea that particular diagnoses can be addressed best by prescribing specific treatments. In special education, the individualized educational program (IEP) exemplifies this approach.

Concurrently, after the 1954 Brown v. Board of Education U.S. Supreme Court decision ruled against public school segregation by race, a series of court cases and legislation provided U.S. constitutional rights (e.g., equal protection) to people with disabilities. Special education rights grew rapidly, culminating in 1975 with the enactment of Public Law (PL) 94-142, a federal law that granted access to a free and appropriate public education (FAPE) to all students regardless of disability. The rapid expansion of special education services after PL 94-142 has continued to date.

2.3 Current Conceptualization: Behavioral, Ecological, and Sociocultural Models

Currently, behavioral, sociocultural, and ecological approaches mark a fundamental transition in special education in the United States. In contrast to the medical model, these contemporary approaches attribute the manifestation of a “disability” to the transaction between the demands of the environment and the behavior of the individual rather than to the expressions of internal biological deficits. Fueling the transition is the understanding that the widespread goal of recognition and inclusion of people with disabilities in the educational system has been reached but that less attention has been focused on the learning, social, behavioral, and occupational outcomes of special education placement. In addition, considerable growth has occurred in the percentage of students classified in the mild or “judgmental” categories of disability (i.e., those disabilities that are determined primarily by results of psychometric testing such as learning disability [LD], speech/language impaired, and emotionally disturbed). Such students now comprise 85 to 90% of the special education population in the United States. The percentage of students in disability categories with a known etiology or physical problem has remained much more stable over time.

Indeed, in the United States today, most students in special education have neither clearly defined disabilities nor known etiologies. Wide differences among the characteristics of students within the same disability classification in various settings support the idea that most educational disabilities are primarily social constructions. For example, in 1994, Gottlieb and colleagues reported that the average intelligence quotient (IQ) of students labeled as LD in urban schools was 81, compared with an average of 102 for students labeled as LD in suburban communities. Examination of such differences has resulted in the serious ongoing reexamination of special education definitions and services exemplified by the PCESE’s finding in 2002 that many students in special education are essentially “instructional casualties” rather than students with educational disabilities. As programs based on the medical model (e.g., neurological models of LD) have failed to result in acceptable outcomes for the majority of students in special education, new interest in functional assessment of behavior, direct assessment of academic skills (e.g., curriculum-based measurement), attention to observable intervention outcomes, and direct instruction of academic, social, and behavioral skills has become an important element of special education reform in the United States. These practices, which are derived from behavioral psychology and ecological developmental theory, are based on the idea that the learning problems experienced by most children labeled as educationally disabled arise from a mismatch between the learning needs of the students and the instructional or other environments.


URL: https://www.sciencedirect.com/science/article/pii/B0126574103004451

Sleep, intelligence and cognition in a developmental context: differentiation between traits and state-dependent aspects

Anja Geiger, ... Oskar G. Jenni, in Progress in Brain Research, 2010

Assessment of cognitive processes and intelligence

Lower-order cognitive processes may be considered a heterogeneous group of various cognitive functions (e.g. perceptual motor learning, visual short-term memory, selective attention, etc.). Thus, there is no unitary assessment approach. The evaluation of cognitive processes ranges from simple reaction time (RT) measures to querying the strategies adopted during problem-solving tasks, and depends on the particular problem. For example, mirror tracing is a visual motor task measuring visual integration, hand–eye coordination and the learning of new motor skills. During skill acquisition, subjects have to copy a figure with their hand placed behind a barrier; feedback on their copying performance is visible only in a mirror. Because a detailed description of cognitive assessments would go beyond the scope of this article, we will focus exclusively on the assessment of intelligence.

Intelligence can only be inferred indirectly. Basically, three main approaches may be distinguished: (1) inference from an external criterion, (2) psychometric testing [e.g. the classical intelligence test, resulting in intelligence quotient (IQ) scores] and (3) inference from neurobiological correlates.

Intelligence can be inferred from apparent behaviour or linked to an external criterion. Usually, professional success is assumed to reflect underlying intelligence. Intelligence correlates positively with professional success, income and career advancement, and seems to be the single best predictor of later position in society (Brody, 1992). According to a meta-analysis by Fraser et al. (1987), correlation coefficients between intelligence scores and school marks vary from 0.34 to 0.51, depending on the type of school and years of education. Thus, intelligence correlates only moderately with school performance (Cohen, 1992), reflecting the fact that intelligence does not exactly mirror school performance and limiting the usefulness of intelligence testing for predicting individual school careers. In fact, many factors, such as concrete learning opportunities, individual support and personality traits (e.g. achievement motivation), have an impact on external criteria such as school marks, which explains the moderate nature of the correlations between intelligence scores and school marks.

Psychometric testing is the standard approach to inferring intelligence. Specific tasks, which are assumed to reflect individual differences in intellectual ability, are applied. Introduced by Alfred Binet and published as the Binet–Simon test in 1905 (Binet and Simon, 1905), the first tool for the assessment of intelligence was aimed at detecting and supporting children with special needs. The items included in this test battery reflected age-appropriate intellectual ability. Individual items ranked by chronological age (CA) allowed for inference about children's mental age (MA). Accordingly, the individual mental ability score for a given child was related to his or her CA, thus providing a means to evaluate the child's general state of intellectual development – advance or delay. With the transformation of the original difference score (MA minus CA) into a quotient score (MA/CA × 100) by William Stern (1912), the well-known IQ score was born. Two basic principles from this seminal work are still present in today's IQ tests: (1) adaptive testing, that is, the principle of starting with items assumed to match approximately the intellectual level of a given subject, with a subsequent gradual progression of item difficulty, and (2) the relation of individual test scores to age-standardized population norms, which is based on the assumption that intellectual ability, like many other human characteristics, is normally distributed in the general population.
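As a minimal illustration of the two historical scores just described, the sketch below computes Binet's difference score and Stern's quotient score; the ages used are hypothetical examples, not data from Binet or Stern.

```python
# Illustrative computation of Binet's difference score (MA - CA) and
# Stern's ratio IQ (MA / CA * 100); the ages are hypothetical examples.

def difference_score(mental_age: float, chronological_age: float) -> float:
    """Binet's original index: advance (+) or delay (-) in years."""
    return mental_age - chronological_age

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's (1912) quotient score, the original 'IQ'."""
    return mental_age / chronological_age * 100

# A hypothetical 8-year-old performing at a 10-year-old level:
ma, ca = 10.0, 8.0
print(difference_score(ma, ca))  # 2.0 years in advance
print(ratio_iq(ma, ca))          # 125.0
```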

Finally, intelligence can be inferred from neurobiological correlates. Early anthropometric approaches from the 19th century used total brain weight and volume as correlates of intelligence, assuming that bigger brains are better brains. Although there are in fact positive correlations between total brain volume and IQ scores, the correlation coefficients are only small to moderate (r ∼ 0.3) (McDaniel, 2005), and there is no causal relation. Moreover, many of these approaches were biased by sex- and race-based assumptions, frequently serving scientific racism (Gould, 1994). Summarizing studies based on voxel-based morphometry, Jung and Haier (2007) elaborated and upgraded the original idea of linking grey matter volume to intelligence. Their parieto-frontal integration theory (P-FIT) holds that networks of grey matter volumes in specific brain areas (local aspects), together with interactions dependent on the connecting fibre tracts, are the biological substrates of individual differences in intelligence.

A different approach, the so-called mental-speed approach, is based on the assumption that speed of processing in elementary cognitive tasks is related to psychometric intelligence. In general, RTs become slower with an increasing load of information. However, subjects with higher IQ scores need less extra time to process additional information. In fact, the increase in RT with increasing information load correlates at about −0.18 to −0.23 with IQ scores (Jensen, 1987). Techniques such as positron emission tomography (PET), EEG and functional magnetic resonance imaging (fMRI) have contributed considerably to a refinement of the mental-speed approach. A seminal study was performed by Richard Haier et al. (1988). Subjects engaged in a non-verbal IQ test were simultaneously scanned for their glucose metabolic rate (GMR). Surprisingly, those who reached the highest IQ scores displayed the lowest GMR. In other words, the higher the IQ score, the lower the cognitive effort or demand, a phenomenon that has become known as the principle of neural efficiency. Haier interpreted his finding as follows: ‘… intelligence is not a function of how hard the brain works, but rather of how efficiently it works’ (Haier et al., 1992). Since then, many studies have replicated and further characterized neural efficiency, which is modulated by the level of task complexity and by a task–sex interaction (for review, see Neubauer and Fink, 2009).
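To make the mental-speed logic concrete, the sketch below simulates the kind of analysis the approach implies: each subject's RT is regressed on information load, and the resulting slopes are correlated with IQ. All data are synthetic, and the weak negative effect is deliberately planted to mimic the magnitude reported by Jensen (1987); nothing here reproduces his actual procedure.

```python
# Synthetic illustration of the mental-speed approach: per-subject RT
# slopes across information load, correlated with IQ scores.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 200
loads = np.array([1, 2, 4, 8])  # information load in bits (assumed values)

iq = rng.normal(100, 15, n_subjects)
# Plant a weak negative link: higher IQ -> shallower RT/load slope (ms/bit).
true_slope = 30 - 0.1 * (iq - 100) + rng.normal(0, 5, n_subjects)
rt = 300 + true_slope[:, None] * loads + rng.normal(0, 20, (n_subjects, len(loads)))

# Least-squares slope of RT on load for each subject (columns of rt.T).
slopes = np.polyfit(loads, rt.T, 1)[0]
r = np.corrcoef(slopes, iq)[0, 1]
print(f"correlation between RT slope and IQ: {r:.2f}")  # small and negative
```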

In sum, over the past decades common lines have emerged regarding the neurobiology of intelligence. First, general statements are formulated for specific abilities rather than global aptitude. Second, neurobiological correlates are related to the local, instead of the global, anatomical level. Third, the focus is on dynamic rather than static aspects of processing. As a note of caution, it must be mentioned that neurobiological correlates are easily mistaken for a direct measurement of intelligence. There is, however, no established causal relationship between physiological parameters and psychometrically assessed intelligence. Furthermore, there is no solid theoretical model linking physiological parameters of the central nervous system to intelligence as a behavioural trait, and the reported magnitudes of the correlations between IQ scores and physiological parameters of the waking brain (e.g. waveforms of event-related potentials, cerebral metabolic rates) are only moderate (r ∼ 0.4) (Deary and Caryl, 1997).


URL: https://www.sciencedirect.com/science/article/pii/B9780444537027000105

History of adult cognitive aging research

K. Warner Schaie, in Handbook of the Psychology of Aging (Ninth Edition), 2021

Assessment of intellectual functions

Psychological tests were originally developed to identify intelligent people. Francis Galton believed that human intelligence is mostly inherited. But how could the most intelligent people be identified? A test of intelligence would have to be created. Galton took on the job and in 1883 published the first intelligence test.

A test of intelligence

Influenced by British philosophers who considered intelligence to be based on the ability to process sensory information, Galton (1883) devised a series of tasks designed to measure how well a person could see, hear, smell, taste, and feel.

Galton’s “mental test” (as he called it) was not very successful; it showed only trivial correlations with measures of intellectual competence in the real world, such as scholastic performance (Wissler, 1901).

Almost 20 years later, a French psychologist by the name of Alfred Binet tried again to construct a test of intelligence. He had been given a much more practical problem to solve by the French Ministry of Public Instruction. They needed a test to distinguish students of low ability (mentally retarded) from those of adequate ability but low motivation.

Binet and Simon (1905) held a more traditional view of intelligence than Galton, believing, for example, that playing chess was a better indicator of intelligence than smelling vinegar. Binet decided to assess “reasoning, judgment, and imagination” through a series of cognitive problems.

Because Binet’s miniature tasks were quite similar to those that children are expected to face in school, scores on his test were highly correlated with scholastic performance. First published in 1905, Binet’s test (Binet & Simon, 1905) was quickly translated into other languages. In the United States, his test was translated and revised by Stanford psychologist Lewis Terman and became known as the widely used Stanford–Binet Intelligence Scale.

The background information on intelligence testing is relevant for our discussion of adult intelligence for two reasons. The first is to show that the testing movement in psychology began in practical circumstances—there was a need to predict the potential for scholastic success.

IQ tests are age graded; that is, the average score for each age level is set at 100. A question such as “Who has the higher IQ, an average 10-year-old or an average 70-year-old?” is therefore meaningless: both may have IQs of 100, the average for their respective age groups. However, as we will see, other kinds of comparisons can be made that inform us as to how intelligence changes from childhood into advanced old age.
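A brief sketch of what age grading means in practice: modern tests compute a deviation IQ by standardizing a raw score against same-age norms, so the average at every age is 100. The mean-100/SD-15 convention is standard, but the raw-score norms below are invented for illustration.

```python
# Deviation IQ: a raw score is standardized against age-specific norms,
# so the average scorer at every age receives an IQ of 100.
# The raw-score norms here are invented for illustration.

AGE_NORMS = {10: (42.0, 6.0), 70: (35.0, 7.0)}  # age -> (mean raw, SD raw)

def deviation_iq(raw: float, age: int, mean_iq: float = 100.0, sd_iq: float = 15.0) -> float:
    mu, sigma = AGE_NORMS[age]
    return mean_iq + sd_iq * (raw - mu) / sigma

# An average scorer at either age gets the same IQ of 100:
print(deviation_iq(42.0, age=10))  # 100.0
print(deviation_iq(35.0, age=70))  # 100.0
```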

The nature of intelligence

From the very beginning, there has been a great deal of debate about the nature of intelligence and whether there may be different kinds of intelligence. Is intelligence a single, general ability or are there several different intellectual abilities? Binet favored the idea of a “general ability” (sometimes called the “g” factor), but later researchers have favored the notion of several factors in intelligence.

Some intelligence tests have a number of subtests covering different content. The Wechsler Adult Intelligence Scale (WAIS-R) is the test most frequently used by clinical psychologists for the individual assessment of adult intelligence (Wechsler, 1997).

The fact that there are slightly different subtests on an intelligence test is of course no guarantee that these subtests actually measure different intellectual abilities; they may simply be different ways of measuring a single ability: “general intelligence.” Further exploration has therefore taken the form of factor analysis, a statistical procedure that identifies the number of basic dimensions or factors in a set of data.

Factor analysis tells us whether intelligence is a one-dimensional construct or a construct with multiple dimensions. The answer is: both. In a factor analysis of the WAIS subtests, for example, the major dimension was found to be general intelligence, a large factor that accounted for about half of the information contained in the test. Three other factors appear to be important for some purposes. For example, an individual high in perceptual-organizational abilities might do better on the block design subtest than we would expect from his or her general intelligence alone (Cohen, 1957).
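As a rough illustration of such an analysis, the sketch below simulates six subtests driven by one strong general factor plus subtest-specific noise and examines the eigenvalues of their correlation matrix. Principal components are used here as a simple stand-in for a full factor analysis, and all loadings are assumed values, not WAIS parameters.

```python
# Simulated subtests sharing one general factor: the first eigenvalue of
# the correlation matrix dominates, mimicking a large 'g' factor.
import numpy as np

rng = np.random.default_rng(1)
n, n_subtests = 1000, 6
g = rng.normal(size=n)                # latent general ability per person
loadings = np.full(n_subtests, 0.7)   # assumed uniform g-loadings
scores = g[:, None] * loadings + 0.7 * rng.normal(size=(n, n_subtests))

corr = np.corrcoef(scores, rowvar=False)          # 6 x 6 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]          # descending eigenvalues
print(eigvals / eigvals.sum())  # first component carries roughly half the variance
```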

One finding of interest in this study is that the memory factor, a relatively weak factor among young study participants, became a major factor for persons over the age of 60. This means that specific memory abilities vary more among older people and affect scores on more of the subtests.

Intelligence as multiple abilities

If one’s goal is to map the broad scope of intelligence and not simply that of the WAIS, many different intellectual tasks must be administered to a large number of people. Factor analysis of a wide variety of intellectual tasks has regularly turned up between 6 and 12 primary mental abilities. These abilities have sometimes been described as the “building blocks” or basic elements of intelligence (Thurstone, 1962). The “purest” tests of these factors are sometimes administered as tests of the “primary mental abilities.” A more recent adult version of these tests is called the Schaie–Thurstone Adult Mental Abilities Test (STAMAT; Schaie, 1985, 1996, 2013).

But what is the nature of the relationship between such elementary building blocks of intelligence and the tasks that people face in real life? To find out, measures of the different primary mental abilities were supplemented, in a sample of over 1000 persons, with real-life tasks such as interpreting medicine bottle labels, reading street maps, filling out forms, and comprehending newspaper and yellow-page advertisements.

The researchers found a substantial correlation between abilities and performance on these tasks; the correlations varied, however, depending on the task. Furthermore, composite performance on the real-life tasks could be predicted from several abilities, particularly reasoning but also, to a lesser extent, verbal knowledge. This also suggests a strong relationship between the “building blocks” of intelligence and perceived real-life competence.
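The prediction described here is, in form, a multiple regression of composite everyday-task performance on ability scores. The sketch below shows that form on synthetic data; the weights favoring reasoning over verbal knowledge are assumed for illustration and are not the study's estimates.

```python
# Multiple regression of a composite everyday-task score on primary
# abilities; data are synthetic, with a planted pattern (reasoning
# strongest, verbal knowledge secondary).
import numpy as np

rng = np.random.default_rng(2)
n = 1000
reasoning = rng.normal(size=n)
verbal = rng.normal(size=n)
everyday = 0.6 * reasoning + 0.3 * verbal + 0.6 * rng.normal(size=n)

X = np.column_stack([np.ones(n), reasoning, verbal])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, everyday, rcond=None)   # least-squares fit
print(f"intercept={beta[0]:.2f} reasoning={beta[1]:.2f} verbal={beta[2]:.2f}")
```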

We have come, then, from the view of intelligence as primarily a single trait to the view of intelligence as a number of distinct abilities. As we shall see, the distinctions of several different abilities are vital for the study of intellectual development in adults.

Relevance of test instruments to stages of intellectual development

The simple tasks in the traditional IQ tests are well suited to measure progress in the performance of many basic skills through the stages of knowledge acquisition described by Piaget (Humphreys & Parsons, 1979). But they are decidedly less adequate for the assessment of adult competence.

Even a test that was constructed explicitly for adults, the WAIS, is deficient in several respects. First, the test was designed with the intent of measuring cognitive dysfunctions in clinically suspect individuals, and second, it was originally normed on young adult samples, those who in our conceptual scheme would be classified as being in the achieving stage, although norms for midlife and older adults are now available. What we need, therefore, is to construct adult tests of intelligence relevant to competence at different points in the life span, just as the traditional test is relevant to the competencies of children in school settings.

Practical or everyday intelligence

Some would argue that intelligence in adults should be studied by asking well-functioning people how they go about solving their everyday problems (Sternberg & Lubart, 2001). This is what is known as a “naive” theory of intelligence; that is, it is not derived from objective analyses of experts, but rather from the collective perceptions of laypersons. Perhaps it is indeed the conceptions of adults about their own competence that ought to be the basis for defining intelligence. But there is the distinct danger that in this process we would confuse intelligence with socially desirable behavior. Moreover, the attributes of intelligence obtained in this manner may be characteristic only of the specific group of persons interviewed or may be governed by time-specific and/or context-specific conceptions.

We would be remiss, then, if we were to discard the objective knowledge of mental functioning that is now in hand and is directly applicable to adult intelligence (Schaie & Willis, 1999; Willis & Schaie, 2005). Instead, we may wish to consider how the basic intellectual processes that are important at all life stages relate to everyday tasks (also see Diehl, Willis, & Schaie, 1995; Marsiske & Margrett, 2006; Wettstein, Wahl, & Diehl, 2014).

There have been a number of efforts to develop objective measures of people’s abilities to engage in effective problem solving and to perform tasks required for daily living (see Marsiske & Willis, 1995; Willis, 1996, 1997). For example, the Educational Testing Service (1977) developed a test to assess whether high school graduates had acquired the necessary information and skills to handle everyday problems. This test includes tasks such as interpreting bus schedules, tax forms, medicine-bottle labels, and advertisements, as well as understanding instructions for the use of appliances and the meaning of newspaper opinion/editorial pieces. The test has been given to large samples of adults ranging in age from the 20s to the 80s (Schaie, 1996, 2013). The test correlates with a number of the primary mental abilities; in fact, most of the individual differences on the test can be predicted from knowledge of scores on the basic abilities tests.

Another effort to measure everyday problem solving was a test constructed to assess the skills that old people are thought to need to function independently in the community. These skills, called the instrumental activities of daily living (IADL; Lawton & Brody, 1969), include the ability to engage independently in food preparation, housekeeping, medication use, shopping, telephone use, transportation, and financial management activities. Obviously, each of these activities requires the exercise of practical intelligence.

Marsiske and Willis (1995) collected written materials (e.g., medication labels, bus schedules, telephone instructions, mail order forms, appliance instructions, etc.) that are actually used for each of the seven types of activities. These items were rated as to their relevance by professionals working with older people, and then a test was constructed that measured proficiency with the information needed to carry out each activity of daily living independently. The validity of these measures was further established by observing individuals in their homes actually using these materials to engage in activities such as measuring out medications, using a microwave, and so forth. Again, individual differences on this everyday problems test could be explained in large part by the performance of individuals on the basic abilities.


URL: https://www.sciencedirect.com/science/article/pii/B9780128160947000179

Sick? Or slow? On the origins of intelligence as a psychological object

Serge Nicolas, ... Jeremy Trevelyan Burman, in Intelligence, 2013

Abstract

This paper examines the first moments of the emergence of “psychometrics” as a discipline, using a history of the Binet–Simon test (precursor to the Stanford–Binet) to engage the question of how intelligence became a “psychological object.” To begin to answer this, we used a previously-unexamined set of French texts to highlight the negotiations and collaborations that led Alfred Binet (1857–1911) to identify “mental testing” as a research area worth pursuing. This included a long-standing rivalry with Désiré-Magloire Bourneville (1840–1909), who argued for decades that psychiatrists ought to be the professional arbiters of which children would be removed from the standard curriculum and referred to special education classes in asylums. In contrast, Binet sought to keep children in schools and conceived of a way for psychologists to do this. Supported by the Société libre de l'étude psychologique de l'enfant [Free society for the psychological study of the child], and by a number of collaborators and friends, he thus undertook to create a “metric” scale of intelligence—and the associated testing apparatus—to legitimize the role of psychologists in a to-that-point psychiatric domain: identifying and treating “the abnormal”. The result was a change in the earlier law requiring all healthy French children to attend school, between the ages of 6 and 13, to recognize instead that otherwise normal children sometimes need special help: they are “slow” (arriéré), but not “sick.” This conceptualization of intelligence was then carried forward, through the test's influence on Lewis Terman (1877–1956) and Lightner Witmer (1867–1956), to shape virtually all subsequent thinking about intelligence testing and its role in society.


URL: https://www.sciencedirect.com/science/article/pii/S0160289613001232

The making of a field: The development of comorbid psychopathology research for persons with intellectual disabilities and autism

Johnny L. Matson, Lindsey W. Williams, in Research in Developmental Disabilities, 2014

1 Early developments

ID has a longer history of research on comorbid psychopathology than ASD. This observation is understandable, as ID has a much longer history as a field of study in the disciplines of education, health, and mental health. Binet famously developed the Binet–Simon intelligence test as a method of identifying children with ID and placing them in classrooms separate from those of typically developing children. These developments occurred in Paris and culminated in the first version of the Binet–Simon test in 1905. Later, Terman translated and modified the test for use in the U.S. The first edition of the Stanford–Binet appeared in 1916, with revisions following in 1937 and 1960 (Sears, 1957).

Formative efforts in defining ASD came much later. Almost 40 years passed between the development of the Binet–Simon and Kanner's (1943) first description of autism in a professional journal. Even at that point, major modifications and changes to the diagnosis of ASD continued, with scale development following even later. This preoccupation with defining core symptoms was, in our view, an impediment to the development of the field of comorbid conditions.

For some time after the development of modern definitions of ID and ASD, and of the accompanying tests to help identify these conditions, comorbid conditions were not addressed. Additionally, various rationales for why these disorders could not overlap with mental health conditions in particular were common; insufficient ego strength and poor insight into one's own problems were among the reasons cited for these beliefs. It was not until the 1960s that researchers began to acknowledge the presence of co-occurring psychopathology among persons with ID (Gardner, 1967).

Despite these developments, there was considerable resistance to change in the field. One of the primary difficulties was the general separation of services into two tracks: ID and mental health. Thus, persons with both ID and mental health concerns often found themselves in a proverbial health-services no-man's-land. This service model also shaped how services were provided and how patients were viewed. The ID centers and outpatient programs tended to focus on psychological and educational services. Over time these services became more and more focused on methods and procedures adhering to an operant conditioning paradigm; these methods as a group are often referred to as applied behavior analysis. The mental health side, conversely, adopted a medical/biological model, focusing on differential diagnosis and psychotropic medication. Supportive psychological therapies were also employed in some instances. Thus, the type of services a person received depended greatly on which of the two types of agencies the individual was assigned to. Another problem with this approach was that both treatment models have merit; although these methods could certainly complement one another, that approach was seldom followed. Rather, many professionals tended to gravitate to one model or the other, and the opposing camp was viewed as a rival treatment model rather than a potential asset to a particular program. The notion was more about sorting out whether the person fell into an ID box or a mental health box, with no consideration of overlap. This drastically limited the focus on comorbid mental health conditions in persons with ID.

ASD also developed as a singular disorder with respect to the delivery of services. Once the condition had been defined, early researchers were of the opinion that the disorder was rare and occurred among children with above-average intelligence. As a result, most programs tended to be housed in medical schools or were administered by private providers. It was not until the 1960s and 1970s that the disorder was reframed; at that point, ASD moved from being defined as a mental health disorder to being defined as a developmental disorder. Later, it became evident that a high overlap occurred between ASD and ID (Hill & Furniss, 2006; Matson & Shoemaker, 2009). In essence, this became the first major advance in the field of ASD and comorbidity. One could also argue that research showing that as many as 70% of individuals with ASD also evince ID hastened and solidified the establishment of ASD as a form of developmental disability.


URL: https://www.sciencedirect.com/science/article/pii/S089142221300437X

Who created the first intelligence test?

It wasn't until the turn of the 20th century that the Frenchman Alfred Binet (1857-1911) developed the first test resembling a modern intelligence test.

What is the name of the first intelligence test?

This first intelligence test, referred to today as the Binet-Simon Scale, became the basis for the intelligence tests still in use today. However, Binet himself did not believe that his psychometric instruments could be used to measure a single, permanent, and inborn level of intelligence.

Who is associated with intelligence testing?

Since Alfred Binet first used a standardized test to identify learning-impaired Parisian children in the early 1900s, intelligence testing has become one of the primary tools for identifying children with mental retardation and learning disabilities.