Edited by Matthew A. McIntosh / 03.09.2018
1 – Introduction to Intelligence
1.1 – Defining Intelligence
1.1.1 – Introduction
Over the last century or so, intelligence has been defined in many different ways.
The meaning of the word “intelligence” has been hotly contested for many years. In today’s psychological landscape, intelligence can be very generally defined as the capacity to learn from experience and adapt to one’s environment, but the many different theories of intelligence developed during that time offer many different frames in which to discuss it.
1.1.2 – “General” Intelligence
A brief history of intelligence testing: In this video, Philip Zimbardo explores a brief history of intelligence testing, beginning with Binet’s original tests. The video concludes with a brief discussion of modern notions of IQ and intelligence testing.
Francis Galton, influenced by his half-cousin Charles Darwin, was the first to propose a theory of intelligence. Galton believed intelligence was a real faculty with a biological basis that could be studied by measuring reaction times to certain cognitive tasks. Galton measured the head sizes of British scientists and ordinary citizens, but found no relationship between head size and his definition of intelligence.
A deeper search for understanding of human intelligence began in the early 1900s, when Alfred Binet began administering intelligence tests to school-age children in France. His goal was to develop a measure that would help determine differences between normal and subnormal children. Binet’s research assistant, Theodore Simon, helped him develop a test for measuring intelligence. It became known as the Binet-Simon Scale, the predecessor of the modern IQ test.
In 1904, Charles Spearman published an article in the American Journal of Psychology titled “General Intelligence.” Based on the results of a series of studies collected in England, Spearman concluded that there was a common function across intellectual activities that he called g, or general intelligence. Since the article’s publication, research has found g to be highly correlated with many important social outcomes and the single best predictor of successful job performance. The current American Psychological Association definition of intelligence involves a three-level hierarchy of intelligence factors, with g at its apex.
In 1940, David Wechsler became a major critic of general intelligence and the Binet-Simon scale. He was a very influential advocate for the concept of non-intellective factors (variables that contribute to the overall score in intelligence, but are not made up of intelligence-related items, including lack of confidence, fear of failure, attitudes, etc.), and he felt that the Binet-Simon scale did not do a good job of incorporating these factors into intelligence. He suggested that these factors were necessary for predicting a person’s capability to be successful in life. Wechsler further defined intelligence as the capacity of an individual to act purposefully, to think rationally, and to deal effectively with his or her surroundings or situation.
1.1.3 – Multiple Intelligence
An early theory of multiple intelligence is attributed to Edward Thorndike, who in 1920 theorized three types of intelligence: social, mechanical, and abstract. Thorndike defined social intelligence as the ability to manage and understand people. He focused on behavior rather than consciousness in his research; as such, his studies constituted the beginning of investigations related to social intelligence.
In the mid-20th century, Raymond B. Cattell proposed two types of intelligence rather than a single general intelligence. Fluid intelligence (Gf) is the capacity to think logically and solve problems in novel situations, independent of acquired knowledge. Crystallized intelligence (Gc) is the ability to use skills, knowledge, and experience. It does not equate to memory, but it does rely on accessing information from long-term memory. Cattell hypothesized that fluid intelligence increased until adolescence and then began to gradually decline, while crystallized intelligence increased gradually but remained relatively stable across most of adulthood until declining in late adulthood.
In more recent decades, many new theories of multiple intelligence have been proposed. In 1983, Howard Gardner published a book on multiple intelligence that breaks intelligence down into at least eight different modalities: logical, linguistic, spatial, musical, kinesthetic, naturalist, interpersonal, and intrapersonal intelligences. A few years later, Robert Sternberg proposed the Triarchic Theory of Intelligence, which proposes three fundamental types of cognitive ability: analytic intelligence, creative intelligence, and practical intelligence.
1.1.4 – Emotional Intelligence
In 1990, Peter Salovey and John Mayer coined the term “emotional intelligence” and defined it as “the ability to monitor one’s own and others’ feelings, to discriminate among them, and to use this information to guide one’s thinking and actions.” Hendrie Weisinger also worked with theories of emotional intelligence. He emphasized the significance of learning and making emotions work to improve oneself and others. He documented and illustrated the positive effect emotions could have in personal settings and work environments. Both emotional intelligence and social intelligence have been positively associated with good leadership skills, good interpersonal skills, positive outcomes in classroom situations, and better functioning in the world.
1.2 – Theories of Multiple Intelligence
1.2.1 – Introduction
Theories of multiple intelligence contend that intelligence cannot be measured by a single factor.
Today, the most widely accepted theory of intelligence is the “three stratum theory,” which recognizes that there are three different levels of intelligence, all governed by the top level, g, or general intelligence factor. However, there are alternate theories of multiple intelligence which are useful in their own way for delineating certain intellectual skill sets which vary between people. Additionally, certain individuals, such as those with savant syndrome, do not fit into traditional definitions of intelligence; multiple intelligence theory can offer a helpful way of understanding their situations.
1.2.2 – Gardner’s Multiple Intelligence Theory
In 1983, Howard Gardner proposed a view of multiple intelligences from which our thoughts and behaviors develop. According to Gardner’s theory, these intelligences can emerge individually or can mix in a variety of ways to achieve very diverse end results. Gardner identified eight specific intelligences and two additional tentative ones:
- Bodily-kinesthetic intelligence: the control and use of one’s body (e.g., dance, sports, art, primitive hunting, etc.)
- Linguistic intelligence: the use of language and communication
- Spatial intelligence: visual perceptions and manipulations (e.g., packing items into a box, reading a map, etc.)
- Intrapersonal intelligence: knowing oneself, emotional awareness, motivations, etc.
- Interpersonal intelligence: discerning the emotions and motivations of others
- Musical intelligence: competencies related to rhythm, pitch, tone, etc., and areas related to composing, playing, and appreciating music
- Naturalist intelligence: discerning patterns in nature
- Logical-mathematical intelligence: numerical abilities and logical thinking
- Spiritual intelligence: (tentative) recognition of the spiritual
- Existential intelligence: (tentative) concern with ultimate state of being
1.2.3 – Sternberg’s Triarchic Theory of Intelligence
In 1986, Robert Sternberg proposed a Triarchic Theory of intelligence. His theory organizes intelligence into three dimensions that work together: componential, experiential, and contextual.
The componential dimension includes an individual’s mental mechanisms, and is composed of three parts:
- Metacomponents: processes used in planning, monitoring, and evaluating the performance of a task; these direct all other mental activities
- Performance components: strategies in executing a task
- Knowledge acquisition components: processes involved in learning new things
The experiential dimension involves the way that individuals deal with the internal and external world. This dimension looks at how individuals deal with novelty and the eventual automation of processes. Finally, the contextual dimension examines how individuals adapt to, shape, and select the external world around them.
1.2.4 – Savant Syndrome
Savant syndrome describes individuals who are considered to be intellectually deficient yet have extremely well-developed talents or skills in a specific area, often art, music, or math. For example, Kim Peek was a savant who was born with considerable brain damage, including an enlarged head, a missing corpus callosum (the fibers that connect the two hemispheres of the brain), and a damaged cerebellum. Peek scored below average on intelligence tests, and he had difficulty with gross and fine motor activities. But Peek’s savant abilities were demonstrated through his ability to read and memorize material extremely quickly. He was reported to read books two pages at a time, reading the right side with his right eye and the left side with his left eye, and he was capable of memorizing the material as he read.
Kim Peek: Kim Peek, a savant, was the inspiration for the 1988 film “Rain Man.” Peek was able to read and memorize large amounts of information in a short amount of time, yet scored below average on IQ tests.
Savant syndrome demonstrates that an individual who appears to be intellectually deficient based on traditional definitions of intelligence can display exceptional abilities in a specific area or areas. If a savant such as Peek were measured by Gardner’s multiple intelligence theory, he would be considered very gifted in a subtype of intelligence, such as linguistics.
1.3 – Genetic and Environmental Impacts on Intelligence
Human intelligence is shaped by both internal genetic factors and external environmental circumstances.
1.3.1 – The Great Debate: Nature vs. Nurture
The natural genetic make-up of the body interacts with environment from the moment of conception. While extreme genetic or environmental conditions can dominate in rare cases, these two factors usually work together to produce individual intelligence. There is much debate among researchers and scientists over which influence, genetics or environment, plays the larger role in determining overall intelligence, because both have been scientifically established as having a significant impact. Recent discoveries have further complicated this debate by showing that the relationship between internal predispositions (“nature”) and external circumstances (“nurture”) not only varies among populations, but also changes over time. Genetics and environment interact constantly, so the question of supremacy in the nature versus nurture debate in human intelligence will probably never be fully answered.
1.3.2 – Genetics
A gene is the unit of heredity by which a biological trait is passed down through generations of human beings. Heritability describes what percentage of the variation of a trait in a population is due to genetic differences in that population (as opposed to environmental factors). Some traits, like eye color, are highly heritable and can be easily traced. However, even highly heritable traits are subject to environmental influences during development. Intelligence is generally considered to be even more complicated to trace to one source because it is a polygenic trait, influenced by many interacting genes.
Twin studies in the Western world have found the heritability of IQ to be between 0.7 and 0.8, meaning that 70%-80% of the variation in IQ within those populations is attributable to genetic differences. Conventional twin studies reinforce this pattern: monozygotic (identical) twins raised separately are more similar in IQ than dizygotic (fraternal) twins raised together, and much more similar than adoptive siblings.
Heritability Correlations: This chart illustrates patterns in studies of the heritability of traits. Even identical twins with a shared family history do not show a trait correlation of 1.0, which helps explain the wide variance in intelligence among human beings.
However, the heritability of IQ in juvenile twins is much lower at 0.45. Heritability measures of IQ have a general upward trend with age (from as low as 0.2 in infancy to 0.8 in late adulthood), leading psychologists to believe that either we rely on or reinforce our genes as we age. This is thought to occur through human interaction with external circumstances, whereby people with different genes seek out different environments. Thus, despite the high heritability of IQ, we can determine that there is an environmental influence as well.
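One classic back-of-the-envelope estimator behind such twin-study figures is Falconer’s formula, which doubles the gap between identical- and fraternal-twin correlations. The formula is not given in the text, and the example correlations below are hypothetical, chosen only to illustrate how a heritability in the reported range could arise:

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's rough heritability estimate from twin correlations:
    h2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the trait correlations
    of identical and fraternal twin pairs, respectively."""
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations chosen to land in the 0.7-0.8 range cited above:
print(falconer_h2(r_mz=0.85, r_dz=0.475))  # ~0.75
```

The intuition: identical twins share all their genes while fraternal twins share about half, so doubling the difference in their correlations attributes that gap to genetics.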
1.3.3 – Genetics and Intellectual Disabilities
As mentioned, under normal circumstances intelligence involves multiple genes. However, certain single-gene genetic disorders can severely affect intelligence. Genetic causes for many learning disabilities, such as dyslexia, and neural disorders, such as Down syndrome, autism, and Alzheimer’s disease, have been investigated by the field of cognitive genomics, the study of genes as they relate to human cognition. Down syndrome, for example, is a genetic syndrome marked by intellectual disability, and it has implications for the ways in which children with Down syndrome learn. While the genetic cause of Down syndrome is known to be an extra copy of chromosome 21 (trisomy 21), the gene or genes responsible for the cognitive symptoms have yet to be identified. And like most traits, the occurrence of neurobehavioral disorders is influenced by both genetic and non-genetic factors, and the genes directly associated with these disorders are often unknown.
1.3.4 – Environment
Many different environmental influences have been found to shape intelligence. These influences generally fall into two main categories: biological and sociocultural. Biological influences act on the physical body, while sociocultural influences shape the mind and behavior of an individual.
1.3.5 – Biological Influences
Biological influences include everything from nutrition to stress, and begin to shape intelligence from prenatal stages onward. Nutrition has been shown to affect intelligence throughout the human lifespan; malnutrition during critical early periods of growth (particularly the prenatal period and the second year of life) can harm cognitive development. Inadequate nutrition can disrupt neural connections and pathways, and the resulting deficits can be difficult to reverse.
Stress also plays a part in the development of human intelligence: exposure to violence in childhood has been associated with lower school grades and lower IQ in children of all races. A group of largely African American, urban first-grade children and their caregivers were evaluated using self-report, interview, and standardized tests, including IQ tests. The study reported that exposure to violence and trauma-related distress in young children was associated with substantial decreases in IQ and reading achievement. Exposure to toxins and other perinatal factors have also been proven to affect intelligence, and in some cases, cause issues such as developmental delays.
1.3.6 – Sociocultural Influences
The family unit is one of the most basic influences on child development, but it is difficult to untangle the genetic from the environmental factors in a family. For example, the quantity of books in a child’s home has been shown to positively correlate with intelligence… but is that due to the environmental impact of having parents who will read to their children, or is it an indicator of parental IQ, a highly heritable trait?
A child’s position in birth order has also been found to influence intelligence: firstborn children have been found in some studies to score higher, though these studies have been criticized for not controlling for age or family size. Moving outside of the family unit, human beings are substantially shaped by their respective peer groups. Stereotype threat is the idea that people belonging to a specific group will perform in line with generalizations assigned to that group, regardless of their own aptitude; this threat has been known to affect IQ scores both positively and negatively. That is, if a person belongs to a group that is told they are intelligent, they will appear more intelligent on IQ tests; if they are told they belong to a group that is unintelligent, they will perform worse, even if these distinctions are random and fabricated (as in lab studies). People’s access to education, and to specific training and intervention resources, also shapes their lifelong intelligence.
2 – Measuring Intelligence
2.1 – History of Intelligence Testing
Our concept of intelligence has evolved over time, and intelligence tests have evolved along with it. Researchers continually seek ways to measure intelligence more accurately.
2.1.1 – Introduction
The abbreviation “IQ” comes from the term intelligence quotient, coined by the German psychologist William Stern in the early 1900s (from the German Intelligenz-Quotient). The first modern intelligence test predates the term: in 1905, Alfred Binet and Theodore Simon published the Binet-Simon intelligence scale. Because it was easy to administer, the Binet-Simon scale was adopted for use in many other countries.
These practices eventually made their way to the United States, where psychologist Lewis Terman of Stanford University adapted them for American use. He created and published the first IQ test in the United States, the Stanford-Binet IQ test. He proposed that an individual’s intelligence level be measured as a quotient (hence the term “intelligence quotient”) of their estimated mental age divided by their chronological age. A child’s “mental age” was the age of the group whose mean score matched the child’s score. So if a five-year-old child achieved at the same level as an average eight-year-old, he or she would have a mental age of eight. The original formula for the quotient was (Mental Age ÷ Chronological Age) × 100. Thus, a five-year-old child who achieved at the same level as his five-year-old peers would score a 100. The score of 100 became the average score, and is still used today.
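The ratio formula above is simple enough to sketch in a few lines of code (a minimal illustration; the function name and example ages are ours, and a real mental age would come from normed test data rather than be passed in directly):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Terman's original ratio IQ: (mental age / chronological age) x 100."""
    return (mental_age / chronological_age) * 100

# A five-year-old performing at the level of an average eight-year-old:
print(ratio_iq(8, 5))  # 160.0
# A five-year-old performing at the level of average five-year-olds:
print(ratio_iq(5, 5))  # 100.0
```

Modern tests have replaced this ratio with the deviation scoring discussed later in this chapter, but the ratio version shows where the “quotient” in “intelligence quotient” comes from.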
2.1.2 – Wechsler Adult Intelligence Scale
In 1939, David Wechsler published the first intelligence test explicitly designed for an adult population, known as the Wechsler Adult Intelligence Scale, or WAIS. After the WAIS was published, Wechsler extended his scale for younger people, creating the Wechsler Intelligence Scale for Children, or WISC. The Wechsler scales contained separate subscores for verbal IQ and performance IQ, and were thus less dependent on overall verbal ability than early versions of the Stanford-Binet scale. The Wechsler scales were the first intelligence scales to base scores on a standardized bell curve (a type of graph in which there are an equal number of scores on either side of the average, where most scores are around the average and very few scores are far away from the average).
Modern IQ tests now measure a very specific mathematical score based on a bell curve, with a majority of people scoring the average and correspondingly smaller amounts of people at points higher or lower than the average. Approximately 95% of the population scores between 70 and 130 points. However, the relationship between IQ score and mental ability is not linear: a person with a score of 50 does not have half the mental ability of a person with a score of 100.
2.1.3 – General Intelligence Factor
IQ Curve: The bell-shaped curve for IQ scores has an average value of 100.
Charles Spearman pioneered the theory that a single general intelligence factor, which he called g, underlies performance on disparate cognitive tasks. In the normal population, g and IQ are roughly 90% correlated. This strong correlation means that if you know someone’s IQ score, you can predict their g with a high level of accuracy, and vice versa. As a result, the two terms are often used interchangeably.
2.1.4 – Culture-Fair Tests
In order to develop an IQ test that separated environmental from genetic factors, Raymond B. Cattell created the Culture-Fair Intelligence Test. Cattell argued that general intelligence g exists and that it consists of two parts: fluid intelligence (the capacity to think logically and solve problems in novel situations) and crystallized intelligence (the ability to use skills, knowledge, and experience). He further argued that g should be free of cultural bias such as differences in language and education type. This idea, however, is still controversial.
Another supposedly culture-fair test is Raven’s Progressive Matrices, developed by John C. Raven in 1936. This test is a nonverbal group test typically used in educational settings, designed to measure the reasoning ability associated with g.
2.1.5 – The Flynn Effect
Since the early years of testing, the average score on IQ tests has risen throughout the world. This increase is now called the “Flynn effect,” named after Jim Flynn, who did much of the work to document and promote awareness of this phenomenon and its implications. Because of the Flynn effect, IQ tests are recalibrated every few years to keep the average score at 100; as a result, someone who scored a 100 in the year 1950 would receive a lower score on today’s test.
2.2 – IQ Tests
2.2.1 – Introduction
IQ tests attempt to measure and provide an intelligence quotient, which is a score derived from a standardized test designed to assess human intelligence. There are now several variations of these tests that have built upon and expanded the original test, which was designed to identify children in need of remedial education. Currently, IQ tests are used to study distributions in scores among specific populations. Over time, these scores have come to be associated with differences in other variables such as behavior, performance, and well-being; these associations vary based on cultural norms.
2.2.2 – Measuring IQ Scores
After decades of revision, modern IQ tests produce a mathematical score based on standard deviation, or difference from the average score. Scores on IQ tests tend to form a bell curve with a normal distribution. In a normal distribution, 50% of the scores will be below the average (or mean) score and 50% of the scores will be above it.
IQ Bell Curve: In a normally distributed bell curve, half the scores are above the mean and half are below. The farther from the mean, the less frequent a given score is.
Normal distributions are special, because their data follows a specific, reliable pattern. Standard deviation is a term for measuring how far a given score is from the mean; in any normal distribution, you can tell what percentage of a population will fall within a certain score range by looking at standard deviations. It is a statistical law that under a normal curve, 68% of scores will lie between -1 and +1 standard deviation, 95% of scores will lie between -2 and +2 standard deviations, and more than 99% of scores will fall between -3 and +3 standard deviations.
The scores of an IQ test are normally distributed so that one standard deviation is equal to 15 points; that is to say, when you go one standard deviation above the mean of 100, you get a score of 115. When you go one standard deviation below the mean, you get a score of 85. Two standard deviations are 30 points above or below the mean, three are 45 points, and so on. So by current measurement standards, 68% of people score between 85 and 115, 95% of the population score between 70 and 130 points, and over 99% of the population score between 55 and 145.
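These percentages follow directly from the normal curve, and can be checked with Python’s standard library (the variable names here are illustrative; the mean of 100 and standard deviation of 15 are the conventions described above):

```python
from statistics import NormalDist

# Deviation IQ: scores are scaled to a mean of 100 and a standard deviation of 15.
iq = NormalDist(mu=100, sigma=15)

# Share of the population within 1, 2, and 3 standard deviations of the mean:
for k in (1, 2, 3):
    share = iq.cdf(100 + 15 * k) - iq.cdf(100 - 15 * k)
    print(f"within {k} SD ({100 - 15 * k}-{100 + 15 * k} points): {share:.1%}")
# within 1 SD (85-115 points): 68.3%
# within 2 SD (70-130 points): 95.4%
# within 3 SD (55-145 points): 99.7%
```

The same object converts any score to a percentile: `iq.cdf(130)` is about 0.977, meaning a score of 130 sits at roughly the 98th percentile.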
It should be noted that this standard of measure does not imply a linear relationship between IQ and mental ability: a person with a score of 50 does not have half the mental ability of a person with a score of 100.
Standard Deviations of IQ Scores: IQ test scores tend to form a bell curve, with approximately 95% of the population scoring between two standard deviations of the mean score of 100.
IQ tests are a type of psychometric (mental measurement) testing thought to have very high statistical reliability. This means that while a person’s scores may vary slightly with age and environmental conditions, they are repeatable and will generally agree with one another over time. They are also thought to have high statistical validity, which means that they measure what they actually claim to measure: intelligence. For these reasons, many people trust them for use in other applications, such as clinical or educational purposes.
2.2.3 – Types of IQ Tests and Tasks
WAIS Test Components: The WAIS uses a variety of components to determine a person’s IQ score, including verbal, memory, perceptual, and processing skills.
There are a wide variety of IQ tests that use slightly different tasks and measures to calculate an overall IQ score. The most commonly used test series is the Wechsler Adult Intelligence Scale (WAIS) and its counterpart, the Wechsler Intelligence Scale for Children (WISC). Other commonly used tests include the original and updated version of Stanford-Binet, the Woodcock-Johnson Tests of Cognitive Abilities, the Kaufman Assessment Battery for Children, the Cognitive Assessment System, and the Differential Ability Scale. While all of these tests measure intelligence, not all of them label their standard scores as IQ scores.
Sample IQ Test Question: This sample IQ test question is modeled on progressive matrices, which test a person’s ability to identify and continue patterns.
Currently, most tests tend to measure both verbal and performance IQ. Verbal IQ is measured through both comprehension and working (short-term) memory skills, such as vocabulary and arithmetic. Performance IQ is measured through perception and processing skills, such as matrix completion and symbol coding. All of these measures and tasks are used to calculate a person’s IQ.
2.3 – Standardized Tests
2.3.1 – Introduction
Standardized tests are assessments that are always administered in the same way so as to be able to compare scores across all test-takers. Students respond to the same questions, receive the same directions, and have the same time limits, and the tests are scored according to explicit, standard criteria. Standardized tests are usually created by a team of test experts from a commercial testing company in consultation with classroom teachers and university faculty.
Standardized tests are designed to be taken by many students within a state, province, or nation (and sometimes across nations). Standardized tests are perceived as being “fairer” than non-standardized tests and more conducive to comparison of outcomes across all test takers. That said, several widely used standardized tests have also come under heavy criticism for potentially not actually evaluating the skills they say they test for.
Types of standardized tests include:
- Achievement tests, which are designed to assess what students have learned in a specific content area or at a specific grade level.
- Diagnostic tests, which are used to profile skills and abilities, strengths and weaknesses.
- Aptitude tests, which, like achievement tests, measure what students have learned; however, rather than focusing on specific subject matter learned in school, the test items focus on verbal, quantitative, and problem-solving abilities that are learned in school or in the general culture. According to test developers, both the ACT and SAT assess general educational development and reasoning, analysis, and problem solving, as well as predict success in college.
2.3.2 – Scoring Standardized Tests
Standardized test scores are evaluated in two ways: relative to a specific scale or criterion (“criterion-referenced”) or relative to the rest of the test-takers (“norm-referenced”). Some recent standardized tests incorporate both criterion-referenced and norm-referenced elements into the same test.
2.3.3 – Standardized Tests and Education
Scantron scoring: Many standardized tests rely on multiple-choice questions because answers can be scored by machine.
Standardized tests are often used to select students for specific programs. For example, the SAT (Scholastic Aptitude Test) and ACT (American College Test) are norm-referenced tests used to help admissions officers decide whether to admit students to their college or university. Norm-referenced standardized tests are also one of the factors in deciding if students are eligible for special-education or gifted-and-talented programs. Criterion-referenced tests are often used to determine which students are eligible for promotion to the next grade or graduation from high school.
2.3.4 – Standardized Tests and Intelligence
Some standardized tests are designed specifically to assess human intelligence. For example, the commonly used Stanford-Binet IQ test, the Wechsler Adult Intelligence Scale (WAIS), and the Wechsler Intelligence Scale for Children (WISC) are all standardized tests designed to test intelligence. However, these tests differ in how they define intelligence and what they claim to measure. The Stanford-Binet test aims to measure g-factor, or “general intelligence.” David Wechsler, the creator of the Wechsler intelligence scales, thought intelligence measurements needed to address more than just one factor and also that they needed to take into account “non-intellective factors” such as fear of failure or lack of confidence.
It is important to understand what a given standardized test is designed to measure (as well as what it actually measures, which may or may not be the same). For example, many people mistakenly believe that the SAT is a test designed to measure intelligence. However, while SAT scores and g-factor are related, the SAT is in fact designed to measure literacy, writing, and problem-solving skills needed to succeed in college and is not necessarily a reflection of intelligence.
2.4 – Controversies in Intelligence and Standardized Testing
2.4.1 – Introduction
Intelligence tests and standardized tests are widely used throughout many different fields (psychology, education, business, etc.) because of their ability to assess and predict performance. However, the ways these measures are used and applied in society are often criticized.
2.4.2 – The Issue of Validity
Intelligence tests (such as IQ tests) have always been controversial; critics cast doubt on their validity, questioning whether IQ tests actually measure what they claim to measure: intelligence. Some argue that environmental factors, such as the quality of education and school systems, can cause discrepancies in test scores that are not based on intelligence. Others argue that an individual’s test-taking skills are being evaluated rather than their intelligence.
The field of psychometrics is devoted to the objective measurement of psychological phenomena, such as intelligence. Over the years, psychometricians have sought to make intelligence tests more culture-fair and valid, and to make sure that they measure g, the “general intelligence factor” thought to underlie all intelligence.
2.4.3 – Prediction of Social Outcomes
Another criticism lies in the use of intelligence and standardized tests as predictive measures for social outcomes. Researchers have learned that IQ and general intelligence (g) correlate with some social outcomes, such as lower IQs being linked to incarceration and higher IQs being linked to job success and wealth. However, it is important to note that correlational studies only show a relationship between two factors: they give no indication about causation. As a result, critics of intelligence testing argue that intelligence cannot be used to predict such outcomes, and that environmental factors are more likely to contribute to both IQ test results and later outcomes in life.
The controversy surrounding using intelligence and standardized tests as predictive measures for social outcomes is, at its core, an ethical one. Consider the implications if employers decided to use intelligence tests as a way to screen prospective employees in order to predict which individuals will be successful in a job. This misapplication of intelligence testing is considered unethical, because it provides a measure for discriminating against fully qualified individuals. Again, even if intelligence scores correlate with job success, this does not mean that people with high intelligence will always be successful at work.
2.4.4 – Standardized Test Scores and Intelligence
Student taking test: Students take the SAT and/or ACT exam in order to gain admittance to college.
Another criticism points out that standardized tests that actually measure specific skills are misinterpreted as measures of intelligence. Researchers examined the correlation between the SAT exam and two other tests of intelligence and found a strong relationship between the results. They concluded that the SAT is primarily a test of g, or general intelligence. However, correlational studies provide information about a relationship, not about causation. Using a standardized test like the SAT, which is designed to measure scholastic aptitude, as a measure of intelligence is outside the scope of the test’s intended usage, even if the two do correlate.
Critics of standardized tests also point to problems with using the SAT and ACT exams to predict college success. Recent research has found the SAT and ACT to be poor predictors of college success. Standardized tests don’t measure factors like motivation or study skills, which are also important for success in school. Predicting college success is most reliable when a combination of factors is considered, rather than a single standardized test score.
2.4.5 – Bias
A related controversy concerns whether intelligence tests are biased, such that certain groups have an advantage over others. Questions of bias parallel the questions raised about using intelligence tests to predict social outcomes. For example, the relationship between wealth and IQ is well documented. Could this mean that IQ tests are biased toward wealthy individuals? Or does the relationship run the other way? If there are statistically significant group differences in IQ, whether based on race, gender, socioeconomic status, age, or any other division, it is important to examine the intelligence test in question to make sure that no feature of the testing method gives one group an advantage along any dimension other than intelligence.
Additionally, IQ cannot be said to describe or measure all possible cultural representations of intelligence. Various cultures value different types of mental abilities based on their cultural history, and the IQ test is a highly westernized construct. As such, IQ tests are also criticized for assessing only those particular areas emphasized in the western conceptualization of intelligence, such as problem-solving, and failing to account for other areas such as creativity or emotional intelligence.
IQ tests are often criticized for being culturally biased. A 2005 study found that IQ tests may contain cultural influences that reduce their validity as a measure of cognitive ability for Mexican-American students, whose scores showed a weaker positive correlation with other measures than those of sampled white American students. Other recent studies have questioned the culture-fairness of IQ tests when used in South Africa. Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for children with autism and may have resulted in incorrect claims that a majority of children with autism are intellectually disabled.
3 – Extremes of Intelligence
3.1 – Intellectual Disabilities
3.1.1 – Introduction
An intellectual disability is a significant limitation in an individual’s cognitive functioning and daily adaptive behaviors. Disabilities can manifest as limited language, impaired speech, or difficulty performing academically. For many centuries, intellectual disabilities were poorly understood, and diagnosis and treatment were negligible; until the 2013 release of the DSM-5 (the currently accepted resource for the diagnosis of mental illness), the term “mental retardation” was still in use rather than “intellectual disability.”
Individuals are diagnosed with an intellectual disability if they score below 70 on a measure of intelligence such as the IQ test, which has a mean score of 100. The standard deviation on an IQ test is 15 points, which means that a score of 70 is two standard deviations below the mean, placing it in roughly the bottom 2.3% of the population. An individual must also display deficits in adaptive functioning; have impairments in at least two areas of functioning, such as self-care, social skills, or living skills; and experience the onset of symptoms before the age of 18 in order to be diagnosed as having an intellectual disability.
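The cutoff arithmetic above follows directly from the normal model of IQ scores. As a minimal sketch (assuming a normal distribution with the stated mean of 100 and standard deviation of 15), the fraction of the population at or below a given score can be computed with the Python standard library alone:

```python
from math import erfc, sqrt

def iq_percentile(score, mean=100.0, sd=15.0):
    """Fraction of a normal population scoring at or below `score`."""
    z = (score - mean) / sd
    # Standard normal cumulative distribution function, written
    # in terms of the complementary error function erfc.
    return 0.5 * erfc(-z / sqrt(2))

# A score of 70 is two standard deviations below the mean:
print(round(iq_percentile(70) * 100, 1))  # prints 2.3 (percent of population)
```

The exact value is about 2.275%, which is why sources round it variously to 2.2% or 2.3%.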
3.1.2 – Types of Intellectual Disability
Intellectual disabilities are categorized by their severity:
- Mild: Approximately 85% of individuals with an intellectual disability fit into this category. These individuals are often able to acquire sixth-grade level academic skills. They also often have the skills necessary to live independently and hold a job, but may need assistance if under unusual stress.
- Moderate: About 10% of people with intellectual disabilities fit into this category. These individuals benefit from social skills and vocational training. They can often learn to travel from place to place independently and hold an unskilled job with supervision.
- Severe: Only 3%-4% of individuals are in this category. They may be able to perform some work with supervision and can often function in a community, living in a group home or with their family.
- Profound: Approximately 1% are in this category. These individuals have fundamental mental impairments and need optimal care, which requires a structured environment with one-to-one supervision by a caregiver.
3.1.3 – Causes of Intellectual Disability
Among the common causes of intellectual disabilities are fetal alcohol syndrome and Down syndrome; other contributing factors include certain genetic disorders and exposures to environmental toxins. In every population there is a small percentage of individuals whose intellectual disability has no known cause.
3.1.3.1 – Fetal Alcohol Spectrum Disorders
Fetal alcohol spectrum disorders (FASD) are a group of conditions that can occur in a person whose mother ingested alcohol during pregnancy. Fetal alcohol syndrome (FAS) is the most severe disorder on this spectrum and the leading preventable cause of intellectual disability. It occurs when alcohol crosses the placenta of a pregnant woman and damages the developing brain of the fetus. Alcohol exposure presents a risk of fetal brain damage at any point during a pregnancy, since brain development is ongoing throughout pregnancy. FASD is estimated to affect between 2% and 5% of people in the United States and Western Europe. FAS is believed to occur in between 0.2 and 9 per 1,000 live births in the United States.
3.1.3.2 – Down Syndrome
Down syndrome, also known as trisomy 21, is a genetic disorder caused by the presence of a full or partial third copy of chromosome 21. It is typically associated with physical growth delays, a particular set of facial characteristics and a severe degree of intellectual disability. Down syndrome is one of the most common chromosome abnormalities in humans, occurring in about one per 1000 babies born each year. The average full-scale IQ of young adults with Down syndrome is around 50.
Education and proper care have been shown to improve quality of life for individuals with Down Syndrome. Some children with Down syndrome are educated in typical school classes, while others require more specialized education. Some individuals with Down syndrome graduate from high school and a few go on to post-secondary education.
3.1.4 – Challenges Caused by Intellectual Disability
Individuals living with intellectual disabilities face both personal and external challenges in life. A child with an intellectual disability may learn to sit up, crawl, walk, and talk later than other children. Individuals with intellectual disabilities may experience difficulty learning social rules, deficits in memory, difficulty with problem solving, and delays in adaptive behaviors (such as self-help or self-care skills). They may also lack social inhibitors. Everyday tasks that most people take for granted, such as getting dressed or eating a meal, may be possible, but they may also take more time and effort than usual. Health and safety can also be a concern; for example, knowing whether it is safe to cross a street could pose a problem for someone with an intellectual disability. The exact combination of challenges varies from one person to another, but it typically involves limitations in both intellectual and daily functioning.
Society itself also poses challenges. People with intellectual disabilities are often discriminated against and devalued by society. This has improved over the twentieth and twenty-first centuries, but many individuals with disabilities still face being stigmatized in everyday life. Person-centered planning seeks to address this problem by encouraging a focus on the person with intellectual disabilities as someone with capacities and gifts as well as support needs. The self-advocacy movement promotes the right of individuals with intellectual disabilities to make decisions about their own lives.
3.2 – The Intellectually Gifted
3.2.1 – Defining “Gifted”
Evangelos Katsioulis: Evangelos Katsioulis is considered to be one of the most intelligent men on Earth, with a Stanford-Binet score of 205. In a normal distribution of scores, this score is expected to occur in only one out of 38 billion people.
A child whose cognitive abilities are markedly more advanced than those of his or her peers is considered intellectually gifted. At one time, giftedness was defined solely by an individual’s IQ (intelligence quotient) score. Since then, the definition of giftedness has grown more complex, with no consensus on a single definition. A variety of criteria are used to define giftedness, including measures of intelligence, creativity, and achievement, as well as interviews with parents and teachers. Often, the decision of how to identify gifted individuals is left up to school districts.
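The rarity figure quoted in the caption above can be sanity-checked with the same normal-model arithmetic used for intellectual-disability cutoffs. The sketch below assumes the older Stanford-Binet convention of a 16-point standard deviation (an assumption; the caption does not state it), under which a score of 205 lies about 6.6 standard deviations above the mean:

```python
from math import erfc, sqrt

def upper_tail(score, mean=100.0, sd=16.0):
    """Fraction of a normal population scoring at or above `score`."""
    z = (score - mean) / sd
    # Upper-tail probability of the standard normal via erfc.
    return 0.5 * erfc(z / sqrt(2))

p = upper_tail(205)  # probability of scoring 205 or higher
print(f"about 1 in {1 / p:.2e} people")
```

Under this assumption the tail probability works out to roughly one person in 38 billion, matching the caption; with a 15-point standard deviation the score would be rarer still. Either way, scores this far into the tail are extrapolations well beyond what any test is normed on.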
3.2.2 – Benefits of Gifted Programs
Gifted children often learn faster than their peers, and work more independently. Since their academic achievement is advanced, these students may become bored in a traditional classroom setting. This can sometimes lead to behavioral problems or lack of motivation. Schools face the challenge of how to work with gifted students academically to keep them challenged and engaged.
One way schools may handle this issue is to allow a gifted child to skip grades. The advantage of this solution is that the child will be doing coursework appropriate to his or her cognitive level. The potential disadvantages, however, are substantial: a child who skips grades will be physically less mature than his or her new grade-level peers and may lag behind them emotionally as well. This may lead the child to feel self-conscious about being different or to be bullied by peers.
Another solution is to keep the child in the regular classroom but provide additional assignments and an enriched curriculum. In addition to the enriched curriculum, another option might be pull-out programs where the child leaves the classroom during certain times or days for advanced classes. Finally, some school districts create entire gifted classrooms.
Gifted programs can be beneficial to the gifted child by keeping the child engaged in learning. Some programs even offer gifted children college-level credit for advanced placement classes (with a passing grade on an assessment test). These programs can provide the child with opportunities to discover his or her potential.
3.2.3 – Disadvantages of Gifted Programs
Gifted programs can also be detrimental to children. By labeling some children as “gifted” and others as “not gifted,” schools can create a self-fulfilling prophecy where those who are not accepted into the program do not perform as well as those who are accepted. The students identified as “not gifted” may believe they are not as intelligent as those who are labeled gifted, and in turn they may not put forth the same effort at school. The Pygmalion effect is a well-documented phenomenon in which higher expectations lead to better performance; the golem effect is the opposite phenomenon, in which lower expectations lead to lower performance. Teachers who are told their students are gifted will treat them as though they are gifted, which can result in increased performance; teachers who are told their students are “average” may be less patient when those students struggle than they would be if they thought those students were gifted.
Another detriment to gifted programs is that students who are not identified as gifted are denied the benefits of enriched education. Gifted programs vary widely in what they offer, but some involve activities like field trips, talks from scholars, or cultural experiences, like visits to museums. These enrichment activities would benefit all children, not just the gifted.
Originally published by Lumen Learning – Boundless Psychology under a Creative Commons Attribution-ShareAlike 3.0 Unported license.