Wednesday, May 27, 2015
The Bell Curve Chapter 3 The Economic Pressure to Partition.
Why people with different levels of intelligence end up in different occupations is the subject of this chapter. It is commonly believed in liberal academic circles that it helps to have high test scores (SAT, GRE, MCAT, or LSAT) to get into professional or graduate schools, but that once in a school, the intellect those scores measure becomes unimportant.
It is true that educational degrees are at times a ticket for jobs that could be done just as well by those without one. However, in professions like medicine, law, engineering, and astrophysics one not only has to have above-average intelligence to get into a school that teaches one of these disciplines but, once in, one has to have a relatively high IQ to assimilate the material being taught.
Most people inherently know that one has to be pretty smart to become an electrical engineer or a doctor of medicine, but the relationship of cognitive ability to job performance in other occupations is less well understood and often ignored entirely. As the authors of The Bell Curve point out, the smarter the employee, on average, the more proficient the employee. This tenet holds true for all professions. Lawyers with high IQs are more productive than those with lower intelligence; blue-collar workers, such as carpenters, with high intelligence are more productive than carpenters with lower IQs; finally, and this is the relationship most often overlooked, an unskilled laborer with a relatively high IQ will outperform one with lower intelligence every day of the week and twice on Sunday.
The message is clear: intelligence is important to job performance irrespective of the job under discussion. This holds just as true for busboys as it does for CEOs of large companies. Obviously, the consequences of hiring a dull busboy are not as great for the owner of a small restaurant as they potentially would be for the board members who hired a relatively dim-witted CEO to run their Fortune 500 company. Nonetheless, the intelligence factor is still there, just waiting to rear its head and cause havoc for the naive and unsuspecting employer who ignores the importance of IQ when he hires a new employee, irrespective of the job in question. That is the message of this chapter.
A flood of new analyses on employee intelligence during the 1980s established several points with large economic and policy implications. Test scores are predictive of job performance because they measure g, Spearman's general intelligence factor, not because they identify aptitude for a specific job. In this respect, any broad test of intelligence predicts proficiency for most occupations and does so more accurately than tests designed to measure a particular skill set.
After reading this chapter a prospective employer will understand that an IQ score is a better predictor of job productivity than a job interview, reference checks, or college transcripts. In short, an employer who is free to pick among applicants can realize huge economic gains simply by hiring the applicants with the highest IQs. It's really as simple as that, or at least it was until 1971.
In 1971 the Supreme Court of the United States mistakenly ruled that hiring based on the results of tests that measured general intelligence violated the Civil Rights Act. We will have more to say about this absurd decision in a moment, but for now realize that preventing employers from hiring the brightest employees costs the U.S. economy an estimated $13 to $80 billion each and every year. Unfortunately, laws can make the economy less efficient, but laws cannot make intelligence unimportant.
Cognitive ability itself, sheer intellectual horsepower independent of education, has market value. Employers recruit at Stanford and Yale not because graduates of these elite schools know more than graduates of less prestigious schools but for the same reason Willie Sutton robbed banks: places like Stanford, Harvard, and Yale are where you find the coin of cognitive talent. The perceived wisdom in the halls of academia, however, is quite different!
Referring to the SAT, "Test scores have modest correlation with first year grades and no correlation at all with what you do with the rest of your life," wrote Derek Bok, the president of Harvard, in 1985. To understand the absurdity of this statement, and the statistical information presented in this book, one must know the significance of correlation coefficients. As mentioned previously, correlation coefficients vary from -1 to +1. A correlation of 0 means no relationship exists; height would tell you nothing about weight, for example. A positive correlation is one that falls between 0 and +1, with +1 indicating a perfect relationship. In a study of height and weight, for example, a +1 correlation would mean that a person's weight could be predicted exactly from his height.
Correlations in the social sciences are seldom higher than +.5 or lower than -.5 because social events are usually affected by variables that are not being measured by the tests. Thus, a correlation of +.2 can be "big" for many of the topics studied by social scientists. Looking at it from a different angle, moderate correlations mean many exceptions.
The correlations between IQ and various measures of job performance are usually in the range of .2 to .6, which indicates that there are many exceptions to the statistical finding that intelligence and job performance are closely related; however, these exceptions do not invalidate the importance of statistically significant correlations.
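To see what a correlation in that range looks like in practice, here is a minimal Python sketch (the numbers are illustrative, not taken from the book) that simulates standardized IQ scores and a performance rating correlated at roughly +.4, then asks how often the smarter of two randomly paired workers is also the better performer.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
r = 0.4  # illustrative value near the middle of the .2 to .6 range cited above

# Standardized "IQ" and a performance score constructed to correlate with it at about r
iq = rng.standard_normal(n)
performance = r * iq + np.sqrt(1 - r**2) * rng.standard_normal(n)
print("observed correlation:", round(float(np.corrcoef(iq, performance)[0, 1]), 2))

# Draw random pairs of workers and ask how often the higher-IQ member
# of each pair also has the higher performance score.
i = rng.integers(0, n, 50_000)
j = rng.integers(0, n, 50_000)
smarter_wins = (iq[i] > iq[j]) == (performance[i] > performance[j])
print("higher-IQ worker also performs better:", round(float(smarter_wins.mean()), 2))

With a correlation around +.4 the higher-IQ worker comes out ahead only about 63 percent of the time; that is what "statistically important but with many exceptions" means in practice.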
Within any single job or profession the range of cognitive ability is restricted, and within that restricted range the measured relationship between job performance and IQ scores looks very weak. To understand this concept, consider the importance of weight in National Football League tackles. The All-Pro tackle probably is not the heaviest player in the league. But, on the other hand, the lightest tackle in the league weighs at least 250 pounds. In this situation the subjects under consideration have already been pre-selected on the basis of weight.
Thus, if we were to rate the performance of every NFL tackle and then correlate those ratings with the players' weights, the correlation would probably be near 0. However, this does not mean that a superbly talented athlete weighing 150 pounds could play tackle for the Forty-Niners; not a chance. We would be right in concluding that performance and weight do not correlate much among athletes weighing upwards of 250 pounds who want to play tackle; however, it would be wrong to conclude that weight was not much of a factor in achieving excellence at tackle if the candidate pool we were considering were drawn from the general population, where the average male weighs around 150 pounds.
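Here is a short Python sketch of this restriction-of-range effect, using made-up numbers that merely stand in for weights and performance ratings: in the full population weight predicts performance strongly, but among candidates who have already cleared the 250-pound bar it predicts almost nothing.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy model, not data from the book: weight in pounds for the general male
# population, and a tackle-performance score in which extra weight helps
# only up to about 250 pounds, after which other talents dominate.
weight = rng.normal(150, 40, n)
other_talent = rng.standard_normal(n)
performance = 0.03 * np.minimum(weight, 250) + 0.6 * other_talent

full_r = np.corrcoef(weight, performance)[0, 1]

# Restrict the pool to NFL-sized candidates, everyone already over 250 pounds.
nfl_sized = weight > 250
restricted_r = np.corrcoef(weight[nfl_sized], performance[nfl_sized])[0, 1]

print(f"correlation in the general population:    {full_r:.2f}")
print(f"correlation among 250-pound-plus players: {restricted_r:.2f}")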
Thus, President Bok of Harvard was undoubtedly correct when he asserted that there was little correlation between the SAT scores of his freshmen and their grades after the first year, or their success in later life. This is true because the students who get into Harvard already have IQs in the stratosphere. If the correlation had been computed between the SAT scores of all college applicants and their success in later life, the results would have been very different; now you know why.
Tens of millions of people are hired for jobs each year. Employers make hiring decisions by trying to guess which applicant would be the best worker. Until 1971 many employers used tests of intelligence to help make those decisions. In that year the landmark Supreme Court decision in Griggs v. Duke Power Co. changed all that. The court held that any job requirement, including a minimal score on a mental test, must have a manifest relationship to the employment in question and it was up to the employer to prove that it did. This meant that employment tests must focus on skills specifically needed to perform the job in question. This seemed to make good common sense at the time. Unfortunately, common sense turned out to be dead wrong!
The most comprehensive contemporary studies of tests used for hiring, promotion, and licensing in civilian and military settings, private and governmental, point to three essentially irrefutable conclusions concerning worker performance. The most comprehensive of these studies were performed by the military, which was exempt from the 1971 Supreme Court decision in Griggs v. Duke Power Co. All branches of the armed forces, to this day, continue to test every recruit to determine his IQ.
Irrespective of whether the job is skilled or performed by a menial laborer, the correlation between intelligence and job performance is about +.4. As one would expect, the correlation for skilled jobs, such as managerial work, is higher, +.53, while the correlation for industrial workers is only +.37. The correlation between intelligence and job performance was even +.23 for the most menial "feeding/off-bearing" jobs (putting something into or taking something out of a machine).
Possibly the most meaningful test of all is the Armed Forces Qualification Test (AFQT), which every military recruit takes. This test is highly loaded for g, the measure of general intelligence, and the resulting database has no equal in studies of job productivity. In this huge study of 472,539 military personnel the average correlation between intelligence and job performance was a whopping +.62! An analysis of these studies showed that Charles Spearman's g accounted for nearly 60 percent of the variation in grades achieved by those who attended the 828 military schools. That's why dullards who join the military end up peeling potatoes.
Does experience make up for less intelligence in time? A busboy with one month's experience on the job will outperform a smarter busboy on his first day on the job, but all relevant studies show that the initial one-month difference in experience will have ceased to matter in six months. In this respect, most studies have shown that differences in productivity due to intelligence diminish only slowly and partially with time. Thus, the cost of hiring less intelligent workers may remain as long as they stay on the job.
How good are test scores compared to other predictors of job productivity such as job interviews, reference checks, and college grades? As the table shows, a job interview is a relatively poor indicator of job performance.
Predictor              Validity in Predicting Job Performance
Cognitive test score    .53
Biographical data       .37
Reference checks        .26
Interview               .14
College grades          .11
Interest                .10
Age                    -.01
The data presented in the table suggests that employees chosen on the basis of test scores will, on average, be more productive than those chosen on the basis of any other item of information.
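One way to read those validity coefficients is as the expected gain in standardized job performance per standard deviation of the predictor; this is the standard linear selection model rather than anything specific to this chapter. A minimal Python sketch under that assumption:

# Expected standardized job performance of a hire who scores one standard
# deviation above average on each predictor, under the usual linear model
# (expected performance z = validity x predictor z).
validities = {
    "Cognitive test score": 0.53,
    "Biographical data": 0.37,
    "Reference checks": 0.26,
    "Interview": 0.14,
    "College grades": 0.11,
    "Interest": 0.10,
    "Age": -0.01,
}

predictor_z = 1.0  # applicant one standard deviation above the mean on the predictor
for name, validity in validities.items():
    print(f"{name:20s} expected performance: {validity * predictor_z:+.2f} SD")

On this reading, an applicant who shines in the interview buys you an expected .14 standard deviation of performance, while one who shines on a cognitive test buys you .53.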
What is the dollar value of cognitive ability? The short answer is a lot. A first-rate secretary with an IQ in the 84th percentile is worth a 40 percent premium over an average secretary who, by definition, is at the 50th percentile. In other words, if an average secretary is worth $20,000 a year, a secretary who scores one standard deviation above the mean on an IQ test is worth $28,000. Alternately, hiring a secretary for a $20,000-a-year job who scores one standard deviation below the mean will cost the employer about $8,000 in lost output each and every year she stays on the job.
In higher paying jobs the costs of poor hiring practices are, of course, much more significant. An HMO that must choose between two dentists, one with an average IQ and one with an IQ one standard deviation above the mean, will make a $30,000 mistake if it selects the less intelligent candidate. Statistics like these explain why the notorious 1971 Supreme Court decision outlawing IQ tests for prospective employees was so detrimental to the US economy.
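For readers who want to see the arithmetic behind those dollar figures, here is a hedged sketch. The chapter's examples are consistent with the common utility rule of thumb that one standard deviation of worker output is worth roughly 40 percent of salary; that percentage, and the $75,000 dentist salary below, are assumptions chosen only to reproduce the numbers above.

def output_value(salary: float, iq_z: float, sd_output_pct: float = 0.40) -> float:
    """Rough annual value of a worker, assuming one standard deviation of
    output is worth sd_output_pct of salary (an illustrative rule of thumb)."""
    return salary * (1 + sd_output_pct * iq_z)

# Secretaries at a $20,000 benchmark salary
print(output_value(20_000, +1.0))  # about $28,000: one SD above the mean
print(output_value(20_000, -1.0))  # about $12,000: one SD below, i.e. roughly $8,000 of lost output

# Dentists: an assumed $75,000 salary makes a one-SD difference worth about $30,000
print(output_value(75_000, +1.0) - output_value(75_000, 0.0))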
Finally, what about the cost of testing? We live in an era when a reliable intelligence test can be administered in twelve minutes; thus, the cost of testing can be lower, in terms of labor, than the cost of conducting interviews or checking references. The fact that it is now effectively illegal to use intelligence tests in hiring, of course, complicates the issue, but it does not negate the fact that the way to get the best possible work force is to hire the smartest people you can find. Because the economic value of employees is linked to their intelligence, so are their wages, which are the subject matter of the next chapter.
Comment:
For those working in Human Resources this is, by far, the most important chapter in The Bell Curve. However, while it may seem obvious that CEOs of large companies need to be very intelligent, it may be less evident to small business managers that they should strive to hire the smartest candidates available irrespective of how menial the job in question, even if they are hiring a busboy or someone to pull lumber from the green chain in a sawmill. Since it is now illegal to use IQ tests to screen job applicants, the job of a recruiter is more difficult than it was 50 years ago, before the Griggs v. Duke Power Co. decision. However, clever managers can get a pretty good idea of the relative intelligence of candidates for a given job, and it behooves them to do so. The fact that John knows why manhole covers are round but Billy doesn't tells you volumes about the relative intelligence of the two, doesn't it?
Monday, May 18, 2015
The Bell Curve Chapter 2 Cognitive Partitioning by Occupation
People in different jobs have different mean IQs. Lawyers, for example, have higher average IQs than bus drivers. Whether lawyers must have higher intelligence than bus drivers to do their jobs is the topic of the next chapter. Here the authors simply note that people with different IQs end up in different jobs.
In 1900 the CEO of a large company was likely a White Anglo-Saxon Protestant (WASP) born into affluence. He may have been bright, but that was not why he was chosen. Nothing changed until the 1950s. The three decades that followed, however, were a time of great social leveling, when executive suites were filled with people who could maximize corporate profits regardless of whether they came from the wrong side of the tracks or worshiped in a temple rather than a church. Jobs sort people by IQ just as college does, but there is a big difference between educational and job sorting because people spend only a decade or two in school but a major portion of their lives working. The relationships they develop in their jobs often determine whom they will marry and where they will live. More importantly, job status determines where their children will be raised and go to school, and whom they will marry.
Testers have noted the relationship between job status and IQ test scores since there were tests to give. Academics, however, have argued about the importance of intelligence to job performance from the beginning and the debate rages to this day.
For example, it takes a law degree to practice law and it takes intelligence to get into and through law school, but aside from that, is there any good reason that lawyers need to have higher IQs than bus drivers? At the height of egalitarianism in the 1970s the answer in academic circles, surprisingly, was "No." In other words, one had to be smart to get into medical school, but once one had a medical degree he or she did not have to be all that smart to become a good doctor. Here are a few of the most germane studies relating job status and intelligence.
If one is born smart, dumb, or somewhere in between, it shouldn't matter when a test of intelligence is administered, and indeed it doesn't! In an elegant longitudinal study relating childhood intelligence to adult outcomes, boys and girls were given IQ tests in childhood and again when they were 26 to 27 years old. The IQ scores of these children when they were 7 or 8 years old predicted their ultimate job status just as well as the tests given after they had finished their educations. This, of course, weakens the argument that IQ is correlated more with social status and educational advantage than it is with innate inherited intelligence.
Job status also typically runs in families. We all know of families with several members who are doctors and lawyers and one who is a blue-collar worker, or vice versa; but such examples stand out because they are rarities. Most close relatives occupy neighboring, if not the same, rungs on the job-status ladder. This, of course, is yet another indication that intelligence is an inherited trait similar to height, weight, athletic ability and skin color. Interestingly, this observation somehow manages to be both obvious and controversial at the same time.
Another nail in the coffin of inherited-intelligence deniers is provided by a study of children adopted in Copenhagen between 1924 and 1947. The children had an average age of three months when placed with their new families. In adulthood they were compared with their biological siblings and their adoptive siblings to see where they landed on the occupational status ladder. Not surprisingly, the full siblings had more similar job status than the half siblings, while there was no correlation in job status between the adopted children and their genetically unrelated adoptive siblings.
It is important to realize that even occupations with a high mean IQ will include individuals with modest scores. If a distribution curve with a mean of 120 is symmetrical, around 20 percent will have IQs below 100. In 1900 only 10 percent of the so-called high-IQ occupations were filled by people with IQs in the top IQ decile, most being filled by individuals with average intelligence. By 1990, 36 percent of these high-paying jobs were filled by super-smart members of the work force. As the twentieth century progressed the smart got richer and those with average or below-average intelligence were progressively more often left behind.
Income and job stratification by intelligence, of course, is even more pronounced if you compare people with IQs of 100 (the average worker) with those whose IQs are three standard deviations above the mean and who graduated from an elite college like Harvard or Yale. Chances are that the average Joe has never even met such a person.
In 1900 the vast majority of CEOs of large corporations were white male Anglo-Saxon Protestants whose fathers were business executives or professionals. By 1976 only 5.5 percent of the country's CEOs came from families of wealth and, while our business leaders were still disproportionately likely to be Protestant, they were also disproportionately likely to be Jewish, something unheard of in the early 1900s. As the twentieth century came to a close, having the intellectual capacity to be educated had become the key to the executive suite.
Comment:
We are a segregated nation; we always have been and likely always will be. However, today we are segregated not so much by race, inherited position, and wealth as by educational achievement. Unfortunately, the ability to be educated is an inherited trait determined at the time of our birth. School busing, affirmative action, and Head Start programs have not changed and will not change this fact. The forthcoming chapters of The Bell Curve explain why these fiscally wasteful liberal boondoggles were doomed to fail from their inception.
Wednesday, May 13, 2015
The Bell Curve Part 1 Chapter 1- The Emergence of a Cognitive Elite
In the nineteenth century the world was segregated into social classes defined in terms of money, power, and status. By the twentieth century the ancient lines of separation based on hereditary rank, often enforced by sword or crown, were being replaced by educational credentials and, increasingly, talent. Social stratification based on educational achievement continued through the twentieth century, and now, in the twenty-first century, intelligence, with several notable exceptions (athleticism, artistic ability), is the sole engine that pulls the train.
As Herrnstein and Murray point out, cognitive stratification produces different results for those who are smart and those who are not. In part 1 the authors deal with those who live in the upper echelons of the bell curve; in part 2 they delve into the consequences of not being born all that bright. As we learn daily from the press, isolation of the cognitive elite is already extreme and is growing more so with each passing day. Just consider New York City, where the masses live in conditions akin to slums while the very wealthy few live in $50,000,000 penthouses 50 stories above the fray.
Stratification by intellectual ability was not as significant in times past because extremely bright people far outnumbered the specialized jobs requiring high intellect. Most bright people living in the Egypt of Cheops, dynastic China, and even Teddy Roosevelt's America were engaged in ordinary pursuits while mingling, working, and living side by side with their less bright fellows. At that point in time social and economic stratification was extreme, but cognitive stratification was, for practical purposes, nonexistent. A true cognitive elite requires a highly technological society such as the one we have in twenty-first-century America.
Chapter 1 Cognitive Class and Education, 1900-1990
As late as 1952 an exclusive school like Harvard was not too difficult to get into: two out of every three applicants were admitted, and the admission rate rose to 90 percent if an applicant's father was a graduate of the school. At that time Harvard students were not all that brilliant; the mean SAT score of Harvard's freshmen in 1952 was only 583. Historically a school primarily for New England's elite, Harvard was a far different place by 1962. In one decade the number of admissions to Harvard from New England dropped by a third and the average SAT score of incoming freshmen rose over 100 points to 678. Almost overnight Harvard had been transformed from a school for the northeastern socioeconomic elite into a school for the brightest of the bright, drawn from all over the country.
The advance in educational opportunities for the unwashed masses, regardless of race, color, gender, or creed, is one of America's great success stories. But it also has a darker, more sinister side, because education and cognitive ability are powerful dividers and classifiers. Education affects occupations, and occupations divide. Most importantly, cognitive ability and education affect income, possibly the biggest divider of all.
In 1900 there was a significant social and economic gap between high school and college graduates, just as there is now, but this disparity was not accompanied by much of a cognitive gap, because most of the brightest people in the country in the first decade of the twentieth century did not go to college; in fact, 50 percent of the brightest people in the nation were housewives who had never gone to college. Things changed drastically as the twentieth century unfolded. Early in the century only 2 percent of the population achieved a college degree; by 1990 thirty percent of the 23-year-old population had a bachelor's degree or better.
At first glance one might conclude that education had become the great equalizer, giving the poorest of the poor, and especially impoverished minorities, a chance to become educated and by so doing escape poverty. Unfortunately, this turned out not to be the case because, as the century progressed, it became increasingly difficult to get into one of the better colleges if you were not exceedingly bright. In the 1920s only 15 percent of the nation's smartest high school students went on to college. This meant that 85 percent of the nation's smartest students were no better educated than their non-college brethren.
By 1960 eighty percent of the college slots were filled by high school students who were in the top quartile (25 percent) of IQ in their high school class, which meant that only 15 percent of the students in college could be considered to have average or below-average intelligence. The IQ stratification in the elite Ivy League colleges today is even more pronounced. For example, at Harvard 55 percent of the incoming freshmen have a perfect 4.0 high school grade point average. Possibly of more significance, students with SAT scores above 700 are 40 times more likely to be at Yale or Harvard than at a less well-known state college, wherever it might be.
Having established the close association of intellect and higher education in the opening pages of The Bell Curve, Herrnstein and Murray devote the remaining pages of the book's first chapter to a discussion of basic statistics. They point out that a distribution is a bell-shaped curve wherein, with respect to IQ, most people score in the middle range, the mean being set at 100, while the scores of a relative few can be found at the upper and lower ends, or "tails," of the curve.
If there were only one test of intelligence, such as the SAT, one could simply compare the SAT score of one person to that of another to determine their relative intelligence. However, there are many different tests of intelligence, and a simple comparison of raw scores from different tests tells you little about the relative intelligence of those who took them.
The authors were masters at making the difficult understandable by citing examples to make their points. In this instance they point out that it would be difficult for most people to compare the height of a horse and the length of a snake if the height of the horse were measured in hands and the length of the snake were measured in rods. If inches were used to measure both, there would be no problem. In statistics the standard deviation is akin to the inch, an all-purpose measurement that can be used for any distribution. More importantly, in studies designed to measure the various components of intelligence, standard deviations can be used to compare the results of differing assessments of IQ.
For example, how do you compare Joe's ACT score of 24 with Tom's SAT-Verbal of 720? In the case of the horse and the snake a common denominator, the inch, allowed a comparison of the variables in question. Similarly, we can use standard deviations to compare Joe's and Tom's tests of intelligence. If you were told that Joe's ACT score of 24 was .7 standard deviations above the mean (average) and Tom's SAT-Verbal of 720 was 2.7 standard deviations above the mean, you would conclude that Joe was pretty smart but Tom was brilliant. Guess who is going to get into Harvard and who is most likely to be going to Humboldt State University?
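Here is a minimal Python sketch of that conversion. The test means and standard deviations below are illustrative assumptions chosen only to reproduce the .7 and 2.7 figures; real norms vary by test form and year.

def z_score(score: float, mean: float, sd: float) -> float:
    """Standard deviations above (or below) the mean."""
    return (score - mean) / sd

# Assumed norms, for illustration only
joe_z = z_score(24, mean=21, sd=4.3)    # ACT
tom_z = z_score(720, mean=450, sd=100)  # SAT-Verbal

print(f"Joe: {joe_z:+.1f} SD, Tom: {tom_z:+.1f} SD")
# prints: Joe: +0.7 SD, Tom: +2.7 SD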
In tests of intelligence just how big is a standard deviation? If a student's score is one standard deviation above the average, he is in the 84th percentile of those who took the test, meaning he scored higher than roughly 84 percent of the people who were tested. Alternately, if a person's score is one standard deviation below the mean, he is in the 16th percentile, having scored higher than only the lowest 16 percent of those tested; 84 percent of those who took the test did better than he did. Unless this person were a superb athlete, what are the chances, do you think, of that candidate being accepted at UC Berkeley, USC or Stanford? The answer, of course, is slim to none.
Returning to the subject at hand, two standard deviations from the mean mark roughly the 98th and 2nd percentiles above and below the average score, while three standard deviations from the mean mark roughly the top and bottom thousandth of a distribution. Yes, Tom was pretty damn smart!
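For the curious, those percentile figures come straight from the cumulative distribution function of the normal curve; this is ordinary statistics rather than anything particular to the book. A small Python sketch:

import math

def percentile(z: float) -> float:
    """Percentile rank of a score z standard deviations from the mean,
    assuming a normal (bell-shaped) distribution."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

for z in (-3, -2, -1, 0, 1, 2, 3):
    print(f"{z:+d} SD -> {percentile(z):6.2f}th percentile")
# +1 SD lands near the 84th percentile, +2 SD near the 98th,
# and +3 SD near the 99.87th, roughly the top thousandth.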
One way of looking at the significance of intellectual partitioning (the segregation of the smart from the dull in the population as a whole) is to compare the overlap (the percentage of people with similar IQs) between high school graduates without a college degree and those who graduated from college. In 1930 college graduates had a mean IQ about .7 standard deviations above those who did not go to college; however, so few people were going to college at that time that it didn't matter much. As a result, there was a large overlap in the IQ distributions of those who went to college and those who didn't. Joe the plumber was likely to be just as bright as the branch manager of the local bank.
By the 1990s things had changed drastically, primarily because so many bright people were going to college. This, of course, left significantly fewer smart people among those without a degree. Yes, there still are a lot of really smart people who do not have a college degree, but there are far fewer of them today than there were in the 1930s. The median overlap (those with similar IQs) between those with only a high school education and college graduates is now only 7 percent. The overlap in IQ between high school graduates who do not go on to college and Ph.D.s, M.D.s, or LL.B.s is now only 1 percent. To make things worse, only 21 percent of those with only a college degree have IQs similar to those with Ph.D.s, M.D.s, or LL.B.s.
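As a hedged sketch of how an overlap figure like that can be computed: the group parameters below, and even the definition used (the share of the lower-scoring group that exceeds the median of the higher-scoring group), are assumptions for illustration, not the book's actual numbers or definition.

import math

def normal_cdf(x: float, mean: float, sd: float) -> float:
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# Illustrative parameters (assumed, not taken from the book)
hs_mean, hs_sd = 95, 14   # high-school-only group
grad_median = 115         # college graduates' median IQ

# Share of the high-school-only group scoring above the graduates' median
overlap = 1 - normal_cdf(grad_median, hs_mean, hs_sd)
print(f"overlap: {overlap:.0%}")

With these made-up inputs the figure comes out near the 7 percent cited above, but the point is the method, not the particular numbers.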
So, what difference does it make? The answer to that question will unfold over the course of this book, The Bell Curve. But for now realize that the social fabric of the nation is severely altered when the most talented children of the middle and working class in every generation are effectively extracted to live in other worlds, leaving only the less capable behind.
It is difficult to exaggerate how different the graduates of the 12 or so elite colleges are from the population at large. The news with respect to the association between intellect, education, and success in the high-tech world we live in today is both heartening and disheartening. Heartening, because the nation is now providing a college education to most of those who could profit from it, regardless of ethnic or economic background. Disheartening, because so many of those with limited educational potential are left behind, sealed off from the world of the cognitive elite. In chapter two Herrnstein and Murray address the effects of cognitive partitioning by occupation.
Comment: I hope the reader is beginning to see how important intellect is to those who are born into today's high-tech world, where education is everything. Unless a child is born with extreme artistic or athletic ability, he or she will have a difficult time competing unless lucky enough to be born bright. As the pages of this book unfold you will also begin to understand how important inherited intelligence is in our modern high-tech world.
Thursday, May 7, 2015
Book Review
Richard J. Herrnstein and Charles Murray published their New York Times bestseller The Bell Curve in 1994. I relied heavily on the information presented in this literary masterpiece when writing America In Decline and consider The Bell Curve to be the most important book written in the past 150 years, quite possibly the most significant scholarly work of all time.
I firmly believe that the falling intelligence of those living in the western world is primarily responsible for social and economic problem we face in today's high-tech world. The fact that we, as a society, are literally getting stupider with each passing generation provides an explanation for most, if not all, of our social and economic ills including the disparity in wealth between the haves and the have not's; the escalating crime rate in minority communities; the high rate of illegitimacy in blacks; and, most importantly, our failure to erase poverty even though we have squandered over 22 trillion dollars over the past 55 years attempting to do so.
I am convinced, beyond a shadow of a doubt, that our world would be a better place if everyone read and understood the material presented in The Bell Curve. Unfortunately, the vast majority of the population has never heard of the book and even fewer understand the importance of inherited intelligence. Worse yet, the liberal academics who run our educational institutions consider the information presented by Herrnstein and Murray to be heresy of the highest order.
My goal for the next year or so will be to write a series of reviews that introduce The Bell Curve to those who are unfamiliar with it. The Bell Curve consists of an introduction and 22 chapters, each of which will be the subject of a separate post. The purpose is to cover the material, which often is of a statistical nature, in a way that is easy to read and, most importantly, simple to understand. I will make every effort to keep my editorial comments and personal opinions to a minimum. Anyone who reads these chapter reviews will be better informed for having done so. I guarantee it! Hopefully, some of my readers will be induced to obtain a copy of The Bell Curve and have the pleasure of reading it themselves. If so, my efforts will not have been in vain.
Introduction
The authors begin their 575-page book by pointing out that the word intelligence is a universal and ancient term that has been used to describe differences in mental capacity since the beginning of recorded time.
Most observant first graders know that some of their classmates are smarter than others, and so did the ancients. In this respect, it is only in the last forty years that the concept of intelligence variation has become a pariah in the world of ideas. As the authors point out, attempts to measure differences in IQ have been dismissed as racist, as statistical bungling, or as outright scholarly fraud, and without doubt the vast majority of those reading these words believe that these accusations are true. In fact, the authors argue, intelligence variation among humans is a well-understood construct, measured with accuracy and fairness by numerous standardized mental tests. The primary purpose of The Bell Curve is to introduce these concepts to an unaware and ill-informed public.
The study of intelligence variation began in the last half of the nineteenth century, when Charles Darwin asserted that the transmission of inherited intelligence was a key to the human evolution that separated our simian ancestors from the other apes. Darwin's younger cousin Francis Galton, a celebrated geographer in his own right, presented evidence that intellectual capacity ran in families. His Hereditary Genius, published in 1869, began the controversy over the importance of inherited intelligence that remains with us today, 146 years later. Galton may have been the first to point out that humans vary in their intellectual capacity and that these differences matter, both to them personally and to society.
Galton's first attempts to quantify intellect using measurements of sensory perception, acuity of sight and hearing, and speed of reaction to simple stimuli failed; however, his successor, the French psychologist Alfred Binet, was more successful. Binet's attempts to measure intelligence by assessing a person's capacity to reason, draw analogies, and identify patterns revealed crude differences in intellectual capacity that accorded with the common understanding of high and low intelligence.
By the end of the nineteenth century mental tests that we would recognize today as IQ tests were coming into wide use throughout the civilized world. In 1904 a former British Army officer named Charles Spearman made a statistical breakthrough that has fueled the debate on inherited intelligence to this day. Using Karl Pearson's correlation coefficient, statisticians could determine how two variables, such as height and weight, were related to each other.
For example, using Pearson's r (as the coefficient was labeled) investigators could determine, for the first time, just how strongly a man's weight was related to his height. The correlation coefficient ranges from -1 to +1. In a study of the relationship of height and weight, an r of 0 would indicate, from a statistical standpoint, no relationship at all between a person's height and his weight, while an r of -1 would indicate a perfect inverse relationship. Alternately, an r of +1 would indicate that a person's weight could be predicted exactly from his height. An r of +.8 would mean the relationship was strong but not perfect; squaring it shows that about 64 percent of the variation in weight would be accounted for by height.
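A small Python sketch of Pearson's r and the variance-accounted-for idea, using made-up height and weight data rather than anything from the book:

import numpy as np

rng = np.random.default_rng(2)
n = 1_000

# Made-up data: weight depends partly on height and partly on everything else
height_in = rng.normal(69, 3, n)                        # inches
weight_lb = 4.5 * height_in - 140 + rng.normal(0, 18, n)

r = np.corrcoef(height_in, weight_lb)[0, 1]
print(f"Pearson r: {r:.2f}")
print(f"share of the variation in weight accounted for by height (r squared): {r**2:.0%}")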
Before long several tests had been developed to measure human intelligence. Charles Spearman noted that people who did well on one test tended to do well on all other tests of mental acuity, while those who did poorly on one mental or academic test tended to do poorly on all the others as well. This observation led Spearman to conclude that there was a single underlying mental factor, which he named g, for general intelligence.
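As a hedged illustration of the statistical pattern behind that conclusion (a toy simulation, not Spearman's data or his method): when several test scores all share a common factor, every pairwise correlation is positive and a single dominant factor accounts for a large share of the variance.

import numpy as np

rng = np.random.default_rng(3)
n_people, n_tests = 5_000, 6

# Toy model: each test score = a shared general factor plus test-specific noise
g = rng.standard_normal(n_people)
loadings = np.array([0.8, 0.7, 0.7, 0.6, 0.6, 0.5])  # assumed g-loadings
noise = rng.standard_normal((n_people, n_tests))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

corr = np.corrcoef(scores, rowvar=False)
upper = corr[np.triu_indices(n_tests, k=1)]
print("all pairwise correlations positive:", bool((upper > 0).all()))

# The largest eigenvalue of the correlation matrix plays the role of a
# general factor; here it dominates the remaining five.
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("share of total variance on the first factor:", round(float(eigenvalues[0] / n_tests), 2))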
Spearman's g differed subtly from the commonly held concept of intelligence at the time, which was centered on the ability to learn and to apply what is learned to subsequent tasks and endeavors. In Spearman's view, g was a measure of a person's capacity for complex mental work.
By 1908 the concept of mental level, or mental age, had been developed. This was followed a few years later by the more sophisticated concept of the intelligence quotient, or IQ. By 1917 the U.S. Army was using IQ tests to classify and assign recruits for World War I, a practice which persists to this day. During this period a new specialty devoted to the study of mental capacity, called psychometrics, developed within the field of psychology.
The study of mental capacity had far-reaching results, some good and some not so good. For example, by 1917, 16 states had passed mandatory sterilization laws designed to decrease the numbers of the dim-witted in their populations. Oliver Wendell Holmes, in upholding the constitutionality of one such law, stated that "Three generations of imbeciles are enough." In the 1920s immigration policies were introduced to limit the influx of immigrants from southern and eastern Europe because people from these areas were thought to be less intelligent than those of Nordic stock. A review of these policies reveals that they were politically based and not driven by findings from tests of mental capacity.
The rise of social democratic movements, which began after World War I, ultimately changed everything. By the 1960s a fundamental shift was taking place regarding the concept of equality. This was most evident in the political arena, where the civil rights movement and the war on poverty, based on presumptions of unfairness and inequality of opportunity, dominated the American political scene.
Whereas in the 1930s psychometricians had debated the role environmental factors, like poverty and family structure, might play in the development of a person's IQ, by the 1960s it had become extremely controversial for psychologists to claim that genes played any role at all in the development of intelligence. The behaviorists, as they were called, drawing on learning experiments performed on rats and pigeons, convinced large segments of the population that human intellectual potential was almost perfectly malleable, shaped almost entirely by environmental influences including, most importantly, good parenting, educational opportunity, and other factors that lay outside the individual. In other words, if a person was dumb it was a result of his environmental handicaps, not his inherited genetic makeup. These environmental limitations were blamed on capitalism and, most often, an uncaring, incompetent government.
Of more importance, the behaviorists convinced the academics responsible for educating our nation's young people that these deficiencies in opportunity could be righted by changes in public policy: redistribution of wealth, better education, better housing, and better medical care.
These claims, of course, collided head-on with a half century of accumulated IQ data indicating that differences in intelligence are intractable; that IQ is largely inherited; and that the average IQ of various socioeconomic groups and ethnic groups differ significantly.
This debate came to a boiling point in 1969, when Arthur Jensen, an educational psychologist at UC Berkeley, was asked to explain why the compensatory and remedial programs of the War on Poverty had been such dismal failures. Jensen's reply shocked the liberal establishment when he concluded that "Such programs were bound to have little success because they were aimed at populations of youngsters, predominately black, with relatively low IQs, and success in school depended in significant degree on IQ." Jensen went on to add that IQ had a large heritable component.
As expected, the reaction to Jensen's article in the Harvard Educational Review was immediate and violent. "It perhaps is impossible to exaggerate the importance of the Jensen disgrace," wrote the behaviorist Jerry Hirsch. In the years after the article was published, Jensen could not appear in any public forum without triggering something perilously close to a riot. So much for freedom of academic expression.
To continue this sordid tale, in 1971 the U.S. Supreme Court outlawed the use of standardized ability tests because they acted as "built-in headwinds" for minority groups. A year later the National Education Association called for a moratorium on all standardized intelligence testing, hypothesizing that "A third or more of American citizens are intellectually folded before they have a chance to get through elementary school because of linguistically or culturally biased standardized tests."
In 1976 the British journalist Oliver Gillie published an article in the London Sunday Times charging Britain's most eminent psychometrician, Cyril Burt, with fraud, stating that he had made up data, fudged his results, and invented coauthors in his studies of identical twins who had been raised apart. Burt, by then dead, had claimed that the correlation coefficient, r, for the IQs of identical twins raised apart was +.77, which irrefutably supported a large genetic influence on IQ. Burt's reputation was not restored until 1990, when the Minnesota twin study produced almost identical results, showing that the correlation of IQ in twins raised apart was +.78. Nonetheless, Stephen Jay Gould's book The Mismeasure of Man, in which he accused Burt of being a fraud, became a best seller and won the National Book Critics Circle Award.
When all was said and done, Gould and his allies won the battle and the new perceived wisdom relating to intelligence became what it is today.
Intelligence is a bankrupt concept. Whatever it might mean- and nobody really knows how to define it- intelligence is so ephemeral that no one really knows how to measure it accurately. IQ tests are, of course, culturally biased, and so are all other "aptitude" tests, such as the SAT. To the extent that tests such as IQ and SAT measure anything, it certainly is not innate "intelligence." IQ scores are not constant; they often change significantly over a person's life span. The scores of entire populations can be expected to change over time- look at the Jews, who early in the twentieth century scored well below average on IQ tests and now score well above average. Furthermore, the tests are nearly useless as tools, as confirmed by the well-documented fact that such tests do not predict anything except success in school. Earnings, occupation, productivity- all the important measures of success- are unrelated to the test scores. All IQ tests really accomplish is to label youngsters, stigmatizing the ones who do not do well and creating a self-fulfilling prophecy that insures the socioeconomically disadvantaged in general and blacks in particular.
Comment:
This, in a nutshell, summarizes what is being taught in our institutions of higher learning today with respect to the importance of inherited intelligence. The Bell Curve was written to dispel the myths related to the subject of IQ, especially the commonly held belief that intelligence is not an inherited genetic trait similar to height, weight, and skin color. As we proceed through the chapters of this important book it will become increasingly clear that every single concept voiced in the paragraph above is false, every last one of them! More importantly, we will see the relationship between IQ and the socioeconomic ills that are destroying our once prosperous nation.
Questions:
1. Do you believe that it is possible for there to be a white Michael Jordan?
2. Do you believe that there ever will be a black Albert Einstein?
If not , why not?