Winnett Hagens – Southern Changes: The Journal of the Southern Regional Council, 1978-2003


The Origins of Inequality

Reviewed by Winnett Hagens

Vol. 20, No. 3, 1998 pp. 32-33

Claude S. Fischer, Michael Hout, Martin Sanchez Jankowski, Samuel R. Lucas, Ann Swidler, and Kim Voss (Department of Sociology, University of California, Berkeley), Inequality by Design: Cracking the Bell Curve Myth, Princeton, New Jersey: Princeton University Press, 1996

In truth, this is a book far more about the origins of inequality than a rebuttal of the tired neo-Darwinist arguments advanced by Richard Herrnstein and Charles Murray in The Bell Curve (1994). The authors of Inequality by Design return the continuing debate on inequality and its correlate, racism, back to basics in a way that nearly everyone can appreciate.

What are the basics of inequality and racism? Human appetite is insatiable. No society has sufficient resources to satisfy the desires of all its members. Inevitably, some get more than others. At its essence, racism is a nearly universal mechanism for deciding who gets what in a world where the most desirable goods are scarce. Ruling groups enjoying affluence or even opulence invariably fabricate and perpetuate myths disparaging and debasing the ethnic or racial groups they subjugate and exploit. Such racial myths play a crucial role in perpetuating inequalities by rationalizing and justifying oppressive mistreatment of enslaved or indentured populations upon whose labor the social hierarchy ultimately rests.

According to the authors (Department of Sociology, University of California, Berkeley) of Inequality by Design, Herrnstein and Murray’s The Bell Curve is merely a contemporary version of a philosophy that is centuries old: “[ordained by our creator] inequality is fated; and people deserve, by virtue of their native talents, the positions they have in society…individuals’ intelligence largely decides their life outcomes.” From this perspective, poverty and wealth are the artifacts of a kind of cosmic game of chance that we call genetics. The fault, if there is any, is in nature. If you lose in the cosmic throw of the dice, blame nature.

Readers interested in a classic exposé of a failure in social science research will find Inequality by Design good reading. Is there really such a thing as "intelligence"? Is intelligence one, a few, or many things? If you want to get into the details of psychometrics, or want a powerful summary of arguments questioning current understandings (or are they myths?) of intelligence, this well-written book will hold your attention. You will learn, for instance, that subordinate racial and ethnic groups around the world, today and in earlier decades, invariably do worse in schools and on school tests than do dominant groups, regardless of the genetic differences or similarities between them. The deprivations of oppression and exploitation, the hunger, disease, ignorance, pervasive alienation, and self-hatred that typically accompany inferior status, often guarantee collective failure in schools. The history of blacks and Latinos in the United States fits the pattern. As the authors put it: "it is not that low intelligence leads to inferior status; it is inferior status that leads to low intelligence test scores. …A racial or ethnic group's position in society determines its measured intelligence rather than vice versa."

Yet the real power of this book is not its devastating rebuttal of the phony "prosperity of the fittest" argument, but its more central and compelling counter-theme: inequality is a property of how societies are structured, not of how individuals' talents are distributed. The United States has the greatest degree of economic inequality of any developed country, and this inequality between classes has, moreover, been growing for the last quarter century. In the United States, the ratio of the average CEO's pay to the average industrial worker's pay is 120:1. In Britain the ratio is 25:1, nearly five times lower. In Germany it is 21:1, and in Japan, 16:1.

The kinds of extreme inequalities we see reappearing in America today, according to the authors, are neither natural nor inevitable. We help fewer of our citizens than do other industrialized countries, and when we do help, we do so less generously. For example, we provide medical insurance for some residents; most nations provide medical care for all. According to the authors, America spends less on children than any other developed country in the world. We give families with children tax breaks; most nations give family allowances. While it is free enterprise for the poor in America, tax breaks, subsidies, and cozy regulatory relationships all spell socialism for the rich, especially the super-rich. There should be little wonder that the gap between rich and poor, or even between the rich and middle-class Americans, continues to widen.

Many Americans, perhaps a majority, believe that inequality is needed to motivate people to work hard and propel economic growth. But if this premise of American inequality is true, how much inequality is needed? Some of the research the authors of Inequality by Design present suggests that extreme inequalities in income may actually impede economic growth rather than stimulate it. One study touched on in the book showed that key redistributive policies (homeowner subsidies, health plans, child care, etc.) do not inhibit the functioning of the economies that institute them. Another study mentioned in the book even showed that product quality improved as the wage gap between managers and workers narrowed.

This kind of research is ground zero for a coming debate that, unless our current prosperity is perpetual, will occur when good times turn bad. Now, as the authors of Inequality by Design insist, is the time for Americans to accept responsibility for the inequality our choices have created instead of blaming nature. This book is an excellent place to enter a critical dialogue that will, one day, reshape our nation.

Winnett Hagens is director of Fair Representation Programs at the Southern Regional Council


Census 2000: One Census, Two Counts–The Politics Behind the Confusion

By Winnett Hagens

Vol. 21, No. 4, 1999 pp. 3-5

Arguing the case for a statistical adjustment of the 1990 Census before the Supreme Court in the fall of 1995, U.S. Solicitor General Drew Days began his presentation with the observation that "the true total population of the United States is unknown and unknowable." Although the 1990 Census described itself as "better designed and executed than any previous Census," it was also the first Census ever to be less accurate than its predecessor. That year, the Census Bureau determined it had missed about 8.4 million people and double-counted 4.4 million others. As for 2000, according to Bureau Director Dr. Kenneth Prewitt, "the apportionment counts are not likely to be an improvement on the 1990 accuracy levels." Indeed, the undercount will be even higher.

Causes of Growing Census Undercounts

The Census is administered through mailed questionnaires. Every decade for the last thirty years, more and more people seem unable or unwilling to cooperate with the process. In 1970, the non-response rate was 15 percent; in 1980 it was 25 percent; in 1990, 37 percent; and, in 2000, estimates suggest that the rate will be between 39 and 40 percent.

Many powerful reasons exist for this weakening of public cooperation. The realities that make some people "difficult-to-count" are growing. People in the U.S. are busy. In many families, both spouses now work, making it more difficult than ever to find anyone at home. Transient lifestyles are on the rise. More people are living in culturally and linguistically distinct communities. The population of groups that often avoid government officials is growing. Massive increases in the volume of direct-mail solicitations over the last two decades have inured Americans to mail-based appeals. Civic engagement generally (voting, partisan involvement, jury duty) is in decline, while public cynicism about government gathers momentum. These converging realities have frustrated and confounded the Census Bureau's extraordinary efforts to achieve increasingly accurate censuses.

The Differential Undercount

The Census undercount disproportionately injures non-affluent minorities. While 1.6 percent of the total population was missed in 1990, the undercount for blacks was about 4.4 percent; for Latinos, nearly 5 percent; and for Native Americans, about 12.2 percent. Only 0.7 percent of non-Hispanic whites were missed. Children and renters are also over-represented in the undercount.

Historically, the non-affluent immigrant and minority groups most vulnerable to the undercount have often been supporters of the Democratic Party. This reality fuels the intense political and legal battles over enumeration. Census methods that include "difficult-to-count" people will inevitably apportion to them, and to any party winning their loyalty, increased political power. Democratic politicians, arguing that science will produce a more accurate and cost-effective census (and with it fair representation and equitable federal funding for minorities), passionately endorse sampling techniques and adjusted census numbers that minimize undercounting. Republicans bitterly oppose these remedies, insisting that sampling is an unlawful scheme to inflate Democratic political strength with phony "computer-generated people." Since $185 billion in federal funds will be apportioned in the next decade on the basis of the Census's findings, differing enumeration methods will have huge effects on the amount of federal dollars some states receive. In the end, this is a battle for partisan control of governing bodies from Congress to local school boards.

Congressional Malapportionment by Court Order

Determined to avoid a repeat of the methodological controversy that doomed the timely release of adjusted census figures for the 1990 Census, the Bureau commissioned a series of studies by the National Academy of Sciences on ways to achieve the most accurate census possible in 2000. Academy experts concluded that it was "fruitless to continue trying to count every last person with traditional Census methods of physical enumeration." Rather, they agreed, the key to more accurate numbers and greater cost effectiveness would be a post-census survey that carefully sampled undercounted and over-counted populations in order to produce a precise description of census coverage inaccuracies. Once the inaccuracies and their distribution were known, adjustment methods could be applied, producing more accurate numbers. Sampling also generates cost savings by reducing face-to-face follow-ups at households that fail to return Census questionnaires.

Thanks to these experts, until January 1999 we had a more accurate, scientifically engineered, and cost-effective census within reach. But, as Bureau Director Prewitt explains, the NAS design, with its commitment to accuracy through statistical correction, "quickly became mired in political disputes, was litigated by a coalition of conservatives and Republican plaintiffs, and was set aside by the Supreme Court." By a single vote, the conservative faction of the Court prevailed (in Department of Commerce et al. v. United States House of Representatives et al.), mandating the use of uncorrected, inaccurate population counts for the purpose of reallocating congressional seats among the states in 2001. Congressional apportionment will be deliberately calculated with census data unadjusted for accuracy by scientifically valid sampling techniques. Without the adjustment, the "difficult-to-count," non-affluent immigrant and minority groups missed by the head count will be excluded from the Census figures. This is no small matter, as it directly manufactures an avoidable under-representation of these groups in Congress. Past Census estimates show that they are among the fastest growing segments of our population. Unless Congress steps in to change the law, this deliberate under-representation will last at least until the next Census in 2010. It is congressional malapportionment by court order. As long as this decision stands, the opportunity for fair congressional reapportionment is lost.

The decision, especially the way it was carefully limited to congressional reapportionment, serves perfectly the conservative Republican interests which inspired and backed the lawsuit. Because federal law (the Census Act) unequivocally requires that sampling be used for purposes other than apportionment if "feasible," the Court reached the opposite conclusion regarding the use of sampling for intrastate redistricting and the distribution of federal funds. With all nine Justices concurring in one way or another, the Court expressly and emphatically held that for purposes other than the reapportionment of Congress, the use of statistical sampling methods was not only lawful but required. As Sandra Day O'Connor put it, the Census Act of 1976 changed the law from one that "permitted the use of sampling for purposes other than apportionment into one that required that sampling be used for such purposes if 'feasible.'" Political redistricting (drawing the boundaries of representational districts for governing bodies within states) and the apportionment of federal spending can thus be based on scientific techniques that minimize the differential undercount. Because of this decision, there will be two Census counts published in 2001.

From the point of view of the conservative Republican coalition which engineered this outcome, this is an ingenious strategy. The use of an unadjusted "head count" will over-represent residentially stable, conservative, suburban voters in congressional reapportionment. Yet Republican governors and legislatures, especially in states like Texas and Florida, can demand the use of adjusted numbers which minimize the undercount for the purpose of allocating federal funds. The question of which set of numbers to use for intrastate redistricting is being left to each state to resolve. Legislatures in four Republican-controlled states (Alaska, Arizona, Colorado, and Kansas) have already passed laws prohibiting the use of sampled population data in their redistricting processes. Other legislatures are likely to follow suit (see sidebar on page 4).

Both Arizona and Alaska are subject to the preclearance requirements of Section 5 of the Voting Rights Act of 1965. Before either state can legally implement its prohibition of sampling-based census numbers, it must prove to the Department of Justice (DOJ) that the new law is not intended to and will not have the effect of diluting minority voting strength. Both states have made their submissions to the DOJ, but at printing time (January 2000) we are still awaiting word from the Civil Rights Division. Ed Still, Project Director of the Voting Rights Project at the Lawyers' Committee for Civil Rights Under Law, anticipates that the DOJ will interpose objections. "The only outstanding question is how severe the DOJ objections will be," says Still.

One Census, Two Counts: More Work for Lawyers

Following the Court's decision, in February 1999 the Census Bureau unveiled a revised Census 2000 plan that complied with the Court's order. In place of modern scientific methods, the Bureau proposed a huge follow-up outreach to non-responding households. This face-to-face effort will add $1.7 billion to the cost of the Census. The results will be used for congressional apportionment.

Following the enumeration, the Census Bureau will also conduct a quality-control survey of 300,000 households, called the Accuracy and Coverage Evaluation (ACE). The ACE results will be compared to the findings of the initial head count in order to build an adjustment formula accounting for people missed or double-counted. These corrected numbers will be used for non-apportionment purposes like redistricting and the distribution of federal funds.
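The article does not spell out the adjustment formula itself. Post-enumeration surveys of this kind conventionally rely on dual-system ("capture-recapture") estimation, in which the overlap between the census list and the independent survey list is used to infer how many people both efforts missed. The sketch below illustrates that idea only; the function name and all the figures are hypothetical, not the Bureau's actual procedure or data.

```python
def dual_system_estimate(census_count: int, survey_count: int, matched: int) -> float:
    """Estimate the true population of an area from two independent counts.

    census_count: people the census found in the area
    survey_count: people the post-enumeration survey found there
    matched:      people appearing on BOTH lists

    The survey's match rate (matched / survey_count) estimates the
    probability that the census caught any given person, so the true
    total is roughly census_count divided by that probability.
    """
    capture_rate = matched / survey_count
    return census_count / capture_rate

# Hypothetical block: the census finds 900 people, the survey finds 500,
# and 450 of the survey's 500 also appear on the census list.
estimate = dual_system_estimate(900, 500, 450)
print(round(estimate))         # estimated true population: 1000
undercount = 1 - 900 / estimate
print(f"{undercount:.1%}")     # estimated undercount: 10.0%
```

Summed over many such areas, the adjusted totals necessarily diverge from the raw head count, which is precisely the "one census, two counts" situation the article describes.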

There will be, then, two separate counts disseminated by the Bureau following the 2000 Census. For the first time in history the Census will yield contradictory sets of numbers describing the U.S. population. Neither will be highly accurate. Both will ignite heated controversies and provide opportunities for political mischief at all levels of government-from local school boards to congressional districts-over which set of numbers, adjusted or unadjusted, must be used for intrastate redistricting. A new field of litigation-census law-will come into its own.

Winnett Hagens is director of Fair Representation Programs at the Southern Regional Council.
