Los Angeles Times, Wednesday, June 6, 2001
The spin cycle began the day in May when the U.S. Census
Bureau released new data about American households: There were
25% more single mothers than a decade ago, 62% more single fathers
and 72% more unmarried partners.
High-profile conservative Bill Bennett appeared on TV and decried the continued decline of the traditional nuclear family. He cited other research he said proved the superiority of intact, two-parent families for children. Dan Quayle prepared a column saying he was still right.
At the same time, Nancy Folbre, a University of Massachusetts researcher, received a call from the Institute for Public Accuracy, a small liberal nonprofit group, asking her to help craft a press release with a different interpretation. Folbre, a self-described feminist progressive economist, feared the conservatives' agenda would hurt children by replacing poverty programs with pro-marriage policies. She jumped at the chance to "get in a lick."
That day, the IPA e-mailed a press release to the nation's journalists saying many mothers and fathers stay involved with their children even if they aren't counted among traditional married family households. It quoted Folbre: "We shouldn't stigmatize single-parent families just because they don't conform to some traditional stereotype, or because they violate the personal, ethical or religious standards of influential members of our society."
Her subsequent appearance on four talk shows didn't match the reach of Bennett or Quayle. But Folbre was satisfied. At least she had lobbed a hand grenade into the nation's political and ideological battle over the meaning of social statistics.
"From my point of view," she said, "it's like guerrilla warfare."
Anyone who harbors an idea that researchers produce Olympian truths about social issues is naive, the researchers themselves say. "All studies are possibly wrong, whether they're surveys, clinical trials, or whatever," said Tom Smith, director of the General Social Survey at the University of Chicago's National Opinion Research Center.
Not only are survey results such as the census open to interpretation by rival advocates who hope to influence laws, judges or public opinion; research itself is also vulnerable to manipulation by individuals hoping to win grants or promote their own careers. Researchers say they have been under increasing pressure in recent years to produce results that are either politically correct or can tell an exciting story in a few sound bites.
Though honest researchers stop short of cooking results to prove a particular theory, they nevertheless shape their studies through countless choices, from what questions they ask, to how they define terms such as "child abuse" or "sexual assault," to how they present their results.
The tide of bad statistics is rising, said David Murray, director of the Washington, D.C.-based Statistical Assessment Service. "It's everywhere now. In politics. In polling. The interpretation of public reaction. Physical sciences. It's the way you win arguments. The way you make points."
Because our culture grants so much authority to statistics in social debates, the bad ones have great potential to mislead, he said. People use statistics in the way priests use mysterious language. "The language of science invokes higher authority. We speak Latin. Egyptian. You believe us."
What's more, only a handful of scientists are fully trained in complex statistical methodology, he said. "It's like flying when it's really foggy and you tend to rely on instruments rather than looking out the window. It's great, unless the radar is on the fritz. Social scientists are captivated by the radar screen. They don't look out the window anymore."
Incompetence, Confusion Can Lead One Astray
Most bad statistics are less the result of deliberate deception than incompetence, confusion or self-serving selection of numbers that reaffirm one's beliefs, said Joel Best, professor of sociology and criminal justice at the University of Delaware and author of "Damned Lies and Statistics" (University of California Press, 2001).
Best cites the example of a mathematically illiterate PhD student who mangled a figure found in a journal and wrote that "Every year since 1950, the number of American children gunned down has doubled," a figure that would have surpassed 35 trillion if the student had calculated the implication of the wording.
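Best's arithmetic is easy to verify. Assuming a single such death in 1950 (the smallest possible starting value, chosen here purely for illustration), annual doubling compounds to tens of trillions by the mid-1990s:

```python
# Illustrative check of Best's point: annual doubling compounds absurdly fast.
# Assumption: start from a single victim in 1950, the smallest possible base.
count = 1
for year in range(1951, 1996):  # one doubling per year through 1995
    count *= 2

print(f"{count:,}")  # 2**45 = 35,184,372,088,832 -- just over 35 trillion
```

Forty-five doublings of even the smallest base exceed 35 trillion, far more children than have ever lived, which is why the garbled claim should have collapsed on inspection.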
"A definition is everything," he said. "Some studies have found 90% of workers are bullied in the workplace. Others show 90% of students are bullied in schools. If you define bully broadly enough as people being mean to you, it's amazing 10% of the population has not been bullied."
A classic problem leading to distorted numbers arises when scientists report a cause-and-effect relationship between two factors but fail to account for other factors that could produce the same result. For instance, some people concluded from large-scale studies associating church attendance with low levels of mortality that doctors should prescribe religious activities.
However, critics said the relationship could just as easily have resulted from the fact that people who were already seriously ill would be more likely to stay home.
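The critics' point can be demonstrated with a small simulation. In the sketch below, all numbers are invented for illustration: serious illness both keeps people home from church and raises mortality, while attendance itself has no causal effect at all. Attenders still show markedly lower mortality, purely because the confounder sorts the sick out of the pews:

```python
import random

random.seed(0)

# Sketch of the confounding critique: illness lowers church attendance and
# raises mortality. Attendance has NO causal effect in this model, yet
# attenders end up with lower mortality. All probabilities are invented.
population = []
for _ in range(100_000):
    seriously_ill = random.random() < 0.10
    attends_church = random.random() < (0.10 if seriously_ill else 0.50)
    died = random.random() < (0.30 if seriously_ill else 0.02)
    population.append((attends_church, died))

def mortality(group):
    # Fraction of the group that died.
    return sum(died for _, died in group) / len(group)

attenders = [p for p in population if p[0]]
others = [p for p in population if not p[0]]
print(f"mortality among attenders:     {mortality(attenders):.3f}")
print(f"mortality among non-attenders: {mortality(others):.3f}")
```

Run as written, the simulated attenders die at roughly a third the rate of non-attenders even though churchgoing does nothing in the model, which is exactly the trap the critics warned about.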
Though not all readers of "startling new information" from a just-released study might have the savvy to question the study's basic assumptions, fellow researchers in a field are well positioned to do so. Recently, however, scientists--under increasing pressure to produce newsworthy results--have started marketing their results to the general public, bypassing the traditional safeguards provided by peer-reviewed journals. "That's how you make your own work more visible. That's how you get that next grant," Best said.
Media, tempted by large, surprising or reasonable-sounding figures, sometimes go along repeating numbers without investigating--or mentioning--how they were collected, or whose agenda they might promote. When researchers come out with competing interpretations--such as colleagues involved in the now famous National Institute of Child Health and Human Development child-care study--journalists' leanings, Smith said, can lead them to find one more credible than another because it reinforces their beliefs or makes a more striking story.
Day-Care Study Was Reported as Bad News
Confusion may be one of the most enduring responses to the day-care study, which was released to the press last month during a conference of the Society for Research in Child Development. At first the study, the latest update in an ongoing look at the effects of day care on children, was reported as bad news. Presenting the results was researcher Jay Belsky, a day-care critic, who stressed the aggressive behavior found in 17% of kids who spent more than 30 hours a week in day care, compared with 6% who spent fewer than 10 hours a week away from their mothers. He suggested policies to extend parental leave and part-time work.
But Belsky's fellow researchers on this study rushed to point out the good news: The aggressive behavior the study reported was within the normal range for most children, and the study also showed language and memory benefits for the kids in high-quality care. They noted that 83% of the children are doing fine and that the study measures correlations, not causes. To prove cause and effect, social scientists would have to be able to randomly assign some parents and not others to use day care, for instance. And because they can't, their studies can only show links between two phenomena, not that one causes the other.
The dust-up was variously blamed on public relations officials overeager for publicity; a hasty press conference by telephone; bad blood between Belsky and his working-mother colleagues; a desire to please the new, more conservative administration in the White House; and uncritical media interested only in quick sound bites.
Reached by telephone in London, where he is a psychology professor at the University of London's Birkbeck College, Belsky said, "The truth is, most kids in whatever care are functioning in the normal range. We never stop to point that out."
Blaming colleagues who he said see modest findings as useless, he said, "All they want to say is good quality care is good for kids. I think that's true, but it's not the only thing that's true. When they show it, they don't qualify how modest the effect is, they don't point out how most kids in bad quality care function in the normal range, and they don't hesitate one iota in drawing policy implications and causal implications from correlational data."
The study's lead statistician, Margaret Burchinal, said she's never hidden her belief that children deserve better care. Part of scientific training, she said, is showing what you believe to be true in an unbiased way because it makes your case stronger.
Researchers said they had been advised to discuss only peer-reviewed reports in the future.
As a result of the fuss, Belsky said, "I've come to believe that too much of social science research, especially as it gets disseminated, is ideology masquerading as science."
Reformers Historically Have Inflated Numbers
Some say he shouldn't be so surprised.
Historically, reformers have inflated numbers to help solve social problems, while public officials have tended to underestimate them, Best said. In 1833, New Yorkers concerned about prostitution, and pushing for authorities to take action, published a report stating there were 10,000 prostitutes in the city--a number equaling 10% of the female population. Other estimates ranged up to 50,000. In 1872, police counted 1,223 prostitutes.
Values, morality and religion fuel most debates about social issues, but people are more comfortable discussing numbers, researchers said. "Because of the supposed separation of church and state, we have a system that subscribes to an ideology of objectivity," said Judith Stacey, a USC sociology professor. "We're not supposed to argue from religion [or] from our personal values and impose it on others."
Not so with "scientific research."
"Part of what's going on is we as a society have an appreciation for science, and there's the sense we ought to get the facts about one thing or another," Best said. Our society often mistakes statistics for facts, attributing magical powers to them, like fetishes, he said. Activists have learned to package their claims as facts by including numbers.
In many cases, he said, the number is "suspiciously round. Two million missing children. Three million homeless. Sometimes it's very difficult to figure out: How would you ever count these things? Of course the answer is, nobody is, and somebody is making a best guess. Usually if they think it's a big problem, they want a big number, and if they want to minimize it, they want a small number."
One commonly heard statistic is that gay teenagers are three times more likely to commit suicide than straight teenagers. However, Best said, coroners are not always sure which deaths are suicides and do not record sexual orientation on death certificates. "Nobody is keeping track of suicide statistics by groups who cannot be precisely identified themselves. How can anybody know? Maybe the rate is three times higher, maybe 17 times. Maybe half. There's no way to know that."
Over the past decade, as state and federal governments have begun enacting laws affecting family life, statistics about childbearing, child raising, marriage and divorce have become particularly contentious. "These days, almost any family research is rapidly politicized. It's scary to publish something," Stacey said.
For many years, for example, most research on gay and lesbian parenting was conducted by sympathetic researchers well aware that judges had been deciding against gay parents in child custody cases, Stacey said. The researchers "thought differences were deficits or that judges would seize on any differences, especially the possibility that boys will be sissies and girls will be tomboys. It's that popular prejudice that people were combating or avoiding," she said.
Perhaps unsurprisingly, their studies found no difference between children raised by straight parents and those raised by lesbian or gay parents. "I was suspicious of the no-difference claim," she said. "I found it very hard to believe that there wouldn't be a higher percentage who were less conforming gender-wise. I think that's a good thing."
The report she coauthored, published in April in the American Sociological Review, found that children raised by gay parents were more open to same-sex relationships, that boys were more sexually restrained and girls less so. The emotional health of the children was the same regardless of their parents' sexual orientation.
Still, a colleague, Steven Nock, professor of sociology at the University of Virginia, believes the samples in the studies she researched are too small to justify policy changes. Nock, who has been asked to testify in a Canadian case to determine marriage rights of gay and lesbian individuals, said, "We really haven't done the kind of research that would need to establish the presence or absence of any difference yet."
Research Used to Bolster Political Agendas
Many social scientists have been pushed out of academia into politics because of the implications of what Stacey calls "virtual social science"--spinning statistics to create "truth" in the mass media. In 1997, Stacey and several colleagues, worried about how judges and politicians were misusing research, helped found the Council on Contemporary Families, a group formed to educate the public on research about family diversity.
"You get into the political arena and there's no review," she said. "They just choose the research they want to have on the record. That's not research. That's politics."
In particular, what she called "fatherlessness dogma" appears in the preamble to the 1996 Personal Responsibility Act that reformed welfare: "The absence of a father in the life of a child has a negative effect on school performance and peer adjustment." Though there are studies to that effect, Stacey said, "you can find a study that says the opposite."
Currently, hearings to reauthorize the welfare reform bill are attracting pro-marriage "experts" she said. "That's not research," she said. "That's testimony."
Government funding has had a particular influence on researchers who depend on grants from Congress, researchers said.
Some government-funded researchers may hide or shade findings as "a truth that's just not useful at this point," said the Statistical Assessment Service's Murray. "You have to sniff the wind and find out which way this government is leaning and say the right things to keep the funds coming," he said.
In a particularly ironic twist, Scott O. Lilienfeld, an Emory University associate professor of psychology, said political pressure apparently affected a proposed journal article of his about political influence on child abuse research. His article was first accepted, then not accepted, then reaccepted by the American Psychological Assn.'s journal, American Psychologist.
Lilienfeld's article criticized the association for bowing to members of Congress--including Texas Republican Tom DeLay--who complained that a 1999 Psychological Bulletin article by Temple University's Bruce Rind and others "normalized pedophilia." Rind's research analyzed existing studies of childhood sexual abuse and concluded that not all instances of sex between adults and children caused psychological harm to the children. In the wake of a media "feeding frenzy," the association apologized for the article and invited DeLay to contribute an article on child abuse.
Though Lilienfeld did not take a stand on Rind's findings, he said they did not condone pedophilia. The apology, he said, "raised the issues about potential suppression of unpopular or politically incendiary writings."
After reports in the journal Science and in the Chronicle of Higher Education that Lilienfeld's article had been accepted in February and rejected in May, he received word Friday it would be published after all, in a special edition along with other commentaries.
Researchers Call for a Realistic Attitude
Some kind of "line in the sand" must be drawn between politics and social science research, Lilienfeld said. He and others argue that if researchers or scientific journals want to be political, it would be better to acknowledge their political stance rather than pretend they don't have one.
"If you acknowledge and reveal it, you subject yourself to a much higher level of scrutiny," Stacey said.
Because all numbers are produced by somebody and are inherently flawed, the relationship between politics and research tends to support the postmodern philosophy that nothing can be known objectively anyway. The result, some say, is a loss of ideals.
But though the public may be tempted to throw up its hands, researchers call less for cynicism than for a more realistic and critical attitude about social science statistics. Though bad numbers are not helpful, to have no numbers, they said, is to have no guide at all to evaluating the scope or growth of society's problems.
The public needs to be particularly on guard when presented with statistics on politically charged topics, scrutinizing definitions of terms such as "intact family," or "sexual assault," researchers said.
Consumers need to consider the source of a statistic and what it measures. No one should be particularly surprised, for instance, that gun control advocates find 75% of the population favors gun control, while surveys by the National Rifle Assn. find 75% support protecting the Second Amendment, Best said.
No single study will ever provide the last word on any social issue, researchers said. Said Lilienfeld: "Social science is a self-correcting process. The findings published may be right or wrong. That's the way science progresses."