A common fallacy that turns up again and again these days is argumentum ad numerum, Latin for an argument or appeal to numbers. You will hear someone defending an argument say: “According to public opinion polls, 70% of the American people agree with my position.”

All that proves, however, is that a position is popular, not that it is backed by evidence or well-reasoned conclusions.

But there’s a natural tendency to think there’s “safety in numbers” and that if most people believe something, it’s more likely to be true than not.

In fact, the “American people” are wrong in their opinions all the time. Or they change them. The wars in Afghanistan and Iraq were initially quite popular. Now a majority of Americans think they were a bad idea. It used to be that a clear majority opposed gay marriage; now, it’s closer to a 50/50 split in opinion on the matter.

And consider the following public opinion poll results:

  • 68% of Americans polled believed (falsely) that most individuals pay more in payroll taxes and Medicare premiums over their lifetime than they end up receiving in Social Security and Medicare benefits—they don’t! (Comeback America Initiative, December 2011)
  • 67% of Americans polled weren’t aware that the United States has a capitalist economic system (Newsweek, March 2012)
  • 55% of Americans believe there are Men in Black-style government agents who threaten people who spot UFOs (National Geographic, June 2012)
  • Some 46% of Americans reject evolution and believe God created the human race in a single day 10,000 years ago (Gallup, June 2012)
  • Only 34% of Americans correctly identify President Barack Obama as a Christian (Gallup, June 2012)

There’s another problem with citing public opinion in support of an argument: polls themselves are increasingly less credible. Pollsters are finding it harder and harder to get Americans to respond, and though the survey firms are reluctant to admit it, they have been experiencing extremely high refusal rates.

And the way a survey is worded and administered can have a huge impact on the results. Dalia Sussman of The New York Times has noted that abortion is one issue where wording really matters:

For example, ask people if they consider themselves “pro-life” or “pro-choice” and you’ll get one answer. Ask them if they think abortion should be legal or illegal, and you may get another. They’re not necessarily contradictory — someone may consider himself “pro-life” but as a matter of policy still say abortion should be generally available. That is a nuance often missed. A Gallup poll last spring reported that most Americans are pro-life. Some people took that to mean that most are opposed to legal abortion. That wasn’t the case, as would have been clear had the wording been noted. In fact, in a different question in the same poll, just 23 percent said abortion should be illegal in all circumstances.

Polls, in short, are not particularly accurate. They provide a snapshot of public opinion and often reflect the misinformed views of those who respond.

The fallacy of appealing to numbers (to the popularity of an idea or argument) is a tempting one, but clear thinkers resist that temptation. Whenever someone cites public opinion in support of their position, it’s helpful to remember that University of Chicago political scientists found that “in four nationally representative survey samples collected in 2006, 2010, and 2011, over half the American population consistently endorsed some kind of conspiratorial narrative about a current political event or phenomena.” (J. Eric Oliver and Thomas J. Wood, “Conspiracy Theories, Magical Thinking, and the Paranoid Style(s) of Mass Opinion,” Working Paper.)


Jefferson Flanders is president of MindEdge. He has taught at the Arthur L. Carter Journalism Institute at New York University, Babson College, and Boston University.

Copyright © 2012 Jefferson Flanders

Vetting sources

The problem is familiar to anyone using the Internet for research: how can you tell whether the information your Google search has unearthed is accurate? How can you verify it? How can you make sure that what you’ve surfaced is fact, not opinion?

One quick test recommended by (among others) librarians and professional researchers is to consider the source. You are less likely to encounter questions of veracity when you’re dealing with research from peer-reviewed academic journals, reportage from established news organizations, and traditional reference works.

Yet even these sources must be considered with a critical eye. The New York Times is regarded by many as the country’s “newspaper of record,” and researchers often turn to it as a source. While the paper’s editorial pages skew to the left, its news coverage is held to a standard of objectivity. Nonetheless, the paper can be a problematic source.

As Arthur Brisbane, the public editor of the Times, recently noted (“A Hard Look at the President”):

According to a study by the media scholars Stephen J. Farnsworth and S. Robert Lichter, The Times’s coverage of the president’s first year in office was significantly more favorable than its first-year coverage of three predecessors who also brought a new party to power in the White House: George W. Bush, Bill Clinton and Ronald Reagan.

Writing for the periodical Politics & Policy, the authors were so struck by the findings that they wondered, “Did The Times, perhaps in response to the aggressive efforts by Murdoch’s Wall Street Journal to seize market share, decide to tilt more to the left than it had in the past?”

Assuming that Farnsworth and Lichter have it right (and they are scholars who have been studying media bias and news content for quite some time), researchers and historians reviewing President Obama’s early years in office would be well served to sample news coverage from a variety of sources—not relying solely on the Times. Comparing news sources is much easier, thanks to the Internet and the digitization of news accounts.

Two of American journalism’s most thoughtful observers, Bill Kovach and Tom Rosenstiel, have considered many of these issues in their book Blur: How to Know What’s True in the Age of Information Overload. Kovach and Rosenstiel maintain that anyone looking at media today should ask six questions:

  • What kind of content am I encountering?
  • Is the information complete; and if not, what is missing?
  • Who or what are the sources, and why should I believe them?
  • What evidence is presented, and how was it tested or vetted?
  • What might be an alternative explanation or understanding?
  • Am I learning what I need to?

While Kovach and Rosenstiel are primarily focused on helping news consumers make sense of a confusing media landscape (and on strengthening their understanding as citizens), the six questions are valid for anyone engaged in research or searching for answers. The authors of Blur also note that scientists, law enforcement and intelligence professionals, and journalists share an understanding that truth is provisional and empirical, “a statement of what is most probable in proportion to the evidence available at the time.”

Of course this need to vet sources more carefully means more effort on the part of the researcher or information consumer. Verification takes effort, and time. Double-checking and weighing the evidence can slow the research process, but the benefit of this more painstaking approach is that we get closer to the truth. (We’d do well to remember Will Rogers’s words of wisdom: “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)



Copyright © 2012 Jefferson Flanders


Is there a better time to observe tortured logic, rhetorical sleight of hand, and fallacious thinking than during a presidential election year? Probably not, because it’s all on public display.

I’m not talking about outright fabrications or falsehoods by politicians (although these often turn up during campaigns). No, it’s the attempts at political persuasion, the arguments for and against those seeking office, that deserve additional attention.

Candidates at all levels seek to convince voters that they are the best choice, but the arguments they make often don’t stand up to critical scrutiny.

Here are some of the more common arguments made to voters and why you should be properly skeptical when you encounter them.

  • “Because I hail from humble origins, I can better represent working people than my opponent if you elect me to the State House/Congress/the White House.”
  • Why you should be skeptical: The argument that only someone who comes from a given group (the “working class”) can have empathy for that group and can (and will) address its problems once in office isn’t supported by history. The patrician Franklin Delano Roosevelt instituted the New Deal and the country’s first safety net for the working poor. Great Britain’s Margaret Thatcher, daughter of a grocer, rejected most government intervention to help the working class, believing it developed dependency. And it’s helpful to remember the French president Charles de Gaulle’s observation: “In order to become the master, the politician poses as the servant.”

  • “Because I was a success in business, I’ll be a success in elective office.”
  • Why you should be skeptical: Dealing with political constituencies is different than running a company. The track record of CEOs in public office suggests that executive skills don’t always translate from the private sector to the public. Some business leaders are successful in office (Mayor Michael Bloomberg, Mitt Romney, Angus King, Herbert Lehman) and some are not (Jon Corzine, Warren G. Harding, Herbert Hoover).

  • “My opponent isn’t a true conservative/liberal/libertarian. I am, so vote for me.”
  • Why you should be skeptical: This is a variant of an “us versus them” argument, a key tool for those who employ identity politics to garner support. It’s not the policies or experience of the candidate that is being questioned, but rather whether or not he or she belongs to the correct “in” group. Competence (or real conviction) doesn’t matter. That’s a poor reason to vote for someone.

  • “My opponent has declared war on the rich/poor.”
  • Why you should be skeptical: Military metaphors in politics are typically used to energize partisan supporters and bear little relationship to reality. In truth, no one is declaring war on anyone or anything. Democrats paint their Republican opponents as heartless skinflints who want to balance budgets on the backs of the poor. Republicans accuse Democrats of “class warfare” and demonizing the rich. These arguments are a form of the “appeal to emotion” fallacy.

  • “If my opponent is elected, we must fear for our way of life/the Republic/civil rights/all that is good and pure.”
  • Why you should be skeptical: The fallacy of exaggeration is front and center in many campaigns. This year you can expect way too many claims that voting for the other candidate will bring on the Apocalypse. Conservatives will claim a Democratic victory means that America will be transformed along the lines of a European social democracy. Liberals will warn that a Republican victory will “turn the clock back” to a racist, sexist society. Both claims are overwrought.

Campaign 2012 will feature lots of misleading and overheated political rhetoric. Several billion dollars will be spent by candidates in attempting to persuade us to vote for them. Let’s hope that Americans take a clear-thinking approach and consider the arguments being made, and whether the evidence and logic support them. At the same time, they shouldn’t take it all too seriously. They should also remember, as Will Rogers once said, that “Politicians can do more funny things naturally than I can think of to do purposely.”



Copyright © 2012 Jefferson Flanders

Two recent instances of academic fraud have again raised questions about the validity of some research in the field of psychology.

The first is the sad case of Harvard professor and psychologist Marc D. Hauser, who resigned this summer after being found responsible for scientific misconduct by the university. Hauser, a researcher in primate behavior and animal cognition, ran into trouble when students in his lab alleged data fabrication.

Gautam S. Kumar and Julia L. Ryan of the Harvard Crimson reported that Harvard’s investigation found instances of misconduct in three of Hauser’s published articles. One was retracted, and two others had to be corrected. The university stated that five other studies were either unpublished or were corrected before being published.

Kumar and Ryan added:

Harvard remained silent on the details of the investigation and speculation abounded as to the exact nature of the “scientific misconduct”—a charge Harvard defines as fabrication, falsification, or plagiarism….

…A report earlier this year seemed to validate some of his research after an experiment validated one of the papers that was subject to scrutiny when Hauser’s methods began to be questioned.

But scientists are split over whether these results vindicate the professor, with some arguing that the replications do not prove a proper conduct in the original study. Others also point to a potential conflict of interest in the fact that it was Hauser who replicated the experiments in question.

The second case is that of Dutch social psychologist Diederik A. Stapel who admitted to falsifying data and making up experiments. Stapel, a prolific author, has published more than 150 papers, many of them on hot-button topics like racism and gender discrimination.

As Benedict Carey of the New York Times noted in “Fraud Case Seen as a Red Flag for Psychology Research”:

The scandal, involving about a decade of work, is the latest in a string of embarrassments in a field that critics and statisticians say badly needs to overhaul how it treats research results. In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.

In the wake of these scandals, and the questions that have been raised about the research, how should journalists, students, and the public approach these studies? Here are a few clear thinking guidelines:

  • The more striking or sensational the findings, the more skeptical we should be. As I’ve noted before, extraordinary claims require exceptional evidence. Critics have noted that some researchers in psychology appear to be chasing media attention and the possibility of celebrity with the topics they study and the conclusions they draw.
  • Discount any study where the raw data isn’t published and readily available to other researchers. As Carey of the Times noted, Stapel got away with his fraud because he was ” ‘lord of the data,’ the only person who saw the experimental evidence that had been gathered (or fabricated).”
  • Be skeptical of studies that support current left-wing academic thinking about race, gender, and culture. Stapel built a career on this sort of research, and it is no surprise that he was successful, because his like-minded colleagues were predisposed to believe his findings. (This caution would also apply to right-wing ideological prejudices that might influence research if there were a significant number of conservative academics in the social sciences, but there aren’t.)
  • Watch out for researcher bias. Hauser seemed intent on proving that monkeys (cotton-top tamarins) had more cognitive abilities than previously believed. Is it so surprising that his research findings, which relied heavily on subjective assessments of videotaped tamarin behavior, would support that point of view?

Journalists should be particularly careful in the way they present psychology research. They should resist the temptation to play up the more sensational findings about ESP or racial stereotyping or gender-related research, recognizing how limited and often flimsy the data actually is.

In short, take this research with a very large grain of salt.*


*Taking something with a grain of salt means “to accept a thing less than fully,” according to the Oxford English Dictionary. The original Latin phrase cum grano salis can be translated as “with a grain of salt” or “with a grain of wit.”


Copyright © 2011 Jefferson Flanders

If you are like me, you’re finding media reports on the state of the U.S. economy far from clear.

One major problem is that most of what we know about the economy comes from numbers (unemployment, Gross Domestic Product) that are released by various government agencies. There is very little independent reporting done about the macroeconomic trends: most stories include the government statistics with perhaps a few comments from economists.

Context? Not likely. Let’s take unemployment. In September, when the August unemployment numbers were released, the prevailing media narrative was one of surprise. The New York Times reported: “August brought no increase in the number of jobs in the United States, a signal that the economy has stalled and that inaction by policy makers carries substantial risk.” Reuters called the report of continued 9.1 percent unemployment “unexpected.” Bloomberg News agreed: “Employment in the U.S. unexpectedly stagnated in August…”

But wait! Clear thinking requires us to ask some questions:

  • Why was the unemployment report “unexpected”?
  • Do the job numbers truly indicate that the economy has “stalled” or is “sputtering out”?
  • Are there ways to put the jobs picture in better context?

If August’s bad job news was unexpected, it’s because reporters are paying attention to economists who have proven they have no real idea about what the unemployment rate is going to be.

Need proof? Just consider what economists have been saying over the past few months. In July, Nigel Gault, chief U.S. economist for IHS Global Insight, was quoted as saying: “The June jobs report was a shocker. It was far worse than expected, and weak on all key dimensions — job creation, unemployment, the length of the work week, and hourly earnings.” In August, when the July unemployment rate dropped to 9.1 percent, Jim O’Sullivan, chief economist at MF Global told USA Today that the report “should lessen fears that the recovery is truly faltering.” In September, Nariman Behravesh, chief economist at IHS, was gloomy about the August report: “This is further evidence that the economy is very close to stalling if not having stalled.”

Behravesh copped to having forecast August job growth of 15,000; other economists had predicted some 60,000 additional jobs in the month.

So the Labor Department report was deemed “unexpected” because economists who have never demonstrated that their models work predicted job growth that didn’t happen. Bloomberg had surveyed 86 economists and found a range of estimates for July payrolls “from a decline of 20,000 to a 160,000 increase.”

A better measure, and one that puts the job picture in better context, can be found with Gallup’s daily employment survey. While it isn’t adjusted for seasonality, I think it’s a much better portrait of who has work and who doesn’t in the U.S.

Gallup’s numbers for August showed worsening unemployment from July, and underemployment (the percentage of workers who are unemployed plus the percentage working part time but wanting full-time work) at 18.5%.

Anyone who monitors the Gallup survey would not find the August numbers “unexpected” or be surprised by less than stellar job growth.

As to whether the economy is “stalled out” or “sputtering,” I think the only realistic answer is: we don’t know.

And, placing the job numbers in perspective, the historical record shows that when a recession is triggered by a financial crisis, the recovery is long and slow. That’s an insight that should be more widely reported.



Copyright © 2011 Jefferson Flanders

Perhaps one of the most persistent, and damaging, fallacies is the sunk cost fallacy, on full display in the recent collapse of the solar panel company Solyndra LLC after heavy government investment (a $535-million Energy Department loan guarantee). The sunk cost fallacy describes our natural tendency to continue a project or investment once we have sunk money, effort, or time into it: the notion of “in for a penny, in for a pound.”

Solyndra filed for bankruptcy protection on August 31 and dismissed the company’s 1,100 employees.

Despite warning signs about Solyndra’s shaky market position, decision-makers at the Energy Department had decided to prop up the company. Their motivation was partly political and partly magical thinking. It was political because President Barack Obama has made creating “green jobs” a centerpiece of his argument for government subsidies for wind and solar energy. The magical thinking was that somehow the laws of market economics would be suspended.

Megan McArdle of the Atlantic concisely outlined Solyndra’s fundamental market problems:

Solyndra didn’t invent the photovoltaic cell; they had a solar panel technology that didn’t use silicon and was (by the company’s account, anyway) supposed to be easier to install on the roofs of big box stores. However, this design was tricky and very expensive to manufacture, apparently: my reading indicates that Solyndra was able to make a product for $6 that sold for $2-3 (before the market collapsed, anyway). I take it that the idea was that Solyndra would somehow ease this disparity by getting up to scale, but the product was apparently extremely difficult to manufacture, and they never got their assembly line working properly.

You can’t manufacture devices at $6 and sell them at $3 and expect to survive for long. The old joke about losing money on every individual sale but making it up on volume is just that, a joke.
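The joke can be put in plain arithmetic. A minimal sketch in Python, using the rough $6 cost and $3 price figures McArdle cites (illustrative numbers only; real unit costs might fall with scale, which was evidently Solyndra’s hope):

```python
# Rough per-unit figures from McArdle's account; illustrative only.
UNIT_COST = 6.0    # cost to manufacture one panel, in dollars
UNIT_PRICE = 3.0   # price one panel fetched, in dollars

def total_loss(units_sold: int) -> float:
    """With a fixed negative margin, losses grow linearly with volume."""
    return (UNIT_COST - UNIT_PRICE) * units_sold

# More volume means more loss, not less:
for units in (1_000, 100_000, 1_000_000):
    print(f"{units:>9,} units -> ${total_loss(units):>12,.0f} lost")
```

Scale only helps if it pushes unit cost below unit price; at a fixed $3-per-unit loss, volume simply multiplies the damage.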

And even if Solyndra had been able to make solar panels that would be competitive with those manufactured in China, there was still the negative cost differential between energy produced by cleantech and energy from traditional carbon-based methods, specifically (in the US) coal.

Why sunk costs?

It is easy to judge a failure like Solyndra with 20/20 hindsight. Yet what makes the sunk cost fallacy so damaging, and so prevalent, is how in its most pernicious forms it openly acknowledges the potential for failure. Venture capitalists know that they will have to make a heavy investment in a start-up and ride out losses until the early-stage company proves itself in the market and starts making a profit. The rationale for continued investment is that the market uptick is just around the corner.

Smart investors know when to cut their losses. They will walk away from a large investment unless they are convinced that further investment makes sense (no “good money after bad”). It’s difficult, because our natural tendency is to want to protect any resources we’ve already committed.

In the case of Solyndra, the company grew from $6 million in 2008 to $100 million in 2009 when it won approval from the Energy Department for the loan guarantee. In late 2010, when the company ran out of money, the agency agreed to refinance the loan and to continue paying out federal funds. Why? It was a classic example of the sunk cost fallacy in action. Government officials hoped that Solyndra could avoid bankruptcy and concluded after “a due-diligence effort” that Solyndra “still had a viable business.” Some nine months later that assessment proved tragically flawed.

The way to avoid the sunk cost fallacy is to ask some simple questions: Is this a good investment on its own merits? Would I make this investment if I had no money or time already invested? Am I being completely objective in my assessment, or am I letting my emotions influence my judgment? (It’s hard for many to admit they’ve made a mistake, or they may believe so earnestly in an idea or a project that they can’t let go.)

Had those questions been asked about Solyndra from the start, it’s hard to imagine that American taxpayers would be on the hook for half-a-billion dollars.



Copyright © 2011 Jefferson Flanders

Words and phrases, and how we define them, can shape how we think. They can be used to obscure meaning or they can provide greater clarity.

There’s been a focus in the social sciences and media studies over the past several decades on the concept of framing. Sociologist and communications scholar Todd Gitlin has written: “Frames are principles of selection, emphasis and presentation composed of little tacit theories about what exists, what happens, and what matters.” Active framing involves choosing the language in the hopes of setting an agenda or defining a debate. Frames can provide a mental shortcut, focusing our thinking within given boundaries.

Framing has an impact on our actions. The psychologists Amos Tversky and Daniel Kahneman (Kahneman later won the Nobel Prize in economics) ran a series of experiments that showed the way choices are framed can influence an individual’s decisions, sometimes encouraging “irrational” choices.

Two of the leading proponents of the power of framing in political discussion are, on the Right, the pollster Frank Luntz, and on the Left, George Lakoff, a Berkeley professor of cognitive linguistics. Luntz has instructed Republicans to call the inheritance or estate tax the “death tax” and substitute the phrase “exploring for energy” instead of “drilling for oil.” Lakoff argued that liberals should call trial lawyers “public-protection attorneys” and refer to taxes as “investments.”

Some of their suggestions for reframing seem absurd. As the character Butters in South Park once said: “You can call a shovel an ice-cream machine, but it’s still a shovel.” Yet framing often can be effective, not only in politics but in the workplace and other arenas.

Debt debate framing

This summer’s debate in Congress over federal spending, the deficit, and the debt limit has been chock full of framing attempts. The Democrats have focused on what they claim are deep cuts in the Federal budget that will “gut Medicare and Social Security.” Republicans have recast any proposed increase in federal revenue through tightening or eliminating tax loopholes as a “tax increase.”

A closer look suggests both of these frames are misleading. The claim by the Democrats that federal spending is being “gutted” just doesn’t hold up. What the government spends isn’t being reduced below current levels, which is what such rhetoric implies. Instead, the agreed-upon deficit deal seeks to reduce the growth rate of federal baseline spending in the future (and many observers are skeptical that even these future constraints will actually be adopted in practice).

To put this in practical terms: if you earned a salary of $50,000 a year and your boss told you that your compensation was being slashed, what would you expect to be paid the following year? Perhaps $45,000? What would you think if your salary turned out to be $51,000? Would you consider that a cut?
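The salary analogy can be made concrete in a few lines of Python. The growth rates here are hypothetical, chosen only to reproduce the figures in the paragraph above:

```python
# Toy numbers from the salary analogy; the raise percentages are
# hypothetical, picked to reproduce the figures in the text.
salary = 50_000
baseline_next_year = salary * 1.04   # the "baseline": a planned 4% raise
slowed_next_year = salary * 1.02     # the "slashed" outcome: $51,000
real_cut_next_year = 45_000          # what "slashed" sounds like

# A baseline "cut" still leaves you earning more than today;
# only a genuine cut leaves you below current pay.
assert slowed_next_year > salary
assert real_cut_next_year < salary
print(f"Baseline: ${baseline_next_year:,.0f}  "
      f"Slowed growth: ${slowed_next_year:,.0f}  "
      f"Actual cut: ${real_cut_next_year:,.0f}")
```

The rhetorical trick is measuring the “cut” against the projected baseline rather than against what is actually being spent today.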

For their part, Republicans are guilty of deliberately obscuring the difference between “tax increases” and “tax rate increases.” Closing tax loopholes will certainly increase the taxes/revenues the federal government collects, but it won’t impact tax rates, which is what most of us think of as “raising taxes.” When a politician makes a “no new taxes” pledge, most voters understand that he or she is promising not to vote for higher tax rates.

The logic of labeling any alteration in the tax code as a “tax increase” doesn’t hold up. Would Republicans vote against an across-the-board flat tax of 15% on income (a huge tax rate cut) simply because it might spur the economy (as some economists argue) and thus produce more tax revenue?

Countering framing

The best way to counter someone who is trying to establish a frame is to ask pointed questions. What is meant by this new phrase or use of language? Is there a conscious attempt to change past definitions of an issue? Is the language used in the frame a euphemism for something unpleasant or unpopular? Will adopting this frame limit the debate and consequently limit options or choices? What are alternative or counter-frames? Who is promoting a new frame and why? Can the frame be explained in clear and simple language? It’s important not to accept the new frame at face value, because when you do so you are implicitly accepting its premises.



Copyright © 2011 Jefferson Flanders

“Correlation does not imply causation.”

“Don’t rush to conclusions.”

Remembering these two simple phrases can help you achieve clarity in your thinking.

Clear thinkers need to be cautious when analyzing cause-and-effect relationships. A better understanding of the relationship between correlation and causation could spare you from falling prey to the false cause fallacy.

By definition, correlation is “a relation existing…between mathematical or statistical variables which tend to vary, be associated, or occur together in a way not expected on the basis of chance alone.” When two variables move in the same direction, we call that positive (or direct) correlation and when they move in opposite directions, it’s negative or inverse correlation. The fundamental logical error in the false cause fallacy is assuming that when two things occur simultaneously or chronologically (correlation), one has caused the other to happen (cause). But that may not be true. Clear thinkers must consider other possible explanations for any correlation between events.

Considering correlation

Correlation may happen by coincidence. As historian David Hackett Fischer noted: “Near-perfect correlations exist between the death rate in Hyderabad, India, from 1911 to 1919, and variations in the membership of the International Association of Machinists during the same period. Nobody seriously believes that there is anything more than a coincidence in that odd and insignificant fact.”
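Fischer’s coincidence is easy to reproduce: any two series that merely trend in the same direction over time will show a high Pearson correlation, whatever their subject matter. A minimal sketch in Python, with made-up numbers standing in for two unrelated upward-trending series:

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Two invented, causally unrelated series that both drift upward:
series_a = [110, 135, 150, 170, 195, 215, 240, 265, 290]  # hypothetical
series_b = [12, 14, 15, 17, 18, 21, 23, 25, 28]           # hypothetical

r = pearson(series_a, series_b)
print(f"r = {r:.3f}")  # near-perfect correlation, no causation anywhere
```

The shared upward drift (here, simply the passage of time) is doing all the work; the correlation coefficient alone says nothing about why the two series move together.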

Two events that are correlated may not involve causation. For example, children who have been vaccinated may then develop symptoms of autism shortly afterward (correlation). Some activists have argued that childhood vaccines (the MMR, or measles, mumps, rubella, vaccine, or the preservative thimerosal used in some other childhood vaccines) cause autism in children. But exhaustive research suggests that there is no cause-and-effect relationship. The MMR vaccine is administered at the age when children who have a genetic disposition toward autism start displaying symptoms of the disorder; it’s natural to imagine the vaccine triggered the autism, but large epidemiological studies have found no causal connection. (See: “Immunization Safety Review: Vaccines and Autism” from the Institute of Medicine of the National Academies.) This is a classic example of the cum hoc ergo propter hoc (Latin for “with this, therefore because of this”) variation of the false cause fallacy.

A further variation: events that are correlated may not have a direct causal relationship but may share a common cause. For example, people with red hair have higher skin cancer rates. It’s not their red hair that causes the skin cancer, however, but their relative levels of pheomelanin and eumelanin pigment—a common cause for both hair color and fair skin. Fair skin is what in reality makes an individual more prone to skin cancer. (See: “The Genetics of Sun Sensitivity in Humans” from The American Journal of Human Genetics.)

Rushing to judgment

The false cause fallacy is very seductive: when we see two phenomena occur at the same time, it’s “logical” to think one has caused another. We see cause-and-effect all around us in the natural world (the wind blows and leaves fall, the moon changes and tides respond). But drawing conclusions about the relationship between simultaneous or sequential events is risky.

Take the attempted assassination of Gabrielle Giffords, a Democratic member of the U.S. House of Representatives from Arizona’s 8th district, in January 2011. Some on the Left quickly linked the attack by a lone gunman to rhetoric by Sarah Palin, the Tea Party movement, and other right-wing groups that they argued encouraged violence. The New York Times ran a story entitled “Bloodshed Puts New Focus on Vitriol in Politics,” and columnist Paul Krugman (“Climate of Hate”) wrote: “…violent acts are what happen when you create a climate of hate.”

But the causal relationship between right-wing rhetoric and the motives of the accused shooter was just as quickly proven false. Jared Lee Loughner, the alleged gunman, turned out to be mentally disturbed, a man without any coherent political beliefs. In May 2011, Loughner was ruled incompetent to stand trial, with a court-appointed psychiatrist finding that he “experienced delusions, bizarre thoughts and hallucinations and appeared to suffer from paranoid schizophrenia.”

Many of those who had quickly connected the attack on Rep. Giffords to Tea Party rhetoric did so for ideological reasons. Reaching a premature conclusion, however, only deepened political divisions in the country. Had these partisans waited a few days and not rushed to judgment, the probable cause-and-effect relationship would have been clearer: mental illness, rather than politics, as the motivating factor.

Avoiding the cause-and-effect trap

In many cases, causation is extremely difficult to establish. When considering a cause/effect relationship, it’s helpful to remember the following tips:

  • Recognize that because two events, A and B, are related in time, it doesn’t follow that A caused B (symbolically represented by logicians as A ⇒ B). For example, did launching a weather balloon in the morning (A) cause rain (B) an hour later? Look at circumstances where A was present, but B wasn’t and where B was present, but A wasn’t. Were there times when a weather balloon (A) was launched and it didn’t rain (B)? Did it rain (B) when there were no weather balloon launches (A)?
  • Review other possible causes for an effect. Events C and D may be related in time, but E, F, or G may have been the underlying cause for D, not C. A new marketing campaign (C) for frosted doughnuts is followed by declining sales of the product (D); it’s not failed marketing that has caused the slump, however, but more likely a wave of negative media coverage (E) of scientific reports linking sugary desserts to adult-onset diabetes (E ⇒ D).
  • Don’t assume that events have only a single cause. There can be a number of causes involved, and clear thinkers recognize that complexity.
  • Focus on explaining why one event, H, would cause another event, I, beyond a simple sequential relationship. For example, consider an overloaded truck (H) that drove onto a bridge just before it collapsed (I). An explanation of causation (H ⇒ I) would note that the truck’s weight was greater than the amount the bridge was originally designed to support; thus, there is a logical reason for its failure.
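The first tip above (comparing how often B occurred with and without A) can be sketched as a simple tabulation. The daily records below are invented for illustration (1 means the event occurred that day, 0 means it didn’t):

```python
# Hypothetical daily records for the weather-balloon example:
# A = balloon_launched, B = rained.
days = [
    {"balloon_launched": 1, "rained": 1},
    {"balloon_launched": 1, "rained": 0},
    {"balloon_launched": 0, "rained": 1},
    {"balloon_launched": 0, "rained": 1},
    {"balloon_launched": 1, "rained": 0},
    {"balloon_launched": 0, "rained": 0},
]

def rate(records, condition_key, condition_value, outcome_key):
    """Fraction of records showing the outcome, among those matching the condition."""
    matching = [r for r in records if r[condition_key] == condition_value]
    return sum(r[outcome_key] for r in matching) / len(matching)

with_a = rate(days, "balloon_launched", 1, "rained")       # rain given a launch
without_a = rate(days, "balloon_launched", 0, "rained")    # rain given no launch
print(f"P(rain | launch)    = {with_a:.2f}")
print(f"P(rain | no launch) = {without_a:.2f}")
```

In this made-up record it actually rained more often on days without a launch, which is exactly the kind of evidence that should make a clear thinker abandon the claim that A caused B.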

Clear thinkers remain alert to the way language can help create false causal relationships. Conjunctions like “thus,” “therefore,” and “consequently” can imply a cause-and-effect relationship when one doesn’t exist (“More people died in Central Hospital’s emergency rooms than anywhere else in the facility; therefore, going to Central’s emergency room causes death.”) There is logic in cause-and-effect reasoning; the trick is to make sure the relationship is truly causal, not mere coincidence or simple correlation.


Jefferson Flanders is president of MindEdge. He has taught at the Arthur L. Carter Journalism Institute at New York University, Babson College, and Boston University.

Copyright © 2011 Jefferson Flanders

They may be the three hardest words in the English language for many of us to say out loud: “I was wrong.”

Yet clear thinkers recognize that a willingness to acknowledge that they’ve been wrong is a key part of arriving at the truth.

It’s not surprising that we have a hard time admitting error. (You may be the exception that proves the rule, but I’ll admit to struggling with it at times.) It can mean hurt pride or even produce a sense of shame. In a work or professional setting, we may worry that a public mea culpa will call our competence into question. We may worry that we will be blamed for a lapse in judgment.

There may be public embarrassment associated with an admission of error. Or we may flinch from disclosing something that we imagine will disappoint our friends and hurt our reputation. In a competitive situation, it can be seen as a sign of weakness.

Sometimes ideology or theology gets in the way when it’s a question of (to use a Britishism) “climbing down.” Look no further than the Roman Catholic clerics who couldn’t accept Galileo’s evidence for a heliocentric solar system, or Stalinist apparatchiks who refused to admit that Marxian economic theory—collective farms, Five-Year Plans—was failing in practice. More contemporary examples aren’t hard to find. Conspiracy theorists (Truthers, Birthers, climate deniers), for example, ignore contrary evidence or challenge its veracity.

But being able and willing to admit error is crucial to clear thinking. When the famous economist John Kenneth Galbraith was challenged over changing his views on monetary policy during the Great Depression, he had a telling reply: “When the facts change, I change my mind. What do you do, sir?”

There can be a high cost to stubbornly refusing to admit error. Peter Senge, director of the Center for Organizational Learning at the MIT Sloan School of Management, has noted how people and organizations can fall into “addiction loops” where they continue to repeat the same damaging behavior because they can’t step back and see how some of their fundamental assumptions are flawed.

The courage to reverse course

It takes courage for public figures to reassess their conclusions and admit they’ve erred, especially when the perceived stakes are high.

The former South African jurist Richard Goldstone recently backed away from the controversial conclusions of a United Nations fact-finding mission he chaired that had reviewed the 2008-09 conflict in Gaza between Israel and the Palestinians. The Goldstone Report found that both the Israeli military and the Palestinian group Hamas had potentially committed war crimes and crimes against humanity.

Earlier this month Goldstone wrote in a Washington Post op-ed (“Reconsidering the Goldstone Report on Israel and war crimes”) that subsequent investigatory findings had led him to alter his views on Israel’s culpability:

The allegations of intentionality by Israel were based on the deaths of and injuries to civilians in situations where our fact-finding mission had no evidence on which to draw any other reasonable conclusion. While the investigations published by the Israeli military and recognized in the U.N. committee’s report have established the validity of some incidents that we investigated in cases involving individual soldiers, they also indicate that civilians were not intentionally targeted as a matter of policy.

Goldstone faced harsh criticism for reversing course on the question of whether Israel had intentionally—and illegally—targeted Palestinian civilians. Some argued that he should never have endorsed the initial conclusions of the Gaza fact-finding group (Jeffrey Goldberg, a national correspondent for The Atlantic magazine, wrote: “Unfortunately, it is somewhat difficult to retract a blood libel, once it has been broadcast across the world.”)

Others claimed that Goldstone, who is Jewish, had been pressured into changing his position (For example, Nimer Sultany, a civil rights attorney in Israel, wrote in the Boston Globe: “Palestinians, long abandoned by countries that should press for their freedom, have now been abandoned by a leading human rights advocate who could not withstand months of withering and cruel criticism. He is showing a new ideological bent in castigating Hamas for rocket fire, but saying nothing about the siege Israel is applying to Gaza.”)

I’d argue that it’s a disservice to the pursuit of the truth to question either Goldstone’s sincerity or his integrity. To insist that once a position is taken that it should never be altered is a recipe for intellectual stagnation. John Kenneth Galbraith had it right: intellectual honesty demands an acknowledgement of changed facts and, when appropriate, altered conclusions.



Cold fusion. The validity of extrasensory perception. The existence of alien life forms.

When scientists make these assertions—that they have found a way to generate energy through a tabletop nuclear reaction, or they’ve shown that humans can predict future events, or they’ve proven that alien life exists—they must expect to encounter skepticism. They are, after all, making extraordinary claims. Controversy is bound to follow.

Other scientists rightly approach these startling claims with caution, starting with the premise that if it seems too good to be true, it probably is. At the heart of the scientific method is the careful truth-testing of any hypothesis through an evaluation of the data and the replication of the results from any experiments or tests. The scientific peer-review process asks disciplinary experts to consider the validity of the findings; ideally these reviewers are independent, open-minded, and able to reach objective conclusions.

Cold fusion and ESP violate what we currently know about the way the physical world works, so any claims are, by definition, extraordinary. While a case can be made that it’s likely other life forms are extant somewhere in the universe (based solely on the odds), having proof of their existence is another matter.

Signs of alien life?

The Fox News headline earlier this month certainly caught people’s attention: “Exclusive: NASA Scientist Claims Evidence of Alien Life on Meteorite.” Dr. Richard B. Hoover, an astrobiologist with NASA’s Marshall Space Flight Center, claimed that he had found fossil evidence of bacterial life within a rare class of meteorites (CI1 carbonaceous chondrites) that have fallen to Earth. In an article in the March edition of the Journal of Cosmology, Hoover described fracturing meteorite stones, examining them with an electron microscope, and finding fossilized remains of micro-organisms. While many of the micro-organisms looked like those found on Earth, some were unrecognizable, according to Hoover.

Hoover told Fox News that he interpreted his findings “as indicating that life is more broadly distributed than restricted strictly to the planet Earth.”

As could be expected, Hoover’s paper faced a skeptical reception, with some scientists wondering if the meteorite samples had been contaminated after falling to Earth, and whether what Hoover had uncovered were actually chance shapes or random mineral formations. Dr. David Marais, an astrobiologist with NASA’s Ames Research Center, noted that similar discoveries of fossilized life in meteorites had not stood up under closer scrutiny. “It’s an extraordinary claim, and thus I’ll need extraordinary evidence.”

Astronomer Seth Shostak of the Search for Extraterrestrial Intelligence (SETI) told Space.com that Hoover hadn’t made a conclusive case. “If you look at the microscope photos, they are certainly suggestive—looking like photos made of various terrestrial bacteria. But then again, while intriguing, that’s hardly proof. If similarity in appearance were all it took to prove similarity in kind, then it would be pretty easy for me to demonstrate that there are big animals living in the sky, because I see clouds that look like them.”

Shostak added: “Sometimes scientific results are ambiguous, and are greeted with the common (and rather uninspiring) refrain that ‘more research is needed.’ That’s the case here. We need evidence from other approaches and from other researchers.”

Predicting the future?

Whether deserved or not, a researcher’s Ivy League background will often trigger public interest in his or her claims. Harvard professor John Mack once attracted attention for his research on claims of alien abductions. Princeton engineering dean Robert G. Jahn established the Princeton Engineering Anomalies Research laboratory, which explored paranormal phenomena (primarily telekinesis and ESP), and sparked considerable controversy before it was shut down in 2007 by the university.

Now the research of Cornell psychology professor (emeritus) Daryl J. Bem, purporting to show people accurately predicting random events, has generated significant media attention while being panned by many in the field.

In an article published in The Journal of Personality and Social Psychology, Bem reported that some of his 1,000 test subjects were able to predict future events with more than 50 percent accuracy, with the odds strongly against the combined results being “merely chance coincidences or statistical flukes.” One of the experiments asked participants to guess behind which of two curtains a photograph might be hidden. New York Magazine provides an online version of the Bem precognition experiment.

Critics of Bem’s paper note that initial attempts to replicate his experimental results have been unsuccessful, and that his study didn’t employ Bayesian statistical analysis, which would have assessed his findings against known probabilities.

In an article in the March/April 2011 edition of Skeptical Inquirer (“Back from the Future: Parapsychology and the Bem Affair”) psychology professor James Alcock sharply critiqued Bem’s paper: “Careful scrutiny of his report reveals serious flaws in procedure and analysis, rendering [his] interpretation untenable.”

Reconsidering cold fusion?

Another controversial scientific topic, cold fusion, recently resurfaced in the news. In January, Sergio Focardi, an Italian physicist, and Andrea Rossi, an inventor, demonstrated a device that they said produces 12,400 watts of heat power for every 400 watts of input.

Clay Dillow of Popsci.com provided an explanation of cold fusion in layman’s terms: “Hypothetically (and broadly) speaking, the process involves fusing two smaller atomic nuclei together into a larger nucleus, a process that releases massive amounts of energy. If harnessed, cold fusion could provide cheap and nearly limitless energy with no radioactive byproduct or massive carbon emissions.”

Dillow added: “The problem is, they [Rossi and Focardi] haven’t provided any details on how the process works. After their paper was rejected by several peer reviewed scientific journals, it was published in the Journal of Nuclear Physics—an online journal apparently founded by Rossi and Focardi. Further, they say they can’t account for how the cold fusion is triggered, fostering deep skepticism from others in the scientific community.”

The credibility of research into cold fusion had suffered what appeared to be a critical blow some two decades ago. In 1989, electrochemists Martin Fleischmann and Stanley Pons claimed that they had produced nuclear fusion at room temperature with a tabletop setup using palladium, deuterium, and an electric current. But scientific researchers could not replicate the Fleischmann–Pons results, and cold fusion was labeled junk science.

The Focardi-Rossi announcement followed a recent resurgence of scientific interest in Italy, Israel, and the U.S. in cold fusion—now renamed “low energy nuclear reactions” (LENR) or “chemically assisted nuclear reactions” (CANR). As 60 Minutes reported in 2009, “[a]t least 20 labs working independently have published reports of excess heat – heat up to 25 times greater than the electricity going in.”

Yet skepticism from mainstream scientists persists. “I’m still waiting for the water heaters. I’m still waiting for the thing that will produce heat on demand,” American physicist Richard Garwin told 60 Minutes. “I require that you be able to make one of these things, replicate it, put it here. It heats up the cup of tea. I’ll drink the tea. Then you make me another cup of tea. And I’ll drink that too. That’s not it.”

Keeping an open mind

The scientific process should separate the good from the bad, exposing the half-baked and fraudulent, and leave us with the truth about a theory or claim. The model should be that espoused by medieval theologian Peter Abelard: “By doubting we come to inquiry; and through inquiry we perceive truth.”

This truth, however, should be seen as provisional. Why? This encourages the scientist—or any clear thinker—to continue to question received wisdom and perhaps arrive at a further breakthrough. As astronomer Phil Plait commented in a Discover magazine blog post about the potential for alien life on meteorites (which he doubts): “As a scientist and a skeptic I have to leave some room, no matter how small, for the idea that this might be correct.”


