Category Archives: junk science

Examples of Junk Science: A Pseudoscience Whistleblower Story

Today's example of junk science is special in that we're focusing on the story of an individual who contributed to the exposure of the false science that is bringing down one of the most successful business startups of the last decade: Theranos.

If you follow the story, you'll find that there are a lot of boxes that can be ticked off on our checklist for how to detect junk science, but we'll be focusing on the following two categories in today's example.

How to Distinguish "Good" Science from "Junk" or "Pseudo" Science
Challenges
  • Science: Scientists in legitimate fields of study commonly seek out counterexamples or findings that appear to be inconsistent with accepted theories.
  • Pseudoscience: A challenge to accepted dogma in the pseudosciences is often considered a hostile act, if not heresy, which leads to bitter disputes or even schisms.
  • Comments: Science advances by accommodating change as new information is obtained. Frequently, the person who shows that a generally accepted belief is incorrect or incomplete is more likely to be considered a hero than a heretic.

Merit
  • Science: Scientific ideas and concepts must stand or fall on their own merits, based on existing knowledge and evidence. These ideas and concepts may be created or challenged by anyone with a basic understanding of general scientific principles, without regard to their standing within a particular field.
  • Pseudoscience: Pseudoscientific concepts tend to be shaped by individual egos and personalities, almost always by individuals who are not in contact with mainstream science. They often invoke authority (a famous name, for example, or perhaps an impressive-sounding organization) for support.
  • Comments: Pseudoscience practitioners place an excessive amount of emphasis on credentials, associations and recognition they may have received (even for unrelated matters) to support their pronouncements. They may also seek to dismiss or disqualify legitimate challenges to their findings because the challengers lack a certain rare pedigree, often uniquely shared by the pseudoscientists.

Let's get to the story of one of Theranos' whistleblowers, picking it up from when he reported his findings of falsified research and cover-ups to Theranos CEO Elizabeth Holmes, to see how these categories came into play.

After working at Theranos Inc. for eight months, Tyler Shultz decided he had seen enough. On April 11, 2014, he emailed company founder Elizabeth Holmes to complain that Theranos had doctored research and ignored failed quality-control checks.

In essence, Shultz was complaining that the company was engaged in pseudoscience, a form of fraud that seeks to use the veneer of respectable science to advance false premises in support of ideological, cultural or commercial goals. In the case of Theranos, the allegations are that the company's management falsified its research results to advance its commercial goals, where sales depended upon the medical community's acceptance of the product's capabilities. Capabilities that, by many accounts, have proven both greatly overstated and severely lacking.

Those allegations centered on the doctoring of research to ensure the company would obtain its desired results, and on the tossing out of failed quality-control checks that might contradict the perception the company sought to create: that its Edison test devices were genuinely capable of performing as claimed.

But perhaps the most telling evidence that individuals at the firm were engaged in highly unethical conduct was to be found in their response to being called out for their bad actions. The story continues with how Theranos' executives responded to the whistleblower's e-mail.

After emailing Ms. Holmes in April 2014 about the allegedly doctored research and quality-control failures, Mr. Shultz heard nothing for several days.

Then Mr. Balwani’s response arrived. It began: “We saw your email to Elizabeth. Before I get into specifics, let me share with you that had this email come from anyone else in the company, I would have already held them accountable for the arrogant and patronizing tone and reckless comments.”

Note the immediate attempt to put down the real challenge to the doctored research and ignored quality checks by changing the subject to make it all about the whistleblower. In addition to representing an abuse of whatever authority they may have held, this checks off the boxes on our checklist for how to detect junk science for both challenges and merit.

This kind of personal attack is surprisingly common among those who have knowingly engaged in junk science and have had their scientific misconduct exposed. Insults and smear attacks aimed at those who have identified misconduct are simply part of their toolbox for getting away with unethical behavior, where they hope to discourage additional scrutiny by making it personally painful for those seeking to expose it.

In the case of the Theranos executives' response, that appears to also have meant going after the whistleblower's family.

The reply was withering. Ms. Holmes forwarded the email to Theranos President Sunny Balwani, who belittled Mr. Shultz’s grasp of basic mathematics and his knowledge of laboratory science, and then took a swipe at his relationship with George Shultz, the former secretary of state and a Theranos director.

As it happens, George Shultz is Tyler Shultz's grandfather, and the conflict between Theranos' executives and the younger Shultz has led to a rift within the family.

But that's not the creepiest part of Theranos' response, which included some serious escalations after the WSJ began publishing a series of exposés about the company.

Theranos accused him of leaking trade secrets and violating an agreement to not disclose confidential information. Mr. Shultz says lawyers from the law firm founded by David Boies, one of the country’s best-known litigators and who later became a Theranos director, surprised him during a visit to his grandfather’s house.

They unsuccessfully pressured the younger Mr. Shultz to say he had talked to the reporter and to reveal who the Journal’s other sources might be. He says he also was followed by private investigators hired by Theranos.

The purpose of this kind of activity on the part of those engaged in pseudoscience is to intimidate the whistleblower into silence or compliance. The Theranos case is unusual in that the company had the resources to apply pressure through these costly means; other forms of intimidation, such as cyberstalking, are the preferred tactics of those with fewer resources.

Meanwhile, the Theranos story is still playing out in the headlines and in the courts, where in the latest news, it appears that the right people are facing consequences for their actions. There's hope for justice yet for the pseudoscience whistleblowers of the world!

References

Carreyrou, John. Theranos Whistleblower Shook the Company - and His Family. Wall Street Journal. [Online Article]. 18 November 2016.

Elizabeth Holmes, Theranos CEO - Source: White House

Examples of Junk Science: The Political Polls of 2016

When we launched this series back in July 2016, we never expected that we would find ourselves crossing into U.S. election analysis, and yet, thanks to the political polls of 2016, here we are!

In today's example of junk science, we're looking at several factors that tick off different boxes on our checklist for how to detect junk science, which include, but at this early date are not limited to, the following items. (If you're reading this article on a site that republishes our RSS news feed and doesn't neatly render the following table, please click here to access the version of this article that appears on our site.)

How to Distinguish "Good" Science from "Junk" or "Pseudo" Science
Inconsistencies
  • Science: Observations or data that are not consistent with current scientific understanding generate intense interest for additional study among scientists. Original observations and data are made accessible to all interested parties to support this effort.
  • Pseudoscience: Observations or data that are not consistent with established beliefs tend to be ignored or actively suppressed. Original observations and data are often difficult to obtain from pseudoscience practitioners, and are often just anecdotal.
  • Comments: Providing access to all available data allows others to independently reproduce and confirm findings. Failing to make all collected data and analysis available for independent review undermines the validity of any claimed finding. Here's a recent example of the misuse of statistics where contradictory data that would have avoided a pseudoscientific conclusion was improperly screened out, which was found after all the data was made available for independent review.

Models
  • Science: Using observations backed by experimental results, scientists create models that may be used to anticipate outcomes in the real world. The success of these models is continually challenged with new observations and their effectiveness in anticipating outcomes is thoroughly documented.
  • Pseudoscience: Pseudosciences create models to anticipate real-world outcomes, but place little emphasis on documenting the forecasting performance of their models, or even on making the methodology used in the models accessible to others.
  • Comments: Have you ever noticed how pseudoscience practitioners always seem eager to announce their new predictions or findings, but never like to talk about how many of their previous predictions or findings were confirmed or found to be valid?

Falsifiability
  • Science: Science is a process in which each principle must be tested in the crucible of experience and remains subject to being questioned or rejected at any time. In other words, the principles of a true science are always open to challenge and can logically be shown to be false if not backed by observation and experience.
  • Pseudoscience: The major principles and tenets of a pseudoscience cannot be tested or challenged in a similar manner and are therefore unlikely to ever be altered or shown to be wrong.
  • Comments: Pseudoscience enthusiasts incorrectly take the logical impossibility of disproving a pseudoscientific principle as evidence of its validity. By the same token, that scientific findings may be challenged and rejected based upon new evidence is taken by pseudoscientists as "proof" that real sciences are fundamentally flawed.

Given what happened in the U.S. on Election Day 2016, and what happened earlier in the year with the Brexit vote, one clear message of 2016 is that political polling is badly broken.

One example of how badly is given by FiveThirtyEight's Nate Silver, who presented the following analysis indicating how the 2016 Presidential election in the United States was expected to turn out based on the aggregation of numerous state polls across the United States, which had peaked in favor of candidate Hillary Clinton on the eve of Election Day:


538 - Who will win the presidency? 7 November 2016 22:08

Based on such analysis and its propagation throughout the media, many Americans went into and through Election Day with the firm expectation that Hillary Clinton would soon be officially elected to be the next President of the United States.

As we now know, however, that expectation was wildly off the mark. And the reason so many Americans were caught flat-footed when reality arrived late on 8 November 2016 is that they placed too much importance on the results of polling and analysis that was fundamentally flawed and would never pass scientific muster.

Alex Berezow of the American Council on Science and Health argues that's because political poll analysis like this example lacks even the most basic scientific foundation, where the models behind them cannot be falsified:

Earlier, we published an article explaining why there is no such thing as a scientific poll. In a nutshell, because polling relies on good but sometimes inaccurate assumptions, it is far more art than science. As we noted, "Tweaking [voter] turnout models is more akin to refining a cake recipe than doing a science experiment." Still, since American pollsters are good at their jobs, polls tend to be correct more often than not.

Recently, pollsters and pundits have tried to up their game. No longer content with providing polling data, they now want to try their hand at gambling, as well. It has become fashionable to report a candidate's "chance of winning." (ESPN does this, too. Last week, the network predicted that the Seattle Sounders had a 94% chance to advance to the semi-finals of the MLS Cup. I am grateful this prediction ended up being correct.)

However, these predictions are thoroughly unscientific. Why? Because it is impossible to test the model.

Let's use the soccer match as an example. The only way to know if ESPN's prediction that Seattle had a 94% chance of advancing to the semi-finals is accurate is to have Seattle and its opponent play the match 100 (or more) times. If Seattle advances 94 or so times, then the model has been demonstrated to be reasonably accurate. Of course, soccer doesn't work like that. There was only one game. Yes, the Sounders advanced, so the prediction was technically correct, but a sample size of one cannot test the model.

The exact same logic applies to elections. As of the writing of this article, Nate Silver gives Hillary Clinton an absurdly precise 70.3% chance of winning. (No, not 70.2% or 70.4%, but exactly 70.3%.) If she does indeed win on Election Day, that does not prove the model is correct. For Mr Silver's model to be proven correct, the election would need to be repeated at least 1,000 times, and Mrs Clinton would need to win about 703 times.

Even worse, Mr Silver's model can never be proven wrong. Even if he were to give Mrs Clinton a 99.9% chance of winning, and if she loses, Mr Silver can reply, "We didn't say she had a 100% chance of winning."

Any model that can never be proven right or wrong is, by definition, unscientific. Just like conversations with the late Miss Cleo, such political punditry should come with the disclaimer, "For entertainment purposes only."
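Berezow's sample-size argument can be sketched numerically. The toy simulation below (arbitrary numbers and seed, purely for illustration) shows why a single trial can never test a claimed 94% probability, while many repetitions can:

```python
import random

def estimate_probability(true_p, n_trials, seed=42):
    """Estimate an event's probability by replaying it n_trials times."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n_trials) if rng.random() < true_p)
    return wins / n_trials

# A single playing of the match can only yield 0.0 or 1.0 -- useless
# for testing a claimed 94% chance of advancing.
single = estimate_probability(0.94, 1)

# Many replays let the observed frequency converge on the true probability.
many = estimate_probability(0.94, 100_000)
```

With one trial the estimate is all-or-nothing; only the repeated experiment, which elections and soccer matches do not permit, could ever confirm or refute the stated odds.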

Starts With a Bang's Ethan Siegel points his finger at a different problem with such poll-based analysis, one that renders its conclusions invalid: the inherent inconsistencies arising from systematic errors in data collection.

A systematic error is an uncertainty or inaccuracy that doesn't improve or go away as you take more data, but a flaw inherent in the way you collect your data.

  • Maybe the people that you polled aren't reflective of the larger voting population. If you ask a sample of people from Staten Island how they’ll vote, that’s different from how people in Manhattan — or Syracuse — are going to vote.
  • Maybe the people that you polled aren't going to turn out to vote in the proportions you expect. If you poll a sample with 40% white people, 20% black people, 30% Hispanic/Latino and 10% Asian-Americans, but your actual voter turnout is 50% white, your poll results will be inherently inaccurate. [This source-of-error applies to any demographic, like age, income or environment (e.g., urban/suburban/rural.)]
  • Or maybe the polling method is inherently unreliable. If 95% of the people who say they’ll vote for Clinton actually do, but 4% vote third-party and 1% vote for Trump, while 100% of those who say they’ll vote for Trump actually do it, that translates into a pro-Trump swing of +3%.

None of this is to say that there’s anything wrong with the polls that were conducted, or with the idea of polling in general. If you want to know what people are thinking, it’s still true that the best way to find out is to ask them. But doing that doesn't guarantee that the responses you get aren't biased or flawed....

I wouldn't go quite as far as Alex Berezow of the American Council on Science and Health does, saying election forecasts and odds of winning are complete nonsense, although he makes some good points. But I will say that it is nonsense to pretend that these systematic errors aren't real. Indeed, this election has demonstrated, quite emphatically, that none of the polling models out there have adequately controlled for them. Unless you understand and quantify your systematic errors — and you can't do that if you don't understand how your polling might be biased — election forecasts will suffer from the GIGO problem: garbage in, garbage out.
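The turnout mismatch in Siegel's second bullet can be made concrete with a toy calculation; every number below is invented for illustration:

```python
# Hypothetical poll: each group's share of the sample and its measured
# support for candidate A (all numbers invented).
poll = {
    "white":    (0.40, 0.45),
    "black":    (0.20, 0.85),
    "hispanic": (0.30, 0.70),
    "asian":    (0.10, 0.65),
}

def topline(turnout_shares):
    """Overall support for candidate A under a given turnout model."""
    return sum(turnout_shares[g] * support for g, (_, support) in poll.items())

# Topline under the pollster's assumed turnout (the raw sample shares).
assumed = topline({g: share for g, (share, _) in poll.items()})

# Topline if actual turnout skews toward one group (50% white, as above).
actual = topline({"white": 0.50, "black": 0.17, "hispanic": 0.25, "asian": 0.08})
# The identical poll responses now imply a noticeably different result.
```

In this sketch the shift in turnout assumptions alone moves the topline by roughly three points, without a single poll response changing.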

An economist weighs in on the factors that may very well have skewed the results of 2016's political polling:

I can think of one technical reason the polls were wrong. The low response rate polls were subject to sample selection bias. Let's say that only 13% of the population responds to the survey (13% is the response rate in the Elon University Poll). If the 87% that doesn't respond is similar except for observed characteristics (e.g., gender, age, race, political party) then you can weight the data to better reflect the population. But, if the 87% that doesn't respond is different on some unobservable characteristic (e.g., "lock her up") then weighting won't fix the problem. The researcher would need other information about nonrespondents to correct it (Whitehead, Groothuis and Blomquist, 1993). If you don't have the other information then the problem won't be understood until actual behavior is revealed.
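The selection-bias problem described above can be sketched with a small simulation. The candidate labels and response rates below are hypothetical, chosen only to make the mechanism visible:

```python
import random

rng = random.Random(0)

# Hypothetical electorate: exactly half support candidate T, half support C,
# but T supporters respond to pollsters at a lower rate. The difference is
# unobservable, so weighting on demographics cannot correct for it.
RESPONSE_RATE = {"T": 0.10, "C": 0.16}

responses = []
for _ in range(200_000):
    vote = rng.choice(["T", "C"])           # true (unobserved) preference
    if rng.random() < RESPONSE_RATE[vote]:  # does this person answer the poll?
        responses.append(vote)

poll_share_C = responses.count("C") / len(responses)
# The true share for C is 0.50, but the poll systematically overstates it,
# and collecting more responses does not shrink the bias.
```

Because the bias lives in who answers rather than in how many answer, a bigger sample only makes the wrong number more precise.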

Which is to say that a lot of people who obsessed over the reports of pre-election polling, and who banked on them in setting their expectations for the future, ultimately had their hopes dashed when reality turned out to be very different. All because the polls and reporting upon which they relied were so inherently flawed that they had no idea how disconnected from reality their expectations had become.

In many cities around the United States, and particularly within those regions where people counted on a Clinton victory to retain the benefits of their political party's power over the rest of the nation, that disappointment has sometimes turned into protests, discriminatory threats and outright rioting.

Much of which could have been avoided if Americans had trustworthy political polling results and analysis to more properly ground their expectations. Instead, we're discovering that junk science in political polling and punditry and their role in setting irrational expectations has a real cost in physical injuries and property damage within their own communities.

Examples of Junk Science: Models of Mathiness

In many of the examples of junk science that we've previously presented, we've focused on cases where junk science results were obtained from what might be called the "Garbage In, Garbage Out", or GIGO, principle, where either inappropriate data was used or where relevant data was suppressed in order to guarantee results that would advance their author's preferred narrative.

But in today's example, we'll focus on situations where the KIBO principle can apply, whose family friendly translation is "Knowledge In, Baloney Out". In short, it is not in the data where the deficiencies will be found, but rather in the author's choice of analytical methods that can be all-too-easily manipulated to achieve a predetermined outcome, which enables pseudoscience results to be advanced with a low probability of detection.

Here's the relevant item from our checklist for how to detect junk science that applies to today's example.

How to Distinguish "Good" Science from "Junk" or "Pseudo" Science
Models
  • Science: Using observations backed by experimental results, scientists create models that may be used to anticipate outcomes in the real world. The success of these models is continually challenged with new observations and their effectiveness in anticipating outcomes is thoroughly documented.
  • Pseudoscience: Pseudosciences create models to anticipate real-world outcomes, but place little emphasis on documenting the forecasting performance of their models, or even on making the methodology used in the models accessible to others.
  • Comments: Have you ever noticed how pseudoscience practitioners always seem eager to announce their new predictions or findings, but never like to talk about how many of their previous predictions or findings were confirmed or found to be valid?

Today's example also specifically applies to the field of economics, where we'll be discussing Dynamic Stochastic General Equilibrium (DSGE) models, whose characteristics are such that they can really lend themselves to generating pseudoscientific results that can be difficult to detect. DSGE models feature prominently in analyses produced by advocates of the "Freshwater" Real Business Cycle (RBC) and New Keynesian schools of thought within the discipline, whose core assumptions about how the economy behaves are directly incorporated into DSGE models.

Our discussion then starts with an observation which explains why the debate between freshwater and saltwater schools continues, even though the private sector (or marketplace) would appear to have largely rejected the DSGE models produced according to the "freshwater" school's assumptions.

One curiosity that economists seem too polite to note is that one important school of macroeconomic thought—"freshwater" macroeconomics that focuses heavily on the idea of a "real" business cycle and disparages the notion of either fiscal or monetary stimulus—has completely flopped in the marketplace. It lives, instead, sheltered from market forces at a variety of Midwestern nonprofit universities and sundry regional Federal Reserve banks.

Stephen Williamson, a proponent of freshwater views, reminded me of this recently when he contended that macroeconomics is divided into schools of thought primarily because there's no money at stake. In financial economics, according to Williamson, "All the Wall Street people care about is making money, so good science gets rewarded." But in macroeconomics you have all kinds of political entrepreneurs looking for hucksters who'll back their theory....

"Political entrepreneurs" may be far too polite a term. These kinds of econometricians might be better described as pseudoscience peddlers, and we should recognize that they are by no means limited to a single school within the economics discipline. Regardless of where they fall on the ideological spectrum, pseudoscience peddlers are people who are on the prowl for marks that they can trick into buying into their deficient output.

In such hands, a tool like Dynamic Stochastic General Equilibrium modeling can represent a means of distracting attention away from severe defects in their analyses, where the presentation of the mathematical model is really little more than window dressing, specifically aimed at giving the work a "scientific" veneer to help sell it while also obscuring the means by which the predetermined analytical results were ensured.

How does that work? NYU economist Paul Romer explains how an econometrician can use confounding variables in a DSGE model to achieve a predetermined result.

As I will show later, when the number of variables in a model increases, the identification problem gets much worse. In practice, this means that the econometrician has more flexibility in determining the results that emerge when she estimates the model.

The identification problem means that to get results, an econometrician has to feed in something other than data on the variables in the simultaneous system. I will refer to things that get fed in as facts with unknown truth value (FWUTV) to emphasize that although the estimation process treats the FWUTV's as if they were facts known to be true, the process of estimating the model reveals nothing about the actual truth value. The current practice in DSGE econometrics is to feed in some FWUTV's by "calibrating" the values of some parameters and to feed in others with tight Bayesian priors. As Olivier Blanchard (2016) observes with his typical understatement, "in many cases, the justification for the tight prior is weak at best, and what is estimated reflects more the prior of the researcher than the likelihood function."

This is more problematic than it sounds. The prior specified for one parameter can have a decisive influence on the results for others. This means that the econometrician can search for priors on seemingly unimportant parameters to find ones that yield the expected result for the parameters of interest.
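Romer's point about tight priors can be illustrated with the simplest possible Bayesian setup, a normal-normal conjugate update. This is a toy model with invented numbers, not a DSGE estimation, but the mechanism is the same:

```python
def posterior_mean(prior_mean, prior_var, data_mean, obs_var, n):
    """Posterior mean of a normal mean under a normal prior (conjugate update)."""
    prior_precision = 1.0 / prior_var
    data_precision = n / obs_var
    return (prior_precision * prior_mean + data_precision * data_mean) / (
        prior_precision + data_precision
    )

data_mean, n = 0.2, 100   # what the observed data actually say

loose = posterior_mean(0.8, 10.0, data_mean, 1.0, n)   # diffuse prior
tight = posterior_mean(0.8, 1e-4, data_mean, 1.0, n)   # "tight Bayesian prior"
# loose lands near 0.2 (the data dominate); tight stays near 0.8
# (the researcher's prior dominates, regardless of the data).
```

With a diffuse prior the estimate tracks the data; with a sufficiently tight prior the "estimate" is essentially whatever the researcher assumed going in, which is exactly Blanchard's complaint.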

A potential red flag that such defective analysis is present might be found in cases where DSGE models are either inappropriately or inexplicably put to use in applications that would never pass muster in the private sector, which demands greater transparency in analytical methods and also requires that modeled results be validated with real-world observations, both within and outside the period for which the model was specifically developed. In a sense, the practice of applying DSGE models in these cases may be considered a form of junk science p-hacking, where the model is specifically tuned to produce a preferred outcome, which we can't mention without sharing John Oliver's priceless commentary on the topic!

Getting back to the topic at hand, after presenting a similar example, Romer describes how "facts with unknown truth value" (FWUTV's) can survive challenges from an audience thanks to their scientific veneer and a lack of transparency:

With enough math, an author can be confident that most readers will never figure out where a FWUTV is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask....

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, "because I say so" seems like a pretty convincing answer to any question about its properties.

Throw in a number of additional confounding variables, and the number of places where a pseudoscience practitioner might cook their analysis in a DSGE or error correction model to produce their desired predetermined results increases rapidly, which makes detecting its deficiencies difficult without significant effort. These kinds of models might therefore be considered an ideal tool of choice for pseudoscience practitioners intent upon plying their deceptive trade, which is why they would choose to employ them over more accepted or demonstrably better analytical methods.

Meanwhile, the core assumptions that underlie such models are perhaps why they don't pass the smell test of at least one mainstream economist:

I am generally a quite traditional mainstream economist. I think that the body of economic analysis that we have piled up and teach to our students is pretty good; there is no need to overturn it in any wholesale way, and no acceptable suggestion for doing so. It goes without saying that there are important gaps in our understanding of the economy, and there are plenty of things we think we know that aren't true. That is almost inevitable. The national – not to mention the world – economy is unbelievably complicated, and its nature is usually changing underneath us. So there is no chance that anyone will ever get it quite right, once and for all. Economic theory is always and inevitably too simple; that can not be helped. But it is all the more important to keep pointing out foolishness wherever it appears.

Especially when it comes to matters as important as macroeconomics, a mainstream economist like me insists that every proposition must pass the smell test: does this really make sense? I do not think that the currently popular DSGE models pass the smell test. They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way. I do not think that this picture passes the smell test. The protagonists of this idea make a claim to respectability by asserting that it is founded on what we know about microeconomic behavior, but I think that this claim is generally phony. The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether.

"Generally phony" is perhaps the best description for DSGE modeling results altogether. Unless their creators provide full transparency into all the factors and assumptions they have incorporated into their models to obtain their analytical results, it may be considered a safe policy to reject any findings based on those modeled results. Even with such transparency, should any of those factors and assumptions be judged too unrealistic by those independently assessing them, the results obtained from DSGE modeling might still be candidates for automatic rejection.

There are, after all, very valid reasons for why DSGE modeling has been all but completely rejected for use in the private sector, where the market purportedly being modeled by them has itself found them to be neither useful nor relevant to the real world.

References

Blanchard, O. (2016). Do DSGE Models Have a Future? Peterson Institute for International Economics, PB 16-11. [PDF Document]. August 2016.

Gürkaynak, Refet and Edge, Rochelle. Dynamic stochastic general equilibrium models and their forecasts. VoxEU. 28 February 2011.

Keen, Steve. Oliver Blanchard, Equilibrium, Complexity, And the Future of Macroeconomics. Forbes. 6 October 2016.

Romer, David. Advanced Macroeconomics, 4/e. Chapter 7: Dynamic Stochastic General-Equilibrium Models of Fluctuations. [PDF Document]. McGraw Hill. 2012.

Romer, Paul. The Trouble With Macroeconomics. 5 January 2016 Commons Memorial Lecture of the Omicron Delta Epsilon Society. [PDF Document]. 14 September 2016.

Romer, Paul. The Trouble With Macroeconomics, Update. Paul Romer (blog). 21 September 2016.

Smith, Noah. "Freshwater vs. Saltwater" divides macro, but not finance. Noahpinion. 12 December 2013.

Smith, Noah. The most damning critique of DSGE. Noahpinion. 10 January 2014.

Smith, Noah. What Can You Do With a DSGE Model?. Noahpinion. 27 May 2013.

Solow, Robert. Building a Science of Economics for the Real World. Prepared Statement for Congressional Testimony before the House Committee on Science and Technology's Subcommittee on Investigations and Oversight. [PDF Document]. 20 July 2010.

Yglesias, Matthew. Freshwater Economics Has Failed the Market Test. Slate. 18 December 2013.


Examples of Junk Science: Taxing Treats

When junk science goes unchallenged, it can have real world consequences.

In today's example of junk science, we have a case where the real world consequences involve taxes being selectively imposed on a single class of products that is commonly purchased by millions of consumers, soda pop, because of the perceived harm to people's health that is believed to result from the excessive consumption of a single one of its ingredients, sugar.

What makes this an example of junk science is the combination of the ideological and cultural goals of the proponents of the city's soda tax and the inconsistencies in their proposed solution, which ignores thousands of other food and beverage products that contain similar levels of sugar in its various forms (sucrose, fructose, et cetera) but will not be subjected to a tax aimed at solving a perceived public health issue. The table below lists the specific items from our checklist for how to detect junk science that apply to today's example.

How to Distinguish "Good" Science from "Junk" or "Pseudo" Science
Aspect Science Pseudoscience Comments
Goals The primary goal of science is to achieve a more complete and more unified understanding of the physical world. Pseudosciences are more likely to be driven by ideological, cultural or commercial (money-making) goals. Some examples of pseudosciences include: astrology, UFOlogy, Creation Science and aspects of legitimate fields, such as climate science, nutrition, etc.
Inconsistencies Observations or data that are not consistent with current scientific understanding generate intense interest for additional study among scientists. Original observations and data are made accessible to all interested parties to support this effort. Observations or data that are not consistent with established beliefs tend to be ignored or actively suppressed. Original observations and data are often difficult to obtain from pseudoscience practitioners, and are often just anecdotal. Providing access to all available data allows others to independently reproduce and confirm findings. Failing to make all collected data and analysis available for independent review undermines the validity of any claimed finding. Here's a recent example of the misuse of statistics in which contradictory data that would have prevented a pseudoscientific conclusion was improperly screened out, as discovered after all the data was made available for independent review.

As part of the discussion related to today's example of junk science, you'll also see the phrase "Pigovian tax". The term is named after British economist A.C. Pigou, who proposed that taxing things or activities that produce undesirable consequences will lead to less of them, provided that lawmakers set the tax at the correct level to compensate for the cost of the negative consequences and impose it everywhere necessary to achieve the intended result.
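To make the mechanism concrete, here is a minimal sketch of how a Pigovian tax is supposed to be set: the tax per unit equals the estimated external cost per unit of the harmful ingredient, and it applies everywhere that ingredient appears. The $0.01-per-gram figure is a made-up illustrative number, not an actual estimate of the external cost of sugar.

```python
# Hypothetical external cost, in dollars per gram of sugar.
# This number is purely illustrative.
EXTERNAL_COST_PER_GRAM = 0.01

def pigovian_tax(grams_of_sugar: float) -> float:
    """Tax owed on an item, applied uniformly to its sugar content."""
    return round(grams_of_sugar * EXTERNAL_COST_PER_GRAM, 2)

# Under a consistent Pigovian scheme, a can of Coke (39 g of sugar) and a
# sweetened yogurt (47 g) would both be taxed in proportion to their sugar.
print(pigovian_tax(39))  # 0.39
print(pigovian_tax(47))  # 0.47
```

A tax that hits only the first item while exempting the second is, by Pigou's own logic, mis-specified: the external cost comes from the sugar, not from the container it arrives in.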

Otherwise, what you get will closely resemble today's example of junk science, where one city's lawmakers are making a total hash out of nutrition science, public health, and tax policy.


Busybodies in the American public, never content to leave other people alone, always seem to need a common enemy to rally against. For years, it was McDonald's. Then it was Monsanto and Big Pharma. Now, it's Big Soda.

At first glance, a war on soda might appear to make sense. There is no nutritional benefit to soda. Given the large and growing segment of the U.S. populace that is obese or contracting type 2 diabetes, perhaps a Pigovian tax on soda (with the aim of reducing soda consumption) makes sense. After all, the science on sugar is pretty clear: Too much of it in your diet can lead to health problems.

But a closer look at food science reveals that a tax on sugary drinks (such as soda, sports drinks, and tea), a policy being pondered by voters in the San Francisco Bay area, is deeply misguided. We get sugar in our diets from many different sources, some of which we would consider "healthy" foods.

Taxed versus Untaxed Grams of Sugar in Selected Foods and Drinks

A 12-oz can of Coke has 39 grams of sugar. That's quite a bit. How does that compare to other foods? You might be surprised.

Starbucks vanilla latte (16 oz) = 35 grams

Starbucks cupcake = 34 grams

Yogurt, sweetened or with fruit (8 oz) = 47 grams

Homemade granola (1 cup) = 24.5 grams

Grape juice (8 oz) = 36 grams

Mango (1 fruit without refuse) = 45.9 grams

Raisins (A pathetic 1/4 cup) = 21 grams
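The figures above can be restated as a quick comparison against the taxed benchmark. This sketch simply encodes the list and flags which untaxed items contain at least as much sugar as the taxed 12-oz can of Coke.

```python
# Grams of sugar in a 12-oz can of Coke, the taxed benchmark.
COKE_12OZ = 39

# Untaxed items and their sugar content, in grams, from the list above.
untaxed = {
    "Starbucks vanilla latte (16 oz)": 35,
    "Starbucks cupcake": 34,
    "Sweetened yogurt (8 oz)": 47,
    "Homemade granola (1 cup)": 24.5,
    "Grape juice (8 oz)": 36,
    "Mango (1 fruit)": 45.9,
    "Raisins (1/4 cup)": 21,
}

# Untaxed items with at least as much sugar as the taxed can of Coke.
over = [name for name, grams in untaxed.items() if grams >= COKE_12OZ]
print(over)  # ['Sweetened yogurt (8 oz)', 'Mango (1 fruit)']
```

Two of the seven untaxed items outright exceed the Coke benchmark, and three more come within a few grams of it, which is the inconsistency at the heart of this example.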

If these food activists were consistent, they would also advocate for a tax on fruit juice, granola, and coffee. But considering that these very same activists are probably vegan, organic food-eating granola-munchers, they're not going to do that. The truth is, moderation is key to a healthy diet and preventing diseases like obesity and type 2 diabetes*. But that simple message is boring, and it doesn't excite nanny state activists.

Furthermore, if proponents of a soda tax were actually serious about reducing diseases related to poor nutrition, they would endorse a public health campaign aimed at raising awareness of the sugar content found in all foods. Or, they might endorse a Pigovian tax on all high-sugar foods. But, they won't do that, either, because it would be widely despised, as people strongly dislike paying large grocery bills. So instead, they demonize Big Soda, which is politically popular.

And that is the very definition of a feel-good policy based on junk science.

*It should also be pointed out that food choices are only one factor among many that determine whether a person becomes obese or develops type 2 diabetes. Genetics, weight, and physical inactivity also play roles.


Unfortunately, the field of nutrition science has often been the victim of pseudoscientific research and practices.

In today's example, the nutrition pseudoscience that argues that only sugary soda beverages should be taxed to deal with the negative consequences of excessive sugar consumption has intersected with the self-interest of politicians who strongly desire to boost both their tax revenues and their power over the communities they govern without much real concern about seriously addressing the public health issues that they are using to justify their policies.

The way to tell whether that's the case is to watch what they do with the money from the taxes they collect. If any part of that money is diverted to unrelated purposes, such as paying public employee pensions, then it is a safe bet that they never believed the public health problems they promised to solve by imposing such taxes were anywhere near as great as they claimed.

And unfortunately, like the junk science on which such poor public policy is based, you often won't find out until long after the damage has been done.

References

Berezow, Alex. San Francisco Soda Tax: A Feel-Good Policy Based On Junk Science. [Online Article]. American Council on Science and Health. 29 September 2016. Republished with permission.
