Ok, first watch this video:
What’s your first reaction? “Wow, that’s quite a good investment. Just increasing fruit and veg consumption would save $11 trillion? Let’s fund fruit and vegetable growers!”
Want to know my first reaction? “How the hell did they come up with those numbers?!”
See, I’m all about eating healthier and supporting the producers of more healthy foods. I agree that healthy food is too expensive compared to other foods, and that the way subsidies are set up in our current agri-food systems does nothing good for public health. But trying to make that claim using fishy data makes your argument weaker, not stronger. And the data used here is definitely fishy in at least some aspects.
Let’s break down the way the UCS arrived at $11 trillion (find their full paper here) and see where you could object:
First, a word about nutrition studies. When I was working on a diabetes prevention project two summers ago, I read a lot of nutrition cohort studies, and one of the things that is really difficult to establish is what in science is known as the ceteris paribus condition. If you work on a hypothesis of cause and effect, it is important to control for as many external factors as possible. Otherwise, how will you be certain that the one factor caused your result and not something completely different?
The problem here is that, unlike natural scientists, social scientists find it very hard to set up perfect experiments that control for all other factors, because life happens in between. This is as true for economists as it is for public health scientists, and especially true of cohort studies. These are normally set up as long-term (10 to 50 years) studies of people’s diets, behaviors and their health, morbidity and finally mortality causes. Thus, we are tracking different people during much of their lifetime, asking them at regular intervals what they are eating, how they are exercising, whether they are smoking, and whether they have had any health-related problems lately. (And, just by the way, try remembering what you ate last week, then track your eating the following week to see how accurate you were. Probably not very.)
Doing that gives us a mass of data on different individuals which we can then try to examine. Using statistical methods, we can try to find people who are as similar as possible while still showing a significant difference in, for example, the number of apples they eat. This is useful to a certain extent, but it rarely yields solid findings, because people seldom eat the very same diet, follow the same exercise routine and the same sleeping, smoking and drug consumption patterns, and differ only in how many apples they eat. Thus, the assumption that one additional piece of fruit, ceteris paribus, will be the cause of a decrease in morbidity risk may be statistically defensible, but it rests on very shaky grounds.
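To make the confounding problem concrete, here is a toy simulation (entirely made-up numbers, not from any real cohort): in this imaginary population, apples have zero causal effect on disease, but health-conscious people both eat apples and exercise more, so a naive comparison makes apple eaters look healthier anyway.

```python
import random

random.seed(42)

def simulate_person():
    # Hypothetical toy model: apple eating and exercise are correlated
    # because health-conscious people tend to do both.
    health_conscious = random.random() < 0.5
    eats_apples = random.random() < (0.8 if health_conscious else 0.2)
    exercises = random.random() < (0.8 if health_conscious else 0.2)
    # In this toy world, ONLY exercise lowers disease risk; apples do nothing.
    risk = 0.20 - (0.10 if exercises else 0.0)
    sick = random.random() < risk
    return eats_apples, sick

population = [simulate_person() for _ in range(100_000)]

def rate(group):
    return sum(sick for _, sick in group) / len(group)

apple_eaters = [p for p in population if p[0]]
non_eaters = [p for p in population if not p[0]]

# Naive comparison: apple eaters come out healthier even though apples
# have zero causal effect here -- the gap is driven entirely by exercise.
print(f"disease rate, apple eaters: {rate(apple_eaters):.3f}")
print(f"disease rate, non-eaters:   {rate(non_eaters):.3f}")
```

Unless a study measures and adjusts for everything playing the role of “exercise” here, the apples get credit they have not earned.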
Then, the UCS’s analysis is based on a meta-study that tries to aggregate the results from different cohort studies (carried out as described above) which had different definitions of what one portion of fruit or vegetable actually constituted, or even of what fell into the category of fruit (does juice count?) or vegetable (potatoes?), and that were based on a number of different methodologies. Again, this is not necessarily bad scientific practice, but numbers derived from these types of analyses – where ranges of potential error are possibly compounded – should be treated with great care in further analysis.
Plus, the journal article even mentioned the possibility of publication bias – the fact that, if you search for published journal articles and base your meta-analysis on these search results, you are very likely to find mainly significant relationships between the variables you are interested in. The reason? Non-significant results are much less “sexy” scientifically speaking, and thus much less likely to be published.
Thus, the 4% decrease in the risk of coronary heart disease that the paper mentions? Very shaky. As the paper itself points out:
In contrast, other facts are not in favor of a causal relation. In population studies fruit and vegetable intake correlates with healthy lifestyles, which may explain the lower CHD rates. Generally, consumers of fruit and vegetables smoke less, exercise more, and are better educated than nonconsumers (31). Although most studies adjust for lifestyle factors, residual confounders may still explain part of the favorable association with CHD. High intakes of fruit and vegetables are associated with a prudent diet pattern (32,33) and inversely related to the consumption of saturated fat–rich food (27), which may also contribute to the lower CHD risk (32–34). Furthermore, the hypothetical mechanisms involved in the protective effects of fruit and vegetables have not always been confirmed in randomized clinical trials (35–37). Therefore, the results of the present study support the concept that the regular consumption of fruit and vegetables is associated with low rates of CHD, however, it does not establish a causal relation. (Dauchet, L., P. Amouyel, S. Hercberg, and J. Dallongeville. 2006. Fruit and vegetable consumption and risk of coronary heart disease: A meta-analysis of cohort studies. Journal of Nutrition 136(10):2588–2593.)
It does not establish a causal relation. Ouch.
If you think about it, though, it makes sense, right? If you are on a high-calorie diet eating a lot of junk food rich in sugar and saturated fat, adding an apple a day will probably not make much of a difference to your overall health. Rather, in my opinion, a wholesale shift and substitution in your diet would be necessary to bring about real results.
We could continue by examining how the UCS puts a dollar value on the estimated 127,261 deaths per year from cardiovascular disease that would be prevented “if Americans increased their consumption of fruits and vegetables to meet dietary recommendations,” crunching those figures together with numbers from another study on the value of life lost between 1970 and 2000 to arrive at $11 trillion. That step is even fishier, but since it rests on a very shaky causal relationship, the point is pretty much moot. Also, this post is already a behemoth.
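For a quick sense of scale (my own back-of-the-envelope arithmetic, not the UCS’s actual methodology): if the $11 trillion were a single year’s savings, each prevented death would have to be worth roughly $86 million, far above the single-digit-millions value-of-statistical-life figures that agencies typically use, so the headline number must be cumulative over many years.

```python
# Back-of-the-envelope check (my numbers, not the UCS's methodology):
# what per-death value would $11 trillion imply if it were an annual figure?
total_savings = 11e12                 # the headline figure, in dollars
deaths_prevented_per_year = 127_261   # UCS's estimated annual CVD deaths avoided

implied_value_per_death = total_savings / deaths_prevented_per_year
print(f"implied value per prevented death: ${implied_value_per_death:,.0f}")
# approximately $86 million per death, so the $11 trillion cannot be
# a yearly saving at any plausible value of a statistical life.
```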
I guess my point is – don’t believe everything that a neat infographic video tells you. Even if it is produced by “Scientists”.