Most scientific claims about food - from fatty acids raising the risk of heart disease to soy lowering the risk of breast cancer - are rooted in epidemiology, the science of spotting possible connections. For example, do people with a lot of olive oil in their diet also live longer?
The way scientists make these connections is, for the most part, by administering a food frequency questionnaire. These forms may contain up to 100 questions and ask people to remember what they ate up to a year before.
Pause for a moment to reflect on what you ate last Tuesday, and on Monday 12th December.
Happily for you, if you were filling out the form, some questionnaires aren't too precise. Did you drink a small quantity of fizzy drink, a medium quantity or a large one? My small is half a glass. How small is your small?
So was that a small amount of fizzy drink you drank in December, or a large amount?
It seems so obvious that such results are less than accurate that many would wonder why scientists are collecting them at all.
Well, first up, there is the issue of ethics. Under- or over-feeding people with some food until it becomes clear whether they get healthier, or ill, is not a terribly viable way to proceed. Fine for those who get healthier, but a sad way to change your life for those who serve to identify a food problem.
There is also the matter of cost: food frequency questionnaires (FFQs) are cheap, well, at least, a lot cheaper than some of the other ways of observing what people really eat, such as three-day food records or 24-hour dietary recall.
These two alternatives ask people to remember and record their diets, just like the FFQ, but during a much shorter space of time. The 24-hour dietary recall is considered by some to be the "gold standard".
It has been estimated that a basic assessment of diet for 160 000 women would cost $1.2m using an FFQ, but over $23m for either of the two alternatives.
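To put those figures on a per-person footing, here is a minimal back-of-the-envelope sketch, assuming the quoted totals cover the same 160,000 women:

```python
# Rough per-participant cost comparison using the figures cited above.
# Assumption: both quoted totals ($1.2m FFQ, $23m alternatives) cover
# the same cohort of 160,000 women.

participants = 160_000
ffq_total = 1_200_000         # dollars, FFQ-based assessment
alt_total = 23_000_000        # dollars, food records / 24-hour recall

ffq_per_person = ffq_total / participants   # dollars per woman
alt_per_person = alt_total / participants   # dollars per woman

print(f"FFQ:          ${ffq_per_person:.2f} per person")
print(f"Alternatives: ${alt_per_person:.2f} per person")
print(f"Cost ratio:   {alt_per_person / ffq_per_person:.1f}x")
```

On these numbers the FFQ works out at $7.50 per woman against roughly $144 for the alternatives - a ratio of about 19 to 1, which goes some way to explaining the method's persistence.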
So where does that leave the world in judging the results of this cheap method?
As far as some expert estimates are concerned, the FFQ comes with a mighty margin of error, with energy intakes being wrong by as much as 30 per cent. And then add in the other 'little' errors that undoubtedly creep in along the way - how much error are we now looking at?
This level of random misinformation is enough to invalidate the results of just about any FFQ-based study that comes through. And I'm not sure I believe even that figure. I wouldn't bet that my recall of what I ate in the second week of December falls within a 30 per cent margin of error: I haven't the foggiest!
But, be it 30 per cent or 50 per cent out, this is a long way from accuracy. And accuracy must be the number one priority for any scientist, even where finance is a factor. Pockets are not bottomless. But bad measurement is NOT 'just about good enough'.
The reality is that the FFQ is a fragile basis for any conclusion, and yet it is currently the foundation for most of the food and health stories that consumers receive.
And for every conclusion based on a set of rough FFQs that is then put into the media machine, where journalists will nicely iron out any 'ifs' and 'buts' that remained, the result is going to be very close indeed to misinformation - unless it happens to be a randomly lucky strike at some truth.
This has left even the scientists at odds over these conclusions.
Take one recent study from the Harvard School of Public Health, which linked dietary iron and calcium to an increased risk of lung cancer, based on an FFQ. An independent epidemiologist from British charity Cancer Research UK told me the conclusions were not valid, because there was simply too much scope for error and chance. Who to believe? Two epidemiologists looking at the same FFQ results - one says they mean there's a problem, the other says they don't.
For years, it has been the sensationalising media that has taken the blame for this push-me pull-you game of red wine is good for you: no, red wine is bad for you.
But look at the science and the scientists, and their FFQs, and the source of the problem becomes starkly clear. As in all things, good science is not the same as poor science - and the difference starts with measurement.
As some scientists are starting to say, rather loudly - in their scientific way. In 1998, the International Epidemiological Association (IEA) published an article, "Epidemiology deserves better questionnaires", proposing five ways of achieving better FFQs.
These proposals were never implemented. And the debate has since moved on. To wit, the editorial published last month in Cancer Epidemiology, Biomarkers & Prevention (Vol. 14, pp. 2826-2828) that led with the question: "Is it time to abandon the food frequency questionnaire?"
I believe the answer is: YES. The inconsistencies between results from FFQs and results from food diaries are multiplying. The latter may be more expensive, but it is better to spend $20m on a study that shows something real than $1.2m on a study that will mislead and may even harm.
So is epidemiology embracing the need to move on? Not wholeheartedly. Indeed, the field is facing a crisis point - too much money and too many careers are tied up in FFQs. Abandoning them now would be a mighty about-turn for many, and would set back the record of food science results by a mile.
Nonetheless, the first few difficult steps have been taken.
Some teams, in some places, are now using the more expensive three-day food records or 24-hour dietary recall.
And there may be other and better ways forward. With most households owning a computer, and most of us mobile phones, the best answers may lie at home.
If I can download up-to-the-minute football scores onto my phone, surely I can record what I just ate, and maybe even take a photograph of it. Another alternative might be web pages that volunteers log on to after each meal - no need for paper, quicker data analysis, more readily available progress reports.
Whatever the ways we find to achieve cheaper, more accurate records of what people are eating, one thing is clear: it's time the FFQ got binned.
This was one way of using trees that really wasn't making the world a better place.
Stephen Daniells is the Food Science Reporter for NutraIngredients.com and NutraIngredients-USA.com. He has a PhD in Chemistry from Queen's University Belfast and has worked in research in the Netherlands and France.
If you would like to comment on this article please contact Stephen Daniells.