Is it a worthwhile or reliable means to gather information?
EDITOR’S NOTE: Jason Henninger is the managing editor and product manager at MedLearn Media.
In trending news, the Food and Drug Administration (FDA) is investigating reports of food poisoning linked to Lucky Charms cereal. General Mills’ own investigation found nothing wrong with the product, but that alone is, of course, not enough to satisfy the FDA.
A notable aspect of this story is the role crowdsourcing played, via the website iwaspoisoned.com. Around 3,000 people have complained there of stomach issues, and blamed the cereal. The FDA itself received about 100 complaints through its own reporting methods.
While it is beyond the scope of MedLearn to comment on the veracity of these claims or the inner workings of the FDA or General Mills, the situation raises an interesting question about the nature of crowdsourcing. Is it, to put it bluntly, a worthwhile or trustworthy means of gathering information? In the pre-Internet days, the idea of gaining reliable intelligence from mass sources would mostly have been dismissed as hearsay at worst and folk wisdom at best.
The term crowdsourcing was coined by editors at Wired magazine in 2006, who intended it to mean outsourcing a task to a crowd (more along the lines of what we’d now call the gig economy). The idea evolved to mean a sort of voluntary public database. Wikipedia, probably the most widely used crowdsourced website (though it predates the term itself), is an incredible achievement in collaboration, and a great place to begin learning about a vast array of subjects.
But no matter how much accurate material can be found there, Wikipedia should never be considered a definitive source. After all, academic, scientific, and journalistic research all have rigorous standards; crowdsourcing has posting guidelines at most. This is not to say that crowdsourcing isn’t valuable. But it isn’t intrinsically reliable.
The complexity of the claim being crowdsourced can affect the reliability of the resulting information. A website called Does the Dog Die? crowdsources warnings about potentially triggering scenes in movies. Since the reporting relates to something easily verifiable, it is a fairly non-controversial website with little to complicate its findings. Compare this to the famous example of Redditors who took it upon themselves to identify the Boston Marathon bomber, which resulted in several innocent people being harassed.
The website in question here, iwaspoisoned.com, requires no sign-in and no input from physicians (though the site employs physicians as advisors). In other words, there’s no proof. After all, it is crowdsourcing, and crowdsourcing doesn’t require proof. But one of the truisms of research is that correlation does not imply causation. A related truism: the plural of anecdote is not evidence. To the website’s credit, it never claims to provide definitive proof of food poisoning outbreaks; it merely provides a platform to report them, and offers alerts. Further, it can and does report its findings to governmental bodies such as the FDA. There is likely value in doing so.
And yet one doesn’t need to think long to spot some pitfalls here. First and foremost, people without medical training are making what amount to self-diagnoses. This can be a dangerous practice for ill people, and a misleading one when made public. Someone could get sick from spaghetti they ate over the weekend but think: wait, I had Lucky Charms for breakfast. That must be it, because all these other people say the same thing.
It will be interesting to follow this story, especially as the FDA investigates the claims. Will iwaspoisoned.com come out looking like a great source of real-time data at a scope the FDA can’t match, or a platform for speculation? Either way, it might be a good time to switch to a less sugary breakfast.