In the case of Dr. Potti, the numbers had the potential to do great things, but they were horribly misused at great cost. The original article on Dr. Potti's research, published in 2006, notes the power his discoveries held: "The test could improve the prospects of patients who currently suffer the side effects of chemotherapies without knowing if they are undergoing a treatment that has a good chance of working for them." Obviously, we all know how this turned out. It is The Economist's later article from 2011 that I find particularly striking, for it notes the many complications that could have contributed to the Duke scandal beyond simply blaming Dr. Potti. I think the challenges it points out regarding peer review are especially interesting:
the process of peer review relies (as it always has done) on the goodwill of workers in the field, who have jobs of their own and frequently cannot spend the time needed to check other people's papers in a suitably thorough manner [...] Moreover, the methods sections of papers are supposed to provide enough information for others to replicate an experiment, but often do not. Dodgy work will out eventually, as it is found not to fit in with other, more reliable discoveries. But that all takes time and money.

Thus we can see how it takes a whole team of individuals to produce the sort of numbers that can be replicated and that people can rely on. This is why it is so important to remember that numbers are indeed fallible, not necessarily out of malicious intent, but because of the tremendous effort it takes to create the sort of exemplary research that "consumers" of statistics, and the general public, desire. Overall, the article helped put some of the Duke case in perspective for me (I am absolutely not excusing Dr. Potti); it simply provided some more background information.