Tuesday, May 27, 2008

Confidence Intervals Illuminate Absence of Evidence: Guidelines

CONFIDENCE INTERVALS ILLUMINATE ABSENCE OF EVIDENCE

Others may judge that a smaller benefit would be clinically useful. Even when a clinically useful effect has been ruled out, phrases such as "is not effective," "did not reduce," and "has no effect" are not justified. Also, confidence intervals reflect only uncertainty owing to random allocation, not that owing to failure to follow the protocol, non-random loss to follow-up, and so on. True uncertainty is therefore greater than indicated by confidence intervals. Lastly, we cannot claim priority with the title "Absence of evidence is not evidence of absence": a paper with this title was published in 1983.
Altman D., Oxford University Research Archive (ORA)
www.ora.ouls.ox.ac.uk
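
Altman's point is easy to see with numbers. Below is a minimal sketch (the figures are invented, not taken from his paper) of a trial whose result is "not significant" yet whose 95% confidence interval still includes a clinically useful benefit:

from math import sqrt

# Hypothetical trial (illustrative numbers only): 20/100 events on control,
# 14/100 on treatment.
events_control, n_control = 20, 100
events_treatment, n_treatment = 14, 100

p_c = events_control / n_control
p_t = events_treatment / n_treatment
risk_reduction = p_c - p_t  # absolute risk reduction

# Wald standard error for the difference of two proportions.
se = sqrt(p_c * (1 - p_c) / n_control + p_t * (1 - p_t) / n_treatment)
ci_low, ci_high = risk_reduction - 1.96 * se, risk_reduction + 1.96 * se

print(f"Absolute risk reduction: {risk_reduction:.3f}")
print(f"95% CI: ({ci_low:.3f}, {ci_high:.3f})")
# Output: roughly 0.060 with a 95% CI of about (-0.044, 0.164).
# The interval crosses zero, so the trial is "not significant", yet it also
# extends to a 16% absolute benefit: a clinically useful effect has not been
# ruled out, and "has no effect" would be an unjustified conclusion.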



STROBE Statement: STrengthening the Reporting of OBservational studies in Epidemiology
A paper, developed through several workshops and consultation with medical researchers, providing a checklist and guidelines on how to report observational research:

http://www.strobe-statement.org/Checklist.html


STROBE asks authors to ‘Give a cautious overall interpretation of results, considering objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence’, in line with Richard Doll's important statement (cited by Ebrahim and Clarke [1]) on the need to confirm unexpected results with potential implications for public health in further studies. The need for replication, which is an important point in science in general [8], is well taken, but has little to do with good reporting of an individual study: it is not the responsibility of the scientists who report that study. Nevertheless, in the explanatory paper [6], we discuss the scope of observational studies, from reporting a first hint of a potential cause of a disease, to verifying the magnitude of previously reported associations, and stress that further studies to confirm or refute initial observations are often needed [9]. STROBE tries to accommodate these diverse uses of observational research, from discovery to refutation or confirmation.

... Does this mean authors should be asked to ‘conduct a systematic review of other similar studies’? [1] As a previous editorial in the International Journal of Epidemiology argued [11], systematic reviews should be seen as original research and be published as such, rather than be reported in a paragraph of a discussion section. Interestingly, The Lancet recently updated their policy, asking authors of randomized trials to illustrate the relation between existing and new evidence by referring to a systematic review and meta-analysis [12]. We believe that in many situations this requirement is also appropriate for reports of observational research. But note that both The Lancet and the CONSORT recommendations for the reporting of randomized trials (Consolidated Standards of Reporting Trials) [13] stop short of asking authors to do a systematic review and meta-analysis.





Reporting Treatment Results

Most evidence on harms from medical treatments is obtained from observational research. Randomized controlled trials (RCTs) are often not useful in determining rates of adverse effects: the frequency of such events during RCTs may be low owing to restrictive inclusion and exclusion criteria; in addition, follow-up periods are relatively short, and the number of participants included in an RCT is limited. As a result, systematic reviews based on evidence from RCTs often fail to provide accurate data on adverse events. Evidence from nonrandomized studies on adverse effects is often dismissed, simply because the studies were not randomized; however, this philosophy should not be considered the best approach to practising evidence-based medicine [2].
What is the best evidence for determining harms of medical treatment?
http://www.cmaj.ca/cgi/reprint/174/5/645?ijkey=457418909ad1c21f316a85b2c11e7b008cfb17bf
Glasziou P, Vandenbroucke JP, Chalmers I. Assessing the quality of research. BMJ 2004;328:39-41.
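
To make the point about limited trial size concrete, here is a small sketch (standard binomial arithmetic, not taken from the papers cited above) of how easily a rare harm escapes detection in a typically sized trial arm:

def prob_at_least_one_event(rate: float, n: int) -> float:
    """Probability of observing at least one adverse event among n participants,
    assuming independent events with per-patient probability `rate`."""
    return 1 - (1 - rate) ** n

# A harm affecting 1 in 1,000 patients, in a trial arm of 300 participants:
rate, n = 1 / 1000, 300
print(f"P(at least one event observed): {prob_at_least_one_event(rate, n):.2f}")  # ~0.26

# "Rule of three": if zero events are seen among n patients, the upper 95%
# confidence bound on the true event rate is approximately 3/n -- here 0.010,
# i.e. the data cannot rule out a harm ten times more common than 1 in 1,000.
print(f"Upper 95% bound after 0 events in {n} patients: ~{3 / n:.3f}")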




Genetic variants may be trading one illness for another, research shows


Scientists have identified a genetic trade-off between prostate cancer and type 2 diabetes. Genes have been discovered that can raise the risk of one condition but protect against the other. Mark McCarthy, Professor of Diabetes at the University of Oxford, Fellow of Green College and a leader of the JAZF1 study, said that the discovery of three 'see-saw' genes makes it more likely that there is a real interaction rather than just a fluke.

www.green.ox.ac.uk