Probability: Pragmatism in broad sense
For the ‘explication’ of a pre-theoretical concept in terms of a scientifically precise one, a number of criteria are given: the proposed explicatum should (i) be sufficiently similar to the original concept to be recognisably an explication of it; (ii) be more exact or precise, with clear criteria for application; (iii) play a unified and useful role in the scientific economy (so that it is not merely gerrymandered and accidental); and (iv) be enmeshed in conceptual schemes simpler than any other putative explicatum that also meets criteria (i)–(iii). These are good constraints to keep in mind. However, this model is altogether too compressed, for it presumes that we have an independently good analysis of the scientifically precise concept (in effect, it suggests that scientific theories are not in need of conceptual clarification, and that the ‘clear conditions of application’ are sufficient for conceptual understanding). It also suggests that the explicatum should replace or eliminate the explicandum, and that satisfying these constraints is enough to show that the initial concept has no further importance. But clearly the relation between the scientific and pre-scientific concepts is not so one-sided: after all, it is the folk who accept scientific theories, and if a theory disagrees too much with their ordinary usage, it simply won’t get accepted. I take this kind of approach to philosophical analysis to be pragmatist in some broad sense: it emphasises the conceptual needs of the users of scientific theories in understanding the aims and content of those theories.
Eagle, A. (2004), Twenty-One Arguments Against Propensity Analyses of Probability,
http://ora.ouls.ox.ac.uk
Null hypothesis:
Fisher’s null hypothesis is “the hypothesis that the phenomenon to be demonstrated is in fact absent” [Fisher, 1949, p. 13]. Not that he hoped to “prove” this hypothesis. On the contrary, he typically hoped to “reject” it and thus “prove” that the phenomenon in question is in fact present.
Cohen, J. (1988), Statistical Power Analysis for the Behavioural Sciences, Academic Press
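What a significance level promises under a true null can be checked with a minimal simulation: if the phenomenon really is absent, a test at α = 0.05 rejects about 5% of the time. This is a sketch, not anything from Cohen or Fisher; the sample size, seed, and known-variance z-test are all assumed for illustration.

```python
import math
import random

def z_test_pvalue(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: mean == mu0, with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
trials = 5000
# Data generated with the null true: the "phenomenon" is absent (mean 0)
rejections = sum(
    z_test_pvalue([random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(trials)
)
print(rejections / trials)  # close to 0.05: the false positive rate, not the
                            # probability that any given finding is spurious
```

The 5% figure is a property of the procedure when the null is true; as the BMJ excerpt below the Cohen quote notes, it says nothing by itself about what fraction of "significant" findings are real.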
Data dredging, biases, and confounding
It would seem wiser to attempt a better diagnosis of the problem before prescribing Le Fanu’s solution. Data dredging is thought by some to be the major problem: epidemiologists have studies with a huge number of variables and can relate them to a large number of outcomes, with one in 20 of the associations examined being “statistically significant” and thus acceptable for publication in medical journals.w6 The misinterpretation of a P < 0.05 significance test as meaning that such findings will be spurious on only 1 in 20 occasions unfortunately continues. When a large number of associations can be looked at in a dataset where only a few real associations exist, a P value of 0.05 is compatible with the large majority of findings still being false positives.w7 These false positive findings are the true products of data dredging, resulting from simply looking at too many possible associations. One solution here is to be much more stringent with “significance” levels, moving to P < 0.001 or beyond, rather than P < 0.05.w7
BMJ, Data dredging, bias, or confounding
access through www.oxfordjournals.org
Social Epidemiology
Commentary: Education, education, education
Eric Brunner
Department of Epidemiology & Public Health, University College London, 1–19 Torrington Place, London WC1E 6BT, UK. E-mail: e.brunner@ucl.ac.uk
There is no doubt that, broadly within a given society, poorer education is linked with poorer health. The research question is why this linkage exists, and having gained an understanding of the mechanisms, to examine what is to be done about it at policy level. Kilander et al.'s new analysis1 of 25-year mortality of men born 1920–1924 in Uppsala, Sweden, provides further valuable evidence for the education-health association, and focuses on the role of lifestyle factors as mediators between level of education and elevated risks of coronary and cancer death. Compared with those who completed high school or university education, men who had ≤7 years of schooling were more . . .
access through:
www.oxfordjournals.org
Instruments for Causal Inference: An Epidemiologist's Dream?
Can you guarantee that the results from your observational study are unaffected by unmeasured confounding? The only answer an epidemiologist can provide is “no.” Regardless of how immaculate the study design and how perfect the measurements, the unverifiable assumption of no unmeasured confounding of the exposure effect is necessary for causal inference from observational data, whether confounding adjustment is based on matching, stratification, regression, inverse probability weighting, or g-estimation.
Now, imagine for a moment the existence of an alternative method that allows one to make causal inferences from observational studies even if the confounders remain unmeasured. That method would be an epidemiologist's dream. Instrumental variable (IV) estimators, as reviewed by Martens et al 1 and applied by Brookhart et al 2 in the previous issue of Epidemiology, were developed to fulfill such a dream.
Instrumental variables have been defined using 4 different representations of causal effects:
1. Linear structural equations models developed in econometrics and sociology 3,4 and used by Martens et al 1
2. Nonparametric structural equations models 4
3. Causal directed acyclic graphs 4–6
4. Counterfactual causal models 7–9
A double-blind randomized trial satisfies these conditions in the following ways. Condition (i) is met because trial participants are more likely to receive treatment if they were assigned to treatment, condition (ii) is ensured by effective double-blindness, and condition (iii) is ensured by the random assignment of Z. The intention-to-treat effect (the average causal effect of Z on Y) differs from the average treatment effect of X on Y when some individuals do not comply with the assigned treatment. The greater the rate of noncompliance (eg, the smaller the effect of Z on X on the risk-difference scale), the more the intention-to-treat effect and the average treatment effect will tend to differ. Because the average treatment effect reflects the effect of X under optimal conditions (full compliance) and does not depend on local conditions, it is often of intrinsic public health or scientific interest. Unfortunately, the average effect of X on Y may be affected by unmeasured confounding.
Instrumental variables methods promise that if you collect data on the instrument Z and are willing to make some additional assumptions (see below), then you can estimate the average effect of X on Y, regardless of whether you measured the covariates normally required to adjust for the confounding caused by U. IV estimators bypass the need to adjust for the confounders by estimating the average effect of X on Y in the study population from 2 effects of Z: the average effect of Z on Y and the average effect of Z on X. These 2 effects can be consistently estimated without adjustment because Z is randomly assigned. For example, consider this well-known IV estimator: the estimated effect of X on Y is equal to the ratio of the estimated average effect of Z on Y to the estimated average effect of Z on X.
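The ratio estimator described above can be sketched in a small simulation. Everything here is assumed for illustration (the true effect beta, the compliance probabilities, the strength of the unmeasured confounder U); the point is only that the ratio of the two Z effects recovers the effect of X on Y even though U is never used, while the naive comparison of treated and untreated is biased.

```python
import random

random.seed(42)
n = 200_000
beta = 2.0  # true effect of X on Y (assumed for the simulation)

data = []
for _ in range(n):
    u = random.gauss(0, 1)      # unmeasured confounder U
    z = random.randint(0, 1)    # randomized instrument Z (assignment)
    # Imperfect compliance: Z raises the chance of treatment X; U also affects X
    x = 1 if random.random() < 0.2 + 0.5 * z + 0.1 * (u > 0) else 0
    y = beta * x + 1.5 * u + random.gauss(0, 1)  # U confounds X and Y
    data.append((z, x, y))

def mean(vals):
    return sum(vals) / len(vals)

# Ratio (Wald-type) IV estimator: effect of Z on Y over effect of Z on X
y1 = mean([y for z, x, y in data if z == 1])
y0 = mean([y for z, x, y in data if z == 0])
x1 = mean([x for z, x, y in data if z == 1])
x0 = mean([x for z, x, y in data if z == 0])
iv_estimate = (y1 - y0) / (x1 - x0)
print(round(iv_estimate, 1))  # close to the true beta = 2.0

# Naive comparison of treated vs untreated, biased upward by U
naive = (mean([y for z, x, y in data if x == 1])
         - mean([y for z, x, y in data if x == 0]))
print(round(naive, 2))  # noticeably larger than beta
```

The numerator and denominator are each intention-to-treat-style contrasts in Z, which is randomized, so neither requires adjusting for U; only the constant-effect assumption built into the simulation lets the ratio be read as the average effect of X on Y.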
by Hernán, Miguel A.*; Robins, James M.*†
Issue: Volume 17(4), July 2006, pp 360-372
http://ovidsp.uk.ovid.com/spb/ovidweb.cgi
Labels: statistics