Abstract

Effect measures in epidemiological studies can be absolute, such as a difference in incidence rate or risk, or relative, such as a risk, rate, or odds ratio. Philosophers have argued for preferentially calculating and reporting absolute measures, in particular the risk difference, in virtue of their ease of interpretation and their superior relevance to policy and clinical practice (e.g. Broadbent, 2013; Sprenger & Stegenga, 2017; Stegenga, 2015; Worrall, 2010). In practice, though, the choice of measure depends on the choice of study design, which in turn depends on the nature of the (disease) phenomenon of interest. The arguments favoring absolute measures apply only to cohort studies, which may not always, or even usually, be feasible. I reassess these arguments in the context of the problem of choosing a study design. I then make some observations about how the choices of study design and effect measure shape our understanding of disease causation, in particular of causal interactions. An interaction is commonly defined as a deviation from additivity of causal effects. But since additivity is scale-relative, and absolute and relative effects are modeled on different scales, this definition is ambiguous. I assess the consequences of this ambiguity for the interpretability of various effect measures.
