
Effect Size (効果量, kōkaryō)

A measure of the magnitude of a statistically significant difference. It captures the practical size of an effect, which a p-value alone does not convey.


Definition and Purpose

Effect size is a quantitative measure of the magnitude of a difference or relationship. While p-values indicate whether an effect exists, effect size tells you how large that effect actually is. Common measures include Cohen's d (standardized mean difference), Pearson's r (correlation strength), and odds ratios. A statistically significant result with a tiny effect size may have no practical importance.
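As a concrete illustration, Cohen's d divides the difference between two group means by their pooled standard deviation. A minimal stdlib-only sketch (the data and the `cohens_d` helper name are illustrative, not from any particular library):

```python
import math

def cohens_d(sample1, sample2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # Unbiased sample variances
    var1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Illustrative made-up scores for a treatment and a control group
treatment = [5.1, 5.8, 6.2, 5.5, 6.0]
control = [4.8, 5.0, 5.3, 4.9, 5.2]
print(round(cohens_d(treatment, control), 2))
```

Because d is expressed in standard-deviation units, it can be compared across studies that measure the same construct on different scales.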

Why P-values Alone Are Insufficient

With a large enough sample, even trivially small differences become statistically significant. A study with 100,000 participants might find a "significant" difference of 0.1 kg in weight between two groups, a difference that is real in a statistical sense but meaningless in practice.

Effect size provides the missing context. Cohen's conventions classify d = 0.2 as small, 0.5 as medium, and 0.8 as large, though what counts as meaningful depends on the domain.
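The decoupling of p-value and effect size can be sketched numerically. Assuming (as an illustration) a within-group SD of 15 kg and a fixed 0.1 kg mean difference, a simple two-sample z-test via the normal approximation shows p shrinking toward zero as n grows while d never moves:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic (normal approximation)."""
    return math.erfc(abs(z) / math.sqrt(2))

def p_for_mean_diff(mean_diff, sd, n):
    """p-value for a two-sample z-test with per-group sample size n."""
    se = sd * math.sqrt(2 / n)  # standard error of the difference in means
    return two_sided_p(mean_diff / se)

mean_diff, sd = 0.1, 15.0  # kg; the 15 kg SD is an illustrative assumption
d = mean_diff / sd         # effect size: fixed at ~0.007 regardless of n

for n in (1_000, 100_000, 1_000_000):
    print(f"n={n:>9,}  d={d:.4f}  p={p_for_mean_diff(mean_diff, sd, n):.2g}")
```

The effect size d ≈ 0.007 stays far below even the "small" threshold of 0.2 at every sample size, yet the p-value eventually crosses any significance threshold you like.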

Application to Ranking Differences

When a ranking shows you differ from the average, effect size thinking helps assess whether that difference matters. Being 2 percentile points above average on a health metric is statistically distinguishable from average but practically identical. Effect size helps you focus energy on gaps that are large enough to represent genuine differences in outcomes.
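One way to see this, assuming the metric is roughly normally distributed, is to convert percentile ranks back to z-scores and take the difference, which puts the gap on the same standardized scale as Cohen's d. A sketch using Python's stdlib `statistics.NormalDist` (the `percentile_gap_as_d` helper is hypothetical):

```python
from statistics import NormalDist

def percentile_gap_as_d(p1, p2):
    """Difference between two percentile ranks expressed in z-score
    (standard-deviation) units, assuming a normal distribution."""
    nd = NormalDist()
    return nd.inv_cdf(p1) - nd.inv_cdf(p2)

# 52nd percentile vs. the 50th (average): a tiny standardized gap
print(round(percentile_gap_as_d(0.52, 0.50), 3))
# 80th percentile vs. the 50th: a large standardized gap
print(round(percentile_gap_as_d(0.80, 0.50), 3))
```

A 2-percentile-point gap near the median works out to roughly 0.05 standard deviations, well below the "small" threshold, whereas being at the 80th percentile corresponds to roughly 0.84 SD, a large gap worth attention.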

Interpreting Research Claims

Headlines often report statistically significant findings without mentioning effect size. When you encounter claims like "X improves Y significantly," always ask how much. A supplement that "significantly" raises a biomarker by 1% has a negligible effect size despite the impressive-sounding language. Demanding effect sizes alongside p-values is a hallmark of sophisticated data interpretation.
