My question is about the interpretation of a 95% CI for a risk difference
and the p-value from a chi2 test. Sometimes they don't agree, because the
two methods are based on different assumptions. But how should the
interpretation go if, say, the 95% CI includes 0 and the p-value from the
chi2 test is significant?
An example of this is discussed in the statalist post:
https://www.statalist.org/forums/forum/general-stata-discussion/general/1591371-p-value-and-95-ci-don-t-match
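
To make the scenario concrete, here is a minimal Python sketch (scipy for the
test, the Wald interval computed by hand). The 2x2 counts are hypothetical,
chosen only so that the Wald 95% CI for the risk difference just includes 0
while the uncorrected Pearson chi2 p-value falls below 0.05; they are not
taken from the linked thread.

import numpy as np
from scipy.stats import chi2_contingency, norm

# Hypothetical 2x2 table (rows: exposed / unexposed; columns: event / no event).
table = np.array([[10,  10],
                  [55, 145]])

# Pearson chi-square test, no continuity correction.
chi2, p, dof, expected = chi2_contingency(table, correction=False)

# Wald 95% CI for the risk difference p1 - p2 (unpooled standard error).
n1, n2 = table.sum(axis=1)
p1, p2 = table[0, 0] / n1, table[1, 0] / n2
rd = p1 - p2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = norm.ppf(0.975)

print(f"Pearson chi2 = {chi2:.3f}, p = {p:.4f}")                           # about 0.035
print(f"RD = {rd:.3f}, Wald 95% CI = ({rd - z*se:.4f}, {rd + z*se:.4f})")  # about (-0.003, 0.453)

The two answers straddle the 5% threshold here because the chi2 test uses a
variance pooled under the null, while the Wald interval uses an unpooled one;
near the threshold that difference is enough to flip the verdict.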
s g <sarega.g@gmail.com> wrote:
> My question is about the interpretation of a 95% CI for a risk difference
> and the p-value from a chi2 test. Sometimes they don't agree, because the
> two methods are based on different assumptions. But how should the
> interpretation go if, say, the 95% CI includes 0 and the p-value from the
> chi2 test is significant?
>
> An example of this is discussed in the statalist post:
> https://www.statalist.org/forums/forum/general-stata-discussion/general/1591371-p-value-and-95-ci-don-t-match
As you say, there are "different assumptions". The point of calculating
confidence intervals is to avoid being hung up on arbitrary thresholds.
One can produce confidence intervals by inverting the chi-square test;
this would remove your dilemma ;). Another point is that there are multiple
"chi-squares" (e.g. Pearson v. Gibbs for these kinds of data; Wald, score
and likelihood ratio tests more generally), which can also disagree.
They are all only asymptotically equivalent. This is all aside from
the widely shared distrust of P-values...
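
Both points can be sketched in Python with the same hypothetical counts as
above. This assumes a reasonably recent statsmodels, whose
confint_proportions_2indep with method="score" returns a test-inversion
(score) interval for the risk difference as a (low, upp) pair; the lambda_
option of scipy's chi2_contingency is used to contrast the Pearson and
likelihood-ratio flavours of the chi-square statistic.

import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import confint_proportions_2indep

# Same hypothetical counts as in the earlier example.
table = np.array([[10,  10],
                  [55, 145]])

# Invert a score (chi-square-type) test to get a CI for the risk difference;
# by construction such an interval agrees with the corresponding test at the
# same alpha, so the CI-versus-p-value dilemma disappears.
low, upp = confint_proportions_2indep(table[0, 0], table[0].sum(),
                                      table[1, 0], table[1].sum(),
                                      compare="diff", method="score")
print(f"Score (test-inversion) 95% CI for the risk difference: ({low:.4f}, {upp:.4f})")

# Different "chi-squares" (here Pearson versus the likelihood-ratio G statistic)
# are only asymptotically equivalent and can give different p-values.
_, p_pearson, _, _ = chi2_contingency(table, correction=False)
_, p_lr, _, _ = chi2_contingency(table, correction=False, lambda_="log-likelihood")
print(f"Pearson chi2 p-value:          {p_pearson:.4f}")
print(f"Likelihood-ratio chi2 p-value: {p_lr:.4f}")

Whether the score interval excludes 0 tracks the verdict of the corresponding
chi-square test at the same level, which is the sense in which test inversion
removes the dilemma.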