Discussion about this post

BCD · May 4 (edited)

OmegaQuant sale 10% off with code MAY25

Ben Hoffman

I think this problem is implicit in the philosophical incoherence of conventional medical experiments. It's treated as more "scientific" to test the effect of indiscriminately applying the same treatment to a randomly sampled subset of some group on an outcome variable with complex causes, than to investigate mechanisms in any detail and draw rational conclusions.

One might imagine that in the field of MMA studies, someone puts together an RCT to test whether kicking wins fights, analyzed on an intention-to-treat basis. They assemble a group of amateur fighters, randomly assign them to a treatment or control group, and encourage the treatment group to use a lot of kicks. Then they compare the increase in kicking to any change in measured fight outcomes, and conclude that there's no evidence that kicking increases your chance of winning a fight.

Of course the ground truth is that specific kinds of kicks, thrown in specific circumstances at certain levels of skill, can win a fight, while most possible moves in a fight hurt your chances. This is the sense in which vitamins C and D "don't work" and vitamin E "is harmful."
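To make the point concrete, here is a minimal simulation sketch of the hypothetical kicking RCT. The numbers (a true effect of +0.15 for skilled kickers and -0.15 for unskilled ones) are my own illustrative assumptions, not anything from the comment: when the treatment helps one subgroup and hurts the other, the intention-to-treat comparison averages them out to roughly no effect.

```python
import random

random.seed(0)

# Assumed ground truth for illustration: kicking raises win probability
# for fighters skilled enough to land kicks, and lowers it for the rest
# (a missed kick leaves you exposed).
def win_prob(skilled: bool, kicks: bool) -> float:
    if not kicks:
        return 0.5
    return 0.65 if skilled else 0.35

trials = {"treatment": [0, 0], "control": [0, 0]}  # [wins, fights]
for _ in range(20_000):
    skilled = random.random() < 0.5                 # half the amateurs can kick
    group = "treatment" if random.random() < 0.5 else "control"
    kicks = (group == "treatment")                  # ITT: analyze by assignment
    won = random.random() < win_prob(skilled, kicks)
    trials[group][0] += won
    trials[group][1] += 1

for group, (w, n) in trials.items():
    print(f"{group}: win rate {w / n:.3f} over {n} fights")
```

Both arms come out near a 0.5 win rate, so the trial "finds" nothing, even though the simulated world contains real, opposite-signed effects in each subgroup.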

The underlying problem is the insistence on treating any detailed information about the experimental subject as bias rather than information. Even subgroup analysis requires subgroups large enough, and effect sizes large enough, to drown out other relevant information, since that information shows up as noise, and otherwise "noise" > "signal".
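The "subgroups must be large enough" point can also be sketched numerically. Reusing the same illustrative win probabilities as above (again, my assumption: skilled kickers win 65% of the time with kicks, 50% without), the estimated kicking effect within the skilled subgroup is swamped by sampling noise at small n and only emerges at large n:

```python
import random
import statistics

random.seed(1)

# Same assumed ground truth as before, restricted to skilled fighters.
def win_prob(kicks: bool) -> float:
    return 0.65 if kicks else 0.5

def subgroup_effect_estimate(n_per_arm: int) -> float:
    """Estimate the kicking effect among skilled fighters only."""
    wins = {True: 0, False: 0}
    for kicks in (True, False):
        for _ in range(n_per_arm):
            wins[kicks] += random.random() < win_prob(kicks)
    return wins[True] / n_per_arm - wins[False] / n_per_arm

# True effect is +0.15; compare estimate spread at small vs large subgroups.
for n in (25, 2500):
    estimates = [subgroup_effect_estimate(n) for _ in range(200)]
    print(f"n={n}: mean {statistics.mean(estimates):+.3f}, "
          f"sd {statistics.stdev(estimates):.3f}")
```

With 25 fighters per arm, the standard deviation of the estimate is comparable to the true effect itself, so the subgroup signal drowns in noise; with 2500 per arm it is an order of magnitude smaller.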

The truth is that "do X after Y, while being this particular person" is what's being tested, not "do X to a nonspecific person with an unspecified context."

