Clomiphene is estrogenic in bone, though. This is an example of why it is not a simple estrogen antagonist (blocker): in some tissues it activates the estrogen receptor, while in others it occupies the receptor without activating it, thus blocking the effect of estrogen.
Intermezzo, actually -- and sadly -- it gets worse with the statistics.
Not only do we have many cases where an actual effect may exist and a study that did nothing to demonstrate its absence nevertheless claims to have done so; we also have the opposite situation, where an effect is "significant" at p <= 0.05 but most likely IS a result of chance, yet is presented as if the opposite were the case.
Although it is fundamental to the use of these statistics, and therefore should be understood by the authors using them, most authors clearly appear not to understand the meaning of p values.
P values do not mean, for example, that there was only a 5% probability that the results seen WERE a result of chance, or a 95% probability that they WERE a real effect.
They mean that in cases where there is no actual effect, only random variation of the type seen, 5% of the time there would be an apparent effect at least as great as the one seen in the study.
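This definition can be checked directly by simulation: run many experiments in which there is NO real effect at all, and see how often p <= 0.05 comes up anyway. A minimal sketch, with illustrative sample sizes and a normal approximation to the two-sample t test (none of these numbers come from any real study):

```python
import math
import random
import statistics

def t_pvalue_approx(a, b):
    """Two-sample t statistic, with a normal approximation for the
    two-sided p value (adequate for samples this large)."""
    na, nb = len(a), len(b)
    se = math.sqrt(statistics.variance(a) / na + statistics.variance(b) / nb)
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(abs(t) / math.sqrt(2))

random.seed(0)
n_experiments = 2000
false_positives = 0
for _ in range(n_experiments):
    # Both groups drawn from the SAME distribution: no real effect exists.
    control = [random.gauss(100, 15) for _ in range(50)]
    treated = [random.gauss(100, 15) for _ in range(50)]
    if t_pvalue_approx(control, treated) <= 0.05:
        false_positives += 1

# Roughly 5% of the no-effect experiments still come out "significant".
print(false_positives / n_experiments)
```

That is all the p value promises: under pure chance, about 5% of experiments will look this good anyway.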
Which is deeply different from how p values are typically misinterpreted by authors.
For example, suppose we have a hypothesis that if we mumble an incantation over a rat's water bottle as we fill it, this will extend the life of the rat.
We test this on some moderate number of rats, with one group's water bottles receiving the mumbled incantations and the other, not.
The statistics are analyzed and lo and behold, at a p value of 0.05 the average lifespan of the incantation group was some percentage longer than that of the control group. The difference was "statistically significant."
And to understand this example, it's relevant also to understand that thousands of studies unlikely to have any real effect behind them are done all the time, and we will tend to hear only about the ones showing a positive outcome.
Furthermore, many studies look at 20 or so effects all at the same time. So even within a single study it is highly likely, rather than negligibly likely, that at least one observed effect is produced by chance when p values are on the order of 0.05.
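The arithmetic behind that claim is short. Taking the text's "20 or so" outcomes, each tested at the conventional 0.05 threshold and assumed independent for simplicity:

```python
# Chance of AT LEAST ONE false "significant" result among 20
# independent tests, each run at alpha = 0.05.
alpha = 0.05
n_tests = 20
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(round(p_at_least_one, 3))  # about 0.64
```

So with no real effects anywhere, a 20-outcome study still has nearly a two-in-three chance of reporting at least one "statistically significant" finding.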
Back to the rats and the mumbled incantations. Is it really only a 5% likelihood that only chance resulted in those rats having longer lives? A 95% likelihood that there was real life-extending effect from the incantations?
No. If, before the experiment, our best knowledge was that the hypothesis was highly unlikely to be true, then even with these test results it remains highly likely that chance was the only cause of those rats living longer.
It should be considered more likely that this was one of those cases where chance alone, 5% of the time, yields such an outcome, than that it is now a scientific truth that incantation over water bottles extends lifespan. The probably-chance interpretation should be our best guess of what happened, despite the results being "statistically significant."
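The prior-matters argument can be made quantitative with Bayes' rule. A sketch, where every number is an illustrative assumption (a very skeptical prior for the incantation hypothesis, and an assumed 80% chance of detecting the effect if it were real):

```python
# P(real effect | significant result) via Bayes' rule.
# All three inputs are assumed for illustration, not taken from any study.
prior = 0.001   # prior probability the incantation hypothesis is true
alpha = 0.05    # false-positive rate when there is no real effect
power = 0.80    # assumed detection probability if the effect IS real

p_significant = power * prior + alpha * (1 - prior)
p_real_given_sig = power * prior / p_significant
print(round(p_real_given_sig, 3))
```

With these inputs the posterior probability of a real effect is under 2%, even after a "significant" result: the chance-alone explanation remains overwhelmingly the best guess.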
There actually are, incidentally, statistics that deal with the likelihood that chance was the cause of the ACTUAL results seen. But the p value is not such a statistic: it does not refer to the likelihood of the results being chance.
It is the percentage of cases in which, with no real effect but the same variability, chance alone would result in an apparent effect at least as large as the one seen.