T Nation

Methoxy and Ecdysterone = Crap?


being an active reader of as many scientific articles regarding supplementation as i can, i came across the following study, "Effects of Methoxyisoflavone, Ecdysterone, and Sulfo-Polysaccharide Supplementation on Training Adaptations in Resistance-Trained Males", which presents the following conclusion, along with the method used:

"Forty-five resistance-trained males (20.5 ± 3 yrs; 179 ± 7 cm, 84 ± 16 kg, 17.3 ± 9% body fat) were matched according to FFM and randomly assigned to ingest in a double blind manner supplements containing either a placebo (P); 800 mg/day of M; 200 mg of E; or, 1,000 mg/day of CSP3 for 8-weeks during training. At 0, 4, and 8-weeks, subjects donated fasting blood samples and completed comprehensive muscular strength, muscular endurance, anaerobic capacity, and body composition analysis. Data were analyzed by repeated measures ANOVA.
No significant differences (p > 0.05) were observed in training adaptations among groups in the variables FFM, percent body fat, bench press 1 RM, leg press 1 RM or sprint peak power. Anabolic/catabolic analysis revealed no significant differences among groups in active testosterone (AT), free testosterone (FT), cortisol, the AT to cortisol ratio, urea nitrogen, creatinine, the blood urea nitrogen to creatinine ratio. In addition, no significant differences were seen from pre to post supplementation and/or training in AT, FT, or cortisol."

available at www.ncbi.nlm.nih.gov/pmc/articles/pmc2129166

as the authors note in the body of the study, there are other studies showing different results, but, i quote, "the previous studies reporting positive effects of ecdysterones have been reported in obscure journals with limited details available to evaluate the experimental design and quality of the research".

in the BB forums i found an old but interesting discussion, but since it is from 2007, i was hoping a new study, done on humans (and not quails heh), would give us an answer...

anyone got additional info on this?


I can really only comment on this study at the moment.

With regards to 5-methyl-7-methoxyisoflavone, our studies found that the blood levels that result from taking this compound, particularly from a powdered formulation, are very low. I'm not surprised if there is no substantial effect, if indeed any effect at all, from this supplement at the dosage studied.

In general the above study has the serious problem that its sensitivity -- ability to discern fairly small effects as statistically significant -- is, as is an inherent problem with these types of studies, rather poor.

Unfortunately it is easy to read these studies and see "there was no statistically significant effect" and conclude that this means that in fact the compound tested has no effect, where the actual meaning may be, "Our study was incapable of detecting any effect less than X: we found that if the compound has an effect, the amount is probably less than X."

But when X is some large and useful-if-true amount, that makes the study sound like garbage, and so few write their studies up this way. One has to do some calculation, from the reported standard deviations or standard errors, to figure out what X might be. I did not do this here, but eyeballing it, X would have had to be somewhat large, and if there were an actual effect of useful but not extreme magnitude, the study could readily have failed to find it due to the small number of subjects and the high variability.
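To make that concrete, here is a rough sketch of the calculation. The SD of ~6 lb for 8-week change in lean mass and the group size of 11 are hypothetical stand-ins for illustration, not figures taken from the paper:

```python
import math

def min_detectable_effect(sd, n_per_group, z_alpha=1.96, z_beta=0.84):
    """Approximate smallest true difference between two group means that a
    study would reliably detect (two-sided p < 0.05, 80% power), using the
    normal approximation to a two-sample comparison."""
    return (z_alpha + z_beta) * sd * math.sqrt(2.0 / n_per_group)

# Hypothetical numbers: SD of 8-week LBM change ~ 6 lb, 11 subjects per group
x = min_detectable_effect(sd=6.0, n_per_group=11)
print(round(x, 1))  # about 7.2 lb -- far larger than a useful 2-3 lb effect
```

Anything smaller than that X simply slides under the study's radar, which is exactly the situation described above.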

Also, the "useful markers of catabolism" are not so useful. Modest dose anabolic steroids could have failed this study too. My meaning is NOT that any of these supplements are like modest dose anabolic steroids, but rather that even if they had been that good, their "useful markers of catabolism" test could have been failed.

As for considering testosterone the be-all and end-all of whether anabolism is occurring, that is kind of ridiculous.

Basically, this is a "We didn't find anything, but our error bars are so large that nothing except miraculously fantastic results would have been picked up by us" study.


heh... interesting point...
considering the great value these supplements have on today's market, it is pretty incomprehensible that no other human studies have been done which would either support or refute the mentioned study... or if there have been, i couldn't find 'em


There is no profit motive in doing studies on non-proprietary products.

As a result, sometimes a study may be done out of academic interest out of some minor grant money that is lying around for whatever reason, but due to financial limitations, convenience limitations, and what might be called habits of a given field, such studies usually are inherently incapable of resolving small effects.

In this case what I mean by habits of a given field is that in exercise science, whenever taking either an untrained population or a bunch of guys who train half-assedly, and putting them on a program for (for example) 8 weeks, there will be a great deal of variation in results among the placebo group.

Some guys will add 10 lb of muscle, some will lose 5, etc. Not from the placebo actually having any such effects, but from random variation.

It then is assumed that random variation is as severe among the treatment groups.

So when having for example 10 or 11 subjects per treatment group as was the case in this study, even if the average of the data were say a 3 lb increase in LBM and even if this is an actual effect (caused by the treatment) it might well be found "insignificant" and reported as being no increase.

Alternately, a compound could have a real effect of say the 3 lb in 8 weeks, which would be excellent, yet the observed average could come out as zero, by chance alone.
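A quick simulation illustrates this. The 6 lb SD of 8-week LBM change is a hypothetical figure chosen for illustration, and |t| > 2 is used as a rough stand-in for the p < 0.05 cutoff:

```python
import math
import random
import statistics

random.seed(0)

def t_stat(a, b):
    """Welch-style two-sample t statistic."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

n, sd, true_effect, trials = 11, 6.0, 3.0, 2000
hits = 0
for _ in range(trials):
    placebo = [random.gauss(0.0, sd) for _ in range(n)]        # no true change
    treated = [random.gauss(true_effect, sd) for _ in range(n)]  # real 3 lb effect
    if abs(t_stat(treated, placebo)) > 2.0:  # rough p < 0.05 cutoff
        hits += 1
print(hits / trials)  # well under 50% -- the real effect is usually missed
```

So under these assumptions a genuine 3 lb effect gets declared "not significant" most of the time: the study design, not the compound, determines the headline result.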

Worse, the above doesn't result in a situation where if such a study does report a "statistically significant" effect, then wow, it must really be something to have overcome the above difficult situation.

Quite the contrary: more than 5% of the time, treatments of zero efficacy will be found to have "statistically significant" efficacy to p < 0.05. The reason for this is a little complex, but the simplest explanation of it is that there is publication bias towards positive results. So if let's say 100 studies are done of treatments which in fact have no effect, and random variation alone would produce an apparently positive value very nearly 5% of the time, then on average about 5 of these studies will be published and reported as the treatments having a "statistically significant, p < 0.05" real effect.

Probably most -- or if proprietary, all -- of the 95 other studies won't be published at all.
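The false-positive machinery is easy to demonstrate: simulate many studies of a treatment with zero true effect and roughly 5% come out "significant" by chance alone; if mainly those get published, the literature looks positive. (Group size and SD here are hypothetical, and |t| > 2 approximates the p < 0.05 cutoff.)

```python
import math
import random
import statistics

random.seed(1)

def significant(a, b):
    """Rough two-sample test: |t| > 2 as an approximate p < 0.05 cutoff."""
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return abs(t) > 2.0

n, sd, studies = 11, 6.0, 1000
# Both groups drawn from the same distribution: the treatment does nothing
false_positives = sum(
    significant([random.gauss(0.0, sd) for _ in range(n)],
                [random.gauss(0.0, sd) for _ in range(n)])
    for _ in range(studies))
print(false_positives / studies)  # close to 0.05, by construction
```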

So apparently-positive results in studies of the sort that this one was must be looked at very carefully as well.

Not a lot can be concluded from studies of this type. Actually I think they are of less value than simply trying something oneself, provided that one -- if acquiring an initial opinion that something seemed to work for him -- also tries discontinuing it, and then restarting it again after a time.

If not doing this but just sticking with something that really seemed to make a difference, sometimes it will be coincidence that what was a good period of time for the body or for training just happened to fall at the same time as introducing the supplement. So one does have to take a little care with personal experimentation as well.

How could these studies be done better? For example, if in the above study they had tried studying only one compound, they could have assigned 22-23 subjects to each group, treatment or placebo. This would have helped although the number still would have been quite marginal for detecting small effects.

Secondly and more importantly, they'd have needed a more stable set of subjects. Athletes who had been training the best they know how for quite some time and who had reached essentially a steady state are far superior subjects, because where there is no real effect, there won't be a normal outcome of this guy adding 10 lb of muscle in 8 weeks and that one losing a few lb. Variation will be much smaller.
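The same back-of-envelope power arithmetic shows why the second point matters more. With hypothetical SDs of ~6 lb of random 8-week LBM variation for inconsistent trainees versus ~2 lb for steady-state trainers, reducing subject variability buys far more sensitivity than doubling the group size:

```python
import math

def min_detectable_effect(sd, n_per_group):
    # Normal approximation: two-sided alpha = 0.05, 80% power
    return 2.8 * sd * math.sqrt(2.0 / n_per_group)

mde_more_subjects = min_detectable_effect(sd=6.0, n_per_group=22)  # double n
mde_stable = min_detectable_effect(sd=2.0, n_per_group=11)         # stable subjects
print(round(mde_more_subjects, 1))  # about 5.1 lb
print(round(mde_stable, 1))         # about 2.4 lb
```

Under these assumptions, eleven steady-state trainers can resolve an effect that twenty-two inconsistent trainees cannot.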

I'd rather informally (not for publication) have basic measurements on 5-10 guys who are at a steady-state in their training, being very consistent and hard trainers, and see what happens with them, than have all kinds of measurements on 20 guys who have been totally inconsistent and half-assed in their training and therefore can see 10 lb of muscle in 8 weeks simply from straightening up their act rather than their being a real effect.

It would be great if the university studies could combine the best of the above approaches and have say thirty (as a minimum, not optimum) subjects per group who were these consistent advanced trainers at a steady-state in their training. But this simply isn't practical on an ongoing basis, if ever, in the university environment, and would be incredibly difficult anywhere. Even a sports team, say a football team, which at first glance might seem such an environment, wouldn't work: aside from the fact that the coach might well not want to do it, the athletes are ordinarily not in a steady-state because of the seasonal nature of most sports.


i appreciate your thoughts on the subject, bill...
i am partially convinced by the study, but your insight is valuable, so i will try the personal experimentation in january. let's see if, with a good training/dieting log, i can reach some personal conclusions :wink:


Sounds good! :slight_smile:


If it ain't protein or AAS, it's crap.



I guess I should throw out that dere fish oil then.


I did but that's a whole different thread and flame war. :wink: