Crap Studies

I didn’t really know what section to put this in, so I just picked one.

This question is pointed specifically at the main writers on this site, but I’ll take anyone’s input. What are your criteria for distinguishing a good study from a bad one? I was researching for a kinesiology paper tonight, and I came across some really hokey studies. (One in particular looked at 8 AAS users, 6 of whom were f***ups to begin with; the authors came to the conclusion that AAS will ultimately lead to criminal activity/violence.)

I’d like to be able to tell when a study is bad/questionable, especially in the not-so-obvious cases.

EDIT: I was able to find lots of studies here at school, but where are some good sources for people who aren’t at a school? Most of the time on the internet, all you find is the abstract.

I think EC went into this a while back. Generally things to be wary of are:

*Small population studies (like the one you quoted) are shit, as small deviations make such a large impact (there’s a rough sketch of this below).

*Don’t trust anything that is based on “untrained” individuals. As one of the coaches on T-Nation said, you could have a newbie open pickle jars for 6 weeks and they would get stronger.

These are the two that spring to mind straight away. There are about 15 others.
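To put rough numbers on the small-population point, here’s a quick numpy sketch (entirely made-up values, not from any real study): both “groups” are drawn from the exact same population, so the true difference is zero, yet with 8 subjects per group the observed gap between the means swings all over the place.

```python
# Toy illustration with made-up values (not from any study): how noisy small samples are.
# Both groups come from the SAME population, so the true difference is zero,
# yet with n = 8 per group the observed gap between group means swings wildly.
import numpy as np

rng = np.random.default_rng(0)

def mean_gaps(n_per_group, trials=10_000):
    """Observed difference in group means when there is no real effect at all."""
    a = rng.normal(loc=100.0, scale=15.0, size=(trials, n_per_group))
    b = rng.normal(loc=100.0, scale=15.0, size=(trials, n_per_group))
    return a.mean(axis=1) - b.mean(axis=1)

for n in (8, 30, 300):
    gaps = mean_gaps(n)
    lo, hi = np.percentile(gaps, [2.5, 97.5])  # range covering 95% of the purely random "effects"
    print(f"n = {n:>3} per group: spurious difference ranges roughly {lo:+.1f} to {hi:+.1f}")
```

Same noise, same non-effect, but the tiny groups make it look like something happened.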

[quote]georgeb wrote:
I didn’t really know what section to put this in, so I just picked one.

This question is pointed specifically at the main writers on this site, but I’ll take anyone’s input. What are your criteria for distinguishing a good study from a bad one? I was researching for a kinesiology paper tonight, and I came across some really hokey studies. (One in particular looked at 8 AAS users, 6 of whom were f***ups to begin with; the authors came to the conclusion that AAS will ultimately lead to criminal activity/violence.)

I’d like to be able to tell when a study is bad/questionable, especially in the not-so-obvious cases.[/quote]

Some big ones:

-poor validity; does the study actually answer the question it sets out to answer? Do the conclusions follow from the results? The AAS one you mentioned is obvious, but it gets a little trickier when researchers make slight extrapolations that don’t quite follow because of some physiological factor they didn’t consider, for example.

-poor repeatability; rarely trust a study that can’t be duplicated. Results should be reasonably consistent if the follow-up researcher isn’t a jackass.

-confounding variables. These are huge. Essentially, you have to ask what the researchers didn’t account for and what sort of effect those factors could have. Off the top of my head I can’t think of an example that wouldn’t take 10 minutes to type, as they can get pretty complicated, but there’s a rough made-up sketch at the end of this post.

-journal of publication. While it’s not that hard to get published, the more respectable journals will at least make sure the study isn’t complete crap.

-how measurements are taken. There can be a huge margin of error in many tests, especially when they’re improperly administered. Skinfold caliper measurements are a classic example, but there has also been a lot of criticism lately of using nitrogen retention to measure protein synthesis, and that’s the main way we measure it. Turns out it could have been very wrong all along. I’m not in that field, so I don’t know the particulars, but maybe someone else could shed some light there.

-population used. This is a double-edged sword. Some population results can be extrapolated to other populations; others cannot. This is where knowledge of confounding variables comes into play.

Those are just some off the top of my head at 1:30 this morning. I’ll be interested to see what others have to say as well.
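Since the confounding-variable point is the hard one to picture, here’s a made-up toy example (purely hypothetical numbers and variable names, not from any real study): training age drives both who takes a supplement and how fast people gain, so the supplement looks like it blunts progress even though it does nothing at all.

```python
# Made-up toy data (not from any real study): a confounder at work.
# Training age drives BOTH who takes the supplement AND how fast people gain
# (newbies gain faster), so the supplement looks harmful even though it does nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

training_age = rng.uniform(0, 10, n)                           # years of lifting
takes_supp = (training_age + rng.normal(0, 2, n)) > 5          # veterans more likely to use it
strength_gain = 20 - 2.0 * training_age + rng.normal(0, 3, n)  # depends only on training age

# Naive comparison: the supplement appears to hurt gains
print(f"users:     {strength_gain[takes_supp].mean():.1f}")
print(f"non-users: {strength_gain[~takes_supp].mean():.1f}")

# Account for the confounder by comparing within narrow training-age bands
for lo in range(0, 10, 2):
    band = (training_age >= lo) & (training_age < lo + 2)
    diff = (strength_gain[band & takes_supp].mean()
            - strength_gain[band & ~takes_supp].mean())
    print(f"ages {lo}-{lo + 2}: user minus non-user gain = {diff:+.1f}")
```

Compare like with like and most of the “effect” disappears; the naive comparison was mostly just training age in disguise.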

-Dan

This is a very good question, and there were some really great answers given.

Here’s my contribution:

  • Statistically significant is not always the same as practically significant. It could be that the results of two training protocols differ by 1-2%, and the researchers deem that difference too small to say one protocol is better than the other. In practice, however, 1% could mean a lot, because it’s 1% more than your opponent. (There’s a rough sketch of this after the list.)

  • Applicability. For instance, let’s say we found out that increasing training volume by 50% results in only a 5% performance improvement. For a weekend warrior with a full-time job and a few kids, that is obviously not productive. But for a professional athlete, it’s something worth considering, because 5% is a lot in the big leagues.

  • Length. Studies are usually quite short, like 10-12 weeks. The real question in weight training is how to make continuous progress. Just because something worked for the first 10 weeks doesn’t mean it’ll work for another 10 weeks.
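To tie the first point to some numbers, here’s a rough scipy sketch (the gains, spread, and group sizes are all hypothetical values I picked): protocol B really is about 1% better than A, but with typical study sizes a t-test almost never calls the difference “significant.”

```python
# Rough sketch with made-up numbers: protocol B really is ~1% better than A,
# but small studies almost never flag that as "statistically significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def detection_rate(n_per_group, gain_a=10.0, gain_b=10.1, sd=1.5, trials=2_000):
    """Fraction of simulated studies where a two-sample t-test reaches p < 0.05."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(gain_a, sd, n_per_group)  # e.g. kg gained on protocol A
        b = rng.normal(gain_b, sd, n_per_group)  # protocol B: a real ~1% advantage
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / trials

for n in (10, 30, 5_000):
    print(f"n = {n:>4} per group: 'significant' in {detection_rate(n):.0%} of simulated studies")
```

“No significant difference” in a small study doesn’t mean “no difference that matters on the platform.”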

if it works for rodents, it works for athletic humans.

[quote]wufwugy wrote:
if it works for rodents, it works for athletic humans.[/quote]

Ok, Lyle.

:wink:

Some great answers here.

But damnit BuffaloKilla that is some kind of kick ass post.

http://bmj.bmjjournals.com/collections/read.shtml

[quote]Mule359 wrote:
Some great answers here.

But damnit BuffaloKilla that is some kind of kick ass post.[/quote]

Haha, thanks - I’m ready to start a study myself in the spring, so it’s been on my mind for the past few months :slight_smile:

-Dan

What’s the study going to be on?

[quote]georgeb wrote:
What’s the study going to be on?[/quote]

The effects of using a free-weight, compound exercise circuit training program on cardiovascular disease risk factor indicators. To be honest, I wanted to look at a style of training more like strongman or Westside, but after looking at the literature, there just isn’t enough data to make that kind of leap without a million confounding variables clouding any improvements in CV risk factors.

-Dan

Here is some stuff you should know about statistics. You need a sample size of at least 30 for the usual statistical tests to even start to apply. You need a sample size of over 900 to put a tight 90% confidence limit on the standard deviation of the sample (I used the chi-squared distribution to get this). Most studies are small and are only useful for saying, “let’s get a bigger sample.”
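If anyone wants to poke at that chi-squared point, here’s a quick scipy sketch (my own naming and sample sizes, just to illustrate the idea): it shows how wide a 90% confidence interval on the true standard deviation is at different sample sizes, expressed as a multiple of the sample SD.

```python
# Quick illustration of the sample-size point above (my own sketch):
# 90% confidence interval for the population standard deviation, based on the
# chi-squared distribution, expressed as multiples of the sample SD s.
import numpy as np
from scipy import stats

def sd_ci_factors(n, confidence=0.90):
    """Multiply the sample SD by these factors to bracket the true SD."""
    df = n - 1
    alpha = 1 - confidence
    lower = np.sqrt(df / stats.chi2.ppf(1 - alpha / 2, df))
    upper = np.sqrt(df / stats.chi2.ppf(alpha / 2, df))
    return lower, upper

for n in (8, 30, 100, 900):
    lo, hi = sd_ci_factors(n)
    print(f"n = {n:>3}: true SD is somewhere between {lo:.2f}*s and {hi:.2f}*s (90% confidence)")
```

The interval is still pretty sloppy at n = 30 and only gets tight once the sample runs into the hundreds.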

Look at who funded the study. I know a person who did the lab tests at a medical research facility. She said that numerous times the boss had already written up the paper’s results before the testing was even finished. Question all results that favor the funding organization. Maybe the paper was just fishing for more funding…

Be open-minded and read lots of papers on the subject. Read reviews too. Look up people who have a different view and read their stuff. Look up the references (some people use references incorrectly). After a while you will be able to spot a worthless study. You will even be able to spot wrong ideas promoted by big organizations such as the AMA.