Some big ones:
-poor validity; does the study actually answer the question it sets out to answer? Do the conclusions follow from the results? The AAS one you mentioned is obvious, but it gets trickier when researchers make slight extrapolations that don't quite follow because of some physiological factor they didn't consider, for example.
-poor repeatability; be wary of a study whose results can't be duplicated. Results should be reasonably consistent as long as the follow-up researcher isn't a jackass.
-confounding variables. These are huge. Essentially, you have to ask what the researchers didn't account for and what sort of effect these factors can have. I can't think of an example that wouldn't take 10 minutes to type off the top of my head, as they can get pretty complicated.
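Since a real example takes a while to type out, here's a quick toy simulation instead. Everything in it is made up (the "supplement," the effect sizes, the numbers): it just shows how a lurking variable, here training experience, can make a supplement with zero real effect look like it changes strength gains, and how stratifying on the confounder makes the illusion disappear.

```python
import random

random.seed(0)

# Hypothetical setup: experienced lifters are more likely to take the
# supplement, and (being past their newbie gains) they also gain less.
# The supplement itself has NO effect on gains in this simulation.
naive_supp, naive_none = [], []
matched_supp, matched_none = [], []

for _ in range(10000):
    experienced = random.random() < 0.5                      # the confounder
    takes_supp = random.random() < (0.8 if experienced else 0.2)
    gain = (5 if experienced else 10) + random.gauss(0, 1)   # no supplement term
    (naive_supp if takes_supp else naive_none).append(gain)
    if not experienced:                                      # stratify: novices only
        (matched_supp if takes_supp else matched_none).append(gain)

avg = lambda xs: sum(xs) / len(xs)
# Naive comparison: supplement users "gain ~3 units less" -- pure confounding.
print(round(avg(naive_supp) - avg(naive_none), 2))
# Within one stratum the apparent effect vanishes (~0).
print(round(avg(matched_supp) - avg(matched_none), 2))
```

Real confounders are rarely this clean, which is exactly why they're so dangerous: the naive comparison above would pass a significance test with flying colors.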
-journal of publication. While it's not that hard to get published, the more respectable journals will at least make sure the study isn't complete crap.
-how measurements are taken. There can be a huge margin of error in many tests, especially when improperly administered. Skinfold caliper measures are a classic example, but there's also been a lot of criticism lately of nitrogen retention as a way to measure protein synthesis, and that's the main way we measure it. Turns out it could have been very wrong all along. I'm not in that field, so I don't know the particulars, but maybe someone else could shed some light there.
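To see how fast small reading errors stack up, here's a back-of-envelope sketch. The skinfold numbers and the linear "body-fat formula" are stand-ins I made up for illustration, not a real prediction equation; the point is only that a plausible per-site caliper error compounds across sites into a multi-point swing in the final estimate.

```python
# Hypothetical 3-site skinfold measurements (mm) and a plausible
# per-site reading error for an inexperienced tester.
sites_mm = [10.0, 15.0, 20.0]
error_mm = 2.0

def bodyfat(sum_mm):
    # Made-up linear stand-in, NOT a real body-fat equation.
    return 0.3 * sum_mm + 5.0

best = bodyfat(sum(sites_mm))
low = bodyfat(sum(s - error_mm for s in sites_mm))
high = bodyfat(sum(s + error_mm for s in sites_mm))
print(best, low, high)  # a ~3.6-point spread from a 2 mm error per site
```

Now imagine two testers with different pinch technique measuring the before and after groups, and a study can "find" a body-composition change that's pure measurement noise.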
-population used. This is a double-edged sword. Some populations' results can be extrapolated to other populations; others cannot. This is where knowledge of confounding variables comes into play.
Those are just some off the top of my head at 1:30 this morning. I'll be interested to see what others have to say as well.