Steady State Cardio as Effective as HIIT

[quote]Bill Roberts wrote:

On potentially cleaning up the data: This winds up being problematic (at best) unless one establishes criteria beforehand. It is fine to have established beforehand that if subjects do this or that, then their data will be excluded and so forth.

But collecting the data and then after having it, making decisions on what to include and what to exclude can readily result in bias, even unconscious bias, creating results out of nothing.

Sort of like running an election where there are problems in different counties or precincts, and according to whether the grand total comes out to the guy you like being the winner or not, you decide whether another re-re-recount is needed using different rules limited to specific counties of your choice where you think the new rules or methods would do better for your guy, or whether it’s all done and we go with the total we now have. That would not be science. You really have to do it according to methods already established, rather than established in response to the data to try to turn non-significance into significance or any other change in outcome.[/quote]

Excluding data from subjects who clearly didn’t do the workouts or quit partway through would not bias the results; keeping that data in just means the variables of interest were never actually manipulated for those subjects. When you KNOW that factors other than the ones you are trying to manipulate are responsible for the results, there’s no benefit to keeping them in. You were trying to keep those factors out in the first place, but failed.

Or say you identify an outlier with extremely good results, email that subject, and find out he decided to start a keto diet with a huge calorie deficit at the same time to make the most of his new training program. It’s reasonable to exclude his data because you already know the unplanned variable (a drastic keto diet) will have a greater effect than the variables you are trying to manipulate, and it ruins your matched-subjects design. Once again, it’s not bias that excludes this outlier; it’s a confounding variable.

Obviously, you can never eliminate data points simply because they don’t match your hypothesis. You have to have a really good reason for eliminating any data point.

By the way, though I’m speculating about these particular results, I’ve seen this type of thing happen all the time with students’ research projects and in research labs.

Techniques for eliminating outliers are legitimately used by the most rigorous researchers in the world and are found in statistics books.
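
For a concrete sketch of one such textbook technique: the Tukey fence rule flags any point more than 1.5 interquartile ranges outside the middle 50% of the data. The numbers below are invented purely for illustration, with one keto-diet-style outlier mixed in.

```python
import numpy as np

def tukey_outliers(values, k=1.5):
    """Flag points outside the Tukey fences: (Q1 - k*IQR, Q3 + k*IQR)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [(v, not (low <= v <= high)) for v in values]

# Invented fat-loss results (kg); the 9.0 is our keto-dieting outlier.
results = [1.2, 0.8, 1.5, 0.9, 1.1, 9.0, 1.3]
for value, flagged in tukey_outliers(results):
    print(f"{value:4.1f}  {'<- outlier' if flagged else ''}")
```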

But more importantly, when you know at the outset that the variable you are TRYING to manipulate will have a smaller effect than the individual differences between subjects, you can’t just go averaging everything together. This is a problem plaguing so much research relevant to bodybuilding.
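
To make that concrete, here is a minimal simulation (all numbers invented) of why averaging everything together can bury a real effect. The between-subject differences are given a standard deviation several times larger than the training effect, so an independent-groups comparison finds nothing while a within-subject (paired) comparison detects the same effect easily.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 20
baseline = rng.normal(10.0, 4.0, n)          # big individual differences
true_effect = 1.0                            # small effect of the protocol
post = baseline + true_effect + rng.normal(0.0, 0.5, n)

# Averaging everything together (independent-groups test): the effect
# drowns in between-subject variance.
print("unpaired p =", stats.ttest_ind(post, baseline).pvalue)

# Comparing each subject to himself isolates the change.
print("paired   p =", stats.ttest_rel(post, baseline).pvalue)
```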

I agree. There are acceptable ways. They really should be pre-planned and widely accepted, and there must be no room for judgment calls that might differ according to how the decision to keep or reject a point would affect the outcome: whether the difference is being able to report your findings as “significant” or not, or by how much, or whether it is finding your own theory supported or not.
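
A sketch of what “pre-planned, no judgment calls” could look like in practice (thresholds and field names here are hypothetical): the exclusion rules are frozen before any outcome data exist and then applied mechanically, never consulting the results.

```python
# Hypothetical pre-registered exclusion criteria, fixed before data collection.
MIN_COMPLIANCE = 0.85   # must complete at least 85% of prescribed workouts
MAX_DEVIATIONS = 0      # no unplanned protocol changes (e.g., a new diet)

def include_subject(subject):
    """Apply the pre-specified rules; outcome measures are never consulted."""
    compliance = subject["sessions_done"] / subject["sessions_planned"]
    return compliance >= MIN_COMPLIANCE and subject["deviations"] <= MAX_DEVIATIONS

subjects = [
    {"id": 1, "sessions_done": 24, "sessions_planned": 24, "deviations": 0},
    {"id": 2, "sessions_done": 11, "sessions_planned": 24, "deviations": 0},
    {"id": 3, "sessions_done": 23, "sessions_planned": 24, "deviations": 1},
]
print([s["id"] for s in subjects if include_subject(s)])  # -> [1]
```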

Now that you have explained in more detail, I see that you didn’t have in mind anything as free-wheeling as what many people, not knowing what you clearly do, would tend to do: having a lot of data that yielded no conclusive evidence and was all over the place in various regards, they would improvise ways to clean up the mess. That sort of approach is what I was saying was not science, but now I see that you had already taken those factors into account.

And by the way, I didn’t mean my statements about how commonly scientists misunderstand statistics to apply to all areas of study. Not so at all, of course: many specific fields are highly rigorous, and the understanding there is, I’m sure, quite correct and well beyond mine. But more broadly, that seems to me the exception rather than the rule. More commonly, the tools get used and the figures get reported, but their actual meaning is not really grasped, and so the conclusions reached, such as that the evidence shows something or shows it does not exist, are all too often not justifiable.

[quote]Bill Roberts wrote:

And by the way, I didn’t mean my statements about how commonly scientists misunderstand statistics to apply to all areas of study. Not so at all, of course: many specific fields are highly rigorous, and the understanding there is, I’m sure, quite correct and well beyond mine. But more broadly, that seems to me the exception rather than the rule. More commonly, the tools get used and the figures get reported, but their actual meaning is not really grasped, and so the conclusions reached, such as that the evidence shows something or shows it does not exist, are all too often not justifiable.[/quote]

I appreciate where you’re coming from on this. The problem is that in exercise science, so much of the information presented to the public comes from poor-quality journals. A journal like J Strength Cond Res, which is often cited here, is not even in the top 20 journals in the sport sciences field.

The big interpretation issue you are alluding to often shows up in articles in lower-quality journals where there is no clear, testable hypothesis and no investigation of a mechanism of change.

So what you often see are studies where the authors gave some substance A or administered some exercise protocol XXX and, hey, let’s just measure variables P, Z, and DD because they seem interesting. There is no clear mechanism under study for why A should influence P; just random things being measured. That leads to speculative interpretation of the results, regardless of how the study was designed and carried out.