T Nation

The Science Thread


#1

Ok, so I’ve kicked around this thread idea for a while now–I’ve thought about creating a thread much like the “PWI Required Reading Thread” (which I’m very happy is still going), but for science. Perhaps a bit more like a cross between that thread and the “Ask Moshe” thread with Jewbacca. The idea would be a place where AG, ED, Antiquity, I, and others can trade research article links that pique our interest. It is DEFINITELY NOT required to be a scientist to participate! (That requirement might make for a short thread.) I would like a place to trade interests, questions for others, and papers.

You can post comments or questions to others in the thread without a link. However, if you do include a link, the only requirements are that:

  1. it must be peer reviewed (or edited in the case of textbooks/technical treatises)
  2. it must be published roughly in the author’s area of expertise–no sociologists talking about genetic inheritance, for example
  3. you must have read the entire thing–or skimmed it–and found it interesting

Simply skimming counts, of course, since we are talking about things that pique our interests and may or may not be in our own fields of expertise. The point is to avoid the kind of situation we saw in certain recent long threads, where someone posts something they have not actually read.

Conference presentations to societies and “executive summaries” or news articles from academic journals or societies are acceptable as well. Anything scholarly really.

I do not want a place for adversarial debate. We have tons of other threads for that. If you want to debate back and forth on the biology of race, IQ, climate change, or politics do it in one of those threads. This thread is for papers, sources, presentations, or for discussion/questions/trade.


#2

This is an interesting piece that @ActivitiesGuy and his cardiologist colleagues might like:

Machine learning algorithms perform better at predicting heart attack than the ACC/AHA guidelines on a dataset of almost 300,000 records. I think this is going to be an area where rapidly improving computing and AI may really be able to help. The black-box question, though, remains somewhat of a problem, as Science noted in their coverage.
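For anyone curious what that comparison looks like mechanically, here is a minimal sketch (synthetic data, illustrative variable names, and a made-up “guideline-style” weighted score, none of it taken from the actual study): the guideline plays the role of a fixed formula, the machine-learned model learns its own weighting, and the two are compared on discrimination (AUC).

```python
# Minimal sketch only, not the study's method: synthetic data stand in for
# routine clinical variables, and the "baseline" is a made-up fixed score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for routine clinical variables (age, BP, lipids, ...).
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Guideline-style" baseline: a fixed weighted sum of a few variables
# (weights are arbitrary placeholders for illustration).
baseline_score = 0.5 * X_test[:, 0] + 0.3 * X_test[:, 1] + 0.2 * X_test[:, 2]

# Machine-learned model trained on all available variables.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
ml_score = model.predict_proba(X_test)[:, 1]

print("baseline AUC:", roc_auc_score(y_test, baseline_score))
print("ML AUC:      ", roc_auc_score(y_test, ml_score))
```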


#3

This is something I think is fascinating and waaay out of my area. Acoustic analysis of languages based on combined acoustic signal processing and statistics. Still digesting this one.
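I haven’t worked through the paper’s actual pipeline, but just to make “signal processing plus statistics” concrete, here is a toy sketch: pull standard spectral features (MFCCs) out of two recordings and compare them with a simple summary. The file names and the crude distance measure are purely illustrative assumptions, not the paper’s method.

```python
# Toy illustration only, not the paper's method: extract spectral features
# from two recordings and compare them with a simple statistic.
# "speech_a.wav" and "speech_b.wav" are hypothetical file names.
import numpy as np
import librosa

def mean_mfcc(path):
    y, sr = librosa.load(path, sr=16000)                # signal processing step
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 coefficients over time
    return mfcc.mean(axis=1)                            # one summary number per coefficient

a = mean_mfcc("speech_a.wav")
b = mean_mfcc("speech_b.wav")

# Statistics step (crudely): how far apart are the two feature summaries?
print("distance between mean MFCC vectors:", np.linalg.norm(a - b))
```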


#4

I really LIKE this idea. I’ll put it here, since the example is a science-related policy issue, but I think this could be a very COOL technique to use when discussing any public policy. I debated putting it up in @thunderbolt23’s Mindless Partisan thread, because that’s initially what caught my interest about it, but this thread seems like the better home.

The example is climate science, but imagine doing this for any public policy issue where you’re trying to get reasonable people to really understand and discuss it. I like that it gets at helping lay people understand a technical issue, or helping anyone understand an issue outside their own area of expertise.

Paywalled, so please pardon the text wall.

Tomorrow’s March for Science will draw many thousands in support of evidence-based policy making and against the politicization of science. A concrete step toward those worthy goals would be to convene a “Red Team/Blue Team” process for climate science, one of the most important and contentious issues of our age.

The national-security community pioneered the “Red Team” methodology to test assumptions and analyses, identify risks, and reduce—or at least understand—uncertainties. The process is now considered a best practice in high-consequence situations such as intelligence assessments, spacecraft design and major industrial operations. It is very different and more rigorous than traditional peer review, which is usually confidential and always adjudicated, rather than public and moderated.

The public is largely unaware of the intense debates within climate science. At a recent national laboratory meeting, I observed more than 100 active government and university researchers challenge one another as they strove to separate human impacts from the climate’s natural variability. At issue were not nuances but fundamental aspects of our understanding, such as the apparent—and unexpected—slowing of global sea-level rise over the past two decades.

Summaries of scientific assessments meant to inform decision makers, such as the United Nations’ Summary for Policymakers, largely fail to capture this vibrant and developing science. Consensus statements necessarily conceal judgment calls and debates and so feed the “settled,” “hoax” and “don’t know” memes that plague the political dialogue around climate change. We scientists must better portray not only our certainties but also our uncertainties, and even things we may never know. Not doing so is an advisory malpractice that usurps society’s right to make choices fully informed by risk, economics and values. Moving from oracular consensus statements to an open adversarial process would shine much-needed light on the scientific debates.

Given the importance of climate projections to policy, it is remarkable that they have not been subject to a Red Team exercise. Here’s how it might work: The focus would be a published scientific report meant to inform policy such as the U.N.’s Summary for Policymakers or the U.S. Government’s National Climate Assessment. A Red Team of scientists would write a critique of that document and a Blue Team would rebut that critique. Further exchanges of documents would ensue to the point of diminishing returns. A commission would coordinate and moderate the process and then hold hearings to highlight points of agreement and disagreement, as well as steps that might resolve the latter. The process would unfold in full public view: the initial report, the exchanged documents and the hearings.

A Red/Blue exercise would have many benefits. It would produce a traceable public record that would allow the public and decision makers a better understanding of certainties and uncertainties. It would more firmly establish points of agreement and identify urgent research needs. Most important, it would put science front and center in policy discussions, while publicly demonstrating scientific reasoning and argument. The inherent tension of a professional adversarial process would enhance public interest, offering many opportunities to show laymen how science actually works. (In 2014 I conducted a workshop along these lines for the American Physical Society.)

Congress or the executive branch should convene a climate science Red/Blue exercise as a step toward resolving, or at least illuminating, differing perceptions of climate science. While the Red and Blue Teams should be knowledgeable and avowedly opinionated scientists, the commission should have a balanced membership of prominent individuals with technical credentials, led by co-chairmen who are forceful, knowledgeable and independent of the climate-science community. The Rogers Commission for the Challenger disaster in 1986, the Energy Department’s Huizenga/Ramsey Review of Cold Fusion in 1989, and the National Bioethics Advisory Commission of the late 1990s are models for the kind of fact-based rigor and transparency needed.

The outcome of a Red/Blue exercise for climate science is not preordained, which makes such a process all the more valuable. It could reveal the current consensus as weaker than claimed. Alternatively, the consensus could emerge strengthened if Red Team criticisms were countered effectively. But whatever the outcome, we scientists would have better fulfilled our responsibilities to society, and climate policy discussions would be better informed. For those reasons, all who march to advocate policy making based upon transparent apolitical science should support a climate science Red Team exercise.

Mr. Koonin, a theoretical physicist, is director of the Center for Urban Science and Progress at New York University. He served as undersecretary of energy for science during President Obama’s first term.

Appeared in the Apr. 21, 2017, print edition of the WSJ


#5

Aragorn, I know the above example isn’t exactly what you were talking about, but I thought it was of interest in the “how do we get the public to better understand the science” kind of way.


#6

And because on a Sat night, I was watching a Polka program on RFD TV and considering myself one of the nerdiest dudes in the world…

Instead I declare @Aragorn and @Powerpuff co-champions of the ‘Welcome to Hell, here’s your accordion’ contest.

:musical_keyboard: :smile:


#7

I promise to address this in more detail, but I want to do it from my work computer since I have some relevant stuff that I can only access there. Remind me to come back and write a longer post on this if I haven’t done so in the next few days, haha.


#8

I’m fine with it, puff–I was hoping you might chime in here with some of your professional interests. I wonder if this might have been better for the Paris Climate Conference thread, but as long as this thread doesn’t turn into climate debate 2.0, let’s leave it here.


#9

Looking forward to it sir. Lots of interesting questions raised by this topic and paper, and I was frankly out of my depth (although enjoying the read). Also hoping you’ll be able to chime in with some other papers too.


#10

I accept. I am a huge nerd!


#11

I agree. The ability of modern computing platforms to ‘crunch’ massive amounts of data will be a huge part of medicine going forward. The results of such analyses will motivate healthcare systems to ensure that useful (=predictive) datapoints are ‘planted’ in every pt’s chart. (In that regard, it was fascinating that, in the present study, the lack of BMI info in a pt’s record predicted a lower risk of CVD.)

Likewise, such analyses will motivate ‘pruning’ of non-useful datapoints that had heretofore been assumed to be useful. For example, the authors mention that data concerning blood levels of an acute-phase reactant called C-reactive protein failed to ‘make the cut’ with regard to identifying pts at risk for CVD. The implication of this finding is that doctors can/should stop ordering this particular test for the purposes of CVD risk stratification. Thus, to the extent that cardiologists use CRP testing in their CVD-risk assessment protocols (not my area, so I don’t have a sense for it), this finding has the potential to yield significant cost-savings at the system level.

In that regard, I would point out that the potential cost-savings are not limited to the direct costs of the test itself (although those are considerable). That is, by removing this ‘bad’ predictor from the physician’s pool of clinical information, fewer pts will receive false-positive assessments concerning their risk of CVD. This in turn means fewer pts will be put (unnecessarily) on CVD-risk modifying drugs (eg, statins). This in turn leads to further cost reductions, both immediate (ie, the direct cost of the unnecessary drugs themselves), and further downstream (ie, in terms of the costs associated with treating pts who end up harmed by the drug they didn’t need in the first place).
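Just to make the shape of that argument concrete, here is a back-of-envelope sketch. Every number in it is a made-up placeholder purely for illustration; nothing here comes from the study.

```python
# Back-of-envelope sketch of the cost argument above.
# Every value is a hypothetical placeholder, not data from the study.
n_patients = 100_000              # patients screened per year (hypothetical)
test_cost = 20.0                  # direct cost per CRP test (hypothetical)
false_positive_drop = 0.01        # fewer false positives once the test is dropped (hypothetical)
drug_cost_per_year = 300.0        # annual cost of an unneeded preventive drug (hypothetical)
adverse_event_rate = 0.02         # fraction of treated patients harmed by the drug (hypothetical)
adverse_event_cost = 5_000.0      # cost of treating one drug-related harm (hypothetical)

direct_savings = n_patients * test_cost
patients_spared = n_patients * false_positive_drop
downstream_savings = patients_spared * (drug_cost_per_year
                                        + adverse_event_rate * adverse_event_cost)

print(f"direct test savings:   ${direct_savings:,.0f}")
print(f"patients spared drugs: {patients_spared:,.0f}")
print(f"downstream savings:    ${downstream_savings:,.0f}")
```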

As it is in his wheelhouse, I’m looking forward to hearing @ActivitiesGuy’s take on the article.


#12

It’s amazing how far machine learning and narrow set AI has advanced during my IT career.

Take chess, for example. In the late ’70s, the programs barely knew the rules, let alone how to play.

Now… computer chess has ranked itself off the high end of the rating scale.


#13

You’re not kidding–and “Go” as well. It’s unbelievable and awesome to see how far we have come: from “word processors” to “CD encyclopedias” and the baby worldwide web, to gigantic connectivity and high-speed internet on our phones (not to mention “crowd-solved” distributed supercomputing).


#14

ED–

I go through alternating phases regarding the pruning of data points. On one level it’s important to minimize false positives, and this is promising in that arena. On the other hand, we may find they’re predictive or useful for something else down the road, so I am almost never a fan of cutting data points out of anything. The number of reversals in perspective in research certainly (to my mind anyway) argues for caution in that area.

The biggest issue to me, though, is the “black box” nature of the algorithms. You see what goes in, you see the result that comes out, but the process in between is very opaque, and it is problematic to “tune” these algorithms in a transparent way.
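For what it’s worth, one common (if only partial) way to peek inside that black box is to perturb the inputs and watch what happens to the output, e.g., permutation importance. Here is a minimal sketch on synthetic data; this is not the study’s pipeline, just an illustration of the idea.

```python
# Minimal sketch on synthetic data: permutation importance as one partial
# way to probe an otherwise opaque model. Not the study's actual pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop suggests the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature {i}: mean accuracy drop {drop:.3f}")
```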


#15

Well, practically speaking, the number of data points must be constrained. We can’t run every test on every patient, dump it all in a hopper, and hope that some combination of data points proves unexpectedly fruitful. There are both financial and ethical reasons why this would be unacceptable. Now, I know you’re not suggesting such per se. But if a given test is found to be noncontributory with respect to a given medical condition–as was the case with CRP and CVD in the paper presented–it’s difficult to justify keeping that test in the diagnostic protocol on the off-chance it might prove useful regarding some other condition.


#16

Sure, we are not able to maintain every data point, nor should we attempt to, for the reasons you stated. This was one of the many fascinating facets of the study to me, given how much inflammation plays into disease processes and how widely CRP is used as a marker for systemic inflammation.


#17

The usefulness of CRP has been hotly debated for a while now:

"The role of inflammation in the propagation of atherosclerosis and susceptibility to cardiovascular (CV) events is well established. Of the wide array of inflammatory biomarkers that have been studied, high-sensitivity C-reactive protein (hsCRP) has received the most attention for its use in screening and risk reclassification and as a predictor of clinical response to statin therapy. Although CRP is involved in the immunologic process that triggers vascular remodeling and plaque deposition and is associated with increased CV disease (CVD) risk, definitive randomized evidence for its role as a causative factor in atherothrombosis is lacking. Whether measurement of hsCRP levels provides consistent, clinically meaningful incremental predictive value in risk prediction and reclassification beyond conventional factors remains debated. Despite publication of guidelines on the use of hsCRP in CVD risk prediction by several leading professional organizations, there is a lack of clear consensus regarding the optimal clinical use of hsCRP. " [emphasis mine]

http://www.medscape.com/viewarticle/808448


#18

Definitely agree.


#19

Many of them are self-learning. Neural nets have been in research and military applications since the ’60s. (The Air Force had a self-learning analog neural net for reconnaissance photo review back then.)

Many of the mathematical algorithms for pattern recognition were written in the ’60s.

It’s only recently, through the effect of Moore’s Law, that we have had the processing power to put them to practical application.


#20

It was only a few short years ago that I read that CRP was the best indicator of CV issues–heralded over LDL, perhaps even on this site.

I only mention this because science often replaces science, to the denigration of long-term observation. A sort of ‘can’t see the forest for the trees.’
Not that I oppose science in any fashion, but I don’t hold it up as some ultimate truth.