My new book contract…

I’m working on a book about becoming a scientist who is also a public intellectual. Kind of the story of my professional life since I ‘graduated’ from being a postdoc at the NIH. Yesterday, I signed a book contract with Elgar. I’m really jazzed about this project and I’ll keep you up to date as things progress.

Last day of classes: Spring semester

This was a particularly enjoyable semester for me. Lots of very smart students (both grad and undergrad), and I learned some new tricks for teaching more effectively. George Mason has a new provost and a new logo. On the latter, my students weren’t too enthusiastic. But I think we got a good one on the former. I’m pleased to see our leadership team’s roots in the University of California system.

My new book project about scientists as public intellectuals is moving forward at full speed this summer. I’m also looking forward to revisiting Australia (see photo).

U Chicago’s Money Issues…

The Chronicle has a pretty comprehensive look. It’s not a pretty picture. And the MBL ‘acquisition’ has an uncomfortably high profile in the piece. I respect Chicago – it does a lot of things better, but it will have to make some tough decisions. Interestingly, what started with a couple of high-profile public R1s now seems to be spreading to the privates.

Most science is expensive…


My folks (they were both scientists) used to drill into my young, impressionable self that making it as a scientist was all about asking the right questions. But the key constraint was not just the ability to ask the right question. Rather, it was having the right tools to answer the question experimentally.

As science has progressed, what we seek to measure has become smaller and shorter-lived. The gravitational waves from the collision of a pair of black holes, detected by LIGO back in 2015, produced a displacement smaller than the diameter of a proton here on Earth. The machines built to measure that displacement cost on the order of a billion dollars.
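To see just how small that displacement is, here is a rough back-of-the-envelope calculation. The numbers are approximate public figures (peak strain of roughly 1e-21 for the 2015 event, 4 km interferometer arms, a proton diameter of roughly 1.7e-15 m), not precise instrument specs:

```python
# Back-of-the-envelope: how small was the displacement LIGO measured?
# All values are rough, publicly quoted figures (assumptions, not specs).
strain = 1e-21               # approximate peak strain of the 2015 event
arm_length_m = 4_000         # LIGO interferometer arm length, meters
proton_diameter_m = 1.7e-15  # roughly 2x the proton charge radius, meters

# Strain is a fractional change in length, so the arm displacement is:
displacement_m = strain * arm_length_m

print(f"Arm displacement: {displacement_m:.1e} m")
print(f"Proton diameters: {displacement_m / proton_diameter_m:.4f}")
```

On these rough numbers the displacement comes out to about 4e-18 m — not just smaller than a proton, but roughly a few hundredths of a percent of its diameter.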

And that example has played out across the entire science waterfront. The phenomena, important as they are to moving science forward, remain ephemeral. You can ask exactly the right question, but the tools to answer your scientific query are expensive.

So why do science then? Aren’t there other more important policy objectives at hand? I would answer that we need to do science for two reasons: first, science delivers concrete practical goods like new medicines and therapies. And second, science allows us to gain knowledge about our place in the universe.

How to reform NIH…

Recently, I’ve mostly written in this respect about the NSF, but I also spent six years at the NIH, as a staff fellow in the intramural program (the biomedical research center in Bethesda, Maryland). When most folks think about the NIH, they are not really focusing on the intramural program. Rather, it’s the extramural program, which gives out grant awards to biomedical researchers at US colleges and medical centers, that gets the attention. And I guess that’s fine, because the extramural program represents about 90% of the NIH budget.

But if I were going to magically reform the agency, I would focus on the intramural program. That’s because it has so much potential: an annual budget north of $4B, America’s largest research hospital, and thousands of young researchers from all over the world. If Woods Hole is a nexus for the life sciences during the summer, the NIH Bethesda campus is that on steroids year round.

The special sauce of the intramural program is that ideas can become experiments and then discoveries without the usual intermediate step of writing a proposal and waiting to see if it gets funded. When I was at NIH, I could literally conceive of a new experiment, order the equipment and reagents, and publish the results several months later. Hence, the intramural program has the structure in place to be a major science accelerator.

But for some reason, when we think of such science accelerators, we generally consider private institutions like HHMI, the Allen Institutes, and perhaps the Institute for Advanced Study in Princeton. What about NIH? On the criterion of critical mass, it dwarfs those places.

To my mind, the problem lies in NIH’s ‘articles of confederation’ structure: it’s really 27 (or so) different institutes and other units that are largely independent (especially the NCI), with relatively weak central leadership. This loose confederation plays out not only on the Hill or in the awarding of extramural grants, but crucially also on the Bethesda campus, where intramural program directors rule fiefdoms more insular than academic units on a college campus. And this weak organizational architecture works against the science-accelerator advantage I described above.

So here’s a big idea: let’s make the intramural program its own effective NIH institute, and have Congress authorize and fund it separately, as a high-risk, high-payoff biomedical research program for the country. Does that sound like ARPA-H? Oops. Well, then maybe we should just give the Bethesda campus to ARPA-H.

What is going on with global politics?

I spend most of my time thinking either about global issues somewhat removed from politics, or the politics of science and the academy. But it’s clear that geopolitically, something much larger and emergent is going on that is fostering an increase in violent conflict and political anger. And this is happening not just here in the United States, but globally with hot wars going on in Europe and the Middle East, to say nothing of new threatened conflict in South America.

Apart from the micro-details of each particular conflict, the bigger picture, from my point of view, is that the pandemic and climate disruption are global shocks that have led to this emergent manifestation of large-scale human-on-human violence. Beyond a complexity science approach, how would we study this notion? Are there any testable hypotheses here?

Asking for a friend…

Reproducibility redux…

The crisis of reproducibility in academic research is a troubling trend that deserves more scrutiny. I’ve blogged and written about this before, but as 2024 begins, it’s worth returning to the issue. Anecdotally, I’ve noticed that most of my scientist colleagues have experienced the inability to reproduce published results on at least one occasion. For a good review of the actual numbers, see here. Why are the findings from prestigious universities and journals seemingly so unreliable?

There are likely multiple drivers behind the reproducibility problem. Scientists face immense pressure to publish groundbreaking positive results. Null findings and replication studies are less likely to be accepted by high-impact journals. This incentivizes scientists to follow flashier leads before they are thoroughly vetted. Researchers must also chase funding, which increasingly goes to bold proposals touting novel discoveries over incremental confirmations. Intense competition induces questionable practices to get an edge.

The institutional incentives seem to actively select against rigor and verification. But individual biases also contaminate research integrity. Thinking back to my postdoctoral years at NIH, it was clear even then that scientists get emotionally invested in their hypotheses and may unconsciously gloss over contrary signals. Or they may succumb to confirmation bias, designing experiments in ways that stack the deck to validate their assumptions. This risk seems to grow with the prominence of the researcher. It’s no surprise that findings thus tainted turn out to be statistical flukes unlikely to withstand outside scrutiny.
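The “statistical flukes” point can be made concrete with a toy simulation. The numbers below are illustrative assumptions of mine (10% of tested hypotheses true, 5% false-positive rate, 80% power, and journals publishing only significant results), not measured values:

```python
import random

# Toy model of publication bias (assumed parameters, not real data):
random.seed(0)
N_STUDIES = 100_000
TRUE_FRACTION = 0.10  # assumed share of hypotheses that are really true
ALPHA = 0.05          # false-positive rate of a standard significance test
POWER = 0.80          # assumed chance a true effect reaches significance

published_true = published_false = 0
for _ in range(N_STUDIES):
    hypothesis_true = random.random() < TRUE_FRACTION
    # A study comes out "significant" with probability POWER if the
    # hypothesis is true, and probability ALPHA if it is false.
    significant = random.random() < (POWER if hypothesis_true else ALPHA)
    if significant:  # only significant results get published
        if hypothesis_true:
            published_true += 1
        else:
            published_false += 1

total = published_true + published_false
print(f"Share of published findings that are flukes: "
      f"{published_false / total:.0%}")
```

Under these assumptions the analytic answer is 0.9 × 0.05 / (0.1 × 0.8 + 0.9 × 0.05) ≈ 36% — roughly a third of the published record would be false positives that no one can replicate, even before any questionable research practices enter the picture.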

More transparency, data sharing, and independent audits of published research could quantify the scale of irreproducibility across disciplines. Meanwhile, funders and academics should alter incentives to emphasize rigor as much as innovation. Replication studies verifying high-profile results deserve higher status and support. Journals can demand stricter methodological reporting to catch questionable practices before publication. Until the institutions that produce, fund, publish and consume academic research value rigor and replication as much as novelty, the problem may persist. There seem to be deeper sociological and institutional drivers at play than any single solution can address overnight. But facing the depth of the reproducibility crisis is the first step.

Happy New Year!