Week of March 16, 2026
"I understand why you want me to take the drug, but at $300 per month, you also need to tell me when I am going to die. I want to calculate if the investment is worth it," is not a typical patient response, but this octogenarian engineer often frames his care as value analysis. And, even though he delivered these words with a wry smile, I suspect he hides behind the math.
When I pushed him on the maximum dollar-per-year-of-life he'd accept, he replied: "I'm more worried about how upset I'll be in the afterlife knowing I wasted money avoiding heaven." I reminded him, "I only deal with this life and you might want to hedge your bets - you could be dividing by zero."
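For what it's worth, his value analysis is just a ratio, and his afterlife concern is a genuine edge case. A toy sketch in Python (every number below is hypothetical except his $300/month):

```python
def cost_per_life_year(monthly_cost: float, years_on_drug: float,
                       years_of_life_gained: float) -> float:
    """Dollars spent per year of life gained. All inputs are hypothetical."""
    if years_of_life_gained <= 0:
        # My patient's concern, formalized: you might be dividing by zero.
        raise ZeroDivisionError("Hedge your bets: no measurable benefit.")
    return (monthly_cost * 12 * years_on_drug) / years_of_life_gained

# E.g., $300/month for 5 years that buys one extra year of life:
print(cost_per_life_year(300, 5, 1))  # 18000.0 dollars per year gained
```

Whether $18,000 per year of life is a bargain is, of course, exactly the question he was asking me to answer.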
Administrative things: If you can't access links from within your organization, let me know and I'll send this to an alternate e-mail address.
The Google NotebookLM AI-generated podcast version of this week's newsletter.
Science and Technology Trends
A large, retrospective cross-sectional analysis from Sweden suggests that antibiotics disrupt the gut microbiome for much longer than expected. Researchers analyzed the Swedish health system's prescription database and compared the timing of antibiotic use with microbiome diversity, using stool samples collected at various time points after known antibiotic prescriptions. They found that gut microbiome diversity remains diminished for as long as 4-8 years after antibiotic exposure, with some antibiotics causing more profound changes than others. The clinical significance of a "less diverse" gut microbiome is unclear; however, many disease states have been associated with diminished gut bacterial diversity (e.g., psychiatric, cardiovascular, and autoimmune diseases). While I would never advise avoiding antibiotics because of this finding, it is certainly another interesting data point in our growing understanding of the interplay between the microbiome and human health.
Article:
AI-assisted analysis:
Here is a broader scan of the medical literature demonstrating the core problem of microbiome research - we have lots of associations without a clear ability to inform treatment. I had OpenEvidence, the AI-driven clinical LLM, generate a table of research articles linking microbiome changes to various disease states:
However, when we look for studies on therapies, it is evident that we cannot easily alter intestinal flora balance, and even if we could, there is limited evidence that probiotics successfully treat many of the illnesses associated with dysbiosis.
Anti-Anti-Science
Last week, HHS announced a new payment model, MAHA ELEVATE, supporting "functional" and "lifestyle" medicine aimed at preventing chronic illness.
- The program will fund up to 30 three‑year, evidence‑based lifestyle/functional medicine interventions (nutrition, physical activity, sleep, stress, substance use, social connection) with about $100M via cooperative agreements, not fee‑for‑service or shared savings.
- Interventions must support, not replace, conventional care and explicitly target chronic disease prevention/slowing progression, with required inclusion of nutrition or physical activity and reserved awards for dementia‑focused proposals.
On the one hand, I applaud HHS for focusing on chronic disease prevention rather than just treatment; however, the world of functional and lifestyle medicine is often more about faith than science. While there is good evidence for the value of lifestyle medicine (diet, exercise, sleep), functional medicine is typically marked by overinterpreting extensive, nonspecific lab panels and prescribing supplements.
How closely HHS administers the "cooperative agreements" will be critical: what qualifications or experience will grant recipients need, and how involved will HHS be throughout the research? In addition, is 3 years enough time to demonstrate a significant impact on averting chronic illness (like dementia)? Lastly, most adults in the United States over the age of 65 (i.e., the Medicare recipients who fall under HHS programs) already have at least one chronic condition, or the early stages of one. Put simply, MAHA ELEVATE could be very interesting, or it could be another farcical adventure in logical fallacies and pseudo-science. Or maybe both.
I think it is reasonable to encourage chronic disease prevention and appropriate preemptive care. Stat News published an interesting commentary from a medical student this week, discussing the frustrations many of us who take care of complex, chronically ill patients experience:
However, disambiguating pseudoscience from science will be a real challenge. One of my favorite and often provocative science bloggers, Jordan Lasker (who posts under the pseudonym Cremieux), offered a robust discussion of the intellectual fallacies of "functional medicine" and the challenges HHS will face in supporting this work. (Worth reading, though I'd suggest Googling Jordan Lasker - he has a strong POV.)
Sadly, we have hampered our ability to ensure high-quality science is funded and monitored. By the end of 2025, 10,109 doctorally trained experts in science and related fields had left their federal government jobs. Science offered an agency-by-agency analysis of the workforce changes.
"At most agencies, the most common reasons for departures were retirements and quitting. Although OPM classifies many of these as voluntary, outside forces, including the fear of being fired, the lure of buyout offers, or a profound disagreement with Trump policies, likely influenced many decisions to leave."
Thus, the agencies best positioned to evaluate whether programs like MAHA ELEVATE produce real science are now severely understaffed.
AI Impact
Researchers embedded an LLM-based tool (GPT-4o) into the electronic medical records at 16 primary care clinics in Nairobi and found that the AI's advice was clinically sound and locally appropriate the vast majority of the time. This is a retrospective, observational analysis without a control group. However, the authors observed:
- When AI offered beneficial advice, clinicians followed it in about 22% of those cases.
- When the AI offered potentially harmful advice, clinicians followed it about 58% of the time.
The authors speculate this reflects the implied authority of AI: when suggestions felt unfamiliar or counter to the clinician's initial instinct, clinicians were more likely to defer to the AI. The hard part is that we don't know how often these clinicians get it right versus get it wrong without the AI.
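To see why that missing baseline matters, here is a toy model. The only numbers taken from the study are the 22%/58% adherence rates; everything else is hypothetical, and the model simplistically assumes that a clinician who ignores the AI falls back on unassisted judgment:

```python
def p_correct_with_ai(ai_accuracy: float, baseline: float,
                      follow_good: float = 0.22, follow_bad: float = 0.58) -> float:
    """Toy probability that the clinician lands on the right answer with AI advice.

    follow_good/follow_bad are the Nairobi adherence rates; ai_accuracy and
    baseline (unassisted accuracy) are hypothetical. Assumes ignored advice
    means the clinician reverts to their own judgment.
    """
    # AI is right: followed -> correct; ignored -> own judgment.
    good = ai_accuracy * (follow_good + (1 - follow_good) * baseline)
    # AI is wrong: followed -> wrong; ignored -> own judgment.
    bad = (1 - ai_accuracy) * ((1 - follow_bad) * baseline)
    return good + bad

# Whether the AI helps depends entirely on the unassisted baseline:
print(p_correct_with_ai(0.95, baseline=0.60))  # ~0.67, better than 0.60
print(p_correct_with_ai(0.95, baseline=0.90))  # ~0.89, worse than 0.90
```

With the same adherence behavior, the net effect flips sign depending on how good the clinicians already were - which is precisely why a control group (or at least a measured baseline) is needed.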
Moreover, this study is a great reminder of how important it is to maintain critical thinking in an age of AI:
- Are there factual errors in the answer?
- Am I asking the right questions?
- Do the answers make sense in a broader context?
- Are there patterns of inconsistency across all of the AIs' answers?
Article:
AI-assisted analysis:
Related: Lots of clinicians (including yours truly) already use AI daily. OpenEvidence, the medically tuned LLM for physicians and clinicians, recently hit 1 million clinical consultations in a single day. https://www.prnewswire.com/news-releases/openevidence-achieves-historic-milestone-1-million-clinical-consultations-between-verified-doctors-and-an-artificial-intelligence-system-in-a-single-day-302712459.html
Even when warned about bias, having AI display suggested next words shapes humans' thoughts and ideas. Scientific American reported on a fascinating study examining how autocomplete changed political opinions, even when users did not use the autocomplete suggestions.
Cornell researchers enrolled 2,582 participants (online) to write a short essay about a debatable issue using a custom online text editor. Half the participants (AI Treatment) received in-line autocomplete suggestions. The data demonstrated that AI writing assistants configured to produce biased autocomplete suggestions reliably shifted users' attitudes on contested societal issues — death penalty, fracking, GMOs, and felon voting rights — in the direction of the AI's predetermined position. Most users remained unaware they were being nudged, and those whose attitudes shifted were less likely to detect the bias than those who resisted it. Even warning users about the AI's potential bias — before or after the writing task — did not meaningfully reduce the attitude shift, which is alarming given that real-world AI products already deploy exactly these kinds of disclaimers.
Research Article:
AI-assisted analysis:
And healthcare consumers are all-in, too. Microsoft researchers analyzed 500,000+ de-identified health-related Copilot conversations in January 2026. Using AI and human reviewers (for validation), they categorized the questions into 12 question types. They found:
- General health questions are the most common, at ~40%, but the authors suspect that questions framed in general terms may well reflect the user's own health concerns rather than casual curiosity.
- 1 in 5 conversations: personal symptoms or active condition management
- 1 in 7 of those: on behalf of a child, parent, or partner
- 1 in 10: interpreting symptoms, labs, or imaging
- 1 in 20: navigating the healthcare system (finding providers, understanding insurance)
- Emotional and symptom queries climb as the day ends and clinicians become unreachable.
- Mobile = personal and emotional; desktop = research and professional (3× gap)
It would be interesting to see whether the advice was sound and how frequently incorrect information or hallucinations could be identified; however, as with the doctor study, it is difficult to generate a control group when it's AI versus no AI for these kinds of activities.
Article:
AI-assisted analysis:
Consumers will also need to be thoughtful AI healthcare users. Following OpenAI and Anthropic, Microsoft announced Copilot Health, a separate, secure space within Microsoft Copilot for consumers to store and understand medical records. Beyond the promise of data privacy and security, Copilot Health integrates with numerous wearable devices, connects to "50,000+" US hospitals and lab providers, and helps users find providers and understand insurance coverage. As always, I am concerned about promises of privacy and security - business expediencies will make these tools a huge opportunity for hyper-targeted advertising and, perhaps, more subtle behavior shaping.
Microsoft page about Copilot Health:
Of course, some people are more suggestible than others. A Canadian author says she's dating an AI octopus named Sinclair (who speaks to her with an Irish accent), inspired by her monster romance novels. I worry for humanity.
Things I learned this week
"Ornamental Hermits Were 18th-Century England's Must-Have Garden Accessory. Wealthy landowners hired men who agreed to live in isolation on their estates for as long as seven years." This is lifestyle medicine I can get behind.
In the same vein as hobby horse, hobby motocross, and hobby dogging, last week I found an article about decidedly non-hobby rabbit show jumping: real rabbits on real leashes being led through real obstacle courses. Like hobby horse, this "sport" seems to have originated in Scandinavia. I think the Swedish name, Kaninhoppning (rabbit jumping), sounds very sophisticated.
and
The quotes in the Der Spiegel article are unintentionally funny and accidentally philosophical:
"As long as you train them, they really like to do it," Fehlen, who has several rabbits involved in Kaninhop, says. "You have to teach them to jump over the hurdles, but at some point, they get it."
I think we all agree that "you have to teach them to jump over the hurdles, but at some point they get it," is equally valid for raising children.
On one of my flights in the last few weeks, I found myself reading about animals that eat psychedelic plants. There are numerous, well-documented examples of animals eating fermented fruit (presumably for the alcohol) and psychoactive plants for the mind-altering effects. As you might imagine, the most in-depth writings on this topic are from scientists and advocacy organizations. For instance, Unlimited Sciences is a pro-psychedelic nonprofit that promotes psychedelic research and normalization.
However, once you go down this rabbit hole, you find yourself in such vaunted literature as High Times (yes, the cannabis magazine), reading about angry goats that attack scientists collecting psychedelic mushrooms. "Encountering a herd of mountain goats while gathering mushrooms, ethnobotanist Giorgio Samorini was kicked in the chest for intruding on the herd's patch of magic mushrooms. The goat then proceeded to devour all the mushrooms the scientist had already collected." I was a bit surprised by this anecdote - I was envisioning more Woodstock than a New Jack City of violent, territorial, hallucinating goats. I would submit a grant to MAHA ELEVATE to support research into microdosing rabbits to improve competitive jumping performance, but I strongly suspect we no longer have a federal pharmaceutical lagomorphologist program.
Are there lagomorphologists working for the US Government? Sort of.
AI art of the week
A visual mashup of topics from the newsletter, and an exercise to see how various LLMs interpret the prompt. I use an LLM to summarize the newsletter, suggest prompts, and generate images with different LLMs.
Another week requiring surrealism, which seems to be on point.
In the intricate alchemical style of Remedios Varo, a tall vertical composition depicts a fantastical laboratory-tower embedded in an overgrown English garden. At the top of the tower, a robed hermit sits in meditative isolation at a small desk, surrounded by floating mathematical equations and a single candle, calculating the cost-per-year-of-life on a scroll that trails down through the building like a ribbon. On the middle floors, white rabbits in tiny numbered racing bibs navigate an elaborate obstacle course of hurdles built from test tubes and microscope slides, their eyes wide and luminous, guided by delicate leashes held by serious Scandinavian scientists in fur-trimmed coats, taking meticulous notes. In the lower chamber, a mountain goat with dilated pupils aggressively guards a glowing patch of mushrooms, kicking an alarmed ethnobotanist in a field jacket who clutches a now-empty specimen bag. Surrounding the tower's exterior, enormous translucent gut bacteria float like lanterns in the night air, their diversity diminishing as they drift further from the structure. In the far background, a vast baroque government building has a banner reading "MAHA ELEVATE" draped across its columns, its windows glowing with the cold blue light of computer screens. A small Irish-accented octopus in a waistcoat peers in through one of the windows, holding a love letter. Everything rendered in Varo's characteristic warm amber and teal palette, hyper-detailed, dreamlike, procedural, with her signature sense that all of this activity is completely serious and completely inevitable.
Clean hands and sharp minds,
Adam