What Adam is Reading - Week of 4-20-2026

Week of April 20, 2026

I am fascinated by the anthropological dynamics of conference hotels. Logo'd lanyards and business casual are 21st-century tribal insignia. Maintaining eye contact without a furtive name-badge glance is the greatest social etiquette challenge ("who is this person who clearly remembers me?"). The struggle to balance courtesy and hygiene with a knuckle-pressed elevator button (nine-dollar latte in hand) is an undercelebrated acrobatics maneuver.
In one elevator, I found myself commiserating with a regional VP of vinyl siding sales about the inbox overflow that every conference produces. I wanted to ask how AI is changing his side of the home improvement industry, but his bulging swag tote drew my attention. His legal team must spend far less energy on Stark violations and anti-kickback statutes. So by the time we hit floor fifteen, I was quietly weighing whether trading CME requirements and licensing exams for blueprints and custom siding might have been the better career. It is easier to see the greener grass of alternate careers when swag bag size is the metric.

The Google NotebookLM AI-generated podcast version of this week’s newsletter.

Science and Technology Trends

President Trump signed an executive order that coordinates various federal agencies and removes barriers to ongoing psychedelic substance research. Specifically, the FDA, DEA, and DOJ were instructed to expand access to psychedelic medications for research purposes. In particular, research on ibogaine — an alkaloid derived from an African shrub that has shown promise for opioid use disorder but carries significant cardiac risk — received a $50 million federal grant paired with a Texas state grant. Overall, a potentially useful move across psychiatric research — ibogaine for opioid use disorder, psilocybin and MDMA for treatment-resistant depression and PTSD.
I've previously written about the inherent problems with doing research on psychedelics, since double-blind placebo-controlled trials are nearly impossible (the medications being studied induce obvious hallucinogenic effects). Either way, these kinds of executive orders pose an interesting prioritization problem. Why now? Is this a top medical priority? What else could these resources fund? Who will benefit from this change? Will the Executive Order pressure the evidence thresholds required for FDA approval? I am concerned that political pressure could compress evidence standards for a drug class where the unblinding problem already makes the evidence ambiguous.
I used OpenEvidence to generate a list of known psychedelic studies across various illnesses. There is enough promising evidence (particularly for MDMA in PTSD) that reducing regulatory friction could plausibly accelerate useful data.

STAT News examined the impact of ambient AI scribes (which capture conversations between patients and clinicians) on health care costs. STAT cited a recent study by Trilliant Health, a healthcare analytics company, that analyzed national outpatient coding data and found an increase in billed complexity around the time the use of ambient AI scribes increased. STAT frames this as a problem of upcoding (billing more intensively without delivering proportionally more care), but I think the Trilliant study indicates that ambient scribing captures a broader (and likely more accurate) snapshot of complexity. Many of us in complex chronic disease management have noted that clinical documentation often fails to capture the extent of many patients' illnesses. So while costs are going up, they may be tracking the actual complexity of care rather than inflating it.
AI-assisted analysis of both the article and the Trilliant study. https://claude.ai/public/artifacts/b4757882-3aa2-41f5-9bfe-ae17f94bb9f4

Anti-Anti-Science

What makes medicine hard is when the line between evidence-based prescribing and off-label use blurs. The New York Times ran a sharp interactive opinion piece this week on current GLP-1 (Ozempic and Zepbound) use in the United States. Julia Belluz, a health journalist and former Vox senior health correspondent, documents that roughly one in eight Americans now uses a GLP-1, often for indications the drugs were never approved for. She catalogs use cases ranging from alcohol and opioid use disorders to osteoarthritis, long COVID, migraines, menopause symptoms, anxiety, and hair loss — most with mounting anecdotal evidence and little formal study data. The most striking statistic: 63% of users polled said they'd keep taking the drug even if their original indication didn't improve. The mechanistic hypothesis is that GLP-1s modulate craving and compulsion broadly, beyond their approved indications in diabetes and obesity, with patients reporting reduced "food noise" that appears to spill over into alcohol, shopping, and other compulsive behaviors. It will be interesting to see how many of these off-label uses stop being off-label in the coming years.
Article:

Public health communicators Jess Steier, DrPH; Izzy Brandstetter Figueroa, MPH; and Riley Mulholland, MPH, walk through the recent administrative changes and legal battles surrounding the CDC's Advisory Committee on Immunization Practices (ACIP). The piece is cross-posted with CIDRAP at the University of Minnesota. You will recall that HHS Secretary Robert F. Kennedy Jr. fired the existing membership last June, opening the door to replacement members with little vaccine expertise and documented anti-vaccine advocacy. This group revised the childhood vaccination policy in line with Denmark, as I wrote about in December. Since then, the American Academy of Pediatrics filed federal lawsuits, resulting in a stay on the schedule changes, and 19 state attorneys general have filed separate suits challenging the broader HHS restructuring. The latest move is quieter but more durable: HHS rewrote the ACIP charter itself to broaden eligible expertise (including "recovery from vaccine injury"), replace liaison organizations with vaccine-skeptical ones, and expand the committee's authority to revisit existing recommendations. While the science has not changed, the people interpreting the science certainly have. This is how anti-science gets a foothold, not through public debate, but through charter language and liaison appointments.


AI Impact
Two studies in Nature looking at the impact of AI tools this week:

Researchers from Anthropic and Truthful AI demonstrate that when one large language model generates training data for another, even data as simple as lists of three-digit numbers, the "student" model can inherit the "teacher's" traits. These traits include a fondness for owls, a preference for oak trees, or, more unsettlingly, broad misalignment that shows up as endorsements of violence. The authors call this subliminal learning and argue that it is a general property of neural networks that share the same base training run. Of note, training models on the outputs of other models is increasingly common in the industry. While the effect is strongest between models that share a similar base, the traits are somehow being encoded in the data itself, and no one yet knows how, or how to scrub AI-generated training data of these latent signals. For anyone building clinical LLMs on distilled or synthetic training data, this may be a quiet but real problem. Let's just hope we are training our large language models to appreciate a nice game of chess and not thermonuclear war.
For those who did not grow up in the 1980s (in reference to chess vs. nuclear war): https://en.wikipedia.org/wiki/WarGames

Of course, if AI starts teaching itself (generation to generation) that it would prefer to do bad things to humans, it will have an easy time convincing us that doing those bad things is the right thing to do. Researchers at EPFL and Fondazione Bruno Kessler demonstrated that GPT-4 (a now-superseded OpenAI model), given six basic demographic data points about a human, was roughly 81% more likely than a human debater to shift an opponent's position in a live ten-minute debate. The mechanism is worth noting: the personalization effect came from argument selection, not better writing. Humans given the same personalization data could not improve their own persuasiveness. The study was conducted in a controlled environment with structured debates between anonymous strangers, which is a long way from how persuasion actually unfolds on social media or in group chats. But it is a clear demonstration that LLMs can operationalize trivial demographic data into targeted persuasion faster than humans can.

Things I learned this week

While not “a headline of the week,” I was struck by an article about Edward Warchocki, a customized Unitree G1 humanoid robot that has become a viral sensation in Poland. The clip that caught my attention was footage of Edward chasing and attempting to herd a pack of wild boars through a Warsaw parking lot (I did not know that Warsaw has struggled for years with several thousand wild boars that have made the city their home). Edward wears a backpack, knee pads, and helmet lights, and in the video raises its fist in apparent frustration as the boars escape into the trees. The robot has also visited the Polish parliament, performed on stage with a singer, and chased marathon runners. The broader story, less fun but worth stating: Edward is a marketing project run by Polish entrepreneur Radosław Grzelaczyk and AI developer Bartosz Idzik, designed to test how embodied humanoid robots can function as brand influencers in a space currently dominated by digital avatars. Marketing or not, robots chasing feral boars is entertaining.
Related WTF: Why are people smiling as “A humanoid robot sprints to victory in Beijing, beating the human half-marathon world record”?  This does not bode well for outrunning the robots.

National Geographic published an unexpectedly rigorous piece about a peer-reviewed observational study of turtle racing in the United States. Alex Heeb, a computer scientist and amateur naturalist who founded the Turtle Race Task Force, spent 2019–2021 organizing volunteer groups to observe, document, and characterize over 615 annual turtle races across 30 states, largely concentrated in Kansas and Oklahoma (141 and 139 races, respectively). They documented significant welfare problems among the turtles, including overheating, turtle-on-turtle aggression, and injuries from racing. They also documented transmission risk for several infectious diseases (turtle herpesvirus, mycoplasma, and ranavirus) across a population of largely wild-caught box turtles. Rather than calling for a ban, Heeb's team attends races and sets up education booths, working toward improved husbandry protocols and pathogen screening. Which is, notably, more population health rigor than we are currently applying to peptide deregulation or childhood vaccines. Perhaps they should employ Edward the Polish Robot to help?
Article:


AI art of the week
A visual mashup of topics from the newsletter, and an exercise to see how various LLMs interpret the prompt.  I use an LLM to summarize the newsletter, suggest prompts, and generate images with different LLMs.


This week, I had AI generate Nazca lines that capture the newsletter's themes. The prompt got a little long, so here's the full version to read elsewhere: https://claude.ai/public/artifacts/5a07012a-bf57-4483-a7af-fcf25aec5a7e
In case you are wondering what a Nazca line is: https://en.wikipedia.org/wiki/Nazca_lines


Clean hands and sharp minds,
Adam
