What Adam is Reading - Week of 3-30-26


Week of March 30, 2026

With our younger son home for spring break, last week was filled with atypical events. At a talk by Erik Larson (who wrote The Devil in the White City), we were introduced to the antebellum South's culture of dueling for honor. We visited Antietam, the site of the largest single-day casualty count in American military history, reconstructed in minute-by-minute tactical detail (among the most documented U.S. military battles, after Gettysburg). And, courtesy of the movies The Wicker Man (1973) and Days of Heaven (1978), I was reminded that "critical acclaim" doesn't always equal enjoyable.
My son's fresh, well-educated perspective is a reliable counterbalance to my more age-hardened interpretations. Where he sees anachronistic rituals, historical tragedy, and artistic cinema, I now see dueling as an underutilized HR tool, wonder if time-traveling historians start wars for job security, and suspect 1970s film critics were likely inebriated.

Antietam, minute by minute:
Movies:
and

Administrative note: I am incrementally aging next week and may take the week off from writing, pending the degree of celebration.

The Google NotebookLM AI-generated podcast version of this week's newsletter.

Science and Technology Trends

Somewhere at the intersection of 1800s patent medicine and real science, I have watched red light therapy (aka photobiomodulation) evolve from pseudoscience to sort-of science. While there is a long way to go before we understand the full clinical potential, Scientific American did a fantastic job of pulling together as much data as possible into a very readable review. I had the clinical website Open Evidence pull all available research studies and comment on the quality of the data, which I linked below. In case all this is too much, here is Open Evidence's attempt at a TL;DR:
Red-light therapy (AKA photobiomodulation) has a respectable body of evidence supporting its use for conditions ranging from nerve pain and hair loss to knee arthritis and chemotherapy-related fatigue. The working mechanism is surprisingly elegant: specific wavelengths of light activate an enzyme in the mitochondria, essentially coaxing cells to produce more energy and repair themselves more efficiently. The evidence is real but imperfect; most studies are small and inconsistent in how the therapy is delivered.  A recent expert consensus panel gave it a qualified endorsement for several conditions, and its side-effect profile is about as benign as it gets (social awkwardness of wearing a creepy mask aside).
Article:
AI-supported summary:
AI-supported clinical data review:

I clicked this article expecting to be entertained by AI cow collars and the word 'cowgorithm'.  I lingered on the article because of the amusing, unflattering photo of Peter Thiel photoshopped with cows. And yet, this seems like a really good idea:
Halter's (cow collar) technology works through a system of solar-powered collars and in-pasture towers that collect data—some 6,000 data points per collar per minute—from grazing cattle and feed it into a cloud-based platform and app for farmers. The collars are ergonomically designed to be comfortable for the cattle wearing them and leverage AI to play audio cues or vibrate when it is time to move to a different grazing location or when they step outside a predetermined zone. The collars can also deliver a small electric pulse if an animal does not respond.
I can only assume the audio cues are a Spotify playlist of Nine Inch Nails, Disturbed, and Rage Against the Machine.  We all know how cows respond to politically motivated rage metal.
Article:

Recently, my wife and I started watching HBO's The Pit. A few episodes in, I had to abandon the series. It was so well done that it triggered some unsettling memories from my medical school rotations at Shock Trauma in Baltimore.
I was still pleased to find that Dr. Jeremy Faust, one of my favorite medical opinion writers, interviewed Joe Sachs, MD, an emergency physician and executive producer of the series. Dr. Sachs, who still works ER shifts and has been a medical expert for TV and movies since E.R., discusses an interesting array of problems encountered when trying to make TV medicine both dramatic and accurate. This interview is a fascinating biopsy of anecdotes from a career of making medical dramas.
Article/Interview:

Tulane pediatrician-epidemiologist Thomas Farley published a fantastic blog post about the astonishing mortality associated with COVID-19. His recent Substack looks at mortality data and estimates that there were approximately 1.36 million excess American deaths (a 25% spike in age-adjusted mortality) between 2019 and 2021, more than the 1918 influenza pandemic in absolute terms, though 1918 exceeded COVID on a per-capita basis. He offers a breakdown at the state level: states with higher vaccination and mask rates had lower mortality. Keep in mind he is biased, though he states his motivation is resisting slow-rolling revisionist history in the wake of anti-science sentiment and cuts to public health. The comments section offers some criticism of his analysis (for instance, not using per capita death rates when comparing COVID to the 1918 flu). And though some of this data has been published before, this is the best comprehensive analysis I've seen in some time. Farley concludes by reflecting on things that should not be forgotten: how horrible the pandemic was, how time dulls painful memories, and that vaccines saved many, many lives.
Substack:
AI-assisted Summary:


Anti-Anti-Science

HHS is about to host the first showcase for its consumer-facing health tech initiative, which aims to certify AI apps that interpret health data directly for patients. Participating companies registered to develop interoperable, patient-facing mobile applications that use AI to interpret health data and help inform patients. The goal is laudable: to democratize access to and analysis of health information so patients can make more informed choices. However, details matter. My main concern is that the HHS program is very focused on the tech and less on the quality of the clinical output. While many vendors will act in good faith, there will inevitably be clinically dubious applications entering the market. And even those that attempt to offer clinically valid and appropriate applications face the problem that there are many unknown unknowns in the AI space. Last week, I found several related articles that argue for caution in deploying clinical AI.

Vanderbilt researchers found that LLMs struggle to translate common qualitative risk terms into actual probabilities. Words like "rare" or "common" have numerical definitions in some circumstances. However, when LLMs were asked to interpret phrases like "my doctor said there is a high likelihood that I will have X rare condition," there was high variability among different LLMs. Moreover, the LLMs frequently refused to provide numeric interpretations, with abstention rates ranging from about 31% to 94% across the four tested LLMs (and even higher when questions involved anxious language or higher clinical severity). The takeaway is that translating medical terminology and physician statements into patient-understandable language remains imprecise and potentially problematic. This could be solved with appropriate training or filtering of answers, but out-of-the-box LLMs, which probably power an enormous number of soon-to-be-available patient-facing applications, may not be doing the heavy lifting of refining outputs for the appropriate clinical context. Bearing in mind that this article is largely a report on the research rather than the full study, and that only a limited number of conditions were tested with non-medical LLMs, the data is still a good reminder that giving medical advice carries a degree of risk.
HHS Program Website:
Article:
AI-supported summary:

Psychiatrist John Torous, MD, the Director of Digital Psychiatry at Beth Israel Deaconess in Massachusetts, recently emphasized similar points. He was interviewed by JAMA after giving congressional testimony before the House Committee on Energy and Commerce on the risks and benefits of AI chatbots in psychiatry. In his testimony, Torous stated that "no AI mental health chatbot has passed even a minimal evidentiary threshold." Most RCTs compare a chatbot against a waitlist, a comparison so easy that almost anything interactive would win, purely on expectation and novelty. Torous argues that AI chatbots probably work about as well as a self-help book: useful for some people, not enough for most, and unlikely to move population-level mental health outcomes. And, in some instances, chatbots may reinforce underlying mental health issues like anxiety or OCD.
JAMA+AI published an interview with him this week.  
Article:
AI-supported analysis:

Likewise, I found this paper from Danish researchers (including Søren Dinesen Østergaard, a well-known AI medical researcher) who searched the electronic health records of every patient seen for psychiatric care in Central Denmark between September 2022 and June 2025, spanning the post-ChatGPT-launch era. Two clinicians independently reviewed the charts of any patient whose clinical notes contained an indication of LLM use: words like "ChatGPT," "chatbot," or 22 common misspellings of each. The clinicians assessed whether chatbot use appeared to have contributed to or exacerbated the patient's psychological distress. Among the 10.7 million medical records for 53,974 unique patients, researchers identified 181 notes from 126 unique patients that mentioned chatbots or AI. Of those 126, 38 patients had indications of chatbot-related psychological distress or harm. Most commonly, patients with delusional disorders seemed to have their delusions reinforced by chatbots (not a surprise in a world of effusively validating LLMs). However, the authors also found instances where chatbots helped patients with suicidal ideation, self-harm, disordered eating, depression, and OCD.
Article:
AI-supported summary:

Taken together, these data point to potential clinical pitfalls in patient-facing applications. While these articles are mostly about psychiatry, other clinical interpretations and direct therapeutic recommendations also need to be appropriately vetted to maximize patient understanding. There is an accumulating body of data indicating that many people take AI recommendations at face value, often failing to recognize when answers are wrong and instead interpreting them as "the AI knows something I don't."


AI Impact

The AI Daily Brief Podcast published a thoughtful episode on the intersection of AI and the workforce, "'Will AI take your job?' is the Wrong Question." Host Nathaniel Whittemore essentially makes seven arguments against the conventional AI displacement narrative:
  1. While knowledge work will be impacted, AI is exacerbating, not causing, the broken college-to-white-collar pipeline.
  2. Many companies citing AI as the reason for cuts may be crafting a favorable investor narrative to justify reductions in pandemic-era hiring.
  3. AI's dominance in software coding probably doesn't generalize cleanly to other knowledge work domains.
  4. People often want human judgment and interaction, and that preference will drive jobs.
  5. Every major technology wave has triggered job-apocalypse fears, most of which were wrong at the macro level.
  6. AI unlocks entirely new categories of demand and output, making net economic and job growth more likely than job loss.
  7. If AI displacement is truly catastrophic, society will have to restructure itself around non-work-based participation.
Recognizing that Whittemore is perpetually optimistic about AI, he nonetheless makes a reasonable case that AI will expand and transform the nature of work rather than displace humans. It's worth a listen, but do so with a critical mind. His seventh point, though, presumes a level of competent governance that the current moment does not inspire confidence in.
AI-summarized transcript of the episode:

And, given all of the above, I think it's worth revisiting some of Kirkpatrick Sale's writing: Rebels Against the Future: The Luddites and Their War on the Industrial Revolution. The 1995 book examines the anti-industrial movement of the early 1800s and argues that the Luddites were not simply anti-technology but were resisting the social and economic harms of early industrialization: essentially, a struggle to maintain their human-ness amid mass manufacturing and interchangeable factory jobs. The book is a classic neo-Luddite text and a historical argument against the concentration of labor, technology, and power. Sale was writing in the early 1990s, a time of dial-up modems, car phones, and standard-definition CRT televisions. At least Whittemore's fifth argument seems to be true: human reaction to change is consistent and hyperbolic, and major societal shifts spawn reflective literature. I wonder if my grandkids will interpret the dystopian worlds of Ernest Cline (Ready Player One) as the Upton Sinclair (The Jungle) of the 21st century (though where Sinclair was appalled, Cline seems to nostalgically revel in technofascism).
About the Sale book:
If you want more:


Things I learned this week

I learned humans now have the technology to transport antimatter in a truck (I was expecting spaceships, but innovation is incremental). Scientists at CERN built a very large, heavy magnetic container that could be loaded into the back of a truck. This container held 92 antiprotons in a cooled magnetic field, which they drove around for several hours to test its stability. The ultimate goal is to move antimatter outside the Large Hadron Collider (LHC) at CERN for testing and use at other research facilities. I do not know what the liability is for transporting antimatter, or how one gets insurance against accidents that could release gamma radiation and annihilate anything touched by escaped antiprotons (assuming one is transporting an appreciable quantity of antimatter). However, I did enjoy the truck's painted moniker, "Antimatter in Motion."
And, of course, truck-based cargo theft is on the rise. For instance, a truck carrying 400,000 Kit Kat bars was stolen in Switzerland on its way from Italy to Poland. This was apparently the main supply of Kit Kats for Poland during the Easter holiday, and Nestlé, Kit Kat's maker, reports that Polish Kit Kat availability may be impacted. Let us all be glad that it is difficult to weaponize stolen Kit Kats, unless they, too, are made from antimatter.

Headline of the week: "Charlie Kirk's Mentor Jeff Webb, the Father of Modern Cheerleading, Dies in Freak Pickleball Accident."  
Two brief thoughts:
  1. As one loyal reader stated, "There is too much to unpack here.  Best leave the link unclicked."  
  2. And, I again remind my readers of just how dangerous Pickleball is: https://pmc.ncbi.nlm.nih.gov/articles/PMC11758564/



AI art of the week
A visual mashup of topics from the newsletter, and an exercise to see how various LLMs interpret the prompt.  I use an LLM to summarize the newsletter, suggest prompts, and generate images with different LLMs.


Quick Latin Phrase Guide:
"Bos taurus collari electronico ornatus" — The cow, adorned with an electronic collar.
"Duellum pro securitate laboris" — A duel for job security
"Lux rubra medicamentum incertum" — Red light, medicine of uncertain virtue
"Cistellae chocolate furtim ablatae" — Kit Kat bars, stolen away by stealth.
"LLM non satis doctus est" — The large language model is not sufficiently learned.
"Pickleball occidit iterum" — Pickleball kills again.

An illuminated manuscript page in the style of the Limbourg Brothers' Très Riches Heures du Duc de Berry (circa 1410), rendered as a full folio page on aged vellum with rich ultramarine, vermillion, and burnished gold leaf accents. The central miniature depicts, in the earnest devotional style of a Book of Hours, a monk in a scriptorium — but the monk wears a solar-powered collar identical to those worn by the cattle visible through the arched window behind him. The cattle in the background pasture wear matching collars and appear to be engaged in theological contemplation. In the left margin, a vertical column of marginalia scenes, each bordered in gold: a tiny antebellum gentleman dueling with a pistol against a skeleton labeled "HR DEPARTMENT"; two armored knights on a battlefield strewn with Kit Kat bars, one labeled "ANTIETAM" and one labeled "POLAND"; and a robed figure carrying a glowing red panel of light therapy, labeled "PHOTOBIOMODULATION, ANNO DOMINI 2026." In the right margin: a wheeled cart bearing a large magnetic vessel labeled "ANTIMATERIA — HANDLE WITH CARE — CERN"; and a small illuminated scene of a scribe consulting a glowing rectangular tablet, the tablet emitting rays of light and labeled "CHATBOTUS PSYCHIATRICUS." The bottom border contains a bas-de-page scene of pickleball players in medieval dress, one collapsed dramatically, labeled "MORS SUBITA PER LUDUM RIDICULUM." The color palette is strictly medieval: lapis lazuli blue, vermillion red, malachite green, lead white, and burnished gold. Latin captions throughout in Gothic blackletter script. No modern design elements. No humor signaled through style — the deadpan devotional seriousness of the original manuscript tradition must be maintained throughout.



Clean hands and sharp minds,
Adam
