Week of March 23, 2026
I often review my patients' charts before I walk into the exam room. I find that intentional, planned conversations yield the best office visits. So last Friday, I walked into my 10:30 appointment prepared to address my transplant patient's worsening kidney function, which I had seen on his labs from the day before.
After the usual hellos, I launched into my concerns, but the patient stopped me almost immediately.
"I think you're reading the wrong line."
He was right. I had misread the chart. My preparation had been genuine and thorough, and wrong. We were both relieved, but I walked out unsettled. What is my baseline error rate? What else do I miss that no one ever tells me about?
As medicine moves toward AI-aggregated data and algorithmic summaries, someone has to serve as the human check, reviewing outputs, catching errors, and understanding the broader clinical context. And as these tools improve, I will be expected to incorporate more data and move faster. But if I can review a chart with intention and attention, and yet still misread it, am I actually an adequate stopgap? Who watches the watcher when the watcher is fallible?
After all, this was a failure of "all-natural" intelligence.
The Google NotebookLM AI-generated podcast version of this week's newsletter.
Science and Technology Trends
I found several articles about a British surgeon who remotely removed a patient's prostate via telesurgery - controlling a surgical robot in Gibraltar from London, about 2000 km away. Thanks to fiber optics and 5G, the delay was only 0.06 seconds. One of the articles offered a great quote: "Mr Buxton, who is originally from Burnham-On-Sea in Somerset but moved to Gibraltar 40 years ago, said that it was a 'no-brainer' to be involved. The 62-year-old told [reporters] that he was happy to be the 'guinea pig' patient, saying the operation has taken Gibraltar from the 'Championship to the Champions League' in terms of access to surgery." I am jealous - I want all my patients to feel that my care is a patriotic endeavor ("Thanks to Dr. Weinstein's management of my blood pressure, all Marylanders are more free!")
I suspect remote surgery will become more common, and that given enough time, autonomous surgical robots could replace surgeons altogether. If humans remain essential in the operating room, I imagine brain-computer interfaces as a likely development. While not quite Neuralink, the prosthetics industry is rapidly advancing how humans interact with technology. For instance, I recently watched videos of Tilly Lockey, a 15-year-old bilateral arm amputee with advanced prosthetics enabling remarkable fine motor control, including the ability to disconnect and remotely operate her prosthetic hands. The idea of one day not just remotely controlling surgical robots but also performing an office physical exam with robotic hands is worth imagining.
About Tilly and the company that makes her prosthetics:
Her hands can be operated remotely from her prosthetic arms:
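The 0.06-second delay quoted above invites a quick sanity check. Here is a back-of-envelope calculation, with my own assumptions (signals in optical fiber travel at roughly two-thirds the speed of light, and the control loop needs a round trip); none of these numbers beyond the 2000 km and 0.06 s come from the articles.

```python
# Back-of-envelope check of the quoted 0.06 s telesurgery delay.
# Assumptions (mine, not from the articles): light in fiber travels at
# roughly 2/3 c, and the control loop requires a round trip
# (command out, video/telemetry back).

SPEED_OF_LIGHT_KM_S = 300_000   # vacuum, approximate
FIBER_FRACTION = 2 / 3          # typical refractive-index penalty
DISTANCE_KM = 2_000             # London to Gibraltar, per the article

one_way_s = DISTANCE_KM / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
round_trip_s = 2 * one_way_s

print(f"Theoretical minimum one-way delay: {one_way_s * 1000:.0f} ms")
print(f"Theoretical minimum round trip:    {round_trip_s * 1000:.0f} ms")
# The quoted 0.06 s (60 ms) is only about 3x the ~20 ms physical
# round-trip floor - modest headroom for routing and processing.
```

So the reported latency is physically plausible: only a few times the hard lower bound set by the speed of light in glass.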
Over the past few years, I've seen data suggesting that psychedelic drugs, like psilocybin, may offer unique advantages for treating depression. Yet these trials often lack a genuine placebo arm. (This isn't surprising: patients can usually tell when they have received the active drug.) Recently, British researchers published a rigorous meta-analysis comparing trials of psychedelics with those involving conventional antidepressants (SSRIs, SNRIs, etc.), using improvements on the Hamilton Depression Rating Scale. Despite the enormous hype in the literature, psychedelic medications were not superior to other antidepressants.
While the study has its limits (it's a meta-analysis across multiple studies instead of a true placebo-controlled trial), this is a useful approach to evaluating drugs with dramatic, easily perceived effects. The medical community's excitement about psychedelics is intense. If nothing else, these results suggest that conventional antidepressants work as well as psychedelics, and the hype may have outpaced the science.
X thread going through the JAMA article in detail:
AI-assisted analysis:
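For readers curious about the mechanics, here is a minimal sketch of how a meta-analysis like this compares trials: each trial's Hamilton Depression Rating Scale improvement is converted to a standardized mean difference (Cohen's d), then pooled with inverse-variance weights so more precise trials count more. All numbers below are hypothetical illustrations, not data from the JAMA paper.

```python
# Minimal fixed-effect meta-analysis sketch (hypothetical data).
import math

def cohens_d(mean_tx, mean_ctrl, sd_pooled):
    """Standardized mean difference between treatment and control."""
    return (mean_tx - mean_ctrl) / sd_pooled

def d_variance(d, n_tx, n_ctrl):
    """Approximate sampling variance of Cohen's d."""
    return (n_tx + n_ctrl) / (n_tx * n_ctrl) + d**2 / (2 * (n_tx + n_ctrl))

# Hypothetical trials: (HAM-D improvement tx, ctrl, pooled SD, n_tx, n_ctrl)
trials = [
    (10.2, 8.1, 6.0, 40, 40),
    (9.5, 8.9, 5.5, 60, 60),
    (11.0, 9.8, 7.0, 30, 30),
]

weights, weighted_ds = [], []
for m_tx, m_ctrl, sd, n_tx, n_ctrl in trials:
    d = cohens_d(m_tx, m_ctrl, sd)
    w = 1 / d_variance(d, n_tx, n_ctrl)   # precise trials get more weight
    weights.append(w)
    weighted_ds.append(w * d)

pooled_d = sum(weighted_ds) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled effect: {pooled_d:.2f} "
      f"(95% CI {pooled_d - 1.96 * se:.2f} to {pooled_d + 1.96 * se:.2f})")
```

The standardization step is what lets trials of different drugs, on the same rating scale, land on one comparable axis - and why a confidence interval straddling the comparator's effect reads as "not superior."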
This is simultaneously fascinating, scary, and, I can't help wondering, possibly a clever law enforcement phishing campaign. "Between May 27 and September 2, 2025, the National Firearms, Alcohol, Cannabis, and Suicide survey (sponsored by NIH and the Department of Veterans Affairs) asked adults in the US about thoughts and behaviors related to shooting others. Respondents were recruited via address-based sampling and text-messaging. We found that 3.3% of respondents seriously thought about shooting another person in the past 12 months, which we estimated to equate to more than 8.5 million US residents; the lifetime prevalence was 7.3%, or more than 19 million people."
It's important to note that this is a cross-sectional, self-reported study addressing the rather vague concept of "thinking about shooting someone." Still, these findings raise challenging questions, demonstrating that key demographic factors are associated with these thoughts about gun violence. Interestingly, gun ownership, income, and political party affiliation did NOT independently relate to thoughts of shooting others.
Article:
AI-assisted Summary:
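The headline counts follow directly from the percentages: prevalence times the adult population base. Back-solving from the reported figures recovers the implied base, a quick consistency check (my arithmetic, not the authors').

```python
# Sanity-checking the survey's population projections: back-solve the
# implied adult population base from each prevalence/count pair.

past_year_prev, past_year_count = 0.033, 8_500_000
lifetime_prev, lifetime_count = 0.073, 19_000_000

implied_base_1 = past_year_count / past_year_prev
implied_base_2 = lifetime_count / lifetime_prev

print(f"Implied adult base (past-year figures): {implied_base_1 / 1e6:.0f} million")
print(f"Implied adult base (lifetime figures):  {implied_base_2 / 1e6:.0f} million")
# Both land near ~260 million, consistent with the US adult population,
# so the two headline counts are internally consistent.
```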
Anti-Anti-Science
One of healthcare's more corrosive dynamics is the moral arithmetic clinicians calculate for patients who seem to "deserve" their illnesses. The alcoholic in liver failure. The smoker with lung cancer. We often don't say it out loud, but this math shapes how we rationalize and distance ourselves from tragedy.
When unvaccinated patients started filling ICUs during COVID, this moral arithmetic went into overdrive, filling the empathy deficit with righteous indignation. The automatic thought that often resonated: they had a choice, and they chose wrong.
A few weeks ago, public health expert Jess Steier published a compelling essay arguing that public health made grief conditional on compliance and that the anti-vaccine movement has leveraged this tendency. The sharpest example in her piece: an 8-year-old girl named Daisy died of measles in Texas last year, unvaccinated. Her father spoke to the Children's Health Defense (an anti-vaccine advocacy organization) in tears, thanking the organization for "showing up" for his family. The public health community, he implied, had not.
Steier is not suggesting that vaccine deniers be given a pass; however, she argues that when a person's death becomes a referendum on their choices more than a loss worth mourning, we all lose a bit of our humanity. Moreover, children make this topic harder. Children don't make vaccine decisions. Yet parents who lose unvaccinated children to preventable diseases often face a particular flavor of public judgment. Steier borrows the psychologist Kenneth Doka's term for this: disenfranchised grief. The loss is real; the social permission to mourn it is not.
This is a thought-provoking editorial because it asks who fills the empathy void when we let exhaustion harden into contempt.
AI-Assisted Summary of the editorial
AI Impact
I've written repeatedly about how AI is "homogenizing" ideas and written output. This week, I found data from a joint UC Berkeley, UC San Diego, University of Washington, and Google DeepMind study demonstrating how AI is shifting the quality and tone of writing, collapsing the diversity of individual voices into narrower semantic clusters atypical for most humans.
The team used three datasets: a controlled user study (with and without LLM access), a corpus of pre-LLM human essays that researchers then had AI revise, and 18,000 peer reviews from ICLR (the International Conference on Learning Representations, a peer-reviewed machine-learning and AI research conference) classified as either human- or LLM-generated. They measured semantic drift using sentence embeddings, word distributions, grammatical frequency, emotional tone, and argumentation style.
Their data showed that while human essays were widely scattered, AI-revised essays clustered tightly together. Pronoun use dropped 40–60% (goodbye, first-person voice), while adjectives surged 57–90%. Heavy LLM users were significantly more likely to take a non-committal stance on a direct question ("Does money lead to happiness?"), suggesting AI assistance doesn't just change how we sound, but what we conclude.
Ironically, in the peer-review dataset (the ICLR data), LLMs scored submitted papers 10% higher than humans and evaluated them on entirely different criteria. LLMs prioritized reproducibility and scalability over clarity and relevance. In other words, the AI research community's peer-review process is already quietly being reshaped by the AI tools being written about.
Article:
News coverage:
AI-assisted review
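The paper's core measurement idea - human essays "widely scattered," AI-revised essays "clustered tightly" - can be sketched as a dispersion statistic over sentence embeddings: mean pairwise cosine distance within a corpus. The vectors below are random stand-ins for real embeddings, and this is my illustration of the concept, not the study's actual pipeline.

```python
# Illustrating "voice diversity" as mean pairwise cosine distance
# over embedding vectors. Random vectors stand in for real
# sentence embeddings.
import numpy as np

def mean_pairwise_cosine_distance(embeddings):
    """Average (1 - cosine similarity) over all pairs of rows."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    iu = np.triu_indices(len(embeddings), k=1)   # each unordered pair once
    return float(np.mean(1.0 - sims[iu]))

rng = np.random.default_rng(0)
dim = 64
# "Human" corpus: points scattered widely in embedding space.
human = rng.normal(size=(50, dim))
# "AI-revised" corpus: same region of space, but tightly clustered.
center = rng.normal(size=dim)
ai = center + 0.1 * rng.normal(size=(50, dim))

print(f"Human spread:      {mean_pairwise_cosine_distance(human):.3f}")
print(f"AI-revised spread: {mean_pairwise_cosine_distance(ai):.3f}")
```

Run on real embeddings, a shrinking spread over time would be exactly the homogenization the researchers describe.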
The company behind Pokémon Go, Niantic, is using data gathered from millions of players to support a robotics delivery service. It turns out that GPS alone is inadequate, especially in dense urban environments. Thanks to the images captured by the Pokémon Go game, they've been able to translate user-provided visual imagery of urban areas into a more powerful map for robotic delivery. More interestingly, because the company chooses where Pokémon appear, they can actually direct users to capture images of specific areas. It is a good reminder of how we, the fleshy humans, generate data for AI and robotics. Recognize that this is a fluff piece that very much highlights the "wow" of the technology without independent review, but it is still thought-provoking.
Article:
AI-Assisted review:
Things I learned this week
Japanese researchers have unveiled a robot monk powered by AI that they say can dispense spiritual advice and maybe one day ease shortages of its human counterparts. The University of Kyoto says the machine, trained on even the most esoteric Buddhist scriptures, can answer sensitive questions that people may feel hesitant to share with other humans. Irony is not only dead, but is now a deceased horse being beaten.
I learned that in the 1920s, Ralph W. Thomas began selling clay jars filled with radium, claiming that radioactive water could "restore health." His Thomas Radium Revigator went through various iterations, and demand was so great that he sold the company around 1924. Over the next six years, a variety of companies sold radioactive water containers and radium-infused clay blocks to put in water containers, all touting the vague notion of improved well-being or health. By 1930, the Department of Commerce had restricted the sale of direct-to-consumer radioactive products (including Revigators). In 2010, chemical engineers from Mount St. Mary's College and the National Institute of Standards and Technology (NIST) tested examples of these vessels to determine the degree of radioactivity they imparted to the water. Interestingly, although the radon levels in the water were high, the estimated health risk from the radioactivity itself was low; other chemicals leaching from the clay (such as arsenic, vanadium, and uranium), however, were well above current OSHA standards. Thus, the NIST article concludes, "although the levels of radon in the water were high, the [2010 research study] found that, compared to the myriad other disease-related causes of mortality at the time, the chances of dying as a result of drinking radon-infused water were relatively low." The real question is, what things do we do today that will be mocked as quackery a hundred years from now?
NIST Article on the 2010 Research
See the Revigator:
Oak Ridge National Laboratory has an online and physical museum of radioactivity with a really interesting array of artifacts.
Other government agencies have more benign collections. For instance, last week I learned that the largest collection of flutes in the world is housed at the Library of Congress (which has approximately 2,000 flutes and related instruments). In 2022, Lizzo played the famous glass flute once owned by President James Madison.
AI art of the week
A visual mashup of topics from the newsletter, and an exercise to see how various LLMs interpret the prompt. I use an LLM to summarize the newsletter, suggest prompts, and generate images with different LLMs.
A Romantic-era scientific illustration rendered as a large-format engraved plate from an 1820s natural philosophy compendium, hand-colored in muted ochres, verdigris, and sepia ink washes with delicate cross-hatching. The scene is composed as a single tableau, lit by warm candlelight from the left and bathed in deep chiaroscuro shadows, in the manner of Joseph Wright of Derby.
At the center of the composition, a robed Buddhist monk sits in meditative posture — but his face is clearly mechanical, with visible brass gears at the temples and glass eyes that emit a faint luminescence. He holds a scroll of Sanskrit text. Seated across from him is a patient gentleman in period dress, apparently seeking counsel.
To the left, a formally dressed surgeon in a powdered wig stands at a writing desk, his hand resting on a brass mechanical apparatus connected by thin wires to a disembodied pair of articulated robotic hands on a separate table across the room. The hands are performing a delicate procedure on an anatomical model. A small placard beneath reads "Telesurgery — London to Gibraltar, 0.06 seconds."
In the foreground, prominently displayed on a velvet cloth as if it were a museum specimen, sits a clay crock pot with the hand-lettered label "Radium Revigator — Restores Vital Health." A small skull and crossbones is engraved discreetly beneath the label. Beside it, a pamphlet reads "The Cure of the Age."
In the upper right corner, a glass flute floats suspended on a ribbon, labeled in copperplate script "Property of President James Madison." A small figure in the background resembling Lizzo regards it with evident admiration.
The entire image is framed with an ornate engraved border of botanical motifs, and captioned at the bottom in formal italic type: "Selected Wonders and Cautionary Observations from the Natural and Mechanical World, Anno Domini 1823."
Clean hands and sharp minds,
Adam