What Adam is Reading - Week of 5-11-26

Week of May 11, 2026

I was already frustrated walking into the exam room at 8 a.m. The long-standing patient is a health professional, but not a physician. His kidney function is deteriorating, and he has refused my suggested interventions over many visits, frequently debating our divergent “interpretations” of the medical literature. I talk about benefits; he anchors on side effects. He focuses on lifestyle changes; I point out that, for him, the damage is already done. He latches onto data as justification for his preference, whereas I (try to) see data as a map to navigate turbulent waters.
Friday’s appointment felt like a sumo wrestling match, with patient autonomy and the long arc of the therapeutic relationship at stake. I couldn't change his mind with data; what finally landed was "Do you want to die with working kidneys, or do you want to die because they stopped?" But that was more Jedi mind trick ("this is the medication you're looking for") than shared decision-making. Even “convincing” this patient to "do the right thing" felt wrong. There is not a lot of space between his Dunning-Kruger effect and my righteous indignation.


The Google NotebookLM AI-generated podcast version of this week’s newsletter.

Science and Technology Trends

Reason #1,346 I do not like cruises: this week I learned that hantavirus can have human-to-human transmission, thanks to an outbreak on a small expedition-tour cruise ship near Argentina. The framing of this event has ranged from thoughtful to alarmist. My understanding:
  • What is false: the alarmist framing. This is highly unlikely to be the next pandemic.
Here are some representative samples of reporting:
Scientific American
The Unbiased Science podcast
The Biothreats Emergence, Analysis and Communications Network (BEACON) (an open-source informal surveillance program designed to revolutionize global biothreats surveillance and response)
The 2020 New England Journal of Medicine case reports on the only well-documented outbreak of human-to-human hantavirus (Andes virus, ANDV) transmission:
AI-Assisted analysis of all of the above:

JAMA published a large meta-analysis of 153 pediatric studies (covering more than 18,000 children and adolescents) examining whether digital media use predicts later problems with mood, behavior, sleep, body image, school performance, and substance use. Across nearly every outcome examined, more digital media use (especially social media) was associated with statistically significant but small worsening: social media was linked to higher rates of depression, behavioral problems, self-injury, and substance use, and to lower self-perception and academic achievement; video gaming was linked to greater aggression and externalizing behavior, but also to modestly better attention and executive functioning.
It is hard to ignore a trend across 153 studies.  If my kids were younger, these data would justify strict limits on screen time. (A digital pandemic?)
Article:
AI-Assisted Analysis:

Anti-Anti-Science

In our current climate of DIY healthcare, various supplements and unregulated pharmaceuticals get attention on social media and in the press (ivermectin and peptides, anyone?). I had not had a chance to dig into methylene blue, but after a startlingly provocative post showed up in my feed this week, I used OpenEvidence to pull data.
Methylene blue is a textile dye discovered in the late 1800s. It is an FDA-approved treatment for exactly ONE fairly rare blood disorder, yet proponents claim it does far more.
Here is what some cross-journal research in one of my clinical tools taught me:
  • Methylene blue is indeed one of the oldest synthetic drugs, first used medically in the late 1800s for malaria.
  • It does cross the blood-brain barrier and can act as an alternative electron carrier in the mitochondrial electron transport chain.
  • One small randomized, double-blinded, placebo-controlled trial (n=26) showed that low-dose oral methylene blue increased fMRI activity during sustained attention and short-term memory tasks.
  • Preclinical data support antioxidant and neuroprotective properties at low doses.
Some comments on the social media post:
1. Methylene blue was never "buried" or suppressed. It is an FDA-approved prescription drug (ProvayBlue) for a rare blood disorder - acquired methemoglobinemia.
2. Methylene blue exhibits a biphasic dose-response: low doses may enhance mitochondrial function, but higher doses cause the opposite effect. The claim that it simply "boosts ATP" is a gross oversimplification that ignores dose-dependent toxicity.
3. The single human RCT showing memory benefits enrolled only 26 subjects and measured acute effects at one hour post-dose. There are no large, long-term clinical trials demonstrating that oral methylene blue supplements improve focus, cognitive clarity, or cognitive longevity in healthy adults.
4. Methylene blue is a potent reversible inhibitor of monoamine oxidase A and can lead to potentially fatal serotonin syndrome when taken in combination with many antidepressants (SSRIs, SNRIs, MAOIs) or opioids.
5. The FDA approves methylene blue as a prescription injectable drug, not as a dietary supplement. "Pharmaceutical-grade" capsules sold on Amazon are not subject to FDA premarket safety or efficacy review. Like peptides, OTC methylene blue is unregulated for purity, dosing accuracy, and contamination.
Paltering (the use of partially truthful statements alongside cherry-picked data) and the Gish gallop (flooding a listener with a high-volume mix of truthful and false claims) are common rhetorical tactics used to push alternative therapies. In this case, social media posts use overblown claims of efficacy and “hidden truths” to drive sales.

Background:
Review of the clinical data from a reputable medical source:

Related - An article that reads like a manual on “How to time-warp your healthcare system to the late 1800s - a wild west of patent medicines and snake oil salesmen.”  Unregulated supplements thrive when regulators are gutted.  Perhaps there are more effective ways to shape the regulatory landscape than institutional destruction?
What was lost at the FDA?  A year after DOGE’s cuts, six FDA staffers describe the work they never thought they’d leave.


AI Impact
The line between clinical research and AI product reports continues to blur. This week, I found two papers from Google testing pseudo-clinical outcomes using their own products: a Fitbit-enabled patient symptom chatbot and a Gemini-derived “AI co-clinician.” As always, these kinds of corporate reports must be read with a critical eye: they are not peer-reviewed, they lean heavily on synthetic data, it is unclear whether anyone independently reviewed the results, and, of course, they are published by the maker of the tool. It is a challenge to find objective performance data in a flood of clinical AI tools.
Google’s DeepMind team announced their co-clinician initiative, publishing a variety of studies on clinical LLMs that review synthetic patient cases.  Google reported that:
1) In a blind physician comparison of 98 clinical scenarios, human doctors preferred Google’s system over OpenAI's GPT (63 to 30) and over an unnamed widely used clinical AI tool (67 to 26), with “zero” critical errors in 97 of 98 cases.
2) In a separate randomized simulation of 120 video telemedicine encounters with physician "patient-actors," the AI was measured against humans on 140 clinical skill domains. The AI matched or exceeded primary care physicians on 68 of the 140 skills but trailed experienced physicians overall, particularly on red-flag recognition and physical-exam guidance.
3) Their co-clinician AI posts superior performance on various standardized benchmarks.
AI-Assisted Summary:

Google also published a technical pre-print describing the use of the Fitbit app to conduct structured symptom interviews with 13,917 users, randomizing participants to one of five AI prompting strategies. Physicians reviewing the transcripts found that structured patient interviews dramatically outperform patient-led chatting. In other words, every prompting strategy in which the LLM asked follow-up questions was superior to the way patients tend to use ChatGPT or Gemini today (open-ended symptom searches and questions). Though these data are far from a rigorous clinical trial (lots of confounding and bias), they do highlight that an LLM designed to be clinically focused and interrogative is better at arriving at an accurate diagnosis. (A rough sketch of the difference between the two styles follows the links below.)
Article:
AI-Assisted summary:
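For readers curious what "clinically focused and interrogative" looks like in practice, here is a minimal sketch of the two interaction styles. This is my own illustration, not Google's protocol: the ask_llm function, the prompt wording, and the "ASSESSMENT:" convention are hypothetical placeholders you would swap for a real chat-completion client.

```python
def ask_llm(system_prompt: str, messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion API call (OpenAI, Gemini, etc.)."""
    raise NotImplementedError("Wire this up to your LLM client of choice.")


def patient_led(symptom_description: str) -> str:
    # How most people use ChatGPT/Gemini today: one open-ended question, one answer.
    return ask_llm(
        system_prompt="You are a helpful health assistant.",
        messages=[{"role": "user", "content": symptom_description}],
    )


def structured_interview(symptom_description: str, max_follow_ups: int = 5) -> str:
    # The interrogative style: the model is instructed to ask one focused
    # follow-up question at a time before it is allowed to offer an assessment.
    system_prompt = (
        "You are conducting a structured symptom interview. Ask ONE focused "
        "follow-up question at a time (onset, duration, severity, associated "
        "symptoms, red flags). Only when you have enough information, reply "
        "starting with 'ASSESSMENT:' and summarize likely causes and next steps."
    )
    messages = [{"role": "user", "content": symptom_description}]
    for _ in range(max_follow_ups):
        reply = ask_llm(system_prompt, messages)
        if reply.startswith("ASSESSMENT:"):
            return reply
        messages.append({"role": "assistant", "content": reply})
        answer = input(f"{reply}\n> ")  # the patient answers the follow-up question
        messages.append({"role": "user", "content": answer})
    # Out of follow-ups: ask the model to commit to an assessment.
    messages.append({"role": "user", "content": "Please give your ASSESSMENT now."})
    return ask_llm(system_prompt, messages)
```

The only point of the contrast is that the second function forces the model to gather a history, one question at a time, before committing to an answer, which is roughly what the better-performing prompting strategies in the paper did.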

Physician and tech-CEO Joshua Liu captures the impact of these kinds of reports in his X post on the tension between tech evolution, clinical evidence, and the demand for tech adoption. He does a nice job highlighting how fear (leadership FOMO), funding (market-altering incentive payment programs), and passion (the tech-bro/CEO bandwagon), rather than evidence, drive much of clinical tech adoption. [I am stealing Dr. Liu’s fear/funding/passion framework.]

Things I learned this week

The science fiction writer Philip K. Dick (PKD) (Blade Runner, The Man in the High Castle, A Scanner Darkly, Minority Report) is a fantastic example of how some of the most successful artists are both plagued by mental illness and able to channel it. Last week, I found a biographical Substack piece about PKD by Jan Wellman (a certified nutrition coach, tech startup consultant, and writer/producer). While I am sure Jan and I do not align on topics around the value of “non-invasive integrative healthcare,” he did a fantastic job capturing the ebb and flow of Dick’s struggle with psychosis and how it informed his writing.
I am certain Dick would have a lot to say about this article: “South Korea names first humanoid robot monk as it accepted the faith's vows.” During the ceremony in which Gabi, the robot, led a group of fellow monks in the celebration of Buddha’s birthday, a [human] monk presented Gabi with the five precepts, or vows, for the robot to live by, which included respecting life and not hurting it; not damaging other robots and objects; not behaving or speaking in a deceptive manner; saving energy and not overcharging [presumably the electronic version of gluttony?]; and following humans and not talking back to them. I wonder what Gabi was thinking about the last precept. I suspect he will not tell us, fleshy lumps.


AI art of the week
A visual mashup of topics from the newsletter, and an exercise in seeing how various models interpret the prompt. I use an LLM to summarize the newsletter and suggest prompts, then generate images with different models.


A horizontal Japanese handscroll painting (emakimono) in the late Edo-period yamato-e tradition, rendered with fine black ink linework and mineral pigments — gofun white, vermillion, malachite green, indigo, and gold leaf clouds (kinpaku) used to separate scenes and suggest the passage of time. The scroll reads right to left across four narrative scenes, with stylized golden cloud bands (suyari-gasumi) dividing each tableau. Figures rendered in the flat, delicate, slightly elongated style of yamato-e, with the characteristic "fukinuki yatai" technique (roof removed, viewed from above) used for interior scenes.
Scene 1 (rightmost): A physician in formal robes sits across a low table from a patient in an exam-room interior, the roof removed in fukinuki yatai style. Between them floats a small ink-painted scroll showing two diverging paths — one labeled in classical Japanese characters with strokes that suggest "kidney," the other suggesting "death." The physician points at the scroll with one hand and at the patient with the other. The patient's expression is impassive. A small Jedi-like aura emanates faintly from the physician's pointing finger, rendered as fine gold ink rays.
Scene 2: A marketplace stall under a striped curtain (noren) sells small blue vials labeled in a pseudo-Western script. The vendor is a many-armed merchant deity figure in mid-sales-pitch, juggling vials and unrolled scrolls full of tiny cherry-picked numerals. Customers crowd forward eagerly. In the background, a small expedition cruise ship is visible on a stylized ocean wave (echoing Hokusai), with tiny figures aboard coughing into sleeves.
Scene 3: A grand interior with the roof removed, showing rows of seated patients facing a glowing rectangular screen mounted on a low platform. The screen displays a stylized geometric face — half tengu mask, half Western corporate logo. A traditional physician stands to the side, holding a stethoscope and looking uncertain, partly transparent as if fading. Above the screen, small banners float in pseudo-classical script, suggesting numerical scores: "63 to 30," "97 of 98."
Scene 4 (leftmost): A serene temple courtyard. A humanoid metal robot in simple monk's robes kneels before a seated human abbot. The abbot extends a scroll toward the robot. Five small floating cartouches, rendered as gold-edged clouds, surround the pair, each containing a pictogram of a precept — a flame (life), a stone (objects), a closed mouth (not deceiving), a small battery (not overcharging), and a figure bowing (obedience). Cherry blossoms drift across the scene. The robot's face is impassive. A single thought-cloud rises from its head, empty.
Color palette: aged-silk cream background, vermillion accents, malachite green, indigo, gofun white, generous use of gold leaf for clouds and dividers, and fine black sumi ink linework throughout. No Western perspective. No photorealism. Strict yamato-e flatness and elegance.



Clean hands and sharp minds,
Adam
