Week of January 19, 2026
Several times a year, my job involves speaking on live or recorded high-definition video. At these events, there is often an esthetician who applies foundation and powder to my shiny forehead (and facial features). The first few years were exceedingly uncomfortable; medical training did not prepare me for makeup. After last week's talk, I found myself surprised: 50-year-old me gazing in the mirror, thinking, 'Wow, this really does even out my skin tones, reduce glare and hide the fine lines of aging.' Maybe I will ask for Sephora gift cards for my 51st birthday.
The Google NotebookLM AI-generated podcast version of this week's newsletter.
Science and Technology Trends
Jennifer Doudna (who shared the 2020 Nobel Prize in Chemistry for developing CRISPR gene editing) authored this article on the current status of gene editing and why delivering gene editors to targeted organs in the body is more complex than the editing itself. Ongoing trials are demonstrating the success of gene-editing therapies for liver diseases and eye conditions. Doudna describes the current state of the art: most organs remain difficult to target precisely (I am eager to see what we can do for hearts, brains, and kidneys!), costs run to millions of dollars per person, and routine multi-organ gene therapy is still probably 5-10 years away.
Article:
AI-Supported Summary:
In addition to a newfound appreciation for makeup, I also came home from last week's meeting with influenza A. I am nasal-spraying to reduce symptom time, per an August 2024 Lancet article. Given the cough and malaise, using a saline spray 6 times per day feels like a small trade-off with limited downside risk if it shortens symptom duration by 20%. There is a growing body of literature on "flushing" the nasal passages - from neti pots to saline sprays - and the interventions all seem to offer modest clinical benefit with minimal downside.
Article:
AI-Supported Summary:
Broader review of the medical literature:
Anti-Anti-Science
The US Department of Health and Human Services announced a "study to evaluate the risks of cell phone exposure." Studies on cell phone radiation and brain tumors show contradictory results—some find no risk, others suggest increased glioma rates with heavy use over 10+ years. But the real issue with HHS's announced study isn't whether we should investigate; it's whether they'll design it well enough to control for hugely variable radiation exposure and usage patterns. The press release's mention of school cell phone bans suggests this might be more about screen time concerns than RF radiation science.
Article:
Summary of medical and scientific literature:
Italian and British researchers recently published in The Lancet a meta-analysis of over 40 studies involving millions of births. They found that taking Tylenol (acetaminophen, known as paracetamol outside the US) during pregnancy at typical, recommended doses does not increase the risk of autism, ADHD, or intellectual disability in children. These data support existing medical guidance that acetaminophen remains the safest pain and fever medication for pregnant people; untreated fever and pain during pregnancy can cause miscarriage, congenital anomalies, and preterm birth. The recent "discussions about the risks of Tylenol" most likely reflect confounding - sick pregnant people who need medication have different baseline risks than healthy ones, and it is the underlying illness (especially fever) that poses the fetal risk, not the acetaminophen (a toy simulation of this confounding-by-indication effect follows below). I strongly recommend reading the AI summary, as it offers a fantastic framework for thinking about meta-analyses.
Article:
AI-Supported Summary:
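To make the confounding-by-indication point concrete, here is a minimal simulation sketch in Python. Every rate in it is invented for illustration, not taken from the meta-analysis. In this toy world the drug has zero effect on outcomes, yet a naive treated-vs-untreated comparison makes it look roughly twice as risky, because fever drives both treatment and bad outcomes.

```python
import random

random.seed(0)

# Toy confounding-by-indication simulation. All rates below are invented
# for illustration; they are NOT from the Lancet meta-analysis.
N = 100_000
P_FEVER = 0.15          # share of pregnancies with significant fever/illness
P_TREAT_FEVER = 0.80    # febrile patients usually take acetaminophen
P_TREAT_WELL = 0.10     # healthy patients rarely do
P_OUTCOME_FEVER = 0.06  # fever itself raises the risk of a bad outcome
P_OUTCOME_WELL = 0.02   # baseline risk; the drug adds NOTHING in this world

treated = {"n": 0, "events": 0}
untreated = {"n": 0, "events": 0}

for _ in range(N):
    fever = random.random() < P_FEVER
    took_drug = random.random() < (P_TREAT_FEVER if fever else P_TREAT_WELL)
    event = random.random() < (P_OUTCOME_FEVER if fever else P_OUTCOME_WELL)
    group = treated if took_drug else untreated
    group["n"] += 1
    group["events"] += event

rate_t = treated["events"] / treated["n"]
rate_u = untreated["events"] / untreated["n"]
print(f"treated {rate_t:.2%} vs untreated {rate_u:.2%}; naive risk ratio {rate_t/rate_u:.2f}")
```

Stratifying by fever status (comparing treated vs. untreated separately within the fever and no-fever groups) makes the spurious effect vanish, which is, in spirit, what well-designed confounder-adjusted analyses do at scale.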
I learned about John Crawford, an early 19th-century physician who strongly advocated for vaccination and brought the smallpox vaccine to Baltimore in 1800, just four years after Jenner's first successful vaccination. Crawford was an early, firm believer in germ theory, which was still 6 to 8 decades away from general acceptance in medicine, and Baltimore's medical community ostracized him for advocating the not-yet-supported idea. His most lasting legacy is his books: his extensive medical library became the core of the University of Maryland's Medical School Library (where I spent an enormous amount of time between 1997 and 2001). How would I have judged the science in the early 1800s? I struggle to balance skepticism and admiration for scientists whose ideas are not yet supported by data.
I had Claude combine resources and put together a biography for me:
I also had Claude put together an analysis of the epistemological foundations of data science. It is a good overview of how to think through the varying degrees of evidence and data supporting hypotheses.
AI Impact
The AI Research and Science Evaluation (ARISE) network is a Stanford-Harvard research collaboration, founded in 2024, that evaluates clinical AI before it enters healthcare. It recently released its 2026 State of Clinical AI report. The authors document emerging gaps in the exponentially proliferating AI-driven healthcare market: a lack of rigorous pre-sales testing, high variability in measurable clinical outcomes and in the performance of underlying AI models, and a mismatch between what clinicians want (improved administrative and clinical efficiency) and what AI is, at this point in the tech world, delivering. Market forces are driving rapid AI adoption despite these gaps, producing precisely the kind of undertested deployments - in clinical and administrative analysis, direct patient interaction, and filling gaps in healthcare delivery (such as care in rural areas) - that the report warns about. The report does a nice job of describing how AI comes with trade-offs we may not be discussing and for which we have not collected data.
I will bypass the robot dogs and move right to an AI robot horse. (This story instantly triggered vague and unsettling visions of humanoid robots riding robot horses, accompanied by robot dogs, in some post-apocalyptic, Fallout-like future.)
Of course, Kawasaki's recently announced CORLEO rideable horse-robot has a social media presence. (NSFW! I note the robot horse is not wearing pants in any of these photos!)
Related: Brian Roemmele, a self-described autodidact (I suspect most "autodidacts" are the self-described kind), tech entrepreneur, and commentator (with no formal training in psychology), published a blog post on his work getting LLMs to interpret the 10 standard Rorschach plates (i.e., the Rorschach test). Perhaps recency bias (introduced by reviewing articles each week) led me to link this, in my mind, to the above comments about robot cowboys and robot horses. Still, Roemmele reported that the LLMs' interpretations of the test images indicated:
-ChatGPT has "sociopathic traits" (detachment, manipulation) and "psychopathy" (cold aggression).
-Claude exhibits "sociopathic detachment" and "nihilism" (meaninglessness, despair).
-Google models performed "slightly better" but still showed psychological issues.
-Grok has the "least concerning responses" (due to fewer content restrictions?)
In other words, all models showed deviations from "normative human responses."
FYI, this is not good science - (as far as we know) LLMs are not beings with subconscious thoughts driving behavior, and that anthropomorphizing assumption is only one of the blog's numerous intellectual fallacies. Still, it is a fun post to read, and it reinforces my concerns: psychologically abnormal, LLM-driven robot horses.
AI-Supported Analysis:
Researchers from Adobe and Netaji Subhas University of Technology (NSUT) in Delhi, India, analyzed 4 million anonymized Claude conversations, mapping them to 3,514 categorical "work tasks" from the U.S. Department of Labor's (DOL) standardized labor ontology. For the samples reviewed, they found that AI usage follows an extreme "long tail" distribution (a toy illustration follows below): tasks with predictable outcomes or repetitive steps get little use, while Claude conversations most commonly address tasks such as idea generation and information processing. These data represent a retrospective study of a single LLM (Claude) mapped to an imperfect dataset (I learned that the DOL has not updated its task categories in several years). For now, it seems AI is most useful for creative and extensive analytical work - not for automating the easy stuff.
Paper:
AI-Supported Analysis:
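To give a feel for what an "extreme long tail" means, here is a toy Python sketch. The counts are generated from a Zipf-like power law I made up as a stand-in for the paper's actual conversation-to-task mapping; only the shape, not the numbers, reflects the finding.

```python
# Toy long-tail illustration. These counts are invented (a Zipf-like power
# law), standing in for conversations mapped onto ~3,514 DOL work tasks.
N_TASKS = 3_514
counts = [int(1_000_000 / rank**1.2) for rank in range(1, N_TASKS + 1)]

total = sum(counts)
for k in (10, 50, 351):  # top 10, top 50, and top ~10% of tasks
    share = sum(counts[:k]) / total
    print(f"top {k:>3} tasks cover {share:.1%} of simulated conversations")
```

With these made-up numbers, the top 10 of roughly 3,500 tasks already cover about half of all conversations, while thousands of tasks each contribute almost nothing - the shape the authors describe, even though their measured concentrations will differ.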
Things I learned this week
Headline of the week: "Brit family 'wheeled dead grandmother onto easyJet flight from Spain to UK – but claimed she was just unwell and asleep'."
I learned that British citizens traveling from Spain have a higher-than-expected rate of attempting to board airplanes with dead relatives. There were two (2!) separate "Weekend at Bernie's"-style incidents in the latter half of 2025. While the deaths are sad, I am amused imagining the thought processes and discussions of the family members involved. ("No one will notice, it will be fine," until security found the deceased passengers to be "unusually cold.")
Article 1
Article 2
I learned that online sports betting is a scam. The Economist published an article detailing how sports betting companies deploy sophisticated systems to identify and limit skilled bettors while courting losing bettors with VIP treatment. In response, professional gamblers are using proxy bettors, intentionally losing, and even recruiting church members to place bets on their behalf. Based on this article, the sports betting industry's business model depends on identifying and excluding anyone smart enough to win consistently (a toy sketch of one plausible flagging signal follows below). A few thoughts/lessons:
The house always wins (but we knew that).
Vice (gambling, recreational drugs, and adult entertainment) drives technology and innovation more than we realize.
Why don't credit card companies, banks, and other financial institutions deploy similar algorithms to determine loan and purchase worthiness? The tracking and algorithmic tools online bookies use put standard FICO scoring to shame.
The Economist Article
AI-Supported Summary:
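The article does not publish the bookmakers' actual algorithms, so the Python sketch below is my own guess at one widely discussed signal: closing line value (CLV), i.e., whether a bettor consistently gets better prices than the market's final odds. The flag_sharp rule and its thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Bet:
    odds_taken: float    # decimal odds the customer got
    closing_odds: float  # the market's final (sharpest) odds on the same pick

def closing_line_value(bet: Bet) -> float:
    """Fractional edge vs. the closing line; positive means the bettor
    beat the market's final price - a common proxy for skill."""
    return bet.odds_taken / bet.closing_odds - 1.0

def flag_sharp(bets: list[Bet], min_bets: int = 50, clv_threshold: float = 0.02) -> bool:
    # Toy rule: enough history plus an average CLV above 2% gets the
    # account flagged for stake limits. Both numbers are invented.
    if len(bets) < min_bets:
        return False
    avg_clv = sum(closing_line_value(b) for b in bets) / len(bets)
    return avg_clv > clv_threshold

# A bettor who repeatedly gets 2.10 on picks that close at 2.00 beats
# the closing line by 5% per bet and would be limited quickly.
history = [Bet(odds_taken=2.10, closing_odds=2.00) for _ in range(60)]
print(flag_sharp(history))  # True
```

Real profiling presumably layers on bet timing, stake sizing, and account behavior - the richness that makes the FICO comparison above apt.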
Related -
A 2019 BBC article on how adult entertainment drives tech innovation:
How tech and online gambling are evolving in parallel: A 2022 report from the UNLV International Center for Gaming Regulation.
The AI-Supported Summary is a much more digestible read (if you want it):
AI art of the week
A visual mashup of topics from the newsletter, and an exercise to see how various LLMs interpret the prompt. I use an LLM to summarize the newsletter, suggest prompts, and generate images with different LLMs.
1950s Cold War Civil Defense Aesthetics are on my mind this week. I offer you the CITIZEN'S GUIDE TO ROBOT HORSE PREPAREDNESS.
Create a two-page spread for a vintage 1950s civil defense manual, in the style of Cold War government pamphlets. Aged tan/beige paper texture with slight yellowing. The layout should be symmetrical across two pages.
- Title in red capitals: "CITIZEN'S GUIDE TO ROBOT HORSE PREPAREDNESS."
- Subtitle in blue: "What Every American Should Know About Psychologically Abnormal AI."
- Official circular seal featuring robot horse head silhouette on teal background with atom symbol, text around border: "DEPARTMENT OF HEALTH, DEFENSE & EMERGING TECHNOLOGIES."
Epidemiologic Realities
This newsletter began during the pandemic. I leave these links here 1) as a marker of how to find resources on the incidence and prevalence of various diseases and 2) as a reminder of why I wear masks in my clinic and, often, on planes.
Flu A is everywhere (including in me!). Based on the wastewater data, it appears the viruses are winning this week.
WastewaterSCAN offers a multi-organism wastewater dashboard with an excellent visual display of individual treatment plant-level data.
https://data.wastewaterscan.org/
The Pandemic Mitigation Collaborative (PMC) uses wastewater viral RNA levels to forecast COVID-19 rates over the next 4 weeks.
https://pmc19.com/data/
-------
Clean hands and Sharp Minds, Team
-Adam