What Adam is Reading - Week of April 13, 2026


In the summers of the mid-1980s, I was in a special bunk at Beth Tfiloh Day Camp. Several afternoons a week, we traded hiking and swimming for coding on Apple IIes at a nearby elementary school. My camp photos from those years show a conspicuously pale group of children sitting on school steps, not the typical tableau of tanned kids on picnic benches wearing camp shirts, striped calf-high socks, and 1980s dolphin shorts.
Over the last few weeks, I’ve started vibe coding in earnest. Having AI write software is less like my summer programming and more like describing what you want to a highly capable, aggressively enthusiastic thought partner. My app is now a working web-based application with two-factor authentication, embedded AI-analyzed images, complex database management, and voice integration. These are things I could not have built myself, yet they took minutes, not months, and several dozen dollars' worth of tokens. The experience is unsettlingly easy.
During our recent father-son trip, I found myself having The Talk with my younger son (who is a bit of a techno skeptic). Not the one my father gave me in the early 1990s (about girls), but a conversation, with its own solemnity and avoided eye contact, delivered from my car’s driver seat (naturally, you must look forward in a Tesla running on full self-drive).  The Talk is not about biology and reproduction (and the prevention thereof); rather, it is about artificial intelligence, the changing nature of work, and what he needs to figure out before the technology figures it out for him.
Different decade. Different existential terrain. Equivalent awkwardness.


The Google NotebookLM AI-generated podcast version of this week’s newsletter.

Science and Technology Trends

I've seen several articles discussing how the U.S. military located the missing F-15E pilot in Iran. Right after the rescue, I read about the Combat Survivor Evader Locator (CSEL), an encrypted, frequency-hopping device that allows U.S. military pilots to send messages and their location via satellite. However, at the end of last week, news about “Ghost Murmur” technology emerged. Developed by Lockheed Skunk Works, this tech is supposedly a “quantum magnetometry system that detects the electromagnetic fingerprint of a human heartbeat and pairs the data with AI to isolate it from background noise.” In other words, a device that permits the detection and isolation of a single human heartbeat from miles away. Amazing if true, but my suspicion is that CIA director John Ratcliffe and the President (who both spoke about Ghost Murmur tech) are exaggerating U.S. capabilities. Either way, Ghost Murmur now has its own Wikipedia page, and I had Claude offer a framework for sorting the story's plausible and implausible portions. Let's just say the physics required for long-range electromagnetic detection of an individual human heartbeat may be more aspirational than real.
CSEL:
Ghost Murmur:
Claude-assisted review of the topic:
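For a rough sense of why the long-range claim strains credulity, here is a back-of-envelope sketch. The field strength and sensor-sensitivity figures below are my own ballpark assumptions (typical magnetocardiography and atomic-magnetometer numbers), not claims from the article:

```python
# Back-of-envelope plausibility check for detecting a heartbeat's
# magnetic field from a mile away. All figures are rough assumptions.

B_CHEST_T = 50e-12   # ~50 picotesla: typical cardiac field at the chest
R_REF_M = 0.1        # ~0.1 m: distance at which that field is measured
R_MILE_M = 1609.0    # one mile, in meters

# A magnetic dipole field falls off with the cube of distance:
# B(r) = B_ref * (r_ref / r)**3
b_at_mile = B_CHEST_T * (R_REF_M / R_MILE_M) ** 3

# Assume state-of-the-art atomic magnetometers in shielded labs reach
# sensitivities on the order of 1 femtotesla (1e-15 T).
BEST_SENSOR_T = 1e-15

print(f"Cardiac field at one mile: {b_at_mile:.1e} T")
print(f"Shortfall vs. best lab sensors: {BEST_SENSOR_T / b_at_mile:.0e}x")
```

Under these assumptions, the signal at a mile is many orders of magnitude below the best laboratory sensitivity, before even considering ambient electromagnetic noise outdoors.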

Some patients tolerate medications without issue. Some patients take medications that “don’t work.” And there are patients who seem to have side effects or sensitivities to every medication prescribed. Thanks to the ubiquity of GLP-1 drugs (tirzepatide and semaglutide), there is an enormous body of patients who are both taking the drugs and have undergone genome analysis. Nature published a research paper and editorial about the pharmacogenetics of GLP-1s. “Using data from 23andMe, researchers (from the 23andMe team) analyzed nearly 28,000 people who self-reported using semaglutide or tirzepatide, identifying variants in the genes that encode the very receptors these drugs target — GLP1R and GIPR — as meaningful predictors of both weight loss magnitude and GI side effects.” In other words, the authors identified gene variants that correlate with both the likelihood of gastrointestinal side effects and the magnitude of weight loss. This study is imperfect (weight loss and side effects are self-reported), and 23andMe has an obvious commercial interest in pharmacogenetics. However, this is still a fascinating data set that may help explain why patients respond differently to the same medications. And I’m sure 23andMe will soon offer tiered genome analysis add-ons to predict our responsiveness to GLP-1 drugs.


Anti-Anti-Science

Peptides are a symptom of the frustration many people experience with our health care system. In early March, I wrote about the deregulation of compounded peptides. This week, The New Yorker published a careful look at the blossoming peptide industry and its problems with purity, reliability, insufficient data, and potential side effects. Cornell physician Dhruv Khullar — writing, it's worth noting, as an insider arguing for medicine's own reinvention — offers a portrait of patients, clinicians, and organizations working in the space of non-FDA-approved therapies. He highlights the companies monitoring the peptide industry for purity (there is a wide range of manufacturing consistency); the large numbers of patients dissatisfied with existing health care (the logical fallacies used to justify the purchase of compounded and gray-market peptides are remarkable); and numerous anecdotes about one compound or another relieving symptoms or resolving a patient's concerns (read the quotes of the concierge and cash-based physicians offering these alternative therapies). The core questions remain the same: How do you know if a compound is consistently manufactured to ensure purity and safety? How do you know if the substance will cause harm, and if so, in whom, and what harm? And if the drug is well-manufactured and safe, is it effective for its intended use?
Here is my March edition of What Adam is Reading, with an analysis of peptides:

A Pew Research survey published last week found that Americans continue to overwhelmingly trust health care providers as a source of information. Most strikingly, the survey found that while 75% of Democrats rate provider information as highly accurate, only 58% of Republicans do. AI chatbots and the internet highlight and, to some degree, exacerbate structural, philosophical, and economic societal gaps. Even so, Pew's data suggests that the fleshy clinician-to-patient connection still matters.
Pew Study:
AI-Assisted Summary:
More interesting still: the AI-accelerated disintermediation of health care providers is increasingly becoming a topic of editorial interest in medical journals. The Journal of the American Medical Association (JAMA) published an editorial by bioethicist John Lantos discussing the historical impact of technology on the patient-physician relationship and how AI may be different. Lantos offers a good perspective: AI fits into a longer history of technologies accused of coming between doctors and patients (which could be reassuring or not, depending on how you read that history).
AI summary of the Lantos article:

AI Impact

The State of Utah Division of Professional Licensing and Legion Health agreed to pilot Legion's AI chatbot to refill 15 different depression and anxiety medications following an online discussion between a patient and the chatbot. The goal is to help alleviate a critical shortage of mental health providers in Utah, and there are numerous controls in place, including the requirement that patients see and renew with an actual clinician at least every six months. Moreover, the chatbot is not permitted to offer dose escalation, a change in therapy, or anything related to diagnosis. Only a narrow window of stable patients can use this refill process. Nevertheless, it will be interesting to see how patients respond to chatbot-mediated refills, especially given the clinical context, which will likely involve screening for changes in symptoms or medication effectiveness over time. Obviously, this is a novel use of AI, and it raises numerous slippery-slope concerns about the future of humans delivering clinical care.
Article:
State of Utah web page about the agreement:
AI-assisted summary of the details, including the PDF agreement between the State of Utah and Legion Health:

Ronan Farrow’s New Yorker profile of Sam Altman is well worth the time investment. It captures many fascinating details about Sam and the larger circle of people who control AI and, at the risk of being dramatic, the future of humanity. Farrow’s article depicts the brilliance and deep flaws of Altman and those around him. While it is (somewhat) comforting to learn that those at the forefront of this technology are debating and struggling with its impacts on humanity, safety, and a myriad of related themes, it is equally disconcerting to learn that the people shaping the future of artificial intelligence are so human.
AI-assisted summary:

Things I learned this week

A jar of Nutella was on the Artemis mission, seen floating through one of the live shots during the crew's lap around the Moon. While the product placement was unintentional, the exposure (combined with the quick-thinking Nutella marketing team’s tie-ins) was so powerful that Nutella saw a 900+% increase in social media mentions during the mission. The opportunity for vendors and sponsors to start recruiting at space camp, so that our eventual astronauts look like NASCAR drivers, is the kind of dystopian, unregulated capitalism that would give Karl Marx heartburn (for which he would seek relief from TUMS Smoothies, tropical fruit flavored, the #1 doctor-recommended brand of over-the-counter antacids). I see a much better-funded, albeit trashy, M&M logo-emblazoned path to Mars.

In the early 1980s, my parents had friends who opened a natural food store. Thanks to this store, not-so-great carob chip cookies, honey-sweetened cupcakes, and bottles of Dr. Bronner's soap became part of my life. I remember spending hours reading the bottle’s tiny, manic writing espousing social activism, good hygiene, and a hippie-like philosophy. “Emanuel Bronner was a third-generation soap maker who escaped the Holocaust. He was a rambling eccentric whose personal pantheon included Jesus, Hillel, Muhammad, Karl Marx, and Carl Sagan. He escaped from a mental institution, settled in Southern California, and claimed that he was a rabbi, a doctor, and Albert Einstein’s nephew.” Despite using the soap for the last 35 years (it is part of my travel kit), I've never looked into the brand.  Last week, I found this Wall Street Journal article profiling the company, the three generations of family ownership, and how Emanuel’s grandchildren have leveraged very liberal social activism into an ongoing financial success. And now that I know I use socially activated soap, the question is, have I really been cleaning myself as much as I thought?
Article:
AI Summary:
Related: I found a Reddit post from the pandemic where someone transcribed the entire Dr. Bronner's bottle label. You will see that Bronner was certainly someone who could be described as a rambling eccentric. I had Claude organize the information into a more readable form and find some citations.

Headline of the week: “Man Creates Tiny Submarine for His Parakeet to Experience Life Underwater - He voluntarily went into the tube.” After watching the video, I suspect Bebe the Bahamian Parakeet’s owner is over-indexing on Bebe’s agency in ‘voluntarily’ entering his custom submarine.

AI art of the week
A visual mashup of topics from the newsletter, and an exercise to see how various LLMs interpret the prompt.  I use an LLM to summarize the newsletter, suggest prompts, and generate images with different LLMs.
Grok: refused to generate an image due to "children wearing 1980s dolphin shorts."  (This is now Grok’s bar for inappropriateness?!?)

A Wes Anderson-style symmetrical diorama tableau, with a pastel color palette of dusty rose, mint green, pale yellow, and powder blue. Perfect bilateral symmetry. Deadpan figures. Flat, warm lighting.
Center frame: a pale child in a camp shirt and 1980s dolphin shorts sits at an Apple IIe computer on a wooden school desk, surrounded by tanned campers in striped calf-high socks eating carob cookies with expressions of mild disappointment.
Left side: a Tesla viewed from behind, driver looking rigidly forward, a small thought bubble containing a diploma and a robot arm shaking hands. A teenager in the passenger seat stares out the window with existential resignation.
Right side: a suited figure at a podium labeled "STATE OF UTAH" gesturing toward a vintage telephone handset with a small glowing brain inside it. A clipboard reads "STABLE PATIENTS ONLY."
Background center: an astronaut in a full NASA suit floats serenely, one hand extended toward a slowly tumbling jar of Nutella. A small M&M logo is stitched onto the shoulder of the spacesuit, like a NASCAR patch.
Foreground left: a very small submarine in a fish tank. Inside the submarine, a parakeet sits in a tiny seat, its expression one of profound uncertainty.
Foreground right: a tall amber glass bottle of Dr. Bronner's soap, the label covered in tiny illegible text, next to a portrait miniature featuring a man with wild hair labeled "NOT EINSTEIN'S NEPHEW."
On the wall behind everything: a symmetrical grid of framed portraits — Jesus, Hillel, Muhammad, Karl Marx, and Carl Sagan — each in an identical oval frame with a small gold nameplate.
Muted cinematic film grain. Shot on 35mm. The overall mood is melancholy whimsy.


Clean hands, sharp minds, and, in the words of Dr. Bronner, “Absolute cleanliness is Godliness! Health is our greatest wealth!”

Adam

Comments