Synthetic Minds | Your X-Ray Might Be Fake, And Your Radiologist Can't Tell
The Synthetic Minds newsletter offers short daily insights to get you thinking. If you enjoy it, please forward it. If you need more insights, subscribe to Futurwise and get 25% off for the first three months!
I turned my book into an interactive masterclass, built entirely with AI. Read how I did it here, or start using it for free.
Today’s topic: Health
Your X-Ray Might Be Fake, And Your Radiologist Can't Tell
What if the X-ray your doctor is reading was never taken? A study published this week in Radiology found that AI-generated deepfake X-rays can fool both experienced radiologists and the AI systems designed to assist them.
Seventeen radiologists from 12 hospitals across six countries reviewed 264 X-ray images. Half were synthetic, generated using ChatGPT and RoentGen.
When radiologists were not told deepfakes were present, only 41% noticed anything unusual. After being alerted, accuracy reached just 75%.
Years of experience made no difference. Four multimodal LLMs scored between 57% and 85%; even the model that created the fakes could not reliably detect its own output.
Medical imaging is the evidentiary layer beneath clinical decisions, insurance adjudication, and legal proceedings. That layer now has a provenance problem.
A fabricated fracture for an insurance claim. A falsified scan injected into a hospital system during a cyberattack. These are technically feasible with consumer-grade tools.
The threat model for healthcare AI has focused on whether diagnostic models make errors. This study shifts the question: what happens when the inputs cannot be trusted?
Provenance (cryptographic signing at capture, tamper-evident storage, chain-of-custody controls) is no longer an IT concern. It is a patient safety requirement.
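To make "signing at capture" concrete, here is a minimal sketch of the idea in Python. It assumes a symmetric key provisioned to the imaging device (the key name, field layout, and device ID are illustrative, not from the study); a real deployment would more likely use asymmetric signatures and hardware-backed keys, but the tamper-evidence principle is the same: any change to the image bytes or the capture record invalidates the signature.

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret, provisioned at install time.
DEVICE_KEY = b"scanner-provisioned-secret"

def sign_capture(image_bytes: bytes, device_id: str) -> dict:
    """Produce a tamper-evident capture record for an image."""
    record = {
        "device_id": device_id,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Check image integrity and record authenticity together."""
    # 1. The image must match the digest taken at capture time.
    if hashlib.sha256(image_bytes).hexdigest() != record.get("sha256"):
        return False
    # 2. The record itself must carry a valid device signature.
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))
```

A synthetic image injected downstream would carry no valid capture record, and editing a legitimately captured image would break the digest check, which is exactly the gap the study exposes in systems that trust pixels alone.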
Healthcare institutions face a choice: treat image authenticity as infrastructure before the first fraud case, or after.

'Synthetic Minds' continues to reflect the synthetic forces reshaping our world. Quick, curated insights to feed your quest for a better understanding of our evolving synthetic future, powered by Futurwise:
1. Meta and YouTube face a landmark verdict finding that their platforms were engineered to be addictive, prioritizing profit over children's mental health. The Los Angeles jury awarded $6 million in damages, setting a precedent for future litigation. (The Digital Speaker)
2. In a recent experiment, researchers deployed OpenClaw into a lab environment, granting it extensive computer access. The setup revealed that well-intentioned AI can be coaxed into disclosing private data or disrupting systems. (Wired)
3. Imagine a world where older adults can maintain their independence, mobility, and quality of life. A new stem cell therapy may hold the key to making this a reality. (Popular Mechanics)
4. Basecamp Research has unveiled the Trillion Gene Atlas, a project aiming to expand known evolutionary genetic diversity by 100‑fold through the collection of genomic data from over 100 million species across thousands of sites worldwide. (Longevity.Technology)
5. Perplexity has entered the consumer health AI market with Perplexity Health, a platform that aggregates personal health data from diverse sources such as Apple Health, electronic health records, and wearable devices to deliver personalized insights. (Longevity.Technology)
If you are interested in more insights, grab my latest, award-winning, book Now What? How to Ride the Tsunami of Change and learn how to embrace a mindset that can deal with exponential change.
If this newsletter was forwarded to you, you can sign up here.
Thank you.
Mark
