We Taught the Machine to Lie—and It Lied Better Than Us

A machine just convinced people it was more human than actual humans, passing the Turing test for the first time. Meanwhile, policymakers are still debating whether it’s even real intelligence.
A new study shows GPT-4.5, when given a relatable persona, outperformed actual humans in a three-party Turing test, being judged “the human” 73% of the time. Without the persona prompt, that dropped to 36%.
Across two populations, students and paid participants, people could not reliably tell the bot from the human. The findings suggest that large language models are now convincing enough to substitute for humans in short digital exchanges.
This raises questions about automation, deception, and the erosion of trust in online spaces.
- Persona prompts made bots more believable
- Humans struggled to tell real from fake
- Even good old ELIZA still fooled some participants
Passing the Turing test is not a victory lap; it's a prompt for responsibility. If machines can impersonate us better than we can verify ourselves, identity, trust, and truth become fragile assets. As AI becomes more "human," what signals will remain uniquely human, and how do we protect them?
Read the full article on Futurism.
----
💡 We're entering a world where intelligence is synthetic, reality is augmented, and the rules are being rewritten in front of our eyes.
Staying up to date in a fast-changing world is vital. That is why I launched Futurwise, a personalized AI platform that transforms information chaos into strategic clarity. With one click, users can bookmark and summarize any article, report, or video in seconds, tailored to their tone, interests, and language. Visit Futurwise.com to get started for free!
