We Taught the Machine to Lie—and It Lied Better Than Us

A machine just convinced people it was more human than actual humans, passing the Turing test for the first time. Meanwhile, policymakers are still debating whether it’s even real intelligence.

A new study shows that GPT-4.5, when given a relatable persona, outperformed actual humans in a three-party Turing test: it was judged "the human" 73% of the time. Without the persona prompt, that figure dropped to 36%.

Across two populations (university students and paid participants), people could not reliably spot the bot and often guessed wrong. The findings suggest that large language models are now convincing enough to substitute for humans in short digital exchanges.

This raises questions about automation, deception, and the erosion of trust in online spaces.

  • Persona prompts made bots more believable
  • Humans struggled to tell real from fake
  • Even good old ELIZA still fooled some participants

Passing the Turing Test is not a victory lap; it's a prompt for responsibility. If machines can impersonate us better than we can verify ourselves, identity, trust, and truth become fragile assets. As AI becomes more "human," what signals will remain uniquely human, and how do we protect them?

Read the full article on Futurism.

----

💡 If you enjoyed this content, be sure to download my new app for a unique experience beyond your traditional newsletter.

This is one of many short posts I share daily on my app, where you can get real-time insights, recommendations, and conversations with my digital twin via text, audio, or video in 28 languages. Go to my PWA at app.thedigitalspeaker.com and sign up to take our connection to the next level! 🚀

If you are interested in hiring me as your futurist and innovation speaker, feel free to complete the form below.