AI Outperforms Humans in Mental State Tests—But Does It Really Understand Us?
Think AI can't read minds? Think again — new research shows AI models beating humans at tasks designed to measure our ability to understand each other’s mental states.
AI models are starting to outperform humans in tests that measure the ability to infer mental states, known as "theory of mind." According to a study published in Nature Human Behaviour, large language models (LLMs) such as OpenAI's GPT-3.5 and GPT-4 excel at tasks like identifying false beliefs, recognizing faux pas, and understanding implied meanings. Remarkably, GPT-4 outperformed humans on tests involving irony, hinting, and strange stories.
While these findings might suggest that AI is getting better at understanding human emotions, experts caution against overestimating these capabilities. The models likely benefited from vast training data that includes similar tests—a luxury human children don't have. This raises the question of whether AI truly grasps the complexities of human thought or is simply mimicking learned patterns.
Can we trust AI to understand us, or are we just seeing impressive but ultimately superficial imitations?
Read the full article on MIT Technology Review.