AI Is Breaking Science, And We’re Letting It

AI-generated misinformation is moving from social media into the academic world. The same tools that can write PhD-level research in minutes are also churning out perfectly plausible nonsense, and most people won’t spot the difference.
We are on the verge of scientific model collapse, argues Gary Marcus, and the real danger isn't AI itself but our blind trust in it.
Deep Research, OpenAI’s AI-powered research agent, is being hailed as a revolution in academia. But here’s the problem: it can generate high-quality errors at scale. While AI-enhanced research could be transformative, it also blurs the line between real insight and convincing fiction.
Once AI-generated misinformation seeps into peer-reviewed journals, it will contaminate future models, accelerating a cycle of self-referential garbage that even experts will struggle to untangle.
- AI-generated research is dangerously convincing, slipping past academic scrutiny without genuine verification.
- Errors in AI-written papers will persist, polluting the scientific record for years to come.
- Model collapse is real: when AI trains on its own flawed outputs, it reinforces misinformation at scale (a toy sketch of this feedback loop follows below).
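
Marcus's argument is about large language models, but the feedback loop behind model collapse is easy to see in miniature. The following is a minimal, hypothetical sketch (my own toy example, not from the article; the sample size, generation count, and Gaussian setup are arbitrary choices): each "generation" is fitted only to data sampled from the previous generation's model, and estimation noise compounds until the original distribution is lost.

```python
import random
import statistics

# Toy illustration of "model collapse" (my own example, not Marcus's):
# each generation is trained only on samples produced by the previous
# generation's fitted model. With finite samples, estimation error
# compounds: the mean drifts and the variance tends to shrink, so later
# generations no longer resemble the original distribution.

random.seed(42)

N = 50            # samples per generation (arbitrary, hypothetical)
GENERATIONS = 25  # number of self-training rounds (arbitrary)

mu, sigma = 0.0, 1.0  # ground truth: a standard normal distribution

for gen in range(1, GENERATIONS + 1):
    # "Publish" N samples from the current model...
    data = [random.gauss(mu, sigma) for _ in range(N)]
    # ...then fit the next model only to that synthetic output.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    if gen == 1 or gen % 5 == 0:
        print(f"gen {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
```

Run it and the printed mean drifts while the standard deviation tends to decay across generations. The analogy: journals filled with AI-written papers become the training data for the next round of models, with no fresh ground truth to correct the drift.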
AI isn’t ruining research. Our uncritical adoption of it is. If we don’t double-check our sources and implement stronger verification processes, we risk eroding the foundation of knowledge itself. How should we rethink research integrity in the AI era?
Read the full article on Marcus on AI.
----
💡 We're entering a world where intelligence is synthetic, reality is augmented, and the rules are being rewritten in front of our eyes.
Staying up to date in a fast-changing world is vital. That is why I launched Futurwise: a personalized AI platform that transforms information chaos into strategic clarity. With one click, users can bookmark and summarize any article, report, or video in seconds, tailored to their tone, interests, and language. Visit Futurwise.com to get started for free!
