Why ‘Normal’ AI Might Be More Dangerous Than ‘Super’ AI

While everyone’s obsessing over AI turning into a god, the real threat may be its awkward, error-prone teenage phase, and we’re handing it the keys to society anyway.

We’re not heading for an AI apocalypse; we’re already living in its bureaucracy. This great essay argues that AI is neither superhuman nor about to become so. Instead, it’s a “normal technology,” like electricity or the internet: slow, flawed, and shaped by humans.

That’s not comforting; it’s clarifying. The real risk lies not in machines outsmarting us, but in institutions deploying brittle models into messy realities. Think sepsis predictors missing patients, or benchmarks that mistake passing exams for competence.

I’ve seen this firsthand: excitement leads, control lags, context vanishes. We need to design for human oversight, not mythic intelligence.

  • Generative AI adoption is growing, but its impact on work remains tiny, though changing rapidly
  • High-risk sectors resist diffusion due to safety concerns
  • Benchmarks mask real-world AI limitations

The real challenge isn’t preparing for AI to surpass us; it’s accepting that it won’t, and still making responsible choices in its shadow. If AI evolves gradually but we treat it as divine, are we creating the very crisis we feared?

Read the full article on Knight First Amendment Institute.

----

💡 If you enjoyed this content, be sure to download my new app for a unique experience beyond your traditional newsletter.

This is one of many short posts I share daily on my app, where you can get real-time insights, recommendations and conversations with my digital twin via text, audio or video in 28 languages! Go to my PWA at app.thedigitalspeaker.com and sign up to take our connection to the next level! 🚀

If you are interested in hiring me as your futurist and innovation speaker, feel free to complete the form below.