When ChatGPT Joins the Marines: Who’s Really in Command?

If your AI can suggest a Netflix show, it can now also suggest a missile strike. Welcome to the Pentagon’s next phase of decision-making.
The U.S. military is now using generative AI chatbots to analyze surveillance data, generate target lists, and support decision-making, all without clearly defined safeguards.
I find this shift deeply revealing: it’s not just about efficiency, it’s about control. AI is connecting unclassified dots into sensitive conclusions (classification by compilation), scaling beyond meaningful human oversight (“human in the loop” is breaking down), and inching toward frontline autonomy.
Three crucial observations:
- AI models are determining which data should be classified.
- “Human-in-the-loop” is proving mostly symbolic.
- AI is climbing the military decision ladder, even as chatbots continue to hallucinate.
This evolution mirrors consumer AI trends: fast, useful, and mostly unregulated. But what happens when LLMs whisper into generals’ ears? When risk meets speed at scale, do we stay in control, or just tell ourselves we are?
Read the full article on MIT Technology Review.
----
💡 We're entering a world where intelligence is synthetic, reality is augmented, and the rules are being rewritten in front of our eyes.
Staying up to date in a fast-changing world is vital. That is why I launched Futurwise: a personalized AI platform that transforms information chaos into strategic clarity. With one click, you can bookmark and summarize any article, report, or video in seconds, tailored to your tone, interests, and language. Visit Futurwise.com to get started for free!
