OpenAI’s o3: The Pricey Future of AI Economics

What if the smartest AI yet costs more to run than hiring a PhD? Welcome to the expensive reality of reasoning models. Or did DeepSeek R1 just change the game?
OpenAI’s latest AI model, o3, redefined the economics of AI with its “test-time compute” approach, which delivers better answers by consuming more processing power at inference time. Its success on François Chollet’s ARC challenge, scoring a groundbreaking 91.5%, came at a staggering cost: thousands of dollars for a single query.
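To make that trade-off concrete, here is a minimal, hypothetical sketch of the idea behind test-time compute: spend more model calls per query, keep the best candidate answer, and watch the bill grow roughly linearly with the extra thinking. OpenAI has not published o3’s actual mechanism; the generate_candidate and score functions and the price-per-call figure below are placeholders for illustration, not real APIs or prices.

```python
import random

# Illustrative only: test-time compute as a best-of-N strategy.
# More inference calls per query -> (potentially) better answers,
# at a cost that scales linearly with the number of calls.

COST_PER_CALL_USD = 0.50  # placeholder price per model call


def generate_candidate(prompt: str) -> str:
    # Stand-in for one model call; returns a dummy answer.
    return f"candidate-{random.randint(0, 9999)} for: {prompt}"


def score(answer: str) -> float:
    # Stand-in for a verifier or reward model ranking candidates.
    return random.random()


def best_of_n(prompt: str, n: int) -> tuple[str, float]:
    # Sample n candidates, keep the highest-scoring one,
    # and report the total compute cost of the query.
    candidates = [generate_candidate(prompt) for _ in range(n)]
    best = max(candidates, key=score)
    return best, n * COST_PER_CALL_USD


if __name__ == "__main__":
    for n in (1, 16, 256):
        _, cost = best_of_n("Solve this ARC puzzle", n)
        print(f"n={n:>3}  estimated cost=${cost:,.2f}")
```

Even with these made-up numbers, the pattern is the point: letting a model “think” with hundreds of samples turns a cheap query into a meaningful per-query bill.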
This marks a shift from the near-zero marginal cost of traditional software to a model where every query carries a real compute bill. How long that remains viable is an open question, as open-source models such as DeepSeek R1 rival the reasoning performance of proprietary systems and, in distilled form, can run on hardware as modest as a Raspberry Pi.
Chipmakers like Nvidia and cloud providers (Amazon, Microsoft, Alphabet) stand to benefit from the surging demand for processing power. Meanwhile, competition heats up as rival models like Google’s Gemini 2.0 Flash enter the market, and OpenAI’s premium pricing faces skepticism from enterprises that want specialized, cost-effective AI solutions.
Are the higher costs of smarter AI models worth the productivity gains they promise? Or are we entering an unsustainable economic model that open source will soon upend?
Read the full article in The Economist.
----
💡 We're entering a world where intelligence is synthetic, reality is augmented, and the rules are being rewritten in front of our eyes.
Staying up to date in a fast-changing world is vital. That is why I have launched Futurwise: a personalized AI platform that transforms information chaos into strategic clarity. With one click, users can bookmark and summarize any article, report, or video in seconds, tailored to their tone, interests, and language. Visit Futurwise.com to get started for free!
