
Dear Readers,

How much progress actually lies in a new model, and how much in the data centers that make it possible in the first place? GPT-5 is making headlines, but while many are debating its cold responses and lack of warmth, OpenAI has long been thinking bigger: data centers worth trillions that will form the foundation for the next generation of models. So it's not just about smarter models, but about the infrastructure that determines how far that intelligence actually reaches.

In this issue, we look at exactly where the future is being built: the engine rooms of AI. We show why GPT-5 should be seen less as a destination and more as a starting point, which new methods are making language behavior more human, and how speed and efficiency are becoming the key metrics. We also round up rumors, breakthroughs, and whatever else is shaking up the scene. If you want to know where the booster train is headed, read on.


All the best,

In Today’s Issue:

  • How GPT-5 is really just the start of an AI infrastructure booster train

  • In The News: GitHub comes to AI Studio, the doubling law of AI, Groq supercharges GPT-OSS

  • Research picks: rubric anchors, spatial intelligence in GPT-5, efficient LLM architectures

  • Rumors, Leaks, and Dustups

How GPT-5 is really just the start of an AI infrastructure booster train

The Takeaway

👉 OpenAI admits GPT-5 launch errors – users miss the human tone of GPT-4o.

👉 API usage doubled within 48 hours – demand is there, capacity is limited.

👉 OpenAI plans trillions in investments in data centers – integration of software and hardware will determine the future.

👉 GPT-5 may be technically improved, but for many users it is a step, not a leap – humanity in AI remains a key issue.

With an honest “We messed up,” Sam Altman begins a message that is more than just damage control. GPT-5, announced with great fanfare on August 7, 2025, brings advances in coding, benchmark performance, and efficiency thanks to its automatic router system, but many users find the responses cold and mechanical and miss the warmth of GPT-4o.

What's next? Instead of resting on applause, OpenAI is going all in: the company wants to invest trillions – with a T – in new data centers to secure the capacity for better models.

“You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” he told the room, according to a Verge reporter.

— Fortune

It's a bit like repairing a car and, instead of just fixing the engine, expanding the entire garage.

For the AI community, this means we should talk not just about models, but also about the foundation they run on. Where does a model's reach end, if not at the limits of its hardware? And how much future does an AI model have without infrastructure to match?

Why it matters:

  1. It shows that AI progress is not just a code problem – it's an infrastructure problem.

  2. The direction of investment opens up new debates about sustainability, control, and technological feasibility.


Love Hacker News but don’t have the time to read it every day? Try TLDR’s free daily newsletter.

TLDR covers the best tech, startup, and coding stories in a quick email that takes 5 minutes to read.

No politics, sports, or weather (we promise). And it's read by over 1,250,000 people!

Subscribe for free now and you'll get our next newsletter tomorrow morning.

In The News

GitHub Comes to AI Studio

Google has announced a new GitHub integration for AI Studio, allowing developers to connect their accounts, build AI-powered applications, and seamlessly commit changes all from within the same platform.

The Doubling Law of AI

According to a new observation, the length of tasks an AI can reliably complete is doubling every seven months, suggesting that within a single human generation, AI could accomplish tasks that would take a person a thousand millennia to finish.
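
The arithmetic behind that claim is easy to check. Below is a back-of-the-envelope sketch; the one-hour starting horizon, the 25-year "generation," and the perfectly clean exponential trend are illustrative assumptions, not figures from the observation itself.

```python
# Back-of-the-envelope projection of the "doubling law": the task horizon
# doubles every 7 months. Starting horizon and generation length are
# illustrative assumptions, not measured values.
HOURS_PER_YEAR = 24 * 365

def task_horizon_hours(years_ahead, start_hours=1.0, doubling_months=7.0):
    """Projected length (in hours) of tasks an AI can reliably complete."""
    doublings = years_ahead * 12 / doubling_months
    return start_hours * 2 ** doublings

horizon = task_horizon_hours(25)  # roughly one human generation
print(f"{horizon:.2e} hours = {horizon / HOURS_PER_YEAR:.2e} human-years of work")
# ~8e12 hours, i.e. ~9e8 human-years: far past a "thousand millennia" (1e6 years).
```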

Groq Supercharges GPT-OSS Models

Groq has rolled out major quality improvements to the open-source GPT-OSS models since their initial launch, significantly boosting their performance and capabilities.

Graph of the Day

Threads is nearing X in daily active app usage.

Reinforcement Learning with Rubric Anchors

Reinforcement Learning with Rubric Anchors describes a new method for steering language models in RL training, using rubrics as anchor points for the reward signal. What's new is how these rubrics enable stylistic control: for example, responding in a more human, nuanced way instead of sounding clumsily “AI-like.” That goes beyond mere content accuracy. For newsletter readers, this means AI-written text that feels more natural and may soon be more convincing, with implications for media, education, and creative applications.
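
To make the mechanism concrete, here is a minimal sketch of rubric-anchored reward scoring. The criteria, weights, and the keyword-based toy judge are invented for illustration; the paper defines its own rubrics, and in practice an LLM judge would rate each criterion.

```python
# Sketch: a rubric as weighted anchor criteria, scored per response.
# Criteria, weights, and the toy judge are illustrative assumptions.
RUBRIC = {
    "avoids stock 'AI-like' phrasing": 0.4,
    "acknowledges uncertainty when warranted": 0.3,
    "responds to the user's emotional tone": 0.3,
}

def judge_score(response: str, criterion: str) -> float:
    """Toy stand-in for an LLM judge rating `response` against `criterion`
    on a 0..1 scale; here it just penalizes boilerplate phrases."""
    boilerplate = ("as an ai", "i'm just a language model", "certainly!")
    hits = sum(phrase in response.lower() for phrase in boilerplate)
    return max(0.0, 1.0 - 0.5 * hits)

def rubric_reward(response: str) -> float:
    """Weighted rubric score, usable as the scalar reward in RL fine-tuning."""
    return sum(w * judge_score(response, c) for c, w in RUBRIC.items())

print(rubric_reward("As an AI, I certainly! cannot feel."))   # low reward
print(rubric_reward("That sounds rough; here's one angle."))  # high reward
```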

Has GPT-5 Achieved Spatial Intelligence? An Empirical Study

GPT-5 surprises not only with impressive text comprehension, but also with significantly improved spatial reasoning: researchers systematically organize tasks related to spatial perception, from object localization to navigation, and test GPT-5 (and other models) on eight standardized benchmarks. The result: GPT-5 sets new standards but still lags behind human performance. This matters because it shows AI increasingly understands where and how things exist in space, a step toward real everyday understanding that could affect media, education, robotics, and more.

Speed Always Wins: A Survey on Efficient Architectures for Large Language Models

“Speed Always Wins: A Survey on Efficient Architectures for Large Language Models” surveys, across 82 pages, current architectural approaches that put efficiency first. Among other things, it examines linear and sparse sequence models, more efficient attention variants, sparse mixture-of-experts, and hybrid models; even diffusion-based LLMs get a mention. What's new is the structured overview of concepts that make transformers leaner, faster, and more resource-efficient. For future architectures, this means AI that scales better and is easier to deploy in real-world applications.
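
As a taste of one family the survey covers, here is a minimal NumPy sketch of linear attention in the spirit of Katharopoulos et al.: a positive feature map replaces the softmax, so the cost grows linearly with sequence length instead of quadratically. The feature map and dimensions are illustrative assumptions, not code from the survey.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Linear attention: queries/keys pass through a positive feature map,
    and all keys/values are summarized once in a (d, d_v) matrix, giving
    O(n * d * d_v) cost instead of softmax attention's O(n^2 * d)."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # ELU(x) + 1
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                  # (d, d_v): one summary of all keys/values
    Z = Qp @ Kp.sum(axis=0) + eps  # (n,): per-query normalizer
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
n, d, d_v = 1024, 64, 64  # sequence length, key dim, value dim
out = linear_attention(rng.normal(size=(n, d)),
                       rng.normal(size=(n, d)),
                       rng.normal(size=(n, d_v)))
print(out.shape)  # (1024, 64)
```

Because the key/value summary is a fixed-size matrix, it can also be updated step by step, which is what makes this family attractive for long contexts and streaming generation.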

Get Your AI Research Seen by 200,000+ People

Have groundbreaking AI research? We’re inviting researchers to submit their work to be featured in Superintelligence, the leading AI newsletter with 200k+ readers. If you’ve published a relevant paper on arXiv.org, email the link to [email protected] with the subject line “Research Submission”. If selected, we will contact you for a potential feature.

Question of the Day

Do you regularly read research papers to stay up to date?


Quote of the Day

Google's new image model “nano-banana” incoming!

Ad

Ready to get your team on the same page?

When workplace comms are clear and concise, you cut out:

  • Endless back-and-forths

  • Confusion and misalignment

  • Time-consuming follow-ups

Get — or give — the workbook that shows you how to be more succinct.

Rumors, Leaks, and Dustups

Looks like Grok 4 coder will arrive soon!

GPT-5 just corrected itself midstream. A new feature?

How'd We Do?

Please let us know what you think! Also feel free to just reply to this email with suggestions (we read everything you send us)!


