Dear Readers,
Who would have thought that the “smartest model ever” would trigger one of the loudest user revolts in AI history? The return of GPT-4o after only 24 hours shows how attached people are to the personality of their AI—and how quickly trust crumbles when expectations are not met. In this issue, we not only look at OpenAI's response, but also at how the balance of power between developers and the community is shifting.
We also take you on a journey through the tectonic shifts in the financial world: Hedge funds are automating up to 75% of traditional analyst work, central banks are discussing cautious interest rate policy in the AI era, and analysts are warning of a US economic slowdown despite the AI boom. Plus, there are benchmarks, market trends, and a few sharp jabs from the xAI camp. An issue full of dynamism, insights, and food for thought.
In Today’s Issue:
OpenAI backpedals after GPT-5 launch backlash
Wall Street is automating analyst work with AI
The US Federal Reserve is taking a "cautious and humble" approach to AI's economic impact
Why the AI boom might not be enough to stop a potential US recession
And more AI goodness…
All the best,

ChatGPT changes: 4o is back, and Plus users get 3,000 reasoning requests per week with GPT-5!
The Takeaway
👉 Community power trumps hype: Even OpenAI had to bring back old models after only 24 hours of complaints – a sign that user feedback is more important today than marketing promises.
👉 Technical details determine acceptance: The broken auto-switch system made GPT-5 worse than advertised – a reminder that seamless user experience is critical for AI tools.
👉 Rate limits are becoming a competitive factor: The increase from 200 to 3,000 messages per week shows that usage limits are increasingly determining customer satisfaction.
👉 Transparency as a crisis strategy: Altman's open communication about technical problems and quick fixes could become the norm for AI companies.
It only took 24 hours for OpenAI to pull the emergency brake and bring GPT-4o back to life. What happened?

The launch of GPT-5 was supposed to be a triumph: the smartest, fastest, and most useful model ever, as OpenAI boldly promised. But instead of cheers, Sam Altman got an unprecedented backlash.
Reddit users called GPT-5 “garbage,” complained about shorter, less helpful responses, and missed their familiar models.
The problem? A broken auto-switch system that made GPT-5 appear significantly dumber, combined with drastically reduced rate limits that frustrated even paying Plus users.
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly…
— Sam Altman (@sama)
12:37 AM • Aug 11, 2025
For the AI community, this episode reveals fascinating insights: users develop emotional attachments to AI personalities, technical perfection alone is not enough, and even market leaders must respond quickly to dissatisfied customers. Altman's response was remarkably transparent – he raised the rate limits from 200 to 3,000 messages per week, explained the technical details, and brought back the old models.
Doesn't this controversy show us that AI development is increasingly becoming a dialogue between developers and the community? What lessons can other AI companies learn from OpenAI's “bumpy” launch?
today we are significantly increasing rate limits for reasoning for chatgpt plus users, and all model-class limits will shortly be higher than they were before gpt-5.
we will also shortly make a UI change to indicate which model is working.
— Sam Altman (@sama)
5:56 PM • Aug 10, 2025
Why it matters: This controversy shows that even leading AI companies must expect unexpected user reactions when introducing new models. It also highlights the importance of transparent communication and rapid adjustments for maintaining the trust of the AI community.
Ad
Love Hacker News but don’t have the time to read it every day? Try TLDR’s free daily newsletter.
TLDR covers the best tech, startup, and coding stories in a quick email that takes 5 minutes to read.
No politics, sports, or weather (we promise). And it's read by over 1,250,000 people!
Subscribe for free now and you'll get our next newsletter tomorrow morning.
In The News
One word: relentless. just in the past two weeks, we’ve shipped:
🌐 Genie 3 - the most advanced world simulator ever
🤔 Gemini 2.5 Pro Deep Think available to Ultra subs
🎓 Gemini Pro free for uni students & $1B for US ed
🌍 AlphaEarth - a geospatial model of the entire planet
— Demis Hassabis (@demishassabis)
6:33 PM • Aug 8, 2025
Shots fired at OpenAI
Grok has introduced a new "Auto mode" that automatically selects the appropriate model for each task, while still letting users manually choose its most powerful models.
Gemini Beats GPT-5
Google's Gemini 2.5 Pro is achieving a 67% win rate against OpenAI's new GPT-5 in 'Thinking' mode.
Graph of the Day

Even with reasoning enabled, GPT-5 reaches only 5th place in the SimpleBench benchmark.

Finance jobs in transition: LLMs shift value creation
Hedge funds and research teams are automating up to 75% of traditional analyst work (DCF, screening, CRM) with LLMs; productivity is quadrupling, and hiring is shifting to client-facing roles. Quants are not immune (e.g., AlphaGPT). This is a turning point for cost structures, margins, and data moats in asset management; committee decisions will be partly AI-supported. Implication: The industry's headcount mix, wages, and unit economics are undergoing structural change.
Central bank view: AI as GPT – cautious calibration of monetary policy
Fed representatives (including Lisa D. Cook and Susan M. Collins) emphasize AI as a general-purpose technology with productivity gains, changing job tasks, and potential effects on inflation.
Policy conclusion: The data is shaky, so monetary policy should be calibrated in a "cautious and humble" way while companies realize initial efficiency gains (e.g., fewer production errors). Implication: AI may dampen price and wage dynamics in the medium term, but policy will respond gradually.
Boom ≠ economic insurance: macroeconomic dampers prevail
BCA Research sees a 60% risk of recession in the US despite AI euphoria: capex leakage abroad (chips), weak tech employment, rising electricity prices due to data centers, meager productivity effects to date, and soft broad indicators. Implication: Stocks may play the AI story, but the real economy and policymakers should not rely on short-term AI rescue; energy/grid bottlenecks are becoming a macro issue.

Get Your AI & Finance Research in Front of 200,000+ People
Working on the future of finance through AI? From trading algorithms to credit modeling and fintech infrastructure, we’re looking for work that explores the intersection of AI and financial systems.
Submit your paper or project to Superintelligence, the top AI newsletter with 200k+ readers, by emailing [email protected] with the subject line “Finance Submission”. We’ll contact you if we’d like to feature it.
Tweet of the Day
gpt-5 as a knowledge work amplifier:
— Greg Brockman (@gdb)
11:55 PM • Aug 10, 2025
Sponsored By Vireel.com
Vireel is the easiest way to get thousands or even millions of eyeballs on your product. Generate hundreds of ads from proven formulas in minutes. It’s like having an army of influencers in your pocket, starting at just $3 per viral video.
Rumours, Leaks, and Dustups
Grok makes things easy with Auto mode, but we never take optionality away from you.
If you want to make our PhD-level Grok 4 suffer through basic problems like 1 + 1, you are more than welcome to do so😅
Also, glad we don't show 42 different models in the dropdown menu here
— Eric Jiang (@veggie_eric)
1:07 AM • Aug 11, 2025
xAI is mocking OpenAI by recommending its version of the router, which allows users to select models manually. Is this really the better approach?