Dear Readers,
We are ringing in the end of the week with great hope for the medical sector. AI is being implemented faster than any technology before it, so it is no surprise that the FDA is now collaborating with OpenAI to enable even faster drug approvals.
Plus: The latest AI news from around the world. Have fun!
In Today’s Issue:
OpenAI and FDA partner to speed up drug approvals with AI
Qwen3 launches with multilingual support and thinking budget innovation
Microsoft cuts 7,000 jobs amid growing tech employment concerns
AI now beats doctors on HealthBench, and error rates keep dropping
And more AI goodness…
All the best,
PS. We have a special sponsor today who is a friend of ours. The product is AIR Insider, which gives you real insider tips on alternative investments - check it out if you are into investing.
The TLDR
The FDA is partnering with OpenAI to explore “cderGPT,” an AI system designed to streamline the drug approval process by automating repetitive tasks. Early tests show scientific reviews can be completed in minutes instead of days. If successful, this could accelerate access to life-saving therapies and modernize regulatory workflows.
Why does it take over a decade for a new drug to reach the market? The US Food and Drug Administration (FDA) is asking itself this question and is now looking for answers in artificial intelligence. In talks with OpenAI, the agency is exploring the "cderGPT" project, which aims to speed up the approval process with the help of AI.
“cderGPT” - named after the Center for Drug Evaluation and Research - could automate repetitive tasks such as checking the completeness of applications and thus save valuable time. An initial pilot project was very promising: scientific reviews that previously took three days were completed in minutes.
This is a significant step for the AI community: the use of AI in drug evaluation could not only make processes more efficient, but also speed up access to vital therapies. Of course, there are challenges: the reliability of AI models and the quality of training data remain in focus. Still, the FDA plans to equip all departments with a secure, generative AI platform by the end of June 2025.
Why it's important: The integration of AI into drug approval promises faster decisions and more efficient processes. This could speed up access to innovative therapies and improve healthcare worldwide.
The people behind the best private markets newsletter - and a partner of Superintelligence - launched a premium service that gives you weekly investment picks from top alts investors, along with exclusive deals and bonuses to invest with top managers. A free trial of AIR Insider is available just for Superintelligence readers if you join today. Sign up here
Qwen3, the latest from the Qwen model family, introduces dynamic mode switching and a "thinking budget" to optimize reasoning and latency. Supporting 119 languages, it blends dense and MoE architectures and is fully open-source under Apache 2.0.
Microsoft Lays Off 7,000 Employees Amid Tech Job Slump
Microsoft has cut about 3% of its workforce (roughly 7,000 employees), saving an estimated $1.4 billion annually. The move highlights growing concerns in the tech job market, as even top computer science graduates struggle to find employment.
AI Surpasses Doctors in HealthBench Accuracy
With GPT-4.1 and o3, AI models now outperform both standalone physicians and physician-AI teams on the HealthBench benchmark. Error rates are also steadily declining, signaling rapid advancements in medical AI reliability.
A new bill in the US House of Representatives proposes to prevent states from enacting their own AI laws for a decade. This measure, embedded in a comprehensive legislative package, would centralize regulatory authority and protect Big Tech from local control. Critics warn of a step backwards for consumer protection, transparency and democratic control.
At the ICLR conference, Singapore presented an international consensus on AI safety research to promote cooperation across geopolitical divides. The aim is to establish common standards for the development of safe AI systems. This initiative positions Singapore as a neutral mediator between the rival AI powers of the US and China.
On May 12, 2025, UN member states met to discuss the regulation of AI-controlled weapon systems. Despite growing concern about the use of such systems in conflicts such as in Ukraine and Gaza, there are no binding international standards to date. Human rights groups warn against uncontrolled armament and call for urgent action.
Mindstream brings you 5 essential resources to master ChatGPT at work. This free bundle includes decision flowcharts, prompt templates, and our 2025 guide to AI productivity.
Our team of AI experts has packaged the most actionable ChatGPT hacks that are actually working for top marketers and founders. Save hours each week with these proven workflows.
It's completely free when you subscribe to our daily AI newsletter.
Should autonomous AI drones be banned globally?
How'd We Do?
Please let us know what you think! Also feel free to just reply to this email with suggestions (we read everything you send us)!