OpenAI and FDA start talks on AI use

OpenAI and the FDA are exploring how generative AI could drastically cut the time it takes to approve new drugs.


Dear Readers,

We are ringing in the end of the week with great hope for the medical sector. AI is being adopted faster than any technology before it, so it is no surprise that the FDA is now in talks with OpenAI about speeding up drug approval.

Plus: The latest AI news from around the world. Have fun!

All the best,

P.S. We have a special sponsor today who is a friend of ours. The product is AIR Insider, and it gives you real insider tips on alternative investments. Check it out if you are into investing.

OpenAI and FDA start talks on AI use

The TLDR
The FDA is partnering with OpenAI to explore “cderGPT,” an AI system designed to streamline the drug approval process by automating repetitive tasks. Early tests show scientific reviews can be completed in minutes instead of days. If successful, this could accelerate access to life-saving therapies and modernize regulatory workflows.

Why does it take over a decade for a new drug to reach the market? The US Food and Drug Administration (FDA) is asking itself that question and is now looking to artificial intelligence for answers. In talks with OpenAI, the agency is discussing “cderGPT,” a project that aims to speed up the approval process with the help of AI.

“cderGPT,” named after the FDA’s Center for Drug Evaluation and Research, could automate repetitive tasks such as checking applications for completeness, saving valuable time. An initial pilot was very promising: scientific reviews that previously took three days were completed in minutes.

This is a significant step for the AI community: using AI in drug evaluation could not only make processes more efficient but also speed up access to vital therapies. There are challenges, of course: the reliability of AI models and the quality of their training data remain under scrutiny. Even so, the FDA plans to equip all of its departments with a secure generative AI platform by the end of June 2025.

Why it's important: Integrating AI into drug approval promises faster decisions and more efficient processes. This could speed up access to innovative therapies and improve healthcare worldwide.

Special Sponsor

The people behind the best private markets newsletter, and a partner of Superintelligence, have launched a premium service that gives you weekly investment picks from top alts investors, along with exclusive deals and bonuses to invest with top managers. A free trial of AIR Insider is available to Superintelligence readers who join today. Sign up here

In The News

Qwen3 Unveiled: Next-Gen Multilingual AI with Thinking Budget

Qwen3, the latest from the Qwen model family, introduces dynamic mode switching and a "thinking budget" to optimize reasoning and latency. Supporting 119 languages, it blends dense and MoE architectures and is fully open-source under Apache 2.0.

Microsoft Lays Off 7,000 Employees Amid Tech Job Slump

Microsoft has cut about 3% of its workforce — roughly 7,000 employees — saving an estimated $1.4 billion annually. The move highlights growing concerns in the tech job market, as even top computer science graduates struggle to find employment.

AI Surpasses Doctors in Healthbench Accuracy

With GPT-4.1 and o3, AI models now outperform both standalone physicians and physician-AI teams on the HealthBench benchmark. Error rates are also steadily declining, signaling rapid advances in medical AI reliability.

Graph of the Day

Currently, the best model in terms of price-performance ratio is Grok-3-mini high.

USA: Republicans want to bar states from regulating AI for ten years

A new bill in the US House of Representatives would prevent states from enacting their own AI laws for a decade. The measure, embedded in a sweeping legislative package, would centralize regulatory authority and shield Big Tech from local oversight. Critics warn it would be a step backwards for consumer protection, transparency, and democratic control.

Singapore presents global consensus on AI safety

At the ICLR conference, Singapore presented an international consensus on AI safety research to promote cooperation across geopolitical divides. The aim is to establish common standards for the development of safe AI systems. This initiative positions Singapore as a neutral mediator between the rival AI powers of the US and China.

UN officially discusses autonomous weapon systems for the first time

On May 12, 2025, UN member states met to discuss the regulation of AI-controlled weapon systems. Despite growing concern about the use of such systems in conflicts such as those in Ukraine and Gaza, there are still no binding international standards. Human rights groups warn of an uncontrolled arms race and call for urgent action.

Get Your Free ChatGPT Productivity Bundle

Mindstream brings you 5 essential resources to master ChatGPT at work. This free bundle includes decision flowcharts, prompt templates, and our 2025 guide to AI productivity.

Our team of AI experts has packaged the most actionable ChatGPT hacks that are actually working for top marketers and founders. Save hours each week with these proven workflows.

It's completely free when you subscribe to our daily AI newsletter.

Question of the Day

Should autonomous AI drones be banned globally?

Login or Subscribe to participate in polls.

How'd We Do?

Please let us know what you think! Also feel free to just reply to this email with suggestions (we read everything you send us)!
