Dear Readers,
Google has once again catapulted itself to first place among coding LLMs! The latest update shows how Google is staging its comeback. Plus: the most important current research papers, as every Wednesday. Enjoy reading!
In Today’s Issue:
Gemini 2.5 pro updated: King of Coding
NVIDIA introduces new family of models with Llama Nemotron
Palantir crushes Q1 expectations
Pet Robots that respond to your voice?
And more AI goodness…
All the best,
The TLDR
Google quietly supercharged Gemini 2.5 Pro, boosting its coding skills significantly—especially in UI generation, code refactoring, and agent workflows. With top-tier benchmark gains and expanded token context, it's now a stronger tool for building full apps directly from prompts.
An inconspicuous update with a big impact: Google released an improved version of Gemini 2.5 Pro today. The model itself is unchanged, but its programming capabilities have improved significantly.
Gemini now not only generates user-interface code faster, it also handles code transformation, refactoring, and agent workflows more reliably. On the LiveCodeBench v5 coding benchmark, its score jumps from 70.4% to 75.6%, and a +147 Elo gain moves it to the top of the WebDev Arena leaderboard.
For the AI community, this means less time in the IDE and more room for creative problem-solving: prototypes of interactive web apps can now be built directly from a prompt, backed by a one-million-token context window and stable multimodal features.
Google is making the update available immediately in the Gemini API, AI Studio, and the Gemini app, at no extra cost. Anyone planning complex agents, interactive learning applications, or fast MVPs now has a noticeably stronger foundation.
Why it matters: The update shows that careful, ongoing model maintenance can deliver major productivity gains. It lowers the barrier to entry for dialog-based programming and accelerates research and product development.
Stop burning budget on ads and hoping for clicks. Podcast listeners lean in, hang on every word, and buy from guests who deliver real value. But appearing on dozens of incredible podcasts overnight as a guest has been impossible for all but the most famous.
PodPitch.com is the NEW software that books you as a guest (over and over!) on the exact kind of podcasts you want to appear on – automatically.
⚡ Drop your LinkedIn URL into PodPitch.
🤖 Scans 4 Million Podcasts: PodPitch.com's engine crawls every active show to surface your perfect podcast matches in seconds.
🔄 Listens to Them For You: PodPitch listens to your target podcasts for you and works out how best to capture each host's attention.
📈 Writes Emails, Sends, And Follows Up Until Booked: PodPitch.com writes hyper-personalized pitches, sends them from your email address, and will keep following up until you're booked.
👉 Want to go on 7+ podcasts every month? Book a demo now and we'll show you what podcasts YOU can guest on ASAP:
Palantir reported 39% year-over-year revenue growth in Q1 2025, with U.S. commercial revenue soaring 71% and government revenue up 45%. The company raised its full-year revenue outlook, projecting 36% total growth and a massive 68% jump in U.S. commercial sales. Strong margins and $370M in free cash flow further solidified Palantir’s breakout quarter.
Gemini 2.5 + GPT-4.1 Power Up LlamaParse: Gemini 2.5 Pro and GPT-4.1 are now integrated into LlamaParse, boosting its ability to understand complex PDFs, presentations, and tables. With a simple token trick, Gemini becomes a powerful document agent—no setup required.
Petoi’s Robot Pets Now Respond to Your Voice: Petoi’s open-source robotic cat Nybble and dog Bittle X now feature voice control, making them more interactive than ever. Built on Arduino and compatible with Raspberry Pi, they’re perfect for both learning and play.
NVIDIA introduces Llama Nemotron: a new family of open AI models that combine excellent reasoning capabilities with high efficiency. A new feature is a switch that adjusts the thinking mode as needed—from simple answers to complex conclusions. As open-source models, they are powerful and resource-efficient, promising more flexible, economical, and accessible AI applications for research and business.
AI models often forget what they have previously learned when trained on new data (“catastrophic forgetting”). This study presents a solution: by combining efficient LoRA fine-tuning with minimal rehearsal of old data, models retain their knowledge better, even during continual learning in medicine, genetics, and law under scarce resources. This promises more adaptable AI that stays up to date without expensive retraining, making it more useful in dynamic fields.
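The recipe above — freeze the base model, learn a small low-rank (LoRA-style) correction, and replay a little old data alongside the new task — can be sketched on a toy linear model. Everything below is an illustrative assumption (synthetic tasks, hand-rolled gradient descent), not the paper's code:

```python
# Toy sketch (assumption: not the paper's code): a LoRA-style low-rank update
# on a frozen linear model, trained on a new task with a small rehearsal
# buffer of old-task examples to limit catastrophic forgetting.
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4                                 # input dim, output dim

# Two synthetic linear tasks: Y = X @ W.
W_task_a = rng.normal(size=(d, k))
W_task_b = rng.normal(size=(d, k))
X_a = rng.normal(size=(200, d)); Y_a = X_a @ W_task_a
X_b = rng.normal(size=(200, d)); Y_b = X_b @ W_task_b

# "Pretrained" base weights: least-squares fit to task A, then frozen.
W0, *_ = np.linalg.lstsq(X_a, Y_a, rcond=None)

def fit_lora(W0, X_new, Y_new, X_old=None, Y_old=None, rank=2,
             steps=3000, lr=0.05):
    """Learn a rank-`rank` correction A @ B on top of the frozen W0."""
    local = np.random.default_rng(1)
    A = local.normal(size=(W0.shape[0], rank)) * 0.1   # down-projection
    B = np.zeros((rank, W0.shape[1]))                  # up-projection
    X, Y = X_new, Y_new
    if X_old is not None:                              # mix in rehearsal data
        X = np.vstack([X_new, X_old]); Y = np.vstack([Y_new, Y_old])
    for _ in range(steps):
        G = X.T @ (X @ (W0 + A @ B) - Y) / len(X)      # grad of 0.5*MSE wrt W
        A, B = A - lr * G @ B.T, B - lr * A.T @ G      # chain rule through A@B
    return W0 + A @ B

def mse(W, X, Y):
    return float(np.mean((X @ W - Y) ** 2))

W_plain = fit_lora(W0, X_b, Y_b)                         # no rehearsal
W_rehearse = fit_lora(W0, X_b, Y_b, X_a[:20], Y_a[:20])  # replay 10% old data

print("task A MSE, no rehearsal:  ", mse(W_plain, X_a, Y_a))
print("task A MSE, with rehearsal:", mse(W_rehearse, X_a, Y_a))
```

The point of the sketch: mixing even 20 old-task examples into the adapter's training batch keeps the old task's error markedly lower than adapting on the new task alone, while the frozen base plus tiny A and B keeps the trainable parameter count small.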
Diffusion transformers produce impressive images but train slowly. This paper introduces the Decoupled Diffusion Transformer (DDT), which addresses the problem by separating semantic extraction (the “what”) from detail decoding (the “how”). This decoupling is new and yields 4x faster training along with better image quality. In practice, that means more efficient AI image generation, lower costs, and new applications in design and media.
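The “what”/“how” split can be made concrete with a structural sketch. This is a heavily simplified assumption-laden toy, not the paper's architecture: tiny MLPs stand in for transformer branches, and all layer names and sizes are invented for illustration:

```python
# Structural sketch (assumption: simplified stand-in, not the DDT model):
# a condition encoder extracts a compact semantic code (the "what"), and a
# separate velocity decoder renders detail (the "how") from that code.
import numpy as np

rng = np.random.default_rng(0)
d_img, d_sem, d_hid = 64, 8, 32        # flattened image, semantic code, hidden

def mlp(x, W1, W2):
    # Tiny two-layer ReLU net standing in for a transformer branch.
    return np.maximum(x @ W1, 0.0) @ W2

enc_W1 = rng.normal(size=(d_img + 1, d_hid)) * 0.1
enc_W2 = rng.normal(size=(d_hid, d_sem)) * 0.1
dec_W1 = rng.normal(size=(d_img + d_sem + 1, d_hid)) * 0.1
dec_W2 = rng.normal(size=(d_hid, d_img)) * 0.1

def encode_semantics(x_t, t):
    """Semantic branch: compress the noisy input into a small code."""
    t_col = np.full((len(x_t), 1), t)
    return mlp(np.hstack([x_t, t_col]), enc_W1, enc_W2)

def decode_step(x_t, z, t, dt=0.1):
    """Detail branch: predict a velocity and take one denoising step."""
    t_col = np.full((len(x_t), 1), t)
    v = mlp(np.hstack([x_t, z, t_col]), dec_W1, dec_W2)
    return x_t - dt * v

x_t = rng.normal(size=(4, d_img))      # batch of 4 noisy "images"
z = encode_semantics(x_t, t=0.9)       # semantic code computed once...
for t in (0.9, 0.8, 0.7):              # ...and reused across nearby steps
    x_t = decode_step(x_t, z, t)
```

The loop hints at why the decoupling helps: because the semantic code changes slowly across adjacent timesteps, the encoder output can be shared between steps, so the cheap decoder does most of the per-step work.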
How'd We Do? Please let us know what you think! Also feel free to just reply to this email with suggestions (we read everything you send us)!