Superintelligence has partnered with Perplexity to give 50 readers early access to Comet, a new browser that can browse the web for you, fill in forms, manage emails and calendars, search your history, and help with research. See Comet in action and click the button below to enter!


Dear Readers,

Sometimes the greatest progress isn't another step toward “more,” but the art of “less, but better.” With Gemma 3 270M, Google has unveiled a tiny AI model that nevertheless achieves amazing things: fast, efficient, and able to run offline on a smartphone. This could be the beginning of an era in which we no longer build gigantic all-rounders but highly specialized, handy tools that do exactly what we need, anywhere. In this issue, we look not only at this paradigm shift but also at how the major political levers in AI are moving: from the bumpy start of the GPAI rules in the EU, to Colorado's dispute over its AI law, to the ambitious but shaky plans of the US. Plus, we share new performance data from Qwen3, fresh industry rumors, and a look at the growing importance of on-device AI. It's worth staying tuned, because the next few months could be decisive.


In Today’s Issue:

  • Google’s new tiny AI model proves that bigger isn’t always better.

  • The EU's groundbreaking AI law is officially in effect, but is anyone ready to enforce it?

  • Under pressure, Colorado reconsiders the high cost of its ambitious AI safety law.

  • Washington is pushing its AI master plan, but big questions about funding and execution remain.

  • And more AI goodness…


All the best,

Google releases a tiny but impressive 270M-parameter AI model!

The Takeaway

👉 Paradigm shift toward efficiency: Google's 270M-parameter model proves that compact AI models can outperform much larger ones on specific tasks through clever architecture and targeted fine-tuning.

👉 On-device AI goes mainstream: Able to run on smartphones while consuming just 0.75% of the battery across 25 conversations, it opens up a huge market for privacy-first AI applications without cloud dependency.

👉 Democratization of AI development: With fine-tuning that takes hours instead of days and minimal hardware requirements, even small teams and individual developers can build highly specialized AI solutions.

👉 Strategic shift to task-specific models: The trend is moving away from “one-size-fits-all” models toward specialized model fleets, where each model perfectly masters a specific task.

Google is breaking the “bigger is better” paradigm and proving that sometimes less is more! With the brand-new Gemma 3 270M, the company is introducing an AI with only 270 million parameters that delivers impressive performance despite its compact size.

The real innovation lies in its intelligent architecture: 170 million parameters go to the embeddings for a huge 256,000-token vocabulary, while 100 million parameters power the transformer blocks. This division lets the model represent even rare and specific terms, making it well suited to fine-tuning for niche domains.
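For a rough sense of where that split comes from, here is a back-of-the-envelope sketch in Python. Only the 270M total, the 256K vocabulary, and the reported ~170M/~100M split come from Google; the embedding width of 640 is our assumption for illustration.

```python
# Back-of-the-envelope check of the reported parameter split.
# NOTE: hidden_dim = 640 is an assumed embedding width, not an official figure.
vocab_size = 256_000          # ~256K-token vocabulary (reported)
hidden_dim = 640              # assumed embedding width
total_params = 270_000_000    # reported total

embedding_params = vocab_size * hidden_dim             # ~164M, in line with the reported ~170M
transformer_params = total_params - embedding_params   # remainder, roughly the reported ~100M

print(f"embedding: ~{embedding_params / 1e6:.0f}M parameters")
print(f"transformer blocks: ~{transformer_params / 1e6:.0f}M parameters")
```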

This is a game changer for the AI community: the model runs smoothly on smartphones and can even be operated offline. Instead of requiring expensive cloud infrastructure, developers can deploy their own specialized AI assistants directly on edge devices. Tests are already showing impressive efficiency in tasks such as text classification and data extraction. Will we soon see an army of small, highly specialized AI models, each optimized for its specific task? Gemma 3 270M could be the starting signal for a whole new era of decentralized, task-specific AI systems.
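As a concrete illustration of on-device use, here is a minimal local-inference sketch with the Hugging Face transformers library. The model identifier google/gemma-3-270m is assumed from the release naming; check the official model card (and accept its license) before running this.

```python
# Minimal local-inference sketch; runs locally, no cloud calls required.
# Assumption: the Hugging Face model id is "google/gemma-3-270m".
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-270m",
    device_map="auto",  # uses a GPU if present, otherwise falls back to CPU
)

prompt = "Extract the city name from: 'The conference takes place in Lisbon next spring.'"
result = generator(prompt, max_new_tokens=20)
print(result[0]["generated_text"])
```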

Why it matters: Gemma 3 270M democratizes AI development by making powerful models accessible to everyone—without cloud dependency or massive hardware requirements. This could lay the foundation for a new generation of privacy-first AI applications that run directly on our devices.
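To make the fine-tuning claim concrete, here is a compact LoRA fine-tuning sketch using the transformers, peft, and datasets libraries. The model id, target modules, hyperparameters, and the two-example toy dataset are illustrative assumptions, not details from Google's release.

```python
# LoRA fine-tuning sketch: adapt a small base model to one narrow task
# (here, routing support tickets). All specifics below are illustrative assumptions.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "google/gemma-3-270m"  # assumed identifier; verify on the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Tiny toy dataset; a real project would use hundreds or thousands of examples.
train_ds = Dataset.from_list([
    {"text": "Ticket: Password reset email never arrives.\nLabel: account"},
    {"text": "Ticket: I was charged twice this month.\nLabel: billing"},
]).map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=128), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-ticket-router",
                           per_device_train_batch_size=2,
                           num_train_epochs=3,
                           learning_rate=2e-4,
                           logging_steps=1),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Because only the small adapter weights are updated, this kind of run fits on modest hardware, which is what makes iterating in hours rather than days realistic.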


Love Hacker News but don’t have the time to read it every day? Try TLDR’s free daily newsletter.

TLDR covers the best tech, startup, and coding stories in a quick email that takes 5 minutes to read.

No politics, sports, or weather (we promise). And it's read by over 1,250,000 people!

Subscribe for free now and you'll get our next newsletter tomorrow morning.

In The News

Google's Tiny AI Packs a Punch

Google has released Gemma 3 270M, a surprisingly powerful yet compact AI model with only 270 million parameters, designed to run efficiently on smartphones and other edge devices.

Gemini Doubles Deep Think Limit

Responding to user feedback, Google has doubled the daily query limit for its "Deep Think" feature for Gemini Ultra subscribers, increasing it from 5 to 10.

Imagen 4 Is Here

Google's Imagen 4 is now generally available, and the update includes a new "Imagen 4 Fast" model for rapid image generation at just $0.02 per image.

Graph of the Day

Qwen3-30B-A3B-Instruct — with just 3B active parameters, it’s closing in on the performance of far larger models.

EU AI Act: GPAI rules take effect – authorities lag behind

The obligations for general-purpose AI models have been in force since August 2, but many EU countries missed the deadline for designating their supervisory authorities. That is politically sensitive: providers must already comply with transparency and risk obligations while enforcement remains patchy. The real shift is from paper to practice; in the long term, the risk is either lasting enforcement gaps or, once authorities are in place, a Europe-wide leveling up of compliance.

USA: Colorado wants to tighten its AI law

Colorado is holding a special session to debate changes to key parts of the AI law it passed in 2024, caught between cost pressures and concerns about watering down protection standards.

This is politically relevant as a test of whether US states will maintain robust AI regulation or back down in the face of economic burdens and lobbying pressure. In the long term, it will help determine whether a patchwork of state rules persists or nationwide standards emerge.

America's AI action plan: ambition vs. implementation

Washington is fleshing out its AI strategy: new platforms for government agencies and a focus on innovation, infrastructure, and diplomacy; at the same time, analysts warn of implementation and financing risks.

This is politically relevant because the US is recalibrating standards, export policy, and public procurement. The breakthrough lies in creating a political framework for massive scaling; in the long term, the impact will depend on budgets, government capacity, and the international response.

Put Your AI & Politics Insight in Front of 200,000+ Readers

Exploring how AI is reshaping politics, policy, or governance? We’re featuring sharp analysis and research on the intersection of AI and political systems in Superintelligence, read by 200k+ people.

If you have substantive commentary on AI and politics (unemployment, geopolitics, proposed legislation) and would like it featured in the newsletter, send a brief summary to [email protected] with the subject line “Politics Submission”. We’ll reach out if your work is a fit for a future issue.

Question of the Day

Open source or closed source: which do you prefer?



Ad

Fact-based news without bias awaits. Make 1440 your choice today.

Overwhelmed by biased news? Cut through the clutter and get straight facts with your daily 1440 digest. From politics to sports, join millions who start their day informed.

Rumours, Leaks, and Dustups

Meta recruits yet another OpenAI researcher: Zhiqing Sun

How'd We Do?

Please let us know what you think! Feel free to reply to this email with suggestions (we read everything)!

