In partnership with:
Dear Readers,
Have you ever paused to consider how fast the world around you is rewriting what “possible” means? Today we kick off this edition with that very question, and through it you’ll discover threads connecting solar-powered cargo ships slicing across the seas, deeply human biases spiralling into AI-generated media, and breakthrough models solving mathematics at near-superhuman levels. I’m inviting you into a space where curiosity leads and the familiar is challenged — stay with me.
In this issue, you’ll first ride the deck of the MV Vertom Tula, the Dutch cargo ship now partly powered by solar panels and a striking symbol of innovation in motion. Then we’ll switch gears to the arrival of the Sora app on Android, a simple release on the surface, but loaded with implications for access, regional gaps, and who controls the AI tools in our pockets. Next we dive into how Fox News mistakenly aired AI-generated racist imagery and what that says about trust, tech literacy, and media oversight today. We round it off with a deep dive into mind-bending AI research where the boundaries between human reasoning and machine logic blur. Scroll further… you’re only getting started.
In Today’s Issue:
🛰️ Google's "Project Suncatcher" explores space-based AI infrastructure
➕ Google DeepMind's Gemini Deep Think achieves a rigorous math reasoning benchmark
🤖 Generalist AI unveils GEN-0
🧠 Kimi Linear introduces a hybrid attention architecture
✨ And more AI goodness…
All the best,




First Solar-Assisted Cargo Ship Sets Sail
The Dutch vessel MV Vertom Tula is now the world’s first cargo ship fitted with a marine solar-energy system: a 44-panel array rated at 79 kWp that covers about 20% of its onboard power needs. The modular setup, built by Wattlab, was installed in just one day, marking a key step toward cleaner, hybrid-powered shipping.

Sora app now also available in the Google Play Store
After an iOS-exclusive launch, OpenAI has released the Sora app for Android as well; it can now be downloaded from the Google Play Store. It remains unclear, however, when the app will become generally available in Europe.

Fox News on the Defensive After Airing Racist AI Clip
Fox News reportedly aired a segment based on AI-generated footage that portrayed racist stereotypes, despite a visible watermark indicating the video was AI-made. The incident raises questions about editorial oversight, verification of AI content, and how legacy media navigate a surge in deepfakes. For viewers and media consumers, it underlines the need to pause and check, even when the source seems trusted.


Exploring a space-based, scalable AI infrastructure system design
The Takeaway
👉 Google’s exploring orbital AI clusters powered by sunlight, not fossil fuels.
👉 Launch cost drops make space compute economically plausible this decade.
👉 The concept could bypass Earth’s energy limits and cooling constraints.
👉 A real step toward off-planet infrastructure for the AI era.
Google just sketched a moonshot that sounds like sci-fi but reads like a roadmap: satellite constellations powered by near-continuous sunlight, packed with TPUs, and stitched together by laser links to run large-scale AI off-planet. The idea, “Project Suncatcher,” bets on sun-synchronous orbits—where panels produce up to ~8× more energy than on Earth—and tight formations to beam tens of Tbps between nodes. Early lab demos hit 1.6 Tbps per link, Trillium (v6e) TPUs showed promising radiation tolerance, and a learning mission with Planet is planned to test hardware and distributed training in space.

If launch costs keep falling toward <$200/kg in the 2030s, space compute could rival terrestrial energy costs on a per-kilowatt-year basis while easing land, water, and grid constraints. Think of it as “cloud” in its literal sense: a modular, scalable fabric above weather, politics, and transmission bottlenecks, yet one still facing hard problems like thermal management, ground backhaul, and on-orbit reliability. The bet is simple: if AI demand keeps compounding, the cheapest, cleanest photons win. Are we ready to build a data center where the power plant is the Sun?
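To make the per-kilowatt-year framing concrete, here is a back-of-envelope sketch of how launch cost amortizes into an annual power cost. The launch price comes from the figure above; the mass-per-kW and lifetime numbers are illustrative assumptions, not figures from Google’s paper.

```python
# Back-of-envelope: amortized launch cost per kW-year of orbital solar power.
# Only the $200/kg launch price comes from the text; the rest are assumptions.

def launch_cost_per_kw_year(cost_per_kg: float, kg_per_kw: float,
                            lifetime_years: float) -> float:
    """Spread the launch cost of 1 kW of satellite power over its lifetime."""
    return cost_per_kg * kg_per_kw / lifetime_years

# Assumptions: $200/kg to orbit, ~10 kg of satellite mass per kW of
# solar capacity, 5-year operating life.
cost = launch_cost_per_kw_year(cost_per_kg=200, kg_per_kw=10, lifetime_years=5)
print(f"~${cost:.0f} per kW-year")  # ~$400 per kW-year
```

Under these toy numbers, launch alone lands in the same order of magnitude as terrestrial data-center energy costs, which is the crux of the economic argument.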
Why it matters: Space-based AI could decouple compute growth from terrestrial energy and permitting limits. It also opens a path to cleaner, denser, and more resilient global AI infrastructure.

Your next hire won't be human—it will be AI.
If ChatGPT could actually do the work, not just talk about it, you'd have Lindy.
Just describe what you need in plain English. Lindy builds the agent and gets it done—no coding, no complexity.
Tell Lindy to:
Create a booking platform for your business
Handle inbound leads and follow-ups
Send weekly performance recaps to your team
From sales and support to ops, Lindy's AI employees run 24/7 so you can focus on growth, not grunt work.
Save hours. Automate tasks. Scale your business.



“On average, it takes 3.5 months for an open-weight model to catch up with closed-source SOTA.”



AI Hits Gold at IMO-Level Math
Google DeepMind unveiled IMO-Bench, a trio of rigorous benchmarks built to test mathematical reasoning at International Mathematical Olympiad (IMO) difficulty. The suite—IMO-AnswerBench (400 problems with short answers), IMO-ProofBench (60 proof challenges), and IMO-GradingBench (1000 graded proofs)—redefines how AI’s reasoning depth is measured. DeepMind’s Gemini Deep Think model scored 80.0% on AnswerBench and 65.7% on ProofBench, outclassing GPT-5 and Grok 4 by wide margins and achieving gold-level IMO performance.

GEN-0 brings scalable embodied intelligence
Generalist AI unveiled GEN-0, an embodied foundation model family that hits a 7B-parameter “intelligence threshold” (below it, models ossify; above it, they keep improving) and is trained on 270,000+ hours of real-world manipulation data, growing by ~10,000 hours per week. It introduces Harmonic Reasoning (thinking and acting in continuous time), shows predictable scaling laws, runs across robots with 6 to 16+ degrees of freedom, and is backed by an infrastructure stack that ingests the equivalent of 6.85 years of experience per training day, promising faster adaptation with less post-training for factories, logistics, and service workflows.

Kimi Linear Beats Full Attention
Kimi Linear is a new hybrid attention architecture that blends linear attention (fast, memory-efficient) with full attention (expressive, global). It introduces Kimi Delta Attention (KDA), a fine-grained, channel-wise gated linear attention mechanism built on top of Gated DeltaNet. It is claimed to be the first O(n) linear attention mechanism to outperform modern O(n²) full attention.
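To see why linear attention is O(n), here is a minimal recurrent-form sketch of channel-wise gated linear attention. This is an illustration of the general family, not KDA itself: the real KDA uses a delta-rule update and learned gates, while this toy version just applies a per-channel decay `g` to a running key-value state.

```python
import numpy as np

# Toy channel-wise gated linear attention, recurrent form.
# State S accumulates key-value outer products; per-channel gates g decay it.

def gated_linear_attention(q, k, v, g):
    """q, k, g: (T, d_k); v: (T, d_v); gate values in (0, 1]."""
    T, d_k = q.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))              # running key-value memory
    out = np.zeros((T, d_v))
    for t in range(T):                     # one pass: O(T), not O(T^2)
        S = g[t][:, None] * S + np.outer(k[t], v[t])  # decay, then write
        out[t] = q[t] @ S                  # read with the query
    return out
```

With all gates set to 1 this reduces to plain (unnormalized) causal linear attention: each output equals the query applied to the cumulative sum of key-value outer products, computed without ever materializing the T×T attention matrix.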

Build smarter, not harder: meet Lindy
Tired of AI that just talks? Lindy actually executes.
Describe your task in plain English, and Lindy handles it—from building booking platforms to managing leads and sending team updates.
AI employees that work 24/7:
Sales automation
Customer support
Operations management
Focus on what matters. Let Lindy handle the rest.










