In Today’s Issue:

🤖 China deploys Walker S2 humanoid robots for 24/7 border patrol

🚀 GLM-4.7 launches as a new open-source coding superstar

🧠 Hassabis vs. LeCun: A high-stakes debate erupts over "General Intelligence"

And more AI goodness…

Dear Readers,

If you’ve ever watched two top minds argue past each other while accidentally revealing the real stakes, today’s issue is for you. Demis Hassabis and Yann LeCun are sparring over whether “general intelligence” is a meaningful label for humans and, by extension, modern AI: Hassabis insists the brain (and foundation models) are, in principle, astonishingly general learning architectures; LeCun counters that in practice we are brutally resource-bounded specialists who only look “general” because the world is unusually learnable. That debate isn’t academic hair-splitting; it shapes how we forecast AI’s ceiling, what “human-level” even means, and whether today’s scaling wins translate into robust, reliable capability across weird edge cases.

From there we drop straight into the messy reality of embodied intelligence: China’s move toward 24/7 humanoid deployment in sensitive border workflows, and a “Robot Olympics” benchmark that punctures glossy demos with chores like doors, laundry, and slippery manipulation, showing both how far physical AI still has to climb and how quickly it can level up once the bottleneck becomes task data rather than exotic theory. If your mental model of progress is still “bigger models → smarter everything,” this issue will either sharpen it or break it in the best way.

All the best,

Humans Prove Intelligence’s Expansive Power

Demis Hassabis argues that Yann LeCun mixes up general and universal intelligence, stressing that both the human brain and modern AI systems are incredibly general learning architectures, theoretically capable of learning anything computable given enough data, time, and memory. Despite limits like the “no free lunch” theorem, humanity’s achievements, from inventing chess to building jet airplanes, prove how astonishingly adaptable and powerful general intelligence really is, and that should make us optimistic about AI’s future, too.

Humans Aren’t Truly General Thinkers

However, Yann LeCun argues that calling human intelligence “general” is misleading, because in practice our brains are highly specialized, resource-bounded systems that can only handle a tiny slice of all possible problems efficiently. Yes, humans are theoretically Turing-complete, but like shallow neural networks that can approximate anything only with absurd inefficiency, human cognition is heavily constrained — our minds grasp only a minuscule, structured corner of reality while the overwhelming rest is entropy we simply can’t comprehend. It’s a humbling reminder: human brilliance is real, but it thrives only because the universe is unusually understandable, not because our intelligence is limitless.

China Fields 24/7 Border Humanoids

China has signed a 264 million yuan (~$37 million) deal to deploy UBTech’s Walker S2 humanoid robots at the Fangchenggang border with Vietnam, where they’ll handle personnel flow, inspections, and logistics around the clock in harsh, remote conditions. The 176 cm, 70 kg robots walk at about 2 m/s, autonomously hot-swap their batteries in under 3 minutes for true 24/7 operation, and are managed via fleet software and remote teleoperation, turning border control into a real-world testbed for “physical AI” that could soon spill over into warehouses, factories, and critical infrastructure jobs currently done by humans. It’s both a huge leap in embodied AI capability (endurance, precision, logistics efficiency) and a wake-up call about how fast humanoids are moving into sensitive domains like surveillance, security, and labor.

GLM-4.7: the new open-source superstar!

The Takeaway

👉 GLM-4.7 boosts coding and complex reasoning with enhanced planning mechanisms and consistent multi-turn thinking.

👉 The model shows substantial benchmark gains over GLM-4.6, especially in agentic and tool-use scenarios.

👉 It’s fully open source and accessible via APIs, making advanced AI capabilities broadly available to developers.

👉 Practical use cases include smarter coding agents, deeper context understanding, and more robust long-form tasks.

The AI world just got a major upgrade: GLM-4.7, Z.ai’s newest flagship model, has officially launched with a clear focus on stronger coding, reasoning, and multi-step task performance. At its core, GLM-4.7 builds on the impressive foundation of earlier GLM models by enhancing interleaved thinking, a mechanism that lets the system plan ahead before responding, and adding preserved and turn-level thinking for better consistency over long, complex conversations. These upgrades make it much better at handling tasks that require deep logical steps, like debugging code, maintaining context across turns, or orchestrating actions with external tools.

But here’s what really makes GLM-4.7 stand out for the AI community: it significantly improves real-world coding metrics and reasoning benchmarks compared to its predecessor, while remaining fully open-source and accessible through APIs and platforms like OpenRouter. Imagine an AI that not only writes cleaner, more efficient code but also keeps the behind-the-scenes “thought process” more stable and reliable: that’s what GLM-4.7 aims to deliver.
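If you want to kick the tires yourself, OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a plain HTTP request is enough. Here’s a minimal sketch using only the Python standard library; note that the model slug (`z-ai/glm-4.7`) and the `OPENROUTER_API_KEY` environment variable name are assumptions to verify against OpenRouter’s model list and docs.

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "z-ai/glm-4.7") -> urllib.request.Request:
    """Assemble an authenticated chat-completion request.

    The model slug is an assumption -- check OpenRouter's catalog
    for the exact GLM-4.7 identifier before running this for real.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )

req = build_request("Explain this stack trace and suggest a fix.")

# To actually send it (requires a real OPENROUTER_API_KEY):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI wire format, the same request body works with any OpenAI-style client library by just swapping in the OpenRouter base URL.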

Looking ahead, could models like GLM-4.7 reshape how open-source AI competes with proprietary giants on reasoning and developer tooling? It’s a question worth exploring.

Why it matters: GLM-4.7 pushes the envelope for open-source models in coding and reasoning tasks, offering powerful capabilities without API lock-in. It strengthens the bridge between research and real-world developer workflows in the AI ecosystem.

Sources:

🔗 https://z.ai/blog/glm-4.7

🔗 https://docs.z.ai/guides/llm/glm-4.7

The Future of AI in Marketing. Your Shortcut to Smarter, Faster Marketing.

Unlock a focused set of AI strategies built to streamline your work and maximize impact. This guide delivers the practical tactics and tools marketers need to start seeing results right away:

  • 7 high-impact AI strategies to accelerate your marketing performance

  • Practical use cases for content creation, lead gen, and personalization

  • Expert insights into how top marketers are using AI today

  • A framework to evaluate and implement AI tools efficiently

Stay ahead of the curve with these top AI-developed strategies for marketers, built for real-world results.

Robot Olympics Meet Reality

Robotics just got its own brutally honest “Olympics,” and it’s the kind of benchmark that exposes the gap between slick demos and dependable household skills. Physical Intelligence (π) took Benjie Holson’s “Humanoid Olympics” challenge set - doors, laundry, tool use, fingertip dexterity, and “slippery when wet” chores - and tried to knock out medal-tier tasks by fine-tuning its latest robot foundation model (π 0.6). Think of fine-tuning like coaching an already athletic generalist: you’re not building a new robot brain from scratch, you’re sharpening it for a specific event.

Two details make the report pop. First, most of the work wasn’t exotic research - it was per-task data collection, often under 9 hours. Second, the “cheap baseline” approach - fine-tuning a standard vision-language model without large-scale robot pretraining - basically face-planted, which is a loud hint that robot-native pretraining is becoming table stakes.

The results are a reality check and a reason for optimism: across these messy, high-friction tasks (keys, peanut butter, wet sponges), π reports 52% average success and 72% average task progress. They also call out a hard truth engineers love to ignore: some “failures” aren’t model failures at all but hardware geometry (e.g., grippers too wide for certain clothing manipulations), meaning the roadmap runs through both software and bodies.

This is Moravec’s paradox in the wild: abstract reasoning scales fast, but hands-on competence is the true bottleneck for real-world agents. If a strong robot foundation model can learn new skills with hours - not months - of task data, the path from lab to useful home robots gets a lot shorter.

A free newsletter read by 117,000 marketers

The best marketing ideas come from marketers who live it. That’s what The Marketing Millennials delivers: real insights, fresh takes, and no fluff. Written by Daniel Murray, a marketer who knows what works, this newsletter cuts through the noise so you can stop guessing and start winning. Subscribe and level up your marketing game.
