In partnership with:


Dear Readers,
Artificial intelligence isn’t just rewriting code - it’s rewriting power. In one day, we saw Meta’s chief scientist Yann LeCun leave to chase independent, open-ended AI; SoftBank liquidate its entire Nvidia stake to bet billions elsewhere; and Elon Musk propose humanoid robots that prevent crime before it happens. Each headline points to the same crossroads: the age of experimentation is over, and AI is now shaping how capital, ethics, and control converge.
Today’s stories dive into that collision. We break down why J.P. Morgan believes AI must generate $650 billion a year to justify its costs, how LeCun’s exit signals a deeper philosophical divide inside Big Tech, and why Musk’s “Optimus as crime prevention” idea forces society to rethink safety itself. From lab to law, from silicon to street - AI’s next frontier isn’t just technological, it’s moral and economic.
In Today’s Issue:
👮 Elon Musk proposes Optimus robots for crime prevention
🎥 MVU-Eval measures multi-video understanding in multimodal AI models
📝 QG-CoC introduces Question-Driven "Chain-of-Captions"
🌐 OmniField develops Robust Spatiotemporal Representations
✨ And more AI goodness…
All the best,




LeCun Departs Meta For Independence
Yann LeCun, Meta’s chief AI scientist and Turing Award winner, is preparing to leave the company to launch his own start-up focused on “world models” — AI systems that learn from video and spatial data rather than text. His exit follows growing friction over Mark Zuckerberg’s pivot toward short-term, LLM-driven “superintelligence” projects. The move underscores a deeper philosophical rift inside Meta — between product-oriented AI and research that aims for true human-level reasoning.

SoftBank Dumps Entire Nvidia Stake
In October 2025, SoftBank Group sold its entire stake of 32.1 million Nvidia shares for about US $5.83 billion, an average of just under US $182 per share — notably below Nvidia’s close of roughly US $199 the previous day. The move isn’t about losing faith in Nvidia but about freeing up capital for a massive ramp-up in AI investments, particularly in OpenAI, where SoftBank plans to commit some US $20–30 billion.
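A quick sanity check of the implied average sale price, using only the figures above (a minimal Python sketch):

```python
# Sanity-check the implied average sale price from the figures above.
proceeds = 5.83e9   # total sale proceeds, US$5.83B
shares = 32.1e6     # 32.1 million Nvidia shares
avg_price = proceeds / shares
print(f"average = ${avg_price:.2f} per share")  # $181.62, just under $182
```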

Massive AI Revenue Needed for Returns
According to J.P. Morgan, to achieve even a 10% return on AI build-out investments, the industry must generate roughly US$650 billion in annual revenue. That works out to an extra ~$35 per month from every iPhone user, or ~$180 per month from every Netflix subscriber, in perpetuity.
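Those per-user figures check out against rough audience sizes; the user counts below (~1.5 billion iPhone users, ~300 million Netflix subscribers) are our assumptions, not J.P. Morgan’s published inputs:

```python
# Back-of-the-envelope check of J.P. Morgan's per-user framing.
# Audience sizes are assumptions, not figures from the report.
target_revenue = 650e9   # required annual revenue, US$
iphone_users = 1.5e9     # assumed global iPhone users
netflix_subs = 300e6     # assumed Netflix subscribers

print(f"${target_revenue / iphone_users / 12:.0f}/month per iPhone user")  # ~$36
print(f"${target_revenue / netflix_subs / 12:.0f}/month per Netflix sub")  # ~$181
```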


Alex Karp, CEO of Palantir: Exclusive Interview Inside PLTR Office



Optimus as Crime Prevention
The Takeaway
👉 Musk’s vision pushes Optimus from factory floor to public safety — turning robots into proactive guardians.
👉 The idea reframes justice around prevention, not punishment, but raises serious ethical and legal dilemmas.
👉 Real-world deployment demands breakthroughs in AI reasoning, prediction accuracy, and non-lethal intervention.
👉 The next frontier of robotics isn’t just technical — it’s moral: who decides when a robot should act?
Elon Musk wants Tesla’s Optimus robots to do more than assemble cars — he wants them to prevent crime. Speaking at a recent event, Musk suggested that future humanoid robots could “follow you around and stop you from doing crime,” reframing them as tools for social safety rather than simple automation. It’s a bold claim that blends robotics, ethics, and governance: could mechanical companions become a substitute for prisons and policing?

For now, Optimus can fold laundry and work factory shifts, but Musk’s vision pushes far beyond that. If humanoid robots can learn situational awareness, physical restraint, and moral reasoning, they could transform justice systems from punishment-based to prevention-oriented. Yet it raises hard questions about consent, accountability, and surveillance — who decides when a robot intervenes, and at what cost to human freedom?

Why it matters: Preventive robotics could redefine public safety - replacing punishment with prevention - but only if prediction accuracy and ethical safeguards evolve just as fast as the machines themselves. Societies that balance these forces well could set the global standard for humane automation.

Chatbots just died. Meet the PALs: AI humans that see you, hear you, act for you.
Your AI just took care of your biggest problems. PALs check in when you forget, remember what you said you’d do, and take care of tasks for you. Text, phone call, or video chat. This is what AI was always supposed to be.



MVU-Eval: Measuring Multi-Video Understanding
A new benchmark tests whether multimodal models can understand multiple videos at once - connecting events across clips, recognizing sequences, and answering questions that span them. The novelty lies in the focus on multi-video context rather than single clips. That means fairer, more realistic testing for agents that must reason across multiple cameras - security footage, sports analysis, quality control - which in turn accelerates the development of robust video AI in media, industry, and security.
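To make the task concrete, here is a minimal sketch of what a multi-video evaluation item and scoring loop could look like; the schema and the `model.predict` interface are our illustrative assumptions, not MVU-Eval’s actual format:

```python
# Illustrative multi-video QA item in the spirit of MVU-Eval;
# the schema and model interface are assumptions, not the benchmark's own.
from dataclasses import dataclass

@dataclass
class MultiVideoItem:
    video_paths: list[str]   # several related clips, not a single video
    question: str            # requires reasoning across the clips
    choices: list[str]
    answer: str              # ground-truth choice

item = MultiVideoItem(
    video_paths=["cam_entrance.mp4", "cam_lobby.mp4", "cam_exit.mp4"],
    question="In what order does the person in the red jacket appear across cameras?",
    choices=["entrance, lobby, exit", "lobby, entrance, exit", "exit, lobby, entrance"],
    answer="entrance, lobby, exit",
)

def accuracy(model, items):
    # `model.predict` stands in for any multimodal model that accepts
    # multiple videos plus a text question and returns one of the choices.
    hits = sum(model.predict(i.video_paths, i.question, i.choices) == i.answer
               for i in items)
    return hits / len(items)
```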

QG-CoC: Question-Driven “Chain-of-Captions”
Instead of feeding the model just the image and the question, this method builds a chain of intermediate captions that are explicitly driven by the user’s question. This helps models structure visual details step by step and avoid hallucination. Implications: better image/diagram responses for support, education, e-commerce - anywhere precise visual references are crucial.
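As a rough illustration of the idea (the prompts below are our assumptions, not the paper’s templates), a question-driven caption chain can be wired around any vision-language model call:

```python
# Question-driven chain-of-captions, sketched around a generic VLM call.
# `vlm(prompt, images)` is a placeholder; the prompts are illustrative only.

def answer_with_caption_chain(vlm, images, question, steps=3):
    captions = []
    for _ in range(steps):
        caption_prompt = (
            f"Question: {question}\n"
            f"Captions so far: {captions}\n"
            "Describe one new visual detail that helps answer the question."
        )
        captions.append(vlm(caption_prompt, images))
    answer_prompt = (
        f"Question: {question}\n"
        f"Grounded observations: {captions}\n"
        "Answer using only these observations."
    )
    return vlm(answer_prompt, images)
```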

OmniField: Robust Spatiotemporal Representations
"Conditioned Neural Fields" connect signals across space and time (e.g., video, sensors) to create stable representations that adapt to context. The novel aspect is the conditioned, multimodal learning strategy for dynamic scenes. Implication: more reliable perception for robotics, autonomous systems, and digital twins - fewer failures under changing light/visual conditions.






