
In Today’s Issue:
💰 Elon Musk’s AI startup hits a $230B valuation
🚪 A key architect of the o1 reasoning model and GitHub Copilot resigns after seven years to pursue "independent" research
🚧 Internal reports reveal the shutdown of Omniverse Cloud following weak demand and technical bugs
💻 Anthropic launches a native app for parallel coding sessions
✨ And more AI goodness…
Dear Readers,
What happens when the AI race stops being about clever architectures and turns into a raw contest of power, capital, and execution? Today’s issue opens with xAI’s $20B Series E—industrial-scale money that screams one thing: compute is the battlefield. Then we hit the pressure points: Nvidia’s Omniverse still struggling to turn manufacturing ambition into revenue, a senior OpenAI research leader leaving to chase work that doesn’t fit inside the lab, and Anthropic bringing Claude Code to the desktop to make parallel development feel effortless. Layer in Amazon’s “boring-but-reliable” Nova push and the hardware momentum behind NVIDIA’s Rubin platform, and the pattern is clear: AI is getting heavier, pricier, and more operational. If you want to see who’s positioned to ship - not just hype - keep reading.
All the best,




🤖 Nvidia’s Manufacturing AI Bet Stalls
Nvidia has poured hundreds of millions into Omniverse, betting its simulation software could unlock a share of the $50 trillion manufacturing and logistics market - but real revenue remains elusive. Despite eye-catching partnerships and CES hype led by CEO Jensen Huang, demand for Omniverse Cloud was so weak that Nvidia shut it down in 2025, with developers citing buggy, incomplete tools and high costs. The takeaway: Omniverse may still be a long-term "CUDA-like" play, but for now it's a slow burn overshadowed by Nvidia's explosive AI chip boom.

🤖 Big loss for OpenAI: Research Leader Exits Company
Jerry Tworek, a veteran research VP at OpenAI, has announced his resignation after nearly seven years, saying he wants to pursue research directions that are hard to explore within the company. Tworek was a key architect behind the o1 reasoning model and played a major role in advancing ChatGPT's coding skills and GitHub Copilot, making his exit a notable moment for the AI research world.

🚀 Claude Code Desktop Preview Launch
Anthropic has rolled out a desktop preview that lets developers run multiple Claude Code sessions locally or in the cloud, with slick Git worktree isolation for parallel tasks. Key highlights include secure web session launches, automatic handling of ignored files via .worktreeinclude, and a bundled, stability-first Claude Code version managed by the app. It's a productivity boost for teams juggling multiple code tasks, without conflicts or setup headaches.
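Under the hood, Git worktrees are the standard mechanism for this kind of parallel-task isolation: each task gets its own working directory and branch backed by the same repository, so edits in one session never collide with another. A minimal sketch of the idea (the repo and branch names here are illustrative, not Anthropic's actual setup):

```shell
# Create a throwaway repo to demonstrate worktree isolation.
tmp=$(mktemp -d)
cd "$tmp"
git init demo
cd demo
git config user.email "demo@example.com"   # identity needed for the commit below
git config user.name "Demo"
git commit --allow-empty -m "init"

# Each parallel task gets its own checkout and branch of the same repo:
git worktree add ../task-a -b task-a
git worktree add ../task-b -b task-b

# Lists three entries: the main checkout plus the two task worktrees.
git worktree list
```

Because each worktree is a separate directory on its own branch, two sessions can edit files simultaneously without stepping on each other, and merging the results back is ordinary Git.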



The NVIDIA Rubin Platform: Six New Chips, One AI Supercomputer



The race for data centers is entering the next round.
The Takeaway
👉 xAI raised $20B in an upsized Series E round, above its reported $15B target, at a valuation reported around $230B.
👉 Investors named include Valor Equity Partners, StepStone Group, Fidelity, plus strategic participation from Nvidia and Cisco.
👉 xAI said the funding will support expansion of the compute clusters used to train its AI models.
👉 The company is expanding data-center plans near Memphis, Tennessee, including a third large facility, and claims compute capacity equivalent to ~1 million Nvidia H100s.
Elon Musk’s xAI just pulled off a funding move that feels less like “startup finance” and more like mobilization: $20B raised in an upsized Series E at a reported $230B valuation. This is money to buy time - by buying compute. xAI says the round will accelerate its infrastructure buildout and the training of the next Grok models, with strategic backing from Nvidia and Cisco alongside big financial names like Valor, StepStone, and Fidelity. The company is also scaling its Memphis footprint, including plans for a third supersized data center near its “Colossus” cluster - because frontier models now live or die by power, cooling, and GPUs.

The signal is loud: the competitive edge is shifting from clever architectures to industrial capacity. If xAI can reliably run at "million-H100-equivalent" scale, it pressures everyone on speed, cost per token, and product cadence.

The fun part: this compute race can also spill over into better tools for developers - faster iteration, cheaper experimentation, and new real-time agents. Who do you think keeps up: the best researchers, or the best builders?
Why it matters: This round is a reminder that frontier AI is becoming an infrastructure business as much as a model business. Whoever controls reliable, low-cost compute can ship faster, price more aggressively, and win distribution.
Sources:
🔗 https://x.ai/news/series-e
🔗 https://www.reuters.com/business/musks-xai-raises-20-billion-upsized-series-e-funding-round-2026-01-06
🔗 https://www.ft.com/content/f87bde18-ffd4-4e47-a5c8-a3e2099e08f9


Modernize your marketing with AdQuick
AdQuick unlocks the benefits of Out Of Home (OOH) advertising in a way no one else has. It approaches the problem with an eye to performance, built for marketers with the engineering excellence you've come to expect on the internet.
Marketers agree OOH is one of the best ways to build brand awareness, reach new customers, and reinforce your brand message. It's just been difficult to scale. But with AdQuick, you can plan, deploy, and measure campaigns just as easily as digital ads, making them a no-brainer to add to your team's toolbox.



Amazon’s Nova Gets Serious
After years of chasing “frontier” hype, AWS is leaning into a different superpower: making AI boring - in the best way. Amazon’s Nova 2 model family and its Nova Act agent service are built for the unglamorous work that actually runs companies: automating repetitive browser workflows, extracting data from messy screens, and summarizing mountains of internal chatter with steady uptime.

Think of it like switching from a prototype race car to a production vehicle line. Nova is meant to be the reliable, cheaper option you can deploy everywhere, while partner models (notably Anthropic’s Claude) still step in when you need sharper reasoning or coding. To close that gap, Amazon is pushing harder on “post-training” - the final tuning phase (often with reinforcement learning) that turns a general model into something more job-ready.

A leadership reshuffle that unifies AI models, custom silicon (Trainium), and quantum under AWS veteran Peter DeSantis - and taps robotics researcher Pieter Abbeel to lead frontier model research - signals urgency. Meanwhile, Project Rainier, a massive Trainium2-based cluster built with Anthropic, shows Amazon is betting big on owning the compute stack too.

If “good enough + ultra-reliable” becomes the default buying criterion, Amazon can win by turning cost and uptime into a platform moat. The bigger question: can Nova climb the reasoning curve fast enough to reduce dependence on rivals’ best models?










