
In Today’s Issue:

🤖 NVIDIA launches Nemotron 3 Nano

📺 LG TV users push back against a forced Microsoft Copilot tile

🎙️ OpenAI upgrades its Realtime API with new "mini" snapshots

📱 ChatGPT mobile users can now branch conversations

And more AI goodness…

Dear Readers,

What happens when “open source” stops being a model drop and turns into an ecosystem play? Today’s main story tracks NVIDIA’s Nemotron 3 push - long-context, agent-ready, and backed by datasets + RL tooling that make shipping real agents feel a lot more attainable. Then we zoom out to the messy reality of AI as a default layer: LG TV owners are reportedly seeing a pinned Copilot tile they didn’t ask for, while ChatGPT mobile finally adds conversation branching so you can explore alternate paths without losing your thread. We’ll also hit the latest realtime audio model snapshots, a spicy LeCun-vs-DeepMind debate on whether LLMs “understand,” plus a new agentic benchmark that turns Pokémon into a surprisingly revealing stress test - keep reading.

LG TVs Force Copilot Bloatware

A recent webOS update is reportedly adding a Microsoft Copilot tile to some LG smart TVs. It is pinned on the home screen and cannot be deleted (at best, you can hide it), which has triggered a significant user backlash (one Reddit post hit ~35k upvotes). What makes this extra spicy is that the current “Copilot” experience looks more like a shortcut to a web-based UI than a deeply integrated TV assistant. Yet, it signals how fast “AI default apps” are becoming part of consumer electronics, whether you asked for them or not.

Branch conversations in ChatGPT mobile

You can now branch conversations in ChatGPT, letting you easily explore different directions without losing your original thread. This option was previously only available for the web version, but is now rolling out to iOS and Android.

Realtime audio models just leveled up

New 2025-12-15 snapshots are live in the OpenAI Realtime API, targeting higher reliability, lower error rates, and fewer hallucinations. The new gpt-4o-mini-transcribe claims 89% fewer hallucinations vs. whisper-1, while the TTS model shows 35% fewer word errors. Additionally, gpt-realtime-mini improves instruction following by 22% and function calling by 13%.
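If you reach these models through the standard speech-to-text endpoint rather than a live Realtime session, trying the new transcription model is essentially a one-line change. Here's a minimal sketch with the OpenAI Python SDK; the audio filename is hypothetical, and whether the 2025-12-15 release gets its own dated model id you can pin is an assumption on our part:

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Undated alias shown here; if a dated snapshot id is exposed for the
    # 2025-12-15 release, pinning it is just a different model string.
    with open("meeting.wav", "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="gpt-4o-mini-transcribe",
            file=audio_file,
        )

    print(transcript.text)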

Do LLMs Understand? AI Pioneer Yann LeCun Spars with DeepMind’s Adam Brown.

NVIDIA Nemotron 3: Open source gets an upgrade

The Takeaway

👉 NVIDIA released the Nemotron 3 open-model family (Nano, Super, Ultra), positioning it as a full open stack rather than just model weights.

👉 Nemotron 3 Nano is built for long-context work (up to 1M tokens) and uses a hybrid Mixture-of-Experts setup, activating only a small slice of parameters per token to stay efficient.

👉 NVIDIA claims major quality + reliability gains in agent-style tasks, and reports significant throughput improvements versus its prior Nemotron 2 Nano generation.

👉 Beyond the model, NVIDIA is shipping datasets, RL tooling, and environments, signaling an effort to standardize how open “agentic” systems are trained and deployed.

NVIDIA just turned “open model” into an ecosystem play. Nemotron 3 Nano lands with a 1-million-token context window and a design built for agentic workflows, not just chatting. Nemotron 3 is a new family of open models (Nano, Super, Ultra). The Nano version is 30B parameters on paper, but it only “wakes up” ~3B at a time using a hybrid mixture-of-experts setup: think of a team of specialists where only the right few step in per task.

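To make the “only ~3B of 30B parameters wake up” idea concrete, here is a toy sparse mixture-of-experts forward pass in plain NumPy. This illustrates top-k routing in general, not Nemotron 3’s actual hybrid architecture; all shapes, sizes, and names are made up for the example:

    import numpy as np

    def moe_forward(x, experts_w, router_w, k=2):
        # x: (d,) token vector; experts_w: (E, d, d); router_w: (d, E)
        scores = x @ router_w                 # score every expert for this token
        top = np.argsort(scores)[-k:]         # keep only the k best-scoring experts
        gates = np.exp(scores[top])
        gates = gates / gates.sum()           # softmax over the chosen experts
        out = np.zeros_like(x)
        for g, e in zip(gates, top):
            out += g * (experts_w[e] @ x)     # only these k experts do any work
        return out

    # toy run: 8 experts, 2 active per token
    rng = np.random.default_rng(0)
    d, E = 16, 8
    y = moe_forward(rng.normal(size=d),
                    rng.normal(size=(E, d, d)) * 0.1,
                    rng.normal(size=(d, E)))
    print(y.shape)  # (16,)

The parameter count scales with the number of experts, but the per-token compute scales only with k, which is the efficiency trick the Nano model leans on.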

Result: NVIDIA claims up to 4× higher token throughput than Nemotron 2 Nano, and the technical report shows 2.2–3.3× faster generation than similarly sized open contenders in a heavy-output setting, while staying competitive on accuracy.

What’s especially notable for builders is that NVIDIA isn’t just dropping weights. It’s also releasing training datasets, reinforcement-learning tools, and ready-made environments on Hugging Face and GitHub, so teams can fine-tune “transparent” agents for real work like debugging, summarization, and retrieval. Super (~100B) and Ultra (~500B) are slated for 2026. How far can open, efficient agents go before “closed” feels like the slow option?
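If you want to poke at the released artifacts, the usual Hugging Face datasets workflow applies. The repo id below is a placeholder, not a real dataset name; browse NVIDIA’s Hugging Face organization for the actual Nemotron 3 releases before running anything:

    from datasets import load_dataset  # pip install datasets

    # Placeholder repo id for illustration only; check huggingface.co/nvidia
    # for the actual Nemotron 3 dataset and environment names.
    ds = load_dataset("nvidia/nemotron-3-placeholder-data",
                      split="train", streaming=True)

    for row in ds.take(3):   # peek at a few examples without downloading everything
        print(row)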

Why it matters: This is a credible path to scaling agents without scaling your inference bill into orbit. And by publishing not just weights but also data and RL tooling, NVIDIA is lowering the barrier to building auditable, specialized systems that actually ship.

Sources:
🔗 https://nvidianews.nvidia.com/news/nvidia-debuts-nemotron-3-family-of-open-models

Not Another 3D AI Tool: Hitem3D Is Built for Making Real Things

Most “AI 3D” tools are optimized for virtual assets—great for renders, games, and demos. But when you try to manufacture with those meshes, reality hits: broken topology, missing thickness, messy geometry, and models that look fine… yet fail the moment you send them to a printer or machine.
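Those failure modes are easy to spot programmatically. As a rough illustration (not part of Hitem3D’s own product), here is how you might sanity-check any generated mesh with the open-source trimesh library before sending it to a printer; the filename is hypothetical:

    import trimesh  # pip install trimesh

    mesh = trimesh.load("generated_part.stl")   # hypothetical generated mesh

    print("watertight:", mesh.is_watertight)            # holes / open edges kill prints
    print("consistent winding:", mesh.is_winding_consistent)
    if mesh.is_watertight:
        print("enclosed volume:", mesh.volume)
    else:
        mesh.fill_holes()                                # first-pass repair attempt
        print("watertight after fill_holes:", mesh.is_watertight)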

Hitem3D is different. It’s an AI 3D model generation platform purpose-built for physical manufacturing, not just visualization. Under the hood is Hitem3D’s in-house 3D foundation model—once ranked #1 on Hugging Face’s download chart for three consecutive weeks—and capable of reaching up to 1536³ resolution, a meaningful jump in fidelity compared with many tools capped at 1024³.

The real edge isn’t only the number. It’s the training target and data: Hitem3D is trained on product-grade 3D models, aiming to generate meshes that can go directly into 3D printing, laser cutting, and even CNC workflows—instead of stopping at “looks good in a viewer.”

As desktop 3D printers, laser cutters, and CNC machines spread globally, hardware is getting cheaper and better. The new bottleneck is shifting fast: content. The world needs more high-quality, production-ready 3D models—created with less friction, higher accuracy, and better compatibility with real-world pipelines.

Hitem3D’s bet is simple: become the high-fidelity model layer powering the next maker wave—where building physical products is as accessible as generating images today.

Google just dropped a new Agentic Benchmark: Gemini 3 Pro beat Pokémon Crystal (defeating Red) using 50% fewer tokens than Gemini 2.5 Pro.

The most impressive robotics updates from this week:

This week was packed with impressive updates. First up: a self-assembling photovoltaic module whose arms dock onto the layer above.

And while we're on the subject of impressive ideas, how about taking more cues from the animal kingdom: a snake-like robot that slithers around objects and then grabs them?

And last but not least, we are seeing an increasing number of robots taking over the last mile, i.e., making deliveries right to the front door. These robots are becoming ever more agile and versatile, so deliveries could soon be handled entirely by robotics.
