This Christmas Week in AI: AI's Biggest Non-Acquisition, Two GPT-Killers, and the Death of Video Editing As We Know It
Christmas 2025 was supposed to be the week AI took a break. Three-day work weeks. Skeleton crews. Maybe a blog post about "2025 in review" and everyone clocks out early.
Instead, Nvidia dropped $20 billion on Christmas Eve without technically buying anything, two open-source models quietly surpassed GPT-5.2 on coding benchmarks, Alibaba shipped five production-ready creative tools in 72 hours, and ByteDance released so many video editing AIs that "Nano Banana for video" is now a product category.
If you took the week off, congratulations—you missed the moment the 2026 AI landscape got rewritten. Here's what just changed while you weren't looking.
The $20B "Not-Acquisition": How Nvidia Bought Groq Without Actually Buying It
On Christmas Eve, while most of the Valley was offline, Nvidia announced a $20 billion deal with Groq—the AI chip startup behind LPUs (Language Processing Units) that run inference 10x faster than Nvidia's own GPUs while using 10x less energy. The headline number makes it the largest AI deal in history. The structure makes it the smartest regulatory dodge in Silicon Valley this decade.
Because Nvidia didn't buy Groq. They licensed the technology. Non-exclusively. Then hired Jonathan Ross (Groq's founder and the ex-Google engineer who built Google's TPU chip) plus the entire executive team. Groq continues to exist on paper under a new CEO, the cloud business keeps running, and investors like BlackRock, Samsung, and 1789 Capital (yes, Donald Trump Jr.'s fund) just tripled their money in 90 days—Groq was valued at $6.9 billion in September.
Why structure it this way? Simple: regulators. Nvidia's $40 billion bid for Arm collapsed under regulatory pressure in 2022. An outright Groq acquisition—with Nvidia already controlling 90%+ of the AI chip market—would trigger immediate antitrust review. But a "licensing agreement"? No review required. Google did the same thing with Windsurf ($2.4B). Microsoft with Inflection AI. Amazon with Adept. It's the new big tech playbook: buy the brain, skip the body, dodge the FTC.
The timing matters. Google just released Gemini 3 trained entirely on its own 7th-gen TPU chips (Ironwood)—no Nvidia hardware involved. For the first time, the market saw a credible path to world-class AI that doesn't flow through Nvidia. Google's stock climbed. Nvidia's dipped. So Nvidia went out and hired the guy who built Google's TPU in the first place. Checkmate.
The downside? Rank-and-file Groq employees with unvested stock options likely got nothing. In traditional acquisitions, employee equity converts to cash or acquirer stock. In reverse acqui-hires like this, only the executives and investors cash out. The engineers who built the tech? They're now employees of a shell company, watching their bosses walk away with a slice of $20 billion. Not a great look—but entirely legal.
The Pocket-Sized Genius: When GPT-4 Fits on Your Phone and Costs Nothing
Remember March 2023, when GPT-4 launched and changed everything? The model that made ChatGPT feel like magic, that agencies built entire workflows around, that justified $200/month Pro subscriptions?
It now fits in your pocket. And it's free. And it's better.
Liquid AI just released LFM2-2.6B-exp—a 2.6-billion-parameter model that outperforms GPT-4 on standard benchmarks while running locally on a mobile phone. It was trained with pure reinforcement learning (no supervised fine-tuning), supports a 32K-token context window, and requires so little compute that edge devices can run it without cloud connectivity.
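The "fits on a phone" claim comes down to simple arithmetic: weight memory is roughly parameter count times bits per parameter. A quick sketch (the quantization levels below are illustrative, not Liquid AI's published figures):

```python
def model_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory: params x bits/8 bytes, expressed in GB.

    Ignores KV-cache and activation overhead, which add more at runtime.
    """
    return params_billions * bits_per_param / 8  # 1e9 params cancel against GB

# A 2.6B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: {model_memory_gb(2.6, bits):.2f} GB")
# 16-bit: 5.20 GB, 8-bit: 2.60 GB, 4-bit: 1.30 GB
```

At 4-bit quantization the weights occupy roughly 1.3 GB, comfortably inside a modern phone's RAM; the fp16 version at ~5.2 GB would not be.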
The implications are staggering. GPT-4 was the "wow" moment that convinced non-technical users AI had arrived. Now that same capability runs offline, costs nothing per token, and doesn't send your data to San Francisco. For agencies running high-volume workflows—social copy, email sequences, campaign briefs—the cost delta between cloud APIs and local inference just became impossible to ignore.
This isn't a research demo. It's production-ready, downloadable today, and optimized for the exact workflows agencies run 10,000 times per month. The AI wars just went local—and your API bills are about to look ridiculous.
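To put a number on that cost delta, here's a back-of-the-envelope comparison (the per-token rate and volumes are illustrative assumptions, not any provider's actual pricing):

```python
def monthly_api_cost_usd(runs_per_month: int, tokens_per_run: int,
                         usd_per_million_tokens: float) -> float:
    """Cloud API spend for a repeated workflow, billed per token."""
    total_tokens = runs_per_month * tokens_per_run
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Hypothetical agency workflow: 10,000 runs/month at ~2,000 tokens each,
# priced at an assumed blended rate of $10 per million tokens.
cloud = monthly_api_cost_usd(10_000, 2_000, 10.0)
print(f"Cloud: ${cloud:,.2f}/month vs. local inference: $0 marginal per token")
# Cloud: $200.00/month vs. local inference: $0 marginal per token
```

Local inference isn't literally free (hardware, power, maintenance), but the marginal per-token cost drops to effectively zero, which is why the delta grows with volume.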
The Open-Source Rebellion: Two New Models Just Matched GPT-5.2 (And You Can Download Them Right Now)
If Liquid AI's pocket model was the appetizer, this is the main course. Two open-source models dropped this week that rival—and in some cases beat—GPT-5.2, Claude 4.5, and Gemini 3 Pro on coding and reasoning benchmarks. Both are free to try. Both are downloadable. Both just rewrote what "state-of-the-art" means.
MiniMax M2.1 (229B parameters) is a monster. It tops the leaderboards on SWE-bench Verified, Multi-SWE-bench, and multilingual coding benchmarks—yes, better than Claude 4.5 and Gemini 3 Pro. Developers are reporting one-prompt wins on tasks that previously required multi-step scaffolding: 3D racing games with collision physics, interactive financial dashboards from raw spreadsheets, beehive simulations with worker bee pathfinding. The kind of "vibe coding" that six months ago required GPT-4 plus three follow-up prompts now ships in one shot.
GLM 4.7 matches the energy. It scores highest on Humanity's Last Exam (obscure scientific domain questions), rivals GPT-5.1 on SWE-bench, and ships zero-shot wins on Android OS simulations, video editors, and fully working SimCity clones. Both models are open-weight, Apache 2.0 licensed, and available for self-hosting on enterprise GPU clusters (or cloud APIs if you prefer).
The narrative six months ago was "open-source is 6-12 months behind." This week proved that narrative is dead. The best coding models in the world are now free, downloadable, and improving faster than the closed alternatives agencies are paying $200/month to access.
YouTube's Trojan Horse: Every Creator Just Became a Game Developer
YouTube rolled out Playables Builder this week—a Gemini 3-powered tool that lets creators build playable video games using text, image, or video prompts, then publish them directly on YouTube with ad-based revenue sharing. Think "vibe coding for games," but inside the platform that already reaches more than 2 billion monthly users.
The interface mirrors ChatGPT: describe the game, iterate with follow-up prompts, preview in real-time, publish when ready. Creators are already shipping surprisingly polished results—racing games, puzzle mechanics, even multi-level platformers—all generated without writing a line of code. YouTube's bet is simple: if games live natively on the platform, users stay longer, watch more ads, and YouTube splits the revenue with creators who built the "content."
For marketers, the implications are obvious. Branded mini-games as campaign activations. Gamified product tutorials. Interactive storytelling that keeps users engaged 10x longer than static video. The closed beta is US/Canada/UK/Australia only for now, but global rollout is imminent—and agencies that figure out game-as-content before Q2 2026 will own a channel competitors don't even know exists yet.
Alibaba's 72-Hour Creative Nuke: Five Production Tools That Just Shipped
While OpenAI spent the week debating emoji slider settings, Alibaba's Qwen team dropped five production-ready creative tools in 72 hours. Not demos. Not research previews. Shipping, downloadable, open-source tools that work today.
Qwen-Image-Edit-2511 is the standout—the best open-source image editor you can run offline. Think Nano Banana, but free, faster, and with relighting + novel view synthesis built into the base model. Change car colors, swap textures, adjust lighting, generate new camera angles—all with natural language prompts. The 2-bit quantized version runs comfortably on 8GB VRAM, meaning consumer-grade GPUs can handle high-volume agency workflows without cloud dependency.
FlashPortrait does infinite-length portrait animation 6x faster than competitors. Upload a reference image and a video of someone talking/acting, and it maps facial movements onto the new character while staying consistent across 60+ second videos. Previous tools (Live Portrait, Hailuo) degrade after 10-15 seconds. FlashPortrait holds coherence for minutes. Use case: multilingual spokesperson videos at scale without hiring talent in 94 languages.
Generative Refocusing fixes out-of-focus photos after the fact. Adjust focus depth, change aperture, blur/sharpen backgrounds—basically Photoshop's depth-of-field tools, but AI-powered and accessible via text prompt. The model is 2.6GB, runs on low VRAM, and solves the "client sent unusable product shots" problem every agency knows too well.
Add Qwen3-TTS (50-voice text-to-speech with cloning) and you have a complete creative suite that shipped in one week, costs nothing, and runs locally. While the West debates subscription pricing, China's shipping the infrastructure of 2026.
The Video Editing Revolution: Four Tools That Just Killed Traditional Post-Production
Video editing just got its ChatGPT moment—except it happened four times in one week.
ReCo (Region-Constrained In-Context Generation) is "Nano Banana for video." Select a region in any video, type what you want: replace the person with a cartoon penguin, swap the car for a Jeep, add a crown to the seal's head, remove the woman from the scene. It handles addition, replacement, removal, and full style transfer (watercolor, 3D animation, Lego) better than previous tools like Edit-Video or Ditto. Model + inference code drops in 2-3 weeks.
StoryMem (ByteDance) creates multi-shot cinematic videos with memory. It remembers characters and scenes from earlier shots so long-form narratives stay consistent. Previous tools generated one scene at a time; StoryMem tracks a "memory bank" across sequences, enabling actual storytelling instead of disconnected clips.
INF-Cam changes camera movement in existing videos. Take any video, prompt it to pan/zoom/orbit differently, and it regenerates the scene from the new perspective while preserving character expressions and scene details. Requires 50GB VRAM for now, but quantized versions are inevitable.
DreamMontage (also ByteDance) lets you upload keyframes at specific timestamps (0s, 5s, 10s, 15s) and the AI fills in smooth transitions between them. Water droplet → swan → watch → rose. Gorilla → caveman → knight on horse. The model blends wildly different scenes into coherent narrative flow.
The pattern is clear: video editing is shifting from timeline-based to prompt-based. The agencies that adapt workflows now—training teams on text-driven post-production, building client demo reels with these tools, locking in "AI-native video" as a service offering—will own the category before competitors even understand the shift happened.
What Agencies Do Next
OpenAI added emoji sliders this week. Alibaba shipped five creative tools. ByteDance dropped four video editors. Two open-source models beat GPT-5.2 on coding. Nvidia spent $20 billion to hire one engineer and dodge regulators.
The gap between "AI news" and "AI you can actually use" just collapsed. The tools that agencies will run in Q2 2026 shipped this week—most of them open-source, most of them free, all of them production-ready today.
The winning move isn't waiting for the next model announcement. It's spinning up Qwen-Image-Edit this week, testing MiniMax M2.1 on your coding workflows next week, and pitching clients on AI-native video by January 15th. Because when your competitor shows up with "same quality, zero API costs, your data stays local," the conversation ends before it starts.
Bangkok8 AI: We'll show you how to turn Christmas week's chaos into Q1's unfair advantage.
