OpenAI Restores GPT-4o After Backlash
PLUS: Claude’s 1M-token leap, Babuschkin’s new AI fund, and Microsoft’s Meta talent hunt.
Welcome to Lore Brief, your weekly edge in the age of AI.
This issue is brought to you by Factory, an engineer in every tab.
OpenAI restores GPT-4o (and other legacy models) after GPT-5 backlash
OpenAI briefly removed older models during the GPT-5 rollout, then reversed course after user outcry over tone, behavior, and broken workflows. Paid users can again select GPT-4o and other legacy options while OpenAI tunes GPT-5’s “personality.”
Sam Altman acknowledged misjudging how attached users were to 4o and promised warmer GPT-5 defaults.
ChatGPT now includes a setting that surfaces additional and legacy models in the model picker.
Users preferred 4o’s style even where GPT-5 outscored it technically.
Early feedback is mixed: some users praise GPT-5’s reasoning and breadth, others feel underwhelmed and switch back to Claude—especially for coding.
Many casual users gravitate to the familiarity and predictable tone of 4o, even at the cost of some raw performance.
Users are voting with workflows: tone, structure, and reliability matter as much as raw IQ. Bringing legacy options back buys time while GPT-5’s behavior and controls are tuned.
DeepSeek R2 delay spotlights China’s chip dilemma
DeepSeek’s next model has reportedly slipped after attempts to train on Huawei’s Ascend hardware faltered, underscoring the gap with Nvidia-based stacks. The delay lands as Beijing presses firms to justify or curb Nvidia H20 orders, even as labs chase performance parity.
The pushed-back launch is tied to unsuccessful Ascend training runs for R2.
Some teams are reverting to Nvidia for training while exploring Ascend for inference.
Chinese regulators are urging domestic giants to avoid Nvidia H20s, tightening scrutiny on imports.
The episode illustrates the friction between policy goals and current software/hardware maturity.
Compute sovereignty is colliding with performance gaps in today’s domestic stacks. Whether R2 trains cleanly on Ascend will signal how viable a fully local path really is.
Claude Sonnet 4 gets a 1M-token context window
Anthropic expanded Sonnet 4’s context to 1M tokens for Tier-4 API orgs and Bedrock users, a 5× jump aimed at whole-repo and multi-document work. Pricing doubles beyond 200K input tokens and rises 1.5× on output, with prompt caching and batch discounts still applying.
Availability: Anthropic API (beta) and Amazon Bedrock now; Vertex AI support “coming soon.”
Pricing: requests ≤200K input tokens cost $3/M input and $15/M output; above 200K, $6/M input and $22.50/M output (rough cost math sketched below).
Practical fit: large codebases, legal/research packets, and longer-horizon agents in fewer passes.
Access requires usage tier 4 or custom limits, with dedicated rate limits for long context.
A true million-token workflow reduces chopping and re-prompting across code, research, and legal review. Pricing tiers and access gates will decide how quickly teams can make it a default.
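For a rough sense of what the two tiers mean in dollars, here is a minimal cost sketch in Python. It assumes the higher rate applies to the entire request once input exceeds 200K tokens, as the tier description implies, and it ignores prompt caching and batch discounts; the function name and example token counts are illustrative.

```python
# Back-of-the-envelope estimate for Sonnet 4's two long-context pricing tiers.
# Assumption: the >200K rate applies to the whole request once input exceeds
# 200K tokens; prompt-caching and batch discounts are ignored.

LONG_CONTEXT_THRESHOLD = 200_000  # input tokens

def sonnet4_request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the published per-million-token rates."""
    if input_tokens <= LONG_CONTEXT_THRESHOLD:
        input_rate, output_rate = 3.00, 15.00   # $/M tokens, standard tier
    else:
        input_rate, output_rate = 6.00, 22.50   # $/M tokens, long-context tier
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

# Example: a whole-repo prompt of 500K input tokens with a 20K-token answer.
print(f"${sonnet4_request_cost(500_000, 20_000):.2f}")  # about $3.45 at the long-context rate
```

At these rates, even a near-million-token pass with a modest answer lands in the single-digit dollars per request, so the usage-tier gate is likely the more immediate hurdle for most teams.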
Igor Babuschkin exits xAI, launches Babuschkin Ventures
AI researcher and xAI cofounder Igor Babuschkin announced he’s leaving the company to start Babuschkin Ventures, an investment vehicle backing AI startups that advance humanity and probe fundamental mysteries. He says two lessons from Musk will guide the fund’s ethos: attack problems head-on and move with urgency.
At xAI he helped build foundational infrastructure, led engineering, and shipped frontier models at speed.
He highlights the “Memphis” supercluster built in ~120 days, recounting an all-hands late-night debug that traced RDMA failures to a BIOS setting.
The new fund targets AI safety, agentic systems, and long-horizon scientific discovery.
Babuschkin frames the mission as channeling superintelligence toward human flourishing, drawing on his physics background and prior DeepMind/OpenAI work.
He’s taking a builder’s urgency into capital, backing teams that ship fast and think about safety first. Expect quick bets on agentic systems, core infra, and safety research that traditional funding often slows down.
Leopold Aschenbrenner’s “Situational Awareness” fund
Ex-OpenAI researcher Leopold Aschenbrenner has launched an AGI-focused hedge fund, Situational Awareness, reportedly managing over $1.5B and posting a 47% H1 2025 return. The firm pitches itself as an AI “brain trust,” backed early by prominent tech founders.
Strategy concentrates on public and private AI supply-chain bets—from chips and data centers to power and top model labs.
Early LP base includes well-known tech founders and investors.
Reported performance outpaced the S&P 500 and tech-hedge benchmarks in H1 2025.
The fund’s narrative builds on his widely read “Situational Awareness” thesis about trillion-dollar AI clusters.
An operator-led fund with a hard AGI thesis concentrates capital where chips, power, and top labs intersect. With deep-pocketed LPs, its allocations will shape who actually gets to scale.
GPT-5 tops pre-licensed human experts on medical reasoning
A fresh study evaluates GPT-5 as a single generalist system on standardized medical QA and VQA, using a unified, zero-shot chain-of-thought protocol. The authors report GPT-5 beating pre-licensed human experts on MedXpertQA across reasoning and understanding, with sizable margins.
On MedXpertQA multimodal, GPT-5 exceeds human experts by roughly +24% (reasoning) and +29% (understanding).
On the text-only set, GPT-5 leads humans by ~+15% (reasoning) and ~+9% (understanding), surpassing prior GPT-4o results.
The setup standardizes prompts, splits, and scoring so that gains reflect the model rather than prompt engineering (sketched below).
Case analyses show stepwise clinical reasoning that integrates text and imaging before issuing a single-letter answer.
Beating clinicians on standardized tasks moves AI from “assistant” to credible decision support. The next test is prospective, real-world validation with clear guardrails and accountability.
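For readers curious about the mechanics, here is a hypothetical Python sketch of a unified zero-shot chain-of-thought protocol of this kind: one fixed prompt, no few-shot examples, and a final answer letter extracted for exact-match scoring. The prompt wording, the ask_model callable, and the item fields are illustrative stand-ins, not the study’s actual harness.

```python
import re

# Hypothetical zero-shot chain-of-thought evaluation loop for multiple-choice
# medical QA. `ask_model` stands in for whichever chat API is being evaluated.

PROMPT = (
    "You are a careful clinician. Think through the question step by step, "
    "then finish with a line of the form 'Answer: X', where X is one option letter.\n\n"
    "Question: {question}\nOptions:\n{options}"
)

def extract_letter(completion: str) -> str | None:
    """Pull the final 'Answer: X' letter out of the model's reasoning text."""
    matches = re.findall(r"Answer:\s*([A-J])", completion)
    return matches[-1] if matches else None

def evaluate(items: list[dict], ask_model) -> float:
    """Exact-match accuracy over items shaped like {'question', 'options', 'answer'}."""
    correct = 0
    for item in items:
        options = "\n".join(f"{letter}. {text}" for letter, text in item["options"].items())
        completion = ask_model(PROMPT.format(question=item["question"], options=options))
        correct += extract_letter(completion) == item["answer"]
    return correct / len(items)
```

Holding the template and scoring fixed like this is what lets gains be attributed to the model rather than to prompt tuning.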
Microsoft goes hunting for Meta’s AI top talent
Microsoft is reportedly courting top Meta researchers with multimillion-dollar packages, escalating the talent war across frontier labs. Internal docs describe a “most-wanted” list and a fast-track offer process spanning divisions led by Mustafa Suleyman and Jay Parikh.
Microsoft has built a ranked list of Meta engineers and runs individualized, speed-to-offer compensation playbooks to land them.
Packages can include seven-figure stock grants and large bonuses, calibrated to beat competing bids.
Meta has itself dangled eye-popping deals this year, intensifying churn and counteroffers.
The result is an arms race for core GenAI, infra, and agent teams with outsized control over compute and roadmap.
The frontier race is becoming a recruiting contest as much as a compute contest. If Microsoft captures enough of Meta’s core talent, roadmap velocity and platform lock-in could tilt decisively.
That’s it for today.
Consider forwarding Lore Brief to a colleague to help them get ahead in the AI Age.
(Disclosure: I may own equity in companies mentioned in Lore Brief.)