Oracle’s $300B AI Cloud Bet Sends Stock Soaring

Plus: Thinking Machines takes on randomness, Claude gets memory and file editing, AirPods add live translation, Perplexity hits $20B, and ByteDance launches Seedream 4.0

Welcome to Lore Brief, your weekly edge in the age of AI.

This issue is brought to you by Factory, an engineer in every tab.

Oracle's Stock Soars on Aggressive AI Push

Oracle's stock had its best day in decades, a massive surge driven by the company's aggressive new plans for its AI cloud business. The jump was so large that it briefly made co-founder Larry Ellison the world's richest person.

  • Shares spiked roughly 36% in a single day before settling slightly, the stock's best single-day gain in decades.

  • A key driver was a reported $300 billion deal for OpenAI to purchase Oracle's cloud computing power over the next five years.

  • The market excitement is tied to Oracle's massive backlog of future contracts, now valued at over $455 billion, showing immense demand for its AI infrastructure.

These ambitious plans show Oracle is betting its future on becoming a central provider of the computing power needed for the AI revolution. The company's success now hinges on its ability to deliver on these enormous promises and turn these massive deals into real-world infrastructure.

“Thinking Machines” Takes Aim at AI Randomness

Thinking Machines Lab (Mira Murati’s $12B startup) launched its research blog “Connectionism” with a first post arguing LLM unpredictability is an engineering bug, not a law of nature. Their thesis: most “nondeterminism” comes from how GPU kernels are stitched/scheduled during inference—and with batch-invariant kernels and tighter control, responses can be made reproducible.

  • The post challenges the common “it’s just floating point + concurrency” view, claiming the real culprit is lack of batch invariance in inference pipelines.

  • Thinking Machines published companion code showing batch-invariant kernels and a deterministic vLLM setup.

  • TechCrunch’s read: the lab wants models to answer the same way every time—without giving up performance.

  • Related: OpenAI researchers argue hallucinations persist because training/eval reward confident guessing over “I don’t know,” and propose metrics that penalize confident errors more than uncertainty.

If Thinking Machines can deliver deterministic inference at scale, it would reshape evals, safety, and enterprise reliability. The debate over randomness vs. engineering debt just moved from vibes to implementable kernels.
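
To make the batch-invariance point concrete, here's a minimal, hypothetical Python sketch (not Thinking Machines' actual kernels or their vLLM code): floating-point addition isn't associative, so the same numbers reduced in a different order, which is exactly what a different batch size or kernel schedule can cause, yield slightly different results, and a sampling or argmax step downstream can turn that drift into a different token.

```python
# Toy illustration only (assumed example, not Thinking Machines' batch-invariant kernels):
# float32 addition is not associative, so the reduction order a kernel picks,
# which can depend on batch size, changes the result in the low-order bits.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float32)

one_pass = np.sum(x)                              # one "kernel schedule": single reduction
chunked = sum(np.sum(c) for c in np.split(x, 8))  # another: partial sums per chunk, then combine

print(one_pass, chunked, one_pass == chunked)
# The two sums typically differ slightly; downstream of sampling over logits,
# that tiny drift is enough to flip a token and change the whole response.
```

A batch-invariant kernel, in this framing, is one that fixes the reduction order regardless of how requests are batched, so the same prompt produces bit-identical logits every time.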

Claude Can Now Create and Edit Files

Anthropic added the ability for Claude to generate and edit spreadsheets, documents, slide decks, and PDFs directly in chat and the desktop app. It’s positioned as a workflow booster for analysis, reporting, and content assembly.

  • Works across Excel/CSV, PowerPoint, PDFs, and more; can add charts, formulas, and narrative.

  • Hooks into Anthropic’s tool APIs and Claude Code for code-aware edits and repo work.

  • Security caveat: the feature runs in a sandbox with (limited) internet; Anthropic and independent outlets urge monitoring to avoid prompt-injection/data-exfiltration risks.

This is a notable step toward AI-native productivity, collapsing “prompt → artifact” into one flow. Enterprises should pilot with guardrails (allow-lists, red-teaming, and data-handling policies).

Another One About Claude: It Also Has Memory Now

Anthropic has introduced a major new feature for its AI assistant: Memory. This allows Claude to remember key information across conversations, making it a more personalized and efficient tool that learns from your interactions.

  • Users no longer need to repeat context or instructions; Claude can remember project details, style preferences, or complex information from previous chats.

  • The feature is opt-in, giving users full control. You can view what Claude remembers and instruct it to forget specific details or turn the feature off completely.

  • Anthropic has stated that memories are kept private to the user's interactions and are not used to train its foundational AI models.

This update aims to move Claude from a chatbot to a persistent assistant that evolves with the user. It represents a significant step toward creating AI collaborators that understand context and history, much like a human teammate.

AirPods Pro 3 Add Live, On-Device Translation

Apple unveiled AirPods Pro 3 with “Live Translation,” enabling real-time, two-way conversations powered by Apple Intelligence. The feature launches in beta with select languages and integrates directly with iPhone for hands-free translating.

  • Works face-to-face; iPhone can display the other person’s transcript while AirPods read your translation back.

  • Initial languages include English, French, German, Portuguese, and Spanish, with Italian, Japanese, Korean, and Simplified Chinese “coming by year-end.”

  • Availability varies by region and device; notably, not available at launch in the EU due to DMA interoperability constraints.

It’s a big practical step toward ambient, assistive AI—especially for travel and accessibility. Watch for rollout timing, added languages, and regional approvals.

Perplexity Raises $200M at a $20B Valuation

Perplexity AI, the conversational AI search engine, has raised $200 million in a new round that values the company at $20 billion. The round comes just two months after Perplexity raised $100 million at an $18 billion valuation, a clear sign of intense investor appetite.

  • The new funding brings Perplexity's total funding to nearly $1.5 billion.

  • The company's annual recurring revenue is reportedly approaching $200 million, up from $150 million just last month.

  • Perplexity is quickly emerging as a major challenger to Google in the search market, with its conversational AI approach and its focus on providing direct, accurate answers to users' questions.

It's a major milestone, and further evidence that Perplexity is on a path to becoming a major player in the tech industry.

ByteDance’s Seedream 4.0 Targets Google’s “Nano Banana”

ByteDance launched Seedream 4.0, a unified image generator + editor meant to rival Google DeepMind’s Gemini 2.5 Flash Image (“Nano Banana”). Early reports say it combines Seedream 3.0’s text-to-image with SeedEdit 3.0’s precise editing in one model.

  • ByteDance claims internal wins (MagicBench) on prompt adherence, alignment, and aesthetics—no public tech report yet.

  • Coverage notes it’s priced competitively and emphasizes strong editing fidelity.

  • Available via partner platforms and APIs, with an emphasis on edit-in-place workflows.

If performance holds up outside internal metrics, Seedream 4.0 could pressure Google’s lead in image editing quality. Real validation will come from public benchmarks and creator adoption at scale.

That’s it for today.

Consider forwarding Lore Brief to a colleague to help them get ahead in the AI Age.

-Nathan Lands
Connect: X | LinkedIn
Listen to The Next Wave: Apple | Spotify | YouTube

(Disclosure: I may own equity in companies mentioned in Lore Brief.)