Zuckerberg Goes All-In on Personal Superintelligence

Plus: OpenAI Study Mode, Agent-Driven Development with Factory, Supermemory’s unified knowledge engine, Ideogram’s character consistency, and Runway’s Aleph video magic

Welcome to Lore Brief, your weekly edge in the age of AI.

This issue is brought to you by Factory, an engineer in every tab.

Zuck's Personal Superintelligence Vision

Mark Zuckerberg released a memo outlining Meta's ambitious plan for personal superintelligence, aiming to help individuals achieve goals and enhance connections. The company is investing billions, recruiting top talent, and planning to share some tech openly to benefit everyone.

  • The memo describes a self-improving AI that learns on its own, with Meta expanding its compute infrastructure and recruiting talent from competitors like OpenAI.

  • Zuckerberg envisions empowering individuals, not just corporations, with plans to release some tools as open-source.

  • The initiative has sparked widespread interest, and Meta is betting that the next few years will shape AI's role in society. Check the full plan on Meta's site and this CNBC analysis. If it delivers, personal superintelligence could meaningfully simplify daily tasks, and Meta is clearly committed to that smarter future.

Factory is the Sequoia-backed platform that makes Agent Driven Development real for teams shipping production software.

  • Full inner loop, automated
    Factory’s “Droids” run Explore → Plan → Code → Verify, delivering a review-ready PR with tests and lint checks already green (a conceptual sketch of this loop follows this list).

  • Stay inside the lines
    Drop a precise spec (or have Factory refine it) and the agent keeps edits scoped, leaving a clear audit trail you can trust.

  • Used by teams that ship fast
    Zapier, Bayer, MongoDB, Framer, and more cut bug-fix time and ship features in hours, not days, with Factory.
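
For readers who want a feel for what an Explore → Plan → Code → Verify loop looks like in code, here is a minimal, purely conceptual Python sketch. It is not Factory’s implementation; every function, field, and name below (including run_droid) is a hypothetical stand-in for an LLM-backed step, and verification is simulated.

```python
# Conceptual sketch of an agent-driven development loop. This is NOT Factory's
# implementation; every function below is a hypothetical stand-in for an
# LLM-backed step, and "verification" is simulated.

from dataclasses import dataclass, field


@dataclass
class TaskState:
    spec: str                                       # the scoped spec to stay within
    notes: list[str] = field(default_factory=list)  # findings from the Explore step
    plan: list[str] = field(default_factory=list)   # ordered steps from the Plan step
    diff: str = ""                                  # proposed changes from the Code step
    verified: bool = False                          # True once tests and lint pass


def explore(state: TaskState) -> TaskState:
    # Hypothetical: gather codebase context relevant to the spec.
    state.notes.append(f"Context gathered for: {state.spec}")
    return state


def plan(state: TaskState) -> TaskState:
    # Hypothetical: turn the spec and notes into scoped, auditable steps.
    state.plan = ["edit pagination helper", "add regression test"]
    return state


def code(state: TaskState) -> TaskState:
    # Hypothetical: produce an edit limited to the planned scope.
    state.diff = "diff --git a/pagination.py b/pagination.py ..."
    return state


def verify(state: TaskState) -> TaskState:
    # Hypothetical: run tests and lint; only a green result ends the loop.
    state.verified = True
    return state


def run_droid(spec: str, max_rounds: int = 3) -> TaskState:
    """Explore -> Plan -> Code -> Verify, retrying until checks come back green."""
    state = TaskState(spec=spec)
    for _ in range(max_rounds):
        for stage in (explore, plan, code, verify):
            state = stage(state)
        if state.verified:
            return state
    return state


print(run_droid("Fix the off-by-one error in pagination").verified)  # -> True
```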

Learn the Playbook

Factory’s in-depth guide, “Agent Driven Development: How to Build Software with Agents,” walks through the full workflow.

Read the full guide here: https://www.factory.ai/build-with-agents

OpenAI's Study Mode in ChatGPT

OpenAI introduced study mode in ChatGPT, offering students step-by-step guidance instead of direct answers to foster deeper learning. Developed with teacher feedback, it aims to reduce cheating and enhance educational use.

  • Users can select it from ChatGPT's tools to navigate homework or exam prep without receiving full solutions.

  • Educators see it as a promising step to address AI concerns in classrooms, though it requires further refinement for broader subjects.

  • Initial feedback highlights its ability to encourage critical thinking through interactive questions.

Learn more on OpenAI's blog, this Guardian article, and Greg Brockman's X announcement. This mode transforms AI into a thoughtful tutor. It could reshape how students learn.
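
Study mode lives inside ChatGPT’s interface, but the underlying idea of guiding rather than answering can be approximated over the API with a system prompt. Below is a minimal sketch using OpenAI’s Python SDK; the tutor instructions and model name are illustrative assumptions, not OpenAI’s actual study-mode prompt.

```python
# Rough approximation of the "guide, don't just answer" idea behind study mode,
# using OpenAI's Python SDK. This is NOT OpenAI's actual study-mode prompt; the
# instructions and model name below are illustrative assumptions.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_INSTRUCTIONS = (
    "You are a patient tutor. Never give the final answer outright. "
    "Break the problem into small steps, ask one guiding question at a time, "
    "and wait for the student's attempt before continuing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap in whichever you have access to
    messages=[
        {"role": "system", "content": TUTOR_INSTRUCTIONS},
        {"role": "user", "content": "Solve 3x + 7 = 22 for x."},
    ],
)

# Expect a guiding question (e.g. "What happens if we subtract 7 from both
# sides?") rather than "x = 5".
print(response.choices[0].message.content)
```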

Supermemory Launches Unified Memory Engine for AI Tools

Supermemory is an impressive new tool that automatically captures and organizes information from your apps and documents into a searchable knowledge base. The platform integrates with AI tools like Claude, ChatGPT, and Cursor through MCP (Model Context Protocol) while connecting to Google Drive, Notion, and OneDrive for comprehensive knowledge management.

  • Automatic real-time synchronization captures AI conversations and document updates without manual input, building a living knowledge graph that discovers connections across your information.

  • Intelligent memory management mimics human cognition by forgetting unused information and making implicit connections, enabling complex queries beyond basic keyword search (a toy sketch of the decay idea follows below).

  • Project-based organization lets users segment memories by context, keeping work and personal knowledge separate while maintaining cross-reference capabilities.

Join the waitlist at Supermemory's website or check the announcement from founder Dhravya Shah.
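
The “forgetting unused information” idea can be pictured as recency-weighted decay. Here is a toy Python sketch of that concept; it is not Supermemory’s algorithm, and the scoring rule, half-life, and names below are invented for illustration.

```python
# Toy illustration of recency-weighted "forgetting". This is not Supermemory's
# algorithm; the scoring rule and half-life are invented for illustration.

import math
import time
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    last_accessed: float = field(default_factory=time.time)


def relevance(memory: Memory, half_life_days: float = 30.0) -> float:
    """Exponentially decay a memory's score based on time since last access."""
    age_days = (time.time() - memory.last_accessed) / 86_400
    return math.exp(-math.log(2) * age_days / half_life_days)


def recall(memories: list[Memory], keyword: str, threshold: float = 0.1) -> list[Memory]:
    """Return keyword matches whose decayed score is still above the threshold."""
    hits = []
    for m in memories:
        if keyword.lower() in m.text.lower() and relevance(m) >= threshold:
            m.last_accessed = time.time()  # accessing a memory refreshes it
            hits.append(m)
    return hits


store = [
    Memory("Q3 roadmap discussion from Notion"),
    Memory("Old onboarding doc", last_accessed=time.time() - 200 * 86_400),
]
print([m.text for m in recall(store, "roadmap")])  # stale, unmatched memories stay dormant
```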

Tesla Robotaxi Hits Bay Area

Tesla launched its Robotaxi service in the Bay Area, covering a vast region far larger than Waymo's current zones, though it's invite-only for now with a human driver onboard. The rollout tests real-world driving across highways and urban streets.

  • The service spans a wide area, allowing app-based ride hailing with plans for broader access later.

  • Unlike the Austin launch, this expansion tests more complex routes to prepare for full autonomy.

  • Early feedback describes smooth rides, but fully driverless operation awaits regulatory approval. Check this Electrek report and this map comparing Robotaxi’s service area with Waymo’s.

Ideogram’s New Character Consistency Tool

Ideogram introduced a feature that keeps a character consistent across images generated from a single photo, with prompts controlling style or pose. It eliminates the need for additional training, streamlining creative workflows.

  • A clear, front-facing photo yields the best results; avoid blurry or group shots.

  • The tool is free to use, and tips like varying expressions or lighting in prompts add variety.

  • It suits projects like comics or ads where the same face needs to appear repeatedly. Check Marco's breakdown on X here.

Runway's Aleph Video Transformation Model

Runway unveiled Aleph, a new video model that transforms clips with simple prompts, editing scenes or objects without complexity. It enables quick adjustments like changing day to night or aging characters directly on existing footage.

  • Users input a video, and Aleph handles swaps or extractions, such as reflections, with ease.

  • The model requires no elaborate setup, making it accessible for professionals creating ads or films.

  • Access is rolling out gradually, with a free trial available for early adopters. Check Runway's blog post and their X thread.

That’s it for today.

Consider forwarding Lore Brief to a colleague to help them get ahead in the AI Age.

-Nathan Lands
Connect: X | LinkedIn
Listen to The Next Wave: Apple | Spotify | YouTube

(Disclosure: I may own equity in companies mentioned in Lore Brief.)