Tuesday, April 28, 2026

AlphaGo Creator Bets Big On Superlearning

Today’s Overview

Good morning. David Silver just launched a huge new lab aimed at building AI that learns from experience, OpenAI and Microsoft rewrote the terms of their partnership, and DeepSeek is slashing prices hard, keeping the pressure on US AI players. Let's dive in.

Top Stories

AlphaGo Creator Launches $1.1B Superlearner Lab

David Silver has launched Ineffable Intelligence, a London lab that raised $1.1B at a $5.1B valuation. The company is building AI that learns from experience instead of training data, with Silver framing it as a path toward superintelligence. Silver says the models skip pre-training and human data, relying instead on experience in simulations.

  • The lab raised $1.1B at a $5.1B valuation.
  • Its core idea is AI that learns from experience rather than training data.
  • Silver says the system skips pre-training and human data and learns in simulations.

OpenAI Reshapes Its Microsoft Deal

OpenAI and Microsoft reworked their partnership terms, ending Microsoft's exclusivity over OpenAI's IP and removing the AGI clause. OpenAI can now ship on any cloud, while Microsoft keeps a revenue share through 2030 and Azure-first launch access through 2032. The agreement also settles tensions around OpenAI's Amazon deal.

  • Microsoft no longer has exclusivity over OpenAI's IP.
  • OpenAI can now ship products on any cloud.
  • Microsoft still keeps revenue share through 2030 and Azure-first launch access through 2032.

DeepSeek Slashes V4-Pro Pricing

DeepSeek is cutting V4-Pro pricing by 75% and cutting the price of cache-hit input tokens by 90%. The move adds more pressure on US AI giants in a tense geopolitical market. It is another aggressive pricing swing from a company that has been willing to move fast on cost.

  • Pricing for V4-Pro is down 75%.
  • The price of cache-hit input tokens is also down 90%.
  • The cut puts more pressure on US AI giants in a tense geopolitical market.

Research & Analysis

Amazon Probes Agentic Risks

Amazon researchers introduced ESRRSim, an agentic evaluation framework with a structured taxonomy for risks like deception and reward hacking. The benchmark shows wide variation in behavior across 11 LLMs, which makes the setup useful for comparing model risk profiles. It is a more explicit way to test how systems behave when asked to act, not just answer.

  • The framework is built around a structured taxonomy for risks like deception and reward hacking.
  • It evaluates model behavior across 11 LLMs.
  • The results show wide variation in behavior across the models.

Image Generators As Vision Systems

This paper argues that instruction-tuned image generation models can work as generalist vision systems. It claims state-of-the-art results by reframing perception as image generation. The core takeaway is that a generation-first approach may be enough to cover more vision tasks than expected.

  • The paper argues that instruction-tuned image generators can act as generalist vision systems.
  • It claims state-of-the-art results across tasks.
  • The approach reframes perception as image generation.

Efficient Video Intelligence Keeps Advancing

Recent work in efficient video intelligence includes compact universal encoders like EUPE, which distill capabilities from specialized models such as DINO and SAM. Systems like LongVU use adaptive token allocation and compression for long-form understanding, while edge and on-device deployments target real-time processing. Open problems remain around streaming understanding, sparse-event detection, and robust multimodal reasoning.

  • Compact encoders like EUPE distill capabilities from specialized models.
  • Long-form understanding gets help from adaptive token allocation and compression.
  • Open problems still include streaming understanding, sparse-event detection, and robust multimodal reasoning.

TurboQuant Shrinks Vector Tables

TurboQuant compresses coordinates in large tables of high-dimensional vectors to 2 to 4 bits with near-optimal distortion. The method needs no training or calibration, adds no memory overhead for scale factors, and is reported to be four to six orders of magnitude faster than alternatives at 4-bit indexing. It also claims higher recall.

  • It compresses coordinates to 2 to 4 bits.
  • The method uses no training or calibration.
  • At 4-bit indexing, it is reported to be four to six orders of magnitude faster than alternatives.
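To make the idea above concrete: here is a minimal, generic sketch of calibration-free 4-bit scalar quantization in Python. This is not TurboQuant's actual algorithm (the paper specifies its own method); it only illustrates how a vector can be packed into 4-bit codes using its own max-abs value as the scale, with no training data or calibration pass.

```python
import numpy as np

def quantize_4bit(x):
    """Calibration-free 4-bit scalar quantization of one vector.

    Each coordinate is mapped to one of 16 signed levels (-8..7),
    scaled by the vector's own max absolute value, so no separate
    training or calibration step is required.
    """
    scale = np.abs(x).max() / 7.0
    if scale == 0.0:
        return np.zeros_like(x, dtype=np.int8), 1.0
    codes = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct an approximate float vector from 4-bit codes."""
    return codes.astype(np.float32) * scale

rng = np.random.default_rng(0)
v = rng.standard_normal(128).astype(np.float32)
codes, scale = quantize_4bit(v)
v_hat = dequantize(codes, scale)
rel_err = np.linalg.norm(v - v_hat) / np.linalg.norm(v)
```

Even this naive scheme keeps the reconstruction error modest; the appeal of methods like TurboQuant is doing much better at the same bit budget, and fast enough for billion-scale indexes.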

Trending AI Tools

  • Blueprint: A coding-focused tool for one-shotting larger coding tasks.

  • Monet: Lets users edit videos and design images with Claude Code and Codex.

  • Create-Agent-TUI: OpenRouter's terminal harness for agent development workflows.

  • Devin for Terminal: A CLI agent that keeps working after you close your laptop.

  • Grok Imagine: Adds lip sync and audio matching to video generation.

Quick Hits

  • Google's Gemini could add a credits system with a monthly allowance and top-ups for heavier usage.

  • Pure Prophet is framed as a way to make guaranteed money from prediction markets.

  • VIDEO AI ME lets users create videos with AI actors that sound and look real.

  • SNEWPapers is billed as the world's first AI newspaper archive.

  • Atech is snap-together electronics built from a chat.

  • NVIDIA B200 spot prices jumped 114% in six weeks to $4.95 per hour.
