Friday, May 1, 2026

Gemini Enters the Driver's Seat

Today’s Overview

Good morning. Gemini is moving into cars, and the rest of the issue is packed with model launches, agent tools, and a few big infrastructure bets. You also get fresh research on cancer detection, interpretability, and faster RL training. Let's dive in.

Top Stories

Gemini Moves Into Google-Powered Cars

Google is rolling out Gemini for cars with Google built-in, replacing Assistant with a more conversational system for navigation, messages, music, and vehicle controls. The first rollout is coming to compatible cars in the U.S., and General Motors says the feature will reach about 4 million of its vehicles from model year 2022 onward. Gemini Live is also coming in beta, with Gmail, Calendar, and Home integrations planned later.

  • Google is swapping Assistant for a more conversational system that handles navigation and messages across compatible cars.
  • Drivers will also be able to ask for vehicle controls like temperature and radio changes.
  • General Motors says the feature will reach about 4 million vehicles from model year 2022 onward.

White House Rethinks Its Anthropic Fight

The White House is pushing back on Anthropic’s plan to expand access to its Mythos model, even as a national security memo is expected to address parts of the Pentagon dispute. Officials are weighing broader multi-vendor AI use for agencies while still raising compute and security concerns. The result is a messy split between wanting more access and keeping a hard line on the feud.

  • Anthropic wanted access expanded from about 50 firms to nearly 120.
  • The White House memo is expected to push multi-vendor AI adoption for agencies.
  • Officials are still citing compute and security concerns as the fight continues.

xAI Ships Grok 4.3

xAI says Grok 4.3 improves cost efficiency per unit of intelligence compared with Grok 4.20 0309 v2. It scores higher on the Intelligence Index while costing less to run the full benchmark suite. The model is also positioned as one of the lowest-cost systems at its intelligence level.

  • Grok 4.3 improves on cost per intelligence versus Grok 4.20 0309 v2.
  • It scores higher on the Intelligence Index while costing less to run the benchmark suite.
  • xAI says it is one of the lowest-cost models at its intelligence level.

Research & Analysis

Mayo Clinic AI Spots Cancer Years Early

Mayo Clinic published new data on REDMOD, an AI that reads subtle tissue patterns on standard CT scans and can catch pancreatic cancer up to three years before diagnosis. The model reviewed nearly 2,000 routine scans that had originally been read as normal but were later tied to cancer cases. At the two-year mark before diagnosis, it spotted roughly three times as many early cancers as experienced radiologists did.

  • REDMOD reads subtle tissue patterns on standard CT scans.
  • The study reviewed nearly 2,000 routine scans that were later linked to diagnoses.
  • At two years before diagnosis, it found about 3x more early cancers than experienced radiologists.

Qwen Opens Up Model Interpretability

Qwen-Scope is an interpretability toolkit trained on the Qwen3 and Qwen3.5 series models. It is designed to shed light on internal model behavior and support optimization. The toolkit can also be used for controllable inference, data classification and synthesis, model training, and evaluation analysis.

  • Qwen-Scope is trained on the Qwen3 and Qwen3.5 series models.
  • It is meant to explain internal model behavior and support optimization.
  • The toolkit also supports controllable inference and evaluation analysis.

GLM-5V-Turbo Blends Vision and Reasoning

GLM-5V-Turbo integrates multimodal perception directly into reasoning and tool use. The model is described as improving performance on coding, visual tasks, and agent workflows across heterogeneous inputs.

  • The model integrates multimodal perception into reasoning and tool use.
  • It is described as improving coding performance across heterogeneous inputs.
  • It also aims to improve visual tasks and agent workflows.

Speculative Decoding Speeds Up RL

A new paper applies speculative decoding to RL rollouts without changing output distributions. The approach delivers up to 1.8x throughput gains and projected 2.5x end-to-end speedups at scale.

  • The method applies speculative decoding to RL rollouts without changing output distributions.
  • It delivers up to 1.8x throughput gains.
  • At scale, the paper projects 2.5x end-to-end speedups.
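The paper's key property is that speculative decoding leaves the output distribution unchanged: a cheap draft model proposes tokens, and the target model accepts or resamples them so the result still follows the target distribution exactly. As a rough illustration of why this holds (the distributions, names, and vocabulary below are invented for the sketch, not taken from the paper), here is the standard speculative-sampling accept/reject rule on toy single-token distributions:

```python
import random

random.seed(0)

# Toy per-token probabilities for a cheap "draft" model and the full
# "target" model (illustrative values only, not from the paper).
VOCAB = [0, 1, 2, 3]
draft_p  = {0: 0.40, 1: 0.30, 2: 0.20, 3: 0.10}
target_p = {0: 0.25, 1: 0.25, 2: 0.30, 3: 0.20}

def sample(dist):
    """Draw one token id according to the given probability weights."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def speculative_step():
    """Propose a token with the draft model, verify it with the target.

    A draft token t is accepted with probability
    min(1, target_p[t] / draft_p[t]); on rejection we resample from the
    residual max(0, target_p - draft_p), renormalized. This is the
    standard speculative-sampling rule, and the combined procedure
    produces samples distributed exactly as target_p.
    """
    t = sample(draft_p)
    if random.random() < min(1.0, target_p[t] / draft_p[t]):
        return t  # accepted draft token
    residual = {v: max(0.0, target_p[v] - draft_p[v]) for v in VOCAB}
    return sample(residual)  # corrected resample

# Empirically, the accepted/resampled tokens match the target model's
# distribution, not the draft model's.
N = 100_000
counts = {v: 0 for v in VOCAB}
for _ in range(N):
    counts[speculative_step()] += 1
freqs = {v: counts[v] / N for v in VOCAB}
```

The speedup in the paper comes from the draft model being much cheaper to run than the target, so rollout throughput rises whenever the acceptance rate is high, while the unchanged output distribution keeps the RL training signal unbiased.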
