Thursday, April 23, 2026

ChatGPT’s Codex-Powered Agents For Teams

Today’s Overview

Good morning. A few big moves are reshaping how AI gets work done: Google is bundling Gemini deeper into Workspace, OpenAI is pushing ChatGPT into team workflows, and both are racing to give agents more room to operate. There’s also fresh momentum in the infrastructure and tooling underneath it all. Let’s dive in.

Top Stories

Google adds Workspace Intelligence to Gemini

Google launched Workspace Intelligence as a semantic layer for Google Workspace, connecting emails, chats, files, and projects for Gemini-powered agents. It also adds natural-language spreadsheet building in Sheets, plus AI features across Docs, Slides, Gmail, and Drive. The pitch is a more centralized control layer for business work across the suite.

  • It ties together emails, chats, files, and projects so agents can work with more context.
  • Sheets now supports natural-language spreadsheet building for faster setup and edits.
  • The rollout also brings AI features across Docs, Slides, Gmail, and Drive.

ChatGPT gets Codex-powered workspace agents

OpenAI introduced Workspace Agents, shared Codex-powered bots built for multi-step team workflows across ChatGPT and Slack. The company positions them as the next step beyond solo GPTs, with memory, connected apps, scheduling, and Slack-native use when people are offline. OpenAI also says existing GPTs will keep working for now, with a conversion tool coming soon.

  • The agents are designed for multi-step team workflows across ChatGPT and Slack.
  • They can retain memory and connected apps to handle longer-running tasks.
  • OpenAI says old GPTs still work for now, and a conversion tool is coming soon.

Anthropic and Amazon expand compute deal

Anthropic and Amazon expanded their collaboration to secure up to 5 gigawatts of compute capacity. The deal is meant to support Claude training and deployment at much larger scale.

  • The collaboration now targets up to 5 gigawatts of compute.
  • That capacity is aimed at Claude training at larger scale.
  • It also supports Claude deployment as the system grows.

Research & Analysis

Perplexity details search model training

Perplexity outlines a two-stage pipeline for search-augmented language models: supervised fine-tuning first, then reinforcement learning. The approach is meant to improve factual accuracy, user preference, and tool-use efficiency while keeping guardrails intact. The models reportedly showed gains on the FRAMES and FACTS OPEN benchmarks, with lower cost per query and more efficient tool use than existing models like GPT-5.4.

  • The pipeline starts with supervised fine-tuning before moving to reinforcement learning.
  • It is designed to improve factual accuracy and tool use while preserving guardrails.
  • Perplexity says the models improved on FRAMES and FACTS OPEN benchmarks.
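The two-stage recipe above can be sketched in miniature. This is a toy illustration of the general SFT-then-RL pattern, not Perplexity's actual code: every function, learning rate, and data point here is hypothetical. A scalar "model" is first fit to labeled targets, then nudged further by a scalar reward signal.

```python
# Toy two-stage pipeline: supervised fine-tuning (SFT), then
# reinforcement learning (RL). Illustrative only; all names and
# numbers are assumptions, not Perplexity's implementation.

def sft_step(weight, example, target, lr=0.1):
    """Supervised update: move the prediction toward the labeled target."""
    prediction = weight * example
    error = target - prediction
    return weight + lr * error * example

def rl_step(weight, example, reward_fn, lr=0.05):
    """RL update: reinforce behavior in proportion to a scalar reward."""
    prediction = weight * example
    reward = reward_fn(prediction)
    return weight + lr * reward * example

def train(data, reward_fn):
    weight = 0.0
    # Stage 1: supervised fine-tuning on labeled (input, target) pairs.
    for x, y in data:
        weight = sft_step(weight, x, y)
    # Stage 2: reinforcement learning against a reward signal
    # (standing in for preference, factuality, or tool-use rewards).
    for x, _ in data:
        weight = rl_step(weight, x, reward_fn)
    return weight

weight = train([(1.0, 2.0), (2.0, 4.0)],
               reward_fn=lambda p: 1.0 if p > 0 else -1.0)
```

The point of the structure is the ordering: the supervised stage gets the model into a sensible region before the reward signal, which is noisier, takes over.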

Anthropic maps the path to production agents

Anthropic lays out how agents can connect to external systems through direct API calls, CLIs, and MCP. The guidance compares where each fits and argues that MCP is becoming the key integration layer as agents move into production and the cloud. The takeaway is that each MCP integration adds more value to the ecosystem over time.

  • Agents can connect through APIs, CLIs, and MCP.
  • The post argues MCP is becoming the key integration layer for production agents.
  • As more integrations are built, they strengthen the ecosystem.

Applied Compute benchmarks agentic inference

Applied Compute released an open-source benchmarking tool for replaying agentic workloads. The work focuses on multi-turn, tool-using scenarios that stress KV cache management and scheduling, and it points to optimizations like KV cache offloading and workload-aware routing. It also introduces three workload profiles for testing engine and accelerator performance.

  • The benchmark replays agentic workloads to test inference engines.
  • Those scenarios stress KV cache management and scheduling.
  • The paper points to cache offloading and workload routing as key optimizations.
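To see why multi-turn agent traffic stresses the KV cache, consider a minimal replay sketch. This is a hypothetical toy, not Applied Compute's tool: each turn first looks up the conversation prefix (a hit means cached KV state can be reused), then caches the extended prefix, and an LRU eviction stands in for the point where offloading to host memory would kick in.

```python
from collections import OrderedDict

class ToyKVCache:
    """Minimal LRU prefix cache standing in for an inference engine's
    KV cache. Illustrative only; names and behavior are assumptions."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = 0
        self.misses = 0

    def lookup(self, prefix):
        """Return True if KV state for this prefix is cached."""
        if prefix in self.entries:
            self.entries.move_to_end(prefix)  # refresh LRU position
            self.hits += 1
            return True
        self.misses += 1
        return False

    def insert(self, prefix):
        """Cache KV state for a prefix; evict the least recently used
        entry when over capacity (where offloading would kick in)."""
        self.entries[prefix] = True
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)

def replay(workload, cache):
    """Replay multi-turn agent sessions: turn N looks up the prefix
    written by turn N-1, then caches the extended prefix."""
    for session, turns in workload:
        prefix = session
        for turn in range(turns):
            cache.lookup(prefix)      # reuse of the prior turn's KV
            prefix += f"|turn{turn}"  # the turn appends new tokens
            cache.insert(prefix)

cache = ToyKVCache(capacity=8)
replay([("agent-a", 3), ("agent-b", 3)], cache)
```

Only the first turn of each session misses; every later turn hits the entry its predecessor wrote, which is exactly the reuse pattern that makes eviction and scheduling decisions matter under concurrent agent load.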
