Thursday, April 30, 2026

China Blocks Meta's Manus Deal
Today’s Overview

Good morning. Meta’s push into AI agents just hit a wall in China, and OpenAI is making a bigger bet on cybersecurity. On the research side, food AI is getting surprisingly tactile, while the tools roundup brings a fresh batch of workflow updates. Let’s dive in.

Top Stories

China Blocks Meta’s Manus Acquisition

China halted Meta’s planned $2 billion acquisition of agentic AI startup Manus and ordered the deal unwound. The move adds another layer of friction to Meta’s AI agent ambitions and its cross-border expansion plans.

  • China halted the deal after a months-long probe and ordered Meta to unwind the acquisition.
  • The target was Manus, an agentic AI startup that Meta wanted to buy for $2 billion.
  • The decision complicates Meta’s push into AI agents and broader international growth.

OpenAI Unveils Cybersecurity Action Plan

OpenAI released a Cybersecurity Action Plan focused on AI cyber defense and coordination with government and industry. The plan frames cybersecurity as a major area for AI deployment and collaboration.

  • OpenAI says it wants to democratize AI cyber defense with new tools for defenders.
  • The plan also emphasizes coordination with government and industry on threat response.
  • It frames cybersecurity as a major AI deployment area for future collaboration.

Research & Analysis

Food AI Gets Its Flavor Test

KAIKAKU AI’s Epicure paper claims a ‘ChatGPT moment’ for food AI by showing a model can infer flavor, cuisine, and texture from recipe patterns. The team cleaned messy ingredient data, mapped relationships across recipes, and says the approach could support menu development, recipe innovation, and flavor pairing.

  • Researchers cleaned 6,653 ingredient entries into 1,032 usable foods for the model.
  • Without chemistry data or taste labels, it still identified all five basic tastes and even ordered peppers by spiciness.
  • KAIKAKU AI says the work could support menu development and flavor pairing alongside its food robotics systems.
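The core idea described above, inferring flavor relationships purely from how ingredients co-occur across recipes, can be sketched with a standard statistic like pointwise mutual information. This is a minimal illustration of the general technique, not KAIKAKU AI's actual method; the recipes and ingredient names are made up.

```python
# Sketch: infer ingredient association from recipe co-occurrence alone,
# with no chemistry data or taste labels (illustrative, not Epicure's code).
from collections import Counter
from itertools import combinations
import math

recipes = [
    {"chili", "lime", "cilantro"},
    {"chili", "lime", "fish sauce"},
    {"strawberry", "cream", "sugar"},
    {"strawberry", "sugar", "lime"},
]

pair_counts = Counter()
item_counts = Counter()
for r in recipes:
    item_counts.update(r)
    pair_counts.update(combinations(sorted(r), 2))

def pmi(a, b):
    """Pointwise mutual information: high when two ingredients
    appear together more often than chance would predict."""
    pair = pair_counts[tuple(sorted((a, b)))]
    if pair == 0:
        return float("-inf")
    n = len(recipes)
    return math.log((pair / n) / ((item_counts[a] / n) * (item_counts[b] / n)))

# chili and lime co-occur in half the recipes; chili and sugar never do
print(pmi("chili", "lime") > pmi("chili", "sugar"))  # True
```

Scaled up to thousands of cleaned ingredient entries, statistics like this are what let a model recover taste structure without ever being told what "sweet" or "spicy" means.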

AI Evals Are Hitting A Cost Wall

AI evaluation costs have climbed to the point where they can rival or exceed training costs in some runs. The piece argues that this is creating a bottleneck for fair access and external validation in AI research.

  • Some evaluation runs now cost tens of thousands of dollars and can rival training spend.
  • The burden is uneven across models and tasks, which makes the bottleneck worse.
  • The argument points to standardized documentation and data reuse as ways to lower the cost.
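The cost argument above is simple multiplication: benchmarks times samples times tokens times price. A back-of-envelope sketch (with illustrative numbers, not figures from the piece) shows how evaluation runs reach tens of thousands of dollars.

```python
# Back-of-envelope eval-cost model. All numbers below are illustrative
# assumptions, not published figures.
def eval_cost(num_benchmarks, samples_per_benchmark,
              tokens_per_sample, price_per_million_tokens):
    """Total API cost of an evaluation sweep, in dollars."""
    total_tokens = num_benchmarks * samples_per_benchmark * tokens_per_sample
    return total_tokens / 1_000_000 * price_per_million_tokens

# e.g. 50 benchmarks x 10,000 samples x 5,000 tokens at $15 per million tokens
cost = eval_cost(50, 10_000, 5_000, 15.0)
print(f"${cost:,.0f}")  # $37,500
```

Because every term multiplies, halving any one of them (fewer samples via standardized documentation, or reusing cached completions across evals) cuts the bill proportionally, which is why data reuse is pitched as a fix.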

DeepMind Paper Says LLMs Can't Be Conscious

A new paper from DeepMind’s Alexander Lerchner argues that large language models are structurally incapable of consciousness. The claim adds to the broader debate over what current AI systems can and cannot experience.

  • The paper describes LLMs as mapmaker-dependent simulators rather than conscious systems.
  • It argues they lack physical embodiment needed for genuine experience.
  • The idea sits inside a broader consciousness debate around current AI systems.

IBM Explains How Granite 4.1 Is Built

Granite 4.1 uses a dense, decoder-only architecture with 3B, 8B, and 30B models trained on 15 trillion tokens. IBM says the lineup is designed for efficient, reliable enterprise use with strong instruction-following and tool performance.

  • The series spans 3B, 8B, and 30B models in a dense, decoder-only architecture.
  • IBM says the models were trained on 15 trillion tokens using a five-phase pre-training approach.
  • The 8B model is described as matching the previous 32B MoE model through a multi-stage reinforcement learning pipeline.
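For readers wondering what "dense, decoder-only" implies for the 3B/8B/30B labels, the parameter budget of such an architecture is standard transformer arithmetic. The layer sizes below are generic assumptions for an 8B-class model, not IBM's published Granite 4.1 configuration.

```python
# Generic dense decoder-only parameter count (illustrative config,
# not IBM's actual Granite 4.1 hyperparameters).
def decoder_params(vocab_size, d_model, n_layers, ffn_mult=4):
    embed = vocab_size * d_model                   # token embedding table
    attn = 4 * d_model * d_model                   # Q, K, V, output projections
    ffn = 2 * d_model * (ffn_mult * d_model)       # up and down projections
    return embed + n_layers * (attn + ffn)

# Assumed 8B-class shape: 50k vocab, width 4096, 36 layers
total = decoder_params(50_000, 4096, 36)
print(f"{total / 1e9:.1f}B parameters")  # 7.5B parameters
```

In a dense model every one of those parameters is active on every token, which is why matching a sparse 32B MoE with a dense 8B (as IBM claims via its RL pipeline) is a notable efficiency result.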

Trending AI Tools