March 19, 2026

AI builds apps solo, desktop autonomy arrives & more


Today’s Overview

Enterprise AI is shifting from assistive tools toward autonomous agents that can design, code, and operate with minimal human input, while major players prepare to monetize these capabilities at scale. At the same time, the industry is formalizing AGI measurement and advancing self‑improving models, underscoring a rapid maturation of both productivity and research frontiers.

  • Google introduced a voice‑enabled “vibe design” workflow in its Stitch AI canvas, letting teams prototype UI instantly.
  • Meta’s Manus platform launched the “My Computer” AI agent that can autonomously build and run desktop applications without code.
  • OpenAI announced an IPO plan and a partnership with AWS to embed its models in government and enterprise environments.
  • MiniMax released the M2.7 model, which used self‑generated code cycles to improve its own performance by 30% on internal benchmarks.
  • Google DeepMind published a cognitive taxonomy for AGI evaluation and sponsored a $200,000 Kaggle hackathon to benchmark under‑assessed capabilities.
  • Gamma added Imagine, an AI design assistant that auto‑creates branded graphics within its presentation platform.

Top Stories

Google introduces 'vibe design' workflow to its AI-powered UI canvas

Stitch now enables users to begin projects from images, code, or written briefs and explore multiple design directions using an integrated agent manager. A preview voice mode allows hands-free editing through conversational commands, and instant prototyping transforms static screens into interactive flows within seconds. The new DESIGN.md feature helps teams transfer design constraints between Stitch and development tools, supporting a rapid 'vibe design' workflow.
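A DESIGN.md file in this workflow might record constraints in a form both the canvas and downstream tools can read. The schema below is a hypothetical illustration, not a published Stitch format:

```markdown
# DESIGN.md — shared design constraints (hypothetical example)

## Brand
- Primary color: #1A73E8
- Typeface: Google Sans, sans-serif fallback

## Layout
- 8px spacing grid; cards use 16px padding
- Mobile breakpoint: 600px

## Components
- Buttons: filled for primary actions, text for secondary
- Minimum touch target: 40px
```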


MiniMax releases M2.7 model that contributed to its own development

MiniMax reports that early iterations of M2.7 were employed to refine the model itself. The system completed over one hundred autonomous cycles of error analysis, code rewriting, and testing, achieving a thirty percent accuracy improvement on internal benchmarks. Results also show that M2.7’s coding performance approaches that of leading frontier models in agentic engineering tasks.
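The reported self-refinement process can be pictured as a generic analyze-rewrite-test loop. The sketch below is illustrative only; the callables `run_tests` and `rewrite` are hypothetical stand-ins for MiniMax's internal tooling:

```python
def refinement_loop(candidate, run_tests, rewrite, max_cycles=100):
    """Sketch of an autonomous error-analysis / rewrite / test cycle.

    candidate: the current artifact (e.g. source code) under refinement
    run_tests: callable(candidate) -> list of failure messages
               (empty list means every test passed)
    rewrite:   callable(candidate, failures) -> revised candidate,
               standing in for the model's code-rewriting step
    """
    for cycle in range(max_cycles):
        failures = run_tests(candidate)
        if not failures:
            # Converged: all tests pass, report how many cycles it took.
            return candidate, cycle
        # Feed the failure analysis back to the model for a rewrite.
        candidate = rewrite(candidate, failures)
    return candidate, max_cycles
```

With stub callables, the loop terminates as soon as the test suite comes back clean, mirroring the hundred-plus cycles described in the report.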


Research & Analysis

Mixture-of-Depths Attention enables cross-layer token interaction

MoDA introduces an attention mechanism in which each head accesses key-value pairs from both the current transformer layer and preceding layers. By allowing cross-layer attention, the design aims to preserve informative signals that would otherwise diminish as model depth grows. The implementation is provided in a public GitHub repository for reproducibility.
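The core idea, one softmax over key-value pairs gathered from the current and earlier layers, can be sketched in a few lines. This is a simplified single-head illustration under our own assumptions, not the repository's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_layer_attention(q, kv_cache):
    """Attend over key-value pairs from the current AND preceding layers.

    q:        (seq, d) queries from the current layer
    kv_cache: list of (k, v) pairs, one per layer seen so far,
              each k and v shaped (seq, d)
    """
    # Pool keys/values across depth so a single softmax mixes tokens
    # from every layer, preserving signals that deeper layers may lose.
    k = np.concatenate([k for k, _ in kv_cache], axis=0)
    v = np.concatenate([v for _, v in kv_cache], axis=0)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v
```

Because the pooled keys grow with depth, a practical implementation would bound or compress the per-layer cache rather than concatenate everything as shown here.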


DeepMind proposes cognitive taxonomy to gauge AGI progress

DeepMind's new paper defines ten core cognitive abilities, such as perception, learning, and reasoning, to structure assessments of artificial general intelligence. It recommends a three-stage evaluation that benchmarks AI systems against human performance across these abilities. To accelerate development, the authors launched a Kaggle hackathon on the Community Benchmarks platform, focusing on five under-explored capabilities. Participants are invited to create and improve benchmarks that measure progress toward the outlined cognitive goals.


