xAI’s Original Founding Team Is Gone
All remaining co-founders of xAI have reportedly departed, marking a complete exit from the company’s original founding team. It is a notable leadership reset for one of the most closely watched AI labs.

This issue covers major platform moves from Google and OpenAI, and a wave of ambitious infrastructure bets from Hark to Huawei. On the research side, new work is pushing agent learning, search, and neuroscience forward, while tools updates keep making AI more practical inside coding and transcription workflows. Policy and market news continue to reshape who can build, deploy, and govern these systems.
Brett Adcock has come out of stealth with Hark, a new AI lab backed by $100 million of his own money. The company is taking a full-stack approach, building foundation models, software, and purpose-built hardware from scratch. Its team includes talent from Apple, Meta, Google, Tesla, and Amazon, with models slated for this summer and dedicated devices to follow shortly after.
Google’s internal AI tool Agent Smith has become popular enough that access is being restricted. The company is also tying adoption to performance reviews and standardizing AI use across teams. At the same time, NotebookLM, Gemini for Business, and a new Skills framework suggest Google is pushing a broader AI operating layer across products and workflows.
Meta has open-sourced TRIBE v2, a model trained on brain scans from more than 700 people that simulates neural activity across vision, hearing, and language. The synthetic predictions reportedly outperform real fMRI recordings in some settings, and the team says the model can reproduce decades of neuroscience findings in software. Meta also released the code, weights, and a live demo for researchers.
MetaClaw is designed to upgrade AI agents while they are in use. When an agent fails, a separate model extracts a compact rule from the error and injects it into the system prompt, while reinforcement learning with LoRA fine-tuning updates weights during idle windows. The results are based on simulations, but the reported gains are substantial.
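The prompt-injection half of that loop can be sketched in a few lines. This is a toy illustration of the idea as described above, not MetaClaw's actual code: the names `extract_rule` and `Agent` are hypothetical, and in the real system a separate model, not a string template, would distill the rule from the error.

```python
def extract_rule(error_message: str) -> str:
    """Stand-in for the separate rule-extractor model: the real system
    would use an LLM to distill the failure into a compact guideline."""
    return f"Rule: avoid repeating the failure '{error_message}'."

class Agent:
    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt

    def record_failure(self, error_message: str) -> None:
        # Inject the distilled rule so every future turn sees it in context.
        self.system_prompt += "\n" + extract_rule(error_message)

agent = Agent("You are a helpful coding agent.")
agent.record_failure("used a deprecated API endpoint")
print(agent.system_prompt)
```

The weight-update half (LoRA fine-tuning during idle windows) would run alongside this, so the agent improves both in-context and in-parameters between sessions.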
Context-1 is Chroma’s 20B parameter agentic search model, trained on more than 8,000 synthetically generated tasks. It aims to match frontier retrieval performance at much lower cost and with up to 10 times faster inference. The system separates search from generation and can iteratively refine sub-queries across multiple turns.
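The multi-turn refinement loop described above can be illustrated with a toy sketch. Everything here is a hypothetical placeholder, assuming a search/generation split: `search` stands in for the retrieval backend, `refine` for the model proposing a narrower sub-query, and the stopping rule is purely illustrative.

```python
def search(query: str) -> list[str]:
    # Toy in-memory "index" standing in for a real retrieval backend.
    corpus = {
        "agentic search": ["doc-retrieval", "doc-context1"],
        "agentic search model": ["doc-context1"],
    }
    return corpus.get(query, [])

def refine(query: str) -> str:
    # Stand-in for the model rewriting its sub-query based on results.
    return query + " model"

def retrieve(query: str, max_turns: int = 3) -> list[str]:
    results: list[str] = []
    for _ in range(max_turns):
        results = search(query)
        if len(results) == 1:  # toy stopping rule: query is specific enough
            break
        query = refine(query)
    return results

print(retrieve("agentic search"))
```

Keeping retrieval in its own loop like this is what lets a smaller model iterate cheaply before a generator ever sees the documents.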
Cohere Transcribe: a new open-source speech recognition model topping accuracy on the Hugging Face Open ASR leaderboard.
Codex plugins: OpenAI added plugin support so developers can package skills, app integrations, and MCP configs into installable units.
Claude Code scheduled tasks: Claude Code on the web can now run recurring tasks on Anthropic-managed infrastructure.
Dreamina Seedance 2.0: CapCut is rolling out a new video and audio generation model to paid users in select markets.
Anthropic wins injunction blocking the Pentagon’s Claude ban as the company reportedly weighs an IPO as early as October.
Eli Lilly and Insilico deal brings a $2.75 billion AI-drug licensing agreement and 28 compounds into development.
Huawei’s new AI chip is drawing orders from ByteDance and Alibaba, signaling stronger competition with Nvidia in China.
Wikipedia bans AI writing while still allowing grammar fixes and translations with human review.