Devoured - April 21, 2026
Google builds elite team to close the coding gap with Anthropic (2 minute read)

Google DeepMind assembled a specialized team to improve Gemini's coding abilities after internally acknowledging that Anthropic's coding tools currently outperform theirs.

What: The team, led by Sebastian Borgeaud (former pre-training lead), focuses on teaching Gemini to handle complex programming tasks like writing new software from scratch. Co-founder Sergey Brin is directly involved and has mandated that all Gemini engineers use internal AI agents for multi-step tasks.
Why it matters: Coding has become the primary competitive battleground for major AI labs in 2026, with Google viewing stronger coding capabilities as essential progress toward self-improving AI systems that could eventually automate AI research itself.
Deep dive
  • Google DeepMind formed a dedicated team led by Sebastian Borgeaud to improve Gemini's coding capabilities after internally acknowledging that Anthropic's coding tools are currently superior
  • The team focuses on complex long-horizon programming tasks like writing new software from scratch, requiring models to read files and interpret user intent
  • Sergey Brin and DeepMind CTO Koray Kavukcuoglu are directly overseeing the effort, with Brin writing that Google must "urgently bridge the gap in agentic execution"
  • Brin mandated that all Gemini engineers use internal AI agents for complex multi-step tasks, making AI adoption a requirement rather than optional
  • Google sees stronger coding capabilities as a stepping stone toward self-improving AI systems that could eventually automate AI research itself
  • The company tracks usage of its internal coding tool "Jetski" and ranks teams accordingly, similar to Meta's token usage metrics
  • Google is training models on its internal codebase, which differs from public code and cannot be released publicly, but could improve both internal development and future public models
  • Coding has become the primary battleground for major AI labs in 2026, with OpenAI shutting down Sora video generation to redirect compute resources toward other models
Decoder
  • Agentic execution: AI systems that can autonomously plan and carry out multi-step tasks without constant human guidance
  • Long-horizon tasks: Complex programming work requiring planning across many steps and future consequences, like designing entire applications from scratch
  • Self-improving AI: AI systems capable of modifying and enhancing their own capabilities, potentially leading to rapid recursive improvement
  • Jetski: Google's internal AI-powered coding assistant used by employees
Original article

Google builds elite team to close the coding gap with Anthropic

Google is doubling down on AI coding, using more AI internally and aiming for models that can eventually improve themselves.

Google DeepMind has put together a specialized team of researchers and engineers to sharpen the programming chops of its Gemini models, The Information reports. The group is led by DeepMind engineer Sebastian Borgeaud, who previously ran pre-training for the company's models.

The team is focused on complex, long-horizon programming tasks like writing new software from scratch, work that requires models to read files and figure out what the user actually wants. Part of the motivation: Google researchers think Anthropic's coding tools are better.

Coding has become the battleground for every major AI lab this year, with OpenAI and Google both scrambling to catch up to Anthropic. OpenAI recently pulled the plug on its Sora video generator to free up compute for training and running other AI models.

Brin pushes for self-improving AI

Google co-founder Sergey Brin and DeepMind CTO Koray Kavukcuoglu are directly involved in the effort. "To win the final sprint, we must urgently bridge the gap in agentic execution and turn our models into primary developers" of code, Brin wrote in an internal memo. He also required every Gemini engineer to use internal agents for complex, multi-step tasks.

Brin told employees that stronger coding skills are a stepping stone toward AI that can improve itself. A sophisticated coding agent, paired with AI that handles math problems and experiments, could eventually automate much of the work done by AI researchers and engineers.

Internally, Google tracks how much its coding tool "Jetski" gets used and ranks teams accordingly, a setup similar to Meta's, which uses token usage as its metric. Some teams outside DeepMind also require engineers to attend AI training sessions.

According to The Information's sources, Google is leaning more heavily on models trained on its internal code. Google's internal codebase looks very different from the public code typically used to train general-purpose coding agents, so these internally trained models can't be released publicly. They could, however, help Google build better models that eventually ship to users, while also speeding up internal development.