Google's AI adoption (3 minute read)
Former Googler Steve Yegge reveals that Google's own DeepMind engineers use Anthropic's Claude over Google's Gemini, exposing a two-tier system and internal dysfunction around AI adoption.
What: According to anonymous sources within Google, the company has a split AI tool policy: DeepMind engineers use Claude daily, while other engineers are restricted to internal Gemini variants whose reliability problems are reportedly severe enough to raise attrition concerns.
Why it matters: This reveals a fundamental credibility problem: the team building Google's AI doesn't trust it enough to use it themselves. It also shows that even AI industry leaders struggle to close the gap between developing AI and actually adopting it internally for productivity gains.
Takeaway: Evaluate AI coding tools based on actual productivity and quality rather than vendor or internal politics, and resist mandated adoption metrics that encourage box-checking over genuine value.
Deep dive
- Google attempted to equalize AI tool access by proposing to remove Claude for everyone, but DeepMind engineers objected so strongly that several threatened to quit
- Non-DeepMind engineers are pushed onto internal Gemini variants hidden behind router-style names that obscure which model is actually serving requests
- Multiple engineers report regressions and reliability problems severe enough that some senior engineers have stopped using the tools entirely
- Leadership has responded to low adoption by mandating AI usage in OKRs and creating an internal token-usage leaderboard to track who uses AI tools
- Managers received contradictory guidance about whether the leaderboard will be used for performance reviews, creating confusion and distrust
- Google claims 40,000 software engineers use agentic coding weekly, but Yegge argues "weekly" is a low bar that includes people who tried it once and abandoned it
- A senior manager on a major product line has flagged attrition concerns specifically related to poor AI tooling quality
- Anonymous Googlers reached out to Yegge expressing fear of being doxxed and concern about internal bullying over this issue
- The situation suggests Google's engineering culture hasn't adapted to high-volume AI-assisted coding practices
- Yegge emphasizes that even companies that look far ahead from the outside are struggling with AI adoption, and no one should feel behind
Decoder
- DeepMind: Google's AI research lab, the team that built models like AlphaGo and contributes to Gemini development
- Agentic coding: AI tools that autonomously perform multi-step coding tasks rather than just autocomplete suggestions
- OKRs: Objectives and Key Results, Google's goal-setting framework used to measure employee performance
- Router-style names: Internal naming conventions that hide which specific AI model is actually processing requests
- Token-usage leaderboard: Internal dashboard tracking how many AI tokens (units of text processing) each engineer uses, meant to measure AI adoption
Original article
DeepMind engineers use Claude as a daily tool, but most of the rest of Google does not.