Ex-OpenAI researcher Jerry Tworek launches Core Automation to build the most automated AI lab in the world (1 minute read)
A former OpenAI researcher has launched an AI lab focused on automating research itself and on developing alternatives to today's pre-training methods and transformer architectures.
Decoder
- Pre-training: The initial phase of training large language models on massive datasets before fine-tuning for specific tasks
- Reinforcement learning: A machine learning approach where models learn by receiving rewards or penalties for their actions
- Transformers: The neural network architecture underlying models like GPT and Claude, which uses attention mechanisms to process sequences
Jerry Tworek, a former OpenAI researcher, has unveiled his new AI lab, "Core Automation," with the goal of building "the most automated AI lab in the world," starting by automating its own research.
Instead of chasing ever-larger models trained on more data, Core Automation says it's developing new learning algorithms that go beyond pre-training and reinforcement learning, plus architectures designed to scale better than transformers.
The team brings together experts in frontier models, optimization, and systems engineering. The vision: small teams, paired with capable AI agents, doing work that once required entire organizations.
Tworek left OpenAI in January 2026 after seven years, saying this kind of fundamental research was no longer possible there. In his view, deep learning research "is done."
Core Automation joins a growing list of so-called Neo Labs founded by OpenAI alumni and others, including Thinking Machines Lab (led by OpenAI's former CTO) and Safe Superintelligence (led by its former chief scientist). They all share the belief that real progress in AI now depends on fundamentally new approaches.