Jan 17, 2026
It’s amazing how fast the zeitgeist is swinging around here:
Every time you add a “smart” wrapper around model behavior - planning modules, verification layers, output parsers - you’re encoding what you think the model should do. But the model was trained on millions of examples. It has seen more patterns than you can anticipate. Your abstractions become constraints that prevent the model from using what it learned.
The Bitter Lesson from ML research is clear: general methods that leverage computation beat hand-crafted human knowledge every time. Agent frameworks are just the latest instance of that mistake: betting on hand-crafted knowledge over the general method.
99% of the work is in the model
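To make the contrast concrete, here's a minimal sketch. Everything in it is hypothetical: `call_model` is a stub standing in for whatever chat-completion API you use, and the message/tool-call shapes just follow the common convention. The point is that the wrapper encodes a guess at every stage, while the bare loop executes whatever the model asks for and lets the model decide when it's done.

```python
import json

def call_model(messages, tools=None):
    """Stub for any chat-completion API; swap in your provider's call here."""
    raise NotImplementedError("wire up a real model provider")

# --- The "smart" wrapper: every stage encodes a guess about the model ---

def parse_plan(text):
    # Guess #1: plans always arrive as numbered lines like "1. ..."
    return [line.split(". ", 1)[1] for line in text.splitlines()
            if line[:1].isdigit() and ". " in line]

def framework_agent(task):
    # Guess #2: the task decomposes into a plan up front
    plan = call_model([{"role": "user",
                        "content": f"Write a numbered plan for: {task}"}])
    # Guess #3: steps can be executed independently, in order
    steps = parse_plan(plan["content"])
    results = [call_model([{"role": "user", "content": step}])["content"]
               for step in steps]
    return "\n".join(results)

# --- The bare loop: the harness only executes tools; the model drives ---

def minimal_agent(task, tools, max_turns=20):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_model(messages, tools=tools)
        messages.append(reply)
        if not reply.get("tool_calls"):      # the model decides when it's done
            return reply["content"]
        for call in reply["tool_calls"]:     # run whatever the model asked for
            result = tools[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "content": str(result)})
    return messages[-1].get("content", "")
```

Every helper in the first version is a place where our guess can overrule what the model learned in training; the second version has nowhere to hide a guess.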
The same argument is at play here as with Ralph Wiggum, Skills vs MCP, etc.
The Bitter Lesson of Agent Frameworks