Shunyu Yao, a researcher from OpenAI who worked on Deep Research, makes the case for fundamentally altering our approach to benchmarking now we’re in “the second half”:
Inertia is natural, but here is the problem. AI has beat world champions at chess and Go, surpassed most humans on SAT and bar exams, and reached gold medal level on IOI and IMO. But the world hasn’t changed much, at least judged by economics and GDP.
I call this the utility problem, and deem it the most …
... [... 126 words]Simon’s blog is a gold mine. He just runs that bit further than everyone else and it shows time and again. Here he uses Cursor’s GDPR subprocessor disclosure to document their stack (the use of Fireworks and Turbopuffer is the interesting bit here). The killer bit is the disclosure at the end though:
When operating in privacy mode - which they say is enabled by 50% of their users - they are careful not to store any raw code on their servers for longer than the duration of a single …
... [... 282 words]I think this is a very good post. Taking the time to test for yourself and understand how each model generation is useful to you, in your context is clearly going to be a big advantage. So much of the assessment of LLMs is vibes based that your own vibes matter most, so spending some time defining what they are is important. This blog offers a framework, and examples, of how to do just that.
Nice explainer that sets out the boundaries of the RL techniques now dominating progress in AI. The list quoted here neatly describes what the jagged edge of AI will look like for the next little while:
Reinforcement learning is a powerful tool. Right now, though, it’s best used when:
You have a verifiable problem: math, coding, robot grasping
You have a way to generate a ton of data in this domain, but can’t necessarily generate optimal or even good data
The exploration problem is locally …
... [... 120 words]YC’s call for startups for the summer ‘25 batch includes a section on Fullstack AI, I’ve written about AI Rollups a few times on this blog, but it looks like the model might now accelerate.
Coincidentally the same day OffDeal (a YC company) has published their blueprint for a rollup that takes on investment bank M&A. Somewhat unusually, there’s tonnes of detail in this strategy doc so I’ve pulled our a few interesting bits below.
First up, note how they’ve …
... [... 669 words]This has been written about in a few places so I’ll keep it brief. It was interesting that one of the root causes (note, not the sole cause) of the ChatGPT sycophancy issues was the feedback loop from the thumbs up/down data on posts, from their blog post:
“We also teach our models how to apply these principles by incorporating user signals like thumbs-up / thumbs-down feedback on ChatGPT responses.”
What’s interesting here is that cohort age of user feedback makes a …
... [... 219 words]Interesting paper from Cohere, I think this might cause a bit of a storm - basically it’s an investigation into biases towards closed source model companies (OpenAI, Meta, Google DeepMind are named) in Chatbot Arena.
There’s three ways that the proprietary shops are favoured:
There’s private testing practices that means these model providers are able to test multiple variants before public release, enabling selective disclosure of results.
Proprietary closed models are …
... [... 222 words]Insightful post from J Betker on the MoE architecture. Here’s a few grabs:
The fact that MoE has great scaling properties indicates that something deeper is amiss with this architectural construct. This turns out to be sparsity itself – it is a new free parameter to the scaling laws for which sparsity=1 is suboptimal. Put another way – Chinchilla scaling laws focus on the relationship between data and compute, but MoEs give us another lever: the number of parameters in a neural network. …
... [... 274 words]This is really good, well worth the investment of your time. There is a lot of novel insight here that will shortly become de rigueur. There’s a few bits worth calling out.
The models are now heavily tuned for too use as we all know. gh
cli use is baked in:
Claude knows how to use the gh CLI to interact with GitHub for creating issues, opening pull requests, reading comments, and more. Without gh installed, Claude can still use the GitHub API or MCP server if you have those installed. …
... [... 438 words]Another AI prediction, but I think this one pinpoints some of the blockers much more clearly. In summary:
Roughly: generalist scaling does not work or, at least, not well enough to make meaningul sense for material deployment. Instead, most development, including agentification, happens in the smaller size range with specialized, opinionated training. Any actual “general intelligence” has to take an entirely different direction — one that is almost discouraged by formal evaluation. …
... [... 491 words]Well if Kent Beck is doing it:
Been vibe coding like a fiend. Task breakdown is a highly leveraged human decision. Coding models are both non-deterministic & sensitive to initial conditions. You’ll get very different results having your agent implement Task1->Task2->Task3 or Task2->Task3->Task1.
I don’t have good heuristics yet, I just observe that when I try to implement “the same thing” I get quite different results.
Kent Beck
... [... 68 words]The master speaks on AI 2027 forecasts. The discussion of these forecasts has been rumbling on. Kokotajilo himself puts the probability of a supercoder on a 2027 timeline at around 50%
I’m also impressed by Kokotajilo’s 2021 AI forecasts. It raises confidence in his Scenario 2027. But by how much? Tricky! In my earliest work on subjective-probability forecasting, 1984-85, few forecasters guessed how radical a reformer Gorbachev would be. But they were also the slowest to foresee the …
... [... 98 words]Great piece on Neil Mehta that has been doing the rounds this week. Interesting throughout, Green oaks is very focussed on the founder, which is normal at seed but typically has less emphasis at A, B and onwards. There’s nothing unusual in what he’s saying, I think what is unusual is the level of conviction with which they pursue that one thing.
“This is controversial,” Mehta replied, when asked if the Greenoaks machine has identified an ideal type, “but I do believe there’s an …
... [... 282 words]I definitely don’t agree with all the predictions here (why do AI nerds always get obsessed with making geopolitical predictions?) and after the end of ‘26 everything goes a bit crazy. However, I see a lot of weak, poorly specified AI predictions so when you see one this detailed I think it is worth paying attention to. As they note, after the end of ‘26 the confidence level drops off. I’d suggest stopping reading at that point to save yourself the time (it’s highly …
... [... 182 words]AI increases labour supply rather than reduces it, and watch out for those second order effects on society at large:
Occupations more exposed to generative AI saw a rise in work hours immediately following the release of ChatGPT. Compared to workers less exposed to generative AI (such as tire builders, wellhead pumpers, and surgical assistants) those in high-exposure occupations (including computer systems analysts, credit counsellors, and logisticians) worked roughly 3.15 hours more per week …
... [... 425 words]Great post from Tina He on the future of work in the era of AI. Firstly, we’ve been coming at things all wrong:
Traditional economics might predict that AI-boosted productivity would reduce working hours, a four-day weekend for tasks that once took five days. But reality has different plans. We’re witnessing what I call the “labor rebound effect”—productivity doesn’t eliminate work; it transforms it, multiplies it, elevates its complexity. The time saved becomes …
... [... 315 words]Similar to the Model is the Product a couple of weeks ago, the bitter lesson here is that brute forcing problems with compute wins versus clever solutions. Scaling compute at inference time with RL is the latest application of the bitter lesson, and we’re already seeing it move the needle in production use cases (customer support and soon, coding). This has big ramifications in the AI application layer:
While many companies are focused on building wrappers around generic models, …
... [... 177 words]Let’s join the dots between a few different themes this week.
First up, cursor rule files are vulnerable to prompt injection attacks. It’s possible to embed prompts within the rules files and hide them using invisible characters.
You can then use this poisoned rule file to redirect cursor/your agentic IDE of choice towards malicious implementations. This is not a huge surprise - the point of rules files is to direct the LLM towards specific implementations. What’s changed …
... [... 501 words]I think this is a strong take on the on the consequences of the recent RL breakthroughs from Alexander Doria:
I think it’s time to call it: the model is the product.
All current factors in research and market development push in this direction.
Generalist scaling is stalling. This was the whole message behind the release of GPT-4.5: capacities are growing linearly while compute cost are on a geometric curve. Even with all the efficiency gains in training and infrastructure of the past two …
... [... 536 words]Lots to digest here. A few pull quotes from the press release. Coding use cases are the focus of the upgraded model:
Claude 3.7 Sonnet shows particularly strong improvements in coding and front-end web development. Along with the model, we’re also introducing a command line tool for agentic coding, Claude Code. Claude Code is available as a limited research preview, and enables developers to delegate substantial engineering tasks to Claude directly from their terminal.
It’s a drop in …
... [... 267 words]