AI, Ad Dollars

I liked Ethan Mollick’s post on ad dollars earlier this week; here it is if you missed it:

No one has figured out how you integrate advertising with LLM replies. If it is contextual ads around the LLM, then a good LLM answer should provide more guidance to the product you want than ads, making the ads useless. If ads are integrated into the prompt, with the instructions that the advertiser be recommended, that will lead to inaccurate, bad answers. This is sort of a big deal, given that where the sloshing pool of ad dollars flows determines almost everything about the digital spaces in which we operate.

He’s right: the dollars are sloshing around, and it is inevitable that advertising and AI will work together in a big way. We live in a capitalist society. We have a need to sell things to each other. Human attention is finite. If attention shifts to AI, then so will ad dollars. That creates a market. So let’s have some fun - how will it work?

I think many of the answers we see today are stuck in today’s world. In the 90s, around the advent of the internet, the first iteration of advertising involved purchasing banners/pixels on high-traffic sites. The internet was so small that this sort of worked for a while, and it matched the existing paradigm (display/press advertising) that folks knew how to sell. Banners on Yahoo were valuable: in Q4 1998, Yahoo (the most trafficked site on the web) had 2,225 advertising customers and made about $34,000 per customer, with each signed contract running about 145 days. Those advertisers were paying for exposure to 167 million or so page views per day from about 50 million unique users - in total, about $280 million* - so the cost of a banner was super cheap, mostly one assumes because the advertising was novel and the returns were uncertain. It made sense.
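Those figures also show just how cheap a banner was. A rough back-of-envelope (my own arithmetic on the numbers quoted above, assuming every page view carried a banner and that revenue maps cleanly onto the 92-day quarter, which the 145-day contracts make approximate at best):

```python
# Back-of-envelope on Yahoo's Q4 1998 figures quoted above.
customers = 2_225
revenue_per_customer = 34_000           # USD, Q4 1998
page_views_per_day = 167_000_000
days_in_quarter = 92

quarterly_revenue = customers * revenue_per_customer          # ~$75.7M
quarterly_impressions = page_views_per_day * days_in_quarter  # ~15.4B

effective_cpm = quarterly_revenue / quarterly_impressions * 1_000
print(f"Quarterly ad revenue: ~${quarterly_revenue / 1e6:.0f}M")
print(f"Effective CPM: ~${effective_cpm:.2f} per thousand page views")
```

Call it roughly $5 per thousand page views - super cheap indeed.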

Of course, in 1998 Google launched and started to slowly obliterate all of that, first by building a natural monopoly around search and then an ad-supported revenue model based on pay-per-click (AdWords arrived in 2000, with pay-per-click pricing following in 2002). Pay-per-click won out versus what the rest of the market was doing at the time because it was a step further down the funnel - don’t pay for the impression, pay for the click-through to your site. Users won because Google’s PageRank algorithm was far superior and ads were relevant to searches, and everything got better as it scaled up thanks to a reinforcing positive feedback loop - ads got more targeted with all of the data Google collected about you.

It looks to me like history is rhyming again. The natural iteration, which takes today’s existing paradigm of pay-per-click and repeats it, is pay-per-response. Everyone is assuming that this is how it will work, and the idea is bad enough that the moaning about it has begun before it has even happened. Some major players are rumoured to be heading down this track already.

Pay-per-response is bad because it immediately makes the product a lot less useful for end users, introducing bias and reducing trust. It won’t be enough to shift significant amounts of advertising dollars either, as there’s no efficiency gain for advertisers - each dollar spent will look about the same across AI and search. It’s such a bad idea that I doubt it will ever really become a reality; for now there is a more valuable feedback loop for OpenAI/Anthropic etc. in unbiased interactions with their products. So what’s the new model? Let’s see if we can reason our way through it and make a prediction.

What worked with pay-per-click is that there was a down-funnel shift that boosted spend efficiency for advertisers. Ads also got more relevant and useful, which was good for users. And Google had a data advantage that compounded with scale, which made things better for everyone. So to shift the spend and disrupt the existing paradigm, we need a step change in spend efficiency and something that keeps getting better with scale for both users and advertisers.

With AI, software gets a lot cheaper and we get a lot lazier. However, in the near term we’re all going to have trust issues, which will relax over time along some experience curve. So, trading off trust and laziness, a sensible and technically feasible down-funnel shift that increases efficiency for both users and advertisers would be a shift to the basket level. Assemble many candidate baskets and stack-rank them on relevance, but don’t check out. Return the ranking to the user with the option of one-click checkout on the winning basket. The reinforcing feedback loop that scales and makes things better is the AI’s knowledge about each individual and their preferences - the basket ranking gets better over time.
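To make the mechanics concrete, here is a minimal sketch of the basket step - the class names, the additive scoring, and the simple preference update are my own assumptions for illustration, not anything the labs have described:

```python
from dataclasses import dataclass

@dataclass
class Basket:
    merchant: str
    items: list[str]
    price: float
    relevance: float   # how well the basket matches the user's request, 0..1

def rank_baskets(baskets: list[Basket], preferences: dict[str, float]) -> list[Basket]:
    """Stack-rank candidate baskets; the user sees the ordering plus a
    one-click checkout on the winner - nothing is purchased automatically."""
    def score(b: Basket) -> float:
        # Relevance to this request, nudged by what has been learned about the user.
        return b.relevance + preferences.get(b.merchant, 0.0)
    return sorted(baskets, key=score, reverse=True)

def record_choice(preferences: dict[str, float], chosen: Basket, lr: float = 0.1) -> None:
    """The feedback loop: each accepted basket nudges that merchant's weight,
    so the ranking personalises as the AI learns the individual's preferences."""
    preferences[chosen.merchant] = preferences.get(chosen.merchant, 0.0) + lr
```

The point of the sketch is where the loop closes: every accepted basket feeds the next ranking, and that is the thing that keeps improving with scale.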

This suggests we can ramp up the level of automation as we move along the experience curve and come to trust the basket selection, thanks to the transparency of the process: a steadily increasing level of personalisation with less prompting, plus strong controls - the product keeps getting better with scale. The utility for end users increases because you are only asked to make a decision at the right time; the rest of the time the whole process stays out of the way of your day-to-day interactions with the AI (it only cares about high-intent-to-purchase moments, which prevents enshittification). The same approach probably works for most things you can buy online today (at least when I work through my non-exhaustive mental checklist - try it and see for yourself).

OK, seems believable so far. Does advertising fit into the model? Getting a user close to the bottom of the funnel with high intent to purchase is far more valuable than a click-through at the top of the funnel. That’s a step change in efficiency that will shift dollars. A pricing algorithm for the ad that took into account the likelihood of the user accepting the advertiser-suggested basket would give something that echoed the winning experience in search - ad relevance, correctly trading off the utility of the user and the advertiser as we stack-rank the baskets.
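One shape that pricing algorithm could take - again a sketch, where the linear trade-off, the pay-only-on-acceptance rule, and the acceptance-probability estimate are all my assumptions rather than anyone’s announced design:

```python
from dataclasses import dataclass

@dataclass
class SponsoredBasket:
    merchant: str
    relevance: float   # utility of the basket to the user, 0..1
    bid: float         # what the advertiser pays, charged only if the basket is accepted
    p_accept: float    # estimated probability that this user accepts this basket

def ad_score(b: SponsoredBasket, user_weight: float = 1.0) -> float:
    # Expected advertiser value plus weighted user utility - the same shape as
    # search's ad rank (bid x quality), moved down the funnel to basket acceptance.
    return b.bid * b.p_accept + user_weight * b.relevance

candidates = [
    SponsoredBasket("MerchantA", relevance=0.9, bid=0.0, p_accept=0.6),  # organic basket
    SponsoredBasket("MerchantB", relevance=0.7, bid=2.0, p_accept=0.4),  # sponsored basket
]
winner = max(candidates, key=ad_score)  # the advertiser pays only if the user checks out
```

The user_weight knob is where the user/advertiser trade-off lives: set it too low and you have reinvented pay-per-response; set it high enough and only genuinely relevant sponsored baskets can win.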

What do you think?