Henry Dowling's Blog

What happens to consumer AI when models stop getting smarter?

If you're trying to improve an AI response, there are really only two ways to do it: use a smarter model, or write a better prompt. So, what happens if LLMs stop getting smarter? We'll have to start focusing on the prompt a lot more.

Most AI interactions today start with you explaining everything. We can basically break down any AI prompt into two parts: the background context and the actual request.[1]

One obvious way to keep improving the quality of AI interactions is to *automatically add* the background context, saving the user the trouble of typing it all out and capturing context the user might forget or be too lazy to include.

AI products downstream of OpenAI already know this: when everyone has access to the same intelligence, the battle is over who can do the best context engineering (i.e., adding the right background information to the AI, either by writing prompts or by automatically pasting in outside information).

Improving context quality would be a really good thing for consumer AI

Explaining the relevant context before an AI interaction can get annoying, especially if you've already written it down somewhere else. This information usually isn’t a secret. What model car you drive, the link to the repo you’re working on, the names of your friends: all this info is easy to find from your search history, email, personal notes, etc.

AI interactions would be so much easier if every prompt you wrote was hydrated with context from these (easily accessible) sources.

AI's Google Ads moment

Here's what the future will look like: each person will have a consolidated memory store that aggregates facts worth remembering about them from their online activity.

This memory store will act as context-injecting middleware, ensuring that every message sent between human and AI comes with the perfect background context already added. This will lead to much more relevant, and ultimately magical, AI interactions.
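To make the idea concrete, here's a minimal sketch of what context-injecting middleware could look like. Everything here is hypothetical: the class name, the plain-list memory format, and the stub model are all illustrative, not any real product's API. A real system would retrieve only the relevant facts rather than prepending everything.

```python
from typing import Callable

class ContextMiddleware:
    """Hypothetical middleware: sits between the user and a model,
    injecting remembered facts into every outgoing message."""

    def __init__(self, model: Callable[[str], str], memory: list[str]):
        self.model = model
        self.memory = memory

    def send(self, user_message: str) -> str:
        # Prepend the remembered facts to the outgoing message.
        context = "\n".join(f"- {fact}" for fact in self.memory)
        enriched = (
            f"Background context about the user:\n{context}\n\n"
            f"User message: {user_message}"
        )
        return self.model(enriched)

def stub_model(prompt: str) -> str:
    # Stand-in for a real model call; just echoes its input.
    return prompt

mw = ContextMiddleware(stub_model, ["Drives a 2019 Honda Civic."])
print(mw.send("What tires should I buy?"))
```

The point of the wrapper shape is that the user-facing interface doesn't change at all: you still just send a message, and the hydration happens invisibly in the middle.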

This is basically exactly what happened with personalized advertising in the 2010s. Last decade, we built out massive infrastructure that pulls information from every corner of a user's online footprint and consolidates it to improve ad relevance.[2] An industrial-scale AI memory infrastructure buildout in the service of background context feels inevitable.

Model providers probably won't own the memory store

Sounds like a data moat. Who has access to the background context dataset? Not model providers, for the most part. It's highly balkanized: your search history, email, and personal notes all live with different providers.

A new company, sitting on some store of high-quality personal context data, will have to build this memory store.

But wait, what if models do keep getting smarter?

These trends mostly hold even in a world where AI does get a lot smarter. (I opted to focus on the "what if AI stops getting smarter" case for clickbait, haha.) Let's examine the principles underlying the arguments made in this post and show that they still apply in a world where AI gets much smarter.

Who’s building this?

Sam Liu and me. Specifically, we're building Allegory, a personal AI context dump that you control. It consolidates all the personal context about your life that you feed it, and uses it to augment AI interactions, power search, and self-organize. If you think this sounds cool, say hi.



  1. This framing comes from Letta's great Sleep Time Compute paper.
  2. Aside: we expected this massive consolidation of information by a few key players (we called it "Big Data") to fundamentally change the way society functioned, but it turned out to mostly just enable better ads. Will personal information for AI be different? I think so: AI excels at creating value from information aggregated across disparate sources. (Credit for this observation goes to my friend spot lemma.)


Postscript: Beyond on-demand chat

In five years, we'll think it's crazy that we used to have to start every AI interaction by explaining everything. In ten years, we'll think it's crazy that we ever had to "prompt" AI at all. With high-enough-quality context, AI should be able to answer your question before you even ask it. Like personalized ads today, in the future AI will be able to read your mind.

As compute costs decrease (if models can't get more intelligent, then costs have to go down), we'll be able to invest more in precomputing responses to likely prompts, creating these magical interactions.
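The precomputation idea can be sketched in a few lines. The prompt-prediction step is hand-waved here (assume some upstream system produces likely prompts from the user's context), and all names are illustrative:

```python
# Sketch of precomputing responses to likely prompts, assuming compute is
# cheap enough to spend speculatively.

def precompute(likely_prompts: list[str], model) -> dict[str, str]:
    # Spend idle compute answering prompts the user is predicted to ask.
    return {p: model(p) for p in likely_prompts}

def respond(prompt: str, cache: dict[str, str], model) -> str:
    # Serve instantly on a predicted prompt; fall back to a live call.
    return cache[prompt] if prompt in cache else model(prompt)

def model(p: str) -> str:
    # Stand-in for a real (slow, expensive) model call.
    return f"(answer to: {p})"

cache = precompute(["What's on my calendar today?"], model)
print(respond("What's on my calendar today?", cache, model))
```

The economics only work if predicted prompts actually get asked, which is exactly why the quality of the background context (and the prediction it enables) becomes the bottleneck.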