What happens to consumer AI when models stop getting smarter?
If you're trying to improve an AI response, there are really only two ways to do it: use a smarter model, or write a better prompt. So, what happens if LLMs stop getting smarter? We'll have to start focusing on the prompt a lot more.
Most AI interactions today start with you explaining everything. We can basically break down any AI prompt into:
- Prompt = Actual Question + Background Context [1]
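To make the decomposition concrete, here's a toy version in Python (the strings are invented for illustration):

```python
# Toy illustration of the decomposition above; the content is made up.
actual_question = "Which tires should I buy?"
background_context = "I drive a 2019 Subaru Outback and winters here are snowy."

# What you actually have to type today:
prompt = background_context + "\n\n" + actual_question
```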
One obvious way to keep improving the quality of AI interactions is to *automatically add* the background context: it saves the user the trouble of typing it all out, and it captures context the user might forget or be too lazy to include.
AI products downstream of OpenAI already know this: when everyone has the same intelligence, the battle is over who can do the best context engineering (i.e., adding the right background information to the AI, either by writing prompts or by automatically pasting in outside information).
Improving context quality would be a really good thing for consumer AI
Explaining the relevant context before an AI interaction can get annoying, especially if you've already written it down somewhere else. This information usually isn't a secret. The model of car you drive, the link to the repo you're working on, the names of your friends: all of it is easy to find in your search history, email, personal notes, etc.
AI interactions would be so much easier if every prompt you wrote was hydrated with context from these (easily accessible) sources.
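A minimal sketch of what that hydration step could look like. Everything here is hypothetical: the source connectors are hardcoded stand-ins, and "relevance" is naive keyword overlap rather than real retrieval.

```python
from typing import Callable

# Hypothetical connectors to personal data sources; each returns short
# text snippets. Real versions would hit the notes app, browser, etc.
SOURCES: dict[str, Callable[[], list[str]]] = {
    "notes": lambda: ["Repo: github.com/example/project", "Car: 2019 Outback"],
    "search_history": lambda: ["best winter tires outback", "discord webhook docs"],
}

def relevant_snippets(prompt: str, max_snippets: int = 5) -> list[str]:
    """Rank snippets by naive keyword overlap with the prompt."""
    words = set(prompt.lower().split())
    scored = [
        (len(words & set(snippet.lower().split())), snippet)
        for fetch in SOURCES.values()
        for snippet in fetch()
    ]
    scored.sort(reverse=True)
    return [s for score, s in scored[:max_snippets] if score > 0]

def hydrate(prompt: str) -> str:
    """Prepend whatever background context looks relevant to the prompt."""
    context = relevant_snippets(prompt)
    if not context:
        return prompt
    return "Background context:\n" + "\n".join(context) + "\n\n" + prompt

print(hydrate("Which winter tires should I get for my Outback?"))
```

The interesting engineering problem is the relevance filter: dumping your entire history into every prompt would blow the context window, so something has to decide what matters.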
AI's Google Ads moment
Here's what the future will look like: each person will have a consolidated memory store that aggregates facts worth remembering from their online activity.
This memory store will act as context-injecting middleware, ensuring that every message sent between human and AI comes with the perfect background context already added. This will lead to much more relevant, and ultimately magical, AI interactions.
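As a sketch, the middleware shape might look something like this (all of the names here, MemoryStore and call_model included, are hypothetical stand-ins, not any real API):

```python
class MemoryStore:
    """A stand-in for the consolidated per-user memory described above."""
    def __init__(self) -> None:
        self.facts: list[str] = []

    def retrieve(self, message: str) -> list[str]:
        # Placeholder retrieval: in practice, embedding search over facts.
        return self.facts[-5:]

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

def call_model(prompt: str) -> str:
    """Stub for whatever model API sits behind the middleware."""
    return f"(model response to: {prompt!r})"

def chat_with_memory(memory: MemoryStore, message: str) -> str:
    # 1. Inject: prepend the user's stored context to the outgoing message.
    context = memory.retrieve(message)
    prompt = "\n".join(["Known about this user:", *context, "", message])
    response = call_model(prompt)
    # 2. Learn: store anything from the exchange worth remembering.
    #    (A real system would extract facts with a cheap model pass.)
    memory.remember(f"asked about: {message[:60]}")
    return response
```

The key property is that the memory store sits between the user and the model on every message, injecting context on the way out and learning on the way back, without the model provider ever owning it.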
This is basically exactly what happened with personalized advertising in the 2010s. Last decade, we built out massive infrastructure that pulls information from every corner of a user's online footprint and consolidates it to improve ad relevance.[2] An industrial-scale AI memory infrastructure buildout in the service of background context feels inevitable.
Model providers probably won't own the memory store
Sounds like a data moat. Who has access to the background context dataset? Not model providers, for the most part. It’s highly balkanized:
- notes you scribble to yourself
- browsing and search history
- content you watch on tiktok and other platforms
- discord servers and group chats
- calendars, emails, files, code repos, etc
A new company, sitting on some store of high-quality personal context data, will have to build this memory layer.
These trends actually pretty much hold even in the world where AI does get a lot smarter. (I opted to focus on the "what if AI stops getting smarter" case for clickbait, haha.) Let's examine the principles underlying the arguments made in this post and show that they still apply even if AI gets much smarter.
- Response Quality = Intelligence + Context Quality.
  - This will still hold: it's a fact about any AI product, regardless of intelligence. Incremental gains will be achievable by increasing either intelligence or context quality.
  - In fact, the relationship between intelligence and context quality is likely convex: the smarter a model is, the more productive each improvement to context quality becomes. (A stronger model can actually act on a linked repo, where a weaker one couldn't.)
- The most important personal context lies outside of intelligence providers.
  - This is clearly still true. You could argue that as AI intelligence improves, the winning player gets a chance at becoming the "front door to the internet", but Google already is that front door and still lacks a lot of important context (personal notes, content consumption, text messages).
- AI model costs will go down.
  - This may not hold true for all applications, but I suspect it *will* for consumer AI.
  - We need to ask ourselves: is intelligence currently the bottleneck for any consumer AI use case?
  - For the largest consumer AI use cases (searching for products and services, AI companionship, cheating on homework), AI already works great. In fact, people complained when OpenAI upgraded ChatGPT to the more intelligent GPT-5 because it had fewer of the sycophantic qualities of 4o.
Who’s building this?
Me and Sam Liu. Specifically, we're building Allegory, a personal AI context dump that you control. It consolidates all the personal context about your life that you feed it, then uses it to augment AI interactions, power search, and self-organize. If you think this sounds cool, say hi.
[1] This framing comes from Letta's great Sleep-Time Compute paper. ↩
[2] Aside: we expected this massive consolidation of information by a few key players (we called it "Big Data") to fundamentally change the way society functioned, but it turned out to mostly just enable better ads. Will personal information for AI be different? I think so: AI excels at creating value out of aggregated information from disparate sources. (Credit for this observation goes to my friend spot lemma.) ↩
Postscript: Beyond on-demand chat
In five years, we'll think it's crazy that we used to have to start every AI interaction by explaining everything. In ten years, we'll think it's crazy that we ever had to "prompt" AI at all. With high-enough-quality context, AI should be able to answer your question before you even ask it. Like personalized ads today, in the future AI will be able to read your mind.
- If it can see you’re stuck on a bug, it texts you what you’re missing.
- When you start to get hungry, you look at your phone and see a text with lunch options.
- If you’re booking a vacation, it does deep research in the background to pre-empt your questions.
As compute costs decrease (if models can't get more intelligent, then costs have to go down), we'll be able to invest more in precomputing responses to likely prompts, creating these magical interactions.
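A rough sketch of what that precomputation could look like, assuming a hypothetical predict_likely_prompts step (which in practice would itself be a cheap model pass over the memory store):

```python
from typing import Callable

def predict_likely_prompts(recent_context: list[str]) -> list[str]:
    """Hypothetical: guess what the user will want next from their context.
    Hardcoded here for illustration."""
    return ["lunch spots open near me right now", "why is my CI build failing"]

def precompute(recent_context: list[str],
               answer: Callable[[str], str]) -> dict[str, str]:
    # With idle compute cheap, answer likely prompts before they're asked;
    # the "magical" interaction later is just a cache hit.
    return {p: answer(p) for p in predict_likely_prompts(recent_context)}
```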