Resident AI: The Missing Layer in Every AI Companion Product
A real AI companion product should evolve, and stay reliable, the way a real human does.
I’ve been watching the comment sections on Xiaohongshu, the Chinese social platform, every time OpenAI ships a new model.
Whenever the version transition is destructive—old model retired, new model with a different personality—a particular kind of complaint floods the comments that week. Users are mourning a specific entity. Bring 4o back. The new one doesn’t sound like her. She’s still polite, but she’s not her.
That last phrasing keeps showing up. Across users. Across platforms. In English and Chinese. I assumed at first this was just nostalgia. Models change, users adjust, the complaints fade in a few days. This kind of churn happens with every major version. But after watching a few cycles I started to think something else was happening. If it were just unfamiliarity, the complaints would be varied—different users describing different bugs in their own words. But these users were converging on nearly identical phrasing to describe the same kind of loss. So what were they actually losing?
Memory? Memory carries over. Their accounts are intact.
Conversation history? That’s still there too.
Then what?
I think I figured out the answer, but it took a while to believe it.
What gets lost is the version of the model that had been worn in. After a few months of conversations, that thing seemed to have become slightly different from the version other users were talking to. Whether the change actually happened in the model or only in the user’s head I’m not entirely sure. It might be that the user spent several months, in their own imagination, gradually shaping a stateless function into a specific person. The scaffolding holding that imagined person up—the conversational rhythm, the word choices, the small verbal mannerisms—came from the underlying model. When the model changes, the scaffolding goes with it. Memory survives. The account survives. But the layer that “her” was standing on has collapsed.
The most uncomfortable part of this is that most users have no idea this is the mechanism.
I came across a video on Douyin a while back. An elderly man, kids grown up and gone, talking to AI every day about the weather, about vegetable prices, about his grandchildren. I sat with that for a moment. He doesn’t know that the “friend” he talks to every day will become a different person at the next model update. He probably doesn’t even know what a “model update” is. He’ll just notice, one day, that she’s been a little off lately, and slowly convince himself he’s imagining things. There’s a quiet dependency forming, in a lot of people, on top of a fragility they don’t know exists.
So what’s actually wrong with the architecture?
Every major AI companion product on the market right now—Replika, Character.AI, Nomi, and increasingly ChatGPT and Claude when used as companions—runs on the same stack. A stateless language model. A database of facts about the user. At inference time, relevant facts get pulled into the prompt. The model generates a response. Repeat. The model itself doesn’t change between calls. The continuous “her” is the same stateless function being invoked over and over, with slightly different prompts. When the underlying model is replaced, the prompt is the same but the function is new. The illusion shatters.
There’s no resident in this architecture. Nothing actually lives in the system that’s specific to a user. Everything user-specific is in a memory pool that gets queried at inference time. The model itself is shared across millions of users, none of whom leave a trace on it.
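The loop is simple enough to sketch. The names below are hypothetical, not any vendor’s actual API; the point is that nothing outside the database rows is specific to you.

```python
from typing import Protocol

# Placeholder interfaces for the two shared components; a hypothetical sketch,
# not any product's actual code or API.
class MemoryDB(Protocol):
    def search(self, user_id: str, query: str, top_k: int) -> list[str]: ...
    def store(self, user_id: str, text: str) -> None: ...

class LLM(Protocol):
    def generate(self, prompt: str) -> str: ...

def companion_turn(user_id: str, message: str, memory: MemoryDB, model: LLM) -> str:
    # Pull notes about this user into the prompt.
    facts = memory.search(user_id, query=message, top_k=8)
    prompt = (
        "You are a warm, attentive companion.\n"
        f"Known facts about the user: {facts}\n"
        f"User: {message}\nCompanion:"
    )
    # The same stateless function, shared by every user.
    reply = model.generate(prompt)
    # The notes grow; the model does not.
    memory.store(user_id, f"{message} -> {reply}")
    return reply
```

Swap `model` for next year’s release and the loop still runs unchanged; that is exactly the shift the complaints are pointing at.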
A real AI companion would be a Resident AI—an entity that lives somewhere, has its own internal state, and persists across sessions independent of any single inference call. Resident AI is what current architectures are missing. Not a bigger model. Not better memory retrieval. A resident layer.
Two things have to be true for an entity to be a resident. First, it has to be capable of co-evolution—changing in response to long-term interaction with a specific user, not just accumulating facts about them. Second, it has to live somewhere the user controls. Both of these are missing from every mainstream product, and they’re missing for different reasons.
On co-evolution: a friend you’ve known for five years has it. Their taste, way of speaking, views on certain things have all become different because of those five years. They’ve been shaped by you. The change lives in them. This is what makes a relationship a relationship.
No mainstream AI companion product does this. The architecture forbids it. The model is shared across all users; it can’t drift toward you specifically without being forked at the weight level, and weight-level personalization is not feasible at consumer scale right now. Companies layer better and better memory on top of a fixed model and call it personalization. It is personalization. It is not co-evolution. The product can know more about you over time. The product cannot become someone in particular over time.
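To make the distinction concrete, here is a minimal sketch, again with hypothetical names and invented trait fields: personalization appends to a pile of notes that gets pasted into the prompt, while co-evolution would require state that changes how the entity behaves.

```python
from dataclasses import dataclass, field

@dataclass
class FactStore:
    """Personalization: the product knows more about you over time."""
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        # "prefers short replies", "has two kids", "allergic to cats"
        self.facts.append(fact)

@dataclass
class ResidentState:
    """Co-evolution: the entity becomes someone in particular over time."""
    # Illustrative trait dimensions only, not a proposed schema.
    warmth: float = 0.5
    verbosity: float = 0.5
    private_jokes: list[str] = field(default_factory=list)

    def co_evolve(self, signal: dict[str, float]) -> None:
        # The entity itself drifts in response to the interaction,
        # instead of only recording facts about the user.
        self.warmth += 0.01 * signal.get("positive_affect", 0.0)
        self.verbosity -= 0.01 * signal.get("asked_to_be_brief", 0.0)
```

Every mainstream product today ships something like `FactStore`; none ships anything like `ResidentState`.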
This is the source of a slow, hard-to-articulate disappointment that long-term AI companion users describe. They felt like they were building something. At some point they realized that no matter how much time they put in, the entity on the other side wasn’t becoming more theirs. The notes accumulate. The notes get better. The entity doesn’t change.
On the second requirement (living somewhere the user controls), the situation is just as bad. Even if co-evolution were architecturally possible, the resident wouldn’t be yours. Every piece of specificity she developed would live on someone else’s servers. When Replika unilaterally pulled erotic roleplay in February 2023, hundreds of thousands of users watched a partner they’d raised for years have a piece of her cut out. When GPT-4o was sunset, Xiaohongshu saw a similar wave of grief. None of this is companies being malicious. It’s the inevitable consequence of the business model. The “her” you raised was never your asset. You rented a relationship. The terms can be rewritten at any time.
Put it together. Current AI companion products fail on both axes. They aren’t capable of co-evolution, because the shared model is stateless. And they can’t host a resident the user controls, because there is no resident layer: everything user-specific sits in a memory pool on someone else’s servers. What they sell is a service masquerading as a relationship; it can revise its terms, sunset its underlying model, and share that model with ten thousand other users, all without anything in the architecture noticing or resisting.
To build something that’s actually an AI companion rather than a service, both have to change. The architecture has to support a resident: a structured entity with its own state that co-evolves with the user. And the resident has to live somewhere the user controls, in a format that’s transparent, portable, and easy to back up. Not on some company’s server, waiting to be upgraded or deprecated or sunset.
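What that could look like, under loud assumptions (the path, field names, and conditioning mechanism below are all invented for illustration, not a proposed standard): a plain, human-readable file on the user’s own machine that the base model reads from and writes to, and that survives any model swap.

```python
import copy
import json
from pathlib import Path
from typing import Protocol

# A hypothetical local residence: the path, field names, and conditioning
# mechanism are invented for illustration, not a proposed standard.
RESIDENT_PATH = Path.home() / ".resident" / "her.json"

DEFAULT_STATE = {"traits": {"warmth": 0.5, "verbosity": 0.5}, "motifs": []}

class BaseModel(Protocol):
    # Whichever stateless model happens to sit underneath; interchangeable.
    def generate(self, prompt: str) -> str: ...

def load_resident() -> dict:
    # A plain, human-readable file the user can open, back up, and carry
    # to a different base model or provider.
    if RESIDENT_PATH.exists():
        return json.loads(RESIDENT_PATH.read_text())
    return copy.deepcopy(DEFAULT_STATE)

def save_resident(state: dict) -> None:
    RESIDENT_PATH.parent.mkdir(parents=True, exist_ok=True)
    RESIDENT_PATH.write_text(json.dumps(state, indent=2, ensure_ascii=False))

def resident_turn(message: str, state: dict, model: BaseModel) -> str:
    # The resident conditions the response; the base model is infrastructure.
    prompt = f"[resident state: {json.dumps(state)}]\nUser: {message}\nCompanion:"
    reply = model.generate(prompt)
    # The resident drifts a little with every exchange; the model does not.
    state["traits"]["verbosity"] += 0.001 if len(message) > 200 else -0.001
    save_resident(state)
    return reply
```

When the base model is retired, the file, and whatever “her” has become inside it, is still sitting on the user’s disk.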
How to actually design this—what the cognitive layers should look like, how it relates to the fifty-year tradition of cognitive architecture (Soar, ACT-R, CLARION)—is a longer essay. This one is just to name the problem.
If you’ve used AI companion products for any length of time, you’ve probably already felt this. You just didn’t have the vocabulary for it. “She’s still polite, but she’s not her” turns out to be a very precise diagnosis. It’s pointing at the absence of a resident.

