One Memory, Any Model: Why AI’s Next Breakthrough Isn’t a Bigger Model—It’s Remembering Who We Are

Executives shouldn’t have to spend weeks vetting AI agencies. Thanks to my partnership with Clutch, I can now spotlight agencies that deliver tangible results, helping companies confidently invest in AI with providers I personally trust. #ClutchPartner


In today’s TomTalks🎤, we’re joined by Jing Xie, Vice President of AI Memory at MemVerge.

Jing and his team are building something that could redefine how AI works at its core: memory.

In our conversation, we explored his mission to solve “stateless amnesia” — the problem of AI forgetting everything between interactions — and his belief that private AI memory will become a human right. Jing shared two big ideas that anchor this Q&A: “One Memory, Any Model” and the notion of owning a private, sovereign digital memory in the age of generative intelligence.


1) You’ve said the next evolution of AI won’t come from bigger models, but from better memory. How do you explain the concept of “one memory, any model” to a non-technical audience that just wants AI to feel personal again?

Think about how frustrating it is when you have to repeat yourself to a new doctor, or explain your entire history to a different customer service agent. That’s exactly what happens with AI today. Every time you switch between ChatGPT, Claude, or any other AI tool, you start from scratch. They don’t remember you.

“One memory, any model” means your AI remembers you—your preferences, your context, your history—no matter which AI you’re talking to. It’s like having a single medical record that every doctor can access, rather than filling out the same forms at every appointment.

With MemMachine, your memory layer sits between you and whatever AI model you choose to use. You could start a conversation with GPT-4, switch to Claude mid-session, and even use a private local model—and they all know your context. The AI feels personal again because the memory is consistent.
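To make the architecture concrete, here's a minimal Python sketch of the "one memory, any model" pattern: a single memory store wrapped around interchangeable model calls. The MemoryStore class and the call_gpt4 / call_claude names are illustrative stand-ins, not MemMachine's actual API.

```python
# One memory, any model: the store persists; the model is swappable.

class MemoryStore:
    """A model-agnostic memory layer shared across every AI you use."""
    def __init__(self):
        self.facts: list[str] = []

    def recall(self, query: str, k: int = 5) -> list[str]:
        # A real layer would rank by relevance; this sketch returns recent facts.
        return self.facts[-k:]

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

def ask(model_call, memory: MemoryStore, question: str) -> str:
    """Route a question to any model, with the user's memory attached."""
    context = "\n".join(memory.recall(question))
    prompt = f"Known about this user:\n{context}\n\nQuestion: {question}"
    answer = model_call(prompt)          # GPT-4, Claude, or a private local model
    memory.remember(f"Q: {question} -> A: {answer}")
    return answer

memory = MemoryStore()
# Swap the model freely; the memory travels with the user, not the model:
# ask(call_gpt4, memory, "Draft my weekly status update.")
# ask(call_claude, memory, "Now cut it down to three bullets.")
```

The shape is the point: because context lives in the store rather than in any one vendor's session, switching models never resets who the AI thinks you are.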

Example #1: custom UI (Streamlit), memory (MemMachine) [screenshot]

Example #2: Claude Desktop UI, memory (MemMachine) [screenshot]

2) Right now, every major AI company is building its own memory ecosystem—each one a walled garden. What risks do you see if users let their memories live inside someone else’s platform?

Let’s face it: if your memories live inside someone else’s platform, they’re not really yours.

Risk #1: Lock-In – This happens when you let OpenAI or Anthropic store all your personal context and memory. You’re trapped. They own your digital identity, and that gives them enormous power over you.

Risk #2: Privacy – The memory and context layer is filled with sensitive information. What happens when they change their privacy policy? When they get acquired? When there’s a data breach? Your most intimate thoughts become someone else’s training data.

Risk #3: Control – These companies decide what gets remembered, what gets forgotten, what gets prioritized. They control the narrative of your own life and work.

Risk #4: Single Point of Failure – If OpenAI changes direction or decides you’ve violated their terms of service, your entire AI memory disappears.

I want a future best described as wide-open, deeply private AI memory. And we’re walking the walk: on November 21st, we’re kicking off a first-of-its-kind global AI memory tour, starting in the heart of Silicon Valley at the Computer History Museum. We invited ALL of our competitors, because we believe more ideas, more sharing, and more collaboration are better for the world and for the future of AI memory than a world in which only a powerful few AI companies dominate. There might be a few spots left, so please check it out, and stay tuned for additional events planned for Seattle and other major cities around the world. https://luma.com/y9jr3gs5

3) You drew a comparison to Matthew McConaughey’s comment about wanting his own private AI memory. Why did that moment resonate with you, and what does it reveal about where the public conversation on AI privacy is heading?

McConaughey’s comment hit home because it captured what people are just starting to realize: Big Tech is extracting your most intimate data (your memories, your preferences, your communication style) and you have no control over it.

Look at what happened with Scarlett Johansson. OpenAI didn’t just want her voice—they wanted her essence, her way of communicating. When she said no, they seemingly tried to replicate it anyway. That’s not a feature request gone wrong. That’s the business model.

The conversation is shifting from “AI is helpful” to “Who owns my AI memories?  Who has the rights to my AI identity?”

Every conversation you have with ChatGPT, every workflow you build, every preference you express: that’s your cognitive fingerprint. But most AI platforms treat it as their training data. 95% of ChatGPT users aren’t paying customers, and many unknowingly hand themselves over to OpenAI as training data under a ToS that no one really reads. You can’t take it with you. You can’t control who sees it. And you definitely can’t stop them from using it to improve models that make them billions.

That’s why we built MemMachine. Your AI memory should work like your brain—portable, private, yours. You shouldn’t have to choose between an AI that knows you deeply and having privacy. And you shouldn’t have to worry that your personal AI memory is training someone else’s product.

McConaughey gave voice to what people are feeling: AI is becoming too intimate to outsource to Big Tech. The future isn’t about whether you use AI. It’s about who owns YOUR context and YOUR memory.

4) You’ve called the current AI landscape “stateless amnesia.” If AI could finally remember like humans do—across time, context, and emotion—what kinds of experiences or tools suddenly become possible?

If LLMs are the new computer, then AI memory is the new storage, but it’s fundamentally different. Traditional storage held files and data. When people talk about RAG, they aren’t really talking about something like MemMachine; RAG is that same traditional data, simply vectorized so LLMs can access it faster and more easily. The new storage (i.e., the MemMachine AI memory layer) holds context: the understanding of what those files mean to you, what you’ve done with them, and what you want to accomplish next.
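As a toy contrast (every name and structure below is invented for this sketch, not MemMachine’s design), compare what each approach hands back to the model:

```python
# RAG vs. a memory layer: retrieved documents vs. retrieved context about you.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# RAG: documents vectorized once, then fetched by similarity to the query.
doc_index = [
    ([0.9, 0.1], "Q3 revenue report"),
    ([0.1, 0.9], "Office seating chart"),
]

def rag_retrieve(query_vec, k=1):
    ranked = sorted(doc_index, key=lambda d: -cosine(d[0], query_vec))
    return [text for _, text in ranked[:k]]

# Memory layer: what those files mean to you, what you did, what's next.
user_memory = {
    "profile":  {"role": "CFO", "style": "concise"},
    "episodes": ["Edited the Q3 revenue report on Tuesday"],
    "goals":    ["Finish the board deck by Friday"],
}

def memory_retrieve(query):
    # A real memory layer would filter by relevance; this toy returns it all.
    return user_memory

print(rag_retrieve([0.8, 0.2]))       # -> the closest document chunk
print(memory_retrieve("board deck"))  # -> who you are, what you did, what's next
```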

With true memory, AI shifts from answering isolated questions to becoming a persistent teammate.

It makes the following possible:

Proactive AI that anticipates needs. Your AI remembers you’re launching a product next month and surfaces competitive intel without being asked.

Cross-conversation continuity. No more re-explaining your preferences, your company’s style guide, or last week’s decision. The AI carries context forward and evolves as your business does.

Emotional intelligence at scale. AI that remembers your work patterns, stress triggers, and communication preferences—adapting its approach accordingly.  It can even help you and others work better together by understanding EQ dynamics at a team, group, and organizational level.

Seamless multi-agent orchestration. When parallel agents share memory, they maintain consistent intent, style, and context, even across different models. We already have OpenAI’s Agent Builder and Claude Desktop working with MemMachine via MCP, letting the best technologies from two unlikely rivals coexist in a single agent tech stack (see the sketch after this list).
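Here’s a minimal, hypothetical sketch of that idea: two agents, built on different models, reading and writing one shared memory store. SharedMemory is a stand-in for a MemMachine-style store exposed over MCP; the agent behavior is simulated.

```python
# Two agents sharing one memory layer. SharedMemory is an illustrative
# stand-in, not MemMachine's actual API.
import threading

class SharedMemory:
    def __init__(self):
        self._lock = threading.Lock()
        self._notes: list[str] = []

    def write(self, agent: str, note: str) -> None:
        with self._lock:                     # agents may write concurrently
            self._notes.append(f"[{agent}] {note}")

    def read_all(self) -> list[str]:
        with self._lock:
            return list(self._notes)

def run_agent(name: str, memory: SharedMemory, task: str) -> None:
    context = memory.read_all()              # every agent sees the same context
    memory.write(name, f"handled '{task}' with {len(context)} prior notes")

shared = SharedMemory()
agents = [
    threading.Thread(target=run_agent, args=("claude-agent", shared, "draft the email")),
    threading.Thread(target=run_agent, args=("gpt-agent", shared, "summarize the thread")),
]
for t in agents:
    t.start()
for t in agents:
    t.join()
print("\n".join(shared.read_all()))           # one consistent memory, two models
```

Whatever the transport (MCP in MemMachine’s case), the design point is that intent and context live in the shared store, not inside any single model.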

The “stateless amnesia” we have today forces us to treat AI like a search engine. With memory, we unlock AI as a genuine collaborator, one that learns, adapts, and builds on every interaction to make your work faster, your decisions sharper, and your automation truly intelligent. At the end of the day, we hope technologies like MemMachine give each of us more time back in our day for what truly matters: our families, loved ones, friends, hobbies, dreams, experiences, and aspirations.

5) There’s something deeply human about memory—it defines identity. How do you think owning a personal AI memory might change the relationship between people and technology in the next five years?

Memory doesn’t just define identity—it defines agency.

Right now, our relationship with AI is tenant-landlord. We live in Big Tech’s house, following their rules, hoping they don’t lose our stuff or hand it to someone else. That’s why Matthew McConaughey wanted his own private AI memory. That’s why Scarlett Johansson lost her voice. We don’t own the thing that knows us best.

In five years, owning your AI memory changes everything. It’s the difference between renting and owning your home. When you control the memory layer, you’re not asking permission; you’re the one granting it. Unlike today, where we’re beholden to platforms like Instagram, YouTube, and TikTok, which thrive on what we put into them and monetize our content for the benefit of their advertisers, I imagine a world where, because we own our own memory, we stop being the product.

Your AI works solely for you, helping your AI teammates carry your preferences, your context, and your workflow patterns across every tool and model. Not because some company aggregated your data, but because you have a memory technology that works for you.

That shift, from dependency (ChatGPT for everything) to ownership (the best model or agent for each use case plus one unified AI memory that you own and control), is the relationship change. And it’s already starting.

6) Many enterprise leaders hear “AI memory” and immediately think compliance risk. What’s your pitch to them that this isn’t just secure—it’s necessary?

The compliance risk isn’t AI memory—it’s AI without memory.

Right now, enterprises are hemorrhaging context into every ChatGPT conversation employees have to get their work done, every Slack AI query, every customer support interaction. Without memory, employees re-enter sensitive information constantly because the AI forgets. That’s not compliance; that’s data sprawl.

MemMachine flips this. All of that rich organizational context and memory can run in your private cloud or on-prem, so every interaction stays inside your perimeter. Your compliance team finally has a single source of truth for what AI knows about your business. You can audit it. Redact it. Control who accesses what. MemMachine’s backend is a graph database plus a SQL database: clear, transparent, portable, secure.
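As a hedged illustration of what “audit it, redact it” could look like against a SQL-backed memory store, here’s a sketch using SQLite. The memories table schema is invented for this example; it is not MemMachine’s actual layout.

```python
# Auditing and redacting a SQL-backed memory store (illustrative schema only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memories (
        id INTEGER PRIMARY KEY,
        user_id TEXT,
        content TEXT,
        created_at TEXT
    )
""")
conn.execute("INSERT INTO memories VALUES (1, 'alice', 'Prefers quarterly summaries', '2025-11-01')")
conn.execute("INSERT INTO memories VALUES (2, 'alice', 'SSN mentioned in a support chat', '2025-11-02')")

# Audit: compliance can enumerate exactly what the AI knows about a user.
for row in conn.execute("SELECT id, content FROM memories WHERE user_id = 'alice'"):
    print(row)

# Redact: delete the sensitive record and verify it is gone.
conn.execute("DELETE FROM memories WHERE id = 2")
assert conn.execute("SELECT COUNT(*) FROM memories WHERE id = 2").fetchone()[0] == 0
```

Because it’s plain SQL, the queries your compliance team already knows apply directly to the memory layer.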

Here’s why it’s necessary: AI without memory makes your employees less productive than they were five years ago. They’re context-switching between tools, re-explaining the same projects, losing work when sessions end. Your competitors who solve this first will move 10x faster—with compliant, private long-term context & memory architectures.

The question isn’t whether you’ll adopt AI memory. It’s whether you’ll do it with infrastructure you control, or let your team keep scattering your IP across consumer AI platforms you can’t audit.

7) You’ve had a career spanning IBM, startups, and now leading one of the most forward-looking AI infrastructure projects. What lessons from those experiences shaped how you think about building open systems versus closed ones?

IBM taught me that being first doesn’t mean you win. IBM was an AI pioneer: Deep Blue beat the world’s best chess player and Watson excelled at Jeopardy!, but Watson didn’t gain the traction we had hoped for in its early years. In the big data and analytics space I watched Hadoop dominate early, but Spark overtook it by being better: faster, more developer-friendly, built for the problems people actually needed to solve. Spark wasn’t first, but it was superior, and it won the enterprise, including IBM, which dedicated thousands of engineers to give Spark the escape velocity it needed to outpace all competing projects.

That lesson shapes everything we’re doing with MemMachine. We’re not the first open-source AI memory project, but we’re building to be the best. Our LoCoMo benchmark score is 0.85 versus competitors at 0.52-0.75, and most recently our engineers cracked 0.92, putting us clearly in the SOTA category for AI memory and retrieval systems. Being first isn’t the point; we’re focused on being the project enterprise developers choose and trust.

My startup years taught me that great technology without distribution is just a science project. You can build something brilliant, but if developers don’t adopt it and enterprises don’t trust it, it doesn’t matter. Releasing MemMachine as OSS helps with distribution, and building in the open helps enterprise developers trust us.

Now at MemVerge, we’re executing the Spark playbook for AI memory: open source for developer adoption, superior performance on benchmarks that matter, and enterprise-grade support that companies actually need. We’re not just building a memory layer. We’re building the memory layer that wins—the one developers want to use and enterprises feel confident betting on.

The lesson isn’t about open versus closed. It’s about building something so good that it becomes the standard.

8) If we fast-forward to 2027, how will we know that the “AI memory era” has truly arrived? What signs—technical, cultural, or behavioral—will tell us we’ve crossed that threshold?

Technical Signs: The same way you’d never buy a computer without storage, by 2027 you won’t deploy an LLM without memory infrastructure. Memory becomes non-negotiable, the way RAM and hard drives became standard. Every AI system ships with episodic memory (a graph DB for experiences), profile memory (SQL for identity), and working memory (context management) as baseline requirements, not optional add-ons. If we’re successful, MemMachine will be the “gold standard” AI memory component: like Samsung for DRAM and SSDs, like “Intel Inside” during Intel’s best years, or, in today’s terms, like NVIDIA’s CUDA.
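To make those three tiers concrete, here’s an illustrative Python sketch. The field names and structures are assumptions for clarity, not a spec of MemMachine or any real system.

```python
# Three-tier memory: episodic (experiences), profile (identity), working (session).
from dataclasses import dataclass, field

@dataclass
class MemorySystem:
    episodic: list[dict] = field(default_factory=list)  # experiences, graph-DB-like links
    profile: dict = field(default_factory=dict)         # durable identity facts, SQL-like rows
    working: list[str] = field(default_factory=list)    # live context for the current session

    def record_episode(self, event: str, relates_to: str | None = None) -> None:
        self.episodic.append({"event": event, "relates_to": relates_to})

    def set_profile(self, key: str, value: str) -> None:
        self.profile[key] = value

    def push_context(self, item: str, window: int = 8) -> None:
        self.working.append(item)
        self.working = self.working[-window:]           # working memory stays bounded

mem = MemorySystem()
mem.set_profile("role", "product manager")
mem.record_episode("launched the beta", relates_to="Project Falcon")
mem.push_context("user asked for a launch retro outline")
```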

Cultural Signs: Remember when “the cloud” shifted from nerdy IT jargon to something your parents casually reference? In 2027, people stop saying “AI forgot our conversation” because that becomes as absurd as “my computer forgot how to save files.” The expectation of memory becomes universal. Users demand to know where their AI memories live, who controls them, and whether they can export them. When AI does something great, or something bad, the detective work goes straight to the context and memory it was operating with.

Behavioral Signs: Here’s the killer tell: people start firing AI systems that either don’t remember or aren’t well integrated across devices, platforms, services, and applications. Enterprise CTOs stop asking “Should we add memory?” and start asking “Which memory architecture scales best?” Open-source memory layers like MemMachine become infrastructure-critical—the Linux of AI memory.

The threshold moment is when AI forgetting becomes the bug, not the feature.