AI infrastructure company EverMind has announced a major milestone in long-term memory research: its newly released EverMemOS achieves 92.3 per cent accuracy on LoCoMo and 82 per cent on LongMemEval-S, two of the field's most authoritative long-term memory benchmarks. The results surpass prior methods and establish a new state of the art for durable, coherent machine memory. Developed with the mission of building the memory layer for future intelligence, EverMemOS now powers Tanka and is designed to equip more AI agents with stable identities that evolve through time, enabling them to develop durable, coherent, continuously growing "souls."
EverMemOS's core design stems from an insight shared by entrepreneur and philanthropist Chen Tianqiao at the Tianqiao and Chrissy Chen Institute's Symposium for AI Accelerated Science (AIAS 2025) in San Francisco this October. Chen contrasted current AI's "spatial structure" paradigm, which is instantaneous, static, and reliant on massive spatial parameters, with the human brain's "temporal structure" paradigm: continuous, dynamic, and built to manage time-based information. Long-term memory, he argued, is the linchpin connecting time and true intelligence. EverMemOS was born from this idea: a system engineered to give AI temporal continuity—the ability to remember, adapt, and grow over time.
EverMemOS stands out through three industry-first innovations that solve longstanding memory limitations.
Its first innovation is a shift from treating memory as static storage to applying it as an active capability. Conventional methods retrieve information but do not integrate it into the model's ongoing reasoning. EverMemOS, by contrast, uses fusion and decision mechanisms that allow memory to continuously influence a model's thinking and outputs. Each interaction draws on contextual history, enabling consistent, stable, and personalized experiences across time.
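The announcement does not disclose EverMemOS's internal interfaces, so the following is only an illustrative sketch of the read-fuse-write pattern described above: each turn retrieves prior context, fuses it into the model call, and writes the exchange back so it can shape future outputs. The names MemoryStore, fuse_context, and chat_turn are hypothetical and do not come from EverMemOS.

```python
# Hypothetical sketch of an "active memory" loop: retrieve -> fuse -> respond -> write back.
# None of these names come from EverMemOS; they only illustrate the pattern described above.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class MemoryStore:
    """Toy store that keeps past exchanges and returns the most relevant recent ones."""
    entries: List[str] = field(default_factory=list)

    def retrieve(self, query: str, k: int = 3) -> List[str]:
        # Naive relevance: keep entries sharing at least one word with the query.
        words = set(query.lower().split())
        hits = [e for e in self.entries if words & set(e.lower().split())]
        return hits[-k:]

    def write(self, entry: str) -> None:
        self.entries.append(entry)


def fuse_context(memories: List[str], user_message: str) -> str:
    """Fuse retrieved memories into the prompt so they influence the model's output."""
    memory_block = "\n".join(f"- {m}" for m in memories) or "- (no prior context)"
    return f"Relevant history:\n{memory_block}\n\nUser: {user_message}"


def chat_turn(store: MemoryStore, model: Callable[[str], str], user_message: str) -> str:
    memories = store.retrieve(user_message)           # read
    prompt = fuse_context(memories, user_message)     # fuse into the reasoning context
    reply = model(prompt)                             # respond
    store.write(f"User said: {user_message} | Assistant replied: {reply}")  # write back
    return reply


if __name__ == "__main__":
    echo_model = lambda prompt: f"(model output conditioned on)\n{prompt}"
    store = MemoryStore()
    print(chat_turn(store, echo_model, "My name is Ada and I like chess."))
    print(chat_turn(store, echo_model, "What game do I like?"))
```

In this toy loop the second turn is answered with the first turn already folded into its prompt, which is the sense in which memory "continuously influences" outputs rather than sitting in static storage.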
The second innovation is its hierarchical memory extraction and dynamic organization engine. Rather than storing memories as unstructured text fragments, EverMemOS converts them into semantic MemCells and organizes them within evolving memory graphs. This strategy links related ideas, overcomes the limitations of similarity-based retrieval, and provides a structured foundation for complex downstream applications such as reasoning-driven agents, professional assistants, and emotionally aware companion models.
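As a rough illustration of what semantic MemCells organized in an evolving memory graph could look like, the sketch below stores each memory as a small structured cell, links cells that share entities, and answers queries by expanding graph neighbors rather than relying on keyword or similarity matching alone. The data model and linking rule are assumptions made for illustration, not EverMemOS's published design.

```python
# Hypothetical MemCell + memory-graph sketch; the structure and linking rules are illustrative only.
from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass
class MemCell:
    cell_id: str
    summary: str         # distilled semantic content, not a raw text fragment
    entities: Set[str]   # named entities / concepts used for graph linking
    timestamp: float = 0.0


class MemoryGraph:
    """Keeps MemCells as nodes and links any two cells that share at least one entity."""

    def __init__(self) -> None:
        self.cells: Dict[str, MemCell] = {}
        self.edges: Dict[str, Set[str]] = {}   # adjacency list: cell_id -> neighbor ids

    def add(self, cell: MemCell) -> None:
        self.cells[cell.cell_id] = cell
        self.edges.setdefault(cell.cell_id, set())
        for other_id, other in self.cells.items():
            if other_id != cell.cell_id and cell.entities & other.entities:
                self.edges[cell.cell_id].add(other_id)
                self.edges[other_id].add(cell.cell_id)

    def recall(self, query_entities: Set[str], hops: int = 1) -> List[MemCell]:
        """Seed on entity overlap, then expand graph neighbors to pull in related ideas."""
        frontier = {cid for cid, c in self.cells.items() if c.entities & query_entities}
        seen = set(frontier)
        for _ in range(hops):
            frontier = {n for cid in frontier for n in self.edges[cid]} - seen
            seen |= frontier
        return [self.cells[cid] for cid in seen]


if __name__ == "__main__":
    g = MemoryGraph()
    g.add(MemCell("m1", "User works at Acme on search infrastructure.", {"Acme", "search"}))
    g.add(MemCell("m2", "Acme's search team is migrating to vector retrieval.", {"Acme", "retrieval"}))
    g.add(MemCell("m3", "User prefers concise status updates.", {"communication"}))
    for cell in g.recall({"search"}, hops=1):
        print(cell.cell_id, "->", cell.summary)
```

In this example a query about "search" also surfaces the migration note, which mentions no matching keyword but is reachable through the graph, showing how linked cells can go beyond pure similarity-based retrieval.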
A third innovation lies in EverMemOS's extensible modular memory framework, the first of its kind designed to meet diverse real-world needs. Memory requirements differ across scenarios: professional tasks demand precise, structured information; companionship models require empathetic, emotionally aware context; and agentic workflows rely on continuity and task-specific recall. EverMemOS adapts to each case, selecting the optimal memory organization and application strategy. This flexibility helps address the long-standing challenge posed by rigid, single-form memory systems and enables AI to operate effectively across varied environments.
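The modular framework is described only at a high level, so the following is a hedged strategy-pattern sketch of how scenario-specific memory policies could be swapped in: each policy decides how memories are organized and recalled, and a selector picks one per workload. The class names, the two example policies (professional and companion), and the selection rule are all assumptions for illustration.

```python
# Hypothetical strategy-pattern sketch for scenario-specific memory policies.
# Class names, policies, and the selection rule are illustrative assumptions only.
from typing import Dict, List, Protocol


class MemoryPolicy(Protocol):
    def organize(self, raw_notes: List[str]) -> List[str]: ...
    def recall(self, organized: List[str], query: str) -> List[str]: ...


class ProfessionalPolicy:
    """Prefers precise, structured facts; keeps only notes with a key: value shape."""

    def organize(self, raw_notes: List[str]) -> List[str]:
        return [n for n in raw_notes if ":" in n]

    def recall(self, organized: List[str], query: str) -> List[str]:
        q = query.lower()
        return [n for n in organized if any(w in n.lower() for w in q.split())]


class CompanionPolicy:
    """Keeps emotionally salient notes so responses stay empathetic and personal."""

    EMOTION_WORDS = {"happy", "sad", "worried", "excited", "stressed"}

    def organize(self, raw_notes: List[str]) -> List[str]:
        return [n for n in raw_notes if self.EMOTION_WORDS & set(n.lower().split())]

    def recall(self, organized: List[str], query: str) -> List[str]:
        return organized[-3:]   # most recent emotional context, regardless of the query


POLICIES: Dict[str, MemoryPolicy] = {
    "professional": ProfessionalPolicy(),
    "companion": CompanionPolicy(),
}


def remember(scenario: str, raw_notes: List[str], query: str) -> List[str]:
    policy = POLICIES.get(scenario, ProfessionalPolicy())
    return policy.recall(policy.organize(raw_notes), query)


if __name__ == "__main__":
    notes = ["deadline: Friday", "felt stressed about the launch", "budget: 40k"]
    print(remember("professional", notes, "what is the deadline"))
    print(remember("companion", notes, "how am I doing"))
```

The same raw notes yield different memories depending on the selected policy, which is the kind of flexibility a modular memory framework would need to serve professional, companionship, and agentic scenarios from one system.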


