How to Prevent Agents from Overwriting Shared State
The Lost Update Problem
The lost update problem occurs when two agents read the same memory, both modify it, and the second write overwrites the first. Agent A reads a memory about server capacity ("current capacity: 100 connections"). Agent A updates it to "current capacity: 100 connections, peak observed: 95." Meanwhile, Agent B reads the same original memory and updates it to "current capacity: 100 connections, connection timeout: 30s." Agent B's write overwrites Agent A's, and the peak observation is lost.
This problem is identical to the concurrent update problem in database systems, and the solutions are well established. The most practical for agent memory systems is optimistic concurrency control, which does not require locks and works well when conflicts are infrequent (which they are in most multi-agent systems, because agents typically work on different topics).
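The race described above takes only a few lines to reproduce with a naive last-write-wins store; the dict-backed `store` here is a hypothetical stand-in for any shared memory backend:

```python
# Naive shared store: plain read-modify-write, no version tracking
store = {"m1": "current capacity: 100 connections"}

# Both agents read the same original value
read_a = store["m1"]
read_b = store["m1"]

# Agent A writes its update first
store["m1"] = read_a + ", peak observed: 95"

# Agent B, unaware of A's write, overwrites it
store["m1"] = read_b + ", connection timeout: 30s"

# A's peak observation is now lost
print(store["m1"])  # current capacity: 100 connections, connection timeout: 30s
```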
Step-by-Step Implementation
Every memory gets a version number that increments on each update. When an agent reads a memory, it receives the current version. When it writes an update, it includes the version it read. The memory store rejects the update if the current version does not match, indicating another agent modified the memory in between.
```python
import uuid
from datetime import datetime, timezone


class ConcurrentModificationError(Exception):
    """Raised when a write targets a stale version."""


def generate_id():
    return uuid.uuid4().hex


class VersionedMemoryStore:
    def __init__(self, backend):
        self.backend = backend

    def store(self, content, metadata=None):
        """Store a new memory with version 1."""
        memory_id = generate_id()
        self.backend.put(memory_id, {
            "content": content,
            "metadata": metadata or {},
            "version": 1,
            "updated_at": datetime.now(timezone.utc).isoformat()
        })
        return memory_id

    def read(self, memory_id):
        """Read a memory; returns content and version."""
        record = self.backend.get(memory_id)
        return {
            "id": memory_id,
            "content": record["content"],
            "metadata": record["metadata"],
            "version": record["version"]
        }

    def update(self, memory_id, content, expected_version,
               metadata=None):
        """Update only if the version matches (compare-and-swap)."""
        current = self.backend.get(memory_id)
        if current["version"] != expected_version:
            raise ConcurrentModificationError(
                f"Memory {memory_id} was modified by another "
                f"agent. Expected version {expected_version}, "
                f"found {current['version']}"
            )
        self.backend.put(memory_id, {
            "content": content,
            "metadata": metadata or current["metadata"],
            "version": expected_version + 1,
            "updated_at": datetime.now(timezone.utc).isoformat()
        })
        return expected_version + 1
```

The compare-and-swap pattern is: read the memory (getting its version), make your modifications, then attempt to write with the expected version. If the write succeeds, you are done. If it fails due to a version mismatch, another agent modified the memory concurrently. This is the optimistic path: you proceed assuming no conflict and handle it only when one occurs.
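Stripped to its essentials, the compare-and-swap check can be sketched against a plain dict (`memories` and `cas_update` are hypothetical names for this illustration, not part of any store API):

```python
# Each memory is stored as (content, version)
memories = {"m1": ("current capacity: 100 connections", 1)}

def cas_update(memory_id, new_content, expected_version):
    """Write only if the stored version still matches the one we read."""
    _, version = memories[memory_id]
    if version != expected_version:
        return False  # stale read: another writer got there first
    memories[memory_id] = (new_content, version + 1)
    return True

# Agents A and B both read version 1
content_a, v_a = memories["m1"]
content_b, v_b = memories["m1"]

# A's write succeeds and bumps the version to 2
ok_a = cas_update("m1", content_a + ", peak observed: 95", v_a)

# B's write is rejected because it still holds version 1
ok_b = cas_update("m1", content_b + ", connection timeout: 30s", v_b)

print(ok_a, ok_b)  # True False
```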
```python
def safe_update(store, memory_id, update_fn, max_retries=3):
    """Safely update a memory, retrying on conflict."""
    for attempt in range(max_retries):
        # Read the current state
        memory = store.read(memory_id)
        # Apply the update function
        new_content = update_fn(memory["content"])
        try:
            new_version = store.update(
                memory_id,
                content=new_content,
                expected_version=memory["version"]
            )
            return new_version
        except ConcurrentModificationError:
            if attempt == max_retries - 1:
                raise
            # Another agent modified it; retry with the latest version
            continue


# Usage: append a finding to an existing memory
def append_finding(existing_content):
    return existing_content + (
        "\nPeak load observed at 95 connections "
        "(2026-05-12 14:30 UTC)"
    )

safe_update(store, memory_id, append_finding)
```

When a concurrent modification is detected, the agent needs to merge its changes with the other agent's changes rather than blindly retrying. The merge strategy depends on the type of update. For appends (adding information to a memory), merge by including both additions. For value changes (updating a metric), use the most recent observation. For deletions (removing outdated information), verify the deletion is still appropriate given the other agent's changes.
```python
def merge_update(store, memory_id, my_changes, agent_id):
    """Merge concurrent updates using LLM assistance.

    apply_changes and merge_with_llm are placeholders for your own
    change-application and LLM-merge logic.
    """
    memory = store.read(memory_id)
    try:
        store.update(
            memory_id,
            content=apply_changes(memory["content"], my_changes),
            expected_version=memory["version"]
        )
    except ConcurrentModificationError:
        # Re-read to get the other agent's changes
        updated = store.read(memory_id)
        # Use the LLM to merge both sets of changes
        merged = merge_with_llm(
            original=memory["content"],
            other_version=updated["content"],
            my_changes=my_changes
        )
        # Note: this second write can also conflict if a third
        # writer intervenes; wrap it in a retry loop if that matters
        store.update(
            memory_id,
            content=merged,
            expected_version=updated["version"]
        )
```

The simplest way to avoid write conflicts entirely is to never update memories in place. Instead, agents always create new memories, and the memory system handles deduplication and consolidation. Two agents observing different aspects of the same system each create separate memories, and the consolidation process merges them later. This eliminates concurrent write conflicts at the cost of more storage and periodic cleanup.
```python
class AppendOnlyMemory:
    """Agents only create new memories, never update them."""

    def __init__(self, memory_client):
        self.client = memory_client

    def observe(self, content, agent_id, metadata=None):
        """Always creates a new memory."""
        return self.client.store(
            content=content,
            metadata={
                **(metadata or {}),
                "agent_id": agent_id,
                "type": "observation",
                "stored_at": datetime.now(timezone.utc).isoformat()
            }
        )

    def recall(self, query, top_k=10):
        """Retrieve extra candidates, then deduplicate."""
        results = self.client.recall(query, top_k=top_k * 2)
        # deduplicate is a placeholder for your near-duplicate
        # filter (e.g. by content hash or embedding similarity)
        return deduplicate(results)[:top_k]
```

Track how often concurrent write conflicts occur. A low rate (under 1% of writes) means your current architecture is fine. A high rate (over 10% of writes) suggests that agents are competing for the same memories too often, and you should consider scoped namespaces, append-only patterns, or redesigning agent responsibilities to reduce overlap.
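A minimal sketch of that measurement, assuming you can instrument the update path yourself (`ConflictMetrics` is a hypothetical helper, not a library API):

```python
class ConflictMetrics:
    """Track what fraction of memory writes hit a version conflict."""

    def __init__(self):
        self.writes = 0
        self.conflicts = 0

    def record_write(self, conflicted):
        """Call once per attempted write; conflicted=True on version mismatch."""
        self.writes += 1
        if conflicted:
            self.conflicts += 1

    @property
    def conflict_rate(self):
        return self.conflicts / self.writes if self.writes else 0.0

    def assessment(self):
        if self.conflict_rate < 0.01:
            return "healthy: conflicts are rare"
        if self.conflict_rate > 0.10:
            return "hot: consider namespaces or append-only writes"
        return "acceptable: keep monitoring"


# Simulate 100 writes, 5 of which hit a conflict
metrics = ConflictMetrics()
for conflicted in [False] * 95 + [True] * 5:
    metrics.record_write(conflicted)

print(metrics.conflict_rate)   # 0.05
print(metrics.assessment())    # acceptable: keep monitoring
```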
Adaptive Recall's Approach
Adaptive Recall uses an append-only model for the store operation, so concurrent writes from multiple agents never conflict. Each agent's observation is stored as a separate memory with agent attribution. The consolidation process handles merging related memories during its periodic cycle, using confidence scoring and contradiction detection to produce clean, authoritative memories from multiple agent contributions. The knowledge graph connects entities across agent observations automatically, so the combined knowledge is greater than what any single agent stored.
Build multi-agent systems without worrying about write conflicts. Adaptive Recall's append-only storage and automatic consolidation keep shared memory consistent.
Try It Free