How to Share Memory Between Multiple AI Agents
Why Multi-Agent Memory Is Different
A single agent writing to its own memory store faces no coordination challenges. It knows what it stored, it trusts its own observations, and there are no conflicting writes. Multi-agent systems introduce three problems that single-agent memory does not have.
First, attribution. When Agent B retrieves a memory, it needs to know whether Agent A or Agent C stored it, because different agents have different reliability profiles. A monitoring agent that reads metrics directly is more trustworthy about performance numbers than a planning agent that estimated them.
Second, conflicts. Two agents investigating the same system may reach different conclusions. Without conflict handling, the memory store contains contradictory facts and retrieval returns whichever one happens to match the query more closely, regardless of which is correct.
Third, noise. Agents that store verbose intermediate observations pollute the shared memory space. When Agent B searches for information about the database, it finds Agent A's speculative notes alongside confirmed facts, and it cannot tell them apart without confidence metadata.
Step-by-Step Implementation
The sharing topology determines how memory flows between agents. A shared memory bus is simplest: all agents read and write to one namespace. This works when agents have complementary roles and low write frequency. Scoped namespaces give each agent private write space with shared read access, which prevents one agent's noisy observations from cluttering another agent's retrievals. Blackboard architecture adds a coordinator that controls which agents act based on the current state of shared memory, which is best for complex collaborative problem-solving where the order of agent execution matters.
# Scoped namespace pattern
class ScopedMemory:
    def __init__(self, memory_client, agent_id):
        self.client = memory_client
        self.agent_id = agent_id

    def store(self, content, confidence=0.8, tags=None):
        """Store to this agent's namespace."""
        metadata = {
            "agent_id": self.agent_id,
            "confidence": confidence,
            "tags": tags or [],
            "namespace": f"agent:{self.agent_id}"
        }
        return self.client.store(content, metadata=metadata)

    def recall(self, query, top_k=10, scope="all"):
        """Recall from own namespace, shared, or all."""
        filters = {}
        if scope == "own":
            filters["namespace"] = f"agent:{self.agent_id}"
        elif scope == "shared":
            filters["namespace"] = "shared"
        # scope="all" has no filter, searches everything
        return self.client.recall(query, top_k=top_k,
                                  filters=filters)

    def promote(self, memory_id):
        """Promote a finding to the shared namespace."""
        mem = self.client.get(memory_id)
        return self.client.store(
            mem.content,
            metadata={**mem.metadata, "namespace": "shared",
                      "promoted_from": self.agent_id}
        )

Every memory must carry the identity of the agent that created it. This is not optional metadata; it is foundational to trust, debugging, and conflict resolution. Include the agent ID, the agent's role (researcher, executor, monitor), and the task context in which the observation was made. When retrieval returns conflicting memories, the consuming agent can use attribution to decide which source to trust.
# Agent identity attached to every memory
agent_identity = {
    "agent_id": "monitor-agent-01",
    "role": "infrastructure-monitor",
    "trust_level": "high",  # reads metrics directly
    "capabilities": ["read-metrics", "check-logs",
                     "query-databases"]
}

# Contrast with a planning agent
planner_identity = {
    "agent_id": "planner-agent-01",
    "role": "task-planner",
    "trust_level": "medium",  # reasons about systems, may err
    "capabilities": ["decompose-tasks", "estimate-effort",
                     "prioritize"]
}

Not every agent should write to every namespace. A monitoring agent should write to its own namespace and to a "metrics" shared namespace but should not write to the "decisions" namespace owned by the planning agent. Implement permissions at the memory layer so that misconfigured agents cannot pollute namespaces they should not touch. Read permissions can be more permissive, since reading does not create conflicts, but some namespaces (containing credentials, PII, or other sensitive data) may need read restrictions too.
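One way to enforce this at the memory layer is a small permission table consulted before every write. A minimal sketch, assuming illustrative role names and namespaces (the `WRITE_PERMISSIONS` table and `check_write` helper are hypothetical, not part of any library):

```python
# Which namespaces each agent role may write to. Roles and namespace
# names here are illustrative assumptions.
WRITE_PERMISSIONS = {
    "infrastructure-monitor": {"agent:monitor-agent-01", "metrics"},
    "task-planner": {"agent:planner-agent-01", "decisions"},
}

def check_write(role, namespace):
    """Raise if this role may not write to the namespace.

    Call this inside the memory layer's store() so a misconfigured
    agent fails loudly instead of polluting a namespace it does
    not own.
    """
    allowed = WRITE_PERMISSIONS.get(role, set())
    if namespace not in allowed:
        raise PermissionError(
            f"role {role!r} may not write to namespace {namespace!r}")
    return True
```

With this guard in place, the monitoring agent can write to "metrics" but a write to "decisions" raises before anything is stored.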
When an agent stores a memory that contradicts an existing one, flag the conflict rather than silently overwriting. Contradiction detection can be rule-based (two memories about the same entity with different values) or semantic (an LLM evaluator that checks whether a new memory contradicts existing ones). When a conflict is detected, both memories are preserved with their confidence scores, and a conflict flag is set so that consuming agents see the disagreement and can investigate.
from anthropic import Anthropic

client = Anthropic()

def store_with_conflict_check(memory_client, content, metadata):
    # Find existing memories about the same topic
    existing = memory_client.recall(content, top_k=5,
                                    filters={"min_similarity": 0.85})
    for mem in existing:
        if is_contradictory(mem.content, content):
            # Flag the conflict, do not overwrite
            metadata["conflicts_with"] = mem.id
            metadata["conflict_type"] = "contradictory"
            new_id = memory_client.store(content,
                                         metadata=metadata)
            # Also flag the existing memory
            memory_client.update(mem.id, metadata={
                **mem.metadata,
                "conflicts_with": new_id
            })
            return new_id
    return memory_client.store(content, metadata=metadata)

def is_contradictory(existing_text, new_text):
    """Use an LLM to check for contradiction."""
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=10,
        messages=[{"role": "user",
                   "content": f"Do these statements contradict "
                              f"each other? Answer YES or NO.\n\n"
                              f"Statement 1: {existing_text}\n"
                              f"Statement 2: {new_text}"}]
    )
    return "YES" in response.content[0].text

Individual agent findings should be promoted to shared memory only after they reach a confidence threshold or are corroborated by another source. This prevents speculative observations from entering the shared namespace and polluting retrieval for all agents. The promotion can be automatic (when confidence exceeds 0.85 or when two agents independently store similar findings) or manual (a coordinator agent reviews individual findings and decides what to share).
Production Considerations
In production multi-agent systems, the memory sharing layer becomes a critical coordination mechanism. Monitor the write rate per agent per namespace to detect agents that are storing too aggressively. Set memory budgets per namespace so that a verbose agent cannot fill the shared space with low-quality observations. Implement garbage collection that factors in both the age and the access frequency of memories, since a memory that no agent ever retrieves is consuming space without providing value.
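A garbage-collection score that combines age and access frequency might look like the following sketch. The half-life and weighting are assumptions, not a prescribed formula: recency decays exponentially, and retrievals add value with diminishing returns.

```python
import math
import time

# Assumed tuning constant: a memory's recency weight halves every 30 days.
AGE_HALF_LIFE_DAYS = 30.0

def gc_score(created_at, access_count, now=None):
    """Higher score = more worth keeping.

    When a namespace exceeds its budget, evict the lowest-scoring
    memories first. `created_at` and `now` are Unix timestamps.
    """
    now = now if now is not None else time.time()
    age_days = (now - created_at) / 86400.0
    recency = 0.5 ** (age_days / AGE_HALF_LIFE_DAYS)  # exponential decay
    usage = math.log1p(access_count)  # diminishing returns on retrievals
    return recency * (1.0 + usage)
```

Under this scoring, a fresh but never-retrieved memory outranks a stale unused one, and a stale memory that agents still retrieve regularly outranks both of its unused peers, which matches the goal of reclaiming space only from memories no agent derives value from.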
Adaptive Recall provides multi-agent memory sharing through its tagging and filter system. Each agent uses its API key to write memories tagged with agent identity and role metadata. Recall queries filter by agent ID, role, confidence threshold, or any combination of tags. The knowledge graph connects entities across agent boundaries automatically, so when one agent stores information about a database and another stores information about a service that uses that database, the entity connection is discoverable through graph traversal even though the agents never directly communicated.
Share memory across agents without building coordination infrastructure. Adaptive Recall handles namespacing, conflict detection, and entity connections automatically.
Get Started Free