Does Spreading Activation Work Without a Knowledge Graph?
What Spreading Activation Actually Needs
In ACT-R, spreading activation flows through associative links between chunks in declarative memory. The specific implementation of those links is flexible. What matters is that given a source concept (a query entity or an activated memory), the system can find other memories that are associatively connected to it. The connection can come from any source: shared entities, shared topics, co-occurrence in conversations, or proximity in embedding space.
A full knowledge graph with typed relationships (subject-predicate-object triples stored in a graph database) provides the richest associative structure. But it is also the most complex to build and maintain. For many applications, simpler association mechanisms provide most of the benefit with much less implementation overhead.
Alternative Association Mechanisms
Entity Tag Overlap
The simplest approach is to extract entities from each memory during storage and store them as tags. Two memories that share at least one entity tag are associated. When a query mentions entities, all memories tagged with those entities receive spreading activation. This is what Adaptive Recall uses by default: the entity graph is built automatically from extracted tags, without requiring a separate graph database.
```python
# Simple entity-based spreading activation
# No graph database needed, just entity lists on each memory
def spread_by_tags(query_entities, memories):
    bonuses = {}
    for mem in memories:
        mem_entities = set(mem.get('entities', []))
        shared = mem_entities.intersection(query_entities)
        if shared:
            bonuses[mem['id']] = len(shared) * 1.0
    return bonuses
```

This approach captures depth-1 associations (direct entity overlap) but not depth-2 (indirect connections through intermediate entities). For many applications, depth-1 is sufficient because the most valuable spreading activation comes from direct associations.
Co-occurrence Matrices
Track which entities appear together across memories. Build a matrix where each cell (i, j) counts how many memories mention both entity i and entity j. When a query mentions entity i, use the co-occurrence matrix to find related entities j, and then find memories that mention entity j. This provides depth-2 spreading activation without an explicit graph structure.
The co-occurrence matrix can be stored as a sparse dictionary and updated incrementally as new memories are added. It requires more memory than simple tag overlap but provides richer associations. For systems with thousands of entities, the matrix is still small enough to keep in application memory.
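As a sketch of this idea (the function names are illustrative, not part of any particular API), the matrix can be stored as a nested dictionary of pair counts, updated once per memory, and depth-2 expansion follows co-occurrence links out from the query entities:

```python
from collections import defaultdict
from itertools import combinations

# Sparse co-occurrence "matrix": entity -> {entity -> count}
cooccurrence = defaultdict(lambda: defaultdict(int))

def index_memory(entities):
    # Count each unordered entity pair once per memory
    for a, b in combinations(sorted(set(entities)), 2):
        cooccurrence[a][b] += 1
        cooccurrence[b][a] += 1

def related_entities(query_entities, min_count=1):
    # Depth-2 expansion: entities that co-occur with any query entity,
    # weighted by how often they appeared together
    related = defaultdict(int)
    for e in query_entities:
        for other, count in cooccurrence[e].items():
            if other not in query_entities and count >= min_count:
                related[other] += count
    return dict(related)
```

The `min_count` threshold is one way to keep rare, noisy pairings from triggering spreading activation; raising it trades recall for precision.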
Embedding Space Neighborhoods
If two memories have embeddings that are close in vector space, they are likely about related topics. You can use this proximity as a form of association. When a memory is retrieved, compute its nearest neighbors in embedding space (the N closest memories by vector distance) and give them a small spreading activation bonus. This does not require any entity extraction at all, just the embedding vectors you already have for vector search.
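A minimal sketch of the neighborhood bonus, assuming embeddings are plain vectors and using cosine similarity (the flat `bonus` value and top-N cutoff are illustrative choices):

```python
import numpy as np

def neighborhood_bonus(retrieved_vec, memory_vecs, ids, n=5, bonus=0.3):
    # Cosine similarity between the retrieved memory and all stored memories
    vecs = np.asarray(memory_vecs, dtype=float)
    q = np.asarray(retrieved_vec, dtype=float)
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    # The N nearest neighbors each receive a small flat activation bonus
    top = np.argsort(sims)[::-1][:n]
    return {ids[i]: bonus for i in top}
```

A graded bonus (scaled by similarity) is an equally reasonable variant; a flat bonus just keeps the spreading signal distinct from the vector-search score itself.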
The disadvantage of this approach is that embedding neighborhoods capture semantic similarity, which overlaps with what vector search already provides. Entity-based associations capture structural connections (shared concepts) that are distinct from semantic similarity, which is why they add more value to the scoring pipeline.
Conversation Context
Memories stored during the same conversation or session share an implicit association. They were created in the same context and are likely related. You can use session IDs as a simple association mechanism: when a query retrieves one memory, other memories from the same session receive a small spreading activation boost.
This captures temporal associations that entity extraction might miss. Two observations made in the same debugging session are probably related even if they do not share explicit entities. The limitation is that session-based associations become less meaningful over time as the number of sessions grows and the association becomes diluted.
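Session-based association reduces to a lookup on a shared identifier. A minimal sketch, assuming each memory dict carries a `session_id` field (the field name and `bonus` value are illustrative):

```python
def session_bonus(retrieved_memory, memories, bonus=0.2):
    # Boost other memories created in the same session as a retrieved memory
    session = retrieved_memory.get('session_id')
    if session is None:
        return {}
    return {
        mem['id']: bonus
        for mem in memories
        if mem.get('session_id') == session
        and mem['id'] != retrieved_memory['id']
    }
```

To counter the dilution effect described above, the bonus could additionally be decayed by the session's age, so old sessions contribute less.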
When You Need a Real Graph
A full knowledge graph with typed relationships becomes valuable when:
- You need to answer multi-hop questions ("what technologies does the team that built the authentication service use?")
- The relationship types matter ("developed by" vs "depends on" vs "replaced by")
- You need to traverse long chains of associations for complex reasoning
- Your domain has a well-defined ontology with specific relationship semantics
For most AI memory use cases, where the goal is to surface contextually relevant memories during retrieval, entity tag overlap provides 80% of the benefit with 20% of the complexity. Adding depth-2 associations through a co-occurrence matrix gets you to 90%. A full graph database is the remaining 10%, and whether it is worth the infrastructure cost depends on your specific requirements.
Adaptive Recall's Approach
Adaptive Recall uses entity tag overlap with automatic depth-2 expansion through co-occurrence tracking. When you store a memory, entities are extracted automatically using LLM-based extraction. The entity connections between memories are maintained as an in-memory index, updated incrementally with each new memory. No external graph database is required, and spreading activation runs on every retrieval call with minimal latency overhead.
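Adaptive Recall's internal index is not public, but the general pattern of an incrementally maintained in-memory entity index looks like an inverted index: entity to memory IDs, updated on each store. A sketch under that assumption:

```python
from collections import defaultdict

# Inverted index: entity -> set of memory ids that mention it
entity_index = defaultdict(set)

def add_memory(mem_id, entities):
    # Incremental update: index each extracted entity on store
    for e in entities:
        entity_index[e].add(mem_id)

def associated_memories(query_entities):
    # Union of all memories tagged with any query entity
    hits = set()
    for e in query_entities:
        hits |= entity_index[e]
    return hits
```

Unlike scanning every memory at query time, the inverted index makes retrieval cost proportional to the number of query entities and their postings, which is what keeps spreading activation cheap enough to run on every call.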
If your application needs deeper graph capabilities (typed relationships, multi-hop traversal, graph queries), the graph tool provides direct access to the entity connection structure, which you can export to a graph database for advanced analysis while keeping the core retrieval path fast and simple.
Spreading activation without the graph database complexity. Adaptive Recall builds entity connections automatically.
Try It Free