How to Score Memories by Cognitive Importance
Before You Start
Importance scoring requires metadata that accumulates over time. You need access timestamps (how often and how recently each memory has been retrieved), confidence values (updated through consolidation), corroboration counts (how many other memories support this one), and entity connections (what the memory links to in the knowledge graph). If your memory store does not track these yet, start by adding access timestamp logging and work backward to compute importance once you have a few weeks of usage data.
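As a concrete reference, here is a minimal sketch of the per-memory metadata this guide assumes, using a plain dict-based store. The field names and the 0-10 confidence scale are illustrative assumptions, not a required schema:

import time

# Minimal per-memory record (field names and scales are illustrative).
memory = {
    'id': 'mem-001',
    'content': 'The API rate limit is 100 requests per minute.',
    'access_times': [],         # epoch seconds, appended on every retrieval
    'confidence': 5.0,          # updated through consolidation (0-10 here)
    'corroboration_count': 0,   # independent memories that support this one
    'entities': ['api', 'rate_limit'],  # links into the knowledge graph
}

def record_access(memory):
    # Logging access timestamps is the first instrumentation to add.
    memory['access_times'].append(time.time())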
Step-by-Step Implementation
Step 1: Decide What Importance Means in Your Domain

Importance is not universal. In a customer support system, important memories are the ones that answer the most common questions, which means retrieval frequency is the dominant signal. In a research assistant, important memories are the ones backed by the most evidence, which means corroboration count matters more. In a coding assistant, important memories are the ones that connect to the most concepts, which means entity centrality is key. Decide which factors matter most before building the scoring function.
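One way to make this decision explicit is a small table of weight presets that plugs into the blended score built in Step 5. A minimal sketch; the domain names and numbers are illustrative assumptions, not tuned values:

# Illustrative weight presets per domain (assumed values, not tuned).
# Each preset feeds the importance_score() function defined in Step 5.
DOMAIN_WEIGHTS = {
    'support':  {'activation': 0.60, 'confidence': 0.20, 'centrality': 0.20},
    'research': {'activation': 0.20, 'confidence': 0.60, 'centrality': 0.20},
    'coding':   {'activation': 0.25, 'confidence': 0.20, 'centrality': 0.55},
}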
Step 2: Score Usage with Base-Level Activation

Base-level activation is the strongest proxy for importance because it reflects actual usage. Memories that are retrieved frequently and recently have high activation, which means they are serving real user needs. Use the standard ACT-R base-level learning equation: for each access timestamp, compute the time elapsed, raise it to the negative decay power, sum all contributions, and take the natural logarithm.
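As an illustrative calculation with made-up timestamps: with decay d = 0.5, a memory accessed one hour ago (3,600 s) and one day ago (86,400 s) sums 3600^-0.5 + 86400^-0.5 ≈ 0.0167 + 0.0034 = 0.0201, so the activation is ln(0.0201) ≈ -3.9, which the sigmoid in the code below maps to roughly 0.02. With timestamps in seconds, raw activations are usually negative and activation importance sits toward the low end of the 0-1 range, which is worth keeping in mind when tuning weights and thresholds later.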
import math
import time

def base_level_activation(access_times, decay=0.5):
    """ACT-R base-level learning: B = ln(sum over accesses of age ** -decay)."""
    now = time.time()
    total = 0.0
    for t in access_times:
        # Clamp elapsed time to at least 1 second so a just-logged
        # access does not produce an extreme contribution.
        age = max(now - t, 1.0)
        total += age ** (-decay)
    if total <= 0:
        # No recorded accesses yet: return a floor value instead of ln(0).
        return -10.0
    return math.log(total)

def activation_importance(access_times):
    bla = base_level_activation(access_times)
    # Squash the raw activation into a 0-1 range with a sigmoid.
    return 1.0 / (1.0 + math.exp(-bla))

Step 3: Score Reliability with Confidence and Corroboration

Confidence captures how reliable a memory is, which is a different dimension of importance than frequency. A memory accessed only twice but corroborated by four independent sources is more important than a memory accessed ten times but never verified. Normalize confidence to a 0-1 range and factor in the corroboration count as an additional signal.
def confidence_importance(confidence, corroboration_count,
                          max_confidence=10.0):
    # Base confidence normalized to 0-1.
    conf_norm = confidence / max_confidence
    # Corroboration bonus with diminishing returns past 5 sources.
    corr_bonus = min(corroboration_count / 5.0, 1.0) * 0.2
    return min(conf_norm + corr_bonus, 1.0)

Step 4: Score Connectedness with Entity Centrality

Entity centrality scores how connected a memory is in the knowledge graph. A memory that mentions five entities, each of which connects to dozens of other memories, is a central node in the knowledge structure. A memory that mentions one obscure entity is peripheral. High-centrality memories are important because they serve as hubs that connect different parts of the knowledge base.
def entity_centrality(memory_entities, entity_graph):
    if not memory_entities:
        return 0.0
    total_connections = 0
    for entity in memory_entities:
        neighbors = entity_graph.get(entity, [])
        total_connections += len(neighbors)
    # Average connections per entity, normalized; 20+ neighbors caps at 1.0.
    avg = total_connections / len(memory_entities)
    return min(avg / 20.0, 1.0)

Step 5: Blend the Components into a Single Score

Blend the three components using weights that reflect your domain priorities. The default weighting gives activation 50 percent (usage is the strongest signal), confidence 30 percent (reliability matters), and centrality 20 percent (connectedness is a secondary signal). Adjust these based on what you defined in Step 1.
def importance_score(memory, entity_graph, weights=None):
    if weights is None:
        weights = {'activation': 0.50, 'confidence': 0.30,
                   'centrality': 0.20}
    act = activation_importance(memory['access_times'])
    conf = confidence_importance(memory['confidence'],
                                 memory.get('corroboration_count', 0))
    cent = entity_centrality(memory.get('entities', []), entity_graph)
    return (weights['activation'] * act +
            weights['confidence'] * conf +
            weights['centrality'] * cent)

Step 6: Drive Lifecycle Actions from the Score

Once every memory has an importance score, use it to drive automated lifecycle management. High-importance memories (above 0.7) should be protected from aggressive decay and flagged for consolidation to ensure they stay accurate. Medium-importance memories (0.3 to 0.7) follow the standard decay curve. Low-importance memories (below 0.3) are candidates for archiving or eventual deletion, especially if they have not been accessed in a long time.
def lifecycle_action(memory, entity_graph):
    score = importance_score(memory, entity_graph)
    if score >= 0.7:
        return 'protect'   # shield from decay, prioritize for review
    elif score >= 0.3:
        return 'standard'  # normal decay and lifecycle
    else:
        return 'archive'   # candidate for archiving or removal

Importance vs Relevance
Importance and relevance are different concepts that serve different purposes. Relevance is query-dependent: a memory is relevant if it matches what the user is asking about right now. Importance is query-independent: a memory is important if the system relies on it heavily, regardless of the current query. A memory about your API's rate limits might have low relevance to a query about authentication errors, but it has high importance because it is accessed frequently and corroborated by multiple sources.
Both scores matter for different functions. Relevance drives retrieval ranking (which memories to return for a specific query). Importance drives lifecycle management (which memories to protect, consolidate, or archive). A well-designed memory system uses both, applying relevance at query time and importance at maintenance time.
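To make that division of labor concrete, here is a sketch of a maintenance pass that reuses the functions defined in the steps above. The sample memories and entity graph are made up for illustration, and relevance scoring is assumed to live elsewhere in the query path:

import time

now = time.time()

# Toy knowledge graph: entity -> connected memory ids (illustrative).
entity_graph = {
    'api': ['mem-001', 'mem-002', 'mem-003'],
    'rate_limit': ['mem-001'],
}

memories = [
    {'id': 'mem-001',  # accessed recently and well corroborated
     'access_times': [now - 3600, now - 86400, now - 7 * 86400],
     'confidence': 8.0, 'corroboration_count': 4,
     'entities': ['api', 'rate_limit']},
    {'id': 'mem-002',  # stale, unverified, unconnected
     'access_times': [now - 90 * 86400],
     'confidence': 2.0, 'corroboration_count': 0,
     'entities': []},
]

# Maintenance time: importance alone decides each memory's lifecycle action.
for mem in memories:
    score = importance_score(mem, entity_graph)
    print(mem['id'], round(score, 3), lifecycle_action(mem, entity_graph))

With the default weights, the first memory lands in the standard band while the second falls below 0.3 and becomes an archive candidate, even though either one could still outrank the other on relevance for the right query.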
Adaptive Recall scores every memory by importance automatically, driving consolidation, decay protection, and lifecycle management without manual curation.
Get Started Free