How to Apply Spaced Repetition to AI Memory
The Spacing Effect in Human and AI Memory
The spacing effect is one of the most replicated findings in cognitive science. Reviewing information at increasing intervals (one hour, then one day, then one week, then one month) produces dramatically better retention than massing all the reviews together or repeating them at short, fixed intervals. Ebbinghaus documented the effect in 1885, and it has been replicated consistently ever since, across languages, age groups, types of material, and testing conditions.
ACT-R explains the spacing effect through its activation equations. Each retrieval adds a new access event to the memory's history. When retrievals are spaced out, each new access occurs after the memory has partially decayed, so the retrieval is effortful (the memory was hard to access) and contributes more to long-term activation. Massed retrieval (accessing the same memory many times in quick succession) adds many access events, but they all carry nearly identical timestamps, so they decay in lockstep and their combined contribution amounts to little more than a single, stronger access.
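In equation form, the base-level activation B of a memory accessed at times t1 through tn, evaluated at time t, is:

B = ln((t - t1)^(-d) + (t - t2)^(-d) + ... + (t - tn)^(-d))

where d is the decay rate (0.5 by default in ACT-R). Activation is the log of a sum of power-law decays, so the timing of the accesses, not just their count, determines how long a memory stays above threshold.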
For AI memory systems, this means that how and when a memory gets accessed matters as much as how many times it gets accessed. A memory retrieved once per day across a week builds stronger activation than a memory retrieved seven times in a single session. The practical implication is that natural usage patterns, where different queries surface the same memory on different occasions, automatically produce spaced repetition without any explicit scheduling.
Step-by-Step Implementation
In the base-level activation equation, each access at time ti contributes (t - ti)^(-d) to the sum. Two accesses 10 seconds apart contribute almost identically because (t - t1) and (t - t2) are nearly equal. Two accesses 24 hours apart contribute differently because the older access has decayed significantly while the newer one is fresh. The total activation from spaced accesses is lower immediately but decays more slowly, which is exactly the retention advantage that spaced repetition exploits.
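A quick numeric check makes this concrete. The sketch below assumes the equation above with d = 0.5 and epoch-second timestamps; base_level_activation here is a local helper that takes an explicit evaluation time, not the version from the base-level activation guide. It compares seven massed accesses against seven daily accesses, measured one hour and thirty days after the last access.

import math

DAY = 86400.0

def base_level_activation(access_times, now, decay=0.5):
    """B = ln(sum of (now - ti)^(-decay)) over all accesses."""
    return math.log(sum(max(now - t, 1.0) ** -decay for t in access_times))

massed = [i * 600.0 for i in range(7)]  # 7 accesses, 10 minutes apart
spaced = [i * DAY for i in range(7)]    # 7 accesses, one per day

for label, history in (("massed", massed), ("spaced", spaced)):
    last = max(history)
    soon = base_level_activation(history, last + 3600)       # 1 hour later
    later = base_level_activation(history, last + 30 * DAY)  # 30 days later
    print(f"{label}: +1h = {soon:.2f}, +30d = {later:.2f}, "
          f"drop = {soon - later:.2f}")

The massed history starts with higher activation but loses far more of it over the month, while the spaced history starts lower and decays much more gently: the spacing effect expressed directly in the equation.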
For each memory, store the complete access history (as described in the base-level activation guide). From this history, compute the intervals between successive accesses. These intervals reveal the natural retrieval pattern: is this memory accessed every day, every week, or sporadically? Memories with regular, increasing intervals are naturally building durable activation. Memories with irregular or clustered access patterns may need intervention.
def retrieval_intervals(access_times):
    """Compute gaps between successive accesses in hours."""
    if len(access_times) < 2:
        return []
    intervals = []
    for i in range(1, len(access_times)):
        gap_hours = (access_times[i] - access_times[i - 1]) / 3600
        intervals.append(round(gap_hours, 1))
    return intervals

Use the ACT-R activation equation to predict when a memory will drop below the retrieval threshold. The threshold is a configurable value (typically 0 in ACT-R, but you can set it higher if you want to reinforce memories while they are still comfortably accessible). Compute the activation at future time points and find when it crosses the threshold. Schedule reinforcement just before that point.
import math

def predict_threshold_crossing(access_times, decay=0.5,
                               threshold=-2.0, max_days=90):
    """Predict how many hours until activation drops below threshold."""
    now = max(access_times)  # start from last access
    for hours_ahead in range(1, max_days * 24):
        future_time = now + hours_ahead * 3600
        total = 0.0
        for t_access in access_times:
            age = max(future_time - t_access, 1.0)
            total += age ** (-decay)
        activation = math.log(total) if total > 0 else float('-inf')
        if activation < threshold:
            return hours_ahead
    return max_days * 24  # won't cross within max_days

Active reinforcement (explicitly re-retrieving memories on a schedule) is expensive and artificial. Passive reinforcement happens naturally when retrieving one memory causes related memories to gain activation through spreading activation. You can amplify this effect by recording a partial access event for memories that receive spreading activation during retrieval. This gives them a small activation boost without requiring explicit retrieval, mimicking how human memory strengthens associated concepts during recall.
import time

def passive_reinforce(retrieved_memory, candidate_set, entity_graph,
                      reinforce_weight=0.3):
    """Give partial access credit to memories that received
    spreading activation during this retrieval."""
    retrieved_entities = set(retrieved_memory['entities'])
    now = time.time()
    for mem in candidate_set:
        if mem['id'] == retrieved_memory['id']:
            continue
        shared = set(mem['entities']).intersection(retrieved_entities)
        if shared:
            # backfill full weight (1.0) for earlier explicit accesses so
            # the weights list stays aligned with access_times
            weights = mem.setdefault('access_weights',
                                     [1.0] * len(mem['access_times']))
            # add a weighted access event (partial reinforcement); the
            # activation computation must then scale each term by its weight
            mem['access_times'].append(now)
            weights.append(reinforce_weight)

Not all memories deserve reinforcement. Focus reinforcement resources on memories that are high-value (high confidence, frequently used in the past, connected to many entities) and at risk of fading (activation approaching the retrieval threshold). Memories that are already well above threshold do not need reinforcement yet, and memories that have already decayed far below threshold may not be worth reinforcing. The sweet spot is memories that are within one interval of crossing the threshold and have demonstrated past value through usage.
def prioritize_reinforcement(memories, decay=0.5, threshold=-2.0):
    """Rank memories by reinforcement priority.
    High priority = high value + close to threshold."""
    priorities = []
    for mem in memories:
        # base_level_activation as defined in the base-level activation guide
        bla = base_level_activation(mem['access_times'], decay)
        if bla < threshold - 1.0:
            continue  # already faded too far to be worth reinforcing
        hours_left = predict_threshold_crossing(
            mem['access_times'], decay, threshold
        )
        # value score: confidence * log of access count
        value = mem['confidence'] * math.log(len(mem['access_times']) + 1)
        # urgency: inversely proportional to hours remaining
        urgency = 1.0 / (hours_left + 1)
        priority = value * urgency
        priorities.append((mem['id'], priority, hours_left))
    return sorted(priorities, key=lambda x: x[1], reverse=True)

Track the percentage of memories that were accessible (above threshold) last month and are still accessible this month. A healthy system should maintain high retention for high-confidence memories while allowing low-confidence, unreinforced memories to naturally decay. If overall retention drops unexpectedly, the decay rate may be too aggressive or the reinforcement scheduling may need adjustment.
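As a sketch of that health check, the hypothetical helper below recomputes activation at two points in time from each memory's access history, counting only the accesses that existed at each point. The name retention_rate, the threshold default, and the one-month window are illustrative choices, not fixed parts of the system.

import math

def retention_rate(memories, t_then, t_now, decay=0.5, threshold=-2.0):
    """Fraction of memories above threshold at t_then that are
    still above threshold at t_now (hypothetical health metric)."""
    def active_at(mem, t):
        past = [a for a in mem['access_times'] if a < t]
        if not past:
            return False
        total = sum(max(t - a, 1.0) ** -decay for a in past)
        return math.log(total) >= threshold
    was_active = [m for m in memories if active_at(m, t_then)]
    if not was_active:
        return 1.0  # nothing was retrievable at t_then
    still_active = sum(1 for m in was_active if active_at(m, t_now))
    return still_active / len(was_active)

Computed monthly (t_then set to thirty days before t_now), a falling rate for high-confidence memories is the signal to lower the decay rate or schedule reinforcement earlier.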
Natural vs Scheduled Repetition
In most AI memory applications, spaced repetition happens naturally through usage patterns. Users ask similar questions on different days, surfacing the same memories at natural intervals. The key insight is that you do not need to build an explicit scheduling system like Anki or SuperMemo. Instead, design your retrieval and scoring system so that natural usage patterns produce the spacing effect automatically.
Adaptive Recall achieves this through three mechanisms: base-level activation rewards spaced access patterns, spreading activation passively reinforces related memories during every retrieval, and the consolidation process (reflect tool) periodically reviews and reinforces high-value memories as part of its evidence-gated validation cycle. Together, these create a system where important knowledge is naturally maintained without explicit scheduling.
Adaptive Recall applies spaced repetition principles automatically. Important memories are reinforced through natural usage and consolidation cycles.