Documentation

API reference, integration guides, and tutorials for building with Adaptive Recall.

MCP (Model Context Protocol)

MCP is an open protocol that lets AI tools connect to external services. Adaptive Recall runs as an MCP server that any compatible client can connect to directly.

Your server URL and API key are shown in your dashboard after you sign up. The server URL follows the pattern https://s1.adaptiverecall.com (your assigned server number may vary).

Configuration

Add Adaptive Recall to your MCP client configuration:

{
  "mcpServers": {
    "adaptive-recall": {
      "type": "url",
      "url": "https://YOUR_SERVER_URL/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

Replace YOUR_SERVER_URL with the server URL from your dashboard (e.g. s1.adaptiverecall.com) and YOUR_API_KEY with your API key.

Available Tools

Once connected, your AI assistant has access to eight tools:

  • store - Save a new memory. Automatically generates embeddings and extracts entities.
  • recall - Search memories using multi-strategy retrieval with cognitive scoring.
  • update - Modify an existing memory. Re-embeds automatically if content changes.
  • forget - Remove a memory by ID or by finding the closest match to a query.
  • graph - Explore the knowledge graph, traversing entity relationships by name and depth.
  • status - System health, memory counts, confidence distribution, and knowledge gap detection.
  • snapshot - Get a formatted overview of your stored memories, organized by type.
  • feedback - Send feedback directly to the Adaptive Recall developers. Report issues, request features, or share how the system is working for you.

HTTPS / REST API

Every tool available through MCP is also available as an HTTP endpoint. Use this when your platform does not support MCP or when you need direct API access from your own code.

Base URL

https://YOUR_SERVER_URL/v1/

All requests require a Bearer token in the Authorization header:

Authorization: Bearer YOUR_API_KEY

Endpoints

  Method  Endpoint       Description
  POST    /v1/store      Store a new memory
  POST    /v1/recall     Search memories or retrieve by ID
  POST    /v1/update     Update an existing memory
  POST    /v1/forget     Remove a memory by ID or query match
  POST    /v1/status     System health and statistics
  POST    /v1/graph      Explore entity connections
  POST    /v1/snapshot   Summary of stored memories by type
  POST    /v1/feedback   Send feedback to the developers
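These endpoints can be called with a few lines of standard-library Python. The sketch below builds an authenticated POST request and decodes the JSON response; the payload field names in the comments ("content", "query") are assumptions for illustration, not a confirmed schema, and the server URL is a placeholder from the dashboard.

```python
import json
import urllib.request

BASE_URL = "https://s1.adaptiverecall.com/v1"  # your assigned server URL
API_KEY = "YOUR_API_KEY"                       # from your dashboard

def build_request(endpoint: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for an Adaptive Recall endpoint."""
    return urllib.request.Request(
        f"{BASE_URL}/{endpoint}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def call(endpoint: str, payload: dict) -> dict:
    """Send the request and decode the JSON response body."""
    with urllib.request.urlopen(build_request(endpoint, payload)) as resp:
        return json.load(resp)

# Example usage (field names are assumptions; check the schema in your dashboard):
# call("store", {"content": "Production rate limit is 500 req/min"})
# call("recall", {"query": "rate limit"})
```

Any HTTP client works the same way; the only requirements are the Bearer token and a JSON body.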

Memory Types

When storing memories, you can assign a type that affects lifecycle behavior and retrieval scoring:

  • general_knowledge - facts, observations, reference information
  • user_knowledge - information about people and their preferences
  • callable_scripts - tool and script references
  • work_project - project tracking, tasks, deadlines
  • cross_reference - pointers to external information and resources
  • learned_procedure - multi-step workflows and procedures
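As an illustration, a store payload that assigns one of these types might look like the sketch below. The field names ("content", "type") are assumptions for the example, not a confirmed request schema.

```python
import json

# Hypothetical /v1/store payload assigning a memory type.
# "content" and "type" are assumed field names for illustration.
payload = {
    "content": "deploy.sh rebuilds and restarts the staging stack",
    "type": "callable_scripts",
}
body = json.dumps(payload)
```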

Rate Limits

Rate limits are per account and vary by plan. Check the pricing page for details. When you exceed your limit, the API returns HTTP 429.
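A client can recover from a 429 by backing off and retrying. The sketch below honors a Retry-After header when the server sends one (whether Adaptive Recall sends it is an assumption) and otherwise falls back to exponential backoff.

```python
import time
import urllib.error
import urllib.request

def retry_delay(headers, attempt: int) -> float:
    """Seconds to wait before retrying: honor Retry-After if present,
    otherwise back off exponentially (1s, 2s, 4s, ...)."""
    return float(headers.get("Retry-After") or 2 ** attempt)

def post_with_retry(req: urllib.request.Request, max_retries: int = 3) -> bytes:
    """Send a request, retrying only on HTTP 429 (rate limited)."""
    for attempt in range(max_retries + 1):
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_retries:
                raise  # not a rate limit, or out of retries
            time.sleep(retry_delay(err.headers, attempt))
```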

Claude Code (Anthropic)

Claude Code supports MCP servers natively. Add Adaptive Recall to your project or global configuration and Claude will have persistent memory across sessions.

Project Configuration

Add to your project's .mcp.json file:

{
  "mcpServers": {
    "adaptive-recall": {
      "type": "url",
      "url": "https://YOUR_SERVER_URL/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

Global Configuration

To make Adaptive Recall available in all projects, add the same config to your global Claude Code settings at ~/.claude/settings.json under the mcpServers key.

Quick Start

Once configured, Claude can store and recall memories naturally in conversation:

  • "Remember that our API rate limit is 500 req/min on the production server."
  • "What do you know about our deployment process?"
  • "Forget the old database credentials I stored last week."
  • "Give me a status report on my memory system."

Claude will call the appropriate tool (store, recall, forget, status) automatically based on context.

Codex / GPTs (OpenAI)

Codex CLI

OpenAI Codex supports MCP servers natively. Add Adaptive Recall to your Codex MCP configuration and it will have persistent memory across sessions, just as with Claude Code.

{
  "mcpServers": {
    "adaptive-recall": {
      "type": "url",
      "url": "https://YOUR_SERVER_URL/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

Custom GPTs

Custom GPTs connect to Adaptive Recall through Actions. In the GPT editor, go to Configure, then Actions, and create a new action. Import the schema from your server's OpenAPI endpoint:

https://YOUR_SERVER_URL/openapi.json

Set the authentication type to "API Key", choose "Bearer" as the Auth Type, and paste your API key. The GPT will automatically discover all available tools from the schema.

Gemini CLI (Google)

Google's Gemini CLI supports MCP servers natively. Add Adaptive Recall to your Gemini settings and it will have persistent memory across sessions.

Configuration

Add to your ~/.gemini/settings.json file (or .gemini/settings.json in your project directory):

{
  "mcpServers": {
    "adaptive-recall": {
      "httpUrl": "https://YOUR_SERVER_URL/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

Gemini CLI uses httpUrl for streamable HTTP MCP servers. Replace YOUR_SERVER_URL and YOUR_API_KEY with the values from your dashboard.

Other Platforms

Any platform that can make HTTP requests can use Adaptive Recall. The REST API accepts and returns JSON, authenticated with a Bearer token. See the HTTPS / REST API section above for full endpoint documentation.

If your platform supports MCP, use the MCP configuration for the best integration experience.

Start Building with Adaptive Recall

Sign Up Free

No credit card required. 500 memories free.