Private Beta - Invite Only

Be Among the First to Use Personize

Join select companies getting early access to the unified customer memory layer for AI agents. Apply now and help shape the future of AI coordination.

Beta Users Get:

  • Priority feature requests
  • 1:1 onboarding with our team
  • Founding member pricing, locked in
  • Early access to new features
Limited spots per cohort
Invites going out now

Frequently Asked Questions

What is Personize?

Personize is governed memory infrastructure for AI agents. Your agents store what they learn about every contact, company, or account; recall the right context at runtime; and follow your organization's rules automatically. It solves the problem that today's AI agents have no shared memory, no policy enforcement, and no audit trail — so they repeat questions, hallucinate context, and can't be held accountable.

How is Personize different from a CRM or a vector database?

Some describe it as a CRM for AI agents, not humans. A traditional CRM stores contact records for salespeople to read manually. Personize stores memory for AI agents to read, write, and act on — automatically. Think of your agents as employees: they retrieve the context they need, learn from every interaction, and operate under your organization's policies at runtime.

Unlike a vector database (semantic search with no schema or governance) or a CRM (structured records with no AI-native interface), Personize combines both and adds policy enforcement, compliance, and an audit trail. When you deploy agents across multiple platforms — sales, support, marketing, product — they all share the same memory store, stay in sync, and always work from the most complete, most current data.

Who is Personize for?

Companies deploying a growing number of AI agents across their stack. The value compounds with scale: the more agents, workflows, and platforms you run, the more critical it becomes that they share memory, follow the same policies, and don't contradict each other. The fit is especially strong when multiple teams are actively building and shipping agents.

Product companies with active users are also a natural home. Personize enables AI to deliver better service, improve the user experience, and lift conversions at every touchpoint in the customer journey — from first touch through expansion and retention.

How do you handle security and compliance?

All data is isolated per organization. API keys are encrypted via AWS KMS. Webhook payloads are HMAC-SHA256 signed. We support right-to-erasure with soft-delete, 30-day TTL hard-delete, and an immutable audit trail for every data access and deletion event. Our architecture is designed to support compliance frameworks like SOC 2, GDPR, and HIPAA — per-org isolation, encryption at rest and in transit, scoped access controls, and full audit visibility.
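Webhook consumers can check the HMAC-SHA256 signature before trusting a payload. Below is a minimal verification sketch in TypeScript using Node's built-in crypto module; the hex encoding, header handling, and secret format are illustrative assumptions, not the documented wire contract:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC-SHA256 of the raw body and compare it, in constant
// time, against the signature the sender attached.
function verifyWebhook(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  if (received.length !== expected.length) return false;
  return timingSafeEqual(expected, received);
}

// Example: sign and verify a payload locally (secret is a placeholder).
const secret = "whsec_demo";
const body = JSON.stringify({ event: "memory.created" });
const sig = createHmac("sha256", secret).update(body).digest("hex");
console.log(verifyWebhook(body, sig, secret));       // accepts the genuine payload
console.log(verifyWebhook(body + "x", sig, secret)); // rejects a tampered one
```

Always verify against the raw request body, before any JSON parsing, since re-serialization can change byte order and break the signature.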

For organizations with stricter data control requirements, we offer enterprise deployments with dedicated data infrastructure, customer-managed encryption keys, regional data residency, and zero data retention on our systems. Talk to us about enterprise →

How does pricing work?

Credit-based. Memory writes cost 1–12 credits depending on extraction quality tier (basic/pro/ultra). Reads — recall, smart guidelines — are 1 credit flat. AI generation is billed per-token by tier. You can also bring your own API key at a flat 5 credits per call and pay your model provider directly.
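As a back-of-envelope model, that credit arithmetic can be sketched as a small estimator. The flat read cost and the 5-credit BYOK fee come from the answer above; the per-tier write costs are illustrative picks inside the stated 1–12 range, not published prices:

```typescript
type Tier = "basic" | "pro" | "ultra";

// Illustrative per-write costs inside the documented 1-12 credit range.
const WRITE_CREDITS: Record<Tier, number> = { basic: 1, pro: 4, ultra: 12 };
const READ_CREDITS = 1;  // recall and smart guidelines: 1 credit flat
const BYOK_CREDITS = 5;  // bring-your-own-key orchestration fee per call

function estimateCredits(writes: number, tier: Tier, reads: number, byokCalls: number): number {
  return writes * WRITE_CREDITS[tier] + reads * READ_CREDITS + byokCalls * BYOK_CREDITS;
}

// e.g. 100 pro-tier writes, 500 reads, 20 BYOK generations
console.log(estimateCredits(100, "pro", 500, 20)); // 400 + 500 + 100 = 1000 credits
```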

How does memory storage work under the hood?

Every memorize call writes structured properties to a fast-lookup store and vector embeddings to a semantic search engine. Reads route to whichever is optimal — exact lookups for browsing and pagination, vector similarity for semantic recall. This dual-write design keeps query latency low and costs predictable at scale.
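A toy dispatcher illustrates that dual-path read routing; the routing criterion here (exact key vs. free-text query) is an assumption about how such a router could decide, not the actual implementation:

```typescript
type ReadRequest =
  | { kind: "browse"; email: string; page: number }  // exact-key path
  | { kind: "semantic"; query: string };             // free-text path

// Route each read to the optimal backend: the structured store for
// browsing and pagination, the vector index for semantic recall.
function routeRead(req: ReadRequest): "fast-lookup-store" | "vector-index" {
  return req.kind === "browse" ? "fast-lookup-store" : "vector-index";
}

console.log(routeRead({ kind: "browse", email: "jane@acme.com", page: 1 })); // fast-lookup-store
console.log(routeRead({ kind: "semantic", query: "pain points" }));          // vector-index
```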

Which models does Personize use, and can I bring my own?

Personize handles model selection for memory operations — memorize, recall, and smart guidelines. We optimize the underlying models so you get the best extraction and retrieval quality without managing model choices. Quality tiers (basic/pro/ultra) let you control the cost-quality trade-off.

For AI generation and agents, you can bring your own API key via OpenRouter and use any model — Claude, GPT-4, Llama, or anything OpenRouter supports. You pay your model provider directly; we charge a flat orchestration fee.

What are governance variables?

Governance variables are your organization's rules — sales playbooks, brand voice, compliance policies, ICP definitions — stored in Personize and automatically routed to agents at runtime based on semantic relevance. Call smartGuidelines({ message: "cold email to enterprise" }) and the system returns only the policies that apply. Two modes: fast (~200ms, embedding-only) and deep (~3s, LLM-routed). No agent sees irrelevant rules; every agent sees the right ones.
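The fast, embedding-only mode can be pictured as cosine similarity between the query and each stored policy, keeping only policies that clear a relevance threshold. The sketch below uses hand-made mock vectors in place of a real embedding model; every name, vector, and threshold is illustrative:

```typescript
interface Policy { name: string; embedding: number[]; }

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Keep only policies above the threshold, most relevant first:
// the "no agent sees irrelevant rules" filter.
function routeGuidelines(query: number[], policies: Policy[], threshold = 0.8): string[] {
  return policies
    .map(p => ({ name: p.name, score: cosine(query, p.embedding) }))
    .filter(p => p.score >= threshold)
    .sort((x, y) => y.score - x.score)
    .map(p => p.name);
}

// Mock embeddings; in production these come from an embedding model.
const policies: Policy[] = [
  { name: "enterprise-email-playbook", embedding: [0.9, 0.1, 0.0] },
  { name: "brand-voice",              embedding: [0.7, 0.6, 0.2] },
  { name: "hipaa-handling",           embedding: [0.0, 0.1, 0.9] },
];
const query = [1, 0.2, 0]; // stands in for an embedded "cold email to enterprise"
console.log(routeGuidelines(query, policies)); // the HIPAA rule is filtered out
```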

How do I get started?

Install @personize/sdk, pass your key, and you're storing and recalling in a few lines:

import { Personize } from '@personize/sdk';

const client = new Personize({ secretKey: 'sk_live_...' });

// Raw content to extract structured memory from
const notes = 'Call recap: Jane flagged budget concerns and a Q3 rollout deadline.';

await client.memory.memorize({
  content: notes, email: 'jane@acme.com', tier: 'pro', tags: ['call']
});
const context = await client.memory.smartRecall({
  query: 'pain points', email: 'jane@acme.com'
});

A basic integration takes minutes. A production setup with schema design, governance, and pipelines takes a few days.

Does Personize integrate with my existing stack?

Yes. TypeScript pipelines via Trigger.dev for durable workflows. No-code via n8n with ready-to-import templates for HubSpot, Salesforce, Google Sheets, Slack, and 400+ apps. Webhook delivery (HMAC-signed) for any HTTP endpoint. Native CRM sync with bidirectional support.

What are skills?

Skills are packaged agent capabilities — reusable modules that give any AI agent instant access to memory, governance, pipelines, notifications, or diagnostics. Each skill progressively discloses what it does, what actions it supports, and reference docs. You install a skill and your agent knows how to use it. Current skills include entity memory, governance, code pipelines, no-code pipelines, smart notifications (Signal), multi-agent collaboration, and diagnostics.

Does Personize work over MCP?

Yes. Every SDK method maps 1:1 to an MCP tool — memorize becomes memory_store_pro, smartRecall becomes memory_recall_pro, and so on. Works with Claude Desktop, Cursor, ChatGPT, or any MCP-compatible client. Your agents get full memory and governance access without writing code.
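Under that 1:1 convention, the SDK-to-MCP mapping amounts to a lookup table. Only the two pairs named above come from the source; the scaffolding around them is a sketch:

```typescript
// SDK method name -> MCP tool name. The first two pairs are documented;
// additional methods would follow the same pattern.
const MCP_TOOLS: Record<string, string> = {
  memorize: "memory_store_pro",
  smartRecall: "memory_recall_pro",
};

function toMcpTool(sdkMethod: string): string {
  const tool = MCP_TOOLS[sdkMethod];
  if (!tool) throw new Error(`no MCP tool registered for ${sdkMethod}`);
  return tool;
}

console.log(toMcpTool("memorize")); // memory_store_pro
```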