The Private / Dedicated server add-on is for Enterprise customers who want guaranteed agent capacity and full isolation from other Relm tenants. It's the right fit when:
- Your team runs Deep Research at high concurrency and you want guaranteed throughput.
- Your security team prefers single-tenant infrastructure.
- You need a custom data-residency posture (e.g. EU-only).
What you get
- Dedicated agent compute. Your Deep Research runs and AI Summary generations execute on a pool reserved for your org. No shared queue.
- Dedicated database. A separate database cluster scoped to your org, optionally in a region you specify.
- Dedicated indexing. Document indexing and search infrastructure isolated from the shared pool.
- Optional region pinning — US, EU, or other regions on request.
What stays shared
- The Relm Pro web app itself (relm.ai). Deployment is per-tenant via configuration, not via a separate stack.
- Public data sources (institutional property graphs, public-records APIs, listing platforms). These are external, so there is nothing to isolate.
- The Excel add-in — same code path for everyone.
If you need true single-tenant deployment of every layer (including the web app and its dependencies), that's a more involved Enterprise+ conversation.
Performance characteristics
- Latency — comparable to shared, sometimes slightly better because there's no queue contention.
- Throughput — predictable; you know your agent pool size and can size it to your team.
- Concurrency — configurable. Default sizing is benchmarked from your team's usage profile.
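To make the sizing intuition above concrete, here is a minimal sketch of how an agent pool could be sized from a usage profile. The function name, the headroom factor, and the example numbers are illustrative assumptions, not actual Relm defaults; real sizing is benchmarked with your account team.

```python
# Hypothetical sizing sketch: estimate the agent pool size needed so that
# Deep Research jobs rarely queue. All names and numbers are illustrative.
import math

def required_pool_size(jobs_per_hour: float,
                       avg_job_minutes: float,
                       headroom: float = 1.25) -> int:
    """Little's law: average concurrency = arrival rate x job duration.
    Add headroom for bursts, then round up to whole agents."""
    avg_concurrent = jobs_per_hour * (avg_job_minutes / 60.0)
    return math.ceil(avg_concurrent * headroom)

# A team launching 20 Deep Research jobs/hour, each running ~15 minutes,
# averages 5 concurrent jobs; with 1.25x headroom that rounds up to 7 agents.
print(required_pool_size(20, 15))  # -> 7
```

The headroom multiplier is the knob that trades cost against queueing risk: a bursty team would pick a higher factor than a team with steady, scheduled runs.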
Pricing
The add-on is priced based on:
- Agent pool size (concurrent Deep Research jobs).
- Database tier and region.
- Indexing throughput.
Pricing is bundled into your Enterprise contract. Talk to your account contact.
How to enable
- Reach out to your account contact or sales@relm.ai.
- We scope the deployment with your team (concurrency, region, retention).
- Provisioning takes 1–3 business days.
- The flip-over is invisible to your users: they keep using relm.ai as before; behind the scenes, their requests route to your dedicated infrastructure.
Existing properties, portfolios, documents, and chat history migrate during the flip-over with no data loss.
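The routing step described above can be sketched as a simple per-tenant lookup behind the single relm.ai entry point. The table, org IDs, and backend URLs below are hypothetical placeholders, not Relm's actual implementation; the point is only that dedicated tenants resolve to their own backend while everyone else falls through to the shared pool.

```python
# Hypothetical sketch of per-tenant routing behind one entry point.
# Org IDs and backend URLs are illustrative placeholders.

# Orgs with the dedicated add-on map to their own backend; all other
# orgs fall through to the shared pool.
DEDICATED_BACKENDS = {
    "org-acme": "https://acme.dedicated.example.internal",
}
SHARED_BACKEND = "https://shared.example.internal"

def backend_for(org_id: str) -> str:
    """Resolve the backend for a request based on the caller's org.
    Routing by configuration is why users keep the same relm.ai URL
    before and after the flip-over."""
    return DEDICATED_BACKENDS.get(org_id, SHARED_BACKEND)

print(backend_for("org-acme"))   # dedicated backend
print(backend_for("org-other"))  # shared backend
```

Because the switch is a configuration change in a lookup like this rather than a new URL, enabling the add-on requires no client-side changes from your users.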