Relm Pro is an AI-powered product, so how we use AI is a question that deserves a clear answer. This page is that answer.
TL;DR
- We use multiple frontier model providers for different tasks.
- Your data is not used to train any provider's models. This is a contractual commitment under their enterprise APIs and is reflected in our DPA.
- We don't fine-tune our own models on your data.
- We don't share your data with any other third party for AI purposes.
Which providers we use
We route different tasks to different models based on what each is best at:
- Reasoning-heavy steps (the AI Summary, the pro-forma generation chain) — frontier reasoning models.
- Structured extraction (rent-roll parsing, P&L line-item recognition) — frontier general-purpose models.
- Embeddings (turning document chunks into vectors for chat retrieval) — production embedding models.
- Vision and OCR — a mix of frontier vision models and traditional OCR engines.
The mix evolves as model quality and pricing change. The important constant is the data-handling posture.
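The routing described above can be sketched as a simple task-to-model map. This is an illustrative sketch only; the task names and model identifiers below are hypothetical placeholders, not Relm's actual configuration.

```python
# Hypothetical task-based model routing. Task names and model
# identifiers are illustrative, not Relm's real config.
TASK_ROUTES = {
    "summary": "frontier-reasoning-model",
    "pro_forma": "frontier-reasoning-model",
    "rent_roll_extraction": "frontier-general-model",
    "embedding": "production-embedding-model",
    "ocr": "vision-ocr-engine",
}

def route(task: str) -> str:
    """Return the model family for a task, failing loudly on unknown tasks."""
    try:
        return TASK_ROUTES[task]
    except KeyError:
        raise ValueError(f"no model route configured for task {task!r}")
```

Keeping the map in one place makes it cheap to re-route a task when model quality or pricing changes, which is the "mix evolves" point above.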
What we send
- Property data and uploaded documents are sent to model providers as needed to answer chat queries, generate summaries, or build pro-formas.
- Sensitive PII (tenant names, social-security numbers, etc.) is detected and redacted before it leaves Relm's infrastructure where feasible.
- Internal Relm logs are not sent to model providers.
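As a minimal sketch of the pre-flight redaction step, assuming regex-based detection of US social-security numbers (real detection would cover far more PII categories):

```python
import re

# Matches SSN-shaped tokens like 123-45-6789. Illustrative only;
# production PII detection is broader than a single regex.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace SSN-shaped tokens before text leaves our infrastructure."""
    return SSN_RE.sub("[REDACTED-SSN]", text)
```

The key property is that redaction runs before any provider call, so the raw value never leaves Relm's infrastructure.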
What we don't do
- No training. No provider trains models on your data, period. This is enforced via the providers' enterprise/API tiers, which we use explicitly for this reason.
- No cross-customer data sharing. Your data is scoped to your organization. The model never sees data from another customer alongside yours.
- No retention beyond inference. Provider APIs don't retain your data beyond the immediate inference call (subject to short-window logging for abuse-prevention, also covered in their DPAs).
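The cross-customer scoping guarantee amounts to filtering retrieval by organization before ranking, never after. A self-contained sketch, with a hypothetical in-memory chunk store standing in for the real vector database:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    org_id: str
    text: str
    score: float  # similarity score, precomputed for this sketch

def retrieve(chunks: list[Chunk], org_id: str, k: int = 5) -> list[Chunk]:
    """Return the top-k chunks, restricted to the caller's organization.

    The org filter is applied before ranking, so another customer's
    documents can never appear in the candidate set.
    """
    scoped = [c for c in chunks if c.org_id == org_id]
    return sorted(scoped, key=lambda c: c.score, reverse=True)[:k]
```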
Our own model fine-tuning
We do not fine-tune any provider's models on customer data. Not on your data, not on any one customer's data, not on aggregated customer data.
We do build and tune our own prompts, retrieval pipelines, and evaluation suites. Those are based on synthetic data, public records, and explicit opt-in customer collaborations.
What's logged inside Relm
We log enough to operate the service: API request metadata, error rates, agent latencies. We do not log full request bodies in production except for short-window debugging traces, which are auto-purged.
When you have specific compliance needs
Enterprise customers with strict compliance regimes (DPA addenda, BAAs, custom data residency) can talk to our team. We can accommodate most institutional requirements.