# AI & model security
Beakr accesses AI models exclusively through provider APIs with zero-data-retention guarantees. Your data is not stored by model providers after inference, and is never used to train their models.
## Zero data retention
| Provider | Access method | Data retention | Training use |
|---|---|---|---|
| Anthropic (Claude) | AWS Bedrock | None — stays within AWS | No |
| OpenAI (GPT) | Direct API, Zero Retention Program | None | No |
| Google (Gemini) | Vertex AI (GCP Enterprise) | None | No |
## Key guarantees
- Customer data sent to AI models is not stored by the model provider after inference completes.
- Customer data is never used to train, fine-tune, or improve third-party AI models.
- All communication with AI providers occurs over HTTPS (TLS 1.2+).
- Beakr does not persist AI prompts or completions unless the user explicitly saves the output to their project.
## Data minimization
- Only the data necessary for the specific AI task is included in model requests.
- OAuth tokens and user credentials are never sent to AI models.
- AI responses are streamed directly to the user — intermediate results are not stored.
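A data-minimization pass of the kind described above can be sketched as a redaction step applied before a prompt leaves the application. This is an illustrative sketch only: the `minimize` function and the regex patterns are hypothetical examples, not Beakr's actual rule set.

```python
import re

# Hypothetical credential-shaped patterns; a real rule set would be broader
# and maintained alongside the token formats the application handles.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bBearer\s+[A-Za-z0-9._\-]+"),  # OAuth bearer tokens
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),          # API-key-like strings
    re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),      # inline passwords
]

def minimize(prompt: str) -> str:
    """Strip credential-shaped substrings before a prompt is sent to a model."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```

In practice a redaction pass like this runs as the last step before the outbound request is built, so nothing upstream can accidentally reintroduce a credential into the payload.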
## Processing PHI?
When working in HIPAA-regulated environments, see HIPAA and AI model usage for the BAA chain and additional guarantees that apply specifically to ePHI.