Legal sector · Internal PoC
AI Gateway for a 50-person law firm
Enabling ChatGPT and Claude for the firm without exposing client names, case numbers, or financial data. Result: 95% of prompts reach the model PII-free, with under 150 ms of added latency.
Transparency note: The cases published here are technical implementation scenarios: internal PoCs, pilots with synthetic or anonymized data, and reference architectures. As real customer projects reach production, we replace them with named cases with permission.
The challenge
A mid-size law firm had blocked ChatGPT over professional secrecy and GDPR concerns. Lawyers wanted AI to cut the time spent on drafting, case-law research, and document synthesis, but nearly every text they would submit contained names, case numbers, addresses, and financial data.
Our solution
We deployed the PrivantAI AI Gateway on-prem inside the firm's infrastructure. The Gateway detects sensitive entities in prompts, masks them with stable placeholders, forwards the anonymized prompt to OpenAI/Anthropic, then rehydrates the response locally before returning it to the lawyer. Immutable audit logs for every query.
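The mask-then-rehydrate flow with stable placeholders can be sketched as below. This is an illustrative toy, not the Gateway's implementation: real detection uses NER models rather than the two hypothetical regex patterns shown here, and the class name `PiiMasker` is invented for this example.

```python
import re

class PiiMasker:
    """Toy sketch of the mask/rehydrate flow. The same original value
    always maps to the same placeholder, so the model sees a consistent
    anonymized conversation."""

    def __init__(self):
        self.mapping = {}   # original value -> stable placeholder
        self.counters = {}  # entity type -> next placeholder index
        # Hypothetical patterns; production detection covers many more
        # entity types (names, addresses, IBANs, ...) via NER.
        self.patterns = {
            "CASE": re.compile(r"\b\d{4}/[A-Z]{2}\d{3}\b"),
            "IBAN": re.compile(r"\b[A-Z]{2}\d{2}(?: ?\d{4}){4,7}\b"),
        }

    def mask(self, text: str) -> str:
        """Replace every detected entity with its stable placeholder."""
        for label, pattern in self.patterns.items():
            for value in pattern.findall(text):
                if value not in self.mapping:
                    idx = self.counters.get(label, 0) + 1
                    self.counters[label] = idx
                    self.mapping[value] = f"[{label}_{idx}]"
                text = text.replace(value, self.mapping[value])
        return text

    def rehydrate(self, text: str) -> str:
        """Restore original values in the model's response, locally."""
        for value, placeholder in self.mapping.items():
            text = text.replace(placeholder, value)
        return text
```

Usage: `masker.mask("Summarize case 2024/AB123")` yields `"Summarize case [CASE_1]"`; the anonymized prompt goes to the model, and `rehydrate` maps placeholders in the response back to the real values before the lawyer sees it.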
Tech stack
Measured results
- 95% accuracy in PII masking over 10,000 test prompts
- Latency overhead <150ms vs direct OpenAI call
- Zero data leak incidents in 3 months of internal pilot
- Average drafting time down 35%, per lawyers' self-reports
- 200+ users enabled with 30-minute training
- Full audit trail ready for bar association inspections
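The source does not say how the immutable audit log is built; one common way to make a log tamper-evident is a hash chain, where each entry commits to the previous entry's hash. The sketch below (class name and fields are assumptions for illustration) stores only the anonymized prompt, never the raw one.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, tamper-evident log: each entry's hash covers the
    previous entry's hash, so any later edit breaks the chain.
    Illustrative sketch, not the Gateway's actual implementation."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self.last_hash = self.GENESIS

    def append(self, user: str, masked_prompt: str) -> dict:
        entry = {
            "ts": time.time(),
            "user": user,
            "prompt": masked_prompt,  # only anonymized text is stored
            "prev": self.last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        self.last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

An inspector can run `verify()` over the exported log: any retroactive change to a prompt, user, or timestamp invalidates the chain from that entry onward.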
Lessons learned
- Masking must be calibrated to the domain: technical legal terms shouldn't be censored
- Stable placeholders (same name → same placeholder) improve response quality
- Short training beats long training: lawyers learn by doing
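The first lesson, domain calibration, can be as simple as an allowlist consulted before masking: terms that look like proper nouns to a detector but are legal vocabulary must pass through untouched. The allowlist contents and function name below are invented for illustration.

```python
# Hypothetical domain allowlist: capitalized legal terms of art that a
# generic name detector would flag, but which must never be masked.
LEGAL_ALLOWLIST = {
    "Habeas Corpus",
    "Force Majeure",
    "Amicus Curiae",
}

def should_mask(entity: str) -> bool:
    """Return False for allowlisted legal vocabulary, True otherwise."""
    return entity not in LEGAL_ALLOWLIST
```

In practice the allowlist is built with the firm's lawyers and reviewed as masking errors surface during the pilot.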