Security & Privacy — How ConfideAI Protects Your Clinical Data
You shouldn't have to choose between efficiency and ethics.
Mental health professionals face a painful dilemma. AI tools can dramatically reduce documentation time — transforming hours of evening note-writing into minutes — but every existing solution asks you to send your clients' most sensitive disclosures through servers that someone else controls.
ConfideAI was built to eliminate that trade-off. Not with stronger promises, but with stronger hardware. This page explains how your clinical data is protected, in plain language, so you can make an informed decision about whether ConfideAI meets your standard of care.
How Traditional AI Documentation Works (The Problem)
When you use a standard AI tool — ChatGPT, Freed, AutoNotes, or any conventional cloud-based documentation assistant — here is what actually happens:
- You type or dictate clinical content containing protected health information (PHI).
- That content travels to the provider's cloud servers.
- The AI model processes your data on those servers, in unencrypted memory.
- The provider's engineers, system administrators, and security tools have technical access to that data while it is being processed.
The provider may have a privacy policy that says they will not look at your data. They may even sign a Business Associate Agreement (BAA). But architecturally, they can access it. You are trusting a policy, not a physical barrier.
How ConfideAI Works (The Solution)
ConfideAI uses a fundamentally different architecture called confidential computing, provided by NEAR AI's infrastructure. When your clinical text reaches NEAR AI's servers for AI inference, it enters a hardware-secured enclave — a Trusted Execution Environment (TEE) — where the AI processes it in isolation. The TEE is engineered to prevent access to data during AI processing — including by NEAR AI's own engineers — through hardware-level isolation enforced by Intel TDX silicon.
Our Role: Secure Gateway
ConfideAI operates a secure gateway (hosted on Cloudflare Workers) that authenticates your identity and routes your requests to NEAR AI's TEE infrastructure. Our gateway:
- Does not store any clinical content
- Does not log message content or AI responses
- Does not retain clinical data after routing
While your data passes through our gateway in transit (protected by TLS encryption), we do not retain, log, or store any clinical content. The critical protection happens inside NEAR AI's TEE, where hardware — not policy — prevents unauthorized access.
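The pass-through behavior described above can be sketched in code. This is an illustrative simplification, not ConfideAI's actual gateway source: the names (`forwardToTee`, `UPSTREAM_URL`) and the structure are assumptions for demonstration. The key property it shows is that the clinical payload is forwarded verbatim and never parsed, logged, or stored.

```typescript
// Hypothetical sketch of a stateless pass-through gateway.
// UPSTREAM_URL and forwardToTee are illustrative names, not real endpoints.

const UPSTREAM_URL = "https://example-tee-endpoint.invalid/v1/chat"; // placeholder

interface GatewayResult {
  upstreamUrl: string;
  headers: Record<string, string>;
  body: string; // forwarded verbatim; never parsed, logged, or stored
}

// Attaches the caller's auth token and forwards the clinical payload
// untouched. There is deliberately no logging and no persistence here.
function forwardToTee(authToken: string, clinicalPayload: string): GatewayResult {
  return {
    upstreamUrl: UPSTREAM_URL,
    headers: { Authorization: `Bearer ${authToken}` },
    body: clinicalPayload, // pass-through only
  };
}
```

The design point is that a gateway like this holds no state: once the request is routed, nothing about its content remains on the gateway's side.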
Hardware Protection: The Layers That Guard Your Data
Trusted Execution Environments (TEEs)
At the core of ConfideAI's security is a technology called a Trusted Execution Environment. A TEE is a hardware-isolated enclave — a sealed section of a computer's processor that runs code and processes data in complete isolation from everything else on the machine.
Data inside a TEE is encrypted not just when it is stored (at rest) and not just when it is traveling over the internet (in transit), but while it is actively being processed. This is the critical difference. Traditional encryption protects data everywhere except the moment it is being used. Confidential computing closes that gap.
Intel TDX (Trust Domain Extensions)
NEAR AI's TEE infrastructure uses Intel Trust Domain Extensions (TDX), as verified through cryptographic attestation. Intel TDX creates encrypted memory regions at the CPU level. These regions are invisible to:
- The operating system running on the server
- The hypervisor (the software that manages virtual machines)
- The cloud provider that owns the physical hardware
- Any other process running on the same machine
NVIDIA H100 GPU with Confidential Computing
NEAR AI reports that AI model inference runs on NVIDIA H100 GPUs with confidential computing support, extending TEE protections to GPU processing. This is verified through NVIDIA's Remote Attestation Service.
Cryptographic Proof: Trust, but Verify
Privacy claims are only as good as your ability to verify them. ConfideAI does not ask you to take our word for it — or NEAR AI's.
The system uses cryptographic attestation — a process where the TEE hardware generates a mathematical proof that:
- A genuine, unmodified TEE is running (not a simulation)
- The correct, audited code is executing inside the enclave
- No unauthorized party has been granted access
- The hardware has not been tampered with
ConfideAI's application includes a built-in verification panel that automatically checks attestation reports from Intel's attestation service and NVIDIA's Remote Attestation Service (NRAS). You can inspect the verification status of every conversation directly in the app.
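To make the attestation idea concrete, here is a heavily simplified sketch of the kind of check an attestation verifier performs. Real verification is done cryptographically against Intel's attestation service and NVIDIA NRAS; the field names, the pinned measurement value, and the `verifyAttestation` function below are all hypothetical simplifications for illustration.

```typescript
// Illustrative-only attestation check. Real verification validates signed
// quotes from Intel's attestation service and NVIDIA NRAS; the fields and
// values here are hypothetical simplifications.

interface AttestationReport {
  teeType: "TDX";          // claims a genuine Intel TDX trust domain
  codeMeasurement: string; // hash of the code loaded into the enclave
  timestampMs: number;     // when the report was generated
}

const EXPECTED_MEASUREMENT = "abc123"; // pinned hash of the audited build (placeholder)
const MAX_REPORT_AGE_MS = 5 * 60 * 1000; // reject stale reports

function verifyAttestation(report: AttestationReport, nowMs: number): boolean {
  const genuineTee = report.teeType === "TDX";
  const correctCode = report.codeMeasurement === EXPECTED_MEASUREMENT;
  const fresh = nowMs - report.timestampMs <= MAX_REPORT_AGE_MS;
  return genuineTee && correctCode && fresh;
}
```

The essential pattern holds even in the real protocol: the verifier compares a hardware-signed measurement of the running code against a known-good value, so a modified enclave cannot pass.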
What We Store vs. What We Don't
What ConfideAI stores (on our infrastructure):
- Conversation IDs (random identifiers)
- Timestamps
- Pin and archive flags
- Your account information (email, name from your OAuth provider)
This metadata is stored in Cloudflare Workers KV, encrypted at rest by Cloudflare, and contains zero clinical content.
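For the technically inclined, the metadata record above might look roughly like this. The field names are hypothetical (the real KV schema is not public); the point is that no field in such a record can carry clinical text.

```typescript
// Hypothetical shape of a conversation-metadata record as stored in KV.
// Field names are illustrative; no field carries clinical content.

interface ConversationMetadata {
  conversationId: string; // random identifier, e.g. a UUID
  createdAt: string;      // ISO-8601 timestamp
  pinned: boolean;
  archived: boolean;
}

const example: ConversationMetadata = {
  conversationId: "4f9d2c1e-0000-4000-8000-000000000000",
  createdAt: "2025-01-15T09:30:00Z",
  pinned: false,
  archived: false,
};
```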
What ConfideAI does NOT store:
- Session notes, SOAP notes, treatment plans, or any clinical text
- Client names, diagnoses, or any protected health information
- Conversation titles (stored only in NEAR AI's TEE)
- Prompts you send to the AI or responses the AI generates
Where clinical content IS stored:
Your conversation history (messages and AI responses) is stored within NEAR AI's infrastructure, where it is encrypted at rest using hardware-managed keys. This storage enables cross-session access to your conversations.
Client-Side Document Processing
When you upload a PDF or Word document to ConfideAI, text extraction happens locally in your browser. The raw file is never transmitted to any server. Only the extracted text — which you can review before sending — enters the secure TEE pipeline.
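The principle behind client-side extraction can be sketched as follows. This is not ConfideAI's actual implementation: `extractText` stands in for a real in-browser extractor (for PDFs, a library such as pdf.js), and its body here is a placeholder. What the sketch shows is that only the extracted text, never the raw file bytes, forms the outgoing payload.

```typescript
// Sketch of the client-side principle: raw file bytes stay local; only
// the extracted (and user-reviewed) text enters the outgoing payload.
// extractText is a stand-in for a real in-browser extractor; its body
// here is a placeholder.

function extractText(fileBytes: Uint8Array): string {
  // Placeholder: a real extractor parses PDF/DOCX structure in the browser.
  return new TextDecoder().decode(fileBytes);
}

function buildPayload(fileBytes: Uint8Array): { text: string } {
  const text = extractText(fileBytes);
  // Only the text leaves the browser; fileBytes are never transmitted.
  return { text };
}
```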
HIPAA Alignment
ConfideAI's architecture is designed to align with HIPAA's technical safeguard requirements:
- Access controls: Hardware-enforced isolation prevents unauthorized access to PHI during AI processing
- Encryption: Data is encrypted in transit (TLS 1.3), at rest (Cloudflare KV encryption, NEAR TEE encryption), and during processing (Intel TDX hardware encryption)
- Audit controls: Cryptographic attestation provides verifiable proof of data handling
- Integrity controls: TEE attestation confirms that code has not been modified or tampered with
- Minimum necessary: Clinical content is not stored on ConfideAI's own infrastructure
HIPAA has no formal government certification program, but ConfideAI plans to pursue independent third-party compliance assessment and to offer Business Associate Agreements (BAAs) as the product matures. We encourage you to consult your own compliance advisor, and we will provide any technical documentation they need to evaluate our architecture.
How This Compares to Other AI Documentation Tools
| Feature | ConfideAI | Typical AI Tools |
|---|---|---|
| Encryption during AI processing | Yes — hardware-enforced TEE | No — data is unencrypted in memory |
| Infrastructure provider access during processing | Prevented — hardware barrier | Possible — policy barrier only |
| Cryptographic proof of privacy | Yes — Intel & NVIDIA attestation | No |
| Clinical data on provider servers | No — zero clinical content stored | Varies — often stored |
| Data used for AI training | No — TEE isolation prevents training access | Varies — often policy-only promise |
| PDF/Word processing | Client-side — files never leave browser | Typically server-side |
Our Security Architecture in Detail
Your Browser
│
├─ PDF/Word files: extracted locally (never uploaded)
│
↓ TLS 1.3 encrypted
ConfideAI Gateway (Cloudflare Worker)
│ Authenticates your identity
│ Routes request to NEAR AI
│ Does NOT store, log, or retain clinical content
│
↓ TLS 1.3 encrypted
NEAR AI TEE Infrastructure
│ Data enters hardware-secured enclave
│ Intel TDX + NVIDIA H100 confidential computing
│ AI processes your data in encrypted memory
│ Data isolated from infrastructure operators
│
↓ Response returns via same path
Your Browser
│ You review and use the generated documentation

Every therapist who uses AI for documentation is making a judgment call about their clients' privacy. That judgment should be backed by verifiable technical guarantees, not marketing language.
Try ConfideAI → and experience the secure clinical documentation your clients deserve.
Questions about our security architecture? Contact us at support@confideai.ai. We are happy to provide technical documentation for your compliance team or IT department.