Overview
The On-demand Confidential AI API provides a secure, OpenAI-compatible interface for running AI models in a TEE on GPU hardware. You pay per request with no infrastructure to manage. This lets developers build AI applications with hardware-level privacy protection, ensuring user data remains confidential during inference. Browse the available confidential AI models for your application. For dedicated GPU resources with hourly pricing, see Dedicated Models. Both options use the same API with identical features; the only difference is billing and resource allocation.
Prerequisites
Before you begin, make sure your account has at least $5, which is required to create an API key:
1. Go to Dashboard and click Deposit to add funds.
2. Navigate to Dashboard → Confidential AI API and click Enable.
3. Create your first API key and click the key to copy it.
Make Your Secure Request
Replace <API_KEY> with your actual API key in the examples below. We use the DeepSeek V3 0324 model as an example, but you can choose any other available model.
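Here is a minimal chat completion sketch using the OpenAI Python SDK. The base URL shown is an assumption; confirm the exact endpoint in your dashboard.

```python
from openai import OpenAI

# Assumed base URL; confirm the exact endpoint in your dashboard.
client = OpenAI(
    base_url="https://api.redpill.ai/v1",
    api_key="<API_KEY>",  # replace with your actual API key
)

response = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3-0324",
    messages=[
        {"role": "user", "content": "Summarize what a TEE is in one sentence."},
    ],
)

print(response.choices[0].message.content)
```

Because the API is OpenAI-compatible, any OpenAI client library or raw HTTP client works; only the base URL and API key change.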
Available Models
We support 14+ models running in GPU TEEs from multiple providers. Click the GPU TEE checkbox to see all options.
Phala Provider
| Model | Model ID | Context | Pricing (per 1M tokens) |
|---|---|---|---|
| DeepSeek V3 0324 | deepseek/deepseek-chat-v3-0324 | 163K | $1.14 |
| Qwen2.5 VL 72B Instruct | qwen/qwen2.5-vl-72b-instruct | 65K | $0.59 |
| Google Gemma 3 27B | google/gemma-3-27b-it | 53K | $0.40 |
| OpenAI GPT OSS 120B | openai/gpt-oss-120b | 131K | $0.49 |
| OpenAI GPT OSS 20B | openai/gpt-oss-20b | 131K | $0.15 |
| Qwen2.5 7B Instruct | qwen/qwen-2.5-7b-instruct | 32K | $0.10 |
| Sentence Transformers all-MiniLM-L6-v2 | sentence-transformers/all-minilm-l6-v2 | 512 | $0.000005 |
NearAI Provider
| Model | Model ID | Context | Pricing (per 1M tokens) |
|---|---|---|---|
| DeepSeek V3.1 | deepseek/deepseek-chat-v3.1 | 163K | $2.50 |
| Qwen3 30B A3B Instruct | qwen/qwen3-30b-a3b-instruct-2507 | 262K | $0.45 |
| Z.AI GLM 4.6 | z-ai/glm-4.6 | 202K | $2.00 |
Tinfoil Provider
| Model | Model ID | Context | Pricing (per 1M tokens) |
|---|---|---|---|
| DeepSeek R1 0528 | deepseek/deepseek-r1-0528 | 163K | $2.00 |
| Qwen3 Coder 480B A35B | qwen/qwen3-coder-480b-a35b-instruct | 262K | $2.00 |
| Qwen3 VL 30B A3B | qwen/qwen3-vl-30b-a3b-instruct | 262K | $2.00 |
| Meta Llama 3.3 70B Instruct | meta-llama/llama-3.3-70b-instruct | 131K | $2.00 |
All models run in GPU TEEs with hardware attestation. Pricing is per 1M tokens; browse the full list and current input/output rates at redpill.ai/models.
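If you prefer to enumerate models programmatically, here is a short sketch assuming the standard OpenAI-compatible /models route is exposed at the same base URL as above.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.redpill.ai/v1",  # assumed base URL, same as above
    api_key="<API_KEY>",
)

# Print the ID of every model exposed by the OpenAI-compatible /models endpoint.
for model in client.models.list():
    print(model.id)
```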
Verify Your AI is Running Securely
Once you have made your secure request, every response comes with cryptographic proof that it ran in a secure TEE. This proof is generated by the TEE and ensures the response is secure and trustworthy. Click Verify to learn how to verify that your AI is running securely.
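As a rough illustration only, the sketch below fetches an attestation report over HTTP. The /v1/attestation/report path and its model parameter are hypothetical placeholders; the Verify guide documents the actual verification flow.

```python
import requests

# Hypothetical endpoint and parameter names for illustration only;
# see the Verify guide for the real verification flow.
resp = requests.get(
    "https://api.redpill.ai/v1/attestation/report",
    headers={"Authorization": "Bearer <API_KEY>"},
    params={"model": "deepseek/deepseek-chat-v3-0324"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # attestation evidence produced by the TEE
```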
Next Steps
There are several advanced features you can use with the Confidential AI API:
- Tool Calling helps you call tools from your AI models.
- Images and Vision helps you use images and vision models in Confidential AI.
- Structured Output helps you get structured output from your AI models.
- Streaming helps you get streaming responses from your AI models; see the sketch after this list.
- Playground lets you experiment with Confidential AI models in a private environment.
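As an example of the streaming feature mentioned above, here is a minimal sketch with the OpenAI Python SDK, again assuming the base URL from the earlier examples.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.redpill.ai/v1",  # assumed base URL, same as above
    api_key="<API_KEY>",
)

# Ask for a streamed response and print tokens as they arrive.
stream = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3-0324",
    messages=[{"role": "user", "content": "Explain confidential computing in two sentences."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```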

