The Terraform Provider is currently in beta. The resource schema may change between releases.
`phala_app` is the primary resource, managing one application identity with shared Docker Compose configuration, shared environment variables, and one or more CVM replicas.
If you have used providers like DigitalOcean or AWS, the patterns here will feel familiar: catalog data sources for discovery, declarative compute resources, explicit power control, and SSH key management.
Installation
The provider is published on the Terraform Registry under `phala-network/phala`. Add it to your `required_providers` block:
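A minimal configuration along these lines declares the provider; the version constraint below is an assumption for a beta-series release, so adjust it to whatever the Registry currently lists:

```hcl
terraform {
  required_providers {
    phala = {
      source  = "phala-network/phala"
      # Assumed version constraint for the beta series; pin to the
      # latest release shown on the Terraform Registry.
      version = "~> 0.1"
    }
  }
}
```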
Run `terraform init` to download the provider binary. Terraform handles the rest automatically.
Authentication
The provider authenticates with a Phala Cloud API key. You can get one from the dashboard:

- Sign in at cloud.phala.com.
- Go to Settings, then API Keys.
- Create a key and export it in your shell:
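For example (the key value is a placeholder):

```shell
# Placeholder value; substitute the key created in the dashboard.
export PHALA_CLOUD_API_KEY="<your-api-key>"
```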
The provider reads `PHALA_CLOUD_API_KEY` automatically. You can also set it explicitly in the provider block, though environment variables are recommended so the key never ends up in source control or state files.
Provider Configuration
| Attribute | Type | Description |
|---|---|---|
| `api_key` | String, Sensitive | Phala Cloud API key. Falls back to the `PHALA_CLOUD_API_KEY` env var. |
| `api_prefix` | String | API base URL. Falls back to `PHALA_CLOUD_API_PREFIX`. Most users do not need this. |
| `api_version` | String | API version sent in the `X-Phala-Version` header. |
| `timeout_seconds` | Number | HTTP timeout in seconds. |
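A sketch of a provider block using these attributes; only `timeout_seconds` is set here, and the value shown is illustrative rather than a documented default:

```hcl
provider "phala" {
  # api_key is omitted so it falls back to the PHALA_CLOUD_API_KEY
  # environment variable, as recommended above.
  # Illustrative value; raise it if apply operations time out.
  timeout_seconds = 120
}
```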
Quick Start
This deploys a single nginx CVM and outputs the app ID and public endpoint. The values below (`tdx.medium`, `US-WEST-1`, 40 GB disk) are tested defaults that work out of the box.
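A sketch of such a quick-start configuration. The attribute names (`size`, `region`, `disk_size_gb`, `compose`, and the output names) are assumptions about the `phala_app` schema, not confirmed by the resource reference; check the provider docs before copying:

```hcl
resource "phala_app" "web" {
  name = "nginx-demo"

  # Tested defaults from the text above.
  size         = "tdx.medium"  # assumed attribute name
  region       = "US-WEST-1"   # assumed attribute name
  disk_size_gb = 40            # assumed attribute name

  # Inline Docker Compose for a single nginx service.
  compose = <<-EOT
    services:
      web:
        image: nginx:alpine
        ports:
          - "80:80"
  EOT
}

output "app_id" {
  value = phala_app.web.app_id
}

output "endpoint" {
  value = phala_app.web.endpoint
}
```

Run `terraform apply` and confirm the plan to create the CVM.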
When the apply completes, Terraform prints the `app_id` and a public endpoint URL. Your app also appears in the Phala Cloud dashboard in a running state.
To tear down:
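Destroying the configuration removes the CVM and its app identity:

```shell
terraform destroy
```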
What’s in the Provider
The provider includes three resources and seven data sources.

Resources:

- `phala_app` — the primary lifecycle resource for deploying apps with CVM replicas
- `phala_cvm_power` — start/stop power control for existing CVMs
- `phala_ssh_key` — manage account-level SSH keys
Data sources:

- `phala_account` — current user and workspace info
- `phala_workspace` — active workspace metadata
- `phala_sizes` — available instance types
- `phala_regions` — available deployment regions
- `phala_images` — available OS images
- `phala_nodes` — worker nodes for placement pinning
- `phala_attestation` — TEE attestation data for a CVM
Common Workflow
A typical deployment follows this pattern:

- Use data sources (`phala_sizes`, `phala_regions`, `phala_images`) to discover valid slugs.
- Define a `phala_app` with your Docker Compose file and desired replica count.
- Wait for readiness, then consume `app_id`, `cvm_ids`, and `endpoint` as outputs.
- Optionally use `phala_cvm_power` for explicit start/stop control after deployment.
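The workflow above might look like the following sketch. The data-source result shapes (`sizes[0].slug`, `regions[0].slug`) and the `phala_cvm_power` attribute names are assumptions for illustration; verify them against the provider's schema reference:

```hcl
# Discover valid slugs instead of hard-coding them.
data "phala_sizes" "all" {}
data "phala_regions" "all" {}

resource "phala_app" "api" {
  name     = "my-api"
  replicas = 2

  # Assumed result shapes: lists of objects with a `slug` attribute.
  size   = data.phala_sizes.all.sizes[0].slug
  region = data.phala_regions.all.regions[0].slug

  compose = file("${path.module}/docker-compose.yml")
}

# Optional explicit power control over each replica after deployment.
resource "phala_cvm_power" "api" {
  for_each = toset(phala_app.api.cvm_ids)

  cvm_id = each.value   # assumed attribute name
  state  = "running"    # assumed attribute name
}
```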

