Frequently asked questions about Confidential AI.
What is the relationship between dstack and the Private ML SDK, and what role does vllm-proxy play?
Do I need CUDA or NVIDIA drivers on the host for Private ML SDK with GPU support?
Is CUDA directly accessible in Phala GPU TEE?
Can I run my app in a Docker container with access to a GPU TEE under Intel TDX? Is this similar to Google Cloud's Confidential Space?
Do dstack-based CVMs, including those from private-ml-sdk, run on bare metal or on a hypervisor?