What is confidential computing?

Published October 22, 2025 | 6-minute read

Overview

Confidential computing protects data by creating isolated workload environments that operate independently of the host system. This isolation prevents privileged system administrators or software from accessing data within a secure enclave. Confidential computing addresses a crucial gap in data security by securing data while it’s in use.

Securing data in use is critical because even strong security controls can be undermined by insider threats. For instance, while encryption may protect data at rest, decryption keys must be loaded into memory when the system needs to use that data, potentially exposing those keys to anyone with physical or virtual memory access. Confidential computing addresses this vulnerability. It also provides security for multitenant and shared infrastructure environments, or for when your organization needs to share data sets with a partner organization without compromising sensitive information.

Confidential computing relies on trusted execution environments (TEEs), secure enclaves in which code runs protected from the host. TEEs prevent unauthorized access or modification of applications and data, even while they’re running. TEEs help keep your data confidential and tamper free while executing in a cloud environment. The Confidential Computing Consortium (CCC) defines a TEE as "an environment that provides a level of assurance of data confidentiality, data integrity, and code integrity." And all of this is reliant on a hardware root of trust.

How confidential computing works

At the hardware level, memory is physically partitioned, which lets you run an application in its own enclave (the TEE). This enclave functions like a black box and contains an encryption key that’s extended only to the authorized program. The TEE lets the authorized program decrypt the information running within the TEE so the program can perform its set of processes, but that key isn’t extended to other parties—which helps protect sensitive information from being revealed to malicious actors, insider threats, and partners who don’t need access.
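
One practical consequence is that a workload can check whether it is actually running inside a hardware TEE before it handles sensitive data. The following Python sketch is a simple heuristic that assumes a Linux guest whose kernel exposes the guest device nodes /dev/sev-guest (AMD SEV-SNP) or /dev/tdx_guest (Intel TDX); the exact device names depend on the kernel version and platform, and this check is not a substitute for remote attestation.

```python
# Minimal sketch: heuristically detect whether this Linux guest runs inside a TEE.
# Assumes recent kernel guest drivers; device paths differ across kernels and platforms.
from pathlib import Path

# Candidate guest device nodes exposed by TEE guest drivers (assumed paths).
TEE_DEVICES = {
    "/dev/sev-guest": "AMD SEV-SNP",
    "/dev/tdx_guest": "Intel TDX",
}

def detect_tee() -> str | None:
    """Return the TEE technology name if a known guest device node is present."""
    for device, technology in TEE_DEVICES.items():
        if Path(device).exists():
            return technology
    return None

if __name__ == "__main__":
    tee = detect_tee()
    if tee:
        print(f"Running inside a {tee} trusted execution environment (heuristic).")
    else:
        print("No known TEE guest device found; do not assume confidentiality.")
```

In practice, a guest would go a step further and request a signed attestation report through these interfaces rather than merely checking that they exist.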

Key aspects of confidential computing include:

  • Protection against insider threats. Sensitive data and code are shielded from privileged users, such as cloud providers and infrastructure administrators, who might otherwise have access to the underlying hardware or software. Even if the underlying infrastructure is compromised, data within the TEE remains protected.
  • Hardware-level isolation. Confidentiality is enforced at the hardware level, often through encrypted memory and CPU states, to establish strong TEEs.
  • Remote attestation. Remote attestation allows a verifier to cryptographically confirm that a remote system is running trusted software within a secure hardware TEE before sending it sensitive data to process. (A simplified version of this flow is sketched in Python after this list.)
  • Confidential virtual machines. Many confidential computing solutions use confidential virtual machines (CVMs). CVMs execute within a TEE and act as a secure foundation where confidential workloads are deployed. CVMs prevent unauthorized entities from accessing or viewing what happens inside them.
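
To ground the remote attestation step described above, here is a deliberately simplified sketch of the flow in Python: the workload produces evidence about its TEE, a verifier checks that evidence against a fresh nonce and an expected measurement, and only then releases a secret. Real deployments rely on hardware-signed attestation reports and an attestation service; the shared HMAC key below is only a stand-in for the hardware root of trust, and every name in the sketch is illustrative.

```python
# Simplified remote-attestation flow: evidence -> verification -> key release.
# The HMAC "signature" stands in for a hardware-signed attestation report.
import hashlib
import hmac
import json
import secrets

HARDWARE_KEY = b"stand-in-for-hardware-root-of-trust"   # illustrative only
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-workload-image").hexdigest()
WORKLOAD_SECRET = b"database-decryption-key"            # released only after attestation


def generate_evidence(nonce: str) -> dict:
    """Inside the TEE: report the workload measurement and bind it to the verifier's nonce."""
    measurement = hashlib.sha256(b"trusted-workload-image").hexdigest()
    payload = json.dumps({"measurement": measurement, "nonce": nonce}).encode()
    signature = hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": signature}


def verify_and_release(evidence: dict, nonce: str) -> bytes | None:
    """Verifier: check signature, nonce freshness, and measurement before releasing a secret."""
    payload = evidence["payload"].encode()
    expected_sig = hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, evidence["signature"]):
        return None                      # evidence not signed by the trusted hardware
    claims = json.loads(payload)
    if claims["nonce"] != nonce:
        return None                      # replayed or stale evidence
    if claims["measurement"] != EXPECTED_MEASUREMENT:
        return None                      # unexpected code or configuration
    return WORKLOAD_SECRET               # trust established; release the secret


if __name__ == "__main__":
    nonce = secrets.token_hex(16)        # verifier-issued challenge
    evidence = generate_evidence(nonce)
    released = verify_and_release(evidence, nonce)
    print("Secret released" if released else "Attestation failed")
```

The important property is the ordering: no secret leaves the verifier until the evidence checks out.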

Comparison with other privacy-enhancing approaches

Traditional approaches to data privacy often require trusting the cloud provider’s complete infrastructure stack and host operating system. By contrast, confidential computing creates isolated workload environments that operate independently of the host system. While conventional security approaches may incorporate integrity verification, attestation via confidential computing allows sensitive data to be processed only within verified conditions and execution environments.

Confidential computing fills a gap in traditional encryption methods for data when it’s in its most vulnerable state: during active computation. Unlike software-based privacy methods, TEEs offer hardware-based isolation that prevents privileged system administrators or software from accessing data within the secure enclave.

Benefits and challenges

Benefits of confidential computing include enhanced data security, privacy at runtime, and the ability to process sensitive information securely. Confidential computing can enhance data and code integrity, particularly with TEE capabilities such as data-in-use encryption and runtime protection. It also helps organizations achieve regulatory compliance, address strict data privacy regulations, and ensure end-user encryption. By creating a more secure way to collaborate across organizations, confidential computing can help organizations maintain a competitive edge and streamline cloud migration.

Implementing confidential computing can also pose significant challenges, including increased performance and communication overhead. Compatibility issues can arise, particularly in hybrid environments, where specialized hardware or re-architecting applications can introduce additional layers of complexity. Confidential computing can also limit scalability and create additional security considerations in multicloud environments, where multiple parties requiring root access can compromise secure enclaves.

Confidential computing stands to bolster data privacy and security, especially as advancements in AI and cloud computing create the continued need for privacy toolboxes in public cloud services. As industry guidelines and standards evolve (shaped by the CCC and insights from the Everest Group), particularly for secure input/output (secure I/O) and attestation mechanisms, confidential computing will become increasingly integral to cloud infrastructure.

Use cases

Confidential computing addresses several key security challenges and use cases:

Mitigate insider threats

Confidential computing helps safeguard sensitive data and code from privileged users or host systems that might otherwise have access to the underlying hardware or software infrastructure throughout the organization. This includes cloud providers, infrastructure administrators, or even cluster administrators.

Meet regulatory and compliance requirements

For highly regulated industries―such as healthcare, government, and financial services and insurance (FSI)―confidential computing helps organizations meet stringent mandates, including General Data Protection Regulation (GDPR), Payment Card Industry Data Security Standard (PCI-DSS), Health Insurance Portability and Accountability Act (HIPAA), Digital Operational Resilience Act (DORA), Network and Information Security 2 Directive (NIS2), and Cyber Resilience Act (CRA).

Protect IP and AI models

Confidential computing is vital for safeguarding intellectual property (IP) and proprietary business logic, AI models, training data sets, and sensitive user data during processing (both training and inferencing). It protects these valuable assets from unauthorized access, prompt injection, sensitive information disclosure, model poisoning, and theft, even when running in environments controlled by customers or partners. With confidential computing, you can share information with another party without exposing your IP to them.

Sustain multitenant and shared infrastructure

Because it provides data isolation in multitenant and shared cloud environments, confidential computing protects against cross-tenant attacks, ensuring that 1 tenant's data or secrets aren’t accessible to others sharing the same underlying hardware.

Enhance software supply chain security and zero trust

Confidential computing bolsters DevOps practices and software supply chain security. It can also be a key component of zero trust security by enhancing TEE-based security controls and public cloud confidence.

Support edge computing

Confidential computing protects data and computational workloads at the edge, even when data and workloads are integrated with cloud services. This means sensitive information and the processes handling it are more secure, whether they’re entirely on an edge device or are interacting with a central cloud infrastructure. This protection is crucial for edge computing, where devices often reside in less-controlled environments and may be more susceptible to physical attacks or unauthorized access.

Increase cloud and hybrid cloud adoption

By letting organizations migrate sensitive workloads to public, private, and hybrid cloud environments, confidential computing removes barriers to cloud and hybrid cloud adoption. This includes:

  • Secure cloudbursting. Extend on-premise trust to the public cloud for peak workloads or specialized resources (like GPUs) without compromising security or regulatory compliance.
  • Data-clean rooms and partner collaboration. Allow multiple parties to access sensitive data without exposing individual proprietary information, thereby fostering trusted collaboration.
  • Digital sovereignty. Provide the means to move workloads between different cloud providers or on-premise systems without compromising data protection.

How Red Hat can help

Red Hat supports confidential computing with solutions and technologies that enhance data security in the cloud, in on-premise environments, and at the edge.

Red Hat® OpenShift® sandboxed containers, built on the Kata Containers project, can run confidential containers. Confidential containers standardize confidential computing at the pod level and simplify its implementation in Kubernetes environments. This lets users deploy confidential workloads using familiar workflows and tools, without requiring a deep understanding of the underlying TEEs. Available from Red Hat OpenShift sandboxed containers 1.10, confidential containers use an isolated hardware enclave that protects data and code from privileged users, such as cloud and cluster administrators. They protect the integrity of CVM disks, help secure workload initialization, and support sealed secrets that are made available inside a TEE only after verification.
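
To illustrate how familiar those workflows remain, the sketch below creates a pod whose only confidential-computing-specific detail is its runtime class, using the Kubernetes Python client. The runtime class name (kata-remote here), namespace, and image are assumptions that depend on how your cluster and OpenShift sandboxed containers are configured; check your environment for the actual values.

```python
# Minimal sketch: deploy a pod as a confidential workload by selecting a
# TEE-backed runtime class. The runtime class name, namespace, and image are
# deployment-specific assumptions; look up the real values in your cluster.
from kubernetes import client, config

RUNTIME_CLASS = "kata-remote"   # assumed name; depends on the installed operator and platform
NAMESPACE = "default"

def deploy_confidential_pod() -> None:
    config.load_kube_config()   # or config.load_incluster_config() when running in-cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="confidential-demo"),
        spec=client.V1PodSpec(
            runtime_class_name=RUNTIME_CLASS,   # routes the pod into a CVM-backed sandbox
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/team/sensitive-app:latest",  # illustrative image
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=NAMESPACE, body=pod)

if __name__ == "__main__":
    deploy_confidential_pod()
```

Everything else in the pod spec is ordinary Kubernetes, which is what lets teams keep their existing pipelines and tooling.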

The Confidential Clusters project integrates confidential computing technology into Kubernetes clusters. With Confidential Clusters, entire Red Hat OpenShift nodes operate within CVMs. This approach protects the confidentiality of all containers across the cluster by establishing a trust model that treats the cloud provider as untrusted while maintaining trust in cluster administrators.

Using Red Hat Enterprise Linux® for CVMs, organizations can integrate with a broad range of TEE hardware vendors and run workloads in on-premise or cloud environments without vendor lock-in. With CVMs, root storage is automatically self-encrypted at boot, eliminating the need for third-party storage providers. CVMs also provide all the measurements needed to integrate with attestation services.
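
As a quick sanity check of that encrypted-root behavior, a CVM guest can inspect its block device tree and confirm that the root filesystem sits on a dm-crypt device. The Python sketch below shells out to lsblk and assumes a simple LUKS-on-root layout; stacked layouts such as LVM on LUKS would require walking the full device hierarchy instead of this single-row check.

```python
# Quick heuristic: confirm the root filesystem sits on a dm-crypt (encrypted) device.
# Assumes a simple LUKS-on-root layout; stacked layouts (e.g. LVM on LUKS) need a
# walk of the parent devices rather than this single-row check.
import subprocess

def root_is_encrypted() -> bool:
    output = subprocess.run(
        ["lsblk", "-rno", "NAME,TYPE,MOUNTPOINT"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in output.splitlines():
        fields = line.split()
        if len(fields) == 3 and fields[2] == "/":
            return fields[1] == "crypt"   # device-mapper target type for LUKS/dm-crypt
    return False

if __name__ == "__main__":
    print("Root storage encrypted:", root_is_encrypted())
```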

Red Hat build of Trustee is an attestation service for confidential computing: it verifies TEE trustworthiness before workloads execute or sensitive data is transmitted. It delivers secrets to authenticated workloads within the TEE, integrating with external attestation services and acting as a foundational trust anchor for TEE technologies. It also supports hierarchical deployments for scalable hybrid and multicloud confidential computing while maintaining centralized trust management.

Red Hat collaborates with a broad ecosystem of partners―including NVIDIA, AMD, and Intel―to support confidential computing. Red Hat and NVIDIA work together to offer confidential GPUs and trusted artificial intelligence and machine learning (AI/ML) workloads. This integration allows AI/ML workloads, including training and inferencing, to run in a trusted way, significantly boosting performance within confidential containers. The attestation process covers both CPU and GPU hardware and software before decryption keys are released.

Red Hat also collaborates with partners to create TEEs that protect applications and data from unauthorized access or modification, even when they’re actively being executed in memory. Confidentiality is implemented at the hardware level, often involving encrypted memory and CPU state. Examples of hardware platforms supporting this include:

  • AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP).
  • Intel Trust Domain Extensions (TDX).
  • IBM Secure Execution for Linux (SEL).
  • IBM Power Protected Execution Facility (PEF).
  • ARM Confidential Compute Architecture (CCA).

Red Hat and our partners provide many options for your organization to benefit from confidential computing and enhanced data security in modern cloud-native environments.

Blog post

Confidential containers on Microsoft Azure with Red Hat OpenShift sandboxed containers 1.10 and Red Hat Build of Trustee

Red Hat OpenShift sandboxed containers 1.10 has been released, bringing enhanced security and isolation capabilities to your Red Hat OpenShift environments.

Boost hybrid cloud security

A security-focused hybrid cloud can help you overcome modern challenges. Read this e-book to discover new approaches for hybrid cloud security.

Keep reading

What are SPIFFE and SPIRE?

SPIFFE and SPIRE are a pair of open source projects for identity management in dynamic and varied computing environments. Together they solve many security problems.

Red Hat Enterprise Linux security

Red Hat Enterprise Linux is the world’s leading open source Linux platform, enabling you to mitigate risk, enforce security configuration and policy, and streamline compliance strategy.

What is zero trust?

Find out more about zero trust, an approach to designing security architectures based on the premise that every interaction begins in an untrusted state.
