Opaque Systems releases new data security, privacy-preserving features for LLMs

Opaque Systems has announced new features in its confidential computing platform to protect the confidentiality of corporate data during use of large language models (LLMs). Using new privacy-preserving artificial intelligence and zero-trust data clean rooms (DCRs) optimized for Microsoft Azure confidential computing, Opaque said it now also enables organizations to securely analyze their combined confidential data without sharing or exposing the underlying raw data. Meanwhile, broader support for confidential AI use cases provides safeguards so that machine learning and AI models can use encrypted data within trusted execution environments (TEEs), preventing exposure to unauthorized parties, the company said.
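A data clean room of this kind can be pictured as code that runs inside a TEE, joins each party's confidential inputs, and releases only agreed-upon aggregates. The Python sketch below is purely illustrative of that pattern and is not Opaque's API; the datasets, the clean_room_join function, and the in-enclave execution are all assumptions.

```python
# Conceptual sketch of a zero-trust data clean room (DCR) pattern.
# Hypothetical example, not Opaque's API: in a real deployment this
# join would execute inside an attested enclave, and neither party's
# raw rows would ever be visible outside it.

from statistics import mean

def clean_room_join(bank_rows, retailer_rows):
    """Runs *inside* the TEE: joins both parties' confidential data
    and returns only an agreed-upon aggregate, never the raw rows."""
    retailer_by_id = {r["customer_id"]: r for r in retailer_rows}
    matched_spend = [
        r["spend"]
        for r in bank_rows
        if r["customer_id"] in retailer_by_id
    ]
    # Only this aggregate result leaves the enclave.
    return {"matched_customers": len(matched_spend),
            "avg_spend": mean(matched_spend) if matched_spend else 0.0}

# Each party contributes data it would never share in the clear.
bank = [{"customer_id": 1, "spend": 120.0}, {"customer_id": 2, "spend": 80.0}]
retailer = [{"customer_id": 1, "segment": "premium"}]

print(clean_room_join(bank, retailer))  # {'matched_customers': 1, 'avg_spend': 120.0}
```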

Using LLMs can expose businesses to significant security and privacy risks

The potential risks of sharing sensitive business information with generative AI algorithms are well documented, as are the vulnerabilities known to affect LLM applications. While some generative AI models such as ChatGPT are trained on public data, the utility of LLMs can skyrocket if they can be trained on an organization’s confidential data without risk of exposure, according to Opaque. However, if an LLM provider has visibility into the queries its users submit, highly sensitive queries, such as proprietary code, become a significant security and privacy issue because the possibility of a hack increases dramatically, Jay Harel, VP of product at Opaque Systems, tells CSO. Protecting the confidentiality of sensitive data such as personally identifiable information (PII) or internal data such as sales figures is critical to enabling the expanded use of LLMs in an enterprise environment, he adds.

“Organizations want to fine-tune their models on the company’s data, but to do so, they must give the LLM provider access to their data or allow the provider to deploy the proprietary model within the customer organization,” says Harel. “Additionally, when training AI models, the training data is kept regardless of how confidential or sensitive it is. If the security of the host system is compromised, this could lead to the data being leaked or landing in the wrong hands.”

The Opaque platform leverages multiple layers of protection for sensitive data

By running LLMs within Opaque’s confidential computing platform, customers can ensure that their queries and data remain private and protected: never exposed to the model or service provider, never used in unauthorized ways, and accessible only to authorized parties, Opaque claimed. “The Opaque platform uses privacy-preserving technologies to secure LLMs, and leverages multiple layers of protection for sensitive data against potential cyberattacks and data breaches through a powerful combination of secure hardware enclaves and cryptographic fortification,” says Harel.

For example, the solution allows AI models to run inference inside confidential virtual machines (CVMs), he adds. “It enables the creation of secure chatbots that allow organizations to meet regulatory compliance requirements.”
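In practice, a client talking to a model hosted in a CVM would typically verify the enclave's attestation before releasing any data, then send its prompt encrypted under a key tied to that enclave. The sketch below illustrates the general pattern only; the expected measurement value, the attestation report fields, and the submit_prompt flow are hypothetical and not Opaque's implementation.

```python
# Illustrative client-side pattern for confidential inference, assuming a
# hypothetical attestation report format; not Opaque's actual protocol.

import hashlib
import hmac
from cryptography.fernet import Fernet  # pip install cryptography

# Measurement (hash of the enclave's code/config) the client expects;
# a stand-in value here, pinned out of band in a real deployment.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-model-build").hexdigest()

def verify_attestation(report: dict) -> bool:
    """Release data only if the TEE proves it runs the expected code."""
    return hmac.compare_digest(report.get("measurement", ""), EXPECTED_MEASUREMENT)

def submit_prompt(report: dict, session_key: bytes, prompt: str) -> bytes:
    if not verify_attestation(report):
        raise RuntimeError("Attestation failed: refusing to send confidential data")
    # The session key would be negotiated with the enclave itself, so the
    # model/service provider never sees the plaintext prompt.
    return Fernet(session_key).encrypt(prompt.encode())

# Usage: a report whose measurement matches is accepted, and the prompt
# leaves the client only as ciphertext.
report = {"measurement": hashlib.sha256(b"trusted-model-build").hexdigest()}
key = Fernet.generate_key()
ciphertext = submit_prompt(report, key, "Summarize our internal Q3 sales data")
print(ciphertext[:16])
```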

Copyright © 2023 IDG Communications, Inc.
