New Research Highlights Vulnerabilities in MLOps Platforms

Security researchers have identified multiple attack scenarios targeting MLOps platforms, including Azure Machine Learning (Azure ML), BigML and Google Cloud Vertex AI.

According to a new research article by Security Intelligence, Azure ML can be compromised through device code phishing, where attackers steal access tokens and exfiltrate models stored in the platform. This attack vector exploits weaknesses in identity management, allowing unauthorized access to machine learning (ML) assets.
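Access tokens of this kind are typically JWTs whose payload carries an expiry (`exp`) claim. As a minimal illustrative sketch (not part of the cited research), a defender auditing token lifetimes can decode the payload without verifying the signature:

```python
import base64
import json
import time


def jwt_expiry(token):
    """Return the `exp` claim (Unix time) from a JWT's payload segment.

    Decodes only the base64url-encoded middle segment; it does NOT
    verify the signature, so use it for inspection, never for auth.
    """
    payload_b64 = token.split(".")[1]
    # Restore the padding stripped by base64url encoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return int(payload["exp"])


def is_expired(token, now=None):
    """True if the token's `exp` claim lies in the past."""
    return jwt_expiry(token) < (now if now is not None else time.time())
```

Short-lived tokens narrow the window in which a phished token remains usable, which is why token-lifetime audits complement MFA.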

BigML users face threats from exposed API keys found in public repositories, which could grant unauthorized access to private datasets. API keys often lack expiration policies, making them a persistent risk if not rotated frequently.
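Hard-coded credentials of this kind are straightforward to audit for. As an illustrative sketch (the key format below is hypothetical, not BigML's actual scheme), a simple scanner can flag key-like strings before code reaches a public repository:

```python
import re

# Hypothetical pattern: flag assignments of long hex strings to
# variables whose names suggest credentials.
KEY_PATTERN = re.compile(
    r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"]([0-9a-f]{32,})['\"]"
)


def find_exposed_keys(text):
    """Return (variable name, key) pairs that look like hard-coded credentials."""
    return [(m.group(1), m.group(2)) for m in KEY_PATTERN.finditer(text)]
```

Running such a check as a pre-commit hook catches keys before they are pushed; rotation then limits the damage from any that slip through.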

Google Cloud Vertex AI is vulnerable to attacks involving phishing and privilege escalation, allowing attackers to extract GCloud tokens and access sensitive ML assets. Attackers can leverage compromised credentials to move laterally within an organization’s cloud infrastructure.

Protective Measures

Security experts recommend several protective measures for each platform.

  • For Azure ML, best practices include enabling multi-factor authentication (MFA), isolating networks, encrypting data and enforcing role-based access control (RBAC)
  • For BigML, users should apply MFA, rotate credentials frequently and implement fine-grained access controls to restrict data exposure
  • For Google Cloud Vertex AI, it is advised to follow the principle of least privilege, disable external IP addresses, enable detailed audit logs and minimize service account permissions
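The least-privilege and RBAC recommendations above reduce to checking every request against an explicit role-to-permission mapping, denying by default. A minimal sketch (role names and permissions are invented for illustration):

```python
# Hypothetical role-to-permission mapping for an ML platform.
ROLE_PERMISSIONS = {
    "viewer": {"model.read"},
    "data-scientist": {"model.read", "model.train"},
    "admin": {"model.read", "model.train", "model.deploy", "key.rotate"},
}


def is_allowed(role, permission):
    """Deny by default: unknown roles or permissions get no access."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Keeping the mapping explicit makes over-broad grants visible in code review, rather than buried in per-resource settings.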

As businesses increasingly rely on AI technologies for critical operations, securing MLOps platforms against threats such as data theft, model extraction and dataset poisoning becomes essential. Implementing proactive security configurations can strengthen defenses and safeguard sensitive AI assets from evolving cyber-threats.
