Sensitive information disclosure via large language models (LLMs) and generative AI has become a more critical risk as AI adoption surges, according to the Open Worldwide Application Security Project (OWASP)
To this end, ‘sensitive information disclosure’ has been designated as the second biggest risk to LLMs and GenAI in OWASP’s updated Top 10 List for LLMs, up from sixth in the original 2023 version of the list.
This relates to the risk of LLMs exposing sensitive data held by an organization during interactions with employees and customers, including personally identifiable information and intellectual property.
Speaking to Infosecurity, Steve Wilson, project lead for the OWASP Top 10 for LLM Project, explained that sensitive information disclosure has become a bigger issue as AI adoption has surged.
“Developers often assume that LLMs will inherently protect private data, but we’ve seen repeated incidents where sensitive information has been unintentionally exposed through model outputs or compromised systems,” he said.
Supply Chain Risks in LLMs Rises
Another significant change to the list is ‘supply chain vulnerabilities,’ moving from fifth to the third most critical risk to these tools.
OWASP highlighted that LLM supply chains are susceptible to various vulnerabilities, which can affect the integrity of training data, models and deployment platforms.
This can result in biased outputs, security breaches or system failures.
Wilson observed that when OWASP released the first version of the list, the risks around supply chain vulnerabilities were mostly theoretical. However, it has since become clear that developers and organizations must stay vigilant about what’s being integrated into open-source AI technologies they are using.
“Now, it’s clear that the AI-specific supply chain is a dumpster fire of epic proportions. We’ve seen concrete examples of poisoned foundation models and tainted datasets wreaking havoc in real-world scenarios,” Wilson outlined.
‘Prompt injection’ retained its place as the number one risk to organizations using LLM and GenAI tools. Prompt injection involves users manipulating LLM behavior or outputs through prompts, causing safety measures to be bypassed and leading to outcomes such as generating harmful content and enabling unauthorized access.
OWASP said the updates are a result of a better understanding of existing risks and critical updates on how LLMs are used in real-world applications today.
New LLM Risks Added
The updated Top 10 list for LLMs includes a number of new risks to these technologies.
This includes ‘vector and embeddings’ in eighth spot. This relates to how weakness in how vectors and embeddings are generated, stored or retrieved can be exploited by malicious actions to inject harmful content, manipulate model outputs or access sensitive information.
This entry is a response to the community’s requests for guidance on securing Retrieval-Augmented Generation (RAG) and other embedding-based methods, now core practices for grounding model outputs.
Wilson described the entry of vector and embeddings as the biggest development in the new list, with some form of RAG now the default architecture for enterprise LLM applications.
“This entry was a must-add to reflect how embedding-based methods are now core to grounding model outputs. Providing detailed guidance on securing these technologies helps organizations address risks in systems that are becoming the backbone of their AI deployments,” he commented.
Another new entry is ‘system prompt leakage’ in seventh place. This refers to risk that the system prompts or instructions used to steer the behavior of the model can also contain sensitive information that was not intended to be discovered.
System prompts are designed to guide the model's output based on the requirements of the application but may inadvertently contain secrets that can be used to facilitate other attacks.
This risk was highly requested by the community following recent incidents which demonstrated that developers cannot safely assume that information in these prompts remain secret.
GenAI Security Optimism
Wilson said that despite the significant risks and vulnerabilities in GenAI systems, there are reasons to be optimistic about the future security of these tools.
He highlighted the rapid development of the commercial ecosystem for AI/LLM security since Spring 2023, when OWASP started building the first Top 10 list for LLMs.
At this time, there were a handful of open-source tools and almost no commercial options to help secure these systems.
“Now, just a year and a half later, we’re seeing a healthy and growing landscape of tools – both open source and commercial – designed specifically for LLM security,” said Wilson.
“While it’s still crucial for developers and CISOs to understand the foundational risks, the availability of these tools makes implementing security measures much more accessible and effective.”
The OWASP Top 10 LLM List
The 2025 Top 10 List serves as an update to version 1.0 OWASP’s Top 10 for LLM, which was published in August 2023.
The resource is designed to guide developers, security professionals and organizations in prioritizing their efforts to identify and mitigate critical generative AI application risks.
The risks are listed in order of criticality, and each is enriched with a definition, examples, attack scenarios and prevention measures.
OWASP Top 10 LLM and Gen AI List
The full OWASP Top 10 LLM and Gen AI List for 2025 is as follows:
- Prompt Injection
- Sensitive Information Disclosure
- Supply Chain Vulnerabilities
- Data and Model Poisoning
- Improper Output Handling
- Excessive Agency
- System Prompt Leakage
- Vector and Embedding Weaknesses
- Misinformation
- Unbounded Consumption
OWASP is a non-profit organization that produces open-source content and tools to assist software security. OWASP’s Top 10s are community-driven lists of the most common security issues in a field designed to help developers implement their code safely.