Sensitive Data Sharing Risks Heightened as GenAI Surges

Generative AI applications are now used in 96% of organizations, increasing the risk of sensitive data being shared with these public tools, according to a Netskope study.

The report found that the sharing of proprietary source code with generative AI apps accounts for 46% of all data policy violations.

In addition, regulated data, which organizations have a legal duty to protect, makes up more than a third (35%) of the sensitive data being shared with genAI applications, putting businesses at risk of costly data breaches.

Intellectual property information comprises 15% of data shared with genAI apps.

Cloud storage and collaboration apps were the source of the sensitive data shared with genAI apps 84% of the time.

The sources of sensitive data most frequently sent to genAI apps were:

  • OneDrive (34%)
  • Google Drive (29%)
  • SharePoint (21%)
  • Outlook.com (8%)
  • Google Gmail (6%)

The proportion of organizations using genAI apps rose from 74% in 2023 to 96% in 2024.

Enterprises now use nearly 10 genAI apps on average, up from three in 2023, while the top 1% of adopters use an average of 80 apps, up sharply from 14 a year earlier.

The most commonly adopted genAI apps by organizations are:

  • ChatGPT (80%)
  • Grammarly (60%)
  • Microsoft Copilot (57%)
  • Google Gemini (51%)
  • Perplexity AI (42%)

However, the overall proportion of employees using genAI apps remains low, at 5%.

Generative AI Security Controls

Netskope observed a 75% rise in the share of organizations using data loss prevention (DLP) controls to manage genAI data risks, from 24% in 2023 to 42% in 2024.

DLP policies aim to control specific data flows, in particular by blocking genAI prompts that contain sensitive or confidential information.
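
As a minimal illustration of the idea (a hypothetical sketch, not Netskope's implementation), a DLP-style control might scan outbound prompts against a set of sensitive-data patterns and block any match before it reaches a public genAI service. The pattern names and helper functions below are illustrative assumptions:

    import re

    # Hypothetical detection patterns; commercial DLP products ship far richer detectors
    SENSITIVE_PATTERNS = {
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    }

    def scan_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive-data patterns found in the prompt."""
        return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

    def allow_prompt(prompt: str) -> bool:
        """Block the prompt outright if any pattern matches."""
        findings = scan_prompt(prompt)
        if findings:
            print(f"Blocked: prompt matched {', '.join(findings)}")
            return False
        return True

    if __name__ == "__main__":
        allow_prompt("Summarize this meeting transcript.")      # allowed
        allow_prompt("Debug this: key = AKIAABCDEFGHIJKLMNOP")  # blocked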

Around three-quarters of businesses completely block at least one genAI app to limit the risk of sensitive data exfiltration. Nearly a fifth (19%) have imposed a blanket ban on GitHub Copilot.

The report also found that, among organizations with policies to control genAI usage, the use of coaching dialogs rose from 20% in 2023 to 31% in 2024.

These dialogs warn users while they are interacting with genAI apps, giving them the option to cancel or proceed with their action. The aim is a more targeted security control, one that informs users and enables them to change their behavior without blocking their work.
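
To sketch the pattern (again a hypothetical example, not Netskope's product logic), a coaching flow could surface the findings from a DLP scan such as the one above and leave the final decision to the user:

    def coach_prompt(prompt: str, findings: list[str]) -> bool:
        """Warn the user about flagged content; they may cancel or proceed."""
        if not findings:
            return True  # nothing flagged, send the prompt unmodified
        print(f"Warning: this prompt appears to contain: {', '.join(findings)}")
        choice = input("Proceed anyway? [y/N] ").strip().lower()
        return choice == "y"

    if __name__ == "__main__":
        if coach_prompt("My SSN is 123-45-6789", ["us_ssn"]):
            print("Prompt sent to the genAI app.")
        else:
            print("Action cancelled by the user.")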

Users chose to stop the action they were performing with a genAI tool in 57% of the cases where they received a coaching dialog alert.
