There is a significant disconnect between consumer expectations and organizations’ approaches to privacy, especially regarding the use of AI.
That is according to Cisco’s 2023 Data Privacy Benchmark Study, which gathered insights from 3100 security professionals familiar with their organizations’ data privacy programs and compared their responses with consumer attitudes to privacy from the earlier Cisco 2022 Consumer Privacy Survey.
The disconnect between consumers and organizations was most pronounced regarding the impact of AI technologies, such as ChatGPT, on privacy.
In 2022’s Consumer Privacy Survey, 60% of consumers expressed concern about how organizations apply and use AI today, and 65% said they had already lost trust in organizations over their AI practices.
By contrast, 96% of security professionals in the 2023 Data Privacy Benchmark survey stated that their organizations already have processes in place to meet the responsible and ethical standards of privacy in AI that consumers expect.
Speaking to Infosecurity, Robert Waitman, privacy director and head of the privacy research program at Cisco, said: “AI algorithms and automated decision-making can be particularly challenging for people to understand. While most consumers are supportive of AI generally, 60% have already lost trust in organizations due to AI application and use in their solutions and services. As a result, organizations should be extra careful in applying AI to automate and make consequential decisions that affect people directly, such as when applying for a loan or a job interview.”
Unresolved Issues Around AI and Privacy
Speaking during a recent episode of the Infosecurity Magazine podcast, Valerie Lyons, COO and senior consultant at BH Consulting, discussed the huge implications of the growth of AI on privacy.
One of these is the role of AI in creating inferential data – drawing conclusions about individuals and populations from existing datasets.
“The problem with inferential data is that I don’t know as a consumer that the organization has it. I gave you my name, my address and my age, and the organization infers something from it and that inference may be sensitive data,” explained Lyons.
While using AI to create inferential data could have huge potential, it raises significant privacy issues that have not yet been resolved. “Inferential data is something we have no control over as a consumer,” added Lyons.
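To make the concern concrete, here is a minimal, hypothetical sketch (the data, features and labels below are entirely fabricated for illustration) of how a model trained on innocuous attributes can produce a sensitive inference that the consumer never supplied and cannot see:

```python
# Toy illustration of "inferential data": a model trained on innocuous
# attributes can output a sensitive inference the consumer never provided.
# All data, features and labels below are fabricated for illustration.
from sklearn.tree import DecisionTreeClassifier

# Innocuous attributes a consumer might knowingly hand over: [age, postcode_area]
X_train = [
    [25, 1], [34, 1], [52, 2], [61, 2],
    [29, 3], [47, 3], [66, 1], [38, 2],
]
# Sensitive labels the organization derived from other sources
# (hypothetical: 1 = flagged as "high health risk")
y_train = [0, 0, 1, 1, 0, 1, 1, 0]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A new customer shares only their age and postcode area...
new_customer = [[58, 2]]
# ...yet the organization now holds a sensitive inference about them,
# one the customer does not know exists and cannot control.
print(model.predict(new_customer))  # e.g. [1]
```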
Camilla Winlo, head of data privacy at Gemserv, expressed concerns to Infosecurity about AI tools using people’s personal information in ways they did not intend or consent to. This includes so-called ‘data scraping,’ whereby the datasets used to train AI algorithms are taken from sources like social media.
A high-profile example of this is the investigation into Clearview AI for scraping people’s images from the web without their knowledge and disclosing them through its facial recognition tool.
“Many people would be uncomfortable at their personal information being taken and used for profit by organizations without their knowledge. This kind of process can also make it difficult for people to remove personal information they no longer want to share – if they don’t know an organization has it, they can’t exercise their rights,” said Winlo.
Winlo also pointed out that consumers may develop an unrealistic expectation of privacy when interacting with AI, not realizing that the information they divulge may be accessed and used by humans and organizations.
She commented: “People interacting with tools like chat bots can have an expectation of privacy because they believe they are having a conversation with a computer program. It can come as a surprise to discover that humans may be reading those messages as part of testing programs to improve the AI, or even choosing the most appropriate AI-generated response to post.”
Another area discussed by Lyons was the potential future role of ChatGPT in the field of data privacy. She noted that GPT’s main function of answering questions and formulating text “is essentially what privacy professionals do,” especially when curating privacy policies.
Therefore, as the technology learns and evolves, she expects it has the potential to significantly improve organizations’ approaches to privacy.
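As a rough illustration of the drafting task Lyons describes, here is a hedged sketch that asks a chat model to generate plain-language privacy-notice text using OpenAI’s Python client. The model name and prompt are assumptions for illustration, and any output would still need review by a qualified privacy professional before use:

```python
# Hedged sketch: prompting a GPT model to draft privacy-notice text, the kind
# of task Lyons suggests the technology could assist with. The model name and
# prompt are illustrative assumptions, not a recommendation; outputs should be
# reviewed by a privacy professional before publication.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Draft a plain-language privacy notice section explaining that we "
    "collect name, email and order history to fulfill purchases, retain "
    "the data for 24 months, and never sell it to third parties."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```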
Developing Consumer Trust in AI
More than nine in 10 (92%) security professionals in Cisco’s 2023 Data Privacy Benchmark report admitted that they need to do more to reassure customers that their data is only being used for intended and legitimate purposes when it comes to the use of AI in their solutions.
However, consumers and businesses have very different priorities for building that trust and reassurance. While 39% of consumers said the most important way to build trust was clear information about how their data is being used, just 26% of security professionals felt the same.
Additionally, while 30% of professionals believed the biggest priority for building trust in their organizations was compliance with all relevant privacy laws, this was a priority for just 20% of consumers.
Over three-quarters (76%) of consumers said that the opportunity to opt out of AI-based solutions would make them more comfortable with the use of these technologies. However, just 22% of organizations believe this approach would be most effective.
Reflecting on these findings, Waitman commented: “Compliance is most often seen as a basic prerequisite, but it’s not enough when it comes to earning and building trust. Consumers’ clear priority regarding their data is transparency. They want to know that their data is being used only for intended and legitimate purposes, and they trust organizations more who communicate this clearly to them.”
To boost consumer trust, the company advised organizations to share their online privacy statements in addition to the privacy information they are legally obliged to disclose.
Waitman added: “Organizations should explain in plain language exactly how they use customer data, who has access to it, how long they retain it, and so forth.”
Regarding the use of AI, Winlo said it is vital that organizations involved in the development and use of AI tools take action to safeguard privacy, or these technologies risk failing to realize their huge potential benefits.
“We are only just starting to identify the use cases for these systems. However, it is really important that those developing the tools consider the way they do that, and the implications for individuals and society if they do it well or badly. Ultimately, however popular something may be as a novel technology, it will struggle in the long term if people don’t trust that their personal data – and lives – are safe with it,” she added.
Changing Business Attitudes to Privacy
Encouragingly, Cisco’s 2023 survey found that almost all organizations recognize the importance of privacy to their operations, with 95% of respondents stating that privacy is a business imperative. This compares to 90% last year.
Additionally, 94% acknowledged their customers would not buy from them if their data was not properly protected, and 95% said privacy is an integral part of their organization’s culture.
Companies are also recognizing the need for an organization-wide approach to protecting personal data, with 95% of respondents stating that “all of their employees” need to know how to protect data privacy.
Around four in five (79%) said that privacy laws were having a positive impact, with just 6% arguing they have been negative.
These attitudes are leading to changing business practices. Waitman noted: “While very few organizations were even tracking and sharing privacy metrics a few years ago, now 98% of organizations are reporting privacy metrics to their Board of Directors. A few years ago, privacy was typically handled by a small group of lawyers – today, 95% of organizations believe privacy is an integral part of their culture.”