The UK's Information Commissioner has confirmed that safeguarding consumer rights in the era of artificial intelligence (AI) is a top priority, but rejected the idea of creating separate regulations specifically for AI.
John Edwards, the UK’s sixth Information Commissioner, said the Information Commissioner's Office (ICO) is “working at pace to apply further clarity on how the law applies to these emergent AI models.”
He acknowledged that policymakers had failed to respond quickly enough to the impact of social media on data protection – an area the UK is now catching up on with the Online Safety Act 2023. He said the ICO would not "miss the boat" when it comes to data protection and AI.
The growing use of generative AI raises a range of new data privacy questions, said Edwards, speaking during the International Association of Privacy Professionals (IAPP) Data Protection Intensive event on 28 February.
These new privacy questions include:
- How much control are we willing to give the organizations that develop and deploy this technology?
- When is it lawful to scrape data from the internet to train generative AI models?
- Are people’s rights being meaningfully protected when AI models are built using their information?
- What safeguards do developers and organizations need to consider when exploring these models?
However, Edwards insisted that there is no need for a bespoke AI regulation at this stage.
“We won’t be talking about AI regulation in a few years’ time, AI will be part of the domain of every regulator,” he added.
Speaking to Infosecurity, the IAPP’s Director of Research and Insights, Joe Jones, said, “Data protection applies equally to AI as it does to social media, banking, it’s another technological application to which there’s relevance for data protection.”
Jones added that he doesn’t expect a UK equivalent to the EU’s AI Act in the foreseeable future.
New Guidance on Lawful Use of Biometrics Under GDPR
Other privacy considerations for the ICO in the fast-paced technology environment include helping organizations understand how and when biometrics can be applied in compliance with the UK GDPR.
Edwards cited the ICO action against Serco Leisure on February 23, 2024, in which the firm was ordered to stop using facial recognition technology (FRT) and fingerprint scanning to monitor employee attendance.
This order ties into guidance recently issued by the ICO setting out the “lawful basis” for using these technologies.
At the heart of the Serco case were proportionality and a power imbalance that prevented genuine choice for employees.
“[Serco] didn’t clearly offer their worker an alternative way to log their entry and exit, which increased the imbalance of power between employers and employees – this isn’t a proportionate or necessary use of biometric data,” noted Edwards.
Joe Jones told Infosecurity the top considerations for using biometrics lawfully revolve around purpose and proportionality, as well as communication.
He believes it is right the ICO is making its stance clear on this issue relating to biometrics. “It’s a pretty intrusive and invasive technology, it can reveal a lot about people,” noted Jones.
Tackling Cookies an ICO Priority
Another priority for the ICO is regulating the use of cookies, to “reset the balance of power between content aggregators and users.”
Edwards said the body analyzed the top 100 websites in the UK, finding that 53 of them have potentially non-compliant cookie banners.
These websites were given 30 days’ notice to make the necessary changes. Currently, 38 have changed their cookie banners to be compliant, and four have committed to reaching compliance in the next month.
“Our message is clear – it must be just as easy to reject all non-essential cookies as it is to accept them,” stated Edwards.
Monitoring and regulating the use of cookies at this scale requires automation, which the ICO is developing.
“Our bots are coming for your bots,” warned Edwards.
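The ICO has not published the details of its tooling, but the compliance rule Edwards describes – rejecting all non-essential cookies must be as easy as accepting them – lends itself to automated checking. Below is a minimal, purely illustrative sketch (not the ICO's actual system) of a heuristic that flags banners offering a one-click "accept" control without an equally prominent "reject" control; the button phrases and parsing approach are assumptions for the example.

```python
# Hypothetical heuristic for spotting potentially non-compliant cookie
# banners: a banner that exposes an "accept all" control but no top-level
# "reject all" control fails the "just as easy to reject" test.
# This is an illustrative sketch, not the ICO's tooling.
from html.parser import HTMLParser

ACCEPT_PHRASES = ("accept all", "allow all")
REJECT_PHRASES = ("reject all", "decline all", "refuse all")


class ClickableTextParser(HTMLParser):
    """Collects the visible text of clickable elements (button/a tags)."""

    def __init__(self):
        super().__init__()
        self._depth = 0
        self.labels = []

    def handle_starttag(self, tag, attrs):
        if tag in ("button", "a"):
            self._depth += 1
            self.labels.append("")

    def handle_endtag(self, tag):
        if tag in ("button", "a") and self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and self.labels:
            self.labels[-1] += data


def offers_equal_reject(banner_html: str) -> bool:
    """True if the banner shows both an accept and a reject control
    at the same click depth (i.e. one click each)."""
    parser = ClickableTextParser()
    parser.feed(banner_html)
    labels = [label.strip().lower() for label in parser.labels]
    has_accept = any(p in label for label in labels for p in ACCEPT_PHRASES)
    has_reject = any(p in label for label in labels for p in REJECT_PHRASES)
    return has_accept and has_reject


compliant = "<div><button>Accept all</button><button>Reject all</button></div>"
nudging = "<div><button>Accept all</button><a>Manage preferences</a></div>"
```

Here `offers_equal_reject(compliant)` returns `True`, while the "nudging" banner, which hides rejection behind a preferences screen, returns `False`. A real crawler would also need to render JavaScript, handle consent-management iframes, and compare visual prominence, which this sketch ignores.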