Apple Must Convince Us to Trust AI With Our Data

Techno-wizardry could help keep our data safe, but it won’t eliminate the need to get the basics right. 

Apple recently announced its first foray into the wild and wonderful world of AI, and it’s hoping to convince us that “Apple Intelligence” can solve the enduring privacy challenges associated with AI technologies. 

Apple’s plan involves a slew of technological fixes designed to unlock the value of AI without putting people’s data at risk. Most of Apple’s AI processing will take place on-device, theoretically reducing the risk of data being leaked by or absorbed into AI models. More complex AI calculations, however, will still take place in the cloud, via an encrypted mechanism that Apple is calling “Private Cloud Compute.” 

Effectively, Private Cloud Compute offers an arm’s-length alternative to conventional cloud-based AI: some subset of your data drifts into the cloud, but it isn’t stored there or passed to third parties. Apple says only specified AI models will be able to unlock the user’s data, and that the security of its AI infrastructure can be verified by independent security researchers.

That’s all well and good, but the reality is that only a tiny proportion of Apple customers will have any idea whether Apple’s privacy system is really working as advertised. The technology may well be just as effective as Apple says – but whether consumers accept it will depend, quite simply, on whether or not they trust the tech giant to do right by them. 

The Rise of Techno-Wizardry

Apple isn’t the only company offering technological fixes for AI-related privacy concerns. Many tech companies are holding out synthetic data as a privacy solution: instead of feeding your data into AI models, they argue, it’s possible to use simulated datasets that are statistically similar to the real thing, but not directly traceable back to any individual’s actual data.   
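To make the idea concrete, here is a minimal, hypothetical sketch in Python of the crudest possible approach: fit a simple statistical model to each column of a made-up dataset, then sample fresh records from it. Real synthetic-data tools use far more sophisticated generative models, and every column name and figure below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical "real" dataset: each column describes 1,000 people.
# All names and figures here are invented purely for illustration.
real = {
    "age": rng.normal(45, 12, size=1_000),
    "annual_income": rng.normal(55_000, 15_000, size=1_000),
}

# Fit a simple per-column Gaussian (mean and standard deviation) to the real data...
fitted = {col: (values.mean(), values.std()) for col, values in real.items()}

# ...then sample a brand-new "synthetic" dataset from those fitted parameters.
synthetic = {
    col: rng.normal(mean, std, size=1_000) for col, (mean, std) in fitted.items()
}

# The synthetic columns track the real data's broad statistics
# without reproducing any individual record.
for col in real:
    print(col, round(real[col].mean()), round(synthetic[col].mean()))
```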

Some important questions remain, however. Researchers say that the more sophisticated synthetic data grows, the more closely it will approximate the data it’s mimicking – and the easier it will become to infer actual facts about individuals based on synthetic data. That’s especially true of “outlier” individuals, like a patient with an unusual medical condition or someone with huge amounts of debt. Of course, those outliers might be the very people who are most concerned about protecting their privacy.
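The outlier problem is easy to illustrate with the same kind of toy setup: add a single person with extreme debt to the "real" data, and even a generator that preserves nothing but summary statistics produces a synthetic tail that hints such a person exists. Again, this is a deliberately simplified, hypothetical sketch, not a claim about any particular synthetic-data product.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical debt figures: 999 ordinary borrowers plus one extreme outlier.
typical_debt = rng.normal(10_000, 3_000, size=999)
real_debt = np.append(typical_debt, 5_000_000)  # the outlier

# A naive generator that preserves only the mean and standard deviation of the real data.
synthetic_debt = rng.normal(real_debt.mean(), real_debt.std(), size=1_000)

# The single outlier drags the fitted statistics upward, so the synthetic data's
# upper tail betrays that someone with enormous debt is present in the real data.
print("largest typical real debt:", round(typical_debt.max()))
print("largest synthetic debt:   ", round(synthetic_debt.max()))
```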

This doesn’t mean synthetic data is a bad idea. In many cases, it may well work exactly as advertised, enabling AI functionality while protecting user privacy. But once again, this is a technological fix that’s far too complex for ordinary users to understand.

When users put their trust in a company that’s using synthetic data, in other words, they aren’t actually trusting “synthetic data.” Instead, they’re choosing to trust the company that’s telling them that synthetic data is sufficient to protect their privacy.

In just the same way, Apple users won’t really be making an informed decision about whether Private Cloud Compute is enough to keep them safe; they’ll just be deciding whether or not Apple itself is sufficiently trustworthy. 

How Much Do You Trust Apple?

Now, I’m not here to tell you that you shouldn’t trust Apple with your data. But it isn’t a given that you should trust Apple, either. Apple Intelligence could potentially have access to everything from our text messages to our finances to our physical and mental health, and there are good reasons to question anyone who wants access to that much of our data. 

Certainly, the US Department of Justice (DOJ) believes there’s ample reason to question the purity of Apple’s motives. In its antitrust lawsuit, the DOJ accused Apple of using privacy as a marketing stunt while deliberately degrading user privacy – by enabling app-makers to capture data, for example, or by encouraging and profiting from data-driven advertising.

You can agree or disagree with the DOJ’s allegations. Many of the industry experts I talk to say that Apple does a better job than most companies of prioritizing its users’ privacy and data rights. Others, of course, believe that the company’s sheer scale (and scale of data collection) demands that it be held to a higher standard. 

Either way, though, we come back to the same question: how can we, as consumers, decide who to trust? Who should we allow to access our intimate personal data – and how can we assess the claims they make about how it will or won’t be used?

The reality is that just as we can’t make sense of the endless boilerplate in companies’ privacy policies, neither can we make meaningful judgments about the technologies that companies like Apple promise will keep our data safe. Instead, we have to go back to basics, and ask whether these companies are doing right by us and offering us the transparency and control over our data that we’re entitled to expect. 

Privacy and Magic

The British sci-fi writer Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic. Between Apple’s Private Cloud Compute and the rise of synthetic data, it’s starting to feel like we’re entering a new era of privacy magic: technological solutions that may well work as intended, but that are simply too complex for non-specialists to understand. 

That isn’t necessarily a problem. We all use technologies we don’t fully understand: how many people can explain at a technical level how an LCD screen or a rechargeable battery actually works? But this raises the stakes when it comes to data privacy – because it means that in order for techno-wizardry to work as a privacy solution, it isn’t enough for it to be effective. It also has to be trusted, and underpinned by the moral faith and credit of the company that’s putting it forward.

Look at it this way: if Cambridge Analytica told you they’d devised a private cloud solution for AI, or started selling synthetic datasets derived from your information, would you trust them? In both cases, you might well have serious concerns – based not on questions about the underlying tech, but on your opinion of the companies deploying it.

In the new era of technological privacy fixes, organizations obviously need to build effective data infrastructure. But they also need to pay attention to the core principles on which trust is founded: transparency, consumer control and data dignity.

Organizations that get that right have an opportunity to turn technological advances into drivers of competitive advantage – while those that fail will get snubbed by consumers, no matter how advanced their privacy technologies become.

