For some, the idea that a device (originally a tool for making phone calls) will develop empathy may be unnerving, particularly as it will rely on your personal data to allow generative AI to deliver smarter features.


Acknowledging this, Craig Federighi, senior vice president of Software Engineering at Apple, has described Apple’s new AI technology as having been “built with privacy at the core”.


These ambitious plans for AI-driven features will be powered by a partnership between Apple and OpenAI (the developer of GPT).


What is GPT and how does it work?


For the uninitiated, GPT (current version GPT-4o) stands for “Generative Pre-trained Transformer” and is, in short, a large language model (or “LLM”).


In relation to generating text, an (albeit over-simplified) way of describing how an LLM works is that it predicts the most statistically probable next word (or “token”) in a sequence.


For example, if you input “who was the last Head of the Commonwealth” into ChatGPT (a bot powered by GPT), the bot won’t search a database (in the way that Google operates). Instead, the LLM has been trained to recognise that “Queen”, “Elizabeth” and “II” form a statistically probable sequence of “tokens” that completes the input.
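

To make that concrete, here is a deliberately toy sketch in Python of the greedy “pick the most probable next token” loop. The probability table below is invented purely for illustration; a real LLM computes these probabilities with a neural network trained on vast quantities of text.

```python
# Toy illustration of next-token prediction. A real LLM derives these
# probabilities from a trained neural network; this hand-written table
# merely stands in for the model.
TOY_MODEL = {
    "who was the last Head of the Commonwealth": {"Queen": 0.92, "The": 0.05, "King": 0.03},
    "who was the last Head of the Commonwealth Queen": {"Elizabeth": 0.97, "Victoria": 0.03},
    "who was the last Head of the Commonwealth Queen Elizabeth": {"II": 0.95, "I": 0.05},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    """Repeatedly append the statistically most probable next token."""
    text = prompt
    for _ in range(max_tokens):
        candidates = TOY_MODEL.get(text)
        if not candidates:
            break  # no prediction available for this context
        text += " " + max(candidates, key=candidates.get)
    return text

print(generate("who was the last Head of the Commonwealth"))
# -> who was the last Head of the Commonwealth Queen Elizabeth II
```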


In the last year or so, Apple has received some criticism for falling behind on AI features, having been slow to deploy a foundation model at a time when AI has a chokehold on consumer technology innovation.


The partnership with OpenAI will be quite an upgrade for Apple, allowing the technology to move from the roughly 3 billion parameters of Apple’s on-device model to the reported 1.76 trillion parameters of GPT-4.


This will allow actions to be carried out across apps, as well as the automatic writing and summarising of text, management of notifications and more. Manually navigating a journey within Apple’s ecosystem (between, say, Notes, Photos and Siri) may no longer be required to generate a single output.


How is privacy affected in Apple’s new AI technology?


The terms “personal data”, data “collection”, “processing” and “sharing” are key concepts for regulators (such as the ICO), lawyers and their clients when advising on compliance with the GDPR and data protection law in the UK and the EU.


Apple’s plans (and those of its competitors) will lead to ground-breaking outcomes for users, but there are questions to be answered regarding how the new technology will use “personal data”, which Apple have started re-branding as “personal context”.


Apple have said that, to manage privacy concerns, features such as “on-device processing” (to limit data sharing) will be deployed, and that when data does need to leave the device it will be handled by GPT’s LLM in conjunction with Apple’s Private Cloud Compute, so that it is processed securely in the cloud without being stored or made available to third parties.
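

Apple have not published the interface behind this design, but the behaviour they describe can be sketched as an “on-device first, secure cloud as fallback” routing decision. Every function name below is hypothetical and simply stands in for the behaviour Apple have announced.

```python
# Hypothetical sketch of an on-device-first privacy architecture.
# None of these functions are real Apple APIs; they stand in for the
# behaviour Apple have described publicly.

def fits_on_device(prompt: str) -> bool:
    """Stand-in heuristic: assume short, simple requests can run locally."""
    return len(prompt.split()) < 20

def on_device_model(prompt: str, personal_context: dict) -> str:
    # On this path, personal data never leaves the device.
    return f"[on-device answer to {prompt!r}]"

def private_cloud_compute(prompt: str, personal_context: dict) -> str:
    # Processed in the cloud but, per Apple's description, not stored
    # there or made available to third parties.
    return f"[secure cloud answer to {prompt!r}]"

def handle_request(prompt: str, personal_context: dict) -> str:
    """Route a request: stay on the device wherever possible."""
    if fits_on_device(prompt):
        return on_device_model(prompt, personal_context)
    return private_cloud_compute(prompt, personal_context)

print(handle_request("Summarise my notes from today", {"user": "example"}))
```

The key design point, on Apple’s description, is that the cloud path is a fallback rather than the default, and that data sent to Private Cloud Compute is processed without being retained or exposed to third parties.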


Apple says that the technology will be made “aware of your personal data without collecting your personal data”.


Other safeguards include Apple’s approach to AI image generation. To self-regulate against “deep fakes”, the technology will restrict outputs to animations, illustrations and sketches.


As tech companies rebrand traditional terminology, the law will need to grapple with the identification and use of data in a rapidly evolving sector. What is certain is that these new offerings will rely on vast amounts of personal information (“context”).


Apple is the biggest tech company in the world (by revenue), and we wait to see how “Apple Intelligence” (to be launched in Europe in 2025) handles the much-debated privacy concerns of generative AI.


How can Morr & Co help?


If you would like more information and advice on understanding AI, especially Apple’s new AI technology and its potential impact on your business, please call 01737 854500 or email [email protected] and a member of our expert corporate and commercial team will get back to you.