The past few months have seen the emergence of Apple’s latest venture, Apple Intelligence, which represents the company’s effort to compete with the other leading companies in artificial intelligence (AI) development. Unveiled at the highly anticipated Worldwide Developers Conference (WWDC) on June 10, 2024, at Apple Park in Cupertino, Apple Intelligence is what the company calls “AI for the rest of us,” a nod to the 1984 Macintosh marketing that billed that device as “the computer for the rest of us.” But it remains to be seen whether Apple Intelligence will truly be “for the rest of us,” given the implications of widespread deployment of personalized AI for privacy, data collection, and bias.
The desire to create technology “for the rest of us” has been evident throughout Apple’s history. When the company introduced the iPhone in 2007, it eschewed marketing to traditional smartphone buyers (business users and enthusiasts) and brought the product directly to the mass market. In May 2023, CEO Tim Cook said, “At Apple, we’ve always believed the best technology is technology built for everyone.” Now, Apple is taking on the feat of creating generative AI “for the rest of us.”
The widespread use of generative AI has the potential to revolutionize public life, and Apple’s integration of the technology into smartphones is no exception. A 2024 McKinsey survey of global personal experience with generative AI tools revealed a counterintuitive pattern: 20% of respondents born before 1964 regularly used these tools outside of work, a higher share than those born between 1965 and 1980 (16%) or between 1981 and 1996 (17%).
The integration of AI into Apple devices could dramatically change the role of generative AI in everyday life: replying to detailed emails, finding photos of your cat in a sweater, or planning a future road-trip itinerary, each with a single tap. Embedding these tools in an already ubiquitous device could make generative AI more accessible and increase usage across all age groups.
Why Apple Intelligence isn't “for the rest of us”
But it's important to consider the potential risks that come with widespread adoption of commercially deployed generative AI. A survey by the Polarization Research Lab on public opinion about AI, misinformation, and democracy in the run-up to the 2024 elections reported that 65.1% of Americans are concerned that AI will infringe on their personal privacy. Apple knows this and has made privacy a centerpiece of its business model. Its 2019 advertisements highlighting privacy, its public statements that privacy is a fundamental human right, and even its refusal to help the FBI bypass iPhone security measures in a criminal investigation are all ways Apple has demonstrated that commitment to consumers.
The Apple Intelligence launch is no exception. During the keynote, Craig Federighi, senior vice president of software engineering, emphasized that privacy is integrated across the product's features. Apple is taking a two-pronged approach to generative AI: more routine tasks, like organizing your schedule or transcribing phone calls, run on-device, while more complex tasks, such as creating a custom bedtime story for a 6-year-old who loves butterflies and solving mysteries, are handed off to server-side models in what Apple calls Private Cloud Compute. However, it remains to be seen where the line between simple and complex requests falls, and which requests will be sent to external (and potentially third-party) servers.
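To make that trade-off concrete, here is a minimal sketch of what such a routing decision could look like. Every type, name, and threshold below is invented for illustration; Apple has not published how its dispatcher actually works.

```swift
// Hypothetical sketch of on-device vs. cloud routing for AI requests.
// Names and thresholds are illustrative, not Apple's actual API.

enum ExecutionTarget {
    case onDevice       // handled by the local model
    case privateCloud   // sent to server-side compute
}

struct AIRequest {
    let prompt: String
    let estimatedTokens: Int      // rough size of the generation task
    let needsWorldKnowledge: Bool // e.g., open-ended creative writing
}

func route(_ request: AIRequest, onDeviceTokenLimit: Int = 750) -> ExecutionTarget {
    // Simple heuristic: small, personal-context tasks (summarizing a
    // notification, transcribing a call) fit the local model's budget;
    // large or knowledge-heavy tasks (a custom bedtime story) do not.
    if request.needsWorldKnowledge || request.estimatedTokens > onDeviceTokenLimit {
        return .privateCloud
    }
    return .onDevice
}

let summary = AIRequest(prompt: "Summarize today's meetings",
                        estimatedTokens: 200, needsWorldKnowledge: false)
let story = AIRequest(prompt: "Bedtime story about a mystery-solving butterfly",
                      estimatedTokens: 1_500, needsWorldKnowledge: true)
print(route(summary)) // onDevice
print(route(story))   // privateCloud
```

The point of the sketch is that the boundary is a tunable heuristic: wherever the threshold sits, everything above it leaves the device.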
Additionally, Apple claims that any data sent is encrypted and quickly deleted, but as Matthew Green, a security researcher and associate professor of computer science at Johns Hopkins University, points out, “anything that leaves the device is inherently less secure.”
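For readers unfamiliar with what “encrypted” means here, the sketch below shows the general technique (authenticated encryption, using Apple's CryptoKit framework). It is a generic illustration with deliberately simplified key handling, not a description of Apple Intelligence's actual protocol.

```swift
import CryptoKit
import Foundation

// A request payload about to leave the device.
let payload = Data("Draft a reply to this email...".utf8)

// In a real system the key would be negotiated with the server;
// here we simply generate one locally for illustration.
let key = SymmetricKey(size: .bits256)

do {
    // Encrypt with AES-GCM: the ciphertext is unreadable without the
    // key, and the authentication tag detects tampering in transit.
    let sealedBox = try AES.GCM.seal(payload, using: key)

    // Decrypt on the receiving end.
    let decrypted = try AES.GCM.open(sealedBox, using: key)
    assert(decrypted == payload)
} catch {
    print("Encryption round-trip failed: \(error)")
}
```

Green's caveat survives the cryptography: encryption protects the request in transit, but the server must ultimately decrypt it to process it, so the plaintext still exists somewhere off the device.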
Data Security
These concerns carry forward to how future versions of Apple Intelligence will be developed. When an AI model is trained, the algorithm is fed training data and iteratively adjusts its parameters until it performs its intended function. The new Apple Intelligence model promises to use personal context to make AI interactions even more seamless and integrated into the user's daily life. During the keynote, Apple noted that a user's iOS device will be able to link information across applications. This means that if you ask Siri how to get from work to an event efficiently, it can access your messages to gather the information it needs to make that assessment, all to “simplify and accelerate everyday tasks.” The company said measures are in place to ensure that Apple employees cannot access user data collected through the AI platform.
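For readers unfamiliar with what “iteratively adjusts its parameters” means, the toy loop below fits a straight line to four data points by gradient descent. Production models differ enormously in scale, but they follow the same basic loop of predict, measure error, adjust; this is a generic illustration, not Apple's pipeline.

```swift
// Toy model: y = w * x + b, with two learnable parameters.
struct LinearModel {
    var w = 0.0
    var b = 0.0
    func predict(_ x: Double) -> Double { w * x + b }
}

// Toy training data: inputs paired with desired outputs (y = 2x + 1).
let data: [(x: Double, y: Double)] = [(1, 3), (2, 5), (3, 7), (4, 9)]

var model = LinearModel()
let learningRate = 0.01

for _ in 0..<2_000 {                         // each pass refines the fit
    for (x, y) in data {
        let error = model.predict(x) - y     // how wrong is the model?
        model.w -= learningRate * error * x  // nudge the weight
        model.b -= learningRate * error      // nudge the bias
    }
}

print(model.w, model.b) // approaches 2.0 and 1.0
```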
But looking to the future, when Apple develops new versions of its AI models, what training data will it use beyond the data collected from its own devices? A report on trends in the amount of human-generated data used to train large language models found that the stock of human-generated text data is likely to be exhausted between 2026 and 2032. Publicly available training data is drying up, and Apple could face this problem if it does not collect user input to train future models. So while Apple's privacy commitments are admirable, they are not foolproof once the long-term data needs of its AI models are taken into account.
It is also unclear where the training data for Apple’s current models comes from, or whether the models were developed on unbiased and inclusive datasets. AI models are trained on curated data, which can carry inherent bias, and such data often lacks the diversity needed to produce inclusive results. This matters because Apple Intelligence is a computer model that makes inferences about people, including their attributes, preferences, likely future actions, and associated objects. It is not clear whether Apple’s algorithms repeat or amplify human biases, default to mainstream inferences about human behavior, or both. As generative AI rollouts like this one become more widespread, these are essential questions for any AI product proposed as being “for the rest of us.”
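One common way to probe this concern is a disparity audit: compare how often a model makes a given inference across groups. The sketch below uses fabricated data and a generic fairness check; it is one standard auditing technique, not Apple's process.

```swift
// A single model output about a person, tagged with a group attribute.
struct Prediction {
    let group: String  // e.g., a demographic attribute
    let positive: Bool // did the model make the inference in question?
}

// Compute the rate of positive inferences per group.
func positiveRates(_ predictions: [Prediction]) -> [String: Double] {
    let byGroup = Dictionary(grouping: predictions, by: { $0.group })
    var rates: [String: Double] = [:]
    for (group, preds) in byGroup {
        let positives = preds.filter { $0.positive }.count
        rates[group] = Double(positives) / Double(preds.count)
    }
    return rates
}

// Fabricated example: a model trained mostly on group "A" defaults to
// the majority pattern, so its positive rate differs sharply by group.
let sample: [Prediction] = [
    .init(group: "A", positive: true),  .init(group: "A", positive: true),
    .init(group: "A", positive: true),  .init(group: "A", positive: false),
    .init(group: "B", positive: true),  .init(group: "B", positive: false),
    .init(group: "B", positive: false), .init(group: "B", positive: false),
]

print(positiveRates(sample)) // ["A": 0.75, "B": 0.25], order may vary
```

A gap like this does not prove bias on its own, but it flags where a model may be defaulting to the majority patterns in its training data.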
Getting past the hype
Dr. Kevin LaGrandeur's paper on the impact of AI hype provides valuable insight into the potential consequences of the increasing commercialization of AI products. He outlines how hype around AI can distort expectations, leading to inappropriate reliance on the technology and potential societal harm. Apple's announcement of its generative AI model and its capabilities risks falling into this trap. LaGrandeur warns that the inflated expectations attached to AI deployments, and the disappointments that follow, mirror Gartner's hype cycle, in which a technology climbs to a “peak of inflated expectations” and falls through a trough of disillusionment before settling at a “plateau of productivity.” Because Apple's technology will not be available to the public until this fall, we cannot yet judge how it will perform in practice, or what its impact will be on user privacy and the broader protections that shield users from harm.