
Apple has signed the White House’s voluntary commitment to develop safe, secure, and trustworthy AI, according to a press release Friday. The company will soon launch Apple Intelligence, bringing generative AI to its 2 billion users.
Apple joins 15 other tech companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, that have committed to the White House’s ground rules for developing generative AI since July 2023. At the time, Apple hadn’t revealed how deeply it planned to infuse AI into iOS, but the company made its position clear at WWDC in June: it’s going all in on generative AI, starting with a partnership that embeds ChatGPT in the iPhone. As a frequent target of federal regulators, Apple is eager to signal early that it intends to play by the White House’s AI rules, a potential goodwill gesture before a future regulatory battle over AI erupts.
But how much teeth does this voluntary commitment have? Not much, though it’s a start. The White House calls it a “first step” for Apple and the 15 other companies toward developing safe, secure, and trustworthy AI. The second step was President Biden’s executive order on AI in October, and several bills are currently moving through federal and state legislatures to better regulate AI models.
Under the commitment, the companies pledge to red-team AI models (act as adversarial hackers to stress-test an organization’s safety measures) before public release, and to share the results with the public. The commitment also requires them to treat unreleased AI model weights as confidential: the companies agree to work on model weights only in secure environments and to limit access to as few employees as possible. Finally, they agree to develop content-labeling systems, such as watermarking, that help users distinguish AI-generated content from everything else.
Separately, the Commerce Department said it would soon release a report on the potential benefits, risks, and implications of open-source foundation models. Open-source AI is an increasingly politically charged regulatory battleground: some camps want to restrict how widely the weights of powerful AI models can be distributed, citing safety, but doing so could significantly hamper AI startups and the research ecosystem. The White House’s stance here could have significant implications for the broader AI industry.
The White House also noted that federal agencies have made significant progress on the tasks outlined in the October executive order. To date, agencies have made more than 200 AI-related hires, given more than 80 research teams access to computing resources, and released several frameworks for AI development (the government loves frameworks).