Regulatory

Global: Apple Commits to White House’s AI Safety Standards

Apple has officially joined the White House’s voluntary commitment to advancing safe, secure, and trustworthy artificial intelligence (AI), according to a press release issued on Friday. As part of the commitment, Apple will soon roll out Apple Intelligence, its generative AI platform, across its core products, bringing generative AI to its 2 billion users.

Apple’s move follows the earlier commitments made by 15 other technology giants, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, which agreed to the White House’s foundational principles for developing generative AI in July 2023. Although Apple had not yet detailed its AI integration plans for iOS at that time, its announcement at WWDC in June signaled a substantial push into generative AI, starting with a partnership to embed ChatGPT into the iPhone.

For a company frequently under the scrutiny of federal regulators, the commitment may be seen as a strategic effort to align with White House guidelines and potentially mitigate future regulatory challenges related to AI.

While the White House’s voluntary commitments are not legally binding, they mark an important initial step, and the White House views them as a foundation for ensuring AI technologies are developed responsibly. The original commitments preceded President Biden’s AI executive order, issued in October 2023, as well as ongoing legislative efforts at both the federal and state levels to regulate AI more effectively.

Under this commitment, participating AI companies, including Apple, agree to:

  • Conduct rigorous “red-teaming” exercises to stress-test AI models and disclose these findings to the public.
  • Handle unreleased AI model weights with strict confidentiality, limiting access to authorized personnel only.
  • Develop content labeling systems, such as watermarking, to clearly identify AI-generated content.

In addition, the Department of Commerce plans to release a report on the benefits, risks, and implications of open-source AI models. This area of AI regulation is becoming increasingly contentious, with debates over how much access should be allowed to model weights in the interest of safety versus fostering innovation. The outcome of this report could significantly influence the broader AI industry.

The White House also highlighted progress made under the October executive order, noting that federal agencies have hired over 200 AI specialists, granted more than 80 research teams access to computational resources, and released several frameworks to guide responsible AI development.
