The US Government has reached a major milestone on Artificial Intelligence (AI) by securing voluntary commitments from leading AI companies to follow rules for managing the risks posed by this transformative technology.
Responding to the European Union's progress towards approving the AI Act, US President Joe Biden convened a meeting at the White House with seven prominent AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. During this meeting, the companies voluntarily pledged to comply with specific rules intended to ensure the safe, secure, and transparent development of AI.
What do the voluntary commitments on artificial intelligence made to the US Government provide?
These voluntary commitments revolve around three fundamental principles for the future of AI: safety, security, and trust. The companies have committed to:
- Ensure products are safe before release, conducting thorough internal and external security testing of their AI systems to address significant AI risks like biosecurity, cybersecurity, and broader societal effects. They will also share vital information on managing AI risks and best practices across the industry, with governments, civil society, and academia;
- Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Additionally, they will facilitate third-party discovery and reporting of vulnerabilities in their AI systems, ensuring quick identification and resolution of any post-release issues; and
- Develop robust technical mechanisms, such as watermarking systems, to inform users when content is AI-generated. They will also publicly report their AI systems’ capabilities, limitations, and appropriate and inappropriate uses, covering both security risks and societal risks such as fairness and bias, which will be a particular focus of their research.
What differentiates the US voluntary commitments from the EU AI Act?
Comparing the US voluntary commitments with the EU AI Act, three main differences emerge:
- The EU AI Act is a more detailed and structured piece of legislation, setting out a specific regime based on the level of risk posed by each AI system, while the US commitments consist of general principles without the same level of detail.
- The EU AI Act is intended as a directly applicable EU Regulation for all entities within its scope, while the US commitments are voluntary without sanctions, limited to the participating companies.
- The EU AI Act has yet to be approved and provides for a transition period before it becomes enforceable, whereas the US commitments apply immediately, raising questions about how existing AI products will be brought into compliance.
What to expect next?
Despite these differences, the US and EU approaches share similarities, and the European Union is concurrently working on a voluntary code of conduct for AI. While non-binding, these rules could be applied immediately and may align with the US commitments. Although the code itself would carry no sanctions, the principles underlying the AI Act are already partly reflected in other existing legislation. If authorities interpret regulatory obligations in line with the code of conduct and major market players comply with it, the code could in practice become a binding discipline.
This exciting moment for AI’s future calls for businesses to adopt a long-term approach, embracing AI while ensuring compliance with the forthcoming regulatory regime. With the convergence of approaches and the commitment of leading AI companies, the path towards responsible AI implementation is clearer than ever.
To support businesses in ensuring the compliance of their AI solutions, DLA Piper has developed a legal tech tool named “Prisca” which makes it possible to assess the maturity of artificial intelligence systems against the main pieces of legislation and technical standards in an efficient and cost-effective manner. You can watch a presentation on the product HERE.
On a similar topic, you can read the article “Will we have global rules on artificial intelligence?”