The Group of Seven (G7) industrialized nations is set to announce an artificial intelligence (AI) code of conduct for developers on October 30, as reported by Reuters.
This AI code comprises 11 key points with the objective of promoting the global use of AI that is “safe, secure, and trustworthy” while addressing and mitigating associated risks.
The code was formulated by G7 leaders in September and provides voluntary guidance for organizations developing advanced AI systems, including foundation models and generative AI systems. It encourages these organizations to publish reports detailing the capabilities, limitations, use, and potential misuse of the systems they develop, and advocates robust security controls for those systems.
The G7 countries participating in this initiative are Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, joined by the European Union.
This announcement comes at a time when governments worldwide are grappling with the opportunities and challenges presented by AI. The European Union has already moved toward binding AI rules with its EU AI Act, a draft of which the European Parliament approved in June. On October 26, the United Nations formed a 39-member advisory committee to address global AI governance issues. China has also implemented its own AI regulations, in effect since August.
In addition to government efforts, OpenAI, the developer of the widely used AI chatbot ChatGPT, recently disclosed its plans to establish a “preparedness” team responsible for evaluating various AI-related risks.