Framework Convention on Artificial Intelligence
Introduction
The Framework Convention on Artificial Intelligence[1] (Convention) is an international treaty proposed by the Council of Europe that was recently opened for signature[2]. It is the first legally binding international framework regulating the entire lifecycle of Artificial Intelligence (AI) systems. The Convention aims to ensure that AI systems align with fundamental principles of human rights, democracy, and the rule of law while promoting safe and innovative technological growth.
Background
As AI systems increasingly impact our lives, we must address the potential risks they pose, including privacy violations, discrimination, and lack of transparency. The Council of Europe has recognized the urgent need for a unified approach to regulating AI that upholds ethical standards while fostering innovation. This led to the creation of the Convention, which seeks to provide a comprehensive governance framework that addresses these concerns. The Convention builds on the Council of Europe’s work in technology governance and human rights[3].
Requirements for States
The countries that become signatories to the Convention must implement policies that ensure AI systems are transparent, accountable, and respectful of privacy and personal data protection. The treaty calls for measures to address algorithmic bias and discrimination, making fairness a central requirement for AI systems used in decision-making processes.
Signatory states have several key obligations to ensure the responsible use and development of AI:
Fundamental Principles
States must ensure that AI is used in compliance with human rights, particularly with regard to data privacy, discrimination, and transparency. Signatories must also guarantee accountability for AI decision-making and algorithmic outcomes.
Risk Assessments and Mitigation
States must conduct regular risk assessments of AI applications and mitigate potential harms by introducing bans on certain applications if necessary.
Procedural Safeguards
States shall ensure that sufficient information about AI systems is made available so that affected persons can challenge decisions made by or through such systems, or challenge the use of the systems themselves. States shall also designate competent authorities to receive complaints concerning these systems and provide effective procedural guarantees that respect human rights and freedoms.
Who is Covered?
The principles and obligations outlined in the Convention apply to AI system activities carried out by public authorities or private entities acting on their behalf. Risks and impacts arising from AI systems operated by other private sector actors must also be managed in line with the Convention’s objectives, but states retain the flexibility to either apply the Convention’s obligations to those actors directly or adopt alternative, suitable measures that achieve the same goals.
The Convention includes exemptions for research and development, in addition to national security.
This comprehensive coverage is designed to promote consistency in the development and use of AI across different industries and countries.
Implementation
As of the date of this article, the Convention has been signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and the United Kingdom, as well as Israel, the United States of America, and the European Union.
The Convention was negotiated with input from non-member countries like Argentina, Australia, Canada, Costa Rica, Japan, Mexico, Peru, and Uruguay.
Since it is compatible with European Union law, and with the EU AI Act in particular[4], the Convention is expected to serve as an effective framework at the international level.
The treaty will enter into force once at least five signatories, including three Council of Europe member states, have ratified it. The Convention is open for global participation, allowing countries from across the world to commit to its provisions.
Conclusion
The Framework Convention on Artificial Intelligence marks a significant step toward establishing a global, legally binding framework for the ethical development and deployment of AI systems. By focusing on human rights, democracy, and the rule of law, the Convention aims to manage AI's vast potential while mitigating its risks, such as discrimination, lack of transparency, and privacy concerns. Its comprehensive approach—covering both public and private actors—ensures that AI is developed responsibly across diverse sectors and regions.
With signatories from various parts of the world, including member states of the Council of Europe and non-member countries, the Convention has the potential to become a cornerstone of global AI governance. It may set a precedent for how international cooperation can effectively regulate technology in a way that prioritizes fairness, safety, and innovation. The flexibility afforded in regulating private sector actors, along with rigorous requirements for public accountability and risk management, promises to foster trust in AI systems while ensuring their compliance with ethical standards. Such international collaboration will be crucial in navigating the future of AI and related technological developments, ensuring that the technology enhances human life without undermining fundamental rights.
[1] Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (date of access: 20.09.2024), available at: https://rm.coe.int/1680afae3c
[2] For details, see: https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature
[3] For an overview of the Council of Europe’s work on AI, see: https://rm.coe.int/brochure-artificial-intelligence-en-march-2023-print/1680aab8e6
[4] For details, see: https://digital-strategy.ec.europa.eu/en/news/commission-signed-council-europe-framework-convention-artificial-intelligence-and-human-rights