Artificial Intelligence Act Adopted by the European Parliament
Introduction
The world's first “Artificial Intelligence Act”, which includes rules and regulations that directly affect tools such as ChatGPT, Bard and Midjourney[1], was adopted by the European Parliament with a majority of votes. The European Parliament has thus officially taken the first step toward a regulation that could be a turning point for the artificial intelligence applications that have recently dominated the agenda. Following this stage, the trilateral negotiations between the Council of the European Union, the European Commission and the European Parliament will continue and are expected to be concluded toward the end of 2023. The legislation is therefore expected to enter into force in 2024.
In this article, the important provisions of the act, described as the world's first artificial intelligence regulation, are examined, and the implications of the stipulated rules and regulations for companies are discussed.
Scope of Artificial Intelligence Act
According to a statement[2] on the European Parliament's website, driven by a desire to establish a uniform definition[3] that can be applied to future artificial intelligence systems, the Parliament's priority is to ensure that artificial intelligence systems used in the European Union (“EU”) are safe, transparent, traceable and protect fundamental rights and freedoms. Furthermore, the Parliament aims to stimulate AI investments and innovation, improve governance and enforcement, and promote a single EU market for AI. Accordingly, in its text, the Parliament adopted a risk-based approach, preferring to ban AI applications that pose an unacceptable risk and setting strict rules for high-risk use cases.
The act will apply to providers who place artificial intelligence systems on the market or put them into service in the Union, regardless of whether they are established in the EU; to users of artificial intelligence systems established in the Union; and, if certain criteria are met, to importers, distributors and manufacturers of artificial intelligence systems. The act's scope is broad: if an output produced by an AI system is used within the EU, AI system providers located outside the EU (e.g. in Türkiye) will also be subject to the legislation. The act will thus have a global impact, affecting many stakeholders around the world.
On the other hand, the use of artificial intelligence systems by public authorities and international organizations based outside the EU is excluded where it takes place within the framework of international agreements on judicial cooperation concluded with EU Member States or the Union, provided that adequate and appropriate safeguards are in place for the protection of fundamental rights and freedoms. Considering the scope of the act, it is obvious that the parties most affected are technology companies such as OpenAI, Google and Microsoft.
The text of the act also addresses privacy, emphasizing that EU law on the protection of personal data, privacy and the confidentiality of communications will apply to all processes in which personal data is processed in connection with artificial intelligence systems.
What Does the Act Stipulate?
When the Artificial Intelligence Act is analyzed in general terms, it is seen that the Parliament has established a framework based on four basic risk categories (minimal, limited, high and unacceptable) and, depending on the level of risk arising from the artificial intelligence, sets out rules for which all parties involved in the development, use, import, distribution or production of artificial intelligence models will be responsible. Systems classified as high risk are those with the potential to significantly harm the health, safety or fundamental rights and freedoms of individuals, while applications using subliminal techniques and biometric identification fall under unacceptable risk.
In the text approved by the Parliament, spam filters and video games are considered minimal-risk artificial intelligence, while robotic surgery and driverless cars are categorized as high-risk artificial intelligence and subject to strict rules. In addition, the Parliament foresees a ban on subliminal techniques, real-time remote biometric identification, social scoring, predictive policing, facial recognition databases built from images scraped from the internet, and emotion recognition software in law enforcement, border management, workplaces and education. Thus, facial recognition through smart cameras in public spaces will be banned with the entry into force of the act. Chatbots on banks' websites, on the other hand, are considered limited risk and are specifically subject to transparency obligations.
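The four-tier, risk-based approach described above can be illustrated with a minimal sketch. The category assignments below mirror the examples mentioned in the act's text; the data structure and function names are purely illustrative, not anything the act prescribes:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers in the Parliament's text of the AI Act."""
    MINIMAL = 1       # e.g. spam filters, video games
    LIMITED = 2       # e.g. chatbots -> transparency obligations
    HIGH = 3          # e.g. robotic surgery, driverless cars -> strict rules
    UNACCEPTABLE = 4  # e.g. social scoring, subliminal techniques -> banned

# Illustrative mapping based on the examples discussed in the article
EXAMPLE_CLASSIFICATIONS = {
    "spam filter": RiskLevel.MINIMAL,
    "video game": RiskLevel.MINIMAL,
    "bank website chatbot": RiskLevel.LIMITED,
    "robotic surgery": RiskLevel.HIGH,
    "driverless car": RiskLevel.HIGH,
    "social scoring": RiskLevel.UNACCEPTABLE,
    "predictive policing": RiskLevel.UNACCEPTABLE,
}

def is_prohibited(use_case: str) -> bool:
    """Only the unacceptable-risk tier is banned outright."""
    return EXAMPLE_CLASSIFICATIONS.get(use_case) is RiskLevel.UNACCEPTABLE
```

The point of the sketch is simply that prohibition attaches to the tier, not to the technology as such: the same underlying technique may be permitted in one use case and banned in another.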
Under Article 6, a system will be considered high risk if it is intended to be used as a safety component of a product, or if it is covered by the EU harmonization legislation listed in Annex 2[4] and is required to undergo a third-party conformity assessment of health and safety risks. Annex 3 lists eight use cases that are considered high risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. The list includes biometric-based systems, vocational training systems, systems affecting employment and employee management, systems used in law enforcement, systems used in the management of migration, asylum and border control, and systems used in the administration of justice and democratic processes. Accordingly, for example, systems intended to be used for the recruitment or selection of candidates, systems used to screen or filter applications or evaluate candidates in tests or interviews, systems used to make decisions on promotion, termination and the assignment of duties based on behavior or personal characteristics, and systems used to monitor and evaluate performance and behavior would all be considered high risk.
Although the obligations stipulated for high-risk systems vary depending on the type of institution associated with the system, there are essentially seven obligations to be complied with: (i) a risk management system must be established; (ii) data governance practices must be put in place; (iii) the necessary technical documentation must be prepared before the system is placed on the market; (iv) record keeping must be facilitated; (v) the system must be developed to allow for transparency (provision of information to users); (vi) the system must be designed to allow for appropriate human oversight to prevent or minimize risks to health, safety or fundamental rights; (vii) an appropriate level of cybersecurity must be ensured throughout the life cycle of the system. High-risk systems must also undergo a conformity assessment. This assessment should be completed before they are placed on the market, and the models should be registered in a separate EU database.
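The seven obligations above lend themselves to being tracked as a simple compliance checklist. The helper below is a hypothetical sketch of such tracking, not a mechanism prescribed by the act:

```python
# The seven obligations for high-risk systems, as summarized in the text.
HIGH_RISK_OBLIGATIONS = [
    "risk management system established",
    "data governance practices in place",
    "technical documentation prepared before market placement",
    "record keeping facilitated",
    "transparency / provision of information to users",
    "appropriate human oversight designed in",
    "cybersecurity ensured throughout the life cycle",
]

def missing_obligations(completed: set) -> list:
    """Return the obligations not yet satisfied for a high-risk system."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in completed]
```

Since the conformity assessment must be completed before market placement, such a checklist would in practice need to reach zero open items before launch, not after.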
It is recommended that all practitioners of AI models, regardless of the risk group they fall under, apply the Code of Conduct on AI.
What Should Companies in Scope of the Act Do?
Pursuant to the Artificial Intelligence Act, organizations and companies that consider themselves to be in scope should first determine whether they have developed or launched artificial intelligence models, or procured such a product from a third-party provider, and list the models they identify in a model repository. Along with this inventory, the purpose, qualities and capabilities of the artificial intelligence models should be clarified.
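A model inventory of the kind described above could be sketched as a simple record per system. The schema and field names below are hypothetical; the act does not prescribe a repository format:

```python
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    """One entry in a company's AI model inventory (hypothetical schema)."""
    name: str
    purpose: str                  # intended purpose of the system
    capabilities: list = field(default_factory=list)
    third_party: bool = False     # procured from an external provider?
    risk_category: str = "unclassified"

# The model repository is then simply a list of such records:
repository = [
    AIModelRecord(
        name="cv-screening-tool",
        purpose="screen and filter job applications",
        capabilities=["ranking", "text analysis"],
        third_party=True,
        risk_category="high",  # employment use cases are high risk under Annex 3
    ),
]
```

Recording whether a model was procured from a third party matters here, because the applicable obligations differ depending on whether the organization acts as a provider, user, importer or distributor.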
Based on the definitions in the text of the act, parties that qualify as an artificial intelligence provider, user, importer or distributor should review the relevant rules and obligations and ensure that they comply with them. To ensure compliance and its continuity, they should conduct awareness-raising activities, assess the risks associated with their artificial intelligence systems, identify responsibilities, establish a formal governance system for compliance with the Artificial Intelligence Act, and create the infrastructure necessary to fulfill their obligations.
Sanctions
The sanctions that may be imposed in case of violation of the rules and obligations stipulated in the Artificial Intelligence Act may have serious consequences for the actors in the sector. Depending on the magnitude of the violation, fines ranging from 10 million to 40 million Euros or 2% to 7% of annual global turnover are foreseen in the law. Therefore, it is important for all stakeholders involved in the production and distribution of artificial intelligence systems to review the rules they are responsible for and take the necessary steps to ensure full compliance with the law.
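As a rough illustration of the scale of exposure involved: the figures below come from the ranges cited above, but the "whichever is higher" mechanic is an assumption modeled on the GDPR's approach, since the final calculation rules depend on the text that emerges from the trilateral negotiations:

```python
def max_fine_eur(annual_global_turnover_eur: float,
                 flat_cap_eur: float = 40_000_000,
                 turnover_pct: float = 0.07) -> float:
    """Illustrative upper bound for the most serious violations.

    Assumes a 'whichever is higher' rule (as under the GDPR); the act
    foresees fines of up to EUR 40 million or 7% of annual global turnover.
    """
    return max(flat_cap_eur, turnover_pct * annual_global_turnover_eur)

# For a company with EUR 1 billion in annual global turnover, 7% of
# turnover (EUR 70 million) would exceed the EUR 40 million flat cap.
```

Under this assumption, the percentage-based limb dominates for any company with more than roughly EUR 570 million in annual global turnover, which is why the large technology companies mentioned earlier face the greatest exposure.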
Conclusion
Artificial intelligence systems continue to transform our world in unpredictable and irreversible ways. Personalized healthcare, self-driving cars, virtual assistants and chatbots have become almost indispensable parts of our daily lives. However, such a rapid rise of artificial intelligence systems has raised many concerns about the protection of individuals' fundamental rights and freedoms, and some institutions have even chosen to ban such systems and applications in order to eliminate these concerns.
On the other hand, the EU, which defines itself as a leader in the European Data Strategy it has published, has taken concrete steps toward the first official legal regulation of artificial intelligence systems in history, envisaging a strict regime that directly affects the developers of artificial intelligence models. Although the regulation has not yet entered into force, organizations within its scope should review the text of the act, their rights, and the rules and obligations they are subject to, and start preparing for compliance now.
- The text of the Artificial Intelligence Act adopted by the European Parliament can be accessed via the link https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf (Date of Access: 07.08.2023).
- For the statement of the European Parliament dated June 14, 2023, see https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence, (Date of Access 07.08.2023).
- “Artificial intelligence system” (AI system) is defined as a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
- Annex 2 includes products covered by laws on the safety of toys, elevators, pressure equipment and diagnostic medical devices.
All rights of this article are reserved. This article may not be used, reproduced, copied, published, distributed, or otherwise disseminated without quotation or Erdem & Erdem Law Firm's written consent. Any content created without citing the resource or Erdem & Erdem Law Firm’s written consent is regularly tracked, and legal action will be taken in case of violation.