Reflections of the European Union Artificial Intelligence Act on Actors in Türkiye

31.08.2024 Sevgi Ünsal Özden

Introduction

The "Brussels Effect" refers to the phenomenon where European Union (“EU”) regulations influence or set standards globally. Since the EU is a significant market, global companies often find it practical and economically beneficial to adopt EU standards across all their operations rather than comply with multiple, differing local laws. This effect is particularly notable in digital and data protection realms, where EU laws like the General Data Protection Regulation swiftly become benchmarks worldwide, shaping practices far beyond Europe's borders[1].

Building on the concept of the "Brussels Effect," the European Union Artificial Intelligence Act[2] (“AI Act” or “Act”), which was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024, stands as another significant regulatory development with potential global influence. As the EU seeks to pioneer comprehensive regulations for artificial intelligence (“AI”) technologies, the AI Act could set precedents that extend beyond Europe, impacting how countries like Türkiye navigate the integration of AI into their technological, economic, and regulatory frameworks. This influence is particularly significant for Türkiye as it adapts to evolving global standards and aligns its AI initiatives with international expectations, potentially reshaping its own AI landscape.

With the implementation of this law, significant changes in business practices in Türkiye are anticipated. This article examines the scope and implementation timeline of the EU AI Act and its impact on Turkish sector actors, and briefly touches upon current developments in artificial intelligence within Türkiye.


Navigating the Scope and Timeline of the EU AI Act

The AI Act sets a comprehensive legal framework to regulate AI systems to ensure they are safe, transparent, and accountable[3]. The scope of the AI Act includes stringent requirements for high-risk AI applications, mandating risk assessments, transparency measures, and compliance with fundamental rights. It delineates clear boundaries for unacceptable risks, such as systems that manipulate human behavior or exploit vulnerabilities. This legislation fosters innovation while protecting individuals and upholding democratic values, setting a precedent for global AI governance.

Scope of Application

Under the AI Act, an “AI system” is defined as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This deliberately broad, technology-neutral definition aims to encompass the various methodologies that can be employed to develop AI systems, ensuring that the regulation remains relevant as technology evolves.

The EU AI Act allocates responsibilities among the various stakeholders associated with AI systems connected to the EU market. According to Article 2, it applies to (i) providers placing AI systems or general-purpose AI models ("GPAI models") on the EU market or putting them into service, irrespective of whether those providers are established or located within the EU or in a third country; (ii) deployers of AI systems established in the EU; and (iii) providers and deployers located outside the EU where the output produced by their AI systems is used within the EU. The Act also applies to importers and distributors of AI systems, as well as to product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark.

As can be seen, the Act extends beyond the borders of the EU, impacting global stakeholders in the AI field. This broad reach means that the AI Act applies not only to entities within the EU but also to those outside the EU in several key scenarios:

  • Non-EU Providers and Deployers: The AI Act applies to providers and deployers of AI systems located outside the EU if the outputs of their AI systems are used within the EU. This means that any AI system produced outside the EU but whose output impacts EU citizens or is utilized within the EU market falls under the regulation.
  • AI Systems Marketed or Put into Service in the EU: Providers outside the EU who place AI systems or GPAI models on the EU market or put them into service within the EU are also subject to the regulations. This includes AI systems that are available through online platforms accessible to EU users.

The above-mentioned extra-territorial scope of the Act necessitates that Turkish companies operating in the AI field, particularly those that engage with the EU market or impact EU citizens, comply with the regulation. This means that Turkish AI developers, providers, and deployers need to ensure their products and services meet EU standards, especially if their AI systems are used within the EU or their outputs have effects in the EU.

Timeline of the AI Act

The AI Act has a structured implementation timeline that spans several years to ensure comprehensive compliance and integration into existing systems across the EU and by entities affected globally. The Act formally came into force on 1 August 2024, and its various provisions will apply progressively over the following years. The obligations for the different risk categories are expected to be phased in gradually from February 2025 until the end of 2030[4]. This approach allows time for the development of the necessary infrastructure, such as AI regulatory sandboxes, and for the designation of competent authorities in EU member states. By February 2025, the prohibitions on unacceptable-risk AI will be enforced. By August 2026, all high-risk AI systems listed in Annex III of the AI Act, which include systems used in critical infrastructure, law enforcement, education, employment, access to essential public services, and other sensitive areas, must comply with the new regulations.

For companies and stakeholders in Türkiye and other non-EU countries, adhering to this timeline is essential as it impacts when they must align their AI operations with the AI Act to continue their business within the EU market. Türkiye's sector actors in AI must closely monitor these timelines to adapt their compliance and operational strategies effectively, ensuring they conform to EU standards that are likely to influence global AI regulatory frameworks.

Risk-Based Application of the Act

The AI Act introduces a risk-based approach to regulating artificial intelligence systems. This means that the regulatory requirements vary depending on the level of risk that an AI system poses to health, safety, and fundamental rights. The Act categorizes AI systems into four risk levels:

  1. Minimal Risk: This category includes AI applications like spam filters or AI-enabled video games, where the risk to rights or safety is low. These applications can operate with minimal regulatory constraints. 
  2. Limited Risk: This refers to AI systems that interact directly with users and could influence their choices or collect personal data without it being made explicit that an AI is being used. Examples include chatbots and other virtual assistants that provide information or support to users. Such systems are subject to transparency obligations: users must be informed that they are interacting with an AI system.
  3. High Risk: This category includes AI systems used in critical infrastructure (e.g. transport), educational or vocational training, employment (e.g. CV-sorting software for recruitment procedures), essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan), law enforcement, migration, asylum and border control management, and the administration of justice and democratic processes. High-risk AI systems are subject to strict compliance requirements before they can be placed on the market, including adequate risk assessment and mitigation, high levels of accuracy, robustness and cybersecurity, appropriate human oversight measures, and transparency obligations.
  4. Unacceptable Risk: The EU AI Act bans certain AI practices outright due to their potential threats to fundamental rights and democracy. These include AI systems that deploy manipulative or deceptive techniques to circumvent users' free will, systems that exploit the vulnerabilities of specific groups, 'real-time' remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions), AI-driven recognition of emotion in the workplace and educational institutions, untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, and systems that allow 'social scoring'.

Each level of risk dictates specific compliance obligations that range from basic transparency requirements to rigorous testing and documentation before deployment.

Recent Developments from Türkiye

In 2021, Türkiye launched its first national strategy document in the field of AI, the National Artificial Intelligence Strategy for 2021-2025[5]. This strategy, prepared by the Digital Transformation Office of the Presidency of the Republic of Türkiye and the Ministry of Industry and Technology, includes a detailed action plan that emphasizes innovation, the integration of AI technologies in business and government, and enhancing AI education and research capabilities.

On the other hand, there is currently no artificial intelligence law in force in Türkiye. However, a significant development occurred on 25 June 2024, when an AI regulation bill was introduced in parliament. The bill is undergoing the standard legislative review process in parliamentary committees. The proposed regulation consists of eight articles aiming to create a general framework for AI regulation in Türkiye, emphasizing principles such as security, transparency, equality, accountability, and privacy, but it does not provide detailed guidance on the implementation and enforcement of these principles. Meanwhile, the Medium Term Program for the years 2025-2027[6], recently published by the Presidency of the Republic of Türkiye Strategy and Budget Directorate, states that the necessary legal arrangements will be made to harmonize Turkish legislation with the EU AI Act.

In addition, the Turkish Personal Data Protection Authority (“KVKK”) has issued guidelines[7] on AI usage across various sectors, emphasizing human rights, data use limitations, and transparency. These non-binding guidelines reflect the KVKK's approach to integrating AI with data privacy standards.

Conclusion

For Turkish businesses in the AI sector, the regulatory environment outlined in this article presents challenges but also opportunities. Companies based in Türkiye and operating in the field of AI that wish to be active in the EU market must navigate compliance with a robust regulatory framework that may differ significantly from local Turkish regulations. At the same time, aligning with the AI Act could open doors to the European market, positioning Turkish companies as compliant and competitive on a global scale.

This compliance is not just about meeting legal requirements but also about leveraging these standards to enhance the reputation and reliability of Turkish AI technologies abroad. By adhering to the EU's stringent AI regulations, Turkish companies can differentiate themselves, demonstrating their commitment to safe, ethical, and transparent AI practices, thus potentially gaining a strategic advantage in international markets.



