ChatGPT: A Grey Zone Between Privacy, Cybersecurity, Human Rights and Innovation

30.04.2023 Tilbe Birengel

Introduction

ChatGPT, a large language model (LLM) developed by OpenAI, is an artificial intelligence (AI) system based on deep learning techniques and neural networks for natural language processing.[1]

ChatGPT can process and generate human-like text, chat, analyse and answer follow-up questions, and acknowledge errors. It can also improve code in programming languages such as Python in a matter of seconds. With the release of the more advanced GPT-4 model in March 2023, the system achieved higher performance and greater functionality in many respects, including problem solving and image processing.[2]

AI models are expected to open up many opportunities by increasing productivity, creating new search engine architectures, and reducing costs in healthcare, finance and public administration.[3] The rapid development in this area by OpenAI and competitors such as Google and Meta is exciting to watch, but it also raises major concerns, which are discussed below.

Potential Risks of ChatGPT

Given that LLMs are a form of “generative AI”, these models generate their output based on their training data, which may include copyrighted material or confidential, biased or discriminatory information.[4] Any data fed into the system may, in turn, become training material for future models.

The massive collection and processing of data for AI training does not comply with applicable privacy rules such as the GDPR[5], as it lacks transparency and a legal justification.[6] Furthermore, the chatbot does not provide an immediate option to remove previously stored data. It is also unclear whether the data collected will be shared with OpenAI’s other tools, leaving it open to information hazards.[7]

The unreliability of the responses generated by ChatGPT leads to inaccuracies in data processing and increases the likelihood of misinformation.[8] Widespread use of such models could enable disinformation and manipulation at a level where users cannot tell whether a text is human- or AI-generated, or whether content is real or a deepfake.[9] This could lead to widespread deception in media, education and politics.

ChatGPT’s fluent text generation and capacity for social engineering make it much easier to use for malicious purposes such as phishing. Europol’s recent report on the subject shows that preparing personalised scams has become much easier and faster.[10] By imitating a person’s speech style and producing authentic-sounding fraudulent text, the tool makes victims more likely to believe they are in contact with their loved ones. ChatGPT’s ability to generate code in a short amount of time also facilitates the development of malicious software for cyberattacks. It thus provides a valuable tool to criminal actors with little technical knowledge and becomes a major threat to cybersecurity.

There are also ethical concerns about the outputs of AI technology, such as discrimination, exclusion and toxicity.[11] As AI models are trained on existing online data, which by nature reflects dominant social groups, their outputs are expected to raise issues of diversity and inclusion.[12] Although developers are working on safeguards to minimise such results, prompt engineering, i.e. rephrasing the way a question is asked, appears effective in bypassing ChatGPT’s safety measures.

Accountability and explainability are further concerns in the use of LLMs.[13] The authenticity of the tool’s output is highly controversial, as ChatGPT is not yet able to respect intellectual property rights in its responses. The neural networks and internal operating principles of the technology are complex and opaque. As the impact of LLMs grows significantly, the accountability and liability of their developers and operators remain important issues for policymakers to address.

LLMs such as ChatGPT are likely to have other impacts at a societal level. By automating some tasks and jobs, such as translation and software code development, they could lead to job displacement in some areas.[14]

Recent Responses to ChatGPT from Regulators and Private Sector

At the time of drafting this article, public unease about LLMs is higher than ever.

The Future of Life Institute published an open letter outlining the risks that human-competitive intelligence poses to society and calling for a pause of at least six months in the development of advanced AI systems, arguing that existing safety protocols are inadequate. The letter was signed by public figures such as Steve Wozniak and Elon Musk, in addition to many scholars and technologists.[15]

At the end of March 2023, the Italian Data Protection Authority temporarily restricted ChatGPT, citing privacy breach concerns.[16] The lack of legal basis and information on the mass collection and processing of personal data for the purpose of training AI algorithms was criticised. Inaccuracies in data processing and the lack of an age verification mechanism to exclude use by children were also raised as concerns. OpenAI has until the end of April 2023 to fulfil the measures requested by the Italian Data Protection Authority, in order to continue operating in Italy.

Meanwhile, the Spanish and French data protection authorities have launched investigations to review potential data breaches by ChatGPT, and the European Data Protection Board has set up a task force for EU-wide cooperation.[17]

Private industry is taking its own measures and imposing its own restrictions. After reports that some Samsung employees had fed ChatGPT sensitive data, including source code, the need to educate employees about the risks of the tool has become more apparent. Major companies such as Verizon, Amazon, Bank of America, Goldman Sachs, Citigroup and Deutsche Bank have banned the use of ChatGPT in the workplace, while others are drafting policies on its acceptable use.

Conclusion

For many tasks, ChatGPT seems to be a useful tool, and it is likely to increase opportunities and productivity in various fields. However, users need to be aware of the risks of such AI technologies and avoid feeding them sensitive data until adequate safeguards are in place.

In terms of privacy, the tool lacks transparency and a legal basis. Unless users explicitly opt out, any data provided to the chatbot will be used to train the LLM, and the same data could later appear in output generated for other users.

If used on a large scale, the chatbot could fuel misinformation and manipulation due to the unreliability of its answers, as users might not be able to distinguish whether an output is real or a deepfake. ChatGPT also appears to be a serious threat to cybersecurity, given its ability to generate text and facilitate the creation of malicious software code in a short period of time.

Although ChatGPT developers are working on some safeguards, ethical concerns remain. These include discrimination, exclusion and toxicity in the chatbot's output. Accountability and explainability of this complex technology are further concerns.

