ChatGPT: A Grey Zone Between Privacy, Cybersecurity, Human Rights and Innovation
Introduction
ChatGPT, a large language model (LLM) developed by OpenAI, is an artificial intelligence (AI) system based on deep learning techniques and neural networks for natural language processing.[1]
ChatGPT can process and generate human-like text, hold a conversation, answer follow-up questions, and acknowledge its errors. It is also adept at analysing and improving code in programming languages such as Python in a matter of seconds, as shown in the sketch below. With the release of the more advanced GPT-4 model in March 2023, it achieved higher performance and broader functionality in many areas, including problem solving and image processing.[2]
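To illustrate this code-assistance capability, the sketch below shows how a developer might ask the model to review and improve a Python function programmatically. It is a minimal example based on the openai Python client as available in early 2023; the placeholder API key, the chosen model name and the prompt wording are illustrative assumptions rather than recommendations.

```python
# Minimal sketch: asking ChatGPT to improve a Python function via the
# OpenAI API (openai Python client, pre-1.0 interface as of early 2023).
# The API key, model name and prompt below are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: replace with your own credential

snippet = '''
def average(numbers):
    total = 0
    for n in numbers:
        total = total + n
    return total / len(numbers)
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Improve this Python function:\n{snippet}"},
    ],
)

# The suggested rewrite comes back as plain text in the first choice.
print(response["choices"][0]["message"]["content"])
```

Note that anything placed in the prompt, including proprietary source code, is transmitted to the provider and may be retained, which is precisely the privacy risk discussed below.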
AI models are expected to open up many opportunities by increasing productivity, creating new search engine architectures, and reducing costs in healthcare, finance and public administration.[3] The rapid development in this area by OpenAI and its competitors, such as Google and Meta, is exciting to watch, but it also raises major concerns, which are discussed below.
Potential Risks of ChatGPT
Given that LLMs are a form of “generative AI”, these models generate their output from the training data at hand, which may include copyrighted material as well as confidential, biased or discriminatory information.[4] This means that any data fed into the system may become training material for the next models.
The massive collection and processing of data for AI training does not comply with applicable privacy rules such as the GDPR,[5] as it lacks transparency and legal justification.[6] Furthermore, the chatbot does not provide an immediate option to remove previously stored data. It is also unclear whether the data collected will be shared with OpenAI’s other tools, leaving it open to information hazards.[7]
The unreliability of the responses generated by ChatGPT leads to inaccuracies in data processing and increases the likelihood of misinformation.[8] Widespread use of such models could trigger disinformation and manipulation at a level where users cannot distinguish whether a text is human- or AI-generated, or whether content is real or a deepfake.[9] This could lead to widespread deception in media, education and politics.
ChatGPT’s sophisticated text generation and social-engineering potential make it much easier to use for malicious purposes such as phishing. Europol’s recent report on the subject shows that the preparation of personalised scams has become much easier and faster.[10] By impersonating a person’s speech style and producing authentic-sounding fraudulent text, the tool can make victims believe they are in contact with their loved ones. ChatGPT’s ability to generate code in a short amount of time also facilitates the development of malicious software for cyberattacks. It provides a valuable tool to criminal actors with little technical knowledge and thus becomes a major threat to cybersecurity.
There are also ethical concerns about the outputs of AI technology, such as discrimination, exclusion and toxicity.[11] As AI models are trained on existing online data, which largely reflects dominant social groups, their outputs are expected to raise issues of diversity and inclusion.[12] Although developers work on safeguards to minimise such results, prompt engineering, namely rephrasing the way a question is asked, appears to be effective in bypassing ChatGPT’s safety measures.
Accountability and explainability are further concerns in the use of LLMs.[13] The authenticity of the tool’s output is highly controversial, as ChatGPT is not yet able to respect intellectual property rights in its responses. The neural networks and internal operating principles of the technology are complex and opaque. As the impact of LLMs grows significantly, the accountability and liability of their developers and operators remain important issues for policymakers to address.
LLMs such as ChatGPT are likely to have other impacts at a societal level. By automating some tasks and jobs, such as translation and software code development, they could lead to job displacement in some areas.[14]
Recent Responses to ChatGPT from Regulators and Private Sector
At the time of writing, the unease that LLMs cause in society is higher than ever.
The Future of Life Institute published an open letter outlining the risks that human-competitive intelligence poses to society and calling for a pause of at least six months in the development of advanced AI systems, arguing that current safety protocols are inadequate. The letter was signed by public figures such as Steve Wozniak and Elon Musk, in addition to many scholars and technologists.[15]
At the end of March 2023, the Italian Data Protection Authority temporarily restricted ChatGPT, citing privacy concerns.[16] It criticised the lack of a legal basis for, and of information about, the mass collection and processing of personal data for the purpose of training AI algorithms. Inaccuracies in data processing and the absence of an age verification mechanism to exclude use by children were also raised as concerns. OpenAI has until the end of April 2023 to implement the measures requested by the Italian Data Protection Authority in order to continue operating in Italy.
Meanwhile, the Spanish and French data protection authorities have launched investigations to review potential data breaches by ChatGPT, and the European Data Protection Board has set up a task force for EU-wide cooperation.[17]
The private sector is taking its own measures and imposing restrictions. After reports that some Samsung employees fed sensitive internal source code into ChatGPT, the need to educate employees about the risks of the tool has become more apparent. Major companies such as Verizon, Amazon, Bank of America Corp, Goldman Sachs, Citigroup Inc and Deutsche Bank AG have banned the use of ChatGPT in the workplace, while others are drafting policies on its acceptable use.
Conclusion
For many tasks, ChatGPT seems to be a useful tool, and it is likely to increase opportunities and productivity in various fields. However, users need to be aware of the risks of such AI technologies and avoid feeding them with sensitive data until adequate safeguards are in place.
In terms of privacy, the tool lacks transparency and legal grounds. Unless users explicitly opt out, any data provided to the chatbot will be used for the LLM's training purposes. The same data could appear as output for other users' needs.
If used on a large scale, the chatbot could lead to misinformation and manipulation due to the unreliability of its answers, as users might not be able to distinguish whether an output is real or a deepfake. ChatGPT also appears to be a serious threat to cybersecurity, given its ability to generate convincing text and facilitate the creation of malicious software code in a short period of time.
Although ChatGPT developers are working on some safeguards, ethical concerns remain. These include discrimination, exclusion and toxicity in the chatbot's output. Accountability and explainability of this complex technology are further concerns.
[1] OpenAI (2023), ChatGPT, for access: Introducing ChatGPT (openai.com).
[2] Europol (2023), Tech Watch Flash, “ChatGPT: The Impact of Large Language Models on Law Enforcement”, 27.03.2023 (“Europol ChatGPT Report”), for access: ChatGPT - the impact of Large Language Models on Law Enforcement | Europol (europa.eu).
[3] Tambiama Madiega, European Parliamentary Research Service, Digital Issues in Focus, “General-Purpose Artificial Intelligence”, 31.03.2023, for access: General-purpose artificial intelligence | Epthinktank | European Parliament.
[4] OECD (2023), “AI Language Models: Technological, Socio-Economic and Policy Considerations”, OECD Digital Economy Papers, No. 352, OECD Publishing, Paris (“OECD AI Report”), p. 10, for access: AI language models: Technological, socio-economic and policy considerations | en | OECD.
[5] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (“GDPR”) OJ L 119, 04.05.2016, p. 1–88.
[6] Information Commissioner’s Office, The Alan Turing Institute, “Explaining Decisions Made with AI”, p. 10, for access: https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/.
[7] OECD AI Report, p. 30.
[8] OECD AI Report, p. 33.
[9] Europol ChatGPT Report, p. 11.
[10] Europol ChatGPT Report, p. 7.
[11] OECD AI Report, p. 30.
[12] Tambiama Madiega, European Parliamentary Research Service, Digital Issues in Focus, “General-Purpose Artificial Intelligence”, 31.03.2023, for access: General-purpose artificial intelligence | Epthinktank | European Parliament.
[13] OECD AI Report, p. 36.
[14] OECD AI Report, p. 39.
[15] The Future of Life Institute, “Pause Giant AI Experiments: An Open Letter”, for access: Pause Giant AI Experiments: An Open Letter - Future of Life Institute.
[16] Decision of the Garante Per La Protezione Dei Dati Personali (Italian Data Protection Authority) dated 31.03.2023, for access: Intelligenza artificiale: il Garante blocca ChatGPT. Raccolta illecita di... - Garante Privacy.
[17] European Data Protection Board Press Release dated 13.04.2023, for access: EDPB resolves dispute on transfers by Meta and creates task force on ChatGPT | European Data Protection Board (europa.eu).