
This post is authored by K. P. Hemanth Kumar, Privacy Legal Counsel at Philips, Amsterdam, The Netherlands, and B. Barathan, Advocate, Madras High Court.
1. INTRODUCTION
This short article explores a legal gray area in the automated processing of personal data. It begins with the potential risks of using AI systems to process personal data automatically, and then discusses how the Digital Personal Data Protection Act, 2023, together with the notifications of the Ministry of Electronics and Information Technology on AI, addresses those risks. The European Union (“EU”) General Data Protection Regulation (“GDPR”)[1] and the Artificial Intelligence Act, 2024 (“AI Act”) are relied on only to highlight the differences between the two legal frameworks for automated processing of personal data. Finally, the article concludes with a note on the legal improvements required in India.
2. POTENTIAL RISKS IN AUTOMATED PROCESSING OF PERSONAL DATA
Artificial Intelligence has no single widely accepted definition. The Merriam-Webster online dictionary defines it as “the capability of computer systems or algorithms to imitate intelligent human behavior”. The AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.[2] Today, AI systems are used wherever automation is required, ranging from simple chatbots for service support to robotic systems that assist doctors in performing complex operations. An AI system can be a standalone algorithm, i.e., just a piece of software or an application, or a combination of algorithm and hardware performing some task. No matter the form in which AI systems are deployed, there are numerous instances involving active processing of personal data for delivering services to customers. AI systems are often called “black boxes” because it is unclear how they generate their results: their decision-making process lacks transparency, making it difficult to understand how they arrive at a particular outcome.[3]
With such automated systems, the risk may lie within the process itself, affecting the result produced, or the system may be configured to produce a result that is harmful in some way. A real-life instance throws light on the former situation: the Epic Sepsis Model, deployed in hospital settings to predict the onset of septic shock in ICU patients who are often unconscious. Owing to an inherent bias in the system, the model failed to predict the onset of sepsis in 67% of cases. Here, the inherent risk in the system was the bias that interfered with the proper functioning of the automated model.
There is a reason such biases or discriminatory results occur. AI systems function on statistical methods and learn through trial and error.[4] Hence their decisions are probabilistic, not absolute.[5] There is always a margin of error that can manifest as “bias” in the output, and that bias may be discriminatory. Sometimes the margin is so large that the whole system fails; sometimes it remains latent and shows up only in a few instances.
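To make this concrete, consider the following minimal sketch in Python. It uses synthetic data and a plain logistic regression; it has nothing to do with the Epic Sepsis Model or any real deployment. It only illustrates how a model trained on data dominated by one group can report a respectable overall accuracy while its error stays latent and concentrated in an under-represented group:

```python
# A minimal, illustrative sketch (synthetic data, not any real system):
# a classifier trained mostly on one group carries latent error that
# surfaces only for the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A is heavily over-represented, so the model learns A's signal.
n_a, n_b = 900, 100
X_a = rng.normal(size=(n_a, 2))
y_a = (X_a[:, 0] > 0).astype(int)   # group A: outcome driven by feature 0
X_b = rng.normal(size=(n_b, 2))
y_b = (X_b[:, 1] > 0).astype(int)   # group B: outcome driven by feature 1

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
model = LogisticRegression().fit(X, y)

# Aggregate accuracy looks acceptable, but the "margin of error" is
# concentrated in group B, where predictions are close to coin flips.
print("overall accuracy:", model.score(X, y))
print("group A accuracy:", model.score(X_a, y_a))
print("group B accuracy:", model.score(X_b, y_b))
```

A system evaluated only on its aggregate accuracy would pass review while still producing effectively arbitrary, and potentially discriminatory, decisions for the minority group.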
When working with systems that are prone to such error or bias, it is both logical and moral to allow the users affected by the system some kind of redressal against its usage in their case. Such redressal can take the form of enforceable rights under law, especially when personal data processing is involved. This would increase users' confidence in the system and bring legal certainty. A good legal framework should address these risks, and in this context the prevailing Indian law is analyzed in the following section.
3. LEGAL FRAMEWORK FOR AUTOMATED PERSONAL DATA PROCESSING IN INDIA
With respect to the usage of AI systems, in March 2024 the Ministry of Electronics and Information Technology (“MeitY”) issued an advisory to intermediaries. MeitY also recently published the subcommittee's report on 'AI Governance Guidelines Development' for public consultation. Dehors these, there is no other advisory or legislation on AI in India. India's Digital Personal Data Protection Act, 2023 (“DPDPA”) aims to regulate the processing of personal data. Though the Act has received the President's assent, it has not yet been notified, as the rules are still being drafted and the bureaucratic machinery for implementing the law is yet to be set up.
Section 8 of the DPDPA outlines the general obligations of a data fiduciary (the controller, in GDPR parlance) when handling the personal data of data principals (data subjects), including the duty to process such personal data in a fair, lawful, and accurate manner. Though the core principles of processing personal data are not spelled out explicitly, they can be read between the lines.
Section 8(3) deals only with the processing of personal data by a data fiduciary to make a decision that affects the data principal. The legislation is otherwise silent on automated decision-making. This omission raises serious concerns in the context of AI-driven decision-making, where automated systems are increasingly used for customer interactions, loan approvals, recruitment, and law enforcement.
Section 8(3) of the Digital Personal Data Protection Act, 2023 reads as follows:
8. (3) Where personal data processed by a Data Fiduciary is likely to be—
(a) used to make a decision that affects the Data Principal; or
(b) disclosed to another Data Fiduciary,
the Data Fiduciary processing such personal data shall ensure its completeness, accuracy and consistency.
In the case of AI-powered automated decision-making, there is no proper redressal mechanism for data principals either to object to the mechanism itself or to its output if it is found to be discriminatory. Aggrieved individuals have to fall back on the DPDPA's general personal data processing provisions. This lack of clarity in the law leaves room for debate and legal grievances.
4. A PEEK INTO THE EUROPEAN LEGAL FRAMEWORK - INTERPLAY BETWEEN GDPR AND AI ACT
The European Union passed the GDPR in 2016 to regulate the processing of personal data, and it has been in force since 2018. The core principles of informational privacy are ingrained in Article 5 of the GDPR, which lays down the basic ingredients for processing the personal data of data subjects. These principles are lawfulness, fairness, and transparency[6], purpose limitation[7], data minimization[8], storage limitation[9], accuracy[10], integrity and confidentiality (security), and accountability. The GDPR also grants data subjects (data principals, in DPDPA terms) many rights to give them more control over their data, such as restricting processing on certain grounds[11] and requesting the deletion of their personal data.[12]
With respect to automated decision-making, Article 22 of the GDPR[13] (read with recital 71[14]) grants data subjects the right to challenge fully automated decisions when such decisions have legal or similarly significant effects on them.
A legal or similarly significant effect can be loosely described as an act that affects one's rights or freedoms, such as the automatic rejection of a candidature by an AI system or the automatic rejection of a loan application. In such a scenario, under the GDPR, the customer has the right to request human intervention, ensuring that their case is reviewed by an actual support agent rather than being decided solely by an AI system.
In the area of automated processing, the new AI Act complements the GDPR. It classifies AI applications based on the risk they pose to the rights and freedoms of individuals, viz., minimal, limited, high, and unacceptable risk, based on parameters laid down in the regulation. Article 14 of the AI Act[15] mirrors the human intervention requirement in the GDPR, and it applies to AI systems whether or not they process personal data. The human-in-the-loop requirement is mandatory for automated processes, to keep a check on errors in the result.[16] The true effectiveness of the AI Act's implementation is yet to be seen, and a full understanding and appreciation of its impact will take time. Further, the AI Act's strict post-market monitoring[17] is meant to ensure that systems continue to function correctly once on the market.
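For illustration only, the sketch below shows one way the human-in-the-loop idea can be wired into an automated decision flow. Every name and threshold here is a hypothetical assumption; neither the GDPR nor the AI Act prescribes any particular implementation:

```python
# A minimal, hypothetical sketch of a human-in-the-loop decision flow in
# the spirit of GDPR art. 22 and AI Act art. 14. The threshold, the queue
# and all names are illustrative assumptions, not legal requirements.
from typing import Callable

REVIEW_QUEUE: list[dict] = []

def automated_decision(application: dict,
                       model: Callable[[dict], float],
                       review_threshold: float = 0.9) -> str:
    p_approve = model(application)                  # model's P(approve)
    outcome = "approve" if p_approve >= 0.5 else "reject"
    confidence = max(p_approve, 1.0 - p_approve)
    # Adverse or low-confidence outcomes do not take effect automatically:
    # they are queued for a human reviewer, so the applicant is never
    # bound by the machine's output alone.
    if outcome == "reject" or confidence < review_threshold:
        REVIEW_QUEUE.append({"application": application, "proposed": outcome})
        return "pending human review"
    return outcome
```

The design point is that the model's output is treated as a proposal rather than a final decision whenever it could significantly affect the individual.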
5. CONCLUSION
If the DPDPA gave data principals the right to restrict the processing of personal data (similar to Article 18 of the GDPR), it would be an effective way to keep automated decision-making over personal data in check. Its absence gives rise to an all-or-none situation in which the data principal can only choose between not having their data processed at all and accepting whatever decision the automated process gives, with no middle ground.
The legislature should consider appropriate amendments to the DPDPA to accommodate the deployment of AI systems for processing personal data. It will not be long before the public grows conscious of their privacy rights and starts enforcing them. Unlike the EU legislation, India has no classification of AI systems or applications based on the nature of the risk a particular AI system may pose to personal data.
ENDNOTES:
[1]Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1; [GDPR (General Data Protection Regulation)]
[2] Regulation (EU) 2024/1689 on artificial intelligence (Artificial Intelligence Act) [2024] OJ L 2024/1689, art. 3(1); [AI Act (Artificial Intelligence Act)]
[3] B Ozaydin, ES Berner and JJ Cimino, 'Appropriate use of machine learning in healthcare' (2021) 5 Intelligence-Based Medicine 1.
[4] O Osoba and W Welser IV, An Intelligence in our Image: The Risk of Bias and Errors in Artificial Intelligence (Rand Corporation 2017) 3-5
[5] O Osoba and W Welser IV, An Intelligence in our Image: The Risk of Bias and Errors in Artificial Intelligence (Rand Corporation 2017) 3
[6] GDPR (General Data Protection Regulation), art. 5(1)(a)
[7] GDPR (General Data Protection Regulation), art. 5(1)(b)
[8] GDPR (General Data Protection Regulation), art. 5(1)(c)
[9] GDPR (General Data Protection Regulation), art. 5(1)(e)
[10] GDPR (General Data Protection Regulation), art. 5(1)(d)
[11] GDPR (General Data Protection Regulation), art. 18
[12] GDPR (General Data Protection Regulation), art. 17
[13] GDPR (General Data Protection Regulation), art. 22
[14] GDPR (General Data Protection Regulation), recital 71
[15] AI Act (Artificial Intelligence Act), art. 14
[16] AI Act (Artificial Intelligence Act), art. 14
[17] AI Act (Artificial Intelligence Act), art. 3(25)