What is Artificial Intelligence?
Artificial Intelligence (AI) refers to the study of ways in which computer-based systems can be made to perform functions that are normally performed by humans. The field of research in AI is extremely vast and comprises various sub-fields such as robotics, natural language processing and the simulation of human sensory capabilities[1]. AI is now present in almost every avenue of life. AI, however, does not possess the ability to think or introspect in the way humans do: the intellect it applies is supplied by human minds, and what feeds it is data.
AI has the capacity to permeate every aspect of our society. It is present in critical domains such as healthcare and humanitarian aid, as well as in comparatively mundane ones such as dating. The use of embodied AI in robotics and techniques like machine learning may advance social welfare and the exercise of human rights, and numerous sectors continue to benefit from AI. At the same time, AI carries within it scope for misuse and may behave in unprecedented or harmful ways. This is why questions about the role of law and ethics are more important than ever.
Luciano Floridi argues that because the digital revolution is transforming our views about values and priorities, governing it has become a fundamental issue. The technological excitement around expert systems must be balanced against their social, political and legal implications[2]. AI has proliferated into high-risk areas and is attracting pressure to be designed and governed so that it is accountable, fair and transparent. The central question is how this can be achieved, and through what frameworks.
Ethical Concerns
There are numerous debates about the ethical conflicts surrounding AI. The most prominent public concern about AI is its inscrutability and lack of transparency. This opacity is not merely due to the technology being new; its complexity makes the process all but impossible for a layperson to understand. AI operates behind a veil, in the hands of a few developers, that no one else can see through. This is what has made AI the mysterious concept commonly referred to as the “black box”.
Ethical questions arise when AI systems are given significant decision-making power. Striking a balance between autonomy and human control is crucial so that important choices are not ceded to AI without human oversight and accountability. These systems also often rely on vast amounts of personal data, raising concerns about privacy and informed consent. Ethical frameworks should ensure that individuals’ data is protected and that users are informed about how the AI makes its decisions and how their data is used.
Legal Concerns
As AI systems make decisions that can have significant consequences, determining liability becomes a challenging task. Who should be held accountable if an AI-driven decision results in harm: the developers, the users, or the AI system itself? Legal frameworks need to be established to address these issues and define responsibility clearly. Legal standards must also be developed to ensure that AI systems can provide understandable explanations for their decisions, especially in high-stakes scenarios like healthcare and legal proceedings[3]. It is also pertinent to note that generative AI platforms such as ChatGPT or Google’s Bard may not be able to process the personal data of Indians available in the public domain under the latest draft of the Digital Personal Data Protection (DPDP) Bill, 2023, which was approved by the cabinet last month.
AI has in some cases been deployed in the Indian legal system without adequate transparency. For example, in 2020, an AI-powered facial recognition system was used in the investigation of the Delhi riots, raising concerns about its opacity. The judgement calls made by such systems may be called into question precisely because of this lack of transparency.
The Way Forward
In the future, AI is expected to simplify and accelerate pharmaceutical development. It may be able to transform drug discovery from a labor-intensive into a data-intensive process by using robotics together with models of genetic targets, drugs, organs, diseases and their progression, pharmacokinetics, safety and efficacy[4].
AI systems will continue to become more autonomous and will reduce the amount of human intervention required. This, however, poses a challenge to accountability, as it may be difficult to identify the persons responsible for a given outcome. The challenge is particularly acute in contexts such as healthcare, where AI may make life-or-death decisions, or in law enforcement agencies’ use of facial recognition technology, which has been criticized for its propensity to perpetuate biases and violate privacy rights[5]. As AI becomes more deeply incorporated into the legal system, it will be critical to address these concerns about transparency and accountability and to guarantee the ethical and accountable use of AI.
As of now, there is no law in India that specifically regulates AI, machine learning, or big data, but the Government has recognized the need to focus on the development and implications of AI. The Information Technology Act, 2000 deals with data protection, cybersecurity, and electronic transactions, and the Digital Personal Data Protection Bill, 2022 deals with data privacy, consent, and the processing of personal data; both would be relevant to AI systems. The Union Ministry of Commerce and Industry also established an Artificial Intelligence Task Force in 2017 with the aim of integrating AI into India’s economic, political, and legal thought processes.
Conclusion
These measures and provisions in our legal system suggest that supervision of AI in the legal field is important, and that AI must be both protected and regulated. The absence of a dedicated legislative framework for the use of AI nevertheless raises questions about accountability and transparency. The formation of committees and institutes, as well as the publication of the National Strategy for Artificial Intelligence (NSAI) discussion paper by NITI Aayog, are positive milestones.
It is beyond doubt that the integration of AI into the Indian legal system will improve its efficiency and accuracy. That does not mean, however, that questions about accountability and transparency can be left unaddressed; they must be answered to ensure that AI is employed ethically. Unlike human decision-making, AI decisions are systematic, and even though the absence of effective legal frameworks currently means there are few legal repercussions, that systematic nature means questions of accountability will always arise. The use of AI presents moral dilemmas, as it is likely to replace existing systems, but not using AI could also be unethical.
[1] Omar E. M. Khalil, “Artificial Decision-Making and Artificial Ethics: A Management Concern”, 12(4), Journal of Business Ethics (1993).
[2] Supra note 1.
[3] G. Rong, A. Mendez, E. Bou Assi, B. Zhao & M. Sawan, “Artificial Intelligence in Healthcare: Review and Prediction Case Studies”, Engineering (2020).
[4] J. Stephenson, “WHO Offers Guidance on Use of Artificial Intelligence in Medicine”, JAMA Health Forum (2021).
[5] Nivash Jeevanandam, “AI in judicial systems”, India AI (2023).