AI - Regulatory Framework
“The choices that we make today will shape the AI-enabled world that future generations will inherit”
“Advancing secure, trustworthy and robust AI is foundational to building trust and maximizing societal and economic benefits. Noting that deepening our understanding of the potential security aspects remains important, we recognize the importance of security in AI systems, industry-led voluntary measures, and the adoption of technical solutions, and appropriate policy frameworks that enable innovation while promoting the public interest throughout the AI’s lifecycle.”
– 0 – 0 – 0 – 0 – 0 –
The above are the declaratory remarks of the AI Impact Summit that concluded in India on February 21, 2026. We at TheLawyerics have chosen to highlight this to examine whether there is any distinct possibility that an international regulatory framework will emerge in the near future. The seeds seem to have been sown in the declaration. The key expression in the declaration is “potential security aspects”.
AI is a mixed fruit basket containing sweet, sour, bitter and even poisonous fruits. The choice is ours. Rajiv Malhotra, in Artificial Intelligence and the Future of Power: 5 Battlegrounds (Rupa, 2021, p. 363), aptly describes the impact of AI as follows: “Artificial Intelligence is disrupting many fragile equilibriums that hold together societies and the present world order. … Clearly, there is insufficient awareness of AI among the social, faith, and political leaders – as compared to the heightened awareness on other issues like global warming, pollution, water supply, genetically modified foods, pandemics, nuclear threats, and so forth.” You will notice that all these matters, except perhaps water supply, are regulated by statutory frameworks because they are risky.
During the last two centuries or so, whenever a new technology was developed that was both beneficial and risky, such as motor vehicles (including railways, aircraft, etc.), Governments stepped in to regulate it keeping in view its potential risks. The innovation of nuclear technology gave rise to the Vienna Convention in 1963. But Governments the world over remain clueless about how to regulate products and services stemming from digital and electromagnetic technologies.
AI is no less risky than the technologies mentioned in the preceding para. It is impacting our mental faculties. Rajiv Malhotra (Introduction, p. xix) expresses this concern as follows: “A troubling trend is that as machines get smarter, a growing number of humans are becoming dumber. In a sense, the public has outsourced its critical thinking, memory and agency to increasingly sophisticated digital networks. As in any outsourcing arrangement, the provider of services becomes more knowledgeable about the client’s internal affairs and the client dependent on the supplier. The quest for deep knowledge and critical thinking is becoming a thing of the past because it is easier for people to use internet searches whenever any information is needed. People are operating on autopilot rather than thinking and learning on their own.” In the same vein, the author says, “… AI is also making a growing number of people cognitively and psychologically dependent on digital networks.” (title cover)
The upshot is that in a few years, or at most a few decades, we shall stop using some of our cognitive faculties. AI would make us forget them. Let me explain this with reference to our ancient history. Our scriptures explain that there is a human faculty called the third eye. It is capable of seeing certain subtle things that are not visible to our physical eyes. It is something like intuition, beyond our normal thinking, and is capable of seeing the past and the future. In ancient times, people cultivated it by introspecting on what they had done in the past, what they were doing in the present, and what the impact could be in the future; because we no longer do so, we have lost the power of the third eye. We may similarly lose some of our natural intelligence. After all, AI is a collection of patterns which it determines from a large store of data provided to it. The AI then guides us on what to do or not to do, within those patterns. That becomes our limitation. After some time, we shall stop believing that we can rise above those patterns, just as we have stopped believing in the third eye.
Another set of authors, Henry A. Kissinger, Craig Mundie and Eric Schmidt, in Genesis: Artificial Intelligence, Hope, and the Human Spirit (Hachette India, 2024, p. 57), predict: “… it should not be altogether surprising, considering the speed, complexity, diversity, scale, and resolution of these new intelligences. Nevertheless, it may be intensely disruptive. The appearance of knowledge – especially of the physical world – that is unique to an AI and not previously possessed by a human would force a reconsideration of the relative status of the human mind. Humans placing our own brains on a new, more continuous spectrum of intelligence would revolutionize our perceptions, self-perceptions, and behaviors.” The italicised expression appears strange to me.
Ultimately, my hypothesis is this: let us not make AI so autonomous that it starts disabling our natural intelligence. The natural is natural and the artificial is artificial. If we understand this, there is an urgent need to work on some international framework that enables the judicious use of AI without letting it overpower our natural intelligence.
Click the link below to download the PDF
AI – International Regulatory Framework