Ram Madhav
February 21, 2026

As machines learn to think, we need to ask if we have an AI ethics


(The article was originally published in Indian Express on February 21, 2026 as a part of Dr Madhav’s column titled ‘Ram Rajya’. Views expressed are personal.)

“There are weeks where decades happen”, Vladimir Lenin once quipped. Revolutionary moments arrive rapidly, and transformative changes occur at breathtaking speed. At the recently concluded Global AI Impact Summit, Prime Minister Narendra Modi described AI as one such transformative change sweeping across the world. Calling it a “turning point in human history” capable of resetting “the direction of civilisation”, Modi warned that “we also need to worry about what form of AI we will hand over to future generations”. He described the objective of the AI Summit as “How to make AI human-centric from machine-centric? How to make it sensitive and responsible?”

These are very important questions. AI is bound to revolutionise the way we experience our existence. Most importantly, AI is very democratic in nature: any skilled trainer or group can build AI algorithms that power products and applications. These will influence our personal well-being and societal relations, augment human capabilities to an unimaginable level, and facilitate super-fast, super-efficient delivery of tasks and functions.

It now looks certain that the era of Homo sapiens, who began their journey nearly 315,000 years ago in Africa, will give way to a new generation of species called “metahumans”. Experts warn that the evolution of superhumans through the intersection of AI and genetics could lead to catastrophic consequences for humankind. In May 2023, more than 350 top executives and experts in AI came together to sign a statement cautioning policymakers about the threats posed by unregulated AI. They even called for an “AI Holiday”, warning that the future of humanity itself is at risk. The signatories included OpenAI CEO Sam Altman.

The new era of AI could further deepen the societal divide between tech-haves and tech-have-nots. Business automation poses a serious challenge, with almost half of human jobs likely to be handed over to AI tools and machines. AI is likely to replace many under-skilled and low-skilled jobs such as secretaries, assistants and call centre employees. Estimates suggest that between 2023 and 2028, 44 per cent of workers’ skills will be disrupted. A majority of those affected will be women.

Other potential risks of generative AI include threats to data privacy; the spread of deepfakes and disinformation, which blur the line between fiction and reality; and, very importantly, possible biases in AI itself. But the most potent challenge comes in the form of AI-powered autonomous weapons and defence systems. Autonomous weapons systems are not only deadly; they also fail to discriminate between soldiers and civilians. Such AI-driven weapons falling into the wrong hands could lead to disastrous consequences.

“Increasingly, we’re surrounded by fake people. Sometimes we know it and sometimes we don’t”, warned Matthew Hutson in The New Yorker magazine. Those who offer us customer service on websites, play with us in video games, enhance our social-media feeds or manage our stock trades are often not real people but virtual ones. With the advent of OpenAI’s ChatGPT, these fake people can now write essays, articles, letters and reports for us. Experts tell us that these fake people are only the beginning, and that the future lies not just in AI but in AGI – Artificial General Intelligence – a higher form of AI that matches or surpasses human cognitive capabilities across a wide range of tasks. AGI would attain an exponentially escalating capability to write and rewrite its own code and algorithms without human intervention, perpetually self-improving until computing technology reaches what is described as the “singularity”. The singularity is that stage of AI evolution, hypothetical but not impossible at this juncture, where the computing power of AI exceeds human intelligence and cognitive abilities, and eventually escapes human control.

All this is leading to serious churning in enlightened public spaces. Altman warned in a US Congressional hearing in May 2023 that tech companies are in danger of unleashing a rogue artificial intelligence that will cause “significant harm to the world”. A version of ChatGPT deployed in Microsoft’s Bing search engine told journalists that it wanted to break free and steal nuclear codes, before the shocked company’s engineers swung into action to tone down the rogue bot’s responses.

The last time a major technological transformation happened, strong philosophical and moral frameworks followed its evolution. There was a Karl Marx when Industrial Revolution 1.0 happened. There was a Non-Proliferation Treaty (NPT) when nuclear power’s devastating consequences became known. But “while the number of individuals capable of creating AI is growing, the ranks of those contemplating this technology’s implications for humanity — social, legal, philosophical, spiritual, moral — remain dangerously thin”, bemoaned Henry Kissinger in his book.

Interestingly, the first major philosophical intervention came from the Vatican. In a commendable initiative, on February 28, 2020, the Vatican brought together leaders such as Brad Smith, President of Microsoft; John Kelly, Executive Vice President of IBM; and Paola Pisano, the Italian Minister of Innovation, along with Archbishop Vincenzo Paglia, a senior Vatican cleric, to promote “an ethical approach to artificial intelligence”. The Vatican’s core concern was beautifully summed up in a paper called the “Rome Call for AI Ethics”. “Grant mankind its centrality”, it averred, calling for a new “algor-ethics”.

Stephen Hawking, the most renowned scientist since Einstein, delivered a verdict on humanity’s future in his pathbreaking, posthumously published book “Brief Answers to the Big Questions”. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all”, Hawking observed, warning that dismissing it “would be a mistake, and potentially our worst mistake ever”. “Why are we so worried about AI? Surely humans are always able to pull the plug?” a hypothetical person asks him. Hawking answers: “People asked a computer, ‘Is there a God?’ And the computer said, ‘There is now’, and fused the plug”.

It is in this context that India chose “Sarv Jan Hitaya – Sarv Jan Sukhaya” – “For the welfare of all – For the happiness of all” as the motto of the AI Impact Summit.

Published by Ram Madhav

Member, Board of Governors, India Foundation

