The recent inauguration, the ‘Pran Pratishtha’ ceremony of Ayodhya’s Ram Mandir on January 22, 2024, saw ‘tradition’ integrate with ‘technology’ as the latest advancements bolstered security at the temple.

13,000 security personnel deployed, anti-terrorism squads, dog squads, and bulletproof vehicles; add to that the integration of artificial intelligence and machine learning. It sounds straight out of a science fiction novel, right? But this is the dystopian reality we’re living in.

The grand inauguration of Ram Mandir saw Ayodhya buzz with religious fervour and anticipation, but the inauguration also shone a spotlight on the ‘modern security measures’ implemented in the process, notably the use of biometric surveillance technology.

This integrated, cutting-edge technology was used to monitor and track individuals in real time, bolstering the security operations that guarded the temple.

This surveillance technology enables easy identification of individuals through advanced algorithms that analyse unique physical characteristics such as facial features, fingerprints, or iris patterns, offering authorities an additional layer of security against potential threats.
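At its core, such identification typically reduces a face, fingerprint, or iris scan to a numeric feature vector and compares it against enrolled templates. The following is a minimal sketch of that matching idea, with made-up three-dimensional vectors and hypothetical names (real systems use embeddings of hundreds of dimensions produced by trained models, not these toy values):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, enrolled, threshold=0.9):
    """Return the enrolled identity whose template best matches the
    probe vector, or None if no similarity clears the threshold."""
    best_name, best_score = None, threshold
    for name, template in enrolled.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy templates standing in for embeddings a real biometric model
# would produce; the names are hypothetical placeholders.
enrolled = {"person_a": [0.9, 0.1, 0.3], "person_b": [0.1, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.31], enrolled))  # → person_a
print(identify([0.0, 0.0, 1.0], enrolled))     # → None (no close match)
```

The threshold is the crux in practice: set it too low and innocent bystanders are falsely matched; set it too high and the system misses its targets.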

While many may argue that such measures are necessary to ensure public safety, one can’t help but ponder the potential for abuse and the erosion of individual privacy rights here. Staqu Technologies and their JARVIS platform may claim to offer cutting-edge security solutions, but does it come at a cost to our civil liberties?

What begins as a purported safeguard against terrorism could quickly devolve into a tool of control and consequent oppression. The all-seeing eye of AI knows no bounds, and its unchecked power poses a grave threat to our individual freedom.

The Modi regime has overseen significant advancements, positioning India as a global leader in the digital sphere. However, this is not the first time that the government has taken initiatives in digital governance and technological innovation. While ‘Aapka Aadhar’ has been the centrepiece of the ruling party’s ‘list of achievements’, reports of Aadhaar data being leaked or misused have raised alarm bells among privacy advocates in the past too, highlighting the need for robust safeguards to protect citizens’ sensitive information.

Very similar to this, the National Digital Health Mission also raised serious concerns about privacy and data security.

Moreover, the approach to data protection has also come under scrutiny for its perceived lack of transparency and accountability. The way data is captured and stored raises serious questions about the government’s commitment to upholding privacy rights.

Against this backdrop, the deployment of biometric surveillance at the Ram Mandir inauguration takes on added significance.

Influential figures ranging from the Ambani family to Alia Bhatt came to witness history unfold, but the star-studded affair was surrounded by a web of surveillance technologies, ostensibly labelled as necessary measures to ensure safety and security.

In an era marked by evolving security threats, the adoption of such efficient and advanced surveillance tools helped monitor the movements of attendees, identify suspicious individuals, and coordinate response efforts effectively. Yet the ever-vigilant, tireless ‘unseen eyes’ of AI scanning the crowd pose inherent risks to individual privacy rights and can escalate into mass surveillance.

While the government may justify these measures as a necessary evil in the face of modern security challenges, this “slippery slope of mass surveillance” must be discussed thoughtfully, since measures meant to ‘ensure safety’ can themselves gradually turn into a ‘threat to safety’; the abuse of unlimited power is all too likely in the absence of necessary checks.

Ram Mandir’s majestic inauguration highlights two major observations about “New India”: while on the one hand, it shows an intersection of religion, politics, and governance, on the other hand, it also underscores the complex interplay between security, privacy, and technological innovation in contemporary India.

While it may also symbolise a new dawn for a certain segment of the population in India, it also serves as a stark reminder of the delicate balance between security and privacy. As we march forward into an uncertain future, let us not sacrifice our freedoms at the altar of technological progress. Our rights are not negotiable, and our voices must be heard.

In the absence of clear guidelines and oversight mechanisms, there is a real risk that biometric data collected for security purposes could be misused, whether by government agencies or private entities. For instance, the controversial Personal Data Protection Bill falls short of international privacy standards, leaving room for such data to be exploited for other purposes. The lack of transparency surrounding data storage and access only exacerbates these concerns, leaving citizens vulnerable to the misuse of power over personal information.

As we grapple with the implications of biometric surveillance in the context of the Ram Mandir inauguration, it is imperative that we demand greater transparency, accountability, and safeguards to protect our privacy rights. The government must demonstrate a genuine commitment to upholding the principles of privacy and data protection, lest we sacrifice our fundamental rights on the altar of security.

Read Also: When Saffron Sparks Debates: Exploring the Aftermath of Ram Mandir Inauguration in Educational Spaces 

Featured Image Credits: securityworldmarket.com 

Kavya Vashisht 

[email protected]

“People are not used to generative technology. It’s not like it evolved gradually; it was like ‘boom’, and all of a sudden it’s here. So you don’t have the level of skepticism that you would need.” – Cynthia Rudin, AI computer scientist. 

With the use of generative AI, the world of true lies has just gotten murkier. India finds itself at the crossroads of a technological dilemma, with the resurgence of concerns surrounding artificial intelligence (AI) regulation. Triggered by a police complaint filed by Indian actress Rashmika Mandanna over a viral deepfake video, and with multiple actors getting tangled in AI trickery, India’s problems with the escalating and targeted threats posed by deepfake technology have resurfaced.

What are deepfakes?

Deepfakes, the deceptive offspring of AI, have evolved unimaginably beyond the mere novelties of the digital age. They are digitally manipulated videos that alter someone’s appearance, blurring the lines between reality and fiction, often with harmful intent. Unlike Photoshop edits, deepfakes leverage machine learning to create manipulated content. These sophisticated manipulations, capable of creating convincing videos and images, raise pressing questions about privacy, consent, and the ominous risk of misuse. You might claim ignorance, but the chances are slimmer than a pixel when it comes to avoiding these digital shape-shifters.
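A common face-swap design pairs one shared encoder with a decoder per identity: the encoder compresses any face into a compact representation of pose and expression, and decoding with the *other* person’s decoder renders that expression on the other face. The toy numeric sketch below illustrates only this data flow; the functions are stand-ins for trained neural networks, and the “faces” are just short lists of numbers:

```python
def encode(face):
    # Stand-in for a trained shared encoder: compress a "face" (here,
    # a list of pixel-like numbers) to a low-dimensional representation.
    return [sum(face) / len(face), max(face) - min(face)]

def make_decoder(identity_offset):
    # Stand-in for a per-identity trained decoder: rebuild a "face"
    # from the shared representation, in that identity's likeness.
    def decode(latent):
        mean, spread = latent
        return [mean + identity_offset, mean + spread + identity_offset]
    return decode

decode_as_b = make_decoder(identity_offset=5)

source_face_a = [1.0, 3.0]        # person A's expression
latent = encode(source_face_a)    # shared representation of the expression
swapped = decode_as_b(latent)     # rendered in person B's likeness
print(swapped)
```

The point of the sketch is the pipeline, not the arithmetic: because the representation in the middle is shared, one person’s expression can be replayed on another person’s face, which is exactly what makes the results so convincing.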

The dual face of deepfakes 

India ranks sixth in vulnerability to deepfakes, as per this year’s State of Deepfakes report (Source: India Today). Yet, despite the looming threats, deepfakes have etched their place in the creative realm, contributing to heartwarming moments like Shah Rukh Khan’s personalized Cadbury’s ad campaign and the completion of Fast and Furious 7 after the untimely demise of legendary actor Paul Walker. Museums and galleries embrace deepfakes to resurrect historical figures, and the technology even serves noble purposes such as anonymizing journalists in oppressive regimes. However, the precarious balance between positive and malicious applications remains ambiguous, stirring profound legal, ethical, and social concerns, notably in the absence of widespread regulations. A case in point is the October 2023 incident where a deepfake video of Elon Musk propagated false cryptocurrency claims, leading to financial losses for many. Furthermore, the escalating use of deepfakes in online gendered violence, particularly in the form of revenge pornography, is a growing worry. Ultimately, despite its occasional positive contributions, the technology tilts the scale towards harm, eroding our fundamental grasp of reality.

A threat to India’s democratic election process

Owing to generative technology, election campaigning has moved beyond just extravagant posters to include AI-generated fake videos. With the upcoming Lok Sabha elections in India in 2024, anticipated to be the largest yet, the potential impact of deceptive deepfakes on the democratic process and their ability to sway voter sentiments cannot be ignored. Political parties could be both creators and victims of the spreading misinformation. A humorous deepfake about a public figure could swiftly transform from a joke to a harmful manipulation. For instance, a set of AI images went viral on Twitter depicting former President Donald Trump being arrested before his indictment, gathering nearly 5 million views within a couple of days. India encountered its inaugural challenge of AI intervention in the 2020 Delhi Assembly polls, when users discovered videos featuring then-state BJP chief, Manoj Tiwari, criticizing CM Arvind Kejriwal’s policies in various languages. The Massachusetts Institute of Technology (MIT) confirmed that these videos were AI-generated. The absence of deepfake concerns during India’s 2019 general elections has transformed due to the surge in smartphone users exceeding 650 million and the growing accessibility of affordable high-speed internet in 2023. This scenario heightens the perils of misinformation, posing a serious threat to India’s young electoral base. So it won’t be incorrect to say that such content can now easily influence elections by manipulating public opinion and eroding trust in political figures – one WhatsApp forward at a time.

In a report by Outlook India, S.Y. Quraishi, the former Chief Election Commissioner of India, addressed a significant challenge confronting the Election Commission of India (ECI). He underscored the swift propagation of misinformation facilitated by deepfakes and advised the country’s election watchdog to maintain autonomy separate from the endeavors of the Information Technology (IT) ministry.

Some of the deepfakes can come from the ruling parties as well. So, although an alliance between the ECI and the IT Ministry sounds good on paper, there’s always a possibility of collusion, or people in power keeping their eyes closed. So, it’s the ECI’s credibility at stake.

– S.Y. Quraishi, former CEC India (as quoted by Outlook India)

AI: a double-edged sword?

The paradox of AI being crucial in addressing deepfake challenges becomes evident as AI-powered detection systems are currently under development. After all, in a world where your own eyes are on the verge of a trust crisis, who better to put your faith in than a machine? Because nothing says reliability like circuits and algorithms, right? The central problem lies in the fact that deepfakes are convincing enough to fool humans. As technology relentlessly reveals our daily inefficiencies, researchers worldwide are on a quest to create AI tools that can outsmart the AI responsible for cooking up these deceptive deepfakes. It’s like fighting fire with artificial fire, but in a tech-savvy way. AI algorithms can detect and flag deepfake content by analyzing indicators such as a person’s heartbeat, enabling authorities to promptly intervene. However, given the potential for inaccuracies, particularly in flagging genuine content, it is important to develop robust algorithms capable of discerning between authentic and counterfeit material. The significant challenge, favoring wrongdoers, stems from the insufficient availability of vast datasets essential for training machine-learning models. So while the good guys find themselves craving an abundance of deepfakes for training purposes, the troublemakers only require a perfectly timed video at the right moment. Ironically, the very tools employed to enhance detectors today might just end up schooling the next batch of mischievous deepfakes. So, as much as individual awareness is crucial, the grand finale of this cat-and-mouse spectacle will likely hinge on the big tech players stepping up to the plate.
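Whatever signal a detector keys on (heartbeat traces, blink rates, compression artefacts), its raw output is typically a per-frame “fake” probability that must be aggregated into a verdict for the whole clip, with thresholds chosen to limit false alarms on genuine footage. A minimal sketch of that aggregation step, assuming the scores come from some hypothetical upstream frame-level model:

```python
def flag_video(frame_scores, threshold=0.7, min_fraction=0.5):
    """Flag a clip as a suspected deepfake if at least `min_fraction`
    of its frames score above `threshold`. The per-frame scores are
    assumed to come from an upstream detector (not shown here)."""
    if not frame_scores:
        return False
    suspicious = sum(1 for s in frame_scores if s > threshold)
    return suspicious / len(frame_scores) >= min_fraction

print(flag_video([0.9, 0.85, 0.2, 0.95]))  # 3/4 frames suspicious → True
print(flag_video([0.1, 0.3, 0.75, 0.2]))   # 1/4 frames suspicious → False
```

Tuning `threshold` and `min_fraction` is where the real difficulty lies: too lax and genuine footage gets flagged, too strict and a well-made fake sails through, which is precisely the inaccuracy problem noted above.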

India’s actionable plan

From surfacing on Reddit in 2017 to being ranked among the most serious AI crime threats, deepfakes have outpaced regulation, and laws around them are still not solid. Yet, even though Indian laws do not explicitly mention deepfake technology or directly confront its complexities, certain legal provisions under The Indian Penal Code, The Information Technology Act, 2000, and The Copyright Act, 1957, address its misuse and hold those responsible accountable. Notably, India’s IT rules from 2021 mandate that intermediary platforms remove content produced through deepfake technology within 36 hours of reporting. Some experts argue that while government oversight can mitigate misuse and ethical concerns, excessive regulation may impede technological progress. This underscores the importance of investing in algorithms for deepfake detection, emphasizing proactive measures over reactive approaches.

Hence, a strategic partnership between the Indian government and stakeholders in the tech industry becomes crucial in establishing a robust defense against this emerging threat. Following a meeting with leading social media platforms and AI companies on November 23, Ashwini Vaishnaw, the Union Minister of Electronics and Information Technology, announced that the government will devise a “clear, actionable plan” within the next 10 days to counter the proliferation of deepfakes, referring to it as a “new threat to democracy.” The forthcoming strategic plan is anticipated to focus on four key pillars: deepfake and misinformation detection, prevention of their dissemination, reinforcement of reporting mechanisms, and heightened public awareness. Whether brought in through a new law or amendments to existing ones, these regulations are expected to undergo a public consultation, according to Vaishnaw.

The professors who were involved in the meeting clearly made the point that it is no longer a difficult task to detect deepfakes. All platforms agreed that it is possible to do (the detection) within the privacy framework we have all over the world.

– Union IT Minister, Ashwini Vaishnaw

Strongly advocating for a proactive stance from social media platforms in tackling deepfake content, Vaishnaw underscored that the ‘Safe Harbour’ provision, previously protecting these platforms, could be reconsidered if they don’t take sufficient measures against deepfakes. During the meeting, social media companies acknowledged the importance of labeling and watermarking for identifying and eliminating harmful deepfake visuals. With the upcoming December meeting, there is optimism for the implementation of more stringent rules to address the growing threat India faces from this deceptive phenomenon.

Read Also: Decoding Deceptive-Deepfake

Featured Image Credits: Mint

Manvi Goel

[email protected]

The antidote to our pre-assignment-submission anxiety has been discovered in our increasing reliance on AI tools and real-time chatbots. The contemporary fast-paced scenario continues to alienate people from critical creative skills, and Artificial Intelligence compounds the menace at hand.

Gone are the days when young students would sit with their parents and draft, then redraft, scripts for their morning assembly speech. Be it a declamation, a debate, or an MUN, ready-made material is available at everyone’s disposal. Research papers requiring several weeks’ worth of research can now be compiled in a few hours. The excess has reached the extent that birthday wishes and congratulatory messages are all being composed, start to end, by AI tools. This lethargic approach is breeding a generation of individuals with stunted innovation, depreciating creativity, and sluggish habits. The justification offered for this shift in the way information is retrieved is the growing competition and the need to save time and expend it on ‘more important’ things. Conformists, in the name of academic students, will be produced, destined and dedicated to lead a mundane life plagued by the race of placements and abnormally competitive exams. The pressure from these takes away any remnant will to indulge in anything remotely creative.

Heavy dependence on AI, for not just academia but absolutely anything, is churning out individuals depleted of critical ability and bereft of perspective. Information is consumed directly as it is vomited out by AI tools, without the bare minimum effort to re-examine things. People need to realize that open AI tools are meant to make one’s work easier, not do one’s whole work. AI bots lack the basic human intelligence to produce the kind of work we individuals can. The majority of the output is highly generic and vastly derivative of already existing information. No new thoughts, no new ideas, and no personal anecdotes comprise any part of the generated output.

One thing guaranteed is that AI can never replace humans or match the potential of human creativity. It will never kill creative roles, but it has a disgustingly high propensity to severely damage the potential of creativity by making humans increasingly dependent. These tendencies also pose a grave threat to the genuine and honest appreciation of real art. With AI sites capable of producing summaries of entire books, seminal research pieces, and stellar pedagogical specimens, one fails to appreciate the artistic nuances and the rigorous research behind a creative piece. There is a looming danger of a deterioration of the spirit of art appreciation.

Jane Austen didn’t write “Pride and Prejudice” in a hurry; Amrita Pritam didn’t draw inspiration from summaries of the anthologies of Sheikh Farid, Shah Husain, Waris Shah, and Hasham. Without the inner burning desire to create and introspect, Van Gogh’s melancholic “The Starry Night” would never have existed. Creativity is God’s gift to very few people; don’t let the abundance inside you deplete by giving in to the lures of mundanity and convenience.

Image Credits: BYJUs

Rubani Sandhu

[email protected]