SFLC Hosts Dialogue On AI: Navigating Risks, Regulations, And Responsible Innovation

On Monday, the Software Freedom Law Center, India hosted a dialogue on artificial intelligence at the India Habitat Centre, bringing together industry leaders, academics, and technology experts. The event, titled AI in Focus: Navigating Risk, Regulation, and Responsibility, featured two panel sessions examining the challenges and opportunities of generative AI and open-source technologies and their impact on technological and social frameworks.
The event began with an opening statement by Mishi Choudhary, Founder, SFLC.IN, who remarked, “SFLC.IN has been working on AI since 2018, when generative AI was relatively non-existent. We have seen the technology evolve from simple pattern-matching software to systems capable of imitating human behaviour, raising urgent questions about ethics and rights. We must shift from reactive measures to proactive and pro-innovation regulations, but not at the cost of human rights, equity, and environmental sustainability.”
Saikat Saha, Technology Director, NASSCOM, highlighted, “At NASSCOM AI, we are shaping technical charters and addressing collective risks to drive responsible AI adoption in India. AI can be an economic game-changer for priority sectors like SMEs. While regulatory uncertainties and geolocalized challenges persist, we focus on fostering open consultations between corporates, MSMEs, and stakeholders.”
Pamposh Raina, Head, Deepfakes Analysis Unit (DAU), Misinformation Combat Alliance, said, “Misinformation generated by AI, especially manipulated audio and video, is a growing concern. We analyzed over 2,200 media pieces in eight months, revealing significant data misuse during elections, rising health misinformation, and financial fraud. This issue requires a collaborative approach, such as focusing on AI and digital literacy and ensuring platforms flag fake content.”
Sunil Abraham, Policy Director, Meta India, emphasized, “LLMs are non-deterministic, and engineers are often racing ahead of scientists with this black box. At Meta, we believe that the future lies in a multiplicity of large and small models that are more accessible, affordable, and capable. The ecosystem must catch up with the regulatory landscape, deploy responsibly, and focus on model explainability, particularly for sensitive use cases like healthcare.”
Udbhav Tiwari, Director, Global Product Policy at Mozilla, added, “Open-source AI holds immense potential, but it must be approached with responsibility. Clear definitions and safeguards are important to avoid risks like openwashing and to ensure alignment with shared norms and values. The deliberate design of these systems and their data comes with a responsibility that laws and institutions must incentivize.”
The panel discussions underscored the urgent need for clear, inclusive AI governance frameworks that suit India’s unique context while aligning with global best practices like the EU's GDPR. The event reinforced the significance of pro-innovation regulations, digital literacy, and stakeholder collaboration in shaping a responsible AI ecosystem. Building on the discussions, SFLC.IN will soon launch its research paper, Harnessing Open Source in AI, which will serve as a cornerstone for its future dialogues and initiatives on responsible AI practices.
