The Office of the Principal Scientific Adviser (OPSA) released a White Paper titled “Strengthening AI Governance Through Techno-Legal Framework”, outlining India’s approach to building a trusted, accountable, and innovation-aligned artificial intelligence (AI) ecosystem.

The publication is the second in the White Paper Series on “Emerging Policy Priorities for India’s AI Ecosystem”, an OPSA initiative that aims to deepen understanding and foster informed discussion on critical AI policy issues.

The first White Paper in the series, released in December 2025, focused on “Democratising Access to AI Infrastructure”, highlighting the need to treat AI infrastructure as a shared national resource and identifying key enablers such as access to high-quality datasets, affordable computing resources, and integration with Digital Public Infrastructure (DPI).
Key Highlights of the White Paper
• Transition from traditional “command-and-control” regulation to techno-legal AI governance: a practical, ecosystem-wide model that embeds governance directly into the design and operation of AI systems by default, aiming to mitigate risks while preserving flexibility and innovation.
• The framework must also carefully balance privacy safeguards with inclusion, equity, and model utility considerations.
Key Focus Areas Covered
• Understanding the techno-legal approach to AI governance
• Enabling safe and trusted AI across the full AI lifecycle
• Technological pathways for operationalising techno-legal governance
• Implementation considerations for India’s AI governance framework
• Development of techno-legal tools and compliance mechanisms
National Database
• To record, classify, and analyse safety failures, biased outcomes, security breaches, and misuse of AI.
• To enable post-deployment accountability through measures such as an India-specific risk taxonomy, detection of systemic trends and emerging threats, data-driven audits and targeted regulatory interventions, and evidence-based refinement of technical and legal controls.
• The database should receive incident reports from public bodies, private entities, researchers, and civil society organisations (an illustrative sketch of what such a record could contain follows this list).
• Measures for the AI industry: publish transparency reports, conduct regular fairness and robustness testing, perform security reviews, and carry out red-teaming exercises.
• These steps allow organisations to build familiarity with compliance processes and documentation before the requirements become mandatory.
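The White Paper describes the national incident database at a policy level and does not prescribe a schema. Purely as an illustration of the kind of record such incident reports could carry, the Python sketch below uses hypothetical field names and category labels that are not drawn from the report.

# Illustrative sketch only: field names and categories are assumptions,
# not prescribed by the White Paper.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class IncidentType(Enum):
    # Categories mirroring the harms the paper asks the database to capture.
    SAFETY_FAILURE = "safety_failure"
    BIASED_OUTCOME = "biased_outcome"
    SECURITY_BREACH = "security_breach"
    MISUSE = "misuse"

class ReporterType(Enum):
    # Reporting sources named in the paper.
    PUBLIC_BODY = "public_body"
    PRIVATE_ENTITY = "private_entity"
    RESEARCHER = "researcher"
    CIVIL_SOCIETY = "civil_society"

@dataclass
class AIIncidentReport:
    incident_type: IncidentType
    reporter_type: ReporterType
    reported_on: date
    sector: str                                          # e.g. "health", "lending"
    description: str
    risk_tags: list[str] = field(default_factory=list)   # hypothetical India-specific risk-taxonomy labels

# Hypothetical example record.
report = AIIncidentReport(
    incident_type=IncidentType.BIASED_OUTCOME,
    reporter_type=ReporterType.RESEARCHER,
    reported_on=date(2026, 1, 15),
    sector="lending",
    description="Credit-scoring model under-approves applicants from a minority-language group.",
    risk_tags=["linguistic_bias"],
)

Classifying every report against a shared taxonomy in this way is what would make possible the detection of systemic trends and the data-driven audits the paper envisages.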
Institutional Architecture
• AI Governance Group (AIGG): chaired by the Principal Scientific Adviser, to harmonise AI governance across ministries, regulators, and stakeholders.
• AI Safety Institute (AISI): to evaluate, test, and certify AI systems to ensure they meet safety and ethical standards.
• Technology and Policy Expert Committee (TPEC): to support the functioning of the AIGG, the Ministry of Electronics and Information Technology should set up a dedicated TPEC that pools multidisciplinary experts from law, public policy, machine learning, AI safety, cybersecurity, and public administration, among other fields.
• Impact-aware data withdrawal mechanisms: to mitigate risks to privacy, inclusion, and equity in a country that is linguistically and demographically diverse, impact-aware data withdrawal mechanisms are needed rather than unconditional erasure.
• Large-scale unlearning requests should be subject to fairness and representativeness impact assessments, with safeguards triggered where data removal risks degrading performance for underrepresented groups (a minimal illustrative sketch follows this list).
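The White Paper frames impact-aware withdrawal and unlearning assessments as policy requirements rather than as algorithms. Purely as a minimal sketch, assuming hypothetical group labels and thresholds that the report does not specify, a representativeness check over a batch of withdrawal requests could look like this in Python:

# Illustrative sketch only: thresholds and group labels are assumptions.
from collections import Counter

def withdrawal_impact(group_of_record, corpus_ids, withdrawal_ids,
                      underrepresented_share=0.05, max_relative_loss=0.10):
    """Flag groups that are already underrepresented in the corpus and would
    lose a disproportionate share of their records if the withdrawal batch
    were executed."""
    before = Counter(group_of_record[r] for r in corpus_ids)       # records per group now
    removed = Counter(group_of_record[r] for r in withdrawal_ids)  # records per group to be removed
    total = sum(before.values())
    flagged = {}
    for group, count in before.items():
        share = count / total
        relative_loss = removed.get(group, 0) / count
        if share < underrepresented_share and relative_loss > max_relative_loss:
            flagged[group] = {"corpus_share": round(share, 3),
                              "relative_loss": round(relative_loss, 3)}
    return flagged

# Toy example: 24 records from a majority group G1 and 1 from a minority group G2.
groups = {i: "G1" for i in range(1, 25)}
groups[25] = "G2"
print(withdrawal_impact(groups, corpus_ids=list(groups), withdrawal_ids=[25]))
# {'G2': {'corpus_share': 0.04, 'relative_loss': 1.0}}  -> safeguard triggered

A flagged batch would not be rejected outright; in the paper’s framing it would trigger safeguards (for example, additional review or mitigation) before the data is removed.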
Significance of the Report
• Nuanced AI governance framework: the report highlights India’s pro-innovation approach to AI governance, which integrates baseline legal safeguards, sector-specific regulations, technical controls, and institutional mechanisms.
• Regulatory and technological necessity: developing a robust and responsive governance framework is not just a regulatory necessity but a prerequisite for sustaining the momentum of technological progress. The techno-legal approach offers a viable pathway by embedding legal, technical, and institutional safeguards into AI systems by design.
• Attempts to solve the “pacing problem”: by embedding governance into system design rather than relying purely on post-hoc enforcement, the framework addresses the gap between the speed of technological change and the slower pace of regulation.
• Shaping global AI governance: these White Papers, framed as explanatory knowledge documents, are intended to support informed deliberation on key policy priorities, shape the evolving AI ecosystem, and reinforce India’s catalytic role in the global AI governance discourse.
• Integration with India’s digital ecosystem: the White Paper discusses leveraging Digital Public Infrastructure (DPI) and the Data Empowerment and Protection Architecture (DEPA) for trustworthy, consent-based data access and governance.