Two agendas for AI's future in education — and why clarity is essential


India is entering an ‘AI in education’ moment. The question is not whether AI will touch classrooms and public education systems; it will. The question is whether AI strengthens equity, agency, and learning outcomes — or quietly widens gaps that India is already working hard to close. 

That challenge came into focus at the recent conclave at IIT Madras on ‘Strengthening Human Capital for the AI Era’, convened by the Centre for Responsible AI (CeRAI) and partners from research and industry, as a pre-summit event leading up to the IndiaAI Impact Summit 2026. The message worth carrying beyond conference halls is simple: if AI is reshaping society, the real infrastructure is human capability — not just tools.

Two agendas, not one

Public debate often collapses everything into ‘AI in education’. India would do better to hold two agendas in parallel.

The first is AI for education: using AI to support teaching, learning, assessment, and system operations. The second is AI education: building AI literacy, awareness, ethics, and skills across society — including teachers and officials, not only students. If the second agenda is weak, adoption will look impressive on dashboards while human judgment erodes in the system. AI adoption is not the national goal; AI capability is.

Prof. B. Ravindran and Prof. Ashwini Mahalingam of IIT Madras at a recent conclave on ‘Strengthening Human Capital for the AI Era’. | Photo: Special Arrangement

This distinction matters because public education does not fail for lack of apps; it fails when systems cannot make good choices at scale. AI will amplify whatever capacity exists in procurement, training, governance, use, and evaluation. If that capacity is thin, AI will amplify thinness — faster.

Three questions that should guide decisions

As India moves from AI pilots to scale, three questions should sit at the centre of procurement, program design, classroom use, and professional development for teachers and officials.

First: are we designing for learning, or just for output? AI is excellent at producing answers. But education is about producing ability — the capacity to read with understanding, reason with numbers, and solve unfamiliar problems. If AI encourages cognitive offloading (letting the machine do the thinking), it may improve short-term performance while weakening long-term learning.

This matters because India has a time-bound foundational literacy and numeracy (FLN) ambition under NIPUN Bharat, aiming for children to attain foundational skills by 2026–27. AI in early grades must therefore be judged by whether it deepens comprehension and transfer, not by how quickly it generates answers or how personalised it appears.

We must also consider whether AI in the early grades is best placed in the hands of teachers or of learners. As children progress through the grades, AI interaction can evolve into a two-way dynamic involving both teachers and learners.

Second: are we building responsible AI, or AI-responsible institutions? Responsible AI is often reduced to a technical checklist — bias mitigation, safety filters, transparency. Necessary, but insufficient. In public education, responsibility sits inside institutions: procurement systems, grievance redressal, teacher enablement, and continuous monitoring. Even a well-intended model can cause harm if it is deployed without clear accountability, without training for decision-makers, or without mechanisms to detect and correct failure modes in the field.

Third: are we scaling technology, or scaling trust? In public systems, scale is not a technical milestone; it is a social contract. Trust is earned when teachers can predict how a tool affects workload, when parents can understand what happens to their child’s learning and data, and when officials can answer what will happen when systems fail. If trust is not designed, it becomes an afterthought — and then it becomes a crisis.

Trust is a design requirement for AI, not a communications buzzword

Trust will not come from slogans about “AI for good”. It will come from governance architecture. India already has strong policy signals in this direction.

The National Education Policy (NEP) 2020 calls for leveraging technology to improve access and quality, while emphasising inclusion and equity. Technology is not neutral: it amplifies what the system rewards. If we reward usage, we will get usage. If we reward learning gains and equity, we will get a different kind of innovation. The Ministry of Education has recently announced the establishment of the AI Centre of Excellence for Education at IIT Madras to champion the discovery, design, development, and outcomes-oriented scaling of AI for education.

The Digital Personal Data Protection Act (DPDP), 2023 requires verifiable parental consent for processing children’s personal data and prohibits processing likely to cause detrimental effects on a child’s well-being. For AI in schools, this is not a compliance footnote. It is a product requirement: data minimisation, purpose limitation, secure defaults, and clear, accessible redress mechanisms must be built into deployments from day one.

UNESCO’s guidance on generative AI in education similarly argues for a human-centred approach and long-term capacity building so that education systems can integrate these tools responsibly.

What India should do next

If India wants AI to strengthen public education — rather than simply modernise it — the next phase needs a different playbook.

Start by anchoring AI deployments to learning science and FLN priorities. In the early grades, the benchmark should be durable learning: comprehension, transfer, and error detection, not “instant answers”. In higher grades and beyond, the benchmark should include critical thinking and the ability to interrogate AI outputs rather than accept them at face value.

Then build institutional capacity where it matters: State and district leadership, procurement teams, academic support structures, and school leadership. Public–private–academia partnerships should be judged not by the number of pilots launched, but by the institutional capabilities created: standards, evaluation protocols, training programs, and shared accountability for harms and redressal.

Engineer trust at scale through transparent child-data practices, independent evaluation of outcomes and fairness across languages, and clear incident-response and grievance pathways. Treat teachers as co-designers, not end users. Teacher development must go beyond tool training to professional judgement — when to use AI, when not to, and how to prevent cognitive offloading.

Finally, invest in interoperability and public goods to prevent the system from fragmenting into vendor silos. Pair open standards with strong evaluation and governance, and innovation can flourish without undermining accountability.

The real national goal

India’s advantage will not be measured by how quickly it deploys AI tools in schools. It will be measured by whether it can build a generation that can think with AI without letting AI think for them — and a public education system that can innovate without compromising dignity, safety, and equity.

The shift is straightforward: stop asking “How fast can we deploy AI?” and start asking “How well can we build human capability around AI — at scale, with dignity?”

(Bhanu Potta is presently the Consulting Senior Partner for EdTech and AI at Central Square Foundation, Senior Advisor at Birla AI Labs, and the Founding Partner at Zinger Labs.)

(Sign up for THEdge, The Hindu’s weekly education newsletter.)


