Research report · 2026 H1 · ~30 min read
An assessment of the state of AI skill among graduating students at Indian higher-education institutions in the first half of 2026, structured against the Kompas AI Skill Rubric (five bands × six dimensions, v2026.1).
Published: 14 May 2026 · Version: 2026 H1 · Next refresh: November 2026
This is the inaugural release of the Kompas AI Skills Index. It is intended to be read by vice chancellors, deans, heads of computing departments, hiring managers responsible for AI talent, and policymakers shaping the next phase of India's National Education Policy 2020 implementation. It is not addressed to a general audience and does not soften its conclusions for one.
On this page
- Executive summary
- Methodology
- The capability gap, quantified
- Demand side: what AI employers expect in 2026
- Supply side: what Indian universities actually teach
- The six rubric dimensions, dimension-by-dimension
- Tier analysis
- What changes the trajectory
- What this means for employers
- What this means for students
- Limitations and what we don't yet know
- Sources and citations
Executive summary
India produces more engineering graduates than any other country in the world. India also has one of the largest unfilled AI hiring backlogs in the world. These two facts are now familiar enough to have become background noise in policy debates. The argument of this report is that they should not be read as a paradox to be reconciled, but as the surface symptom of a single structural fact: the Indian higher-education system has scaled engineering enrolment without scaling the capability the AI labour market is actually paying for.
We assess that capability using a published rubric — the Kompas AI Skill Rubric, v2026.1 — which divides AI skill into six dimensions (mathematical and statistical foundations, implementation, system design, evaluation and methodology, safety and responsible AI, and industry communication) and five bands (Foundational, Practitioner, Applied, Advanced, Expert). The rubric is the lens through which the rest of this report should be read. Its full text is published at /skill-rubric.
Three headline findings recur through the data:
First, the headline employability numbers materially overstate the AI-relevant capability of the average graduate. Industry-employability surveys consistently report that 42–56 per cent of Indian graduates are "employable", and that the AI and machine-learning domain has the highest sub-score at roughly 46 per cent [1][2][3]. These figures use assessments that test toolchain familiarity and bounded problem-solving. They do not test the dimensions of the Kompas rubric where Indian education most reliably under-trains — system design, evaluation, and safety — and they say little about a graduate's ability to ship a maintainable AI system. The Scaler–CMR study of 400 working engineers and recruiters captures the same gap from inside industry: 89 per cent of engineers self-identify as AI-ready; 19 per cent are judged AI-ready against an industry-grade bar; 86 per cent of recruiters report difficulty finding genuinely AI-skilled candidates [4]. The confidence–capability gap is the central artefact of the 2026 talent market.
Second, the rubric distribution is severely bottom-heavy: concentrated at the Foundational band and thin above the Practitioner band. Across all institutional tiers — IITs and IIMs included — the median graduating student in 2026 H1 sits at the Foundational band of the Kompas rubric on most dimensions, and at the Practitioner band on at most one or two (typically Implementation and, for stronger candidates, Mathematical Foundations). Fewer than five per cent of graduating students across all Indian higher-education institutions reach the Applied band on three or more dimensions simultaneously, which is the band hiring managers describe as the minimum bar for a productive AI engineer hire. Fewer than half a per cent reach Advanced on any dimension at graduation.
Third, the gap is not uniform across the six rubric dimensions. Indian engineering education over-invests in mathematical and implementation foundations relative to its peer systems, and under-invests catastrophically in evaluation, safety, and communication. The first three dimensions are taught — unevenly, but taught. The last three dimensions are essentially not taught as graded curricular content at most institutions, and they are precisely the dimensions hiring managers cite as the difference between a candidate who looks good on paper and a candidate who ships.
The report works through these findings in detail. Section 3 quantifies the rubric distribution by institutional tier. Section 4 reads the demand side from the NASSCOM–Deloitte, LinkedIn, Scaler–CMR, India Skills Report and Mercer–Mettl evidence base. Section 5 maps the supply side — NEP 2020 implementation status, the AICTE "Year of AI" 2025 programme, faculty supply, and compute infrastructure. Section 6 works through each of the six rubric dimensions in turn. Section 7 compares institutional tiers. Section 8 proposes five concrete interventions, with expected impact bands. Sections 9 and 10 translate the findings into practical guidance for employers and for students.
The headline recommendation is unsentimental. The Indian higher-education system has the inputs — enrolment, motivated students, partial compute access via the IndiaAI Mission, and a credible policy frame in NEP 2020 — to move the median graduate from the Foundational band to the Practitioner band on five of six rubric dimensions within two academic cycles. It will not do so by adding more AI courses to an already crowded credit structure. It will do so only by replacing exam-completion logic with project-and-evaluation logic at the assessment level, and by treating the safety, evaluation, and communication dimensions as compulsory engineering content rather than as soft electives. Where universities have done this in the small (most visibly inside the IITs and a handful of Tier-1 private universities), the rubric distribution shifts materially within a single cohort. Where they have not, no amount of curriculum reform on the syllabus side will close the capability gap.
This report will be refreshed every six months, per ETHOS Principle 6. The next release is scheduled for November 2026 and will include a panel of graduating-cohort rubric assessments at partner institutions, with consent and on a methodology disclosed in advance.
Methodology
Why a rubric, not a single index score
A single number is the wrong shape for this question. "AI skill" is not one thing. A graduate who can implement a small Transformer block from a reference paper but cannot design an evaluation harness, defend the design to a sceptical reviewer, or assess the system for bias is not 60 per cent of an AI engineer. They are at the Practitioner band on Implementation and at the Foundational band on Evaluation, System Design, Safety, and Communication. Compressing that profile into a single percentile destroys the information a hiring partner or a dean actually needs.
The Kompas AI Skill Rubric (v2026.1) is therefore a 5 × 6 grid. The five bands are Foundational, Practitioner, Applied, Advanced, and Expert. The six dimensions are:
- Mathematical and statistical foundations — the language a student uses to reason about why a model behaves the way it does, not as an exam paper.
- Implementation skill — writing, debugging, profiling, and maintaining code that trains and serves AI systems.
- System design — composing pipelines (retrieval-augmented generation, agents, evaluation harnesses) for an actual problem.
- Evaluation and methodology — benchmarks, ablations, error analysis, statistical rigour.
- Safety, alignment and responsible AI — bias evaluation, red-teaming, regulatory mapping to the DPDP Act and EU AI Act, model and system cards.
- Industry communication — explaining technical decisions to non-technical buyers, writing model cards a lawyer can read, defending design trade-offs to senior reviewers.
The cell wording is the canonical text published at /skill-rubric. It is co-signed by Kompas's hiring-partner panel and refreshed every six months. Throughout this report, when we say "a typical graduate sits at the Foundational band on Evaluation", we mean the descriptor at that cell of the published rubric.
What this index measures and what it does not
This is an index of the AI capability of the graduating cohort at Indian higher-education institutions in 2026 H1. It is not:
- A ranking of universities. Per-institution rubric assessments are out of scope for this release. We report tier-level distributions in Section 7; we do not name specific institutions other than to identify publicly cited examples (e.g. the IITs).
- A measure of the Indian working AI engineer population. That population includes practitioners with three to fifteen years of post-graduation experience and is structurally stronger than the graduating cohort across all six dimensions. Where we discuss the working population (e.g. the Scaler–CMR 19 per cent figure) we mark it explicitly.
- A measure of AI literacy in non-engineering disciplines. The Kompas rubric is calibrated to AI engineering, AI-using design and management, and AI-fluent professional practice in adjacent disciplines. A graduating lawyer who can use AI tools responsibly and a graduating engineer who can ship a model service are different profiles. This report focuses on the engineering side; the literacy side is covered in our companion work under Track D.
Data sources
This release synthesises five categories of evidence:
- Published India-specific employability and skills surveys — Mercer–Mettl India Graduate Skill Index 2025, the India Skills Report 2025 and 2026 (Wheebox / ETS / CII / AICTE / AIU / Taggd), and the Scaler–CMR Confidence-Capability Gap study [1][2][3][4].
- Industry talent reports — the NASSCOM–Deloitte AI talent series, NASSCOM's state-of-data-science-and-AI-skills publications, and the LinkedIn AI Labor Market Report 2026 [5][6][7][8].
- Policy and regulatory artefacts — the NEP 2020 source documents and progress commentary, the AICTE "Year of AI" 2025 announcements, the IndiaAI Mission programme documents, the DPDP Act 2023 with the November 2025 rules, and the India AI Governance Guidelines released November 2025 [9][10][11][12][13][14].
- Hiring-partner intake artefacts in Kompas's network — anonymised hiring-rubric requirements from B2B partners that screen for AI-engineering roles in India. We do not name these partners in line with ETHOS Principle 5. Where a finding rests on this evidence base alone, we say so.
- A structured reading of published academic and policy literature on the integration of AI into Indian engineering education, including the IJERT and Springer reviews and the Stanford AI Index analysis where it touches India [15][16][17].
Where a claim rests on a single survey, we cite the survey. Where a claim rests on a Kompas-internal observation that we cannot independently triangulate to a public source, we mark it [Kompas hiring-partner panel, 2026 H1] and treat the reader as warned. Where we cannot find a defensible source, we mark [Source needed] and move on rather than invent.
Limitations of this release
Three limitations are worth naming up front:
- No first-party assessment data yet. This inaugural release is a synthesis of secondary sources and the structured judgement of the Kompas hiring-partner panel. The H2 2026 release will include first-party rubric assessments at a panel of partner institutions on a disclosed methodology, with explicit consent.
- Tier definitions are coarse. We use a five-tier taxonomy (IIT/IIM and equivalent INIs, Tier-1 private, Tier-2 private, Tier-3, state institutions). The boundaries between Tier-1 and Tier-2 are not crisp; reasonable people will disagree about borderline institutions. We make our placement rule explicit in Section 7.
- The denominator is "graduating cohort 2026 H1", not all enrolled students. A first-year student at a Tier-2 private university who is enrolled in an AI-focused programme is not in scope; their graduating self in 2029 is. Where we cite enrolment growth figures, the population is larger.
A serious reader should consume this report alongside the primary sources cited in Section 12. We have linked them all.
The capability gap, quantified
The headline distribution
We estimate the rubric distribution for the 2026 H1 graduating cohort across all Indian higher-education institutions that offer engineering or computing degrees as follows. The estimate is constructed bottom-up from the survey and hiring-partner evidence and is calibrated to the rubric descriptors at /skill-rubric.
| Band | Approximate share of graduating cohort (all institutions, all six dimensions averaged) | Interpretation |
|---|---|---|
| Foundational | ~70–78% | Can read AI code; can run a tutorial; cannot yet author a new model class without a template. |
| Practitioner | ~18–24% | Can build a working single-purpose AI system from a reference architecture. |
| Applied | ~3–5% | Can design a multi-component system, design an evaluation harness, defend choices to a reviewer. |
| Advanced | ~0.3–0.5% | Builds custom infrastructure where off-the-shelf tooling falls short; reviews other engineers' AI code as a senior reviewer. |
| Expert | <0.05% | Negligible at graduation; this band is reached only after several years of post-graduation work. |
Two qualifications are essential.
First, the figure above is an average across the six rubric dimensions. A student's profile is not flat. A median IIT graduate may sit at Practitioner on Mathematical Foundations and Implementation, but at Foundational on Evaluation, Safety and Communication. A median Tier-2 private graduate may sit at Foundational on five dimensions and at Practitioner on Implementation alone. The dimension-by-dimension profile is the right unit of analysis; we work through it in Section 6.
Second, the estimate is conservative against the published employability numbers, and we think it is correct to be conservative. The Mercer–Mettl 46.1 per cent AI/ML employability figure [1] and the India Skills Report 2026 56.35 per cent overall employability figure [2] both rest on assessments that test toolchain familiarity, basic coding under exam conditions, and bounded problem-solving. They do not test the capability bar described at the Applied band of the Kompas rubric — designing a domain-specific evaluation harness, writing an architecture decision record that survives review by a working engineering manager, performing a disaggregated bias evaluation on a model the candidate has trained. The Scaler–CMR 19 per cent figure [4], which is measured against working-engineer expectations rather than against fresh-graduate expectations, is closer in spirit to the Practitioner-band bar — and even there it captures the existing professional cohort, not the graduating cohort.
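The "disaggregated bias evaluation" cited above as an Applied-band task is concrete enough to sketch. A minimal illustration in Python — all names and data hypothetical — of slicing one metric by subgroup instead of reporting a single aggregate number:

```python
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Per-group accuracy plus the worst-case gap between groups --
    the Applied-band habit of slicing a metric by subgroup rather
    than reporting one aggregate number."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    per_group = {g: correct[g] / total[g] for g in total}
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Toy example: an aggregate accuracy of 62.5% hides a 25-point subgroup gap.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
per_group, gap = disaggregated_accuracy(y_true, y_pred, groups)
```

The point is not the dozen lines of code; it is that the standard employability assessments never ask for the per-group view, while the Applied-band descriptor requires it.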
The standard employability surveys answer the question "can this graduate pass our screening test?". The Kompas rubric answers the question "can this graduate ship a maintainable AI system?". The answer to the first is yes for almost half the cohort; the answer to the second is yes for somewhere between three and five per cent.
Distribution by institutional tier
We use the following five-tier taxonomy, which approximates how the Indian AI hiring market actually segments. The boundaries are coarse and contested.
- Tier IIT/INI — the IITs, IIITs (the central institutions), IISc, NITs at the top of NIRF, BITS Pilani, and roughly equivalent institutions of national importance.
- Tier-1 private — institutions that consistently appear in the top 20–40 of the NIRF engineering ranking and that have shipped credible AI research or industry collaborations in the last two years.
- Tier-2 private — institutions in the NIRF 40–150 band with significant AI/ML or CS-AI programmes but variable depth of faculty and research output.
- Tier-3 — private institutions outside the NIRF top 150 (or unranked) that nevertheless run AI programmes, often via affiliated curricula.
- State — state public universities and their constituent colleges. This is the numerical majority of Indian engineering enrolment. In NIRF 2024, state public universities and INIs together account for the largest combined share of ranked institutions [18].
A defensible mid-range estimate of the rubric distribution by tier, for the 2026 H1 graduating cohort, is shown below. Cells are the share of each tier's graduating cohort reaching the named band on the average across the six dimensions. We mark the IIT/INI line conservatively; published reporting is consistent with the picture but does not crisply triangulate the percentages [Source: synthesis of 4, 18, 19, 20, 22].
| Band | IIT / INI | Tier-1 private | Tier-2 private | Tier-3 | State |
|---|---|---|---|---|---|
| Foundational | ~35–45% | ~55–65% | ~70–80% | ~85–92% | ~88–94% |
| Practitioner | ~35–45% | ~25–35% | ~15–22% | ~6–12% | ~5–10% |
| Applied | ~12–18% | ~5–10% | ~2–4% | ~1–2% | <1% |
| Advanced | ~1–3% | ~0.3–0.8% | <0.3% | <0.1% | <0.05% |
| Expert | ~0.05–0.2% | <0.05% | negligible | negligible | negligible |
A few things stand out.
The IIT/INI tier is structurally different from the rest, but not by as much as the public perception suggests. Even at the IITs, fewer than one in five graduating students reaches the Applied band on average across the six dimensions. The IIT advantage is most pronounced on Mathematical Foundations and Implementation, where the entrance examination has selected for the underlying competence years earlier. The IIT disadvantage shows up most clearly on Communication and Safety, where the curriculum offers no more systematic training than its peers.
The Tier-1 private set is a credible Practitioner-band feeder pipeline. It is not, in 2026 H1, a credible Applied-band feeder pipeline. The published programmes are typically strong on toolchain and weak on evaluation, system design, and safety as graded content.
The Tier-2 private and Tier-3 set is where the policy leverage is highest. This is also where most of the cohort sits. A shift of 10 percentage points in this segment from Foundational to Practitioner on a single rubric dimension would change the Indian AI hiring market more than a 10-percentage-point shift at the IIT/INI tier, simply because the underlying population is roughly an order of magnitude larger.
The state institution set is the most under-served and under-measured. It is also the segment NEP 2020 most directly addresses with its multidisciplinary and skill-integration provisions [10]. The H1 2026 picture is that those provisions have not yet shifted the rubric distribution materially at the state-institution tier — though the IndiaAI Mission's data-and-AI lab provisioning in Tier-II and Tier-III cities is the most concrete intervention currently in train [11].
The gap reframed
The right framing of the 2026 capability gap is not "India produces too few AI engineers". It is:
India produces approximately the right number of Foundational-band graduates and produces too few Practitioner-, Applied-, and Advanced-band graduates by roughly an order of magnitude for the market the country is now attempting to serve.
The corollary is uncomfortable. Policies that increase enrolment without changing assessment regimes move the Foundational-band count, which is not the binding constraint. Policies that change assessment regimes — that require shipped projects, evaluated against published rubrics, defended in front of working engineers — move the Practitioner and Applied counts, which are. Section 8 turns this corollary into a concrete intervention list.
Demand side: what AI employers expect in 2026
What the published surveys say
The 2024–25 evidence base on Indian AI hiring demand is unusually consistent on the headline numbers and unusually inconsistent on the details. The headline numbers, from the NASSCOM–Deloitte talent series and corroborating reports, are:
- India had approximately 420,000 AI professionals in 2024 against immediate industry demand for roughly 600,000 — a near-50 per cent shortfall at the point of measurement [5][21].
- The Indian AI talent pool is projected to grow from roughly 600,000–650,000 (2022) to over 1.25 million by 2027, a CAGR consistent with the published demand-side growth forecasts of 25–35 per cent [5][6].
- AI-related job postings on LinkedIn grew 59.5 per cent year-on-year in India in 2025, the fastest in the world; growth has spread beyond Bengaluru to Hyderabad (51 per cent), Vijayawada (45.5 per cent), and other Tier-II cities [7].
- India contributed 19.9 per cent of all AI projects on GitHub in 2024, ranking second globally — a developer-ecosystem depth figure, not a capability figure [22].
The first two figures describe the quantity of the gap. The third describes the geography. None of them describe the capability shape of the demand, which is where the rubric becomes essential.
What hiring managers actually screen for
The Kompas hiring-partner panel — and the publicly disclosed hiring artefacts of large IT services firms — consistently flag a small set of requirements that the typical fresh graduate fails to meet. These do not map cleanly to the toolchain skills the published employability surveys test.
The recurring requirements are, in roughly the order they appear in panel intake:
- Shipped, evaluated project work. A working AI system the candidate has built, with a written evaluation that uses defensible methodology, that they can defend in a one-hour technical interview. The Scaler–CMR finding that 86 per cent of recruiters report difficulty finding skilled candidates [4] is largely a finding about this requirement.
- System-design literacy. A working understanding of when to use retrieval-augmented generation versus fine-tuning versus prompt engineering versus a classical model — and the ability to make and defend the trade-off in cost, latency and accuracy terms.
- Error analysis as a habit. Not "we computed accuracy". The discipline of looking at the failures, categorising them, and turning the categorisation into a prioritised list of fixes.
- Plain-language communication. A candidate who can explain the architecture of their project to a non-technical hiring manager, in five minutes, without retreating into jargon. This is the dimension hiring partners flag most often as "the surprising weakness".
- Awareness of the regulatory frame. Not deep expertise. The ability to identify which clauses of the DPDP Act 2023 and the India AI Governance Guidelines (November 2025) apply to a stated AI use case [13][14]. This is increasingly a screening question for AI hires in regulated sectors (BFSI, healthcare, public administration).
These five requirements correspond, in order, to the Practitioner-and-Applied descriptors of the Implementation, System Design, Evaluation, Communication, and Safety dimensions of the Kompas rubric. The rubric was not constructed in a vacuum; it was constructed by reading the hiring-partner-side requirements backward.
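"Error analysis as a habit", from the list above, is the most mechanisable of the five requirements. A minimal sketch, assuming each failure found in a manual review has already been hand-labelled with a category (all labels hypothetical):

```python
from collections import Counter

def prioritise_fixes(failures):
    """Turn a list of hand-labelled failure categories into a
    frequency-ordered fix list: (category, count, share of failures).
    This is 'error analysis as a habit' -- categorise the failures,
    then work the categories in order, rather than stopping at accuracy."""
    counts = Counter(failures)
    return [(cat, n, n / len(failures)) for cat, n in counts.most_common()]

# Hypothetical failure labels from a manual review of eight bad outputs.
failures = [
    "retrieval_miss", "retrieval_miss", "retrieval_miss",
    "hallucinated_citation", "hallucinated_citation",
    "formatting", "retrieval_miss", "off_topic",
]
ranked = prioritise_fixes(failures)
# ranked[0] names the highest-impact fix target: ("retrieval_miss", 4, 0.5)
```

The discipline the hiring panel describes is the loop around this function — reading the failures and assigning the labels — not the counting itself.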
The supply-fact versus the demand-fact
A point of analytical hygiene that the Indian AI talent debate frequently confuses: surveys of graduate readiness are supply-side facts. Surveys of what employers want are demand-side facts. They are not the same evidence base and they should not be averaged.
The supply-side facts (e.g. the Mercer–Mettl 46.1 per cent AI/ML employability figure [1]) describe what graduating students can do under controlled assessment conditions. They are typically generous, because the assessments are calibrated to fresh-graduate expectations and use toolchain-fluency tasks.
The demand-side facts (e.g. the Scaler–CMR 19 per cent figure [4] and the 86 per cent recruiter-difficulty figure) describe what hiring managers experience when they actually try to fill open requisitions. They are typically harsher, because the bar is a working engineer and the assessment is a real interview.
The honest reconciliation is this. Roughly half the graduating cohort can pass a screening test calibrated to fresh-graduate expectations. Roughly one in five of those who pass the screen and enter the working population are deeply engaged in shipping AI systems. The remainder are AI-adjacent — using AI tools, completing AI courses on the job, but not yet at the band the hiring market is paying for.
The Kompas rubric is calibrated to the demand-side fact, not the supply-side fact. That is a deliberate choice.
What employers say about freshers vs. lateral hires
A consistent theme in the 2024–25 reporting [4][5][23] is that the Indian IT services majors have shifted material training capacity toward AI re-skilling of the existing workforce, rather than relying primarily on the fresh-graduate pipeline. TCS trained approximately 350,000 employees in AI during 2023–24; Wipro trained roughly 220,000; Infosys has launched a 20,000-graduate AI-focused intake for 2025; TCS announced 100,000 employees to be trained in AI orchestration by mid-2026 [23][24].
The corporate sector's revealed preference is informative. The reason these firms are training the existing workforce at this scale is that, at current band distributions, it is cheaper and faster to lift an existing employee from Foundational to Practitioner than to find a graduate who is already there. That is a damning statement about the supply pipeline, made by the firms that hire most of it.
For a graduating student, the practical consequence is that the bar at the door has gone up. A fresh graduate who arrives at Foundational and expects to be lifted to Practitioner on the job will increasingly find themselves outcompeted, on the same job ladder, by a five-year IT services veteran who has just been re-skilled by their employer. This is part of why entry-level fresher hiring is showing softness in 2025–26 reporting despite the headline AI demand [25].
Supply side: what Indian universities actually teach
NEP 2020 and the structural promise
The National Education Policy 2020 is, in formal terms, the single most ambitious overhaul of Indian higher education in a generation. Its directly AI-relevant provisions include the integration of AI, machine learning, and data science into the undergraduate engineering curriculum; the introduction of a multidisciplinary minor-degree structure that allows non-engineering students to take a structured minor in AI and data science; the creation of Multidisciplinary Education and Research Universities (MERUs); and the explicit goal of doctoral and master's programmes in core AI areas at every university [9][10].
The policy promise is real. The implementation status, five years in, is mixed. The most recent publicly cited indicators are:
- Digital infrastructure in schools rose from 34 per cent (2019–20) to 57 per cent (2023–24) [9].
- A Centre of Excellence in AI for Education was announced in the Union Budget 2025–26 with a budgetary allocation of Rs. 500 crore [9][10].
- The AICTE declared 2025 the "Year of AI", with a published plan to embed AI into the curricula of over 14,000 colleges and to train faculty across an initial 1,000 engineering colleges [26].
- The AICTE has published model curricula for AI minor degrees and for B.Tech specialisations in AI&ML and AI&DS [27].
These are concrete artefacts. They are also, with the partial exception of the AICTE faculty-training programme, paper artefacts. The Observer Research Foundation's five-year retrospective on NEP 2020 and the related implementation commentary [9][28] are consistent in noting that "implementation has been inconsistent" and that "there exists no consolidated public data on the progress made so far" [9]. Our reading of the H1 2026 picture aligns with that assessment.
The AICTE model curriculum: what it covers
The AICTE's published model curriculum for the B.Tech in CSE with AI&ML specialisation, and the parallel model curriculum for a minor degree in AI for non-CS UG students, are the documents that most directly shape what 14,000+ colleges teach. A careful read [27] reveals the following.
What it does well:
- Mathematical foundations (linear algebra, probability, statistics, optimisation) are well-specified and broadly correct against the Kompas rubric's Practitioner-band Mathematical Foundations descriptor.
- The implementation track (Python, PyTorch/TensorFlow, basic ML libraries) is current.
- Project work is mentioned at multiple semesters.
What it does less well:
- The evaluation and methodology content is thin. Topics like ablation design, structured error analysis, multi-seed reporting, and metric selection against actual product objectives are not consistently present across the model syllabi.
- The safety and responsible-AI content is largely a single elective or a short module within an ethics course. The DPDP Act, the EU AI Act, the India AI Governance Guidelines, model and system cards, disaggregated evaluation, and red-teaming are not standard graded content.
- Communication is treated as a soft-skill add-on. Writing an architecture decision record, presenting design trade-offs to a sceptical reviewer, producing a model card a lawyer can read — these are not specified as graded outputs.
- Project work, where mentioned, is largely unspecified in terms of evaluation rigour. A capstone that merely runs is graded the same as a capstone that genuinely ships.
The result is a model curriculum that, executed faithfully, produces graduates at the Practitioner band on Mathematical Foundations and Implementation and at the Foundational band on everything else. That is precisely what the cohort distribution looks like.
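The multi-seed reporting habit flagged as missing from the model syllabi is small enough to specify in a few lines. A sketch, assuming a `train_and_score(seed)` function the student supplies (the name and the stand-in scores below are hypothetical):

```python
import statistics

def multi_seed_report(train_and_score, seeds=(0, 1, 2, 3, 4)):
    """Run the same experiment across several seeds and report
    mean and sample standard deviation, instead of a single lucky run --
    the minimum version of the rubric's multi-seed reporting habit."""
    scores = [train_and_score(seed) for seed in seeds]
    return {
        "scores": scores,
        "mean": statistics.mean(scores),
        "std": statistics.stdev(scores),
    }

# Stand-in for a real training run: a deterministic function of the seed.
fake_scores = {0: 0.81, 1: 0.79, 2: 0.83, 3: 0.80, 4: 0.82}
report = multi_seed_report(lambda s: fake_scores[s])
```

Grading a capstone on whether it produces this report, rather than on whether the training script completes, is exactly the assessment-layer change Section 8 argues for.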
Faculty supply
The most binding supply-side constraint is not curriculum. It is faculty.
The publicly cited indicators, where they exist, are sobering. Many institutions outside the IIT/INI tier lack faculty with the working-AI experience needed to teach the Applied and Advanced bands credibly [29][30]. The AICTE's "Year of AI" 2025 programme is, in part, an acknowledgement of this: its target is to train faculty across 1,000 engineering colleges, alongside short-term IIT-led upskilling programmes for core-engineering faculty with five-plus years of experience [26].
We do not have a defensible all-India estimate of the share of computer-science and AI faculty with shipped industry AI experience in the last three years. [Source needed] for a published figure on this. The hiring-partner panel estimate is that, outside the IIT/INI tier, the share is in the single digits. This is the single largest constraint on moving the rubric distribution in Tiers 2 and 3.
A related constraint is faculty PhD supply. India's contribution to highly cited global AI research remains under two per cent of top-cited publications in the field [22], despite holding 16 per cent of the world's AI talent on the broader Stanford AI Index measure [22]. The gap between talent share and research share is, in part, a gap in the academic-PhD pipeline that supplies university faculty.
Compute infrastructure
The IndiaAI Mission, approved in March 2024 with a budgetary outlay of Rs. 10,372 crore [11], is the most material infrastructure intervention on the table. Its compute component (Rs. 4,563 crore over five years) has reached approximately 38,000 GPUs of common-compute capacity by early 2026 [12][31], offered at subsidised rates (researchers and academic institutions receive up to a 40 per cent cost reduction on eligible projects).
This is real. It is also, at 38,000 GPUs across a country of 14,000+ engineering institutions, not yet the kind of compute access that meaningfully shifts the rubric distribution at the long tail. The Tier-2 private and Tier-3 student who needs a few hundred GPU-hours to do a credible capstone project still relies on a mix of free cloud credits, partner-sponsored access, and (increasingly) the IndiaAI common-compute pool. The H1 2026 picture is that compute is no longer the binding constraint at the top of the distribution, but is still a significant friction at the long tail.
The interlock between policy, curriculum, faculty, and compute
A useful summary frame: NEP 2020 and the AICTE model curriculum address the syllabus layer. The AICTE Year of AI 2025 and the IIT-led upskilling programmes address the faculty layer. The IndiaAI Mission and the partner-university lab investments address the compute layer. None of them directly address the assessment layer — the question of what students are graded on, and whether the grading regime distinguishes a Practitioner-band capstone from a Foundational-band one.
In our reading, the assessment layer is where the binding constraint now sits. We return to this in Section 8.
The six rubric dimensions, dimension-by-dimension
This section works through each of the six rubric dimensions in turn. For each dimension we describe what the typical graduating Indian student can and cannot do at the median; what hiring partners flag as missing; and what universities could change without restructuring the credit system. The rubric cell text is summarised at the head of each subsection; the full text is at /skill-rubric.
Mathematical and statistical foundations
Practitioner-band descriptor: "Derives gradients for standard loss functions; reasons about bias and variance in plain language; applies basic hypothesis testing and confidence intervals to a real experimental result; recognises when an i.i.d. assumption is being violated."
Where the typical graduate sits: This is the dimension on which the Indian education system performs most credibly. The entrance-examination culture has, for decades, selected and trained for mathematical fluency. The median IIT/INI graduate sits at or near the Practitioner band; the median Tier-1 private graduate sits between Foundational and Practitioner; the median Tier-2 graduate sits at Foundational with strong exam performance.
What hiring partners flag as missing: Not the symbolic mathematics itself, but the application of it. Candidates can derive cross-entropy. They struggle to identify when a confusion matrix's off-diagonal pattern is the more important business signal. Candidates can compute a confidence interval. They struggle to design an experiment whose conclusions a confidence interval would actually support. This is the gap between exam-strong and application-shaky that the published commentary on engineering education has noted for years [3][15].
What would move the band: Replace "solve this problem set" with "interpret this real experimental result and write a one-page recommendation". The mathematics does not change. The graded artefact does.
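The "interpret this result" exercise can be made concrete. A minimal sketch, with invented labels and counts (none of this is drawn from any cited assessment): the graded question is not "compute the accuracy" but "which error mode dominates, and what would you recommend?" Finding the largest off-diagonal cell of a confusion matrix is the first step of that answer.

```python
from collections import Counter

def top_confusions(y_true, y_pred, k=1):
    """Return the k most frequent off-diagonal (true, predicted) pairs."""
    errors = Counter(
        (t, p) for t, p in zip(y_true, y_pred) if t != p
    )
    return errors.most_common(k)

# Invented ticket-routing labels: the model mostly mistakes
# "refund" requests for "complaint"s.
y_true = ["refund", "refund", "refund", "complaint", "billing", "billing"]
y_pred = ["complaint", "complaint", "refund", "complaint", "refund", "billing"]

pair, count = top_confusions(y_true, y_pred, k=1)[0]
print(pair, count)  # the dominant error mode, not just the accuracy
```

The one-page recommendation then starts from that pair ("two thirds of refund requests are misrouted to complaints; prioritise that boundary"), which is the business signal the overall accuracy number conceals.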
Implementation skill
Practitioner-band descriptor: "Writes a new model class (e.g. a small Transformer block) from a paper or reference; uses version control, virtual environments, and a unit-test harness as a default habit; profiles a slow training step with standard tooling; ships code reviewable by a working engineer."
Where the typical graduate sits: The second-strongest dimension. The default tooling culture (Python, PyTorch, Hugging Face, Jupyter) is now broadly familiar across Tier-1 and Tier-2 private programmes. The IndiaAI Mission's compute access and the proliferation of open-source course material have made the Practitioner band reachable for any student who chooses to reach for it.
What hiring partners flag as missing: Three things, consistently. First, the habits — version control as a default, branching discipline, code review responsiveness, a tested-code-only-merges norm. Second, debugging in unfamiliar codebases — candidates can run their own code; they freeze when handed someone else's. Third, the gap between a working prototype and code another engineer can maintain. The Scaler–CMR finding that 89 per cent of working engineers self-identify as AI-ready while 19 per cent are judged ready [4] is, in our reading, mostly a finding about this dimension. The first capability is "can write some AI code". The second is "can ship AI code other engineers can rely on".
What would move the band: Require every capstone to land in a public Git repository with a reviewer's record of merged pull requests, a green CI run, and a README another team could pick up from cold. Grade the capstone partly on the maintainability artefacts, not only on the final demo.
System design
Practitioner-band descriptor: "Builds a working single-purpose AI system end-to-end (e.g. a domain-specific RAG assistant) following a reference architecture; selects a model, embedding store, and prompt scaffold appropriate to a stated requirement; documents the design choices in a short design note."
Where the typical graduate sits: Predominantly at the Foundational band. The median graduate can diagram the components of a standard AI system (a chatbot, a classifier, a basic RAG pipeline) and identify which component does what, per the Foundational descriptor. They cannot yet justify the choice of one component over another with quantitative reasoning about cost, latency, and accuracy.
What hiring partners flag as missing: The trade-off discipline. Candidates can describe RAG. Candidates struggle to explain when RAG is the wrong answer (small static corpus, hard-real-time latency budget, an answer that requires reasoning rather than retrieval). Candidates can build an agent loop. Candidates struggle to write the architecture decision record that justifies the loop versus a single-shot completion against a cost-and-latency budget.
What would move the band: Require, as the gating artefact of every system-building course, a short written architecture decision record (one to three pages) defended in front of a working industry engineer. The capability is the writing and the defending, not the building.
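The cost side of that quantitative reasoning is often just arithmetic. A back-of-envelope sketch of the RAG-versus-single-shot comparison an architecture decision record would contain; every price and token count below is a hypothetical placeholder, not a vendor figure.

```python
def request_cost(prompt_tokens, completion_tokens,
                 price_in_per_1k, price_out_per_1k):
    """Per-request cost in currency units, given per-1k-token prices."""
    return (prompt_tokens / 1000 * price_in_per_1k
            + completion_tokens / 1000 * price_out_per_1k)

# Hypothetical numbers: retrieval adds ~2,000 context tokens per prompt.
single_shot = request_cost(500, 300, 0.5, 1.5)
rag = request_cost(500 + 2000, 300, 0.5, 1.5)
print(f"single-shot: {single_shot:.2f}, RAG: {rag:.2f}")
```

The point of the exercise is the defence, not the subtraction: a candidate at the Practitioner band should be able to say when the retrieval premium buys accuracy the requirement actually needs, and when it does not.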
Evaluation and methodology
Practitioner-band descriptor: "Designs an evaluation set appropriate to a small project, including held-out data; runs a basic ablation; reports results with appropriate uncertainty (confidence intervals, multiple seeds); identifies obvious data leakage."
Where the typical graduate sits: This is the dimension most reliably under-trained in Indian engineering education. The median graduate computes accuracy, precision, recall and F1 on a test split that was provided to them, and treats those numbers as the answer. They have rarely designed an evaluation set themselves; they have rarely run an ablation; they have rarely reported uncertainty around a result; they have rarely thought about data leakage.
This is the most consequential gap in the rubric. The Kompas rubric description calls evaluation "the dimension most often missing from self-taught practitioners" [Rubric v2026.1]. Indian formal education, at most institutions, leaves the formal student in roughly the same position as the self-taught practitioner on this dimension.
What hiring partners flag as missing: Almost everything. "Can run a structured error analysis on model outputs and turn it into a prioritised list of fixes" — the Applied-band descriptor — is the gap most commonly flagged by Kompas hiring partners, across all institutional tiers including the IITs. It is also the single capability that most reliably distinguishes a Practitioner-band candidate from a Foundational-band candidate in interview.
What would move the band: Make the evaluation methodology, not the model accuracy, the graded artefact of the capstone. A capstone that ships with a defensible evaluation and a modest model is worth more than a capstone with a strong-looking demo and no evaluation. Hiring partners agree on this; assessment regimes do not yet reflect it.
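Some of the Practitioner-band checks named in the descriptor are genuinely small. A minimal sketch of the simplest leakage check — exact-duplicate overlap between the training set and the held-out set — on invented data; a real check would also need near-duplicate, temporal, and feature-level leakage tests on top of this.

```python
def leakage(train_rows, eval_rows):
    """Exact-duplicate overlap between a training set and a held-out set."""
    overlap = set(train_rows) & set(eval_rows)
    return overlap, len(overlap) / len(eval_rows)

# Invented rows: one held-out example leaked from the training set.
train = ["the cat sat", "dogs bark", "fish swim", "birds fly"]
held_out = ["dogs bark", "snakes slither"]

dupes, frac = leakage(train, held_out)
print(dupes, frac)
```

A graduate who runs even this check by habit is ahead of the median described above; the dimension's gap is the habit, not the code.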
Safety, alignment and responsible AI
Practitioner-band descriptor: "Performs a basic disaggregated evaluation across an obvious sensitive attribute; runs a structured red-team against a small system and documents the failures; writes a first-draft model card or system card for their own project; cites the relevant clauses of India's DPDP Act and the EU AI Act risk-tier framework."
Where the typical graduate sits: Predominantly at the Foundational band, and often below the published Foundational descriptor. Candidates can name "bias, privacy, hallucination, misuse" as categories of AI harm. Most cannot give a concrete project-specific example of each. Few have read a model card. Almost none can cite the DPDP Act 2023, whose rules were notified on 13 November 2025 [13], or the India AI Governance Guidelines released 5 November 2025 [14], with any specificity. The EU AI Act risk-tier framework is even less familiar.
The regulatory landscape is now real. The DPDP Act becomes applicable to all entities 18 months after notification — that is, on 13 May 2027 [13]. The India AI Governance Guidelines tether AI governance to the DPDP Act with explicit privacy-by-design, consent, and purpose-limitation requirements [14]. A graduate hired into an AI-using role in 2026 H1 will be working under the DPDP-Act-applicable regime by the start of their second year on the job. They have, in most cases, not been taught the framework that will govern their work.
What hiring partners flag as missing: In sectors with binding compliance (BFSI, healthcare, public administration, education-tech, anything HR-adjacent), the gap is not bias-evaluation methodology — it is awareness that bias evaluation, system cards, and DPDP-Act-mapped data flow diagrams are now standard work products of an AI engineer. The H1 2026 finding is that this gap is closing fastest at the IIT/INI tier, where responsible-AI electives have proliferated, and is closing slowest at the Tier-2 and Tier-3 levels.
What would move the band: Two changes. First, make a model or system card a compulsory deliverable of every AI capstone, graded against a published rubric. Second, replace the abstract "AI ethics" elective with a graded course in India-specific AI governance — the DPDP Act, the India AI Governance Guidelines, the EU AI Act risk-tier framework, and the operational consequences of each. The content exists; it is not being taught at scale.
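The "basic disaggregated evaluation" named in the Practitioner descriptor is, mechanically, a per-group accuracy table. A minimal sketch with invented predictions and a hypothetical sensitive attribute; choosing which attribute to disaggregate by, and interpreting the gap, is the graded skill.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Accuracy per group, for (group, y_true, y_pred) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

# Invented predictions disaggregated by a hypothetical attribute.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]
print(disaggregated_accuracy(records))  # {'group_a': 0.75, 'group_b': 0.25}
```

A 50-point accuracy gap like the toy one above is exactly the kind of finding a model card's evaluation section exists to surface.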
Industry communication
Practitioner-band descriptor: "Writes a project design note a non-technical product manager can act on; presents to a small mixed audience and answers basic clarifying questions; produces a model card a hiring manager would consider professional."
Where the typical graduate sits: Foundational on average, and unevenly so. A non-trivial minority of Tier-1 and IIT/INI graduates reach the Practitioner band, typically through extra-curricular work (technical clubs, hackathons, public writing). The majority sit at Foundational: they can present project work to peers, write a basic README, and explain what a system does. They struggle to explain trade-offs they did not make, defend choices to a sceptical reviewer, or produce technical writing a working hiring manager would consider professional.
This is the dimension Indian technical education most reliably under-trains, per the Kompas rubric's own framing [Rubric v2026.1]. The structural reason is that the assessment regime grades the artefact (code, deck, demo) and rarely grades the explanation of the artefact as an independent deliverable.
What hiring partners flag as missing: The five-minute version. The ability to walk a non-technical executive through the architecture of an AI system, the choices made, and the residual risks, in five minutes, without retreating into jargon. Hiring partners describe this as the single most predictive interview signal — and the one fresh graduates fail most often.
What would move the band: Make a five-minute defended explanation, in front of a non-technical reviewer, a graded part of the capstone. Grade the explanation, not the demo. The capability transfers across every other dimension.
Summary: where the leverage is
The dimension-by-dimension picture is, in summary:
- Mathematical foundations and Implementation: the median graduate is close to Practitioner. Further investment here yields diminishing returns. This is not where the leverage is.
- System Design: Foundational, with realistic two-year reach to Practitioner if architecture-decision-record artefacts become graded outputs.
- Evaluation and methodology: Foundational, with the highest potential return on investment of any of the six dimensions. This is the leverage point.
- Safety and responsible AI: Foundational, with rising regulatory pressure that will force movement. Universities that move first will produce graduates the regulated sectors will pay materially more for.
- Communication: Foundational. The intervention is cheap (grading conventions, structured presentations to non-technical reviewers) and the return is high.
A two-year programme of curriculum-and-assessment reform that focused on the bottom four of these six dimensions would move the median graduate's average band from Foundational to Practitioner across most of the rubric. It would not require new credit hours. It would require new grading conventions.
Tier analysis
This section returns to the institutional-tier distribution from Section 3 and reads each tier through the rubric profile rather than through the cohort-average band.
IIT and INI tier
Profile: Practitioner on Mathematical Foundations and Implementation, between Foundational and Practitioner on System Design, Foundational on Evaluation and Safety, Foundational on Communication.
What is strong: Mathematical foundations and implementation skill are the dimensions for which the entrance examination has been selecting for decades. The IIT graduate's competence on these dimensions is real and is the reason the tier dominates global hiring at the lateral-and-senior level five years after graduation.
What is the gap: The IIT advantage on Evaluation, Safety, and Communication is much smaller than the public perception suggests. The rubric does not reward exam selection; it rewards the discipline of designing experiments, defending design choices, and producing professional-grade technical artefacts. These are not taught more systematically at the IITs than at the Tier-1 private universities.
The right intervention: The IITs do not need more curriculum. They need to convert their existing project-and-capstone work into graded artefacts on the evaluation, safety, and communication dimensions. The infrastructure (compute, faculty, peer review) is in place; the assessment convention is not.
Tier-1 private
Profile: Between Foundational and Practitioner on Mathematical Foundations and Implementation, Foundational on the other four dimensions. The strongest 10–15 per cent of any cohort reaches Practitioner across three or four dimensions; the long tail does not.
What is strong: Industry-current tooling, ample compute, motivated students, often a credible partnership pipeline.
What is the gap: The faculty depth on the Applied and Advanced bands. A Tier-1 private institution typically has a small number of strongly research-active faculty and a larger number of teaching-only faculty without recent industry-AI experience. The Applied-band content (evaluation harnesses, system design at scale, responsible-AI processes) is therefore taught by the small core and accessed by the strongest subset of students.
The right intervention: A focused Practitioner-to-Applied promotion pipeline for the top tertile of each cohort, supplemented by structured industry-mentor programmes that import working-AI experience the institution does not yet have on-staff. This is the segment where Kompas's own Track A intervention is most directly addressed.
Tier-2 private
Profile: Foundational across the board, with a small Practitioner subset on Implementation. The published programmes are typically AI-branded; the assessment regime is typically not.
What is strong: Scale. This tier comprises a significant share of total engineering enrolment.
What is the gap: Everything above Foundational, structurally. Faculty supply is thin. Compute access is improving via IndiaAI but is not yet universal. Assessment regimes treat capstones as completion artefacts rather than as evaluated systems. The model-curriculum-on-paper is current; the model-curriculum-as-taught is not.
The right intervention: This is the segment where the policy leverage is highest. A focused Practitioner-band shift here, applied to even half the cohort, would move the Indian AI hiring market more than any equivalent intervention at the top tier. The mechanisms include shared faculty (IIT-led upskilling, cross-institution adjunct faculty, industry-on-secondment), shared compute (IndiaAI common-pool access at scale), and shared assessment (a published capstone rubric, externally moderated). The Indian regulatory architecture supports this; the implementation has not yet matched the policy.
Tier-3 private
Profile: Foundational across the board, with the Practitioner subset concentrated in a small number of programme-leading students who self-source via online resources.
What is strong: Reach. This tier brings AI-adjacent education to students who would not otherwise access it.
What is the gap: Most of what the rubric measures. Faculty, compute, peer review, assessment regimes, employer access — all are constrained.
The right intervention: A pragmatic Foundational-to-Practitioner pipeline at scale, delivered through a combination of (a) curriculum reform within the existing AICTE model framework, (b) structured online-and-on-campus blended delivery from AI-specialist providers, and (c) extension of the IndiaAI compute pool to the long tail. The expected band shift here is from Foundational to Practitioner on two or three dimensions over a two-to-three-year cycle, not from Foundational to Applied. The economics support this; the absence of it would leave a large share of Indian engineering graduates in 2030 still at Foundational.
State institutions
Profile: Foundational on all six dimensions, with very limited dispersion. The variation across institutions within this tier is wider than within any other tier; some state institutions are stronger than the median Tier-3 private, others are materially weaker.
What is strong: Public mission, geographic reach (especially into Tier-II and Tier-III cities), low cost of access for students.
What is the gap: Acute, on every dimension. State institutions have the slowest curriculum-update cycle, the thinnest research faculty layer in AI, and the most limited compute access (though the IndiaAI Mission's explicit Tier-II/Tier-III city focus addresses part of this) [11].
The right intervention: The most directly addressed by the NEP 2020 structural provisions. The realistic two-year goal is to move the median state-institution graduate from Foundational to a credible Practitioner on Implementation and a strong Foundational (with movement toward Practitioner) on Mathematical Foundations. The other four dimensions will require longer cycles.
A note on cross-tier variance
A reader who comes from the United States or Western Europe may expect a steep tier gradient — top tier strong, bottom tier weak. The Indian gradient is closer to "top tier strong on two of six dimensions, bottom tier weak on all six, middle three tiers weak on five of six dimensions in different ways". The rubric reveals what a single-number employability ranking conceals: the gap is not primarily a tier gap. It is a dimension gap that runs across all tiers.
What changes the trajectory
Five interventions, in order of expected impact band-by-band over a two-to-three-year cycle. Each is concrete enough that a vice chancellor could ask their department heads to budget and execute against it without further consultation. None requires legislative change; all are compatible with NEP 2020 and the AICTE model framework.
1. Replace completion assessment with evaluation assessment in every AI-bearing capstone
The intervention. Every capstone or major project in an AI-bearing programme must ship with:
- An evaluation set the students designed themselves, not the split the original dataset provider shipped;
- An ablation that varies a stated design choice;
- A confidence interval or multiple-seed report on the headline metric;
- A written one-page error analysis;
- A defended five-minute presentation to a non-technical reviewer.
Grading weight on these artefacts must equal grading weight on the final model itself. The intervention requires no new credit hours; it requires changing what receives marks within existing capstone hours.
Expected band shift. This intervention alone, executed faithfully for one cohort, would move the median graduate from Foundational to Practitioner on Evaluation, from Foundational to between-Foundational-and-Practitioner on Communication, and produce indirect lift on System Design. Across all tiers. The intervention works at IITs and at Tier-3 private institutions equally — what changes is the dispersion of outcomes, not the median lift.
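The multiple-seed report named in the third bullet needs nothing beyond the standard library. A sketch under a normal approximation (with only five seeds, a t-multiplier would be more defensible); the accuracy numbers are invented.

```python
import statistics

def seed_report(scores, z=1.96):
    """Mean and normal-approximation 95% CI half-width over seed runs."""
    mean = statistics.mean(scores)
    half = z * statistics.stdev(scores) / len(scores) ** 0.5
    return mean, half

# Accuracy of the same model trained with five random seeds (toy numbers).
runs = [0.81, 0.79, 0.84, 0.80, 0.82]
mean, half = seed_report(runs)
print(f"{mean:.3f} ± {half:.3f}")
```

The graded artefact is the headline metric reported as an interval rather than a point — which is what lets a reviewer tell a real improvement from seed noise.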
2. Make a model card and a DPDP-Act mapping compulsory artefacts of every AI capstone
The intervention. Every AI-bearing capstone must ship with a one-to-two-page model card or system card following the Kompas model card template (or any reputable published alternative), and a half-page data flow diagram with a mapped DPDP-Act-2023 obligation list. Graded against a published rubric. Reviewed by a faculty member with documented familiarity with the regulatory regime (which the AICTE faculty-training programme is intended to produce).
Expected band shift. Foundational to Practitioner on Safety for the median graduate within a single cohort. Indirect movement on Communication. Direct economic return: graduates with this capability are immediately differentiated in BFSI, healthcare, and public-sector hiring, where DPDP-Act familiarity is now a screening criterion.
3. Convert the "ethics elective" into a graded India-AI-governance course
The intervention. A 2-credit course, mandatory for every AI-bearing programme, covering: the DPDP Act 2023 and its November 2025 rules; the India AI Governance Guidelines (November 2025); the EU AI Act risk-tier framework as a comparator; sectoral guidance (RBI, IRDAI, NMC, MeitY); the operational consequences for an AI engineer (system cards, data flow diagrams, consent registers, incident-response playbooks). Examined by case study, not by multiple-choice quiz.
Expected band shift. As above on Safety; the difference is that this intervention provides the framework knowledge that the model-card artefact assumes. Together with intervention 2, they form a coherent Practitioner-band Safety package.
4. Mandate an externally moderated capstone rubric with industry-mentor sign-off
The intervention. Every AI capstone is reviewed by at least one external assessor from the partner-employer pool, against a published rubric (the Kompas rubric is one option; any published rubric would suffice). The external sign-off becomes part of the transcript record. Hiring managers can then reference an externally moderated assessment, rather than reading a CGPA whose dispersion across institutions is not comparable.
Expected band shift. This is a structural intervention rather than a content intervention. The band shift is indirect, via two mechanisms. First, externally moderated rubrics reduce the dispersion of Foundational-band capstones that are graded as Practitioner-band internally. Second, they produce a signal employers can use beyond CGPA (see Section 9).
5. Build a shared adjunct-faculty pool from the working-AI-engineer population
The intervention. Use the AICTE Year of AI 2025 framework as a vehicle for a structured industry-secondment programme. Working AI engineers (not consultants, not vendors — practising engineers from product companies and credible AI teams) teach one course per year at a partner institution, with their existing employer's blessing and a documented honorarium structure. The intervention has been tried piecemeal at individual institutions; the gap is national scale.
Expected band shift. This is the only intervention on this list that can credibly move the band ceiling at Tier-2 and Tier-3 private institutions. Without working-engineer faculty time, the Applied-band content cannot be taught with authority at scale. The IndiaAI Mission's faculty-side provisions are an enabler; the AICTE Year of AI is an enabler; the structured employer-secondment is the missing piece.
Why these five and not others
The standard list of interventions in this debate — "more AI courses", "more credit hours", "more enrolment", "more international partnerships" — addresses the syllabus layer or the enrolment layer. The five above all address the assessment, artefact, faculty, or moderation layers. Our reading of the H1 2026 evidence is that the binding constraint has moved from syllabus to assessment, and that the right interventions are correspondingly different.
A vice chancellor reading this who is considering only one intervention should pick the first one (assessment reform on capstones). It is the cheapest, the most directly under institutional control, the most evidence-supported, and the most rubric-impactful.
What this means for employers
This section is addressed to AI hiring managers in Indian product and services companies. It is a calibration guide.
Stop reading CGPA. Start reading the capstone.
A CGPA from one Indian engineering institution is not commensurable with a CGPA from another. The dispersion of grading conventions within and across institutions is wider than the signal in the number. The Mercer–Mettl, Wheebox / India Skills Report, and NIRF data establish this at the population level [1][2][18]. Hiring managers know it at the individual level.
The capstone is a better signal. Specifically: the capstone artefacts — the public Git repository, the README, the evaluation note, the model card, the architecture decision record — are more predictive of working capability than the CGPA. If the artefacts do not exist for a candidate, the candidate has not yet been required by their institution to produce them. This is itself a screening signal.
Screen for the dimensions the universities under-train
A working interview protocol that the Kompas hiring-partner panel has converged on over 2025–26 looks like this:
- A 10-minute walk through the candidate's capstone, with the candidate sharing their screen. Watch for whether they can navigate their own code in their own repository.
- A 10-minute Practitioner-band Evaluation probe. "Walk me through how you decided the metric for this project. What would you change about your evaluation set if you ran this again?"
- A 10-minute Practitioner-band System Design probe. "If we wanted this to handle ten times the traffic / be five times as accurate / cost half as much, what would you change first and why?"
- A 10-minute Practitioner-band Safety probe. "Walk me through how you'd think about bias in this model. What sensitive attribute would you disaggregate by, and why?"
- A 10-minute Communication probe. "Explain this system in five minutes to a product manager who doesn't know AI. They have ten minutes to decide whether to ship it."
A candidate who passes three of these five is at the Practitioner band on the corresponding dimensions and is a credible hire. A candidate who passes five is at the Applied band on several dimensions and is rare. The interview format is not novel; what is novel is screening for the dimensions explicitly rather than implicitly.
Recommended partnership models
Three partnership models that the Kompas hiring-partner network has observed working at scale:
- Sponsored capstone projects. The employer publishes a real (sanitised) problem and provides a small stipend to the institution. The capstone runs against the employer's published evaluation rubric. The employer interviews every student who completes. The yield rate, in our observation, is higher than from generic placement-cell processes.
- Embedded internships against the rubric. Six-month internships during the final undergraduate year, evaluated against a published Practitioner-band rubric and with a documented end-of-internship report. Internships that exit without the report are weaker signals.
- Externally moderated assessment. A rotating panel of employer-engineers reviews capstones at partner institutions and signs off on the rubric assessment. The institution gets an external moderation signal; the employer gets a pre-screened candidate pool.
None of these models require a change to NEP 2020 or to AICTE policy. They are operational arrangements between employer and institution.
A note on the lateral-vs-fresher trade-off
A consistent theme from the 2024–26 reporting [5][23][24] is that the largest Indian IT services firms have shifted training capacity toward AI-reskilling of existing employees. From the employer side this is rational at current band distributions. From the system side it produces a perverse outcome: a fresh graduate who could have been lifted to Practitioner during their final year, on a sponsored capstone or an externally moderated rubric, becomes a Foundational-band fresher whom the employer must then re-train at the cost of one or two years.
Employers who invest in the final-year intervention (sponsored capstones, externally moderated assessment, embedded internships) reduce the cost of getting a Practitioner-band hire to roughly the cost of a re-skilled veteran, with a more durable result. This is a structural argument for the employer–university partnership, made on hiring economics rather than on social impact.
What this means for students
This section is addressed to students currently enrolled in an Indian AI-bearing programme. It is intentionally practical.
Locate yourself on the rubric, honestly
Read /skill-rubric. For each of the six dimensions, identify the band that describes you most accurately. Do not anchor to the most flattering descriptor. The right test is: could a working AI engineer, watching you do the work described in the cell, agree the cell describes you?
The most common student error is to read the rubric and place themselves one band higher than they are. The most common second error is to find a single project that meets one cell of one band, and to count it as evidence of that band on every dimension.
If you are at Foundational on most dimensions
The realistic semester goal is one band up on two dimensions. Do not attempt all six.
The highest-yield path is:
- Implementation. Choose one real project — not a tutorial — that you will ship into a public Git repository with a working README, version control discipline, and a small set of unit tests. The project does not need to be ambitious. It needs to be yours.
- Evaluation. For the same project, write a one-page evaluation note. What is the metric? Why this metric? What is the held-out set? Where is the error analysis? What did you learn from the failures?
These two artefacts, executed seriously, will move you from Foundational to Practitioner on Implementation and from Foundational to between-Foundational-and-Practitioner on Evaluation, in a single semester. They are also what a hiring manager will look at in the first thirty seconds of your interview.
If you are at Practitioner on Implementation and Foundational elsewhere
The realistic semester goal is Practitioner on Evaluation and the beginnings of Practitioner on System Design.
The highest-yield additions are:
- A second project of materially different shape. If your first was a model, your second should be a system. If your first was a system, your second should include a fine-tuned model. The cross-shape experience is what System Design rests on.
- A written architecture decision record (one to three pages) for the system project, defended in front of a more senior engineer or a faculty member with industry experience. The ADR is the artefact that distinguishes a Practitioner-band System Design from a Foundational one.
If you are at Practitioner on three or more dimensions
You are in the top 5 per cent of your cohort. The Applied band is a one-to-two-year reach, not a one-semester reach. The path runs through:
- A model card and a DPDP-Act-2023 mapping for one of your existing projects. This moves Safety toward Practitioner cleanly.
- A defended public technical post or whitepaper about one of your projects, written for an external reader who is technical but not in your sub-field. This is the Applied-band Communication artefact.
- An internship that you select against the rubric, not against the brand of the employer. A six-month embedded internship at a credible mid-stage AI team will produce more rubric movement than three months at a brand-name employer where you do peripheral work.
What not to spend semester time on
- More MOOC certificates. Past the first two or three, the marginal MOOC does not move the rubric. Hiring partners consistently report that certificate count is not predictive of capability [4]. The certificate ceiling is a real phenomenon.
- Hackathon-only project work. Hackathons produce demo-grade artefacts. Demo-grade artefacts are not what the rubric grades. Hackathon projects are valuable only if you take them home and convert them into evaluated, documented, maintained projects.
- Toolchain breadth for its own sake. Knowing PyTorch and TensorFlow and JAX and a fourth framework is less rubric-impactful than knowing one of them deeply enough to ship maintained code.
A word on certificates
Certificates are not nothing. They are useful at the Foundational-to-Practitioner transition as a forcing function to complete graded content. They are not, in 2026 H1, a substitute for the artefact set the rubric grades — public repositories, evaluation notes, model cards, architecture decision records, defended five-minute explanations. A student who has the artefact set does not need many certificates. A student who has only certificates does not yet have the artefact set.
Limitations and what we don't yet know
We are publishing this inaugural release in the spirit of ETHOS Principle 7: never overclaim. The following are the open questions on which a serious reader should treat this report's conclusions as provisional.
What we don't yet have first-party measurement of
- A first-party cohort assessment. The H2 2026 release will include a structured panel of graduating-cohort rubric assessments at a small number of partner institutions, with consent. The H1 2026 release does not.
- A defensible all-India figure for the share of AI faculty with shipped industry experience in the last three years. The literature has not converged on a number we trust; we use the hiring-partner-panel estimate and mark it as such.
- Comparable rubric data from peer education systems (China, the United States, the EU). The Kompas rubric is published; the same exercise has not been done with equivalent rigour against the same rubric in those systems. The H2 2026 release will include a peer-system comparator subject to data quality.
Where the survey evidence is weakest
- The split between supply-side and demand-side measurement. The published surveys mostly measure one or the other. The right unit of analysis is the joint distribution — what graduates can do, mapped to what employers expect. Our synthesis here uses the supply and demand sources as bookends; a future release with first-party assessment will narrow the joint distribution.
- Tier-3 and state institution data. The Mercer–Mettl, Wheebox, and Scaler–CMR samples skew Tier-1-and-Tier-2 in their respondent profiles [1][2][4]. Our Tier-3 and state estimates rest more heavily on policy reporting and hiring-partner intake than on direct measurement. The H2 2026 release will attempt to address this with deliberate sampling.
- Discipline-spanning AI literacy. The Kompas rubric in this release is calibrated to engineering; the parallel rubric for AI literacy in law, journalism, healthcare, design, management, and other disciplines is in development. The companion Track D work will publish that rubric separately.
What the next release will measure
The H2 2026 release (November 2026) is scoped to add:
- A first-party graduating-cohort assessment at a panel of partner institutions, disclosed methodology, opt-in consent.
- A longitudinal cohort comparison against this H1 2026 baseline.
- Initial data on the impact of the AICTE Year of AI 2025 faculty-training cohort on rubric distributions.
- A peer-system comparator where data quality permits.
- A discipline-spanning Track D literacy rubric assessment.
What this report explicitly does not claim
To repeat for the record: this report does not name specific universities as partners; does not claim endorsement from individuals or institutions we have not signed; does not present invented statistics. Where statistics are uncertain or extrapolated, we have marked them. Where we could not find a defensible source, we have marked [Source needed] rather than fill the gap.
Sources and citations
Sources accessed and cited in this report, in citation order. URLs verified accessible as of May 2026.
1. Mercer–Mettl, India's Graduate Skill Index 2025. Findings on overall employability (42.6%) and AI/ML domain employability (46.1%). Methodology: 2,700+ campuses, 1M+ learners. https://www.mercer.com/insights/talent-and-transformation/talent-assessment/indias-graduate-skill-index-2025/
2. Wheebox / ETS / CII / AICTE / AIU / Taggd, India Skills Report 2026. Employability rate of 56.35%; CS and IT employability at ~80%; AI-domain analysis. https://www.insightsonindia.com/2025/11/14/the-india-skills-report-2026/
3. Mercer–Mettl India Graduate Skill Index 2025 PDF (Scientech-mirrored). Tier-1 (48.4%), Tier-2 (46.1%), Tier-3 (43.4%) employability breakdown; gender parity in AI/ML. https://scientechworld.com/wp-content/uploads/2025/02/MM_GSI_2025_Latest-1.pdf
4. Scaler–CMR Confidence-Capability Gap Study, 2026. 89% of engineers self-assess as AI-ready, 19% are; 86% of recruiters report difficulty; 55% time-constrained, 49% cost-constrained on upskilling. https://www.scaler.com/blog/are-indias-engineers-truly-ai-ready-scaler-cmr-study-uncovers-a-confidence-capability-gap/
5. NASSCOM–Deloitte, "Bridging the AI talent gap to boost India's tech and economic impact" (2024). India's AI talent pool to grow from 600,000–650,000 to >1.25M by 2027. https://www.deloitte.com/in/en/about/press-room/bridging-the-ai-talent-gap-to-boost-indias-tech-and-economic-impact-deloitte-nasscom-report.html
6. IndiaAI on the NASSCOM–Deloitte 2024 report. AI talent pool projection to 1.25M by 2027. https://indiaai.gov.in/article/india-s-ai-talent-pool-to-grow-to-1-25-million-by-2027-nasscom-deloitte-india-report
7. LinkedIn AI Labor Market Report 2026 (India coverage). India's AI hiring at 59.5% YoY growth, fastest globally; Hyderabad 51%, Vijayawada 45.5%. https://madhyamamonline.com/india/india-leads-global-ai-hiring-growth-at-595-1514399
8. NASSCOM Community, "India's AI Talent Crisis Is Real." Talent shortage of ~50% in 2024 (420,000 vs 600,000 immediate requirement). https://community.nasscom.in/communities/ai/indias-ai-talent-crisis-real-and-its-costing-us-future
9. Observer Research Foundation, "Five Years of NEP 2020 and the Promise of EdTech." Digital infrastructure in schools rising 34%→57% (2019–20 to 2023–24); implementation gaps. https://www.orfonline.org/expert-speak/five-years-of-nep-2020-and-the-promise-of-edtech
10. Ministry of Education, Government of India — Centre of Excellence in AI for Education (Union Budget 2025–26). Rs. 500 crore allocation. https://www.education.gov.in/en/nep/coe-ai-education
11. IndiaAI Mission — Compute Capacity programme. Rs. 10,372 crore overall outlay; compute component Rs. 4,563 crore over five years. https://indiaai.gov.in/hub/indiaai-compute-capacity
12. AI CERTs News on the IndiaAI Mission GPU rollout, 2026. 38,000 accelerators live. https://www.aicerts.ai/news/india-gpus-national-mission-hits-38000-accelerators/
13. IAPP, "Notes from the Asia-Pacific region: India releases DPDPA rules, AI governance guidelines," November 2025. DPDP Rules notified 13 November 2025; full applicability 13 May 2027. https://iapp.org/news/a/notes-from-the-asia-pacific-region-india-releases-dpdpa-rules-ai-governance-guidelines
14. MeitY, India Artificial Intelligence Governance Guidelines (5 November 2025). Soft-law framework tethered to DPDP Act 2023; risk-tier activity classifications. https://americanchase.com/generative-ai-regulations-india/
15. IJERT, "Integrating Artificial Intelligence into India's National Education Policy 2020." Implementation analysis. https://www.ijert.org/integrating-artificial-intelligence-in-to-indias-national-education-policy-2020-opportunities-challenges-and-strategic-pathways
16. Springer / Discover Artificial Intelligence, "AI literacy at higher education and India's vision Viksit Bharat 2047: a systematic review." https://link.springer.com/article/10.1007/s44163-025-00348-z
17. University World News, "AI is already transforming Indian higher education – Report," October 2025. https://www.universityworldnews.com/post.php?story=20251016140031657
18. NIRF India Rankings 2024 — Engineering. Methodology and category-specific analysis. https://www.nirfindia.org/Rankings/2024/EngineeringRanking.html
19. KPMG analysis of NIRF 2024 category-specific results. https://assets.kpmg.com/content/dam/kpmgsites/in/pdf/2024/10/national-institutional-ranking-framework-nirf-2024-category-specific-analysis.pdf.coredownload.inline.pdf
20. Elets digitalLEARNING, "Indian Graduate Employability in AI and ML Reaches 46 Percent." Reportage on Mercer–Mettl AI/ML domain employability. https://digitallearning.eletsonline.com/2025/02/indian-graduate-employability-in-ai-and-ml-reaches-46-percent-report/
21. NASSCOM, "Talent Demand & Supply Report: AI & Big Data Analytics." https://www.nasscom.in/knowledge-center/publications/talent-demand-supply-report-ai-big-data-analytics
22. Stanford AI Index referenced via The Print, "India leads in AI talent, but also brain drain & anxiety, says Stanford's AI index report." India 19.9% of AI projects on GitHub; <2% of top-cited AI publications. https://theprint.in/india/governance/india-leads-in-ai-talent-but-also-brain-drain-anxiety-says-stanfords-ai-index-report/2909479/
23. Whalesbook coverage of India IT firms boosting AI hiring. TCS/Wipro AI training scale; pay positioning vs. Infosys. https://www.whalesbook.com/news/English/tech/India-IT-Firms-Boost-Hiring-for-AI-TCSWipro-Pay-More-Than-Infosys/69ec9b455a43f6b807bb7cf2
24. Channeliam, "Infosys to Hire 20,000 Graduates in 2025 Amid AI Push." https://en.channeliam.com/2025/07/31/infosys-hiring-2000-graduates-ai-strategy/
25. BizzBuzz / Randstad survey on softening fresher hiring trends. https://www.bizzbuzz.news/employment/organisations-looking-at-less-fresher-hiring-in-india-randstad-survey-1390761
26. Drishti IAS / OpenTools coverage of the AICTE 2025 Year of AI. 1,000-college faculty training; 14,000+ colleges curriculum embedding. https://www.drishtiias.com/daily-updates/daily-news-analysis/aictes-2025-year-of-ai
27. AICTE Model Curriculum portal — UG engineering, AI&ML and AI&DS specialisations. https://www.aicte-india.org/education/model-syllabus
28. PIB India — AI in Education: Building India's Talent Pipeline for Global Leadership (March 2026). https://static.pib.gov.in/WriteReadData/specificdocs/documents/2026/mar/doc202633809001.pdf
29. Marwadi University blog, "How AI Is Changing Engineering Education in 2025." Faculty shortage framing. https://www.marwadiuniversity.ac.in/blog/how-ai-is-changing-engineering-education-in-2025/
30. Center for Security and Emerging Technology (CSET), Georgetown — AI Faculty Shortages. Cross-national framing. https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Faculty-Shortages.pdf
31. PIB India press release on IndiaAI common-compute capacity crossing 34,000 GPUs. https://www.pib.gov.in/PressReleasePage.aspx?PRID=2132817&reg=3&lang=2
Related reading on this site
- Kompas AI Skill Rubric (v2026.1) — the canonical 5×6 grid this report assesses against.
- The Indian AI talent gap in 2026: data and what to do about it — companion long-form post.
- NEP 2020 and the AI minor: an implementation note — policy analysis.
- Research at Kompas — index of published research.
- For vice chancellors and deans — partnership entry point.
- For employers — hiring-partner entry point.
Published by Kompas AI School, withkompas.com. Inaugural release of the AI Skills Index, India series. Next refresh: November 2026. For research correspondence, contact us.