What’s the point in slaving away for hours a day studying for an exam when AI can complete it in twenty minutes? It is a question many university students are contemplating. Students are increasingly picking the easier option, as a Guardian investigation shows: it found 7,000 cases of AI cheating across UK universities in 2023-2024 (The Guardian, 2025). A deeper economic issue arises from this: how do employers distinguish high-ability students from lower-ability students if AI allows everyone to achieve the same grades? As AI makes high marks easier to obtain, the signal employers rely on grows weaker, stretching the issue beyond the classroom and into the labour market.
Why Grades Matter: The ‘Label’ Problem
Employers cannot judge our capability just by looking at us; in economics this hurdle is called asymmetric information: ability is hidden, so employers rely on exam results as a ‘signal’ of ability. Signalling theory suggests that higher-ability students can achieve higher grades at a lower cost of effort. This difference produces a separating equilibrium that works like a filter, where individuals of different abilities choose different performance levels (Spence, 1973). Under this equilibrium, exam results are more than numbers on a page: they act as signals that enable employers to select suitable candidates.
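The separating logic can be made concrete with a toy numerical sketch of Spence-style signalling. All the numbers here (wages, abilities, the grade threshold, and the cost function `threshold / ability`) are illustrative assumptions, not figures from the paper:

```python
# Toy sketch of Spence (1973) signalling -- all numbers are illustrative.
def best_signal(ability, threshold, wage_high=2.0, wage_low=1.0):
    """Worker clears the grade threshold only if the wage gain beats the cost.

    The effort cost of reaching the threshold is threshold / ability,
    so higher ability means cheaper signalling."""
    return threshold if wage_high - threshold / ability > wage_low else 0.0

low_ability, high_ability, grade_bar = 1.0, 2.0, 1.5

print(best_signal(high_ability, grade_bar))  # 1.5 -> high type signals
print(best_signal(low_ability, grade_bar))   # 0.0 -> low type opts out
```

Because the threshold sits between the two abilities’ break-even points, only the high type finds signalling worthwhile: the types separate, and the grade is informative.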
From Separation to Pooling: How AI Reshapes Academic Signals
Image 1: More students are using AI in online exams (Source: The Guardian, 2024)
Figure 1: AI lowers the cost of high grades for low-ability students, shifting the outcome from separating to pooling
As shown in Figure 1, AI narrows the cost gap between high-ability and low-ability students, enabling lower-ability students to produce similar outcomes. Consequently, exam results can no longer clearly distinguish between ability levels. This means the separating equilibrium that once made exam performance informative has begun to collapse, leaving a pooling equilibrium in which grades no longer reliably reflect underlying ability.
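The collapse depicted in Figure 1 can be sketched by adding an AI term to the same kind of toy signalling model. Everything here is an illustrative assumption, including the idea of modelling AI as a multiplier that scales down effort cost:

```python
# Toy signalling sketch with AI modelled as an effort-cost multiplier.
# All numbers are illustrative assumptions, not estimates.
def best_signal(ability, threshold, ai_multiplier=1.0,
                wage_high=2.0, wage_low=1.0):
    effective_ability = ability * ai_multiplier  # AI makes high grades cheaper
    return threshold if wage_high - threshold / effective_ability > wage_low else 0.0

low_ability, grade_bar = 1.0, 1.5

# Without AI the low type opts out; with AI it mimics the high type.
print(best_signal(low_ability, grade_bar))                     # 0.0 (separating)
print(best_signal(low_ability, grade_bar, ai_multiplier=3.0))  # 1.5 (pooling)
```

Once the multiplier is large enough, both types clear the grade bar, so observing a high grade no longer reveals ability: the separating outcome becomes a pooling one.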
What does this shift mean for an employer? As the cost of achieving high scores declines, more students obtain similarly high scores, raising the overall grade distribution and producing grade inflation. As first-class grades become easier to obtain, the informational value of grades diminishes, weakening their role as a signal of ability. When employers can no longer effectively distinguish between candidates on exam performance, they fall back on average expectations, deepening the information asymmetry. Under such conditions a “market for lemons” emerges (Akerlof, 1970): poor information makes it harder to separate high quality from low quality, and adverse selection may arise, meaning low-ability candidates are more likely to enter the market while higher-ability individuals are undervalued and crowded out, reducing the overall quality of the labour market.
Alternative Signals: A Solution or a New Problem?
When grades lose credibility, firms pivot to alternative signals to filter ability rather than relying on a single indicator (Stiglitz, 1975). For instance, indicators such as internships and interview performance are typically harder to ‘hack’ and therefore become more influential in evaluating candidates.
Are firms actually better off after this shift? First, it may be too absolute to conclude the ‘death’ of the degree: academic signals still carry weight, since AI does not necessarily eliminate all cost differences in achieving high scores, and higher-ability students may still use these tools more effectively. This suggests that some degree of separation may persist. Similarly, alternative signals are not perfect either: interviews can be biased, and internships may reflect access to opportunities rather than ability (Arrow, 1973; Rivera, 2012), thereby exacerbating inequality. Moreover, hiring costs may rise as firms adopt alternative signals and evaluate candidates against a broader range of criteria.
Can We Fix the Signal?
This brings us back to the opening question: can we restore the academic signal, and with it a better outcome for firms and students? One way is to regulate AI use by requiring students to declare it or by introducing detection tools (Russell Group, 2023). Although this does not fully solve the problem, it could raise the cost of over-relying on AI and narrow the information gap in the evaluation process. Another approach is to include more in-person exams and varied assessment formats, such as presentations or in-class tasks (Attewell, 2024). These are harder to “game” with AI, forcing more direct engagement and helping to restore the separating equilibrium.
Overall, restoring the signalling power of the degree matters. We still hope grades can play an important role in the labour market as a relatively low-cost, standardised and broadly fair signal. However, we cannot ignore the limitations of grades as a signal: as higher education becomes increasingly digital, online exams are unlikely to disappear given their flexibility and cost-efficiency, and imperfect AI detection means universities cannot fully guarantee the reliability of this signal alone.
Even so, grades can still work as an initial filter, giving employers basic information about applicants and preserving some fairness. Combined with interviews and internships, they can form part of a broader, more balanced way of assessing candidates, as many firms already do today.
What Comes Next?
As AI advances, it has an increasingly direct
effect on the evaluation of human capital. For universities, this means regulating AI use to preserve grades as credible
signals of ability. For students, it means adapting to a more complex selection
process and building strengths in interviews and internships. While current solutions are far from perfect, continued development offers a chance to build new signals, leading to a fairer and more effective system for assessing job applicants.
References
Akerlof, G.A. (1970) ‘The Market for “Lemons”: Quality Uncertainty and the Market Mechanism’, The Quarterly Journal of Economics, 84(3), pp. 488-500.
Arrow, K.J. (1973) ‘Higher Education as a Filter’, Journal of Public Economics, 2(3), pp. 193-216.
Attewell, S. (2024) ‘Exploring AI and assessment - avoid, outrun or embrace’, Jisc, 22 April. Available at: https://www.jisc.ac.uk/blog/exploring-ai-and-assessment-avoid-outrun-or-embrace (Accessed: 22 April 2026).
The Guardian (2024) ‘Researchers fool university markers with AI-generated exam papers’, 26 June. Available at: https://www.theguardian.com/education/article/2024/jun/26/researchers-fool-university-markers-with-ai-generated-exam-papers (Accessed: 24 March 2026).
The Guardian (2025) ‘Revealed: Thousands of UK university students caught cheating using AI’, 15 June. Available at: https://www.theguardian.com/education/2025/jun/15/thousands-of-uk-university-students-caught-cheating-using-ai-artificial-intelligence-survey (Accessed: 23 April 2026).
Rivera, L.A. (2012) ‘Hiring as Cultural Matching: The Case of Elite Professional Service Firms’, American Sociological Review, 77(6), pp. 999-1022.
Russell Group (2023) Principles on the use of generative AI tools in education. London: Russell Group. Available at: https://www.russellgroup.ac.uk/policy/policy-briefings/principles-use-generative-ai-tools-education (Accessed: 22 April 2026).
Spence, M. (1973) ‘Job Market Signaling’, The Quarterly Journal of Economics, 87(3), pp. 355-374.
Stiglitz, J.E. (1975) ‘The Theory of “Screening,” Education, and the Distribution of Income’, American Economic Review, 65(3), pp. 283-300.
Susnjak, T. and McIntosh, T.R. (2024) ‘ChatGPT: The End of Online Exam Integrity?’, Education Sciences, 14(6), 656. Available at: https://doi.org/10.3390/educsci14060656