Tuesday, 29 April 2025

AI’s Hidden Costs: The Trade-Offs Behind the Tech

Sacked by the Spreadsheet

Imagine waking up to find your job replaced by an algorithm. For millions of people this is now becoming reality; AI has redefined customer service and manufacturing in countless industries across the world (Sharps, 2024). While some marvel at the productivity gains, a burning economic question emerges: what are the hidden costs of progress?



The Price of Progress

In a free market, all participants in the economy freely exchange information about their needs, wants, skills and abilities. This process creates an ever-updating network of prices that reflect the relative costs and benefits of different economic activities. However, this mechanism often breaks down: sometimes prices fail to capture all the effects of producing a good or service. Economists call these hidden costs ‘negative externalities’ - when someone else ends up paying the price for a decision they weren’t part of (Kenton, 2024). If a firm replaces workers with AI, it may reduce its private costs by increasing productivity, but the social costs of retraining workers and of the mental health crises caused by unemployment are externalised onto society. These costs are a classic case of market failure, unaccounted for in corporate balance sheets.
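To see the wedge in numbers, here is a minimal sketch with purely hypothetical figures (none of the values come from any study): the firm compares only its private costs and benefits, so automating looks profitable even when the full social calculation says otherwise.

```python
# Hypothetical figures (illustrative only) for a single automation decision.
productivity_gain = 1_000_000   # firm's private benefit from replacing workers with AI (per year)
private_cost = 600_000          # licences, integration, maintenance (per year)

# Costs the firm never pays: retraining, unemployment support, mental-health services.
external_cost = 700_000         # borne by workers and taxpayers (per year)

private_net = productivity_gain - private_cost                   # what the firm sees
social_net = productivity_gain - (private_cost + external_cost)  # what society bears

print(f"Private net benefit: {private_net:+,}")  # +400,000 -> the firm automates
print(f"Social net benefit:  {social_net:+,}")   # -300,000 -> society as a whole loses
```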

Researchers estimate that automated trucks could wipe out around 500,000 jobs in the US trucking industry, but tech firms won’t pay the social cost (Gariffo, 2022). Displaced workers, increased demand for public support services, and the psychological toll on affected families are all effects that markets never priced in. Sound familiar? The process is much like factories polluting rivers without covering the bill for cleanup.

 


Convenience Over Competence

There are clearly social costs to AI that have been ignored - but are there also benefits that have been overestimated? Here’s where two classic economic ideas come in: moral hazard and the principal-agent problem. Moral hazard occurs when a party takes risks they ordinarily wouldn’t, because they don’t bear the full consequences of those risks. The principal-agent problem, on the other hand, occurs when one party (the agent) makes decisions on behalf of another (the principal), but their misaligned interests reduce economic efficiency.
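A stylised sketch of that misalignment (every payoff below is made up purely for illustration): when effort is costly and unobserved, the agent’s best choice is not the principal’s.

```python
# Hypothetical payoffs for one task. The principal earns more when the agent
# works hard, but effort is costly to the agent and, without monitoring,
# the agent is paid the same wage either way.
wage = 100
effort_cost = {"high": 40, "low": 5}
principal_revenue = {"high": 300, "low": 150}

for effort in ("high", "low"):
    agent_payoff = wage - effort_cost[effort]
    principal_payoff = principal_revenue[effort] - wage
    print(f"{effort:>4} effort: agent {agent_payoff}, principal {principal_payoff}")

# Unmonitored, the agent prefers low effort (payoff 95 vs 60) even though total
# surplus is higher with high effort (260 vs 145) - that gap is the moral hazard.
```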

In many ways, AI is the time-saving tool people claim it is. It also stands out as a powerful way to generate refined signals that reduce information asymmetry in economic transactions. By embedding these signals into principal-agent contracts, moral hazard can be mitigated, boosting efficiency across various market structures. Take Uber as an example: AI tools help reduce asymmetric information by producing precise ‘effort signals’, for instance by cross-referencing a driver’s GPS data with assigned trips to see whether they are genuinely working or idling. If agents know their behaviour is being tracked closely, they have less leeway to shirk or misreport, reducing the moral hazard of going unmonitored (Zhang and Zhang, 2025).
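A toy version of such an effort signal might look like the sketch below; the trip-log fields and the idea of measuring utilisation over a shift are assumptions for illustration, not Uber’s actual pipeline (which would cross-reference GPS traces, not just timestamps).

```python
from datetime import datetime, timedelta

# Hypothetical trip log for one driver's shift: (trip start, trip end) timestamps.
trips = [
    (datetime(2025, 4, 29, 9, 0),   datetime(2025, 4, 29, 9, 40)),
    (datetime(2025, 4, 29, 10, 50), datetime(2025, 4, 29, 11, 30)),
    (datetime(2025, 4, 29, 11, 35), datetime(2025, 4, 29, 12, 15)),
]
shift_start = datetime(2025, 4, 29, 9, 0)
shift_end = datetime(2025, 4, 29, 12, 30)

# Effort signal: share of the shift spent on assigned trips.
time_driving = sum((end - start for start, end in trips), timedelta())
utilisation = time_driving / (shift_end - shift_start)

print(f"Utilisation: {utilisation:.0%}")  # persistently low values flag possible shirking
```

Crude as it is, a signal like this narrows the information gap between principal and agent: the platform no longer has to take the driver’s word for how the shift was spent.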

The twist? AI doesn’t just fix moral hazard, it can create it too. As workers gain access to AI tools, they become increasingly reliant on them. Overreliance on AI can erode workers’ own skills and causes problems whenever the software is not readily available (Klingbeil et al., 2024). Let’s use an example that may cut close to the bone for fellow students. How many times have you asked ChatGPT for an explanation, copied it into your notes, and moved on? This often means you’ve not really understood the topic. You miss figuring things out for yourself - making connections, thinking critically, or even just sitting with a tricky concept until it clicks. It might feel like you're learning, but you’re really just going through the motions.

In economic terms, society doesn’t gain as much from AI as firms think it does: once skill erosion and overreliance are counted, the social benefit falls short of the private benefit firms perceive, so adoption runs past the socially optimal level. That’s where things get out of balance and may warrant action.

                                  


Policies and Protection

We’ve established that the unchecked adoption of AI is creating economic inefficiencies, because firms and individuals focus primarily on their private costs and benefits (Lane, 2021). Moral of the story? Sometimes the ‘invisible hand’ of the market needs a little nudge.

So how can governments and institutions intervene? Some possibilities include:

1.      Pigouvian Taxes: Financial penalties for firms fully replacing jobs with AI, internalising the social costs of unemployment. This may realign the private and social cost of replacing workers with AI, ensuring a more socially optimal level of automation (a worked sketch follows this list).

2.      Transparency Mandates: Require firms to disclose automation plans, empowering workers to adapt to AI developments. This could increase the productivity of workers, lowering the private cost of employing humans compared to automating the process.

3.      Mandatory Evaluation of Employee Skill Retention: Require firms to periodically evaluate potential erosion of employee skills due to overreliance on AI, mitigating overestimations of automation’s benefits.
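As a back-of-the-envelope illustration of the first option (every figure below is hypothetical), a levy set near the external cost of each displaced worker flips the firm’s decision exactly in the cases where automation is privately profitable but socially wasteful.

```python
# Hypothetical figures (illustrative only) for a Pigouvian levy on displacement.
workers_displaced = 50
external_cost_per_worker = 12_000   # retraining, benefits, health services (per year)
productivity_gain = 900_000         # firm's private gain from automating (per year)
private_cost = 500_000              # AI licences and integration (per year)

pigouvian_tax = workers_displaced * external_cost_per_worker  # levy set to the external cost

net_without_tax = productivity_gain - private_cost
net_with_tax = net_without_tax - pigouvian_tax

print(f"Firm's net gain, no tax:   {net_without_tax:+,}")  # +400,000 -> automate
print(f"Firm's net gain, with tax: {net_with_tax:+,}")     # -200,000 -> keep the workers
```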

Some governments have already acted. The EU’s AI Act (2024) set a precedent by classifying high-risk AI systems. Meanwhile, former US President Joe Biden’s AI Executive Order (White House, 2023) emphasises ‘worker-centric’ innovation. Nevertheless, regulation alone may not be enough to ensure externalities are internalised. In theory, the market could sort all this out on its own through private bargaining; that’s what the Coase Theorem suggests. But in reality? Workers don’t have the power or information to negotiate with big tech firms, so it rarely works out that way.

Human-AI collaboration may offer a middle ground without intrusive or misguided government regulation. Germany’s 'AutoUnion 4.0' pays manufacturers to use AI to assist, not replace, workers; BMW’s AI-assisted lines saw productivity rise by 25% without layoffs. Evidence from California suggests autonomous vehicles might grow the economy while creating jobs and raising wages without mass driver layoffs (Wilkinson, 2022).

Balancing Benefits and Burdens

AI’s promise is genuine, but so are its distortions. When markets ignore hidden costs and inflate hidden benefits, progress carries a serious price. It’s time to treat automation not just as a tool, but as a trade-off - and microeconomic tools like taxes and regulation can help strike the balance society desperately needs.

References

1. Brookings Institution (2023) Automation and Artificial Intelligence: How Machines Affect People and Places. Available at: https://www.brookings.edu/articles/automation-and-artificial-intelligence-how-machines-affect-people-and-places/

2. California Legislature (2024) Senate Bill 1047 (Automation Transparency). Available at: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047

3. European Commission (2024) EU Artificial Intelligence Act. Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

4. Gariffo, M. (2022) Automated trucks could cost 500,000 US jobs, researchers say, ZDNET. Available at: https://www.zdnet.com/article/university-of-michigan-study-claims-automated-trucks-could-cost-500k-us-jobs/ (Accessed: 31 March 2025).

5. Image 1: AI-generated using ChatGPT-4o.

6. Image 2: ‘Five ways AI is transforming the trucking industry’, LinkedIn.

7. Image 3: ‘ChatGPT in education: How students and teachers can use AI to transform learning’, YourStory.

8. Kenton, W. (2024) Externality: What it means in economics, with positive and negative examples, Investopedia. Available at: https://www.investopedia.com/terms/e/externality.asp

9. Klingbeil, A., Grützner, C. and Schreck, P. (2024) ‘Trust and reliance on AI – an experimental study on the extent and costs of overreliance on AI’, Computers in Human Behavior, 160, p. 108352. doi:10.1016/j.chb.2024.108352.

10. Lane, M. and Saint-Martin, A. (2021) ‘The impact of Artificial Intelligence on the labour market: What do we know so far?’, OECD Social, Employment and Migration Working Papers, No. 256, OECD Publishing, Paris. doi:10.1787/7c895724-en.

11. Sharps, S. (2024) The impact of AI on the labour market, Tony Blair Institute for Global Change (TBI). Available at: https://institute.global/insights/economic-prosperity/the-impact-of-ai-on-the-labour-market (Accessed: 31 March 2025).

12. White House (2023) What they are saying: President Biden issues executive order on safe, secure, and trustworthy artificial intelligence, National Archives and Records Administration. Available at: https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/10/31/what-they-are-saying-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ (Accessed: 02 April 2025).

13. Wilkinson, L. (2022) Autonomous long-haul trucking stands to grow the Golden State’s economy while creating jobs and raising wages without mass driver layoffs, Silicon Valley Leadership Group. Available at: https://www.svlg.org/study-shows-autonomous-trucking-will-grow-californias-economy/ (Accessed: 31 March 2025).

14. World Economic Forum (2023) The Future of Jobs Report 2023. Available at: https://www.weforum.org/reports/the-future-of-jobs-report-2023/

15. Zhang, T. and Zhang, Y. (2025) Generative AI and information asymmetry: Impacts on adverse selection and moral hazard. Available at: https://arxiv.org/html/2502.12969v1 (Accessed: 31 March 2025).
