Tuesday, 12 May 2026

How the Pricing of AI Models Is Shaped by Negative Externalities, Altruistic Behaviour, and Social Networks

Introduction

Prices of AI models often do not correlate with their cost, speed, or quality. Instead, pricing depends on other factors, such as how consumers interact with the product and with each other. From the perspective of general equilibrium, costs, demand, and market interactions all affect the price of AI; meanwhile, externalities, altruistic behaviour, and social networks influence adoption and value, which helps to explain these pricing differences.


Figure 1

Social Networks and Behavioural Economics

The average user costs a company between $0.10 and $0.45 to service (DeepSeek-V3 2025), though power users can cost significantly more. Despite this, premium versions of the LLMs cost around $20. That seems a high price for an industry in which strong competition should be driving prices down. Yet in reality the AI companies are making a loss: OpenAI reported a loss of $5 billion in 2024 and is projected to lose significantly more in 2025 (Vara 2026).

Despite that, companies keep charging around $20 for their premium models, while roughly 95% of users keep using the models for free.
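A back-of-the-envelope sketch makes the arithmetic concrete. The 95% free share, the $20 subscription, and the $0.45 upper bound on serving cost come from the figures above; the per-user averaging is my simplifying assumption, and fixed costs such as model training are deliberately left out.

```python
# Freemium economics sketch using the figures quoted above.
# All numbers are illustrative assumptions, not company data.

def revenue_per_user(paid_share: float, subscription: float) -> float:
    """Average monthly revenue per user: only the paying share pays."""
    return paid_share * subscription

def margin_per_user(paid_share: float, subscription: float,
                    serving_cost: float) -> float:
    """Average per-user margin before fixed costs (training, R&D)."""
    return revenue_per_user(paid_share, subscription) - serving_cost

# ~5% of users pay $20; serving cost taken at the upper quoted bound.
rev = revenue_per_user(paid_share=0.05, subscription=20.0)
margin = margin_per_user(paid_share=0.05, subscription=20.0,
                         serving_cost=0.45)

print(f"revenue per user: ${rev:.2f}")   # $1.00
print(f"margin per user:  ${margin:.2f}")  # $0.55
```

On these assumptions the variable cost of an average user is covered; the multi-billion-dollar losses therefore come from the fixed costs the sketch omits, which is consistent with the "long game" argument below.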

Why free use?

Companies may use behavioural economics to maximise future profits. Because AI is a recent invention, consumers may need a “free taste” of the product before committing to a payment, and a paid element can be integrated gradually later. ChatGPT, for example, now limits the number of free responses: by that point the consumer appreciates the product, and a subscription has to be paid for unlimited use. The subscription then renews each month, and the consumer is unlikely to cancel due to “status quo bias” (Thaler & Sunstein 2021).

Why a standard price of $20?

Around $20 is the price on which many of the platforms converge. Here we see producers themselves being victims of “herd behaviour” (Baddeley 2018). Instead of charging more for their service, companies stick to the price first set in the industry, not necessarily because of competition or cost differences, but because it acts as a psychological “anchor” (Kahneman 2011).

General Equilibrium and the Pricing of AI Models

Factors outside the basic general equilibrium model influence both supply and demand.

Supply.

Small changes in the production process create a domino effect that later has a strong impact on the industry. For example, after an AI data centre is built in a small town, the demand for electricity increases, driving up its price, which in turn raises the AI companies’ costs (Miller 2026). The same can work to the advantage of LLM companies: improvements in technology, such as inference-optimised chips, lower the high fixed cost of training an AI model (JPMorganChase 2026).

And demand.

The way consumers value the product changes as it improves over time. AI firms invest a lot of resources into research and development. This, over time, is likely to create a positive learning curve, where the quality and efficiency of AI improve as models ingest more diverse user data (Stanford HAI 2026). Consumers will become more willing to pay, and prices will therefore increase.

Why are companies willing to take such a big risk?

Companies may be playing the “long game”: they want to secure a spot in the new market and figure out how to make a profit later on.

Altruistic Behaviour

Artificial intelligence is starting to support more human-centred kinds of behaviour, including altruism: helping one another with no expectation of a return. That matters for AI pricing because those benefits extend beyond the individual user. AI systems are being examined in disaster risk and emergency health management, for instance, to assist emergency teams in identifying those who most need help after natural disasters (Bari et al., 2023). AI is also applied in healthcare to detect patient deterioration earlier, allowing doctors and support organisations to respond more quickly to vulnerable people (Gallo et al., 2024).

These benefits can be understood as positive externalities: the social value of AI may be greater than the private value paid for by one user. Hence, organisations could offer non-profit plans to increase the take-up of socially beneficial AI instruments. But free or inexpensive AI access is not only generous; it also allows firms to acquire users and data, gain trust, and strengthen network effects. In a similar vein, if firms leverage AI-generated emotional content to encourage people to click or donate, then “kindness” may become part of a business strategy (Arango, Singaraju and Niininen, 2023).
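In standard textbook notation (a general formulation, following Goolsbee et al. 2024, rather than anything specific to the papers cited above), the gap between private and social value can be written as:

```latex
\[
  MSB = MPB + MEB, \qquad MEB > 0,
\]
```

where \(MSB\) is marginal social benefit, \(MPB\) the marginal private benefit captured by the individual user, and \(MEB\) the marginal external benefit to third parties. Because each user pays only up to \(MPB\), the socially efficient level of use exceeds what a purely private price would support, which is one economic rationale for free tiers and subsidised non-profit plans.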


Figure 2

Negative Externalities in the Pricing Process

In a standard market, the price of AI models is determined by firms’ private costs, such as training costs, and consumers’ willingness to pay. But negative externalities, “costs imposed on a third party not directly involved in an economic transaction” (Goolsbee et al., 2024, p. 514), can also affect the pricing process.

Negative externalities may occur when creators’ works are used to train models without permission: the copyright owners’ rights are harmed even though they are not involved in the production of the models. If tech companies compensated copyright owners for this loss, the prices of AI models would need to be higher, as costs would rise to cover the negative externality.
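The cost side can be expressed in the same textbook notation (again following the general formulation in Goolsbee et al. 2024):

```latex
\[
  MSC = MPC + MEC, \qquad MEC > 0,
\]
```

where \(MPC\) is the firm’s marginal private cost of training and serving models, and \(MEC\) is the external cost borne by uncompensated copyright owners. If licensing payments internalise \(MEC\), the relevant supply curve shifts up from \(MPC\) to \(MSC\), and the market price of AI models rises accordingly.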

However, do tech companies really consider negative externalities when setting prices? Several groups of copyright owners have sued major tech companies over the misuse of their work to train generative AI systems, while the tech companies do not appear to respond directly to these copyright issues (DeepLearning.AI, 2023). Consumers benefit from cheaper AI tools, but how can the rights of copyright owners be protected?

Conclusion

We have analysed how general equilibrium, negative externalities, altruistic behaviour and behavioural economics influence the pricing of AI models. In the real world the market is usually more complex, and even for the same AI, prices can vary significantly, such as the difference between individual and business plans for ChatGPT. What’s left for us is to remain rational while using tools like ChatGPT.

References

Arango, L., Singaraju, S. and Niininen, O. (2023) ‘Consumer Responses to AI-Generated Charitable Giving Ads’. Journal of Business Research.

Baddeley, M. (2018) Copycats and Contrarians: Why We Follow the Herd. New Haven: Yale University Press.

Bari, L.F., Ahamed, R., Zihan, T.A., Sharmin, S., Pranto, A.H. and Islam, M.R. (2023) ‘Potential Use of Artificial Intelligence (AI) in Disaster Risk and Emergency Health Management: A Critical Appraisal on Environmental Health’, Environmental Health Insights, 17.

Brynjolfsson, E., Li, D. and Raymond, L.R. (2023) ‘Generative AI at Work’, National Bureau of Economic Research Working Paper, No. 31161.

DeepLearning.AI (2023) ‘Copyright Owners Take AI to Court: Artists and writers sue big tech companies over copyright infringement.’ The Batch, 19 July.

Gallo, R.J. et al. (2024) ‘Effectiveness of an Artificial Intelligence-Enabled Intervention for Detecting Clinical Deterioration’, JAMA Internal Medicine, 184(5).

Goolsbee, A., Levitt, S. and Syverson, C. (2024) Microeconomics. 4th edn. New York: Macmillan Learning.

JPMorganChase (2026) 2026 Tech Trends: Inference Demand and the New Utility. New York: JPMorganChase & Co.

Kahneman, D. (2002) ‘Maps of Bounded Rationality: A Perspective on Intuitive Judgment and Choice’. Nobel Prize Lecture.

Kahneman, D. (2011) Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Lambrecht, A. and Skiera, B. (2006) ‘Paying too much and being happy about it: Existence, causes, and consequences of tariff-choice biases’, Journal of Marketing Research.

Miller, A. (2026) ‘The Freemium Trap: How AI captured the consumer market’, TechEconomy Review, 14 April.

OECD (2019) Artificial Intelligence in Society. Paris: OECD Publishing.

OpenAI (2025) Internal Financial Projections and 2025 Revenue Report. [As cited in global media].

Stanford HAI (2026) AI Index Report 2026: The Year of the Agent. Stanford, CA: Stanford Institute for Human-Centered AI.

Thaler, R. H. and Sunstein, C. R. (2021) Nudge: The Final Edition. New Haven: Yale University Press.

Vara, V. (2026) ‘The $14 Billion Hole: Why OpenAI’s math still doesn’t add up’, The New York Times, 12 January.

Varian, H.R. (2014) Intermediate Microeconomics: A Modern Approach. 9th edn. New York: W. W. Norton & Company.

