Tuesday, 12 May 2026

Why can’t we stop scrolling? The economics of attention in the age of algorithms

 

Perhaps we have all had this experience: we open our phone to look up a word and end up watching videos for an hour without noticing. Is it the content itself that hooks us, or the continuous anticipation of "the next, more interesting thing"? Throughout this process, the algorithm learns our behaviour, makes ever more precise recommendations, gradually shapes our information environment, and influences our attention and decisions. It works like an "invisible hand", guiding our choices imperceptibly. This raises a key question: in the AI era, are our decisions autonomous, or are they driven by algorithms? This blog argues that algorithms influence individual decisions by exploiting uncertainty and human behavioural biases, and may even push outcomes away from the optimum. We therefore need to view algorithms rationally, treating them as auxiliary tools rather than letting them dominate us, and remain the true "decision-making agents".

Image 1 (AI-generated using ChatGPT-4o, OpenAI, 2026)

 

You think you are just "swiping casually", but in fact you are participating in a carefully designed economic system. Each swipe looks like an independent choice, but it occurs in an environment full of uncertainty: you don't know what the next video will be, yet you always feel that perhaps the next one will be better. So you continue. This behaviour is not about maximizing the average return; it is about chasing "possible better outcomes". In other words, you are not calculating the expected value; you are gambling on luck.
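To make the "gambling, not calculating" point concrete, here is a minimal sketch with entirely made-up numbers: suppose one swipe in ten delivers a genuinely great video, the rest are dull, and every swipe costs a fixed unit of time and attention. The payoffs, probabilities and cost below are illustrative assumptions, not measurements.

```python
# Hypothetical payoff structure for a single swipe (all numbers invented):
p_great, v_great = 0.10, 5.0   # 10% chance of a video worth 5 units of enjoyment
p_dull,  v_dull  = 0.90, 0.2   # 90% chance of a video worth 0.2 units
cost_per_swipe   = 1.0         # each swipe costs 1 unit of time/attention

# Expected net value of one more swipe:
expected_value = p_great * v_great + p_dull * v_dull - cost_per_swipe
print(expected_value)  # 0.5 + 0.18 - 1.0 = -0.32

# An expected-value calculator would stop here; someone fixated on the
# best case ("the next one might be worth 5") keeps swiping anyway.
```

Under these assumptions the average swipe loses value, yet the rare big win is exactly what keeps the gamble attractive.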

 

Behavioural economics predicted this long ago. Prospect theory tells us that people overweight small chances of good outcomes (Kahneman and Tversky, 1979), while hyperbolic discounting shows that we overvalue the immediate appeal of "just one more" (Laibson, 1997). In short: stopping is a certainty, while scrolling on holds out hope, and humans choose the latter.

But the key point is that this uncertainty does not arise naturally; it is deliberately engineered. The platform's algorithm is not designed to make you happier but to make you stay a little longer. By constantly serving fresh, unpredictable content, it keeps you in the state of "the next one might be better". This is information design applied in practice (Kamenica and Gentzkow, 2011).

Better still (and more troubling), all of this reinforces itself. Every scroll and every second you stay is recorded to train the algorithm, helping it understand you better - and then hook you more precisely. This is a classic feedback loop: behaviour → data → algorithm → more behaviour. Over time, the system converges to a stable state: you keep scrolling, and it keeps feeding. Does that sound like an equilibrium? It is. The problem is that this equilibrium need not be a good one. On the bounded-rationality view of Herbert A. Simon (1955) and Reinhard Selten (1990), people never made optimal decisions in the first place, and the system exploits exactly that.

 

Add to this a classic economic problem: information asymmetry. As George Akerlof (1970) showed, when one party knows more than the other, outcomes are systematically distorted. Here, the platform knows almost everything about you, while you have almost no idea what the algorithm is doing. This informational advantage lets it not merely predict you but actively "shape" you. More interesting still, more information is not always better. Jonathan Levin (2001) showed that additional information can sometimes reduce overall efficiency. On the platform, more data about you does not necessarily produce better outcomes; it may simply make it harder for you to stop. So the question was never "Why can't you stop?" but rather: in such a system, do you really have a chance to stop? This is not a matter of individual willpower; it is the systematic outcome of uncertainty, behavioural bias, algorithmic incentives, feedback loops and information asymmetry. You are scrolling through your phone, but the system is also "scrolling through you".

 

This system raises a deeper issue: market failure caused by externalities. As noted above, behavioural biases such as hyperbolic discounting lead individuals to underestimate future costs when deciding under uncertainty. When people choose to keep scrolling, they prioritize the immediate entertainment benefit and overlook long-term costs such as reduced productivity, weakened attention and lower well-being (Laibson, 1997). The perceived cost of watching is therefore lower than its true cost, and when private costs fall below social costs, overconsumption follows.
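A toy linear model makes the overconsumption mechanism visible. All numbers here are hypothetical: suppose the marginal benefit of the q-th hour of scrolling is 10 − q, the cost the scroller actually feels per hour is 2, but the full social cost (lost sleep, lost focus, degraded information quality) is 5.

```python
def optimal_hours(marginal_cost):
    # Consume until marginal benefit equals marginal cost: 10 - q = MC,
    # so q = 10 - MC. (Illustrative linear benefit curve, not data.)
    return 10 - marginal_cost

private_optimum = optimal_hours(2)  # hours chosen when only felt cost counts
social_optimum  = optimal_hours(5)  # hours implied by full-cost accounting

print(private_optimum, social_optimum)          # 8 vs 5
print(private_optimum - social_optimum)         # 3 "extra" hours of scrolling
```

The gap between the two optima is the externality: each hour looks individually rational at a felt cost of 2, yet three of those hours would never be chosen if the full cost of 5 were priced in.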

 

More importantly, this influence is not confined to the individual level. When large numbers of users systematically allocate their time to low-value content, there are broader consequences for human capital, productivity and information quality. Yet these social costs are not reflected in the platform's incentives. On the contrary, because attention can be monetized, platforms are pushed to maximize engagement, driving a wedge between private incentives and social welfare. The resulting equilibrium is inefficient: attention is consumed beyond the socially optimal level, even though every individual decision looks rational.


Image 2 (AI-generated using ChatGPT-4o, OpenAI, 2026)

 

If the problem stems from the system itself, then the answer cannot simply be more self-control. AI-driven platforms are not neutral tools; they reinforce behaviours and entrench usage habits. Under the combined effect of uncertainty, behavioural bias and algorithmic incentives, seemingly trivial choices accumulate into a stable but inefficient equilibrium in which attention is over-consumed.

 

Against this backdrop, a more effective approach starts from system design. On one hand, the "choice architecture" can be redesigned: adding moderate friction (reminders, natural stopping points) can turn automatic behaviour back into conscious decision-making. On the other hand, transparency can be improved, so that users understand both their own behaviour and the platform's operating logic. Given the sheer scale of screen time (Barber, 2025), this problem is unlikely to fix itself through individual awareness alone. The key question, then, is not whether we use technology, but whether we keep the initiative while using it.

 

 

Reference List:

Akerlof, G.A. (1970) ‘The market for “lemons”: Quality uncertainty and the market mechanism’, The Quarterly Journal of Economics, 84(3), pp. 488–500. Available at: https://doi.org/10.2307/1879431 (Accessed: 5 April 2026).

Barber, S. (2025) Mobile phone and internet usage statistics in the UK. Finder. Available at: Mobile internet statistics UK (Accessed: 23 April 2026). 

Image 1 (AI-generated using ChatGPT-4o, OpenAI, 2026)

Image 2 (AI-generated using ChatGPT-4o, OpenAI, 2026)

Kahneman, D. and Tversky, A. (1979) ‘Prospect theory: An analysis of decision under risk’, Econometrica, 47(2), pp. 263–291. Available at: https://doi.org/10.2307/1914185 (Accessed: 7 April 2026).

Kamenica, E. and Gentzkow, M. (2011) ‘Bayesian persuasion’, American Economic Review, 101(6), pp. 2590–2615. Available at: https://www.nber.org/papers/w15540 (Accessed: 6 April 2026).

Laibson, D. (1997) ‘Golden eggs and hyperbolic discounting’, The Quarterly Journal of Economics, 112(2), pp. 443–478. Available at: https://doi.org/10.1162/003355397555253 (Accessed: 3 April 2026).

Levin, J. (2001) ‘Information and the market for lemons’, The RAND Journal of Economics, 32(4), pp. 657–666. Available at: https://doi.org/10.2307/2696386 (Accessed: 3 April 2026). 

Selten, R. (1990) ‘Bounded rationality’, Journal of Institutional and Theoretical Economics, 146(4), pp. 649–658. Available at: https://www.jstor.org/stable/40751353 (Accessed: 5 April 2026).

Simon, H.A. (1955) ‘A behavioral model of rational choice’, The Quarterly Journal of Economics, 69(1), pp. 99–118. Available at: https://doi.org/10.2307/1884852 (Accessed: 5 April 2026). 

