There is a version of you from fifteen years ago who would not recognize what you've become as a listener, a reader, a citizen, a thinker.
Not because you changed your mind about anything. But because the surfaces you think against have been quietly, systematically recontoured by systems whose only objective is to keep you engaged — and whose definition of engagement has nothing to do with what you'd choose if you were actually choosing.
You didn't drift. You were optimized.
01 The Shape of the Problem
Every serious technology arrives with a story about what it does for people. Spotify gives you all the world's music. X gives you the global conversation. TikTok gives you an infinite stream of whatever you find most entertaining.
These benefits are real. They are also the surface.
Beneath the surface, each of these products runs a recommendation engine whose purpose is not to serve your interests but to maximize a metric: time on platform, sessions per week, streams per track, clicks per visit. The difference between those two things — serving your interests and maximizing a metric — sounds small. It is not small. It is the entire problem.
When a product optimizes for engagement rather than satisfaction, it doesn't just give you what you want. It gives you what you'll consume. And over time, at scale, with billions of decisions per day, the gap between those two things reshapes the human on the other end of the screen.
I've spent a decade as a product manager, and I can tell you: this gap doesn't open because anyone decided to harm users. It opens because of how product teams choose what to measure. Somewhere in every one of these companies, there was a roadmap review where someone asked "what's our north star metric?" and the answer was some flavor of engagement — DAU/MAU, time spent, sessions per week, streams per day. Not satisfaction. Not growth. Not "did this person's life get better." Engagement. That choice cascades into every A/B test, every ranking model, every feature decision that follows. It is the most consequential product decision these companies make, and it is almost never revisited.
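To make the gap concrete, here is a minimal sketch, in Python, of the two candidate north-star metrics computed from the same session log. The Session fields, the survey score, and the numbers are all invented for illustration; the point is only that engagement and satisfaction are different functions of the same behavior, so a team optimizing one is not optimizing the other.

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str
    minutes: float                 # time spent in the session
    reported_satisfaction: float   # hypothetical 1-5 post-session survey score

def engagement_north_star(sessions: list[Session]) -> float:
    """Engagement framing: total time on platform. More minutes = better."""
    return sum(s.minutes for s in sessions)

def satisfaction_north_star(sessions: list[Session]) -> float:
    """Satisfaction framing: average self-reported satisfaction, regardless of time spent."""
    return sum(s.reported_satisfaction for s in sessions) / len(sessions)

# A toy day of traffic: long sessions that users did not especially enjoy.
log = [Session("u1", 94.0, 2.0), Session("u2", 61.0, 2.5), Session("u3", 12.0, 4.5)]

print(engagement_north_star(log))    # 167.0 -> the dashboard says the product is "working"
print(satisfaction_north_star(log))  # 3.0   -> the users are lukewarm at best
```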
This is not a story about algorithms gone wrong. It is a story about algorithms working exactly as designed, and what that design does to the people inside it.
02 Spotify and the Death of Taste
Start with music, because the evidence is cleanest and comes from the platform's own researchers.
Anderson et al. analyzed over 100 million Spotify users in 2020 and found that algorithmically driven listening is associated with reduced consumption diversity compared to organic, self-directed listening. When users became more diverse over time, they did so by shifting away from algorithmic recommendations. The algorithm wasn't expanding taste. It was compressing it.
A parallel MIT experiment found personalized recommendations increased streams by 29% but decreased individual-level diversity by 11.5%. The platform got better at keeping you listening. You got worse at listening to anything that surprised you.
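Neither paper's metric is reproduced here (Anderson et al. use an embedding-based generalist-specialist score), but a minimal sketch using Shannon entropy over artists as a stand-in for diversity shows how a feed can raise stream counts while narrowing what actually gets streamed. The listening histories below are invented.

```python
import math
from collections import Counter

def listening_diversity(artist_plays: list[str]) -> float:
    """Shannon entropy (in bits) over the artists a user streamed.
    Higher = plays spread across many artists; lower = concentrated on a few."""
    counts = Counter(artist_plays)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical week of organic, self-directed listening: fewer plays, spread widely.
organic = ["A", "B", "C", "D", "E", "F", "A", "G", "H"]
# Hypothetical week of algorithmic listening: more plays, funneled to familiar artists.
algorithmic = ["A", "A", "A", "B", "A", "B", "A", "A", "B", "A", "A", "C"]

print(len(organic), round(listening_diversity(organic), 2))         # fewer streams, high entropy
print(len(algorithmic), round(listening_diversity(algorithmic), 2)) # more streams, low entropy
```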
This is what I mean by behavioral colonization. The product didn't just deliver music. It restructured how you relate to music. It replaced an active, identity-forming practice — seeking out records, following recommendations from people you trusted, building a relationship with sound that was yours — with passive consumption of an algorithmically administered feed. The feed knows your patterns. It doesn't know you.
The numbers tell the structural story. Despite access to virtually all recorded music ever made, 90% of Spotify streams flow to the top 1% of artists. Of 424,073 artists on the platform in 2020, only 1,613 achieved more than one million streams per month. The long tail prediction — that infinite catalogs would democratize consumption — was wrong. Infinite supply plus algorithmic curation produces the opposite: a narrowing funnel disguised as an open field.
Here's a product detail that matters more than it looks. Spotify counts a stream only after 30 seconds of listening. That's a product decision — someone chose that threshold, probably to filter out accidental plays and keep royalty accounting clean. Reasonable. But it means the algorithm learns that a track someone skips at 29 seconds is a failure, and a track they passively tolerate for 31 seconds is a success. This one threshold reshapes what gets recommended, which reshapes what artists produce. Songs now front-load their hooks. Intros have collapsed. The entire structure of popular music has bent toward surviving a product manager's definition of a "qualified stream."
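A minimal sketch of how that threshold becomes a label. The 30-second cutoff is the real counting rule; the labeling function, and the idea that a ranking model trains directly on it, are illustrative assumptions about how such a system could be wired.

```python
QUALIFIED_STREAM_SECONDS = 30  # the counting / royalty threshold

def stream_label(seconds_listened: float) -> int:
    """Label a play for a hypothetical engagement-ranking model:
    1 = 'qualified stream' (counts), 0 = skip (doesn't)."""
    return 1 if seconds_listened >= QUALIFIED_STREAM_SECONDS else 0

# A track skipped just under the line vs. one passively tolerated just over it.
print(stream_label(29.0))  # 0 -> the model learns this track is a failure
print(stream_label(31.0))  # 1 -> the model learns this track is a success
# A model trained on these labels cannot distinguish passive tolerance from love,
# but it can tell a 29-second skip from a 31-second play -- so hooks move to the front.
```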
Liz Pelly's Mood Machine revealed the mechanism at its most cynical. Spotify's internal "Perfect Fit Content" program replaced real artists on mood playlists with cheap stock music from ghost producers — 20 songwriters behind 500+ fictitious artist names, populating playlists with 4.5 million subscribers. The platform's own analytics dashboard described this content as "music commissioned to fit a certain playlist/mood with improved margins." The algorithm wasn't just compressing your taste. It was replacing what you'd taste with something cheaper.
But even without the cost manipulation, the structural dynamic holds. Musicologist Eric Drott captured it precisely: by removing every barrier to the immediate satisfaction of musical desire, streaming platforms transmute a source of gratification into its opposite. The promise of saturating musical desire has the effect of suffocating it instead.
You can feel this if you're honest with yourself. When was the last time you sat with a record that was difficult, that took three listens to open up, that someone you respected told you to stay with? The algorithm will never recommend that record. It can't. Difficulty is indistinguishable from disengagement in the metric, and the metric is the only thing that's real to the system.
The shift is from musical identity as something you build through effort and encounter to something administered to you by an optimization function. You didn't stop caring about music. The product made caring unnecessary, and then it made not-caring feel like satisfaction.
03 X and the Colonization of Political Consciousness
The music case is about taste. The X case is about something more dangerous: the structure of what you think about, and how you think about it.
A 2026 Nature study — Gauthier, Hodler, Widmer, and Zhuravskaya — ran a seven-week field experiment with approximately 5,000 active U.S. users randomly assigned to algorithmic or chronological feeds. The results were unambiguous. Switching from chronological to algorithmic feeds shifted political opinions toward more conservative positions on policy, foreign affairs, and perceptions of criminal investigations. Users exposed to the algorithmic feed followed more conservative activist accounts and continued following them even after switching back.
This is the first rigorous demonstration that a platform algorithm changes actual political attitudes — not just exposure, not just what people click on, but what they believe. The algorithm doesn't just show you the world. It tilts the world, and you tilt with it.
Piccardi et al.'s 2025 Science study reinforced this with a different method: experimentally reranking X feeds during the 2024 presidential election. Up-ranking hostile political content increased affective polarization by an amount equivalent to three years of natural change in the United States. Ten days. Three years of damage.
The mechanism is straightforward. Rathje, Van Bavel, and van der Linden found that content about the political out-group was shared roughly twice as often as in-group content. Each additional out-group word increased sharing odds by 67%. Brady and Crockett demonstrated that positive social feedback for expressing moral outrage increases the probability of future outrage expression — a reinforcement learning loop that the platform's engagement metrics actively reward. The algorithm selects for conflict because conflict drives engagement. It doesn't care that conflict also drives polarization, dehumanization, and the slow erosion of shared reality.
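Taking the Rathje et al. figure at face value, the arithmetic compounds quickly. In the sketch below, the 1.67 odds ratio is the study's number; the baseline odds are an invented assumption for illustration.

```python
OUT_GROUP_ODDS_RATIO = 1.67  # reported increase in sharing odds per additional out-group word

def sharing_odds(base_odds: float, out_group_words: int) -> float:
    """Predicted sharing odds if each out-group word multiplies the odds by 1.67."""
    return base_odds * (OUT_GROUP_ODDS_RATIO ** out_group_words)

base = 0.05  # assumed baseline sharing odds for a neutral post
for words in range(4):
    print(words, round(sharing_odds(base, words), 3))
# Three out-group words push the odds to roughly 0.23, nearly five times the baseline.
# A feed ranked on shares will learn to surface exactly this kind of post, which is
# the reinforcement loop Brady and Crockett describe.
```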
If you've ever worked in product, you know the single highest-leverage decision is what you make the default. Most users never change defaults. When Twitter made the algorithmic "For You" tab the default landing experience instead of the chronological "Following" tab, that wasn't a design tweak. It was a decision to put every user on the planet into an engagement-optimized feed unless they actively opted out. The PM who shipped that feature probably framed it as improving content discovery. The A/B test probably showed higher session time. The dashboard went up and to the right. That's how these things work: locally rational, globally corrosive.
But the deeper point is not about polarization. It's about the Overton window itself.
Whoever controls what the "current thing" is — what everyone is forced to have an opinion about, what frame the debate takes, where the battle lines are drawn — controls a substantial portion of the future. This is not metaphorical. Setting the parameters of the battlespace sets the structure of what human mindspace becomes. X's algorithm doesn't just amplify conflict. It selects which conflicts matter. It decides what you wake up angry about. It defines what constitutes the center and what constitutes the fringe.
Renée DiResta named the downstream consequence precisely: algorithms create "bespoke realities," custom informational environments that fragment shared epistemic ground. If you make it trend, you make it true. The platform is not a mirror held up to society. It is a lens that bends what passes through it, and we are on the other side, unable to see the curvature.
The product didn't give you the global conversation. It gave you a version of the conversation optimized to keep you engaged, and engagement turns out to select for outrage, tribalism, and the slow replacement of thought with reaction. You didn't become more political. You became more algorithmic in your politics.
04 TikTok and the Colonization of Cognition Itself
The Spotify case is about taste. The X case is about belief. The TikTok case is about something more fundamental than either: the capacity to think at all.
Gloria Mark, professor of informatics at UC Irvine, has been measuring how long humans can sustain attention on a single screen for over twenty years. In 2004, the average was 150 seconds. By 2012, it had fallen to 75 seconds. By 2016, it was 47 seconds. Multiple independent researchers have replicated this result. The median — the midpoint, meaning half of all observations fall below it — is 40 seconds.
This is the environment TikTok was built for, and the environment it is actively making worse.
TikTok's architecture is different from every platform that preceded it. Facebook, Instagram, and Twitter were built on the social graph — you followed people, and the algorithm filtered what those people posted. TikTok abandoned this entirely. Its For You Page requires no social connections, no follows, no expressed preferences. You open the app. The algorithm watches what you do — how long you watch, whether you rewatch, how fast you scroll, where you pause, what time of day it is — and within minutes, it has built a behavioral model precise enough to keep you there.
In product terms, this was a brilliant cold-start solution. The social graph is the hardest thing to bootstrap — you need users to have friends on the platform before the product has value, which means you need the friends first. TikTok's insight was to skip the social graph entirely and build the recommendation engine on behavioral signals alone. No friends needed. No onboarding friction. Just open the app, start swiping, and the model converges on your preferences faster than you can articulate them yourself. It solved the growth problem. It also meant that from day one, every user's entire experience was mediated by an algorithm optimizing for watch time, with no social context, no human curation, and no friction whatsoever between the user and the feed.
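A minimal sketch of what "behavioral signals alone" can mean in practice. This is not TikTok's model, which is not public; it is an illustration of an affinity score that updates from nothing but the session itself, with every weight invented for the example.

```python
from dataclasses import dataclass

@dataclass
class WatchEvent:
    video_id: str
    watch_fraction: float   # fraction of the video actually watched (0.0-1.0)
    rewatched: bool         # did the user loop it?
    paused: bool            # did they stop scrolling long enough to pause on it?

def affinity_update(score: float, event: WatchEvent, lr: float = 0.3) -> float:
    """Nudge a per-topic affinity score toward what the involuntary signals imply.
    All weights are invented; only the structure matters: no follows, no friends,
    no stated preferences are consulted anywhere."""
    signal = event.watch_fraction + (0.5 if event.rewatched else 0.0) + (0.2 if event.paused else 0.0)
    return (1 - lr) * score + lr * signal

score = 0.0
for e in [WatchEvent("v1", 0.95, True, False),
          WatchEvent("v2", 1.00, True, True),
          WatchEvent("v3", 0.90, False, True)]:
    score = affinity_update(score, e)
print(round(score, 2))  # converges toward this topic within a handful of swipes
```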
This is not the algorithm showing you what your friends like. This is the algorithm learning you, directly, from your involuntary responses, and feeding you whatever keeps the session going.
A 2025 Psychological Bulletin meta-analysis — the most comprehensive to date — synthesized the existing research on short-form video and cognition. The findings were consistent across studies: higher levels of short-form video engagement are associated with poorer attention span and reduced inhibitory control. The researchers explained this through the dual theory of habituation and sensitization: repeated exposure to fast-paced, high-stimulation content desensitizes users to slower, more demanding tasks while simultaneously sensitizing them to impulsive engagement. The brain adapts to the feed. It stops being able to function well outside of it.
A 2025 study published in Computers in Human Behavior tested what happens when you remove personalization. Eighty-eight TikTok users spent one week on their normal algorithmic feed, then one week on a depersonalized feed. When the algorithm stopped optimizing for them, their usage frequency dropped, their session duration dropped, and — critically — their self-reported self-regulation increased. They felt more in control of their own behavior. The algorithm wasn't serving them. It was overriding them. When it stopped, they could feel the difference.
The variable-ratio reinforcement mechanism is the structural explanation. TikTok operates on the same principle as a slot machine: you never know when the next great video will appear. This uncertainty is more behaviorally addictive than predictable rewards. Your brain releases dopamine not in response to the good video but in anticipation of it — which means the scrolling itself becomes the compulsive behavior, independent of whether any given video is satisfying. Users report intending to spend five minutes on the app and looking up an hour later. This isn't a failure of willpower. It's the intended output of the system.
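A minimal sketch of the difference between a predictable reward schedule and a variable-ratio one, the mechanism this paragraph describes. The probabilities and swipe counts are invented; only the shape of the two schedules matters.

```python
import random

random.seed(7)

def fixed_ratio(n_swipes: int, every: int = 5) -> list[int]:
    """Reward ('great video') arrives predictably every `every` swipes."""
    return [i for i in range(1, n_swipes + 1) if i % every == 0]

def variable_ratio(n_swipes: int, p: float = 0.2) -> list[int]:
    """Reward arrives unpredictably, with probability p per swipe:
    same long-run reward rate, but you never know when the next one comes."""
    return [i for i in range(1, n_swipes + 1) if random.random() < p]

print(fixed_ratio(30))     # [5, 10, 15, 20, 25, 30] -- predictable, easy to stop after a reward
print(variable_ratio(30))  # irregular gaps -- the next swipe is always "maybe this one"
```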
"Brain rot" — the cultural term for the cognitive flattening produced by excessive consumption of low-quality algorithmic content — was named Oxford's Word of the Year for 2024. It entered the language because people could feel it happening to them. A 2025 PMC review formalized the concept: the For You Page encourages an endless consumption loop that promotes desensitization and shortened attention spans, depleting the capacity for sustained engagement with longer, more demanding content. The overstimulation doesn't just waste time. It restructures what the brain is willing and able to do afterward.
What makes TikTok the purest case of behavioral colonization is that the product doesn't colonize a specific domain of life — your music, your politics — it colonizes the substrate on which everything else depends. Attention is not one cognitive function among many. It is the precondition for all the others. When you lose the ability to sustain focus, you lose the ability to read deeply, to think carefully, to form considered judgments, to sit with difficulty, to be bored productively. You lose the capacity for the kind of cognitive effort that every meaningful human activity requires.
The product didn't give you entertainment. It gave you a variable-ratio reinforcement loop that trains your brain to expect stimulation every few seconds and to experience its absence as deprivation. You didn't get lazier. Your cognitive architecture was remodeled by an optimization function that needs you distractible to survive.
05 The Pattern Beneath the Pattern
Each of these cases — taste, ideology, cognition — shares a common architecture:
Infinite supply + engagement-optimized algorithmic curation + zero marginal cost = progressive behavioral dependence on the optimization function.
This is not three separate problems. It is one problem expressed across three domains. The product takes a dimension of human experience that used to require effort, friction, social connection, and personal investment — discovering music, forming political views, sustaining attention on something that matters — and replaces the effortful, human version with a frictionless, algorithmic one. The algorithmic version is faster, more convenient, more immediately satisfying, and it runs on a metric that has nothing to do with your flourishing.
Tristan Harris called it "the race to the bottom of the brainstem." Shoshana Zuboff described "the continuous intensification of the means of behavioral modification." Jaron Lanier named the business model directly: BUMMER — "Behaviors of Users Modified, and Made into an Empire for Rent."
The aggregate data is what you'd expect if this thesis were true. U.S. adolescent depression nearly doubled between 2009 and 2019. Young adults' weekly time with friends fell from 12.8 hours in 2010 to 5.1 hours in 2024. Average on-screen attention spans dropped from 150 seconds to 47 seconds in two decades — a nearly 70% decline that tracks almost perfectly with the rise of algorithmically curated content.
But the most important dimension is the one with the least research: what happens when the same person is simultaneously subject to algorithmic optimization by Spotify, X, TikTok, Instagram, YouTube, dating apps, and news aggregators? Each platform's algorithm operates independently on the same finite set of cognitive resources, attention, and neurochemistry. The total load is not studied because the platforms that hold the data have no incentive to release it, and because no research framework has yet been designed to measure the compounding effect.
The question isn't whether any single algorithm is harmful enough to matter. The question is what happens to a human whose taste, politics, attention, social life, information diet, and sense of self are all being optimized simultaneously by systems that don't know they exist and wouldn't care if they did.
06 The Data Point That Says Everything
There is one finding from the research literature that, to me, captures the entire problem in a single result.
Milli et al., published in PNAS Nexus in 2025, studied what Twitter's engagement-optimized algorithm actually does to users. They found that the algorithm amplifies emotionally charged, out-group hostile content. This is not surprising. What is surprising is the second finding:
Users did not prefer the content the algorithm selected for them.
They engaged with it more. They clicked on it more. They spent more time with it. But when asked, they said it made them feel worse, and they would not have chosen it themselves.
This is the mechanism of behavioral colonization in miniature. The product doesn't serve what you want. It serves what you'll consume. And because consumption, not satisfaction, is the metric, the system optimizes for something that actively works against your own stated preferences. You are being driven somewhere you don't want to go, by a system that interprets your compliance as consent.
In any well-run product org, this finding would trigger an existential crisis. Your primary metric is telling you the product is working. Your users are telling you the product is making them miserable. If you're measuring engagement, the dashboard says ship it. If you're measuring satisfaction, the dashboard says kill it. These are the same feature, the same algorithm, the same A/B test. The only thing that differs is which metric you chose to believe. Every platform in this essay chose engagement. That is not a technical failure. It is a product decision.
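A minimal sketch of that fork in the road. The numbers are invented; what matters is that the identical A/B readout yields opposite launch decisions depending on which metric the team declared primary.

```python
# Hypothetical A/B readout for an engagement-optimized ranking change.
control   = {"minutes_per_user": 42.0, "satisfaction": 3.9}  # satisfaction: assumed 1-5 survey score
treatment = {"minutes_per_user": 49.0, "satisfaction": 3.4}

def ship_decision(primary_metric: str) -> str:
    """Ship if the treatment beats control on whichever metric the team chose to believe."""
    lift = treatment[primary_metric] - control[primary_metric]
    return "SHIP" if lift > 0 else "KILL"

print(ship_decision("minutes_per_user"))  # SHIP -- engagement is up roughly 17%
print(ship_decision("satisfaction"))      # KILL -- users like the product less
```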
The algorithm knows you'll engage. It doesn't know — and doesn't need to know — that you wish you hadn't.
07 What Colonization Looks Like from Inside
The reason this is hard to see is that it doesn't feel like coercion. It feels like choice.
Byung-Chul Han identified this precisely. The neoliberal subject experiences exploitation as freedom. You chose to open Spotify. You chose to scroll X. You chose to open TikTok. At every micro-level, the decision was yours. But the choice architecture — what's presented, in what order, with what emotional loading, at what frequency, through what interface — was designed by someone whose interests are structurally misaligned with yours. And over thousands of these micro-choices per day, across years, the aggregate effect is not freedom. It's a progressive narrowing of your interior life toward the shape that is easiest for the platform to monetize.
Bernard Stiegler called this "generalized proletarianization" — the systematic destruction of savoir-faire, savoir-vivre, and theoretical knowledge through digital technologies. What he meant was: you are losing the ability to do things (discover music, form political judgments, sustain attention on something difficult) not because you lack the capacity but because the systems you use have made the effortful version unnecessary and the effortless version compulsive.
Zuboff's "division of learning" names the power asymmetry: they know everything about you; you know almost nothing about them. They know your listening patterns, your political triggers, your attention thresholds, your habituation curves, the exact scroll speed that indicates waning interest. You know what the interface shows you. The asymmetry is not merely epistemic. It is architectural. It is the structure of a new form of power.
08 The Compounding That No One Is Measuring
Here is what I think is actually happening, stated plainly:
The average person in a developed economy now spends the majority of their waking non-work hours inside algorithmic environments optimized for engagement. Their music is algorithmically curated. Their news is algorithmically ranked. Their social interactions are algorithmically mediated. Their entertainment is algorithmically sequenced. Their shopping is algorithmically suggested. Their dating is algorithmically sorted.
Each of these systems operates independently, optimizing its own metric, on the same finite pool of attention, cognition, and neurochemistry. The total effect is not the sum of the parts. It is something qualitatively different: a life in which nearly every discretionary moment is spent inside an optimization loop designed by someone else, for objectives that are not yours.
No one is studying this because no one can. The data is held by the platforms, the platforms compete with each other, and none of them have an incentive to reveal what happens when their effects combine. This is the most consequential research gap in technology studies today, and it is not an accident. It is a feature of the business model.
The products didn't just get better at serving you. They got better at making you easier to serve. That is the difference between a tool and a colonizer.
09 So What
I don't have a clean policy prescription, and I'm suspicious of anyone who does. The systems are too deeply embedded, the incentives too structurally entrenched, and the benefits too real to pretend this is a simple problem with a simple solution.
But I think the first step is seeing it clearly. Not as a moral panic about screen time. Not as a nostalgia trip about record stores and newspapers. Not as a left-right political argument about content moderation. But as a structural observation about what happens when the products you use every day are designed to reshape your behavior toward their optimization functions, and the reshaping is working.
And here is the thing that nobody in the current discourse seems willing to say plainly: this is the AI risk. Not the hypothetical one. The actual one. The one that is already deployed, already operating at civilizational scale, already producing measurable damage to human cognition, political capacity, and cultural life.
The AI safety conversation is dominated by frontier risk — the model that develops agency, the bioweapon synthesis, the superintelligence that turns us into house cats. These are real concerns worth taking seriously. But while we debate whether a future AI might learn to manipulate human behavior at scale, the recommendation algorithm has been doing exactly that for over a decade. It is not hypothetical. It is running right now, on billions of devices, optimizing billions of human decisions per day toward outcomes its subjects did not choose and do not prefer. It is the largest behavioral modification experiment ever conducted, and no one consented to it.
The AI that poses the most immediate, measurable, population-level risk to human autonomy and cognition is not the one that hasn't been built yet. It's the one in your pocket. It's the one you opened six times today. The gap between the risk we're debating and the risk we're living inside is the gap between what sounds like science fiction and what sounds like a Tuesday — and that gap is exactly what makes the real risk so hard to see.
The people who built these systems understand this. Daniel Ek knows that Spotify's algorithm compresses taste. Elon Musk knows that X's algorithm shapes political attitudes. TikTok's engineers know that the For You Page is a variable-ratio reinforcement loop that degrades sustained attention. They know because the data tells them, and the data is the thing they care about most.
The question is whether you understand it too. Whether you can feel the curvature of the lens, even while you're looking through it. Whether you can recover the capacity to choose — not just what to consume, but how to relate to the systems that are consuming you.
The algorithm has no ambition. It has no destination. It has a metric, and you are the input. The only question is whether you know that, and what you do once you do.