In 1979, Daniel Kahneman and Amos Tversky published “Prospect Theory,” a paper that would eventually win Kahneman a Nobel Prize. Their argument was simple: people are not rational. We anchor on irrelevant numbers. We fear losses more than we value gains. We are paralyzed by too many choices. The elegant math of classical economics, in which rational actors pursue self-interest in competitive markets, didn’t hold up against how real people actually buy things.
They were right, in the domains where their theory applies. When decisions are intuitive, infrequent, and emotional — choosing a hotel, weighing surgery options, picking a contractor — people behave exactly as Kahneman predicted. But Adam Smith was never fully wrong either. Where decisions are structured, repeated, and emotionally cool — corporate finance, insurance pricing, professional poker — behavior looks remarkably rational. Smith described how decisions should work. Kahneman showed where they actually break down.
The real question was always: which theory applies where? Nobody had a reason to revisit that boundary until now, because something new is entering the consumer domain where Kahneman has always been right, and it behaves exactly as Smith always imagined.
When an AI agent shops for you
Consider how you book a hotel today. You’re immediately hit with “Only 2 rooms left!” warnings (artificial scarcity) and strikethrough pricing showing a rate you were never going to pay (anchoring). You scroll through forty options, get tired by option fifteen, and pick something that feels right.
Now consider an AI agent doing it for you. The agent ignores the scarcity banner because inventory claims have no correlation with value. It doesn’t anchor on the crossed-out price. It doesn’t get tired on the fifteenth option, or the five hundredth. It evaluates every property against your actual constraints — budget, location, dates, cancellation flexibility — and surfaces the best match.
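As a minimal sketch (the `Hotel` type, its fields, and `best_match` are hypothetical, not any real booking API), that evaluation reduces to filtering on hard constraints and ranking whatever survives, with no field anywhere for scarcity banners or strikethrough prices:

```python
from dataclasses import dataclass

@dataclass
class Hotel:
    name: str
    nightly_rate: float       # in the trip currency
    distance_km: float        # from the stated point of interest
    free_cancellation: bool

def best_match(hotels, budget, max_distance_km, need_cancellation):
    """Filter on the stated constraints, then rank every survivor.

    Unlike a tired human, this loop evaluates option 500 exactly as
    carefully as option 1."""
    feasible = [
        h for h in hotels
        if h.nightly_rate <= budget
        and h.distance_km <= max_distance_km
        and (h.free_cancellation or not need_cancellation)
    ]
    # Cheapest first, closest as tiebreaker: stated criteria only.
    return min(feasible, key=lambda h: (h.nightly_rate, h.distance_km),
               default=None)
```

The point of the sketch is what is absent: nothing in the inputs can represent urgency, anchoring, or brand, so those levers simply have no surface to act on.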
This is the perfectly rational buyer that classical economics always assumed but never had. It compares on merit. It optimizes on stated criteria. It is immune to branding, loyalty programs, and manufactured urgency. And it doesn’t just make shopping more efficient. It dismantles the entire psychological infrastructure that modern e-commerce is built on.
If agents on both sides of a market are optimizing rationally — buyer agents and seller agents — prices converge, hidden information disappears, and margins compress. An agent doesn’t care about your brand story. It reads the spec sheet, the review corpus, and the price history. The firms that thrive in an agent-mediated economy won’t be the ones with the best marketing. They’ll be the ones with the best actual product, at the best actual price.
But the human still decides what “good” means
The agent is not the consumer. The human is the consumer. And the human still has to tell the agent what to optimize for. When you say “find me a good hotel in Istanbul,” what does “good” mean? Cheap? Central? Quiet? The kind of place that photographs well?
The agent executes with perfect rationality, but what it optimizes for remains fuzzy, emotional, and deeply human. This is the new architecture: rational execution layered on top of irrational desire.
In practice, this means letting people declare their decision mode:
“Optimize on cost” — find the cheapest option that clears the quality bar.
“Book my usual” — pay a premium for familiarity. Don’t optimize; replicate.
“Let me review before you commit” — run the search, but hold the decision.
The human isn’t making the purchase decision. They’re setting the decision boundary — how much autonomy the agent gets. That’s a fundamentally simpler task than evaluating fifty options directly.
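One minimal way to encode that decision boundary (all names here are illustrative; this is not a real agent framework) is a mode the human declares up front, which determines whether the agent may commit on its own:

```python
from enum import Enum

class DecisionMode(Enum):
    OPTIMIZE_COST = "optimize_cost"   # cheapest option that clears the quality bar
    BOOK_USUAL = "book_usual"         # replicate the familiar choice, skip the search
    REVIEW_FIRST = "review_first"     # run the search, hold the commit for the human

def act(mode, candidates, quality_bar=3.5, usual=None):
    """Return (choice, committed): what the agent picked, and whether
    it was allowed to finalize the purchase on its own."""
    if mode is DecisionMode.BOOK_USUAL and usual is not None:
        return usual, True                        # pay the familiarity premium
    acceptable = [c for c in candidates if c["rating"] >= quality_bar]
    best = min(acceptable, key=lambda c: c["price"])
    if mode is DecisionMode.REVIEW_FIRST:
        return best, False                        # surface the pick, don't commit
    return best, True                             # OPTIMIZE_COST: commit automatically
```

Note that the search logic is identical across modes; the human's declaration only changes who owns the final commit, which is exactly the autonomy boundary the text describes.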
Two layers emerge. A transaction layer where agents operate with perfect rationality — comparing, optimizing, immune to tricks. And a preference layer where humans remain irreducibly human — deciding what “good” means, choosing with emotion, identity, and taste. The agent handles the how. The human owns the what.
Where delegation happens first
Economists have a name for the pain of a purchase: disutility, the dissatisfaction or negative experience a consumer gets from the transaction itself. The thing you do because you have to, not because you want to. Categories with high disutility get delegated first. People outsource pain before they outsource pleasure.
Home services is the clearest example. Nobody wakes up excited to find a plumber. The entire experience is anxiety and hassle: diagnosing the problem, finding someone trustworthy, getting quotes, coordinating schedules, hoping you don’t get ripped off. Every step is friction. Zero enjoyment.
The pattern holds across categories. Tax preparation went from doing it yourself to TurboTax to “my accountant handles it.” Insurance shopping went from calling agents to automated coverage. Parking went from circling blocks to SpotHero. Bill payment went from writing checks to autopay.
Categories people enjoy resist delegation: restaurant discovery, leisure travel planning, shopping for clothes and books. Browsing is the experience.
The higher the pain, the faster consumers hand it off to a system. Home services sits at the extreme end because it combines high pain with high stakes, high cost, and low frequency. That’s what makes the “consumers will delegate to AI” thesis structurally sound.
The convergence
The agentic economy doesn’t pick Smith over Kahneman. It uses both. Our consuming behavior splits into two layers: a preference layer where we still choose with emotion, identity, and taste — and a transaction layer where agents execute with rationality.
We will still overpay for the restaurant with the story. But the agent will make sure we get the best price on the plumber.
The interesting economics of the next decade won’t be about whether consumers are rational. It will be about the seam between desire and execution — where humans choose what to optimize for, and agents figure out how.