Opinion | AI Shopping Is Great. AI Pricing Is A Nightmare.

Here’s how algorithmic pricing is poised to undo 150 years of retail trust.


In 1876, Philadelphia merchant John Wanamaker revolutionized retail by introducing something radical: the price tag. Before Wanamaker’s Grand Depot opened its doors to serve visitors attending the Centennial Exhibition, shopping meant haggling. Every transaction was a negotiation, every price a starting point for bargaining.

Wanamaker changed all that. As his biographers note, his decision was partly practical: as his store grew, he simply couldn’t hire enough skilled negotiators to staff it. But there was a deeper principle at work. A devout Presbyterian, Wanamaker believed that “if everyone was equal before God, then everyone should be equal before price.” The price tag was part strategy, part statement of fairness. His stores offered fixed pricing, pioneered the money-back guarantee, closed on Sundays, housed prayer rooms, and ran an avid community engagement program that spanned church programming to soup kitchens. With stores that filled a city block in cities like Philadelphia and New York, retail at that scale meant rewriting the rules, rules still in practice today, at least until AI turns things around.


Nearly 150 years later, we’re on the verge of undoing Wanamaker’s innovation. And we’re doing it with the very technology that’s supposed to make our lives easier. But here’s the thing: AI shopping doesn’t have to be predatory. The problem isn’t the technology itself; it’s how companies are framing it, branding it, and refusing to talk honestly about what it does.

The Return of Personalized Pricing

Artificial intelligence has given retailers a tantalizing capability: the power to charge each customer exactly what they’re willing to pay. Through machine learning algorithms that analyze our browsing history, purchase patterns, location data, and financial information, companies can now resurrect the ancient art of personalized pricing, but at massive scale and with uncanny precision.

The United States Federal Trade Commission has already taken notice. In 2024, the agency issued compulsory information orders to major firms like Mastercard and JPMorgan Chase, investigating whether AI tools might enable companies to vary prices using individual consumer data. The concern isn’t hypothetical; it’s already happening.

But there’s a fundamental problem with this brave new world of dynamic pricing: people hate it.

The Framing Problem

When Delta Air Lines’ AI pricing plans leaked to the public, the backlash was immediate and fierce. Critics conjured dystopian scenarios: an algorithm detecting you’re traveling to a funeral and automatically jacking up the price. Delta denied these specific claims, insisting its pricing was based on market conditions rather than individual circumstances. But the damage was done.

The incident reveals a crucial truth about AI-driven commerce: companies haven’t figured out how to brand algorithmic pricing in a way that builds trust instead of destroying it. The opacity isn’t a communications problem; it’s a fundamental flaw in how AI shopping experiences are being conceived and marketed.

This is, at its core, a framing problem. When companies describe AI pricing, they talk about “optimization,” “dynamic pricing,” and “personalization”: corporate jargon that sounds clinical and evasive. What they don’t talk about is fairness, predictability, or the customer experience of navigating a marketplace where every price might be a negotiation you didn’t know you were having.

The branding of AI shopping has been left entirely in the hands of engineers and revenue teams, not customer advocates. And it shows.

Consider what happens when you’re browsing for a product online. You might see a “special offer” flash on your screen. Is it really special? Or did an algorithm determine you’re the kind of person who responds to artificial urgency? Maybe it knows you’re a returning visitor, or that you’ve been comparison shopping. Perhaps it’s analyzed your browsing history and concluded you’re ready to buy. The price you see might be higher than what someone else would see for the exact same item, or it might be lower. You’ll never know.

This lack of transparency creates a corrosive anxiety. Every purchase becomes a question mark. Did I get a good deal? Am I being manipulated? Would I have paid less if I’d cleared my cookies or used a different device?

The result is paralysis masquerading as choice. Far from empowering consumers, opaque AI pricing makes us suspicious of every transaction and anxious about every decision.

The MAP Illusion

Before we even get to AI, there’s a dirty secret about “comparison shopping” that most consumers don’t understand: Minimum Advertised Price (MAP) policies.

In the US, MAP agreements are perfectly legal and, according to the FTC, widely used. They’re unilateral (set by the brand), rarely challenged, and they completely reshape the competitive landscape. Brands can dictate the minimum price that any retailer is allowed to advertise for their products. Violate that minimum, and the brand can cut you off from inventory.

It’s worth noting that this is a uniquely American arrangement. In Europe, MAP policies are generally illegal under competition law, where they are treated as a form of resale price maintenance. The EU market operates with genuinely competitive pricing at the retail level, which makes the AI shopping trust problem even more acute there, since there’s real price variation to exploit.

So when you use a search engine or comparison shopping site to “find the best deal,” you’re not actually seeing what retailers would charge in a free market. You’re seeing a price floor that the brand itself has set. Every retailer shows you basically the same price because they have to.

Technically, retailers can sell below MAP, but only if they don’t advertise it. You’d have to physically walk into a store or somehow discover the lower price through word of mouth. And here’s the thing: there’s no incentive for them to do so. Why would a retailer voluntarily make less profit on each sale when they can’t even advertise the discount to attract more customers? The MAP floor effectively becomes the standard price.



Here’s the kicker: brands regulate the minimum price, but not the maximum. That asymmetry creates a dangerous opportunity for AI shopping products. If an AI agent can build enough trust with users, it can get away with charging higher prices, perhaps significantly higher, and users will assume the price is correct. After all, they trust their AI assistant. And since everyone else is showing roughly the same (minimum) price, a slightly elevated price doesn’t raise red flags.

The search for a “cheaper deal” becomes a misnomer when the floor is fixed and the ceiling is whatever the AI thinks you’ll pay.
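The asymmetry is easy to see in a toy model. As a rough sketch (all prices and function names are invented for illustration), advertised prices are clamped at the brand’s floor, while an opaque agent’s quote is bounded only by what it thinks you can pay:

```python
# Toy model of the MAP asymmetry: a floor on advertised prices,
# no ceiling on what an opaque AI agent quotes. All numbers invented.

MAP_FLOOR = 99.00  # brand-set minimum advertised price

def advertised_price(retailer_price: float) -> float:
    # Retailers that want to keep their inventory never advertise below the floor.
    return max(retailer_price, MAP_FLOOR)

def opaque_agent_quote(inferred_willingness_to_pay: float) -> float:
    # An agent with no transparency obligation can quote anywhere in
    # [floor, whatever it thinks you'll pay].
    return max(MAP_FLOOR, inferred_willingness_to_pay)

# Three retailers with different underlying prices all show the same number:
quotes = [advertised_price(p) for p in (89.00, 95.00, 99.00)]
print(quotes)                      # every ad converges on the floor
print(opaque_agent_quote(120.00))  # the agent's only ceiling is your budget
```

Because every honest listing converges on the same floor, a quote a few dollars above it looks unremarkable, which is exactly the cover an untrustworthy agent needs.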

The Hidden Data Pipeline

But there’s an even more insidious data pathway that most consumers don’t see: financial data aggregation services like Plaid.

When you connect a budgeting app to your bank account, you’re granting access to your balance, transaction history, and spending patterns. That app now knows when you’re flush with cash, when you typically splurge, and what categories you spend most on.

Now imagine that budgeting app gets acquired by a company that also operates an AI shopping service. Or imagine it has a “partnership” with an AI agent that gets access to aggregated insights about user spending behavior. Suddenly, the AI doesn’t just know you’re browsing for headphones; it knows you just got paid, that you typically spend more on electronics on Friday evenings, and that you have $2,000 sitting in your checking account.

You never explicitly gave the shopping AI permission to access your bank data. But through corporate affiliations, data-sharing agreements buried in terms of service, or “anonymized insights” that are more identifiable than claimed, that information could flow to pricing algorithms anyway. The infrastructure exists. The incentives are obvious. And the disclosure is murky at best.

This is the nightmare scenario for personalized pricing: an AI that knows not just what you want, but exactly what you can afford, and you never knew you told it.

The Impulse Purchase Problem

There’s another casualty in this new landscape: the impulse buy.

Impulse purchases thrive on confidence and spontaneity. When you trust that the price you’re seeing is the price everyone sees, when you believe the “sale” really is a sale, you’re free to act on desire. But when every price might be algorithmically tailored to your wallet, that spontaneity dies.

Why buy now if the algorithm might be testing your willingness to pay? Why not open an incognito window, or wait and see if the price drops? The very tools designed to optimize revenue may be undermining one of retail’s most reliable drivers: the unplanned purchase made in a moment of genuine enthusiasm.

What Builders Need to Know

For developers and companies building AI-powered shopping experiences, the message should be clear: transparency isn’t a nice-to-have. It’s essential to the viability of your product. But more than that, you need to fundamentally rethink how you’re framing AI’s role in the shopping experience.

Stop positioning AI as a tool for “price optimization” and start framing it as a tool for fairness and clarity. The language matters. The brand you build around AI shopping will determine whether customers embrace it or revolt against it.

Here’s what needs to change:

1. Explainable pricing. If your algorithm adjusts prices, customers need to understand why. Is it based on inventory levels? Time of day? Demand patterns? Make it explicit. Real-time demand pricing for airline seats or concert tickets makes intuitive sense, but only when the mechanism is clear.

2. Bounded variability. Decide what degree of price variation is acceptable and communicate those limits. Will your prices fluctuate by 10%? 25%? Never more than the cost of restocking? Set clear boundaries and stick to them.

3. Opt-in personalization. Give users control over whether they want personalized offers. Some people might appreciate AI-curated deals based on their preferences; others want the guarantee of universal pricing. Let them choose.

4. Privacy by default. Stop collecting data you don’t need. If you’re not using someone’s browsing history to adjust prices, tell them explicitly. Better yet, don’t track it at all unless they opt in.

5. Clear language about “deals.” If you’re going to flag something as a special offer, ensure it actually is one, and be prepared to explain what makes it special. Is it a time-limited sale? A genuine discount from your regular price? A personalized offer? Customers deserve to know.

6. Rebrand transparency as a feature. Don’t hide how your AI works; make it a selling point. “Our AI ensures everyone pays the same base price” or “Prices adjust based on real-time inventory, not your personal data” can become competitive advantages if you’re the first to claim them.
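Principles 1 and 2 above can be made concrete in a few lines. This is a minimal sketch, not a production pricing engine; the policy, names, and limits (a ±10% band driven only by inventory) are invented for illustration:

```python
# Sketch of explainable, bounded pricing: every quote carries a
# human-readable reason, and adjustments are clamped to a published band.
# All names and limits are hypothetical.

from dataclasses import dataclass

BASE_PRICE = 100.00
MAX_SWING = 0.10  # published commitment: prices never move more than 10%

@dataclass
class Quote:
    price: float
    reason: str  # every adjustment ships with an explanation

def quote(inventory_ratio: float) -> Quote:
    """Adjust price on inventory alone (never personal data), clamped
    to the published bounds."""
    # Scarce inventory nudges the price up; surplus nudges it down.
    raw = BASE_PRICE * (1.0 + MAX_SWING * (0.5 - inventory_ratio) * 2)
    lo, hi = BASE_PRICE * (1 - MAX_SWING), BASE_PRICE * (1 + MAX_SWING)
    price = min(max(raw, lo), hi)
    return Quote(round(price, 2),
                 f"Adjusted for inventory level ({inventory_ratio:.0%} in stock); "
                 f"bounded to ±{MAX_SWING:.0%} of the base price.")

print(quote(0.10))  # low stock: price rises, with the reason attached
print(quote(0.90))  # high stock: price falls, never below the bound
```

The design choice worth noting is that the reason ships with the price: a customer, a regulator, or a journalist can check that an adjustment stayed inside the published band and never touched personal data.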

But here’s the harder truth that builders need to confront: trust in AI shopping can’t be built through branding alone. It has to be proven with every single transaction.

This is especially crucial for products showing retail goods where value is directly comparable. There’s no capital-T Truth about what a fair price is, but there are dozens of other sites showing prices for the exact same item. If your AI shopping product consistently shows higher prices than competitors, you’ll be spotted immediately. Users will screenshot the difference. They’ll post it on social media. They’ll tell their friends not to trust you.

Traditional retail brands can survive occasional missteps because they build reputational capital over time. But AI shopping agents don’t have that luxury. Loyalty isn’t achievable in the abstract; it must be demonstrated, transaction by transaction, price check by price check.

One instance of showing a user a $50 item that they could have bought elsewhere for $35, and the trust is gone. There’s no brand equity deep enough to overcome the feeling of being ripped off by an AI you thought was working for you.

This means AI shopping products need to compete not on lock-in or convenience alone, but on a verifiable track record of good pricing. You’re not building a brand in the traditional sense. You’re building a reputation that’s only as good as your last recommendation. The bar isn’t “better than nothing”; it’s “demonstrably better than doing it myself.”

The companies that succeed with AI shopping won’t be the ones that squeeze out every last dollar through opaque algorithms. They’ll be the ones that frame AI as a tool for building trust, not exploiting information asymmetries.

John Wanamaker understood that trust is the foundation of commerce. When he eliminated haggling, he wasn’t just making his stores more efficient, he was making a promise. You can trust that the price you see is fair, that it’s the same for everyone, that you’re not being taken advantage of.

We need to remember that lesson. AI has enormous potential to improve shopping: better recommendations, smarter inventory management, more convenient experiences. But if we use it to resurrect the very opacity and unfairness that Wanamaker worked to eliminate, we’ll have built something nobody wants to use.

The technology is powerful. The question is whether we have the wisdom to deploy it in ways that serve customers rather than just extracting maximum revenue from them. Because in the end, a shopping experience that makes people feel manipulated and anxious isn’t just bad ethics; it’s bad business.

The price tag was an innovation born of scale and principle. As we enter the age of AI commerce, we need both again.
