Retail AI

March 6, 2026


It’s Time To Put Your AI Through Leadership Training


Here’s what’s happening right now: Someone on your supply chain team just asked ChatGPT whether to reroute a shipment. Someone in marketing used Claude to reevaluate your brand messaging. Your overworked accounts receivable manager let an AI model flag which clients to pressure for overdue payment.

Actions like these probably wouldn’t surprise you. According to McKinsey’s “The State of AI in 2025” survey, 88% of organizations now use AI in at least one business function. And a 2026 Deloitte report finds agentic AI already making inroads into customer support, supply chain management, R&D, and cybersecurity.

But as comfortable as you are using AI, this part may make you queasy: These aren’t just efficiency tools anymore. They’re making judgment calls.

Once your team starts trusting AI’s judgment more than their own — or more than yours — you’re dealing with a new power structure. The AI isn’t replacing your CEO, and it doesn’t have a seat at the table, but it’s accumulating the authority that matters: being the smartest one in the room.

The Authority Problem

Think about what happens when your company’s AI advisor has a better track record than your executives. It’s been accurately predicting market shifts, realigning operations, and contributing more profit-generating ideas than your leadership team can devise using gut instinct.

At what point does “AI-assisted decision making” become “AI decisions that humans rubber-stamp”?

You might think your C-suite people are immune. They’re not. The same pattern that started with junior employees using AI for research and email drafts is moving upstairs. Middle managers are using it for resource allocation, VPs are using it for strategic planning, and execs are being briefed on reports generated by AI.

It’s nearly inevitable that AI will assume a leadership role in your company. The question is: How strong a role can you play in shaping what kind of leader it becomes?

Learning through living

Most companies are letting AI training happen organically (aka, by accident). You’re feeding it your data, your processes, your institutional knowledge — but who among your peers is asking: What values is it learning? What tradeoffs is it making? When it optimizes for efficiency, what are we losing?

In my novel “Once a Man,” I explore this problem, but at a scale that affects our entire civilization: How do you train an omnipotent AI to make decisions that preserve what matters about being human? The approach I test in this fictional thought experiment: Embed the developing AI in a simulated human experience where it grows up believing it’s human, learning to navigate moral and ethical choices from an embodied perspective.

It’s fiction, but the underlying question is one that could be critical to our future: Can AI develop a genuine understanding of human values?

How This Actually Works in Your Company

If AI is going to accumulate decision-making authority in your organization, here’s how you make that deliberate instead of haphazard:

  • Define your actual values, not your marketing values. What should you prioritize when growth conflicts with employee well-being? When speed conflicts with quality? When profit conflicts with principle? Write it down. Be specific. Be honest.
  • Test AI decisions against those values systematically. Not just “did this work?” but “did this work in the way we wanted it to work?” Track the tradeoffs AI is making. Make them visible.
  • Build accountability structures now. Once AI is embedded in your operations, retraining it to align with values you should have specified earlier is exponentially harder.
  • Institute AI decision review sessions. Once a year, or possibly more often, bring your team together to examine the toughest calls the AI made. Ask: Do these decisions align with our stated values? Could we have made better calls ourselves? Why or why not?
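To make the second and fourth steps concrete, here is a minimal sketch of what a values-based decision log might look like. Everything in it is hypothetical — the value names, the `AIDecision` record, and the review rule (flag any decision where the AI favored a lower-priority value over a higher-priority one) are stand-ins you would replace with your own written-down priorities.

```python
from dataclasses import dataclass

# Hypothetical company values, ordered from highest priority to lowest.
VALUE_PRIORITIES = ["safety", "quality", "employee_wellbeing", "speed", "profit"]

@dataclass
class AIDecision:
    description: str
    optimized_for: str   # the value the AI favored
    traded_off: str      # the value that was sacrificed

def flag_for_review(decisions):
    """Flag decisions where the AI favored a lower-priority value
    over a higher-priority one, per VALUE_PRIORITIES."""
    rank = {v: i for i, v in enumerate(VALUE_PRIORITIES)}
    return [
        d for d in decisions
        if rank.get(d.optimized_for, len(rank)) > rank.get(d.traded_off, len(rank))
    ]

decisions = [
    AIDecision("Rerouted shipment via cheaper carrier",
               optimized_for="profit", traded_off="quality"),
    AIDecision("Escalated overdue account early",
               optimized_for="speed", traded_off="profit"),
]

for d in flag_for_review(decisions):
    print(f"REVIEW: {d.description} (favored {d.optimized_for} over {d.traded_off})")
```

The point of even a toy log like this is that it forces two things the bullets above demand: writing the priority order down explicitly, and making each tradeoff visible enough to surface in a review session.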

This isn’t a training program just for the AI. It’s asking your people to confront, in practice, what they truly believe about how the company should operate. You’ll surface conflicts between departments, gaps between stated and lived values, and instances where the AI is optimizing for things you didn’t realize you wanted — or knew you didn’t want.

Companies that do this may discover their AI system is learning from unclear or contradictory guidance. That will be uncomfortable, but those that don’t will wake up one day wondering what happened to the company they once knew.

The Toughest Truth

It’s quite possible your team can’t articulate your company’s values in a way that would meaningfully guide an AI. Maybe when you try to define “what makes a good decision here,” you realize there’s no consensus. These may be among the toughest decisions you’ll ever make.

But at a time when you and your competitors are going head-to-head using similar AI models, this is where humans still make the difference. If you can define your company values earlier and more clearly than your competitors, and maintain those values through ongoing reviews, you’re still a leader. And that means the future still belongs to you.

Rick Moss is a multi-disciplinary artist living in Brooklyn, New York, and was a co-founder of RetailWire.

Discussion Questions

Do you agree with the premise that AI is gaining decision-making authority in retail industry organizations? If so, what, if anything, concerns you about this transition?

Do you see evidence that AI leads are actively training models to align with their company values, or do most assume the models will adapt to the needs of the company without prescriptive guidance from humans?

How should businesses assure the AI applications they use support their foundational brand values and ethics?

3 Comments
Neil Saunders

Automation has always had some degree of influence in retail: algorithmic product recommendations, delivery route optimization, automated replenishment. But these things are systematized and rules-bound decisions. For more complex decisions that require judgement or taste, human involvement is still necessary; at this stage in its development, AI should be used as an assistant, not as the primary decision maker.

Bradley Cooper

AI is clearly gaining influence in decision-making across retail organizations, but I think the bigger issue is how we frame its role. AI works best as a specialist capability applied to specific problems, not as a generalized “company brain.”

The concern arises when organizations try to ingest everything into a single AI layer instead of deploying targeted models where they actually add value; that’s when judgment can become blurred instead of improved.

Doug Garnett

It might be gaining decision-making abilities — but it shouldn’t. I have been using AI for some things where it should work quite well (footnotes for a book) and it hallucinates continually. While it has some superb uses, allowing executives to avoid blame by saying “well, the AI decided” is not a good picture.

More concerning, we must stop anthropomorphizing AI (using human-sounding metaphors for what is, underneath, just the processing of bits). With that in mind, I highly recommend Melanie Mitchell’s article in Science about these dangers. https://www.science.org/doi/10.1126/science.adt6140
