How Companies (& AI) Get Psychological Safety Wrong

⏰ If You Only Read One Thing, Read This:
Psychological safety isn’t the absence of judgment; it’s the presence of confidence and trust in other people. AI offers anonymity and freedom from judgment, but that’s not the same thing.

✏️ Research in Practice: What the Experts Are Doing

A recent Fast Company article suggests AI coaches can “often work better” than human coaches because they provide “psychological safety, wrapped in code.” While this phrasing is catchy, it misrepresents a foundational leadership concept.

Psychological safety, as defined by Dr. Amy Edmondson, is not the absence of judgment. It’s the presence of genuine trust: an interpersonal climate where people feel safe to speak up, take risks, and raise concerns without fear of negative consequences. In her 2025 Harvard Business Review article, Edmondson addresses this misconception directly, noting that psychological safety is a practice, not a policy. It develops over time through consistent focus and relevant, expert feedback.

AI coaching tools can be valuable for offering on-demand feedback, roleplay scenarios, and private practice environments. But equating these features with true psychological safety dilutes the concept and ignores the human relationships at its core. Leaders and coaches must remember that while AI can supplement the coaching process, trust and connection remain distinctly human work.

⚓ Anchor It in Action

Before adopting AI coaching tools, audit your leadership practices: Are you building trust through active listening, transparency, and empathy? If not, no amount of “code” will create true psychological safety for your team.
