Do your employees trust your leadership? This will determine your success now more than ever.
I completed my PhD 24 years ago, comparing the conditions for success of Total Quality Management (TQM) and Business Process Re-engineering (BPR).
BPR typically involved senior managers working with consultants to redesign organisational processes based on new technology. This approach was top-down, radical, and often led to significant redundancies. It gained particular popularity in the US.
TQM was different. It encouraged bottom-up change, empowering employees at all levels to suggest improvements for workplace efficiency and to solve problems as a team. This approach was more incremental.
To oversimplify, BPR had much higher failure rates than TQM, and implementations of TQM in the US were less successful than those in Japan. The critical variable was trust. For employees to contribute ideas for improvement, they needed to trust management. This trust was lacking in the UK and the US.
My recent research into the use of AI in professional services firms suggests we are at risk of even higher failure rates, for the same reason: trust.
The Role of Trust in AI Experimentation
My research indicates that 66% of consultants are actively using AI (the real number is likely higher, as I’ll explain), but 67% of those aren’t telling their bosses how they’re using it.
Several factors contribute to this secrecy:
1. Fear of Reprimand: Some employees are wary of using AI without official approval, especially if company policies on AI use are unclear or restrictive. They worry that revealing AI-driven productivity gains could result in negative consequences.
2. Job Security Concerns: Workers fear that demonstrating how AI makes their jobs more efficient could lead to redundancies. They suspect management might use AI as a justification for job cuts rather than enhancing roles.
3. Lack of Incentive: Even when employees aren’t afraid, they may see little benefit in sharing their AI innovations. If they won’t be rewarded for their discoveries, why should they give away their competitive advantage?
4. Losing Their Edge: As one consultant confided, “At the moment, I’m seen as super-productive. If I tell [managers] I’ve automated much of my work, I’ll lose that advantage.”
This behaviour is a classic case of the Principal-Agent Problem. Employees know more than their managers, but sharing that knowledge offers the employees little benefit. The result? Keep it secret!
This dynamic reflects an untapped opportunity. Firms that recognise this can gain a significant edge by proactively fostering an open, trust-based culture. By rewarding employees for innovation, you can effectively turn the Principal-Agent Problem into a Principal-Innovator Partnership, where both sides gain from transparency.
Why AI is Different
AI isn’t like other IT rollouts. For systems like CRM or ERP, 90% of the implementation is the same across organisations. These systems are top-down, structured, and predictable. AI is different—it can be applied in countless ways, from automating processes to enhancing creative tasks. It’s more like an enabling technology, such as the internet or the PC.
What works for one company will differ greatly from what works for another, depending on strategy, workflows, and culture. For example, when you hear about a “20% efficiency gain” from AI in one firm, it’s akin to saying the same about using the internet—the question is, what did they use it for? Your firm needs to experiment.
This highlights a key truth about AI: there are no standard solutions. Each organisation must learn how to embed AI in ways that align with its unique processes. This is why firms with rigid, top-down cultures struggle—they are not equipped to nurture the decentralised, experimental thinking that AI thrives on.
Effective experimentation relies on employees trying AI and reporting back—which is precisely what low-trust firms struggle to achieve.
This requires three things:
- A culture where experimentation is encouraged (highly codified consultancies that rely on standardised processes may struggle with this).
- Employees who trust that sharing their findings will lead to positive outcomes, not negative ones (like job losses).
- Clear guidelines and incentives that encourage innovation and sharing without fear of reprisal.
In these environments, trust is crucial. Without it, employees will continue to innovate in secret, and organisations will miss out on valuable insights.
Building a Trust-Based AI Culture
If you want to harness the power of AI experimentation, you need to create a culture where employees feel secure enough to share their experiences openly.
Here’s how:
1. Clarify AI Policies and Encourage Experimentation
Instead of vague warnings, be clear about where experimentation is allowed. Set ethical and legal boundaries while encouraging innovation. Up-to-date policies that reflect a realistic understanding of AI risks can reduce fear and encourage openness.
One of the most effective ways to encourage experimentation is to decentralise decision-making. Empower teams closest to the work to trial AI tools on specific tasks or projects. This grassroots approach not only drives innovation but also boosts engagement, as employees feel a greater sense of ownership in AI initiatives.
2. Offer Psychological Safety
Employees need reassurance that their jobs aren’t at risk due to AI discoveries. Leaders must emphasise that AI’s goal is to enhance, not replace, jobs. Organisations with high psychological safety tend to be more innovative. Public guarantees against job losses due to AI-driven productivity gains can also help build trust.
Leaders should take the long-term view on AI—remind employees that while AI will shift roles, it can open new opportunities for learning and development. Highlight stories where AI has enabled teams to take on more creative or strategic tasks, rather than focus only on efficiency gains.
3. Incentivise AI Experimentation
Encourage employees to share their AI findings by offering rewards such as bonuses, promotions, or public recognition. Financial incentives, particularly those tied to AI-driven productivity gains, can be especially effective.
Consider implementing a formal innovation programme where employees can submit AI-driven ideas, which are then reviewed and, if viable, piloted. This not only rewards initiative but also formalises AI experimentation, making it part of the firm’s DNA.
Creating the Right Balance: Guardrails Without Handcuffs
AI requires a balance of freedom and structure. Too many restrictions stifle innovation, but too few create chaos or ethical issues. Establish clear boundaries, especially for client-facing work, while allowing room for creativity.
Here are some practical steps to build trust and foster AI experimentation:
- Run AI Workshops: Regular workshops on AI tools and best practices can help employees innovate while ensuring ethical standards.
- Host Hackathons: Encourage employees to solve real business problems using AI in a collaborative environment.
- Develop Internal AI Champions: Promote employees who are passionate about AI and can lead innovation efforts.
- Create a Safe Space for Failure: Not every experiment will succeed. Make it acceptable to fail—and learn from it.
Be transparent about lessons from failed AI pilots. This helps normalise failure as part of the innovation process. Share what didn’t work and, more importantly, why. This practice builds trust and deepens collective learning.
Why Boutiques May Be Better Positioned Than Big Firms
This brings us back to the article’s title. My research shows that trust is typically higher in boutique firms than in larger consultancies. This shouldn’t be surprising—many boutique founders left big firms to escape the cutthroat culture and build something more human.
Boutiques often do better with innovation too. Large firms tend to weed out creativity during recruitment, while smaller firms need to innovate to survive.
Boutiques are also less bureaucratic and more willing to take risks. They don’t need a web of rules because they can exert control through relationships, culture, and trust.
However, not all boutiques get this right—some still struggle with trust. These firms may be the least able to exploit AI’s incredible potential in the coming years. Perhaps it’s time to re-engineer the culture before it’s too late?
Join the Boutique Leaders Club here for monthly masterminds and exclusive resources designed specifically for CEOs of boutique consultancies. If you would like my help to grow or sell your consultancy, please book a one-on-one slot here.