Something actually changed

Not the hype, which was already everywhere in 2024. What changed in late 2025 is different. AI tools crossed a threshold. They went from "mostly works" to "almost always does exactly what you want." Developers who use them every day started reporting this on their own, independently of each other.

One developer now writes 95% of his code from his phone. Another hasn't typed code manually since December 2025. Those aren't predictions. They're reports from people doing it.

That changes the question. It's no longer whether to use AI. The question now is harder: how do you lead a mixed team where people and AI agents are working side by side, mostly doing the right thing, but occasionally needing you to step in?

This article pulls together what 60+ research papers and reports say about that question, adds a layer of personality analysis, and ends with 32 conclusions worth knowing.

Chapter 01 Research

A lot of activity. Less actual progress.

Almost everyone is using AI. Most organizations are deploying AI agents. And yet the majority of CEOs say they haven't seen a real productivity gain. That gap between adoption and results is where the real conversation starts.

  • Organizations using AI regularly: 88%
  • Already deploying AI agents: 90%
  • CEOs who saw no real productivity gain: 89%
  • Executives expecting AI agents on their team within 3 years: 84%
  • Workers who have received training for it: 26%

Sources: Microsoft Work Trend Index, BCG AI Adoption Report, World Economic Forum 2025/26

That last number is the most immediate problem. 84% of executives expect AI agents on their team. 26% of workers have had any training. The agents are coming. The preparation is not.

AI makes you faster. Not the team.

The individual numbers are real. People produce more. They write faster, research faster, think through more options. A two-year study of a software company found gains of around 50% in individual output. That's not nothing.

But the same study found something else. The team dynamics stayed exactly the same. The same accountability gaps. The same communication problems. The same trust issues between colleagues. AI made everyone faster. It didn't make them work better together.

What got better

  • 50% more output per person
  • 40% better on creative tasks
  • Faster research and writing
  • More analytical depth

What stayed the same

  • Who owns decisions
  • How people communicate
  • Trust between colleagues
  • Collaboration bottlenecks

The team problems you had before AI are still there. They just move faster now.

67% trust AI more than their own colleagues

That's a real finding from a 2025 survey across multiple countries. It matters because trust is one of the main levers that determines whether AI helps a team or creates problems. Too much trust in AI output, and people stop checking what it produces. Too little trust, and they spend more time verifying than they save.

Too much trust

  • Accepting AI output without questioning it
  • Gets worse under time pressure
  • Output feels right before it is right
  • Mistakes don't get caught

Too little trust

  • Checking everything, eliminating the gains
  • Setting the bar too high for AI to actually help
  • Not using AI in areas it would help most
  • Same old bottlenecks, different justification

Both patterns appear in the same organization, in different people. The organizations getting ahead are working on getting the balance right, not setting a policy for or against.

Skills you stop using, you lose

Doctors who used AI to help with diagnoses performed measurably worse when the AI was removed. Essay writers couldn't remember what they'd written minutes after finishing AI-assisted work. This isn't a theory. It's what happens in practice, and it builds slowly without anyone noticing.

[Chart: your own capability declines as dependency on AI grows, from now through two years.]

14% of heavy AI users already experience measurable cognitive fatigue. Junior employees stop developing when their senior colleagues use AI without them. These things don't show up in productivity metrics.

Culture is 70% of the result

BCG looked at ten thousand employees across eleven countries. Their conclusion: 70% of what determines whether AI transformation actually works is people, culture, and process. The technology itself is 30%. Most organizations get this backwards.

70% people, culture, and process + 30% technology and tools = real results

The question for a founder isn't which AI tools to adopt. It's: what kind of team do we need to be for any of this to actually change something?

"Most organizations treat this as a technology adoption problem. The ones getting real results treat it as a people and process problem."
Chapter 02 Personality

How you work with AI is how you already work

The way someone uses AI is not new behavior. It's an extension of patterns they already have with people, with information, and under pressure. The same blind spots show up. The same strengths show up. Just faster, and at larger scale.

"AI does not create new weaknesses in people. It amplifies the ones that were already there."

Looking at 16 personality types from the PPA© model, four clear patterns emerge. One per quadrant. Each with its own way of over-using AI and one specific thing that goes wrong.

Four quadrants. Four things that go wrong.

Vision types: the echo chamber
They use AI to think and feel more clearly. The problem: AI is very good at reflecting back a more polished version of what they already believe. Everything feels right before it is. Ideas multiply but nothing lands.
Strategy types: the monument
They use AI to think harder and go deeper. The problem: the quality of what they produce gets so high that the rest of the team can't follow it, challenge it, or build on it. They become the only person who can explain the plan.
Action types: the speed trap
They use AI to prepare faster and move faster. The problem: more speed without better learning. They build things the team can't maintain, make changes that pull in different directions, and scale stories the product can't support yet.
Structure types: the wrong fix
They use AI to document, protect, and standardize. The problem: they optimize systems that should be questioned. More audit findings than the team can absorb. More documentation for a process that should be changed, not recorded.

The trust range across personality types is wide

Some types accept AI output with almost no critical eye. Others question everything. The types that actually get good results with AI sit in the middle. Not blindly trusting, not paranoid. They check what matters and move on.

Trust calibration by personality type
  • Over-trust: Enable, Excite
  • Calibrated: Unite, Imagine, Decide
  • Under-trust: Realize, Control

Key insight: This is not a technology preference. It's an expression of how that person already operates. Managing it starts with understanding that.

What each quadrant actually needs to work on

Vision types (Identify, Enable, Imagine, Unite): Learn to commit to a direction before jumping to the next idea. AI gives you more options. You need fewer, not more. This is the same work with or without AI.

Strategy types (Activate, Form, Realize, Decide): The measure of good strategic thinking is not how sophisticated the plan is. It's how many people can pick it up and run with it. AI makes the gap between you and the rest of the team wider, not smaller.

Action types (Reveal, Excite, Act, Adapt): Speed is only worth it if the team can absorb what it learns. Slow down the review cycle, not the build cycle. The problem is not that you build too fast. The problem is that you don't stop to notice what the speed is telling you.

Structure types (Manage, Control, Sustain, Secure): The skill is knowing when to defend the current system and when to challenge it. AI cannot make that judgment. That one stays with you.

Chapter 03 Combined

The leadership work is the same. The speed changed.

Here's what the research and the personality data say together: working on someone's usual pattern is working on their AI pattern. They are the same thing. If a Vision type learns to commit to a direction before jumping to the next idea, that holds whether they're working with a person or an AI model.

"The leadership work doesn't change. The urgency does, because AI makes the pattern move faster."

You're not managing a new relationship between your people and technology. You're managing the same dynamics you've always managed, now with something that accelerates them.

The founders getting ahead of this are not doing it with better tools. They're doing it by understanding their team more clearly, and acting on what they see.

AI makes the gaps in your team bigger

A team that's already short on commercial energy becomes even more so. A team that's already bad at challenging ideas gets worse at it. AI amplifies whatever's already dominant. The blind spots you had before are the same blind spots, moving faster.

Vision dominates
More ideas, less traction
AI generates more options. Without challenge, nothing lands. The problem gets louder, not quieter.
Strategy dominates
Better thinking, less access
The gap between the analyst and the rest of the team gets wider. The plan is great. Nobody can execute it.
Action dominates
More speed, weaker learning
Output grows. The feedback loop shrinks. The team gets faster at making the same mistakes.
Structure dominates
More systems, less change
What already feels stable becomes harder to question. AI makes the case for the current state, faster.

Who covers the four quadrants in your team is more important now, not less.

Your culture decides what AI does here

A team that avoids hard conversations doesn't get more direct because you add AI. The avoidance gets faster, smoother, and harder to name. Teams with low trust produce AI-assisted work with the same accountability gaps and communication problems as before.

Paul Musters
What I see in practice

Founders who haven't taken a clear look at their team's dynamics will find AI doing it for them, usually at an inconvenient speed. The tools don't create the problems. They just make them visible faster.

Culture is 70% of the transformation. That's BCG's number across ten thousand employees. It keeps getting ignored because it's harder to measure than a tool rollout.

Four things people won't hand over

Across all 16 personality types, four things stay irreplaceable. Every type holds at least one of these as a boundary. As a leader, pay attention to those boundaries. They usually point at something real about what matters to that person.

Standing behind a decision
The moment of owning a call in front of the people it affects. AI can prepare the decision. It cannot be accountable for it.
Actually knowing someone
Understanding someone's context, what they're not saying, what they need to hear. AI performs the social role. It doesn't replace the real thing.
Knowing what good looks like
Judgment about quality in this specific context, for this specific customer, at this stage. That judgment is earned. It can't be generated.
Being in the room
Sitting with a customer. Walking the floor. Seeing what wasn't in the brief. AI cannot have the experience that makes that contact meaningful.
Chapter 04 Conclusions

What the research actually says

Short titles. One sentence each. No filler.

If this raises questions about your team

I work with scale-up founders and CEOs on leadership, team dynamics, and culture. This is the kind of work I do day to day. If you want to think through what this means for your situation, let's talk.

Paul Musters

Paul Musters is the founder of emaho. He works with scale-up founders and their teams on leadership, team dynamics, and culture.

April 2026
