
The Altman Pattern

What 70 pages of secret memos and 100 interviews reveal about what founder risk actually looks like

Paul Musters · emaho · April 2026 · 9 min read

A lot of what I do professionally is help investors look at a founding team before they commit. The question is always the same: who is this person, and what happens when things get hard?

The New Yorker piece on Sam Altman is the most detailed public case study I've read in years. Altman is not unusual as a type. The pattern is too familiar. I've seen versions of it in scale-ups of 30 people, 80 people. What makes this different is the scale of the consequences.

So I ran through the article using the same framework I use in due diligence. Six categories. What I see in each one. And then the question that interests me most: why do experienced investors keep missing this, and what can we actually learn from it?

A note on limits: This is not a verdict on Altman as a person. I've met him only once, at a Y Combinator event years ago. Beyond this article, I've followed him for years through interviews going back to his Y Combinator days. I usually start from intuition. Looking at his decisions, his behavior, and his micro-expressions, I would describe him as, let's say, an interesting person. What I can speak to is the pattern, because I've spent twelve years watching similar patterns in smaller companies, where the consequences are contained to 50 or 200 people.

Visual by David Szauder, generated using AI. Source: The New Yorker.


Where a pattern starts

The patterns described in this article didn't emerge from nowhere. Altman grew up in Clayton, Missouri, and spent several years of his adolescence managing something most people around him didn't know about. That experience of deciding what to show, and to whom, is part of this story. It's worth understanding before you look at the framework.

Sam Altman grew up in Clayton, Missouri. He knew he was gay from an early age, and growing up in the Midwest in the early 2000s, he has described that period as "not the most awesome thing." He found an early refuge in computers from around age 8. AOL chatrooms, the early internet, spaces where he could be curious and connected without the social weight of school hallways. He came out publicly at 17, in a school assembly, and got a standing ovation.

That story is worth sitting with. Years of careful management, knowing what was safe to show and to whom, followed by a moment of full exposure that was received warmly. That kind of experience shapes something.

What it can develop, particularly in someone ambitious and fast-moving, is a sharp and automatic sense of audience. Which version of yourself works here? What does this group need to hear? What do you hold back? These start as survival instincts. The difficulty is when they don't get revised. When calibrating your presentation to different audiences becomes the default way of operating in all relationships, including professional ones, it stops being a coping strategy and starts being a management style.

The behavioral link: The pattern described in this article, presenting different versions of reality to different people and managing information strategically rather than sharing it directly, has structural parallels to managing an identity in an environment where honesty doesn't feel safe. The strategy that keeps you socially alive at 15 is not automatically useful at 40, when you're running one of the most consequential organizations in the world.

His family situation has its own complications. In January 2025, his sister Annie filed a lawsuit alleging sexual abuse from childhood. Sam, his mother, and his brothers Jack and Max all denied the allegations. The family has said Annie has long-standing mental health challenges. The estrangement had been going on for years before the lawsuit appeared, and Altman had reportedly tried to address the relationship through financial offers and gifts. That detail fits a broader pattern: using resources to manage relational tension rather than addressing it directly.

His mother Connie observed that his brothers could give Sam pushback that other people in his life couldn't, because they knew him before everything else. That kind of relationship becomes harder to maintain as the stakes and the status grow. When the circle of people willing to tell you the truth gets smaller, the gap between your public image and your private reality tends to widen quietly, until something forces it into the open.

To be clear about limits: a complicated personal history doesn't automatically produce leadership failures. Many people with difficult backgrounds build organizations with real integrity. But context matters when you're trying to understand why a particular behavioral pattern becomes stable, why it doesn't self-correct over time, and why the people around someone start treating what should be corrective signals as normal.


A personal statement. Delivered with perfect timing.

A few hours after the New Yorker piece went live, someone threw a Molotov cocktail at his house. No one was hurt. He was awake at 3:45 in the morning and posted to his blog. The post opens with a family photo and a description of the attack. It then moves through four carefully structured sections. That structure, under those circumstances, tells you something.

The post does what his behavior throughout this article does: it converts a situation into an opportunity. The attack on his home generates immediate sympathy. A frightened father, a family in danger, a man awake and "pissed" in the middle of the night. That may well be real. It is also the perfect emotional frame for what follows: a structured defense of his character, his beliefs, and his record.

He never engages with a single specific allegation from the New Yorker piece. The article described systematic deception across multiple relationships and years. His response describes himself as "conflict-averse." Those are not the same thing. Conflict-averse is a sympathetic frame, something that sounds like a human flaw rather than an operational pattern. It generates understanding instead of accountability.

"I am not proud of being conflict-averse, which has caused great pain for me and OpenAI. I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company. I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year."

Read that closely. Every acknowledged failure is framed as something that happened around him. The board conflict was a mess. The conflict-aversion caused pain. He is a flawed person in a complex situation. The sentence "trying to get a little better each year" is almost impossible to criticize. Which is exactly the point.

The post is also a masterclass in audience management. People frightened by AI get validation: the fear is justified. People who think he's accumulating too much power get reassurance: he explicitly says power shouldn't be concentrated. Critics who see him as evasive get accountability. Supporters get a list of genuine achievements. Every segment of his readership receives exactly what it needs from him. That is not a coincidence at 3:45 in the morning.

And here is where it becomes genuinely difficult to analyze: none of this necessarily means the post is dishonest. Someone can be genuinely frightened for his family and simultaneously very good at managing a narrative. The problem is that after years of watching this pattern, those two things become indistinguishable from the outside. That is itself the clearest signal of where the behavior has landed: in a place where no one, not his team, not his board, not a journalist, and not a reader at 4am, can reliably tell the difference.

His acknowledgment of being conflict-averse is worth holding next to the later sections on the lying system and on the departures of those who raised concerns. An organization where people learn not to raise concerns, and a leader who calls himself conflict-averse, are not separate problems with a common cause. They may simply be the same dynamic, named differently depending on whose perspective you are reading from.

You can read the full post at blog.samaltman.com.


Six categories. One clear picture.

The framework I use maps founder behavior across six categories. They're not personality traits. They're behavioral patterns that either protect or undermine an organization as it scales. Here's how Altman scores on each one, based only on what's documented in the article.

Behavioral risk profile
Where the risk concentrates
Radar chart: risk concentration across six categories (Self-Image, Integrity, Empathy, Balance, Conflict, Control). Outer boundary means maximum risk. Scores derived from documented incidents in the Farrow and Marantz reporting.

Two categories score at the top of the range: Integrity at 95% and Control at 90%. These two are structurally connected. High integrity risk means the information coming from the top is unreliable. High control risk means the structures that should catch that problem have already been removed or circumvented. When both are high together, the organization loses its ability to self-correct. That is the most dangerous combination I encounter in due diligence work, and the one most difficult to see from the outside.


The lying system

This is where the article is most specific. The Ilya Memos were about seventy pages of documentation. They open with a list: "Sam exhibits a consistent pattern of..." The first item is "Lying."

The incidents are specific and cross-checked. He told Murati that GPT-4 safety features had been approved. They hadn't. He cited the company's general counsel as his authority. The general counsel, asked directly over Slack, replied: "ugh... confused where sam got that impression."

He told Amodei he had "good authority" from a senior executive that Amodei's team was plotting a coup. When Daniela Amodei brought that executive into the room, the executive denied saying anything. Altman then said: "I didn't even say that." Daniela replied: "You just said that."

Paul Graham said this privately to Y Combinator colleagues: "Sam had been lying to us all the time."

What makes this pattern so hard to act on is the mechanism Altman uses when confronted. He says he doesn't recall. Across many conversations for the article, that phrase appears again and again. He doesn't remember the Microsoft merger clause. He doesn't recall the threat to Murati's reputation. He has a different version of the coup conversation.

From practice

What I see in smaller companies is the same mechanism, just at a different scale. It's never an outright denial. It's someone who consistently has a slightly different version of what was said, and that version always lands in their favor. After enough rounds of this, the people around them start doubting their own memory. That's the real effect of the pattern. And it's one of the hardest things to name clearly in a reference call, because each individual instance sounds like a misunderstanding.


Structures built to be dissolved

The clearest structural expression of Altman's control instinct is what happened immediately after his firing. He negotiated the composition of the board that would investigate him. He texted Nadella: "would you do this: bret, larry summers, adam as the board and me as ceo and then bret handles the investigation."

The investigation produced no written report. Oral briefings only, to the two men he had effectively chosen. One board member described the suggestion that all members received those briefings as "an absolute, outright lie."

A former researcher described the pattern directly: "He sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was."

The pattern across years: Nonprofit charter. Merge-and-assist clause. 20% compute for superalignment. Safety approval protocols. Investigation with independent board. Each was real enough to attract good people and satisfy investors, and each was dissolved when it became inconvenient.


Every critic eventually left

The people who initially joined OpenAI were exceptional in the real sense of the word. Sutskever, Amodei, Murati. Serious, brilliant people who made real sacrifices to be there. Their presence gave the whole thing credibility. When investors ran reference checks, they found exactly who you'd want to find.

What the reference checks couldn't easily surface was that Altman's relationship with each of these people eventually reached the same endpoint: exhaustion, distrust, departure. The talent rotated through. Each departure had an explanation. The pattern was invisible unless you looked at the full sequence.

From practice

When I do reference calls, I come back to one question: what happened to the last person who raised a serious concern? Not what that person is like as a colleague. What happened to them. Did they stay? Did they leave quietly? Did the founder's version of that departure match what everyone else saw? That one question tells me more in 60 seconds than two hours of asking about leadership style and company culture.

Key departures
The people who raised concerns, and what happened next
Orange marks major contributors. Red marks those who departed after raising explicit concerns. Their own words are documented.

Ilya Sutskever, Chief Scientist (co-founder). "I don't think Sam is the guy who should have his finger on the button." Spent months compiling 70 pages of documented concerns. Left 2024, founded Safe Superintelligence.

Dario Amodei, Head of Safety. "The problem with OpenAI is Sam himself." Compiled 200+ pages of documented encounters over years. Left 2020, founded Anthropic.

Mira Murati, Chief Technology Officer. "We need institutions worthy of the power they wield. Everything I shared was accurate, and I stand behind all of it." Left 2024.

Jan Leike, Superalignment Lead. "Safety culture and processes have taken a backseat to shiny products." His team got 1-2% of compute after 20% was promised. Left 2024.

Daniela Amodei, Safety & Policy. Witnessed Altman accuse her team of plotting a coup, then deny having said it seconds later in the same meeting. Left 2020, co-founded Anthropic.

Carroll Wainwright, Researcher. "A continual slide toward emphasizing products over safety." Documented the pattern from the inside. Left before 2024.
Five of the six people shown raised explicit concerns before they left. Every single one departed. You could explain any individual departure as the normal friction of a fast-growing company. But when five out of six people who raised serious concerns left, and several went on to found safety-focused competitors, that is not friction. It is the system working exactly as designed. The departure is the mechanism, not a failure of it.


Why smart people kept missing it

This is what I think about most. Whether Sam Altman is a bad person is a question for courts and philosophers. The more useful question is: what is it about this specific pattern that consistently bypasses experienced investors and board members?

I see six things at work. Together they form a kind of trap. Each mechanism on its own is manageable. Together they make the pattern almost invisible until it's too late to act on it without enormous cost.

Why investors missed it
Six mechanisms that make the pattern nearly invisible
Each mechanism alone is manageable. Together they create a situation where acting on concerns becomes more costly than continuing. That is when due diligence stops.
01 · The Fear Inversion
He used existential risk as the pitch. The message: someone will build AGI. If not us, it's dangerous. Not investing feels like the dangerous choice. Due diligence fights uphill.

02 · Audience Mirroring
Safety researchers heard safety talk. Investors heard growth talk. Governments heard national security talk. Each party thought they saw the real Altman. None of them did.

03 · The Talent Halo
Sutskever, Amodei, and Murati gave credibility. Reference checks found serious scientists who had chosen to be there. The pattern of departures was invisible without seeing the full sequence.

04 · Product Credibility
ChatGPT worked. 100 million users are real. When results are this visible, there's pressure to believe the person behind them shares your values. The most human failure mode.

05 · The Sunk Cost Trap
After $13B from Microsoft, the question changed from "should we trust this person" to "what happens if we don't." The cost of acting on concerns exceeded the cost of continuing.

06 · Isolated Incidents
Each incident had a narrative that made it discrete. Loopt: founder friction. Y.C.: too much going on. The board firing: effective altruism craziness. Nobody connected the dots.

I see versions of three or four of these mechanisms in almost every team due diligence I do. The hardest combination to see through is product credibility combined with sunk cost. When results are real and visible, and when the financial exposure is already large, the cost of acting on concerns starts to exceed the cost of not acting. That is when boards stop doing due diligence and start managing exposure instead.

From practice

In every team due diligence I do, I check for one thing early on: is there anyone who regularly disagrees with the founder in meetings? If I can't find one person who does that, and the company has been running for two or more years, that's a signal. Friction isn't useful in itself. Silence at that stage is usually not harmony. It's learned behavior. People have figured out that pushing back costs more than staying quiet.

Founder Risk Scan

Sound familiar? Check a founder in your portfolio.

The same six dimensions used in this analysis are built into a free, anonymous scan. Three minutes. You get a risk profile, red flags by category, and a practical observation checklist.

70% of startup failures come from founding team dynamics, not product
Run the scan free

Want to know what this kind of assessment looks like in practice?

I do behavioral due diligence as part of team assessments for investors. The framework in this article is what I use. If you're about to make a significant bet on a founding team and you want a second perspective before you do, that's what I'm here for.


The dots only connect when you see the sequence

This is what Sutskever understood when he spent months compiling seventy pages. Each incident, taken alone, has a plausible innocent explanation. A misremembered conversation. A miscommunication. Normal founder-employee friction. Reasonable people could disagree about any one of them.

Put them on a timeline, organize them by category, and the shape becomes undeniable. The shape is no longer "this person made some mistakes." It's "this person operates this way, consistently, across years, across organizations, across relationships."

The visualization below maps documented incidents from the article across time and the six behavioral categories. Watch what happens when they all appear together.

Incident pattern
27 documented incidents across 17 years
Scatter map: each dot is one documented incident from the New Yorker reporting. X axis is year (2007-2024). Y axis is behavioral category (Self-Image, Integrity, Empathy, Balance, Conflict, Control). The pattern is what matters, not any single dot. The documented incidents:

- Ping-Pong champion claim at Loopt (2007)
- Grandiose public blogging about AGI / superintelligence (2015)
- Claims to intelligence officials that China launched an AGI Manhattan Project (2017)
- Claims OpenAI is buying a functioning quantum computer from Rigetti (2018)
- Claims fusion reactors will power the AI boom by 2026 (2018)
- Y.C. departure narrative, the never-fired claim (2014)
- Housing OpenAI in Y.C.'s nonprofit arm / using Y.C. funds (2015)
- Denies the Microsoft merger clause to Amodei while Amodei reads it aloud (2019)
- Accuses the Amodeis of a coup, then denies having said it in the same meeting (2019)
- Still listed as Y.C. chairman on an SEC filing (2021)
- Claims GPT-4 safety features were approved when they weren't (2022)
- Tells Murati GPT-4 Turbo needs no safety approval, citing Kwon; Kwon denies it (2023)
- "I can't change my personality," board confrontation (2019)
- Allies threaten Murati's reputation post-firing (2023)
- "You don't get to weigh in on that," dismissing employees' ethical concerns (2024)
- Financial entanglements with ex-partners creating "lifetime dependence" (2024)
- 12-hour call days, war room, Ambien crisis during the firing (2023)
- "Going all out" finding bad things to damage critics' reputations (2023)
- "Countries plan": playing nations against each other for funding (2017-18)
- Negative framing of safety researchers as "hysterical doomers" (2023)
- Subpoenas against California AI safety bill supporters (2023-24)
- Investigation designed to produce no written findings (2024)
- Secret handshake deal with Brockman and Sutskever, a shadow board (2015)
- Personal investments from the Y.C. president role, a "Sam first" policy (2016)
- Hand-picks the board that will investigate him (2023)
- Karnofsky dissent vote recorded as abstention without consent (2023)
- For-profit conversion hollows out the nonprofit structure (2024)

What the map makes visible is something you cannot see in a single reference check. Every row has incidents. No time period is clean. Integrity and Control incidents appear in every phase shown, from 2007 through 2024. The pattern is not a late-stage problem that emerged when the stakes got high. It is consistent across the entire documented history. One conversation is an incident. Seventeen years is a system.


The structure failed. Not the person.

This is usually how it goes. These stories rarely start with a villain. What you get instead is a series of reasonable people making individually defensible choices, until the distance traveled is too large to trace back.

The board tried to act. It had no communications team, no investor relationships, no legal war chest. Altman had all of these. The lesson isn't that the board was wrong. The lesson is that oversight structures that look strong on paper are often fragile in practice, especially when the person being overseen has spent years building the dependencies that make acting on concerns more costly than not acting.

The concern isn't whether OpenAI's CEO is trustworthy. The concern is that the structures designed to manage that question have been quietly hollowed out. And the institutions that should have caught this each had a reasonable explanation for why this particular moment was not the moment to act.


What I take from this, practically speaking, is that the standard due diligence checklist is not designed to catch this pattern. It catches embezzlement. It catches sexual harassment. It catches obvious criminality. It does not catch systematic information management over years, executed with sufficient charm and genuine results.

From practice

The investors who call me after a problem has surfaced all say the same thing: the signals were there. They just looked like something else at the time. One person leaving. One board discussion that went nowhere. One reference call that felt slightly off but didn't break any obvious rules. My job is to help them look at those signals together, before they commit, while the pattern is still forming rather than already set.

What better due diligence actually looks like

Last year I did 18 team due diligences for investors. Five of them resulted in a cancelled investment.

In each of those five cases, the pattern looked different on the surface. Different sectors, different stages, different founder profiles. But there was a consistent gap between what the founder said and what the people around them actually experienced. That gap only became visible when I looked at incidents across time rather than one by one.

The six categories in the emaho framework give you the structure. But the real question is whether you look at incidents over time rather than evaluating each one in isolation. One misremembered conversation is nothing. Seven misremembered conversations across three organizations, all in the speaker's favor, is a pattern.

The other thing I'd look at: what happens to people who raise serious concerns? Do they stay or do they leave? And do their departures have explanations that consistently favor the founder's narrative?

Those two questions, pattern across time and fate of critics, will catch most of what standard reference checks miss.

See the dynamics in your own team

The patterns in this article don't only exist at OpenAI. They show up in teams of 8 and teams of 80. The emaho platform maps personality types, role dynamics, and collaboration patterns across your full team, so you can see what's actually there before it becomes a problem.

Founder Risk Scan

You've read the analysis.
Now run it on your own portfolio.

The six dimensions in this article are the same six dimensions in the scan. Free, anonymous, three minutes. You get a risk profile, a radar chart, identified red flags, and an observation checklist you can take into your next conversation with the founder.

70% of startup failures trace back to founding team dynamics, not product or market
4% of senior executives show psychopathic traits, four times the general population rate
3 min to get a behavioral risk profile across six dimensions. Free and anonymous.
Run the founder risk scan. Fully anonymous, nothing stored.