The Real Cost of a Bad Engineering Hire in a Two-Person Delivery Team

Engineering Hiring · 7 min read · Talex Research Team

The Old Math

The standard cost model for a bad engineering hire — widely cited in industry literature — estimates the loss at roughly thirty percent of annual salary. The model accounts for recruiting cost, onboarding time, productivity ramp, and exit overhead.

The model works reasonably well for ten-person engineering teams. In that structure, a single bad hire is absorbed by the rest of the team. Other engineers cover the gap. The project continues. The cost shows up in slowed velocity, not in catastrophic failure.

The model does not work for two- or three-person AI-augmented delivery teams. The math is structurally different.

The New Math

In a two-person team, a bad hire is fifty percent of the team. There is no redundancy to absorb the gap. Whatever the bad hire was supposed to deliver does not get delivered, and there is no second engineer to pick it up.

The cost is not a percentage of salary. The cost is the project.

Across the enterprise delivery projects observed, the actual cost of a bad hire in a small AI-augmented team breaks down roughly as follows:

  • Recruitment cost — the same as in any team structure

  • Time to identify the bad hire — typically four to eight weeks longer in small teams because productivity signals are sparser

  • Project delay during identification window — the project burns runway while the bad hire continues to be assigned work

  • Project delay during replacement search — runway continues to burn

  • Knowledge loss — the bad hire built context the replacement cannot recover

  • Trust damage with the SI client — often the most expensive line item, rarely measured

Adding these up across the projects observed, bad-hire cost in small AI-augmented teams runs three to five times the standard model. The thirty-percent-of-salary estimate is a meaningful undercount.
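The gap between the two models can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: the salary, recruitment cost, weekly burn figure, and the midpoints chosen for each window are hypothetical placeholders, not figures from the observed projects, and the knowledge and trust line items are deliberately left out because they resist pricing.

```python
# Illustrative comparison of the two cost models. All dollar figures and
# week counts are hypothetical placeholders, not observed data.

ANNUAL_SALARY = 150_000       # hypothetical engineer salary
WEEKLY_GAP_BURN = 6_000       # hypothetical project spend lost per stalled week

# Standard model: roughly thirty percent of annual salary.
standard_cost = 0.30 * ANNUAL_SALARY

# Small-team model: the line items from the breakdown above.
recruitment = 25_000          # hypothetical; same in any team structure
identification_weeks = 10     # midpoint of the 8-to-12-week window
search_weeks = 6              # hypothetical replacement-search duration
ramp_weeks = 8                # midpoint of the 6-to-10-week ramp

# Runway burned while the project is stalled or degraded.
delay_cost = (identification_weeks + search_weeks + ramp_weeks) * WEEKLY_GAP_BURN

# Knowledge loss and client-trust damage are omitted: real, but hard to price.
small_team_cost = recruitment + delay_cost

print(f"standard model:   ${standard_cost:,.0f}")     # $45,000
print(f"small-team model: ${small_team_cost:,.0f}")   # $169,000
print(f"multiple: {small_team_cost / standard_cost:.1f}x")  # 3.8x
```

Even with conservative placeholder numbers and two major cost categories excluded, the multiple lands in the three-to-five-times range the observed projects suggest.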

Why Identification Takes Longer

In a ten-person team, a bad hire is visible within four to six weeks. Their work product gets reviewed by multiple engineers. Their decisions surface in standup discussion. The signals that they are not the right fit are observable through normal team operations.

In a two-person team, the same signals are sparser. There is one other engineer reviewing the work. Standup is shorter and has less back-and-forth. The patterns that would normally surface a bad fit are not visible because the structure that surfaces them does not exist.

Across observed projects, identification time in two-person teams is consistently eight to twelve weeks — sometimes longer. Every week of that window is a week the project burns runway.

Why Replacement Is Harder

Replacing an engineer in a ten-person team is a partial pause. Other engineers carry their threads while the replacement onboards. The project does not stop.

Replacing an engineer in a two-person team is closer to a full pause. The remaining engineer cannot carry both functions. The project's velocity drops dramatically until the replacement is at full ramp — typically six to ten weeks.

Combined with the eight-to-twelve-week identification window, the total delay from bad hire to recovery is often four to six months. In a project with a twelve-month timeline, this is one third to one half of the runway.
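The runway arithmetic above can be checked directly. In the sketch below, the identification and ramp ranges are the article's own; the replacement-search window is a hypothetical assumption, since the article does not give a figure for it.

```python
# Back-of-envelope recovery timeline. Identification and ramp ranges are
# from the text; the search window is a hypothetical assumption.

identification = (8, 12)   # weeks to identify the bad hire
search = (4, 6)            # hypothetical replacement-search window
ramp = (6, 10)             # weeks for the replacement to reach full ramp

low = identification[0] + search[0] + ramp[0]    # 18 weeks
high = identification[1] + search[1] + ramp[1]   # 28 weeks

# Express as a fraction of a twelve-month (52-week) project runway.
print(f"{low}-{high} weeks = {low/52:.0%} to {high/52:.0%} of runway")
```

With these assumptions the total delay spans roughly a third to a half of a twelve-month runway, matching the one-third-to-one-half figure above.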

The Knowledge Cost

The most underestimated cost in small-team bad hires is knowledge loss. A bad hire who has been on the project for two months has built real context — about the codebase, about the client's environment, about the specific constraints of the work.

That context does not transfer to the replacement. The replacement enters cold. They will rebuild the context, but the rebuild costs additional weeks and the rebuilt version is rarely as complete as what was lost.

In small teams, the knowledge per engineer is structurally higher than in large teams. The same exit therefore costs structurally more.

The Trust Cost

The cost that almost never appears in models is the relationship damage with the SI client.

When an SI firm experiences a bad hire on a small delivery team, the client does not see "one engineer was wrong." They see "the engineering function failed." This is not unfair. From the client's perspective, the team is the function. If half the team was wrong, the function failed.

The trust damage from a bad hire in a small team often outlasts the project itself. SI firms that experience two such failures with the same client typically lose the next opportunity. The cost is not measured in salary. It is measured in pipeline.

What This Changes About Hiring Strategy

The implication is not that small teams should hire slowly. Slow hiring has its own costs — the engineering function still has to deliver, and an empty seat is also expensive.

The implication is that the assessment process for small AI-augmented teams should carry significantly more weight than the same assessment process for large teams. The cost of getting it wrong is not linear with team size. It is roughly inverse: the smaller the team, the more a single miss costs.

Most SI firms in 2026 still run the same assessment process for two-person teams that they used for ten-person teams. The process is identical. The risk profile is not.

An assessment process appropriate for small AI-augmented teams typically includes:

  • Joint observation rather than sequential interviews — to surface judgment quality at the point of friction

  • Tier identification — distinguishing AI-Enabled, AI-Integrated, and AI-Native engineers, because tier determines fit for small-team contexts

  • Live demonstration components — because self-reported behavior and observed behavior diverge significantly

  • Reasoning audits rather than answer checks — the right answer in an interview is not the same as the right reasoning in production

The Underlying Point

The wrong hire in a two-person team is not a setback. It is the project. Every cost downstream of the hiring decision compounds in ways the standard cost model was never designed to capture.

The hiring decision is no longer one input among many. In small AI-augmented delivery teams, it is the input that determines whether the project succeeds at all.

See pre-vetted AI-augmented engineers