Position paper · Engineering methodology · April 30, 2026

Why FDE is the future of AI-native development.

The forward deployed engineer is not a Palantir oddity. It is the talent shape that AI-native software development requires, full stop. This is the argument for why traditional engineering organization structures fail at AI-native development, why the FDE shape succeeds, and why the model is being adopted (sometimes named differently) by every serious AI-native company shipping today.

~1,900 words · A position paper from a team that runs the FDE model in healthcare AI

The short version

AI-native software entangles customer reality and product behavior.

In conventional software, customer feedback updates a backlog. The product team triages, the engineering team builds, the next release ships. The loop is slow, the feedback is processed, the product changes deliberately. In AI-native software, customer feedback updates evaluations, prompts, fine-tunes, retrieval indexes, and runtime guardrails — often simultaneously. The loop is fast, the feedback is structural, the product changes continuously. The talent shape that handles this entanglement is the forward deployed engineer: senior, code-shipping, customer-embedded, outcome-owning. Traditional engineering org structure was not built for this loop. FDE was.

The Palantir lesson

FDE was a structural answer to a structural problem.

Palantir built the FDE function because government and enterprise data work could not be commoditized into a SaaS pitch. Each customer's data was different; each customer's workflow was different; each customer's regulatory environment was different. A SaaS sales motion would have failed on this terrain. So Palantir made engineers the sales motion.

The FDE was sent in to actually solve the customer's data problem — sit at the customer's site, pair with the customer's analysts, ship code against the customer's stack. The artifact at the end of the engagement was working software, not a slide deck. The customer's outcome was the FDE's KPI. The model worked, and Palantir scaled it.

The pattern was widely admired and rarely copied. Most enterprise software companies looked at FDE and thought: "we cannot afford to send senior engineers to every customer." This is the wrong frame. The right frame is: if your product cannot be sold without senior engineers, the senior engineers are the product. Palantir did not pay an FDE tax; Palantir extracted FDE rents.

Why AI-native is different from cloud-native

The entanglement of customer reality and product behavior.

Cloud-native software was a deployment shift. AI-native software is a development shift. The methodology changes, not just the infrastructure.

Cloud-native

Customer reality is downstream of product behavior.

The product team builds; the deployment automates; the customer integrates against a stable API contract; the customer's reality conforms to the product's surface. Feedback loops update a backlog. The development methodology is sprint-based: design, build, ship, observe, iterate.

AI-native

Customer reality is constitutive of product behavior.

The product's behavior emerges from a combination of code, prompts, evaluations, fine-tunes, retrieval indexes, and runtime data — all of which depend on customer-specific information. The customer's data is part of the product, not just an input to it. Feedback loops update evaluations, prompts, fine-tunes, and guardrails — frequently, simultaneously, and in production.

This is not an architectural claim. It is a methodological one. The development loop in an AI-native product runs through customer-specific evaluations and customer-specific data — not through a generic feature backlog. The team running that loop has to be at the customer's elbow, not three time zones away on a quarterly release schedule.

Why traditional engineering org structure fails AI-native

The handoff problem.

Traditional enterprise engineering separates product engineering, customer success, professional services, and account management into distinct functions. The handoffs between them work — barely — for cloud-native products. They fail for AI-native products.

Consider what happens when an AI-native product gets a wrong answer for a customer. In a traditional org structure: customer success raises a ticket; professional services investigates; product engineering reproduces; the issue gets prioritized in the next sprint; a fix ships in the next release. Time to resolution: weeks. The customer's confidence in the system erodes for the duration. By the time the fix ships, the customer's data has changed, and the fix may no longer apply.

In an FDE-led org structure: the FDE — who is sitting with the customer, has shipping access, and owns the outcome — debugs the wrong answer in real time. Updates the eval suite. Adjusts the prompt or the retrieval. Verifies the fix. Time to resolution: hours. The customer's confidence is preserved. The product's quality is preserved. The team's learnings are captured back into the platform's shared infrastructure.
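The loop described above — capture the wrong answer as a regression case, adjust the prompt or retrieval, verify against the whole suite — can be sketched in a few lines. This is a minimal illustration, not a real framework; the names (`EvalCase`, `fde_fix_loop`, the candidate-prompt list) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical types for illustration only; not any specific eval framework.

@dataclass
class EvalCase:
    case_input: str   # the customer scenario that produced the wrong answer
    expected: str     # the answer the customer's domain experts confirmed

@dataclass
class EvalSuite:
    cases: list = field(default_factory=list)

    def add_regression(self, case: EvalCase) -> None:
        # Step 1: capture the wrong answer as a permanent regression case.
        self.cases.append(case)

    def failing(self, model: Callable[[str, str], str], prompt: str) -> list:
        # Step 3: verify a fix against the whole suite, not just the new case.
        return [c for c in self.cases if model(prompt, c.case_input) != c.expected]

def fde_fix_loop(suite: EvalSuite,
                 model: Callable[[str, str], str],
                 prompt_candidates: list) -> Optional[str]:
    # Step 2: try adjusted prompts until every captured case passes.
    for prompt in prompt_candidates:
        if not suite.failing(model, prompt):
            return prompt  # first candidate with zero failing cases
    return None  # nothing fixes all cases; escalate beyond prompt changes
```

The point of the sketch is the shape of the loop, not the implementation: the fix is verified against every previously captured case before it ships, so each customer incident permanently hardens the suite.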

The handoff problem in AI-native development is not slowness. It is information loss. By the time customer reality reaches the engineer who can act on it, the reality has been compressed, sanitized, and stripped of the texture that would let the engineer fix the right thing. FDE eliminates the handoff. The customer's reality reaches the engineer's keyboard intact.

The FDE shape

What the role actually requires.

The FDE shape is unusual and not easily produced from a conventional engineering pipeline. It combines four properties that rarely cluster together — and the absence of any one of them undermines the value of the others.

Property 1

Senior engineering depth.

The FDE has to ship production-grade code in the customer's stack, not pseudo-code in a sandbox. The skills that matter are the same skills that matter on a senior platform team — except they have to be deployable in a customer environment, on the customer's schedule, against the customer's constraints.

Property 2

Domain fluency.

The FDE has to speak the customer's language fluently enough to understand what the customer actually wants — which is rarely what the customer says they want. In healthcare AI, this means understanding the difference between InterQual and MCG, the difference between LCDs and NCDs, the difference between auto-affirmation and auto-denial. Generalists cannot move fast on this terrain.

Property 3

Customer-facing problem-solving.

The FDE has to handle hard conversations with customer leadership — whose timeline is too aggressive, whose budget is too tight, whose IT environment is more complex than admitted. The role requires diplomatic spine. Not just engineering competence; the willingness to say "this scope is wrong" to a senior customer stakeholder, in a way that lands.

Property 4

Outcome ownership.

The FDE owns the customer's outcome, not the SOW's deliverables. If the engagement requires renegotiating the scope to actually deliver value, the FDE renegotiates the scope. If the engagement requires going beyond the scope to land the outcome, the FDE goes beyond the scope. This is the property that separates FDE from professional services, and it is the property that makes the model work.

Why it compounds

Each FDE engagement makes the platform smarter.

The FDE model has a property that traditional consulting does not have: the work compounds back into the platform. This is what makes the economics defensible long-term.

Every FDE engagement produces three artifacts beyond the immediate deliverable: a tested customer-specific configuration that becomes a starting template for similar customers; an evaluation suite that captures the customer's edge cases and feeds back into the platform's regression testing; and a runbook that captures the operational lessons of the deployment. Over time, these artifacts compound — the FDE pool gets faster on each successive engagement because the prior engagements seeded the platform with reusable infrastructure.
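The compounding mechanism above — each engagement depositing a configuration template, regression cases, and a runbook into shared infrastructure — can be made concrete with a small sketch. All names here (`EngagementArtifacts`, `Platform`, the segment keys) are illustrative assumptions, not a real platform API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the three compounding artifacts; hypothetical names.

@dataclass
class EngagementArtifacts:
    config_template: dict   # tested customer-specific configuration
    eval_cases: list        # edge cases fed back into regression testing
    runbook: list           # operational lessons from the deployment

@dataclass
class Platform:
    templates: dict = field(default_factory=dict)
    regression_suite: list = field(default_factory=list)
    runbooks: list = field(default_factory=list)

    def absorb(self, segment: str, artifacts: EngagementArtifacts) -> None:
        # Each engagement seeds shared infrastructure for the next one.
        self.templates.setdefault(segment, artifacts.config_template)
        self.regression_suite.extend(artifacts.eval_cases)
        self.runbooks.append(artifacts.runbook)

    def starting_point(self, segment: str) -> dict:
        # Later engagements in the same segment start from an accumulated
        # template rather than from scratch.
        return dict(self.templates.get(segment, {}))
```

This is why the economics differ from consulting: `absorb` runs after every engagement, so the starting point for engagement N is everything deposited by engagements 1 through N-1, whereas billed consulting hours leave nothing behind in the platform.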

This is why AI-native companies that adopt the FDE model can grow margin over time despite high engineer-cost-per-customer in the early years. The early engagements are intensive; the later engagements ride on accumulated infrastructure. Companies that try to scale AI-native products through traditional professional services do not get this compounding effect — because professional services hours do not compound into shared platform infrastructure. The hours just leave with the consultants.

Common questions

FAQ.

What is AI-native development?

AI-native development is software development where AI capabilities are foundational rather than additive. The product behavior emerges from a combination of code, prompts, evaluations, fine-tunes, and runtime data — and changes whenever any of those change. The development methodology has to account for this entanglement; traditional waterfall and even traditional agile methods do not.

Why does AI-native development need a different talent shape?

Because the surface where customer reality meets product behavior is wider and changes faster. In conventional software, customer feedback updates a backlog; in AI-native software, customer feedback updates evaluations, prompts, fine-tunes, and runtime guardrails — often simultaneously. The talent shape that handles this surface is the forward deployed engineer.

How is forward deployed engineering different from professional services?

FDE owns the outcome and writes the code; professional services consultants implement what they are told to implement. FDE is a flat senior-engineer pool; professional services has tiers. FDE bills against measurable outcomes; professional services bills against time and materials.

Is the FDE model applicable beyond Palantir?

Yes — and increasingly so. OpenAI, Anthropic, Scale AI, and most foundation-model deployment companies have adopted variants of the FDE model. The model fits the AI era because customer reality and product behavior are unusually entangled in AI products; the FDE is the talent shape that handles the entanglement.
