AI Identity Protocol

Formal Definition of AI Identity Protocol

This page contains the canonical, formal definition of the AI identity protocol. Every other page on this site references this definition. Every claim, comparison, and boundary analysis derives from the terms established here. The definition is fixed. It is not a working draft, a proposal, or an approximation. It is the definition.

Definition

“An AI identity protocol is a deterministic system that assigns a stable, independently verifiable identity to an AI run based on its declared execution.”

Each term in this definition carries a specific, non-negotiable meaning. The following sections decompose the definition term by term and establish the constraints that each term imposes on any conforming system.

Deterministic System

“Deterministic” means that given identical inputs, the system produces identical outputs every time, without exception. Applied to identity assignment: the same declared execution always produces the same identity. There is no randomness, no entropy injection, and no variation between computations. Any party that processes the same declaration through the same protocol arrives at the same identity. This property is what makes the identity independently verifiable. Without determinism, two parties computing identity from the same declaration could arrive at different results, and verification would be structurally impossible. The page on Deterministic Identity provides the full specification of this property.
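The definition does not prescribe an algorithm, but the determinism property can be sketched as canonical serialization plus a cryptographic hash. The function name, field names, and choice of SHA-256 below are illustrative assumptions, not part of the protocol.

```python
import hashlib
import json

def assign_identity(declaration: dict) -> str:
    """Deterministically map a declared execution to an identity.

    Canonical serialization (sorted keys, fixed separators) ensures the
    same declaration always produces byte-identical input to the hash,
    and therefore the same identity, on any machine.
    """
    canonical = json.dumps(declaration, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

declaration = {
    "model": "example-model-v1",   # hypothetical model identifier
    "temperature": 0.7,
    "context": "summarize the attached report",
}

# Two independent computations over the same declaration agree exactly.
assert assign_identity(declaration) == assign_identity(dict(declaration))
```

Canonicalization is the load-bearing detail here: without a fixed serialization, two parties could hash semantically identical declarations into different byte streams and break the determinism guarantee.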

Stable Identity

“Stable” means the identity does not change after assignment. Once an AI run receives its identity, that identity persists unchanged regardless of what happens to the run, its outputs, or the system that executed it. Stability is not durability of storage. It is immutability of the identity value itself. If the identity of a run changed over time, any reference made at time T would be invalid at time T+1. Historical verification, audit trails, and cross-system agreement all require that the identity assigned at execution time remains the same identity indefinitely.

Independently Verifiable

“Independently verifiable” means any party with access to the AI run declaration is able to recompute the identity without relying on the original assigning system, without trusting any third party, and without accessing any private data. The verification is a pure computation: take the declaration, apply the protocol, compare the result. If the result matches, the identity is confirmed. If it does not, the identity is invalid. This property eliminates trust dependencies. No certificate authority, no central registry, and no privileged system is required for verification. The page Verification Failure in AI documents what happens when this property is absent.
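The pure-computation character of verification can be made concrete. This is a sketch under an assumed hash-based scheme; `compute_identity` and `verify` are illustrative names, not protocol-mandated functions.

```python
import hashlib
import json

def compute_identity(declaration: dict) -> str:
    """Recompute the identity from the declaration alone."""
    canonical = json.dumps(declaration, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(declaration: dict, claimed_identity: str) -> bool:
    """Pure verification: take the declaration, apply the protocol,
    compare the result. No registry, no third party, no private data."""
    return compute_identity(declaration) == claimed_identity

declaration = {"model": "example-model-v1", "temperature": 0.0}
claimed = compute_identity(declaration)

assert verify(declaration, claimed)                              # identity confirmed
assert not verify({**declaration, "temperature": 1.0}, claimed)  # altered declaration fails
```

Note that `verify` has no network calls and no credentials: any party holding the declaration can run it, which is exactly what "independent" means here.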

AI Run

An AI run is a composite execution. It is not a single API call, not a single model inference, and not a single output generation. An AI run encompasses the complete, bounded execution unit: the model identifier, the parameters, the input context, the system configuration, and every declared element that defines what this execution is. The concept of an AI run as a composite execution is foundational. The identity protocol assigns identity to this composite, not to any individual component. The page AI Run Identity formalizes the structure of the AI run as the identity subject.
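One way to picture the composite is a single record carrying every declared element. The field names below are illustrative assumptions, not the formal structure defined on the AI Run Identity page.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AIRunDeclaration:
    """A bounded, composite execution unit.

    Identity attaches to this record as a whole, never to any
    individual field in isolation.
    """
    model_id: str                                       # which model executed
    parameters: dict                                    # e.g. temperature, max tokens
    input_context: str                                  # the declared input
    system_config: dict = field(default_factory=dict)   # runtime configuration

run = AIRunDeclaration(
    model_id="example-model-v1",
    parameters={"temperature": 0.2},
    input_context="classify this support ticket",
)
```

The `frozen=True` flag mirrors the stability requirement at the type level: once constructed, the declaration's fields cannot be reassigned.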

Declared Execution

“Based on its declared execution” means the identity is computed from what was stated about the execution, not from what the execution produced. The declaration is the input to the identity function. The outputs of the AI run are not inputs to the identity function. This distinction is absolute: identity is not derived from outputs. The declaration exists at or before execution time. It is the specification of the run, not the record of the run. AI Run Declaration defines the structure and requirements of this declaration.
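The absolute distinction between declaration and output can be demonstrated directly. A sketch with illustrative names and an assumed hashing scheme: outputs simply never enter the identity function.

```python
import hashlib
import json

def identity_of(declaration: dict) -> str:
    """Identity is a function of the declaration only."""
    canonical = json.dumps(declaration, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

declaration = {"model": "example-model-v1", "prompt": "write a haiku"}

# Identity exists before any output does.
id_before = identity_of(declaration)

# The model is free to be non-deterministic: two runs, two outputs.
output_a = "an old silent pond"
output_b = "autumn moonlight falls"

# Outputs never enter the computation, so the identity is unchanged.
id_after = identity_of(declaration)
assert id_before == id_after
assert output_a != output_b  # differing outputs, same identity
```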

The Core Problem

The definition exists because a specific, structural problem exists. Every AI system in production executes runs without assigning deterministic identity to those runs. The run happens. Outputs are produced. Side effects propagate. But no stable, independently verifiable identity is bound to the execution. The execution is anonymous.

This is not an oversight in any single system. It is a missing layer in AI infrastructure. The page Why AI Systems Lack Identity traces the structural reasons this layer was never built. No existing tool, platform, or standard fills this gap because the gap exists at the identity layer, and no existing tool operates at the identity layer.

Failure Modes

When the definition is not satisfied, specific failures become unavoidable. Each failure traces directly to a violated term in the definition.

  1. Non-deterministic identity assignment. If the identity system is not deterministic, two parties computing identity from the same declaration produce different results. Cross-system agreement becomes impossible. Every reference to the run becomes ambiguous.
  2. Unstable identity. If the identity changes after assignment, any reference made at an earlier time becomes invalid. Audit records, compliance reports, and inter-system references all break when the identity they point to no longer matches the current identity.
  3. Verification dependency. If the identity requires a trusted third party for verification, the system inherits all the trust assumptions, availability constraints, and single points of failure of that third party. Independent verification is not a feature. It is a structural requirement. Non-Deterministic Identity Risks catalogs the systemic consequences.
  4. Output-derived identity. If the identity is computed from outputs rather than from the declared execution, then two runs with identical outputs receive identical identities even when they are fundamentally different executions. The identity becomes a hash of results, not a fingerprint of execution. This conflates what happened with what was produced.
  5. Partial declaration identity. If the identity is computed from an incomplete declaration, missing parameters mean different executions with overlapping partial declarations receive the same identity. The identity loses specificity, and verification becomes unreliable because the identity does not fully represent the execution.
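Failure mode 5 can be reproduced in a few lines. A sketch with hypothetical field names: hashing only a partial projection of the declaration lets two different executions collide, while the full declaration keeps them distinct.

```python
import hashlib
import json

def identity(decl: dict, fields=None) -> str:
    """Hash a declaration, optionally projecting onto a subset of fields."""
    subset = decl if fields is None else {k: decl[k] for k in fields if k in decl}
    canonical = json.dumps(subset, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

run_a = {"model": "example-model-v1", "temperature": 0.0, "prompt": "p"}
run_b = {"model": "example-model-v1", "temperature": 1.0, "prompt": "p"}

# Partial declaration (temperature omitted): different executions collide.
partial = ["model", "prompt"]
assert identity(run_a, fields=partial) == identity(run_b, fields=partial)

# Full declaration: the identities correctly differ.
assert identity(run_a) != identity(run_b)
```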

Why Existing Approaches Fail

Each existing approach fails to satisfy at least one term in the definition. The failures are not failures of implementation quality. They are structural mismatches between what the tool does and what the definition requires.

Logs

Logs record events sequentially. A log entry is created after an event occurs. Log entries do not assign identity to executions. They record effects with timestamps. A log does not know which declared execution produced the event it records. Logs fail the “deterministic system that assigns identity” requirement because they do not assign identity at all. See AI Identity vs Logging.

Observability

Observability aggregates signals across systems to provide operational visibility. It answers questions about system health, performance trends, and anomaly detection. It does not assign stable, independently verifiable identity to individual AI runs. Observability fails the “assigns identity to an AI run” requirement because its unit of analysis is the system, not the individual run. See AI Identity vs Observability.

Tracing

Distributed tracing assigns trace IDs to requests as they traverse services. A trace ID identifies a request path, not an AI run as a composite execution. The trace ID is not computed from the declared execution parameters. It is assigned arbitrarily at request initiation. Tracing fails the “deterministic” and “based on declared execution” requirements because the trace ID bears no relationship to what the execution declared. See AI Identity vs Tracing.

Outputs

Output validation checks whether results meet defined criteria. It operates entirely on the products of execution. Identity is not derived from outputs. Two different declared executions can produce identical outputs, and output validation has no mechanism to distinguish the runs that produced them. It fails the “based on declared execution” requirement because it operates on outputs, not declarations. See AI Identity vs Output Validation.

Audit Trails

Audit trails are retrospective compilations of logs, access records, and output artifacts. They are assembled after execution, not at execution time. An audit trail does not assign identity. It attempts to reconstruct a narrative from partial data. Identity cannot be reconstructed after execution because the declaration that grounds the identity must be captured at execution time. Audit trails fail the “assigns a stable identity” requirement because they never assign identity in the first place.

Category Boundary

The formal definition draws a precise boundary around what an AI identity protocol is, and several adjacent concepts fall explicitly outside that boundary.

The page What AI Identity Is Not provides the exhaustive exclusion list. The pages Identity vs Execution and Identity vs Reconstruction address the two most common conflations.

Logical Inevitability

“If an AI run cannot be assigned a stable, independently verifiable identity, then no system can reliably reproduce, verify, or agree on that execution, making a deterministic identity protocol necessary.”

The inevitability follows directly from the definition. If any system requires the ability to verify, reproduce, or reference an AI run, that system requires a stable, independently verifiable identity for that run. No existing infrastructure provides this identity. Therefore, a dedicated protocol that satisfies the definition is necessary. The necessity is not contingent on adoption preferences, market timing, or technological readiness. It is a structural requirement imposed by the problem itself.

Implications

The formal definition, once established, produces specific consequences for how AI systems must evolve.

Frequently Asked Questions

What makes this definition formal rather than informal?

Every term in the definition carries a precise, constrained meaning. "Deterministic" means the same inputs always produce the same identity. "Stable" means the identity does not change after assignment. "Independently verifiable" means any party with access to the declaration is able to recompute the identity without relying on the assigning system. No term is decorative. Each one imposes a testable constraint.

Why is the definition based on declared execution rather than actual execution?

Actual execution is observable only by the executing system. No external party has direct access to what happened inside a runtime. Declared execution is the set of parameters stated before or at the moment of execution. A declaration is inspectable, transmissible, and independently processable. Identity must be independently verifiable, which requires a shared input. The declaration is that shared input.

Does this definition exclude non-deterministic AI systems?

No. The determinism requirement applies to the identity assignment, not to the AI model. A non-deterministic model that produces different outputs on each run still receives a deterministic identity, because the identity is computed from the declared execution parameters, not from the outputs. Identity is not derived from outputs.

How does this definition differ from a UUID or request ID?

A UUID is a random identifier with no relationship to execution content. A request ID tracks a request through infrastructure. Neither is derived from the execution declaration. A deterministic identity, by definition, is computed from the declared execution: the same declaration always produces the same identity. Neither a UUID nor a request ID ever has this property.
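The contrast is directly observable. A sketch assuming a hash-based deterministic identity (the scheme is illustrative, not the protocol specification):

```python
import hashlib
import json
import uuid

def deterministic_identity(decl: dict) -> str:
    """Identity computed from the declaration: stable across calls."""
    canonical = json.dumps(decl, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

declaration = {"model": "example-model-v1", "temperature": 0.0}

# Deterministic identity: the same declaration always yields the same value.
assert deterministic_identity(declaration) == deterministic_identity(declaration)

# A UUID bears no relationship to the declaration: every assignment differs.
assert uuid.uuid4() != uuid.uuid4()
```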

Is this definition tied to a specific implementation or technology?

No. The definition specifies properties that the identity system must satisfy: deterministic, stable, independently verifiable, based on declared execution. Any implementation that satisfies all four properties conforms to the definition. The definition is a constraint specification, not an implementation blueprint.

Why does the definition use "AI run" instead of "AI inference" or "AI call"?

An AI run is a composite execution that includes the model, parameters, context, inputs, and all declared configuration. "Inference" refers to the model computation step only. "Call" refers to the API interaction only. Neither captures the full scope of what must be identified. The term "AI run" denotes the complete, bounded execution unit that the protocol assigns identity to.

What breaks if any single term is removed from the definition?

Remove "deterministic" and the same execution declaration produces different identities, making verification impossible. Remove "stable" and the identity changes over time, making historical reference unreliable. Remove "independently verifiable" and only the assigning system is able to confirm the identity, eliminating trust. Remove "declared execution" and the identity has no grounding in what actually ran, making it arbitrary. Every term is load-bearing.