Normative Supplement to the VTI Standard
Published: 2026
Authority: VTI Foundation, Inc.
Normative Reference: VTI Standard v1.0 (February 2026)
This Guidance establishes enforcement-time integrity requirements for AI-mediated authorization systems operating under the VTI Standard.
Automated and AI-mediated systems increasingly influence authorization decisions affecting access, eligibility, compliance status, and operational permissions in regulated and high-assurance digital environments.
In AI-mediated authorization, enforcement outcomes may depend on non-deterministic inference outputs (e.g., probabilistic scoring, learned model predictions) and on evolving decision functions (e.g., model updates, policy updates, or configuration drift).
These conditions introduce enforcement-integrity failure modes, including: enforcement outcomes that cannot be traced to the model version or configuration in effect at evaluation time; authorization semantics that change silently between evaluation and enforcement; and reliance on post-hoc logging rather than verification-gated enforcement.
The VTI Standard defines deterministic requirements for binding authorization outcomes to verifiable evidence at the moment of enforcement. This Guidance extends those requirements to AI-mediated authorization contexts and defines additional constraints necessary to preserve deterministic enforcement integrity when inference components influence enforcement.
Systems claiming conformance to this Guidance MUST produce verifiable evidence sufficient to enable independent verification that AI-assisted authorization outcomes remained bound to canonical trust-state representation and verification-linked enforcement requirements as defined in the VTI Standard.
This Guidance applies to systems in which enforcement outcomes are influenced by non-deterministic inference components, including machine learning models, probabilistic scoring mechanisms, or other learned or stochastic decision functions used as inputs to allow/deny or privilege-assignment decisions.
This Guidance also applies where authorization semantics may change over time due to model updates, policy updates, configuration drift, or other mechanisms that can alter the effective decision function.
This Guidance does not prescribe model architecture, training methodology, fairness constraints, or explainability requirements. It is complementary to such frameworks and specifies enforcement-time integrity requirements that those frameworks commonly assume but do not define.
Systems within scope MUST satisfy the VTI Standard and the additional enforcement constraints defined in this Guidance.
AI-mediated authorization introduces specific integrity risks that must be addressed at enforcement time.
Systems MUST bind enforcement outcomes to a uniquely identifiable model version and decision function configuration in effect at the time of evaluation.
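A minimal sketch of one way to derive such a unique identity, by hashing a canonical serialization of the model version together with its configuration (the function name `decision_function_id` and the example parameters are assumptions for illustration, not requirements of this Guidance):

```python
import hashlib
import json

def decision_function_id(model_version: str, config: dict) -> str:
    """Derive a stable identifier for the decision function in effect.

    Canonical JSON serialization (sorted keys, fixed separators) ensures
    the same model version and configuration always hash to the same value.
    """
    canonical = json.dumps(
        {"model_version": model_version, "config": config},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Any change to a threshold or parameter yields a different identity:
id_a = decision_function_id("fraud-model-2026.02", {"threshold": 0.87})
id_b = decision_function_id("fraud-model-2026.02", {"threshold": 0.88})
assert id_a != id_b
```

Binding this identifier into each authorization artifact lets a verifier confirm, after the fact, exactly which decision function produced a given outcome.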
Systems MUST ensure that authorization semantics cannot silently change between evaluation and enforcement. Any change in applicable rules, thresholds, or model parameters MUST invalidate previously generated authorization artifacts.
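The invalidation requirement can be sketched as follows: each artifact carries a digest of the parameters in effect when it was issued, and any parameter change makes the digest mismatch (the helper names `bound_artifact` and `still_valid` are hypothetical, introduced only for this example):

```python
import hashlib
import json

def bound_artifact(outcome: str, params: dict) -> dict:
    """Issue an authorization artifact bound to the parameters in effect."""
    digest = hashlib.sha256(
        json.dumps(params, sort_keys=True).encode("utf-8")).hexdigest()
    return {"outcome": outcome, "params_digest": digest}

def still_valid(artifact: dict, current_params: dict) -> bool:
    """An artifact is invalid if any applicable parameter has changed."""
    digest = hashlib.sha256(
        json.dumps(current_params, sort_keys=True).encode("utf-8")).hexdigest()
    return artifact["params_digest"] == digest

params = {"model_version": "v3", "threshold": 0.9}
art = bound_artifact("permit", params)
assert still_valid(art, params)
# A threshold update silently changes semantics, so the artifact no
# longer verifies and must not be honored at enforcement time:
assert not still_valid(art, {"model_version": "v3", "threshold": 0.85})
```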
Systems MUST NOT rely solely on post-hoc logging to establish authorization validity. Enforcement MUST be gated on verification of canonical trust-state and decision function identity prior to action execution.
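The distinction between verification-gated enforcement and post-hoc logging can be illustrated with a small gate function. This is a sketch under assumed artifact field names (`trust_state_digest`, `decision_fn_id`), not a prescribed implementation:

```python
def enforce(action, artifact, current_trust_digest, current_fn_id):
    """Execute `action` only after verifying the artifact's bindings.

    Verification is a gate, not a log entry: if the canonical trust
    state or the decision function has changed since evaluation, the
    action never runs.
    """
    if artifact["trust_state_digest"] != current_trust_digest:
        raise PermissionError("trust state changed since evaluation")
    if artifact["decision_fn_id"] != current_fn_id:
        raise PermissionError("decision function changed since evaluation")
    return action()

artifact = {"trust_state_digest": "ts-1", "decision_fn_id": "fn-1"}

# Bindings match: the action executes.
result = enforce(lambda: "granted", artifact, "ts-1", "fn-1")

# The decision function drifted: enforcement is refused before execution.
refused = False
try:
    enforce(lambda: "granted", artifact, "ts-1", "fn-2")
except PermissionError:
    refused = True
```

A system that merely logged the mismatch after executing the action would not satisfy this requirement.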
In addition to the requirements of the VTI Standard, AI-mediated authorization systems MUST satisfy the constraints defined in the preceding clauses: binding enforcement outcomes to decision-function identity, invalidating authorization artifacts when authorization semantics change, and gating enforcement on verification prior to action execution.
Systems claiming conformance MUST produce verifiable artifacts that enable independent validation of the model version and decision-function configuration in effect at evaluation time, the canonical trust state bound to each enforcement outcome, and the verification performed prior to action execution.
Conformance claims MUST reference both the VTI Standard v1.0 and AI Authorization Integrity Guidance v1.0.
This Guidance is governed by the VTI Foundation and subject to structured revision control. Future updates MAY clarify requirements, expand conformance criteria, or incorporate feedback from public working group review.
Substantive changes will result in a version increment and publication of updated canonical documentation.