Policy-as-Code for AI: Designing Testable Controls, Lineage & Evidence

Format: Online

As AI permeates business, the assurance challenge shifts from “Do we have a policy?” to “Are the controls actually executed at the point of use, and can we prove it?” This session presents a practical blueprint for translating privacy, security and ethics requirements into policy-as-code so controls are embedded in data and AI pipelines, operate consistently at scale and produce audit-ready artifacts by default. The session will break the problem into four auditable layers.

  1. Identity, access, and purpose binding – How to implement access-by-purpose and least privilege using role- and attribute-based controls, purpose descriptors on tokens/credentials, and activity-scoped entitlements.
  2. Data protection and prompt/response safety at runtime – How to express redaction and minimization rules as code (detect/classify PII/PHI/secret patterns; mask or block by context), as well as prompt safety rules (forbidden topics, jailbreak prevention, cite-or-fail requirements, max token exposure) and output evaluation gates (toxicity, PII leakage, IP similarity, fact-check confidence).
  3. Lineage, approvals, and change control – How to attach immutable lineage (dataset hashes, feature lineage, model artifacts, and prompt/output pairs) to each pipeline step, alongside approval attestations for high-risk uses (e.g., external communications, financial impact).
  4. Monitoring, evaluation, and evidence generation – How to design dashboards and alerts for policy decision rates (allow/deny/override), redaction efficacy, evaluation gate failures, and drift/stability of model performance by cohort.
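To make the layers above concrete, here is a minimal sketch of a policy-as-code control in Python. It is illustrative only, not material from the session: the role-to-purpose map, the PII patterns, and the `evaluate_request` function are all hypothetical names chosen for this example. It shows purpose-bound access (layer 1), runtime redaction (layer 2), and a hash-based, audit-ready evidence record (layer 3) produced as a by-product of every decision.

```python
import hashlib
import re
from datetime import datetime, timezone

# Hypothetical purpose bindings: which purposes each role may invoke (layer 1).
ALLOWED_PURPOSES = {
    "support_agent": {"ticket_summarization"},
    "analyst": {"cohort_reporting"},
}

# Illustrative PII patterns for runtime redaction (layer 2); a real
# deployment would use a classifier or a managed detection service.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder token."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

def evaluate_request(role: str, purpose: str, prompt: str):
    """Return (safe_prompt_or_None, evidence_record) for one AI request."""
    allowed = purpose in ALLOWED_PURPOSES.get(role, set())
    safe_prompt = redact(prompt) if allowed else None
    evidence = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "purpose": purpose,
        "decision": "allow" if allowed else "deny",
        # Hash the raw prompt so lineage is verifiable later
        # without persisting the PII itself (layer 3).
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    return safe_prompt, evidence

safe, record = evaluate_request(
    "support_agent", "ticket_summarization",
    "Customer jane@example.com reported SSN 123-45-6789 exposed.")
```

Aggregating the `decision` field of these evidence records over time yields the allow/deny rates that feed the layer-4 dashboards and alerts.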

DATE: Feb 11, 2026
TIME: 12:00 PM-1:00 PM ET

One (1) NASBA CPE will be awarded only to participants on the live broadcast who are logged in for a minimum of 50 minutes and engage with at least three poll questions per hour of the event.



By attending this webinar, participants will be able to:

  • Translate AI governance requirements into executable policy-as-code controls for data protection, prompt safety, and model output validation.
  • Take home a "Policy-as-Code" control catalog mapped to common AI risks.
  • Implement auditable lineage and identity binding to generate verifiable evidence for AI actions and policy enforcement.
  • Apply a lineage and identity-binding pattern that makes evidence "born audit-ready."
  • Formulate a pragmatic internal audit test plan with steps and acceptance criteria for control design and operating effectiveness.

SPEAKER

Shaurya Agrawal

Shaurya Agrawal is a Data & Analytics leader with 25+ years of experience driving transformative initiatives across Tech/SaaS, E-commerce, and FinTech. With expertise in AI/ML, Enterprise Data Architecture, and BI, he has led impactful projects, creating customer-centric solutions and modernizing data platforms. As CTO of YourNxt Technologies, a mobile tech start-up, and Board Advisor to Hoonartek, Shaurya shapes global data strategies.

Holding an MBA and pursuing an MS in Data Science from UT Austin, Shaurya leverages data to unlock business value, specializing in unified customer views and personalized experiences.
