The Ruling That Changed the Conversation
On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York issued a decision in United States v. Heppner that sent shockwaves through legal, compliance, and technology circles. The court held that written exchanges between a criminal defendant and a publicly available AI platform, in this case Anthropic's Claude, were not protected by attorney-client privilege or the work product doctrine. The government was permitted to inspect every document the defendant had generated through the AI system.
Bradley Heppner, indicted on securities and wire fraud charges, had used Claude to analyze legal strategy, feed it information learned from his attorney, and prepare defense reports, all without attorney direction. He later shared those outputs with counsel. When prosecutors subpoenaed the AI-generated materials, Heppner argued they were privileged. The court disagreed on every count.
The decision turned on three findings. First, Claude is not an attorney, and communications between two non-lawyers about legal issues are not privileged. Second, Anthropic's privacy policy permits data retention and disclosure to third parties, destroying any reasonable expectation of confidentiality. Third, the purpose of the communications was not to obtain legal advice from the entity receiving them. The work product doctrine failed separately because the documents were not prepared at the direction of counsel.
Why This Matters Beyond the Courtroom
Heppner was a criminal case, but its implications reach far beyond criminal defense. Every organization that uses AI to inform strategic, financial, or operational decisions faces the same structural vulnerability: if sensitive analysis is conducted through a consumer AI platform with standard terms of service, that analysis may be discoverable in litigation, regulatory investigation, or audit proceedings.
Consider the executive who asks ChatGPT to model workforce reduction scenarios, or the HR director who uses a public AI tool to evaluate pay equity across departments, or the compliance officer who prompts an AI platform to assess regulatory exposure. Under Heppner's reasoning, none of these exchanges would be privileged if the platform's terms permit data retention and third-party disclosure. The analysis becomes a liability rather than an asset.
The Harvard Law Review's analysis of the ruling raised an additional concern: categorical exclusion of AI-generated documents from privilege creates an asymmetric disadvantage. Organizations that adopt AI broadly, and most will, face growing exposure unless they structure that adoption within a governance framework that preserves confidentiality, documents decision rationale, and maintains clear chains of authorization.
The Three Gaps Heppner Exposed
The ruling illuminated three governance gaps that exist in most enterprise AI deployments today. The first is the confidentiality gap. Consumer AI platforms operate under privacy policies that reserve broad rights to retain, process, and disclose user inputs. When employees use these tools for sensitive analysis, the loss of confidentiality is not a technical limitation. It is a contractual forfeiture. The data leaves the organization's control the moment it enters the prompt.
The second is the authorization gap. Heppner's AI use was self-directed, without attorney involvement or organizational oversight. In enterprise settings, this pattern is ubiquitous. Employees use AI tools for strategic analysis without documented authorization, without defined guardrails, and without any record of who directed the work or why. When the analysis later becomes relevant in litigation or an audit, there is no governance trail to invoke.
The third is the auditability gap. Heppner could not demonstrate that his AI-generated documents were created under attorney direction because no such documentation existed. In the same way, most organizations cannot reconstruct who used AI, what data was provided, what model produced the output, or how the output influenced a decision. Without an audit trail, privilege arguments fail and compliance postures collapse.
How Structured AI Governance Closes Each Gap
The Heppner court did not hold that AI-assisted analysis can never be privileged. It held that unstructured, unsupervised use of consumer platforms cannot be. The distinction matters enormously. Legal commentators, including analysis from Venable LLP and Morgan Lewis, have noted that enterprise AI deployments with contractual confidentiality protections, no-training provisions, and defined data retention limits present materially stronger privilege arguments.
This is the design philosophy behind platforms like DecisionLedger. Rather than leaving AI-assisted decisions to ad-hoc tool usage, a governance-first architecture ensures that every interaction with AI occurs within a framework of authorization, confidentiality, and traceability. The platform does not replace legal counsel, but it ensures that AI-assisted analysis is conducted under conditions that preserve the organization's legal protections.
Specifically, a governed AI decision platform addresses each of the three Heppner gaps. Confidentiality is maintained because the AI operates within the organization's own infrastructure. In DecisionLedger's case, that means AWS Bedrock, where data stays within the customer's VPC, is never used for model training, and is subject to enterprise data processing agreements rather than consumer terms of service. Authorization is enforced through role-based access controls, department-scoped data visibility, committee governance workflows, and approval chains that document who directed each analysis. Auditability is preserved through immutable decision records that capture every input, model, output, and rationale in a tamper-evident audit trail.
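To make that concrete, here is a minimal sketch of what a governed invocation path might look like in Python, assuming boto3 and an Anthropic model ID on AWS Bedrock. The function and field names are illustrative assumptions, not DecisionLedger's actual API; the point is that the authorization check and the governance metadata live in the same code path as the model call.

```python
import json
import boto3

# Illustrative sketch: a governed wrapper around an AWS Bedrock call.
# The Bedrock runtime client keeps requests inside the customer's AWS
# account (and VPC, when PrivateLink endpoints are configured), rather
# than sending data to a consumer service under consumer terms.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # assumed model ID

def governed_invoke(prompt: str, *, user_id: str, authorization_id: str) -> dict:
    """Invoke a model only with a documented authorization attached.

    `authorization_id` is a hypothetical pointer to an approval recorded
    before the analysis runs (counsel, decision owner, or committee).
    """
    if not authorization_id:
        raise PermissionError("AI analysis requires a documented authorization")

    response = bedrock.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    output = json.loads(response["body"].read())

    # Return the output together with the governance metadata a decision
    # platform would persist alongside it (see the audit-trail sketch below).
    return {
        "user_id": user_id,
        "authorization_id": authorization_id,
        "model_id": MODEL_ID,
        "prompt": prompt,
        "output": output,
    }
```

The design choice worth noticing is that authorization is a precondition of the call itself, not a policy checked after the fact: an ungoverned invocation simply cannot happen through this path.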
From Heppner to Best Practice: A Governance Checklist
Organizations reading the Heppner decision should not conclude that AI is too risky to use. They should conclude that unstructured AI use is too risky to tolerate. The following principles, drawn from the court's reasoning and post-ruling legal commentary, form the foundation of defensible AI governance.
First, eliminate consumer AI platforms from sensitive workflows. Any tool whose terms of service permit data retention, model training on user inputs, or disclosure to third parties is incompatible with privilege, confidentiality, and most enterprise data governance policies. Replace these tools with enterprise platforms that offer contractual no-training guarantees, defined retention limits, and enforceable confidentiality provisions.
Second, require documented authorization for AI-assisted analysis. Every significant use of AI in decision-making should be traceable to a specific authorization, whether from counsel, a decision owner, or a governance committee. This documentation is what distinguishes privileged work product from unprotected self-help, and what distinguishes governed enterprise analysis from the pattern that Heppner's case exemplifies.
Third, maintain immutable audit trails. The ability to reconstruct who provided what inputs, which model was applied, what alternatives were considered, and how the output influenced the final decision is not merely a compliance nicety. After Heppner, it is the evidentiary foundation for any privilege or work product argument involving AI-generated materials.
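"Tamper-evident" has a concrete, checkable meaning. A common technique is hash chaining, where each audit entry commits to the hash of its predecessor, so altering or deleting any past record breaks every hash that follows. The sketch below shows the core property in plain Python; the field names are illustrative, and a production system would anchor the chain externally (for example, by periodically publishing the latest hash) so an insider cannot silently rewrite the entire chain.

```python
import hashlib
import json
import time

class AuditTrail:
    """Minimal hash-chained audit trail: each entry commits to its
    predecessor, so any retroactive edit invalidates the chain."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),
            "record": record,          # inputs, model, output, rationale, ...
            "prev_hash": prev_hash,
        }
        # Canonical JSON so the hash is reproducible at verification time.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(
                {k: v for k, v in entry.items() if k != "hash"},
                sort_keys=True,
            ).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```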
Fourth, enforce data boundaries at the platform level. Access controls should ensure that employees can only query data they are authorized to see, that AI outputs are scoped to the user's department and role, and that sensitive data categories like compensation, performance reviews, and legal strategy are subject to additional protections. These controls must be architectural, not policy-based. The Heppner court was unimpressed by the defendant's subjective intent and focused on the objective structure of the platform he used.
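What "architectural, not policy-based" means in code is that the scope check sits inside the query path itself, so there is no route to the data that bypasses it. The following sketch is hypothetical: the `User` shape, category names, and query form are assumptions made for illustration, not any platform's real schema.

```python
from dataclasses import dataclass, field

SENSITIVE_CATEGORIES = {"compensation", "performance_reviews", "legal_strategy"}

@dataclass
class User:
    user_id: str
    department: str
    grants: set[str] = field(default_factory=set)

def scoped_query(user: User, category: str, department: str) -> str:
    """Return a query the platform will actually run; raise otherwise."""
    # Boundary 1: queries are confined to the user's own department.
    if department != user.department:
        raise PermissionError("query outside the user's department scope")
    # Boundary 2: sensitive categories require an explicit additional grant.
    if category in SENSITIVE_CATEGORIES and category not in user.grants:
        raise PermissionError(f"no grant for sensitive category: {category}")
    # The AI layer only ever sees data returned through this function,
    # making the control structural rather than a rule employees follow.
    return f"SELECT * FROM {category} WHERE department = '{department}'"
```

Because the boundary is enforced in code, the organization can later prove objectively what the user could and could not have accessed, which is precisely the kind of objective structure the Heppner court looked for.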
Decision Records as a Legal Asset
One of the most overlooked implications of the Heppner ruling is that structured decision records are not just an operational improvement. They are a legal asset. When an organization can produce a complete, timestamped, immutable record showing that a decision was made using a governed AI platform, under documented authorization, with defined data controls, and with a full audit trail, it demonstrates exactly the kind of structured, supervised AI use that the Heppner court distinguished from unprotected consumer usage.
DecisionLedger's architecture was designed around this principle long before the Heppner ruling. Every decision tracked through the platform captures the full lifecycle: the inputs and data sources, the analytical model applied, the scenarios and alternatives considered, the governance approvals obtained, and the outcome measured against the original projection. This is not a retroactive compliance exercise. It is the natural output of a system built for governed decision-making.
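A record with that lifecycle has a natural shape. The dataclass below is a sketch of what such a record might contain; DecisionLedger's actual schema is not public, so every field here is an assumption drawn from the lifecycle described above.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen: the record cannot be mutated once created
class DecisionRecord:
    """Illustrative full-lifecycle decision record (all fields assumed)."""
    decision_id: str
    inputs: list[str] = field(default_factory=list)        # data sources provided
    model_id: str = ""                                     # model that produced the output
    alternatives: list[str] = field(default_factory=list)  # scenarios considered
    approvals: list[str] = field(default_factory=list)     # governance sign-offs obtained
    rationale: str = ""                                    # why this option was selected
    projection: str = ""                                   # outcome projected at decision time
    outcome: str | None = None                             # measured later against projection
```

Marking the record frozen mirrors the immutability requirement: once a decision is captured, revisions become new records appended to the audit trail rather than edits to history.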
The platform's committee governance module adds an additional layer of defensibility. For high-stakes decisions, a review committee can convene, vote, attach conditions, and generate AI-powered meeting minutes, all within the audit trail. If that decision is later challenged in litigation or regulatory proceedings, the organization can produce a complete, contemporaneous record of the reasoning process rather than reconstructing it from memory and email fragments.
The Path Forward
The Heppner decision is a single ruling from a single district court, and legal scholars have already identified areas where its reasoning may be refined by future courts. The Harvard Law Review analysis argues persuasively that a functional approach, one that asks whether AI use facilitates the attorney-client relationship rather than whether the AI itself is an attorney, would better serve the privilege's foundational purpose.
But regardless of how appellate courts refine the doctrine, the structural lesson is clear. Organizations that treat AI as an ungoverned convenience tool will find their AI-generated analysis exposed in exactly the situations where confidentiality matters most. Organizations that embed AI within a governance framework, one with enterprise-grade infrastructure, documented authorization, role-based access, and immutable audit trails, will be positioned to defend their analysis under any standard the courts adopt.
The question is no longer whether to use AI in decision-making. It is whether your AI governance framework can withstand the scrutiny that Heppner has now invited. For organizations building that framework, the principles are straightforward: control where the data flows, document who authorized the analysis, preserve an immutable record of the reasoning, and ensure that every AI-assisted decision is traceable from input to outcome. These are not aspirational goals. They are the minimum requirements of defensible AI governance in a post-Heppner world.
