April 2026 AFRs and 7520 Rate
The April 2026 Section 7520 rate for use with estate planning techniques such as CRTs, CLTs, QPRTs and GRATs is 4.6%, which is 0.20% less than the March 2026 rate. The April applicable federal rate (“AFR”) for use with a sale to a defective grantor trust or intra-family loan with a note having a duration of:
- 3 years or less (the short-term rate, compounded annually) is 3.59%, which is unchanged from March;
- more than 3 years but not more than 9 years (the mid-term rate, compounded annually) is 3.82%, down from 3.93% in March; and
- more than 9 years (the long-term rate, compounded annually) is 4.62%, down from 4.72% in March.
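The relationship between these rates can be sketched with a short calculation. The Section 7520 rate equals 120% of the federal mid-term AFR, rounded to the nearest 0.2%, and a note priced at the mid-term AFR accrues interest with annual compounding. The snippet below is a minimal illustration; the $1,000,000 principal and 9-year term are hypothetical assumptions, not figures from this update.

```python
# Illustration of the April 2026 rates quoted above. The loan principal
# and term below are hypothetical assumptions for example purposes only.

MID_TERM_AFR = 0.0382  # April 2026 mid-term AFR, compounded annually

def section_7520_rate(mid_term_afr: float) -> float:
    """Section 7520 rate: 120% of the mid-term AFR, rounded to the
    nearest 0.2% (two-tenths of one percent)."""
    return round(mid_term_afr * 1.2 / 0.002) * 0.002

def balloon_repayment(principal: float, rate: float, years: int) -> float:
    """Balance due at maturity on a note accruing interest annually."""
    return principal * (1 + rate) ** years

print(f"7520 rate: {section_7520_rate(MID_TERM_AFR):.1%}")  # 4.6%

# Hypothetical 9-year, $1,000,000 balloon note at the mid-term AFR
due = balloon_repayment(1_000_000, MID_TERM_AFR, 9)
print(f"Due at maturity: ${due:,.2f}")
```

Charging at least the applicable AFR on an intra-family note avoids the loan being treated as a below-market gift loan.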
Philip G. Peterson v. Christian Community Foundation, Inc. d/b/a WaterStone, a Colorado nonprofit corporation (U.S. District Court for the District of Colorado)
This case involves WaterStone’s management of the Peterson Family Stewardship Fund, a donor-advised fund (sometimes referred to as a “DAF”) with over $21 million in charitable assets. The complaint was filed in January 2026 and the case remains unresolved. WaterStone is a Colorado-based Christian nonprofit that offers donor-advised funds as a faith-oriented alternative to secular sponsors. Donor-advised funds permit donors to contribute assets, receive immediate tax benefits, and have, quoting the Internal Revenue Code, “advisory privileges with respect to the distribution or investment of amounts held in such fund…” WaterStone promotes its mission as advancing faith through charitable giving and positions itself as a trusted steward of donor funds, committed to honoring advisors’ grant recommendations.
Philip Peterson (“Peterson”), the successor advisor to the Peterson Family Stewardship Fund created by his father in 2005, alleges that WaterStone blocked his ability to manage the fund. Since 2024, WaterStone has stopped communicating with Peterson, removed his access to account information, and declined to process grant recommendations or distribute charitable funds. As a result, for the first time in the fund’s nineteen-year history, no charitable grants were issued in 2024. The complaint includes claims for breach of contract, breach of the covenant of good faith and fair dealing, negligent misrepresentation, and related equitable claims.
This case will test the bounds of a donor’s advisory powers over a DAF and serves as a reminder that a DAF is not a charitable checking account, but an irrevocable transfer to a public charity. Depending on how the case is resolved, it could affect DAF marketing materials and related contracts that suggest any level of donor control over a DAF, such as those describing a donor’s “charitable legacy.” For estate planning, the case may clarify what happens when a successor advisor disagrees with a donor-advised fund’s policies and what rights and remedies donors have with respect to their DAFs. Finally, the case could prompt a legislative response if it highlights tension between the expectations of donors and the authority of sponsoring organizations.
In The Matter of The Niki and Darren Irrevocable Trust and The N and D Delaware Irrevocable Trust (Court of Chancery of the State of Delaware, November 19, 2025)
Claudia Elena Tesak (“Niki”) and Darren Rushin were married from 1997 to 2018. During their marriage, Niki’s mother, Ildiko Juhasz de Tesak (“Ildiko”), created a 2012 irrevocable trust for the benefit of herself, Niki, Darren, and Niki’s and Darren’s children. Ildiko was the lifetime beneficiary of the trust, and upon her death the trust was to divide into separate successor trusts for Niki and Darren. After the trust was established, Darren became dissatisfied because his share would not be funded until Ildiko’s death. He sought to add a provision that, in the event of divorce, would divide the trust assets into equal shares for him and Niki, rather than the existing 55%/45% split in Niki’s favor. In 2014, Darren retained counsel to draft a new Delaware trust incorporating these changes.
To implement the proposed changes, Darren’s attorney, Patrick Martin, used Delaware’s decanting statute, which permits a trustee to transfer trust assets to a new trust when the trustee has authority to distribute trust principal. Martin later recognized that the 2012 trust did not grant the power to distribute principal during Ildiko’s lifetime and communicated this concern to Darren. Darren nonetheless pushed for the decanting and in court filings maintained that he relied on counsel to ensure the decanting was valid and that he did not fully understand the governing provisions of the new Delaware trust. Ildiko, Niki, and Darren signed acknowledgements and “statements of non-objection or consent” to the decanting.
Niki and Darren initiated divorce proceedings in 2018. In 2019, Ildiko engaged Delaware counsel to review the 2014 Delaware trust. Ildiko’s counsel concluded that the decanting was invalid and alerted the corporate trustee involved, Comerica Bank & Trust, N.A., which then filed an action requesting instructions from the court. The court concluded that under Delaware law the decanting was an invalid “null act.” The transferred assets should therefore be viewed as never having left the original 2012 trust. The court further ordered targeted discovery to trace the assets originally transferred and return them to the original trust.
The result of this case is likely not surprising, but it is a reminder that if a decanting power does not exist within a trust instrument, a statutory decanting power is strictly construed and cannot be expanded by consent or good intentions. Further, an invalid decanting rarely unwinds cleanly with a simple “restoration” of the invaded trust; it can require complicated litigation and costly asset tracing, which underscores the need for proper execution.
The Series LLC In Florida
Effective July 1, 2026, Florida will become the 25th U.S. jurisdiction, joining states such as Delaware, Texas, Nevada, and Wyoming, to offer a Series limited liability company (“LLC”) entity structure. At that time, Series LLCs created in other jurisdictions can also register to conduct business in the state. A Series LLC can create an unlimited number of “series” beneath it, with each series treated as a separate and distinct company under the original Series LLC umbrella. This contrasts with the typical holding company model, where one entity owns and controls one or more individual subsidiary companies.
An LLC is generally regarded as providing limited liability, or a “corporate veil,” for its owners. A Series LLC also protects owners, but at its core it seeks to provide liability protection between separate business operations and assets within the same company. For example, a business that rents vehicles and wants to isolate risk among its rental assets could place each vehicle into a separate protected series. Managers, members, and officers can differ from one series to the next, and depending on the structure, a single operating agreement may apply to every series. A Series LLC can also reduce filing fees, registered agent fees, and annual report filings.
To form a Series LLC, an existing Florida LLC must vote to establish one or more “protected” series and file the necessary documents with Florida’s Division of Corporations. Upon formation, each series takes an identity separate from the members and other protected series under the “master LLC” umbrella. Each protected series will share the name of the parent and include the initials “P.S.” or “PS.”
Careful attention must be paid to corporate governance requirements and to maintaining accurate records, particularly regarding ownership of different assets by different protected series. There is also uncertainty about how a Series LLC would be treated in states that do not permit these entities. States without a Series LLC entity structure often lack robust governing rules and regulations, so there is no assurance that the structure would be respected as formed. It is important to review the applicable law in any jurisdiction where a Series LLC will operate or may be exposed to a risk of litigation.
Property Tax Reform in Florida: House Joint Resolution 203
House Joint Resolution 203 proposes an amendment to the Florida Constitution regarding property taxes on homestead property and was passed by the Florida House of Representatives on February 19, 2026. If the resolution is passed by three-fifths of the Florida Senate, it will be placed on the November statewide election ballot for Florida voters to decide. If approved by at least sixty percent of Florida voters, it would take effect January 1, 2027.
The amendment would exempt homestead property from all non-school property taxes. It would simultaneously prohibit reductions in funding for first responders at the local government level. If passed, the amendment could have significant implications for property values and may result in increased audits of homestead claims and additional scrutiny of “snowbird residency” involving a homestead property.
Two Recent Federal Privilege Rulings Related to AI Tools Have Implications for Routine Tax Advisor Arrangements
Authors: Margaret Dale, Laura Gavioli, Nolan Goldberg, Peter Cramer, Edward Wang, Amanda Nussbaum, Martin Hamilton, and Richard Corn
On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled in United States v. Heppner that documents generated through a consumer version of Anthropic’s Claude AI were not protected by the attorney-client privilege or the work-product doctrine under the circumstances presented. The decision appears to be the first to squarely address privilege and work-product claims arising from a non-lawyer’s use of an unsecured, consumer-grade, non-enterprise AI tool for “legal research,” as well as the potential consequences of inputting privileged information (provided to an individual by counsel) into an AI tool.
On the same day, a federal magistrate judge in the Eastern District of Michigan reached the opposite conclusion in the civil case Warner v. Gilbarco, Inc., denying a motion to compel discovery of a pro se plaintiff’s use of AI tools such as ChatGPT and holding that such materials are protected under the work-product doctrine despite the third-party operator potentially having access. Although the holdings seem opposite, the differing facts of each case reveal a consistent analytical approach. Taken together, these decisions highlight the fact-specific nature of privilege and work-product analysis in the AI context and underscore the importance of (1) how AI tools are used in connection with legal matters and (2) pairing the right tool with each task. Moreover, these decisions reinforce the importance of using only properly secured AI tools with confidential or privileged information, and of leaving decisions about using AI in the privileged context to those who best appreciate the risks involved: i.e., lawyers.
What Happened in Heppner?
After receiving a grand jury subpoena and retaining counsel, the criminal defendant, Heppner, used a non-enterprise, consumer version of Anthropic’s Claude to research legal issues related to the government’s investigation. Without counsel’s direction or involvement, Heppner input information he had learned from his attorneys into the AI tool, generated “reports that outlined defense strategy, that outlined what he might argue with respect to the facts and the law,” and later shared those materials with his lawyers. His defense counsel asserted attorney-client privilege and work-product protection for the AI-generated reports, arguing that Heppner had created the AI documents for the purpose of speaking with counsel to obtain legal advice. In response, the government moved for a ruling that the AI documents were protected by neither doctrine, which Judge Rakoff granted.
No reasonable expectation of confidentiality. The court noted that the tool’s terms permitted the provider — here Anthropic — to disclose user data to regulators and to use users’ prompts and outputs for model training. In other words, the terms themselves made clear that the use of this specific tool was tantamount to a disclosure to the third party that provided the tool. As a result, the court found that users lack a reasonable expectation that their inputs and outputs are confidential. While this reasoning applies broadly to standard consumer AI offerings (which generally provide fewer confidentiality protections and assurances), the decision leaves open whether enterprise-level products — particularly those that exclude user data from training and provide contractual confidentiality protections — might support a different expectation-of-confidentiality analysis. Importantly, contractual confidentiality protections alone do not automatically establish attorney-client privilege. Even where an enterprise AI product limits data use and includes confidentiality commitments, courts will still assess whether the communication was made for the purpose of obtaining legal advice and whether confidentiality was maintained in a manner sufficient to preserve privilege under governing standards.
Use of unsecured consumer AI tools may defeat privilege. The court held that discussions with a non-enterprise AI platform are legally equivalent to discussing legal issues with a third party and emphasized how the tool itself disclaimed providing any legal advice. This means that employees using consumer-grade AI tools to analyze legal exposure, assess complaints, research regulatory issues, or prepare for litigation could generate documents that adversaries can later seek to obtain. In this sense, this ruling is consistent with legal ethics opinions that raise the concern that using privileged information with certain unsecured AI tools could be considered a disclosure to the third party that operates those tools, and thus, would be inappropriate for legal work. See, e.g., American Bar Ass’n Standing Comm. on Ethics & Pro. Resp., Formal Op. 512, at 6 (July 29, 2024) (“Self-learning GAI tools […] raise the risk that information relating to one client’s representation may be disclosed improperly.”).
Lack of attorney direction undermined the work-product claim. According to the court, because Heppner conducted the AI research independently and not at counsel’s direction, the work-product doctrine did not apply. The court indicated that the analysis might differ if the AI use had been directed by counsel under a Kovel-type arrangement: “[h]ad counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.” Whether that type of arrangement would result in protection remains an open question.
In an illustration of the anthropomorphism that may happen related to AI tools, the court noted that “what matters for the attorney-client privilege is whether Heppner intended to obtain legal advice from Claude, not whether he later shared Claude’s outputs with counsel.” [emphasis in original]. Claude, however, is not a person or an attorney. Future cases likely will have to grapple with the question of how consumer AI tools meaningfully differ from other AI tools that are more specifically designed to operate in the legal arena.
What Happened in Warner?
The underlying case in Warner involves employment-related claims brought pro se by plaintiff Sohyon Warner against Gilbarco, Inc. and Vontier Corporation. During discovery, the defendants sought extensive information about the plaintiff’s use of third-party AI tools in connection with the lawsuit, including detailed questioning at the plaintiff’s deposition. Specifically, the defendants moved to compel production of “all documents and information concerning her use of third-party AI tools in connection with this lawsuit.”[1] The defendants further asked the court to overrule the plaintiff’s attorney-client privilege and work-product objections to the AI materials, or alternatively, to require a privilege log covering such items.
The Court’s Ruling
Magistrate Judge Anthony P. Patti denied the defendants’ motion to compel AI-related materials, finding that such information is not discoverable under the Federal Rules of Civil Procedure. The court’s reasoning rested on several grounds:
Work-Product Doctrine Applies to AI-Assisted Materials. The court held that even if information concerning AI use were otherwise discoverable, under these circumstances it is still subject to protection under the work-product doctrine. The court noted that the work-product doctrine expressly protects “documents and tangible things that are prepared in anticipation of litigation or for trial by another party or its representative.”[2] Because the plaintiff was a pro se litigant, she had the right to assert work-product protection over such material.
No Waiver by Using ChatGPT. The defendants argued that the plaintiff waived work-product protection by using ChatGPT (presumably in this case the free version or a version that did not insulate the inputs and outputs of the tool from OpenAI, the party that provides ChatGPT). The court rejected this argument, explaining that work-product waiver requires disclosure “to an adversary or in a way likely to get in an adversary’s hand.”[3] Significantly, the court reasoned that “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.”[4] This stands in notable contrast to Heppner, where Judge Rakoff treated the AI platform as a third party for privilege purposes based on its terms of service permitting data disclosure.
Mental Impressions Protected. The court agreed with the plaintiff’s characterization that the defendants’ motion improperly sought the plaintiff’s “internal analysis and mental impressions—i.e., her thought process—rather than any existing document or evidence, which is not discoverable as a matter of law.”[5] The court also agreed that the defendants’ theory “would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed.”[6]
Relevance and Proportionality Concerns. The court characterized defendants’ request as a “fishing expedition” that was “untethered from Rule 26 relevance.” [7] Even if marginally relevant, the court found that information about AI use was not proportional to the needs of the case under Rule 26(b)(1).
No Evidence of Protective Order Violation. The court noted that defendants had “no evidence of Plaintiff having violated the protective order by uploading documents marked confidential onto an AI platform,” further undermining the basis for the discovery request.[8]
Contrasting the Two Decisions
The Heppner and Warner decisions, issued the same day, reached opposite conclusions on the discoverability of AI-generated materials—but the factual differences are instructive. In Heppner, the criminal defendant used a consumer version of Claude AI on his own initiative, without counsel’s direction, and inputted information he had learned from his attorneys. The platform’s privacy policy permitted Anthropic to disclose user data to regulators and to use prompts and outputs for model training. Judge Rakoff found that these circumstances defeated any reasonable expectation of confidentiality and that the work-product doctrine did not apply because counsel had not directed the defendant’s AI use.
By contrast, Warner involved a pro se civil litigant who was effectively acting as her own counsel. The court found that the plaintiff’s use of ChatGPT to assist with her litigation was protected work product because she was preparing materials in anticipation of litigation. Critically, the court rejected the argument that using a generative AI tool constitutes disclosure to a third party that would, by itself, waive work-product protection, because waiver requires disclosure “to an adversary or in a way likely to get in an adversary’s hand.”[9]
What the Decisions Do Not Address
Heppner’s ruling was limited to a criminal defendant’s use of a consumer, non-enterprise AI platform without counsel’s direction and under terms permitting provider access to user data. Warner involved a pro se civil litigant using a (presumably unsecured) AI tool. Neither decision resolves several important questions.
- The decisions do not address whether use of an enterprise-tier (i.e., secured) AI product could support a different expectation-of-confidentiality analysis.
- Nor did the courts decide whether AI research conducted at the direction of counsel, for example, under a Kovel-type arrangement, or integrated into a structured legal workflow, might qualify for work-product protection.
- The question of whether these holdings would apply more broadly in civil contexts remains unanswered. The Heppner court cited United States v. Adlman, 68 F.3d 1495 (2d Cir. 1995), a Second Circuit decision dealing with protection for tax advice given by an accounting firm for a potential corporate merger. Protections for tax advice in the civil context are more robust than the regime at issue in Heppner: for example, Internal Revenue Code section 7525 extends the attorney-client privilege to accountants’ tax advice in certain noncriminal tax matters/proceedings, but not criminal ones.
Practical Implications
The fact that the Warner court found work-product protection intact does not diminish the need for careful AI governance—indeed, the court’s analysis turned on several factors that organizations can and should control. Accordingly, we recommend the following:
- Be intentional: Reasonable expectations of privacy continue to be of paramount importance when determining whether a tool is suitable for use with confidential or privileged information. Ensure that your organization is conducting proper due diligence when selecting tools and determining permissible applications. Although the Warner court characterized generative AI programs as “tools, not persons” and did not scrutinize ChatGPT’s terms of service, a different court—as Heppner demonstrates—may reach the opposite conclusion by focusing on the platform’s privacy policy and data practices. Organizations cannot rely on receiving Warner-like treatment and should assume that platform terms will be examined.
- Audit AI usage policies: Both decisions support our recommendation to audit existing AI usage policies. Confirm whether your organization permits use of consumer-grade (unsecured) AI tools and make sure that only appropriate applications are allowed—for example, those that do not involve confidential or privileged information. The Warner plaintiff prevailed in part because defendants could point to no evidence that she had uploaded confidential materials in violation of a protective order. Organizations that lack visibility into how employees use AI tools may not be so fortunate.
- Implement guardrails: As we emphasized following Heppner, restrict input of privileged, confidential, or investigation-related information into consumer AI systems absent a vetted enterprise agreement and clear internal protocols. This is what legal ethics opinions require, and the Warner decision does not change this calculus. The plaintiff in Warner prevailed because she was acting pro se—effectively as her own counsel—and was preparing materials in anticipation of her own litigation. Most organizational contexts do not present such facts. In Heppner, the defendant acted on his own initiative without counsel’s direction, which the court found fatal to the work-product claim. Privilege-related decisions should continue to be made by those who best appreciate the risks, such as counsel. Clients and third-party consultants who handle privileged information generally should use AI tools for legal assistance within the confines of the attorney-client relationship, for example, under terms of engagement or a Kovel arrangement.
- Train personnel: Both decisions strongly support a recommendation to train personnel on AI usage. Ensure employees understand the various considerations that go into determining whether a specific AI tool is appropriate for specific usage. Both Heppner and Warner underscore that courts will “uphold the protections afforded the thought processes and litigation strategies of both sides”—but only where appropriate safeguards are in place. The divergent outcomes in these two cases, issued the same day, illustrate how fact-specific these determinations are and why training matters.
Conclusion
The Heppner and Warner decisions, issued on the same day by different federal courts, demonstrate that courts are actively grappling with how traditional privilege and work-product doctrines apply to AI-generated materials. While Heppner reinforces the importance of using properly secured AI tools with confidential or privileged information and ensuring that AI use is directed by counsel, Warner suggests that not all AI-assisted litigation work will ultimately be subject to discovery. Indeed, despite the seemingly opposite outcomes, both decisions appear to rely on the same basic analytical framework. The critical factors appear to be the specific contractual and technical circumstances of the specific AI platform at issue, whether counsel is involved and/or directed the AI use, whether confidential or privileged information will be entered into the tool, and whether the materials reflect litigation strategy prepared in anticipation of litigation.
Organizations should take this opportunity to reassess AI governance frameworks and usage policies. As AI adoption in legal services continues to expand, we expect courts to continue scrutinizing how these tools intersect with privilege, confidentiality, and waiver doctrines.
Finally, it is worth noting that the Warner decision has significant access-to-justice implications, in this case leveling the field for a pro se litigant who would likely not otherwise have access to the type of secured tools that law firms typically (and should) employ.
The Proskauer team stands ready if you would like assistance reviewing AI usage policies, enterprise agreements, or privilege-protection protocols.
_______________
[1] Warner v. Gilbarco, Inc., No. 2:24-cv-12333, at 10 (E.D. Mich. Feb. 10, 2026).
[2] Id.
[3] Id. at 11.
[4] Id. at 12.
[5] Id.
[6] Id. at 12-13.
[7] Id. at 12.
[8] Id. at 10 n.3.
[9] Id. at 11.