Use of technology referred to as “artificial intelligence” is fast finding its way into many aspects of commercial life. Registered investment advisers are no exception as AI tools are already being used for screening and research, portfolio construction, trading and drafting client communications. As advisers integrate these tools into their investment processes, they face a familiar set of questions under the federal securities laws.
Though the SEC’s “predictive data analytics” (or “PDA”) proposal, which focused on the use of AI through the lens of conflicts of interest, has now been withdrawn, this does not signal a retreat from AI as a supervisory priority. Going forward, the SEC seems likely instead to evaluate use of AI through the lens of an adviser’s fiduciary duty of care, as opposed to the conflicts-focused PDA rule, which centered on an adviser’s duty of loyalty. That is, instead of scrubbing through an AI tool’s inputs to evaluate whether it could potentially consider an adviser’s interests, examiners may ask whether the adviser understood, tested and supervised the tool, how it ensured that AI-influenced decisions remained in clients’ best interests, and whether related disclosures are fair and accurate.
In December, for example, the Director of the Division of Investment Management devoted a substantial portion of a speech, only the second one given in his official capacity, to artificial intelligence, describing it as a “transformative force” that the Division wants “to enable, support, and regulate thoughtfully.” He explicitly raised questions about potential benefits and risks that could arise from advisers’ use of AI, whether an AI agent itself might need to be registered and who bears liability when AI-driven outputs are wrong or misleading. His January remarks on proxy voting similarly raised the possibility that advisers may use AI tools in proxy voting, but cautioned that any such use should “take into consideration principles of transparency, auditability, and consistency with fiduciary duties.” The Division of Examinations’ 2026 priorities likewise reiterate a focus on registrants’ use of automated investment tools, AI technologies and trading algorithms, among other things.
Many advisers have already begun to use more mainstream AI tools, such as generative pre-trained transformers, to conduct basic drafting or research that is heavily vetted by humans before being used in investment decisions. But as layers of human oversight are removed and AI tools move closer to making investment decisions autonomously (for example, agentic AI tools), additional considerations may be necessary. What follows are five practical items for advisers to consider. Not every topic (or every consideration within each topic) will be relevant to every firm, but collectively they offer a framework for considering how to adopt, or expand the use of, AI tools within an organization.
- Explainability. Investment personnel who use AI in the investment process should be able to explain, in plain terms, what a tool is designed to do, what information it relies on, its material limitations and how its output is weighed against other analysis. In practice, that means more than being able to say “the model likes this stock.” While personnel do not need to study computer science, they should generally avoid treating tools as a black box; they should have some understanding of the information an AI tool is considering and, even if they cannot explain how it works, at least understand how to identify when it is not working and the scenarios in which an AI tool’s outputs should be discounted or overridden. This is consistent with the principles-based approach to the duty of care in the SEC’s 2019 fiduciary interpretation, which focuses on how advisers actually form and monitor their advice over the course of the relationship.
- Documentation. Because AI tools can evolve quickly, documentation is critical to avoid models being used for purposes they were not designed to address. When AI tools are adopted for use in investment processes, advisers should maintain documentation of the intended use cases for the AI tool and its material features. For internally developed tools, this may need to be drafted by the relevant development team, while commercially available tools will frequently have some amount of publicly available documentation that should be reviewed and preserved by the adviser. This provides a record if the model is later repurposed or extended beyond its original scope, which is often where operational and regulatory problems emerge. Change management should be part of this documentation; material changes, such as new data sources, new use cases or changes to optimization objectives should be documented.
- Model transparency and validation. Certain AI models, particularly commercially developed closed-source AI models, are inherently more opaque than traditional quantitative investment algorithms. That opacity complicates oversight and raises questions about how the adviser satisfied its duty of care when relying on the output. Advisers adopting these tools may need to take steps to understand (as described above) the types of patterns the model is designed to detect, the controls around training and the circumstances in which output has historically failed.
Training data is another critical element. If the adviser does not know whether material non-public information or other inappropriate data has been ingested (whether inadvertently or intentionally), this could raise concerns about the adviser’s compliance with Section 204A of the Advisers Act, which requires each adviser to establish and enforce policies to prevent the misuse of MNPI, taking into account the nature of its business. It may be difficult for an adviser to demonstrate its policies take the nature of its business into consideration if they do not address a major potential vector for MNPI to affect investment decisions. For commercially developed models, this may require due diligence on the provider’s training data or contractual representations regarding the exclusion of MNPI.
- Governance. AI policies should sit within the adviser’s existing compliance framework. At many firms, especially larger firms, this may require designating a clear line of authority for use of AI tools and a framework for monitoring their use and implementation. Policies should also make clear which AI outputs the adviser treats as part of its books and records required to be maintained under Rule 204-2.
- Privacy and Data Security. The SEC’s recent amendments to Regulation S-P, as well as the Division of Examinations’ statement that it intends to review information security and data privacy during compliance examinations, underscore the need for advisers to understand how customer information, as defined in Regulation S-P, flows into and through AI tools. If a firm cannot determine what information a model ingests, how that information is transformed and whether any client data is disclosed or used for further training, it will be difficult to assess how Regulation S-P applies, which could complicate the adviser’s ability to design and implement an incident response program. This is especially salient for firms that implement AI tools with full access to all of an adviser’s records because the opacity of many AI tools, as described above, can make it difficult to determine what information has been considered.
Just as there is no “law of the horse,” there is no separate fiduciary standard that applies to AI tools. As Judge Easterbrook wrote in his article of the same name, “[b]eliefs lawyers hold about computers, and predictions they make about new technology, are highly likely to be false. This should make us hesitate to prescribe legal adaptations for” such new technologies. Adoption of AI tools should not change the core regulatory questions the compliance team asks: how do we ensure our advice is in our clients’ best interests? How do our policies and procedures ensure that our personnel comply with the relevant legal standards? How do we disclose our practices sufficiently clearly that investors can provide informed consent?
What AI does change is the speed and scale with which technology can influence firms’ investment decisions, and the opacity with which investment decisions may be made and implemented. A well-designed framework can help advisers harness the benefits of AI while preparing for the inevitable examination questions, even if the rulebook does not yet use the term “artificial intelligence.”
Read more of our Top Ten Regulatory and Litigation Risks for Private Funds in 2026.