This article is part of a monthly column that considers the significance of recent Federal Trade Commission announcements about antitrust issues. This installment examines the FTC's emerging approach to evaluating competition in artificial intelligence markets based on statements it has made in inquiries and enforcement.
The government, whether through agency activity or executive orders, continues trying to wrap its mind around what to do with AI.
The FTC has spread the message that it is "very carefully monitoring" the inputs underlying AI.[1] Couple that with President Donald Trump's December 2025 executive order establishing a national AI policy framework, and with that order's attention to the outputs and uses of AI, and the signals point to a multipronged focus.[2]
While the executive order did not alter the FTC's statutory authority, it reinforces a whole-of-government focus on AI competition, with particular attention on both the inputs that enable AI systems and the outputs those systems produce. It also directed the FTC to issue a policy statement explaining how the FTC Act's prohibition on unfair or deceptive practices applies to AI models.
In the months since the executive order, FTC leadership has continued to emphasize that competitive risks in AI markets may arise at multiple levels simultaneously. The agency considers not only who controls the resources necessary to build AI systems, but also how those systems function and yield outputs. This dual focus reflects increasing scrutiny of the AI sector, including issues of concentration, transparency and systemic effects.
Key AI Inputs in the FTC's Crosshairs
FTC leadership continues to see several inputs as top of mind in evaluating AI competition, including advanced semiconductors and specialized hardware; access to large-scale computing and cloud infrastructure; physical inputs, such as data centers and energy capacity; high-quality and often proprietary training data; and scarce, highly specialized technical labor.
In the current enforcement environment, these inputs are not viewed in isolation. Rather, the FTC is assessing whether control over any one of these resources, or a combination of them, could create bottlenecks that raise barriers to entry, enable preferential access or otherwise shape competitive outcomes across the AI ecosystem. Transactions, partnerships or contractual arrangements that combine these inputs, or restrict access to them, may draw scrutiny even in the absence of traditional product-market overlaps.
The FTC's interest in data used to train AI models deserves particular attention. As machine learning systems become more sophisticated, access to high-quality training data has emerged as a potential competitive differentiator, and, potentially, a barrier to entry. If dominant firms are able to secure exclusive or preferential access to critical data inputs, smaller competitors and new entrants may find themselves at a structural disadvantage that is difficult to overcome.
This suggests that transactions involving data assets, data-sharing arrangements or exclusive data partnerships may face increased scrutiny. It also raises questions for companies outside the M&A context: Firms that control significant data resources should consider whether their licensing or access practices could attract regulatory attention, even absent a reportable transaction.
From Inputs to Outputs
A notable evolution in the FTC's approach, particularly in the wake of the executive order, is the growing integration of input- and output-focused analysis. Policymakers and regulators are increasingly concerned not only with who controls the building blocks of AI systems, but with how those building blocks shape the behavior and effects of AI-driven outputs in the marketplace.
This shift reflects a recognition that, in AI markets, inputs and outputs are tightly coupled. Control over training data, computing resources or model architecture does not simply confer a cost or scale advantage. It can directly influence how systems perform, what they produce, and how they interact with users and competitors.
As a result, the competitive significance of inputs may be assessed in light of their downstream effects. Antitrust risk assessment must increasingly consider the full life cycle of AI systems — from the sourcing of inputs to the real-world effects of outputs. The question is no longer simply whether a firm controls a critical input, but how that control may manifest in competitive outcomes.
Mechanistic Interpretability and Its Antitrust Implications
Within this evolving framework, mechanistic interpretability is increasingly relevant as a conceptual and, potentially, practical tool for regulators. At a high level, mechanistic interpretability seeks to move beyond treating AI systems as black boxes by examining how models process inputs internally to generate outputs. It is concerned not just with what a model outputs, but with how it arrives at its conclusions, by examining the actual computations occurring within the model's layers and components.
While still developing, mechanistic interpretability aligns closely with the FTC's growing interest in the relationship between inputs, system mechanics and competitive effects.
From an antitrust perspective, mechanistic interpretability may play several roles.
First, it offers a pathway to connect inputs to competitive advantage in a more concrete way. If regulators can show how particular datasets, architectural choices or computing resources influence model behavior, they may be better positioned to assess whether control over those inputs creates durable, nonreplicable advantages. This could sharpen traditional analyses of barriers to entry by grounding them in technical realities rather than assumptions about scale or access.
Second, interpretability may inform how regulators evaluate substitutability and replicability. A central question in AI competition is whether rivals can achieve comparable performance using alternative inputs. If it becomes possible to detect the "fingerprints" of specific datasets within a trained model, this could have significant implications.
Greater visibility into model internals could help determine whether advantages attributed to proprietary data or infrastructure are in fact reproducible, or whether they reflect unique combinations of inputs that are difficult to replicate in practice. Companies that have secured exclusive access to valuable training data may find that the competitive advantages conferred by that data become more visible and more subject to scrutiny.
Third, interpretability may shape the tools used in investigations and merger review. It may also lead to increasingly technical analyses that examine how models function, what they rely on and how sensitive they are to changes in inputs.
If regulators can better understand how AI models function internally, they may be better positioned to assess claims about competitive advantages, barriers to entry and the value of certain assets. This could make enforcement more technically sophisticated, and certainly more complex. It could also place a premium on companies' ability to explain and document their systems.
Fourth, and more forward-looking, interpretability itself may emerge as a competitive input. As expectations around transparency, accountability and auditability grow, the ability to analyze and validate complex models may become essential.
If access to interpretability tools or expertise is limited, it could raise familiar concerns about concentration and competitive advantage, extending the FTC's input-focused framework into new territory. In addition, companies that lack the resources to conduct interpretability research may find themselves at a disadvantage, both in the marketplace and in regulatory proceedings.
Taken together, these developments suggest that antitrust enforcement in AI markets is gradually moving toward a more integrated model: one that considers not only who controls key inputs, but how those inputs are operationalized within systems to produce competitive outcomes. While mechanistic interpretability is still a developing field, it may reshape how regulators, competitors and the public understand competition for AI inputs and the advantages that flow from them.
Practical Takeaways
In light of these developments, there is merit in considering the following.
Examine the full life cycle of AI systems.
Consider not only how inputs are acquired, but how they influence model behavior and outputs. Increasingly, regulators may view these as interconnected.
Prepare for more technical inquiries.
Be ready to explain how systems are built and how key inputs affect performance, including at the level of model design and operation.
Monitor evolving regulatory expectations.
The intersection of competition, transparency and AI governance is becoming more pronounced, particularly in light of broader federal policy initiatives.
Looking Ahead
The FTC's continued emphasis on "very carefully monitoring" AI inputs, now reinforced by broader federal policy, signals a more expansive and technically informed approach to antitrust enforcement. Its focus on AI competition is likely to intensify as the technology continues to mature and its economic significance grows.
While enforcement priorities may shift, the underlying concerns about consolidation, access to critical inputs and barriers to entry are likely to remain relevant regardless of political context.
[1] https://www.concurrences.com/en/evenement/the-tech-antitrust-conference-6859.
Reproduced with permission. Originally published April 6, 2026, "FTC Focus: Growing Emphasis On Competition In AI," Law360.