The Algorithmic Age of Antitrust: Rethinking the Consumer Welfare Standard for Big Tech

Are we becoming a nation mined for our data and attention? And what legal limits, if any, constrain the firms that profit from this extraction economy? Today’s dominant technology firms not only collect behavioral and transactional data but also integrate, analyze, and leverage it as a source of market power. “Big Tech” refers to five firms in particular: Alphabet (Google), Meta (Facebook), Apple, Amazon, and Microsoft, whose platforms depend on massive data collection, extensive market reach, and a large customer base while offering free or low-cost services to consumers. These firms shape the information economy in ways that outpace the ability of federal courts and antitrust enforcement agencies to respond. The rapid rise of generative artificial intelligence (“AI”) amplifies these dynamics by creating new forms of market power and anticompetitive risk, particularly algorithmic price-fixing. This article examines how Big Tech’s use of AI systems interacts with existing antitrust doctrine. It argues that the consumer-welfare standard, which has governed U.S. antitrust for nearly half a century, is poorly equipped to evaluate AI-mediated conduct in digital markets.

Antitrust law comprises the body of regulation that promotes fair and open competition. It protects competitive conditions in markets, where firms compete to sell goods or services, and for consumers, who benefit from lower prices, better quality, and greater choice among goods and services. Its core provisions lie in §1 and §2 of the Sherman Act of 1890, which prohibit contracts “in restraint of trade” and make it illegal to “monopolize”; §3 of the Clayton Act, which prohibits exclusivity arrangements that may “substantially lessen competition”; and §5 of the FTC Act, which prohibits “[u]nfair methods of competition.” [1] Early-twentieth-century jurisprudence adopted a structuralist approach in which large, dominant firms were suspect because market structure itself was understood to generate anticompetitive outcomes. In Standard Oil Co. v. United States (1911), the Supreme Court announced the “rule of reason” while ordering the breakup of a horizontally integrated monopoly, one in which a single firm dominates the market by acquiring or merging with competitors that make the same goods or services. [2] Subsequent cases, including United States v. Aluminum Co. of America (1945), emphasized that monopoly power can distort market processes even without explicit exclusionary intent. [3] And United States v. E.I. du Pont de Nemours (1956) examined demand elasticity, or how easily consumers could switch to substitute products, to determine whether a firm’s market power amounted to illegal monopolization that harmed competition. [4]

This approach was challenged in the 1970s with the rise of the Chicago School, whose leading figure, Robert Bork, argued that antitrust should focus solely on consumer welfare, defined through price, output, and efficiency. [5] Federal courts embraced this reasoning in cases like Continental T.V. v. GTE Sylvania (1977), in which the Supreme Court rejected per se rules for vertical restraints and prioritized economic reasoning over structural presumptions. [6] Though this shift offered a clearer standard for antitrust judgments, it narrowed enforcement and made antitrust less capable of addressing harms that do not immediately show up in prices, including reduced privacy and diminished innovation in the Big Tech sphere.

Indeed, antitrust law has struggled to address the competitive dynamics of digital markets. Although the Sherman Act was written to prevent monopolies, modern doctrine, shaped by the consumer-welfare standard, relies on static price-output analysis. That analysis breaks down in markets where products, including the services offered by Big Tech firms, are nominally free, carrying little or no monetary price. Competition occurs through data accumulation rather than price, and market power manifests through platform control rather than traditional exclusion, such as raising prices or cutting off supply. Unlike in other sectors, consumers of Big Tech services pay with information and personal data, which fuel targeted advertising and algorithmic optimization. Traditional antitrust tools therefore cannot reliably establish consumer harm, or measure whether Big Tech firms operate consistently with the consumer-welfare standard, on the consumer side of the tech market.

AI reshapes competitive conditions in ways antitrust was not designed to evaluate. Modern AI systems rest on large-scale generative models that ingest vast amounts of data, refine themselves through techniques such as reinforcement learning, and optimize outputs in real time. Firms can make rapid predictions, optimize sensitive commercial variables, and automate strategic coordination at a speed and scale human conduct cannot match. For example, airlines increasingly use AI to set dynamic ticket prices based on competitor pricing, demand forecasts, and booking patterns. Even if these airlines never communicate directly, AI systems can produce parallel pricing strategies that resemble collusion, distort prices, and reduce competition, ultimately disadvantaging consumers. AI “reduce[s] adoption frictions” and “amplif[ies] structural coordination,” enabling firms to adopt sophisticated pricing and prediction systems at minimal cost. [7] Multiple antitrust consequences emerge from this ability, including price-fixing, collusive dangers, and the opacity of AI-assisted coordination. One consequence is the ease and speed of price manipulation. AI introduces a new modality of collusion, one that is difficult to detect under current legal frameworks. Price-fixing cases traditionally turn on proof of an agreement among competitors, but AI-driven coordination occurs through shared data inputs rather than explicit communication. As a result, §1 of the Sherman Act, which requires an agreement or conspiracy among the parties, maps poorly onto the realities of algorithmic markets.
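The shared-input mechanism described above can be illustrated with a short, purely hypothetical simulation; the recommender function, the seller names, and every number below are invented for illustration and drawn from no real system or case. Two sellers never communicate, yet because each independently adopts the recommendation of a common “hub” that pools their data, their prices immediately align and ratchet upward together:

```python
# Hypothetical sketch of algorithmic coordination via a shared data "hub".
# No agreement exists between the sellers; alignment emerges from the
# common input alone. All names and figures here are invented.

def recommend_price(pooled_signals, floor=100.0):
    """Invented 'hub' recommender: averages the pooled signals and adds
    a markup. Every subscriber receives the same recommendation."""
    avg = sum(pooled_signals) / len(pooled_signals)
    return round(floor + 0.5 * avg, 2)

def simulate(rounds=5):
    # Each seller starts with an independently chosen price.
    prices = {"seller_a": 110.0, "seller_b": 125.0}
    history = []
    for _ in range(rounds):
        # The hub ingests both sellers' current prices as pooled signals,
        # analogous to a recommender pooling non-public competitor data.
        rec = recommend_price(list(prices.values()))
        # Each seller independently adopts the recommendation.
        prices = {seller: rec for seller in prices}
        history.append(rec)
    return prices, history

prices, history = simulate()
# After one round both sellers charge the identical recommended price,
# and the shared recommendation drifts upward toward the recommender's
# fixed point, above either seller's starting price.
```

In this toy model the recommendation converges toward the supracompetitive fixed point of the rule (here, the price p satisfying p = 100 + 0.5p), even though no seller ever signals the other directly, which is the structural pattern the RealPage complaint describes.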

The ongoing United States v. RealPage, Inc. exemplifies this shift more than any other case. RealPage is a property-management software and data-analytics company serving over twenty-four million units in North America, Europe, and Asia. Its AI-powered rent-setting software for landlords and real estate companies drew concern from lawmakers about potential rent gouging and rental-market manipulation. The Department of Justice alleges that RealPage’s revenue-management software coordinated rent increases across competing landlords by ingesting non-public, sensitive competitor data, forecasting optimal rents, and recommending synchronized price changes that inflated rents. [8] According to the federal government, the software essentially functioned as a “hub-and-spoke cartel,” with RealPage as the hub and the landlords as the spokes. [9] RealPage never expressed an intent to fix prices; the algorithm performed the coordinating function. Nor is RealPage an isolated case: regulators increasingly confront AI systems that automatically adjust prices in response to competitor signals and foster tacit coordination, as in hotel-pricing litigation (e.g., Gibson v. Cendyn Group), airline pricing, and ride-share surge pricing. AI’s speed, predictive power, and adaptive learning increase the risk of market-wide supracompetitive outcomes even without explicit collusion.

Anti-competitive risks posed by AI-enabled systems become far more severe when embedded in Big Tech platform architecture. Unlike the independent firms that adopt third-party revenue-management tools, as in RealPage, Big Tech firms possess: (1) unparalleled access to and accumulation of user data across platforms and devices; (2) control over key distribution channels through digital infrastructure such as app stores, search engines, and cloud services; and (3) vertically integrated ecosystems of interoperable hardware, software, and cloud services that encourage consumer lock-in and preferential use of the firm’s own products. Within these ecosystems, the algorithmic design of search, ranking, advertising, and recommendation systems enables firms to entrench and extend market power across the digital economy.

Existing Big Tech cases illustrate the tremendous difficulty of applying the traditional consumer-welfare standard to the AI-mediated conduct of Big Tech firms. In United States v. Google (2024), Judge Mehta of the U.S. District Court for the District of Columbia concluded that Google violated §2 of the Sherman Act by maintaining a monopoly over general search through exclusionary agreements, including its default-placement contracts with Apple and Android device makers. Judge Mehta’s remedies, however, did not include breaking up Google. [10] The opinion’s significance lies in its discussion of AI and search markets. Judge Mehta recognized that the emergence of generative AI tools, specifically chatbots and AI-generated search results, threatens to reshape Google’s dominance over the search market; yet he simultaneously acknowledged the limits of judicial capacity to craft a forward-looking remedy for a changing market, observing that “gaz[ing] into a crystal ball and look[ing] to the future… [is] not exactly a judge’s forte.” [11] That the recognition of generative AI as an “immediate threat” to traditional search engines led the court to opt for data-sharing rather than structural separation implies the need for a changed standard.

In contrast, in the recent decision in Federal Trade Commission v. Meta Platforms (formerly Facebook), the FTC failed to prove that Meta holds current monopoly power in the personal social networking market. The FTC alleged that Meta maintained that power through a series of exclusionary acquisitions, specifically of Instagram and WhatsApp, and through platform restrictions that prevented third-party developers from becoming competitive threats. [12] Although the case does not center on artificial intelligence, Judge Boasberg’s rejection of the FTC’s theory illustrates how heavy the burden of proving present monopoly power is in fast-changing digital markets: the plaintiff had to show that Meta currently possesses monopoly power as a result of past acquisitions, even though the market those acquisitions shaped has since transformed. Together, Google and Meta reveal the doctrinal difficulties of determining causation and fashioning appropriate remedies in Big Tech contexts. It is hard to measure the competitive harms of these firms and their algorithms, and harder still to craft appropriate remedies in digital markets. In the context of AI, where market power is tied to data accumulation and platform architectures, these doctrinal tensions will only proliferate.

The United States’ case-by-case, ex post, consumer-welfare-based antitrust scrutiny differs greatly from the European Union’s structural, ex ante regulatory approach. The EU’s Digital Markets Act (DMA) designates certain large platforms, chiefly Big Tech firms, as “gatekeepers” and imposes proactive obligations on them, including restrictions on self-preferencing, stricter app-store rules, and heightened data-access requirements. [13] The DMA does not directly regulate artificial intelligence, but its preemptive approach more aptly reflects that digital platform dominance is not a static market outcome but a dynamic system of practices embedded in digital infrastructure, one that calls for a proactive, structural response. DMA rules constrain the very kinds of conduct at issue in FTC v. Meta that U.S. courts have deemed lawful.

EU courts and regulators have indeed begun reshaping the legal landscape. In April 2025, the European Commission found that Apple had breached DMA anti-steering rules governing app-store communications and that Meta had breached DMA data-usage rules, fining the firms €500 million and €200 million, respectively. [14] The DMA differs from the U.S. approach by limiting potentially monopolistic conduct by Big Tech firms ex ante, but this preemptive posture carries risks of dampening competition and innovation. With the EU’s recent Artificial Intelligence Act, the European regulatory approach is shifting to treat AI systems as potentially high-risk infrastructure requiring auditing and transparency to protect consumers from harm. [15] Critics, however, rightly warn that ex ante rules may delay European consumers’ access to new innovations, introduce unintended security vulnerabilities, and increase costs for consumers and gatekeepers alike. [16]

In the United States, shifting toward ex ante regulation and expanded antitrust enforcement comes with trade-offs. Broadening antitrust’s objectives risks granting judges excessive discretion to determine the extent of anticompetitive harms, raising concerns that antitrust could devolve into an all-purpose, politicized regulatory tool lacking jurisprudential and economic cohesion. Alternatively, heightened scrutiny may impose compliance costs that unintentionally chill innovation and slow the diffusion of new technologies, as may be happening in the EU.

These concerns, nevertheless, do not diminish the central challenge posed by Google, Meta, and RealPage: the consumer-welfare standard cannot adequately capture the competitive dynamics of generative AI on digital platforms. If antitrust is to remain a meaningful check on Big Tech’s market power, it must develop tools that account for algorithmic market structures, predictive systems, and the ways in which AI reconfigures both competition and harm. Without structural interventions or new legal standards, the nation risks becoming a society governed by platforms whose power derives not from prices but from the extraction of our data. To keep the digital economy transparent and competitive, the law must evolve with it.

Edited by Yusuf Arifin and Ashley Zhou


[1] Sherman Antitrust Act, 15 U.S.C. §§ 1–7; Clayton Act, 15 U.S.C. §§ 12–27; Federal Trade Commission Act, 15 U.S.C. § 45.

[2] Standard Oil Co. of New Jersey v. United States, 221 U.S. 1 (1911).

[3] United States v. Aluminum Co. of America (Alcoa), 148 F.2d 416 (2d Cir. 1945).

[4] United States v. E.I. du Pont de Nemours & Co., 351 U.S. 377 (1956).

[5] Robert H. Bork, The Antitrust Paradox: A Policy at War with Itself (New York: Basic Books, 1978).

[6] Continental T.V., Inc. v. GTE Sylvania Inc., 433 U.S. 36 (1977).

[7] Ryan Chapman, “Antitrust in the Age of AI: Is the Consumer Welfare Standard Equipped to Address the Rise of Generative Artificial Intelligence?,” American Bar Association Antitrust Law Section Newsletter, October 2024, https://www.americanbar.org/groups/antitrust_law/resources/newsletters/antitrust-age-of-ai/?abajoin=true.

[8] United States v. RealPage, Inc. (M.D.N.C. filed August 23, 2024), Complaint.

[9] RealPage, Complaint.

[10] United States v. Google, LLC, 138 Harv. L. Rev. 891 (2025). 

[11] United States v. Google LLC, No. 1:20-cv-03010 (D.D.C. Sept. 2, 2025) (remedies opinion).

[12] Federal Trade Commission v. Meta Platforms, Inc., No. 1:20-cv-03590 (D.D.C. November 18, 2025) (Boasberg, J.) (bench decision).

[13] European Union, Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector (Digital Markets Act), Official Journal of the European Union L 265, September 14, 2022.

[14] “EU Commission Adopts Proposal for AI Act: Strengthening Consumer Protection and Innovation,” European Commission, December 5, 2025, https://ec.europa.eu/commission/presscorner/detail/en/ip_25_1085.

[15] European Union, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act), Official Journal of the European Union L 168, June 13, 2024.

[16] “The EU Export of Regulatory Overreach: Implications of the Digital Markets Act,” European Centre for International Political Economy, June 2023, https://ecipe.org/publications/eu-export-of-regulatory-overreach-dma/.