USA - NATIONWIDE: An Introduction to Artificial Intelligence

Contributors: Debevoise & Plimpton LLP

As AI adoption continues to rise, companies face an increasingly complex and uncertain regulatory environment. Many existing legal frameworks must now be applied to AI use cases that were not contemplated when these laws were developed. At the same time, new AI-specific laws are proliferating, but many questions remain about how and when they will be implemented. The result is an AI regulatory landscape that is patchy, evolving, and uncertain.

Businesses typically respond to an uncertain regulatory environment with a cautious, wait-and-see approach. But the rapid pace of AI advancement demonstrates that the technology can deliver value that many businesses cannot afford to forgo. As a result, many businesses are taking calculated risks with their AI use, relying on a diverse group of internal stakeholders and external experts to evaluate potential AI use cases and assess (i) which risks are real and which are theoretical, (ii) how they can effectively mitigate real risks without significantly undermining the value of the use case, and (iii) whether the benefits of the mitigated use case outweigh the real risks.

In this practice overview, we explore recent trends, outline the AI regulatory landscape, and, based on our experience helping over 100 companies with their AI adoption over the last four years, offer practical insights for enabling responsible AI adoption.

Recent Trends and Developments with AI

The cybersecurity risks associated with AI

Federal and state regulators in the U.S. are issuing guidance to address the cyber-risks posed by AI, including the following:

    • AI-enabled social engineering: Increasingly sophisticated deepfakes are being used effectively in social engineering attacks designed to trick unsuspecting employees into sharing sensitive information or access credentials, or into transferring funds to accounts controlled by the attackers.
    • AI-enhanced cybersecurity attacks: Threat actors can use AI to amplify the potency, scale, and speed of existing types of cyber-attacks, and to quickly identify and exploit security vulnerabilities.
    • Data theft: Use of AI by companies will often involve the collection and processing of large volumes of sensitive information, providing more opportunities for attackers and creating more data, devices, and locations for companies to protect.
    • Increased third-party and supply chain risks: Cybersecurity risks from AI are compounded by reliance on third-party service providers (who are vulnerable to attacks) to provide AI tools and/or the data used to train and operate them.

Good governance can accelerate AI adoption

Many companies have discovered that, rather than slowing down AI adoption, an effective governance program can accelerate it. Good governance allows companies to:

    • share lessons from both successes and failures, and avoid repeating the same AI failures in different parts of the organization;
    • ensure that low-value, high-risk use cases are not pursued, and properly devote resources to moving high-value, low-risk use cases into production; and
    • effectively identify and mitigate unnecessary risks through training, tailored pilot programs, stress tests, and ongoing monitoring.

Managing AI vendor risk

To address the challenges of third-party AI risk management, companies are adopting structured, risk-based frameworks to assess and manage vendor relationships. Key practices include:

    • conducting diligence on AI tools and use cases, including assessing data requirements, pilot programs, best-case and worst-case scenarios, and success metrics;
    • creating checklists of risks, diligence questions, and contract terms;
    • organizing vendor risks into standard risks (ie, those that will be addressed for all AI vendor engagements) and non-standard risks (ie, those that will only need to be addressed in specific contexts);
    • identifying which risks are covered by other diligence efforts (eg, cyber, privacy) and which should be addressed through AI-specific diligence; and
    • identifying non-contractual mitigants (eg, anonymizing data or having the vendor work on the company’s systems).

Navigating web-based AI meeting tools

AI-enabled meeting tools—offering transcriptions and summaries—are becoming more useful while simultaneously introducing several legal and operational risks relating to:

    • notice and consent;
    • record keeping;
    • proliferation of discoverable materials;
    • reliance on inaccurate or incomplete summaries;
    • loss of attorney-client privilege; and
    • compliance with legal hold obligations.

To manage these risks, many companies are taking a surgical approach that evaluates specific tools and features in pilot programs for low-risk meetings. This approach also involves prescribing specific workflow requirements for how AI-generated documents are created, stored, accessed, reviewed, amended, distributed, and deleted.

Copyright and patent issues

U.S. copyright and patent law for generative AI remains in flux. The U.S. Copyright Office and the U.S. Patent and Trademark Office have generally taken the position that meaningful human involvement is needed for copyright and patent protection, although what that means in practice remains unsettled. Concerns over the use of copyrighted works and proprietary data to train AI models have led to ongoing litigation over the limits of “fair use,” as well as possible legislative action. These developments underscore a growing tension between technological innovation and the traditional framework of IP law.

Key Legislative and Regulatory Changes

Broad-based AI laws face headwinds

Proposed state laws that seek to regulate AI broadly have struggled to advance.

    • On May 17, 2024, Colorado enacted SB 24-205, the first state law to broadly regulate AI systems with the aim of preventing algorithmic discrimination. However, the governor signed the bill while expressing reservations, and efforts to rework it before it takes effect in February 2026 remain ongoing.
    • On March 14, 2025, the Texas House, which had initially been considering a broad-based AI bill similar to Colorado’s, instead took up H.B. 149, which features narrower, more targeted requirements. H.B. 149 has since advanced through both the Texas House and Senate.
    • On March 24, 2025, the governor of Virginia vetoed H.B. 2094, a broad-based AI regulation bill, over concerns that it would be unduly burdensome.

These bills and other pending state legislative actions also face the possibility of federal preemption of AI regulation at the state level, an approach reflected in a provision included in the 2025 U.S. House of Representatives tax-and-spending bill and referenced in the 2024 Bipartisan House Task Force Report on Artificial Intelligence.

Narrow AI laws are gaining traction

While broad-based AI regulatory regimes have stalled, proposals for narrow, fit-for-purpose AI laws have enjoyed greater success. For example, on May 19, 2025, President Trump signed the Take It Down Act, which criminalizes the nonconsensual disclosure of intimate images, including AI-generated deepfakes. And on March 25, 2025, Utah’s governor signed H.B. 452, which regulates mental health chatbots to address the unique risks those tools present.

Enforcement via traditional routes

Enforcement actions targeting false or misleading statements about AI use (“AI washing”) remain the dominant AI-related enforcement trend. The SEC’s new Cyber and Emerging Technologies Unit (CETU) is charged with protecting retail investors from fraud in the emerging technologies space, including fraud committed using artificial intelligence. Parallel SEC and DOJ actions brought in April 2025 against Albert Saniger, the former CEO of Nate, Inc., confirmed that federal enforcement agencies in the Trump administration will continue pursuing both civil and criminal charges against individuals for alleged misstatements or omissions concerning the use of AI.

Practical Insights

Look for high-value, low-risk use cases

AI governance programs should focus on efficiently separating high-value, low-risk use cases from low-value, high-risk use cases, so that companies can spend more time on the former and as little time as possible on the latter. To do so, companies should define criteria for quickly identifying and fast-tracking low-risk use cases, establish cross-functional teams to review higher-risk use cases, and require higher-risk use cases to have senior business sponsors.

Pursue risk mitigation, not risk elimination

AI, like cybersecurity, presents risks to businesses that cannot be fully mitigated. The challenge is to balance the risks and benefits: to ensure that the benefits are worth the risks, and that the business is taking only necessary or acceptable risks while avoiding unnecessary or unacceptable ones.

Stay in pilot longer

Some AI use cases involve risks that present themselves only gradually over time, such as quality-control failures, model drift, or loss of skills. An AI tool can function properly for several days or even weeks before problems arise. For these use cases, extending the pilot phase provides the time needed to identify and mitigate such risks before the model moves into production.

Looking Ahead

We anticipate the following developments over the next 12 months:

    • growing pressure from senior executives to deploy AI at scale;
    • tension among cyber, legal, and business units in negotiating access to large volumes of data for AI projects, including customer/client data;
    • increasing frustration with NDAs, engagement letters, and other contracts that prevent or severely limit AI use with client/customer data; and
    • narrowing amendments to, and delays of, generally applicable AI regulations.