In the roughly 235 years since the First Amendment was ratified, advanced AI has existed for only about two percent of that span, yet its implications for privacy, defamation, and fraud are already profound. In these few short years, the rise of synthetic media and “deepfakes” has prompted both the United States Congress and numerous state legislatures to enact laws aimed at limiting the harms such media can cause. Yet these laws face a constitutional challenge: the First Amendment's protection of speech extends even to content that is false, manipulative, or offensive, unless that content can be tied to a legally recognized, demonstrable harm. Deepfake regulation must therefore target demonstrable harms such as privacy invasion, defamation, and fraud, because that precision is what allows such laws to coexist with First Amendment principles. Broad or content-based bans risk being struck down as unconstitutional, but narrowly tailored rules, such as factual labeling requirements paired with harm-focused enforcement, can mitigate real-world harms without restricting protected speech.