Recent years have seen rapid advances in artificial intelligence (AI). While AI technologies bring substantial benefits in efficiency and innovation, their application in the information environment poses serious challenges.
The use of AI-generated content in elections is one of the most pressing concerns. These systems are now powerful enough to produce highly realistic synthetic content at scale. Such capabilities increase the risk of sophisticated disinformation campaigns that may be difficult for the public to distinguish from authentic information. Particularly in electoral settings, such content could influence voter perceptions and undermine trust in democratic processes. In the U.S., for instance, AI has already been used to impersonate candidates, falsely encouraging voters to skip elections or altering candidates' statements.
In Massachusetts, we’ve already started to see deepfake political ads. This month, for instance, Republican candidate for governor Brian Shortsleeve used a deepfake video of Democratic Governor Maura Healey.
In response to this risk, lawmakers in Massachusetts have introduced legislation aimed at mitigating the electoral harm associated with AI-generated content. The proposed bill seeks to establish legal safeguards designed to curb the spread and impact of deceptive synthetic media during elections.
Lawmakers in Massachusetts are discussing a bill titled the Act to Protect Against Election Misinformation. As the name suggests, the bill aims to limit the harm of materially deceptive election-related communications that contain verifiably false information regarding: the date and time of the election; the requirements, methods, and deadlines for registering to vote and for voting; any certification related to elections; or the express endorsement of a candidate or ballot initiative by a political party, elected official, nonprofit organization, or other person.
To achieve this goal, the bill prohibits, within 90 days of an election, any person, candidate, campaign committee, political action committee, political issues committee, political party, or other entity from distributing materially deceptive election-related communications. The bill also gives anyone harmed by such material the right to seek a remedy and bring an action against the entity that distributed it.
This bill, as it stands now, is strong in many ways. Unlike many states that merely require disclosure of AI-generated ads, Massachusetts addresses the issue at its core by prohibiting deceptive AI-generated ads outright. This approach leaves no room for deceiving voters who might, for any reason, mistake AI-generated content for the truth. Moreover, legislators guarded against tension with the First Amendment, and thus protected freedom of speech, by exempting satire and parody from the definition of materially deceptive election-related communications.
Opponents of this legislation offer no solid critique and tend to advocate for the use of AI technology in elections without addressing the serious risks it poses. In testimony opposing this bill, for example, the R Street Institute argues that the government is taking a heavy-handed approach and should limit itself to proactively providing facts in advance and intervening only when false information arises (presumably by fact-checking it or taking it down).
This opinion rests on three unexamined assumptions about the voting process: first, that the provided facts will reach all citizens and thereby guard them against disinformation; second, that disinformation during elections can be fully detected and corrected, and that the fact-checked content will reach voters in time; and third, that all voters will comprehend the fact-checked content and change their minds accordingly. Moreover, taking down disinformation usually takes time, by which point voters will already have been affected by the harmful content. All of these assumptions are unrealistic and will fall short when the rubber meets the road; alarmingly, the whole proposal amounts to nothing more than blindly transferring the risk to voters.
Massachusetts legislators have taken the right step to protect citizens from the serious risks posed by AI and to ensure that voters are well informed during elections, a requirement that is the cornerstone of the democratic process.
Mohamed Suliman is an AI policy researcher based in Boston. He was previously a senior disinformation researcher at the Northeastern Civic AI Lab, and he earned a degree in engineering from the University of Khartoum.