Privacy and digital rights advocates are raising alarms over a law that many would expect them to champion: a federal crackdown on revenge porn and AI-generated deepfakes.
The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images, whether real or AI-generated, and gives platforms just 48 hours to comply with a victim’s takedown request or face liability. While it has been widely praised as a long-overdue victory for victims, experts warn that its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance.
“Content moderation at scale is extremely problematic and always ends up with important and necessary speech being censored,” India McKinney, director of federal affairs at the digital rights group Electronic Frontier Foundation, told TechCrunch.
Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). The law requires takedown requests to come from victims or their representatives, but it asks only for a physical or electronic signature; no photo ID or other form of verification is required. That likely lowers the barrier for victims, but it could also create an opportunity for abuse.
“I hope I’m really wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it’s going to be consensual porn,” McKinney said.
Sen. Marsha Blackburn (R-TN), a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act. Blackburn has said she believes content related to transgender people is harmful to children. Similarly, the Heritage Foundation (the conservative think tank behind Project 2025) has said that “keeping trans content away from children is protecting kids.”
Because of the liability platforms face if they don’t remove an image within 48 hours of receiving a request, “the default is going to be that they just take it down without checking whether it’s actually NCII, whether it’s another type of protected speech, or whether it even relates to the person making the request,” McKinney said.
Snapchat and Meta both say they support the law, but neither responded to TechCrunch’s requests for details on how they will verify whether the person requesting a takedown is actually a victim.
Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean toward removal if verifying the victim proved too difficult.
Decentralized platforms like Mastodon, Bluesky, and Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks often rely on independently operated servers run by nonprofits or individuals. Under the law, the FTC can treat any platform that does not “reasonably comply” with takedown demands as committing an “unfair or deceptive act or practice,” even if the host is not a commercial entity.
“This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the agency’s power to punish platforms and services on an ideological, as opposed to principled, basis,” the Cyber Civil Rights Initiative, a nonprofit dedicated to fighting revenge porn, said in a statement.
Proactive surveillance
McKinney predicts that platforms will start moderating content before it is published, so that they have fewer problematic posts to take down in the future.
Platforms are already using AI to monitor for harmful content.
Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material (CSAM). Hive’s customers include Reddit, Giphy, Vevo, Bluesky, and BeReal.
“We were actually one of the tech companies that endorsed that bill,” Guo told TechCrunch. “It helps solve some very important problems and compels these platforms to adopt solutions more proactively.”
Hive’s model is software-as-a-service, so the startup does not control how platforms use its product to flag or remove content. But Guo said many clients insert Hive’s API at the point of upload, so content is screened before it is ever sent out to the community.
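In practice, that pattern amounts to a synchronous moderation check in the upload path. The sketch below illustrates the general approach only; the endpoint, request format, response fields, and threshold are invented placeholders, not Hive’s actual API.

```python
# A minimal sketch of upload-point moderation. The endpoint, headers, and
# response shape below are hypothetical placeholders, not Hive's real API.
import requests

MODERATION_URL = "https://api.example-moderation.com/v1/classify"  # hypothetical
API_KEY = "YOUR_API_KEY"

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the image is safe to publish, False if it should be blocked."""
    response = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"media": image_bytes},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()
    # Hypothetical response shape: a list of labels with confidence scores.
    flagged = any(
        label["class"] in {"csam", "nonconsensual_intimate_imagery", "deepfake"}
        and label["score"] >= 0.9
        for label in result.get("labels", [])
    )
    return not flagged
```

The key design point is where the check sits: because it runs before publication rather than after a report, flagged content never reaches the community at all.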
A Reddit spokesperson told TechCrunch that the platform uses “sophisticated internal tools, processes, and teams to address and remove” NCII. Reddit also partners with the nonprofit SWGfL to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes exact matches. The company did not share how it would ensure that the person requesting a takedown is the victim.
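The exact-match scanning Reddit describes can be illustrated with a simple hash lookup. This is a rough sketch under stated assumptions, not StopNCII’s implementation: systems like StopNCII rely on perceptual hashes so that re-encoded or slightly altered copies still match, whereas a cryptographic hash such as SHA-256 only catches byte-identical files.

```python
# Rough sketch of exact-match detection against a database of known NCII
# hashes. Not StopNCII's implementation: production systems use perceptual
# hashing so recompressed or cropped copies still match, while SHA-256 only
# flags byte-identical files.
import hashlib

# In practice this set would be populated from a shared hash database;
# here it is an in-memory placeholder.
known_ncii_hashes: set[str] = set()

def register_known_image(image_bytes: bytes) -> None:
    """Add a verified NCII image's hash to the blocklist."""
    known_ncii_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def is_known_ncii(image_bytes: bytes) -> bool:
    """Check live traffic: does this upload exactly match a known image?"""
    return hashlib.sha256(image_bytes).hexdigest() in known_ncii_hashes
```

A notable property of hash-based systems is that the platform never needs to store or even see the original image, only its fingerprint, which is why victims can participate without re-sharing the material itself.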
McKinney warns that this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images. She argues that this could incentivize proactive scanning of all content, even in encrypted spaces. The law includes no carve-out for end-to-end encrypted messaging services such as WhatsApp, Signal, or iMessage.
Meta, Signal, and Apple did not respond to TechCrunch’s requests for more information on their plans for encrypted messaging.
Broader free speech implications
During a joint address to Congress on March 4, Trump praised the Take It Down Act and said he looked forward to signing it into law.
“And I’m going to use that bill for myself, too, if you don’t mind,” he added. “There’s nobody who gets treated worse than I do online.”
While the audience laughed at the comment, not everyone took it as a joke. Trump has not been shy about suppressing or retaliating against unfavorable speech, including labeling mainstream media outlets “the enemy of the people.”
On Thursday, the Trump administration barred Harvard University from accepting foreign students, escalating a conflict that began after Harvard refused to comply with Trump’s demands to eliminate DEI-related content. In retaliation, Trump threatened to freeze Harvard’s federal funding and revoke the university’s tax-exempt status.
“At a time when we’re already seeing school boards try to ban books, and certain politicians being very explicit about the types of content they don’t want people to see, whether that’s critical race theory, abortion information, or information about climate change, it is deeply uncomfortable for us, given our past work on content moderation, to see content moderation at this scale openly embraced,” McKinney said.