ICE Quietly Pays $5.7M for Zignal Labs’ AI to Expand Social‑Media Surveillance; Civil‑Liberties Groups Sound Alarm
Federal records show Immigration and Customs Enforcement has contracted for access to Zignal Labs’ AI-powered monitoring platform, which can parse billions of public posts and geolocated images, a capability advocates warn could be used to flag people for deportation and chill speech online.
Washington — New federal records and reporting reveal that Immigration and Customs Enforcement has quietly purchased access to a powerful AI-driven social‑media monitoring system, a move civil‑liberties advocates say will dramatically expand the agency’s ability to track and target people online.
According to documents reported by The Lever and summarized by The Verge, ICE has committed roughly $5.7 million for use of Zignal Labs’ “real‑time intelligence” platform. The tool, procured for the agency through government IT reseller Carahsoft, is advertised as able to ingest and analyze billions of publicly available posts per day in more than 100 languages using machine learning, computer vision and optical character recognition. Zignal’s materials include examples of identifying emblems and patches in videos and of capturing geolocated images and video, capabilities that critics say could make it easier to trace a poster’s physical whereabouts.
“ICE is a lawless agency that will use AI‑driven social‑media monitoring not only to terrorize immigrant families, but also to target activists,” Will Owen, communications director at the Surveillance Technology Oversight Project, told The Digester. “This is an assault on our democracy and right to free speech, powered by the algorithm and paid for with our tax dollars.”
The Zignal platform’s reach is notable: a company pamphlet obtained by reporters states the system can process more than 8 billion posts per day. Zignal lists a range of government partners in other contexts — from the National Oceanic and Atmospheric Administration to past work with the U.S. Secret Service and Defense Department — and says its AI can produce “curated detection feeds” and send alerts to operators on the ground.
The move comes as ICE seeks to ramp up human review capacity. A separate Wired report reviewed by The Digester says the agency plans to hire nearly 30 people to comb content across Facebook, Instagram, TikTok, X and YouTube to “locate individuals who pose a danger to national security, public safety, and/or otherwise meet ICE’s law enforcement mission.” That document reportedly envisions workers searching not only for a target’s posts but for information about family members, friends or coworkers to help pinpoint locations. The same reporting notes about a dozen contractors would work in Vermont and about 16 in California, with some required to be available “at all times.”
Civil‑liberties groups warn the combination of automated flagging and human review could produce a sweeping surveillance apparatus with real consequences. “Automated and AI‑powered monitoring tools will give the government the ability to monitor social media for viewpoints it doesn’t like on a scale that was never possible with human review alone,” David Greene, civil‑liberties director at the Electronic Frontier Foundation, said. “The scale of this spying is matched by an equally massive chilling effect on free speech.”
The controversy is not purely theoretical. Advocates point to past cases — such as police use of Geofeedia to track protesters in 2016 — and more recent moves by federal agencies to widen social‑media scrutiny. The State Department has expanded requirements asking some visa applicants to list social‑media handles; the administration also initiated a so‑called “Catch and Revoke” program using automated tools to search for posts it says support designated terrorist groups. And reporting shows ICE has previously used crowd‑sourced posts to identify targets: earlier this year the agency arrested nine street vendors in New York after a conservative influencer tagged ICE in a post showing vendors on Canal Street.
Zignal Labs’ work with federal agencies and the scale promised by its marketing materials have intensified demands for transparency. “This should terrify and anger every American,” Sacha Haworth, executive director of the Tech Oversight Project, said, criticizing what she called a growing partnership between Big Tech tools and federal enforcement.
The Verge reported that it reached out to Zignal Labs for comment and did not immediately hear back. The Digester also sought comment from ICE and Carahsoft; neither had provided a substantive response by publication.
Experts and advocates are calling for clearer limits: public disclosure of the searches and detection feeds ICE will use, oversight of automated classifiers, and stronger safeguards against misuse and discriminatory targeting. Without such guardrails, critics say, the new contract risks turning public social media into a sprawling panopticon, increasing the chance that online speech, not just illicit activity, will draw the attention of immigration enforcement.