Surveillance Beyond Borders: How Courts Can Check the Power of AI Within the Immigration System
With the development of artificial intelligence (AI), what was once a technology of the distant future has become part of our daily lives. From digital assistants and chatbots to social media platforms and search algorithms, AI has expanded into our reality, and it has been implemented in fields ranging from healthcare to banking and analytics. Now, it is increasingly being deployed within the immigration system. Within the United States Department of Homeland Security (DHS), AI algorithms are used to analyze facial expressions, inspect fingerprints and faces, and monitor remote border crossings. [1] Its use has transformed how countries like the United States manage the inflow of migrants and asylum seekers. Yet the speed with which AI has been adopted within the immigration system, both in the United States and internationally, dangerously threatens the privacy rights of citizens and non-citizens alike.
Often, AI is used to streamline tasks and reduce the workload of human employees. Within the immigration system, it is no different: its use simplifies surveillance techniques within the border control system. However, implementing AI within the immigration system forgoes human rights in favor of efficiency. In a 2023 study, 5,000 images created by a generative AI model were analyzed and found to demonstrate biases specifically against women and people of color. [2] These prejudices could have a significant impact on how immigration decisions are made. Not only does AI carry inherent biases, but it also suffers from a phenomenon known as the “hallucination effect,” in which AI presents misleading or false information as though it were factual and legitimate. [3] This dangerous flaw promotes misinformation. Within the immigration system, it poses an additional threat to the accuracy of decisions made regarding an individual’s citizenship status. Despite its efficiency, AI’s flaws make it dangerous to rely on the technology within the immigration system.
An ongoing court case demonstrates some of these dangers. In Refugees International v. United States Citizenship and Immigration Services, Refugees International, an organization that represents the rights of refugees and other displaced persons, sued United States Citizenship and Immigration Services (USCIS) under the Freedom of Information Act, seeking disclosure of how the agency runs its Asylum Text Analytics (ATA) program. [4] ATA is an AI-based program that recognizes language patterns in asylum applications to detect incidents of plagiarism. In filing the case, Refugees International sought to learn how ATA’s algorithm identifies plagiarism, out of concern that the system is inherently biased against immigrants, asylum seekers, and people whose first language is not English. Many of these individuals rely on translation services to fill out their applications, potentially increasing the risk that ATA flags their writing as plagiarized. Filed in December 2024, the case is still ongoing, but it is representative of the unequal standards that AI has created within the immigration system. Unfairly biased against marginalized groups, artificial intelligence makes the already difficult immigration process even more inaccessible.
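Though ATA’s actual algorithm remains undisclosed (that opacity is what the lawsuit seeks to pierce), a minimal sketch of a common text-overlap baseline illustrates the mechanism at issue. Everything below, from the n-gram comparison to the 0.8 threshold, is an assumption for illustration, not a description of USCIS’s system:

```python
# Hypothetical sketch of pattern-based plagiarism flagging. USCIS has not
# disclosed how ATA works; every name and threshold here is illustrative.
from collections import Counter
from math import sqrt

def ngram_counts(text: str, n: int = 3) -> Counter:
    """Count overlapping word n-grams in a lowercased text."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram count vectors (0.0 to 1.0)."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norms = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norms if norms else 0.0

def flag_application(application: str, prior_filings: list[str],
                     threshold: float = 0.8) -> bool:
    """Flag an application whose phrasing closely tracks any prior filing."""
    app = ngram_counts(application)
    return any(cosine_similarity(app, ngram_counts(prior)) >= threshold
               for prior in prior_filings)
```

The bias concern follows directly from this kind of design: two applicants who independently describe different experiences but rely on the same translation service can end up with near-identical stock phrasing, and a pattern matcher of this sort would flag both filings as plagiarized.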
While some policies have recognized the dangers of AI, regulation has been lacking. In 2023, then-President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. [5] The order warned against the power of AI and stipulated that decisions made by AI should be checked by human oversight. It also encouraged transparency in AI decision-making, including access to information about the algorithms behind the technology. The executive order was followed by a White House memorandum in October 2024 that promoted the growth of AI while also warning that, “if misused, AI could threaten United States national security, bolster authoritarianism worldwide, undermine democratic institutions and processes, facilitate human rights abuses, and weaken the rules-based international order.” [6] Under the AI in Government Act of 2020, the Office of Management and Budget (OMB) is responsible for monitoring the use of AI within governmental systems and preventing discrimination or bias in its use. [7] As part of this initiative, the OMB requires AI use cases to be classified as “safety-impacting” or “rights-impacting,” signaling how the technology may affect individual safety and rights. [8] Within DHS, 27 of its 105 AI programs are currently marked as “rights-impacting.” [9] Although policies are in place to help preserve human rights within the government and the immigration system, they are not effective enough to fully protect individuals. This gap opens the possibility for AI use to be regulated through the court system.
Given these weaknesses in the policy sector, the court system has the opportunity to be at the forefront of regulating AI use. Rulings in individual cases allow the courts to build up a body of limitations on the use of AI that preserves important human rights, like privacy. The deliberate pace of the legal system is also well suited to a technology that develops and changes as rapidly as AI. Unlike policies that could restrict the powers of AI but quickly change or prove ineffective, legal precedents can better withstand change and create lasting standards of protection regarding the use of AI.
Through decisions in legal cases concerning AI’s use within immigration, the court system has the opportunity to limit AI’s power in favor of preserving human rights. Compared to policymakers or specialized agencies, the courts play a unique role in protecting individual liberties; they serve as an independent arbiter that cannot be swayed by interest groups or lobbyists more interested in the advancement of technology than in human rights. Because AI is novel and constantly developing, there is currently a lack of regulation regarding its use as a surveillance technology. Thus, in court cases involving AI-based surveillance technologies, precautions must be taken to ensure that appropriate standards are set to protect human rights and liberties alongside the growth of the technology. These decisions will influence how immigration decisions are made as AI develops, shaping how the entire immigration system is governed. By examining precedents from prior cases on the conflict between technological advancement and human rights, courts can make decisions that mitigate the power AI holds within the immigration system.
Historically, technological advancements have always challenged the court system, which must make decisions that balance human rights with the growth of technology. In 2012, the United States Supreme Court navigated this delicate balance in United States v. Jones, 565 U.S. 400 (2012). [10] The case concerned the extent of Fourth Amendment protections against GPS tracking of an individual’s vehicle. The technology had been used to monitor Antoine Jones’s location and movements over the course of a month in order to convict him of drug trafficking. The Court concluded that such invasive GPS tracking constituted a search and was not permitted under the Fourth Amendment’s limitations on searches and seizures. AI is currently being used in the immigration system to track the locations of individuals, much as GPS was used in that case. Through video surveillance, drones, and other AI detection programs, “smart borders” have been created along popular border crossings; these crossings are closely monitored, and migrants are tracked as they enter the country. However, a study by geography and migration experts found that instead of discouraging migration, smart borders only push migrants toward more dangerous routes, increasing the fatality rate of border crossings. The study found a significant correlation between the use of smart borders and the recovery of human remains in the southern Arizona desert. [11] Not only does smart-border technology invasively impede individuals’ rights to privacy, but it also increases the risks of border crossings.
In Riley v. California, 573 U.S. 373 (2014), the Supreme Court was again forced to reckon with the conflict between technology and human rights. Consolidating two separate cases, the Court considered the constitutionality of using a phone’s contents, such as images, videos, or call logs, to incriminate an individual. In both cases, police initially arrested the individuals for lesser crimes; upon confiscating and searching their phones, however, officers uncovered additional information that allowed the individuals to be charged with more serious crimes and to face longer prison sentences. Assessing the two cases, the Supreme Court decided that information collected through warrantless searches of the phones violated the individuals’ Fourth Amendment privacy rights. Chief Justice John Roberts explained that the extensive information a phone search could reveal encompasses “the privacies of life.” The Court also emphasized that while certain searches may be conducted without a warrant, they may only occur in dire situations, for instance, when a police officer’s life may be threatened. The information stored on a phone poses no such threat that would justify a warrantless search. [12]
Another case that examined the legality of using data collected from cell phones was Carpenter v. United States, 585 U.S. 296 (2018). The case considered whether the use of cell-site location information in an ongoing criminal investigation violated the privacy protections of the Fourth Amendment. The Supreme Court decided that the use of cell-site location information was an infringement on privacy rights, reasoning that because cell phones are so integrated into daily life, individuals cannot avoid generating a comprehensive record of their movements through cell-site data. Within the context of the Fourth Amendment, the Court held that this reach into personal privacy was too extreme to stand if the individual rights of citizens were to be preserved. In its decision, the Supreme Court clarified that the precedent created by the case was narrow and would not extend to other surveillance measures. [13]
However, AI has come to occupy a role in today’s world similar to that of the prevalent and unavoidable cell phone. While cell phones have been integrated into our daily lives, artificial intelligence has become increasingly intrusive, often in ways that surpass them. Currently, AI is used as a surveillance tool to recognize faces, analyze asylum and immigration applications, and screen for additional information on noncitizens, all breaches of privacy that extend far beyond the searches contemplated when the Fourth Amendment was established.
As technology changes, we must adapt our judicial system to match its growth. The privacy rights-affirming decisions of Riley v. California and Carpenter v. United States could be applied to cases regarding the use of AI as a surveillance tool, and the growth of AI must be checked in order to preserve privacy rights. Some cases have already succeeded in limiting the power of AI. One example is the restriction of Clearview AI, a facial recognition program used by law enforcement and other government agencies to identify perpetrators of crimes; initially, however, it was available to any private or public customer who paid for the service. The program scans images publicly available on the internet and assigns each individual a unique faceprint based on measurements such as the distance between the eyes and the mouth. These faceprints can then be used to identify individuals from images that users upload to the program. Because the images are publicly available, Clearview AI does not request permission to use or store individuals’ faceprints. [14]
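Public reporting describes this faceprint pipeline only at a high level; Clearview AI’s actual model is proprietary. A minimal sketch of the embedding-and-nearest-neighbor approach such descriptions imply, with toy three-dimensional vectors standing in for real embeddings of hundreds of dimensions and every name and threshold assumed for illustration, might look like this:

```python
# Illustrative sketch only; Clearview AI's actual system is proprietary.
# A "faceprint" is modeled here as a short numeric vector.
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical gallery mapping the public web page an image was scraped
# from to the faceprint computed from that image.
gallery: dict[str, list[float]] = {
    "example.com/profile/alice": [0.12, 0.84, 0.33],
    "example.org/news/bob": [0.91, 0.10, 0.47],
}

def identify(probe: list[float], max_distance: float = 0.6) -> str | None:
    """Return the source of the closest stored faceprint, if close enough."""
    best_source, best_distance = None, float("inf")
    for source, faceprint in gallery.items():
        d = dist(probe, faceprint)
        if d < best_distance:
            best_source, best_distance = source, d
    return best_source if best_distance <= max_distance else None

# An uploaded photo is reduced to a probe vector and matched against every
# stored faceprint; no consent from the matched person is ever requested.
print(identify([0.11, 0.86, 0.30]))  # -> "example.com/profile/alice"
```

This design explains why consent never enters the picture: the gallery is assembled entirely from scraped public images, and a match links an uploaded photo back to the page where the original image appeared, precisely the kind of biometric collection that laws like the Illinois statute discussed below regulate.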
The extent of Clearview AI’s powers was challenged in American Civil Liberties Union v. Clearview AI (Ill. Cir. Ct. 2021). The national organization and the Illinois branch of the American Civil Liberties Union filed the case against Clearview AI, claiming that the program threatens the privacy, anonymity, and safety of disenfranchised individuals, such as undocumented immigrants, survivors of domestic violence, sex workers, and protestors. The plaintiffs argued that Clearview AI violated the Illinois Biometric Information Privacy Act (“BIPA”), which prohibits private entities from collecting biometric information on individuals without informing them and receiving their consent. In its decision, the Circuit Court of Cook County, Illinois, found that while Clearview AI’s actions did not in themselves fall outside BIPA, the impossibility of asking permission from everyone whose biometric information the system uses to create faceprints limits what the program may lawfully do. The court also acknowledged that the information collected by Clearview AI is far more extensive than what would be collected in a traditional search of an individual. [15]
Clearview AI was at the center of another court case, Renderos v. Clearview AI, Inc. (Cal. Super. Ct. 2022). The plaintiffs argued that the program violated both their privacy rights and their First Amendment rights; Clearview responded by invoking California’s anti-SLAPP law, which is designed to protect individuals who are exercising their First Amendment rights. The Superior Court of California decided in favor of the plaintiffs, holding that Clearview’s conduct could not claim the anti-SLAPP law’s protection: because the risk of exposing individuals’ identities deters peaceful protest and other exercises of First Amendment rights, the use of AI in this case fell outside the law’s shield. [16] While the case focused mainly on the dangers this technology poses for individuals who participate in protests and do not wish to have their identities shared, it is also applicable to immigrants, noncitizens, and asylum seekers, and it highlights the dangers AI video surveillance poses for privacy rights. As video surveillance systems continue to be implemented at border crossings and airports, the risk that both citizens’ and noncitizens’ privacy rights will be violated grows. For future cases concerning similar privacy violations connected to AI video surveillance, this case demonstrates both the risks of such monitoring and a potential path to resolution.
These two cases represent successes within the court system regarding the use of AI as a surveillance tool. In both, the courts recognized the dangers AI poses to individual privacy rights, finding that the intrusive measures used by Clearview AI exceed the bounds of the privacy interests the Fourth Amendment was designed to protect. Because of the decisions in these cases, Clearview AI’s services, originally available to any public or private customer, are now accessible only to law enforcement and government agencies, per Clearview AI’s website. [17] Within DHS, the program is now used only to identify individuals involved in child sexual exploitation and abuse. [18] These limitations have severely restricted the power of Clearview AI and other facial recognition programs.
While these two court cases were successful, many more will continue to test the balance between technological advancement and human rights. The rapid development of AI presents new challenges, but the court system has historically fought to protect individual rights to privacy. As AI continues to expand into our daily lives, a growing number of legal cases will define the extent of its power. That power must be kept in check now to best preserve the rights of all; failing to do so would create a dangerous model in which technology is valued above human rights.
Edited by Begum Gokmen
[1] “United States Citizenship and Immigration Services – AI Use Cases,” U.S. Department of Homeland Security, February 24, 2025, https://www.dhs.gov/ai/use-case-inventory/uscis.
[2] “When AI Gets It Wrong: Addressing AI Hallucinations and Bias,” MIT Sloan Teaching & Learning Technologies, n.d., https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/.
[3] Keith R. Fisher, “ABA Ethics Opinion on Generative AI Offers Useful Framework,” American Bar Association, https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-october/aba-ethics-opinion-generative-ai-offers-useful-framework/.
[4] Refugees International v. United States Citizenship and Immigration Services, No. 1:24-cv-03559 (D.D.C. Dec. 20, 2024)
[5] Joseph R. Biden Jr., “Executive Order 14110 of October 30, 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October 30, 2023, https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
[6] Joseph R. Biden Jr., “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence,” The White House, October 24, 2024, https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/.
[7] “H.R.2575 - AI in Government Act of 2020,” congress.gov, September 14, 2020, https://www.congress.gov/bill/116th-congress/house-bill/2575.
[8] Shalanda D. Young, “Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” Office of Management and Budget, March 28, 2024, https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
[9] “Invisible Gatekeepers: DHS’ Growing Use of AI in Immigration Decisions,” American Immigration Council, n.d., https://www.americanimmigrationcouncil.org/blog/invisible-gatekeepers-dhs-growing-use-of-ai-in-immigration-decisions/.
[10] United States v. Jones, 565 U.S. 400 (2012)
[11] Samuel Norton Chambers, Geoffrey Alan Boyce, Sarah Launius, and Alicia Dinsmore, “Mortality, Surveillance and the Tertiary ‘Funnel Effect’ on the U.S.-Mexico Border: A Geospatial Modeling of the Geography of Deterrence,” Journal of Borderlands Studies 36, no. 3 (2021): 443–68, https://doi.org/10.1080/08865655.2019.1570861.
[12] Riley v. California, 573 U.S. 373 (2014)
[13] Carpenter v. United States, 585 U.S. 296 (2018)
[14] American Civil Liberties Union v. Clearview AI (Ill. Cir. Ct. 2021)
[15] American Civil Liberties Union v. Clearview AI (Ill. Cir. Ct. 2021)
[16] Steven Renderos v. Clearview AI, Inc., No. RG21096898 (Cal. Super. Ct. Alameda Cnty. Nov. 18, 2022)
[17] “FAQs,” Clearview AI, https://www.clearview.ai/faq.
[18] “2024 Update on DHS’s Use of Face Recognition & Face Capture Technologies,” Department of Homeland Security, January 16, 2025, https://www.dhs.gov/archive/news/2025/01/16/2024-update-dhss-use-face-recognition-face-capture-technologies.