At the Border of Europe's Surveillance State

 

Each time you cross a border, your privacy is violated.

 
Illustration by Eddie Stok for Are We Europe

 

It’s clear that a country’s laws have geographic limits. Cross a border, and your rights to healthcare, legal protection, work—even your right to stay—can be taken away. But what about at the border itself, when you’ve stepped off a plane and onto sovereign territory, but still exist in that liminal space between two passport stamps? The United States Constitution protects against warrantless searches of people and their property, yet in 2017, U.S. border guards forcibly searched the laptops and phones of ten U.S. citizens and one permanent resident as they re-entered the country.

Eventually, the American Civil Liberties Union, a non-profit watchdog that often provides legal representation to individuals, sued the U.S. government on their behalf. “Let’s get one thing clear: the government cannot use the pretext of the ‘border’ to make an end run around the Constitution,” the ACLU wrote in a statement explaining its lawsuit. “The border is not a lawless place.”

But even if not lawless, borders have historically been vulnerable places for human rights—particularly the right to privacy—as border guards extend government intrusion into our private lives in the name of national security. Now, data collection and artificial intelligence threaten to turn borders into an underregulated free-for-all. Amid the global fight against terrorism, even the European Union (a more privacy-focused corner of the world) has been leaning towards stronger surveillance at its borders through facial recognition, social media analysis, and smartphone metadata. These technologies, already dangerous enablers of a surveillance state, become even more opaque and threatening when powered by AI that the public does not fully understand.


National security interests have a way of elevating the perceived necessity of such surveillance measures. But it is imperative to understand what it means to make these tools commonplace. In the past few years, many news publications have criticized India and China for “Big Brother” surveillance systems that collect data on and track millions of people. Yet the EU stands at the precipice of following in their footsteps: it, too, is collecting biometric and other personal data from people without any meaningful measure of consent—and it is starting at the borders, in airports and other points of entry.

As the EU seeks to be a global leader in data privacy and ethical AI, its borders are where the fragility of the right to privacy in the face of new technologies is easiest to see. What do non-citizens such as refugees have to give up when they attempt to cross the EU border? Comparatively, what do citizens give up when they cross their own borders? Is their privacy actually better protected? When it comes to increased surveillance—in particular, facial recognition and smartphones—these two populations have more in common, and more to be concerned about, than they might think.

 
No one explained what they were doing with this information.

The politics and implications of collecting biometric data from refugees—things like “fingerprints, facial patterns, voice or typing cadence”—have been discussed by academics since as early as 1999, long before Face ID let you unlock your iPhone. That early debate was sparked when the EU first proposed EURODAC, a project to use fingerprinting to control border crossings by asylum seekers. Since then, collection of such data has become standard. “They took our fingerprints when we arrived on the island [in Greece],” an asylum seeker told Dragana Kaurin, a human rights researcher at Harvard’s Berkman Klein Center. “They said [that] only [the] border crossing was legal entry, and taking fingerprints was a normal procedure because we committed a criminal act.” The asylum seeker added that “no one explained what they were doing with this information.”

Beyond fingerprinting, facial recognition software has become increasingly common for refugees. Germany’s Bundesamt für Migration und Flüchtlinge (BAMF), or Federal Office for Migration and Refugees, already uses facial recognition software to detect asylum seekers who have applied more than once. The 2016 EURODAC proposal would make facial image collection obligatory for all asylum seekers. Meanwhile, the EU is funding projects that would use AI at the border for applications such as assessing the facial micro-expressions of refugees to detect lies.

In recent years, facial recognition has also become more common at airports for all travelers. In the Schengen Area, facial biometrics have been described as a “key weapon” of the new Entry/Exit System (EES), due to be operational by 2020 to keep track of all third-country nationals. In addition, Amsterdam’s Schiphol Airport, France’s Orly and Roissy airports, and Spain’s Menorca airport, among others, are launching pilot projects that use facial recognition to board passengers. In short, facial data risks becoming something that all travelers must give up in exchange for the right to leave or enter a country.

So how will all of this facial recognition data be used? In April 2019, the European Parliament voted to create the Common Identity Repository (CIR), a gigantic searchable biometrics database that will integrate border control, migration, and law enforcement data systems. The CIR will aggregate identity records and biometric data from the Schengen Information System, the EES, EURODAC, and more, to support border control and law enforcement at the EU level and among the Member States. Once up and running, the CIR will become “one of the biggest people-tracking databases in the world.”

These developments are a threat to the right to privacy, which is protected by the Universal Declaration of Human Rights. Asylum seekers and refugees tend to be given little information about how their data will be used, and are often unaware that they have rights as data subjects. Other travelers fare only somewhat better: according to the Electronic Frontier Foundation—a party to the ACLU’s lawsuit against the U.S. government over border searches—it is possible, but not easy, to opt out of facial recognition at airports.

In addition, the trade-offs of facial recognition technology include many possibilities for harm, which some groups feel disproportionately. One of the biggest risks is security breaches, or the data falling into the wrong hands. For refugees fleeing persecution and state-sanctioned violence, this may be a matter of life and death. The UNHCR’s biometric database of Rohingya refugees, for example, is being used by the Bangladeshi government to send them back to Myanmar, where they face genocide. For regular travelers, biometric data leaks increase the risk of identity theft, and are even more dangerous than password or credit card leaks, because people cannot change their faces. Recent privacy scandals around the world show that any mass collection of biometric data exposes people to a higher risk of harm. This is even more salient with facial recognition, which can be captured at a distance (unlike other forms of biometric data, such as iris scans or fingerprints).

Another potential vector for harm is AI’s inaccuracy and well-documented social bias. MIT researcher Joy Buolamwini, for example, found that facial analysis systems built by IBM, Microsoft, and Face++ (a product of the Chinese company Megvii) had error rates of up to 34.7 percent for darker-skinned women, compared with a maximum error rate of less than one percent for lighter-skinned men. This means that darker-skinned refugees or travelers would be more likely to be wrongly deported or misidentified as criminals. More advanced applications of AI to facial analysis, such as assessing the facial micro-expressions of refugees to detect lies, have also been criticized for reproducing biases that are more likely to harm minorities. In a climate where darker-skinned travelers, particularly those with Muslim-sounding names or religious or cultural clothing, already face extra scrutiny (the perils of “flying while Muslim”), facial recognition as it exists today would only perpetuate existing patterns of discrimination.

 
Modern cellphones are not just another technological convenience, but hold ‘the privacies of life’ for many.

The second frontier for privacy violations is the smartphone—the one you check, if you're an average user, 52 times per day. As Chief Justice of the U.S. Supreme Court John Roberts wrote in Riley v. California, “modern cellphones are not just another technological convenience, but hold for many Americans, ‘the privacies of life.’”

For refugees, though, smartphones are even more vital. They are a means of survival: a way to connect with their families, check for policy changes or border closures, and access other critical information on a perilous journey. Now, however, smartphone metadata and social media data are being used as potential ammunition to justify deportation. In 2017, Germany and Denmark expanded legislation to allow immigration officials to extract personal data from asylum seekers’ mobile devices; similar legislation has been proposed in Belgium and Austria; and the U.K. has permitted immigration officials to hack refugees’ phones since 2013.

Refugees across Europe are being forced to reckon with new mobile forensics technology that can extract a smartphone’s location history, messages, and even data from WhatsApp, one of the more secure apps. These surveillance measures were initially presented as antiterrorism tools, a way to check whether asylum seekers pose a threat to national security. Yet they are also being used for a secondary purpose—verifying an asylum seeker’s identity, and therefore determining whether they are eligible for international protection. This is another example of how expanding invasions of privacy are often marketed as necessary for national security and counterterrorism, yet end up serving political purposes such as the desire to stem the flow of refugees.

The U.S. has already extended the seizure of smartphones to citizens and other travelers at airports, and now requires social media identifiers from everyone who applies for a U.S. visa. Although the EU has not extended its use of social media data beyond refugees and asylum seekers, the risk remains that it will follow in the U.S.’s footsteps under the rationale of counterterrorism and protecting national security.

This trend is dangerous both to individuals and to the state of democracy in the EU. What makes social media analysis frightening—in a way that facial recognition is not—is its infringement on freedom of speech and association. Could criticizing the EU, for example, be a black mark for an asylum seeker? If a few of someone’s social media followers are suspected gang members, is that person more likely to be flagged as a security risk? Analyzing someone’s speech on social media and using it to restrict their freedom to travel has uncomfortable parallels with China, which has blocked activists from purchasing plane and train tickets based on their social credit scores.

What’s more, attempts to apply algorithmic tone and sentiment analysis to vast quantities of social media data have shown that AI’s accuracy in these applications is far from trustworthy: one recent study concluded that such analysis could accurately predict political ideology from users’ Twitter posts only 27 percent of the time. And because threats to public safety and national security are difficult to define and measure, the engineers who build these algorithms must rely on proxies, which may inadvertently reflect biases and unfounded assumptions, such as flagging religious speech as a cause for concern.

These inaccuracies are even more dangerous given the typical lack of transparency around when and how AI is used, as well as how it works. In a recent report, David Kaye, UN Special Rapporteur on the right to freedom of opinion and expression, argues that the lack of clarity about the extent and scope of artificial intelligence, together with the difficulty of scrutinizing how automated decisions are made, means that individuals will often have their expression rights adversely affected without being able to investigate or understand why, how, or on what basis. At the border, this means that as algorithms make more of the decisions about who is subjected to an invasive search, granted asylum, or approved for a visa, those critical decisions may be difficult to understand or to challenge.

People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.

In examining invasions of data privacy and the application of emerging technologies at the border, it is clear that, just as Virginia Eubanks describes in Automating Inequality, some groups of people are disproportionately targeted for digital scrutiny. As Eubanks writes, “People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.” Refugees and asylum seekers must turn over their biometric and smartphone data in their attempt to find safety in a new country. Darker-skinned travelers and Muslim travelers are subject to higher rates of misidentification, and therefore to more invasive screening.

“When a very efficient technology is deployed against a scorned out-group in the absence of strong human rights protections, there is enormous potential for atrocity,” Eubanks writes. Yet despite the numerous identifiable flaws, emerging technologies and the massive quantities of data that are being collected are deeply under-regulated, even in the EU.

Given this underregulation, perhaps the most concerning aspect of the growing use of technology at the border is the risk of continued function creep, or, in less jargon-y terms, the potential for a slippery slope. The European Data Protection Supervisor has criticized the EU’s CIR database, saying, “A central database implicitly increases the risk of abuse and more easily rouses desires to use the system beyond the purposes for which it was originally intended.” The CIR will aggregate refugee, migration, and law enforcement data, and will be used for migration management, counterterrorism, and law enforcement. By merging data originally collected for very different functions, the CIR begins to “blur the boundaries” between these activities, even though they do not all justify the same level of surveillance.

Ultimately, decisions about when to restrict the right to privacy in one’s personal data come down to necessity and proportionality: is the limitation strictly necessary, and is its justification proportionate to the scope of the intrusion? As the collection of facial images, smartphone metadata, and social media data expands along with the use of AI, the EU must implement clearer, more transparent processes for evaluating where and for what purposes these technologies can be used.

Regulators must also understand how these technologies disproportionately harm some groups, and press pause when necessary—and it is necessary—to approach this technology responsibly. If not, the EU will leave the door open for these technologies to be abused: monetized, made ever more invasive, and used to build a growing surveillance state that threatens civil liberties.


 

This article appears in Are We Europe #5: Code of Conscience



This article was produced in partnership with Humanity in Action, an international nonprofit organization that educates and connects young people who seek to become leaders on issues related to human and minority rights.

 
 
