• How the Pandemic Turned Refugees Into ‘Guinea Pigs’ for Surveillance Tech

    An interview with Dr. Petra Molnar, who spent 2020 investigating the use of drones, facial recognition, and lidar on refugees

    The coronavirus pandemic unleashed a new era in surveillance technology, and arguably no group has felt this more acutely than refugees. Even before the pandemic, refugees were subjected to contact tracing, drone and LIDAR tracking, and facial recognition en masse. Since the pandemic, it’s only gotten worse. For a microcosm of how bad the pandemic has been for refugees — both in terms of civil liberties and suffering under the virus — look no further than Greece.

    Greek refugee camps are among the largest in Europe: overcrowded, with scarce access to water, food, and basic necessities, and under constant surveillance. Researchers say that many of the surveillance techniques and technologies used to corral refugees around the world — especially experimental, rudimentary, and low-cost ones — were tested in these camps first.

    “Certain communities already marginalized, disenfranchised are being used as guinea pigs, but the concern is that all of these technologies will be rolled out against the broader population and normalized,” says Petra Molnar, associate director of the Refugee Law Lab at York University.

    Molnar traveled to the Greek refugee camps on Lesbos in 2020 as part of a fact-finding project with the advocacy group European Digital Rights (EDRi). She arrived right after the Moria camp — the largest in Europe at the time — burned down and forced the relocation of thousands of refugees. Since her visit, she has been concerned about the rise of authoritarian technology and how it might be used against the powerless.

    With the pandemic still raging and states more desperate than ever to contain it, it seemed a good time to discuss the uses and implications of surveillance in the refugee camps. Molnar, who is still in Greece and plans to continue visiting the camps once the nation’s second lockdown lifts, spoke to OneZero about the kinds of surveillance technology she saw deployed there and about what the future holds — particularly for the European Border and Coast Guard Agency, Frontex, which Molnar says has been “using Greece as a testing ground for all sorts of aerial surveillance technology.”

    This interview has been edited and condensed for clarity.

    OneZero: What kinds of surveillance practices and technologies did you see in the camps?

    Petra Molnar: I went to Lesbos in September, right after the Moria camp burned down and thousands of people were displaced and sent to a new camp. We were essentially witnessing the birth of the Kara Tepes camp, a new containment center, and talked to people there about surveillance, and about how this particular tragedy was being used as a new excuse to bring in more technology, more surveillance. The [Greek] government is… basically weaponizing Covid, using it as an excuse to lock the camps down and make it impossible to do any research.

    When you are in Lesbos, it is very clear that it is a testing ground, in the sense that the use of tech is quite rudimentary — we are not talking about thermal cameras, iris scans, anything like that — but there’s a growing appetite in the Greek government to explore its use, particularly for controlling large groups of people, including large groups arriving from the Aegean. It’s very early days for a lot of these technologies, but everything points to the fact that Greece is Europe’s testing ground.

    They are talking about bringing biometric control to the camps, but we know, for example, that the Hellenic Coast Guard has a drone that it has been using for self-promotion and propaganda, and that it is now using to follow specific people as they leave and enter the camp. I’m not sure whether the use of drones was restricted to following refugees once they left the camps; with the lockdown, it was impossible to verify. [OneZero had access to a local source who confirmed that drones are also being used inside the camps to monitor refugees during lockdown.]

    Also, people can come and go to buy things at stores, but they have to sign in and out at the gate, and we don’t know how they are going to use such data and for what purposes.

    Surveillance has been used on refugees long before the pandemic — in what ways have refugees been treated as guinea pigs for the policies and technologies we’re seeing deployed more widely now? And what are some of the worst examples of authoritarian technologies being deployed against refugees in Europe?

    The most egregious examples we’ve been seeing are the ill-fated pilot projects — A.I. lie detectors and risk scoring tools that essentially tried to use facial recognition and the micro-analysis of facial expressions to determine whether a person was more likely than others to lie at the border. Luckily, that technology was debunked and also generated a lot of debate around the ethics and human rights implications of using something like that.

    Technologies such as voice printing have been used in Germany to try to determine a person’s country of origin or ethnicity, facial recognition has made its way into the new Migration Pact, and Greece is thinking about automating the triage of refugees, so there’s an appetite at the EU level and globally to use this tech. I think 2021 will be very interesting as more resources are diverted to these types of tech.

    We saw, right when the pandemic started, that migration data used for population modeling was co-opted and used to try to model flows of Covid. And this is very problematic because it assumes that mobile populations, people on the move, and refugees are more likely to be bringing in Covid and disease — but the numbers don’t bear that out. We are also seeing the gathering of vast amounts of data for all these databases that Europe is using, or will be using, for a variety of border enforcement purposes and for policing in general.

    The concern is that fear is being weaponized around the pandemic, and technologies such as mobile tracking and data collection are being used as ways to control people. It is also broader: it feeds a discourse around migration aimed at limiting people’s right to move. Our concern is that it’ll open the door to a further, broader rollout of this kind of tech against the general population.

    What are some of the most invasive technologies you’ve seen? And are you worried these authoritarian technologies will continue to expand, and not just in refugee camps?

    In Greece, the most invasive technologies in use now would probably be drones and other unpiloted surveillance technologies, because they’re a really easy way to dehumanize the area where people are crossing, coming from Turkey to try to claim asylum. There’s also an appetite to try facial recognition technology.

    It shows just how dangerous these technologies can be: they facilitate pushbacks, border enforcement, and throwing people away. And instead of the humane response you’d hope for when you see a boat in distress in the Aegean or the Mediterranean, entities are now turning toward drones and the whole surveillance apparatus. It highlights how the humanity in this process has been lost.

    And the normalization of it all. Now it is so normal to use drones — everything is about policing Europe’s shore, with Greece as a shield, normalizing the use of invasive surveillance tech. A lot of us are worried by talk of expanding the scope of action, the mandate, and the powers of Frontex [the European Border and Coast Guard Agency], given its utter lack of accountability — it is crystal clear that entities like Frontex are going to do Europe’s dirty work.

    There’s a particular framing applied when governments and companies talk about migrants and refugees, often linking them to ISIS and using careless terms and phrases to discuss serious issues. Our concern is that this kind of use of technology is going to become more advanced and more efficient.

    What is happening with regard to contact tracing apps — have there been cases where the technology was forced on refugees?

    I’ve heard about the possibility of refugees being tracked through their phones, but I couldn’t confirm. I prefer not to interact with the state through my phone, but that’s a privilege I have, a choice I can make. If you’re living in a refugee camp your options are much more constrained. Often people in the camps feel they are compelled to give access to their phones, to give their phone numbers, etc. And then there are concerns that tracking is being done. It’s really hard to track the tracking; it is not clear what’s being done.

    Aside from contact tracing, there’s the concern with the Wi-Fi connection provided in the camps. There’s often just one connection, or one specific place where Wi-Fi works, and people need it to reach their families, spouses, and friends, or to get access to information; their phones are sometimes their only lifeline. It’s a difficult situation: on the one hand, people are worried about privacy and surveillance; on the other, you want to call your family, your spouse, and you can only do that through Wi-Fi, so people feel they need to be connected. They have to rely on what’s available, but because it’s provided by the authorities, no one knows exactly what’s being collected and how they are being watched and surveilled.

    How do we fight this surveillance creep?

    That’s the hard question. I think one of the ways that we can fight some of this is knowledge. Knowing what is happening, sharing resources among different communities, having a broader understanding of the systemic way this is playing out, and using such knowledge generated by the community itself to push for regulation and governance when it comes to these particular uses of technologies.

    We call for a moratorium on, or the abolition of, all high-risk technology in and around the border, because right now we don’t have a governance mechanism in place, or an integrated regional or international way to regulate these uses of tech.

    Meanwhile, in the EU we have the General Data Protection Regulation, a very strong tool to protect data and data sharing, but it doesn’t really touch on surveillance, automation, or A.I., so the law is really far behind.

    One of the ways to fight A.I. is to make policymakers understand the real harms these technologies cause. We are talking about the ways discrimination and inequality are reinforced by this kind of tech, and how damaging it is to people.

    We are trying to highlight a systemic approach: to see this as an interconnected system in which all of these technologies play a part in the increasingly draconian way migration management is being done.

    https://onezero.medium.com/how-the-pandemic-turned-refugees-into-guinea-pigs-for-surveillance-t

    #réfugiés #cobaye #surveillance #technologie #pandémie #covid-19 #coronavirus #LIDAR #drones #reconnaissance_faciale #Grèce #camps_de_réfugiés #Lesbos #Moria #European_Digital_Rights (#EDRi) #surveillance_aérienne #complexe_militaro-industriel #Kara_Tepes #weaponization #biométrie #IA #intelligence_artificielle #détecteurs_de_mensonges #empreinte_vocale #tri #catégorisation #données #base_de_données #contrôle #technologies_autoritaires #déshumanisation #normalisation #Frontex #wifi #internet #smartphone #frontières

    ping @isskein @karine4

    ping @etraces

  • This App Claims It Can Detect ’Trustworthiness.’ It Can’t
    https://www.vice.com/en/article/akd4bg/this-app-claims-it-can-detect-trustworthiness-it-cant

    Experts say an algorithm can’t determine whether you can be trusted by analyzing your face or voice. But that’s not stopping this company from trying. “Determine how trustworthy a person is in just one minute.” That’s the pitch from DeepScore, a Tokyo-based company that spent last week marketing its facial and voice recognition app to potential customers and investors at CES 2021. Here’s how it works: A person—seeking a business loan or coverage for health insurance, perhaps—looks into their (...)

    #algorithme #CCTV #biométrie #émotions #reconnaissance #finance #voix #HumanRightsWatch #PrivacyInternational (...)

    ##DeepScore

  • Bumble, Tinder and Match are banning accounts of Capitol rioters
    https://www.washingtonpost.com/technology/2021/01/16/siege-dating-app-bans

    Bumble, Tinder and others are freezing out rioters with help from law enforcement — and, in some cases, their own photos. Other app users have taken matters into their own hands by striking up conversations with potential rioters and relaying their information to the FBI. Tinder, Bumble and other dating apps are using images captured from inside the Capitol siege and other evidence to identify and ban rioters’ accounts, causing immediate consequences for those who participated as police move (...)

    #Match #Tinder #violence #délation #extrême-droite #SocialNetwork #FBI #biométrie #facial (...)

    ##reconnaissance

  • The FTC Forced a Misbehaving A.I. Company to Delete Its Algorithm
    https://onezero.medium.com/the-ftc-forced-a-misbehaving-a-i-company-to-delete-its-algorithm-124

    Could Google and Facebook’s algorithms be next? In 2019, an investigation by NBC News revealed that photo storage app Ever had quietly siphoned billions of its users’ photos to train facial recognition algorithms. Pictures of people’s friends and families, which they had thought were private, were in fact being used to train algorithms that Ever then sold to law enforcement and the U.S. military. Two years later, the Federal Trade Commission has now made an example of parent company (...)

    #USArmy #algorithme #biométrie #données #facial #fraude #reconnaissance #scraping #FTC

  • Artificial intelligence: #Frontex improves its maritime surveillance

    Frontex wants to use a new platform to automatically detect and assess “risks” on the seas of the European Union. Suspected irregular activities are to be displayed in a constantly updated “threat map” with the help of self-learning software.

    The EU border agency has renewed a contract with the Israeli company Windward for a “maritime analytics” platform and will put the application into regular operation. Frontex had initially procured a licence for around 800,000 euros; for a further 2.6 million euros, the agency will now receive access for four workstations. The contract can be extended three times, for one year at a time.

    Windward specialises in the digital aggregation and assessment of vessel tracking and maritime surveillance data. Investors in the company, which was founded in 2011, include former US CIA director David Petraeus and former CEOs of Thomson Reuters and British Petroleum. The former chief of staff of the Israeli military, Gabi Ashkenazi, is considered one of its advisors.

    Signature for each observed ship

    The platform is based on artificial intelligence techniques. For analysis, it uses maritime reporting systems, including position data from the AIS transponders of larger ships and weather data. These are enriched with information about the ship owners and shipping companies as well as the history of previous ship movements. For this purpose, the software queries openly accessible information from the internet.

    In this way, a “fingerprint” is created for each observed ship, which can be used to identify suspicious activities. If the captain switches off the transponder, for example, the analysis platform can recognise this as a suspicious event and take over further tracking based on the recorded patterns. It is also possible to integrate satellite images.
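
    At bottom, the transponder-off detection described here is gap detection over time-stamped AIS position reports. The short sketch below illustrates only that idea; Windward’s actual models are proprietary, and the vessel ID and the six-hour threshold are invented for the example:

        from datetime import datetime, timedelta

        # Minimal sketch: flag any silence between consecutive AIS position
        # reports longer than a threshold as a potential "dark period".
        # The threshold is an assumption for illustration, not Windward's.
        GAP_THRESHOLD = timedelta(hours=6)

        def find_dark_periods(reports):
            """reports: list of (vessel_id, datetime); returns long AIS silences."""
            by_vessel = {}
            for vessel_id, ts in reports:
                by_vessel.setdefault(vessel_id, []).append(ts)
            gaps = []
            for vessel_id, stamps in by_vessel.items():
                stamps.sort()
                for earlier, later in zip(stamps, stamps[1:]):
                    if later - earlier > GAP_THRESHOLD:
                        gaps.append((vessel_id, earlier, later))
            return gaps

        reports = [
            ("IMO-0000001", datetime(2021, 1, 10, 8, 0)),   # invented ID
            ("IMO-0000001", datetime(2021, 1, 10, 9, 0)),
            ("IMO-0000001", datetime(2021, 1, 11, 2, 0)),   # 17-hour silence
        ]
        for vessel, start, end in find_dark_periods(reports):
            print(f"{vessel}: no AIS signal from {start} to {end}")

    A real platform would, as described above, enrich such gaps with weather, ownership and route-history information before flagging anything as suspicious.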

    Windward uses the register of the International Maritime Organisation (IMO) as its database, which lists about 70,000 ships. Allegedly, however, it also processes data on a total of 400,000 watercraft, including smaller fishing boats. One of its clients is the UN Security Council, which uses the technology to monitor sanctions.

    Against “bad guys” at sea

    The company advertises its applications with the slogan “Catch the bad guys at sea”. At Frontex, the application is used to combat and prevent unwanted migration and cross-border crime as well as terrorism. Subsequently, “policy makers” and law enforcement agencies are to be informed about results. For this purpose, the “risks” found are visualised in a “threat map”.

    Windward put such a “threat map” online two years ago. At the time, the software rated the Black Sea as significantly more risky than the Mediterranean. Commercial shipping activity off Crimea was interpreted as “probable sanction evasions”. Ship owners from the British Guernsey Islands as well as Romania recorded the highest proportion of ships exhibiting “risky” behaviour, and 42 vessels were classified as suspicious of drug smuggling based on their patterns.

    Frontex “early warning” units

    The information from maritime surveillance is likely to be processed first by the “Risk Analysis Unit” (RAU) at Frontex. It is supposed to support strategic decisions taken by the headquarters in Warsaw on issues of border control, return, prevention of cross-border crime as well as threats of a “hybrid nature”. Frontex calls the applications used there “intelligence products” and “integrated data services”. Their results flow together in the “Common Integrated Risk Analysis Model” (CIRAM).

    For the operational monitoring of the situation at the EU’s external borders, the agency also maintains the “Frontex Situation Centre” (FSC). The department is supposed to provide a constantly updated picture of migration movements, if possible in real time. From these reports, Frontex produces “early warnings” and situation reports to the border authorities of the member states as well as to the Commission and the Council in Brussels.

    More surveillance capacity in Warsaw

    According to its own information, Windward’s clients include the Italian Guardia di Finanza, which is responsible for controlling Italian territorial waters. The Ministry of the Interior in Rome is also responsible for numerous EU projects aimed at improving surveillance of the central Mediterranean. For the training and equipment of the Libyan coast guard, Italy receives around 67 million euros from EU funds in three different projects. Italian coast guard authorities are also installing a surveillance system for Tunisia’s external maritime borders.

    Frontex now wants to improve its own surveillance capacities with further tenders. Together with the fisheries agency, it is awarding further contracts for manned maritime surveillance. It has been operating such a “Frontex Aerial Surveillance Service” (FASS) in the central Mediterranean since 2017 and in the Adriatic Sea since 2018. Frontex also wants to station large drones in the Mediterranean. Furthermore, it is testing aerostats in the eastern Mediterranean for a second time. These are zeppelins attached to a 1,000-metre-long line.

    https://digit.site36.net/2021/01/15/artificial-intelligence-frontex-improves-its-maritime-surveillance
    #intelligence_artificielle #surveillance #surveillance_maritime #mer #asile #migrations #réfugiés #frontières #AI #Windward #Israël #complexe_militaro-industriel #militarisation_des_frontières #David_Petraeus #Thomson_Reuters #British_Petroleum #armée_israélienne #Gabi_Ashkenazi #International_Maritime_Organisation (#IMO) #threat_map #Risk_Analysis_Unit (#RAU) #Common_Integrated_Risk_Analysis_Model (#CIRAM) #Frontex_Situation_Centre (#FSC) #Frontex_Aerial_Surveillance_Service (#FASS) #zeppelins

    ping @etraces

    • Data and new technologies, the hidden face of mobility control

      In a July 2020 report, the European agency for the operational management of large-scale IT systems (#EU-Lisa) presents artificial intelligence (AI) as one of the “priority technologies” to be developed. The report stresses the advantages of AI for migration and border matters thanks, among other things, to #facial_recognition technology.

      Artificial intelligence is increasingly favoured by public actors, EU institutions and private actors, but also by the #HCR (UNHCR) and the #OIM (IOM). EU agencies such as Frontex or EU-Lisa have been particularly active in the #experimentation with new technologies, sometimes blurring the line between trials and actual deployment. On top of the traditional tools of surveillance, a whole panoply of technologies is now deployed at Europe’s borders and beyond, whether it be new #databases, innovative financial technologies, or simply the harvesting by the #GAFAM of the data left, voluntarily or not, by migrants and refugees along their migration route.

      The #Covid-19 pandemic arrived at just the right time to give momentum to directions already taken, by making it possible to test or generalise technologies used for mobility control without the full range of exiles’ rights being taken into account. The IOM, for example, made its Displacement Tracking Matrix (#Matrice_de_suivi_des_déplacements, #DTM) available to states during this period in order to monitor “migration flows”. New technologies in the service of old obsessions…

      http://www.migreurop.org/article3021.html

      To download the report:
      www.migreurop.org/IMG/pdf/note_12_fr.pdf

      ping @karine4 @rhoumour @_kg_ @i_s_

    • The #technopolice at the borders

      How the #business of #security and #surveillance within the #European_Union, on top of flouting #fundamental_rights, uses exiled people as a research #laboratory, and does so on European #public_funds.

      We have talked a lot here in recent months about the surveillance of demonstrations or of public space in our cities, but the technopolice is above all deployed at the #borders – including here at home, at the borders of “Fortress Europe”. These #technopolice_systems are financed, supported and tested by the European Union, for the EU’s borders first, and then sold on. This border surveillance represents a colossal #market and benefits greatly from the Community scale and from its #research_and_development (#R&D) programmes such as #Horizon_2020.

      #Roborder – swarms of autonomous #drones at the borders

      That is the case of the Roborder project – a “play on words” between robot and border. Launched in 2017, it aims to surveil borders with swarms of autonomous #drones, operating and patrolling together. The #artificial_intelligence of these drones would allow them to recognise humans, to determine whether they are committing offences (such as crossing a border?) and how dangerous they are, and then to alert the #border_police. The drones can move in the air, underwater, on the water and in ground vehicles. Fitted with multiple sensors, beyond detecting criminal activity, they are meant to spot “unreliable #radio_frequencies”, that is, to listen in on #communications, and also to measure #marine_pollution.
      For now, these swarms of autonomous drones would not be armed. Roborder is currently being tested in #Greece, #Portugal and #Hungary.

      European #funding for “civilian” uses

      The project is funded to the tune of 8 million euros by the Horizon 2020 programme (itself subsidised by #Cordis, the European Commission’s R&D arm). Horizon 2020 accounts for 50% of total public funding for EU security research. Roborder is coordinated by the Centre for Research and Technology #Hellas (#CERTH) in Greece, and, as the association #Homo_Digitalis shows, the number of Horizon 2020 projects in Greece keeps growing. On top of the Greek CERTH come some 25 participants from across the EU (among them the police services of #Northern_Ireland, the Greek ministry of defence, German drone companies, etc.).

      One of the conditions for Horizon 2020 funding of such projects is that the technologies developed remain in civilian use and cannot serve military ends. That statement might look like a safeguard, but in reality the line between civilian and military use is far from clearly drawn. As Stephen Graham has shown, #technologies that start out military are very often reinjected into the security field, particularly at borders where migration is criminalised. And this porosity between security and the #military is driven by the need to find outlets that make #military_research profitable. This is what can be observed with drones or with tear gas. Here the logic is rather the reverse: potentially, the passage of so-called “civilian” #homeland_security uses to military applications, through future sales of these systems. But surveillance, the detection of people and #repression_at_the_borders can also be read as a materialisation of the #militarisation of Europe at its borders. In that case, Roborder would be a project with military ends.

      What is more, in practice, as The Intercept shows (https://theintercept.com/2019/05/11/drones-artificial-intelligence-europe-roborder), once the project is finished it is sold, without our quite knowing to whom. And, still according to the paper, many parties are already interested in Roborder.

      #IborderCtrl – #emotion detection at the borders

      If the drone swarms are impressive, other projects exist in the same vein. One can cite in particular the project named IborderCtrl, tested in Greece, Hungary and #Latvia.

      It notably consists of #emotion_analysis (alongside other #biometric_recognition projects): people wishing to cross a border must answer a series of questions while their #face is run through an #algorithm that determines whether or not they are lying. The project claims to “speed up #border_control”: if the #lie_detector judges that a person is telling the truth, they are given a code to pass the checkpoint easily; if the algorithm considers that the person is lying, they are sent to a second queue, to border guards who subject them to an #interrogation. The emotion analysis purports to rest on an examination of “38 #micro-movements” of the face, such as the angle of the head or the movement of the eyes. A display of pseudoscientific gadgetry whose main effect is to lend an appearance of #technological_neutrality to policies of #exclusion and #dehumanisation.
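
      Stripped of its pseudoscientific wrapping, the decision flow the project claims to implement reduces to thresholding a face-analysis score into a fast lane or an interrogation queue. The toy sketch below mirrors only that claimed flow; the real model, its features and its threshold are not public, and the validity of scoring “38 micro-movements” is precisely what is contested:

          # A deliberately toy rendering of the triage flow IborderCtrl claims
          # to implement -- not the real system. The cutoff is invented.
          DECEPTION_CUTOFF = 0.5

          def triage(micro_movement_scores):
              """Map the 38 advertised per-movement scores to a lane."""
              assert len(micro_movement_scores) == 38
              risk = sum(micro_movement_scores) / len(micro_movement_scores)
              if risk < DECEPTION_CUTOFF:
                  return "fast lane: traveller receives a crossing code"
              return "second queue: referred to border guards for interrogation"

          print(triage([0.2] * 38))  # -> fast lane
          print(triage([0.8] * 38))  # -> second queue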

      This project, too, was funded by Horizon 2020, to the tune of 4.5 million euros. While it seems to have been halted, the German MEP Patrick Breyer has brought a case before the Court of Justice of the European Union to obtain more information about it, which was refused on grounds of… #trade_secrets. Here again, the “civilian” rather than “military” scope of the project is far from acting as a safeguard.

      Conclusion

      The European Union thus plays an active part in the powerful market of surveillance and repression. Here, borders and exiled people are used as laboratory resources, in a perspective of ever-greater militarisation of the borders of Fortress Europe and of profit-seeking and growth for European companies and research centres. The borders constitute a new market and a new financial windfall for the technopolice.

      The figures, moreover, show the explosion of the budget of the European agency #Frontex (from 137 million euros in 2015 to 322 million euros in 2020, according to the European Court of Auditors) and an ever-greater automation of border surveillance. In parallel, the ratio between the number of people who attempt to cross the Mediterranean and the number who lose their lives there keeps rising. This automation of border surveillance is thus just one more way for the European authorities to deepen the tragedy that continues to play out in the Mediterranean, for an “efficiency” that ultimately benefits only the surveillance industries.

      In our streets as at our borders, we must refuse the Technopolice and fight it every step of the way!

      https://technopolice.fr/blog/la-technopolice-aux-frontieres

    • Artificial Intelligence-based capabilities for European Border and Coast Guard

      In 2019, Frontex, the European Border and Coast Guard Agency, commissioned #RAND Europe to carry out an artificial intelligence (AI) research study.

      The purpose of the study was to provide an overview of the main opportunities, challenges and requirements for the adoption of AI-based capabilities in border management. Frontex’s intent was also to find synergies with ongoing AI studies and initiatives in the EU and contribute to a Europe-wide AI landscape by adding the border security dimension.

      Some of the analysed technologies included automated border control, object recognition to detect suspicious vehicles or cargo and the use of geospatial data analytics for operational awareness and threat detection.

      As part of the study, RAND provided Frontex in 2020 with a comprehensive report and an executive summary with conclusions and recommendations.

      The findings will support Frontex in shaping the future landscape of AI-based capabilities for Integrated Border Management, including AI-related research and innovation projects which could be initiated by Frontex (e.g. under #EU_Innovation_Hub) or recommended to be conducted under the EU Research and Innovation Programme (#Horizon_Europe).

      https://frontex.europa.eu/media-centre/news/news-release/artificial-intelligence-based-capabilities-for-european-border-and-co

    • For refugees, #biometrics every step of the way

      Beyond the walls that have been going up at the world’s borders since the 1990s, refugees, migrants and asylum seekers are increasingly confronted with the expansion of #biometric_databases. A “#virtual_wall” is thus spreading outside, at, and inside the borders of the Schengen area, built around programmes and #databases.

      Refugees who pay with their #iris scans, migrants identified by their #fingerprints, #facial_recognition sensors, and even #emotion recognition… Gathered under the banner of the “#smart_border”, these #technological_devices, which rest on #anticipation, #identification and the #automation of crossing the #border by means of biometric databases, are meant to sort travellers, easing the journey of some and blocking that of others.

      The European Union thus has a battery of databases that supplement checks at the borders. Since 2011, a dedicated agency, the European Agency for the Operational Management of Large-Scale IT Systems (#EU-Lisa), has been tasked with designing and developing, in partnership with private companies, the tracking of asylum seekers.

      It manages several databases compiling #biometric_data. One of them, the #Entry_and_Exit_System (#EES), is due to be deployed in 2022, at an estimated cost of 480 million euros. The EES is to collect up to 400 million records on non-European nationals crossing the borders of the Schengen area, in order to detect in real time any overstay of the legal #visa duration. Where a prolonged stay has become illegal, an alert will be sent to all European police forces.
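
      The check the EES is described as automating is, in essence, date arithmetic over entry/exit records. A minimal sketch of that logic, assuming an invented record format and the standard 90-day Schengen short-stay limit:

          from datetime import date, timedelta

          SHORT_STAY = timedelta(days=90)  # standard Schengen short-stay limit

          def overstay_alerts(entries, today):
              """Flag travellers still present past their authorised stay."""
              alerts = []
              for rec in entries:
                  deadline = rec["entry_date"] + SHORT_STAY
                  if rec["exit_date"] is None and today > deadline:
                      alerts.append(f"alert: {rec['person_id']} overstaying since {deadline}")
              return alerts

          # Invented record, for illustration only.
          print(overstay_alerts(
              [{"person_id": "X123", "entry_date": date(2022, 1, 5), "exit_date": None}],
              today=date(2022, 5, 1),
          ))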

      Burning your fingers so as not to be registered

      EU-Lisa also manages the #Eurodac file, which records the fingerprints of every asylum seeker in the European Union. Used to apply the Dublin III Regulation (#règlement_Dublin III), under which an asylum claim is lodged and processed in the European country where the migrant was first registered, it gives rise to strategies of #resistance.

      “We have seen migrants refuse to give their fingerprints on arrival in Greece, or even burn their fingers so as not to be registered in Eurodac,” recalls Damien Simonneau, a researcher at the Institut Convergences Migrations of the Collège de France. “They know that if they have family in Germany, say, but were registered in Greece, they will be sent back to Greece for their claim to be processed there, which has enormous consequences for their lives.” The examination procedure indeed takes 12 to 18 months on average.

      The collection of biometric data thus punctuates migration routes, from the countries of departure to movements within the European Union, with an aim of limitation and #control. To fight “cross-border crime” and “illegal immigration”, the #Eurosur border-zone surveillance system makes it possible, through real-time information sharing, to intercept people attempting to reach the European Union before they arrive.

      Checks in the countries of departure

      For the Transnational Institute, author with the think tank Stop Wapenhandel and the Centre Delàs of several studies on borders, the use of these databases reflects a clear strategy on the European Union’s part. “One of the objectives of the expansion of #virtual_borders,” they write in the report Building Walls (https://www.tni.org/files/publication-downloads/building_walls_-_full_report_-_english.pdf), published in 2018, “is to intercept refugees and migrants before they even reach European borders, so as not to have to deal with them.”

      While these techniques make it possible to pre-sort applications so as to smooth border crossings, speeding up authorised movements, they can also, according to Damien Simonneau, have perverse effects. “The use of these mechanisms rests on the idea that #technology is a facilitator, and it is true that the #automation of certain procedures can ease the movements of people authorised to cross borders,” he notes. “But technologies are fallible, and can produce #discrimination.”

      These #virtual_techniques, with very real consequences, thus upend the relationship to the border and the routes of migration. “The migrant is confronted with multiple ‘border’ points, scattered more or less everywhere,” Damien Simonneau analyses. “This creates additional #obstacles on migration routes: control is almost no longer tied to the crossing of a national border; it is deterritorialised and can take place anywhere, upstream as well as downstream of the state’s border.”

      Thus the European Union’s “#externalisation policy” allows migration control to be exercised in the countries of departure. The European “#SIV” programme, for example, collects the biometric data linked to #visa_applications in #consulates as soon as the applications are lodged.

      More still, the European Union delegates part of the management of its borders to other countries: “In some Sahel states,” Damien Simonneau explains, “humanitarian and development aid is made conditional on improving border controls.”

      A programme of the International Organization for Migration (IOM), the EU-funded #MIDAS programme, is thus used by 23 countries, mostly in Africa but also in Asia and the Americas. Its purpose is to “collect, process, store and analyse [biometric and biographical] traveller information in real time” to help local police control their borders. But according to the Migreurop network, these data can also be passed on to European police agencies. The EU thus exercises a right of oversight, via Frontex, over the migration data information and analysis system installed at Makalondi, in Niger.

      Refugees who pay with their eyes

      A blurring of roles between humanitarian organisations and states, between protection, logistics and surveillance, which is also found in #refugee_camps. In the Jordanian camps of #Zaatari and #Azraq, for example, near the Syrian border, refugees have been paying for their food with their irises since 2016.

      The #humanitarian_food_aid distributed by the World Food Programme (WFP) is in fact paid into an account linked to their biometric data. They need only pass their eyes in front of a scanner to settle their purchases. A practice that greatly eases the #logistics of running the camp for the #HCR (UNHCR) and the WFP, by allowing the #traceability of transactions and preventing fraud and theft.

      But according to Léa Macias, an anthropologist at the EHESS, it also has drawbacks. “While paying with one’s eyes may reassure some refugees, insofar as it protects them against theft,” she explains, “the procedure is also experienced as a form of #violence. Refugees are well aware that nobody else in the world, in a normal situation, pays with their #body like this.”

      The danger of data leaks

      The researcher is also worried about what becomes of the data collected in this way, and asks what interest the refugees themselves have in the process. “Humanitarians are pushed to use these new technologies,” she says, “which funders see as a token of reliability. But this #technologisation is not always in the refugees’ interest. If the databases leak or are hacked, it even exposes them to danger.”

      A report by Human Rights Watch (HRW) (https://www.hrw.org/news/2021/06/15/un-shared-rohingya-data-without-informed-consent), published on Tuesday 15 June, thus sounds the alarm over #biometric_data_transfers concerning #Rohingya refugees in Bangladesh. These data, collected by the UN High Commissioner for Refugees (UNHCR), were handed by the government of Bangladesh to the Burmese state. The UNHCR has responded (https://www.unhcr.org/en-us/news/press/2021/6/60c85a7b4/news-comment-statement-refugee-registration-data-collection-bangladesh.html) that the people concerned had consented to this #data_transfer in view of a possible return to Burma, but nothing guarantees that they will be well received if their name “beeps” as they cross the border.

      https://www.rfi.fr/fr/technologies/20210620-pour-les-r%C3%A9fugi%C3%A9s-la-biom%C3%A9trie-tout-au-long-du-chemin

      #smart_borders #tri #catégorisation #déterritorialisation #réfugiés_rohingyas

      –---

      On the fingers burned to avoid identification by fingerprint, see the scene in Sylvain George’s film Qu’ils reposent en paix, of which I wrote a short review:

      A tragic moment: what one migrant calls the “prayer”. The collective moment when migrants try to make their fingerprints disappear. A symbolic step in which they strip themselves of their own identity.

      https://visionscarto.net/a-calais-l-etat-ne-peut-dissoudre

  • Vanessa Codaccioni: “The state pushes us to act like the police”
    https://reporterre.net/Vanessa-Codaccioni-L-Etat-nous-pousse-a-agir-comme-la-police

    Promoting the surveillance of all by all: that is what the state wants, as Vanessa Codaccioni explains in her latest book, La société de vigilance. And on top of calling on citizens to inform on one another, it watches them ever more closely by strengthening police powers, as the “global security” law illustrates. This Saturday 16 January, close to a hundred “marches for freedoms” are again expected across France against the proposed “global security” law. The (...)

    #algorithme #CCTV #activisme #biométrie #écologie #féminisme #aérien #facial #législation #reconnaissance #religion #vidéo-surveillance #BlackLivesMatter #délation #Islam #surveillance (...)

    ##syndicat

  • The dark side of open source intelligence
    https://www.codastory.com/authoritarian-tech/negatives-open-source-intelligence

    Internet sleuths have used publicly available data to help track down last week’s Washington D.C. rioters. But what happens when the wrong people are identified? In May, a video of a woman flouting a national Covid-19 mask mandate went viral on social media in Singapore. In the clip, the bare-faced woman argues with passersby outside of a grocery store, defending herself as “a sovereign” and therefore exempt from the law. Following her arrest later that day, internet detectives took matters (...)

    #FBI #algorithme #CCTV #biométrie #facial #reconnaissance #vidéo-surveillance #délation #extrême-droite #surveillance #criminalité #bug #racisme #biais #discrimination (...)

    ##criminalité ##Clearview

  • The Capitol siege and facial recognition technology.
    https://slate.com/technology/2021/01/facial-recognition-technology-capitol-siege.html

    In a recent New Yorker article about the Capitol siege, Ronan Farrow described how investigators used a bevy of online data and facial recognition technology to confirm the identity of Larry Rendall Brock Jr., an Air Force Academy graduate and combat veteran from Texas. Brock was photographed inside the Capitol carrying zip ties, presumably to be used to restrain someone. (He claimed to Farrow that he merely picked them up off the floor and forgot about them. Brock was arrested Sunday and (...)

    #Clearview #algorithme #CCTV #biométrie #technologisme #facial #reconnaissance #vidéo-surveillance #extrême-droite #surveillance #voix (...)

    ##AINow

  • A Local Police Department Is Running Clearview AI Searches for the FBI - Dave Gershgorn
    https://onezero.medium.com/a-local-police-department-is-running-clearview-ai-searches-for-the-f

    The FBI, which is searching for insurrectionists who stormed the U.S. Capitol last week, is working with an unlikely partner: a local police department more than 600 miles away from Washington, D.C. An officer in Alabama named Jason Webb told the Wall Street Journal that he had used Clearview AI technology on photos captured during the riot and sent matches to the FBI. The story highlights how access to Clearview’s platform fundamentally changes the capabilities of local law enforcement. (...)

    #Clearview #algorithme #CCTV #biométrie #données #facial #reconnaissance #vidéo-surveillance (...)

    ##surveillance

  • Capitol: police identify the assailants thanks to Clearview AI and its facial recognition
    https://www.lebigdata.fr/clearview-ai-identification-assaillants-capitole

    According to the CEO of Clearview AI, use of his company’s facial recognition technology by law enforcement rose by 26% the day after the attack on the Capitol. As first reported by the New York Times, Hoan Ton-That confirmed that Clearview saw a sharp rise in search volume on January 7, 2021. Exploiting the captured images: the January 6 attack was broadcast live on cable channels and (...)

    #Clearview #algorithme #CCTV #biométrie #racisme #facial #reconnaissance #discrimination (...)

    ##extrême-droite

    • According to the Times, the Miami police department is using Clearview AI to identify some of the rioters, sending possible matches to the FBI’s Joint Terrorism Task Force. And the Wall Street Journal reported that an Alabama police department was likewise using Clearview to identify faces in footage of the riot before passing the information to the FBI.

      Some facial recognition systems used by the authorities rely on images such as driver’s licence photos. Clearview’s database, for its part, contains some 3 billion images scraped from social media and other websites, which explains its effectiveness. This was revealed by a Times investigation last year.

      Besides raising serious privacy concerns, the practice of taking images from social media breached the platforms’ rules. Technology companies accordingly sent Clearview numerous cease-and-desist orders in the wake of the investigation.

      Nathan Freed Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, said that although facial recognition technology is not regulated by federal law, its potential for mass surveillance of communities of color has rightly led state and local governments across the country to ban its use by law enforcement.

  • Civil society calls for AI red lines in the European Union’s Artificial Intelligence proposal
    https://edri.org/our-work/civil-society-call-for-ai-red-lines-in-the-european-unions-artificial-intellig

    European Digital Rights together with 61 civil society organisations have sent an open letter to the European Commission demanding red lines for the applications of AI that threaten fundamental rights. With the European Union’s AI proposal set to launch this quarter, Europe has the opportunity to demonstrate to the world that true innovation can arise only when we can be confident that everyone will be protected from the most harmful, egregious violations of our fundamental rights. Europe’s (...)

    #algorithme #biométrie #racisme #facial #prédiction #reconnaissance #sexisme #vidéo-surveillance #discrimination #surveillance (...)

    ##EuropeanDigitalRights-EDRi

  • Face Surveillance and the Capitol Attack
    https://www.eff.org/deeplinks/2021/01/face-surveillance-and-capitol-attack

    After last week’s violent attack on the Capitol, law enforcement is working overtime to identify the perpetrators. This is critical to accountability for the attempted insurrection. Law enforcement has many, many tools at their disposal to do this, especially given the very public nature of most of the organizing. But we object to one method reportedly being used to determine who was involved: law enforcement using facial recognition technologies to compare photos of unidentified (...)

    #algorithme #CCTV #biométrie #racisme #facial #reconnaissance #vidéo-surveillance #discrimination #extrême-droite #surveillance #EFF (...)

    ##Clearview

  • Sylvain Louvet and Ludovic Gaillard, winners of the 2020 Albert Londres prize: “With the global security law, we cross yet another threshold in surveillance”
    https://www.telerama.fr/ecrans/sylvain-louvet-et-ludovic-gaillard-prix-albert-londres-2020-avec-la-loi-sec

    The authors of the documentary “Tous surveillés, 7 milliards de suspects” were awarded the Albert Londres audiovisual prize this 5 December. A remarkable investigation into mass surveillance techniques and their abuses, to be watched urgently on Télérama.fr. Once again this year, the Albert Londres audiovisual prize goes to a documentary grappling with one of the most burning issues of the moment: mass surveillance techniques, facial recognition, drones, their (...)

    #algorithme #capteur #CCTV #drone #IJOP #biométrie #émotions #facial #reconnaissance #religion #son #vidéo-surveillance #Islam #panopticon (...)

    ##surveillance

  • The facial-recognition app Clearview sees a spike in use after Capitol attack.
    https://www.nytimes.com/live/2021/01/09/us/trump-biden#facial-recognition-clearview-capitol

    After the Capitol riot, Clearview AI, a facial-recognition app used by law enforcement, has seen a spike in use, said the company’s chief executive, Hoan Ton-That.

    “There was a 26 percent increase of searches over our usual weekday search volume,” Mr. Ton-That said.

    There are ample online photos and videos of rioters, many unmasked, breaching the Capitol. The F.B.I. has posted the faces of dozens of them and has requested assistance identifying them. Local police departments around the country are answering their call.

    “We are poring over whatever images or videos are available from whatever sites we can get our hands on,” said Armando Aguilar, assistant chief at the Miami Police Department, who oversees investigations.

    Two detectives in the department’s Real Time Crime Center are using Clearview to try to identify rioters and are sending the potential matches to the F.B.I.’s Joint Terrorism Task Force office in Miami. They made one potential match within their first hour of searching.

    “This is the greatest threat we’ve faced in my lifetime,” Mr. Aguilar said. “The peaceful transition of power is foundational to our republic.”

    Traditional facial recognition tools used by law enforcement depend on databases containing government-provided photos, such as driver’s license photos and mug shots. But Clearview, which is used by over 2,400 law enforcement agencies, according to the company, relies instead on a database of more than 3 billion photos collected from social media networks and other public websites. When an officer runs a search, the app provides links to sites on the web where the person’s face has appeared.
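
    In principle, a scraped-photo face search of this kind reduces every indexed photo to an embedding vector, ranks the index by similarity to the embedding of the query face, and returns the source URLs of the closest matches. A minimal sketch of that idea; Clearview’s actual pipeline is proprietary, and the embeddings and URLs below are random placeholders:

        import numpy as np

        def cosine(a, b):
            # Cosine similarity between two embedding vectors.
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def search(query, index, k=3):
            """Return the k source URLs whose embeddings best match the query."""
            ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
            return [(url, round(cosine(query, vec), 3)) for url, vec in ranked[:k]]

        rng = np.random.default_rng(0)
        index = [(f"https://example.org/photo/{i}", rng.normal(size=128))
                 for i in range(1000)]
        query = index[42][1] + rng.normal(scale=0.05, size=128)  # noisy copy of photo 42
        print(search(query, index))  # photo 42 should rank first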

    In part because of its effectiveness, Clearview has become controversial. After The New York Times revealed its existence and widespread use last year, lawmakers and social media companies tried to curtail its operations, fearing that its facial-recognition capabilities could pave the way for a dystopian future.

    The Wall Street Journal reported on Friday that the Oxford Police Department in Alabama is also using Clearview to identify Capitol riot suspects and is sending information to the F.B.I. Neither the Oxford Police Department nor the F.B.I. has responded to requests for comment.

    Facial recognition is not a perfect tool. Law enforcement says that it uses facial recognition only as a clue in an investigation and would not charge someone based on that alone, though that has happened in the past.

    When asked if Clearview had performed any searches itself, Mr. Ton-That demurred.

    “Some people think we should be, but that’s really not our job. We’re a technology company and provider,” he said. “We’re not vigilantes.”

    — Kashmir Hill

    #Clearview #FBI #algorithme #CCTV #biométrie #élections #facial #reconnaissance #délation (...)

    ##extrême-droite

  • The Capitol Attack Doesn’t Justify Expanding Surveillance
    https://www.wired.com/story/opinion-the-capitol-attack-doesnt-justify-expanding-surveillance

    The security state that failed to keep DC safe doesn’t need invasive technology to meet this moment—it needs more civilian oversight. They took our Capitol, stormed the halls, pilfered our documents, and shattered the norms of our democracy. The lasting damage from Wednesday’s attack will not come from the mob itself, but from how we respond. Right now, a growing chorus is demanding we use facial recognition, cellphone tower data, and every manner of invasive surveillance to punish the mob. (...)

    #FBI #biométrie #racisme #technologisme #facial #reconnaissance #discrimination #extrême-droite (...)

    ##surveillance

  • Claims Antifa Embedded in Capitol Riots Come From a Deeply Unreliable Facial Recognition Company - Dave Gershgorn
    https://onezero.medium.com/claims-antifa-embedded-in-capitol-riots-come-from-a-deeply-unreliabl

    XRVision also has a track record of spreading conspiracy theories about Hunter Biden. Congressman Matt Gaetz, a Republican from Florida, took to the House floor on Wednesday night to spread an increasingly popular conspiracy theory that the pro-Trump mobs that overtook the Capitol building were in fact aligned with antifa. The claim was based on an anonymous source in a story from the Washington Times, a conservative outlet that has repeatedly pushed conspiracy theories. The source was (...)

    #biométrie #manipulation #facial #reconnaissance #vidéo-surveillance #extrême-droite #surveillance (...)

    ##XRVision

  • Le monde en face - Fliquez-vous les uns les autres: the debate in streaming
    https://www.france.tv/france-5/le-monde-en-face/2168885-fliquez-vous-les-uns-les-autres-le-debat.html

    Presented by Marina Carrère d’Encausse. After the broadcast of the documentary, Marina Carrère d’Encausse will host a debate with four guests: Michel Henry, co-author of the documentary; Laurence Budelot, mayor of Vert-le-Petit (Essonne); Olivier Tesquet, journalist at Télérama and digital affairs specialist; Martin Drago, lawyer, La Quadrature du Net (...)

    #algorithme #CCTV #biométrie #facial #reconnaissance #vidéo-surveillance #surveillance (...)

    ##LaQuadratureduNet

  • Technopolice: cities and lives under surveillance
    https://www.laquadrature.net/2021/01/03/technopolice-villes-et-vies-sous-surveillance

    For several years now, “Smart City” projects have been developing in France, claiming to rely on the new technologies of “Big Data” and “Artificial Intelligence” to improve our daily urban lives. Behind the veneer of these so-called “intelligent” cities often lie dangerously security-driven systems. For one thing, because the idea of multiplying sensors across a city, interconnecting all of its networks and managing the whole from a centre (...)

    #Cisco #Gemalto #Huawei #Thalès #algorithme #capteur #CCTV #PARAFE #SmartCity #biométrie #facial #reconnaissance #vidéo-surveillance #comportement #surveillance #BigData #TAJ #Technopolice (...)

    ##LaQuadratureduNet

  • Inside China’s unexpected quest to protect data privacy
    https://www.technologyreview.com/2020/08/19/1006441/china-data-privacy-hong-yanqing-gdpr

    A new privacy law would look a lot like Europe’s GDPR—but will it restrict state surveillance?

    Late in the summer of 2016, Xu Yuyu received a call that promised to change her life. Her college entrance examination scores, she was told, had won her admission to the English department of the Nanjing University of Posts and Telecommunications. Xu lived in the city of Linyi in Shandong, a coastal province in China, southeast of Beijing. She came from a poor family, singularly reliant on her father’s meager income. But her parents had painstakingly saved for her tuition; very few of her relatives had ever been to college.

    A few days later, Xu received another call telling her she had also been awarded a scholarship. To collect the 2,600 yuan ($370), she needed to first deposit a 9,900 yuan “activation fee” into her university account. Having applied for financial aid only days before, she wired the money to the number the caller gave her. That night, the family rushed to the police to report that they had been defrauded. Xu’s father later said his greatest regret was asking the officer whether they might still get their money back. The answer—“Likely not”—only exacerbated Xu’s devastation. On the way home she suffered a heart attack. She died in a hospital two days later.

    An investigation determined that while the first call had been genuine, the second had come from scammers who’d paid a hacker for Xu’s number, admissions status, and request for financial aid.

    For Chinese consumers all too familiar with having their data stolen, Xu became an emblem. Her death sparked a national outcry for greater data privacy protections. Only months before, the European Union had adopted the General Data Protection Regulation (GDPR), an attempt to give European citizens control over how their personal data is used. Meanwhile, Donald Trump was about to win the American presidential election, fueled in part by a campaign that relied extensively on voter data. That data included details on 87 million Facebook accounts, illicitly obtained by the consulting firm Cambridge Analytica. Chinese regulators and legal scholars followed these events closely.

    In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. US tech giants wield this supposed indifference to argue that onerous privacy laws would put them at a competitive disadvantage to Chinese firms. In his 2018 Senate testimony after the Cambridge Analytica scandal, Facebook’s CEO, Mark Zuckerberg, urged regulators not to clamp down too hard on technologies like face recognition. “We still need to make it so that American companies can innovate in those areas,” he said, “or else we’re going to fall behind Chinese competitors and others around the world.”

    In reality, this picture of Chinese attitudes to privacy is out of date. Over the last few years the Chinese government, seeking to strengthen consumers’ trust and participation in the digital economy, has begun to implement privacy protections that in many respects resemble those in America and Europe today.

    Even as the government has strengthened consumer privacy, however, it has ramped up state surveillance. It uses DNA samples and other biometrics, like face and fingerprint recognition, to monitor citizens throughout the country. It has tightened internet censorship and developed a “social credit” system, which punishes behaviors the authorities say weaken social stability. During the pandemic, it deployed a system of “health code” apps to dictate who could travel, based on their risk of carrying the coronavirus. And it has used a slew of invasive surveillance technologies in its harsh repression of Muslim Uighurs in the northwestern region of Xinjiang.

    This paradox has become a defining feature of China’s emerging data privacy regime, says Samm Sacks, a leading China scholar at Yale and New America, a think tank in Washington, DC. It raises a question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? The answer doesn’t affect only China. Its technology companies have an increasingly global footprint, and regulators around the world are watching its policy decisions.

    November 2000 arguably marks the birth of the modern Chinese surveillance state. That month, the Ministry of Public Security, the government agency that oversees daily law enforcement, announced a new project at a trade show in Beijing. The agency envisioned a centralized national system that would integrate both physical and digital surveillance using the latest technology. It was named Golden Shield.

    Eager to cash in, Western companies including American conglomerate Cisco, Finnish telecom giant Nokia, and Canada’s Nortel Networks worked with the agency on different parts of the project. They helped construct a nationwide database for storing information on all Chinese adults, and developed a sophisticated system for controlling information flow on the internet—what would eventually become the Great Firewall. Much of the equipment involved had in fact already been standardized to make surveillance easier in the US—a consequence of the Communications Assistance for Law Enforcement Act of 1994.

    Despite the standardized equipment, the Golden Shield project was hampered by data silos and turf wars within the Chinese government. Over time, the ministry’s pursuit of a singular, unified system devolved into two separate operations: a surveillance and database system, devoted to gathering and storing information, and the social-credit system, which some 40 government departments participate in. When people repeatedly do things that aren’t allowed—from jaywalking to engaging in business corruption—their social-credit score falls and they can be blocked from things like buying train and plane tickets or applying for a mortgage.
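
    To make the mechanics concrete, here is a minimal sketch in Python of how such a demerit-based score could gate access to services. The event names, point values, and threshold below are invented for illustration; the actual scoring rules are not public.

        # Hypothetical demerit-based score gating services; all rules
        # below are invented, since the real scoring is undisclosed.
        DEMERITS = {"jaywalking": 5, "business_corruption": 50}
        BLOCK_THRESHOLD = 60  # below this, bookings are refused

        class CitizenRecord:
            def __init__(self, score=100):
                self.score = score

            def record_violation(self, event):
                self.score -= DEMERITS.get(event, 0)

            def may_buy_ticket(self):
                # Train and plane purchases are blocked once the
                # score falls below the threshold.
                return self.score >= BLOCK_THRESHOLD

        record = CitizenRecord()
        for _ in range(9):
            record.record_violation("jaywalking")
        print(record.score, record.may_buy_ticket())  # 55 False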

    In the same year the Ministry of Public Security announced Golden Shield, Hong Yanqing entered the ministry’s police university in Beijing. But after seven years of training, having received his bachelor’s and master’s degrees, Hong began to have second thoughts about becoming a policeman. He applied instead to study abroad. By the fall of 2007, he had moved to the Netherlands to begin a PhD in international human rights law, approved and subsidized by the Chinese government.

    Over the next four years, he familiarized himself with the Western practice of law through his PhD research and a series of internships at international organizations. He worked at the International Labor Organization on global workplace discrimination law and the World Health Organization on road safety in China. “It’s a very legalistic culture in the West—that really strikes me. People seem to go to court a lot,” he says. “For example, for human rights law, most of the textbooks are about the significant cases in court resolving human rights issues.”

    Hong found this to be strangely inefficient. He saw going to court as a final resort for patching up the law’s inadequacies, not a principal tool for establishing it in the first place. Legislation crafted more comprehensively and with greater forethought, he believed, would achieve better outcomes than a system patched together through a haphazard accumulation of case law, as in the US.

    After graduating, he carried these ideas back to Beijing in 2012, on the eve of Xi Jinping’s ascent to the presidency. Hong worked at the UN Development Program and then as a journalist for the People’s Daily, the largest newspaper in China, which is owned by the government.

    Xi began to rapidly expand the scope of government censorship. Influential commentators, or “Big Vs”—named for their verified accounts on social media—had grown comfortable criticizing and ridiculing the Chinese Communist Party. In the fall of 2013, the party arrested hundreds of microbloggers for what it described as “malicious rumor-mongering” and paraded a particularly influential one on national television to make an example of him.

    The moment marked the beginning of a new era of censorship. The following year, the Cyberspace Administration of China was founded. The new central agency was responsible for everything involved in internet regulation, including national security, media and speech censorship, and data protection. Hong left the People’s Daily and joined the agency’s department of international affairs. He represented it at the UN and other global bodies and worked on cybersecurity cooperation with other governments.

    By July 2015, the Cyberspace Administration had released a draft of its first law. The Cybersecurity Law, which entered into force in June of 2017, required that companies obtain consent from people to collect their personal information. At the same time, it tightened internet censorship by banning anonymous users—a provision enforced by regular government inspections of data from internet service providers.

    In the spring of 2016, Hong sought to return to academia, but the agency asked him to stay. The Cybersecurity Law had purposely left the regulation of personal data protection vague, but consumer data breaches and theft had reached unbearable levels. A 2016 study by the Internet Society of China found that 84% of those surveyed had suffered some leak of their data, including phone numbers, addresses, and bank account details. This was spurring a growing distrust of digital service providers that required access to personal information, such as ride-hailing, food-delivery, and financial apps. Xu Yuyu’s death poured oil on the flames.

    The government worried that such sentiments would weaken participation in the digital economy, which had become a central part of its strategy for shoring up the country’s slowing economic growth. The advent of GDPR also made the government realize that Chinese tech giants would need to meet global privacy norms in order to expand abroad.

    Hong was put in charge of a new task force that would write a Personal Information Protection Specification (PIPS) to help solve these challenges. The document, though nonbinding, would tell companies how regulators intended to implement the Cybersecurity Law. In the process, the government hoped, it would nudge them to adopt new norms for data protection by themselves.

    Hong’s task force set about translating every relevant document they could find into Chinese. They translated the privacy guidelines put out by the Organization for Economic Cooperation and Development and by its counterpart, the Asia-Pacific Economic Cooperation; they translated GDPR and the California Consumer Privacy Act. They even translated the 2012 White House Consumer Privacy Bill of Rights, introduced by the Obama administration but never made into law. All the while, Hong met regularly with European and American data protection regulators and scholars.

    Bit by bit, from the documents and consultations, a general choice emerged. “People were saying, in very simplistic terms, ‘We have a European model and the US model,’” Hong recalls. The two approaches diverged substantially in philosophy and implementation. Which one to follow became the task force’s first debate.

    At the core of the European model is the idea that people have a fundamental right to have their data protected. GDPR places the burden of proof on data collectors, such as companies, to demonstrate why they need the data. By contrast, the US model privileges industry over consumers. Businesses define for themselves what constitutes reasonable data collection; consumers only get to choose whether to use that business. The laws on data protection are also far more piecemeal than in Europe, divvied up among sectoral regulators and specific states.

    At the time, without a central law or single agency in charge of data protection, China’s model more closely resembled the American one. The task force, however, found the European approach compelling. “The European rule structure, the whole system, is more clear,” Hong says.

    But most of the task force members were representatives from Chinese tech giants, like Baidu, Alibaba, and Huawei, and they felt that GDPR was too restrictive. So they adopted its broad strokes—including its limits on data collection and its requirements on data storage and data deletion—and then loosened some of its language. GDPR’s principle of data minimization, for example, maintains that only necessary data should be collected in exchange for a service. PIPS allows room for other data collection relevant to the service provided.
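
    The difference is easiest to see as a collection filter. The sketch below, in Python, contrasts a strict necessary-only rule in the spirit of GDPR’s data minimization with a looser relevant-to-the-service rule in the spirit of PIPS; the field sets and the imagined ride-hailing service are assumptions for illustration, not drawn from either text.

        # Illustrative only: field sets are assumptions, not quotes
        # from GDPR or PIPS. Imagine a ride-hailing service.
        NECESSARY = {"name", "payment_method", "pickup", "dropoff"}
        RELEVANT = NECESSARY | {"frequent_routes", "device_model"}

        def allowed_fields(requested, policy):
            """Return the subset of requested fields the policy permits."""
            if policy == "gdpr_minimization":
                return requested & NECESSARY  # only what the service needs
            if policy == "pips_relevance":
                return requested & RELEVANT   # anything relevant to the service
            raise ValueError(policy)

        requested = {"name", "pickup", "dropoff", "device_model", "contacts"}
        print(allowed_fields(requested, "gdpr_minimization"))
        print(allowed_fields(requested, "pips_relevance"))  # adds device_model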

    PIPS took effect in May 2018, the same month that GDPR finally came into force. But as Chinese officials watched the US upheaval over the Facebook and Cambridge Analytica scandal, they realized that a nonbinding agreement would not be enough. The Cybersecurity Law didn’t have a strong mechanism for enforcing data protection. Regulators could only fine violators up to 1,000,000 yuan ($140,000), an inconsequential amount for large companies. Soon after, the National People’s Congress, China’s top legislative body, voted to begin drafting a Personal Information Protection Law within its current five-year legislative period, which ends in 2023. It would strengthen data protection provisions, provide for tougher penalties, and potentially create a new enforcement agency.

    After Cambridge Analytica, says Hong, “the government agency understood, ‘Okay, if you don’t really implement or enforce those privacy rules, then you could have a major scandal, even affecting political things.’”

    The local police investigation of Xu Yuyu’s death eventually identified the scammers who had called her. It had been a gang of seven who’d cheated many other victims out of more than 560,000 yuan using illegally obtained personal information. The court ruled that Xu’s death had been a direct result of the stress of losing her family’s savings. Because of this, and his role in orchestrating tens of thousands of other calls, the ringleader, Chen Wenhui, 22, was sentenced to life in prison. The others received sentences of between three and 15 years.

    Emboldened, Chinese media and consumers began more openly criticizing privacy violations. In March 2018, internet search giant Baidu’s CEO, Robin Li, sparked social-media outrage after suggesting that Chinese consumers were willing to “exchange privacy for safety, convenience, or efficiency.” “Nonsense,” wrote a social-media user, later quoted by the People’s Daily. “It’s more accurate to say [it is] impossible to defend [our privacy] effectively.”

    In late October 2019, social-media users once again expressed anger after photos began circulating of a school’s students wearing brainwave-monitoring headbands, supposedly to improve their focus and learning. The local educational authority eventually stepped in and told the school to stop using the headbands because they violated students’ privacy. A week later, a Chinese law professor sued a Hangzhou wildlife zoo for replacing its fingerprint-based entry system with face recognition, saying the zoo had failed to obtain his consent for storing his image.

    But the public’s growing sensitivity to infringements of consumer privacy has not led to many limits on state surveillance, nor even much scrutiny of it. As Maya Wang, a researcher at Human Rights Watch, points out, this is in part because most Chinese citizens don’t know the scale or scope of the government’s operations. In China, as in the US and Europe, there are broad public and national security exemptions to data privacy laws. The Cybersecurity Law, for example, allows the government to demand data from private actors to assist in criminal legal investigations. The Ministry of Public Security also accumulates massive amounts of data on individuals directly. As a result, data privacy in industry can be strengthened without significantly limiting the state’s access to information.

    The onset of the pandemic, however, has disturbed this uneasy balance.

    On February 11, Ant Financial, a financial technology giant headquartered in Hangzhou, a city southwest of Shanghai, released an app-building platform called AliPay Health Code. The same day, the Hangzhou government released an app it had built using the platform. The Hangzhou app asked people to self-report their travel and health information, and then gave them a color code of red, yellow, or green. Suddenly Hangzhou’s 10 million residents were all required to show a green code to take the subway, shop for groceries, or enter a mall. Within a week, local governments in over 100 cities had used AliPay Health Code to develop their own apps. Rival tech giant Tencent quickly followed with its own platform for building them.
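
    The gating mechanism itself was simple even where the scoring behind it was opaque. A minimal sketch in Python of the self-report-to-color flow might look like the following; the rules and field names are assumptions, since the apps’ actual logic was never disclosed.

        # Hypothetical reconstruction of a health-code app: self-reported
        # data in, a red/yellow/green code out, access gated on green.
        # The real scoring logic was never published.
        def health_code(visited_outbreak_area, has_symptoms, close_contact):
            if has_symptoms or visited_outbreak_area:
                return "red"     # quarantine required
            if close_contact:
                return "yellow"  # restricted movement
            return "green"       # free to enter subway, shops, malls

        def may_enter(code):
            # Subways, grocery stores, and malls admit green codes only.
            return code == "green"

        print(health_code(False, False, True))              # yellow
        print(may_enter(health_code(False, False, False)))  # True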

    The apps made visible a worrying level of state surveillance and sparked a new wave of public debate. In March, Hu Yong, a journalism professor at Beijing University and an influential blogger on Weibo, argued that the government’s pandemic data collection had crossed a line. Not only had it led to instances of information being stolen, he wrote, but it had also opened the door to such data being used beyond its original purpose. “Has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” he asked.

    Indeed, in late May, leaked documents revealed plans from the Hangzhou government to make a more permanent health-code app that would score citizens on behaviors like exercising, smoking, and sleeping. After a public outcry, city officials canceled the project. That state-run media had also published stories criticizing the app likely helped.

    The debate quickly made its way to the central government. That month, the National People’s Congress announced it intended to fast-track the Personal Information Protection Law. The scale of the data collected during the pandemic had made strong enforcement more urgent, delegates said, and highlighted the need to clarify the scope of the government’s data collection and data deletion procedures during special emergencies. By July, the legislative body had proposed a new “strict approval” process for government authorities to undergo before collecting data from private-sector platforms. The language again remains vague, to be fleshed out later—perhaps through another nonbinding document—but this move “could mark a step toward limiting the broad scope” of existing government exemptions for national security, wrote Sacks and fellow China scholars at New America.

    Hong similarly believes the discrepancy between rules governing industry and government data collection won’t last, and the government will soon begin to limit its own scope. “We cannot simply address one actor while leaving the other out,” he says. “That wouldn’t be a very scientific approach.”

    Other observers disagree. The government could easily make superficial efforts to address public backlash against visible data collection without really touching the core of the Ministry of Public Security’s national operations, says Wang, of Human Rights Watch. She adds that any laws would likely be enforced unevenly: “In Xinjiang, Turkic Muslims have no say whatsoever in how they’re treated.”

    Still, Hong remains an optimist. In July, he started a job teaching law at Beijing University, and he now maintains a blog on cybersecurity and data issues. Monthly, he meets with a budding community of data protection officers in China, who carefully watch how data governance is evolving around the world.

    #criminalité #Nokia_Siemens #fraude #Huawei #payement #Cisco #CambridgeAnalytica/Emerdata #Baidu #Alibaba #domination #bénéfices #BHATX #BigData #lutte #publicité (...)

    ##criminalité ##CambridgeAnalytica/Emerdata ##publicité ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##Nortel_Networks ##Facebook ##biométrie ##consommation ##génétique ##consentement ##facial ##reconnaissance ##empreintes ##Islam ##SocialCreditSystem ##surveillance ##TheGreatFirewallofChina ##HumanRightsWatch

  • A year in surveillance
    https://aboutintel.eu/a-year-in-surveillance

    2020 has been a very turbulent year. This is also true with regard to European surveillance politics, both at the EU level and in national politics. Like most years, it was largely characterised by one central conflict, which in simple terms goes like this: a push for more and more technologically advanced surveillance practices by both industry and government actors on the one hand, and fierce resistance from civil society, academia, and some regulators on the other, attempting to rein (...)

    #Palantir #BND #algorithme #IMSI-catchers #biométrie #police #racisme #technologisme #facial #prédiction #reconnaissance #vidéo-surveillance #COVID-19 #écoutes #santé #surveillance #discrimination #OpenRightsGroup #PrivacyInternational #LaQuadratureduNet (...)

    ##santé ##BigData ##Liberty ##MI5

  • Covid-19 Ushered in a New Era of Government Surveillance
    https://onezero.medium.com/covid-19-ushered-in-a-new-era-of-government-surveillance-414afb7e422

    Government-mandated drone surveillance and location tracking apps could be here to stay. In early December, after finding that 16 people had illegally crossed the border from Myanmar to Thailand and evaded the mandatory quarantine period, the Thai government said it would start patrolling the border with new surveillance equipment like drones and ultraviolet cameras. In 2020, this kind of surveillance, justified by the coronavirus pandemic, has gone mainstream. Since March, more than 30 (...)

    #algorithme #AarogyaSetu_ #Bluetooth #CCTV #drone #smartphone #biométrie #contactTracing #géolocalisation #migration #température #facial #reconnaissance #vidéo-surveillance #COVID-19 #frontières #santé (...)

    ##santé ##surveillance

  • China: Big Data Program Targets Xinjiang’s Muslims
    https://www.hrw.org/news/2020/12/09/china-big-data-program-targets-xinjiangs-muslims

    Leaked List of Over 2,000 Detainees Demonstrates Automated Repression. (New York) – A big data program for policing in China’s Xinjiang region arbitrarily selects Turkic Muslims for possible detention, Human Rights Watch said today. A leaked list of over 2,000 detainees from Aksu prefecture provided to Human Rights Watch is further evidence of China’s use of technology in its repression of the Muslim population. The big data program, the Integrated Joint Operations Platform (IJOP), apparently (...)

    #algorithme #biométrie #racisme #comportement #discrimination #HumanRightsWatch

  • Google told its scientists to ’strike a positive tone’ in AI research - documents
    https://www.reuters.com/article/us-alphabet-google-research-focus-idUSKBN28X1CB

    OAKLAND, Calif. (Reuters) - Alphabet Inc’s Google this year moved to tighten control over its scientists’ papers by launching a “sensitive topics” review, and in at least three cases requested authors refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work. Google’s new review procedure asks that researchers consult with legal, policy and public relations teams before pursuing topics such as face and (...)

    #Google #Gmail #YouTube #algorithme #biométrie #géolocalisation #manipulation #religion #COVID-19 #discrimination (...)

    ##santé

  • Stuck in Zoom (4/4): why are we going to stay?
    http://www.internetactu.net/2020/12/18/coince-dans-zoom-44-pourquoi-allons-nous-y-rester

    Now that we are more or less out of lockdown, our time on Zoom should be easing off. Don't be so sure!... For even if the mere mention of its name gives you hives, Zoom is probably here to stay. Why will we have to get used to living alongside this new "ogre" of our tele-lives? The future of Zoom: video surveillance of our private lives? If the many articles published on the subject are to be believed, the future of videoconferencing services looks set to be a (...)

    #Zoom #algorithme #CCTV #biométrie #vidéo-surveillance #GigEconomy #panopticon #surveillance #télétravail #travail (...)

    ##visioconférence