• The dark side of open source intelligence
    https://www.codastory.com/authoritarian-tech/negatives-open-source-intelligence

    Internet sleuths have used publicly available data to help track down last week’s Washington D.C. rioters. But what happens when the wrong people are identified? In May, a video of a woman flouting a national Covid-19 mask mandate went viral on social media in Singapore. In the clip, the bare-faced woman argues with passersby outside of a grocery store, defending herself as “a sovereign” and therefore exempt from the law. Following her arrest later that day, internet detectives took matters (...)

    #FBI #algorithme #CCTV #biométrie #facial #reconnaissance #vidéo-surveillance #délation #extrême-droite #surveillance #criminalité #bug #racisme #biais #discrimination (...)

    ##criminalité ##Clearview

  • The Capitol siege and facial recognition technology.
    https://slate.com/technology/2021/01/facial-recognition-technology-capitol-siege.html

    In a recent New Yorker article about the Capitol siege, Ronan Farrow described how investigators used a bevy of online data and facial recognition technology to confirm the identity of Larry Rendall Brock Jr., an Air Force Academy graduate and combat veteran from Texas. Brock was photographed inside the Capitol carrying zip ties, presumably to be used to restrain someone. (He claimed to Farrow that he merely picked them up off the floor and forgot about them. Brock was arrested Sunday and (...)

    #Clearview #algorithme #CCTV #biométrie #technologisme #facial #reconnaissance #vidéo-surveillance #extrême-droite #surveillance #voix (...)

    ##AINow

  • A Local Police Department Is Running Clearview AI Searches for the FBI - Dave Gershgorn
    https://onezero.medium.com/a-local-police-department-is-running-clearview-ai-searches-for-the-f

    The FBI, which is searching for insurrectionists who stormed the U.S. Capitol last week, is working with an unlikely partner: a local police department more than 600 miles away from Washington, D.C. An officer in Alabama named Jason Webb told the Wall Street Journal that he had used Clearview AI technology on photos captured during the riot and sent matches to the FBI. The story highlights how access to Clearview’s platform fundamentally changes the capabilities of local law enforcement. (...)

    #Clearview #algorithme #CCTV #biométrie #données #facial #reconnaissance #vidéo-surveillance (...)

    ##surveillance

  • Capitol: police identify the assailants thanks to Clearview AI and its facial recognition
    https://www.lebigdata.fr/clearview-ai-identification-assaillants-capitole

    According to Clearview AI’s CEO, law enforcement use of his company’s facial recognition technology rose 26% the day after the Capitol attack. First reported by the New York Times, Hoan Ton-That confirmed that Clearview saw a sharp increase in the use of its technology on January 7, 2021, in terms of search volume. Exploiting the captured images: the January 6 attack was broadcast live on cable channels and (...)

    #Clearview #algorithme #CCTV #biométrie #racisme #facial #reconnaissance #discrimination (...)

    ##extrême-droite

    • According to the Times, the Miami Police Department is using Clearview AI to identify some of the rioters, sending possible matches to the FBI’s Joint Terrorism Task Force. And the Wall Street Journal reported that an Alabama police department was also using Clearview to identify faces in footage of the riot before sending the information to the FBI.

      Some facial recognition systems used by the authorities rely on images such as driver’s license photos. Clearview’s database, for its part, contains some 3 billion images scraped from social media and other websites, which explains its effectiveness. These details were revealed by a Times investigation last year.

      Beyond raising serious privacy concerns, the practice of taking images from social media violated the platforms’ rules. Technology companies subsequently sent Clearview numerous cease-and-desist orders in the wake of the investigation.

      Nathan Freed Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, said that although facial recognition technology is not regulated by federal law, its potential for mass surveillance of communities of color has rightly led state and local governments across the country to ban its use by law enforcement.

  • Civil society calls for AI red lines in the European Union’s Artificial Intelligence proposal
    https://edri.org/our-work/civil-society-call-for-ai-red-lines-in-the-european-unions-artificial-intellig

    European Digital Rights together with 61 civil society organisations have sent an open letter to the European Commission demanding red lines for the applications of AI that threaten fundamental rights. With the European Union’s AI proposal set to launch this quarter, Europe has the opportunity to demonstrate to the world that true innovation can arise only when we can be confident that everyone will be protected from the most harmful, egregious violations of our fundamental rights. Europe’s (...)

    #algorithme #biométrie #racisme #facial #prédiction #reconnaissance #sexisme #vidéo-surveillance #discrimination #surveillance (...)

    ##EuropeanDigitalRights-EDRi

  • Face Surveillance and the Capitol Attack
    https://www.eff.org/deeplinks/2021/01/face-surveillance-and-capitol-attack

    After last week’s violent attack on the Capitol, law enforcement is working overtime to identify the perpetrators. This is critical to accountability for the attempted insurrection. Law enforcement has many, many tools at their disposal to do this, especially given the very public nature of most of the organizing. But we object to one method reportedly being used to determine who was involved: law enforcement using facial recognition technologies to compare photos of unidentified (...)

    #algorithme #CCTV #biométrie #racisme #facial #reconnaissance #vidéo-surveillance #discrimination #extrême-droite #surveillance #EFF (...)

    ##Clearview

  • Sylvain Louvet and Ludovic Gaillard, winners of the 2020 Albert Londres prize: “With the Global Security law, we are crossing yet another threshold in surveillance”
    https://www.telerama.fr/ecrans/sylvain-louvet-et-ludovic-gaillard-prix-albert-londres-2020-avec-la-loi-sec

    The makers of the documentary “Tous surveillés, 7 milliards de suspects” were awarded the Albert Londres audiovisual prize on December 5. A remarkable investigation into mass surveillance techniques and their abuses, to be watched urgently on Télérama.fr. Once again this year, the Albert Londres audiovisual prize goes to a documentary grappling with one of the most burning issues of the moment: mass surveillance techniques, facial recognition, drones, their (...)

    #algorithme #capteur #CCTV #drone #IJOP #biométrie #émotions #facial #reconnaissance #religion #son #vidéo-surveillance #Islam #panopticon (...)

    ##surveillance

  • The facial-recognition app Clearview sees a spike in use after Capitol attack.
    https://www.nytimes.com/live/2021/01/09/us/trump-biden#facial-recognition-clearview-capitol

    After the Capitol riot, Clearview AI, a facial-recognition app used by law enforcement, has seen a spike in use, said the company’s chief executive, Hoan Ton-That.

    “There was a 26 percent increase of searches over our usual weekday search volume,” Mr. Ton-That said.

    There are ample online photos and videos of rioters, many unmasked, breaching the Capitol. The F.B.I. has posted the faces of dozens of them and has requested assistance identifying them. Local police departments around the country are answering their call.

    “We are poring over whatever images or videos are available from whatever sites we can get our hands on,” said Armando Aguilar, assistant chief at the Miami Police Department, who oversees investigations.

    Two detectives in the department’s Real Time Crime Center are using Clearview to try to identify rioters and are sending the potential matches to the F.B.I.’s Joint Terrorism Task Force office in Miami. They made one potential match within their first hour of searching.

    “This is the greatest threat we’ve faced in my lifetime,” Mr. Aguilar said. “The peaceful transition of power is foundational to our republic.”

    Traditional facial recognition tools used by law enforcement depend on databases containing government-provided photos, such as driver’s license photos and mug shots. But Clearview, which is used by over 2,400 law enforcement agencies, according to the company, relies instead on a database of more than 3 billion photos collected from social media networks and other public websites. When an officer runs a search, the app provides links to sites on the web where the person’s face has appeared.
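    The search described above can be illustrated with a toy sketch. This is purely hypothetical: Clearview's actual models and data are not public, and all names, vectors, and the similarity threshold below are invented for illustration. The general technique, reducing each face photo to a numeric embedding vector and ranking database entries by similarity, is how such systems are commonly described.

    ```python
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two embedding vectors (1.0 = same direction)."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def search(query: np.ndarray, database: dict,
               threshold: float = 0.8) -> list:
        """Return (url, score) pairs whose stored face embedding resembles the query."""
        matches = [(url, cosine_similarity(query, emb))
                   for url, emb in database.items()]
        return sorted([m for m in matches if m[1] >= threshold],
                      key=lambda m: m[1], reverse=True)

    # Toy database mapping a source URL to a face embedding. Real systems use
    # high-dimensional vectors from a neural network and billions of entries.
    db = {
        "https://example.com/profile-a": np.array([0.9, 0.1, 0.3]),
        "https://example.com/profile-b": np.array([-0.5, 0.8, 0.2]),
    }
    query = np.array([0.88, 0.12, 0.31])  # embedding of the photo being searched
    results = search(query, db)  # only profile-a clears the threshold
    ```

    The output is a ranked list of links where a similar face appears, matching the article's description of what an officer sees; the match is a lead, not an identification.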

    In part because of its effectiveness, Clearview has become controversial. After The New York Times revealed its existence and widespread use last year, lawmakers and social media companies tried to curtail its operations, fearing that its facial-recognition capabilities could pave the way for a dystopian future.

    The Wall Street Journal reported on Friday that the Oxford Police Department in Alabama is also using Clearview to identify Capitol riot suspects and is sending information to the F.B.I. Neither the Oxford Police Department nor the F.B.I. has responded to requests for comment.

    Facial recognition is not a perfect tool. Law enforcement says that it uses facial recognition only as a clue in an investigation and would not charge someone based on that alone, though that has happened in the past.

    When asked if Clearview had performed any searches itself, Mr. Ton-That demurred.

    “Some people think we should be, but that’s really not our job. We’re a technology company and provider,” he said. “We’re not vigilantes.”

    — Kashmir Hill

    #Clearview #FBI #algorithme #CCTV #biométrie #élections #facial #reconnaissance #délation (...)

    ##extrême-droite

  • The Capitol Attack Doesn’t Justify Expanding Surveillance
    https://www.wired.com/story/opinion-the-capitol-attack-doesnt-justify-expanding-surveillance

    The security state that failed to keep DC safe doesn’t need invasive technology to meet this moment—it needs more civilian oversight. They took our Capitol, stormed the halls, pilfered our documents, and shattered the norms of our democracy. The lasting damage from Wednesday’s attack will not come from the mob itself, but from how we respond. Right now, a growing chorus is demanding we use facial recognition, cellphone tower data, and every manner of invasive surveillance to punish the mob. (...)

    #FBI #biométrie #racisme #technologisme #facial #reconnaissance #discrimination #extrême-droite (...)

    ##surveillance

  • This is how Facebook’s AI looks for bad stuff
    https://www.technologyreview.com/2019/11/29/131792/this-is-how-facebooks-ai-looks-for-bad-stuff

    The context: The vast majority of Facebook’s moderation is now done automatically by the company’s machine-learning systems, reducing the amount of harrowing content its moderators have to review. In its latest community standards enforcement report, published earlier this month, the company claimed that 98% of terrorist videos and photos are removed before anyone has the chance to see them, let alone report them. So, what are we seeing here? The company has been training its (...)

    #MetropolitanPolice #Facebook #algorithme #anti-terrorisme #modération #reconnaissance #vidéo-surveillance #forme (...)

    ##surveillance

  • Claims Antifa Embedded in Capitol Riots Come From a Deeply Unreliable Facial Recognition Company - Dave Gershgorn
    https://onezero.medium.com/claims-antifa-embedded-in-capitol-riots-come-from-a-deeply-unreliabl

    XRVision also has a track record of spreading conspiracy theories about Hunter Biden. Congressman Matt Gaetz, a Republican from Florida, took to the House floor on Wednesday night to spread an increasingly popular conspiracy theory that the pro-Trump mobs that overtook the Capitol building were in fact aligned with antifa. The claim was based on an anonymous source in a story from the Washington Times, a conservative outlet that has repeatedly pushed conspiracy theories. The source was (...)

    #biométrie #manipulation #facial #reconnaissance #vidéo-surveillance #extrême-droite #surveillance (...)

    ##XRVision

  • Le monde en face - “Fliquez-vous les uns les autres” (Police one another): the debate, streamed
    https://www.france.tv/france-5/le-monde-en-face/2168885-fliquez-vous-les-uns-les-autres-le-debat.html

    Presented by Marina Carrère d’Encausse. Following the broadcast of the documentary, Marina Carrère d’Encausse will host a debate with four guests: Michel Henry, co-author of the documentary; Laurence Budelot, mayor of Vert-le-Petit (Essonne); Olivier Tesquet, journalist at Télérama and digital affairs specialist; Martin Drago, legal expert, La Quadrature du Net (...)

    #algorithme #CCTV #biométrie #facial #reconnaissance #vidéo-surveillance #surveillance (...)

    ##LaQuadratureduNet

  • Technopolice: cities and lives under surveillance
    https://www.laquadrature.net/2021/01/03/technopolice-villes-et-vies-sous-surveillance

    For several years, “Smart City” projects have been under way in France, claiming to draw on new “Big Data” and “Artificial Intelligence” technologies to improve our daily urban lives. Behind the veneer of these supposedly “intelligent” cities lie systems that are often dangerously security-oriented. On the one hand, because the idea of multiplying sensors across a city, interconnecting all of its networks, and managing the whole from a single center (...)

    #Cisco #Gemalto #Huawei #Thalès #algorithme #capteur #CCTV #PARAFE #SmartCity #biométrie #facial #reconnaissance #vidéo-surveillance #comportement #surveillance #BigData #TAJ #Technopolice (...)

    ##LaQuadratureduNet

  • Inside China’s unexpected quest to protect data privacy
    https://www.technologyreview.com/2020/08/19/1006441/china-data-privacy-hong-yanqing-gdpr

    A new privacy law would look a lot like Europe’s GDPR—but will it restrict state surveillance?

    Late in the summer of 2016, Xu Yuyu received a call that promised to change her life. Her college entrance examination scores, she was told, had won her admission to the English department of the Nanjing University of Posts and Telecommunications. Xu lived in the city of Linyi in Shandong, a coastal province in China, southeast of Beijing. She came from a poor family, singularly reliant on her father’s meager income. But her parents had painstakingly saved for her tuition; very few of her relatives had ever been to college.

    A few days later, Xu received another call telling her she had also been awarded a scholarship. To collect the 2,600 yuan ($370), she needed to first deposit a 9,900 yuan “activation fee” into her university account. Having applied for financial aid only days before, she wired the money to the number the caller gave her. That night, the family rushed to the police to report that they had been defrauded. Xu’s father later said his greatest regret was asking the officer whether they might still get their money back. The answer—“Likely not”—only exacerbated Xu’s devastation. On the way home she suffered a heart attack. She died in a hospital two days later.

    An investigation determined that while the first call had been genuine, the second had come from scammers who’d paid a hacker for Xu’s number, admissions status, and request for financial aid.

    For Chinese consumers all too familiar with having their data stolen, Xu became an emblem. Her death sparked a national outcry for greater data privacy protections. Only months before, the European Union had adopted the General Data Protection Regulation (GDPR), an attempt to give European citizens control over how their personal data is used. Meanwhile, Donald Trump was about to win the American presidential election, fueled in part by a campaign that relied extensively on voter data. That data included details on 87 million Facebook accounts, illicitly obtained by the consulting firm Cambridge Analytica. Chinese regulators and legal scholars followed these events closely.

    In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. US tech giants wield this supposed indifference to argue that onerous privacy laws would put them at a competitive disadvantage to Chinese firms. In his 2018 Senate testimony after the Cambridge Analytica scandal, Facebook’s CEO, Mark Zuckerberg, urged regulators not to clamp down too hard on technologies like face recognition. “We still need to make it so that American companies can innovate in those areas,” he said, “or else we’re going to fall behind Chinese competitors and others around the world.”

    In reality, this picture of Chinese attitudes to privacy is out of date. Over the last few years the Chinese government, seeking to strengthen consumers’ trust and participation in the digital economy, has begun to implement privacy protections that in many respects resemble those in America and Europe today.

    Even as the government has strengthened consumer privacy, however, it has ramped up state surveillance. It uses DNA samples and other biometrics, like face and fingerprint recognition, to monitor citizens throughout the country. It has tightened internet censorship and developed a “social credit” system, which punishes behaviors the authorities say weaken social stability. During the pandemic, it deployed a system of “health code” apps to dictate who could travel, based on their risk of carrying the coronavirus. And it has used a slew of invasive surveillance technologies in its harsh repression of Muslim Uighurs in the northwestern region of Xinjiang.

    This paradox has become a defining feature of China’s emerging data privacy regime, says Samm Sacks, a leading China scholar at Yale and New America, a think tank in Washington, DC. It raises a question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? The answer doesn’t affect only China. Its technology companies have an increasingly global footprint, and regulators around the world are watching its policy decisions.

    November 2000 arguably marks the birth of the modern Chinese surveillance state. That month, the Ministry of Public Security, the government agency that oversees daily law enforcement, announced a new project at a trade show in Beijing. The agency envisioned a centralized national system that would integrate both physical and digital surveillance using the latest technology. It was named Golden Shield.

    Eager to cash in, Western companies including American conglomerate Cisco, Finnish telecom giant Nokia, and Canada’s Nortel Networks worked with the agency on different parts of the project. They helped construct a nationwide database for storing information on all Chinese adults, and developed a sophisticated system for controlling information flow on the internet—what would eventually become the Great Firewall. Much of the equipment involved had in fact already been standardized to make surveillance easier in the US—a consequence of the Communications Assistance for Law Enforcement Act of 1994.

    Despite the standardized equipment, the Golden Shield project was hampered by data silos and turf wars within the Chinese government. Over time, the ministry’s pursuit of a singular, unified system devolved into two separate operations: a surveillance and database system, devoted to gathering and storing information, and the social-credit system, which some 40 government departments participate in. When people repeatedly do things that aren’t allowed—from jaywalking to engaging in business corruption—their social-credit score falls and they can be blocked from things like buying train and plane tickets or applying for a mortgage.

    In the same year the Ministry of Public Security announced Golden Shield, Hong Yanqing entered the ministry’s police university in Beijing. But after seven years of training, having received his bachelor’s and master’s degrees, Hong began to have second thoughts about becoming a policeman. He applied instead to study abroad. By the fall of 2007, he had moved to the Netherlands to begin a PhD in international human rights law, approved and subsidized by the Chinese government.

    Over the next four years, he familiarized himself with the Western practice of law through his PhD research and a series of internships at international organizations. He worked at the International Labor Organization on global workplace discrimination law and the World Health Organization on road safety in China. “It’s a very legalistic culture in the West—that really strikes me. People seem to go to court a lot,” he says. “For example, for human rights law, most of the textbooks are about the significant cases in court resolving human rights issues.”

    Hong found this to be strangely inefficient. He saw going to court as a final resort for patching up the law’s inadequacies, not a principal tool for establishing it in the first place. Legislation crafted more comprehensively and with greater forethought, he believed, would achieve better outcomes than a system patched together through a haphazard accumulation of case law, as in the US.

    After graduating, he carried these ideas back to Beijing in 2012, on the eve of Xi Jinping’s ascent to the presidency. Hong worked at the UN Development Program and then as a journalist for the People’s Daily, the largest newspaper in China, which is owned by the government.

    Xi began to rapidly expand the scope of government censorship. Influential commentators, or “Big Vs”—named for their verified accounts on social media—had grown comfortable criticizing and ridiculing the Chinese Communist Party. In the fall of 2013, the party arrested hundreds of microbloggers for what it described as “malicious rumor-mongering” and paraded a particularly influential one on national television to make an example of him.

    The moment marked the beginning of a new era of censorship. The following year, the Cyberspace Administration of China was founded. The new central agency was responsible for everything involved in internet regulation, including national security, media and speech censorship, and data protection. Hong left the People’s Daily and joined the agency’s department of international affairs. He represented it at the UN and other global bodies and worked on cybersecurity cooperation with other governments.

    By July 2015, the Cyberspace Administration had released a draft of its first law. The Cybersecurity Law, which entered into force in June of 2017, required that companies obtain consent from people to collect their personal information. At the same time, it tightened internet censorship by banning anonymous users—a provision enforced by regular government inspections of data from internet service providers.

    In the spring of 2016, Hong sought to return to academia, but the agency asked him to stay. The Cybersecurity Law had purposely left the regulation of personal data protection vague, but consumer data breaches and theft had reached unbearable levels. A 2016 study by the Internet Society of China found that 84% of those surveyed had suffered some leak of their data, including phone numbers, addresses, and bank account details. This was spurring a growing distrust of digital service providers that required access to personal information, such as ride-hailing, food-delivery, and financial apps. Xu Yuyu’s death poured oil on the flames.

    The government worried that such sentiments would weaken participation in the digital economy, which had become a central part of its strategy for shoring up the country’s slowing economic growth. The advent of GDPR also made the government realize that Chinese tech giants would need to meet global privacy norms in order to expand abroad.

    Hong was put in charge of a new task force that would write a Personal Information Protection Specification (PIPS) to help solve these challenges. The document, though nonbinding, would tell companies how regulators intended to implement the Cybersecurity Law. In the process, the government hoped, it would nudge them to adopt new norms for data protection by themselves.

    Hong’s task force set about translating every relevant document they could find into Chinese. They translated the privacy guidelines put out by the Organization for Economic Cooperation and Development and by its counterpart, the Asia-Pacific Economic Cooperation; they translated GDPR and the California Consumer Privacy Act. They even translated the 2012 White House Consumer Privacy Bill of Rights, introduced by the Obama administration but never made into law. All the while, Hong met regularly with European and American data protection regulators and scholars.

    Bit by bit, from the documents and consultations, a general choice emerged. “People were saying, in very simplistic terms, ‘We have a European model and the US model,’” Hong recalls. The two approaches diverged substantially in philosophy and implementation. Which one to follow became the task force’s first debate.

    At the core of the European model is the idea that people have a fundamental right to have their data protected. GDPR places the burden of proof on data collectors, such as companies, to demonstrate why they need the data. By contrast, the US model privileges industry over consumers. Businesses define for themselves what constitutes reasonable data collection; consumers only get to choose whether to use that business. The laws on data protection are also far more piecemeal than in Europe, divvied up among sectoral regulators and specific states.

    At the time, without a central law or single agency in charge of data protection, China’s model more closely resembled the American one. The task force, however, found the European approach compelling. “The European rule structure, the whole system, is more clear,” Hong says.

    But most of the task force members were representatives from Chinese tech giants, like Baidu, Alibaba, and Huawei, and they felt that GDPR was too restrictive. So they adopted its broad strokes—including its limits on data collection and its requirements on data storage and data deletion—and then loosened some of its language. GDPR’s principle of data minimization, for example, maintains that only necessary data should be collected in exchange for a service. PIPS allows room for other data collection relevant to the service provided.

    PIPS took effect in May 2018, the same month GDPR did. But as Chinese officials watched the US upheaval over the Facebook and Cambridge Analytica scandal, they realized that a nonbinding agreement would not be enough. The Cybersecurity Law didn’t have a strong mechanism for enforcing data protection. Regulators could only fine violators up to 1,000,000 yuan ($140,000), an inconsequential amount for large companies. Soon after, the National People’s Congress, China’s top legislative body, voted to begin drafting a Personal Information Protection Law within its current five-year legislative period, which ends in 2023. It would strengthen data protection provisions, provide for tougher penalties, and potentially create a new enforcement agency.

    After Cambridge Analytica, says Hong, “the government agency understood, ‘Okay, if you don’t really implement or enforce those privacy rules, then you could have a major scandal, even affecting political things.’”

    The local police investigation of Xu Yuyu’s death eventually identified the scammers who had called her. It had been a gang of seven who’d cheated many other victims out of more than 560,000 yuan using illegally obtained personal information. The court ruled that Xu’s death had been a direct result of the stress of losing her family’s savings. Because of this, and his role in orchestrating tens of thousands of other calls, the ringleader, Chen Wenhui, 22, was sentenced to life in prison. The others received sentences of between three and 15 years.

    Emboldened, Chinese media and consumers began more openly criticizing privacy violations. In March 2018, internet search giant Baidu’s CEO, Robin Li, sparked social-media outrage after suggesting that Chinese consumers were willing to “exchange privacy for safety, convenience, or efficiency.” “Nonsense,” wrote a social-media user, later quoted by the People’s Daily. “It’s more accurate to say [it is] impossible to defend [our privacy] effectively.”

    In late October 2019, social-media users once again expressed anger after photos began circulating of a school’s students wearing brainwave-monitoring headbands, supposedly to improve their focus and learning. The local educational authority eventually stepped in and told the school to stop using the headbands because they violated students’ privacy. A week later, a Chinese law professor sued a Hangzhou wildlife zoo for replacing its fingerprint-based entry system with face recognition, saying the zoo had failed to obtain his consent for storing his image.

    But the public’s growing sensitivity to infringements of consumer privacy has not led to many limits on state surveillance, nor even much scrutiny of it. As Maya Wang, a researcher at Human Rights Watch, points out, this is in part because most Chinese citizens don’t know the scale or scope of the government’s operations. In China, as in the US and Europe, there are broad public and national security exemptions to data privacy laws. The Cybersecurity Law, for example, allows the government to demand data from private actors to assist in criminal legal investigations. The Ministry of Public Security also accumulates massive amounts of data on individuals directly. As a result, data privacy in industry can be strengthened without significantly limiting the state’s access to information.

    The onset of the pandemic, however, has disturbed this uneasy balance.

    On February 11, Ant Financial, a financial technology giant headquartered in Hangzhou, a city southwest of Shanghai, released an app-building platform called AliPay Health Code. The same day, the Hangzhou government released an app it had built using the platform. The Hangzhou app asked people to self-report their travel and health information, and then gave them a color code of red, yellow, or green. Suddenly Hangzhou’s 10 million residents were all required to show a green code to take the subway, shop for groceries, or enter a mall. Within a week, local governments in over 100 cities had used AliPay Health Code to develop their own apps. Rival tech giant Tencent quickly followed with its own platform for building them.
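    The color-code mechanism described above can be sketched as a simple rule set. This is an illustrative guess only: the actual AliPay Health Code scoring logic was never published, and the questions and rules below are invented to mirror the behavior the article reports, where self-reported travel and health answers map to red, yellow, or green, and only green permits entry.

    ```python
    def health_code(fever: bool, visited_outbreak_area: bool,
                    contact_with_case: bool) -> str:
        """Map self-reported answers to a red/yellow/green code (assumed rules)."""
        if fever or contact_with_case:
            return "red"      # barred from public spaces
        if visited_outbreak_area:
            return "yellow"   # restricted pending an observation period
        return "green"        # required to enter the subway, shops, or a mall

    def may_enter(code: str) -> bool:
        """Gatekeeping as described for Hangzhou: only green grants access."""
        return code == "green"

    code = health_code(fever=False, visited_outbreak_area=False,
                       contact_with_case=False)  # "green"
    ```

    Even a toy version makes the privacy concern concrete: whoever operates the app sees every self-report and every place where a code is checked.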

    The apps made visible a worrying level of state surveillance and sparked a new wave of public debate. In March, Hu Yong, a journalism professor at Beijing University and an influential blogger on Weibo, argued that the government’s pandemic data collection had crossed a line. Not only had it led to instances of information being stolen, he wrote, but it had also opened the door to such data being used beyond its original purpose. “Has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” he asked.

    Indeed, in late May, leaked documents revealed plans from the Hangzhou government to make a more permanent health-code app that would score citizens on behaviors like exercising, smoking, and sleeping. After a public outcry, city officials canceled the project. That state-run media had also published stories criticizing the app likely helped.

    The debate quickly made its way to the central government. That month, the National People’s Congress announced it intended to fast-track the Personal Information Protection Law. The scale of the data collected during the pandemic had made strong enforcement more urgent, delegates said, and highlighted the need to clarify the scope of the government’s data collection and data deletion procedures during special emergencies. By July, the legislative body had proposed a new “strict approval” process for government authorities to undergo before collecting data from private-sector platforms. The language again remains vague, to be fleshed out later—perhaps through another nonbinding document—but this move “could mark a step toward limiting the broad scope” of existing government exemptions for national security, wrote Sacks and fellow China scholars at New America.

    Hong similarly believes the discrepancy between rules governing industry and government data collection won’t last, and the government will soon begin to limit its own scope. “We cannot simply address one actor while leaving the other out,” he says. “That wouldn’t be a very scientific approach.”

    Other observers disagree. The government could easily make superficial efforts to address public backlash against visible data collection without really touching the core of the Ministry of Public Security’s national operations, says Wang, of Human Rights Watch. She adds that any laws would likely be enforced unevenly: “In Xinjiang, Turkic Muslims have no say whatsoever in how they’re treated.”

    Still, Hong remains an optimist. In July, he started a job teaching law at Beijing University, and he now maintains a blog on cybersecurity and data issues. Monthly, he meets with a budding community of data protection officers in China, who carefully watch how data governance is evolving around the world.

    #criminalité #Nokia_Siemens #fraude #Huawei #payement #Cisco #CambridgeAnalytica/Emerdata #Baidu #Alibaba #domination #bénéfices #BHATX #BigData #lutte #publicité (...)

    ##criminalité ##CambridgeAnalytica/Emerdata ##publicité ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##Nortel_Networks ##Facebook ##biométrie ##consommation ##génétique ##consentement ##facial ##reconnaissance ##empreintes ##Islam ##SocialCreditSystem ##surveillance ##TheGreatFirewallofChina ##HumanRightsWatch

  • A year in surveillance
    https://aboutintel.eu/a-year-in-surveillance

    2020 has been a very turbulent year. This is also true with regard to European surveillance politics, both at the EU level and in national politics. Like most years, it was largely characterised by one central conflict, which in simple terms goes like this: a push for more and more technologically advanced surveillance practices by both industry and government actors on the one hand, and fierce resistance from civil society, academia, and some regulators on the other, attempting to rein (...)

    #Palantir #BND #algorithme #IMSI-catchers #biométrie #police #racisme #technologisme #facial #prédiction #reconnaissance #vidéo-surveillance #COVID-19 #écoutes #santé #surveillance #discrimination #OpenRightsGroup #PrivacyInternational #LaQuadratureduNet (...)

    ##santé ##BigData ##Liberty ##MI5

  • Covid-19 Ushered in a New Era of Government Surveillance
    https://onezero.medium.com/covid-19-ushered-in-a-new-era-of-government-surveillance-414afb7e422

    Government-mandated drone surveillance and location tracking apps could be here to stay. In early December, after finding that 16 people had illegally crossed the border from Myanmar to Thailand and evaded the mandatory quarantine period, the Thai government said it would start patrolling the border with new surveillance equipment like drones and ultraviolet cameras. In 2020, this kind of surveillance, justified by the coronavirus pandemic, has gone mainstream. Since March, more than 30 (...)

    #algorithme #AarogyaSetu_ #Bluetooth #CCTV #drone #smartphone #biométrie #contactTracing #géolocalisation #migration #température #facial #reconnaissance #vidéo-surveillance #COVID-19 #frontières #santé (...)

    ##santé ##surveillance

  • You will never have my data again | il manifesto
    https://ilmanifesto.it/non-avrai-mai-piu-i-miei-dati

    by Simone Pieranni (author of «Red Mirror»)

    China. Under pressure from Chinese public opinion, a metropolis has for the first time presented a draft law to curb facial recognition inside residential complexes.

    In the West we are often struck by the ease with which technology has entered the daily lives of Chinese people. Some aspects of this issue, moreover, concern us directly, and we would do well to observe what is happening in a place so far away, yet still embedded, albeit with its own «characteristics», within a global context.

    One of the most significant of these aspects is undoubtedly facial recognition, now used in many activities and places in the West as well, often without even the consent, or at least the opinion, of the citizenry.

    #Simone_Pieranni #Chine #Reconnaissance_faciale

  • Amazon’s Alexa Can’t Know Everything, But It Can Go Everywhere
    https://www.wired.com/story/everything-is-an-alexa-device-now

    A heap of new Alexa devices—a microwave! a wall clock!—shows Amazon’s strategy to put its voice assistant in everything. On Thursday, Amazon introduced nearly a dozen new Alexa-powered products to the world. Some, like this year’s Echo Dot, were standard upgrades to familiar products. But in the bulk of the newcomers you could see the full payoff of Amazon’s longstanding strategy to put Alexa in more than just speakers. It’s now in nearly everything. Which is exactly where it needs to be if it (...)

    #Google #Amazon #algorithme #Alexa #domotique #Echo #smartphone #wearable #domination #prédiction #reconnaissance #BigData (...)

    ##voix

  • The Facial Recognition Backlash Is Here
    https://onezero.medium.com/the-facial-recognition-backlash-15b5707444f3

    But will the current bans last? The facial recognition industry has been quietly working alongside law enforcement, military organizations, and private companies for years, leveraging 40-year-old partnerships originally centered around fingerprint databases. But in 2020, the industry faced an unexpected reckoning. February brought an explosive New York Times report on Clearview AI, a facial recognition company that had scraped billions of images from social media to create an (...)

    #Clearview #Microsoft #Walmart #IBM #Amazon #biométrie #police #racisme #facial #reconnaissance #discrimination #empreintes #surveillance #algorithme #CCTV #vidéo-surveillance #ACLU (...)

    ##FightfortheFuture

  • Mental health is a crucial issue in contemporary migration

    While migration is a source of hope tied to the discovery of new horizons, new social contexts, and new economic prospects, it is also, to varying degrees, a moment of social and identity rupture that is not without consequences for mental health.

    #Abdelmalek_Sayad, one of the most influential sociologists of migration of recent decades, defined the migrant’s condition as being suspended between two parallel worlds. #Sayad tells us that the migrant is doubly absent: from their place of origin and from their place of arrival.

    As an emigrant, they are projected into a condition made of prospects and, very often, of illusions that distance them from their place of origin. But the migrant is just as absent in their condition as an immigrant, in the processes of #adaptation to a new and often hostile context, a source of much suffering (#souffrances).

    What are the consequences of this #double_absence, and more broadly of this life transition, for migrants’ mental health?

    Migrating implies a loss of social capital (#capital_social)

    To migrate is to leave one social world (#univers_social) for another. The #contacts, exchanges (#échanges), and interpersonal relationships (#relations_interpersonnelles) that sustain each of us are disrupted, fragmented, or even severed during this transition.

    While for some, migration brings a strengthening of social (or economic) capital, in most cases it leads to a loss of social capital. In an interview conducted in 2015, an Afghan asylum seeker underlined this social rupture (#rupture_sociale) and the difficulty of maintaining ties with his country of origin:

    “It is very difficult to leave your country, because it is not only your land you are leaving but your whole life, your family. I am in contact with my family from time to time, but it is difficult because the Taliban often destroy the phone lines, so it is hard to reach them.”

    To counter or avoid this loss of social capital, many transnational networks (#réseaux_transnationaux) and immigrant organizations are created in host countries and play a vital role in migrants’ lives.

    Post-war Italian migration, for example, was marked by a strong organization into communities (#communautés). These migrants created important organizations and networks, notably political and trade union organizations and Catholic and cultural centers, some of which are still active in the countries of the Italian #diaspora.

    The social environment (#environnement_social) and the way receiving societies welcome and include migrants are therefore key elements in the #résilience of these populations in the face of the challenges posed by their life trajectory and migratory journey (#parcours_migratoire). Migrants may indeed encounter situations that endanger their physical and mental health in their place of origin, during transit, and at their final destination.

    This is particularly true for forced migrants, who are often confronted with experiences of detention (#détention), #violence, and #exploitation likely to cause post-traumatic, depressive, and anxiety disorders (#troubles_post-traumatiques). This is the case for the hundreds of thousands of refugees fleeing armed conflicts (#conflits_armés) since 2015, mainly from Syria and sub-Saharan Africa.

    These migrants suffer violence (#violences) throughout their journey, including the violence of the asylum laws of our own societies.

    The social environment is one of the keys to mental health

    In its guidance document “Mental health promotion and mental health care in refugees and migrants”, the World Health Organization (WHO) identifies social integration (#intégration_sociale) as one of the most important areas of intervention for combating mental health problems among migrant populations.

    For the WHO, fighting isolation (#isolement) and promoting integration (#intégration) are key factors, as are interventions aimed at facilitating relations between migrants and care services and at improving clinical practices and treatments.

    However, while belonging to networks within a given social environment is an essential condition for an individual’s mental well-being, it is not a sufficient one.

    The German philosopher #Axel_Honneth notably emphasizes that self-confidence (#confiance_en_soi), self-esteem (#estime_de_soi), and the capacity to open up to society are rooted in the concept of recognition (#reconnaissance). Every individual is driven by the need for their social environment, and the society in which they live, to value their identities (#identités) and grant them a place as a subject of rights (#sujet_de_droit).

    Migrants’ identities must be recognized by society

    In this respect, building new social identities while maintaining an identity continuity (#continuité_identitaire) between life before and after migration allows migrants to reduce the risk of psychological distress (#détresse_psychologique).

    https://www.youtube.com/watch?v=oNC4C4OqomI&feature=emb_logo

    Being discriminated against, excluded, or ostracized because of one’s affiliations and identity profoundly affects mental health. In reaction to this feeling of exclusion (#exclusion) or #discrimination, maintaining positive self-esteem and psychosocial balance (#équilibre_psychosocial) often involves distancing oneself from the discriminating society and withdrawing (#repli) toward other, more supportive groups.

    Legal recognition (#reconnaissance_juridique), a central element

    This principle of recognition operates both in the social sphere and at the legal level. In host societies, migrants must be recognized as holders of civil, social, and political rights.

    Beyond the pragmatic stakes of access to services, protection, or the labor market (#marché_de_l’emploi), obtaining rights and a legal status (#statut_juridique) makes it possible to regain a form of control over the course of one’s life.

    Certain categories of migrants, whether engaged in proceedings to have their rights recognized, such as asylum seekers, or in an irregular situation, such as the undocumented (“#sans-papiers”), often face psychologically difficult situations.

    In this respect, undocumented migrants are almost totally excluded, deprived of their fundamental rights (#droits_fondamentaux) and criminalized by the justice system. Asylum seekers, for their part, are often caught in the bureaucracy (#bureaucratie) of the reception system for unreasonably long periods, living in difficult psychological conditions and sometimes in deep social isolation (#isolement_social). This is well expressed by a young Kenyan migrant we interviewed in 2018 in a Belgian reception facility:

    “I arrived when they opened the [reception center], and I am still here! It has been almost three years now! My first application was rejected, and now, if it is a ‘no’, I will have to leave the territory. […] All these days, these months of waiting, for what? To end up with nothing? To become undocumented? I am going to go mad; I would rather kill myself.”

    Waiting (#attente) for a decision on one’s status, or being denied rights, plunges the individual into insecurity (#insécurité) and into a situation where any plan for the future (#projection) becomes complicated, if not impossible.

    We have argued elsewhere that the heaviness of the procedures and the feeling of dehumanization (#déshumanisation) in the examination of asylum applications cause significant frustrations (#frustrations) among migrants and can affect their well-being (#bien-être) and mental health.

    Migration is a moment of many social and identity ruptures (#ruptures), to which individuals (re)act by mobilizing the resources available in their environment. Providing, nurturing, and building these resources around and with the most vulnerable migrants is therefore a public health (#santé_publique) issue.

    https://theconversation.com/la-sante-mentale-est-un-enjeu-crucial-des-migrations-contemporaines

    #santé_mentale #asile #migrations #réfugiés

    ping @_kg_ @isskein @karine4

  • Watch: Facial recognition at Dubai Metro stations to identify wanted criminals
    https://gulfnews.com/uae/government/watch-facial-recognition-at-dubai-metro-stations-to-identify-wanted-crimin

    Artificial intelligence is being used to secure the public transport sector in Dubai. Dubai: Dubai Police have established a foolproof system to secure the emirate’s public transport sector, an official told Gulf News. Brigadier Obaid Al Hathboor, Director of the Transport Security Department in Dubai, said that police are all set to introduce a facial recognition system on public transport to ensure more security for commuters and residents. The technology is being brought in as the country gears up (...)

    #algorithme #CCTV #biométrie #facial #reconnaissance #vidéo-surveillance #surveillance (...)


  • Marseille’s “Safe City”: we go back on the attack
    https://www.laquadrature.net/2020/12/10/safe-city-de-marseille-on-retourne-a-lattaque

    We are going back on the attack against Marseille’s “Safe City”. The automated video-surveillance project, which we had already tried to challenge, without success, last January, is in fact still going ahead, despite the change of majority at city hall and criticism from the CNIL. We are therefore filing a new appeal before the administrative court of Marseille to stop this dangerous and illegal project, even as the “Sécurité Globale” law being debated in Parliament seeks precisely to (...)

    #algorithme #CCTV #biométrie #facial #reconnaissance #vidéo-surveillance #comportement #surveillance #CNIL #LaQuadratureduNet #température #masque (...)

    ##SmartCity

  • Huawei Reportedly Tested a ‘Uighur Alarm’ to Track Chinese Ethnic Minorities With Facial Recognition
    https://onezero.medium.com/huawei-reportedly-tested-a-uighur-alarm-to-track-chinese-ethnic-mino

    The system also identifies information such as age and sex. Chinese tech giants Huawei and Megvii have allegedly tested software that could identify Uighurs, an ethnic minority in China, according to a new report from the Washington Post and the video surveillance trade publication IPVM. The system being tested tried to identify not only whether a person was Uighur but also information such as their age and sex. If the system detected a Uighur person, it could notify government authorities with a (...)

    #Dahua #Hikvision #Huawei #Megvii #algorithme #CCTV #biométrie #génétique #racisme #facial #reconnaissance #vidéo-surveillance #discrimination #Islam (...)

    ##surveillance

  • Facebook is fighting biometric facial recognition privacy laws.
    https://slate.com/technology/2017/08/facebook-is-fighting-biometric-facial-recognition-privacy-laws.html

    There’s a court case in Illinois that challenges Facebook’s collection of biometric data without users’ permission, and the social media giant is fighting tooth and nail to defend itself. Carlos Licata, one of the plaintiffs on the case, sued Facebook in 2015 under a unique Illinois law, the Biometric Information Privacy Act, which says that no private company can collect or store a person’s biometric information without prior notification and consent. If companies do collect data without (...)

    #CBP #Facebook #algorithme #biométrie #consentement #émotions #facial #législation #reconnaissance #lobbying #publicité #surveillance (...)

    ##publicité