person:tristan harris

  • Hooked on smartphones: six whistleblowers we urgently need to hear - Médias / Net - Télérama.fr
    https://www.telerama.fr/medias/accros-aux-smartphones-six-lanceurs-dalerte-a-ecouter-de-toute-urgence,n591

    “Does smartphone overuse make us stupid?” is the question on the cover of Télérama this week. In the United States, whistleblowers out of Silicon Valley are searching for the best way to answer it, and they keep insisting on the same message: when it comes to screens, we are all vulnerable.

    In 2016, Tristan Harris, a Google engineer and designer (then 31), decided to post on an internal server a long memo voicing his doubts about the work of the team he led (and, more broadly, of the company employing him). A specialist in the design of alerts and notifications (the signals that follow us everywhere now that our mobile phones have become handheld computers), Harris felt that Google was going too far in “the war for attention”.

    “Dear colleagues (...) helping people manage their inboxes and their news feeds is all well and good. But doing everything possible to grab their attention around the clock: is that ethical?” Within 24 hours, the memo had made its way around the company; many at Google agreed with him but did not dare say so. Since then, Tristan Harris has done a great deal. He resigned. He gave a widely noted interview, for the programme 60 Minutes in April 2017, in which he compared the relationship millions of users have with their smartphone to that of casino gamblers with slot machines, and did not hesitate to speak of “brain hacking”.

    He then launched an advocacy group, the Center for Humane Technology, based in San Francisco, with the aim of alerting the general public to the risks of screen addiction. It is a fight now carried by a growing number of voices in the United States, among them the six whistleblowers presented here.

    #Economie_attention #Ecologie_attention #Nudge #Smartphone

  • High score, low pay : why the gig economy loves gamification | Business | The Guardian
    https://www.theguardian.com/business/2018/nov/20/high-score-low-pay-gamification-lyft-uber-drivers-ride-hailing-gig-econ

    Using ratings, competitions and bonuses to incentivise workers isn’t new – but as I found when I became a Lyft driver, the gig economy is taking it to another level.

    Every week, it sends its drivers a personalised “Weekly Feedback Summary”. This includes passenger comments from the previous week’s rides and a freshly calculated driver rating. It also contains a bar graph showing how a driver’s current rating “stacks up” against previous weeks, and tells them whether they have been “flagged” for cleanliness, friendliness, navigation or safety.

    At first, I looked forward to my summaries; for the most part, they were a welcome boost to my self-esteem. My rating consistently fluctuated between 4.89 stars and 4.96 stars, and the comments said things like: “Good driver, positive attitude” and “Thanks for getting me to the airport on time!!” There was the occasional critique, such as “She weird”, or just “Attitude”, but overall, the comments served as a kind of positive reinforcement mechanism. I felt good knowing that I was helping people and that people liked me.

    But one week, after completing what felt like a million rides, I opened my feedback summary to discover that my rating had plummeted from a 4.91 (“Awesome”) to a 4.79 (“OK”), without comment. Stunned, I combed through my ride history trying to recall any unusual interactions or disgruntled passengers. Nothing. What happened? What did I do? I felt sick to my stomach.

    Because driver ratings are calculated using your last 100 passenger reviews, one logical solution is to crowd out the old, bad ratings with new, presumably better ratings as fast as humanly possible. And that is exactly what I did.
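    The mechanics of that “crowding out” are easy to sketch. Assuming, purely for illustration, that the rating is a simple mean over a rolling window of the last 100 reviews (Lyft’s exact formula is not public), a handful of bad reviews can be flushed out just by completing enough well-rated rides:

```python
from collections import deque

# Hypothetical example: 10 one-star reviews, now the oldest in the
# window, buried under 90 five-star reviews.
window = deque([1.0] * 10 + [5.0] * 90, maxlen=100)
print(round(sum(window) / len(window), 2))  # 4.6 -- "OK", not "Awesome"

# Each new five-star ride pushes the oldest review out of the window;
# ten more well-rated rides and the bad marks are gone entirely.
for _ in range(10):
    window.append(5.0)
print(round(sum(window) / len(window), 2))  # 5.0
```

    The fixed-size window is what makes the strategy work: nothing older than the last 100 reviews counts, so speed, not contrition, is the rational response to a bad week.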

    In a certain sense, Kalanick is right. Unlike employees in a spatially fixed worksite (the factory, the office, the distribution centre), rideshare drivers are technically free to choose when they work, where they work and for how long. They are liberated from the constraining rhythms of conventional employment or shift work. But that apparent freedom poses a unique challenge to the platforms’ need to provide reliable, “on demand” service to their riders – and so a driver’s freedom has to be aggressively, if subtly, managed. One of the main ways these companies have sought to do this is through the use of gamification.

    Simply defined, gamification is the use of game elements – point-scoring, levels, competition with others, measurable evidence of accomplishment, ratings and rules of play – in non-game contexts. Games deliver an instantaneous, visceral experience of success and reward, and they are increasingly used in the workplace to promote emotional engagement with the work process, to increase workers’ psychological investment in completing otherwise uninspiring tasks, and to influence, or “nudge”, workers’ behaviour. This is what my weekly feedback summary, my starred ratings and other gamified features of the Lyft app did.

    There is a growing body of evidence to suggest that gamifying business operations has real, quantifiable effects. Target, the US-based retail giant, reports that gamifying its in-store checkout process has resulted in lower customer wait times and shorter lines. During checkout, a cashier’s screen flashes green if items are scanned at an “optimum rate”. If the cashier goes too slowly, the screen flashes red. Scores are logged and cashiers are expected to maintain an 88% green rating. In online communities for Target employees, cashiers compare scores, share techniques, and bemoan the game’s most challenging obstacles.

    But colour-coding checkout screens is a pretty rudimentary kind of gamification. In the world of ride-hailing work, where almost the entirety of one’s activity is prompted and guided by a screen – and where everything can be measured, logged and analysed – there are few limitations on what can be gamified.

    Every Sunday morning, I receive an algorithmically generated “challenge” from Lyft that goes something like this: “Complete 34 rides between the hours of 5am on Monday and 5am on Sunday to receive a $63 bonus.” I scroll down, concerned about the declining value of my bonuses, which once hovered around $100-$220 per week, but have now dropped to less than half that.

    “Click here to accept this challenge.” I tap the screen to accept. Now, whenever I log into driver mode, a stat meter will appear showing my progress: only 21 more rides before I hit my first bonus.

    In addition to enticing drivers to show up when and where demand hits, one of the main goals of this gamification is worker retention. According to Uber, 50% of drivers stop using the application within their first two months, and a recent report from the Institute of Transportation Studies at the University of California in Davis suggests that just 4% of ride-hail drivers make it past their first year.

    Before Lyft rolled out weekly ride challenges, there was the “Power Driver Bonus”, a weekly challenge that required drivers to complete a set number of regular rides. I sometimes worked more than 50 hours per week trying to secure my PDB, which often meant driving in unsafe conditions, at irregular hours and accepting nearly every ride request, including those that felt potentially dangerous (I am thinking specifically of an extremely drunk and visibly agitated late-night passenger).

    Of course, this was largely motivated by a real need for a boost in my weekly earnings. But, in addition to a hope that I would somehow transcend Lyft’s crappy economics, the intensity with which I pursued my PDBs was also the result of what Burawoy observed four decades ago: a bizarre desire to beat the game.

    Former Google “design ethicist” Tristan Harris has also described how the “pull-to-refresh” mechanism used in most social media feeds mimics the clever architecture of a slot machine: users never know when they are going to experience gratification – a dozen new likes or retweets – but they know that gratification will eventually come. This unpredictability is addictive: behavioural psychologists have long understood that gambling uses variable reinforcement schedules – unpredictable intervals of uncertainty, anticipation and feedback – to condition players into playing just one more round.
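    The schedule itself is simple to model. As an illustration only (not a claim about how any particular app is implemented), the sketch below pays out a “reward” with fixed probability on each action, which produces the unpredictable gaps between payoffs that define a variable-ratio schedule:

```python
import random

def variable_ratio_rewards(pulls, mean_ratio=5, seed=0):
    """Simulate a variable-ratio reinforcement schedule: each action
    pays off with probability 1/mean_ratio, so the gap between one
    reward and the next is unpredictable."""
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(pulls):
        since_last += 1
        if rng.random() < 1 / mean_ratio:
            gaps.append(since_last)  # actions it took to earn this reward
            since_last = 0
    return gaps

gaps = variable_ratio_rewards(1000)
# The average gap hovers near mean_ratio, but individual gaps vary
# widely: the player can never predict when the next payoff lands.
print("rewards:", len(gaps), "shortest gap:", min(gaps),
      "longest gap:", max(gaps))
```

    The behavioural claim in the article maps onto the spread of those gaps: a fixed schedule (reward every fifth action) is easy to disengage from, while the variable one keeps the next payoff perpetually “maybe one action away”.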

    It is not uncommon to hear ride-hailing drivers compare even the mundane act of operating their vehicles to the immersive and addictive experience of playing a video game or a slot machine. In an article published by the Financial Times, long-time driver Herb Croakley put it perfectly: “It gets to a point where the app sort of takes over your motor functions in a way. It becomes almost like a hypnotic experience. You can talk to drivers and you’ll hear them say things like, I just drove a bunch of Uber pools for two hours, I probably picked up 30–40 people and I have no idea where I went. In that state, they are literally just listening to the sounds [of the driver’s apps]. Stopping when they said stop, pick up when they say pick up, turn when they say turn. You get into a rhythm of that, and you begin to feel almost like an android.”

    In their foundational text Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers, Alex Rosenblat and Luke Stark write: “Uber’s self-proclaimed role as a connective intermediary belies the important employment structures and hierarchies that emerge through its software and interface design.” “Algorithmic management” is the term Rosenblat and Stark use to describe the mechanisms through which Uber and Lyft drivers are directed. To be clear, there is no singular algorithm. Rather, there are a number of algorithms operating and interacting with one another at any given moment. Taken together, they produce a seamless system of automatic decision-making that requires very little human intervention.

    For many on-demand platforms, algorithmic management has completely replaced the decision-making roles previously occupied by shift supervisors, foremen and middle- to upper- level management. Uber actually refers to its algorithms as “decision engines”. These “decision engines” track, log and crunch millions of metrics every day, from ride frequency to the harshness with which individual drivers brake. It then uses these analytics to deliver gamified prompts perfectly matched to drivers’ data profiles.

    To increase the prospect of surge pricing, drivers in online forums regularly propose deliberate, coordinated, mass “log-offs” with the expectation that a sudden drop in available drivers will “trick” the algorithm into generating higher surges. I have never seen one work, but the authors of a recently published paper say that mass log-offs are occasionally successful.

    Viewed from another angle, though, mass log-offs can be understood as good, old-fashioned work stoppages. The temporary and purposeful cessation of work as a form of protest is the core of strike action, and remains the sharpest weapon workers have to fight exploitation. But the ability to log-off en masse has not assumed a particularly emancipatory function.

    After weeks of driving like a maniac in order to restore my higher-than-average driver rating, I managed to raise it back up to a 4.93. Although it felt great, it is almost shameful and astonishing to admit that one’s rating, so long as it stays above 4.6, has no actual bearing on anything other than your sense of self-worth. You do not receive a weekly bonus for being a highly rated driver. Your rate of pay does not increase for being a highly rated driver. In fact, I was losing money trying to flatter customers with candy and keep my car scrupulously clean. And yet, I wanted to be a highly rated driver.

    And this is the thing that is so brilliant and awful about the gamification of Lyft and Uber: it preys on our desire to be of service, to be liked, to be good. On weeks that I am rated highly, I am more motivated to drive. On weeks that I am rated poorly, I am more motivated to drive. It works on me, even though I know better. To date, I have completed more than 2,200 rides.

    #Lyft #Uber #Travail #Psychologie_comportementale #Gamification #Néo_management #Lutte_des_classes

  • Push #notifications Are Not That Bad, You Just Need To Take Control Again
    https://hackernoon.com/push-notifications-are-not-that-bad-you-just-need-to-take-control-again-

    Just a year ago, the first thing I did when I woke up was pick up my phone and instantly review my notifications. Despite what Tristan Harris said about tech hijacking my morning routine, I was still doing it. I didn’t want to check my social media in front of my employees. I wanted to show an investor or client who had emailed me overnight that I was working early in the morning. I needed to know if an important email would impact my morning meetings. As a startup founder, I always had a good reason to do it. And it seems that I’m not the only one. According to a recent survey from the tech analyst company ReportLinker, 46% of Americans admitted to checking their #smartphones before they even get out of bed in the morning. How did that happen? Push Notifications Become Part Of Our Daily (...)

    #mobile #attention #attention-economy

  • Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy? | The New Yorker
    https://www.newyorker.com/magazine/2018/09/17/can-mark-zuckerberg-fix-facebook-before-it-breaks-democracy

    Since 2011, Zuckerberg has lived in a century-old white clapboard Craftsman in the Crescent Park neighborhood, an enclave of giant oaks and historic homes not far from Stanford University. The house, which cost seven million dollars, affords him a sense of sanctuary. It’s set back from the road, shielded by hedges, a wall, and mature trees. Guests enter through an arched wooden gate and follow a long gravel path to a front lawn with a saltwater pool in the center. The year after Zuckerberg bought the house, he and his longtime girlfriend, Priscilla Chan, held their wedding in the back yard, which encompasses gardens, a pond, and a shaded pavilion. Since then, they have had two children, and acquired a seven-hundred-acre estate in Hawaii, a ski retreat in Montana, and a four-story town house on Liberty Hill, in San Francisco. But the family’s full-time residence is here, a ten-minute drive from Facebook’s headquarters.

    Occasionally, Zuckerberg records a Facebook video from the back yard or the dinner table, as is expected of a man who built his fortune exhorting employees to keep “pushing the world in the direction of making it a more open and transparent place.” But his appetite for personal openness is limited. Although Zuckerberg is the most famous entrepreneur of his generation, he remains elusive to everyone but a small circle of family and friends, and his efforts to protect his privacy inevitably attract attention. The local press has chronicled his feud with a developer who announced plans to build a mansion that would look into Zuckerberg’s master bedroom. After a legal fight, the developer gave up, and Zuckerberg spent forty-four million dollars to buy the houses surrounding his. Over the years, he has come to believe that he will always be the subject of criticism. “We’re not—pick your noncontroversial business—selling dog food, although I think that people who do that probably say there is controversy in that, too, but this is an inherently cultural thing,” he told me, of his business. “It’s at the intersection of technology and psychology, and it’s very personal.”

    At the same time, former Facebook executives, echoing a growing body of research, began to voice misgivings about the company’s role in exacerbating isolation, outrage, and addictive behaviors. One of the largest studies, published last year in the American Journal of Epidemiology, followed the Facebook use of more than five thousand people over three years and found that higher use correlated with self-reported declines in physical health, mental health, and life satisfaction. At an event in November, 2017, Sean Parker, Facebook’s first president, called himself a “conscientious objector” to social media, saying, “God only knows what it’s doing to our children’s brains.” A few days later, Chamath Palihapitiya, the former vice-president of user growth, told an audience at Stanford, “The short-term, dopamine-driven feedback loops that we have created are destroying how society works—no civil discourse, no coöperation, misinformation, mistruth.” Palihapitiya, a prominent Silicon Valley figure who worked at Facebook from 2007 to 2011, said, “I feel tremendous guilt. I think we all knew in the back of our minds.” Of his children, he added, “They’re not allowed to use this shit.” (Facebook replied to the remarks in a statement, noting that Palihapitiya had left six years earlier, and adding, “Facebook was a very different company back then.”)

    In March, Facebook was confronted with an even larger scandal: the Times and the British newspaper the Observer reported that a researcher had gained access to the personal information of Facebook users and sold it to Cambridge Analytica, a consultancy hired by Trump and other Republicans which advertised using “psychographic” techniques to manipulate voter behavior. In all, the personal data of eighty-seven million people had been harvested. Moreover, Facebook had known of the problem since December of 2015 but had said nothing to users or regulators. The company acknowledged the breach only after the press discovered it.

    We spoke at his home, at his office, and by phone. I also interviewed four dozen people inside and outside the company about its culture, his performance, and his decision-making. I found Zuckerberg straining, not always coherently, to grasp problems for which he was plainly unprepared. These are not technical puzzles to be cracked in the middle of the night but some of the subtlest aspects of human affairs, including the meaning of truth, the limits of free speech, and the origins of violence.

    Zuckerberg is now at the center of a full-fledged debate about the moral character of Silicon Valley and the conscience of its leaders. Leslie Berlin, a historian of technology at Stanford, told me, “For a long time, Silicon Valley enjoyed an unencumbered embrace in America. And now everyone says, Is this a trick? And the question Mark Zuckerberg is dealing with is: Should my company be the arbiter of truth and decency for two billion people? Nobody in the history of technology has dealt with that.”

    In 2002, Zuckerberg went to Harvard, where he embraced the hacker mystique, which celebrates brilliance in pursuit of disruption. “The ‘fuck you’ to those in power was very strong,” the longtime friend said. In 2004, as a sophomore, he embarked on the project whose origin story is now well known: the founding of Thefacebook.com with four fellow-students (“the” was dropped the following year); the legal battles over ownership, including a suit filed by twin brothers, Cameron and Tyler Winklevoss, accusing Zuckerberg of stealing their idea; the disclosure of embarrassing messages in which Zuckerberg mocked users for giving him so much data (“they ‘trust me.’ dumb fucks,” he wrote); his regrets about those remarks, and his efforts, in the years afterward, to convince the world that he has left that mind-set behind.

    New hires learned that a crucial measure of the company’s performance was how many people had logged in to Facebook on six of the previous seven days, a measurement known as L6/7. “You could say it’s how many people love this service so much they use it six out of seven days,” Parakilas, who left the company in 2012, said. “But, if your job is to get that number up, at some point you run out of good, purely positive ways. You start thinking about ‘Well, what are the dark patterns that I can use to get people to log back in?’ ”
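    As a rough illustration of what a metric like L6/7 measures (the definition here is inferred from Parakilas’s description, not Facebook’s internal one), it can be computed as the share of users active on at least six of the trailing seven days:

```python
from datetime import date, timedelta

def l6_of_7(login_days_by_user, today):
    """Share of users who logged in on at least 6 of the last 7 days.
    A sketch of the 'L6/7' engagement metric described above; the
    real internal definition may differ."""
    window = {today - timedelta(days=i) for i in range(7)}
    hits = sum(
        1 for days in login_days_by_user.values()
        if len(window & set(days)) >= 6
    )
    return hits / len(login_days_by_user)

# Hypothetical login histories for three users.
today = date(2018, 9, 17)
logins = {
    "ana":  [today - timedelta(days=i) for i in range(7)],   # all 7 days
    "ben":  [today - timedelta(days=i) for i in range(6)],   # 6 of 7 days
    "cleo": [today - timedelta(days=i) for i in (0, 3, 5)],  # only 3 days
}
print(l6_of_7(logins, today))  # 2 of the 3 users qualify
```

    Framed this way, the “dark patterns” worry is visible in the metric itself: anything that converts a 5-day user into a 6-day user, by whatever psychological lever, moves the number up.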

    Facebook engineers became a new breed of behaviorists, tweaking levers of vanity and passion and susceptibility. The real-world effects were striking. In 2012, when Chan was in medical school, she and Zuckerberg discussed a critical shortage of organs for transplant, inspiring Zuckerberg to add a small, powerful nudge on Facebook: if people indicated that they were organ donors, it triggered a notification to friends, and, in turn, a cascade of social pressure. Researchers later found that, on the first day the feature appeared, it increased official organ-donor enrollment more than twentyfold nationwide.

    Sean Parker later described the company’s expertise as “exploiting a vulnerability in human psychology.” The goal: “How do we consume as much of your time and conscious attention as possible?” Facebook engineers discovered that people find it nearly impossible not to log in after receiving an e-mail saying that someone has uploaded a picture of them. Facebook also discovered its power to affect people’s political behavior. Researchers found that, during the 2010 midterm elections, Facebook was able to prod users to vote simply by feeding them pictures of friends who had already voted, and by giving them the option to click on an “I Voted” button. The technique boosted turnout by three hundred and forty thousand people—more than four times the number of votes separating Trump and Clinton in key states in the 2016 race. It became a running joke among employees that Facebook could tilt an election just by choosing where to deploy its “I Voted” button.

    These powers of social engineering could be put to dubious purposes. In 2012, Facebook data scientists used nearly seven hundred thousand people as guinea pigs, feeding them happy or sad posts to test whether emotion is contagious on social media. (They concluded that it is.) When the findings were published, in the Proceedings of the National Academy of Sciences, they caused an uproar among users, many of whom were horrified that their emotions may have been surreptitiously manipulated. In an apology, one of the scientists wrote, “In hindsight, the research benefits of the paper may not have justified all of this anxiety.”

    Facebook was, in the words of Tristan Harris, a former design ethicist at Google, becoming a pioneer in “persuasive technology.”

    Facebook had adopted a buccaneering motto, “Move fast and break things,” which celebrated the idea that it was better to be flawed and first than careful and perfect. Andrew Bosworth, a former Harvard teaching assistant who is now one of Zuckerberg’s longest-serving lieutenants and a member of his inner circle, explained, “A failure can be a form of success. It’s not the form you want, but it can be a useful thing to how you learn.” In Zuckerberg’s view, skeptics were often just fogies and scolds. “There’s always someone who wants to slow you down,” he said in a commencement address at Harvard last year. “In our society, we often don’t do big things because we’re so afraid of making mistakes that we ignore all the things wrong today if we do nothing. The reality is, anything we do will have issues in the future. But that can’t keep us from starting.”

    In contrast to a traditional foundation, an L.L.C. can lobby and give money to politicians, without as strict a legal requirement to disclose activities. In other words, rather than trying to win over politicians and citizens in places like Newark, Zuckerberg and Chan could help elect politicians who agree with them, and rally the public directly by running ads and supporting advocacy groups. (A spokesperson for C.Z.I. said that it has given no money to candidates; it has supported ballot initiatives through a 501(c)(4) social-welfare organization.) “The whole point of the L.L.C. structure is to allow a coördinated attack,” Rob Reich, a co-director of Stanford’s Center on Philanthropy and Civil Society, told me. The structure has gained popularity in Silicon Valley but has been criticized for allowing wealthy individuals to orchestrate large-scale social agendas behind closed doors. Reich said, “There should be much greater transparency, so that it’s not dark. That’s not a criticism of Mark Zuckerberg. It’s a criticism of the law.”

    The question of languages is fundamental when it comes to social networks

    Beginning in 2013, a series of experts on Myanmar met with Facebook officials to warn them that it was fuelling attacks on the Rohingya. David Madden, an entrepreneur based in Myanmar, delivered a presentation to officials at the Menlo Park headquarters, pointing out that the company was playing a role akin to that of the radio broadcasts that spread hatred during the Rwandan genocide. In 2016, C4ADS, a Washington-based nonprofit, published a detailed analysis of Facebook usage in Myanmar, and described a “campaign of hate speech that actively dehumanizes Muslims.” Facebook officials said that they were hiring more Burmese-language reviewers to take down dangerous content, but the company repeatedly declined to say how many had actually been hired. By last March, the situation had become dire: almost a million Rohingya had fled the country, and more than a hundred thousand were confined to internal camps. The United Nations investigator in charge of examining the crisis, which the U.N. has deemed a genocide, said, “I’m afraid that Facebook has now turned into a beast, and not what it was originally intended.” Afterward, when pressed, Zuckerberg repeated the claim that Facebook was “hiring dozens” of additional Burmese-language content reviewers.

    More than three months later, I asked Jes Kaliebe Petersen, the C.E.O. of Phandeeyar, a tech hub in Myanmar, if there had been any progress. “We haven’t seen any tangible change from Facebook,” he told me. “We don’t know how much content is being reported. We don’t know how many people at Facebook speak Burmese. The situation is getting worse and worse here.”

    I saw Zuckerberg the following morning, and asked him what was taking so long. He replied, “I think, fundamentally, we’ve been slow at the same thing in a number of areas, because it’s actually the same problem. But, yeah, I think the situation in Myanmar is terrible.” It was a frustrating and evasive reply. I asked him to specify the problem. He said, “Across the board, the solution to this is we need to move from what is fundamentally a reactive model to a model where we are using technical systems to flag things to a much larger number of people who speak all the native languages around the world and who can just capture much more of the content.”

    Reading newspapers, or aggregators?

    I once asked Zuckerberg what he reads to get the news. “I probably mostly read aggregators,” he said. “I definitely follow Techmeme”—a roundup of headlines about his industry—“and the media and political equivalents of that, just for awareness.” He went on, “There’s really no newspaper that I pick up and read front to back. Well, that might be true of most people these days—most people don’t read the physical paper—but there aren’t many news Web sites where I go to browse.”

    A couple of days later, he called me and asked to revisit the subject. “I felt like my answers were kind of vague, because I didn’t necessarily feel like it was appropriate for me to get into which specific organizations or reporters I read and follow,” he said. “I guess what I tried to convey, although I’m not sure if this came across clearly, is that the job of uncovering new facts and doing it in a trusted way is just an absolutely critical function for society.”

    Zuckerberg and Sandberg have attributed their mistakes to excessive optimism, a blindness to the darker applications of their service. But that explanation ignores their fixation on growth, and their unwillingness to heed warnings. Zuckerberg resisted calls to reorganize the company around a new understanding of privacy, or to reconsider the depth of data it collects for advertisers.

    Antitrust

    In barely two years, the mood in Washington had shifted. Internet companies and entrepreneurs, formerly valorized as the vanguard of American ingenuity and the astronauts of our time, were being compared to Standard Oil and other monopolists of the Gilded Age. This spring, the Wall Street Journal published an article that began, “Imagine a not-too-distant future in which trustbusters force Facebook to sell off Instagram and WhatsApp.” It was accompanied by a sepia-toned illustration in which portraits of Zuckerberg, Tim Cook, and other tech C.E.O.s had been grafted onto overstuffed torsos meant to evoke the robber barons. In 1915, Louis Brandeis, the reformer and future Supreme Court Justice, testified before a congressional committee about the dangers of corporations large enough that they could achieve a level of near-sovereignty “so powerful that the ordinary social and industrial forces existing are insufficient to cope with it.” He called this the “curse of bigness.” Tim Wu, a Columbia law-school professor and the author of a forthcoming book inspired by Brandeis’s phrase, told me, “Today, no sector exemplifies more clearly the threat of bigness to democracy than Big Tech.” He added, “When a concentrated private power has such control over what we see and hear, it has a power that rivals or exceeds that of elected government.”

    When I asked Zuckerberg whether policymakers might try to break up Facebook, he replied, adamantly, that such a move would be a mistake. The field is “extremely competitive,” he told me. “I think sometimes people get into this mode of ‘Well, there’s not, like, an exact replacement for Facebook.’ Well, actually, that makes it more competitive, because what we really are is a system of different things: we compete with Twitter as a broadcast medium; we compete with Snapchat as a broadcast medium; we do messaging, and iMessage is default-installed on every iPhone.” He acknowledged the deeper concern. “There’s this other question, which is just, laws aside, how do we feel about these tech companies being big?” he said. But he argued that efforts to “curtail” the growth of Facebook or other Silicon Valley heavyweights would cede the field to China. “I think that anything that we’re doing to constrain them will, first, have an impact on how successful we can be in other places,” he said. “I wouldn’t worry in the near term about Chinese companies or anyone else winning in the U.S., for the most part. But there are all these places where there are day-to-day more competitive situations—in Southeast Asia, across Europe, Latin America, lots of different places.”

    The rough consensus in Washington is that regulators are unlikely to try to break up Facebook. The F.T.C. will almost certainly fine the company for violations, and may consider blocking it from buying big potential competitors, but, as a former F.T.C. commissioner told me, “in the United States you’re allowed to have a monopoly position, as long as you achieve it and maintain it without doing illegal things.”

    Facebook is encountering tougher treatment in Europe, where antitrust laws are stronger and the history of fascism makes people especially wary of intrusions on privacy. One of the most formidable critics of Silicon Valley is the European Union’s top antitrust regulator, Margrethe Vestager.

    In Vestager’s view, a healthy market should produce competitors to Facebook that position themselves as ethical alternatives, collecting less data and seeking a smaller share of user attention. “We need social media that will allow us to have a nonaddictive, advertising-free space,” she said. “You’re more than welcome to be successful and to dramatically outgrow your competitors if customers like your product. But, if you grow to be dominant, you have a special responsibility not to misuse your dominant position to make it very difficult for others to compete against you and to attract potential customers. Of course, we keep an eye on it. If we get worried, we will start looking.”

    Moderation

    As hard as it is to curb election propaganda, Zuckerberg’s most intractable problem may lie elsewhere—in the struggle over which opinions can appear on Facebook, which cannot, and who gets to decide. As an engineer, Zuckerberg never wanted to wade into the realm of content. Initially, Facebook tried blocking certain kinds of material, such as posts featuring nudity, but it was forced to create long lists of exceptions, including images of breast-feeding, “acts of protest,” and works of art. Once Facebook became a venue for political debate, the problem exploded. In April, in a call with investment analysts, Zuckerberg said glumly that it was proving “easier to build an A.I. system to detect a nipple than what is hate speech.”

    The cult of growth leads to the curse of bigness: every day, a billion things were being posted to Facebook. At any given moment, a Facebook “content moderator” was deciding whether a post in, say, Sri Lanka met the standard of hate speech or whether a dispute over Korean politics had crossed the line into bullying. Zuckerberg sought to avoid banning users, preferring to be a “platform for all ideas.” But he needed to prevent Facebook from becoming a swamp of hoaxes and abuse. His solution was to ban “hate speech” and impose lesser punishments for “misinformation,” a broad category that ranged from crude deceptions to simple mistakes. Facebook tried to develop rules about how the punishments would be applied, but each idiosyncratic scenario prompted more rules, and over time they became byzantine. According to Facebook training slides published by the Guardian last year, moderators were told that it was permissible to say “You are such a Jew” but not permissible to say “Irish are the best, but really French sucks,” because the latter was defining another people as “inferiors.” Users could not write “Migrants are scum,” because it is dehumanizing, but they could write “Keep the horny migrant teen-agers away from our daughters.” The distinctions were explained to trainees in arcane formulas such as “Not Protected + Quasi protected = not protected.”

    It will hardly be the last quandary of this sort. Facebook’s free-speech dilemmas have no simple answers—you don’t have to be a fan of Alex Jones to be unnerved by the company’s extraordinary power to silence a voice when it chooses, or, for that matter, to amplify others, to pull the levers of what we see, hear, and experience. Zuckerberg is hoping to erect a scalable system, an orderly decision tree that accounts for every eventuality and exception, but the boundaries of speech are a bedevilling problem that defies mechanistic fixes. The Supreme Court, defining obscenity, landed on “I know it when I see it.” For now, Facebook is making do with a Rube Goldberg machine of policies and improvisations, and opportunists are relishing it. Senator Ted Cruz, Republican of Texas, seized on the ban of Jones as a fascist assault on conservatives. In a moment that was rich even by Cruz’s standards, he quoted Martin Niemöller’s famous lines about the Holocaust, saying, “As the poem goes, you know, ‘First they came for Alex Jones.’ ”

    #Facebook #Histoire_numérique

  • The Cleaners - Les nettoyeurs du Web

    Who moderates our online content? Do social networks fuel the spread of hate? From the Philippines to Silicon Valley, an exhaustive and brutal investigation into violence in the Web era.

    Ignore or delete? Social-network moderators each ask themselves that question twenty-five thousand times a day. In the Philippines, hundreds of them do this work, which Facebook outsources to a multinational: purging the Net of its most violent images. From child pornography to terrorist beheadings, by way of self-harm or simple nudity (banned under the big platforms' charters), the psychological impact of the harshest images, the daily lot of these worker bees of the Web, is as violent as it is ignored by Silicon Valley, for which output comes before everything else. But the moderation rules imposed on them quickly reach their limits once art or politics enters the picture. Where is the line between moderation and censorship? Should war images be "cleaned" off the networks when they document conflicts? When President Erdogan's administration demands that the social-media giants remove opposition content it deems terrorist, on pain of having their sites blocked in Turkey, why do the companies comply? How can one not see a coldly mercantile logic at work?

    The sickness of the 21st century
    What is the best way to build an audience? "Outrage," answers Tristan Harris, a former Google executive. By privileging shocking content, social networks, now the sole source of news for a growing number of users, impose their segmented vision on their audiences, polarizing hatred and unleashing very real violence. Such is the paradox of these new masters of the Web, who wear out their subcontractors purging the networks while building algorithms in the service of anger. A 21st-century sickness intelligently explained by Hans Block and Moritz Riesewieck, who, from the Philippines to Silicon Valley, examine both ends of the chain in a damning, exhaustive, and fascinating documentary.

    https://www.arte.tv/fr/videos/069881-000-A/les-nettoyeurs-du-web-the-cleaners
    #reseaux_sociaux #google #facebook #twitter #youtube #morale #ethique #liberte #expression

  • Attention: a political question?
    http://www.internetactu.net/2018/06/04/lattention-une-question-politique

    The American ethical designer Tristan Harris (@tristanharris) was one of the guests at the Tech for Good summit convened by Emmanuel Macron at the Élysée. He was one of the few representatives of any "civil society" in an exclusively entrepreneurial gathering that mainly served as a stage for announcements about the development of (...)

    #Articles #Débats #attentionbydesign #design #économie_de_l'attention

  • It’s time to rebuild the web - O’Reilly Media
    https://www.oreilly.com/ideas/its-time-to-rebuild-the-web

    The web was never supposed to be a few walled gardens of concentrated content owned by Facebook, YouTube, Twitter, and a few other major publishers. It was supposed to be a cacophony of different sites and voices. And it would be easy to rebuild this cacophony—indeed, it never really died. There are plenty of individual sites out there still, and they provide some (should I say most?) of the really valuable content on the web. The problem with the megasites is that they select and present “relevant” content to us. Much as we may complain about Facebook, selecting relevant content from an ocean of random sites is an important service. It’s easy for me to imagine relatives and friends building their own sites for baby pictures, announcements, and general talk. That’s what we did in the 90s. But would we go to the trouble of reading all those sites? Probably not. I didn’t in the 90s, and neither did you.

    Yes, there would still be plenty of sites for every conspiracy theory and propaganda project around; but in a world where you choose what you see rather than letting a third party decide for you, these sites would have trouble gaining momentum.

    I don’t want to underestimate the difficulty of this project, or overestimate its chances of success. We’d certainly have to get used to sites that aren’t as glossy or complex as the ones we have now. We might have to revisit some of the most hideous bits of the first-generation web, including those awful GeoCities pages. We would probably need to avoid fancy, dynamic websites; and, before you think this will be easy, remember that one of the first extensions to the static web was CGI Perl. We would be taking the risk that we’d re-invent the same mistakes that brought us to our current mess. Simplicity is a discipline, and not an easy one. However, by losing tons of bloat, we’d end up with a web that is much faster and more responsive than what we have now. And maybe we’d learn to prize that speed and that responsiveness.

    #HTML #Web #Design

    • I think he is entirely missing the point. The problem is not technical difficulty; it is the attention economy.

      Before "everybody" started publishing on Facebook, there were "blogs," including centralized ones such as Blogspot. Today there is Medium (and others). If you want to publish outside Facebook, technically it is simple, powerful, and much prettier.

      And beyond that, there is always the option of teaming up to build a site together, which is a particularly rewarding adventure and not technically exhausting (if you really want to express yourself in that setting, well, you find the geek in the group who sets the thing up for you, and there you go).

      People "publish" on Facebook because we have developed a fantasy of audience and attention. People post videos in which they claim to comment seriously on the news, in the middle of the cat videos on Youtube rather than on any other platform, because they are promised that the audience is there, not because it would be any harder on Vimeo, for example. We want to express ourselves without doing the work (here, let me toss out my two-bit indignation on whatever topic) on platforms that promise a big audience, even a captive one. And activists go to Facebook for exactly the same reasons: because they are told they will find an audience there. At this rate, they will soon have to think about going to "campaign" on Musically too...

      And I suspect that many "activists" go to Facebook or Twitter because these are policed platforms where it is easy to express yourself one-way, with little risk of contradiction, since the "replies" are lost in a swamp of filthy nonsense, behind an interface well designed so that nobody reads them.

      As long as we do not question the reality and the quality of the attention we think we get on Facebook, we will get nowhere. We can want to decentralize, to return to a beautiful bazaar of a Web, but as long as the objection is invariably "yes, but 'people' are on Facebook," it is hopeless. As long as we do not ask what level of real attention lies behind what gets counted as a "view," or even a "like," on Facebook or Twitter, it is a lost cause.

      Look at photography: at one point Flickr nearly died of its own mediocrity, despite its audience, and photographers with (some small) pretension to quality left for other sites (500px, for example). Same for video: if you have artistic ambitions, you flee Youtube and go to Vimeo. Why does the question not arise (or not yet) for people seeking quality of reading, attention, and exchange before they go to Facebook (or even Twitter)?

    • While I'm at it...

      – The question of personal branding, which I think is fundamental to the new publishing platforms. Facebook seems ideal for it, in that from the start it promoted "real identity," but also the deliberate blurring of public and private life (whereas, for my generation, posting politically charged statements in the same place where you share photos of your kids with your in-laws and keep in touch with your former students and your work contacts was a completely idiotic idea).

      The change of publishing platform also came with this change of behavior (remember the old-timers battling to try to stay pseudonymous on Facebook). In the bazaar of the 90s, we all had pseudonyms and we held on to pseudonymity (theorized by some, there is a text on uZine about this, as the only way to write in a genuinely free manner). In the 2000s, everyone had their own blog, but a star system emerged. Now that is over. Writing under a pseudonym is widely seen as suspect and irresponsible and fake-news and all that...

      – The disappearance of hypertext links. I date it to the 2000s, when people stopped linking between blogs. We cite the big serious media, we comment on them, we respond to them, but we no longer bounce "between bloggers" as easily as we did in the 90s. A kind of dramatization of the hypertext link, accentuated by the anti-confusionism and anti-conspiracism paranoia among left-wing bloggers. Oh no, my super-branded personal image will collapse if I link to another site which, one day, might link to someone disreputable or, worse, might say something I don't agree with!

      From the mid-2000s on, if you still had an independent/activist site, the only roughly regular source of visits was the Portail des copains. For an activist site, having a post referenced on the Portail, or not, meant being read by a few hundred or a few thousand visitors who were themselves activists, or not being read at all. I suppose that after a while, well, people go feed at the Facebook trough, because "that's where the people are" and because the hypertext ecosystem has long since been corrupted; even if it seems largely illusory to me to be read seriously between two remarks about the cousins' holiday photos, and even if it is politically deadly.

    • And so, two developments that might give politically engaged people a taste for returning to the bazaar...

      – Fueled by anti-fake-news paranoia, the visibility of "non-commercial," "independent," "alternative" media on Facebook will depend more and more on their adherence to a worldview validated by the big media, through the fake-news-hunting contracts these big networks have signed with the likes of Decodex and Les Décodeurs... The enormous effort to hunt down heterodox views on the networks is already wreaking havoc on this front, with deliberate changes to the ranking algorithms. (That a good part of this anti-fake-news paranoia was largely driven by the left, in the United States and here, is, I must say, rather delicious.) Note, for example, how little visibility alternative media already have on something like Google News.

      – The very principle of networks that sell advertising while serving as personal-marketing tools leads to the shrinking of non-sponsored expression on those networks (Facebook, for example, announcing that it will restrict the organic visibility of "pages" in personal feeds, but no reduction in the amount of advertising in those same feeds...). Apart from a few preformatted indignations that will keep "buzzing," because the illusion that these networks serve self-expression must be maintained, it is becoming ever clearer that regaining visibility in users' feeds will cost money. I suspect there are already people buying visibility on Facebook to get more "likes" on their wedding photos, and I think that in a few years it will be as commonplace a practice as, precisely, paying a photographer to get beautiful wedding photos... so that fabulous audience you are promised on Facebook may turn out to be a bit complicated. (Otherwise, you can always try to boost your buzz level, and the associated revenue, by showing up at Youtube with a gun to raise hell.)

      So this double movement, hunting down heterodox expression (a "hunt" that is, on the other hand, very tolerant of mainstream fascistic drivel) while making people pay to regain visibility, could be the occasion for the rebirth of a somewhat more scattered and joyously shambolic Web (#ou_pas).

    • @arno Not necessarily to set you off again, although, since this morning, I have been lapping up your words, but I notice this. What you express here with a clarity I envy, and which I therefore also happen to defend in a more obscure way, is so frequently dismissed with a wave of the hand meant to make such views look passé. I am often struck that these attempts to brush aside what people so often call our Cassandra-style warnings naturally come from people who have not a hundredth of a quarter of a third of our technical skills (yours above all, rather than mine, which I don't doubt are a bit dated). And it is those same people, whom it would never occur to me to reproach for their weaker technical aptitude, who later discover that what the platforms of their antisocial networks do with their personal data is really not very nice. Naturally, passé Cassandras like you and me (me above all) try to seize on such belated realizations to tell them that this might be the right moment to change glasses, but then down comes, like a sledgehammer on a fly, the argument that they cannot just walk away from these platforms and that they will lose all their friends and followers. And all these good people go back into the big pool in full knowledge that it is teeming with sharks, no doubt reassuring themselves with the thought that it will happen to someone else and that statistically, in the crowd, the shark is unlikely to come bite them, when the real danger is not so much the sharks as the amoebas, perfectly invisible, far more contagious and, in the end, lethal.

      I will not dwell on the fact that their low level of digital hygiene splashes onto their friends and family every day, who, as it happens, may themselves be taking certain precautions.

      Finally, in this grand design of rebuilding the Internet, I notice that the question of preserving older resources is very rarely raised, and yet it seems to me an important one. How it delights me when, hunting for some specific piece of information, I stumble upon sites with the aesthetics of the previous millennium, all in Times with blue underlined links. And in that profuse mass, onto which more recent sediments have agglutinated, there are gems, and I wonder how much longer today's browsers will be able to render them faithfully without reinterpreting and denaturing them.

    • We still have the uZine credentials if need be :)

      Personally, I just want to react to one point in the original article: geocities was not hideous (or not only hideous), it was inventive and festive. With a revival at https://neocities.org which takes a very appealing approach (including an #ipfs backend for permanence)

  • Fake news and Russian interference: the two years that shook Facebook
    https://www.nouvelobs.com/monde/20180213.OBS2131/fake-news-et-ingerence-russe-les-deux-annees-qui-ont-ebranle-facebook.htm

    Meanwhile, the Trump campaign team exploited Facebook, and its own files on Trump's supporters, to the fullest, sending out targeted advertising messages. Trump blasted out texts such as "This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!" Messages of this kind drew hundreds of thousands of likes, comments, and shares, and the money poured in.

    Hillary Clinton's more nuanced campaign messages, meanwhile, got far less traction. "Inside Facebook, almost everyone on the leadership team wanted Clinton to win, but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn," notes "Wired."

    A new breed of online scammers appeared, spreading viral, entirely fabricated articles. They quickly noticed that pro-Trump stories performed very well, and put out, for instance, an article claiming that the Pope had endorsed Donald Trump, which drew nearly a million reactions on Facebook.

    Zuckerberg's initial denials infuriated a security researcher, Renée DiResta, who had been studying the spread of disinformation on the platform for years. She had noted that if you join an anti-vaccine group, the algorithm suggests joining conspiracy groups such as flat-Earth believers or Pizzagate adherents.

    In May, DiResta published an article comparing the spreaders of false news on social networks to the manipulations of high-frequency trading in financial markets. In her view, social networks allow malicious actors to operate at scale and, using bots and accounts under false identities, to fake the appearance of large grassroots movements.

    Together with Roger McNamee, a Facebook shareholder furious at the self-satisfied replies the company sent to his letters of alarm, and Tristan Harris, the Google alumnus who became famous for pointing out the dangers of digital services, DiResta took to the media to denounce the threats Facebook poses to American democracy.

    The social network has also been accused of enabling the spread of deadly propaganda against the Rohingya in Burma and of serving Duterte's brutal methods at the head of the Philippines. Yet its financial results are more flourishing than ever.

    January 2018. Mark Zuckerberg announces his resolutions, as he does at the start of every year. This time it is not one of his usual personal challenges (learn Mandarin, read 25 books, etc.) but to "fix Facebook," acknowledging that the company has a role to play "whether it's protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent" (a phrase that sounds borrowed from Tristan Harris).

    How will Facebook evolve? According to an executive quoted by "Wired":
    "This whole year has completely changed his techno-optimism [Mark Zuckerberg's]. It's made him much more paranoid about the ways people could abuse the thing he built."

    #Facebook #Politique

    • ha ha ha, and this just as the theft and storage of phone-call data by FB via android is also coming to light; frankly, this guy has been the traveling salesman of mass surveillance from the very beginning :/

  • Is the Answer to Phone Addiction a Worse Phone? - The New York Times
    https://www.nytimes.com/2018/01/12/technology/grayscale-phone.html

    In an effort to break my smartphone addiction, I’ve joined a small group of people turning their phone screens to grayscale — cutting out the colors and going with a range of shades from white to black. First popularized by the tech ethicist Tristan Harris, the goal of sticking to shades of gray is to make the glittering screen a little less stimulating.

    I’ve been gray for a couple days, and it’s remarkable how well it has eased my twitchy phone checking, suggesting that one way to break phone attachment may be to, essentially, make my phone a little worse. We’re simple animals, excited by bright colors, it turns out.

    What going grayscale does, Mr. Ramsoy said, is reintroduce choice.

    Companies use colors to encourage subconscious decisions, Mr. Ramsoy said. (So that, for example, I may want to open email, but I’ll end up on Instagram, having seen its colorful button.) Making the phone gray eliminates that manipulation. Mr. Ramsoy said it reintroduces “controlled attention.”

    “Color’s not a signal for detecting objects, it’s actually something much more fundamental: it’s for telling us what’s likely to be important,” Mr. Conway said. “If you have lots of color and contrast then you’re under a constant state of attentional recruitment. Your attentional system is constantly going, ‘Look look look over here.’ ”

    #Economie_attention #Couleur #Design

  • Nudging is no game: how Uber and Lyft influence their drivers to their own advantage.
    https://linc.cnil.fr/fr/nudger-nest-pas-jouer-comment-uber-et-lyft-influencent-leurs-chauffeurs-le

    Some platforms, Uber and Lyft in particular, use mechanisms drawn from the behavioral sciences to push their workforce to act in the platform's interest, to the detriment of their own. So reveals an article published on April 2 on nytimes.com. We regularly look at subjects at the crossroads of cognitive science and design, used to nudge human behavior. Tristan Harris's work, for example, warned that (...)

    #Lyft #Uber #travail #travailleurs #CNIL

  • #design of our vulnerabilities: is Silicon Valley searching for a conscience?
    http://www.internetactu.net/2016/11/09/design-de-nos-vulnerabilites-la-silicon-valley-est-elle-a-la-recherche

    Last June, we reported at length on the views of the designer Tristan Harris (@tristanharris). Bianca Bosker of The Atlantic recently met him at a digital-detox evening in San Francisco, Unplug SF, organized by the Digital Detox collective. An evening that shows clearly that, faced with our tools, (...)

    #Articles #Enjeux #cognition #confiance #économie_de_l'attention

  • How can we respond to the design of our vulnerabilities?
    http://internetactu.blog.lemonde.fr/2016/06/18/comment-repondre-au-design-de-nos-vulnerabilites

    On Medium, Tristan Harris (@tristanharris), who presents himself as a former design ethicist at Google, has published a long and fascinating article on how design today exploits our vulnerabilities.

    Designing for dependence
    http://www.internetactu.net/2015/03/12/du-design-de-la-dependance

    On Ethnography Matters, Rachelle Annechino publishes a long interview with the anthropologist Natasha Schüll, author of Addiction by Design, who studies how addiction is engineered into the world of slot machines.

    #design #manipulation

  • Responding to the #design of our vulnerabilities « InternetActu.net
    http://www.internetactu.net/2016/06/16/du-design-de-nos-vulnerabilites

    Yet when people are given a selection of choices (#choix), they rarely wonder what is not on offer... Why are they offered these options and not others? What are the goals of whoever offers them?... Or whether the choice serves the user's need or merely creates a distraction... For Tristan Harris, for example, checking Yelp to find a bar where you can keep talking with friends turns the query into which bar looks the most attractive or the closest among the suggestions the app returns. Yet the menu that offers us the most autonomy (#autonomie) is different from the menu that offers us the most choices, the designer reminds us. For him, every user interface is a menu that replaces the question we were asking with a different one. Does the list of notifications on our phone correspond to what actually concerns us? For him, whoever controls the "menu" controls the choices. That is the first vulnerability our tools try to exploit.

    #liberté #technologie #web_design

  • Tristan Harris: "Millions of hours are simply stolen from people's lives" - Rue89 - L'Obs
    http://rue89.nouvelobs.com/2016/06/04/tristan-harris-millions-dheures-sont-juste-volees-a-vie-gens-264251

    Tristan Harris was Google's "product philosopher" for three years. Puzzled? So were we, at first.

    We discovered this American software engineer, trained at Stanford, through a fascinating Medium post titled "How Technology Hijacks People's Minds." In it he explains (in English) how Silicon Valley companies manipulate us into wasting as much time as possible inside their interfaces.

    #économie_de_l'attention

  • We Are Hopelessly Hooked | The New York Review of Books (Jacob Weisberg, 25 février 2016)
    http://www.nybooks.com/articles/2016/02/25/we-are-hopelessly-hooked

    Some of Silicon Valley’s most successful app designers are alumni of the Persuasive Technology Lab at #Stanford, a branch of the university’s Human Sciences and Technologies Advanced Research Institute. The lab was founded in 1998 by B.J. Fogg, whose graduate work “used methods from experimental psychology to demonstrate that computers can change people’s thoughts and behaviors in predictable ways,” according to the center’s website. Fogg teaches undergraduates and runs “persuasion boot camps” for tech companies. He calls the field he founded “captology,” a term derived from an acronym for “computers as persuasive technology.” It’s an apt name for the discipline of capturing people’s #attention and making it hard for them to escape. Fogg’s behavior model involves building habits through the use of what he calls “hot triggers,” like the links and photos in Facebook’s newsfeed, made up largely of posts by one’s Facebook friends.

    (…) As consumers, we can also pressure technology companies to engineer apps that are less distracting. If product design has a conscience at the moment, it may be Tristan Harris, a former B.J. Fogg student at Stanford who worked until recently as an engineer at Google. In several lectures available on YouTube, Harris argues that an “attention economy” is pushing us all to spend time in ways we recognize as unproductive and unsatisfying, but that we have limited capacity to control. #Tech_companies are engaged in “a race to the bottom of the brain stem,” in which rewards go not to those that help us spend our time wisely, but to those that keep us mindlessly pulling the lever at the casino.

    Harris wants engineers to consider human values like the notion of “time well spent” in the design of consumer technology. Most of his proposals are “nudge”-style tweaks and signals to encourage more conscious choices. For example, Gmail or Facebook might begin a session by asking you how much time you want to spend with it that day, and reminding you when you’re nearing the limit. Messaging apps might be reengineered to privilege attention over interruption. iTunes could downgrade games that are frequently deleted because users find them too addictive.

    About four books:

    Reclaiming Conversation: The Power of Talk in a Digital Age, by Sherry Turkle

    Alone Together: Why We Expect More from Technology and Less from Each Other, by Sherry Turkle

    Reading the Comments: Likers, Haters, and Manipulators at the Bottom of the Web, by Joseph M. Reagle Jr.

    Hooked: How to Build Habit-Forming Products, by Nir Eyal with Ryan Hoover

    #écrans #conversation #commentaires #addiction #critique_techno #temps #déconnexion via @opironet