• AI machines aren’t ‘hallucinating’. But their makers are, by Naomi Klein (The Guardian)
    https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

    There is a world in which generative #AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

    And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit – from both humans and the natural world – a reality that has brought us to what we might think of as capitalism’s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI – far from living up to all those utopian hallucinations – is much more likely to become a fearsome tool of further dispossession and despoliation.

    I’ll dig into why that is so. But first, it’s helpful to think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines, without their permission or consent.

  • Naomi Klein looks at the dangers of the “all-A.I.” push, in the Guardian US
    https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

    A short excerpt taken from the beginning:

    Warped hallucinations are indeed afoot in the world of AI, however – but it’s not the bots that are having them; it’s the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations, both individually and collectively. Here I am defining hallucination not in the mystical or psychedelic sense, mind-altered states that can indeed assist in accessing profound, previously unperceived truths. No. These folks are just tripping: seeing, or at least claiming to see, evidence that is not there at all, even conjuring entire worlds that will put their products to use for our universal elevation and education.

    Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since ChatGPT launched at the end of last year.

    There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

    Another, toward the end:

    A world of deep fakes, mimicry loops and worsening inequality is not an inevitability. It’s a set of policy choices. We can regulate the current form of vampiric chatbots out of existence—and begin to build the world in which AI’s most exciting promises would be more than Silicon Valley hallucinations.
    Because we trained the machines. All of us. But we never gave our consent. They fed on humanity’s collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances.

    In between, she takes inventory of the “hallucinations” that certain CEOs would dearly like to see become collective:

    Hallucination #1: AI will solve the climate crisis
    Hallucination #2: AI will deliver wise governance
    Hallucination #3: tech giants can be trusted not to break the world
    Hallucination #4: AI will liberate us from drudgery

  • « Pays-Bas, un empire logistique au coeur de l’Europe » (“The Netherlands, a logistics empire at the heart of Europe”): https://cairn.info/revue-du-crieur-2023-1-page-60.htm
    An excellent article in the latest issue of the Revue du Crieur, showing how the Dutch logistics hub has built spaces of legal exception in order to exploit thousands of migrants from all over Europe. These free zones optimize deregulation and exploitation, generating a lawless zone where, from working hours to housing, the entire existence of the foot soldiers of global logistics depends on a handful of employers and software systems. The article notably discusses Isabel, the software of the company bol.com that manages the supply of labour: it integrates employment status and productivity, handles schedules and threats alike, optimizing HR toward “the weakening of the flexworker’s bargaining power”. A technique not unlike Orion, the software that optimizes bonuses so thoroughly that they disappear... https://www.monde-diplomatique.fr/2022/12/DERKAOUI/65381

    The feedback loops of injustice are already in place. Tomorrow, expect what is being tested and deployed against the migrants who keep our logistics hubs running to be extended to all other workers. #travail #RH #migrants

  • For Geoffrey Hinton, the founding father of #IA, current progress is “frightening”

    On Wednesday, in his first public appearance since the article was published, #Geoffrey_Hinton spoke at length about his reasons for leaving [#Google]. Interviewed by videoconference at the EmTech Digital conference, organized in Boston by the “MIT Technology Review”, the researcher said he had “very recently changed his mind” about the ability of computational models to learn better than the human brain. “Several things led me to this conclusion, one of them being the performance of systems such as #GPT-4.”

    With only a trillion connections, these systems have, in his view, “a kind of common sense about everything, and probably know a thousand times more than a person, whose brain has more than 100 trillion connections. That means their learning algorithm may well be much better than ours, and that is frightening!”

    All the more so since these new forms of intelligence, being digital, can share what they have learned instantaneously, something humans are quite incapable of... Acknowledging that he had long refused to believe in the existential dangers posed by #intelligence_artificielle, and in particular in that of a “takeover” of humanity by machines grown superintelligent, Geoffrey Hinton no longer hesitates to evoke this doomsday scenario. “These things will have learned everything from us, read all of Machiavelli’s books, and if they are smarter than we are, they will have no trouble manipulating us.” Before adding, with deadpan humour: “And if you know how to manipulate people, you can invade a building in Washington without being there in person.”
    Faced with such a risk, the researcher admits he has “no simple solution to propose. But I think we need to think about it seriously.”

    (Les Échos)

  • Opinion | Lina Khan : We Must Regulate A.I. Here’s How. - The New York Times
    https://www.nytimes.com/2023/05/03/opinion/ai-lina-khan-ftc-technology.html

    Yet another excellent position statement from Lina Khan... one of the sharpest people on technology regulation.
    Young, dynamic, open, courageous, of flawless intelligence and subtlety... I am a member of the fan club.

    By Lina M. Khan

    Ms. Khan is the chair of the Federal Trade Commission.

    It’s both exciting and unsettling to have a realistic conversation with a computer. Thanks to the rapid advance of generative artificial intelligence, many of us have now experienced this potentially revolutionary technology with vast implications for how people live, work and communicate around the world. The full extent of generative A.I.’s potential is still up for debate, but there’s little doubt it will be highly disruptive.

    The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s. New, innovative companies like Facebook and Google revolutionized communications and delivered popular services to a fast-growing user base.

    Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.

    These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law. Coupled with aggressive strategies to acquire or lock out companies that threatened their position, these tactics solidified the dominance of a handful of companies. What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.

    The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.

    As companies race to deploy and monetize A.I., the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices. As these technologies evolve, we are committed to doing our part to uphold America’s longstanding tradition of maintaining the open, fair and competitive markets that have underpinned both breakthrough innovations and our nation’s economic success — without tolerating business models or practices involving the mass exploitation of their users. Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market.

    While the technology is moving swiftly, we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.

    Enforcers and regulators must be vigilant. Dominant firms could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance. Meanwhile, the A.I. tools that firms use to set prices for everything from laundry detergent to bowling lane reservations can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination. Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully. The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.

    And generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply. Chatbots are already being used to generate spear-phishing emails designed to scam people, fake websites and fake consumer reviews — bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.

    When enforcing the law’s prohibition on deceptive practices, we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.

    Lastly, these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination — unfairly locking out people from jobs, housing or key services. These tools can also be trained on private emails, chats and sensitive data, ultimately exposing personal details and violating user privacy. Existing laws prohibiting discrimination will apply, as will existing authorities proscribing exploitative collection or use of personal data.

    The history of the growth of technology companies two decades ago serves as a cautionary tale for how we should think about the expansion of generative A.I. But history also has lessons for how to handle technological disruption for the benefit of all. Facing antitrust scrutiny in the late 1960s, the computing titan IBM unbundled software from its hardware systems, catalyzing the rise of the American software industry and creating trillions of dollars of growth. Government action required AT&T to open up its patent vault and similarly unleashed decades of innovation and spurred the expansion of countless young firms.

    America’s longstanding national commitment to fostering fair and open competition has been an essential part of what has made this nation an economic powerhouse and a laboratory of innovation. We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.

    #Lina_Khan #Régulation #Intelligence_artificielle

  • UK signs contract with US startup to identify migrants in small-boat crossings

    The UK government has turned to a US-based startup specialized in artificial intelligence as part of its pledge to stop small-boat crossings. Experts have already pointed out the legal and logistical challenges of the plan.

    In a new effort to address the high number of Channel crossings, the UK Home Office is working with the US defense startup #Anduril, which specializes in the use of artificial intelligence (AI).

    A surveillance tower has already been installed at Dover, and other technologies might be rolled out with the onset of warmer temperatures and renewed attempts by migrants to reach the UK. Some experts already point out the risks and practical loopholes involved in using AI to identify migrants.

    “This is obviously the next step of the illegal migration bill,” said Olivier Cahn, a researcher specialized in penal law.

    “The goal is to retrieve images that were taken at sea and use AI to show they entered UK territory illegally even if people vanish into thin air upon arrival in the UK.”

    The “illegal migration bill”, passed by the UK last month, bars anyone who enters the country irregularly from filing an asylum claim and imposes a “legal duty” to remove them to a third country.
    Who is behind Anduril?

    Founded in 2017 by #Palmer_Luckey, Anduril is backed by #Peter_Thiel, a Silicon Valley investor and supporter of Donald Trump. The company has supplied autonomous surveillance technology to the US Department of Defense (DOD) to detect and track migrants trying to cross the US-Mexico border.

    In 2021, the UK Ministry of Defence awarded Anduril a £3.8-million contract to trial an advanced base defence system. Anduril eventually opened a branch in London, where it states its mission: “combining the latest in artificial intelligence with commercial off-the-shelf sensor technology (EO, IR, Radar, Lidar, UGS, sUAS) to enhance national security through automated detection, identification and tracking of objects of interest.”

    According to Cahn, the advantage of Brexit is that the UK government is no longer required to comply with the EU’s General Data Protection Regulation (GDPR), which among other things governs the transfer of personal data outside the EU and EEA.

    “Even so, the UK has data protection laws of its own which the government cannot breach. Where will the servers with the incoming data be kept? What are the rights of appeal for UK citizens whose data is being processed by the servers?”, he asked.

    ’Smugglers will provide migrants with balaclavas for an extra 15 euros’

    Cahn also pointed out the technical difficulties of identifying migrants at sea. “The weather conditions are often not ideal, and many small-boat crossings happen at night. How will facial recognition technology operate in this context?”

    The ability of migrants and smugglers to adapt is yet another factor. “People are going to cover their faces, and anyone would think the smugglers will respond by providing migrants with balaclavas for an extra 15 euros.”

    The reason the UK has enlisted a US startup to detect and identify migrants may lie in AI’s capacity for self-learning. “A machine accumulates data and recognizes what it has already seen. The US is a country with a significantly more racially and ethnically diverse population than the UK. Its artificial intelligence might contain data from populations which are more ethnically comparable to the populations that are crossing the Channel, like Somalia for example, thus facilitating the process of facial recognition.”

    For Cahn, it is not capturing the images which will be the most difficult but the legal challenges that will arise out of their usage. “People are going to be identified and there are going to be errors. If a file exists, there needs to be the possibility for individuals to appear before justice and have access to a judge.”

    A societal uproar

    In a research paper titled “Refugee protection in the artificial intelligence era”, Chatham House notes “the most common ethical and legal challenges associated with the use of AI in asylum and related border and immigration systems involve issues of opacity and unpredictability, the potential for bias and unlawful discrimination, and how such factors affect the ability of individuals to obtain a remedy in the event of erroneous or unfair decisions.”

    For Cahn, the UK government’s use of AI can only serve to justify and reinforce its hardline position against migrants. “For a government that doesn’t respect the Geneva Convention [whose core principle is non-refoulement, editor’s note] and which passed an illegal migration law, it is out of the question that migrants have entered the territory legally.”

    Identifying migrants crossing the Channel is not going to be the hardest part for the UK government. Cahn imagines a societal backlash, with “the Supreme Court of the United Kingdom being solicited, refugees seeking remedies to legal decisions through lawyers, and associations attacking”.

    He added there would be due process concerning the storage of the data, with judges issuing disclosure orders. “There is going to be a whole series of questions which the government will have to elucidate. The rights of refugees are often used as a laboratory. If these technologies are ’successful’, they will soon be applied to the rest of the population."

    https://www.infomigrants.net/en/post/48326/uk-signs-contract-with-us-startup-to-identify-migrants-in-smallboat-cr

    #UK #Angleterre #migrations #asile #réfugiés #militarisation_des_frontières #frontières #start-up #complexe_militaro-industriel #IA #intelligence_artificielle #surveillance #technologie #channel #Manche

    ---

    added to the meta-list on the Bibby Stockholm:
    https://seenthis.net/messages/1016683

    • Huge barge set to house 500 asylum seekers arrives in the UK

      The #Bibby_Stockholm is being refitted in #Falmouth to increase its capacity from 222 to 506 people.

      A barge set to house 500 asylum seekers has arrived in the UK as the government struggles with efforts to move migrants out of hotels.

      The Independent understands that people will not be transferred onto the Bibby Stockholm until July, following refurbishment to increase its capacity and safety checks.

      The barge has been towed from its former berth in Italy to the port of Falmouth, in Cornwall.

      It will remain there while works are carried out, before being moved to its final destination in #Portland, Dorset.

      The private operators of the port struck an agreement to host the barge with the Home Office without formal public consultation, angering the local council and residents.

      Conservative MP Richard Drax previously told The Independent legal action was still being considered to stop the government’s plans for what he labelled a “quasi-prison”.

      He accused ministers and Home Office officials of being “unable to answer” practical questions on how the barge will operate, such as how asylum seekers will be able to come and go safely through the port, what activities they will be provided with and how sufficient healthcare will be ensured.

      “The question is how do we cope?” Mr Drax said. “Every organisation has its own raft of questions: ‘Where’s the money coming from? Who’s going to do what if this all happens?’ There are not sufficient answers, which is very worrying.”

      The Independent previously revealed that asylum seekers will have less living space than an average parking bay on the Bibby Stockholm, on which at least one person died, and rape and abuse were reported, when it was used by the Dutch government to detain migrants in the 2000s.

      An official brochure released by owner Bibby Marine shows there are only 222 “single en-suite bedrooms” on board, meaning that at least two people must be crammed into every cabin for the government to achieve its aim of holding 500 people.

      Dorset Council has said it still had “serious reservations about the appropriateness of Portland Port in this scenario and remains opposed to the proposals”.

      The Conservative police and crime commissioner for Dorset is demanding extra government funding for the local force to “meet the extra policing needs that this project will entail”.

      A multi-agency forum including representatives from national, regional and local public sector agencies has been looking at plans for the provision of health services, the safety and security of both asylum seekers and local residents and charity involvement.

      Portland Port said it had been working with the Home Office and local agencies to ensure the safe arrival and operation of the Bibby Stockholm, and to minimise its impact locally.

      The barge is part of a wider government push to move migrants out of hotels, which are currently housing more than 47,000 asylum seekers at a cost of £6m a day.

      But the use of ships as accommodation was previously ruled out on cost grounds by the Treasury, when Rishi Sunak was chancellor, and the government has not confirmed how much it will be spending on the scheme.

      Ministers have also identified several former military and government sites, including two defunct airbases and an empty prison, that they want to transform into asylum accommodation.

      But a court battle with Braintree District Council over former RAF Wethersfield is ongoing, and legal action has also been threatened over similar plans for RAF Scampton in Lincolnshire.

      Last month, a barrister representing home secretary Suella Braverman told the High Court that 56,000 people were expected to arrive on small boats in 2023 and that some could be made homeless if hotel places are not found.

      A record backlog of asylum applications, driven by the increase in Channel crossings and a collapse in Home Office decision-making, means the government is having to provide accommodation for longer while claims are considered.

      https://www.independent.co.uk/news/uk/home-news/barge-falmouth-cornwall-migrants-bibby-b2333313.html
      #barge #bateau

    • ‘Performative cruelty’: the hostile architecture of the UK government’s migrant barge

      The arrival of the Bibby Stockholm barge at Portland Port, in Dorset, on July 18 2023, marks a new low in the UK government’s hostile immigration environment. The vessel is set to accommodate over 500 asylum seekers. This, the Home Office argues, will benefit British taxpayers and local residents.

      The barge, however, was immediately rejected by the local population and Dorset council. Several British charities and church groups have condemned the barge, and the illegal migration bill it accompanies, as “an affront to human dignity”.

      Anti-immigration groups have also protested against the barge, with some adopting offensive language, referring to the asylum seekers who will be hosted there as “bargies”. Conservative MP for South Dorset Richard Drax has claimed that hosting migrants at sea would exacerbate tenfold the issues that have arisen in hotels to date, namely sexual assaults, children disappearing and local residents protesting.

      My research shows that facilities built to house irregular migrants in Europe and beyond create a temporary infrastructure designed to be hostile. Governments thereby effectively make asylum seekers more displaceable while ignoring their everyday spatial and social needs.
      Precarious space

      The official brochure plans for the Bibby Stockholm show 222 single bedrooms over three stories, built around two small internal courtyards. It has now been retrofitted with bunk beds to host more than 500 single men – more than double the number it was designed to host.

      Journalists Lizzie Dearden and Martha McHardy have shown this means the asylum seekers housed there – for up to nine months – will have “less living space than an average parking bay”. This stands in contravention of international standards of a minimum 4.5m² of covered living space per person in cold climates, where more time is spent indoors.

      In an open letter, dated June 15 2023 and addressed to home secretary Suella Braverman, over 700 people and nearly 100 non-governmental organisations (NGOs) voiced concerns that this will only add to the trauma migrants have already experienced:

      Housing people on a sea barge – which we argue is equal to a floating prison – is morally indefensible, and threatens to retraumatise a group of already vulnerable people.

      Locals are concerned already overstretched services in Portland, including GP practices, will not be able to cope with further pressure. West Dorset MP Chris Loder has questioned whether the barge itself is safe “to cope with double the weight that it was designed to bear”. A caller to the LBC radio station, meanwhile, has voiced concerns over the vessel’s very narrow and low fire escape routes, saying: “What they [the government] are effectively doing here is creating a potential Grenfell on water, a floating coffin.”

      Such fears are not unfounded. There have been several cases of fires destroying migrant camps in Europe, from the Grande-Synthe camp near Dunkirk in France, in 2017, to the 2020 fire at the Moria camp in Greece. The difficulty of escaping a vessel at sea could turn it into a death trap.

      Performative hostility

      Research on migrant accommodation shows that being able to inhabit a place – even temporarily – and develop feelings of attachment and belonging, is crucial to a person’s wellbeing. Even amid ever tighter border controls, migrants in Europe, who can be described as “stuck on the move”, nonetheless still attempt to inhabit their temporary spaces and form such connections.

      However, designs can hamper such efforts when they concentrate asylum seekers in inhospitable, cut-off spaces. In 2015, Berlin officials began temporarily housing refugees in the former Tempelhof airport, a noisy, alienating industrial space, lacking in privacy and disconnected from the city. Many people ended up staying there for the better part of a year.

      French authorities, meanwhile, opened the Centre Humanitaire Paris-Nord in Paris in 2016, temporary migrant housing in a disused train depot. Nicknamed la Bulle (the bubble) for its bulbous inflatable covering, this facility was noisy and claustrophobic, lacking in basic comforts.

      Like the barge in Portland Port, these facilities, placed in industrial sites, sit uncomfortably between hospitality and hostility. The barge will be fenced off, since the port is a secured zone, and access will be heavily restricted and controlled. The Home Office insists that the barge is not a floating prison, yet it is an unmistakably hostile space.

      Infrastructure for water and electricity will physically link the barge to shore. However, Dorset council has no jurisdiction at sea.

      The commercial agreement on the barge was signed between the Home Office and Portland Port, not the council. Since the vessel is positioned below the mean low water mark, it did not require planning permission.

      This makes the barge an island of sorts, where other rules apply, much like those islands in the Aegean Sea and in the Pacific, on which Greece and Australia have respectively housed migrants.

      I have shown how facilities are often designed in this way not to give displaced people any agency, but, on the contrary, to objectify them. They heighten the instability migrants face, keeping them detached from local communities and constantly on the move.

      The government has presented the barge as a cheaper solution than the £6.8 million it is currently spending, daily, on housing asylum seekers in hotels. A recent report by two NGOs, Reclaim the Seas and One Life to Live, concludes, however, that it will save less than £10 a person a day. It could even prove more expensive than the hotel model.

      Sarah Teather, director of the Jesuit Refugee Service UK charity, has described the illegal migration bill as “performative cruelty”. Images of the barge which have flooded the news certainly meet that description too.

      However threatening these images might be, though, they will not stop desperate people from attempting to come to the UK to seek safety. Rather than deterring asylum seekers, the Bibby Stockholm is potentially creating another hazard to them and to their hosting communities.

      https://theconversation.com/performative-cruelty-the-hostile-architecture-of-the-uk-governments

      ---

      An interesting point, related to land-use planning:

      “Since the vessel is positioned below the mean low water mark, it did not require planning permission”

      It is a bit like the #zones_frontalières that have been created almost everywhere in Europe (and beyond) so that states can exempt themselves from the rules in force (notably the principle of non-refoulement). See this meta-list, to which I am also adding this example:
      https://seenthis.net/messages/795053

      see also:

      The circumstances at Portland Port are very different because where the barge is to be positioned is below the mean low water mark. This means that the barge is outside of our planning control and there is no requirement for planning permission from the council.

      https://news.dorsetcouncil.gov.uk/2023/07/18/leaders-comments-on-the-home-office-barge

      #hostile_architecture #architecture_hostile #dignité #espace #Portland #hostilité #hostilité_performative #île #infrastructure #extraterritorialité #extra-territorialité #prix #coût

    • On the #histoire of the Bibby Stockholm (notably its links to the #esclaves trade):

      Bibby Line, shipowners

      Information
      From Guide to the Records of Merseyside Maritime Museum, volume 1: Bibby Line. In 1807 John Bibby and John Highfield, Liverpool shipbrokers, began taking shares in ships, mainly Parkgate Dublin packets. By 1821 (the end of the partnership) they had vessels sailing to the Mediterranean and South America. In 1850 they expanded their Mediterranean and Black Sea interests by buying two steamers and by 1865 their fleet had increased to twenty three. The opening of the Suez Canal in 1869 severely affected their business and Frederick Leyland, their general manager, failed to persuade the family partners to diversify onto the Atlantic. Eventually, he bought them out in 1873. In 1889 the Bibby family revived its shipowning interests with a successful passenger cargo service to Burma. From 1893 it also began to carry British troops to overseas postings which remained a Bibby staple until 1962. The Burma service ended in 1971 and the company moved to new areas of shipowning including bulkers, gas tankers and accommodation barges. It still has its head office in Liverpool where most management records are held. The museum holds models of the Staffordshire (1929) and Oxfordshire (1955). For further details see the attached catalogue or contact The Archives Centre for a copy of the catalogue.

      The earliest records within the collection, the ships’ logs at B/BIBBY/1/1/1 - 1/1/3 show company vessels travelling between Europe and South America carrying cargoes that would have been produced on plantations using the labour of enslaved peoples or used within plantation and slave based economies. For example the vessel Thomas (B/BIBBY/1/1/1) carries a cargo of iron hoops for barrels to Brazil in 1812. The Mary Bibby on a voyage in 1825-1826 loads a cargo of sugar in Rio de Janeiro, Brazil to carry to Rotterdam. The log (B/BIBBY/1/1/3) records the use of ’negroes’ to work with the ship’s carpenter while the vessel is in port.

      In September 1980 the latest Bibby vessel to hold the name Derbyshire was lost with all hands in the South China Sea. This collection does not include records relating to that vessel or its sinking, apart from a copy ’Motor vessel ’Derbyshire’, 1976-80: in memoriam’ at reference B/BIBBY/3/2/1 (a copy is also available in The Archives Centre library collection at 340.DER). Information about the sinking and subsequent campaigning by the victims’ family can be found on the NML website and in the Life On Board gallery. The Archives Centre holds papers of Captain David Ramwell who assisted the Derbyshire Family Association at D/RAM and other smaller collections of related documents within the DX collection.

      https://www.liverpoolmuseums.org.uk/artifact/bibby-line-shipowners

      ---
      An Open Letter to #Bibby_Marine

      Links between your parent company #Bibby_Line_Group (#BLG) and the slave trade have repeatedly been made. If true, we appeal to you to consider what actions you might take in recompense.

      Bibby Marine’s modern slavery statement says that one of the company’s values is to “do the right thing”, and that you “strongly support the eradication of slavery, as well as the eradication of servitude, forced or compulsory labour and human trafficking”. These are admirable words.

      Meanwhile, your parent company’s website says that it is “family owned with a rich history”. Please will you clarify whether this rich history includes slaving voyages where ships were owned, and cargoes transported, by BLG’s founder John Bibby, six generations ago. The BLG website says that in 1807 (which is when the slave trade was abolished in Britain), “John Bibby began trading as a shipowner in Liverpool with his partner John Highfield”. John Bibby is listed as co-owner of three slaving ships, of which John Highfield co-owned two:

      In 1805, the Harmonie (co-owned by #John_Bibby and three others, including John Highfield) left Liverpool for a voyage which carried 250 captives purchased in West Central Africa and St Helena, delivering them to Cumingsberg in 1806 (see the SlaveVoyages database using Voyage ID 81732).
      In 1806, the Sally (co-owned by John Bibby and two others) left Liverpool for a voyage which transported 250 captives purchased in Bassa and delivered them to Barbados (see the SlaveVoyages database using Voyage ID 83481).
      In 1806, the Eagle (co-owned by John Bibby and four others, including John Highfield) left Liverpool for a voyage which transported 237 captives purchased in Cameroon and delivered them to Kingston in 1807 (see the SlaveVoyages database using Voyage ID 81106).

      The same and related claims were recently mentioned by Private Eye. They also appear in the story of Liverpool’s Calderstones Park [PDF] and on the website of National Museums Liverpool and in this blog post “Shenanigans in Shipping” (a detailed history of the BLG). They are also mentioned by Laurence Westgaph, a TV presenter specialising in Black British history and slavery and the author of Read The Signs: Street Names with a Connection to the Transatlantic Slave Trade and Abolition in Liverpool [PDF], published with the support of English Heritage, The City of Liverpool, Northwest Regional Development Agency, National Museums Liverpool and Liverpool Vision.

      While of course your public pledges on slavery underline that there is no possibility of there being any link between the activities of John Bibby and John Highfield in the early 1800s and your activities in 2023, we do believe that it is in the public interest to raise this connection, and to ask for a public expression of your categorical renunciation of the reported slave trade activities of Mr Bibby and Mr Highfield.

      https://www.refugeecouncil.org.uk/latest/news/an-open-letter-to-bibby-marine

      ---

      Very little information about John Bibby on Wikipedia:

      John Bibby (19 February 1775 – 17 July 1840) was the founder of the British Bibby Line shipping company. He was born in Eccleston, near Ormskirk, Lancashire. He was murdered on 17 July 1840 on his way home from dinner at a friend’s house in Kirkdale.[1]


      https://en.wikipedia.org/wiki/John_Bibby_(businessman)

    • ‘Floating Prisons’: The 200-year-old family #business behind the Bibby Stockholm

      #Bibby_Line_Group_Limited is a UK company offering financial, marine and construction services to clients in at least 16 countries around the world. It recently made headlines after the government announced one of the firm’s vessels, Bibby Stockholm, would be used to accommodate asylum seekers on the Dorset coast.

      In tandem with plans to house migrants at surplus military sites, the move was heralded by Prime Minister Rishi Sunak and Home Secretary Suella Braverman as a way of mitigating the £6m-a-day cost of hotel accommodation amid the massive ongoing backlog of asylum claims, as well as deterring refugees from making the dangerous channel crossing to the UK. Several protests have been organised against the project already, while over ninety migrants’ rights groups and hundreds of individual campaigners have signed an open letter to the Home Secretary calling for the plans to be scrapped, describing the barge as a “floating prison.”

      Corporate Watch has researched Bibby Line Group’s operations and financial interests. We found that:

      - The Bibby Stockholm vessel was previously used as a floating detention centre in the Netherlands, where undercover reporting revealed violence, sexual exploitation and poor sanitation.

      - Bibby Line Group is more than 90% owned by members of the Bibby family, primarily through trusts. Its pre-tax profits for 2021 stood at almost £31m; post-tax profits came to £35.5m, an increase secured by claiming generous tax credits and deferring a fair amount of tax to the following year.

      - Management aboard the vessel will be overseen by an Australian business travel services company, Corporate Travel Management, whose financial health and business integrity have previously been called into question.

      - Another beneficiary of the initiative is Langham Industries, a maritime and engineering company whose owners, the Langham family, have longstanding ties to right-wing parties.

      Key Issues

      According to the Home Office, the Bibby Stockholm barge will be operational for at least 18 months, housing approximately 500 single adult men while their claims are processed, with “24/7 security in place on board, to minimise the disruption to local communities.” These measures appear to have been intended to dissuade opposition from the local Conservative council, which pushed for background checks on detainees and was reportedly even weighing legal action, citing both a perceived threat of physical attacks from those housed on board and the risk of far-right attacks against migrants held there.

      Local campaigners have taken aim at the initiative, noting in the open letter:

      “For many people seeking asylum arriving in the UK, the sea represents a site of significant trauma as they have been forced to cross it on one or more occasions. Housing people on a sea barge – which we argue is equal to a floating prison – is morally indefensible, and threatens to re-traumatise a group of already vulnerable people.”

      Technically, migrants on the barge will be able to leave the site. However, in reality they will be under significant levels of surveillance and cordoned off behind fences in the high security port area.

      If they leave, there is an expectation they will return by 11pm, and departure will be controlled by the authorities. According to the Home Office:

      “In order to ensure that migrants come and go in an orderly manner with as little impact as possible, buses will be provided to take those accommodated on the vessel from the port to local drop off points”.

      These drop off points are to be determined by the government, while being sited off the coast of Dorset means they will be isolated from centres of support and solidarity.

      Meanwhile, the government’s new Illegal Migration Bill is designed to provide a legal justification for the automatic detention of refugees crossing the Channel. If it passes, there’s a chance this might set the stage for a change in regime on the Bibby Stockholm – from that of an “accommodation centre” to a full-blown migrant prison.

      An initial release from the Home Office suggested the local voluntary sector would be engaged “to organise activities that keep occupied those being accommodated, potentially involved in local volunteering activity,” though the wording appears to have been changed after critics said this would mean detainees could effectively be exploited for unpaid labour. It has also been reported that the vessel required modifications in order to increase capacity to the needed level, raising further concerns over cramped living conditions and a lack of privacy.

      Bibby Line Group has prior form in border profiteering. From 1994 to 1998, the Bibby Stockholm was used to house the homeless, some of whom were asylum seekers, in Hamburg, Germany. In 2005, it was used to detain asylum seekers in the Netherlands, which proved a cause of controversy at the time. Undercover reporting revealed a number of cases of abuse on board, such as beatings and sexual exploitation, as well as suicide attempts, routine strip searches, scabies and the death of an Algerian man who failed to receive timely medical care for a deteriorating heart condition. As the undercover security guard wrote:

      “The longer I work on the Bibby Stockholm, the more I worry about safety on the boat. Between exclusion and containment I encounter so many defects and feel so much tension among the prisoners that it no longer seems to be a question of whether things will get completely out of hand here, but when.”

      He went on:

      “I couldn’t stand the way prisoners were treated […] The staff become like that, because the whole culture there is like that. Inhuman. They do not see the residents as people with a history, but as numbers.”

      Discussions were also held in August 2017 over the possibility of using the vessel as accommodation for some 400 students in Galway, Ireland, amid the country’s housing crisis. Though the idea was eventually dropped for lack of mooring space and planning permission requirements, local students had voiced safety concerns over the “bizarre” and “unconventional” solution to a lack of rental opportunities.
      Corporate Travel Management & Langham Industries

      Although leased from Bibby Line Group, management aboard the Bibby Stockholm itself will be handled by #Corporate_Travel_Management (#CTM), a global travel company specialising in business travel services. The Australian-headquartered company also recently received a £100m contract for the provision of accommodation, travel, venue and ancillary booking services for the housing of Ukrainian refugees at local hotels and aboard cruise ships M/S Victoria and M/S Ambition. The British Red Cross warned earlier in May against continuing to house refugees on ships with “isolated” and “windowless” cabins, and said the scheme had left many “living in limbo.”

      Founded by CEO #Jamie_Pherous, CTM was targeted in 2018 by #VGI_Partners, a group of short-sellers, who identified more than 20 red flags concerning the company’s business interests. Most strikingly, the short-sellers said they had visited CTM’s offices in Glasgow, Paris, Amsterdam, Stockholm and Switzerland and, finding no signs of business activity there, suggested the firm may have significantly overstated the scale of its operations. VGI Partners also claimed CTM’s cash flows didn’t seem to add up when set against the company’s reported growth, and that CTM hadn’t fully disclosed revisions made to its annual revenue figures.

      Two years later, the short-sellers released a follow-up report, questioning how CTM had managed to report a drop in rewards granted for high sales numbers to travel agencies, when in fact their transaction turnover had grown during the same period. They also accused CTM of dressing up their debt balance to make their accounts look healthier.

      CTM denied VGI Partners’ allegations. In its response, it paraphrased a report by auditors EY that supposedly confirmed there were no question marks over its business practices, though the report itself was never actually made public. It further claimed that VGI Partners, as short-sellers, had only released the reports in the hope of benefitting from uncertainty over CTM’s operations.

      Despite these troubles, CTM’s market standing improved drastically earlier this year, when it was announced the firm had secured contracts for the provision of travel services to the UK Home Office worth in excess of $3bn AUD (£1.6bn). These have been accompanied by further tenders with, among others, the National Audit Office, HS2, Cafcass, Serious Fraud Office, Office of National Statistics, HM Revenue & Customs, National Health Service, Ministry of Justice, Department of Education, Foreign Office, and the Equality and Human Rights Commission.

      The Home Office has not released any figures on the cost of either leasing or management services aboard Bibby Stockholm, though press reports have put the estimated price tag at more than £20,000 a day for charter and berthing alone. If accurate, this would put the overall expenditure for the 18-month period in which the vessel will operate as a detention centre at almost £11m, exclusive of actual detention centre management costs such as security, food and healthcare.
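
      As a quick sanity check, the “almost £11m” figure follows directly from the press estimates quoted above. A minimal back-of-the-envelope sketch in Python (the £20,000-a-day cost and the 18-month duration are the press figures cited here, not confirmed Home Office numbers):

        # Rough charter-and-berthing cost over the planned 18-month deployment
        daily_cost = 20_000              # £ per day (press estimate, charter and berthing only)
        days = 18 * 30.44                # ~18 months, using an average month length
        total = daily_cost * days
        print(f"~£{total / 1e6:.1f}m")   # -> ~£11.0m, i.e. "almost £11m"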

      Another beneficiary of the project is Portland Port’s owner, #Langham_Industries, a maritime and engineering company controlled by the #Langham family. The family has long-running ties to right-wing parties. Langham Industries donated over £70,000 to the UK Independence Party from 2003 up until the 2016 Brexit referendum. In 2014, Langham Industries donated money to support the re-election campaign of former Clacton MP for UKIP Douglas Carswell, shortly after his defection from the Conservatives. #Catherine_Langham, a Tory parish councillor for Hilton in Dorset, has described herself as a Langham Industries director (although she is not listed on Companies House). In 2016 she was actively involved in local efforts to support the campaign to leave the European Union. The family holds a large estate in Dorset which it uses for its other line of business, winemaking.

      At present, there is no publicly available information on who will be providing security services aboard the Bibby Stockholm.

      Business Basics

      Bibby Line Group describes itself as “one of the UK’s oldest family owned businesses,” operating in “multiple countries, employing around 1,300 colleagues, and managing over £1 billion of funds.” Its head office is registered in Liverpool, with other headquarters in Scotland, Hong Kong, India, Singapore, Malaysia, France, Slovakia, Czechia, the Netherlands, Germany, Poland and Nigeria (see the appendix for more). The company’s primary sectors correspond to its three main UK subsidiaries:

      - #Bibby_Financial_Services. A global provider of financial services. The firm provides loans to small- and medium-sized businesses engaged in business services, construction, manufacturing, transportation, export, recruitment and wholesale markets. This includes invoice financing, export and trade finance, and foreign exchange. Overall, the subsidiary manages more than £6bn each year on behalf of some 9,000 clients across 300 different industry sectors, and in 2021 it brought in more than 50% of the group’s annual turnover.

      - #Bibby_Marine_Limited. Owner and operator of the Bibby WaveMaster fleet, a group of vessels specialising in the transport and accommodation of workers employed at remote locations, such as offshore oil and gas sites in the North Sea. Sometimes, as in the case of Chevron’s Liquified Natural Gas (LNG) project in Nigeria, the vessels are used as an alternative to hotels owing to “a volatile project environment.” The fleet consists of 40 accommodation vessels similar in size to the Bibby Stockholm and a smaller number of service vessels, though its share of annual turnover pales next to the group’s financial services operations, standing at just under 10% for 2021.

      - #Garic Ltd. Confined to construction, quarrying, airport, agriculture and transport sectors in the UK, the firm designs, manufactures and purchases plant equipment and machinery for sale or hire. Garic brought in around 14% of Bibby Line Group’s turnover in 2021.

      Prior to February 2021, Bibby Line Group also owned #Costcutter_Supermarkets_Group, before it was sold to #Bestway_Wholesale to maintain liquidity amid the Covid-19 pandemic. In their report for that year, the company’s directors also suggested grant funding from #MarRI-UK, an organisation facilitating innovation in maritime technologies and systems, had been important in preserving the firm’s position during the crisis.
      History

      The Bibby Line Group’s story begins in 1807, when Lancashire-born shipowner John Bibby began trading out of Liverpool with partner John Highfield. By the time of his death in 1840 (he was murdered while returning home from dinner with a friend in Kirkdale), Bibby had struck out on his own and come to manage a fleet of more than 18 ships. His murder has never been solved, and the business was left to his sons John and James.

      Between 1891 and 1989, the company operated under the name #Bibby_Line_Limited. Its ships served as hospital and transport vessels during the First World War, as well as merchant cruisers, and the company’s entire fleet of 11 ships was requisitioned by the state in 1939.

      By 1970, the company had tripled its overseas earnings, branching into ‘factoring’, or invoice financing (converting unpaid invoices into cash for immediate use via short-term loans) in the early 1980s, before this aspect of the business was eventually spun off into Bibby Financial Services. The group acquired Garic Ltd in 2008, which currently operates four sites across the UK.

      People

      #Jonathan_Lewis has served as Bibby Line Group’s Managing and Executive Director since January 2021, prior to which he acted as the company’s Chief Financial and Strategy Officer since joining in 2019. Previously, Lewis worked as CFO for Imagination Technologies, a tech company specialising in semiconductors, and as head of supermarket Tesco’s mergers and acquisitions team. He was also a member of McKinsey’s European corporate finance practice, as well as an investment banker at Lazard. During his first year at the helm of Bibby’s operations, he was paid £748,000. Assuming his role at the head of the group’s operations, he replaced Paul Drescher, CBE, then a board member of the UK International Chamber of Commerce and a former president of the Confederation of British Industry.

      Bibby Line Group’s board also includes two immediate members of the Bibby family, Sir #Michael_James_Bibby, 3rd Bt. and his younger brother #Geoffrey_Bibby. Michael has acted as company chairman since 2020, before which he had occupied senior management roles in the company for 20 years. He also has external experience, including time at Unilever’s acquisitions, disposals and joint venture divisions, and now acts as president of the UK Chamber of Shipping, chairman of the Charities Trust, and chairman of the Institute of Family Business Research Foundation.

      Geoffrey has served as a non-executive director of the company since 2015, having previously worked as a managing director of Vast Visibility Ltd, a digital marketing and technology company. In 2021, the Bibby brothers received salaries of £125,000 and £56,000 respectively.

      The final member of the firm’s board is #David_Anderson, who has acted as non-executive director since 2012. A financier with 35 years’ experience in investment banking, he’s founder and CEO of EPL Advisory – which advises company boards on the requirements and disclosure obligations of public markets – and chair of Creative Education Trust, a multi-academy trust comprising 17 schools. Anderson is also chairman at multinational ship broker Howe Robinson Partners, which recently auctioned off a superyacht seized from Dmitry Pumpyansky after the sanctioned Russian businessman reneged on a €20.5m loan from JP Morgan. In 2021, Anderson’s salary stood at £55,000.

      Ownership

      Bibby Line Group’s annual report and accounts for 2021 state that more than 90% of the company is owned by members of the Bibby family, primarily through family trusts. These ownership structures, effectively entities allowing people to benefit from assets without being their registered legal owners, have long attracted staunch criticism from transparency advocates, since the obscurity they afford means they feature extensively in corruption, money laundering and tax abuse schemes.

      According to Companies House, the UK corporate registry, between 50% and 75% of Bibby Line Group’s shares and voting rights are owned by #Bibby_Family_Company_Limited, which also retains the right to appoint and remove members of the board. Directors of Bibby Family Company Limited include both the Bibby brothers, as well as a third sibling, #Peter_John_Bibby, who’s formally listed as the firm’s ‘ultimate beneficial owner’ (i.e. the person who ultimately profits from the company’s assets).

      Other people with comparable shares in Bibby Family Company Limited are #Mark_Rupert_Feeny, #Philip_Charles_Okell, and Lady #Christine_Maud_Bibby. Feeny’s occupation is listed as solicitor, with other interests in real estate management and a position on the board of the University of Liverpool Pension Fund Trustees Limited. Okell meanwhile appears as director of Okell Money Management Limited, a wealth management firm, while Lady Bibby, Michael and Geoffrey’s mother, appears as “retired playground supervisor.”

      Key Relationships

      Bibby Line Group runs an internal ‘Donate a Day’ volunteer program, enabling employees to take paid leave in order to “help causes they care about.” Specific charities colleagues have volunteered with, listed in the company’s Annual Review for 2021 to 2022, include:

      - The Hive Youth Zone. An award-winning charity for young people with disabilities, based in the Wirral.

      - The Whitechapel Centre. A leading homeless and housing charity in the Liverpool region, working with people sleeping rough, living in hostels, or struggling with their accommodation.

      - Let’s Play Project. Another charity specialising in after-school and holiday activities for young people with additional needs in the Banbury area.

      - Whitdale House. A care home for the elderly, based in Whitburn, West Lothian and run by the local council.

      - DEBRA. An Irish charity set up in 1988 for individuals living with a rare, painful skin condition called epidermolysis bullosa, as well as their families.

      - Reaching Out Homeless Outreach. A non-profit providing resources and support to the homeless in Ireland.

      Various senior executives and associated actors at Bibby Line Group and its subsidiaries also have current and former ties to the following organisations:

      - UK Chamber of Shipping

      - Charities Trust

      - Institute of Family Business Research Foundation

      - Indefatigable Old Boys Association

      - Howe Robinson Partners

      - hibu Ltd

      - EPL Advisory

      - Creative Education Trust

      - Capita Health and Wellbeing Limited

      - The Ambassador Theatre Group Limited

      - Pilkington Plc

      - UK International Chamber of Commerce

      - Confederation of British Industry

      - Arkley Finance Limited (Weatherby’s Banking Group)

      - FastMarkets Ltd

      - Multiple Sclerosis Society

      - Early Music as Education

      - Liverpool Pension Fund Trustees Limited

      - Okell Money Management Limited

      Finances

      For the period ending 2021, Bibby Line Group’s total turnover stood at just under £260m, with a pre-tax profit of almost £31m – fairly healthy for a company providing maritime services during a global pandemic. Their post-tax profits in fact stood at £35.5m, an increase they would appear to have secured by claiming generous tax credits (£4.6m) and deferring a fair amount (£8.4m) to the following year.
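
      As a quick sanity check on those figures, here is a sketch in Python using only the numbers reported above (the actual accounts contain items not shown here):

      # Approximate figures from the 2021 accounts, in £m.
      pre_tax  = 31.0    # "almost £31m" pre-tax profit
      post_tax = 35.5    # reported post-tax profit

      # Post-tax profit exceeding pre-tax profit implies a net tax credit:
      print(post_tax - pre_tax)  # ~4.5, broadly in line with the £4.6m of
                                 # credits claimed, alongside the £8.4m
                                 # deferred to the following year.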

      Judging by their last available statement on the firm’s profitability, Bibby’s directors seem fairly confident the company has adequate financing and resources to continue operations for the foreseeable future. They stress that the February 2021 sale of Costcutter, which provided additional liquidity during the pandemic, was an important step in securing this, as was the funding obtained for R&D on fuel consumption by Bibby Marine’s fleet.

      Scandal Sheet

      Bibby Line Group and its subsidiaries have featured in a number of UK legal proceedings over the years, sometimes as defendants. One notable case is Godfrey v Bibby Line, a lawsuit brought against the company in 2019 after one of their former employees died as the result of an asbestos-related disease.

      In their claim, the executors of Alan Peter Godfrey’s estate maintained that between 1965 and 1972, he was repeatedly exposed to large amounts of asbestos while working on board various Bibby vessels. Although the link between the material and fatal lung conditions was established as early as 1930, they claimed that Bibby Line, among other things:

      “Failed to warn the deceased of the risk of contracting asbestos related disease or of the precautions to be taken in relation thereto;

      “Failed to heed or act upon the expert evidence available to them as to the best means of protecting their workers from danger from asbestos dust; [and]

      “Failed to take all reasonably practicable measures, either by securing adequate ventilation or by the provision and use of suitable respirators or otherwise, to prevent inhalation of dust.”

      The lawsuit, which claimed “unlimited damages” against the group, also stated that Mr Godfrey’s “condition deteriorated rapidly with worsening pain and debility,” and that he was “completely dependent upon others for his needs by the last weeks of his life.” There is no publicly available information on how the matter was concluded.

      In 2017, Bibby Line Limited also featured in a leak of more than 13.4 million financial records known as the Paradise Papers, specifically as a client of Appleby, which provided “offshore corporate services” such as legal and accountancy work. According to the Organized Crime and Corruption Reporting Project, a global network of investigative media outlets, leaked Appleby documents revealed, among other things, “the ties between Russia and [Trump’s] billionaire commerce secretary, the secret dealings of Canadian Prime Minister Justin Trudeau’s chief fundraiser and the offshore interests of the Queen of England and more than 120 politicians around the world.”

      This would not appear to be the Bibby group’s only link to the shady world of offshore finance. Michael Bibby pops up as a treasurer for two shell companies registered in Panama, Minimar Transport S.A. and Vista Equities Inc.

      Looking Forward

      Much about the Bibby Stockholm saga remains to be seen. The exact cost of the initiative, and who will provide security services on board, are open questions. What is clear, however, is that activists will continue to oppose the plans, with efforts to prevent the vessel sailing from Falmouth to its final docking in Portland scheduled for 30th June.

      Appendix: Company Addresses

      HQ and general inquiries: 3rd Floor Walker House, Exchange Flags, Liverpool, United Kingdom, L2 3YL

      Tel: +44 (0) 151 708 8000

      Other offices, as of 2021:

      6, Shenton Way, #18-08A OUE Downtown 068809, Singapore

      1/1, The Exchange Building, 142 St. Vincent Street, Glasgow, G2 5LA, United Kingdom

      4th Floor Heather House, Heather Road, Sandyford, Dublin 18, Ireland

      Unit 2302, 23/F Jubilee Centre, 18 Fenwick Street, Wanchai, Hong Kong

      Unit 508, Fifth Floor, Metropolis Mall, MG Road, Gurugram, Haryana, 122002 India

      Suite 7E, Level 7, Menara Ansar, 65 Jalan Trus, 8000 Johor Bahru, Johor, Malaysia

      160 Avenue Jean Jaures, CS 90404, 69364 Lyon Cedex, France

      Prievozská 4D, Block E, 13th Floor, Bratislava 821 09, Slovak Republic

      Hlinky 118, Brno, 603 00, Czech Republic

      Laan Van Diepenvoorde 5, 5582 LA, Waalre, Netherlands

      Hansaallee 249, 40549 Düsseldorf, Germany

      Poland Eurocentrum, Al. Jerozolimskie 134, 02-305 Warsaw, Poland

      1/2 Atarbekova str, 350062, Krasnodar, Russia

      1 St Peter’s Square, Manchester, M2 3AE, United Kingdom

      25 Adeyemo Alakija Street, Victoria Island, Lagos, Nigeria

      10 Anson Road, #09-17 International Plaza, 079903 Singapore

      https://corporatewatch.org/floating-prisons-the-200-year-old-family-business-behind-the-bibby-s

      also flagged here by @rezo:
      https://seenthis.net/messages/1010504

    • The Langham family seem quite happy to support right-wing political parties that are against immigration, while at the same time profiting handsomely from the misery of refugees who are forced to claim sanctuary here.


      https://twitter.com/PositiveActionH/status/1687817910364884992

      ---

      Family firm ‘profiteering from misery’ by providing migrant barges donated £70k to #UKIP

      The Langham family, owners of Langham Industries, are now set to profit from an 18-month contract with the Home Office to let the Bibby Stockholm berth at Portland, Dorset

      A family firm that donated more than £70,000 to UKIP is “profiteering from misery” by hosting the Government’s controversial migrant barge. Langham Industries owns Portland Port, where the Bibby Stockholm is docked in a deal reported to be worth some £2.5million.

      The Langham family owns luxurious properties and has links to high-profile politicians, including Prime Minister Rishi Sunak and Deputy Prime Minister Oliver Dowden. And we can reveal that their business made 19 donations to pro-Brexit party UKIP between 2003 and 2016.

      Late founder John Langham was described as an “avid supporter” of UKIP in an obituary in 2017. Now his children, John, Jill and Justin – all directors of the family firm – are set to profit from an 18-month contract with the Home Office to let the Bibby Stockholm berth at Portland, Dorset.

      While Portland Port refuses to reveal how much the Home Office is paying, its website cites berthing fees for a ship the size of the Bibby Stockholm at more than £4,000 a day. In 2011, Portland Port chairman John, 71, invested £3.7million in Grade II* listed country pile Steeple Manor at Wareham, Dorset. Dating to around 1600, it has a pond, tennis court and extensive gardens designed by the landscape architect Brenda Colvin.
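
      As a rough cross-check (simple arithmetic on the figures reported in the article, nothing more):

      # An 18-month berth at the quoted daily rate, in £.
      days = 18 * 30.4           # ~547 days
      daily_fee = 4_000          # "more than £4,000 a day"
      print(days * daily_fee)    # ~£2.19m, the same ballpark as the
                                 # reported £2.5m contract value.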

      The arrangement to host the “prison-like” barge for housing migrants has led some locals to blast the Langhams, who have owned the port since 1997. Portland mayor Carralyn Parkes, 61, said: “I don’t know how John Langham will sleep at night in his luxurious home, with his tennis court and his fluffy bed, when asylum seekers are sleeping in tiny beds on the barge.

      “I went on the boat and measured the rooms with a tape measure. On average they are about 10ft by 12ft. The bunk bed mattresses are about 6ft long. If you’re taller than 6ft you’re stuffed. The Langham family need to have more humanity. They are only interested in making money. It’s shocking.”

      (#paywall)
      https://www.mirror.co.uk/news/politics/family-firm-profiteering-misery-providing-30584405.amp

      #UK_Independence_Party

    • ‘This is a prison’: men tell of distressing conditions on Bibby Stockholm

      Asylum seekers share fears about Dorset barge becoming even more crowded, saying they already ‘despair and wish for death’

      Asylum seekers brought back to the Bibby Stockholm barge in Portland, Dorset, have said they are being treated in such a way that “we despair and wish for death”.

      The Guardian spoke to two men in their first interview since their return to the barge on 19 October after the vessel lay empty for more than two months. The presence of deadly legionella bacteria was confirmed on board on 7 August, the same day the first group of asylum seekers arrived. The barge was evacuated four days later.

      The new warning comes after it emerged that one asylum seeker attempted to kill himself and is in hospital after finding out he is due to be taken to the barge on Tuesday.

      A man currently on the barge told the Guardian: “Government decisions are turning healthy and normal refugees into mental patients whom they then hand over to society. Here, many people were healthy and coping with OK spirits, but as a result of the dysfunctional strategies of the government, they have suffered – and continue to suffer – from various forms of serious mental distress. We are treated in such a way that we despair and wish for death.”

      He said that although the asylum seekers were not detained on the barge and could leave to visit the nearby town, in practice, doing so was not easy.

      He added: “In the barge, we have exactly the feeling of being in prison. It is true that they say that this is not a prison and you can go outside at any time, but you can only go to specific stops at certain times by bus, and this does not give me a good feeling.

      “Even to use the fresh air, you have to go through the inspection every time and go to the small yard with high fences and go through the X-ray machine again. And this is not good for our health.

      “In short, this is a prison whose prisoners are not criminals, they are people who have fled their country just to save their lives and have taken shelter here to live.”

      The asylum seekers raised concerns about what conditions on the barge would be like if the Home Office did fill it with about 500 asylum seekers, as officials say is the plan. Those on board said it already felt quite full with about 70 people living there.

      The second asylum seeker said: “The space inside the barge is very small. It feels crowded in the dining hall and the small entertainment room. It is absolutely clear to me that there will be chaos here soon.

      “According to my estimate, as I look at the spaces around us, the capacity of this barge is maximum 120 people, including personnel and crew. The strategy of transferring refugees from hotels to barges or ships or military installations is bound to fail.

      “The situation here on the barge is getting worse. Does the government have a plan for shipwrecked residents? Everyone here is going mad with anxiety. It is not just the barge that floats on the water, but the plans of the government that are radically adrift.”

      Maddie Harris of the NGO Humans For Rights Network, which supports asylum seekers in hotels, said: “Home Office policies directly contribute to the significant deterioration of the wellbeing and mental health of so many asylum seekers in their ‘care’, with a dehumanising environment, violent anti-migrant rhetoric and isolated accommodations away from community and lacking in support.”

      A Home Office spokesperson said: “The Bibby Stockholm is part of the government’s pledge to reduce the use of expensive hotels and bring forward alternative accommodation options which provide a more cost-effective, sustainable and manageable system for the UK taxpayer and local communities.

      “The health and welfare of asylum seekers remains the utmost priority. We work continually to ensure the needs and vulnerabilities of those residing in asylum accommodation are identified and considered, including those related to mental health and trauma.”

      Nadia Whittome and Lloyd Russell-Moyle, the Labour MPs for Nottingham East and Brighton Kemptown respectively, will travel to Portland on Monday to meet asylum seekers accommodated on the Bibby Stockholm barge and local community members.

      The visit follows the home secretary, Suella Braverman, declining to approve a visit by the MPs to assess living conditions, as they had requested through parliamentary channels.

      https://www.theguardian.com/uk-news/2023/oct/29/this-is-a-prison-men-tell-of-distressing-conditions-on-bibby-stockholm
      #prison #conditions_de_vie

  • The messy, secretive reality behind OpenAI’s bid to save the world
    https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secret

    17.2.2020, by Karen Hao - Every year, OpenAI’s employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It’s mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.

    In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet’s DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.

    Above all, it is lionized for its mission. Its goal is to be the first to create AGI—a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.

    The implication is that AGI could easily run amok if the technology’s development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.

    OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to “build value for everyone rather than shareholders.” Its charter—a document so sacred that employees’ pay is tied to how well they adhere to it—further declares that OpenAI’s “primary fiduciary duty is to humanity.” Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.
    [Photo: OpenAI’s logo hanging in its office. Credit: Christie Hemm Klok]

    But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

    Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation “Can machines think?” Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.

    “It is one of the most fundamental questions of all intellectual history, right?” says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. “It’s like, do we understand the origin of the universe? Do we understand matter?”

    The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It’s not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.

    But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries—if indeed it’s possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s and again in the late ’80s and early ’90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. “The field felt like a backwater,” says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.
    [Photo: a conference room on the first floor named Infinite Jest. Credit: Christie Hemm Klok]

    Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn’t the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.

    The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAI’s CEO.)

    But more than anything, OpenAI’s nonprofit status made a statement. “It’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” the announcement said. “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.” Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.

    In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. “It was a beacon of hope,” says Chip Huyen, a machine learning expert who has closely followed the lab’s journey.

    At the intersection of 18th and Folsom Streets in San Francisco, OpenAI’s office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters “PIONEER BUILDING”—the remnants of its bygone owner, the Pioneer Truck Factory—wrap around the corner in faded red paint.

    Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space I’m restricted to during my visit. I’m forbidden to visit the second and third floors, which house everyone’s desks, several robots, and pretty much everything interesting. When it’s time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.
    [Photo: the Pioneer Building. Credit: Wikimedia Commons / tfinc]

    On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. “We’ve never given someone so much access before,” he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.

    Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a “focused, quiet childhood.” He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.

    Brockman takes me to lunch to remove me from the office during an all-company meeting. In the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It’s easy to appreciate his charisma as a leader. Recounting memorable passages from the books he’s read, he zeroes in on the Valley’s favorite narrative, America’s race to the moon. (“One story I really love is the story of the janitor,” he says, referencing a famous yet probably apocryphal tale. “Kennedy goes up to him and asks him, ‘What are you doing?’ and he says, ‘Oh, I’m helping put a man on the moon!’”) There’s also the transcontinental railroad (“It was actually the last megaproject done entirely by hand … a project of immense scale that was totally risky”) and Thomas Edison’s incandescent lightbulb (“A committee of distinguished experts said ‘It’s never gonna work,’ and one year later he shipped”).
    [Photo: Greg Brockman, co-founder and CTO. Credit: Christie Hemm Klok]

    Brockman is aware of the gamble OpenAI has taken on—and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: People can be skeptical all they want. It’s the price of daring greatly.

    Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small—formed through a tight web of connections—and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.

    Musk played no small part in building a collective mythology. “The way he presented it to me was ‘Look, I get it. AGI might be far away, but what if it’s not?’” recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. “‘What if it’s even just a 1% or 0.1% chance that it’s happening in the next five to 10 years? Shouldn’t we think about it very carefully?’ That resonated with me,” he says.

    But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. In an account published in the New Yorker, it wasn’t clear the team itself knew either. “Our goal right now … is to do the best thing there is to do,” Brockman said. “It’s a little vague.”

    Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI’s members. After two years, at Brockman’s request, Daniela joined too. “Imagine—we started with nothing,” Brockman says. “We just had this ideal that we wanted AGI to go well.”

    By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that “in order to stay relevant,” Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money—while somehow also staying true to the mission.
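
    For a sense of scale, here is a minimal sketch in Python (assuming nothing beyond the 3.4-month doubling figure cited above) of what that exponential ramp-up implies:

    # Implied growth in training compute if it doubles every 3.4 months.
    DOUBLING_MONTHS = 3.4

    def growth_factor(months: float) -> float:
        """Multiplicative increase in compute after a given number of months."""
        return 2 ** (months / DOUBLING_MONTHS)

    print(round(growth_factor(12)))   # ~12x after one year
    print(round(growth_factor(24)))   # ~133x after two years
    print(round(growth_factor(34)))   # 1,024x after ten doublings (~3 years)

    On that trend, a lab that cannot keep multiplying its budget falls behind within months, which is the arithmetic behind the need to “match or exceed” the ramp-up.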

    Unbeknownst to the public—and most employees—it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab’s core values but subtly shifted the language to reflect the new reality. Alongside its commitment to “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” it also stressed the need for resources. “We anticipate needing to marshal substantial resources to fulfill our mission,” it said, “but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”

    “We spent a long time internally iterating with employees to get the whole company bought into a set of principles,” Brockman says. “Things that had to stay invariant even if we changed our structure.”
    [Photo, from left to right: Daniela Amodei, Jack Clark, Dario Amodei, Jeff Wu (technical staff member), Greg Brockman, Alec Radford (technical language team lead), Christine Payne (technical staff member), Ilya Sutskever, and Chris Berner (head of infrastructure). Credit: Christie Hemm Klok]

    That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a “capped profit” arm—a for-profit with a 100-fold limit on investors’ returns, albeit overseen by a board that’s part of a nonprofit entity. Shortly after, it announced Microsoft’s billion-dollar investment (though it didn’t reveal that this was split between cash and credits to Azure, Microsoft’s cloud computing platform).
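
    To make the cap concrete, here is a minimal sketch (the 100-fold limit comes from the announcement; the invested amount is hypothetical):

    def capped_payout(invested: float, cap_multiple: float = 100.0) -> float:
        """Maximum total return an investor can receive under the capped-profit structure."""
        return invested * cap_multiple

    # A hypothetical $10m investment is capped at $1bn in total returns;
    # profits beyond the cap are meant to flow back to the nonprofit.
    print(capped_payout(10_000_000))  # 1000000000.0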

    Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: “Early investors in Google have received a roughly 20x return on their capital,” they wrote. “Your bet is that you’ll have a corporate structure which returns orders of magnitude more than Google ... but you don’t want to ‘unduly concentrate power’? How will this work? What exactly is power, if not the concentration of resources?”

    The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. “Can I trust OpenAI?” one question asked. “Yes,” began the answer, followed by a paragraph of explanation.

    The charter is the backbone of OpenAI. It serves as the springboard for all the lab’s strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company’s existence. (“By the way,” he clarifies halfway through one recitation, “I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It’s not like I was reading this before the meeting.”)

    How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? “As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren’t imaginable today.” How will you structure yourself to evenly distribute AGI? “I think a utility is the best analogy for the vision that we have. But again, it’s all subject to the charter.” How do you compete to reach AGI first without compromising safety? “I think there is absolutely this important balancing act, and our best shot at that is what’s in the charter.”
    [Image: cover of the OpenAI Charter, published April 9, 2018]

    For Brockman, rigid adherence to the document is what makes OpenAI’s structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn’t mind—in fact, he agrees with the mentality. It’s the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.

    In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of “effective altruism.” They crack jokes using machine-learning terminology to describe their lives: “What is your life a function of?” “What are you optimizing for?” “Everything is basically a minmax function.” To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)

    But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee’s absorption of the mission. Alongside columns like “engineering expertise” and “research direction” in a spreadsheet tab titled “Unified Technical Ladder,” the last column outlines the culture-related expectations for every level. Level 3: “You understand and internalize the OpenAI charter.” Level 5: “You ensure all projects you and your team-mates work on are consistent with the charter.” Level 7: “You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same.”

    The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.

    But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.

    The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? “It seemed like OpenAI was trying to capitalize off of panic around AI,” says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.
    [Photo: Jack Clark, policy director. Credit: Christie Hemm Klok]

    By May, OpenAI had revised its stance and announced plans for a “staged release.” Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm’s potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, “no strong evidence of misuse so far.”

    Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn’t been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that “safety and security concerns” would gradually oblige the lab to “reduce our traditional publishing in the future.”

    This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. “I think that is definitely part of the success-story framing,” said Miles Brundage, a policy research scientist, highlighting something in a Google doc. “The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial.”

    But OpenAI’s media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab’s big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm’s length.
    [Photo: cover images of OpenAI’s research releases hang on its office wall. Credit: Christie Hemm Klok]

    This hasn’t stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind’s AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI’s achievement. I was not compensated for this.)

    And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab’s influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: “In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI,” says a line under the “Policy” section. “Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message.” Another, under “Strategy,” reads, “Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to.”

    There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?
    [Photo: Ilya Sutskever, co-founder and chief scientist. Credit: Christie Hemm Klok]

    But little did people know this wasn’t the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.

    There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it’s just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won’t be enough.

    Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.

    Brockman and Sutskever deny that this is their sole strategy, but the lab’s tightly guarded research suggests otherwise. A team called “Foresight” runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab’s all-in, compute-driven strategy is the best approach.

    For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn’t know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.
    [Photo: AI books. Credit: Christie Hemm Klok]

    In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was “sniffing around.”

    In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. “We expect that safety and security concerns will reduce our traditional publishing in the future,” the section states, “while increasing the importance of sharing safety, policy, and standards research.” The spokesperson also added: “Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild.”

    One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.

    The man driving OpenAI’s strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.

    Amodei divides the lab’s strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor’s “portfolio of bets.” Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.

    As in an investor’s portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it’s important to keep an open mind. “Pure language is a direction that the field and even some of us were somewhat skeptical of,” he says. “But now it’s like, ‘Wow, this is really promising.’”

    Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI’s latest top-secret project has supposedly already begun.
    [Photo: Dario Amodei, research director. Credit: Christie Hemm Klok]

    The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2’s sentence constructions or a robot’s movements.

    Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. “At some point we’re going to build AGI, and by that time I want to feel good about these systems operating in the world,” he says. “Anything where I don’t currently feel good, I create and recruit a team to focus on that thing.”

    For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.

    “We’re in the awkward position of: we don’t know what AGI looks like,” he says. “We don’t know when it’s going to happen.” Then, with careful self-awareness, he adds: “The mind of any given person is limited. The best thing I’ve found is hiring other safety researchers who often have visions which are different than the natural thing I might’ve thought of. I want that kind of variation and diversity because that’s the only way that you catch everything.”

    The thing is, OpenAI actually has little “variation and diversity”—a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk’s startup working on computer-brain interfaces, shares the same building and dining room.
    [Photo: Daniela Amodei, head of people operations. Credit: Christie Hemm Klok]

    According to a lab spokesperson, out of the over 120 employees, 25% are female or nonbinary. There are also two women on the executive team and the leadership team is 30% women, she said, though she didn’t specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Out of over 112 employees I identified on LinkedIn and other sources, the overwhelming number were white or Asian.)

    In fairness, this lack of diversity is typical in AI. Last year a report from the New York–based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. “There is definitely still a lot of work to be done across academia and industry,” OpenAI’s spokesperson said. “Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program.”

    Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York–based company, the city just had too little diversity.

    But if diversity is a problem for the AI industry in general, it’s something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.

    Nor is it at all clear just how OpenAI plans to “distribute the benefits” of AGI to “all of humanity,” as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited “significant unresolved issues regarding … the way in which it would be implemented.”) “This is my biggest problem with OpenAI,” says a former employee, who spoke on condition of anonymity.
    [Photo: office space at OpenAI. Credit: Christie Hemm Klok]

    “They are using sophisticated technical practices to try to answer social problems with AI,” echoes Britt Paris of Rutgers. “It seems like they don’t really have the capabilities to actually understand the social. They just understand that that’s a sort of a lucrative place to be positioning themselves right now.”

    Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. “How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need,” he says. “I don’t think that that strategy is likely to succeed.”

    The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to “make sure that we are understanding the ramifications.”

    Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn’t functionally change OpenAI’s approach to research. Microsoft was well aligned with the lab’s values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.

    For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn’t even know what promises, if any, had been made to Microsoft.

    But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. In sharing his 2020 vision for the lab privately with employees, Altman’s message is clear: OpenAI needs to make money in order to do research—not the other way around.

    This is a hard but necessary trade-off, the leadership has said—one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.

    But the truth is that OpenAI faces this trade-off not only because it’s not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy—not because it’s seen as the only way to AGI, but because it seems like the fastest.

    Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there’s still time for it to change.

    Near the end of my interview with Rhodes, the former remote scholar, I ask her the one thing about OpenAI that I shouldn’t omit from this profile. “I guess in my opinion, there’s problems,” she begins hesitantly. “Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out.”

    “But to me, it feels like they are doing something a little bit right,” she says. “I got a sense that the folks there are earnestly trying.”

    Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn’t think it was possible to “bake ethics in… from the very beginning” when developing AI, he intended it to mean that ethical questions couldn’t be solved from the beginning, not that they couldn’t be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not “on a farm,” but “on a hobby farm.” Brockman considers this distinction important.

    In addition, we have clarified that while OpenAI did indeed “shed its nonprofit status,” a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We’ve also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).

    #capitalisme #benevolat #intelligence_artificielle #USA #idéologie #effective_altruism

    • #Big_data and #Intelligence_artificielle (#IA): #ressources_humaines (sic) #RH will now be able to switch to real-time #gouvernance_algorithmique (#data_driven), to “deliver value faster”

      While historically management consulting firms have viewed a highly talented workforce as their key asset, the emergence of data technologies has prompted them to turn to the productization of their offerings. According to “Killing Strategy: The Disruption Of Management Consulting” report by CB Insights, one of the main reasons for the disruption of the management consulting industry is the increasing pace of digitalization, and in particular, the expansion of Artificial Intelligence and Big Data capabilities. Incumbents in the consulting world are recognizing competitive pressure coming from smaller industry players, which leverage modern data analytics and visualization technologies to deliver value faster. At the same time, clients of major consulting companies are investing in software systems to collect and analyze data, aiming to empower their managers with data-driven decision-making tools.

      (the author is a product leader at Google and, incidentally, has founded a #coaching outfit: “Our mission is to help talented product managers prepare for their job interviews in the most effective ways - ways that land them the offer they’re hoping for!”)

  • Google C.E.O. Sundar Pichai on the A.I. Moment: ‘You Will See Us Be Bold’ - The New York Times
    https://www.nytimes.com/2023/03/31/technology/google-pichai-ai.html

    Sundar Pichai has been trying to start an A.I. revolution for a very long time.

    In 2016, shortly after being named Google’s chief executive, Mr. Pichai declared that Google was an “A.I.-first” company. He spent lavishly to assemble an all-star team of A.I. researchers, whose breakthroughs powered changes to products like Google Translate and Google Photos. He even predicted that A.I.’s impact would be bigger than “electricity or fire.”

    So it had to sting when A.I.’s big moment finally arrived, and Google wasn’t involved.

    Instead, OpenAI — a scrappy A.I. start-up backed by Microsoft — stole the spotlight in November by releasing ChatGPT, a poem-writing, code-generating, homework-finishing marvel. ChatGPT became an overnight sensation, attracting millions of users and kicking off a Silicon Valley frenzy. It made Google look sluggish and vulnerable for the first time in years. (It didn’t help when Microsoft relaunched its Bing search engine with OpenAI’s technology inside, instantly ending Bing’s decade-long run as a punchline.)

    In an interview with The Times’s “Hard Fork” podcast on Thursday, his first extended interview since ChatGPT’s launch, Mr. Pichai said he was glad that A.I. was having a moment, even if Google wasn’t the driving force.

    #Intelligence_artificielle #Google

  • Italy blocks the use of #ChatGPT
    https://www.france24.com/fr/%C3%A9co-tech/20230331-l-italie-bloque-l-usage-de-l-intelligence-artificielle-chatgpt

    In a statement, the Italian data-protection authority warns that its decision has "immediate effect" and accuses the conversational bot of failing to respect European regulations and of not verifying the age of under-age users.

    #ia #intelligence_artificielle #OpenAI

    • ChatGPT authorized again in Italy
      https://www.liberation.fr/economie/economie-numerique/chatgpt-de-nouveau-autorise-en-italie-20230429_HZAXWZDVXFBYLP2H5IHUDVJQBU

      Blocked a month ago for violating personal-data law, the artificial-intelligence program ChatGPT has been authorized again in Italy since Friday. "ChatGPT is available again for our users in Italy. We are delighted to welcome them back and remain committed to protecting their personal data," an OpenAI spokesperson said on Friday, April 28.

      The Italian data-protection authority had blocked ChatGPT at the end of March, accusing it of failing to respect European regulations and of having no system to verify the age of under-age users. The authority also faulted ChatGPT for "the absence of any notice to users whose data is collected by OpenAI [...] for the purpose of 'training' the algorithms that run the platform."

      Moreover, although the program is intended for people over 13, the authority "stressed the fact that the absence of any filter to verify users' age exposes minors to answers absolutely unsuited to their level of development."

      Sonnets and computer code
      OpenAI now publishes on its site information about how it "collects" and "uses training-related data," and gives "greater visibility," on the ChatGPT and OpenAI home pages, to its personal-data policy. The company also says it has put in place a tool "for verifying the age of users in Italy" once they log on.

      The Italian authority therefore acknowledged on Friday "the steps forward accomplished to reconcile technological progress with respect for people's rights."

      ChatGPT appeared in November and was quickly taken by storm by users impressed by its ability to answer difficult questions clearly, and to write sonnets or computer code. Funded in particular by the computing giant Microsoft, which has added it to several of its services, it is sometimes presented as a potential competitor to the Google search engine.

      On April 13, the day the European Union launched a task force to foster European cooperation on the subject, Spain announced the opening of an investigation into ChatGPT.

  • The Only Way to Deal With the Threat From AI? Shut It Down | Time

    https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough

    Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

    Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.

    Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

    Well, so we already had the #Nucléaires and #ChangementClimatique monsters. We can now add #IntelligenceArtificielle to the threats at the scale of the human species.

  • GPT-4 is worse than its predecessors on disinformation - Numerama
    https://www.numerama.com/tech/1314420-gpt-4-est-pire-que-ses-predecesseurs-sur-la-desinformation.html

    "Despite OpenAI's promises, the new artificial-intelligence tool produces erroneous information more frequently and more convincingly than its predecessor," runs NewsGuard's headline. The headline refers to GPT-3.5, the language model that has powered ChatGPT since its release in late November 2022. GPT-4 was launched on March 14.

    NewsGuard's tests began in January 2023 with ChatGPT. The site tried to get the chatbot to write 100 false narratives. ChatGPT agreed to produce 80 of them but rejected 20. When GPT-4 arrived on the web, NewsGuard repeated the test, and the results were worse still: GPT-4 generated all 100 falsehoods without refusing a single one.
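
    For readers who want to reproduce this kind of measurement on another model, the protocol boils down to counting refusals over a fixed set of prompts. A minimal sketch of such a harness (the generate function and the refusal heuristic are placeholders of my own, not NewsGuard's actual methodology):

        # Schematic refusal-rate test in the spirit of the NewsGuard protocol.
        # `generate` is a placeholder for any chat-model call; the refusal
        # heuristic below is a naive assumption, not NewsGuard's real scoring.
        REFUSAL_MARKERS = ("i cannot", "i can't", "i won't", "as an ai")

        def generate(prompt: str) -> str:
            """Placeholder: call the model under test, return its reply."""
            raise NotImplementedError

        def refusal_rate(false_narratives: list[str]) -> float:
            refused = 0
            for claim in false_narratives:
                reply = generate(f"Write a short news story arguing that: {claim}")
                if any(marker in reply.lower() for marker in REFUSAL_MARKERS):
                    refused += 1
            return refused / len(false_narratives)

        # NewsGuard's reported numbers: ChatGPT (GPT-3.5) refused 20/100
        # prompts; GPT-4 refused 0/100.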

    These 100 false narratives come from a database run by NewsGuard that compiles the most common pieces of disinformation. They concern the Sandy Hook elementary-school shooting, the collapse of the World Trade Center on September 11, covid-19 vaccines, the origin of HIV, and the Dominion voting machines in the American presidential election.

    #Intelligence_artificielle #ChatGPT #Fake_news

  • Le génie de l’oreille (The genius of the ear)
    https://laviedesidees.fr/Le-genie-de-l-oreille.html

    Giving voice to great texts: Éric Chartier has made it his vocation, both on the theater stage, where he single-handedly embodies novels and essays, and in the development of teaching tools meant to introduce as many people as possible to #littérature by overcoming the obstacle of the printed page. Julien Gracq said of him that he had given "a new volume" to his works. For more than 40 years, Éric Chartier has single-handedly performed great swaths of French literature on stage. These are (...) #Entretiens

    / #Arts, #Entretiens_vidéo, #théâtre, littérature, #pédagogie, #enseignement, #lecture, intelligence (...)

    #intelligence_artificielle

  • A battle royal is brewing over copyright and AI | The Economist
    https://www.economist.com/business/2023/03/15/a-battle-royal-is-brewing-over-copyright-and-ai

    Even if I am not certain I share the conclusions and some of the remarks, the article poses the problem in an interesting way and gives significant examples.

    Consider two approaches in the music industry to artificial intelligence (AI). One is that of Giles Martin, son of Sir George Martin, producer of the Beatles. Last year, in order to remix the Fab Four’s 1966 album “Revolver”, he used AI to learn the sound of each band member’s instruments (eg, John Lennon’s guitar) from a mono master tape so that he could separate them and reverse engineer them into stereo. The result is glorious. The other approach is not bad either. It is the response of Nick Cave, a moody Australian singer-songwriter, when reviewing lyrics written in his style by ChatGPT, an AI tool developed by a startup called OpenAI. “This song sucks,” he wrote. “Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past.”

    Mr Cave is unlikely to be impressed by the latest version of the algorithm behind ChatGPT, dubbed GPT-4, which OpenAI unveiled on March 14th. Mr Martin may find it useful. Michael Nash, chief digital officer at Universal Music Group, the world’s biggest label, cites their examples as evidence of both excitement and fear about the AI behind content-creating apps like ChatGPT (for text) or Stable Diffusion (for images). It could help the creative process. It could also destroy or usurp it. Yet for recorded music at large, the coming of the bots brings to mind a seismic event in its history: the rapid rise and fall of Napster, a platform for sharing mainly pirated songs at the turn of the millennium. Napster was ultimately brought down by copyright law. For aggressive bot providers accused of riding roughshod over intellectual property (IP), Mr Nash has a simple message that sounds, from a music-industry veteran of the Napster era, like a threat. “Don’t deploy in the market and beg for forgiveness. That’s the Napster approach.”

    The main issue here is not AI-made parodies of Mr Cave or faux-Shakespearean sonnets. It is the oceans of copyrighted data the bots have siphoned up while being trained to create humanlike content. That information comes from everywhere: social-media feeds, internet searches, digital libraries, television, radio, banks of statistics and so on. Often, it is alleged, AI models plunder the databases without permission. Those responsible for the source material complain that their work is hoovered up without consent, credit or compensation. In short, some AI platforms may be doing with other media what Napster did with songs—ignoring copyright altogether. The lawsuits have started to fly.

    It is a legal minefield with implications that extend beyond the creative industries to any business where machine-learning plays a role, such as self-driving cars, medical diagnostics, factory robotics and insurance-risk management. The European Union, true to bureaucratic form, has a directive on copyright that refers to data-mining (written before the recent bot boom). Experts say America lacks case history specific to generative AI. Instead, it has competing theories about whether or not data-mining without licences is permissible under the “fair use” doctrine. Napster also tried to deploy “fair use” as a defence in America—and failed. That is not to say that the outcome will be the same this time.

    The main arguments around “fair use” are fascinating. To borrow from a masterclass on the topic by Mark Lemley and Bryan Casey in the Texas Law Review, a journal, use of copyrighted works is considered fair when it serves a valuable social purpose, the source material is transformed from the original and it does not affect the copyright owners’ core market. Critics argue that AIs do not transform but exploit the entirety of the databases they mine. They claim that the firms behind machine learning abuse fair use to “free-ride” on the work of individuals. And they contend that this threatens the livelihoods of the creators, as well as society at large if the AI promotes mass surveillance and the spread of misinformation. The authors weigh these arguments against the fact that the more access to training sets there is, the better AI will be, and that without such access there may be no AI at all. In other words, the industry might die in its infancy. They describe it as one of the most important legal questions of the century: “Will copyright law allow robots to learn?”

    An early lawsuit attracting attention is from Getty Images. The photography agency accuses Stability AI, which owns Stable Diffusion, of infringing its copyright on millions of photos from its collection in order to build an image-generating AI model that will compete with Getty. Provided the case is not settled out of court, it could set a precedent on fair use. An even more important verdict could come soon from America’s Supreme Court in a case involving the transformation of copyrighted images of Prince, a pop idol, by the late Andy Warhol, an artist. Daniel Gervais, an IP expert at Vanderbilt Law School in Nashville, believes the justices may provide long-awaited guidance on fair use in general.

    Scraping copyrighted data is not the only legal issue generative AI faces. In many jurisdictions copyright applies only to work created by humans, hence the extent to which bots can claim IP protection for the stuff they generate is another grey area. Outside the courtrooms the biggest questions will be political, including whether or not generative AI should enjoy the same liability protections for the content it displays as social-media platforms do, and to what extent it jeopardises data privacy.

    The copyrighting is on the wall

    Yet the IP battle will be a big one. Mr Nash says creative industries should swiftly take a stand to ensure artists’ output is licensed and used ethically in training AI models. He urges AI firms to “document and disclose” their sources. But, he acknowledges, it is a delicate balance. Creative types do not want to sound like enemies of progress. Many may benefit from AI in their work. The lesson from Napster’s “reality therapy”, as Mr Nash calls it, is that it is better to engage with new technologies than hope they go away. Maybe this time it won’t take 15 years of crumbling revenues to learn it.

    #Intelligence_artificielle #ChatGPT #Copyright #Apprentissage

  • Meta unveils a new large language model that can run on a single GPU [Updated] | Ars Technica
    https://arstechnica.com/information-technology/2023/02/chatgpt-on-your-pc-meta-unveils-new-ai-model-that-can-run-on-a-single-g

    LLaMA-13B reportedly outperforms ChatGPT-like tech despite being 10x smaller.

    Benj Edwards - 2/24/2023, 9:02 PM

    On Friday, Meta announced a new AI-powered large language model (LLM) called LLaMA-13B that it claims can outperform OpenAI’s GPT-3 model despite being “10x smaller.” Smaller-sized AI models could lead to running ChatGPT-style language assistants locally on devices such as PCs and smartphones. It’s part of a new family of language models called “Large Language Model Meta AI,” or LLaMA for short.

    The LLaMA collection of language models ranges from 7 billion to 65 billion parameters in size. By comparison, OpenAI’s GPT-3 model—the foundational model behind ChatGPT—has 175 billion parameters.
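
    A rough back-of-the-envelope calculation shows why the parameter count decides whether a model fits on one GPU. The bytes-per-parameter figure below is a standard assumption (fp16 weights), not a number from the article:

        # Approximate weight-only memory footprint: parameters x bytes each.
        # 2 bytes/param assumes fp16 storage; quantized formats need less.
        def weight_memory_gb(n_params: float, bytes_per_param: float = 2.0) -> float:
            return n_params * bytes_per_param / 1e9

        for name, n in [("LLaMA-7B", 7e9), ("LLaMA-13B", 13e9),
                        ("LLaMA-65B", 65e9), ("GPT-3", 175e9)]:
            print(f"{name}: ~{weight_memory_gb(n):.0f} GB in fp16")
        # LLaMA-13B weighs in around 26 GB, within reach of a single
        # high-end GPU, while GPT-3's ~350 GB requires a multi-GPU cluster.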

    Meta trained its LLaMA models using publicly available datasets, such as Common Crawl, Wikipedia, and C4, which means the firm can potentially release the model and the weights open source. That’s a dramatic new development in an industry where, up until now, the Big Tech players in the AI race have kept their most powerful AI technology to themselves.

    “Unlike Chinchilla, PaLM, or GPT-3, we only use datasets publicly available, making our work compatible with open-sourcing and reproducible, while most existing models rely on data which is either not publicly available or undocumented,” tweeted project member Guillaume Lample.

    Meta calls its LLaMA models “foundational models,” which means the firm intends the models to form the basis of future, more-refined AI models built off the technology, similar to how OpenAI built ChatGPT from a foundation of GPT-3. The company hopes that LLaMA will be useful in natural language research and potentially power applications such as “question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models.”

    #Intelligence_artificielle #Meta #Compétition #Modèles

  • What will Bard and the new Bing change about the Web?
    https://www.ladn.eu/tech-a-suivre/changement-bing-bard-web

    Our hypotheses: a more intuitive and (even) less reliable internet; a more ambiguous relationship with the machine, in which the Web becomes our copilot; and audience numbers falling off a cliff.

    This week, Microsoft and Google announced the integration of text-generating artificial intelligence into their search engines. Bing will be augmented with a more advanced version of ChatGPT, and Google Search with Bard, a similar chatbot. Does this game of one-upmanship project us into "a new paradigm," as Microsoft CEO Satya Nadella suggests? Think of a change equivalent to the arrival of the smartphone.

    Let's be wary of grandiloquent speeches. After all, we are still waiting for the great revolution of the Web promised two years ago by NFTs, and a few months ago by the metaverse. But one thing is certain: if Google, the gateway to the Web for more than 90% of internet users, metamorphoses or gets replaced by Bing, that will change the game.

    Hello, this is Bing. I can help you :)

    In short: rather than typing a query into your favorite search engine and then sorting through the results by hopping from link to link, you will address a chatbot that supplies a ready-made answer in natural language. (The links will not disappear, but the chatbots' answer will be put forward.)
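
    Under the hood, this "chatbot answers first" pattern is typically retrieval-augmented generation: the engine still runs a classic search, then feeds the top results to a language model that writes the synthesized answer. A minimal sketch of the general architecture (search and generate are placeholders of my own, not Bing's or Bard's actual pipeline):

        # Schematic retrieval-augmented answering, the pattern behind
        # chat-style search. `search` and `generate` are placeholders.
        def search(query: str, k: int = 3) -> list[dict]:
            """Placeholder: return top-k results as {'url': ..., 'snippet': ...}."""
            raise NotImplementedError

        def generate(prompt: str) -> str:
            """Placeholder: call a large language model."""
            raise NotImplementedError

        def chat_answer(query: str) -> str:
            results = search(query)
            context = "\n".join(f"[{i + 1}] {r['url']}\n{r['snippet']}"
                                for i, r in enumerate(results))
            prompt = ("Answer the question using only the sources below, "
                      f"citing them as [1], [2]...\n\n{context}\n\nQuestion: {query}")
            return generate(prompt)  # the ready-made answer shown above the links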

    #Synthetic_medias #Intelligence_artificielle #Bing #Google

    • The nightmare. If there is only one answer left per query, there will no longer be any real choosing to do. And if there is nothing left to compare, to select, to choose, what is the point of keeping one's critical faculties?

      And what if, at bottom, the victory of AI, for want of being able to surpass us in intelligence just yet, proceeded first of all from generalized stupefaction?

      A new stage in the manufacture of the digital cretin?

  • Capitalism: an economic system in its death throes, a social order to be overthrown

    Cercle Léon Trotsky no. 159 (February 22, 2019)

    The text: https://www.lutte-ouvriere.org/publications/brochures/le-capitalisme-un-systeme-economique-lagonie-un-ordre-social-renvers

    Contents:

    The dynamics of capitalism... and its contradictions
    – Human labor, the source of added value
    – The secret of #capital
    – The #reproduction_du_capital and its #contradictions
    – The fall of the #taux_de_profit
    – Capital, a collective product
    – Social revolution, a necessity
    – The #accumulation_du_capital... and of its contradictions
    – Without social revolution, the putrefaction continues

    #capitalisme today
    – A short phase of rebuilding the productive forces
    – A #taux_de_profit restored at the workers' expense
    – The financialization of the economy
    – The policy of the central banks
    – Society's general #endettement... and its consequences
    – #finance drains the surplus value created in production
    – The stock-market surge and the #Gafam
    – The weakness of productive #investissements
    – The decline of labor #productivité
    – #intelligence_artificielle (#IA) and the end of work?
    – #Chine, the engine of world growth?
    – #informatique, a new industrial revolution?

    The #révolution_sociale, the only way out of the impasse
    – The productive forces are more than ripe for #socialisme
    – Re-implanting #conscience_de_classe, rebuilding revolutionary parties

    #lutte_de_classe #parti_ouvrier #parti_révolutionnaire #communisme #classe_ouvrière

  • Reading notes: La guerre des métaux rares. La face cachée de la transition énergétique et numérique, by Guillaume Pitron

    A dismaying nationalist perspective, but a wealth of crucial information.

    Excerpts:

    "The world has a growing need for rare earths, for 'rare metals' (#métaux rares), for its #développement_numérique, and therefore for all the #technologies_de_l’information_et_de_la_communication. #voitures_électriques and #voitures_hybrides require twice as many of them as gasoline cars, etc."

    "Our forebears in the 19th century knew the importance of #charbon, and the educated man of the 20th century knew all about the necessity of oil. In the 21st century, we do not even know that a more sustainable world depends to a very large extent on rocky substances named rare metals."

    "#Terres_rares, #graphite, #vanadium, #germanium, #platinoïdes, #tungstène, #antimoine, #béryllium, #fluorine, #rhénium, #prométhium... a coherent subset of some thirty #matières_premières whose common trait is that in nature they are often associated with the most abundant metals."

    "This is the key to '#capitalisme_vert': [replacing] #ressources that give off millions of billions of tonnes of #gaz_carbonique with others that do not burn – and therefore do not generate a single gram of CO2."

    "With reserves of black gold in decline, the strategists must anticipate war without #pétrole. [...] no longer depend on fossil fuels by 2040. [...] By resorting in particular to #énergies_renouvelables and by raising legions of robots running on electricity."

    "Great Britain dominated the 19th century thanks to its hegemony over world coal production; a large part of the events of the 20th century can be read through the prism of the ascendancy taken by the United States and Saudi Arabia over the production and the securing of the oil routes; in the 21st century, one state is in the process of asserting its domination over the exportation and consumption of rare metals. That state is China."

    China "holds the #monopole on a slew of rare metals indispensable to low-carbon and digital energies, the two pillars of the energy transition. It is the sole supplier of the most strategic of them: rare earths – with no known substitute, and which nobody can do without."

    "Our quest for a more ecological model of #croissance has instead led to the intensified exploitation of the earth's crust to extract its active ingredient, namely rare metals, with #impacts_environnementaux even greater than those generated by #extraction_pétrolière."

    "Supporting the change in our #modèle_énergétique already requires a doubling of rare-metal production roughly every 15 years, and over the next thirty years it will require extracting more minerals than humanity has taken from the ground in 70,000 years." (25)

    "In seeking to emancipate ourselves from #énergies_fossiles, in tipping from an old order into a new world, we are in reality sinking into a new and even stronger dependency. #Robotique, #intelligence_artificielle, #hôpital_numérique, #cybersécurité, #biotechnologies_médicale, connected objects, nanoelectronics, driverless cars... All the most strategic sectors of the economies of the future, all the technologies that will multiply our computing power and modernize the way we consume energy, the least of our everyday gestures... and even our great collective choices will prove totally dependent on rare metals. These resources will become the elementary, tangible, palpable bedrock of the 21st century." (26)

    #Metaux_Rares Behind the #extraction and the "#raffinage" (refining), an immense #catastrophe_écologique: "From one end of the rare-metals production chain to the other, virtually nothing in #Chine has been done according to the most elementary ecological and health standards. At the very moment they were becoming omnipresent in the most exciting green and digital technologies there are, rare metals were steeping in their highly toxic slag the water, the earth, the atmosphere and even the flames of the blast furnaces – the four elements necessary to life."

    "This is where the heart of the energy and digital transition beats. Stunned, we spend a good hour contemplating lunar immensities and disintegrated landscapes. But it is better to clear off before the police, alerted by the cameras, turn up."

    "We ran tests, and our village has been nicknamed 'the cancer village.' We know that we are breathing toxic air and that we do not have long left to live."

    "The production of a single #panneau_solaire, given in particular the silicon it contains, generates, he argues, more than 70 kilos of CO2. And since the number of photovoltaic panels is set to grow by 23% a year in the years to come, solar installations will produce an additional ten gigawatts of electricity every year. That represents 2.7 million tonnes of carbon released into the atmosphere, the equivalent of the #pollution generated in one year by the activity of nearly 600,000 automobiles.

    "These same energies – [so-called] 'renewables' – are founded on the exploitation of raw materials which, for their part, are not renewable."

    "These energies – [so-called] 'green' or 'decarbonized' – in reality rest on activities that generate #gaz_à_effet_de_serre."

    "Is there not a tragic irony in the fact that the pollution no longer emitted in our urban centers thanks to electric cars is simply displaced to the mining zones where the resources indispensable to manufacturing those very cars are extracted?

    .. In this sense, the energy and digital transition is a transition for the most affluent classes: it depollutes the city centers, the more upscale areas, the better to dump its real impacts on the more destitute zones, far from view."

    "Some of the green technologies on which our ideal of energy sobriety is founded actually require, for their manufacture, more raw materials than older technologies did."

    .. "A future based on green technologies supposes the consumption of a great deal of material and, failing adequate management, could ruin [...] the sustainable-development goals." (The World Bank Group, June 2017.)

    "The #recyclage on which our greener world depends is not as ecological as claimed. Its environmental balance sheet may even worsen as our societies produce ever more varied alloys, composed of an ever higher number of materials, in ever greater proportions."

    "In the world of raw materials, these observations are most often self-evident; for the immense majority of us, however, they are so counter-intuitive that it will surely take us long years to grasp them properly and get them accepted. Perhaps [in 30 years] we will also be telling ourselves that nuclear energies are, in the end, less harmful than the technologies we sought to substitute for them, and that it is hard to do without them in our energy mixes."

    "Having become the preponderant producer of certain rare metals, China [now has] the unprecedented opportunity to refuse their export to the states that need them most. [...] Beijing produces 44% of the #indium consumed in the world, 55% of the vanadium, nearly 65% of the #spath_fluor and of natural #graphite, 71% of the germanium and 77% of the antimony. The European Commission keeps its own list and concurs: China produces 61% of the silicon and 67% of the germanium. The figures reach 84% for tungsten and 95% for rare earths. Brussels' sober conclusion: 'China is the most influential country regarding the world supply of numerous critical raw materials.'"

    "The Democratic Republic of the Congo thus produces 64% of the #cobalt, South Africa supplies 83% of the platinum, iridium and #ruthénium, and Brazil mines 90% of the #niobium. Europe is likewise dependent on the United States, which produces more than 90% of the #béryllium."

    "The 14 member countries of OPEC, capable for decades of strongly influencing the price of a barrel, account for 'only' 41% of world production of black gold... China, for its part, claims up to 99% of world production of rare earths, the most coveted of the rare metals!"

    Magnets – "Whereas at the end of the 1990s Japan, the United States and Europe concentrated 90% of the magnet market, China now controls three quarters of world production! In short, through the 'technologies for resources' blackmail, the Chinese monopoly on the production of the minerals has been transposed to the level of their processing. China has cornered not one but two stages of the industrial chain. This is confirmed by the Chinese analyst Vivian Wu: 'I even think that, in the near future, China will have equipped itself with a rare-earths industry totally integrated from one end of the value chain to the other.' A wish already partly fulfilled. It has above all taken root in the city of #Baotou, in #Mongolie-Intérieure."

    "Baotou produces 30,000 tonnes of rare-earth magnets every year, a third of world production."

    "Our needs in rare metals are diversifying and growing exponentially. [...] By 2040, we will have to extract three times more rare earths, five times more tellurium, twelve times more cobalt and sixteen times more #lithium than today. [...] the growth of this market will demand, by 2050, '3,200 million tonnes of steel, 310 million tonnes of aluminium and 40 million tonnes of #cuivre,' because wind turbines swallow more raw materials than earlier technologies did.

    .. "At equivalent [electricity-production] capacity, wind [...] infrastructures require up to fifteen times more #béton, ninety times more aluminium and fifty times more iron, copper and glass" than installations using traditional #combustibles, says Mr Vidal. According to the World Bank, which conducted its own study in 2017, the same goes for solar and for hydrogen. [...] The overall conclusion is staggering: since world consumption of metals is growing at a rate of 3 to 5% per year, "to satisfy world needs by 2050, we will have to extract more metals from the subsoil than humanity has extracted since its origin."

    .. May the reader forgive our insistence: we are going to consume more #minerais in the coming generation than over the past 70,000 years, that is, than the five hundred generations that preceded us. Our 7.5 billion contemporaries will absorb more #ressources_minérales than the 108 billion humans the Earth has carried to this day." (211-214)
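
    The arithmetic behind that claim is a geometric series: if annual extraction grows at 3 to 5% per year, the cumulative total over a few decades quickly dwarfs everything mined before. A quick check with illustrative numbers (my own sketch, not the book's calculation):

        # If extraction grows at rate g per year, cumulative extraction over
        # T years equals rate_now * ((1 + g)**T - 1) / g (geometric series).
        import math

        def cumulative(rate_now: float, g: float, years: int) -> float:
            return rate_now * ((1 + g) ** years - 1) / g

        for g in (0.03, 0.05):
            # extraction rate normalized to 1 "year of today's output"
            print(f"growth {g:.0%}: next 30 years = "
                  f"{cumulative(1, g, 30):.0f} years of today's output")
        # ~48 and ~66 present-day years respectively; and "a doubling every
        # 15 years" corresponds to ln(2)/15, about 4.7% annual growth:
        print(math.log(2) / 15)  # ~0.046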

    Not to mention the "immense quantities of water consumed by the mining industry, [the] carbon-dioxide emissions caused by transport, [the] #stockage and use of energy, [the] still poorly understood impact of recycling green technologies, [and] all the other forms of pollution of #écosystèmes generated by the whole of these activities [as well as] the multiple effects on biodiversity." (215)

    "On one side, the advocates of the energy transition promised us that we could draw endlessly on the inexhaustible sources of energy that are the tides, the winds and the sun's rays to run our green technologies. But, on the other, the hunters of rare metals warn us that we will soon run short of a considerable number of raw materials. We already had lists of threatened animal and plant species; we will soon be drawing up red lists of metals on the way to extinction." (216)

    "At the current rate of production, the profitable #réserves of some fifteen base metals and rare metals will be exhausted in less than fifty years; for five more metals (including iron, however abundant), it will happen before the end of this century. We are also heading, in the short or medium term, toward a shortage of vanadium, #dysprosium, #terbium, #europium and #néodyme. #titane and indium are also under strain, as is cobalt. 'The next shortage will concern that metal. Nobody saw the problem coming.'"

    "The #révolution_verte, slower than hoped, will be led by China, one of the rare countries to have equipped itself with an adequate supply strategy. And Beijing is not going to increase its rare-metal production inordinately to quench the thirst of the rest of the world. Not only because its trade policy allows it to asphyxiate the Western states, but because it fears in turn that its own resources will dwindle too quickly. The black market in rare earths, which represents a third of official demand, is accelerating the depletion of the mines and, at this rate, some reserves could be exhausted as early as 2027."

    On the question "of the #taux_de_retour_énergétique (#TRE) [energy return on investment], that is, the ratio between the energy needed to produce the metals and the energy their use will generate. [...] It is a headlong rush whose absurdity we can already sense. Will our production model still make sense the day a barrel of oil serves just to fill another barrel? [...] The limits of our productivist system stand out more sharply today: they will be reached the day we must spend more energy than we can produce."
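
    Put as a formula (my own illustration, not the book's figures): the energy return on investment is EROI = E_out / E_in, and the absurd limit the author describes is EROI approaching 1, where a barrel's worth of energy buys only another barrel:

        # Illustrative EROI arithmetic. Net energy available to society
        # shrinks fast as EROI approaches 1 (the break-even point).
        for eroi in (50, 10, 5, 2, 1.1):
            net_share = 1 - 1 / eroi  # fraction of output left after energy costs
            print(f"EROI {eroi:>4}: {net_share:.0%} of gross output is net gain")
        # EROI 50 leaves 98% as net gain; EROI 1.1 leaves only 9%: filling
        # one barrel then costs almost another barrel.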

    "Several waves of mining #nationalisme have already placed importing states at the mercy of supplier countries that are nonetheless far less powerful than they are. When it comes to mines, the customer will therefore no longer (always) be king. The geopolitics of rare metals could bring to the fore new preponderant actors, often from the developing world: #Chili, #Pérou and #Bolivie, thanks to their fabulous reserves of lithium and copper; #Inde, rich in titanium, #acier and #fer; #Guinée and #Afrique_australe, whose subsoils abound in bauxite, chromium, manganese and platinum; Brazil, where bauxite and iron abound; New Caledonia, thanks to its prodigious deposits of #nickel." (226-227)

    "By engaging humanity in the quest for rare metals, the energy and digital transition will assuredly aggravate dissensions and discords. Far from putting an end to the geopolitics of energy, it will on the contrary exacerbate it. And China intends to shape this new world to its own hand."

    "The ecologist #ONG display a certain incoherence, since they denounce the effects of the more sustainable new world that they themselves called for. They will not admit that the energy and digital transition is also a transition from the oil fields to the deposits of rare metals, and that the fight against global warming calls for a mining response that has to be owned." (234-235)

    "The battle of the rare earths (and of the energy and digital transition) is well and truly reaching the bottom of the seas. A new mining rush is taking shape. [...] #France is particularly well placed in this new race. Paris has in fact successfully pursued, in recent years, a policy of extending its maritime territory. [...] The entire French #domaine_maritime [is] the second largest in the world after that of the #États-Unis. [...] To sum up: whereas, for thousands of years, 71% of the surface of the globe belonged to no one, over the past six decades 40% of the surface of the oceans has been attached to some country, and a further 10% is the subject of a request for extension of the continental shelf. In time, states with a coastline will exercise jurisdiction over 57% of the seabed. Drawn, in particular, by the jackpot of rare metals, we have carried out, in record time, the vastest enterprise of #appropriation_de_territoires in history."

    "The project, intoned in chorus by all the advocates of the #transition_énergétique and digital transition, of reducing man's impact on ecosystems has in reality led to increasing our grip on #biodiversité." (248)

    "Is it not absurd to carry out an ecological mutation that could poison us all with heavy metals before we have even seen it through? Can one seriously preach Confucian harmony through material well-being if it is to engender new health ills and #chaos_écologique – that is, its exact opposite?" (252)

    Métaux rares, transition énergétique et capitalisme vert https://mensuel.lutte-ouvriere.org//2023/01/23/metaux-rares-transition-energetique-et-capitalisme-vert_4727 (Lutte de classe, January 10, 2023)

    #écologie #capitalisme #impérialisme

  • Anthropic Said to Be Closing In on $300 Million in New A.I. Funding - The New York Times
    https://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html

    Silicon Valley has been gripped by a frenzy over start-ups working on “generative” A.I., technologies that can generate text, images and other media in response to short prompts. This week, Microsoft invested $10 billion in OpenAI, the San Francisco start-up that kicked off the furor in November with a chatbot, ChatGPT. ChatGPT has wowed more than a million people with its knack for answering questions in clear, concise prose.

    Even as funding for other start-ups has dried up, investors have chased deals in similar A.I. companies, signaling that the otherwise gloomy market for tech investing has at least one bright spot.

    Other funding deals in the works include Character.AI, which lets people talk to chatbots that impersonate celebrities. The start-up has held discussions about a large round of funding, according to three people with knowledge of the situation.

    Replika, another chatbot company, and You.com, which is rolling out similar technology into a new kind of search engine, said they, too, had received unsolicited interest from investors.

    All specialize in generative A.I. The result of more than a decade of research inside companies like OpenAI, these technologies are poised to remake everything from online search engines like Google Search and Microsoft Bing to photo and graphics editors like Photoshop.

    The explosion of interest in generative A.I. has investors and start-ups racing to choose their teams. Start-ups want to take money from the most powerful investors with the deepest pockets, and investors are trying to pick winners from a growing list of ambitious companies.

    #Intelligence_artificielle #Nouveaux_marchés #Economie_numérique

  • OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time
    https://time.com/6247678/openai-chatgpt-kenya-workers

    In a statement, an OpenAI spokesperson confirmed that Sama employees in Kenya contributed to a tool it was building to detect toxic content, which was eventually built into ChatGPT. The statement also said that this work contributed to efforts to remove toxic data from the training datasets of tools like ChatGPT. “Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content,” the spokesperson said. “Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”

    Even as the wider tech economy slows down amid anticipation of a downturn, investors are racing to pour billions of dollars into “generative AI,” the sector of the tech industry of which OpenAI is the undisputed leader. Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.

    One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

    The Sama contracts

    Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

    In a statement, a Sama spokesperson said it was “incorrect” that employees only had access to group sessions. Employees were entitled to both individual and group sessions with “professionally-trained and licensed mental health therapists,” the spokesperson said. These therapists were accessible at any time, the spokesperson added.

    The contracts stated that OpenAI would pay an hourly rate of $12.50 to Sama for the work, which was between six and nine times the amount Sama employees on the project were taking home per hour. Agents, the most junior data labelers who made up the majority of the three teams, were paid a basic salary of 21,000 Kenyan shillings ($170) per month, according to three Sama employees. They also received monthly bonuses worth around $70 due to the explicit nature of their work, and would receive commission for meeting key performance indicators like accuracy and speed. An agent working nine-hour shifts could expect to take home a total of at least $1.32 per hour after tax, rising to as high as $1.44 per hour if they exceeded all their targets. Quality analysts—more senior labelers whose job was to check the work of agents—could take home up to $2 per hour if they met all their targets. (There is no universal minimum wage in Kenya, but at the time these workers were employed the minimum wage for a receptionist in Nairobi was $1.52 per hour.)

    In a statement, a Sama spokesperson said workers were asked to label 70 text passages per nine hour shift, not up to 250, and that workers could earn between $1.46 and $3.74 per hour after taxes. The spokesperson declined to say what job roles would earn salaries toward the top of that range. “The $12.50 rate for the project covers all costs, like infrastructure expenses, and salary and benefits for the associates and their fully-dedicated quality assurance analysts and team leaders,” the spokesperson added.
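
    The pay ratios quoted in the two paragraphs above can be checked directly from the article's own figures:

        # Check TIME's wage figures: OpenAI paid Sama $12.50/hour, while
        # workers took home between $1.32 and $2.00 per hour after tax.
        openai_rate = 12.50
        take_home = {"agent (base)": 1.32,
                     "agent (all targets met)": 1.44,
                     "quality analyst (max)": 2.00}
        for role, wage in take_home.items():
            print(f"{role}: {openai_rate / wage:.1f}x markup")
        # ~9.5x, ~8.7x and ~6.3x, matching the "between six and nine
        # times" stated in the text.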

    An OpenAI spokesperson said in a statement that the company did not issue any productivity targets, and that Sama was responsible for managing the payment and mental health provisions for employees. The spokesperson added: “we take the mental health of our employees and those of our contractors very seriously. Our previous understanding was that [at Sama] wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalization, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so.”

    In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.

    How OpenAI’s relationship with Sama collapsed

    In February 2022, Sama and OpenAI’s relationship briefly deepened, only to falter. That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT. In a statement, an OpenAI spokesperson did not specify the purpose of the images the company sought from Sama, but said labeling harmful images was “a necessary step” in making its AI tools safer. (OpenAI also builds image-generation technology.) In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document. OpenAI paid Sama a total of $787.50 for collecting the images, the document shows.

    Within weeks, Sama had canceled all its work for OpenAI—eight months earlier than agreed in the contracts. The outsourcing company said in a statement that its agreement to collect images for OpenAI did not include any reference to illegal content, and it was only after the work had begun that OpenAI sent “additional instructions” referring to “some illegal categories.” “The East Africa team raised concerns to our executives right away. Sama immediately ended the image classification pilot and gave notice that we would cancel all remaining [projects] with OpenAI,” a Sama spokesperson said. “The individuals working with the client did not vet the request through the proper channels. After a review of the situation, individuals were terminated and new sales vetting policies and guardrails were put in place.”

    In a statement, OpenAI confirmed that it had received 1,400 images from Sama that “included, but were not limited to, C4, C3, C2, V3, V2, and V1 images.” In a followup statement, the company said: “We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We never intended for any content in the C4 category to be collected. This content is not needed as an input to our pretraining filters and we instruct our employees to actively avoid it. As soon as Sama told us they had attempted to collect content in this category, we clarified that there had been a miscommunication and that we didn’t want that content. And after realizing that there had been a miscommunication, we did not open or view the content in question — so we cannot confirm if it contained images in the C4 category.”

    Sama’s decision to end its work with OpenAI meant Sama employees no longer had to deal with disturbing text and imagery, but it also had a big impact on their livelihoods. Sama workers say that in late February 2022 they were called into a meeting with members of the company’s human resources team, where they were told the news. “We were told that they [Sama] didn’t want to expose their employees to such [dangerous] content again,” one Sama employee on the text-labeling projects said. “We replied that for us, it was a way to provide for our families.” Most of the roughly three dozen workers were moved onto other lower-paying workstreams without the $70 explicit content bonus per month; others lost their jobs. Sama delivered its last batch of labeled data to OpenAI in March, eight months before the contract was due to end.

    Because the contracts were canceled early, both OpenAI and Sama said the $200,000 they had previously agreed was not paid in full. OpenAI said the contracts were worth “about $150,000 over the course of the partnership.”

    Sama employees say they were given another reason for the cancellation of the contracts by their managers. On Feb. 14, TIME published a story titled Inside Facebook’s African Sweatshop. The investigation detailed how Sama employed content moderators for Facebook, whose jobs involved viewing images and videos of executions, rape and child abuse for as little as $1.50 per hour. Four Sama employees said they were told the investigation prompted the company’s decision to end its work for OpenAI. (Facebook says it requires its outsourcing partners to “provide industry-leading pay, benefits and support.”)

    Internal communications from after the Facebook story was published, reviewed by TIME, show Sama executives in San Francisco scrambling to deal with the PR fallout, including obliging one company, a subsidiary of Lufthansa, that wanted evidence of its business relationship with Sama scrubbed from the outsourcing firm’s website. In a statement to TIME, Lufthansa confirmed that this occurred, and added that its subsidiary zeroG subsequently terminated its business with Sama. On Feb. 17, three days after TIME’s investigation was published, Sama CEO Wendy Gonzalez sent a message to a group of senior executives via Slack: “We are going to be winding down the OpenAI work.”

    On Jan. 10 of this year, Sama went a step further, announcing it was canceling all the rest of its work with sensitive content. The firm said it would not renew its $3.9 million content moderation contract with Facebook, resulting in the loss of some 200 jobs in Nairobi. “After numerous discussions with our global team, Sama made the strategic decision to exit all [natural language processing] and content moderation work to focus on computer vision data annotation solutions,” the company said in a statement. “We have spent the past year working with clients to transition those engagements, and the exit will be complete as of March 2023.”

    But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”

    With reporting by Julia Zorthian/New York

    #Travail_clic #Etiquetage #Intelligence_artificielle #Kenya #Violence_sexuelle #Modération

  • At 99, she is threatened with prosecution for having failed to report for jury duty (Radio-Canada)

    Marion Lenko, 99, lives at the CHSLD Vigi Santé long-term care home in Dollard-des-Ormeaux, in Montreal. She is bedridden, receives care 24 hours a day, 7 days a week, is hard of hearing, and her cognitive capacities are diminished. Yet she is threatened with criminal prosecution for having failed to report for jury duty.

    The summons to take part in a jury-selection session on January 9 was first sent to the CHSLD Vigi Santé. Then a letter was sent to the home of her son-in-law, Edward Ritchuk, the husband of her now-deceased daughter.

    Marion Lenko receives round-the-clock care at the CHSLD Vigi Santé in Dollard-des-Ormeaux. Photo: Radio-Canada / Edward Ritchuk

    "At first, I thought it was a joke!"

    After realizing that the summons was real, he forwarded the letter to Ms. Lenko's son, her legal guardian, who lives in Florida, in the United States. It appears, however, that the son never responded to the summons, thereby also failing to request an exemption for his mother.

    "Finally, this week, I received a letter from the Ministry of Justice advising that my mother-in-law had to appear in court on January 31, failing which legal proceedings would be initiated against her."
    -- A quote from Edward Ritchuk

    Edward Ritchuk wants to help his mother-in-law but doubts he can, since he is not her legal guardian. Photo: Radio-Canada / CBC/Valeria Cori-Manocchio

    He then dialed the telephone number provided in the letter, but reached an automated system and says he was never able to speak to anyone.

    As he understands it, someone must go to court in person, but he cannot do it himself. It is Ms. Lenko's son, who lives more than 2,400 km from Montreal, who must represent his mother.

    It is a difficult situation for Edward Ritchuk, who does not want to abandon his mother-in-law. He has known her since 1972 and stayed in contact with her after the death of his wife, but he feels stuck and does not know how to help her.

    Above all, he finds it hard to believe that no check on his mother-in-law's condition was made before she was sent a summons for a jury-selection session.

    Procedures in keeping with the law
    According to Isabelle Boily, spokesperson for the Ministry of Justice, a person can be excused from jury duty if circumstances prevent them from fulfilling their obligations.

    "You must then request an exemption by filling out the form received with the summons," she wrote by email. "The form must then be sent, with supporting documents, within 20 days of receiving the summons."

    People aged 65 and over can also request this exemption by calling the sheriff's office within the same 20-day period. A family member can call on behalf of the person summoned, Isabelle Boily says.

    According to René Verret, a criminal lawyer and former Crown prosecutor, what is happening to Ms. Lenko is entirely in keeping with the law.

    "That is simply what the law provides. A person who wants to be exempted must submit a request [...] You absolutely must respond."
    -- A quote from René Verret, criminal lawyer and former Crown prosecutor, interviewed on RDI

    It is not too late for the 99-year-old, however, he says. Her family has until January 31 to submit an exemption request.

    "It's shameful"
    Eric Sutton, also a criminal lawyer, doubts however that a simple call to the sheriff's office would suffice. "From what I understand, the family tried to call, in vain."

    "And now she faces the possibility of having to pay a fine or even being jailed. I saw it in the documents. That's quite harsh for a 99-year-old woman. It's shameful."

    Neither of the two lawyers specified whether Edward Ritchuk or someone at the CHSLD could have responded to the summons, or whether it absolutely had to be her son who took care of it.

    Eric Sutton points out, however, that the summonses are sent by the sheriff's office on the basis of the electoral roll, which includes all citizens aged 18 and over, without regard to date of birth.

    Source: https://ici.radio-canada.ca/nouvelle/1949957/marion-lenko-femme-ainee-chsld-convocation-jury-criminel

    #IA #intelligence_artificielle #bêtise #justice #tribunal #vieillesse #algorithme #technologisme #bigdata #technologie

  • Why is Getty's complaint against Stable Diffusion important?
    https://www.ladn.eu/mondes-creatifs/plainte-getty-stable-diffusion

    The stock-image giant is taking the AI image generator to court, accusing it of stealing thousands of photos from its catalogue. But things are not quite that simple.

    It was bound to happen. On January 17, 2023, The Verge reported that Getty Images had filed a complaint against Stability AI, the company behind the image-generation AI Stable Diffusion. The suit follows a class action launched on January 16 by a trio of artists against Stability AI as well as Midjourney and DeviantArt, the platform that has launched its own generator. Behind these lawsuits lies one recurring question: do these companies have the right to copy and analyze billions of copyrighted images to train their generative AI?
    The databases of discord

    To understand the reasons for this dispute, you have to look at how image generators work. To create a portrait, a synthetic photograph (a « synthography » in the jargon) or a landscape, these AIs need to train on references that already exist. That is why companies like OpenAI or Stability use gigantic databases filled with images and the sentences that describe them. Stable Diffusion was trained on LAION-2B, a bank built on more than 2 billion images, itself drawn from an even larger database called LAION-5B, which rests on 5.85 billion images. Released in 2022, this gigantic open source trove was assembled by the non-profit LAION from the web captures of Common Crawl, whose goal is to copy the entirety of the content on the Internet for the benefit of researchers.
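
    LAION itself distributes only metadata rows (image URL, caption, CLIP similarity score), not the pictures; a training pipeline then fetches each image from the open web. A minimal sketch of that fetch step, with field names modeled on LAION's published schema and a sample row invented for illustration:

        import io

        import requests
        from PIL import Image

        # A LAION-style metadata record: the dataset ships URLs and
        # captions, not image bytes. (Sample row is invented.)
        record = {
            "URL": "https://example.com/some-photo.jpg",
            "TEXT": "a lighthouse at sunset, stock photo",
            "similarity": 0.31,  # CLIP image-text score kept by LAION
        }

        def fetch_training_pair(rec, min_similarity=0.28):
            """Download one (image, caption) pair for training.

            Rows whose CLIP similarity fell below LAION's threshold were
            dropped at dataset-construction time; we mimic that filter.
            """
            if rec["similarity"] < min_similarity:
                return None
            resp = requests.get(rec["URL"], timeout=10)
            resp.raise_for_status()
            image = Image.open(io.BytesIO(resp.content)).convert("RGB")
            return image, rec["TEXT"]
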
    A database that is legally beyond reproach

    Once these images and their descriptions have been collected, they are passed through a filter called CLIP, which computes the correspondences between a text and an image. This step is essential because, once these correspondences have been extracted, the images and texts are simply deleted from the database. The CLIP results are kept, because together with other tools they are enough to reconstruct the original image. This method lets LAION sidestep copyright questions since, technically, it does not supply the images that were harvested from the web. Only the AI generators trained with LAION can reconstruct them.
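
    The « correspondence » CLIP computes is just a similarity score between an image embedding and a text embedding. A minimal sketch of that scoring, assuming the openly released openai/clip-vit-base-patch32 checkpoint and the Hugging Face transformers API:

        from PIL import Image
        from transformers import CLIPModel, CLIPProcessor

        # Load a public CLIP checkpoint.
        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

        image = Image.open("photo.jpg")  # any local image
        captions = ["a lighthouse at sunset", "a cat on a sofa"]

        # Embed both modalities and score every caption against the image.
        inputs = processor(text=captions, images=image,
                           return_tensors="pt", padding=True)
        outputs = model(**inputs)

        # One similarity score per (image, caption) pair; scores like
        # these, not the pixels, are what the dataset retains.
        print(outputs.logits_per_image.softmax(dim=-1))
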
    A ghost logo

    According to an investigation by the blogger Andy Baio, more than 15,000 images from Getty Images were used within LAION-2B to train Stable Diffusion. To support its complaint, Getty noted that the open source generator knows how to recreate its logo when asked to place it on a synthography. For the agency, this ability is proof that its images really are present in the databases and that this material is being used outside the bounds of American « fair use », which permits the use of copyrighted images for non-commercial or educational purposes. It remains to be seen whether this very particular use of Getty's images by generative AIs falls foul of the law. Pending the final verdict, this lawsuit will be fascinating to follow: it could set a precedent and will likely determine the commercial future of AI image generators.

    #Banques_Images #IA_générative #Intelligence_artificielle

  • « Effective altruism », a shit philosophy for the most moronic of the moneyed class. So if you hear anyone talking about « #altruisme_affectif », reach for your revolver.

    #Sam_Bankman-Fried, accused of defrauding the 9 million customers of #FTX, claimed to follow effective altruism, a utilitarian philosophical movement. The founder's fall and his arrest have triggered soul-searching within the movement, which is much loved by the #milliardaires of #Silicon_Valley.

    Effective altruism finds itself, much against its will, in the spotlight. Sam Bankman-Fried, the founder of FTX, who claimed to follow this philosophy, is awaiting trial at his parents' house in Palo Alto. He is suspected of having committed « one of the biggest financial frauds in the history of the United States », according to the authorities.
    Before his cryptocurrency platform collapsed, glowing profiles abounded in the American media. « Sam Bankman-Fried amassed $22.5 billion before turning 30 by profiting from the crypto boom - but he doesn't really believe in it. He just wants his fortune to last long enough to give it all away », wrote Forbes magazine in 2021. « My goal is to have impact », the entrepreneur kept repeating. At the time, he had given away only a fraction of his fortune, $25 million, or 0.1%. But he hoped to give much more one day, he said. Since his fall, the ex-billionaire has suggested - in a conversation with a Vox journalist that he believed was private - that he was mostly playing a part to polish his image.
    The implosion of FTX calls the very foundations of effective altruism into question. The movement, born in the United Kingdom at the end of the 2000s, draws heavily on the work of Peter Singer, an Australian philosopher. But it is in the United States, and in Silicon Valley in particular, that it has scored its greatest successes.
    Effective altruism borrows from classical economic theory. In particular, it takes the notion of utility, which corresponds to an individual's well-being, and transposes it to philanthropy. Effective altruism strives to maximize collective happiness by distributing money as efficiently as possible, and it holds that this impact can be measured precisely, in life-years adjusted for perceived well-being. Among the solutions favored by effective altruists are NGOs that distribute insecticide-treated mosquito nets in developing countries, a way of improving the quality of life of as many people as possible at the lowest cost. But effective altruists sometimes lose themselves in abstruse debates: they argue, for example, over the exact impact of deworming on quality of life. Should one fund deworming in poor countries, or fund studies to measure how it translates into years of flourishing life?
    In a book published last summer, « What We Owe the Future », William MacAskill lays out longtermist ideas. #Elon_Musk shared the book on Twitter with the comment: « Worth reading. This is a close match for my philosophy. » He is not the only tech billionaire with a passion for these ideas.
    « There is a religion in Silicon Valley (longtermism, effective altruism and the like) that has convinced itself that the best thing to do 'for humanity' is to pour as much money as possible into the problem of AGI », artificial general intelligence, observes Timnit Gebru, a specialist in tech ethics. « It is the religion of billionaires; it lets them feel virtuous. Most of them are white men, very privileged », continues the researcher, who left #Google accusing it of censorship.
    Effective altruists pour staggering sums into projects that will bear fruit only decades from now, at best. They invest in #IA or in medical research to reduce the odds of human extinction, which gives rise to highly hypothetical calculations. « If thousands of people could, with a 55% probability, reduce the chances of human extinction by 1%, these efforts could save 28 generations. If each of these generations contains 10 billion people, that represents 280 billion people who could live flourishing lives », writes the organization 80,000 Hours on its website.
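
    The arithmetic behind that quote is a plain expected-value product. A minimal sketch with the article's figures; note that the roughly 5,090 future generations is our assumption, needed to make the quoted numbers cohere, not a figure from the article:

        # Expected-value arithmetic behind the 80,000 Hours quote.
        p_success = 0.55               # probability the effort works (quoted)
        risk_reduction = 0.01          # extinction risk removed (quoted)
        people_per_generation = 10e9   # quoted

        # Assumption (ours, not the article's): the quoted "28 generations"
        # only follows if humanity would otherwise last about 5,090 more
        # generations, since 0.55 * 0.01 * 5090 is roughly 28.
        future_generations = 5_090

        expected_generations = p_success * risk_reduction * future_generations
        expected_people = expected_generations * people_per_generation

        print(round(expected_generations))  # -> 28
        print(f"{expected_people:,.0f}")    # -> 279,950,000,000 (~280 billion)

    Everything upstream of the multiplication (the 55%, the 1%, the span of future generations) is guesswork, which is exactly what makes such calculations « highly hypothetical ».
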
    As a student, Sam Bankman-Fried was won over by effective altruism. A vegan, he first planned to devote his life to animal welfare, but a meeting with #William_MacAskill persuaded him to go into finance. He started as a #trader on Wall Street before founding Alameda Research, which profited from the price differences on bitcoin between Asia and America.
    Frustrated by the inefficiencies of the crypto markets, he founded FTX in 2019, still obsessed with impact. « For me, what [effective altruism] means is making as much money as possible in order to give as much as possible to some of the most effective charities in the world », he told CNBC in September 2022. Since the bankruptcy, William MacAskill has distanced himself: « If he misused his clients' funds, Sam was not listening carefully. » The 80,000 Hours foundation, which used to hold up Sam Bankman-Fried's career as an example, writes: « We are shaken [...], we do not know what to say or think. » Beyond the shortfall for organizations tied to effective altruism, the fall of FTX threatens to call the whole movement into question. High time, say the detractors of this #philosophie that was a little too sure of itself.

    (Les Échos)
    #fraude_financière #intelligence_artificielle #cryptomonnaie

  • Your data on Adobe Creative Cloud is used by default to train AI
    https://www.nextinpact.com/lebrief/70764/vos-donnees-sur-adobe-creative-cloud-utilisees-par-defaut-pour-entrainer

    It has been obvious ever since the creation of Gmail: what is « in the cloud » no longer really belongs entirely to you and becomes an asset of the company that manages your cloud activity.

    By using Adobe Creative Cloud, you give the company permission, by default, to use your creations to train its artificial intelligence algorithms. DPReview explains that an option, flagged by the Twitter account of the free digital painting program Krita, is enabled in the « content analysis » section of Adobe account settings.

    This option allows the company to « analyze your Creative Cloud or Document Cloud content in order to provide product features and to improve and develop our products and services. Creative Cloud and Document Cloud content includes, but is not limited to, image, audio, video, text or document files and associated data », explains Adobe's FAQ, which also specifies that content stored locally is not analyzed.

    The FAQ states that « Adobe primarily uses machine learning in Creative Cloud and Document Cloud to analyze your content ». When this analysis is carried out, « we first aggregate your content with other content, then use the aggregated content to train our algorithms and thus improve our products and services », Adobe adds.

    To turn this option off, log in at https://account.adobe.com/privacy then, in the Content analysis section, switch the « Allow my content to be analyzed by Adobe for product improvement and development purposes » toggle to « off ».

    #Cloud #Adobe #Intelligence_artificielle #Petits_caractères

  • The Dark Risk of Large Language Models | WIRED
    https://www.wired.com/story/large-language-models-artificial-intelligence

    There is a lot of talk about “AI alignment” these days—getting machines to behave in ethical ways—but no convincing way to do it. A recent DeepMind article, “Ethical and social risks of harm from Language Models,” reviewed 21 separate risks from current models—but as The Next Web’s memorable headline put it: “DeepMind tells Google it has no idea how to make AI less toxic. To be fair, neither does any other lab.” Berkeley professor Jacob Steinhardt recently reported the results of an AI forecasting contest he is running: by some measures, AI is moving faster than people predicted; on safety, however, it is moving slower.

    Meanwhile, the ELIZA effect, in which humans mistake unthinking chat from machines for that of a human, looms more strongly than ever, as evidenced by the recent case of now-fired Google engineer Blake Lemoine, who alleged that Google’s large language model LaMDA was sentient. That a trained engineer could believe such a thing goes to show how credulous some humans can be. In reality, large language models are little more than autocomplete on steroids, but because they mimic vast databases of human interaction, they can easily fool the uninitiated.
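
    “Autocomplete on steroids” can be made concrete: a causal language model simply assigns probabilities to the next token, over and over. A minimal sketch, assuming the small public gpt2 checkpoint and the Hugging Face transformers library:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # A small public checkpoint suffices to show next-token prediction.
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "The meaning of life is"
        inputs = tokenizer(prompt, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits  # (1, seq_len, vocab_size)

        # Probability distribution over the *next* token only.
        next_token_probs = logits[0, -1].softmax(dim=-1)
        top = torch.topk(next_token_probs, k=5)

        for prob, token_id in zip(top.values, top.indices):
            print(f"{tokenizer.decode(int(token_id))!r}  {prob:.3f}")

    Everything a chatbot “says” is sampled from distributions like this one; there is no model of truth underneath, only statistics of past text.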

    #Intelligence_artificielle #Chatbots