
Microsoft, Amazon and Google among those to cut ‘responsible AI’ teams as part of broader reductions

Big tech companies have been slashing staff from teams dedicated to evaluating ethical issues around deploying artificial intelligence, leading to concerns about the safety of the new technology as it becomes widely adopted across consumer products.

Microsoft, Meta, Google, Amazon and Twitter are among the companies that have cut members of their “responsible AI teams”, who advise on the safety of consumer products that use artificial intelligence.

The number of staff affected remains in the dozens and represents a small fraction of the tens of thousands of tech workers whose jobs have been cut in response to a broader industry downturn.

The companies said they remained dedicated to rolling out safe AI products. But experts said the cuts were worrying, as potential abuses of the technology were being discovered just as millions of people began to experiment with AI tools.

Their concern has grown following the success of the ChatGPT chatbot launched by Microsoft-backed OpenAI, which led other tech groups to release rivals such as Google’s Bard and Anthropic’s Claude.

“It is shocking how many members of responsible AI are being let go at a time when arguably, you need more of those teams than ever,” said Andrew Strait, a former ethics and policy researcher at Alphabet-owned DeepMind and associate director at the Ada Lovelace Institute, a research organisation.

In January, Microsoft disbanded its entire ethics and society team, which had led the company’s initial work in the area. The tech giant said the cuts amounted to fewer than 10 roles and that Microsoft still had hundreds of people working in its office of responsible AI.

“We have significantly grown our responsible AI efforts and have worked hard to institutionalise them across the company,” said Natasha Crampton, Microsoft’s chief responsible AI officer.

Twitter has slashed more than half of its headcount under its new billionaire owner Elon Musk, including its small ethical AI team. Its past work included fixing a bias in the Twitter algorithm, which appeared to favour white faces when choosing how to crop images on the social network. Twitter did not respond to a request for comment.

Twitch, the Amazon-owned streaming platform, cut its ethical AI team last week, making all teams working on AI products accountable for issues related to bias, according to a person familiar with the move. Twitch declined to comment.

In September, Meta dissolved its responsible innovation team of about 20 engineers and ethicists tasked with evaluating civil rights and ethics on Instagram and Facebook. Meta did not respond to a request for comment.

“Responsible AI teams are among the only internal bastions that Big Tech have to make sure that people and communities impacted by AI systems are in the minds of the engineers who build them,” said Josh Simons, former Facebook AI ethics researcher and author of Algorithms for the People.

“The speed with which they are being abolished leaves Big Tech’s algorithms at the mercy of advertising imperatives, undermining the wellbeing of kids, vulnerable people and our democracy.”

Another concern is that large language models, which underlie chatbots such as ChatGPT, are known to “hallucinate” — make false statements as if they were facts — and can be used for nefarious purposes such as spreading disinformation and cheating in exams.

“What we are beginning to see is that we can’t fully anticipate all of the things that are going to happen with these new technologies, and it is crucial that we pay some attention to them,” said Michael Luck, director of King’s College London’s Institute for Artificial Intelligence.

The role of internal AI ethics teams has come under scrutiny amid debate over whether human intervention in algorithms should be more transparent, with input from the public and from regulators.

In 2020, the Meta-owned photo app Instagram set up a team to address “algorithmic justice” on its platform. The “IG Equity” team was formed in the wake of the police murder of George Floyd, with a mandate to adjust Instagram’s algorithm to boost discussions of race and highlight profiles of marginalised people.

Simons said: “They are able to intervene and change those systems and biases [and] explore technological interventions that will advance equity… but engineers should not be deciding how society is shaped.”

Some employees tasked with ethical AI oversight at Google have also been laid off as part of broader cuts at Alphabet of more than 12,000 staff, according to a person close to the company.

Google would not specify how many roles had been cut, but said that responsible AI remained a “top priority at the company, and we are continuing to invest in those teams”.

The tension between the development of AI technologies within companies and their impact and safety has previously emerged at Google. Two AI ethics research leaders, Timnit Gebru and Margaret Mitchell, exited in 2020 and 2021, respectively, after a highly publicised row with the company.

“It is problematic when responsible AI practices are deprioritised for competition or for a push to market,” said Strait from the Ada Lovelace Institute. “And unfortunately, what I am seeing now is that is exactly what’s happening.”
