Humane is said to be seeking a $1 billion buyout after only 10,000 orders of its terrible AI Pin

It emerged recently that Humane was trying to sell itself for as much as $1 billion after its confuddling, expensive and ultimately pretty useless AI Pin flopped. A New York Times report that dropped on Thursday shed a little more light on the company's sales figures and, like the wearable AI assistant itself, the details are not good.

By early April, around the time that many devastating reviews of the AI Pin were published, Humane is said to have received around 10,000 orders for the device. That's a far cry from the 100,000 it was hoping to ship this year, and about 9,000 more than I thought it might get. It's hard to imagine it picked up many more orders beyond those initial 10,000 after critics slaughtered the AI Pin.

At a price of $700 (plus a mandatory $24 per month for 4G service), that puts Humane's initial revenue at a maximum of about $7.24 million, not accounting for canceled orders. And yet Humane wants a buyer for north of $1 billion after taking a swing and missing so hard it practically knocked out the umpire.
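That ceiling is simple arithmetic. Here's a quick back-of-the-envelope sketch in Python, assuming all 10,000 orders shipped, none were canceled and each buyer paid for exactly one month of service:

```python
# Back-of-the-envelope estimate of Humane's maximum initial revenue.
# Assumptions (not confirmed figures): all 10,000 orders shipped,
# no cancellations, and each buyer paid exactly one month of service.
ORDERS = 10_000
DEVICE_PRICE = 700       # USD, AI Pin hardware
MONTHLY_SERVICE = 24     # USD, mandatory 4G subscription

revenue = ORDERS * (DEVICE_PRICE + MONTHLY_SERVICE)
print(f"Maximum initial revenue: ${revenue:,}")  # -> Maximum initial revenue: $7,240,000
```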

HP is reportedly one of the companies that Humane was in talks with over a potential sale, with discussions starting only a week or so after the reviews came out. Any buyer that does take the opportunity to snap up Humane's business and tech might be picking up something of a poisoned chalice. Not least because the company this week urged its customers to stop using the AI Pin's charging case over a possible “fire safety risk.”

This article originally appeared on Engadget at https://www.engadget.com/humane-is-said-to-be-seeking-a-1-billion-buyout-after-only-10000-orders-of-its-terrible-ai-pin-134147878.html?src=rss

Ex-Meta engineer sues company, claiming he was fired over handling of Palestine content

Ferras Hamad, a former engineer on Meta's machine learning team, has sued the company, accusing it of firing him over his handling of Palestine-related Instagram posts. According to Reuters, he accuses the company of discrimination, wrongful termination and a pattern of bias against Palestinians. Hamad said he noticed procedural irregularities in how the company handled restrictions on content from Palestinian Instagram personalities, which prevented their posts from appearing in feeds and searches. One particular case, involving a short video showing a destroyed building in Gaza, seemingly led to his dismissal in February.

Hamad discovered that the video, which was taken by Palestinian photojournalist Motaz Azaiza, was misclassified as pornographic. He said he received conflicting guidance on whether he was authorized to help resolve the issue but was eventually told in writing that helping troubleshoot it was part of his tasks. A month later, though, Hamad was reportedly notified that he was the subject of an investigation. He filed an internal discrimination complaint in response, but he was fired days later and was told that it was because he violated a policy that prohibits employees from working on issues involving accounts of people they personally know. Hamad, who is Palestinian-American, has denied that he personally knew Azaiza. 

In addition to detailing the events that led to his firing, Hamad's lawsuit also accuses the company of deleting internal communications between employees discussing the deaths of relatives in Gaza. Employees who used the Palestinian flag emoji were investigated as well, it alleges, whereas those who had posted the Israeli or Ukrainian flag in similar contexts weren't subjected to the same scrutiny.

Meta has been accused of suppressing posts that support Palestine even before the October 7 Hamas attacks against Israel. Late last year, Senator Elizabeth Warren wrote Mark Zuckerberg a letter raising concerns about how numerous Instagram users were accusing the company of "shadowbanning" them for posting about the conditions in Gaza. Meta's Oversight Board ruled last year that the company's tools mistakenly removed a video posted on Instagram showing the aftermath of a strike on the Al-Shifa Hospital in Gaza during Israel’s ground offensive. More recently, the board opened an investigation to review cases involving Facebook posts that used the phrase "from the river to the sea." We've asked Meta for a statement on Hamad's lawsuit, and we'll update this post when we hear back.

This article originally appeared on Engadget at https://www.engadget.com/ex-meta-engineer-sues-company-claiming-he-was-fired-over-handling-of-palestine-content-123057080.html?src=rss

AI workers demand stronger whistleblower protections in open letter

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic has signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter, which was published on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risking the loss of their vested equity in the company. After the report, OpenAI CEO Sam Altman said that he had been "genuinely embarrassed" by the provision and claimed it had been removed from recent exit documentation, though it was unclear whether it remained in force for some employees. After this story was published, an OpenAI spokesperson told Engadget that the company had removed the non-disparagement clause from its standard departure paperwork and released all former employees from their non-disparagement agreements.

The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo. Kokotajlo said that he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter — which was endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio and Stuart Russell — expresses grave concerns over the lack of effective government oversight for AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, the exacerbation of inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.

“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas,” wrote Kokotajlo on X. “Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

In a statement shared with Engadget, an OpenAI spokesperson said: “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world.” They added: “This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company.”

Google and Anthropic did not respond to Engadget's requests for comment.

The signatories are calling on AI companies to commit to four key principles:

  • Refraining from retaliating against employees who voice safety concerns

  • Supporting an anonymous system for whistleblowers to alert the public and regulators about risks

  • Allowing a culture of open criticism

  • And avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out

The letter comes amid growing scrutiny of OpenAI's practices, including the disbandment of its "superalignment" safety team and the departure of key figures like co-founder Ilya Sutskever and Jan Leike, who criticized the company's prioritization of "shiny products" over safety.

Update, June 05 2024, 11:51AM ET: This story has been updated to include statements from OpenAI.

This article originally appeared on Engadget at https://www.engadget.com/former-openai-google-and-anthropic-workers-are-asking-ai-companies-for-more-whistleblower-protections-175916744.html?src=rss

Malicious code has allegedly compromised TikTok accounts belonging to CNN and Paris Hilton

There’s a new exploit making its way through TikTok, and it has already compromised the official accounts of Paris Hilton, CNN and others, as reported by Forbes. It’s spread via direct message and doesn’t require a download, click or any form of response beyond opening the chat. It’s currently unclear how many accounts have been affected.

Even weirder? The hacked accounts aren’t really doing anything. A source within TikTok told Forbes that these impacted accounts “do not appear to be posting content.” TikTok issued a statement to The Verge, saying that it is "aware of a potential exploit targeting a number of brand and celebrity accounts." The social media giant is "working directly with affected account owners to restore access."

Semafor recently reported that CNN’s TikTok had been hacked, which forced the network to disable the account. It’s unclear if this is the very same hack that has gone on to infect other big-time accounts. The news organization said that it was “working with TikTok on the backend on additional security measures.” 

CNN staffers told Semafor that the news entity had “grown lax” regarding digital safety practices, with one employee noting that dozens of colleagues had access to the official TikTok account. However, another network source suggested that the breach wasn’t the result of someone gaining access from CNN’s end. That’s about all we know for now. We’ll update this post when more news comes in.

Of course, this isn’t the first big TikTok hack. Back in 2023, the company acknowledged that around 700,000 accounts in Turkey had been compromised due to insecure SMS channels involved with its two-factor authentication. Researchers at Microsoft discovered a vulnerability in 2022 that allowed hackers to take over accounts with just a single click. Later that same year, an alleged security breach impacted more than a billion users.

This article originally appeared on Engadget at https://www.engadget.com/malicious-code-has-allegedly-compromised-tiktok-accounts-belonging-to-cnn-and-paris-hilton-174000353.html?src=rss

Twitch removes every member of its Safety Advisory Council

Twitch signed up cyberbullying experts, web researchers and community members back in 2020 to form the Safety Advisory Council, a review board meant to help it draft new policies, develop products that improve safety and protect the interests of marginalized groups. Now, CNBC reports that the streaming website has terminated all the members of the council. Twitch reportedly called the nine members into a meeting on May 6 to let them know that their existing contracts would end on May 31 and that they would not be getting paid for the second half of 2024.

The Safety Advisory Council's members include Dr. Sameer Hinduja, co-director of the Cyberbullying Research Center, and Dr. T.L. Taylor, the co-founder and director of AnyKey, an organization that advocates for inclusion and diversity in video games and esports. There's also Emma Llansó, the director of the Free Expression Project for the Center for Democracy and Technology.

In an email sent to the members, Twitch reportedly told them that going forward, "the Safety Advisory Council will primarily be made up of individuals who serve as Twitch Ambassadors." The Amazon subsidiary didn't mention any names, but it describes its Ambassadors as people who "positively contribute to the Twitch community — from being role models for their community, to establishing new content genres, to having inspirational stories that empower those around them."

In a statement sent to The Verge, Twitch trust and safety communications manager Elizabeth Busby said that the new council members will "offer [the website] fresh, diverse perspectives" after working with the same core members for years. "We’re excited to work with our global Twitch Ambassadors, all of whom are active on Twitch, know our safety work first hand, and have a range of experiences to pull from," Busby added.

It's unclear if the Ambassadors taking the current council members' place will get paid or if they're expected to lend their help to the company for free. If it's the latter, then this development could be a cost-cutting measure: The outgoing members were paid between $10,000 and $20,000 a year, CNBC says. Back in January, Twitch also laid off 35 percent of its workforce to "cut costs" and to "build a more sustainable business." In the same month, it reduced how much streamers make from every Twitch Prime subscription they generate, as well.

This article originally appeared on Engadget at https://www.engadget.com/twitch-removes-every-member-of-its-safety-advisory-council-131501219.html?src=rss

OpenAI says it stopped multiple covert influence operations that abused its AI models

OpenAI said that it stopped five covert influence operations that used its AI models for deceptive activities across the internet. These operations, which OpenAI shut down between 2023 and 2024, originated from Russia, China, Iran and Israel and attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI said in a report about the operations, adding that it worked with people across the tech industry, civil society and governments to cut off these bad actors.

OpenAI’s report comes amid concerns about the impact of generative AI on the many elections slated around the world this year, including in the US. In its findings, OpenAI revealed how networks of people engaged in influence operations have used generative AI to produce text and images at much higher volumes than before, and to fake engagement by generating comments on social media posts.

“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told members of the media in a press briefing, according to Bloomberg. “With this report, we really want to start filling in some of the blanks.”

OpenAI said that the Russian operation called “Doppelganger” used the company’s models to generate headlines, convert news articles to Facebook posts and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI's models to debug code for a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US and the Baltic states. The Chinese network "Spamouflage," known for its influence efforts across Facebook and Instagram, used OpenAI's models to research social media activity and generate text-based content in multiple languages across various platforms. The Iranian "International Union of Virtual Media" also used AI to generate content in multiple languages.

OpenAI’s disclosure is similar to the ones that other tech companies make from time to time. On Wednesday, for instance, Meta released its latest report on coordinated inauthentic behavior detailing how an Israeli marketing firm had used fake Facebook accounts to run an influence campaign on its platform that targeted people in the US and Canada.

This article originally appeared on Engadget at https://www.engadget.com/openai-says-it-stopped-multiple-covert-influence-operations-that-abused-its-ai-models-225115466.html?src=rss

The Internet Archive has been fending off DDoS attacks for days

If you couldn't access the Internet Archive and its Wayback Machine over the past few days, that's because the website has been under attack. In fact, the nonprofit organization announced in a blog post that it's currently in its "third day of warding off an intermittent DDoS cyber-attack." Over the Memorial Day weekend, the organization posted on Twitter/X that most of its services weren't available due to bad actors pummeling its website with "tens of thousands of fake information requests per second." On Tuesday morning, it warned that it's "continuing to experience service disruptions" because the attackers haven't stopped targeting it.

The website's data doesn't seem to have been affected, though, and archived pages remained accessible whenever the site itself was reachable. "Thankfully the collections are safe, but we are sorry that the denial-of-service attack has knocked us offline intermittently during these last three days," Brewster Kahle, the founder of the Internet Archive, said in a statement. "With the support from others and the hard work of staff we are hardening our defenses to provide more reliable access to our library. What is new is this attack has been sustained, impactful, targeted, adaptive, and importantly, mean."
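The Archive hasn't said exactly how it's hardening those defenses, but warding off "tens of thousands of fake information requests per second" typically involves per-client rate limiting. Below is a minimal, purely illustrative token-bucket sketch in Python; the rates are invented values, and none of this reflects the Internet Archive's actual setup:

```python
# Illustrative token-bucket rate limiter (not the Internet Archive's
# actual mitigation). Each client gets a bucket of tokens that refills
# over time; requests from clients with empty buckets are rejected.
import time
from collections import defaultdict

RATE = 10.0   # tokens refilled per second, per client (assumed value)
BURST = 20.0  # maximum bucket size, i.e. allowed burst (assumed value)

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    """Return True if this client's request should be served."""
    bucket = buckets[client_ip]
    now = time.monotonic()
    # Refill tokens in proportion to the time elapsed since the last request.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False  # over the limit: drop or challenge the request
```

A flood of fake requests from a single source quickly drains its bucket and gets dropped, while ordinary visitors stay under the refill rate; real-world mitigations layer this with upstream filtering and CDN-level protections.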

The Internet Archive has yet to identify the source of the attacks, but it did note that libraries and similar institutions are being targeted more frequently these days. One of the institutions it mentioned was the British Library, whose online information system was held hostage for ransom by a hacker group last year. It also pointed out that it's being sued by the US book publishing and US recording industries, which accuse it of copyright infringement.

This article originally appeared on Engadget at https://www.engadget.com/the-internet-archive-has-been-fending-off-ddos-attacks-for-days-035950028.html?src=rss
