Report finds most subscription services manipulate customers with ‘dark patterns’

Most subscription sites use "dark patterns" to influence customer behavior around subscriptions and personal data, according to a pair of new reports from global consumer protection groups. Dark patterns are "practices commonly found in online user interfaces [that] steer, deceive, coerce or manipulate consumers into making choices that often are not in their best interests." The international research efforts were conducted by the International Consumer Protection and Enforcement Network (ICPEN) and the Global Privacy Enforcement Network (GPEN).

ICPEN conducted its review across 642 websites and mobile apps with a subscription component. The assessment revealed at least one dark pattern in use at almost 76 percent of the platforms, and multiple dark patterns at play in almost 68 percent of them. One of the most common dark patterns discovered was sneaking, where a company makes potentially negative information difficult to find. ICPEN said 81 percent of the platforms with automatic renewal kept the option to turn off auto-renewal out of the purchase flow, so buyers couldn't disable it while signing up. Other dark patterns found on subscription services included interface interference, where the actions a company prefers are made easier to perform than the alternatives, and forced action, where customers have to provide information to access a particular function.

The companion report from GPEN examined dark patterns that could encourage users to compromise their privacy. In this review, nearly all of the more than 1,000 websites and apps surveyed used a deceptive design practice. More than 89 percent of them used complex and confusing language in their privacy policies. Interface interference was another key offender here, with 57 percent of the platforms making the least protective privacy option the easiest to choose and 42 percent using emotionally charged language that could influence users.

Even the most savvy of us can be influenced by these subtle cues to make suboptimal decisions. Those decisions might be innocuous ones, like forgetting that you've set a service to auto-renew, or they might put you at risk by encouraging you to reveal more personal information than needed. The reports didn't specify whether the dark patterns were used in illicit or illegal ways, only that they were present. The dual release is a stark reminder that digital literacy is an essential skill.

This article originally appeared on Engadget at https://www.engadget.com/report-finds-most-subscription-services-manipulate-customers-with-dark-patterns-225640057.html?src=rss

Elon Musk escapes paying $500 million to former Twitter employees

The social media platform formerly known as Twitter has been at the center of multiple legal battles from the very beginning of Elon Musk's takeover. One such suit relates to the more than 6,000 employees laid off by Musk following his acquisition of the company – and his alleged failure to pay them their full severance. Yesterday, Musk notched a win over his former employees.

The case in question is a class-action lawsuit filed by former Twitter employee Courtney McMillian. The complaint argued that under the federal Employee Retirement Income Security Act (ERISA), the Twitter Severance Plan owed laid-off workers three months of pay; the plaintiffs said they received less than that and sought $500 million in unpaid severance. However, on Tuesday, US District Judge Trina Thompson of the Northern District of California granted Musk's motion to dismiss the class-action complaint.

Judge Thompson found that the Twitter severance plan did not qualify under ERISA because the employees had received notice of a separate payout scheme prior to the layoffs. She dismissed the case, ruling that the severance program adopted after Musk's takeover, rather than the 2019 plan the plaintiffs were expecting, was the one that applied to these former employees.

This ruling is a setback for the thousands of dismissed Twitter staffers, but they may yet have chances to win larger payouts. Thompson's order noted that the plaintiffs could amend their complaint with non-ERISA claims. If they do, Thompson said "this Court will consider issuing an Order finding this case related to one of the cases currently pending" against X Corp/Twitter. There are still lawsuits underway on behalf of some past top brass at Twitter, one seeking $128 million in unpaid severance and another attempting to recoup about $1 million in unpaid legal fees.

This article originally appeared on Engadget at https://www.engadget.com/elon-musk-escapes-paying-500-million-to-former-twitter-employees-203813996.html?src=rss

Google brings passkeys to its Advanced Protection Program

Google is adding passkey support to its Advanced Protection Program. APP is the company's highest-level security option, intended for people at high risk of hacking and other targeted attacks, such as elected officials and human rights workers, and it previously required a physical security key to use. In today's announcement, Google acknowledged that the physical component made APP less feasible for some of the people who need the service most. Now, people who enroll in APP can opt for either a passkey or a physical key.

Google was one of many tech companies to start offering passkeys for security, rolling out the option to Android and Chrome in 2022 and to all Google accounts in 2023. Earlier this year, Google said that more than 400 million accounts have used passkeys more than 1 billion times. That's a big number, but on the whole, uptake of the technology has still been gradual.

In addition to adding passkey support, Google also shared that it is partnering with media nonprofit Internews to provide cybersecurity support for its network of journalists and human rights advocates. The arrangement will cover ten countries, including Brazil, Mexico and Poland.

This article originally appeared on Engadget at https://www.engadget.com/google-brings-passkeys-to-its-advanced-protection-program-100034040.html?src=rss

Xbox is increasing Game Pass prices and adding a ‘standard’ plan

Time for Xbox fans to adjust their budgets. Xbox Game Pass is increasing prices this year in a phased rollout. Beginning on July 10, any new subscribers will be charged the updated price, while current subscribers will see the higher costs take effect starting September 12. For the US, Game Pass Ultimate prices will increase from $17 a month to $20 a month, while a year of access to Game Pass Core will jump from $60 to $75. Microsoft laid out all the regional increases in a graph.

Microsoft is also adding a less expensive option in September with Xbox Game Pass Standard. This plan offers access to Game Pass titles but lacks some perks of the Ultimate package, such as day-one releases and Xbox Cloud Gaming. The Standard option will include online multiplayer, some store discounts and all the other features of the Core plan. It will cost $15 per month in the US.

[Image: Breakdown of benefits for Xbox Game Pass plans (Xbox)]

The final change is what looks like the beginning of the end for the Xbox Game Pass for Console plan. This option will no longer be available for new customers, and if any current plan holders stop their automatic renewal, they'll have to choose a different option if they want to re-up.

This is the latest in a string of gloomy news about Game Pass. In February, Microsoft said the program had 34 million subscribers, marking a notable slowdown in growth, with only 9 million new players added over the past two years. That total includes Core, the rebranded Xbox Live Gold plan for playing online games with minimal other perks, meaning growth in the full-fledged Game Pass tiers is even lower. And in June, Xbox's hoped-for big splash of new hardware announcements turned out to be a mere trickle of refreshes. Game Pass is a great offer for players who want to keep up with the vast number of new games released every month, but it doesn't seem to be connecting with its audience in the way Microsoft hoped.

This article originally appeared on Engadget at https://www.engadget.com/xbox-is-increasing-game-pass-prices-and-adding-a-standard-plan-234657957.html?src=rss

Bumble wants users to report AI-generated images

Bumble is making it simpler for its members to report AI-generated profiles. The dating and social connection platform now has "Using AI-generated photos or videos" as an option under the Fake Profile reporting menu.

"An essential part of creating a space to build meaningful connections is removing any element that is misleading or dangerous," Bumble Vice President of Product at Bumble Risa Stein said in an official statement. "We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections."

According to a Bumble user survey, 71 percent of the service's Gen Z and Millennial respondents want to see limits on use of AI-generated content on dating apps. Another 71 percent considered AI-generated photos of people in places they've never been or doing activities they've never done a form of catfishing.

Fake profiles can also swindle people out of a lot of money. In 2022, the Federal Trade Commission received reports of romance scams from almost 70,000 people, and their losses to those frauds totaled $1.3 billion. Many dating apps take extensive safety measures to protect their users from scams, as well as from physical dangers, and the use of AI in creating fake profiles is the latest threat for them to combat. Bumble released a tool called the Deception Detector earlier this year, leveraging AI for positive ends to identify phony profiles. It also introduced an AI-powered tool to protect users from seeing unwanted nudes. Tinder launched its own approach to verifying profiles in the US and UK this year.

This article originally appeared on Engadget at https://www.engadget.com/bumble-wants-users-to-report-ai-generated-images-203627777.html?src=rss

OpenAI hit by two big security issues this week

OpenAI seems to make headlines every day and this time it's for a double dose of security concerns. The first issue centers on the Mac app for ChatGPT, while the second hints at broader concerns about how the company is handling its cybersecurity.

Earlier this week, engineer and Swift developer Pedro José Pereira Vieito dug into the Mac ChatGPT app and found that it was storing user conversations locally in plain text rather than encrypting them. The app is only available from OpenAI's website, and since it isn't distributed through the App Store, it doesn't have to follow Apple's sandboxing requirements. Vieito's work was then covered by The Verge, and after the finding attracted attention, OpenAI released an update that added encryption to locally stored chats.

For the non-developers out there, sandboxing is a security practice that keeps potential vulnerabilities and failures from spreading from one application to others on a machine. And for non-security experts, storing local files in plain text means potentially sensitive data can be easily viewed by other apps or malware.
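To make that concrete, here's a minimal Python sketch of the difference between the two storage approaches. It is purely illustrative and in no way OpenAI's actual code: it assumes the third-party cryptography package for the encrypted variant, and a real app would keep the key in a protected store such as the macOS Keychain rather than generating it inline.

```python
# Illustrative sketch only -- not OpenAI's implementation.
from cryptography.fernet import Fernet

chat = "user: hello\nassistant: hi there"

# Plain text at rest: any other app or malware with file access sees everything.
with open("conversation.txt", "w") as f:
    f.write(chat)

# Encrypted at rest: the file is unreadable without the key. A real app would
# fetch the key from a protected store (e.g. the Keychain), not generate it here.
key = Fernet.generate_key()
cipher = Fernet(key)
with open("conversation.enc", "wb") as f:
    f.write(cipher.encrypt(chat.encode("utf-8")))

# Reading the encrypted log back requires the same key.
with open("conversation.enc", "rb") as f:
    print(cipher.decrypt(f.read()).decode("utf-8"))
```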

The second issue occurred in 2023 with consequences that have had a ripple effect continuing today. Last spring, a hacker was able to obtain information about OpenAI after illicitly accessing the company's internal messaging systems. The New York Times reported that OpenAI technical program manager Leopold Aschenbrenner raised security concerns with the company's board of directors, arguing that the hack implied internal vulnerabilities that foreign adversaries could take advantage of.

Aschenbrenner now says he was fired for disclosing information about OpenAI and for surfacing concerns about the company’s security. A representative from OpenAI told The Times that “while we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work” and added that his exit was not the result of whistleblowing.

App vulnerabilities are something that every tech company has experienced. Breaches by hackers are also depressingly common, as are contentious relationships between whistleblowers and their former employers. However, between how broadly ChatGPT has been adopted into major players' services and how chaotic the company's oversight, practices and public reputation have been, these recent issues are beginning to paint a more worrying picture about whether OpenAI can manage its data.

This article originally appeared on Engadget at https://www.engadget.com/openai-hit-by-two-big-security-issues-this-week-214316082.html?src=rss

Your next webcam could be a Game Boy Camera

Forget your phone cameras and laptop built-ins; your next webcam could be your old Game Boy Camera. The team (sort of) bringing this peripheral into the modern age is Epilogue. The company makes the GB Operator, which lets people play original Game Boy, Game Boy Advance and Game Boy Color cartridges on a current PC or a Steam Deck.

Today, Epilogue announced that it is working on an update that will make the Game Boy Camera into a webcam, but one that's a fuzzy, lo-fi, 16 kilopixel experience. The magic happens through the Playback emulator app that powers the GB Operator.

"We now have a live feed from the Game Boy Camera, but still need to fine-tune some things and allow for configuration options," the company said. "We wanted to share this update because it was exciting to see it finally work, and [we] can't wait to see everyone having fun with it. It's the worst and the best webcam you'll ever have."

We've seen fan projects adapting the Game Boy Camera before, and even a fan-made recreation of the hardware. Considering the original Game Boy is now more than three decades old, it's amazing to see the hardware continuing to inspire strange and creative experiences.

This article originally appeared on Engadget at https://www.engadget.com/your-next-webcam-could-be-a-game-boy-camera-231113749.html?src=rss

Cloudflare is taking a stand against AI website scrapers

Cloudflare has released a new free tool that prevents AI companies' bots from scraping its clients' websites for content to train large language models. The cloud service provider is making this tool available to its entire customer base, including those on free plans. "This feature will automatically be updated over time as we see new fingerprints of offending bots we identify as widely scraping the web for model training," the company said.

In a blog post announcing this update, Cloudflare's team also shared some data about how its clients are responding to the boom of bots that scrape content to train generative AI models. According to the company's internal data, 85.2 percent of customers have chosen to block even the AI bots that properly identify themselves from accessing their sites.

Cloudflare also identified the most active bots of the past year. The ByteDance-owned Bytespider bot attempted to access 40 percent of websites under Cloudflare's purview, and OpenAI's GPTBot tried 35 percent. Those two made up half of the top four AI bot crawlers by number of requests on Cloudflare's network, with Amazonbot and ClaudeBot rounding out the list.
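Declared crawlers are the easy case, since they announce themselves in the User-Agent header. The hedged Python sketch below shows that basic matching approach; it is not Cloudflare's implementation, whose blocking also relies on machine-learned fingerprints to catch bots that lie about their identity, something simple string checks can't do.

```python
# Illustrative sketch only -- not Cloudflare's implementation. Declared AI
# crawlers self-identify in the User-Agent header, so an opt-in block rule
# can match on those tokens. Bots that spoof their identity require the
# behavioral fingerprinting described above, which this check cannot provide.
AI_CRAWLER_TOKENS = ("Bytespider", "GPTBot", "ClaudeBot", "Amazonbot")

def is_declared_ai_crawler(user_agent: str) -> bool:
    """True if the request self-identifies as a known AI training crawler."""
    return any(token in user_agent for token in AI_CRAWLER_TOKENS)

def handle_request(headers: dict) -> int:
    """Return HTTP 403 for declared AI crawlers, 200 for everything else."""
    if is_declared_ai_crawler(headers.get("User-Agent", "")):
        return 403  # refuse the scrape
    return 200

print(handle_request({"User-Agent": "Mozilla/5.0 (compatible; GPTBot/1.0)"}))  # 403
print(handle_request({"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X)"}))  # 200
```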

It's proving very difficult to fully and consistently block AI bots from accessing content. The arms race to build models faster has led to instances of companies skirting or outright breaking the existing rules around blocking scrapers. Perplexity AI was recently accused of scraping websites without the required permissions. But having a backend company at the scale of Cloudflare getting serious about trying to put the kibosh on this behavior could lead to some results.

"We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection," the company said. "We will continue to keep watch and add more bot blocks to our AI Scrapers and Crawlers rule and evolve our machine learning models to help keep the Internet a place where content creators can thrive and keep full control over which models their content is used to train or run inference on."

This article originally appeared on Engadget at https://www.engadget.com/cloudflare-is-taking-a-stand-against-ai-website-scrapers-220030471.html?src=rss