DOJ says TikTok collected users’ views on issues like abortion, gun control and religion

The Department of Justice on Friday night asked a federal court to reject TikTok’s bid to overturn the law that could ban it, citing national security concerns that include the company’s alleged use of internal search tools to collect information on users’ views about sensitive topics. The filing responds to a petition TikTok submitted in May challenging the law, which requires the app’s China-based parent company, ByteDance, to sell it or see it banned in the US. President Biden signed the bill into law in April.

In one of the documents filed with the US Court of Appeals for the DC Circuit, the DOJ says a search tool within Lark, the web-suite system the company’s employees use to communicate, “allowed ByteDance and TikTok employees in the United States and China to collect bulk user information based on the user’s content or expressions, including views on gun control, abortion, and religion.” The DOJ also argues in the filings that TikTok could be used to subject US users to content manipulation, and that their sensitive information could end up stored on servers in China.

TikTok has repeatedly denied the accusations about it being a threat to national security and has called the efforts to ban it “unconstitutional.” In its latest statement responding to the DOJ filing, posted on X, TikTok said, “Nothing in this brief changes the fact that the Constitution is on our side.”

This article originally appeared on Engadget at https://www.engadget.com/doj-says-tiktok-collected-users-views-on-issues-like-abortion-gun-control-and-religion-201617503.html?src=rss

ISPs are fighting to raise the price of low-income broadband

A new government program is trying to encourage Internet service providers (ISPs) to offer lower rates to lower-income customers by distributing federal funds through states. The only problem is that the ISPs don’t want to offer the proposed rates.

Ars Technica obtained a letter sent to US Commerce Secretary Gina Raimondo signed by more than 30 broadband industry trade groups like ACA Connects and the Fiber Broadband Association as well as several state-based organizations. The letter raises “both a sense of alarm and urgency” about their ability to participate in the Broadband Equity, Access and Deployment (BEAD) program. The newly formed BEAD program provides over $42 billion in federal funds to “expand high-speed internet access by funding planning, infrastructure, deployment and adoption programs” in states across the country, according to the National Telecommunications and Information Administration (NTIA).

The money first goes to the NTIA, which distributes it to states once each state wins approval by presenting a low-cost broadband Internet option. The industry’s letter claims a fixed rate of $30 per month for high-speed Internet access is “completely unmoored from the economic realities of deploying and operating networks in the highest-cost, hardest-to-reach areas.”

The letter urges the NTIA to revise the low-cost service option rates that have been proposed or approved so far. Twenty-six states have completed all of the BEAD program’s phases.

Americans pay an average of $89 a month for Internet access. New Jersey has the highest average bill at $126 per month, according to a survey conducted by U.S. News and World Report. A 2021 study from the Pew Research Center found that 57 percent of households with an annual salary of $30,000 or less have a broadband connection.

This article originally appeared on Engadget at https://www.engadget.com/isps-are-fighting-to-raise-the-price-of-low-income-broadband-220620369.html?src=rss

OpenAI unveils SearchGPT, an AI-powered search engine

OpenAI on Thursday announced a new AI-powered search engine prototype called SearchGPT. The move marks the company’s entry into a competitive search engine market dominated by Google for decades. On its website, OpenAI described SearchGPT as “a temporary prototype of new AI search features that give you fast and timely answers with clear and relevant sources.” The company plans to test out the product with 10,000 initial users and then roll it into ChatGPT after gathering feedback.

The launch of SearchGPT comes amid growing competition in AI-powered search. Google, the world’s dominant search engine, recently began integrating AI capabilities into its platform. Other startups like the Jeff Bezos-backed Perplexity have also aimed to take on Google and have marketed themselves as “answer engines” that use AI to summarize the internet. 

The rise of AI-powered search engines has been controversial. Last month, Perplexity faced criticism for summarizing stories from Forbes and Wired without adequate attribution or backlinks to the publications as well as ignoring robots.txt, a way for websites to tell crawlers that scrape data to back off. Earlier this week, Wired publisher Condé Nast reportedly sent a cease and desist letter to Perplexity and accused it of plagiarism. 

Perhaps because of these tensions, OpenAI appears to be taking a more collaborative approach with SearchGPT. The company's blog post emphasizes that the prototype was developed in partnership with various news organizations and includes quotes from the CEOs of The Atlantic and News Corp, two of many publishers that OpenAI has struck licensing deals with.

“SearchGPT is designed to help users connect with publishers by prominently citing and linking to them in searches,” the company’s blog post says. “Responses have clear, in-line, named attribution and links so users know where information is coming from and can quickly engage with even more results in a sidebar with source links.” OpenAI also noted that publishers will have control over how their content is presented in SearchGPT and can opt out of having their content used for training OpenAI's models while still appearing in search results.

SearchGPT's interface features a prominent textbox asking users, "What are you searching for?" Unlike traditional search engines like Google that provide a list of links, SearchGPT categorizes the results with short descriptions and visuals.

For example, when searching for information about music festivals, the engine provides brief descriptions of events along with links for more details. Some users have pointed out, however, that the search engine is already presenting inaccurate information in its results.

We reiterate: Please don't get your news from AI chatbots.

This article originally appeared on Engadget at https://www.engadget.com/openai-unveils-searchgpt-an-ai-powered-search-engine-195235766.html?src=rss

Meta takes down 63,000 Instagram accounts linked to extortion scams

Meta has taken down tens of thousands of Instagram accounts from Nigeria as part of a massive crackdown on sextortion scams. The accounts primarily targeted adult men in the United States, but some also targeted minors, Meta said in an update.

The takedowns are part of a larger effort by Meta to combat sextortion scams on its platform in recent months. Earlier this year, the company added a safety feature in Instagram messages to automatically detect nudity and warn users about potential blackmail scams. The company also provides in-app resources and safety tips about such scams.

According to Meta, the recent takedowns included 2,500 accounts that were linked to a group of about 20 people who worked together to carry out sextortion scams. The company also took down thousands of accounts and groups on Facebook that provided tips and other advice, including scripts and fake images, for would-be sextortionists. Those accounts were linked to the Yahoo Boys, a group of “loosely organized cybercriminals operating largely out of Nigeria that specialize in different types of scams,” Meta said.

Meta has come under particular scrutiny for not doing enough to protect teens from sextortion on its apps. During a Senate hearing earlier this year, Senator Lindsey Graham pressed Mark Zuckerberg on whether the parents of a child who died by suicide after falling victim to such a scam should be able to sue the company.

Though the company said that the “majority” of the scammers it uncovered in its latest takedowns targeted adults, it confirmed that some of the accounts had targeted minors as well and that those accounts had also been reported to the National Center for Missing and Exploited Children (NCMEC).

This article originally appeared on Engadget at https://www.engadget.com/meta-takes-down-63000-instagram-accounts-linked-to-extortion-scams-175118067.html?src=rss

AI search engines that don’t pay up can’t index Reddit content

When Reddit said last month that it would block unauthorized data scraping from its site, everyone’s (rightful) first reaction was “AI, AI, AI.” However, now that the change has taken effect, chatbot makers may not be the only ones being locked out. The widely used forum also appears to be blocking major search engines other than Brave and Google, the latter of which reportedly inked a deal earlier this year with Reddit worth $60 million annually. A Reddit spokesperson, however, told Engadget that the empty search results are about Google’s rivals not agreeing to the company’s requirements for AI training. It says it’s in discussions with several of them.

404 Media reported on Wednesday (and Engadget confirmed in our queries) that searching for Reddit results from the past week on rival engine Bing (using “site:reddit.com”) returns empty results. The publication reported that DuckDuckGo produced seven links without any descriptions, only providing the note, “We would like to show you a description here but the site won’t allow us.” The engine now appears to have removed even those, as our test only produced an empty page, reading, “no results found.”

When Reddit said last month that it would update its Robots Exclusion Protocol (robots.txt) to block automated data scraping, the move appeared squarely aimed at AI companies like Perplexity and its controversial “answer engine.” It’s now apparent they weren’t the only targets: currently, Google appears to be the only search engine allowed to crawl Reddit and produce results from “the front page of the internet.”

A Reddit spokesperson told Engadget on Wednesday it isn’t accurate to say the missing search results are a result of its Google deal. “We block all crawlers that are unwilling to commit to not using crawl data for AI training, which is in line with enforcing our Public Content Policy and updated robots.txt file,” the company said. “Anyone accessing Reddit content must abide by our policies, including those in place to protect redditors. We are selective about who we work with and trust with large-scale access to Reddit content.”

Meanwhile, a source familiar with Reddit’s thinking told Engadget on Wednesday that Bing’s omission is due to Microsoft refusing to agree to Reddit’s terms regarding AI crawling. Instead, the Bing maker allegedly claimed its standard web controls were sufficient. The source claims Microsoft’s stance conflicts with Reddit’s data privacy policy, leading to the impasse and empty search results.

The ubiquitous robots.txt is the web standard that communicates which parts of a site can be crawled. Although many crawlers are known to ignore its instructions, Google’s standard procedure is to respect it. So, on the technical side, the companies in cahoots on the lucrative deal appear to have deployed some manual override.
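The allow-one, block-everyone-else pattern described above can be sketched with Python’s standard-library robots.txt parser. The file below is a hypothetical illustration of that kind of policy, not Reddit’s actual robots.txt:

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt that admits one crawler and blocks all others,
# mirroring the selective-crawling pattern described above.
robots_txt = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The named crawler gets through; everyone else hits the wildcard block.
print(parser.can_fetch("Googlebot", "https://example.com/r/news"))  # True
print(parser.can_fetch("Bingbot", "https://example.com/r/news"))    # False
```

As the article notes, this is purely advisory: a crawler that chooses to ignore the file can still fetch the pages, which is why robots.txt alone can’t enforce Reddit’s policy.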

The saga could be seen as a trickle-down effect of AI chatbots scraping the live web for results. With courts slow to determine how much of the open web is fair use to train chatbots on, companies like Reddit, whose bottom lines now depend on safeguarding their data from those who don’t pay, are building walls at the expense of the open web. (Although, given the integral role Microsoft has played in this AI era, cozying up with OpenAI early on, it seems ironic that Bing finds itself on the losing end of at least one aspect of the fallout.)

Colin Hayhurst, CEO of lesser-known “no-tracking” search engine Mojeek, told 404 Media that Reddit is “killing everything for search but Google.” In addition, the executive said his attempts to contact Reddit were ignored. “It’s never happened to us before,” he said. “Because this happens to us, we get blocked, usually because of ignorance or stupidity or whatever, and when we contact the site you certainly can get that resolved, but we’ve never had no reply from anybody before.”

Reddit has made no secret of its desire to block AI companies from scraping its treasure trove of data in this burgeoning age of AI. Last year, CEO Steve Huffman risked alienating large portions of its user base by blocking third-party API requests, leading to the demise of beloved apps like Christian Selig’s Apollo. Despite widespread protests among moderators and forum-goers, the company only temporarily lost negligible numbers of users.

The gamble appeared to pay off, and Reddit recovered. It went public in March.

Update, July 24, 2024, 5:00 PM ET: This story has been updated to add statements from Reddit and additional context from sources familiar with the company’s thinking.

This article originally appeared on Engadget at https://www.engadget.com/search-engines-that-dont-pay-up-cant-index-reddit-content-172949170.html?src=rss

CrowdStrike offered a $10 Uber Eats card to teammates and partners, but it got flagged for fraud

Last week’s CrowdStrike outage plunged a noticeable portion of the world into a sea of blue screens of death. The cybersecurity company tried to apologize with an Uber Eats gift card, but its rollout had some troubles as well, according to a report from TechCrunch.

CrowdStrike apparently tried to send its "teammates and partners" a $10 Uber Eats gift card on Tuesday. The gift card was an attempt to apologize for the global shutdown that locked up computer systems for banks, hospitals, airlines and more and “the additional work that the July 19 incident has caused,” according to TechCrunch’s source who received the message.

When some tried to use the gift card on Uber Eats, they only saw a screen telling them that the offer had been rescinded by the issuing party. CrowdStrike told us that Uber flagged it as fraud because of high usage rates.

CrowdStrike blamed the global system outage on a bug in an update that contained “problematic data.” The bug forced machines running Windows into a boot loop, causing mass delays at airports, postponed surgeries and other disruptions at hospitals, and problems at banks and even the London Stock Exchange.

Correction: July 24, 2024, 4:45PM ET: This story originally claimed that Crowdstrike tried to apologize for its recent outage by sending customers an Uber Eats gift card. The company gave us the following statement: "CrowdStrike did not send gift cards to customers or clients. We did send these to our teammates and partners who have been helping customers through this situation. Uber flagged it as fraud because of high usage rates."

This article originally appeared on Engadget at https://www.engadget.com/crowdstrike-offers-a-10-uber-eats-card-to-say-sorry-before-pulling-the-offer-172605510.html?src=rss

Russia-linked hackers cut heat to 600 Ukrainian apartment buildings in the dead of winter, researchers say

Cybersecurity company Dragos has flagged malware that can attack industrial control systems (ICS), tricking them into malicious behavior like turning off the heat and hot water in the middle of winter. TechCrunch reports that’s precisely what the malware, dubbed FrostyGoop, did this January in Lviv, Ukraine, when residents in over 600 apartment buildings lost heat for two days amid freezing temperatures.

Dragos says FrostyGoop is only the ninth known malware designed to target industrial controllers. It’s also the first to specifically set its sights on Modbus, a widely deployed communications protocol invented in 1979. Modbus is frequently used in industrial environments like the one in Ukraine that FrostyGoop attacked in January.

Ukraine’s Cyber Security Situation Center (CSSC), the nation’s government agency tasked with digital safety, shared information about the attack with Dragos after discovering the malware in April of this year, months after the attack. The malicious code, written in Golang (the Go programming language developed at Google), directly interacts with industrial control systems over an open internet port (502).
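Part of what makes Modbus so easy to abuse is how simple and unauthenticated the protocol is. As a rough illustration (the transaction, unit, and register values below are made up for the sketch, not taken from the Lviv attack), a complete Modbus TCP request to read holding registers is just a 12-byte frame that anyone able to reach port 502 can send:

```python
import struct

def build_read_holding_registers(transaction_id: int, unit_id: int,
                                 start_addr: int, quantity: int) -> bytes:
    """Build a Modbus TCP 'read holding registers' (function 0x03) request.

    Frame layout (big-endian):
      MBAP header: transaction id (2B), protocol id (2B, always 0),
                   length (2B, count of bytes that follow = 6), unit id (1B)
      PDU:         function code (1B), start address (2B), quantity (2B)
    """
    return struct.pack(
        ">HHHBBHH",
        transaction_id,  # echoed back by the device to match replies
        0,               # protocol id: 0 means Modbus
        6,               # remaining length: unit id + 5-byte PDU
        unit_id,         # target device address
        0x03,            # function: read holding registers
        start_addr,
        quantity,
    )

frame = build_read_holding_registers(1, 1, start_addr=0, quantity=2)
print(frame.hex())  # 000100000006010300000002
print(len(frame))   # 12
```

There is no authentication, signing, or session handshake anywhere in that frame, which is why attackers who reach an exposed port 502 can read from, and with the write function codes alter, a controller directly.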

The attackers likely gained access to Lviv’s industrial network in April 2023. Dragos says they did so by “exploiting an undetermined vulnerability in an externally facing Mikrotik router.” They then installed a remote-access tool that removed the need to install the malware locally, helping them evade detection.

The attackers downgraded the controller firmware to a version lacking monitoring capabilities, helping to cover their tracks. Instead of trying to take down the systems altogether, the hackers caused the controllers to report inaccurate measurements — resulting in the loss of heat in the middle of a deep freeze.

Dragos has a longstanding policy of neutrality in cyberattacks, preferring to focus on education without assigning blame. However, it noted that the adversaries opened secure connections (using the Layer 2 Tunneling Protocol) to Moscow-based IP addresses.

“I think it’s very much a psychological effort here, facilitated through cyber means when kinetic perhaps here wasn’t the best choice,” Dragos researcher Mark “Magpie” Graham told TechCrunch. Lviv is in the western part of Ukraine, which would be much more difficult for Russia to hit than eastern cities.

Dragos warns that, given how ubiquitous the Modbus protocol is in industrial environments, FrostyGoop could be used to disrupt similar systems worldwide. The security company recommends continuous monitoring, noting that FrostyGoop evaded virus detection, underscoring the need for network monitoring to flag future threats before they strike. Specifically, Dragos advises ICS operators to use the SANS 5 Critical Controls for World-Class OT Cybersecurity, a security framework for operational environments.

This article originally appeared on Engadget at https://www.engadget.com/russia-linked-hackers-cut-heat-to-600-ukrainian-apartment-buildings-in-the-dead-of-winter-researchers-say-171414527.html?src=rss

Google isn’t killing third-party cookies in Chrome after all

Google won’t kill third-party cookies in Chrome after all, the company said on Monday. Instead, it will introduce a new experience in the browser that will allow users to make informed choices about their web browsing preferences, Google announced in a blog post. Killing cookies, Google said, would adversely impact online publishers and advertisers. This announcement marks a significant shift from Google's previous plans to phase out third-party cookies by early 2025.

“[We] are proposing an updated approach that elevates user choice,” wrote Anthony Chavez, vice president of Google’s Privacy Sandbox initiative. “Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time. We're discussing this new path with regulators, and will engage with the industry as we roll this out.”

Google will now focus on giving users more control over their browsing data, Chavez wrote. This includes additional privacy controls like IP Protection in Chrome's Incognito mode and ongoing improvements to Privacy Sandbox APIs.

Google’s decision provides a reprieve for advertisers and publishers who rely on cookies to target ads and measure performance. Over the past few years, the company’s plans to eliminate third-party cookies have been riding on a rollercoaster of delays and regulatory hurdles. Initially, Google aimed to phase out these cookies by the end of 2022, but the deadline was pushed to late 2024 and then to early 2025 due to various challenges and feedback from stakeholders, including advertisers, publishers, and regulatory bodies like the UK's Competition and Markets Authority (CMA).

In January 2024, Google began rolling out a new feature called Tracking Protection, which restricts third-party cookies by default for 1% of Chrome users globally. This move was perceived as the first step towards killing cookies completely. However, concerns and criticism about the readiness and effectiveness of Google's Privacy Sandbox, a collection of APIs designed to replace third-party cookies, prompted further delays.

The CMA and other regulatory bodies have expressed concerns about Google's Privacy Sandbox, fearing it might limit competition and give Google an unfair advantage in the digital advertising market. These concerns have led to extended review periods and additional scrutiny, complicating Google's timeline for phasing out third-party cookies. Shortly after Google’s Monday announcement, the CMA said that it was “considering the impact” of Google’s change of direction.

This article originally appeared on Engadget at https://www.engadget.com/google-isnt-killing-third-party-cookies-in-chrome-after-all-202031863.html?src=rss

EU officials say Meta may be violating consumer laws with paid ‘ad-free’ plan

The European Commission really isn't happy about a Meta business model that gives users in the EU, European Economic Area and Switzerland the generous choice of continuing to use Facebook and Instagram with targeted ads without paying anything, or signing up for a monthly subscription that's said to offer an ad-free experience.

Officials from the Consumer Protection Cooperation (CPC) Network — a group of national authorities that enforce EU consumer protection laws — have suggested that Meta may be violating consumer legislation with the "pay or consent" approach. The Commission, which is the European Union's executive arm, coordinated the group's action against Meta.

The CPC Network sent Meta a letter laying out numerous ways in which it believes the company may be violating consumer laws. The company has until September 1 to reply and propose solutions to officials' concerns. If CPC officials find that Meta doesn't take appropriate steps to solve the problems, they could take enforcement actions against the company, which may include sanctions.

CPC authorities have suggested that Meta is misleading users by describing its platforms as free to use if they opt not to pay for a subscription, when Meta in fact monetizes their personal data by displaying targeted ads. They further say that Meta is "confusing users" by requiring them to access different areas of the privacy policy and terms of service to see how their data is being used for personalized ads.

Officials have also taken aim at Meta's "imprecise terms and language" that suggest subscribers will not see ads at all, even though those still might be displayed "when engaging with content shared via Facebook or Instagram by other members of the platform." Furthermore, they claim Meta is pressuring users who have long used Facebook and Instagram without forking over any payment "to make an immediate choice, without giving them a pre-warning, sufficient time and a real opportunity to assess how that choice might affect their contractual relationship with Meta, by not letting them access their accounts before making their choice."

Meta introduced its "pay or consent" options last year in an attempt to comply with the EU's data protection laws while maintaining its advertising model. CPC officials say they are concerned that "many consumers might have been exposed to undue pressure to choose rapidly" between consenting to data collection or paying a monthly fee, "fearing that they would instantly lose access to their accounts and their network of contacts."

This action is separate from other investigations the EU is carrying out against Meta over the "pay or consent" model. Earlier this month, the EU said Meta had potentially breached the Digital Markets Act with this approach. If found guilty, Meta could be on the hook for a fine of up to 10 percent of its global annual revenue.

In addition, the Commission requested more information from the company in March about the "pay or consent" model under the Digital Services Act, another law the bloc designed to keep the power of major tech companies in check. Not only that, consumer rights groups have filed complaints arguing that the approach violates the EU's General Data Protection Regulation.

This article originally appeared on Engadget at https://www.engadget.com/eu-officials-say-meta-may-be-violating-consumer-laws-with-paid-ad-free-plan-175834177.html?src=rss

Apple accused of underreporting suspected CSAM on its platforms

Apple has been accused of underreporting the prevalence of child sexual abuse material (CSAM) on its platforms. The National Society for the Prevention of Cruelty to Children (NSPCC), a child protection charity in the UK, says that Apple reported just 267 worldwide cases of suspected CSAM to the National Center for Missing & Exploited Children (NCMEC) last year.

That pales in comparison to the 1.47 million potential cases that Google reported and 30.6 million reports from Meta. Other platforms that reported more potential CSAM cases than Apple in 2023 include TikTok (590,376), X (597,087), Snapchat (713,055), Xbox (1,537) and PlayStation/Sony Interactive Entertainment (3,974). Every US-based tech company is required to pass along any possible CSAM cases detected on their platforms to NCMEC, which directs cases to relevant law enforcement agencies worldwide.

The NSPCC also said Apple was implicated in more CSAM cases (337) in England and Wales between April 2022 and March 2023 than it reported worldwide in one year. The charity used freedom of information requests to gather that data from police forces.

As The Guardian, which first reported on the NSPCC's claim, points out, Apple services such as iMessage, FaceTime and iCloud all have end-to-end encryption, which stops the company from viewing the contents of what users share on them. However, WhatsApp has E2EE as well, and that service reported nearly 1.4 million cases of suspected CSAM to NCMEC in 2023.

“There is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” Richard Collard, the NSPCC's head of child safety online policy, said. “Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the roll out of the Online Safety Act in the UK.”

In 2021, Apple announced plans to deploy a system that would scan images before they were uploaded to iCloud and compare them against a database of known CSAM images from NCMEC and other organizations. But following a backlash from privacy and digital rights advocates, Apple delayed the rollout of its CSAM detection tools before ultimately killing the project in 2022.

Apple declined to comment on the NSPCC's accusation, instead pointing The Guardian to a statement it made when it shelved the CSAM scanning plan. Apple said it opted for a different strategy that “prioritizes the security and privacy of [its] users.” The company told Wired in August 2022 that "children can be protected without companies combing through personal data." 

This article originally appeared on Engadget at https://www.engadget.com/apple-accused-of-underreporting-suspected-csam-on-its-platforms-153637726.html?src=rss