Russian and North Korean hackers used OpenAI tools to hone cyberattacks

Microsoft and OpenAI say that several state-backed hacking groups are using the latter’s generative AI (GAI) tools to bolster cyberattacks. The pair suggests that new research details for the first time how hackers linked to foreign governments are making use of GAI. The groups in question have ties to China, Russia, North Korea and Iran.

According to the companies, the state actors are using GAI for code debugging, looking up open-source information to research targets, developing social engineering techniques, drafting phishing emails and translating text. OpenAI (which powers Microsoft GAI products such as Copilot) says it shut down the groups’ access to its GAI systems after finding out they were using its tools.

Notorious Russian group Forest Blizzard (better known as Fancy Bear or APT 12) was one of the state actors said to have used OpenAI's platform. The hackers used OpenAI tools "primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks," the company said.

As part of its cybersecurity efforts, Microsoft says it tracks north of 300 hacking groups, including 160 nation-state actors. It shared its knowledge of them with OpenAI to help detect the hackers and shut down their accounts.

OpenAI says it invests in resources to pinpoint and disrupt threat actors' activities on its platforms. Its staff uses a number of methods to look into hackers' use of its systems, such as employing its own models to follow leads, analyzing how they interact with OpenAI tools and determining their broader objectives. Once it detects such illicit users, OpenAI says it disrupts their use of the platform through the likes of shutting down their accounts, terminating services or minimizing their access to resources.

This article originally appeared on Engadget at https://www.engadget.com/russian-and-north-korean-hackers-used-openai-tools-to-hone-cyberattacks-152424393.html?src=rss

Mozilla is laying off around 60 workers

Mozilla is the latest in a long line of tech companies to lay off employees this year. The not-for-profit company is firing around 60 people, which equates to roughly five percent of its workforce. Most of those who are leaving Mozilla worked on the product development team. The news was first reported by Bloomberg

“We’re scaling back investment in some product areas in order to focus on areas that we feel have the greatest chance of success,” a Mozilla spokesperson told Engadget in a statement. “To do so, we've made the difficult decision to eliminate approximately 60 roles from across the company. We intend to re-prioritize resources towards products like Firefox Mobile, where there’s a significant opportunity to grow and establish a better model for the industry.”

According to an internal memo obtained by TechCrunch, Mozilla plans to pare back investments on several products, including its VPN and Online Scrubber tool. Hubs, the 3D virtual world Mozilla debuted in 2018, is shutting down while the company is also reducing resources dedicated to its Mastodon instance.

One area into which Mozilla does plan to funnel extra resources is, unsurprisingly, artificial intelligence. "In 2023, generative AI began rapidly shifting the industry landscape. Mozilla seized an opportunity to bring trustworthy AI into Firefox, largely driven by the Fakespot acquisition and the product integration work that followed," the memo reportedly reads. "Additionally, finding great content is still a critical use case for the internet. Therefore, as part of the changes today, we will be bringing together Pocket, Content and the AI/ML teams supporting content with the Firefox Organization."

The reorganization comes after Mozilla appointed a new CEO just last week. Former Airbnb, PayPal and eBay executive Laura Chambers, who joined Mozilla's board three years ago, was appointed chief executive for the rest of this year. "Her focus will be on delivering successful products that advance our mission and building platforms that accelerate momentum," Mitchell Baker, Mozilla's former long-time CEO and its new executive chairman, wrote when Chambers took on the job.

Update 2/15 12:23PM ET: Clarifying that one of the products in which Mozilla is reducing investment is its Online Scrubber, and not the new Mozilla Monitor Plus as previously reported. The company says it "will continue to support and make investments in" Mozilla Monitor Plus.

This article originally appeared on Engadget at https://www.engadget.com/mozilla-is-laying-off-around-60-workers-210313813.html?src=rss

Who makes money when AI reads the internet for us?

Last week, The Browser Company, a startup that makes the Arc web browser, released a slick new iPhone app called Arc Search. Instead of displaying links, its brand new “Browse for Me” feature reads the first handful of pages and summarizes them into a single, custom-built, Arc-formatted web page using large language models from OpenAI and others. If a user does click through to any of the actual pages, Arc Search blocks ads, cookies and trackers by default. Arc’s efforts to reimagine web browsing have received near-universal acclaim. But over the last few days, “Browse for Me” earned The Browser Company its first online backlash.

For decades, websites have served ads and pushed people visiting them towards paying for subscriptions. Monetizing traffic is one of the primary ways most creators on the web continue to make a living. Reducing the need for people to visit actual websites deprives those creators of compensation for their work, and disincentivizes them from publishing anything at all.

“Web creators are trying to share their knowledge and get supported while doing so”, tweeted Ben Goodger, a software engineer who helped create both Firefox and Chrome. “I get how this helps users. How does it help creators? Without them there is no web…” After all, if a web browser sucked out all information from web pages without users needing to actually visit them, why would anyone bother making websites in the first place?

The backlash has prompted the company’s co-founder and CEO Josh Miller to question the fundamental nature of how the web is monetized. Miller, who was previously a product director at the White House and worked at Facebook after it acquired his previous startup, Branch, told Goodger on X that how creators monetize web pages needs to evolve. He also told Platformer’s Casey Newton that generative AI presents an opportunity to “shake up the stagnant oligopoly that runs much of the web today” but admitted that he didn’t know how writers and creators who made the actual website that his browser scrapes from would be compensated. “It completely upends the economics of publishing on the internet,” he admitted.

Miller declined to speak to Engadget, and The Browser Company did not respond to Engadget’s questions.

Arc set itself apart from other web browsers by fundamentally rethinking how web browsers look and work ever since it was released to the general public in July last year. It did this by adding features like the ability to split multiple tabs vertically and offering a picture-in-picture mode for Google Meet video conferences. But for the last few months, Arc has been rapidly adding AI-powered features such as automatic web page summaries, ChatGPT integration and giving users the option to switch their default search engine to Perplexity, a Google rival that uses AI to provide answers to search queries by summarizing web pages in a chat-style interface and providing tiny citations to sources. The “Browse for Me” feature lands Arc smack in the middle of one of AI’s biggest ethical quandaries: who pays creators when AI products rip off and repurpose their content?

“The best thing about the internet is that somebody super passionate about something makes a website about the thing that they love,” tech entrepreneur and blogging pioneer Anil Dash told Engadget. “This new feature from Arc intermediates that and diminishes that.” In a post on Threads shortly after Arc released the app, Dash criticized modern search engines and AI chatbots that sucked up the internet’s content and aimed to stop people from visiting websites, calling them “deeply destructive.”

It’s easy, Dash said, to blame the pop-ups, cookies and intrusive advertisements that power the economic engine of the modern web as the reason why browsing feels broken now. And there may be signs that users are warming to the concept of having their information presented to them summarized by large language models rather than manually clicking around multiple web pages. On Thursday, Miller tweeted that people chose “Browse for Me” over regular Google search in Arc Search on mobile for approximately 32 percent of all queries. The company is currently working on making that the default search experience and also bringing it to its desktop browser.

“It’s not intellectually honest to say that this is better for users,” said Dash. “We only focus on short term user benefit and not the idea that users want to be fully informed about the impact they’re having on the entire digital ecosystem by doing this.” Summarizing this double-edged sword succinctly a food blogger tweeted at Miller, "As a consumer, this is awesome. As a blogger, I’m a lil afraid.”

Last week, Matt Karolian, the vice president of platforms, research and development at The Boston Globe typed “top Boston news” into Arc Search and hit “Browse for Me”. Within seconds, the app had scanned local Boston news sites and presented a list of headlines containing local developments and weather updates. “News orgs are gonna lose their shit about Arc Search,” Karolian posted on Threads. “It’ll read your journalism, summarize it for the user…and then if the user does click a link, they block the ads.”

Local news publishers, Karolian told Engadget, almost entirely depend on selling ads and subscriptions to readers who visit their websites to survive. “When tech platforms come along and disintermediate that experience without any regard for the impact it could have, it is deeply disappointing.” Arc Search does include prominent links and citations to the websites it summarizes from. But Karolian said that this misses the point. “It fails to ponder the consequences of what happens when you roll out products like this.”

Arc Search isn’t the only service using AI to summarize information from web pages. Google, the world’s biggest search engine, now offers AI-generated summaries to users’ queries at the top of its search results, something that experts have previously called “a bit like dropping a bomb right at the center of the information nexus.” Arc Search, however, goes a step beyond and eliminates search results altogether. Meanwhile, Miller has continued to tweet throughout the controversy, posting vague musings about websites in an “AI-first internet” while simultaneously releasing products based on concepts he has admittedly still not sorted out.

On a recent episode of The Vergecast that Miller appeared on, he compared what Arc Search might do to the economics of the web to what Craigslist did to business models of print newspapers. “I think it’s absolutely true that Arc Search and the fact that we remove the clutter and the BS and make you faster and get you what you need in a lot less time is objectively good for the vast majority of people, and it is also true that it breaks something,” he says. “It breaks a bit of the value exchange. We are grappling with a revolution with how software works and how computers work and that’s going to mess up some things.”

Karolian from The Globe said that the behavior of tech companies applying AI to content on the web reminded him of a monologue delivered by Ian Malcolm, one of the protagonists in Jurassic Park to park creator John Hammond about applying the power of technology without considering its impact: “Your scientists were so preoccupied with whether or not they could they didn’t stop to think if they should.”

This article originally appeared on Engadget at https://www.engadget.com/who-makes-money-when-ai-reads-the-internet-for-us-200246690.html?src=rss

Google, Apple, Meta and other huge tech companies join US consortium to advance responsible AI

A whole bunch of big tech companies, 200 in all, have joined a US-based effort to advance responsible AI practices. The US AI Safety Institute Consortium (AISIC) will count Meta, Google, Microsoft and Apple as members. Commerce Secretary Gina Raimondo just announced the group's numerous new members and said that they'll be tasked with carrying out actions indicated by President Biden’s sweeping executive order on artificial intelligence.

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.

Biden’s October executive order was far-reaching, so this consortium will focus on developing guidelines for “red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”

Red-teaming is a cybersecurity term that dates back to the Cold War. It refers to simulations in which the enemy was called the “red team.” In this case, the enemy would be an AI hellbent on behaving badly. Those engaged in this practice will try to trick the AI into doing bad things, like exposing credit card numbers, via prompt hacking. Once people know how to break the system, they can build better protections.

Watermarking synthetic content is another important aspect of Biden’s original order. Consortium members will develop guidelines and actions to ensure that users can easily identify AI-generated materials. This will hopefully decrease deepfake trickery and AI-enhanced misinformation. Digital watermarking has yet to be widely adopted, though this program will “facilitate and help standardize” underlying technical specifications behind the practice.

The consortium’s work is just beginning, though the Commerce Department says it represents the largest collection of testing and evaluation teams in the world. Biden’s executive order and this affiliated consortium are pretty much all we’ve got for now. Congress keeps failing to pass meaningful AI legislation of any kind.

This article originally appeared on Engadget at https://www.engadget.com/google-apple-meta-and-other-huge-tech-companies-join-us-consortium-to-advance-responsible-ai-164352301.html?src=rss

ChatGPT will digitally tag images generated by DALL-E 3 to help battle misinformation

In an age where fraudsters are using generative AI to scam money or tarnish one's reputation, tech firms are coming up with methods to help users verify content — at least still images, to begin with. As teased in its 2024 misinformation strategy, OpenAI is now including provenance metadata in images generated with ChatGPT on the web and DALL-E 3 API, with their mobile counterparts receiving the same upgrade by February 12.

The metadata follows the C2PA (Coalition for Content Provenance and Authenticity) open standard, and when one such image is uploaded to the Content Credentials Verify tool, you'll be able to trace its provenance lineage. For instance, an image generated using ChatGPT will show an initial metadata manifest indicating its DALL-E 3 API origin, followed by a second metadata manifest showing that it surfaced in ChatGPT.

Despite the fancy cryptographic tech behind the C2PA standard, this verification method only works when the metadata is intact; the tool is of no use if you upload an AI-generated image sans metadata — as is the case with any screenshot or uploaded image on social media. Unsurprisingly, the current sample images on the official DALL-E 3 page returned blank as well. On its FAQ page, OpenAI admits that this isn't a silver bullet to addressing the misinformation war, but it believes that the key is to encourage users to actively look for such signals.

While OpenAI's latest effort on thwarting fake content is currently limited to still images, Google's DeepMind already has SynthID for digitally watermarking both images and audio generated by AI. Meanwhile, Meta has been testing invisible watermarking via its AI image generator, which may be less prone to tampering.

This article originally appeared on Engadget at https://www.engadget.com/chatgpt-will-digitally-tag-images-generated-by-dall-e-3-to-help-battle-misinformation-102514822.html?src=rss

Disney+ has started cracking down on password sharing in the US

Disney+ started getting strict about password sharing in Canada last year, and now it's expanding the restriction to the US. According to The Verge, the streaming service has been sending out emails to its subscribers in the country, notifying them about a change in its terms of service. Its service agreement now states that users may not share their passwords outside of their household "unless otherwise permitted by [their] service tier," suggesting the arrival of new subscription options in the future. 

The Verge says Disney+ told subscribers that they can analyze the use of their account to "determine compliance," though it didn't elaborate on how its methods work exactly. "We're adding limitations on sharing your account outside of your household, and explaining how we may assess your compliance with these limitations," Disney+ reportedly wrote in its email. In its Service Agreement, the service describes "household" as "the collection of devices associated with [subscribers'] primary personal residence that are used by the individuals who reside therein." The rule already applies to new subscribers, but old ones have until March 14 to feel its effects. 

Disney's other streaming service, Hulu, also recently announced that it's clamping down on password sharing outside the subscriber's "primary personal residence." It used the same language in its its warning to users, also telling them that their accounts will be analyzed for compliance and that it will start enforcing the new rule on March 14. 

This article originally appeared on Engadget at https://www.engadget.com/disney-has-started-cracking-down-on-password-sharing-in-the-us-070317512.html?src=rss

Meta plans to ramp up labeling of AI-generated images across its platforms

Meta plans to ramp up its labeling of AI-generated images across Facebook, Instagram and Threads to help make it clear that the visuals are artificial. It's part of a broader push to tamp down misinformation and disinformation, which is particularly significant as we wrangle with the ramifications of generative AI (GAI) in a major election year in the US and other countries.

According to Meta's president of global affairs, Nick Clegg, the company has been working with partners from across the industry to develop standards that include signifiers that an image, video or audio clip has been generated using AI. "Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads," Clegg wrote in a Meta Newsroom post. "We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app." Clegg added that, as it expands these capabilities over the next year, Meta expects to learn more about "how people are creating and sharing AI content, what sort of transparency people find most valuable and how these technologies evolve." These will help inform both industry best practices and Meta's own policies, he wrote.

Meta says the tools it's working on will be able to detect invisible signals — namely AI generated information that aligns with the C2PA and IPTC technical standards — at scale. As such, it expects to be able to pinpoint and label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, all of which are incorporating GAI metadata into images that their products whip up.

As for GAI video and audio, Clegg points out that companies in the space haven't started incorporating invisible signals into those at the same scale that they have images. As such, Meta isn't yet able to detect video and audio that's generated by third-party AI tools. In the meantime, Meta expects users to label such content themselves.

"While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it," Clegg wrote. "We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context."

That said, putting the onus on users to add disclosures and labels to AI-generated video and audio seems like a non-starter. Many of those people will be trying to intentionally deceive others. On top of that, others likely just won't bother or won't be aware of the GAI policies.

In addition, Meta is looking to make it harder for people to alter or remove invisible markers from GAI content. The company's FAIR AI research lab has developed tech that "integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can’t be disabled," Clegg wrote. Meta is also working on ways to automatically detect AI-generated material that doesn't have invisible markers.

Meta plans to continue collaborating with industry partners and "remain in a dialogue with governments and civil society" as GAI becomes more prevalent. It believes this is the right approach to handling content that's shared on Facebook, Instagram and Threads for the time being, though it will adjust things if necessary.

One key issue with Meta's approach — at least while it works on ways to automatically detect GAI content that doesn't use the industry-standard invisible markers — is that it requires buy-in from partners. For instance, C2PA has a ledger-style method of authentication. For that to work, both the tools used to create images and the platforms on which they're hosted both need to buy into C2PA.

Meta shared the update on its approach to labeling AI-generated content just a few days after CEO Mark Zuckerberg shed some more light on his company's plans to build general artificial intelligence. He noted that training data is one major advantage Meta has. The company estimates that the photos and videos shared on Facebook and Instagram amount to a dataset that's greater than the Common Crawl. That's a dataset of some 250 billion web pages that has been used to train other AI models. Meta will be able to tap into both, and it doesn't have to share the data it has vacuumed up through Facebook and Instagram with anyone else.

The pledge to more broadly label AI-generated content also comes just one day after Meta's Oversight Board determined that a video that was misleadingly edited to suggest that President Joe Biden repeatedly touched the chest of his granddaughter could stay on the company's platforms. In fact, Biden simply placed an "I voted" sticker on her shirt after she voted in person for the first time. The board determined that the video was permissible under Meta's rules on manipulated media, but it urged the company to update those community guidelines.

This article originally appeared on Engadget at https://www.engadget.com/meta-plans-to-ramp-up-labeling-of-ai-generated-images-across-its-platforms-160234038.html?src=rss

Mozilla Monitor scrubs your leaked personal information from the web, for a fee

Mozilla is rolling out a tool that can automatically monitor data brokers for your personal information and scrub any of your exposed details from them. Mozilla Monitor Plus expands on the Mozilla Monitor (formerly Firefox Monitor) service, which lets you know when your email address is included in a data breach.

This new paid service, which costs $9 per month or $107.88 per year, aims to proactively make sure your personal information stays off more than 190 data broker sites. Mozilla says that's double the number of data brokers that its competitors monitor. Subscribers will receive data breach alerts too.

A screenshot of Mozilla's Monitor tool, showing how many instances of personal data it has removed from the internet on the user's behalf.
Mozilla

To get a better understanding of how prevalent the issue is, you can get a free one-time scan that can show you if and where your data has been exposed. To do so, you'll need to sign up for a Mozilla account and provide your name, current city and state, date of birth and your email address. Mozilla says it will encrypt this data, which it notes is the least amount of information needed to obtain the most accurate results. The tool will also highlight information from "high-risk data breaches" — such as social security numbers, credit card details and banking information — along with advice on how to have that data scrubbed.

Mozilla Monitor and Monitor Plus are only available to folks based in the US for now. Google offers a similar tool. If you sign up for Mozilla's version, you can also get access to features including two-factor authentication, email alias tool Firefox Relay and Mozilla VPN.

This article originally appeared on Engadget at https://www.engadget.com/mozilla-monitor-scrubs-your-leaked-personal-information-from-the-web-for-a-fee-140021466.html?src=rss

How to watch the CEOs of Meta, TikTok, Discord, Snap and X testify about child safety

The CEOs of five social media companies are headed to Washington to testify in a Senate Judiciary Committee hearing about child safety. The hearing will feature Meta CEO Mark Zuckerberg, Snap CEO Evan Spiegel, TikTok CEO Shou Chew, Discord CEO Jason Citron and X CEO Linda Yaccarino.

The group will face off with lawmakers over their record on child exploitation and their efforts to protect teens using their services. The hearing will be live streamed beginning at 10 AM ET on Wednesday, January 31.

Though there have been previous hearings dedicated to teen safety, Wednesday’s event will be the first time Congress has heard directly from Spiegel, Yaccarino and Citron. It’s also only the second appearance for TikTok’s Chew, who was grilled by lawmakers about the app’s safety record and ties to China last year.

Zuckerberg, of course, is well-practiced at these hearings by now. But he will likely face particular pressure from lawmakers following a number of allegations about Meta’s safety practices that have come out in recent months as the result of a lawsuit from 41 state attorneys general. Court documents from the suit allege that Meta turned a blind eye to children under 13 using its service, did little to stop adults from sexually harassing teens on Facebook and that Zuckerberg personally intervened to stop an effort to ban plastic surgery filters on Instagram.

As with previous hearings with tech CEOs, it’s unclear what meaningful policy changes might come from their testimony. Lawmakers have proposed a number of bills dealing with online safety and child exploitation, though none have been passed into law. However, there is growing bipartisan support for measures that would shield teens from algorithms and data gathering and implement parental consent requirements.

This article originally appeared on Engadget at https://www.engadget.com/how-to-watch-the-ceos-of-meta-tiktok-discord-snap-and-x-testify-about-child-safety-214210385.html?src=rss

X plans to hire 100 content moderators to fill new Trust and Safety center in Austin

X’s head of business operations Joe Benarroch said the company plans to open a new office in Austin, Texas for a team that will be dedicated to content moderation, Bloomberg reports. The “Trust and Safety center of excellence,” for which the company is planning to hire 100 full-time employees, will primarily focus on stopping the spread of child sexual exploitation (CSE) materials. 

X CEO Linda Yaccarino is set to testify before Congress on Wednesday in a hearing about CSE, and the platform at the end of last week published a blog post about its efforts to curb such materials, saying it’s “determined to make X inhospitable for actors who seek to exploit minors.”

According to Bloomberg, Benarroch said, “X does not have a line of business focused on children, but it’s important that we make these investments to keep stopping offenders from using our platform for any distribution or engagement with CSE content.” The team will also address other content issues, like hate speech and “violent posts,” according to Bloomberg. Elon Musk spent much of his first year at X taking steps to turn the platform into a bastion of “free speech,” and gutted the content moderation teams that had been put in place by Twitter before his takeover.

This article originally appeared on Engadget at https://www.engadget.com/x-plans-to-hire-100-content-moderators-to-fill-new-trust-and-safety-center-in-austin-173111536.html?src=rss