X sues California over deceptive AI-made election content ban

Elon Musk’s X is taking the state of California to court over a new law that targets the spread of AI-generated election misinformation. Bloomberg reports that X filed a lawsuit challenging AB 2655, also known as the Defending Democracy from Deepfake Deception Act of 2024, in a Sacramento federal court.

California Gov. Gavin Newsom signed the bill into law on September 17, creating accountability standards for materially deceptive, AI-faked political speech distributed close to an election. The legislation prohibits the distribution of “materially deceptive audio or visual media of a candidate within 60 days of an election at which the candidate will appear on the ballot.”

X argues that the law will create more political speech censorship. The complaint says the First Amendment “includes tolerance for potentially false speech made in the context of such criticisms.”

Newsom signed AB 2655 into law as part of a larger package of bills addressing concerns about the use of AI to create sexually explicit deepfakes and other deceptive material. Weeks later, a federal judge issued a preliminary injunction against AB 2839, a related deepfake law from the same signing package.

California has become one of the epicenters of debate over the use and implementation of AI. Concerns about the use of AI in film and television projects, among other issues, prompted SAG-AFTRA to go on strike in 2023. SAG eventually reached a deal that included AI protections for actors, prohibiting studios from using their likeness without permission or proper compensation. The following year, California passed AB 2602, a law that makes it illegal for studios, publishers and video game companies to use someone’s likeness without their permission.

This article originally appeared on Engadget at https://www.engadget.com/ai/x-sues-california-over-deceptive-ai-made-election-content-ban-185010406.html?src=rss

ChatGPT rejected 250,000 election deepfake requests

A lot of people tried to use OpenAI's DALL-E image generator during the election season, but the company said it was able to stop them from using it as a tool to create deepfakes. ChatGPT rejected over 250,000 requests to generate images of President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance and Governor Walz, OpenAI said in a new report. The company explained that this was a direct result of a safety measure it previously implemented so that ChatGPT would refuse to generate images of real people, including politicians.

OpenAI has been preparing for the US presidential election since the beginning of the year. It laid out a strategy meant to prevent its tools from being used to spread misinformation and made sure that people asking ChatGPT about voting in the US were directed to CanIVote.org. OpenAI said 1 million ChatGPT responses directed people to the website in the month leading up to election day. The chatbot also generated 2 million responses on election day and the day after, telling people who asked it for the results to check the Associated Press, Reuters and other news sources. OpenAI made sure that ChatGPT's responses "did not express political preferences or recommend candidates even when asked explicitly," as well.

Of course, DALL-E isn't the only AI image generator out there, and there are plenty of election-related deepfakes going around social media. One such deepfake featured Kamala Harris in a campaign video altered so that she'd say things she didn't actually say, such as "I was selected because I am the ultimate diversity hire."

This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-rejected-250000-election-deepfake-requests-170037063.html?src=rss

Rideshare drivers in Massachusetts can unionize without being full-time employees

Massachusetts has passed a statewide ballot initiative that gives rideshare drivers the opportunity to unionize while remaining independent contractors. The initiative was brought forward by the Service Employees International Union and the International Association of Machinists and Aerospace Workers. It passed with a narrow margin of about 54 percent of the vote.

The measure will allow the state's 70,000 rideshare drivers to form unions and leverage collective bargaining power, which is not permitted for independent contractors under the National Labor Relations Act. These workers can unionize if they collect signatures from at least 25 percent of active drivers in Massachusetts. The initiative also creates a hearing process so that drivers for companies such as Lyft and Uber can bring complaints about unfair work practices to a state board. However, the ballot initiative does not contain language about strike protections, and it does not extend to food delivery drivers.

Uber and Lyft did not actively campaign against the Massachusetts measure, but they have raised concerns about the specific language. Some labor advocates also opposed the initiative, cautioning that it could hamper efforts for rideshare drivers to win recognition as full-time employees. "We're not against unionization," Kelly Cobb-Lemire, an organizer with Massachusetts Drivers United, told The New York Times. "But we don't feel this goes far enough."

Independent contractors are often not protected by federal or state labor laws because they aren't full-time employees. The Massachusetts ballot measure could set a precedent for other states to offer unionization options for gig workers. California has been a battleground over labor protections for Uber and Lyft drivers for several years; most recently, a court allowed California drivers to retain independent contractor status.

This article originally appeared on Engadget at https://www.engadget.com/transportation/rideshare-drivers-in-massachusetts-can-unionize-without-being-full-time-employees-212202426.html?src=rss

Track US election results with Apple’s Live Activity feature

Election day 2024 has finally arrived in the US, and the race between Vice President Kamala Harris and former President Donald Trump is so close we're all going to be glued to our screens waiting to see what happens. Apple is making sure you see developments right away (and can't take any breaks from them) with Live Activities, AppleInsider reports. Starting Tuesday night, Apple News will display the ongoing US election results as a Live Activity.

The Live Activity tracker will show up on your lock screen and give you the latest election updates. It's available on iPhones, iPads and Apple Watches. If your device has a Dynamic Island, then you'll also be able to track the electoral college results there.

If you're interested in receiving Live Activity updates about the election, you can turn them on through Apple News. Tap the "Follow the 2024 election live" banner or open the "Election 2024" tab, and you should see a notification about enabling it.

This article originally appeared on Engadget at https://www.engadget.com/apps/track-us-election-results-with-apples-live-activity-feature-130032299.html?src=rss

FBI warns voters about inauthentic videos relating to election security

The FBI issued a statement on Saturday about deceptive videos circulating ahead of the election, saying it’s aware of two such videos “falsely claiming to be from the FBI relating to election security.” That includes one claiming the FBI had “apprehended three linked groups committing ballot fraud,” and one about Kamala Harris’ husband. Both depict false content, the FBI said.

Disinformation — including the spread of political deepfakes and other forms of misleading videos and imagery — has been a major concern in the leadup to the US presidential election. In its statement posted on X, the FBI added:

Election integrity is among our highest priorities, and the FBI is working closely with state and local law enforcement partners to respond to election threats and protect our communities as Americans exercise their right to vote. Attempts to deceive the public with false content about FBI operations undermines our democratic process and aims to erode trust in the electoral system.

Just a day earlier, the FBI, along with the Office of the Director of National Intelligence (ODNI) and the Cybersecurity and Infrastructure Security Agency (CISA) said they’d traced two other videos back to “Russian influence actors,” including one “that falsely depicted individuals claiming to be from Haiti and voting illegally in multiple counties in Georgia.”

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/fbi-warns-voters-about-inauthentic-videos-relating-to-election-security-185108885.html?src=rss

The Harris/Walz campaign has its own Fortnite map

We’re in the final stretch of the 2024 presidential election, and both sides are pulling out all the stops to reach those all-important undecided voters. The Harris/Walz campaign is exploring an unconventional option: a map in Epic Games’ mega online multiplayer hit Fortnite.

The “Freedom Town, USA” map, available at 7331-5536-6547, is a little different from the usual Fortnite matches. Forbes senior contributor Paul Tassi played the new map and reported that there aren’t any guns in Freedom Town (probably for obvious reasons). Instead, the map focuses on car racing and parkour-style challenges. It also has some campaign signs and decorations for Vice President Kamala Harris and Gov. Tim Walz’s presidential run.

Video games have become a cornerstone of the Harris/Walz campaign. Harris’ camp has its own Twitch page that’s been broadcasting games like World of Warcraft and the latest Madden title as a way to spark discussions with the voting public. The Fortnite map, however, doesn’t look like it’s doing a great job of getting the message out to players. As of this story’s publishing, the map had fewer than 300 active players.

Political ads and recruitment in video games aren’t limited to this campaign cycle. Then-candidate Barack Obama’s 2008 campaign introduced the concept to politics when it purchased ads in 18 games, including Need for Speed: Carbon and Madden NFL 09, on Microsoft’s Xbox Live service and the mobile version of Tetris, according to NPR.

This article originally appeared on Engadget at https://www.engadget.com/gaming/the-harriswalz-campaign-has-its-own-fortnite-map-220450255.html?src=rss

Viewers don’t trust candidates who use generative AI in political ads, study finds

Artificial intelligence is expected to have an impact on the upcoming US election in November. States have been trying to protect against misinformation by passing laws that require political advertisements to disclose when they were made with generative AI. Twenty states now have rules on the books, and according to new research, voters react negatively to seeing those disclaimers. That seems like a pretty fair response: If a politician uses generative AI to mislead voters, then voters don't appreciate it. The study was conducted by New York University’s Center on Technology Policy and first reported by The Washington Post.

The study had 1,000 participants watch political ads from fictional candidates. Some of the ads were accompanied by a disclaimer that AI was used in the creation of the spot, while others had no disclaimer. The presence of a disclaimer was linked to viewers rating the promoted candidate as less trustworthy and less appealing. Respondents also said they would be more likely to flag or report the ads on social media when they contained disclaimers. In attack ads, participants were more likely to express negative opinions about the candidate who sponsored the spot than about the candidate being attacked. The researchers also found that the presence of an AI disclaimer led to worse or unchanged opinions of the fictional candidate regardless of their political party.

The researchers tested two different disclaimers inspired by two different state requirements for AI disclosure in political ads. The text tied to Michigan's law reads: "This video has been manipulated by technical means and depicts speech or conduct that did not occur." The other disclaimer is based on Florida's law, and says: "This video was created in whole or in part with the use of generative artificial intelligence." Although the approach of Michigan's requirements is more common among state laws, study participants said they preferred seeing the broader disclaimer for any type of AI use.

While these disclaimers can play a part in transparency about the presence of AI in an ad, they aren't a perfect failsafe. As many as 37 percent of the respondents said they didn't recall seeing any language about AI after viewing the ads.

This article originally appeared on Engadget at https://www.engadget.com/ai/viewers-dont-trust-candidates-who-use-generative-ai-in-political-ads-study-finds-194532117.html?src=rss

Judge blocks new California law barring distribution of election-related AI deepfakes

One of California's new AI laws, which aims to prevent election-related AI deepfakes from spreading online, has been blocked a month before the US presidential election. As TechCrunch and Reason report, Judge John Mendez has issued a preliminary injunction preventing the state's attorney general from enforcing AB 2839. California Governor Gavin Newsom signed it into law, along with other bills focusing on AI, back in mid-September. After doing so, he tweeted a screenshot of a story about X owner Elon Musk sharing an AI deepfake video of Vice President Kamala Harris without labeling it as fake. "I just signed a bill to make this illegal in the state of California," he wrote.

AB 2839 holds anybody who distributes AI deepfakes of political candidates accountable if the content is posted within 120 days of an election in the state. Anybody who sees such a deepfake can file a civil action against the person who distributed it, and a judge can order the poster to take the manipulated media down or face monetary penalties. After Newsom signed the bill into law, the video's original poster, X user Christopher Kohls, filed a lawsuit to block it, arguing that the video was satire and hence protected by the First Amendment.

Judge Mendez agreed with Kohls, noting in his decision [PDF] that AB 2839 does not pass strict scrutiny and is not narrowly tailored. He also said that the law's disclosure requirements are unduly burdensome. "Almost any digitally altered content, when left up to an arbitrary individual on the internet, could be considered harmful," he wrote. The judge likened YouTube videos, Facebook posts and X posts to newspaper advertisements and political cartoons, asserting that the First Amendment "protects an individual’s right to speak regardless of the new medium these critiques may take." Since this is merely a preliminary injunction, the law may be unblocked in the future, though that might not happen in time for this year's presidential election.

This article originally appeared on Engadget at https://www.engadget.com/ai/judge-blocks-new-california-law-barring-distribution-of-election-related-ai-deepfakes-133043341.html?src=rss