US Senators John Kennedy (R-LA) and Jeff Merkley (D-OR) introduced a bipartisan bill Wednesday to end involuntary facial recognition screening at airports. The Traveler Privacy Protection Act would block the Transportation Security Administration (TSA) from continuing or expanding its facial recognition tech program. The agency would also need explicit permission from Congress to renew the program, and it would have to dispose of all collected biometric data within three months.
Senator Merkley described the TSA’s biometric collection practices as the first steps toward an Orwellian nightmare. “The TSA program is a precursor to a full-blown national surveillance state,” Merkley wrote in a news release. “Nothing could be more damaging to our national values of privacy and freedom. No government should be trusted with this power.” Other Senators supporting the bill include Edward J. Markey (D-MA), Roger Marshall (R-KS), Bernie Sanders (I-VT) and Elizabeth Warren (D-MA).
The TSA began testing facial recognition at Los Angeles International Airport (LAX) in 2018. The agency’s pitch to travelers framed it as an exciting new high-tech feature, promising a “biometrically-enabled curb-to-gate passenger experience.” The TSA said this summer it planned to expand the program to over 430 US airports within the next few years.
I was back at Washington National Airport this month, and @TSA was up to their old tricks—making it unclear that you ARE able to opt out of using facial recognition technology. I’ll keep holding them accountable. pic.twitter.com/absGn5v1Q3
The program at least technically allows travelers to opt out, but that process isn’t always transparent in practice. Merkley posted the video above to X in September, demonstrating how agents guided travelers to the facial scanner without mentioning that it’s optional. No signs near the booths said the scan was optional or explicitly mentioned the gathering of facial data, either. The booths were arranged so that flyers would have difficulty inserting their driver’s license or ID (which is required) without stepping in front of the facial scanner.
Advocacy groups supporting the bill include the ACLU, Electronic Privacy Information Center and Public Citizen. “The privacy risks and discriminatory impact of facial recognition are real, and the government’s use of our faces as IDs poses a serious threat to our democracy,” wrote Jeramie Scott, Senior Counsel and Director of EPIC’s Project on Surveillance Oversight, in Merkley’s press release. “The TSA should not be allowed to unilaterally subject millions of travelers to this dangerous technology.”
“Every day, TSA scans thousands of Americans’ faces without their permission and without making it clear that travelers can opt out of the invasive screening,” Sen. Kennedy wrote in a separate news release. “The Traveler Privacy Protection Act would protect every American from Big Brother’s intrusion by ending the facial recognition program.”
Several companies have taken shots at Sonos over the years when it comes to multi-room audio and self-tuning speakers with built-in voice assistants. These devices are a lot more common in 2023 than they used to be, so there’s a whole host of options if you’re looking for alternatives to the Move or Era. JBL is the latest to give it a go with new additions to its Authentics line of speakers. While audio may be its primary use, these devices are the first to run two voice assistants simultaneously without having to switch from one to the other. And on the Authentics 300 ($450), you get a portable unit that doesn’t have to stay parked on a shelf.
Design
Most wireless JBL speakers fit into three categories: rugged compact units, modern-looking boomboxes and internally lit party speakers. For this new Authentics series, the company opted for a more refined design: all black with a gold frame around the front speaker grille. It’s certainly an aesthetic that fits in nicely on a shelf, without the raucous palette of some of the company’s smaller options. All three of the Authentics speakers look almost exactly the same, with the main difference being size, although the 300 does have a boombox-like rotating handle the other two don’t. That’s because it’s the only portable option in the range with a built-in battery.
JBL describes the Authentics look as “retro,” but I’m not sure I agree. Sure, there’s a classic vibe thanks to the ‘70s-inspired Quadrex grille the company has employed in the past, but the finer details and onboard controls are decidedly modern. Speaking of controls, up top you’ll find volume, treble and bass knobs that illuminate the level as you turn them. Pressing in the center of the volume dial gives you the playback controls. There are also Bluetooth, power and Moment buttons along with a thin light bar that indicates charging status when the speaker is plugged in. Around back is a microphone mute switch, along with Ethernet, 3.5mm aux, USB-C and power ports.
Software and features
The features and settings for the Authentics speakers are managed inside the JBL One app. Here, you’re greeted with a list of the company’s products you own as well as their connected status, battery level and whatever media is playing on the device. After selecting the Authentics 300, JBL dumps you into the specifics, with battery level once again visible up top. A media player is just below, complete with the ability to sync Amazon Music, Tidal, Napster, Qobuz, TuneIn, iHeartRadio and Calm Radio so you can play them directly inside this app.
JBL offers some limited EQ customization. There’s a manual slider with options for bass, mid and treble, but that’s it. You won’t find any carefully tuned presets or the ability to make more detailed adjustments along the curve. To get to your tunes quickly, JBL offers a feature called Moment. Accessible via the heart button on the speaker, this allows you to save a favorite album or playlist from the app’s list of supported streaming services. You can also specify volume and auto-off timing during setup.
Lastly, a word on streaming music over Wi-Fi. The Authentics line supports a range of options here, including AirPlay, Chromecast, Alexa, Spotify Connect and Tidal Connect, all of which are more convenient than swiping over to the Bluetooth menu and pairing the speaker every time you use it. With Wi-Fi, playing music on the Authentics devices is just a couple of taps away from inside the app where you’re already browsing and selecting music or podcasts. The speakers also support multi-room audio via AirPlay, Alexa and the Google Home app.
Double assistants, double the fun
JBL says the Authentics series is the first set of speakers to run two voice assistants simultaneously. Each of the three units can employ both Alexa and Google Assistant without you having to pick one or the other beforehand. This opens up availability across compatible smart home devices and it means your speaker choice isn’t as limited by your go-to assistant.
The speaker never had trouble hearing my commands and it didn’t mistake a query for one assistant with a question for the other. When you ask Google Assistant for help, a white light shows at the top center of the speaker grille. Summon Alexa and that LED burns blue until your convo is over. When you mute the microphones with the switch on the back of the 300, that light glows red and remains until you turn them back on. As is the case with any smart speaker, the voice command limitations are the general hindrances of the assistants themselves rather than any shortfalls of the speaker.
Sound quality
The Authentics 300 really shines with more mellow, chill music like jazz, bluegrass and acoustic-driven country. There’s a warm inviting sound with great clarity across those styles. When you jump to the full band chaos of metal and hardcore, or even the guitar-heavy but mellifluous tones of Chris Stapleton, the speaker’s tuning overemphasizes vocals and the lack of bassy thump creates a muddy overall sound.
Sure, you can dial up the bass with the physical controls or the EQ in the app, but that doesn’t add the kind of deep low-end that would open up the soundstage. It does improve the overall tuning of albums like Stapleton’s Higher, but there’s still an overemphasis on vocals. You can really hear the impact on The Killers’ Rebel Diamonds as Brandon Flowers almost entirely drowns out the backing synth on “Jenny Was A Friend Of Mine” from Hot Fuss.
At times though, the Authentics 300 is a joy to listen to. Put on some Miles Davis and the speaker is at its best. Ditto for the bluegrass of Nickel Creek, the mellow country tunes of Charles Wesley Godwin and classic Christmas mixes. However, the inconsistency across styles is frustrating. Interestingly, JBL says the Authentics speakers offer automatic self-tuning every time you power them on, but I didn’t notice much difference as I moved the 300 around.
Battery life
JBL says the Authentics 300 will last up to eight hours on a charge. Within two minutes of unplugging, the JBL One app already had the battery level down two percent while playing music via AirPlay 2 at about 30 percent volume. That may seem like a low volume, but it’s good for “working music” on this speaker. After 30 minutes, the app was showing 88 percent, but things slowed down and I still had 24 percent remaining when the eight hours were up. During a test over Bluetooth, the percentages fell in a similar fashion, but I had no problem making it to eight hours at 50 percent volume (Bluetooth was quieter than AirPlay at 30 percent).
JBL does offer a Battery Saving Mode to help you maximize playtime when you’re away from home. This setting “optimizes” both volume and bass to extend battery life, according to the company. There’s also an optional automatic power off feature that kicks in at either 15 minutes, 30 minutes or an hour when you’re not connected to power and audio is no longer playing.
The competition
JBL offers two alternatives to the Authentics 300 within the same speaker range. The smaller Authentics 200 ($350) is more compact, but not portable, while the larger 500 ($700) is a high-fidelity unit with support for Dolby Atmos. Both still run two voice assistants at the same time and have both Bluetooth and Wi-Fi, along with everything else the Authentics line offers. In order to support that immersive audio, the Authentics 500 has more drivers than the other two, with three 25mm tweeters, three 2.75-inch mid-range drivers and a 6.5-inch subwoofer. I look forward to seeing if the extra components and added 170 watts of output power improve sound quality, though its frequency response only extends slightly lower than the 300’s (40Hz vs. 45Hz).
If you’re looking for something portable that can also pull double duty at home, the Sonos Move 2 is a solid option. It’s too big to haul around with ease, but it does support both Bluetooth and Wi-Fi, along with improved sound and better battery life compared to the original Move. It also gets startlingly loud and has a durable design. What’s more, it’s the same price as the Authentics 300 at $449. For something more stationary and immersive, you could get the Sonos Era 300 without paying more. My colleague Nathan Ingraham noted the excellent sound quality on this unit during his review, but he did encounter inconsistent performance when it came to spatial audio. There’s also no Google Assistant support on this model.
Wrap-up
When I try to come up with a final verdict on the Authentics 300, I find myself running in circles. For everything I like about the speaker, there’s immediately something that I don’t. The company certainly deserves some kudos for being the first to run two assistants at the same time and for figuring out how to do that with no confusion or headaches. However, the inconsistent sound quality is a major problem, especially on a $450 speaker. And while the device offers better-than-advertised battery life, its larger size makes portability an issue. So unless you absolutely need to seamlessly switch between Alexa and Google Assistant, there are better-sounding options.
Meta has sued the Federal Trade Commission (FTC) in an attempt to stop regulators from reopening a landmark $5 billion privacy settlement from 2020 and to preserve its ability to monetize kids’ data across apps like Facebook, Instagram and WhatsApp. This comes after a federal judge ruled on Monday that the FTC would be allowed to expand on 2020’s privacy settlement, paving the way for the agency to propose tough new rules on how the social media giant can operate in the wake of the Cambridge Analytica scandal.
Today’s lawsuit demands an immediate stop to the FTC’s proceedings, calling them an “obvious power grab” and an “unconstitutional adjudication by fiat.” A Meta spokesperson even referred to the FTC as “prosecutor, judge, and jury in the same case,” as reported by Bloomberg. This is the second attempt by Facebook’s parent company to stop the sanctions in court.
The FTC, for its part, says that Meta has repeatedly violated the terms of 2020’s settlement regarding user privacy. The agency also says that the company has violated the Children’s Online Privacy Protection Act (COPPA) by monetizing the data of younger users. The FTC has already been given the go-ahead by a judge to restrict this type of monetization, a decision Meta hopes to overturn.
The FTC also seeks to implement new restrictions that limit Meta’s use of facial recognition, as well as a complete moratorium on new products and services until a third party completes an audit to determine whether the company is complying with its privacy obligations.
“Facebook has repeatedly violated its privacy promises,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.” Meanwhile, multiple states have sued Meta to stop the monetization of children’s data, and the EU has moved against the practice as well.
The FTC has been a consistent thorn in Meta’s side, as the agency tried to stop the company’s acquisition of VR software developer Within on the grounds that the deal would deter "future innovation and competitive rivalry." The agency dropped this bid after a series of legal setbacks. It also opened up an investigation into the company’s VR arm, accusing Meta of anti-competitive behavior.
Corporations have been piling on the FTC lately in attempts to paint the agency as a prime example of government overreach. Beyond Meta, biotech giant Illumina is suing the FTC to overturn a decision blocking its $7 billion acquisition of the cancer detection startup Grail.
The Biden White House recently enacted its latest executive order designed to establish a guiding framework for generative artificial intelligence development — including content authentication and using digital watermarks to indicate when digital assets made by the Federal government are computer generated. Here’s how it and similar copy protection technologies might help content creators more securely authenticate their online works in an age of generative AI misinformation.
A quick history of watermarking
Analog watermarking techniques were first developed in Italy in 1282. Papermakers would implant thin wires into the paper mold, creating almost imperceptibly thinner areas of the sheet that became apparent when held up to a light. Not only were analog watermarks used to authenticate where and how a company’s products were produced, the marks could also be leveraged to pass concealed, encoded messages. By the 18th century, the technology had spread to government use as a means to prevent currency counterfeiting. Color watermark techniques, which sandwich dyed materials between layers of paper, were developed around the same period.
Though the term “digital watermarking” wasn’t coined until 1992, the technology behind it was first patented by the Muzac Corporation in 1954. The system it built, and used until the company was sold in the 1980s, identified music owned by Muzac using a “notch filter” to block the audio signal at 1 kHz in specific bursts, like Morse code, to store identification information.
Advertisement monitoring and audience measurement firms like the Nielsen Company have long used watermarking techniques to tag the audio tracks of television shows to track and understand what American households are watching. These steganographic methods have even made their way into the modern Blu-ray standard (the Cinavia system), as well as government applications like authenticating driver’s licenses, national currencies and other sensitive documents. Digimarc, for example, has developed a watermark for packaging that prints a product’s barcode nearly invisibly all over the box, allowing any digital scanner in line of sight to read it. It’s also been used in applications ranging from brand anti-counterfeiting to more efficient material recycling.
The here and now
Modern digital watermarking operates on the same principles, imperceptibly embedding added information into a piece of content (be it image, video or audio) using special encoding software. These watermarks are easily read by machines but are largely invisible to human users. The practice differs from existing cryptographic protections like product keys or software protection dongles in that watermarks don’t actively prevent the unauthorized alteration or duplication of a piece of content, but rather provide a record of where the content originated or who the copyright holder is.
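To make that concrete, here’s a toy sketch of one embedding approach: hiding a payload in the least significant bits of an image’s pixels. This is purely illustrative; commercial watermarks like Digimarc’s use far more robust, proprietary encodings designed to survive compression, scaling and cropping, which this one would not.

```python
# Toy least-significant-bit (LSB) watermark: the payload rides in the lowest
# bit of each pixel value, far below the threshold of human perception.
import numpy as np

def embed(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the original is safe
    assert bits.size <= flat.size, "image too small for payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bytes: int) -> bytes:
    return np.packbits(pixels.flatten()[:n_bytes * 8] & 1).tobytes()

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed(image, b"(c) 2023 Jane Doe")
print(extract(marked, 17))  # b'(c) 2023 Jane Doe'
```

True to the paragraph above, nothing here stops anyone from copying or editing the image; the mark only records a claim about its origin.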
The system is not perfect, however. “There is nothing, literally nothing, to protect copyrighted works from being trained on [by generative AI models], except the unverifiable, unenforceable word of AI companies,” Dr. Ben Zhao, Neubauer Professor of Computer Science at University of Chicago, told Engadget via email.
“There are no existing cryptographic or regulatory methods to protect copyrighted works — none,” he said. “Opt-out lists have been made a mockery of by stability.ai (they changed the model name to SDXL to ignore everyone who signed up to opt out of SD 3.0), and Facebook/Meta, who responded to users on their recent opt-out list with a message that said ‘you cannot prove you were already trained into our model, therefore you cannot opt out.’”
Zhao says that while the White House's executive order is “ambitious and covers tremendous ground,” plans laid out to date by the White House have lacked much in the way of “technical details on how it would actually achieve the goals it set.”
He notes that “there are plenty of companies who are under no regulatory or legal pressure to bother watermarking their genAI output. Voluntary measures do not work in an adversarial setting where the stakeholders are incentivized to avoid or bypass regulations and oversight.”
“Like it or not, commercial companies are designed to make money, and it is in their best interests to avoid regulations,” he added.
We could also very easily see the next presidential administration come into office and dismantle Biden’s executive order and all of the federal infrastructure that went into implementing it, since an executive order lacks the constitutional standing of congressional legislation. But don’t count on the House and Senate doing anything about the issue either.
“Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” Anu Bradford, a law professor at Columbia University, told MIT Tech Review. So far, enforcement mechanisms for these watermarking schemes have been generally limited to pinky swears by the industry’s major players.
How Content Credentials work
With the wheels of government turning so slowly, industry alternatives are proving necessary. Microsoft, the New York Times, CBC/Radio-Canada and the BBC began Project Origin in 2019 to protect the integrity of content, regardless of the platform on which it’s consumed. At the same time, Adobe and its partners launched the Content Authenticity Initiative (CAI), approaching the issue from the creator’s perspective. Eventually CAI and Project Origin combined their efforts to create the Coalition for Content Provenance and Authenticity (C2PA). From this coalition of coalitions came Content Credentials (“CR” for short), which Adobe announced at its Max event in 2021.
CR attaches additional information about an image whenever it is exported or downloaded in the form of a cryptographically secure manifest. The manifest pulls data from the image or video header — the creator’s information, where it was taken, when it was taken, what device took it, whether generative AI systems like DALL-E or Stable Diffusion were used and what edits have been made since — allowing websites to check that information against provenance claims made in the manifest. When combined with watermarking technology, the result is a unique authentication method that cannot be easily stripped like EXIF and metadata (i.e. the technical details automatically added by the software or device that took the image) when uploaded to social media sites (on account of the cryptographic file signing). Not unlike blockchain technology!
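The actual C2PA manifest is a binary structure signed with certificate-backed keys, so what follows is only a sketch of the signing concept, with made-up field names and a bare Ed25519 key (via the third-party cryptography package) standing in for a real credential.

```python
# Sketch of a signed provenance manifest. The real C2PA format is a binary
# (CBOR/JUMBF) structure with X.509 certificates; these fields are illustrative.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # stands in for a creator credential

image_bytes = open("photo.jpg", "rb").read()  # hypothetical asset
manifest = {
    "creator": "Jane Doe",
    "captured": "2023-11-29T12:00:00Z",
    "device": "Leica M11-P",
    "edits": ["crop", "exposure"],
    "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = signing_key.sign(payload)

# A verifier holding the matching public key can confirm the claims weren't
# altered and that they refer to exactly these image bytes; changing either
# the pixels or the manifest breaks verification.
signing_key.public_key().verify(signature, payload)  # raises if tampered
```

Binding the signature to a hash of the pixels is what makes the manifest harder to strip or swap than ordinary metadata.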
Metadata doesn’t typically survive common workflows as content is shuffled around the internet because, Digimarc Chief Product Officer Ken Sickles explained to Engadget, many online systems weren’t built to support or read it and so simply ignore the data.
“The analogy that we’ve used in the past is one of an envelope,” Digimarc Chief Technology Officer Tony Rodriguez told Engadget. Like an envelope, the valuable content that you want to send is placed inside, “and that’s where the watermark sits. It’s actually part of the pixels, the audio, of whatever that media is. Metadata, all that other information, is being written on the outside of the envelope.”
Should someone manage to strip the credentials (turns out, that’s not so difficult: just screenshot the image and crop out the icon), they can be reattached through Verify, which runs machine vision algorithms against an uploaded image to find matches in its repository. If the uploaded image can be identified, the credentials get reapplied. If a user encounters the image in the wild, they can check its credentials by clicking on the CR icon to pull up the full manifest, verify the information for themselves and make a more informed decision about what online content to trust.
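Exactly how Verify’s matching works isn’t public, but a perceptual hash is the textbook way to re-identify an image after mild degradation and serves as a reasonable stand-in for the idea: fingerprint the image coarsely enough that a recompressed or lightly edited copy still lands on nearly the same bits.

```python
# Average hash: a crude perceptual fingerprint for re-identifying an image.
# (Verify's actual matching algorithms may differ; this is only a stand-in.)
import numpy as np

def average_hash(pixels: np.ndarray, size: int = 8) -> np.ndarray:
    h, w = pixels.shape
    crop = pixels[:h - h % size, :w - w % size]  # trim to a multiple of size
    means = crop.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (means > means.mean()).flatten()      # 64 bits for size=8

def distance(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))  # Hamming distance; 0 = identical

original = np.random.randint(0, 256, (256, 256)).astype(float)
reposted = original + np.random.normal(0, 4, original.shape)  # light damage
print(distance(average_hash(original), average_hash(reposted)))  # stays small
```

A repository lookup then reduces to finding stored fingerprints within a small Hamming distance of the query.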
Sickles envisions these authentication systems operating in coordinating layers, like a home security system that pairs locks and deadbolts with cameras and motion sensors to increase its coverage. “That’s the beauty of Content Credentials and watermarks together,” Sickles said. “They become a much, much stronger system as a basis for authenticity and understanding provenance around an image than they would individually.” Digimarc freely distributes its watermark detection tool to generative AI developers, and is integrating the Content Credentials standard into its existing Validate online copy protection platform.
In practice, we’re already seeing the standard incorporated into physical commercial products like the Leica M11-P, which will automatically affix a CR credential to images as they’re taken. The New York Times has explored its use in journalistic endeavors, Reuters employed it for its ambitious 76 Days feature and Microsoft has added it to Bing Image Creator and the Bing AI chatbot as well. Sony is reportedly working to incorporate the standard into its Alpha 9 III digital cameras, with enabling firmware updates for the Alpha 1 and Alpha 7S III models arriving in 2024. CR is also available in Adobe’s expansive suite of photo and video editing tools, including Illustrator, Adobe Express, Stock and Behance. The company’s own generative AI, Firefly, will automatically include non-personally identifiable information in a CR for some features like generative fill (essentially noting that the generative feature was used, but not by whom) but will otherwise be opt-in.
That said, the C2PA standard and front-end Content Credentials are barely out of development and currently exceedingly difficult to find on social media. “I think it really comes down to the wide-scale adoption of these technologies and where it's adopted; both from a perspective of attaching the content credentials and inserting the watermark to link them,” Sickles said.
Nightshade: The CR alternative that’s deadly to databases
Some security researchers have had enough of waiting around for laws to be written or industry standards to take root, and have instead taken copy protection into their own hands. Teams from the University of Chicago’s SAND Lab, for example, have developed a pair of downright nasty copy protection systems for use specifically against generative AIs.
Zhao and his team have developed Glaze, a system for creators that disrupts a generative AI’s style mimicry (by exploiting the concept of adversarial examples). It can change the pixels in a given artwork in a way that is undetectable by the human eye but appears radically different to a machine vision system. When a generative AI system is trained on these “glazed” images, it becomes unable to exactly replicate the intended style of art — cubism becomes cartoony, abstract styles are transformed into anime. This could prove a boon especially to well-known and often-imitated artists, keeping their branded artistic styles commercially safe.
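Glaze’s actual optimization is more involved, but the adversarial-example principle it exploits fits in a few lines: nudge each pixel a tiny, bounded amount in the direction that most shifts what a feature extractor sees. Here’s a minimal FGSM-style sketch in PyTorch, where the model and the decoy style features are hypothetical stand-ins:

```python
# The adversarial-example idea behind Glaze, in miniature: a bounded pixel
# perturbation, invisible to people, that shifts the features a model extracts.
# Glaze's real method is more sophisticated; this is a textbook FGSM step.
import torch

def cloak(model, image, decoy_features, epsilon=2 / 255):
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.mse_loss(model(image), decoy_features)
    loss.backward()
    # Step each pixel by at most epsilon toward the decoy style's features.
    return (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

# Hypothetical stand-in for a style feature extractor.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 16))
artwork = torch.rand(1, 3, 32, 32)
decoy = torch.randn(1, 16)  # features of a different, decoy style
cloaked = cloak(model, artwork, decoy)
print((cloaked - artwork).abs().max())  # <= epsilon: visually unchanged
```

The epsilon bound is what keeps the change invisible; following the gradient is what makes such a tiny change so disruptive to the model.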
While Glaze focuses on preventative actions to deflect the efforts of illicit data scrapers, SAND Lab’s newest tool is whole-heartedly punitive. Dubbed Nightshade, the system subtly changes the pixels in a given image, but instead of confusing the models it’s trained with like Glaze does, the poisoned image corrupts the entire training database it’s ingested into, forcing developers to go back through and manually remove each damaging image to resolve the issue — otherwise the system will simply retrain on the bad data and suffer the same issues again.
The tool is meant as a “last resort” for content creators but cannot be used as a vector of attack. “This is the equivalent of putting hot sauce in your lunch because someone keeps stealing it out of the fridge,” Zhao argued.
Zhao has little sympathy for the owners of models that Nightshade damages. “The companies who intentionally bypass opt-out lists and do-not-scrape directives know what they are doing,” he said. “There is no ‘accidental’ download and training on data. It takes a lot of work and full intent to take someone’s content, download it and train on it.”
YouTube Music users who have seen their Spotify- and Apple Music-using friends share their listening stats from this year can now join the party. YouTube Music Recap is now live and you can access it from the 2023 Recap page in the app. You'll be able to see your top artists, songs, moods, genres, albums, playlists and more from 2023. There's also the option to view your Recap in the main YouTube app, along with some other new features for 2023.
This year, you'll be able to add custom album art. YouTube will create this using your top song and moods from the year, as well as your energy score. The platform will mash together colors, vibes and visuals to create a representation of your year in music.
YouTube says another feature will match your mood with your top songs of the year. You might see, for instance, the percentages of songs you listened to that are classed as upbeat, fun, dancey or chill. Last but not least, you can use snaps from Google Photos to create a customized visual that sums up your year in music (and perhaps your year in travel too).
Mini PCs are becoming quite the trend these days, but despite their small size, they’re not exactly meant, or easy, to carry around. Their boxy shapes, while space-efficient, aren’t conducive to portability, and they need to be plugged into a power source, monitor, keyboard and mouse to even be usable. There are exceptions to this formula, of course, and one manufacturer had the rather unconventional and somewhat outlandish idea of a portable mini PC you can carry without a bag: add a shoulder strap to its sides and the PC itself becomes something like a glamorous purse or handbag.
You can already tell at a glance that this isn’t your run-of-the-mill mini PC. It has a retro-futuristic vibe, with a rounded rectangular shape, glossy plastic finish, front grille and chromed levers and feet. The lever at the top is a physical volume control that adds a little fun to the act of adjusting the volume. The design is both simple and elegant, but it hides a few tricks that set it further apart from other mini computers.
For starters, there are two chrome buttons at the sides where you can attach a matching strap to carry the PC on your shoulder. You’ll probably still want to put it inside a large carrying bag for protection, but you can carry it directly if you’re just moving quickly from one room to another in the same building. That said, the SOONNOOZ Mini is not exactly that small, so it might look awkward carried that way. And at 1.5kg, it’s not lightweight either.
You’d still need to connect it to some peripherals to use it, of course, but you might not need to have it always plugged in. It has a built-in battery, not unlike a laptop, which could allow you a few hours of use before you need to recharge it. This makes it convenient as a portable entertainment system when paired with a portable projector, though you’ll still need a way to navigate the computer, like with a portable keyboard and mouse.
Its last trick is that its fascia is actually a detachable Bluetooth speaker that can be used on its own. As far as specs go, it’s a pretty standard mini PC that won’t really stand out in terms of performance, though certain configurations could definitely support some light gaming. Interesting as it might be, the SOONNOOZ Mini isn’t something you can acquire outside of China, so its novelty will probably never reach global renown.
Evernote has confirmed the service’s tightly leashed new free plan, which the company tested with some users earlier this week. Starting December 4, the note-taking app will restrict new and current free accounts to 50 notes and one notebook. Existing free customers who exceed those limits can still view, edit, delete and export their notes, but they’ll need to upgrade to a paid plan (or delete enough old notes) to create new ones beyond the new confines.
The company says most free accounts are already inside those lines. “When setting the new limits, we considered that the majority of our Free users fall below the threshold of fifty notes and one notebook,” the company wrote in an announcement blog post. “As a result, the everyday experience for most Free users will remain unchanged.” Engadget reached out to Evernote to clarify whether “the majority of Free users” staying within those bounds includes long-dormant accounts that may have tried the app for a few minutes a decade ago and never logged in again. We’ll update this article if we hear back.
Evernote’s premium plans, now practically essential for anything more than minimal use, include a $15 monthly Personal plan with 10GB of monthly uploads. You can double that to 20GB (and get other perks) with an $18 tier. It also offers annual versions of those plans for $130 and $170, respectively.
The company acknowledged in its announcement post that “these changes may lead you to reconsider your relationship with Evernote.” Leading alternatives with more bountiful free plans include Notion, Microsoft OneNote, Google Keep, Bear (Apple devices only), Obsidian and SimpleNote.
When I first got to see the Expressive E Osmose way back in 2019, I knew it was special. In my 15-plus years covering technology, it was one of the only devices I’ve experienced that actually had the potential to be truly “game changing.” And I’m not being hyperbolic.
But, that was four years ago, almost to the day. A lot has changed in that time. MPE (MIDI Polyphonic Expression) has gone from futuristic curiosity to being embraced by big names like Ableton and Arturia. New players have entered and exited the scene. More importantly, the Osmose is no longer a promising prototype, but an actual commercial product. The questions, then, are obvious: Does the Osmose live up to its potential? And, does it seem as revolutionary today as it did all those years ago? The answers, however, are less clear.
What sets the Osmose ($1,799) apart from every other MIDI controller and synthesizer (MPE or otherwise) is its keybed. At first glance, it looks like almost any other keyboard, albeit a really nice one. The body is mostly plastic, but it feels solid and the top plate is made of metal. (Shoutout to Expressive E, by the way, for building the Osmose out of 66 percent recycled materials and for making the whole thing user repairable — no glue or specialty screws to be found.)
The keys themselves have this lovely, almost matte finish and a healthy amount of heft. It’s a nice change of pace from the shiny, springy keys on even some higher-end MIDI controllers. But the moment you press down on a key you’ll see what sets it apart — the keys move side to side. And this is not because it’s cheaply assembled and there’s a ton of wiggle. This is a purposeful design. You can bend notes (or control other parameters) by actually bending the keys, much like you would on a stringed instrument.
This is huge for someone like me who is primarily a guitar player. Bending strings and wiggling my fingers back and forth to add vibrato comes naturally. And, as I mentioned in my review of Roli’s Seaboard Rise 2, I find myself doing this even on keyboards where I know it will have no effect. It’s a reflex.
It’s a very simple thing to explain, but its effect on your playing is difficult to encapsulate. It’s all of the same things that make playing the Seaboard special: the slight pitch instability from the unintentional micro movements of your fingers, the ability to bend individual notes for shifting harmonies and the polyphonic aftertouch that allows you to alter things like filter cutoff on a per-note basis.
These tiny changes in tuning and expression add an almost ineffable fluidity to your playing. In particular, for sounds based on acoustic instruments like flutes and strings, it adds an organic element missing from almost every other synthesizer. There is a bit of a learning curve, but I got the hang of it after just a few days.
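Under the hood, that per-note expressiveness is what MPE standardizes: each sounding note is assigned its own MIDI channel, so pitch bend and pressure messages, which are normally channel-wide, land on one note at a time. Here’s a minimal sketch using the mido library; the port and note choices are just examples:

```python
# Per-note expression the MPE way: every note gets its own channel, so a
# pitch bend or pressure message affects only that note.
import mido

out = mido.open_output()  # default MIDI output port on your system

# Two notes on two "member" channels (MPE reserves one channel as the master).
out.send(mido.Message('note_on', channel=1, note=60, velocity=100))  # C4
out.send(mido.Message('note_on', channel=2, note=64, velocity=100))  # E4

# Bend only the C4; the E4 is untouched because it lives on its own channel.
out.send(mido.Message('pitchwheel', channel=1, pitch=4096))  # range -8192..8191

# Per-note "aftertouch" is just channel pressure on that note's channel.
out.send(mido.Message('aftertouch', channel=2, value=90))

out.send(mido.Message('note_off', channel=1, note=60))
out.send(mido.Message('note_off', channel=2, note=64))
```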
What separates it from the Roli, though, is its form factor. While the Seaboard is keyboard-esque, it’s still a giant squishy slab of silicone. It might not appeal to someone who grew up taking piano lessons every week. The Osmose, on the other hand, is a traditional keyboard, with full-sized keys and a very satisfying action. It’s probably the most familiar and approachable implementation of MPE out there.
If you are a pianist, or an accomplished keyboard player, this is probably the MPE controller you’ve been waiting for. And it’s hands-down one of the best on the market.
Where things get a little dicier is when looking at the Osmose as a standalone synthesizer. But let’s start where it goes right: the interface. The screen to the left of the keyboard is decently sized (around 4 inches) and easy to read at any angle. There are even some cute graphics for parameters such as timbre (a log), release (a yo-yo) and drive (a steering wheel).
There aren’t a ton of hands-on controls, but menu diving is kept to a minimum with some smart organization. The four buttons across the top of the screen take you to different sections for presets, synth (parameters and macros), sensitivity (MPE and aftertouch controls) and playing (mostly just for the arpeggiator at the moment). Then to the left of the screen there are two encoders for navigating the submenus, and the four knobs below control whatever option is listed above them on the screen. So, no, you’re not going to be doing a lot of live tweaking, but you also won’t spend 30 minutes trying to dial in a patch.
Part of the reason you won’t spend 30 minutes dialing in a patch is because there really isn’t much to dial in. The engine driving the Osmose is Haken Audio’s EaganMatrix, and Expressive E keeps most of it hidden behind six macro controls. In fact, you can’t really design a patch from scratch — at least not on the synth directly. You need to download the Haken Editor, which requires Max (not the streaming service), to do serious sound design. Then you need to upload your new patch to the Osmose over USB. Other than that, you’re stuck tweaking presets.
This isn’t necessarily a bad thing because, frankly, EaganMatrix feels less like a musical instrument and more like a PhD thesis. It is undeniably powerful, but it’s also confusing as hell. Expressive E even describes it as “a laboratory of synthesis,” and that seems about right; patching in the EaganMatrix is like doing science. Except it’s not the fun science you see on TV with fancy machines and test tubes. Instead, it’s more like the daily grind of real-life science, where you stare at a nearly inscrutable series of numbers, letters, mathematical constants and formulas.
I couldn’t get the Osmose and Haken Editor to talk to each other on my studio laptop (a five-year-old Dell XPS), though I did manage to get it to work on my work-issued MacBook. That being said, it was mostly a pointless endeavor. I simply can’t wrap my head around the EaganMatrix. I was able to build a very basic patch with the help of a tutorial, but I couldn’t actually make anything usable.
There are some presets available on Patchstorage, but the community is nowhere near as robust as what you’d find for the Organelle or ZOIA. And it’s not obvious how to actually upload that handful of presets to the Osmose. You can drag and drop the downloaded .mid files onto the empty slots across the top of the Haken Editor, which will add them to the Osmose’s user presets. But you won’t actually see that reflected on the Osmose itself until you turn it off and back on.
Honestly, many of the presets available on Patchstorage cover the same ground as the 500 or so factory ones that ship with the Osmose. And it’s while browsing those hundreds of presets that both the power and the limitations of the EaganMatrix become obvious. It’s capable of covering everything from virtual analog to FM to physical modeling, and even some pseudo-granular effects. Its modular, matrix-based patching system is so robust that it would almost certainly be impossible to recreate physically (at least without spending thousands of dollars).
Now, this is largely a matter of taste, but I find the sounds that come out of this obviously overpowered synth often underwhelming. They’re definitely unique and in some cases probably only possible with the EaganMatrix. But the virtual analog patches aren’t very “analog,” the FM ones lack the character of a DX7 or the modern sheen of a Digitone, and the bass patches could use some extra oomph. Sometimes patches on the Osmose feel like tech demos rather than something you’d actually use musically.
That’s not to say there are no good presets. There are some solid analog-ish sounds and a few decent FM pads. But it’s the physical modeling patches where EaganMatrix is at its best. They definitely land in a kind of uncanny valley, though — not convincing enough to be mistaken for the real thing, but close enough that it doesn’t seem quite right coming out of a synthesizer.
Still, the way tuned drums and plucked or bowed strings are handled by Osmose is impressive. Quickly tapping a key can get you a ringing resonant sound, while holding it down mutes it. Aftertouch can be used to trigger repeated plucks that increase in intensity as you press harder. And bowed patches can be smart enough to play notes within a certain range of each other as legato, while still allowing you to play more spaced out chords with your other hand. (This latter feature is called Pressure Glide and can be fine tuned to suit your needs.)
The level of precision with which you can gently coax sound out of some presets with the lightest touch is unmatched by any synth or MIDI controller I’ve ever tested. And that becomes all the more shocking when you realize that very same patch can also be a percussive blast if you strike the keys hard.
But, at the end of the day, I rarely find myself reaching for the Osmose — at least not as a synthesizer. I’ve been testing one for a few months now, and while I have used it quite extensively in my studio, it’s been mostly as a controller for MPE-enabled soft synths like Arturia’s Pigments and Ableton’s Drift. It’s undeniably one of the most powerful MIDI controllers on the market. My one major complaint on that front is that its incredible arpeggiator isn’t available in controller mode.
The Osmose is a gorgeous instrument that, in the right hands, is capable of delivering nuanced performances unlike anything else. Even if, at times, the borrowed sound engine doesn’t live up to the keyboard’s lofty potential.
Google is rolling out a trio of system updates to Android, Wear OS and Google TV devices. Each brings new features to associated gadgets. Android devices, like smartphones, are getting updated Emoji Kitchen sticker combinations. You can remix emojis and share with friends as stickers via Gboard.
Google Messages for Android is getting a nifty little refresh. There’s a new beta feature that lets users add a unique background and an animated emoji to voice messages. Google’s calling the software Voice Moods and says it’ll help users better express how they’re “feeling in the moment.” Nothing conveys emotion more than a properly positioned emoji. There are also new reactions for messages that go far beyond simple thumbs-ups, with some taking up the entire screen. In addition, you’ll be able to change chat bubble colors.
The company’s also adding an interesting tool that provides AI-generated image descriptions for people with low vision. The TalkBack feature will read aloud a description of any image, whether sourced from the internet or a photo that you took. Google’s also adding new languages to its Live Caption feature, enhancing the pre-existing ability to take phone calls without needing to hear the speaker. Better accessibility is always a good thing.
Wear OS is getting a bunch of little updates. You can control more smart home devices and light groups directly from a watch, which comes in handy when creating mood lighting. You can also tell your smart home devices that you are home or away with a tap. There’s a new Assistant Routines feature that automates daily tasks and an Assistant At a Glance shortcut on the watch face that displays information relevant to your day, like the weather and traffic data.
As for Google TV, there are ten new free channels to choose from, bringing the grand total to well over 800. None of these channels require an additional subscription, but they will have commercials. All of these updates begin rolling out today, but it could be a few weeks before they reach everyone’s devices.
Google is rolling out a string of updates for the Messages app, including the ability to customize the colors of the text bubbles and backgrounds. So, if you really want to, you can have blue bubbles in your Android messaging app. You can have a different color for each chat, which could help prevent you from accidentally leaking a secret to family or friends.
With the help of on-device Google AI (meaning you'll likely need a recent Pixel device to use this feature), you can transform photos into reactions with Photomoji. All you need to do is pick a photo, decide which object (or person or animal) you'd like to turn into a Photomoji and hit the send button. These reactions will be saved for later use, and friends in the chat can use any Photomoji you send them as well.
The new Voice Moods feature allows you to apply one of nine different vibes to a voice message, by showing visual effects such as heart-eye emoji, fireballs (for when you're furious) and a party popper. Google says it has also upgraded the quality of voice messages by bumping up the bitrate and sampling rate.
In addition, there are more than 15 Screen Effects you can trigger by typing things like "It's snowing" or "I love you." These will make "your screen erupt in a symphony of colors and motion," Google says. Elsewhere, Messages will display animated effects when certain reactions and emoji are used.
On top of all of that, users will now be able to set up a profile that appends their name and photo to their phone number to help them have more control over how they appear across Google services. The company says this feature could help when it comes to receiving messages from a phone number that isn't in your group chats. It could help you know the identity of everyone in a group chat too.
Some of these features will be available in beta starting today in the latest version of Google Messages. Google notes that some feature availability will depend on market and device.
Google is rolling out these updates alongside the news that more than a billion people now use Google Messages with RCS enabled every month. RCS (Rich Communication Services) is a more feature-filled and secure format of messaging than SMS and MMS. It supports features such as read receipts, typing indicators, group chats and high-res media. Google also offers end-to-end encryption for one-on-one and group conversations via RCS.
For years, Google had been trying to get Apple to adopt RCS for improved interoperability between Android and iOS. Apple refused, perhaps because iMessage (and its blue bubbles) have long been a status symbol for its users. However, likely to ensure Apple falls in line with European Union regulations, Apple has relented. The company recently said it would start supporting RCS in 2024.