Apple halts iPadOS 18 update for M4 iPad Pro after bricking reports

Apple has temporarily paused the rollout of iPadOS 18 for M4 iPad Pro models, some of the most expensive iPads that the company sells, after some users complained that the update bricked their devices. Apple acknowledged the issue in a statement to Engadget, saying, “We have temporarily removed the iPadOS 18 update for M4 iPad Pro models as we work to resolve an issue that is impacting a small number of devices.”

The issue first came to light through Reddit, where a growing number of M4 iPad Pro users described how their iPads became unusable after they tried installing the latest version of iPadOS. “At some point during the update my iPad turned off, and would no longer turn on,” a user named tcorey23 posted on Reddit. “I just took it to the Apple Store who confirmed it’s completely bricked, but they said they had to send it out to their engineers before they can give me a replacement even though I have Apple care.”

Another Reddit user called Lisegot wrote that the Apple Store they took their bricked M4 iPad Pro to did not have a replacement in stock, which meant they would need to wait five to seven days for a working iPad. “No one was particularly apologetic and they even insinuated that there was no way for them to know whether the update caused this,” they wrote.

Having a software bug brick an iPad is rare. Ars Technica, which first reported this story, pointed out that iPads can typically be put into recovery mode if a software update goes bad.

If you own an M4 iPad Pro, Apple will no longer offer you iPadOS 18 until the issue is resolved, and it’s not clear when that will be.

OpenAI’s new safety board has more power and no Sam Altman

OpenAI has announced significant changes to its safety and security practices, including the establishment of a new independent board oversight committee. Most notably, CEO Sam Altman is no longer part of the safety committee, a departure from the previous structure.

The newly formed Safety and Security Committee (SSC) will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other key members include Quora CEO Adam D'Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, former EVP and General Counsel of Sony Corporation. 

This new committee replaces the previous Safety and Security Committee that was formed in June 2024, which included Altman among its members. The original committee was tasked with making recommendations on critical safety and security decisions for OpenAI projects and operations.

The SSC's responsibilities now extend beyond recommendations. It will have the authority to oversee safety evaluations for major model releases and, crucially, the power to delay a launch until safety concerns are adequately addressed.

This restructuring follows a period of scrutiny regarding OpenAI's commitment to AI safety. The company has faced criticism in the past for disbanding its Superalignment team and the departures of key safety-focused personnel. The removal of Altman from the safety committee appears to be an attempt to address concerns about potential conflicts of interest in the company's safety oversight.

OpenAI's latest safety initiative also includes plans to enhance security measures, increase transparency about their work, and collaborate with external organizations. The company has already reached agreements with the US and UK AI Safety Institutes to collaborate on researching emerging AI safety risks and standards for trustworthy AI. 

OpenAI’s new o1 model is slower, on purpose

OpenAI has unveiled its latest artificial intelligence model called o1, which, the company claims, can perform complex reasoning tasks more effectively than its predecessors. The release comes as OpenAI faces increasing competition in the race to develop more sophisticated AI systems. 

O1 was trained to "spend more time thinking through problems before they respond, much like a person would," OpenAI said on its website. "Through training, [the models] learn to refine their thinking process, try different strategies, and recognize their mistakes." OpenAI envisions the new model being used by healthcare researchers to annotate cell sequencing data, by physicists to generate mathematical formulas and by software developers to build and execute multi-step workflows.

Current AI systems are essentially fancier versions of autocomplete, generating responses through statistics instead of actually "thinking" through a question, which means that they are less "intelligent" than they appear to be. When Engadget tried to get ChatGPT and other AI chatbots to solve the New York Times Spelling Bee, for instance, they fumbled and produced nonsensical results.
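As a toy illustration of that "fancier autocomplete" idea (a deliberately crude sketch, nothing like a production model's architecture), here is next-word prediction driven purely by statistics:

```python
# Toy "autocomplete through statistics": a bigram model that always picks
# the most frequent next word observed in its tiny training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str, length: int = 4) -> str:
    """Greedily extend a word by repeatedly taking the likeliest next word."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> "the cat sat on the"
```

Swap the ten-word corpus for trillions of words and the frequency table for a neural network and you have the rough shape of a modern chatbot: impressive pattern completion, but, as the Spelling Bee example shows, no guarantee of actual reasoning.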

With o1, the company claims that it is "resetting the counter back to 1" with a new kind of AI model designed to actually engage in complex problem-solving and logical thinking. In a blog post detailing the new model, OpenAI said that it performs similarly to PhD students on challenging benchmark tasks in physics, chemistry and biology, and excels in math and coding. For example, its current flagship model, GPT-4o, correctly solved only 13 percent of problems in a qualifying exam for the International Mathematics Olympiad compared to o1, which solved 83 percent.  

The new model, however, doesn't include capabilities like web browsing or the ability to upload files and images. And, according to The Verge, it's significantly slower at processing prompts compared to GPT-4o. Despite having longer to consider its outputs, o1 hasn't solved the problem of "hallucinations" — a term for AI models making up information. "We can't say we solved hallucinations," the company's chief research officer Bob McGrew told The Verge.

O1 is still at a nascent stage. OpenAI calls it a "preview" and is making it available only to paying ChatGPT customers starting today, with restrictions on how many questions they can ask it per week. OpenAI is also launching o1-mini, a slimmed-down version that the company says is particularly effective for coding.

Apple invents its own version of Google Lens called Visual Intelligence

Apple has introduced a new feature called Visual Intelligence with the iPhone 16, which appears to be the company's answer to Google Lens. Unveiled during its September 2024 event, Visual Intelligence aims to help users interact with the world around them in smarter ways.

The new feature is activated by a new touch-sensitive button on the right side of the device called Camera Control. With a click, Visual Intelligence can identify objects, provide information, and offer actions based on what you point it at. For instance, aiming it at a restaurant will pull up menus, hours, or ratings, while snapping a flyer for an event can add it directly to your calendar. Point it at a dog to quickly identify the breed, or click a product to search for where you can buy it online.

Later this year, Camera Control will also serve as a gateway into third-party tools with specific domain expertise, according to Apple's press release. For instance, users will be able to leverage Google for product searches or tap into ChatGPT for problem-solving, all while maintaining control over when and how these tools are accessed and what information is shared. Apple emphasized that the feature is designed with privacy in mind, meaning the company doesn’t have access to the specifics of what users are identifying or searching.

That privacy protection, Apple claims, comes from processing data on the device itself, so the company never sees what you clicked on.

There’s no Apple Watch Ultra 3, just a new color and a new band

At its September 2024 iPhone event, Apple didn’t announce a new version of the Apple Watch Ultra as it has for the past two years. Instead, it updated the Apple Watch Ultra 2 with a new color and a new band, as well as several enhancements through watchOS 11.

The Ultra 2 now comes in a satin black finish, which, Apple claims, was achieved through a "custom blasting process" and a "diamond-like carbon physical vapor deposition," giving the rugged smartwatch a refined and durable look. A notable addition is a new band — a titanium Milanese loop, inspired by mesh historically used by divers. This band is designed for both style and performance, featuring corrosion-resistant titanium that makes it suitable for scuba diving and other water activities. Apple also highlighted that the Ultra 2 is made from 95% recycled grade 5 titanium as part of its efforts to be "carbon neutral." 

The Ultra 2 will also get new software enhancements through watchOS 11, which introduces a bevy of new features such as sleep apnea notifications, an enhanced Vitals app and the Tides app, which offers tidal forecasts and conditions for various water activities. Another practical upgrade is the ability to play audio directly through the Watch’s built-in speakers, allowing users to listen to music, podcasts and more without needing to connect to headphones or another device. (These features are also coming to the new Apple Watch Series 10, which was announced today alongside the iPhone 16 and AirPods 4.)

Pre-orders for the black titanium version, along with the new titanium Milanese Loop and other updated bands, are now available, with shipping beginning September 20. The Apple Watch Ultra 2 continues to start at $799, though you can get earlier band and color combos right now at Amazon for as much as $110 off.

Update, Sept. 9, 6:34PM ET: Added some additional context, including specifying that the new watchOS 11 features coming to the Ultra 2 will also be available on the Apple Watch Series 10.

The US, UK, EU and other major nations have signed a landmark global AI treaty

The United States, United Kingdom, European Union, and several other countries have signed an AI safety treaty laid out by the Council of Europe (COE), an international standards and human rights organization. This landmark treaty, known as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature in Vilnius, Lithuania. It is the first legally binding international agreement aimed at ensuring that AI systems align with democratic values.

The treaty focuses on three main areas: protecting human rights (including privacy and preventing discrimination), safeguarding democracy, and upholding the rule of law. It also provides a legal framework covering the entire lifecycle of AI systems, promoting innovation, and managing potential risks.

Besides the US, the UK and the EU, the treaty’s other signatories include Andorra, Georgia, Iceland, Norway, Moldova, San Marino and Israel. Notably absent are Russia and many major countries in Asia and the Middle East, though any country will be eligible to join in the future as long as it commits to comply with the treaty’s provisions, according to a statement from the Council of Europe.

“We must ensure that the rise of AI upholds our standards, rather than undermining them,” said COE secretary general Marija Pejčinović Burić in the statement. “The Framework Convention is designed to ensure just that. It is a strong and balanced text - the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives.”

The treaty will enter into force three months after five signatories, including at least three Council of Europe member states, ratify it. The COE’s treaty joins other recent efforts to regulate AI including the UK's AI Safety Summit, the G7-led Hiroshima AI Process, and the UN's AI resolution.

This startup wants to be the iTunes of AI content licensing

TollBit wants to be a marketplace for AI companies and publishers. (Image: TollBit)

The 28-year-old founders of TollBit, a New York-based startup that is all of six months old, think we’re living in the “Napster days” of AI. Just as people of a certain generation downloaded music without paying for it, AI companies are ripping off vast swaths of the internet without paying the rights holders. The founders want TollBit to be the iTunes of the AI world.

“It’s kind of the Wild West right now,” Olivia Joslin, the company’s co-founder and chief operating officer, told Engadget in an interview. “We want to make it easier for AI companies to pay for the data they need.” Their idea is simple: create a marketplace that connects AI companies that need access to fresh, high-quality data to the publishers who actually spend money creating it.

AI companies have, indeed, only recently started paying for (some of) the data they need from news publishers. OpenAI kicked off an AI arms race with the release of ChatGPT at the end of 2022, but it was only a year ago that the company signed the first of its many licensing deals, with the Associated Press. Later that year, OpenAI announced a partnership with German publisher Axel Springer, which operates Business Insider and Politico in the US. Multiple publishers, including Vox, the Financial Times, News Corp and TIME, have since signed deals with OpenAI and Google.

But that still leaves countless other publishers and creators out in the cold — without the option to strike this Faustian bargain even if they want to. This is the “long tail” of publishers that TollBit wants to target.

“Powerful AI models already exist and they have already been trained,” Toshit Panigrahi, TollBit’s co-founder and CEO, told Engadget. “And right now, there are thousands of applications just taking these existing models off the shelves. What they need is fresh content. But right now, there’s no infrastructure — neither for them to buy it, nor for content-makers to sell it in a way that is seamless.”

Neither Joslin nor Panigrahi was particularly knowledgeable about the media industry. But both knew how online marketplaces and platforms operated: they were colleagues at Toast, a platform that lets restaurants manage billing and reservations. Panigrahi watched both the deals — and the lawsuits — pile up in the AI sector, then reached out to Joslin.

Their early conversations were about RAG, which stands for retrieval-augmented generation. With RAG, an AI model first looks up information from specific sources (like the scrapable portions of the internet) and uses that information to synthesize a response instead of simply relying on its training data. Services like ChatGPT don’t inherently know current home prices or the latest news; they fetch that data, typically by looking at websites. That’s why AI chatbots without access to fresh data are often stumped by queries about breaking news events — if they don’t scrape the latest data, they simply can’t keep up.
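For readers who want to see the shape of the pattern, here's a minimal, self-contained sketch of RAG in Python. The retriever is a crude word-overlap scorer and the "generation" step simply assembles the augmented prompt; a real system would use embedding-based search and send the prompt to a hosted LLM. Every document and name below is hypothetical:

```python
# Minimal RAG sketch: retrieve "fresh" documents a model's training data
# wouldn't contain, then fold them into the prompt before generation.
from collections import Counter

# Hypothetical fresh content, standing in for scraped or licensed pages.
DOCUMENTS = [
    "Median home price in Springfield rose to $412,000 in August.",
    "The city council approved a new transit line on Tuesday.",
    "Local festival tickets go on sale September 20.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of words the query and document share."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k best-matching documents (the 'retrieval' in RAG)."""
    return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the query with retrieved context (the 'augmented' in RAG).
    A production system would send this to an LLM for the 'generation' step."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is the median home price?"))
```

The key move is the one Panigrahi describes: fresh content is fetched at answer time and folded into the prompt rather than baked into the model's weights, which is exactly why publishers' live pages have recurring value to AI companies.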

“We thought that using content for RAG was something fundamentally different than using it for training,” said Panigrahi.

Olivia Joslin is TollBit's co-founder. (Image: TollBit)

By some estimations, RAG is the future of search engines. More and more, people are asking questions on the internet and expecting complete answers in return instead of a list of blue links. In just over a year, startups like Perplexity, backed by Jeff Bezos and NVIDIA among others, have burst onto the scene with ambitions of taking on Google. Even OpenAI has plans to someday let ChatGPT become your search engine. In response, Google has sprung into action — it now culls relevant information from search results and presents it as a coherent answer at the top of the results page, a feature it calls AI Overviews. (It doesn’t always work well, but it’s seemingly here to stay.)

The rise of RAG-based search engines has publishers shaking in their boots. After all, who would make money if AI reads the internet for us? After Google rolled out AI Overviews earlier this year, at least one report estimated that publishers would lose more than $2 billion in ad revenue because fewer people would have a reason to visit their websites. “AI companies need continuous access to high quality content and data too,” said Joslin, “but if you don’t figure out some economic model here, there will be no incentive for anyone to create content, and that’ll be the end of AI applications too.”

Instead of cutting one-off checks, TollBit’s model aims to compensate publishers on an ongoing basis. Hypothetically, if someone’s content was used in a thousand AI-generated answers, they would get paid a thousand times, at a price they set and can change on the fly.

Each time an AI company accesses fresh data from a publisher through TollBit, it pays a small fee set by the publisher, one that Panigrahi and Joslin think should be roughly equivalent to whatever a traditional page view would have made the publisher. The platform can also block AI companies that haven’t signed up from accessing publishers’ data.
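As an illustration of the mechanics being described (purely a sketch; TollBit hasn't published its implementation, and every name and number below is invented), the core bookkeeping of such a marketplace is simple: publishers post a price per fetch, registered AI companies are charged it on every access, and unregistered crawlers are turned away.

```python
# Hypothetical sketch of a per-access content marketplace, loosely modeled
# on the description above. Not TollBit's actual code or API.

class Marketplace:
    def __init__(self) -> None:
        self.prices: dict[str, float] = {}   # publisher -> fee per access (USD)
        self.registered: set[str] = set()    # AI companies that have signed up
        self.ledger: list[tuple[str, str, float]] = []  # (company, publisher, fee)

    def set_price(self, publisher: str, fee: float) -> None:
        """Publishers set a price per fetch and can change it on the fly."""
        self.prices[publisher] = fee

    def fetch(self, company: str, publisher: str) -> str:
        """Charge the per-access fee, or block companies that haven't signed up."""
        if company not in self.registered:
            raise PermissionError(f"{company} has not signed up; access blocked")
        fee = self.prices[publisher]
        self.ledger.append((company, publisher, fee))
        return f"<fresh content from {publisher}, billed ${fee:.2f}>"

market = Marketplace()
market.registered.add("example-ai-co")
market.set_price("example-news.com", 0.01)  # roughly one page view's ad value
print(market.fetch("example-ai-co", "example-news.com"))
```

Aggregated over thousands of answers, those per-fetch charges are what would turn the "paid a thousand times" hypothetical above into actual recurring revenue.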

The founders claim to have onboarded a hundred publishers and started pilots with three AI companies since TollBit launched in February. They refused to reveal which publishers or AI companies had signed on, citing confidentiality clauses, but did not deny speaking with OpenAI, Anthropic, Google and Meta. So far, they say, no money has changed hands between AI companies and publishers on their platform.

Toshit Panigrahi is TollBit's co-founder. (Image: TollBit)

Until that happens, their model is still a giant hypothetical — although one that investors have so far poured $7 million into. TollBit’s investors include Sunflower Capital, Lerer Hippeau, Operator Collective, AIX and Liquid 2 Ventures, and more investors are currently “pounding down their door,” Joslin claimed. In April, TollBit also brought on Campbell Brown, a former television anchor who spent the better part of a decade as Meta’s head of news partnerships, as a senior adviser.

In spite of some high-profile lawsuits, AI companies are still scraping the internet for free and largely getting away with it. Why would they have any incentive to actually pay publishers for this data? The founders offer three big reasons: more websites have taken steps to prevent their content from being scraped since generative AI went mainstream, which makes scraping the web harder and more expensive; no one wants to deal with ongoing copyright lawsuits; and, crucially, being able to easily pay for content on an as-needed basis lets AI companies tap into smaller, more niche publications, since it isn’t possible to strike individual licensing deals with every single website. Joslin also pointed out that multiple TollBit investors have also invested in AI companies that they worry might face litigation for using content without permission.

Getting AI companies to pay for content could provide a recurring revenue stream not just for large publishers but potentially for anyone who publishes anything online. Last month, Perplexity — which was accused of illegally scraping content from Forbes, Wired and Condé Nast — launched a Publishers’ Program under which it plans to share a cut of any revenue it earns with publishers whose content it uses to generate answers with AI. The success of the program, however, hinges on how much money Perplexity makes when it introduces ads in the app later this year. Like TollBit’s model, it’s another complete hypothetical.

“Our thesis with TollBit is that if you lose a page view today, you should be compensated for it immediately rather than a few years after when a tech company figures out its ads program,” said Panigrahi about Perplexity’s initiative.

Despite all the existing licensing deals and technical advances, AI-powered chatbots still make for terrible news sources. They still make up facts and confidently conjure up entire links to stories that don’t actually exist. But technology companies are now stuffing AI chatbots in every crevice they can, which means that many people will still get their news from one of these products in the not-so-distant future.

A more cynical take on TollBit’s premise is that the startup is effectively offering hush money to publishers whose work is more likely than not to be sausaged into misinformation. Its founders, naturally, don’t agree with the characterization. “We are careful about the AI partners we onboard,” Panigrahi said. “These companies are very mindful about the quality of input material and correctness of responses. We’re seeing that paying for content – even nominal amounts – creates incentive to respect the raw inputs into their systems instead of treating it as a free, replaceable commodity.”

OpenAI will now use content from Wired, Vogue and The New Yorker in ChatGPT’s responses

Condé Nast, the media conglomerate that owns publications like The New Yorker, Vogue and Wired, has announced a multi-year partnership with OpenAI to display content from Condé Nast titles in ChatGPT as well as SearchGPT, the company’s prototype AI-powered search engine. The partnership comes amid growing concerns over the unauthorized use of publishers’ content by AI companies. Last month, Condé Nast sent a cease-and-desist letter to AI search startup Perplexity, accusing it of plagiarism for using its content to generate answers.

“Over the last decade, news and digital media have faced steep challenges as many technology companies eroded publishers’ ability to monetize content, most recently with traditional search,” Condé Nast CEO Roger Lynch wrote to employees in a memo that was first reported by Semafor’s Max Tani. “Our partnership with OpenAI begins to make up for some of that revenue, allowing us to continue to protect and invest in our journalism and creative endeavors.” It's not clear how much money OpenAI will pay Condé Nast for the partnership. 

The move makes Condé Nast the latest in a growing line of publishers who have struck deals with OpenAI. These include News Corp, Vox, The Atlantic, TIME and Axel Springer among others. But not everyone is on board with the idea. Last year, the New York Times filed a lawsuit against OpenAI for using information from the publisher’s articles in ChatGPT’s responses.

Lynch has been vocal about these concerns. In January, he warned that “many” media companies could face financial ruin in the time it would take for litigation against AI companies to conclude, and called on Congress to take “immediate action” to clarify that publishers must be compensated by AI companies for both training and output if they use their content. Earlier this month, three senators introduced the COPIED Act, a bill that aims to protect journalists and artists from having their content scraped by AI companies without their permission.

Perplexity, which was recently accused by Forbes and Wired of stealing content, now plans to share a portion of potential advertising revenue with publishers who sign up for its newly launched Publishers’ Program.

Google brings the AI feature that told Americans to eat rocks to six more countries

Google is expanding AI Overviews, the feature that summarizes answers to complex questions from the web and presents them at the top of traditional search results, to six more countries — India, Japan, Mexico, Indonesia, Brazil and the United Kingdom — starting Thursday, with support for local languages as well as English.

That’s less than three months after AI Overviews launched in the United States and promptly told people to eat rocks and put glue on their pizzas. Bringing the feature to millions more people raises the question: How do you prevent another glue pizza fiasco in a foreign country?

“It’s a challenging space,” Hema Budaraju, senior director of product management for Search at Google, told Engadget in an interview. “Understanding quality at the scale of the web across all these languages is a hard problem, and integrating LLMs (large language models) is not easy to do. Using AI to better understand languages is pretty critical.”

To prevent a glue pizza situation in, say, Hindi or Japanese, Google said it has done language-specific testing of AI Overviews as well as red-teaming, a technique used by the tech industry to stress-test how systems might behave under attack from bad actors. “We are focused on addressing potential issues and we are committed to listening and acting quickly,” Budaraju said. In May, Google put additional guardrails on AI Overviews after its outlandish responses, such as limiting the inclusion of satire and humor content and restricting the types of queries that triggered the feature to begin with.

In addition to expanding the feature to more countries, Google is also making one more big change to AI Overviews: it will now prominently display links to sources on the right-hand side of each AI-generated answer, making it easier for people to click through to the actual website where the answer came from. And for a small percentage of users, it will also add links directly within the text of AI Overviews. If this move is rolled out more broadly, it could allay concerns from publishers about losing traffic to AI that reads the internet for people and reduces the need to click through to actual web pages.

"This experiment has shown early positive results and we are able to drive more traffic with links directly in the text,” Budaraju said.

Users who opt in to Search Labs, the company’s platform for trying out upcoming features ahead of their general release, also get to play with a couple of additional features — the ability to “save” a specific AI Overview for future reference, as well as an option to simplify the language of an AI-generated answer, something that Google previewed earlier this year.

Update, August 15 2024, 12:50 PM ET: This story has been updated to clarify that links within the text of AI Overviews are available for a small percentage of users, not just those signed up for Search Labs.

Here are all the AI features coming to the Pixel 9 phones

Google’s Pixel 9 lineup is powered by cutting-edge hardware like the Tensor G4 processor and tons of RAM that should help keep your phone feeling fast and fresh for years to come. But all that hardware is also designed to power brand new AI experiences.

“Android is reimagining your phone with Gemini,” wrote Sameer Samat, Google’s president of the Android Ecosystem, in a blog post published on Tuesday. “With Gemini deeply integrated into Android, we’re rebuilding the operating system with AI at the core. And redefining what phones can do.”

Here are the big new AI features coming with the new Pixel devices.

Gemini, Google’s AI-powered chatbot, will be the default assistant on the new Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL and Pixel 9 Pro Fold phones. To access it, simply hold down your phone’s power button and start talking or typing in your question.

A big new change is that you can now bring up Gemini on top of any app you’re using to ask questions about what’s on your screen, like finding specific information about a YouTube video you’re watching, for instance. You’ll also be able to generate images directly from this overlay and drag and drop them into the underlying app, as well as upload a photo into the overlay and ask Gemini questions about it.

Gemini overlays (Image: Google)

If you buy the pricier Pixel 9 Pro (starting at $999), Google’s bundling in one free year of the Google One AI Premium Plan, which typically runs $19.99 a month and includes 2TB of cloud storage plus access to Gemini Advanced. The latter lets you use Gemini directly in Google products like Gmail and Docs to help you summarize text and conversations.

Crucially, Gemini Advanced also includes access to Gemini Live, which Google describes as a new “conversational experience” that makes speaking with Gemini more intuitive (I’m not the only one having a hard time keeping track of all the things Google brands “Gemini,” don’t worry). You can use Gemini Live to have natural conversations about anything that’s on your mind; Google suggests it can help with complex questions and even job interview prep. You can also choose between a variety of voices that sound stunningly lifelike, according to demos that Google showed Engadget earlier this month.

Gemini Live (Image: Google)

OpenAI recently released a similar feature, Advanced Voice Mode, to paying ChatGPT customers: a voice assistant that can talk, sing, laugh and allegedly understand emotion. When asked if getting Gemini Live to sound as human-like as possible was one of Google’s goals, Sissie Hsiao, the company’s vice president and general manager of Gemini Experiences, told Engadget that Google was “not here to flex the technology. We’re here to build a super helpful assistant.”

Google is using AI to make both taking and editing pictures dramatically better on the Pixel 9 phones, something the company has focused on for years. A new feature called Add Me, which will be released in preview with the new devices, will let you take a group photo, then take a separate picture of the photographer and seamlessly add them to the main shot — handy if you don’t have anyone around to take a picture of your entire group.

Meanwhile, Magic Editor, the built-in, AI-powered editing tool on Android, can now suggest the best crops and even expand existing images by filling in details with generative AI to get more of the scene. Finally, a new “reimagine” feature will let you add elements like fall leaves or make grass greener — punching up your images, yes, but blurring the line between which of your memories are real and which are not.

You can already search anything that you see on your phone by simply circling it, but now, AI will intelligently clip whatever you’ve circled and let you instantly share it in a text message or an email. Handy.

Circle to Search with Share (Image: Google)

Pixel Screenshots (Image: Google)

If you can't figure out how to sort through the tons of pictures of receipts, tickets and screenshots from social media littering your phone's photo gallery, use AI to help. A brand new app called Pixel Screenshots, available on the new Pixel devices at launch, will go through your photo library (once you give it permission), pick out screenshots, and then identify what's within each picture. You can also snap pictures of real-world signs (a poster for a music festival you want to attend, say) and directly ask the app relevant questions, like when tickets for the festival go on sale.

A new feature called Call Notes will automatically save a private summary of each phone call, so you can refer back to a transcript to quickly look up important information like an appointment time, address or phone number later. Google notes that the feature runs fully on-device, which means that nothing is sent to Google's servers for processing. And everyone on the call will be notified if you've activated Call Notes.

Pixel Studio (Image: Google)

We've been able to use AI to generate images for a long time now, but Google is finally building the feature right into Android with Pixel Studio, a dedicated new image-generation app for Pixel 9 devices. The app runs on both an on-device model powered by the new Tensor G4 processor and Google's Imagen 3 model in the cloud. You can share any images you create in the app directly through messaging or email.

A similar feature, Apple's Image Playground, is coming to newer iPhones with iOS 18 in September.

Google will use AI to create custom weather reports for your specific location right at the top of a new Weather app so you "don't have to scroll through a bunch of numbers to get a sense of the day's weather," according to the company's blog post.
