Apple Intelligence at WWDC 2024: The Future of Personalized and Secure AI

Apple Intelligence is a revolutionary step in making your devices smarter and more in tune with your daily life. Imagine having an assistant that not only responds to your commands but also understands your needs and habits. Unlike traditional AI, which relies on generic data, Apple Intelligence integrates deeply with your personal context. It knows your routine, preferences, and relationships, making interactions more meaningful and tailored specifically to you.

What is Apple Intelligence?

Apple Intelligence redefines how you interact with technology, making it feel like a natural extension of yourself. Whether managing your schedule, writing emails, or capturing moments, the intelligence within your devices anticipates your needs and offers solutions proactively. Moreover, it’s built with privacy at its core, ensuring your personal data remains secure and confidential. In an era where privacy concerns are paramount, Apple Intelligence sets a new standard by processing most tasks directly on your device and using Private Cloud Compute only when necessary.

Apple Intelligence is designed to simplify tasks, anticipate your needs, and provide relevant suggestions, all while keeping your data secure. This transformation goes beyond adding new features; it’s about enhancing your user experience. For busy professionals, students, and families, having a device that understands and adapts to your unique context can streamline daily tasks, reduce stress, and enhance productivity. By prioritizing user privacy, Apple Intelligence also addresses growing concerns about data security, making it a trustworthy assistant in the digital age.

Is Apple Intelligence the First True GenAI Ecosystem That’s Secure?

Absolutely. Apple has set the bar high with the first truly secure generative AI ecosystem. Apple Intelligence ensures that your data remains private by performing most of the processing on your device. For more complex tasks that require additional computing power, it uses Private Cloud Compute. This innovative approach allows Apple to harness the full potential of generative AI while ensuring that your personal data is never exposed or compromised.
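
Conceptually, that on-device-first approach boils down to a routing decision for every request. The Swift sketch below is purely illustrative (the type names and the heuristic are my assumptions, not Apple’s actual frameworks), but it captures the idea Apple described: keep simple requests local, and escalate only the heavier ones to Private Cloud Compute.

```swift
import Foundation

// Purely illustrative: none of these types exist in Apple's SDKs. They only
// model the idea that simple requests stay on the device while heavier ones
// are escalated to a stateless Private Cloud Compute endpoint.

enum ExecutionTarget {
    case onDevice       // handled entirely by the local model
    case privateCloud   // escalated to server-side Apple silicon
}

struct IntelligenceRequest {
    let prompt: String
    let estimatedTokens: Int
    let needsLargeModel: Bool
}

func route(_ request: IntelligenceRequest) -> ExecutionTarget {
    // Hypothetical heuristic: small, simple requests never leave the device.
    if !request.needsLargeModel && request.estimatedTokens < 512 {
        return .onDevice
    }
    // Heavier requests go to Private Cloud Compute, sending only the data
    // needed for this one request.
    return .privateCloud
}

let summary = IntelligenceRequest(prompt: "Summarize this 40-page report",
                                  estimatedTokens: 30_000,
                                  needsLargeModel: true)
print(route(summary))   // prints "privateCloud"
```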

Apple’s commitment to privacy extends to its server operations as well. All server-based models run on dedicated Apple silicon servers, and the code is open to independent inspection. This means you can trust that your information is handled with the utmost care and transparency. By combining cutting-edge AI capabilities with rigorous privacy standards, Apple Intelligence delivers powerful, personalized intelligence without sacrificing security.

The importance of this secure ecosystem cannot be overstated. In a time when data breaches and privacy concerns are rampant, Apple Intelligence offers peace of mind. Users can confidently rely on their devices to handle sensitive information and perform complex tasks without fear of data misuse. This makes Apple Intelligence more than a technological advancement; it plays a vital role in fostering user trust and safety in digital environments.

Elements and Features of Apple Intelligence

Apple Intelligence is built into your iPhone, iPad, and Mac, bringing a suite of powerful features that make everyday tasks easier and more enjoyable. Let’s dive into what this means for you.

1. Writing Tools

Apple Intelligence includes systemwide writing tools that act like a personal editor wherever you write. Whether you’re drafting an email, jotting down notes, or crafting a blog post, these tools enhance your writing by providing real-time suggestions and corrections. Rewrite allows you to adjust the tone of your text to suit different audiences, whether you need to be more formal for a cover letter or add some humor to a party invitation. Proofread checks your grammar, word choice, and sentence structure, offering edits and explanations to help you improve.

Summarize is another fantastic feature, perfect for condensing long articles or documents into digestible key points or paragraphs. Imagine being able to take a lengthy research paper and quickly create a concise summary for easier review. These tools are integrated into apps like Mail, Notes, and Pages, as well as third-party apps, making them accessible whenever and wherever you need them. With Apple Intelligence, your writing becomes clearer, more precise, and tailored to any situation.

These tools are particularly useful because they enhance communication and productivity. For students, professionals, and casual users alike, having an intelligent assistant that helps refine and improve writing can save time and reduce errors. Whether you’re preparing a report, sending an important email, or simply jotting down ideas, these features ensure that your written communication is effective and polished.

2. Priority Messages and Notifications

Managing your communications has never been easier with Apple Intelligence’s Priority Messages and Notifications. This feature ensures that you see the most important emails and notifications first. For example, a same-day dinner invitation or a boarding pass for an upcoming flight will appear at the top of your inbox, so you never miss critical information. Instead of previewing the first few lines of each email, you can see summaries that highlight the key points, making it easier to manage your inbox efficiently.

The new Reduce Interruptions Focus mode is a game-changer for those who need to stay focused on important tasks. It filters notifications to show only the ones that require immediate attention, such as a text about an early pickup from daycare. This means fewer distractions and more time to concentrate on what matters most. By prioritizing urgent messages and minimizing unnecessary interruptions, Apple Intelligence helps you stay organized and in control of your communications.

These features are crucial for anyone who juggles multiple responsibilities. By keeping you informed of the most important messages and minimizing distractions, Apple Intelligence helps improve productivity and reduce stress. Whether you’re managing work emails, family schedules, or social invitations, having a streamlined communication system ensures that you stay on top of what truly matters without getting overwhelmed by the noise.

3. Audio Recording and Transcription

In the Notes and Phone apps, Apple Intelligence introduces advanced audio recording and transcription capabilities. This feature is incredibly useful for capturing detailed notes during calls and meetings. When you record a conversation, Apple Intelligence transcribes the audio in real-time, allowing you to read and review the content immediately. This makes it easy to keep track of important discussions and decisions without missing any details.

Once the recording is finished, Apple Intelligence generates a summary of the key points, making it simple to recall and share the most important information. For instance, after a brainstorming session, you can quickly review the main ideas and action items without having to listen to the entire recording. This feature saves time and enhances productivity by ensuring that you have accurate and organized notes at your fingertips.

The importance of these features lies in their ability to enhance efficiency and accuracy. For students, professionals, and anyone who attends meetings or interviews, having an automatic transcription and summary tool can significantly reduce the time spent on note-taking and ensure that no critical information is missed. This allows you to focus more on the conversation and less on capturing every detail manually.

4. Image Playground and Genmoji

Image Playground and Genmoji are two exciting features powered by Apple Intelligence that make communication and self-expression more fun and creative. Image Playground lets you create engaging images in seconds, choosing from styles like Animation, Illustration, or Sketch. It’s built right into apps like Messages, so you can easily spice up your conversations with personalized visuals. Whether you’re creating an image of your family hiking or a fun meme for a group chat, the possibilities are endless.

Genmoji takes emojis to the next level by allowing you to create unique, personalized emojis based on descriptions or photos. For example, typing “smiley relaxing wearing cucumbers” generates a Genmoji that you can use in your messages or as a sticker. You can even create Genmoji of friends and family members based on their photos, adding a personal touch to your digital interactions. These features make it easier and more enjoyable to express yourself visually, whether you’re chatting with friends or creating content for social media.

These features are especially significant in today’s online interactions. Visual elements, such as images and emojis, are essential for expressing ourselves. With tools to create personalized and unique visuals, Apple Intelligence enriches our connections with others. This approach makes conversations more engaging and allows for deeper, more personal expression, ultimately fostering stronger relationships and more enjoyable interactions.

5. Enhanced Photo and Video Search

Apple Intelligence makes finding and managing your photos and videos a breeze with enhanced search and editing capabilities. Using natural language processing, you can search for specific photos and videos with simple descriptions. For example, searching for “Maya skateboarding in a tie-dye shirt” quickly pulls up the exact photo from your library. This feature saves time and makes it easier to relive your favorite moments.
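
One common way this kind of natural-language media search is implemented in general (not necessarily how Apple does it) is to embed the text query and every photo into a shared vector space and rank the photos by cosine similarity. A minimal Swift sketch, assuming the embeddings already exist:

```swift
import Foundation

// Generic sketch of embedding-based media search, not Apple's implementation.
// Assumes some model has already turned the text query and every photo into
// vectors in the same embedding space.

struct Photo {
    let filename: String
    let embedding: [Double]   // produced offline by an image encoder (assumed)
}

func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot  = zip(a, b).reduce(0.0) { $0 + $1.0 * $1.1 }
    let magA = sqrt(a.reduce(0.0) { $0 + $1 * $1 })
    let magB = sqrt(b.reduce(0.0) { $0 + $1 * $1 })
    return dot / (magA * magB)
}

// Rank the library against an already-embedded text query and keep the top hits.
func search(queryEmbedding: [Double], in library: [Photo], topK: Int = 3) -> [Photo] {
    let ranked = library.sorted {
        cosineSimilarity($0.embedding, queryEmbedding) >
        cosineSimilarity($1.embedding, queryEmbedding)
    }
    return Array(ranked.prefix(topK))
}

// "Maya skateboarding in a tie-dye shirt" -> embed the text, then:
// let results = search(queryEmbedding: queryVector, in: photoLibrary)
```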

The Clean Up tool is another handy feature that helps you enhance your photos by removing distracting objects from the background without altering the main subject. This is perfect for tidying up those almost-perfect shots. Additionally, the new Memories feature allows you to create movies from your photos and videos by simply typing a description. Apple Intelligence selects the best images, crafts a narrative, and even suggests music from Apple Music to match the mood. This makes it effortless to create beautiful, shareable memories from your digital collection.

These features are invaluable for everyone who captures and stores digital memories. With the sheer volume of photos and videos we accumulate, having tools that simplify searching and editing can save significant time and effort. Whether you’re organizing a photo album, creating a digital scrapbook, or just looking for that perfect picture from last summer, Apple Intelligence makes it easier to manage and enjoy your visual content.

6. Siri Enhancements

Powered by Apple Intelligence, Siri is now more capable and contextually aware than ever before. This transformation means your assistant can understand and act on a wider range of requests, making it even more useful in your daily life. Whether you need to schedule an email, switch from Light to Dark Mode, or add a new address to a contact card, Siri can handle these tasks with ease. It can follow along if you stumble over words and maintain context from one request to the next, providing a more seamless interaction.

One of the most exciting improvements is Siri’s ability to perform hundreds of new actions across Apple and third-party apps. For instance, you can ask Siri to bring up an article from your Reading List, send photos from a recent event to a friend, or play a podcast recommended by a contact. The new design includes an elegant glowing light around the edge of the screen when Siri is active, making it visually engaging and easier to know when Siri is listening.

Siri now pulls from your messages, emails, and other apps to provide more personalized assistance. If a friend texts you their new address, you can simply say, “Add this address to his contact card,” and Siri will update the contact information without needing additional input. Additionally, Siri can understand and follow up on previous requests, making it easier to carry on more natural, conversational interactions. Imagine planning a day trip; you could ask about the weather at your destination, book a reservation at a local restaurant, and get directions—all in a seamless flow of conversation.

Another significant enhancement is Siri’s deep integration with personal context. For example, you can ask, “What’s the weather like near Muir Woods?” and then follow up with, “Create an event for a hike there tomorrow at 9 a.m.” Siri maintains the context of “there” to schedule the hike without needing you to repeat the location. The assistant also supports switching between text and voice input seamlessly, which is perfect for situations where speaking aloud isn’t convenient.

These improvements are crucial for making technology more accessible and user-friendly. By expanding Siri’s capabilities and making it more contextually aware, Apple Intelligence ensures that users can rely on the assistant for a far wider range of tasks, enhancing productivity and convenience in the process. Whether you’re coordinating a busy schedule, managing home automation, or simply trying to stay organized, the new capabilities make your digital assistant more indispensable than ever.

Conclusion

Integrating deeply with personal context, Apple Intelligence redefines how we interact with our devices. It understands your routines, preferences, and relationships, offering solutions that feel tailor-made just for you. With most tasks processed on your device, it keeps your data secure while boosting productivity through advanced writing tools, priority notifications, and real-time transcription. Fun features like Image Playground and Genmoji bring a personal touch to visual communication, while enhanced photo and video search make managing digital memories a breeze. Siri’s enhanced abilities enable smooth, conversational interactions, making everyday tech use more intuitive and efficient, all while upholding strong privacy standards. I hope this helps explain what Apple Intelligence is, what its features are, and how you can benefit from them.

Why Are Most AI Voices Female? Exploring the Reasons Behind Female AI Voice Dominance

Siri, Alexa, Cortana, Google Assistant, ChatGPT’s GPT-4o voice – it’s no coincidence that they all sound female (and sometimes even carry female names). In fact, Spike Jonze literally titled his dystopian AI film “Her” after its AI assistant, Samantha. Voiced by Scarlett Johansson, Samantha gave the movie a premise that sounded absurd 11 years ago but feels all too realistic now that OpenAI has announced its voice-based AI model GPT-4o (omni). The announcement was followed by an uproar from Johansson, who claimed the AI sounded a lot like her even though she hadn’t given OpenAI permission to use her voice. Johansson said she was approached by OpenAI CEO Sam Altman to be the voice of GPT-4o, but declined. Just days before GPT-4o was announced, Altman asked her to reconsider, and she declined again. GPT-4o was announced exactly 10 days ago, on the 13th of May, and Johansson distinctly recognized the voice as one that sounded quite similar to her own. While many say the voices don’t sound alike, it’s undeniable that OpenAI was aiming for something that sounded like Samantha from Her rather than a more feminine yet mechanical voice like Siri or the Google Assistant.

All this brings a few questions to mind: Why do most AI voice assistants have female voices? How do humans perceive these voices? Why don’t we see as many male AI voice assistants (and does mansplaining have a role to play here)? And finally, do female voice assistants actually help or harm real women and gender equality in the long run? (Hint: a little bit of both, but the latter seems more daunting.)

AI Voice Assistants: A History

The history of AI voice assistants extends well before 2011, when Siri was first introduced to the world… although many of those earlier instances lived in fiction and pop culture. Siri debuted as the first voice assistant built on AI, but you can’t really credit Siri with being the first automated female voice, because for years IVR (Interactive Voice Response) systems dominated phone conversations. Remember the automated voices when you called a company’s service center – your bank, cable company, or internet provider? More often than not, those voices were female, paving the way for Siri in 2011. In fact, the trend dates back to 1878, when Emma Nutt became the first woman telephone operator, ushering in a female-dominated profession. Women operators then naturally set the stage for female-voiced IVR calls.

However, while IVR calls were predominantly a set of pre-recorded responses, Siri didn’t blurt out template-like pre-recorded sentences. She was trained on the voice of a real woman and conversed with you (at least at the time) like an actual human. The choice of a female voice for Siri was influenced by user studies and cultural factors, aiming to make the AI seem friendly and approachable. This decision was not an isolated case; it marked the beginning of a broader trend in the tech industry. In pop culture, however, the inverse was often true. Before Siri arrived in 2011, JARVIS took the stage in the 2008 movie Iron Man as a male voice assistant. Although somewhat robotic, JARVIS could do pretty much anything – control every micro detail of Tony Stark’s house, suit, and life… and potentially even go rogue. That aside, studies show something very interesting about how humans perceive female voices.

JARVIS helping control Iron Man’s supersuit

Historically, Robots are Male, and Voice Assistants are Female

The predominance of female voices in AI systems is not a random occurrence. Several factors contribute to this trend:

  • User Preference: Research indicates that many users find female voices more soothing and pleasant. This preference often drives the design choices of AI developers who seek to create a comfortable user experience.
  • The Emotional Connection: Female voices are traditionally associated with helpful and nurturing roles. This aligns well with the purpose of many AI systems, which are designed to assist and support users in various tasks.
  • Market Research: Companies often rely on market research to determine the most effective ways to engage users. Female voices have consistently tested well in these studies, leading to their widespread adoption.
  • Cultural Influences: There are cultural and social influences that shape how voices are perceived. For instance, in many cultures, female voices are stereotypically associated with service roles (e.g., receptionists, customer service), which can influence design decisions.

These are but theories and studies, and the flip side is equally interesting. Physical robots are often built with male physiques and proportions, given that their main job of lifting objects and moving cargo around is traditionally done by men too. Pop culture plays a massive role again, with Transformers being predominantly male, as well as the Terminator, the T-1000, Ultron, C-3PO, RoboCop – the list is endless.

What Do Studies Say on Female vs. Male AI Voices?

Numerous studies have analyzed the impact of gender in AI voices, revealing a variety of insights that help us understand user preferences and perceptions. Here’s what these studies reveal:

  • Likability: Research indicates that users generally find female voices more likable. This can enhance the effectiveness of AI in customer service and support roles, where user comfort and trust are paramount.
  • Comfort and Engagement: Female voices are often perceived as more comforting and engaging, which can improve user satisfaction and interaction quality. This is particularly important in applications like mental health support, where a soothing tone can make a significant difference.
  • Perceived Authority: Male voices are sometimes perceived as more authoritative, which can be advantageous in contexts where a strong, commanding presence is needed, such as navigation systems or emergency alerts. However, this perception can vary widely based on individual and cultural differences.
  • Task Appropriateness: The suitability of a voice can depend on the specific task or context. For example, users might prefer female voices for personal assistants who manage everyday tasks, while male voices might be preferred for financial or legal advice due to perceived authority.
  • Cognitive Load: Some research suggests that the perceived ease of understanding and clarity of female voices can reduce cognitive load, making interactions with AI less mentally taxing and more intuitive for users.
  • The Mansplaining Problem: The concept of “mansplaining” — when a man explains something to someone, typically a woman, in a condescending or patronizing manner — can indirectly influence the preference for female AI voices. Male voices might be perceived as more authoritative, which can sometimes come across as condescending. A male AI voice disagreeing with you or telling you something you already know can feel much more unpleasant than a female voice doing the same thing.

The 2013 movie Her had such a major impact on society and culture that Hong Kong-based Ricky Ma even built a humanoid version of Scarlett Johansson

Do Female AI Voices Help Women Be Taken More Seriously in the Future?

Twenty years back, it was virtually impossible to predict how addictive and detrimental social media would turn out to be for our health. We’re now at a similar point in the road with AI, and it’s time to think about its implications. Sure, the obvious discussion is about how AI could replace us, flood the airwaves with misinformation, and make humans dumb and ineffective… but before that, let’s focus on the social impact of these voices and what they do for us and the generations to come. There are a few positive impacts of this trend:

  • Normalization of Female Authority: Regular exposure to female voices in authoritative and knowledgeable roles can help normalize the idea of women in leadership positions. This can contribute to greater acceptance of women in such roles across various sectors.
  • Shifting Perceptions: Hearing female voices associated with expertise and assistance can subtly shift societal perceptions, challenging stereotypes and reducing gender biases.
  • Role Models: AI systems with confident and competent female voices can serve as virtual role models, demonstrating that these traits are not exclusive to men and can be embodied by women as well.

However, the impact of this trend depends on the quality and neutrality of the AI’s responses, which are doubtful at best. If female-voiced AI systems consistently deliver accurate and helpful information, they can enhance the credibility of women in technology and authoritative roles… but what about the opposite?

Female AI Voices Running on Male-biased Databases

The obvious problem, however, is that these AI assistants are still, more often than not, coded by men, who may bring their own subtle (or obvious) biases into how these AI bots operate. Moreover, a vast share of the text corpora fed into these LLMs (Large Language Models) was created by men. Historically, culture, literature, politics, and science have all been dominated by men for centuries, with women only very recently playing a larger and more visible role in contributing to these fields. All of this has a distinct and noticeable effect on how the AI thinks and operates. Having a female voice doesn’t change that – in fact, it can have an unintended negative effect.

There’s really no problem when the AI is working with hard facts… but it becomes an issue when the AI needs to share opinions. Biases can undermine an AI’s credibility, misrepresent the women it’s supposed to speak for, promote harmful stereotypes, and reinforce existing prejudices. We’re already noticing a massive spike in the usage of words like ‘delve’ and ‘testament’ because of how often AI LLMs use them – think about all the influence we CAN’T see, and how it may affect life and society a decade from now.

In 2014, Alex Garland’s Ex Machina showed how a lifelike female robot passed the Turing Test and won the heart of a young engineer

The Future of AI Voice Assistants

I’m no coder/engineer, but here’s where AI voice assistants should be headed and what steps should be taken:

  • Diverse Training Data: Ensuring that training data is diverse and inclusive can help mitigate biases. This involves sourcing data from a wide range of contexts and ensuring a balanced representation of different genders and perspectives.
  • Bias Detection and Mitigation: Implementing robust mechanisms for detecting and mitigating bias in AI systems is crucial. This includes using algorithms designed to identify and correct biases in training data and outputs (a toy illustration follows this list).
  • Inclusive Design: Involving diverse teams in the design and development of AI systems can help ensure that different perspectives are considered, leading to more balanced and fair AI systems.
  • Continuous Monitoring: AI systems should be continuously monitored and updated to address any emerging biases. This requires ongoing evaluation and refinement of both the training data and the AI algorithms.
  • User Feedback: Incorporating user feedback can help identify biases and areas for improvement. Users can provide valuable insights into how the AI is perceived and where it might be falling short in terms of fairness and inclusivity.
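
As a toy illustration of the bias-detection point above (and nothing more), here is a Swift sketch that counts gendered terms in a corpus sample as a crude proxy for representation imbalance. The word lists and the simple ratio are assumptions made purely for illustration; real bias audits use far more sophisticated methods.

```swift
import Foundation

// Toy illustration only: count gendered terms in a corpus sample as a crude
// proxy for representation imbalance. Real bias audits use far richer methods
// (embedding association tests, counterfactual evaluation, human review, etc.).

let femaleTerms: Set<String> = ["she", "her", "hers", "woman", "women"]
let maleTerms: Set<String>   = ["he", "him", "his", "man", "men"]

func genderTermRatio(in corpus: String) -> Double? {
    let words = corpus.lowercased()
        .components(separatedBy: CharacterSet.alphanumerics.inverted)
        .filter { !$0.isEmpty }

    let femaleCount = words.filter { femaleTerms.contains($0) }.count
    let maleCount   = words.filter { maleTerms.contains($0) }.count
    guard maleCount > 0 else { return nil }
    return Double(femaleCount) / Double(maleCount)
}

let sample = "He said the engineer fixed it. She said the nurse helped him."
if let ratio = genderTermRatio(in: sample) {
    // A ratio far from 1.0 simply flags the sample for closer human review.
    print("female/male term ratio:", ratio)
}
```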

AI assistants aren’t going anywhere. There was a time not too long ago when it seemed that AI assistants were dead: at the end of 2022, reports claimed that Amazon’s Alexa division was on track to lose around $10 billion that year and looked like a failed endeavor – and that same month, ChatGPT made its debut. Cut to today, and AI assistants have suddenly become mainstream again, to the point where almost every company and startup is looking for ways to integrate AI into its products and services. Siri and GPT-4o are just the beginning of this new female-voice-led frontier… it’s important we understand the pitfalls and avoid them before it’s too late. After all, if you remember the movie Terminator Salvation, Skynet was depicted as female too…

Apple needs to solve its overheating problem before it can introduce Siri AI

You may read the title of this article and think to yourself, “wait a second, isn’t Siri already an AI?” Well, yes and no. Back in the early 2010s, “AI” was a popular buzzword among big tech companies, who described their virtual assistant services as AI companions – which was technically true, due to their reliance on natural language processing to interpret voice commands and output rote responses.

When tech corps talk about AI today, they’re primarily referring to chatbots using generative AI models like ChatGPT, which are vastly more advanced – and costly to operate. These have far more agency to “guess” correct information – even if you only provide limited input – thanks to sweeping advancements in machine learning. If you’re used to talking to Siri, Alexa, Cortana, or Google Assistant, you probably know there’s only so much these assistants can do… without a reasonable amount of extra programming to make them speak to far more advanced AI software like ChatGPT.

Throughout 2023, generative AI has dominated the entire conversation about artificial intelligence, and for that reason, a large chunk of software has been retrofitted to include some form of generative AI to help users navigate faster. Now, according to a Bloomberg report by Mark Gurman, Apple is racing to implement a similar model in Siri on iOS devices as soon as next year, with the release of iOS 18. That’s exciting on paper, but it feels like Apple is putting the cart before the horse: chasing flashy new features while letting its most loyal users deal with quality control issues.

iPhone 15 Pro models are still overheating

You can’t really escape the fact that the iPhone 15 Pro and Pro Max overheating issue happened. Period. More importantly, it’s still happening. The iOS 17.1 update that Apple promised would fix the overheating issue – which caused everything from periods of system instability to OLED burn-in – isn’t even fully released yet, as of the writing of this article. What it does include, however, is a large number of fixes that should have launched with the $999+ iPhone 15 Pro and $1,199+ iPhone 15 Pro Max to begin with, nearly one month ago.

On one hand, this whole situation feels like it was caused by a simple quality control error. The fact it’s evidently simple enough to fix over a couple of smaller software updates and one larger update, one month on, still raises questions on an organizational level. But in any case, adding a generative AI layer on the OS level is likely to make any existing issues worse, for reasons I’ll explain later on.

Overheating seems to be a software issue

As I previously mentioned, the overheating issue in the iPhone 15 Pro and Pro Max seems to come from an interaction between iOS 17 and the iPhone 15 Pro hardware itself. Without delving deeper into the inner workings of the iPhone 15 Pro/Max or the massive interlocking systems of code powering it, we can only speculate about what’s going on under the hood.

But again, purely from a software standpoint, Apple won’t easily escape the potential quality control issues that could come up by making its existing code exponentially more complex and demanding with advanced AI features. That’s consistent with why Apple is introducing more fixes than features with iOS 17.1 – it sorely needs to, in order to satisfy disappointed iPhone 15 Pro owners – but the fact that it’s apparently dropping “$1 billion per year” on integrating AI at this exact moment feels like over-eagerness to catch up with a trend it doesn’t yet have a handle on.

Running generative AI models locally is system-intensive

A sizeable number of programs that use generative AI do so by outsourcing their AI processing to external providers (like ChatGPT) via API calls, or simply by telling a less intelligent program to manually send a command to an AI chatbot through a web interface. This is because the amount of computational power it takes to run an LLM locally is rather high. Some AI processes can work well locally, even on phones, thanks to specialized chips that are optimized for those purposes. For instance, the Google Pixel 8 is built to run a wide range of AI-powered services, like Zoom Enhance, without using any external API.

On an indirectly related note, I can use my NVIDIA RTX 4070 Ti to double the framerates in video games with DLSS 3 and Frame Generation, both of which use an AI model to (locally) enhance the image of my games before they reach my display. That’s the sort of thing you can generally do with specialized chips, without pushing your CPU or other internals beyond their limitations.

Running an LLM locally, which the current iPhone generation isn’t equipped to do, is a whole different thing. And yet, it’s the only conceivable improvement to Siri that would make sense when Apple talks about “upgrading” Siri with AI features. As an Apple user: yes, I want the functionality of generative AI in Siri, but I’d rather not have to worry about it tanking my system performance or having to rely on an always-on internet connection for it to work.
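
To put that computational demand into perspective, here is a rough back-of-the-envelope sketch in Swift of the memory an LLM needs just to hold its weights. The parameter count, precisions, and phone RAM figure are illustrative assumptions, and a real deployment also needs room for the KV cache, activations, and the OS itself.

```swift
import Foundation

// Back-of-the-envelope estimate of the memory an LLM needs just to hold its
// weights. All figures are rough, illustrative assumptions.

func weightMemoryGiB(parameters: Double, bitsPerWeight: Double) -> Double {
    let bytes = parameters * bitsPerWeight / 8.0
    return bytes / 1_073_741_824.0   // bytes to GiB
}

let sevenBillion = 7e9   // a "small" modern LLM by today's standards

print(weightMemoryGiB(parameters: sevenBillion, bitsPerWeight: 16))  // ~13 GiB at 16-bit precision
print(weightMemoryGiB(parameters: sevenBillion, bitsPerWeight: 4))   // ~3.3 GiB heavily quantized

// Even the heavily quantized case is a big ask on a phone with roughly 6-8 GB
// of RAM, before counting the KV cache, activations, iOS itself, and other apps.
```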

Siri likely won’t win the generative AI race

An LLM-based Siri would be behind the times, even if it came out in 2024 as a local function within the next generation of iPhones. That’s already highly unlikely for the reasons I described above. And since we already have so many powerful web-based LLM chatbots like ChatGPT, Google Bard, and Microsoft Bing, it’s not like the addition of a Siri-based AI chatbot would provide a ton of additional value – except to diehard Apple users who wouldn’t ever dip outside of Apple’s ecosystem anyhow.

At this point in the race, it feels like Apple is chasing the hype train rather than defining it. Once again, it’s the cart placed before the horse.

Improved Siri AI is still coming, inevitably

Everyone is adding some form of generative AI bot to their software, so why not also do the same thing to iOS 18? It would add sorely-needed functionality to Siri, making it a true AI assistant with the ability to – for instance – fully manage an iPhone or iPad’s calendar. It could also help iPhone users learn how to use the system more efficiently, and take care of other menial tasks. If Apple is putting as much effort and capital into the project as is evident by the Bloomberg report, it’s definitely coming at some point.

Now, with all that said, I’m not knocking a hypothetical generative AI-based Siri upgrade – if we’re talking solely about utility. If it works well, it will be a greatly welcomed addition to the iOS 18 feature set. However, it’s still questionable how well it will work. For now, I’ll be more impressed when Apple fixes the iPhone 15 Pro’s overheating issues.

Integrate ChatGPT into Siri to make your Apple voice assistant 100x smarter

We’ve all been there: you ask Siri a question, and it responds with the ever-frustrating “Sorry I didn’t understand that”. It could be an accent or dialect problem, the fact that Siri isn’t trained on the vast volume of data that Google’s AI is trained on, or just that Apple absolutely dropped the ball on Siri. Apple launched the voice AI as an app almost 13 years ago, although Siri today still feels noticeably dumb and unhelpful even after more than a decade. Google’s voice AI seems to overwhelmingly be the most popular choice nowadays, although there’s a new kid on the block that’s absolutely eating Google’s lunch, at least in the search department.

Unveiled less than a year ago, ChatGPT from OpenAI took the world by storm for its incredible natural language processing capabilities, hitting a million users in just 5 days, and 100 million users in just two months (that’s faster than the growth seen by social media giants like Facebook, Google, and even Snapchat). ChatGPT’s intelligent and human-like responses make it the perfect AI chatbot, especially given that it really understands natural sentences much better than most other AI tools, and it’s most likely to respond with a helpful answer than an apology. Developer Mate Marschalko saw this as a brilliant opportunity to integrate ChatGPT’s intelligence with Siri, turning it into a much more helpful voice AI. With a little bit of hackery (which just took him about an hour), Marschalko combined Siri’s voice features with ChatGPT’s NLP intelligence using Apple’s Shortcuts feature. The result? A much better Voice AI that fetches better search results, offers more meaningful conversations, and even lets you control your smart home in a much more ‘human-friendly’ way… almost rivaling Tony Stark’s JARVIS in terms of usability. The best part? You can do it too!

Marschalko lists out his entire procedure in a Medium blog post that I definitely recommend checking out if you want to build your own ‘SiriGPT’ too, with an approach that required absolutely no coding experience. “I asked GPT-3 to pretend to be the smart brain of my house, carefully explained what it can access around the house and how to respond to my requests,” he said. “I explained all this in plain English with no programme code involved.”

The video above demonstrates exactly how Marschalko’s ‘SiriGPT’ works. His home is filled with dozens of lights, thermostats, underfloor heating, a ventilation unit, cameras, and a lot more, making it the perfect testing ground for just about every use case. Marschalko starts by splitting his tasks into four distinct request types – Command, Query, Answer, and Clarify – each with its own process that GPT-3 follows to determine what needs to be done.

Marschalko’s AI is significantly better at processing indirectly worded commands.

Where the magic really unfolds is in how even indirect requests from Marschalko are understood and translated into meaningful actions by the assistant. While Siri and other AI assistants only respond to direct requests like “turn the light on” or “open the garage door”, GPT-3 allows for more nuanced conversations. In one example, Marschalko says, “Notice that I’m recording this video in the dark, in the office. Can you do something about that,” and the assistant promptly turns on the light while replying with an AI-generated response instead of a template reply. In another example, he says, “my wife is on the way driving home, and will be here in 15 minutes. Switch lights on for her outside just before she parks up”, to which the assistant responds with “The lights should be turned on by the time your guest arrives!”, demonstrating two powerful things: A. the ability to grasp a concept as complex as ‘switching a specific light on after a delay of a few minutes’, and B. responding in a natural manner that conveys it understood exactly what you wanted done.

Marschalko hooked all this into a shortcut called Okay Smart Home; to trigger it, all he has to do is activate Siri, say the name of the shortcut (in this case “Okay Smart Home”), and then begin talking to his assistant. The four request types basically allow Marschalko to cover all kinds of scenarios, from controlling smart home appliances with the Command request to asking for the status of an appliance (like the temperature of a room or the oven) with the Query request. The Answer request covers more chat-centric queries like asking the AI for recommendations, suggestions, or general information from across the web, and the final Clarify request lets the AI ask you to repeat or rephrase your question if it was unable to map it to any of the three previous request types.
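
For the curious, the shortcut essentially boils down to a single HTTP request: Marschalko’s long plain-English prompt plus the dictated request, sent to OpenAI’s completions API, with the returned text spoken back by Siri. Here is a rough Swift approximation of that call; the prompt text, model choice, and response handling are simplified assumptions on my part, and the real build is done entirely in the Shortcuts app with no code.

```swift
import Foundation

// Rough approximation of what the "Okay Smart Home" shortcut does under the
// hood: wrap the dictated request in a long plain-English prompt, send it to
// OpenAI's (legacy) GPT-3 completions endpoint, and hand the reply back to
// Siri to speak. Prompt text, model choice, and response handling here are
// simplified assumptions; the real build lives entirely in the Shortcuts app.

let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] ?? ""

let housePrompt = """
You are the smart brain of my house. Classify every request as one of four
types (Command, Query, Answer, or Clarify) and respond accordingly.
"""

func askSmartHome(_ spokenRequest: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = [
        "model": "text-davinci-003",   // the GPT-3 model available at the time
        "prompt": housePrompt + "\n\nRequest: " + spokenRequest,
        "max_tokens": 200
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let choices = json?["choices"] as? [[String: Any]]
    return choices?.first?["text"] as? String ?? "Sorry, I couldn't work that out."
}

// Example usage (needs a valid API key):
// Task { print(try await askSmartHome("It's dark in the office, can you do something about that?")) }
```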

Although this GPT-powered assistant absolutely runs circles around the visibly dumber Siri, it doesn’t come for free. You have to set up an OpenAI account and buy tokens to access its API. “Using the API will cost around $0.014 per request, so you could perform over 70 requests for $1,” Marschalko says. “Bear in mind that this is considered expensive because our request is very long, so with shorter ones you will pay proportionally less.”

The entire process is listed in this Medium blog post if you want to learn how to build out your own assistant with its distinct features. If you’ve got an OpenAI account and want to use the AI that Marschalko built in the video above, the Okay Smart Home shortcut is available to download and use with your own API keys.

Apple’s April 20th event revealed by Siri and our favorite conceptual designs we’d love to see!

While the entire world hunts for the barest hint of an Apple leak, this news comes straight from the horse’s mouth – or rather, from the horse’s designated personal assistant, Siri! You read that right. I can easily imagine a tech reporter, after a long day of searching for the newest tech hits, asking their personal assistant for some help – “Hey Siri! When is the next Apple Event?” – and actually getting the reply, “The special event is on Tuesday, April 20, at Apple Park in Cupertino, CA. You can get all the details on Apple.com.” This might be one of those rare moments when Siri’s reply managed to shock and awe us.

Now, Siri’s resourcefulness stops about here. As MacRumors first reported, Siri is providing this information only in some cases, while most queries refer the user to Apple’s website for event details. Rumors, however, have been abounding about the launch of a new 12.9-inch iPad Pro boasting a mini-LED screen, as well as the long-speculated AirTags. While we patiently wait, here are some stellar Apple-inspired concept designs – drawing on everything from Apple’s patents to designers’ own innovation – to satisfy our innate Apple-related wishful thinking: that maybe the next design they share proves to be the pivotal change our saturated tech space truly needs.

Apple patent reveals a new type of Pencil with replaceable nibs for different creative applications. Watch out, Wacom and Adobe! In a new patent granted to Apple by the US Patent and Trademark Office, the company is reportedly looking at a next-generation Apple Pencil with swappable nib modules. While the patent doesn’t exclusively outline what these nibs would look like or be used for, it focuses more on the underlying technology, which would allow nibs to connect to the pencil handle via a special Lightning-style connector. The Apple Pencil is arguably the iPad Pro‘s secret sauce. Along with the Pencil, the iPad Pro becomes the ultimate creator’s setup (for both 2D as well as 3D creation). It would therefore make sense to explore how the Pencil could further become a ‘power-user’ tool, allowing creators to unlock new potential. Yanko Design has imagined what these new nibs could look like, with explorations for more niche 2D uses. The interchangeable nibs include a fine-tip nib, a chisel nib, and a flexible brush-pen nib. Other nib styles could unlock 3D modeling features like being able to sculpt on the iPad. “The filing suggests the nib could contain several different sensors for varying purposes. The component list includes tactile sensors, contact sensors, capacitive and touch sensors, a camera, a piezoelectric sensor, a pressure sensor, or a photodiode,” reports AppleInsider.

The iPhone Fold concept designed by Svyatoslav Alexandrov (for the YouTube channel ConceptsiPhone) comes in the familiar Galaxy Fold format, with a primary 6.3-inch screen on the outside, and a larger, 8-inch folding screen on the inside. It ditches FaceID for the reliable TouchID, and turns the entire primary display into a fingerprint sensor – so you can unlock your phone simply by swiping up. The lack of FaceID means a significantly smaller notch with just one front-facing camera for selfies. The back, however, comes with the iPhone 12 Pro’s entire camera setup, featuring wide, ultra-wide, and telephoto lenses, along with a flash and a LiDAR scanner. Open the iPhone up and it transforms into a squarish iPad Mini that’s designed to be perfectly portable.

The iPhone Q by Johan Gustafsson (named after the fact that it comes with a dedicated QWERTY keyboard) presents a bold ‘new’ vision for the iPhone. I use the word ‘new’ in air-quotes because while adding a dedicated tactile keyboard to a phone isn’t new, it’s new for the iPhone, and more importantly, it presents a new format as smartphone companies desperately try to make their phones look less blockish and more gimmicky. In a world of folding phones with creased displays, pathetic battery lives, and clunky bodies, the iPhone Q feels like that perfect premium, enterprise-grade smartphone to pair with the iPad Pro or the MacBook Pro. The phone comes sans a notch but makes up for the lack of a front-facing camera with a complete tactile keyboard right underneath the screen.

A better way to describe PS Design’s iPhone 13 concept is to compare the rear display to Apple’s closest product – the Apple Watch. The 3-inch always-on rear display practically mirrors the watch’s capabilities, allowing you to see the time, notifications, and a wide variety of other data on it. The display on the rear uses Apple’s low-temperature polycrystalline oxide (LTPO) technology to provide its always-on feature, and the fact that it sits right beside the main camera setup (and that it’s larger than the Mi 11 Ultra’s display), means the front of the phone can ditch the notch entirely, creating a beautifully bezel-less iPhone that leaves little to be desired.

Presenting, the ‘Cheesegrater’ Case for the iPhone 12 Pro as visualized by Sarang Sheth. Made from a TPE bumper and a machined aluminum backplate, the case puts the familiar cheesegrater texture on the back of the iPhone to help it cool more efficiently (well at least in theory). In theory, it’s also perfectly suited to mince cloves of garlic or grate some Parmigiano Reggiano. Now that we have a (sort of) clear vision of what the cheesegrater texture would look like on an iPhone, let’s objectively and subjectively judge this. For starters, it just looks like a really bad idea. Objectively speaking, a textured metal body would most certainly trap dirt, dust, pieces of lint, aside from also preventing the phone from wirelessly charging. The current textured metal plate is 1mm thick, and for any sort of texture, you’d need 3D depth which adds unnecessary thickness to the phone – something Apple probably won’t want to do.

The colored iMacs are really a hat-tip to the candy-colored iMac G3 series from back in 1998. According to Jon Prosser, who collaborated with Concept Creator on the following images, the 2021 iMacs are likely to come in 5 colors – black, white, green, blue, and rose gold… just like the 2020 iPad Air. The colors will be much more subtle than the iMac G3’s, but they provide an interesting dynamic to the aluminum-clad all-in-one computers. When viewed from the front, the new iMacs tend to resemble the iPad too, thanks to the bezel treatment. Unlike previous iMacs that came with a massive chin under the screen sporting the Apple logo, the new iMacs will have much more uniform bezels. It isn’t really apparent whether they’ll also come with FaceID – although it would make sense, given they’ll be used indoors, in settings where masks aren’t really required.

The iPhone Flip (created by Technizo Concept in collaboration with LetsGoDigital) shares the same nomenclature and folding format as the Galaxy Z Flip from Samsung, albeit with a few key differences. The device measures about the same size as your current iPhone 12 Pro Max, but it sports a folding line across its ‘waist’, which allows the iPhone to fold in half like a clamshell phone from the 90s. This folding structure allows the smartphone to become more compact and easier to carry (although the resulting folded form would be twice the thickness of the phone), while also giving you the option to use the iPhone as a miniature laptop by folding it halfway in an ‘L’ shape.

Love it or hate it, you cannot ignore Apple. As these renders show, there’s a ton of innovation to look forward to from the powerhouse that is Apple.

This Microsoft self-driving car concept takes aim at the ambitious Apple Project Titan

People have been arguing over ‘Windows vs Macintosh’ for decades, but that ideological battle has largely been confined to computers. With the Microsoft Surface car concept, the feud extends into the world of transportation too!

Meet the Microsoft Surface Car, an automobile that beautifully channels the sleek aesthetic of Microsoft’s Surface laptops into automotive form. Visualized by Yang Gu-rum, an automotive designer based out of Korea, the Surface car concept shows how design details from tech products can seamlessly be carried forward into car design. The Surface Car comes with a relatively boxy yet sleek design, dominated by flat surfaces and straight lines. Channeling the same visual language as the Surface tablets and laptops, the car sports a satin-finish silver body, with black accents and tinted glass. The absence of a radiator grille indicates that the concept is powered by an electric drivetrain, and it wouldn’t be too risky to assume that the car also has some form of self-driving AI built in. There are no renders of what the interior of the car looks like, but judging from its design, it seats two people. The vehicle sports camera-based rear-view mirrors and remarkably streamlined LED strips on the front and the back, serving as headlights and taillights… not to mention the Microsoft logo that shows up on the top-right corner of the front of the car, as well as on both doors.

Although there isn’t any indication that Microsoft is working on an in-house production car (this one is just a fan-made design exercise), the Surface Car definitely looks fascinating. Not to mention that it would make the Apple vs Microsoft rivalry a whole lot more interesting too! I just hope the car doesn’t come bundled with Cortana…

Designer: Yang Gu-Rum

iOS 14.3 is out with support for Fitness+, AirPods Max and more

Apple has rolled out iOS 14.3 with support for Fitness+, AirPods Max, App Store privacy labels and much more. The Apple Watch-powered Fitness+ service is now live in Australia, Canada, Ireland, New Zealand, United Kingdom and the US. It costs $10/mon...

Apple unveils AirPods Max, its first over-ear noise-cancelling headphones

After months of rumors, we finally have confirmation. Apple officially (and quietly) announced its first over-ear headphones today. The AirPods Max offer active noise cancellation (ANC), easy access to Siri and all the current elements of Apple desig...

Apple’s HomePod mini won’t leave marks on your fancy wood furniture

Between its diminutive size and more affordable price tag, Apple’s new HomePod mini has a lot going for it over its $300 predecessor. But its greatest strength might be what it won’t do to your expensive furniture. That is, it won’t leave unsightly w...