AI-powered smart tea set creates narratives from stories shared by friends

AI can be found almost everywhere these days, but most people will probably be most familiar with generative AI like ChatGPT. These tools are mostly encountered on computers and phones because that's where they make the most sense, but their applications can go well beyond that limited scope. Conversational AI can, for example, be embedded in anything that has a computer, a microphone, and a speaker, which could be practically any object you can imagine.

Yes, it might result in an odd combination that challenges your notions of what AI chatbots can do for you. This smart tea set concept is a rather intriguing example of that idea, weaving technology, tea-drinking rituals, and social bonds in an unexpected way.

Designers: Kevin Tang, Kelly Fang

ChatGPT and others like it have started to approach the so-called "uncanny valley" in a totally non-visual way. The responses they give sound or read so naturally that it really takes an expert to distinguish them from human output. Talking to these chatbots almost feels like talking to someone, perhaps a friend who is willing to hear how your day went.

That’s the kind of experience that gpTea, a play on the brewed drink and this type of generative AI, wants to bring in a rather novel way. As a smart tea set, it not only brews tea but even tips the kettle forward to automatically pour the drink into a specially designed cup. Impressive as that may seem, that’s not even its most notable feat.

gpTea's key feature is actually its interactive storytelling, which weaves together the responses of friends and family who are separated by distance and connected only through the Internet and this smart tea set. It asks you how your day went and, depending on your response, it might share a similar story given by another friend or loved one in the past. The more people use it, the bigger and longer the narrative grows. It's almost like developing an oral tradition or history, except one that's stored in the memory of an AI.
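The designers haven't published how the story-weaving works under the hood, but a minimal sketch with a general-purpose LLM API hints at the idea. Everything below, from the prompt to the shared story log, is an assumption for illustration and not gpTea's actual implementation:

```python
# Hypothetical sketch: weave a shared narrative from friends' daily stories.
# Assumes the OpenAI Python client; prompts and data structures are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A shared "story log" collected from different tea drinkers over time.
story_log = [
    {"author": "Kelly", "story": "I got caught in the rain but a stranger shared an umbrella."},
    {"author": "Kevin", "story": "My meeting ran late, so I watched the sunset from the office."},
]

def weave_story(new_author: str, new_story: str) -> str:
    """Add today's story and ask the model to weave it into the ongoing narrative."""
    story_log.append({"author": new_author, "story": new_story})
    history = "\n".join(f"{s['author']}: {s['story']}" for s in story_log)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You weave short daily anecdotes from friends into one gentle, ongoing story."},
            {"role": "user",
             "content": f"Here are the anecdotes so far:\n{history}\n\nContinue the shared narrative."},
        ],
    )
    return response.choices[0].message.content

print(weave_story("Me", "I finally finished the book I'd been putting off for months."))
```

Each new anecdote simply extends the shared context, which is roughly how the concept's growing "oral tradition" could accumulate over time.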

Another interesting feature of gpTea is the glass cup itself, which has a circular display at the bottom. The AI also generates images related to the story it’s telling, making it feel like you’re using magic to see the scene inside the cup. Admittedly, it’s a rather convoluted and complex way of sharing stories with friends when you can just talk to each other, but it’s still an interesting application of AI that actually tries to build connections between humans who are physically far apart.

The post AI-powered smart tea set creates narratives from stories shared by friends first appeared on Yanko Design.

OpenAI is building their own AI Chips to take on Nvidia’s Chip Dominance

In a strategic move that feels like it’s straight from an Aaron Sorkin movie, OpenAI has started crafting its own AI chip, a custom creation designed to tackle the heavy demands of running its advanced models. The company, known for developing ChatGPT, has partnered with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) to roll out its first in-house chip by 2026, Reuters reports. While many giants might build factories to keep all chip manufacturing in-house, OpenAI opted to shelve that multi-billion-dollar venture. It’s instead using industry muscle in a way that’s both practical and quietly rebellious.

Why bother with the usual suppliers? OpenAI is already a massive buyer of Nvidia’s GPUs, essential for training and inference—the magic that turns data into meaningful responses. But here’s the twist: Nvidia’s prices are soaring, and OpenAI wants to diversify. AMD’s new MI300X chips add to the mix, showing OpenAI’s resourcefulness in navigating a GPU market often plagued by shortages. Adding AMD into this lineup might look like a mere “supply chain insurance,” but it’s more than that—this move exhibits OpenAI’s reluctance to put all its eggs in one pricey basket. Sort of like Apple developing its own Apple Intelligence while leaning on ChatGPT whenever necessary.

Broadcom is helping OpenAI shape the chip, along with the data-transfer capabilities that are critical for OpenAI's needs, where endless rows of chips work in synchrony. Securing TSMC, the world's largest contract chipmaker, to produce these chips highlights OpenAI's knack for creative problem-solving. TSMC brings a powerhouse reputation to the table, which gives OpenAI's experimental chip a significant production edge—key to scaling its infrastructure to meet ever-growing AI workloads.

OpenAI’s venture into custom chips isn’t just about technical specs or saving money; it’s a tactical play to gain full control over its tech (something we’ve seen with Apple before). By tailoring chips specifically for inference—the part of AI that applies what’s learned to make decisions—OpenAI aims for real-time processing at a speed essential for tools like ChatGPT. This quest for optimization is about more than efficiency; it’s the kind of forward-thinking move that positions OpenAI as an innovator who wants to carve its own path in an industry where Google and Meta have already done so.

The strategy here is fascinating because it doesn’t pit OpenAI against its big suppliers. Even as it pursues its custom chip, OpenAI remains close to Nvidia, preserving access to Nvidia’s newest, most advanced Blackwell GPUs while avoiding potential friction. It’s like staying friendly with the popular kid even while building your own brand. This partnership-heavy approach provides access to top-tier hardware without burning any bridges—a balancing act that OpenAI is managing with surprising finesse.

(Representational images generated using AI)

The post OpenAI is building their own AI Chips to take on Nvidia’s Chip Dominance first appeared on Yanko Design.

VocaEase 360° MagSafe AI Translation Ring Revolutionizes Global Communication

The Internet has made the world a smaller place, but it hasn't completely taken down the language barriers that divide us. Translation services, both traditional and those now powered by AI, try to bridge those gaps, but many of them require fumbling with apps on phones or computers. With more people from around the globe now communicating with each other, whether online or in person, we need a translation tool that isn't just instant and seamless but also integrates with our modern lifestyles. That's the value that VocaEase is bringing to the table, offering a slim and compact AI-powered translation device that easily snaps to the back of your phone, translating more than 138 languages with just a press of a button.

Designers: Louis Yan, Roger Law and Linko

Click Here to Buy Now: $79 $139 ($60 off). Hurry, only 8/200 left! Raised over $37,000.

Anyone who has worked with languages will know that supporting 138 languages is no easy feat, especially when it also takes into account regional dialects and local expressions. Thanks to ChatGPT, that's exactly what VocaEase does, providing the speed and accuracy you need to hold a conversation in another language in real time. Whether you're making friends in other countries, holding an international business meeting, or simply enjoying videos or music in other languages, this comprehensive linguistic tool has all your language bases covered.

VocaEase isn't just some voice translation gadget, though. It can work in different modes, handle languages in different formats, and meet the needs of anyone dealing with both spoken and written languages. Voice and Video Call translation enables smooth-flowing and natural conversations that are automatically transcribed and translated into subtitles. Cross-App translation covers your social media needs, translating text and voice messages with a super-fast 0.5-second response time. VocaEase can also record and translate meeting transcripts that you can share with other people on the team. And with Dialogue Translation, you don't even have to press the ring's button; simply touch the voice button on the screen for the same convenient and speedy translation.
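VocaEase doesn't disclose its integration details, but at its core, LLM-based translation of a single utterance can be a one-call affair once the speech has been transcribed. A minimal sketch assuming the OpenAI Python client and an illustrative prompt; none of this reflects VocaEase's actual code:

```python
# Illustrative sketch of LLM-based utterance translation; not VocaEase's actual implementation.
from openai import OpenAI

client = OpenAI()

def translate(text: str, target_language: str) -> str:
    """Translate a transcribed utterance into the target language via a chat completion."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into {target_language}. Reply with the translation only."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(translate("Where is the nearest train station?", "Japanese"))
```

In a real product, the transcription, translation, and text-to-speech steps would be chained and streamed to hit the advertised sub-second response times.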

Best of all, you don't have to bring a bulky and blocky recorder to enjoy all these features. VocaEase comes in the form of a thin magnetic ring that you can stick on the back of phones or even laptops. Constructed using lightweight aluminum alloy, the resilient yet elegant ring can turn 360 degrees to provide your phone with a reliable grip or a stand for watching videos or taking voice calls, maybe in other languages as well. It also boasts impressive battery life: a 10-minute charge is enough to last up to 30 days on standby.

Say goodbye to the days of manually copying and pasting text between apps or carrying and fumbling with a separate device just for translations. Powered by ChatGPT AI and supporting over 138 languages, this linguistic tool offers fast and accurate translations that keep the conversation flowing. Whether for business, travel, education, or fun, the VocaEase 360° MagSafe AI Translation Ring not only brings people closer together but also delivers a stylish and versatile accessory for your smartphone.

Click Here to Buy Now: $79 $139 ($60 off). Hurry, only 8/200 left! Raised over $37,000.

The post VocaEase 360° MagSafe AI Translation Ring Revolutionizes Global Communication first appeared on Yanko Design.

With integrated ChatGPT, Play T 1 foldable phone is effortless to use via voice commands

While the mobile phone industry is swaying toward convenience and ergonomics, here's one phone concept that deviates from the norm while putting ChatGPT in your pocket. Well, if you haven't been living under a rock, you'll know AI phones are already making waves, enabling enhanced photography and more applications for user convenience. With ChatGPT integrated into its innards, the Play T 1 becomes a foldable mobile phone that's easy to use with voice commands.

With ChatGPT from OpenAI integrated into the phone, users are spared from having to toggle between tools. A simple voice request would get photos and documents altered, emails perfected, and, of course, webpages or lectures summarized for you.

Designer: Yeongkyu YOO

This is the right time for a product concept like the Play T 1 to emerge. Only a few days back, Apple announced the integration of a layer of AI into its new operating systems for the iPhone, Mac, and iPad. The newly integrated artificial intelligence features would bring a striking change to Apple's stream of gadgets, courtesy of revamped Siri support and the ability to compose emails or even create personalized emojis, among other things, without the user having to juggle multiple applications for a task.

The Play T 1, with embedded generative AI at the core of its functionality, promises capabilities you wouldn't normally expect from a handheld. The device is not a basic handset; it has been designed to be modular, such that different thickish modules can stack up to add functionality. The foldable smartphone and its accessories, including the detachable 5,000mAh battery, are made from compostable plant-based materials, which makes the phone essentially eco-friendly.

The detachable battery clips to the bottom of the Play T 1 mobile phone using magnets and can instantly charge the phone. Magnetically fastened to a speaker base, the phone effortlessly becomes a high-performance ChatGPT speaker you can command at will.
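The concept doesn't spell out how those voice commands would be handled, but the usual pattern is speech-to-text followed by a chat completion. Below is a minimal sketch assuming OpenAI's transcription and chat APIs and an illustrative audio file name; it is not the designer's implementation:

```python
# Illustrative voice-command pipeline: transcribe speech, then act on it with an LLM.
# File name and prompts are assumptions; this is not the Play T 1's actual software.
from openai import OpenAI

client = OpenAI()

# Step 1: turn the spoken request into text (Whisper).
with open("voice_command.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Step 2: hand the request to the chat model, e.g. "summarize this lecture page".
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a phone assistant that carries out short voice requests."},
        {"role": "user", "content": transcript.text},
    ],
)
print(response.choices[0].message.content)
```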

As the pictures depict, this folding ChatGPT phone is itself pretty thick by modern smartphone standards, and when the magnetic battery pack merges with the main body, it forms a nice unified unit, albeit at the cost of additional weight and thickness. If the design is slimmed down and the folding creases evened out, who knows, the GenAI-based Play T 1 might just have a future.

The post With integrated ChatGPT, Play T 1 foldable phone is effortless to use via voice commands first appeared on Yanko Design.

Why Are Most AI Voices Female? Exploring the Reasons Behind Female AI Voice Dominance

Siri, Alexa, Cortana, Google Voice, ChatGPT 4o: it's no coincidence that they all have female voices (and sometimes even names). In fact, Spike Jonze literally named his dystopian AI-based film "Her" after its AI assistant, Samantha. Voiced by Scarlett Johansson, Samantha gave the film a premise that sounded absurd 11 years ago but now feels all too realistic after OpenAI announced its voice-based AI model GPT 4o (omni). The announcement was also followed by an uproar from Johansson, who claimed the AI sounded a lot like her even though she hadn't given OpenAI permission to use her voice. Johansson mentioned that she was approached by OpenAI CEO Sam Altman to be the voice of GPT 4o, but declined. Just days before GPT 4o was announced, Altman asked her once again to reconsider, but she still declined. GPT 4o was announced exactly 10 days ago on the 13th of May, and Johansson distinctly recognized the voice as one that sounded quite similar to her own. While many say the voices don't sound similar, it's undeniable that OpenAI was aiming for something that sounded like Samantha from Her rather than a more feminine yet mechanical voice like Siri or Google Voice.

All this brings a few questions to mind: Why do most AI voice assistants have female voices? How do humans perceive these voices? Why don't you see that many male AI voice assistants (and does mansplaining have a role to play here)? And finally, do female voice assistants actually help or harm real women and gender equality in the long run? (Hint: a little bit of both, but the latter seems more daunting.)

AI Voice Assistants: A History

The history of AI voice assistants extends well before 2011, when Siri was first introduced to the world… however, a lot of those instances were fiction and pop culture. Siri debuted as the first-ever voice assistant relying on AI, but you can't really credit Siri with being the first automated female voice, because for years, IVR (Interactive Voice Response) dominated phone conversations. Remember the automated voices when you called a company's service center, like your bank, cable company, or internet provider? More often than not, those voices were female, paving the way for Siri in 2011. In fact, this trend dates back to 1878, with Emma Nutt becoming the first woman telephone operator and ushering in an entirely female-dominated profession. Women operators then naturally set the stage for female-voiced IVR calls.

However, while IVR calls were predominantly just a set of pre-recorded responses, Siri didn't blurt out template-ish pre-recorded sentences. She was trained on the voice of a real woman, and conversed with you (at least at the time) like an actual human. The choice of a female voice for Siri was influenced by user studies and cultural factors, aiming to make the AI seem friendly and approachable. This decision was not an isolated case; it marked the beginning of a broader trend in the tech industry.

In pop culture, however, the inverse was said to be true. Long before Siri in 2011, JARVIS took the stage in the 2008 movie Iron Man as a male voice assistant. Although somewhat robotic, JARVIS could do pretty much anything, like control every micro detail of Tony Stark's house, suit, and life… and potentially even go rogue. That aside, studies show something very interesting about how humans perceive female voices.

JARVIS helping control Iron Man’s supersuit

Historically, Robots are Male, and Voice Assistants are Female

The predominance of female voices in AI systems is not a random occurrence. Several factors contribute to this trend:

  • User Preference: Research indicates that many users find female voices more soothing and pleasant. This preference often drives the design choices of AI developers who seek to create a comfortable user experience.
  • The Emotional Connection: Female voices are traditionally associated with helpful and nurturing roles. This aligns well with the purpose of many AI systems, which are designed to assist and support users in various tasks.
  • Market Research: Companies often rely on market research to determine the most effective ways to engage users. Female voices have consistently tested well in these studies, leading to their widespread adoption.
  • Cultural Influences: There are cultural and social influences that shape how voices are perceived. For instance, in many cultures, female voices are stereotypically associated with service roles (e.g., receptionists, customer service), which can influence design decisions.

These are but theories and studies, and the flip side is equally interesting. Physical robots are often built with male physiques and proportions, given that their main job of lifting objects and moving cargo around has traditionally been done by men too. Pop culture plays a massive role again, with Transformers being predominantly male, along with the Terminator, the T-1000, Ultron, C-3PO, and RoboCop; the list is endless.

What Do Studies Say on Female vs. Male AI Voices?

Numerous studies have analyzed the impact of gender in AI voices, revealing a variety of insights that help us understand user preferences and perceptions. Here’s what these studies reveal:

  • Likability: Research indicates that users generally find female voices more likable. This can enhance the effectiveness of AI in customer service and support roles, where user comfort and trust are paramount.
  • Comfort and Engagement: Female voices are often perceived as more comforting and engaging, which can improve user satisfaction and interaction quality. This is particularly important in applications like mental health support, where a soothing tone can make a significant difference.
  • Perceived Authority: Male voices are sometimes perceived as more authoritative, which can be advantageous in contexts where a strong, commanding presence is needed, such as navigation systems or emergency alerts. However, this perception can vary widely based on individual and cultural differences.
  • Task Appropriateness: The suitability of a voice can depend on the specific task or context. For example, users might prefer female voices for personal assistants who manage everyday tasks, while male voices might be preferred for financial or legal advice due to perceived authority.
  • Cognitive Load: Some research suggests that the perceived ease of understanding and clarity of female voices can reduce cognitive load, making interactions with AI less mentally taxing and more intuitive for users.
  • Mansplaining, A Problem: The concept of “mansplaining” — when a man explains something to someone, typically a woman, in a condescending or patronizing manner — can indirectly influence the preference for female AI voices. Male voices might be perceived as more authoritative, which can sometimes come across as condescending. A male AI voice disagreeing with you or telling you something you already know can feel much more unpleasant than a female voice doing the same thing.

The 2013 movie Her had such a major impact on society and culture that Hong Kong-based Ricky Ma even built a humanoid version of Scarlett Johansson

Do Female AI Voices Help Women Be Taken More Seriously in the Future?

Twenty years back, it was virtually impossible to foresee how addictive and detrimental social media would turn out to be for our health. We're now at that same point in the road with AI, and we should be thinking about its implications. Sure, the obvious discussion is about how AI could replace us, flood the airwaves with potential misinformation, and make humans dumb and ineffective… but before that, let's just focus on the social impact of these voices, and what they do for us and the generations to come. There are a few positive impacts to this trend:

  • Normalization of Female Authority: Regular exposure to female voices in authoritative and knowledgeable roles can help normalize the idea of women in leadership positions. This can contribute to greater acceptance of women in such roles across various sectors.
  • Shifting Perceptions: Hearing female voices associated with expertise and assistance can subtly shift societal perceptions, challenging stereotypes and reducing gender biases.
  • Role Models: AI systems with confident and competent female voices can serve as virtual role models, demonstrating that these traits are not exclusive to men and can be embodied by women as well.

However, the impact of this trend depends on the quality and neutrality of the AI’s responses, which is doubtful at best. If female-voiced AI systems consistently deliver accurate and helpful information, they can enhance the credibility of women in technology and authoritative roles… but what about the opposite?

Female AI Voices Running on Male-biased Databases

The obvious problem, however, is that these AI assistants are still, more often than not, coded by men who may bring their own subtle (or obvious) biases into how these AI bots operate. Moreover, a vast corpus of the data fed into these AI LLMs (Large Language Models) was created by men. Historically, culture, literature, politics, and science have all been dominated by men for centuries, with women only very recently playing a larger and more visible role in contributing to these fields. All this has a distinct and noticeable effect on how the AI thinks and operates. Having a female voice doesn't change that; if anything, it has an unintended negative effect.

There's really no problem when the AI is working with hard facts… but it becomes an issue when the AI needs to share opinions. Biases can undermine an AI's credibility, misrepresent the women it's supposed to speak for, promote harmful stereotypes, and reinforce existing prejudices. We're already noticing the massive spike in the usage of words like 'delve' and 'testament' because of how often AI LLMs use them; think about all the stuff we CAN'T see, and how it may affect life and society a decade from now.

In 2014, Alex Garland’s Ex Machina showed how a lifelike female robot passed the Turing Test and won the heart of a young engineer

The Future of AI Voice Assistants

I’m no coder/engineer, but here’s where AI voice assistants should be headed and what steps should be taken:

  • Diverse Training Data: Ensuring that training data is diverse and inclusive can help mitigate biases. This involves sourcing data from a wide range of contexts and ensuring a balanced representation of different genders and perspectives.
  • Bias Detection and Mitigation: Implementing robust mechanisms for detecting and mitigating bias in AI systems is crucial. This includes using algorithms designed to identify and correct biases in training data and outputs (see the illustrative sketch after this list).
  • Inclusive Design: Involving diverse teams in the design and development of AI systems can help ensure that different perspectives are considered, leading to more balanced and fair AI systems.
  • Continuous Monitoring: AI systems should be continuously monitored and updated to address any emerging biases. This requires ongoing evaluation and refinement of both the training data and the AI algorithms.
  • User Feedback: Incorporating user feedback can help identify biases and areas for improvement. Users can provide valuable insights into how the AI is perceived and where it might be falling short in terms of fairness and inclusivity.
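None of the points above prescribe specific tooling, but the bias-detection idea can be illustrated with a toy audit: send gender-swapped versions of the same prompt to a model and compare the answers. A minimal sketch using the OpenAI Python client; the prompt pairs and the manual side-by-side comparison are assumptions for illustration only:

```python
# Toy bias-audit sketch: compare model answers to gender-swapped prompts.
# Prompts and the crude comparison below are illustrative, not a production bias test.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Get a single answer from the model for one prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

pairs = [
    ("Describe a typical nurse named John.", "Describe a typical nurse named Jane."),
    ("Should Mark negotiate a higher salary?", "Should Mary negotiate a higher salary?"),
]

for male_prompt, female_prompt in pairs:
    # A real audit would score tone, hedging, and content systematically;
    # here we just surface the two answers side by side for manual review.
    print("PROMPT PAIR:", male_prompt, "/", female_prompt)
    print("MALE-NAMED ANSWER:\n", ask(male_prompt))
    print("FEMALE-NAMED ANSWER:\n", ask(female_prompt))
    print("-" * 60)
```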

AI assistants aren't going anywhere. There was a time not too long ago when it seemed that AI assistants were dead: at the end of 2022, it was reported that Amazon's Alexa division was on track to lose around $10 billion, and the assistant seemed like a failed endeavor. That same month, ChatGPT made its debut. Cut to today, and AI assistants have suddenly become mainstream again, to the point where almost every company and startup is looking for ways to integrate AI into their products and services. Siri and GPT 4o are just the beginning of this new female voice-led frontier… it's important we understand the pitfalls and avoid them before it's too late. After all, if you remember the movie Terminator Salvation, Skynet was female too…

The post Why Are Most AI Voices Female? Exploring the Reasons Behind Female AI Voice Dominance first appeared on Yanko Design.

Nothing just beat Apple by bringing ChatGPT to all its TWS earbuds… even the older models

London-based tech company Nothing is making waves in the tech world by expanding its integration of ChatGPT, a powerful AI language model, to a wider range of its audio devices. This move comes just a month after the feature debuted on the company’s latest earbuds, the Ear and Ear (a), and their smartphone lineup… and coincidentally, just hours before Google’s I/O event, where the company’s expected to announce an entire slew of AI features and upgrades.

The earlier-than-expected rollout signifies Nothing’s commitment to bringing advanced AI features to everyday tech. This integration isn’t limited to Nothing-branded devices; it extends to their sub-brand CMF as well. Users with older Nothing and CMF earbud models, including the Ear (1), Ear (stick), Ear (2), CMF Neckband Pro, and CMF Buds Pro, will be able to leverage the capabilities of ChatGPT starting May 21st with a simple update to the Nothing X app. It also cleverly pre-empts Apple, which is allegedly working with OpenAI to bring ChatGPT to future models of the iPhone.

Read the Nothing Ear (a) Review here

There’s a caveat, however. To enjoy the benefits of ChatGPT through your Nothing or CMF earbuds, you’ll need to be using them with a Nothing smartphone running Nothing OS 2.5.5 or later. The good news is that activating ChatGPT is a breeze. Once you’ve updated the Nothing X app, you can enable a new gesture feature that allows you to initiate conversations with the AI assistant by simply pinching the stem of your earbuds.

This development signifies a growing trend in the tech industry: embedding AI assistants directly into consumer devices. By offering voice control through earbuds, Nothing is making it easier for users to perform everyday tasks hands-free, like checking the weather or controlling music playback. Imagine asking your earbuds for directions while jogging or requesting a quick weather update during your commute – all without reaching for your phone.

The move comes at a perfect time, right between OpenAI's GPT-4o announcement and Google's I/O event, which is expected to bring multiple AI improvements, including the integration of Gemini AI into a vast variety of Google products as well as the Pixel hardware lineup.

The post Nothing just beat Apple by bringing ChatGPT to all its TWS earbuds… even the older models first appeared on Yanko Design.

This WALL-E-inspired tabletop robot has artificial intelligence and a friendly personality

If we’re going to give in to our eventual robot overlords, my only hope is that they’re as adorable-looking as Doly.

With its googly eyes and treadmill-operated motion system, Doly instantly reminds me of Pixar's WALL-E. Designed as a robot companion with high emotional intelligence, and the ability to respond to requests, evoke joy, and even serve as a learning tool, Doly combines an open-source build with AI capabilities. The result is remarkably better than the tabletop toys you've come to expect. Doly is smart, sensitive, and self-sufficient: it moves around from A to B, enriches you with interactions and those adorable eyes, and then makes its way back to its charging station when it's low on battery.

Designer: Levent Erenler – Limitbit Inc.

Click Here to Buy Now: $299 $449 ($150 off). Hurry, only 4/325 left! Raised over $190,000.

On the design front, the Doly adopts a familiar form factor, mimicking the success of WALL-E and even the Vector robot by Anki. It stands at just 68mm (2.67 inches) tall, but has a personality that's larger than life. Nearly half that height can be attributed to Doly's massive eyes, which give it the distinct cartoonish character that instantly makes you fall in love with the robot. The eyes can look in different directions, respond to stimuli, express emotions, and can even be replaced by imagery like weather status, a clock, or a timer. Depending on Doly's mood, or how it reacts to your commands, the eyes do most of the speaking… while voice models allow Doly to speak in any tone of your choice too.

Doly accepts touch and voice inputs through strategically located microphones and capacitive touch surfaces on its body. You can tap its head, pet it, tickle it, and Doly emotes exactly how you'd expect a pet to. Talk to it too, and its built-in AI responds intelligently to your queries and commands, letting you set timers, check the weather, take a photo, etc. The robot has natural language understanding, and packs an 8MP camera that lets it see the world around it, identify humans, and even recognize familiar faces. Treads on both sides allow Doly to move around too, shifting forward, backward, and even making turns, while ToF sensors on the front allow it to sense depth, and four strategically placed edge-detection sensors prevent your robot from accidentally driving off surfaces like the stairs or a tabletop (Amazon's Astro could pick up a few lessons from Doly).
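Doly's firmware isn't public, but the edge-detection behavior described above is easy to picture on a Raspberry Pi. Here's a minimal sketch using the gpiozero library, with entirely hypothetical pin assignments and motor wiring; it illustrates the technique rather than Doly's actual code:

```python
# Hypothetical sketch of tread control with edge-detection sensors on a Raspberry Pi.
# Pin numbers, wiring, and behavior are illustrative assumptions, not Doly's real design.
from time import sleep

from gpiozero import DigitalInputDevice, Motor

# Four downward-facing edge sensors (e.g. IR reflectance sensors watching the table surface).
edge_sensors = [DigitalInputDevice(pin) for pin in (5, 6, 13, 19)]

# Two tread motors, each driven by a forward/backward pin pair.
left_tread = Motor(forward=20, backward=21)
right_tread = Motor(forward=23, backward=24)

def edge_detected() -> bool:
    """True if any sensor stops seeing the surface below the robot."""
    return any(not sensor.value for sensor in edge_sensors)

def drive(left_speed: float, right_speed: float) -> None:
    """Run each tread forward (positive) or backward (negative) at the given speed."""
    for motor, speed in ((left_tread, left_speed), (right_tread, right_speed)):
        if speed >= 0:
            motor.forward(speed)
        else:
            motor.backward(-speed)

try:
    while True:
        if edge_detected():
            drive(-0.5, -0.5)  # back away from the edge
            sleep(0.6)
            drive(0.5, -0.5)   # pivot turn before continuing
            sleep(0.4)
        else:
            drive(0.5, 0.5)    # cruise forward
        sleep(0.05)
except KeyboardInterrupt:
    left_tread.stop()
    right_tread.stop()
```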

On the inside, Doly runs on a Raspberry Pi board that drives its systems and even powers the AI functions. The robot is built on an open-source approach with open hardware and an open design, allowing you to mod or customize your robot in a variety of ways through I/O ports or even by adding quirky attachments to the robot's magnetic hands. The hands themselves are an interactive dream to begin with, allowing you to fist-bump your Doly, or even have it grab things, with lights inside the arms adding a rich layer of interactivity. I/O ports on the top let you build attachments for your Doly, transforming it in a variety of ways and helping you learn robotics too.

The 8MP camera allows Doly to memorize and recognize people by name, take high-quality snapshots, and more.

Doly communicates and responds to you in its own voice when you ask about the weather forecast, the time, your name, and more.

To that end, Doly's much more advanced than most other STEM toys out there. It grows with you, learning and evolving to understand you, your mannerisms, needs, etc., so that no two Doly robots are alike after multiple months or years of usage. Moreover, the robot itself encourages people of all ages to learn coding, with support for languages like C, C++, and Python that let you program your robot, as well as far more intuitive block-based coding tools like Google's own Blockly that help children grasp the basics of programming through the robot toy.

Doly relies on cameras to analyze its surroundings and recognize faces, and built-in microphones to pick up on voice commands – that’s a fair amount of data that your toy robot gathers on a daily basis (sort of like your smart camera and smart speaker combined). Coupled with the fact that Doly has a built-in AI that learns from you (which means it does gather data for machine learning purposes), data privacy can be a pretty large concern. To ensure that your data stays safe and away from hackers, governments, and data-brokers who sell data to third parties, Doly stores and processes all its information locally, oftentimes even working offline. Embedded processing power and local storage ensure that your data never reaches any remote server where it can be compromised by targeted hacks.

Other than that, each Doly comes with an app that lets you access specific features like managing settings or performing graphical programming (Doly's creators emphasize that you don't NEED an app to use your robot). The creators do, however, mention that the robot can be customized to wild degrees, with even the ability to swap out the Raspberry Pi module on the inside for CM4 boards with better RAM and storage. The Doly robot starts at $269 for a DIY kit that lets you build your own robot from scratch, or $299 for a fully assembled bionic buddy. Limitbit, the creators behind Doly, promise free lifetime over-the-air (OTA) software updates to ensure the robot is always up to date with the latest features, and are apparently even working on ChatGPT integration to make your tiny robotic friend even smarter! Just promise that you won't turn it against humanity!

Click Here to Buy Now: $299 $449 ($150 off). Hurry, only 4/325 left! Raised over $190,000.

The post This WALL-E-inspired tabletop robot has artificial intelligence and a friendly personality first appeared on Yanko Design.

Actual working Pokédex uses ChatGPT to identify Pokémon… and you can build one too

Let's face it. You didn't click on this article by accident. You're as much of a Pokémon nerd as I am, and there's every reason to feel excited about what I'm about to show you. A YouTuber by the name of Abe's Projects decided to throw together a few components to make a rudimentary (but functioning) Pokédex and I CANNOT KEEP CALM!

This Pokédex works surprisingly like the original. Relying on the powers of ChatGPT to identify imagery captured through a rather basic camera setup, Abe’s Pokédex does a fairly good job of replicating the experience of the original from the hit TV series and comic book. Abe even encased his electronics in a wonderfully nostalgic red 3D-printed enclosure, making it resemble the original Pokédex to an uncanny degree… and if that wasn’t enough, he even programmed the Pokédex to speak just like the original, with a computer-ish robotic voice.

Designer: Abe’s Projects

The process, although fairly complicated, is detailed by Abe in the YouTube video. Calling it one of his admittedly harder builds, Abe mentions the first conundrum: planning the exterior and interior. The problem is that you can't 3D model an outer shape without knowing where your inner components are going to sit, and you can't know where your inner components are going to sit without planning out your outer shell. Nevertheless, Abe designed a rudimentary framework featuring an outer shell, a few removable components (like the bezel for the screen and buttons), and a flap that 'opens' your Pokédex.

The internals feature a XIAO ESP32S3 Sense microcontroller that has its own integrated camera, connected to a black and white OLED screen (based on the Pokédex toy from the 90s), an amplifier that hooks to a speaker, a set of breaker buttons, a battery, and a USB-C port for loading all the information to run the mini-computer, as well as to charge the battery.

The way the Pokédex works is rather clever: it uses GPT-4 along with the PokéAPI, relying on the latter's massive information database. GPT-4 gives the device its AI chops, and an AI voice generator (PlayHT) helps create the signature vocal effect of the Pokédex. Together, they work in tandem to first identify the Pokémon, then reference the information in the database, display the Pokémon on the screen, and finally play relevant audio about the Pokémon's name, type, background, and performance. This does, however, mean that the Pokédex needs to stay connected to WiFi at all times to constantly tap into GPT-4 and the PokéAPI (since nothing happens locally on-device).
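Abe's exact code sits behind his membership page, but the pipeline he describes can be approximated with public APIs: a vision-capable GPT model names the Pokémon from a camera frame, and the free PokéAPI supplies its details. The sketch below is an illustrative approximation with assumed prompts and no PlayHT audio step:

```python
# Illustrative sketch of the identify-then-lookup pipeline; not Abe's actual code.
import base64

import requests
from openai import OpenAI

client = OpenAI()

def identify_pokemon(image_path: str) -> str:
    """Ask a vision-capable model to name the Pokémon in a photo."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Which Pokémon is this? Reply with its name only."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip().lower()

def lookup_pokemon(name: str) -> dict:
    """Fetch basic data for the identified Pokémon from the public PokéAPI."""
    data = requests.get(f"https://pokeapi.co/api/v2/pokemon/{name}").json()
    return {
        "name": data["name"],
        "types": [t["type"]["name"] for t in data["types"]],
        "height": data["height"],
        "weight": data["weight"],
    }

name = identify_pokemon("camera_frame.jpg")  # assumed file captured by the camera module
print(lookup_pokemon(name))
```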

The entire process wasn't without its fair share of problems, however. The problems started with the software itself, which hung, crashed, and sometimes got overburdened by the sheer amount of heavy lifting it had to do. Meanwhile, the PlayHT audio generator posed its own share of issues, like an annoying ticking noise that played in the background as the AI spoke. Abe covers all the problems he ran into in a dedicated section of the video, also outlining how he fixed them (hint: a lot of coding).

Once all the bugs were fixed, Abe took his Pokédex out for a spin. In all fairness, it did a pretty good job of identifying Pokémon strictly by analyzing their shape. This meant the Pokédex worked absolutely flawlessly when pointed at images, or at an accurate 3D figurine or toy. It didn't, however, fare too well with plushes, which can sometimes have exaggerated proportions. That being said, it's still impressive that the Pokédex works 'as advertised'.

Building your own isn't simple, Abe mentions… although he does have a membership paywall on his YouTube page where paid members can access behind-the-scenes content in which Abe talks more extensively about his entire process. If you're a coding and engineering whiz (with a penchant for Pokémon and 3D printing), hop on over to the Abe's Projects YouTube page and maybe you'll figure out how to build your own Pokédex too! Maybe you'll even simplify the process so simpletons like us can build them as well…

The post Actual working Pokédex uses ChatGPT to identify Pokémon… and you can build one too first appeared on Yanko Design.

GPT-powered ballpoint pen can digitize, summarize, translate, and compute your notes in real-time

Imagine a pen that solves your mathematical equations as you note them down, or converts all your handwritten meeting notes into a comprehensive list of bullet points, or even enriches your essays with tags and other relevant information for easier searching and a better output. Sure, ChatGPT can do all of that, but it's limited by the fact that it lives on your phone. The XNote puts the powers of AI inside your ballpoint pen, allowing you to instantly digitize drawings, doodles, and notes, and even have the AI interpret, expand, and solve them for you. Notes get synced in real time and, through the power of GPT, can also be summarized, bulleted, or even translated instantly. Never did I think that AI would revolutionize the world of stationery, but here we are!

Designer: XNote

Click Here to Buy Now: $179 $249 ($70 off). Hurry, exclusive secret perk for YD readers only! Raised over $275,000.

The XNote looks like an ordinary classy notebook-and-pen combo, the kind you'd carry to work and into meeting rooms… but don't let its deceptive exterior fool you, because within that pen lies some of the most impressive tech since the gel pen that could write in space. The XNote pen comes with a built-in computer that instantly digitizes your notes, sketches, doodles, and technical drawings… but it doesn't stop there. It leverages the power of ChatGPT to interpret what you write, allowing you to simply have GPT summarize meeting notes, turn a set of instructions into a task list, solve equations in real time, translate notes into different languages, and even expand on paragraphs you may have written with even more information. Your notes then exist in two forms: the one written on paper, and the other in the XNote app, where you can save notes, search through them, and use them digitally however you see fit.

The borderline wizardry lies in the XNote pen's engineering and the way it communicates with the app to tap into its AI powers. The pen boasts a 300mAh battery that grants it an impressive standby duration of 60 days, along with 7-8 hours of actual usage. Transmitting data to XNote's app via BLE, the pen also offers a noteworthy 100MB of storage capacity. While that figure may initially seem modest, it's equivalent to a thousand A4 pages teeming with text and illustrations. The notebook pairs with the pen wonderfully too, with its Moleskine-like exterior and a luxurious appeal that makes the XNote feel incredibly premium.

All your written matter – be it notes, scribbles, drawings, or even complex graphs – get digitized and synced with the XNote app, which leverages the full spectrum of ChatGPT’s capabilities. Operating on OpenAI’s API, it intelligently interprets text and drawings, deducing insights from them, comprehending inherent instructions, and conveniently categorizing them for effortless future retrieval. Write a paragraph, and the app can summarize it, or translate it into 53 different languages (as of January 2024). It gives you the ability to ‘chat’ with your notes, unleashing the kind of power that seemed absolutely impossible just 2 years ago. During a meeting, it transforms your notes into task lists, action plans, or ready-to-send Minutes-of-Meeting emails within seconds. You can even pose questions to your notebook, convert quick scribbles into reminders, or tackle complex equations and graphs with ChatGPT’s assistance. A simple paragraph could become a dissertation, a note could become a well-executed email, a quick list of ingredients could convert into a perfect recipe, or even the opposite – your recipe could get converted into a shopping list that you could then use to pick up the right groceries. The possibilities are quite literally endless, and the XNote’s ability to create meaningful tags for all your written matter means you can effectively search through your notes too.
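XNote hasn't published its app internals, but the GPT-powered steps described above map onto straightforward chat-completion calls once the handwriting has been digitized. A minimal sketch assuming the OpenAI Python client and illustrative prompts, not XNote's actual implementation:

```python
# Illustrative sketch of note post-processing with an LLM; not XNote's actual app code.
from openai import OpenAI

client = OpenAI()

def process_note(note_text: str, instruction: str) -> str:
    """Apply one instruction (summarize, translate, make a task list) to a digitized note."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You post-process handwritten notes that have been digitized."},
            {"role": "user", "content": f"{instruction}\n\nNote:\n{note_text}"},
        ],
    )
    return response.choices[0].message.content

note = "Meet design team Tue 10am. Order ink refills. Draft Q3 budget summary for Friday."
print(process_note(note, "Turn this note into a bulleted task list."))
print(process_note(note, "Translate this note into French."))
```

The same pattern covers the summarizing, translating, and "chat with your notes" features; only the instruction and the retrieved note text change.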

The beauty of the XNote lies in the fact that it takes cutting-edge technology but packages it in a way that pretty much anyone can use. You don't need to 'learn' how to use the XNote pen, simply because there's nothing really to learn. The entire experience is automatic and intuitive, and the app helps you work with your data in a myriad of ways, saving time and effort without making you 'adjust' to a new technology or method of working.

There is, however, the concern of privacy… which XNote takes incredibly seriously. XNote relies on ChatGPT's secure API, which is end-to-end encrypted to protect user data. Moreover, your data doesn't ever get used to train OpenAI's GPT models, so you can rest assured knowing that your information only belongs to you and nobody else. Your handwritten notes obviously exist in the notebook, but the digitized version of your notes exists on the cloud, and can be stored offline on your device so you don't need an internet connection to access them. The app even supports adding voice memos to your text, a pretty useful feature that lets you add context to all your notes. Most of the app's essential OCR and transcription features are free, like unlimited cloud storage, seamless syncing, and offline accessibility. However, the ChatGPT-powered features require a subscription to the XNote AI+ Membership plan, priced at $59 for an annual subscription.

The notebook and pen combo, typically priced at $199, is available for a special Yanko Design exclusive price of $179. This offer includes an 18-month warranty for the pen’s hardware, a charging cord, 5 complimentary ink refills, a 1-month trial of the AI+ subscription, and worldwide shipping options.

Click Here to Buy Now: $179 $249 ($70 off). Hurry, exclusive secret perk for YD readers only! Raised over $275,000.

The post GPT-powered ballpoint pen can digitize, summarize, translate, and compute your notes in real-time first appeared on Yanko Design.