What Apple’s WWDC got right… and what Google’s I/O got wrong

Twelve years ago, Google co-founder Sergey Brin jumped out of an airplane and parachuted down to a live event to present Google I/O. Cut to 2024, and Google arguably had one of the most yawn-inducing I/O events ever… Apple, on the other hand, hat-tipped Brin by having senior VP of Software Engineering Craig Federighi jump out of a plane and parachute down onto Apple's headquarters to kick off the Worldwide Developers Conference (WWDC). If you were fortunate enough to sit through both Google's I/O event for developers and yesterday's WWDC, chances are you thought the same thing I did – how did Google become so boring and Apple so interesting?

Google’s Sergey Brin skydiving into the I/O event wearing the radical new Google Glass in 2012

The Tale of Two Keynotes

Practically a month apart, Google and Apple both held their developer conferences, introducing new software features, integrations, and developer tools for the Android and Apple OS communities respectively. The objective was the same, yet presented rather differently. Ten years ago, Google’s I/O was an adrenaline-filled event that saw a massive community rally around to witness exciting stuff, while Apple’s WWDC was a developer-focused keynote that didn’t really see much involvement from the Apple consumer base. Google popularized the Glass and unveiled Material Design for the first time; Apple, on the other hand, revealed OS X Yosemite and iOS 8. Just go back and watch the keynotes and you’ll notice how vibrant one felt versus the other. This year, both pretty much announced the same things – developer tools, new software versions, feature upgrades within first-party apps, and a LOT of AI – but Google’s I/O got 1.8 million views on YouTube over 3 weeks, while Apple’s WWDC sits at 8.6 million views in just one day (as of writing this piece).

How Apple held the attention

Broadly, having seen both events, I couldn’t help but describe them differently. Google’s keynote seemed like a corporate presentation; Apple’s felt like an exciting showcase. The language was different, the visuals were different, but most importantly, the scenes were different too. Google’s entire I/O was held in person, while Apple, which also hosted an in-person event, presented a pre-recorded keynote with different environments, dynamic angles, and great cinematography. Both events were virtually the same length – Google’s keynote was 1 hour and 52 minutes long, while Apple’s was 1 hour and 43 minutes. Honestly, after the 80-minute mark, anyone’s mind will begin drifting off, but Apple did a much better job retaining my focus than Google. How? Well, it boiled down to three things – A. a consumer-first approach, B. simplified language, and C. a constant change of scenery.

Notice Apple’s language throughout the presentation, and you’ll see how the entire WWDC rhetoric was user-functionality first, developer-feature second. Whether it was visionOS, macOS, iOS, watchOS, iPadOS, or even TV and Music, Apple’s team highlighted new features that benefit all Apple users first, then mentioned the availability of SDKs and APIs to help developers implement those features in their apps too. One could argue that a Worldwide Developers Conference should inherently be developer-first, but hey, developers are going to watch the keynote regardless. The fact that 8.6 million people (mostly Apple users) watched the WWDC keynote on YouTube shows that Apple wanted to make sure users know about new features first, then developers get their briefing.

The fact that a majority of viewers were users also boils down to Apple’s language. There was hardly any technical jargon used in the keynote. No mention of how many teraflops Apple’s GPUs use while making Genmoji, what version number Sequoia is going to be, what Apple Intelligence’s context window is, or whether it’s multimodal. Simple language benefits everyone, whether it’s a teenager excited about new iMessage features, a filmmaker gearing up to make spatial content using iPhones or Canon cameras, or a developer looking forward to building Apple Intelligence into their apps. Even Apple Intelligence’s user-first privacy features were explained in ways everyone could understand.

Finally, Apple’s production quality helped visually divide the keynote into parts so the brain didn’t feel exhausted. All the different OS segments were hosted by different people in different locations. Craig Federighi and Tim Cook made multiple appearances, but shifted locations throughout, bringing a change of scenery. This helped the mind feel refreshed between segments… something that Google’s in-person keynote couldn’t benefit from.

Where Google dropped the ball

A keynote that’s nearly 2 hours long can be exhausting, not just for the people presenting but also for the people watching. Having the entire keynote on one stage with people presenting in person can feel exactly like an office presentation. Your mind gets exhausted faster, seeing the same things and the same faces. Google didn’t announce any hardware (like they’ve done in past years) to break the monotony either. Instead, they uttered the word AI more than 120 times, while being pretty self-aware about it. The lack of a change of scenery was just one of the factors that made Google’s event gather significantly fewer eyeballs.

Unlike Apple’s presentation, which had a very systematic flow, covering each OS from the more premium visionOS down to watchOS, Google’s presentation felt like an unplanned amalgamation of announcements. The event was broadly about three things – Google’s advancements in AI, new features for users, and new tools for developers – but look at the event’s flow and it feels confusing. I/O started with an introduction where Pichai spoke about multimodality and context windows, then progressed to DeepMind, then to Search (a user feature), then Workspace (an enterprise feature), then Gemini (a user feature again), then Android (which arguably was supposed to be the most important part of the event), and then to developer tools. An Android enthusiast wouldn’t be concerned with DeepMind or Google Workspace. They might find Search interesting, given how core it is to the Google experience, but then they’d have to wait through 2 more segments before the event even GOT to Android. Search and Gemini are highly intertwined, but they weren’t connected in the keynote – instead, there was an entire 13-minute segment on Workspace in between.

If all that wasn’t fatiguing enough, Google’s I/O leaned into technical jargon, describing tokens, context windows, and how the multimodal AI could segment data like speech and videos – grabbing frames, finding context, eliminating junk data, and providing value. There was a conscious attempt at showing how all this translated into real-world usage and how users could benefit from this technology too, but not without flexing terms that developers and industry folk would understand.

Although it’s natural to read through this article and conclude that one company did ‘a better job’ than the other, that isn’t really the case. Both Apple and Google showcased the best they had to offer on a digital/software level. However, the approach to these keynotes has changed a lot over the last 10 years. While Google’s I/O in 2014 had a lot of joie de vivre, their 2024 I/O lacked a certain glamor. Conversely, Apple’s WWDC had everyone on the edge of their seats, enjoying the entire ride. Maybe you got tired towards the end (I definitely did midway through the Apple Intelligence showcase), but ultimately Apple managed to deliver a knockout performance… and that’s not me saying so – just look at the YouTube numbers.

The post What Apple’s WWDC got right… and what Google’s I/O got wrong first appeared on Yanko Design.

Google didn’t talk about THIS even ONCE while discussing AI during their I/O 2023 event

The keynote speakers at Google I/O mentioned the word “AI” more than a hundred and forty times… but didn’t talk about integrating AI into Google’s Assistant even once. In fact, Google didn’t showcase ANY upgrades to its Assistant, proving one thing – Sundar Pichai is Google’s least innovative CEO and leader ever. Sure, these are entirely my opinions, but I’ve got hard facts to back them up.

Not improving Google Assistant by integrating AI was a MASSIVE missed opportunity

Back in 2018, Google unveiled Duplex, arguably the first impressively human AI voice assistant that could fool people into believing it was real. Like almost all of Google’s pet projects, Duplex was shuttered following the slightest backlash, and Google now doesn’t even mention its existence while talking about Bard. However, we need Duplex now more than ever… With Google’s AI now powerful enough to write entire emails within Gmail, having a voice-powered AI that’s as capable as Bard would be a complete game-changer. Imagine being able to talk to the Assistant the way you chat with Bard, Bing Chat, or ChatGPT. Or have a conversation with your smart home to automate routines and save electricity wherever possible, or with your wearable/fitness tracker to motivate you, personalize your diet and exercise plan, and be your all-round personal coach.

ChatGPT is the second time Google felt its core business seriously threatened (TikTok was the first)

The past year seems to have given Google a bunch of rude awakenings. In July last year, Senior VP Prabhakar Raghavan ominously pointed out that youngsters were using Google Maps less and less to explore places. “In our studies, something like almost 40% of young people, when they’re looking for a place for lunch, they don’t go to Google Maps or Search,” Raghavan said. “They go to TikTok or Instagram.” Google got yet another violent jolt when OpenAI rolled out ChatGPT in November last year, with Sundar Pichai calling Google co-founders Sergey Brin and Larry Page back to the company as an ‘existential emergency move’. To make things worse, Microsoft made a major power move by pouring $10 billion into OpenAI and reviving its Bing search engine with new AI-powered chops. Suddenly, Bing managed to hit 100 million daily active users within a matter of weeks. “I want people to know we made them dance,” said Microsoft CEO Satya Nadella, referring to Bing Chat’s David-versus-Goliath moment.

Google’s AI boost at I/O 2023 felt like overcompensation (but not necessarily in a bad way)

It seems like Bing Chat really lit a fire under Google’s behind, giving it the push it needed to make AI more ubiquitous. A strong opposition is the key to a healthy democracy, and a strong competitor is the key to great innovation… although it seems like Google was sitting on innovation all along, given that they’ve had these AI models for a while now. Google unveiled Duplex in 2018 and LaMDA in 2021, but it took until now for Google to really push AI tools out to users. These were all small-time garage projects until ChatGPT pushed Google out of its comfort zone. It’s almost like Google had all the AI advancements, but had no idea where to put them until ChatGPT rolled out. The “Help me write” feature in Gmail looks like it was literally lifted from ChatGPT’s ability to write great emails, pitches, letters, etc. in a variety of styles and emotions… which sort of implies Google’s at that place where they’re too big to have great ideas anymore.

Goomics #291 by Manu Cornet

A look at the Google Graveyard, and how the company treats its most brilliant products

Killed By Google is a website that chronicles all of the products Google axes, from legacy products like Reader (which really infuriated people) to Stadia, which was barely a couple of years old to begin with. The website lists as many as 285 Google products/services that the company just decided to pull the plug on – some of which were still being heavily used (I still use Picasa to this day, no lie). Google operates on what it calls the 20% rule, where employees are encouraged to spend 20% of their overall office time on their own side projects. This culture resulted in some of Google’s biggest hits, like Gmail, AdSense, and Google News… but remember Inbox? A spin-off of Gmail that promised to be a better version of email? It barely lasted – Google pitched the vision and then seemingly got bored or tired of it.

The truth is that Google doesn’t really care about what we users think – altruism isn’t a business. Aside from its core money-making businesses – Search, Gmail, Drive, AdSense, Photos, YouTube, and a few of its cloud/enterprise services – everything is potentially ‘cancellable’. Amazon recently reported that Alexa was a loss-making business to the tune of $10 billion EACH YEAR, so it’s pretty conceivable that Google consigns its Assistant to a similar fate. However, it’s also possible that some young, ruthless startup will light a fire under Google once again, forcing it to come running back to the battlefield with a new and improved Assistant. After all, not many people know this, but Siri was, in fact, a young independent startup that Apple acquired more than a decade ago.

Google Assistant is still way ahead of Siri, hinting at MAJOR complacency

Speaking of Siri, it’s been over a decade since the announcement of the iPhone 4S… and truth be told, Siri hasn’t majorly improved in those years. It still fumbles in places where Google’s own Assistant shines, but don’t mistake that for praise for Google Assistant – it routinely messes up or misbehaves too, just not as much as Siri. In fact, many iPhone users found Siri so useless they devised a way to make ChatGPT their default assistant on the phone, with some even creating a ChatGPT shortcut to efficiently control their smart home. This may be a serious indictment of Siri’s abilities, but it’s also one of Google’s own Assistant. It seems like Google needs a worthy competitor to be motivated to innovate. Bing did that with Google Search… and I can’t help but think that Google will only upgrade its Assistant when Siri drastically improves, or if a much more capable new competitor enters the field.

Goomics #377 by Manu Cornet

AI-powered Google Assistant could have really been a part of Sundar’s legacy

What did Tim Cook do for Apple? Well, the Watch and AirPods launched under him, as did Apple’s push towards building its own silicon chips. Apple became a trillion-dollar company under Cook, and briefly even hit the three-trillion-dollar mark before settling at 2.71 trillion as of now. Satya Nadella transformed Microsoft by pivoting from Windows to building Azure and other cloud-based services. “Microsoft should be like the air you breathe,” he would often say. “Invisible but important.” It’s difficult to really pinpoint Sundar’s legacy at Google, considering the company really fumbled on hardware for a bit, building Tensor only after Apple built its M-series chips. Under Pichai, Google canceled Stadia, the Pixelbook, Hangouts, Duo, and a bunch of other apps and services. It laid off 6% of its entire workforce in early 2023 (more than 12,000 people), and even though the optics of that looked bad as it is, Sundar Pichai also cut himself a pretty sweet $226 million paycheck in 2022. The point I’m trying to make here is that it’s difficult to think of Pichai as a true innovator, because it doesn’t outwardly seem so. At least not two hundred and twenty-six million dollars worth…

However, things could have been different. Instead of Google playing catch-up to OpenAI and other LLMs, Sundar could have doubled down by outdoing them. A conversational AI like Duplex could have really made I/O 2023 much more magical. Sure, you can still talk to Google’s Assistant, but you can’t get it to work across services and data sets the way you can with Bard. For now, Bard is still heavily text-input-based, and one can just hope that there will be some changes in the coming months and years. Until then, at least we’ve got Google’s MusicLM to keep us distracted…

Cover Image via MidJourney
Goomics Cartoon by Manu Cornet

The post Google didn’t talk about THIS even ONCE while discussing AI during their I/O 2023 event first appeared on Yanko Design.

Google’s Project Starline is redefining how we video-chat by using 3D capturing and holograms

Probably spurred by the way the pandemic absolutely upended social communications, Google unveiled Project Starline today at its I/O 2021 event – a one-of-a-kind teleconferencing system that ditches the camera and screen for something much more advanced. Dubbed a ‘magic window’, Project Starline creates a lifelike hologram of the person you’re chatting with. Rather than interacting with a 2-dimensional representation of them, Starline makes it feel like you’re in a chatting booth with a real person sitting behind a sheet of glass… and it’s all thanks to incredibly complex 3D scanning, imaging, and AI recognition technology.

The video does a pretty stand-up job of explaining how Project Starline works. Instead of two parties staring at their phone screens, Starline’s video booth allows people to interact with each other via rather futuristic holograms. It genuinely feels like having the other person right in front of you, and the 3D hologram can be viewed from multiple angles for that feeling of ‘true depth’.

The technology Google is currently using is far from anything found in regular consumer tech. According to WIRED, Project Starline’s video booth uses an entire slew of depth sensors to capture you and your movements (while an AI isolates you, the foreground, from the background). The 3D video is then sent to a “light field display” that lets the viewer see a complete 3D hologram of the person they’re talking to. In a demo video, people using the tech describe how lifelike the experience is. It’s “as if she was right in front of me,” one person says.
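That foreground-isolation step is easier to picture with a toy example. The sketch below is a minimal stand-in, not Google’s actual pipeline – Starline reportedly uses far more sophisticated AI segmentation, while this simply thresholds a depth map in Python with NumPy – but it shows the basic idea of using depth to cut the subject out of the background before transmission:

```python
import numpy as np

def isolate_foreground(rgb, depth, max_depth_m=1.5):
    """Keep pixels closer than max_depth_m; zero out the background.

    rgb:   (H, W, 3) uint8 color frame
    depth: (H, W) float32 depth map in meters, as a depth sensor would give
    """
    mask = depth < max_depth_m        # True wherever the subject sits
    result = np.zeros_like(rgb)
    result[mask] = rgb[mask]          # copy only foreground pixels across
    return result, mask

# Toy frame: a "person" occupying the center at 1 m, background wall at 3 m
depth = np.full((4, 4), 3.0, dtype=np.float32)
depth[1:3, 1:3] = 1.0
rgb = np.full((4, 4, 3), 200, dtype=np.uint8)

fg, mask = isolate_foreground(rgb, depth)
print(mask.sum())  # 4 foreground pixels survive the cut
```

A real system would feed the masked color plus depth into a 3D reconstruction step; the point here is just that depth data makes the subject/background split almost trivial compared to doing it from color alone.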

Project Starline is still in an incredibly nascent stage. It uses highly specialized (and ridiculously expensive) equipment, and it hasn’t even been cleared for sale by the FCC yet, which means we’re potentially years away from being able to chat with 3D holograms of each other. There’s also the question of how our existing internet connections could support this dense and heavy image transfer – after all, you’re not video chatting, you’re 3D chatting. Notably, the tech also seems to work only with one-on-one chats (there’s a small snippet of a 3-person chat, although the third person’s a baby), and group chats seem a bit of a stretch for now. However, if the demo is as real as the Google Duplex demo we saw a few years back (where an AI booked a reservation at a salon via phone call), Project Starline might have completely reinvented video chats. Can’t wait for the day when smartphones have this technology within them!
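The bandwidth worry holds up to rough arithmetic. Assuming – purely illustratively, since Google hasn’t published Starline’s actual specs – a single uncompressed 1080p color-plus-depth stream at 30 frames per second:

```python
# Back-of-the-envelope bandwidth for ONE uncompressed RGB-D view.
# Every number here is an illustrative assumption, not a Starline spec.
width, height = 1920, 1080
fps = 30
bytes_per_pixel = 3 + 2   # 3 bytes of RGB color + 2 bytes of 16-bit depth

bytes_per_sec = width * height * bytes_per_pixel * fps
gbits_per_sec = bytes_per_sec * 8 / 1e9
print(f"{gbits_per_sec:.1f} Gbit/s per view, before compression")
```

And a light field display needs many such views at once, multiplying that figure several times over – which is why heavy compression and a very fast connection would be non-negotiable before this ever reaches homes.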

Designer: Google

Google’s ‘Search On’ event will reveal new AI-powered features on Thursday

If you’re not done with presentations after Apple’s iPhone 12 launch earlier, set some time aside on Thursday afternoon to find out what’s new in search. Google just sent out invites for a live stream event on October 15th at 12 PM PT / 3 PM ET where...

The first public Android 11 beta will be available on June 3rd

Google I/O isn’t taking place this year because of the COVID-19 pandemic, so you might be wondering what’s next for Android, which Google typically spotlights at the event. It’s releasing the Android 11 beta a bit later than usual this year, with a v...

How Android Q supports 5G apps and why you should care

When Francesco Grilli and his peers were working on the 4G standard, they had a few ideas as to what the popular use cases might be. Video calls over the internet, perhaps, or rich messaging content, they thought. "In the end, none of that really hap...

Google improved Android Auto by making it act more like your phone

Every year, Android gets a chance to reinvent itself on smartphones with new features and new design flourishes. The same can't be said of Android Auto, Google's phone-powered in-car interface: It's tremendously helpful for drivers, and its feature s...

Here’s all the important stuff Google announced at I/O 2019

A better, faster, stronger Google is in store for 2019. During its I/O developer conference on Tuesday, the company unveiled dozens of updates to every corner of the Google ecosystem; from search and Google Assistant to the next generation of Android...

The Nest Hub Max is the first addition to Google’s revamped Nest family

Quite a few noteworthy things happened at today’s Google I/O event, including some major breakthroughs in the way the company processes, stores, and protects data… however, on the hardware front, the Nest Hub Max may have been the keynote’s biggest reveal. Google has shifted all its home accessories to the Nest brand, leaving core operations to Google and hardware for homes to its then-thermostat-and-security company, Nest. The Nest Hub Max is, in a lot of ways, like the Google Home Hub, but it takes the mantle from its predecessor and introduces some pretty big changes.

First of all, the Nest Hub Max is a pretty large pivot from the Home Hub in terms of visual privacy. While the Home Hub was constantly touted as a camera-less smart home device that didn’t capture you or what you did, the latest Nest Hub Max does quite the opposite. It comes with a camera that sports facial recognition, allowing you to make and receive calls, interact with the display in ways that seemed impossible before, and to an extent even control how the Hub Max behaves around different members of your family.

In many ways, the Nest Hub Max is like a smartphone for your house. Used collectively by all members of the family, the Nest Hub Max combines all of Google and Android’s stellar products/features into one package that sits on your kitchen tabletop, or your coffee table, or even your mantelpiece.

The Nest Hub Max serves all the purposes its previous iteration did. It plays videos and music, lets you control smart-home equipment, set reminders and wallpapers, and now even lets you video call. The device packs a better set of speakers, making it a much more capable playback device, and sports facial and gesture recognition. However, given the company’s reputation of knowing absolutely everything about you, some would find the Hub Max disconcerting. The device greets you by name when you enter the room, which obviously means it stores facial recognition data… a fact that might give some people chills. The device does come with a switch at the back that disconnects the camera and microphone (so the product isn’t perpetually watching and listening to you), but that’s hardly reassuring for people like me. Would I trade that level of privacy for some incredibly useful features to make my home and my family feel enriched? Maybe not… but there are certainly people out there who’d love to own this cutting-edge tech in their homes.

Designer: Google Nest

Image Credits: The Verge