Google vs. Apple in an Age of Integrated AI

Google I/O 2025 showed that Gemini is not just a set of capable AI models, but that Google is leading the industry in integrating them into its ecosystem. Ironically, Gemini’s success might not mean that Android gains against iOS.

Summary and Competitive Context

At Google I/O 2025, Google showed that Gemini is not just a series of highly capable AI models; it is becoming well integrated across Google's hardware and software ecosystem. The keynote still managed to mix messages among AI that is ready today, AI that is coming soon, and AI that is still science fiction for now. Still, Google is steadily improving its AI models while also working to make AI more useful with tools like AI video editors, image generators that can handle text, and contextual awareness that lets Gemini draw on your data in its responses. And there is more to come: the AndroidXR demos we got suggest that the platform is nearly ready for production in headsets and glasses. You'll be able to get in-glasses directions, live-caption or translate speech, and never lose your keys again, because Gemini will remember where you last saw them.

Like Microsoft BUILD, Google I/O had plenty of coding-with-AI examples, because programming is a key early success case for AI. However, Google went far further than Microsoft, showing Gemini used for consumer and creative use cases including Search and extremely sophisticated video generation, as well as enterprise services alongside HP. Techsponential will be at Apple’s WWDC next month, but Apple has already admitted that what it promised at last year’s WWDC is delayed; there may be plenty of exciting announcements coming from Cupertino in June, but Apple is well behind Google in AI. Meta Connect is this fall. Meta remains ahead of Google in XR wearables today, but Google has a mobile platform and Meta doesn’t, and what I saw of AndroidXR suggests that Google is working to close the gap in XR. That leaves OpenAI and its (almost certainly not coincidentally timed) $6.5 billion acquisition of Jony Ive’s IO. Whatever AI-driven wearable IO is working on could be the phone accessory to beat if it ships as planned in 2027, but it’s awfully hard to see this as a dire competitive threat to Google when we don’t even know what it is, let alone what it is supposed to do and how much it will cost.

When Will Google’s AI Advantage Lead to Pixel > iPhone?

When Apple delayed the more advanced features of Apple Intelligence, it wasn’t clear if rivals would be able to take advantage. Now we know: Google has a significant lead over Apple when it comes to integrating useful AI capabilities into its mobile platform. Will that lead consumers to switch from iPhones and buy Pixel or Galaxy instead?

Probably not, but not for the reason you might think. These capabilities are still new, and some are still in the lab. Google’s own list of Gemini app and service integrations is mostly limited to Google’s and Samsung’s own apps: ask Gemini to read a schedule and add it to your calendar, and it works – provided you use Google’s calendar, not Microsoft Outlook. Google has a significant lead, but it still has a lot of work to do. In the meantime, IF Apple can get Apple Intelligence working AND it can convince its developers to support agentic Siri interactions with their apps, Apple could still generate considerable ecosystem lock-in. This would require a cultural and engineering change in Apple’s AI group – which Apple is reportedly attempting – and also a cultural and process/economics change in Apple’s developer relations – which Apple has shown no clear sign of making.

Apple may not have to catch up to keep its base happy. While it works on expanding Siri and building out its own agentic/app integrations, Apple could simply lean harder on partner models like OpenAI’s to supplement Siri, and I expect that is what Apple will do. That would put Google in a similar situation to Search today, only with better competition. If Google were Apple or Microsoft in the 2000s, it would keep its software, OS platforms, and AI models exclusive to confer maximum competitive advantage on Android. However, Google is fundamentally an advertising company. Google doesn’t make money on Android (not directly, anyway). Ad monetization requires the most eyeballs, not the highest hardware margins (like Apple) or platform software margins (like Gates/Ballmer-era Microsoft). I expect Google to respond by offering Gemini integration on iOS to Apple to get the broadest distribution, rather than letting Apple incorporate OpenAI or Anthropic models more deeply into iOS – even if doing so disadvantages Android. It remains to be seen whether Google will be allowed to pay Apple $20 billion annually for the privilege, as it had been doing with Search before a recent court decision seemingly ruled that out.

Long term, XR may be Google’s best way of building an AI-infused ecosystem around Android that pulls people away from iOS. The challenge there will be fending off not only Apple’s own spatial computing wearables, but also wearables from Meta (and maybe OpenAI) that are being designed to work alongside Apple’s devices.

Unraveling Google I/O 2025 Announcements

Google’s I/O announcements were extremely impressive, but as usual, Google mixed its messages and audiences in the keynote, with the presentation jumping around among consumers, enterprises, developers, AI researchers (and recruiters), and investors. As is my tradition, the rest of this report groups the announcements together the way Google might have done if it wanted to tell a cohesive narrative.

Investors & Researchers

Google wanted to make the point that its investments in AI are rational, so it put up a graph showing just how rapidly people are using its AI. “Monthly tokens processed” is a poor metric for AI use, but the graph is moving sharply up and to the right; it now stands at 480 trillion tokens per month. Portions of the keynote discussing Google’s progress in growing its AI models over time seemed aimed at convincing AI researchers to choose Google over rivals. Consumers don’t care about what’s under the hood today, let alone three generations ago.

Google reports that AI Overviews in Search are driving growth, and a Gemini 2.5-based AI Mode is now broadly available to everyone in the U.S. Presumably, Search growth drives ad sales, and ad sales drive profits. “Presumably” is doing a lot of work here: Google did not clarify how it expects to integrate advertising with AI-driven responses, especially once Gemini is your personal Jarvis/C3PO. Will Google resort to product placement akin to Laura Linney in The Truman Show?

Google also hinted at monetizing Gemini through shopping and directly for personal and enterprise use (see below).

Consumers: Android

Google moved up its consumer Android messaging to an event the week before Google I/O to free up time at the keynote. The Android news included a focus on partners, security, and a design refresh. Here's Techsponential’s summary and analysis of The Android Show.

Consumers: Search and Gemini

In addition to a better model driving AI Overviews in Search, sports and financial search queries will get AI-generated graphs in responses this summer.

Circle to Search is getting upgraded to Search Live: you'll be able to show videos to Search and get helpful advice.

The Shopping Graph is being incorporated into AI search to aid shoppers, so that’s another way for Google to monetize. Google will provide search-to-sale integration for an unspecified number of product categories; the ‘wow’ feature was a personalized AI-generated virtual mannequin for trying on clothing in the browser.

Google I/O conformed to “Avi’s Law for AI Demos,” which is that every AI demo must include a segment where someone plans a trip.*

Google Meet gets real-time translation between English and Spanish today, with more languages coming. It was not a live demo, and I’m skeptical that it works quite as seamlessly as shown in the video clip – because otherwise we would have gotten a live demo – but when it does work this well it is going to be extraordinary, and it should be as valuable to enterprises as to consumers.

Google has integrated features from Project Astra into Gemini Live for Android and iOS, which have serious uses – helping people with low vision see things and navigate their environments – and silly ones – Gemini can see what you're looking at and drolly correct you when you tell it nonsense. This is obviously a preview of functionality that will be even more useful on glasses when those are ready for market (see below).

Google is making bold claims about the future of Gemini as a personalized AI that understands your context and can be proactive. The first instance of this will be “Personalized Smart Replies,” which add personal context and are coming to Gmail this summer. Personalization is opt-in, and Google claims that it is "secure," but it is unclear whether processing is kept on device or whether the personalization works across devices and services in some way. This is more than AI; it is a software ecosystem play, in which Gemini Live will get proactive when it sees things on your calendar, in Keep, and eventually in your files and email.

This was followed by a video segment on Google Project Astra Pro that makes Gemini Live into an AI so capable that it matches Marvel's Jarvis, complete with the ability to understand when you are and aren't talking to it. Google did not make it clear how much of this video is real today, how much is partly working in the lab when connected directly to a server farm, and how much is just science fiction in a “future of AI” video.

Enterprise

Coding is the big early category win for AI, and Google I/O is still a developer conference, so Google showed off just how easy it is to vibe code with Gemini 2.5, along with new developer features and security improvements. The flashiest demo was writing an app based on a design scribbled on a napkin.

Google is rebranding its Starline videoconferencing system as "Google Beam." Google is still working with HP on the hardware and distribution, and it is still on track to ship this year. I've had a demo, and Google Beam gives you a visceral sense that you are sitting in front of a live person rather than a video of someone potentially on the other side of the world. Enterprise clients are already lined up to buy pairs of Google Beams; while the system is almost certainly at least a six-figure purchase, if it saves on flights it should pay for itself quickly. The new feature announced alongside the rebranding is live simultaneous translation; I was not able to test this feature, but other analysts at Google I/O did and told me that it is still a work in progress.

Google showed off new photo and video generation models that have definite consumer uses – and will eventually migrate to the free pricing tier – but at launch are aimed at paying enterprise customers and prosumers.

Imagen 4 in Gemini Live is better with detail and can actually deal with text in images. Google says that it is 10x faster than Imagen 3. The image of the Android and Apple logos playing tug of war that I used in this report was generated by Gemini Live in under 10 seconds and required just a single prompt. It’s a simple image construct, but this is the first time I’ve gotten exactly what I asked for on the first try. Google’s new Veo 3 can generate short video clips, now with sound, and we’ve already seen lots of experiments online – and in Joanna Stern’s Wall Street Journal column – that show just how capable it can be, even if there are still plenty of flaws and limitations. To make these images and video clips usable for presentations and storytelling, Google built Flow, editing software for AI-generated video.

Is Google monetizing all these tools (and some of its more advanced Gemini models)? Yes. Yes it is. Google AI Pro is $20/month and is the plan we used to generate the image above. Google AI Ultra, which includes Veo 3 access, is $250/month, putting it out of reach for consumers but perfectly reasonable for SMBs and enterprises using these tools to create video or build services around.

AndroidXR

Judging by the use cases that Google demonstrated, the company’s XR efforts seem aimed primarily at consumers. Google expects you to wear different devices for different purposes and locations: more immersive headsets indoors for work and entertainment, lighter and more mobile glasses form factors for outdoor and transit use. This is now a fairly accepted point of view: Meta has its Quest 3 and Ray-Ban Meta smart glasses, and even its early prototype Orion AI glasses will not replace VR headsets. Apple is currently just in the VR-with-passthrough business, but Tim Cook has repeatedly said that Apple plans glasses form factors as well (and Bloomberg’s Mark Gurman has plenty of leaked roadmap confirmation). What isn’t clear is whether a single OS platform can address these different form factors and use cases; I was initially skeptical, but after seeing how Gemini is a core part of the experience across the board, I am coming around to Google’s point of view.

The first AndroidXR device will be a headset from Samsung and Qualcomm coming out later this year, currently code-named Project Moohan (“Infinity” in Korean). I got a live demo, and it was polished and nearly ready for launch. The hardware is lighter than the Apple Vision Pro and feels lighter than it is thanks to a comfortably rigid strap, though it could still use more padding in spots. Project Moohan/AndroidXR takes a similar approach to Apple Vision Pro with its gesture-based interface; Samsung leans a bit more heavily on hand tracking, while Apple does more with eye tracking. Google is doing a lot more with Gemini, though – voice is as much a part of the UI as gestures. I found it easy to navigate. Google showed off Maps experiences and native YouTube – a key advantage over rivals – and mimicked Apple’s 2D-to-3D image AI sorcery. Scrolling through memories is one of my favorite things to do in the Apple Vision Pro, and my personal library is already stored in Google Photos, so this is great… but it isn’t enough to justify the purchase of a headset. 2D Android apps should work in AndroidXR, and perhaps browser-based apps will work better in Chrome for AndroidXR than they do in Safari in visionOS, but Google and Samsung will need native apps for Project Moohan, just like Apple does for Apple Vision Pro.

The next device to launch in the AndroidXR ecosystem is expected to be Project Aura with XREAL. These are optical see-through glasses that are tethered to smartphones. Like Project Moohan, they will use Qualcomm’s XR platform, though the specific silicon has not been announced. XREAL is promising a 70-degree field of view (FOV), considerably better than the 57-degree FOV in the XREAL ONE Pro that just started shipping. We should get more information about Project Aura at AWE next week; they appear to be a middle ground between an immersive headset and less-capable glasses with displays. By offloading the battery and some processing to a tethered phone or computing puck, Project Aura should also be more affordable than standalone smart glasses and both more affordable and more comfortable than headsets, while offering some of their capabilities when it comes to consuming content.

(The XREAL ONE Pro could theoretically provide full XR capabilities with the addition of its optional camera module, but XREAL is unlikely to generate sufficient developer traction on its own, which is why XREAL is so keen to work with Google. For now, XREAL’s existing product line is the Goldilocks way of watching content, gaming, or working privately with a large virtual screen, comfortably and at a reasonable price. I wore XREAL ONE Pro glasses on the flights to and from Google I/O and made a small dent in my TV backlog.)

Standalone smart glasses with integrated displays and agentic AI are the holy grail for tech companies – we’ve certainly seen them often enough in media (Tony Stark’s EDITH glasses are practically a cast member in Spider-Man: Far From Home). Google famously tried smart glasses in the past, but the technology has dramatically improved since Google Glass, and AI adds entirely new capabilities. The culture has changed in the decade since Google Glass, too: consumers now accept that cameras are everywhere, whether handheld in the 6-inch glass squares everyone holds in front of them at all times or mounted in city centers. Ray-Ban Meta smart glasses have sold in the low millions without backlash, partly because they look like Ray-Ban glasses, not sci-fi appendages. Google is wisely following Meta’s lead – Meta works with EssilorLuxottica – and has found other companies that know how to make and sell things that consumers want to put on their faces: Warby Parker and Gentle Monster. Google is essentially providing a reference model for the technology, and its eyeglass partners will manage the design and sales.

Glasses with cameras, speakers, and microphones should be available before models with small built-in displays, but Google has not provided a timeline for when either type of AndroidXR glasses will ship. Google did provide demonstrations at Google I/O of XR glasses with displays making search queries, superimposing maps into the wearer's field of vision, and translating languages in real time. The Project Astra features were particularly impressive, while the language translation was hit or miss. It’s exciting stuff, though pricing remains a mystery alongside launch timing.

For Techsponential clients, a report is a springboard to personalized discussions and strategic advice. To discuss the implications of this report on your business, product, or investment strategies, contact Techsponential at avi@techsponential.com.

*AI developers believe that humans are constantly planning trips, so this is a prime example of how AI can offload a regularly occurring task. It also presumes that consumers trust today’s AI to execute this complex, high-stakes set of interrelated decisions perfectly.