

It appears that turning Android into a desktop operating system is Google's next big consumer initiative. To put it bluntly, I think this effort fails if all they're doing is making a one-to-one Windows, Mac, or ChromeOS competitor. The path to success that I see is Google developing a next-generation OS that leverages AI and Gemini to enable an entirely new, voice- and agent-first user experience.
Officially, ChromeOS will be adopting more Android under the hood. Last year's announcement did not signal an explicit transition (in the vein of Play Music to YouTube Music, Google Duo to Meet, or the company's various chat app migrations), but reading between the lines, that is what I see coming. Given other recent developments in Android's "desktop mode," I'd say the general direction is to extend Android into a desktop operating system.
Even if desktop Android gets all the user interface and experience basics right, I don’t think that’s enough to compete with Windows and macOS. If Google does everything right, they end up with ChromeOS, but based on Android.
Some positives there could be a native touch and 2-in-1 experience, as well as making cellular connectivity a standard feature. A desktop-class Chrome browser seems inevitable and could address the shortcomings of large screen Android apps.
All that is fine, and I think there’s a strong case to consolidate engineering resources into one platform.
However, this does not beat the big two desktop operating systems. I know this because Chromebooks are still a distant third after all these years.
Maybe Google is happy with that outcome, but I think what they really want is to replicate the success of Android on smartphones to take on Microsoft and Apple in laptops.
To do so, I think Google needs to offer a next-generation operating system.
Desktop Android needs something unique, and I think that could be a Gemini smart assistant that changes how we interact with computers. Fortunately, Google has already created many of those next-gen interactions while developing Android XR for headsets.

I’m thinking of a voice-first user experience that you use to browse websites and control apps, as well as accomplish tasks like searching files and writing/editing text.
Ideally, there's no hotword, with the computer recognizing when you're speaking to it and ignoring utterances directed at yourself or others. The version of Gemini Live powered by Project Astra that I used in Android XR back in December could do this. It was aware of what was on my screen without me needing to preface each command, and on a desktop OS you'd also have a mouse cursor to point with. Meanwhile, Google is working on a Project Mariner browser agent.
I imagine having a web browser and Google Docs side-by-side, with the assistant understanding when you’re transcribing into the document and when you’re issuing commands to browse or open a website to gather information.
Google has achieved a lot of this with its XR headset work, where Android and Gemini are closely integrated. It makes sense for that experience to arrive on desktops before phones, given that even a laptop offers far more compute and battery.
Android at I/O 2024 last May was disappointingly light on new features. My main takeaway was Google saying that it "embarked on a multi-year journey to reimagine Android, with AI at the core." Android XR was the first real sign of that journey, and I hope desktop Android is next.
