Gemini Will Soon Tap Into Your Google Account Data To Become A More ‘personal’ AI

Gemini Live interface on a phone

Summary

  • Gemini AI will begin using personal data from your Google account to provide a more personalized experience.
  • The information comes from a post on X from Google’s VP of AI.
  • Gemini will access your Gmail, Calendar, YouTube history, Photos, and other Google services.

It may come as a shock to learn that Google’s Gemini AI isn’t already tapping into everyone’s emails and calendars. It’s true, but not for long. The AI is about to turn your Android device into a far more personalized assistant.


Google VP Josh Woodward, who oversees the Gemini app and leads Google Labs, laid out the plans for Gemini to become the most “personal, proactive, and powerful” AI assistant yet. The biggest change will come from Gemini using your own Google account data to help you more effectively.

Personalized AI thanks to your online life


A screenshot of Josh Woodward's post on X, outlining the update to Gemini.

Woodward took to X to outline what’s internally referred to as ‘pcontext,’ or personalized context. It would allow Gemini to access most of your Google stack, including the following:

  • Gmail
  • Photos
  • Calendar
  • YouTube history

There was no mention of whether Gemini will be able to access your Google searches, chats, Keep notes, or Docs. The post also didn’t clarify whether this will apply to Google Workspace accounts or only to personal accounts.

The idea is that Gemini could draw from this information to offer more personalized responses. But it goes beyond simply knowing your past chats. The goal is for Gemini to proactively anticipate your needs before you even ask. That could look like surfacing calendar reminders or pulling up relevant documents without a prompt. Woodward calls this ‘anticipatory AI.’


A screenshot of part two of Josh Woodward's post on X, highlighting some of the perks of the new Gemini update.

Gemini 2.5 Pro and Flash already support advanced multimodal capabilities, with models that can generate code, video, and images. Google wants to build on that by weaving this personalized context into its broader ecosystem.

Free for students, and other features


A screenshot of the last part of Josh Woodward's post on X, where he highlighted all the updates Google has shipped in the past three weeks.

Google has been moving fast on Gemini lately. In just the past few weeks, the company launched Gemini 2.5 Flash, enabled image uploads and editing, added LaTeX support, and announced free Gemini plans for all US students. Woodward said similar access will expand to more countries soon.


Naturally, there are some privacy concerns around this. Google says all access will require explicit user permission. Woodward mentioned we’ll all learn more when Google I/O kicks off at the end of this month.