Android Smart Assistants Are Becoming Increasingly Aware Of Context


AI has taken the world by storm since OpenAI released ChatGPT in late 2022, kicking off the current AI boom. Alternative AI tools like Gemini, Grok, and DeepSeek have attempted to establish themselves as the go-to AI assistant, with mixed success. While AI chatbots still regularly run into pitfalls, most notably around the safe use of AI, the technology has improved rapidly since then.

The difference between AI assistants now and the first ChatGPT models is night and day. Most notably, AI assistants have a much greater awareness of context. Based on your surroundings, what’s on your device, or previous conversations, AI assistants can tailor their responses to you. But it’s hard to keep up with what AI is capable of in 2025, so we’ve gathered this list of the ways AI assistants have become increasingly aware of context.


4 AI can remember what you care about

No need to remind Gemini that you don’t eat glue


Imagine that every year you visit your parents’ house, and you ask them to make you lunch. A few minutes later, they serve you a tuna sandwich. Unfortunately, you don’t like tuna sandwiches and tell them so. They are grateful that you told them this, and they make you a cheese sandwich instead. Next year, you ask them to make you lunch, and they again make you a tuna sandwich.


This is what talking to early AI chatbots was like. While they could recall your responses within the same conversation, information from earlier sessions was completely forgotten. Now, Gemini can recall your preferences from earlier conversations and use them in its responses.

For example, let’s say you tell Gemini that your favorite activity on holiday was hiking. In a later conversation, you can ask it for hotel recommendations, and it should prioritize hotels close to hiking or walking trails. These preferences go beyond simple likes and dislikes. You can ask Gemini to summarize information in a specific way (e.g., never use bullet points), and you can change or remove these preferences at any time.

Gemini isn’t the only smart assistant to offer this feature; ChatGPT’s Memory feature works almost identically. These features are a massive improvement for smart assistants, and a big step toward an experience akin to human conversation.
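Under the hood, an app can approximate this kind of cross-session memory by persisting preferences and injecting them into each new conversation. Here’s a minimal sketch in plain Python; the file format is an assumption for illustration, not how Gemini or ChatGPT actually store memories, and the final prompt would be passed to whatever model API you use.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")

def load_preferences() -> list[str]:
    """Read preferences saved by earlier sessions, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_preference(pref: str) -> None:
    """Persist a new preference so future sessions can see it."""
    prefs = load_preferences()
    if pref not in prefs:
        prefs.append(pref)
        MEMORY_FILE.write_text(json.dumps(prefs, indent=2))

def build_prompt(user_message: str) -> str:
    """Prepend remembered preferences to a brand-new conversation."""
    memory = "\n".join(f"- {p}" for p in load_preferences())
    return (
        "Known user preferences from earlier sessions:\n"
        f"{memory}\n\n"
        f"User: {user_message}"
    )

# Session 1: the user states preferences, and the app stores them.
save_preference("Favorite holiday activity is hiking")
save_preference("Never use bullet points in summaries")

# Session 2, days later: the preferences survive the restart.
print(build_prompt("Recommend a hotel for my trip."))
```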

3 AI can perform complex tasks with minimal user input

It can do a lot with a little


AI-powered assistants are only as useful as the information they can access. That hunger for data is a big part of why many AI models have been trained on copyrighted material without consent, with companies in the AI arms race often treating laws and ethics as an afterthought. But while there are serious ethical and legal issues here, there is also plenty that AI can do with data it accesses legally.

Google has vast amounts of data at its disposal. By allowing Gemini to access and use that data, Gemini can perform complex tasks with minimal user input. Using preferences you’ve established in earlier conversations, you can ask Gemini to book you a dinner reservation on a specific date. Drawing on data from Google Maps and Search, it can automatically sign you up for an OpenTable account and book your reservation.
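The mechanics behind this are typically some form of tool calling: the model reads your request and remembered preferences, emits a structured call naming a function and its arguments, and the app executes it. Below is a minimal sketch of that dispatch pattern; `find_restaurants` and `book_table` are hypothetical stand-ins, not Google’s or OpenTable’s actual APIs.

```python
# Hypothetical stand-ins for real data sources (Maps, Search, OpenTable);
# none of this is Google's actual integration.
def find_restaurants(cuisine: str, near: str) -> list[str]:
    """Pretend lookup against a places index."""
    return [f"{cuisine.title()} Spot near {near}"]

def book_table(restaurant: str, date: str, party_size: int) -> str:
    """Pretend booking call; a real agent would hit a booking API here."""
    return f"Booked {restaurant} on {date} for {party_size} people"

TOOLS = {"find_restaurants": find_restaurants, "book_table": book_table}

# In a real assistant, the model itself emits a structured tool call like
# this after reading your request and remembered preferences.
tool_call = {
    "name": "book_table",
    "args": {
        "restaurant": find_restaurants("thai", "downtown")[0],
        "date": "2025-06-13",
        "party_size": 2,
    },
}

# The app's job is just to dispatch whatever the model asked for.
result = TOOLS[tool_call["name"]](**tool_call["args"])
print(result)
```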

As AI companies broaden the range of data their assistants can access, AI chatbots will become steadily more powerful, and as they build up a more detailed picture of your preferences, they can complete these tasks more accurately.


2 AI can perform repetitive tasks without prompting

Let AI take care of boring tasks


As you use your AI assistant more and more, it learns your habits and routines (assuming you’ve let it remember your preferences). A few AI-powered calendar and email tools, like Motion, can already automate repetitive tasks. For example, you can ask your AI assistant to draft a weekly summary email every Friday afternoon, or enable a calendar assistant to automatically reschedule events when you add a higher-priority task. However, we’re at the tipping point for even more unprompted actions.
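Conceptually, a recurring job like that Friday summary is just a scheduled trigger wrapped around a model call. This sketch uses the third-party Python `schedule` package to show the shape of it; `draft_summary` is a hypothetical placeholder for the actual email-drafting call.

```python
import time

import schedule  # third-party: pip install schedule

def draft_summary() -> None:
    """Hypothetical placeholder for the LLM call that drafts the email."""
    print("Drafting weekly summary email...")

# Trigger every Friday afternoon with no further input from the user.
schedule.every().friday.at("16:00").do(draft_summary)

while True:
    schedule.run_pending()
    time.sleep(60)  # check the schedule once a minute
```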

Automating repetitive tasks has become more reliable since the dawn of AI assistants, but it’s a fairly simple job for them to complete. Performing tasks unprompted is the new target for companies like Google and OpenAI.

Imagine you’re leaving the house with your partner, and as you walk out the door, your partner points out the wallet you forgot on the counter. You head back in, grab it, and leave. This is exactly the kind of contextual nudge Google hopes to deliver with Project Astra. Applied to hardware like smart glasses, Project Astra could perform tasks autonomously when it recognizes a frequent action, like leaving the house.

This technology is available now, but it’s still in its infancy. Results are even less accurate than those of regular AI chatbots, and Astra seems to analyze snapshots from your live camera feed rather than the entire stream. Nevertheless, it’s a huge step forward for AI assistants.
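That snapshot behavior maps onto a simple pattern: sample individual frames from the camera at an interval and send each one to a vision model, rather than streaming continuous video. Here’s a rough sketch using OpenCV; `describe_frame` is a hypothetical placeholder for the vision-model call, and none of this reflects Astra’s actual implementation.

```python
import time

import cv2  # third-party: pip install opencv-python

def describe_frame(frame) -> str:
    """Hypothetical placeholder for a vision-model call on one snapshot."""
    return "a wallet left on the kitchen counter"

cap = cv2.VideoCapture(0)  # default camera
try:
    for _ in range(5):       # analyze a handful of snapshots, not the stream
        ok, frame = cap.read()
        if not ok:
            break
        # Only this single frame is sent to the model.
        print("Model saw:", describe_frame(frame))
        time.sleep(2.0)      # sample roughly every two seconds
finally:
    cap.release()
```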

1 AI can make personalized recommendations and suggestions proactively

No need to make a prompt to receive timely information


One of the simplest integrations of AI into our daily lives is the spell checker. I’ve asked Gemini to suggest more concise alternatives to my sentences in its responses, so I’m always improving my grammar when I use it. But even though this is a proactive response from Gemini, it’s still heavily based on my inputs from earlier conversations. The calendar and email AI assistants I mentioned earlier can do similar tasks, but again, that’s still a fairly basic advancement in AI technology. What is more impressive is how AI is being integrated into healthcare.


Fitness apps have collected our data for years, but until now, it was up to us to make sense of it. The more you exercise, the more you need to adapt your fitness routine to suit your changing body. We all know this on a fundamental level; for example, you try to keep raising the number of pushups you do each day. AI can provide a significantly more nuanced interpretation of your health, as exemplified by Oura’s latest health features for its smart rings.

The Oura Advisor mimics a health coach, flagging problems and suggesting alternative actions based on the data it collects from you. This is a genuinely useful implementation of AI, and one that showcases its current capabilities. What’s special about Oura Advisor and similar AI-powered healthcare assistants is that they make these recommendations proactively. You don’t need to keep checking in to see what your next steps should be.
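Stripped down, a proactive advisor is a loop that reviews incoming metrics and pushes a suggestion when something looks off, with no prompt from the user. The toy sketch below shows that shape; the thresholds and the `notify` function are assumptions for illustration, not how Oura Advisor actually works.

```python
from dataclasses import dataclass

@dataclass
class DailyMetrics:
    resting_hr: int      # beats per minute
    sleep_hours: float

def notify(message: str) -> None:
    """Hypothetical push notification; just prints in this sketch."""
    print("Advisor:", message)

def review(metrics: DailyMetrics) -> None:
    """Flag problems proactively, without waiting for the user to ask."""
    if metrics.sleep_hours < 6.5:  # assumed threshold, for illustration
        notify("You slept under 6.5 hours; consider an earlier night.")
    if metrics.resting_hr > 65:    # assumed threshold, for illustration
        notify("Resting heart rate is elevated; maybe swap the run for a walk.")

# The wearable syncs overnight, and the advisor reviews the data unprompted.
review(DailyMetrics(resting_hr=68, sleep_hours=5.9))
```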

Context is essential for the future of AI

Without an accurate understanding of the context surrounding your requests, AI doesn’t have much of a future. Making AI aware of context is how it will truly stand out from older assistants like Siri or Google Assistant, even if areas like safety and accuracy still need plenty of work. Most of these developments remain in their infancy, so it will be a while before we can be fully confident in the technology.
