
Earlier this week, during the WWDC keynote, Apple showed off its new iOS 26. For the first time since iOS 7 in 2013, Apple is revamping the operating system’s look and feel, introducing a very Windows Aero-esque design language called “Liquid Glass” (RIP Windows Vista), and since this was the flashy new thing at the keynote, it’s been the week’s hot topic.
However, we also saw teasers of other new features that aren’t getting the same level of attention. Within the segment on iOS, for example, Billy Sorrentino showed off a new capability of Apple’s AI-powered Visual Intelligence, which is called, pretty simply, Image Search. The way it works is that you take a screenshot of anything you see on your iPhone’s screen. Once you have the screenshot, you can hit the Image Search button in the lower right. Using AI, Visual Intelligence will scan the screenshot and search for things it sees or create calendar events for dates and times revealed in the image.
If this sounds familiar, it’s because Google’s Circle to Search does the exact same thing and has been available for over a year now. However, I’m not bringing this up to do the usual “LOL, Apple stealing from Android!” reaction. I’m bringing it up because, based on what we saw in the video, Image Search within iOS 26 seems uncharacteristically bad.
Visual Intelligence in iOS 26: Circle to Search, but bad

During the keynote (starts at 38:27 in the video embedded at the top), Sorrentino makes Image Search seem so easy and powerful. In his first demo, he pulls up a social media feed. There are multiple posts that are only text, and then one image. He takes a screenshot, initiates Image Search, and tells us, the audience, that he’s interested in the jacket the model is wearing in the social media post.
Apple’s own demo on this Circle to Search-esque feature was plagued with bad answers and a poor UI.
Image Search does its thing and pulls up a collection of images that share similarities with the social media post. Note that it doesn’t search for the jacket. The software doesn’t even know that Sorrentino is interested in the jacket because he never indicated that. All the software does is find images that look similar to the one in his screenshot, and Sorrentino acts like this is a marvel. Sir, I’ve been using TinEye to do that since 2008.
Also, note that Image Search ignored everything else going on in the screenshot. It didn't search for the emoji that appears in one of the posts, nor did it search for anything related to the numerous avatar images. Somehow it knew to search only that one image, which seems like something that won't ever happen in real life.

In the next demo, Sorrentino finds an image of a room with a mushroom-shaped lamp. He initiates Image Search again, but this time tells the system to investigate the lamp specifically. He does this by scribbling over the lamp with his finger. Note that he doesn’t circle the lamp, because that would be a dead giveaway of Apple’s intention here, but whatever.
Once he circles to search, er, scribbles on the lamp, he sees another list of images. Notice anything weird, though? None of the lamps on the visible list are the one from the original photo! Even the first result, the one he chooses, is very clearly not the lamp he was looking for, but Sorrentino moves forward with adding it to his Etsy favorites as if this were a big success. My guy, that is not the lamp. The system failed, and you're pretending it succeeded.
You need to use your hands? That’s like a baby’s toy!

In Sorrentino’s final demo, he uses Visual Intelligence to deduce what a photo depicts and ask a question about it. In the example, the photo is of a small stringed instrument. He captures the screenshot and types out a question to ChatGPT. He finds out that the photo is of a mandolin and that this instrument has been used in many popular rock songs.
The glaring thing here is that Sorrentino types out his question. That doesn’t seem very convenient. With Circle to Search, I can just ask my question verbally. Even during the demo, it’s awkward as we watch him thumb out the message about which rock songs use the instrument.
Ultimately, that’s what was so alarming about this whole segment. This is a pre-recorded Apple keynote demo, so you know it will work better here than in real life. But even the demo shows that it is woefully behind Circle to Search in both form and function. I shudder to think how well it will work when it actually lands.
This whole demo was another example of Apple being woefully behind the curve when it comes to helpful implementations of AI tools.
This is just another thing to throw on the pile when it comes to Apple dropping the AI ball. It was late to the game, and everything it's tried to do has either been a direct lift from Google and other Android OEMs or relied on OpenAI to do the real work. Watching this Image Search demo was like watching an overconfident football player stumble through the big game and still try to act like they nailed it.
If nothing else, though, the segment proved a hundred times over that Circle to Search is one of Google's biggest successes in years. How many times has Google made something that Apple then tried to riff on and failed this hard? Granted, I'll give Apple the benefit of the doubt for now. It's possible Image Search could be a lot better when it goes stable in September with the iPhone 17 series. But based on this week's demo, its Circle to Search clone is a dud.