Google On How Pixel 9’s Add Me Camera Uses AR, TPU

Add Me is one of the Pixel 9’s tentpole camera features, and Google has a blog post today explaining how it was developed. Personally, I’ve noticed that people didn’t really use Add Me until the holiday season.

The Pixel 9 feature was first pitched internally by a member of the “Creative Camera” team in August of 2022.

According to a separate job listing, that team’s mission is to “imagine and build the future of photography and videography. Our team is reinventing digital imagery: from new algorithms for creating the highest-quality images and videos possible on mobile devices, to creating entirely new ways of capturing and reliving our experiences.”

They’ve worked on Night Sight, Best Take, Magic Eraser, and Magic Editor, with the latter two capabilities eventually dropping Pixel exclusivity and coming to Google Photos. The rest of the job description makes for an interesting read:


Google has ambitious plans to continue to be an innovator in this field–and ensure Pixel camera continues to be seen as the biggest selling point for Pixel–while also expanding the reach of our work to other products [like] Google Photos and the Android Ecosystem.

Google will continue to improve mobile and server-side photography image quality, reinventing the way we capture, process, and relive moments. Our research has the potential to help define the future of photography and promote photography’s continued democratization–helping all users capture the important moments in their lives.


When Add Me was first pitched, a few months before the Pixel 7 event, Google said “development was already underway for the Pixel 8 series.” That phone launched in October 2023, a reflection of the long development cycle for these devices and why Add Me ultimately arrived on the Pixel 9.


Beyond Creative Camera, Add Me was a collaboration with the main Pixel Camera team and the Google XR division. Today’s post describes the latter as follows:

(The Google XR team works on Android XR and ARCore, platforms for building augmented and virtual reality experiences.)

Google “explored a few methods” to align and frame the first and second shots before landing on augmented reality.

…developing an interface where the AR feature was self-explanatory — so even those unfamiliar with the technology could use it — wasn’t easy, and took ample experimentation.

Meanwhile, the TPU (Tensor Processing Unit) is broadly credited with making possible the on-device machine learning models that render the AR preview and ultimately merge the two shots. Running them on a GPU or CPU, Google says, wouldn’t have been fast enough for Add Me.
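Google hasn’t published Add Me’s actual models or code, but the general pattern of pushing an on-device model onto dedicated ML hardware instead of the CPU looks roughly like the sketch below, using TensorFlow Lite’s NNAPI delegate as one public route to Tensor’s accelerators. The function name, model file, and tensor shapes here are hypothetical, purely for illustration.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.nio.MappedByteBuffer

// Illustrative only: Add Me's real models and shapes are not public.
// Shows delegating a TensorFlow Lite model to on-device accelerators via NNAPI.
fun runMaskModel(
    modelBuffer: MappedByteBuffer,                       // hypothetical .tflite model, memory-mapped
    inputFrame: Array<Array<Array<FloatArray>>>          // e.g. a 1x256x256x3 camera frame
): Array<Array<Array<FloatArray>>> {
    // NNAPI routes supported ops to the device's ML hardware when available,
    // falling back to the CPU otherwise.
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    val interpreter = Interpreter(modelBuffer, options)

    // Assumed 1x256x256x1 output mask, chosen only to make the example concrete.
    val outputMask = Array(1) { Array(256) { Array(256) { FloatArray(1) } } }
    interpreter.run(inputFrame, outputMask)

    interpreter.close()
    nnApiDelegate.close()
    return outputMask
}
```

The point of the delegate is latency: an interactive AR preview has to run a model on every camera frame, which is why Google credits dedicated hardware rather than the GPU or CPU.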
