There's a somewhat concerning new trend going viral: people are using ChatGPT to figure out the location shown in pictures.
This week, OpenAI announced its newest AI models, o3 and o4-mini, both of which can uniquely "reason" through uploaded images. In practice, the models can crop, rotate, and zoom in on photos, even blurry and distorted ones, to analyze them thoroughly.
These image-analyzing capabilities, paired with the models' ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.
Wow, nailed it and not even a tree in sight. pic.twitter.com/bVcoe1fQ0Z

- swax (@swax) April 17, 2025
In many cases, the models don't appear to be drawing on "memories" of past ChatGPT conversations, or on EXIF data, the metadata attached to photos that can reveal details such as where the picture was taken.
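EXIF data travels inside the image file itself as a tagged segment, which is why a screenshot (a freshly rendered file) carries none of the original photo's metadata. As a rough illustration of what that means at the byte level, here is a minimal, stdlib-only Python sketch that checks whether a JPEG still contains an EXIF block; the segment scan is simplified, and the sample bytes at the bottom are fabricated purely for the demo.

```python
def has_exif(data: bytes) -> bool:
    """Return True if the JPEG byte stream contains an EXIF APP1 segment."""
    if not data.startswith(b"\xff\xd8"):  # JPEG files open with an SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # not a valid segment marker; stop scanning
            break
        marker = data[i + 1]
        if marker == 0xD9:  # EOI marker: end of image
            break
        # Two big-endian length bytes follow the marker (length counts itself)
        length = int.from_bytes(data[i + 2:i + 4], "big")
        # APP1 (0xE1) segments carrying EXIF start with the "Exif\0\0" header
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip marker bytes plus the segment body
    return False


# Fabricated example: a tiny JPEG-shaped stream with an EXIF APP1 segment.
payload = b"Exif\x00\x00" + b"\x00" * 10
segment = b"\xff\xe1" + (len(payload) + 2).to_bytes(2, "big") + payload
fake_jpeg = b"\xff\xd8" + segment + b"\xff\xd9"
print(has_exif(fake_jpeg))          # a file that still carries EXIF
print(has_exif(b"\xff\xd8\xff\xd9"))  # a bare stream with no metadata
```

Real photos embed GPS coordinates inside that same EXIF block, which is exactly what a screenshot strips away, and why the models' guesses here appear to rest on visual clues instead.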
X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and self-portraits, and instructing o3 to imagine it's playing "GeoGuessr," an online game that challenges players to guess locations from Google Street View images.
this is a fun ChatGPT o3 feature. geoguessr! pic.twitter.com/HrcMIxS8yD

- Jason Barnes (@vyrotek) April 17, 2025
It's an obvious potential privacy issue. There's nothing stopping a bad actor from screenshotting, say, a person's Instagram Story and using ChatGPT to try to doxx them.
o3 is insane

I asked a friend of mine to give me a random photo

They gave me a random photo they took in a library

o3 knows it in 20 seconds and it's right pic.twitter.com/0K8dXiFKOY

- Yumi (@izyuuumi) April 17, 2025
To be clear, this sort of location guessing was possible even before the launch of o3 and o4-mini. Trendster ran a number of photos through o3 and an older model without image-reasoning capabilities, GPT-4o, to compare the models' location-guessing skills. Surprisingly, GPT-4o arrived at the same, correct answer as o3 more often than not, and in less time.
There was at least one instance during our brief testing when o3 found a place GPT-4o couldn't. Given a picture of a purple, mounted rhino head in a dimly lit bar, o3 correctly answered that it was from a Williamsburg speakeasy, not, as GPT-4o guessed, a U.K. pub.
That's not to suggest o3 is flawless in this regard. Several of our tests failed: o3 got stuck in a loop, unable to arrive at an answer it was reasonably confident about, or volunteered a wrong location. Users on X noted, too, that o3's location deductions can be pretty far off.
But the trend illustrates some of the emerging risks posed by more capable, so-called reasoning AI models. There appear to be few safeguards in place to prevent this sort of "reverse location lookup" in ChatGPT, and OpenAI, the company behind ChatGPT, doesn't address the issue in its safety report for o3 and o4-mini.
We've reached out to OpenAI for comment. We'll update our piece if they respond.
Updated 10:19 p.m. Pacific: Hours after this story was published, an OpenAI spokesperson sent Trendster the following statement:
"OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response. We've worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and actively monitor for and take action against abuses of our usage policies on privacy."