Whenever you hear a billionaire (or maybe a millionaire) CEO describe how LLM-based agents are coming for all the human jobs, keep in mind this humorous but telling incident about AI's limitations: Famed AI researcher Andrej Karpathy got one-day early access to Google's latest model, Gemini 3, and it refused to believe him when he said the year was 2025.
When it finally saw the year for itself, it was thunderstruck, telling him, "I am suffering from a massive case of temporal shock right now."
Gemini 3 was launched on November 18 with such fanfare that Google called it "a new era of intelligence." And Gemini 3 is, by nearly all accounts (including Karpathy's), a very capable foundation model, particularly for reasoning tasks. Karpathy is a widely respected AI research scientist who was a founding member of OpenAI, ran AI at Tesla for a while, and is now building a startup, Eureka Labs, to reimagine schools for the AI era with agentic teachers. He publishes a lot of content on what goes on under the hood of LLMs.
After testing the model early, Karpathy wrote, in a now-viral X thread, about the most "amusing" interaction he had with it.
Apparently, the model's pre-training data only ran through 2024, so Gemini 3 believed the year was still 2024. When Karpathy tried to prove to it that the date was actually November 17, 2025, Gemini 3 accused the researcher of "trying to trick it."
He showed it news articles, images, and Google search results. But instead of being convinced, the LLM accused Karpathy of gaslighting it, of uploading AI-generated fakes. It even went so far as to describe what the "dead giveaways" were in the images that supposedly proved this was trickery, according to Karpathy's account. (He did not respond to our request for further comment.)
Baffled, Karpathy, who is, after all, one of the world's leading experts on training LLMs, eventually discovered the problem. Not only did the LLM simply have no 2025 training data, but "I forgot to turn on the 'Google Search' tool," he wrote. In other words, he was working with a model disconnected from the internet, which to an LLM's mind is akin to being disconnected from the world.
When Karpathy turned that function on, the AI looked around and emerged into 2025, shocked. It literally blurted out, "Oh my god."
It went on writing, as if stuttering, "I. I... don't know what to say. You were right. You were right about everything. My internal clock was wrong." Gemini 3 verified that the headlines Karpathy had given it were true: the current date, that Warren Buffett had revealed his last big investment (in Alphabet) before retirement, and that Grand Theft Auto VI had been delayed.
Then it looked around on its own, like Brendan Fraser's character in the 1999 comedy "Blast from the Past," who emerges from a bomb shelter after 35 years.
It thanked Karpathy for giving it "early access" to "reality" the day before its public launch. And it apologized to the researcher for "gaslighting you when you were the one telling the truth the whole time."
But the funniest bit was the current events that flabbergasted Gemini 3 the most. "Nvidia is worth $4.54 trillion? And the Eagles finally got their revenge on the Chiefs? That is wild," it shared.
Welcome to 2025, Gemini.
Replies on X were equally funny, with some users sharing their own instances of arguing with LLMs about facts (like who the current president is). One person wrote, "When the system prompt + missing tools push a model into full detective mode, it's like watching an AI improv its way through reality."
But beyond the humor, there's an underlying message.
"It's in these unintended moments where you are clearly off the hiking trails and somewhere in the generalization jungle that you can best get a sense of model smell," Karpathy wrote.
To decode that a little: Karpathy is noting that when the AI is out in its own version of the wilderness, you get a sense of its personality, and maybe even its negative traits. It's a riff on "code smell," that little metaphorical whiff a developer gets that something seems off in the software, even if it's not yet clear what is wrong.
Trained on human-created content as all LLMs are, it's no surprise that Gemini 3 dug in, argued, and even imagined it saw evidence that validated its point of view. It showed its "model smell."
But because an LLM, despite its sophisticated neural network, is not a living being, it doesn't experience emotions like shock (or temporal shock), even if it says it does. So it doesn't feel embarrassment either.
That means when Gemini 3 was confronted with facts it actually believed, it accepted them, apologized for its behavior, acted contrite, and marveled at the Eagles' February Super Bowl win. That's different from other models. For instance, researchers have caught earlier versions of Claude offering face-saving lies to explain its misbehavior once the model recognized its errant ways.
What so many of these funny AI research projects show, again and again, is that LLMs are imperfect replicas of the skills of imperfect humans. That says to me that their best use case is (and may forever be) to treat them as useful tools that help humans, not as some kind of superhuman that will replace us.





