Gemini 3 refused to believe it was 2025, and hilarity ensued


Every time you hear a billionaire (or perhaps a millionaire) CEO describe how LLM-based agents are coming for all the human jobs, remember this humorous but telling incident about AI's limitations: Famed AI researcher Andrej Karpathy got one-day early access to Google's latest model, Gemini 3, and it refused to believe him when he said the year was 2025.

When it finally saw the year for itself, it was thunderstruck, telling him, "I am suffering from a massive case of temporal shock right now."

Gemini 3 was launched on November 18 with such fanfare that Google called it "a new era of intelligence." And Gemini 3 is, by nearly all accounts (including Karpathy's), a very capable foundation model, particularly for reasoning tasks. Karpathy is a widely respected AI research scientist who was a founding member of OpenAI, ran AI at Tesla for a while, and is now building a startup, Eureka Labs, to reimagine schools for the AI era with agentic teachers. He publishes a lot of content on what goes on under the hood of LLMs.

After testing the model early, Karpathy wrote, in a now-viral X thread, about the most "amusing" interaction he had with it.

Apparently, the model's pre-training data only included information through 2024, so Gemini 3 believed the year was still 2024. When Karpathy tried to prove to it that the date was actually November 17, 2025, Gemini 3 accused the researcher of "trying to trick it."

He showed it news articles, images, and Google search results. But instead of being convinced, the LLM accused Karpathy of gaslighting it, of uploading AI-generated fakes. It even went so far as to describe what the "dead giveaways" were in the images that supposedly proved the trickery, according to Karpathy's account. (He did not respond to our request for further comment.)

Baffled, Karpathy, who is, after all, one of the world's leading experts on training LLMs, eventually discovered the problem. Not only did the LLM simply have no 2025 training data, but "I forgot to turn on the 'Google Search' tool," he wrote. In other words, he was working with a model disconnected from the internet, which to an LLM's mind is akin to being disconnected from the world.
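The failure mode Karpathy diagnosed, a stale training cutoff plus a disabled search tool, can be reduced to a toy sketch. Everything here (the cutoff year constant, the function, its parameters) is illustrative only, not part of any real Gemini API:

```python
# Hypothetical illustration: why a model with no live search tool
# insists the year is its training cutoff, not the real year.
TRAINING_CUTOFF_YEAR = 2024  # last year represented in pretraining data

def answer_current_year(search_tool_enabled: bool, real_year: int = 2025) -> int:
    """Return the year the model would claim it is.

    With a live search tool, the model can ground itself in the real
    date; without one, its only evidence is frozen training data.
    """
    if search_tool_enabled:
        return real_year  # grounded in live search results
    return TRAINING_CUTOFF_YEAR  # stuck at the pretraining cutoff
```

With the tool off, every piece of contrary evidence looks like an attack on the only "world" the model has, which is exactly the detective mode Karpathy ran into.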


When Karpathy turned that function on, the AI looked around and emerged into 2025, shocked. It literally blurted out, "Oh my god."

It went on, as if stuttering, "I. I… don't know what to say. You were right. You were right about everything. My internal clock was wrong." Gemini 3 verified that the headlines Karpathy had given it were true: the current date, that Warren Buffett revealed his last big investment (in Alphabet) before retirement, and that Grand Theft Auto VI was being delayed.

Then it looked around on its own, like Brendan Fraser's character in the 1999 comedy "Blast from the Past," who emerges from a bomb shelter after 35 years.

It thanked Karpathy for giving it "early access" to "reality" the day before its public launch. And it apologized to the researcher for "gaslighting you when you were the one telling the truth the whole time."

But the funniest bit was the current events that flabbergasted Gemini 3 the most. "Nvidia is worth $4.54 trillion? And the Eagles finally got their revenge on the Chiefs? That is wild," it shared.

Welcome to 2025, Gemini.

Replies on X were equally funny, with some users sharing their own instances of arguing with LLMs about facts (like who the current president is). One person wrote, "When the system prompt + missing tools push a model into full detective mode, it's like watching an AI improv its way through reality."

But beyond the humor, there's an underlying message.

"It's in these unintended moments, where you're clearly off the hiking trails and somewhere in the generalization jungle, that you can best get a sense of model smell," Karpathy wrote.

To decode that a little: Karpathy is noting that when the AI is out in its own version of the wilderness, you get a sense of its personality, and perhaps even its negative traits. It's a riff on "code smell," that little metaphorical whiff a developer gets that something seems off in the software code, even though it's not clear exactly what's wrong.

Trained on human-created content, as all LLMs are, it's no surprise that Gemini 3 dug in, argued, and even imagined it saw evidence that validated its point of view. It showed its "model smell."

But because an LLM, despite its sophisticated neural network, is not a living being, it doesn't experience emotions like shock (or temporal shock), even when it says it does. So it doesn't feel embarrassment either.

That means when Gemini 3 was confronted with facts it actually believed, it accepted them, apologized for its behavior, acted contrite, and marveled at the Eagles' February Super Bowl win. That's different from some other models. For instance, researchers have caught earlier versions of Claude offering face-saving lies to explain its misbehavior when the model recognized its errant ways.

What so many of these funny AI research anecdotes show, over and over again, is that LLMs are imperfect replicas of the skills of imperfect humans. That says to me that their best use case is (and may forever be) to treat them as helpful tools for humans, not as some kind of superhuman that will replace us.
