Glue pizza? Gasoline spaghetti? Google explains what happened with its wonky AI search results


If you were on social media over the past week, you probably saw them. Screenshots of Google's new AI-powered search summaries went viral, mostly because Google was allegedly making wild suggestions like adding glue to your pizza, cooking spaghetti with gasoline, or eating rocks for optimal health.

That was just the start.

Other particularly egregious examples also went viral, seemingly showing the rogue AI feature suggesting mixing bleach and vinegar to clean a washing machine, which can produce potentially lethal chlorine gas, or jumping off the Golden Gate Bridge in response to the query "I'm feeling depressed."

So what happened, and why did Google's AI Overviews suggest these things?

First, Google says, the vast majority of what went viral wasn't real.

Many screenshots were simply fake: "Some of these faked results have been obvious and silly. Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression." Those AI Overviews never appeared, Google says.

Second, many screenshots came from people deliberately trying to get silly search results, like the ones about eating rocks. "Prior to these screenshots going viral," Google said, "practically no one asked Google that question." If nobody is googling a given topic, there is probably not a lot of information available about it, a situation known as a data void. In such cases, the only content available was satirical material that the AI interpreted as accurate.
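To make the data-void idea concrete, here is a minimal sketch assuming hypothetical signals such as historical query volume and a per-source satire flag; the names, fields, and thresholds below are illustrative only, not anything Google has published.

```python
# Hypothetical sketch: skip an AI-generated summary when a query falls
# into a "data void" (few prior searches, few credible sources).
# The Source type and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    is_satire: bool  # assumed label from some upstream classifier

def is_data_void(query_volume: int, sources: list[Source],
                 min_volume: int = 100, min_credible: int = 3) -> bool:
    """Treat the query as a data void if almost nobody searches for it
    and most of what exists is satire or humor content."""
    credible = [s for s in sources if not s.is_satire]
    return query_volume < min_volume or len(credible) < min_credible

# Usage: a query like the rock-eating one had almost no search history,
# and the top result was a satirical article.
sources = [Source("https://example-satire.test/eat-rocks", is_satire=True)]
if is_data_void(query_volume=2, sources=sources):
    print("Fall back to regular search results; do not generate an AI Overview.")
```

The point of the heuristic is that a summary is only as good as its grounding: when both search demand and credible supply are near zero, declining to answer is safer than summarizing satire.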

Google admits that a few odd or inaccurate results did appear. Even those were for uncommon queries, but they did expose some areas that needed improvement. The company identified a pattern in what had gone wrong and made more than a dozen technical improvements, including the following (illustrated with a rough code sketch after the list):

  • Better detection of nonsensical queries that shouldn't show an AI Overview, and limited inclusion of satire and humor content

  • Limited use of user-generated content in responses that could offer misleading advice

  • Triggering restrictions for queries where AI Overviews were not proving helpful

  • Not showing AI Overviews for hard news topics, where freshness and factuality are important, and for many health topics
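Google has not published how these changes are implemented, but a pre-generation gate covering all four fixes might look something like the following hedged sketch. Every name here (the marker list, topic labels, and feedback set) is an assumption for illustration, not Google's actual signals.

```python
# Hypothetical gate deciding whether to show an AI Overview at all.
# All heuristics below are invented stand-ins, not Google's code.
NONSENSE_MARKERS = ("how many rocks", "glue on pizza")   # toy blocklist
RESTRICTED_TOPICS = {"hard_news", "health"}              # factuality-critical
UNHELPFUL_QUERIES: set[str] = set()                      # filled from feedback

def should_show_overview(query: str, topic: str, sources: list[dict]) -> bool:
    q = query.lower()
    # 1. Better detection of nonsensical or joke queries.
    if any(marker in q for marker in NONSENSE_MARKERS):
        return False
    # 2. Limit reliance on satire and user-generated content.
    grounded = [s for s in sources
                if not s.get("is_satire") and not s.get("is_ugc")]
    if len(grounded) < 3:
        return False
    # 3. Restrict queries where Overviews have not proven helpful.
    if q in UNHELPFUL_QUERIES:
        return False
    # 4. Never answer hard-news or health queries with an Overview.
    return topic not in RESTRICTED_TOPICS

# Example: a health query is gated off even with plenty of clean sources.
print(should_show_overview("is smoking while pregnant safe", "health",
                           [{"is_satire": False, "is_ugc": False}] * 5))
# -> False
```

Notice that every branch fails closed: when any signal looks risky, the system falls back to ordinary search results rather than generating a summary.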

With billions of queries coming in every day, Google says, things will get weird sometimes. The company says it is learning from the mistakes and promises to keep working to strengthen AI Overviews.
