This Week in AI: Can we (and could we ever) trust OpenAI?

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, Trendster plans to launch an AI newsletter on June 5. Stay tuned. In the meantime, we’re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly, so be on the lookout for more editions.

This week in AI, OpenAI launched discounted plans for nonprofits and education customers and drew back the curtains on its most recent efforts to stop bad actors from abusing its AI tools. There’s not much to criticize there, at least not in this writer’s opinion. But I will say that the deluge of announcements seemed timed to counter the company’s bad press of late.

Let’s start with Scarlett Johansson. OpenAI removed one of the voices used by its AI-powered chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson’s. Johansson later released a statement saying that she hired legal counsel to inquire about the voice and get exact details about how it was developed, and that she’d refused repeated entreaties from OpenAI to license her voice for ChatGPT.

Now, a piece in The Washington Post implies that OpenAI didn’t in fact seek to clone Johansson’s voice and that any similarities were accidental. But why, then, did OpenAI CEO Sam Altman reach out to Johansson and urge her to reconsider two days before a splashy demo that featured the soundalike voice? It’s a tad suspect.

Then there are OpenAI’s trust and safety issues.

As we reported earlier in the month, OpenAI’s since-dissolved Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company’s compute resources, but only ever (and rarely) received a fraction of that. This (among other reasons) led to the resignation of the team’s two co-leads, Jan Leike and Ilya Sutskever, formerly OpenAI’s chief scientist.

Nearly a dozen safety experts have left OpenAI in the past year; several, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company’s projects and operations. But it staffed the committee with company insiders, including Altman, rather than outside observers. This comes as OpenAI reportedly considers ditching its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it harder to trust OpenAI, a company whose power and influence grows daily (see: its deals with news publishers). Few companies, if any, are worthy of trust. But OpenAI’s market-disrupting technologies make the violations all the more troubling.

It doesn’t help matters that Altman himself isn’t exactly a beacon of truthfulness.

When news of OpenAI’s aggressive tactics toward former employees broke (tactics that entailed threatening employees with the loss of their vested equity, or the prevention of equity sales, if they didn’t sign restrictive nondisclosure agreements), Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman’s signature is on the incorporation documents that enacted the policies.

And if former OpenAI board member Helen Toner is to be believed (she’s one of the ex-board members who tried to remove Altman from his post late last year), Altman has withheld information, misrepresented things that were happening at OpenAI and in some cases outright lied to the board. Toner says that the board learned of the release of ChatGPT through Twitter, not from Altman; that Altman gave wrong information about OpenAI’s formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast a critical light on OpenAI, tried to manipulate board members to push Toner off the board.

None of it bodes well.

Here are some other AI stories of note from the past few days:

  • Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make faking a politician’s statement fairly trivial.
  • Google’s AI Overviews struggle: AI Overviews, the AI-generated search results that Google started rolling out more broadly earlier this month on Google Search, need some work. The company admits this, but claims that it’s iterating quickly. (We’ll see.)
  • Paul Graham on Altman: In a series of posts on X, Paul Graham, the co-founder of startup accelerator Y Combinator, dismissed claims that Altman was pressured to resign as president of Y Combinator in 2019 because of potential conflicts of interest. (Y Combinator has a small stake in OpenAI.)
  • xAI raises $6B: Elon Musk’s AI startup, xAI, has raised $6 billion in funding as Musk shores up capital to compete aggressively with rivals including OpenAI, Microsoft and Alphabet.
  • Perplexity’s new AI feature: With its new capability Perplexity Pages, AI startup Perplexity is aiming to help users make reports, articles or guides in a more visually appealing format, Ivan reports.
  • AI models’ favorite numbers: Devin writes about the numbers different AI models choose when they’re tasked with giving a random answer. As it turns out, they have favorites, a reflection of the data each was trained on.
  • Mistral releases Codestral: Mistral, the French AI startup backed by Microsoft and valued at $6 billion, has released its first generative AI model for coding, dubbed Codestral. But it can’t be used commercially, thanks to Mistral’s fairly restrictive license.
  • Chatbots and privacy: Natasha writes about the European Union’s ChatGPT taskforce, and how it offers a first look at untangling the AI chatbot’s privacy compliance.
  • ElevenLabs’ sound generator: Voice cloning startup ElevenLabs released a new tool, first announced in February, that lets users generate sound effects through prompts.
  • Interconnects for AI chips: Tech giants including Microsoft, Google and Intel (but not Arm, Nvidia or AWS) have formed an industry group, the UALink Promoter Group, to help develop next-gen AI chip components.