WitnessAI is building guardrails for generative AI models


Generative AI makes stuff up. It can be biased. Sometimes it spits out toxic text. So can it be "safe"?

Rick Caccia, the CEO of WitnessAI, believes it can.

"Securing AI models is a real problem, and it's one that's especially shiny for AI researchers, but it's different from securing use," Caccia, formerly SVP of marketing at Palo Alto Networks, told Trendster in an interview. "I think of it like a sports car: having a more powerful engine (i.e., model) doesn't buy you anything unless you have good brakes and steering, too. The controls are just as important for fast driving as the engine."

There's certainly demand for such controls among enterprises, which, while cautiously optimistic about generative AI's productivity-boosting potential, have concerns about the tech's limitations.

Fifty-one percent of CEOs are hiring for generative AI-related roles that didn't exist until this year, an IBM poll finds. Yet only 9% of companies say they're prepared to manage the threats arising from their use of generative AI, including threats to privacy and intellectual property, per a Riskonnect survey.

WitnessAI's platform intercepts activity between employees and the custom generative AI models their employer is using (not models gated behind an API like OpenAI's GPT-4, but rather models along the lines of Meta's Llama 3) and applies risk-mitigating policies and safeguards.

"One of the promises of enterprise AI is that it unlocks and democratizes enterprise data for employees so that they can do their jobs better. But unlocking all that sensitive data too well, or having it leak or get stolen, is a problem."
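Conceptually, an interception layer like this amounts to a policy-enforcing proxy sitting between employees and a self-hosted model endpoint. Here is a minimal sketch, assuming a hypothetical internal Llama 3 service at http://llama.internal/generate and a placeholder check_policy rule; it is illustrative only, not WitnessAI's implementation.

```python
# Illustrative toy proxy in front of a self-hosted model.
# The endpoint URL, policy logic and field names are assumptions, not WitnessAI's API.
from flask import Flask, request, jsonify
import requests

MODEL_ENDPOINT = "http://llama.internal/generate"  # hypothetical internal Llama 3 service

app = Flask(__name__)

def check_policy(user: str, prompt: str) -> bool:
    """Toy policy: block questions about unreleased earnings, whoever asks."""
    blocked_terms = ("pre-release earnings", "unreleased earnings")
    return not any(term in prompt.lower() for term in blocked_terms)

@app.route("/generate", methods=["POST"])
def generate():
    payload = request.get_json(force=True)
    user = payload.get("user", "")
    prompt = payload.get("prompt", "")

    # Enforce policy (and, in a real system, log the interaction) before
    # the prompt ever reaches the model.
    if not check_policy(user, prompt):
        return jsonify({"error": "prompt blocked by policy"}), 403

    # Forward the approved prompt to the self-hosted model and relay the response.
    resp = requests.post(MODEL_ENDPOINT, json={"prompt": prompt}, timeout=30)
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=8080)
```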

WitnessAI sells access to several modules, each focused on tackling a different kind of generative AI risk. One lets organizations implement rules to prevent staffers on particular teams from using generative AI-powered tools in ways they're not supposed to (e.g., asking about pre-release earnings reports or pasting internal codebases). Another redacts proprietary and sensitive information from the prompts sent to models and implements techniques to shield models against attacks that might force them to go off-script.
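WitnessAI hasn't detailed how the redaction module is built. As a rough illustration, this kind of prompt scrubbing is often based on pattern matching applied before the prompt leaves the organization; the patterns and names below are assumptions, not the product's actual rules.

```python
# Illustrative sketch of prompt redaction; patterns and names are assumptions,
# not a description of WitnessAI's actual module.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with labeled placeholders before the prompt is sent to a model."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this email from jane.doe@example.com using key sk_live1234567890abcdef"
    print(redact(raw))
    # -> Summarize this email from [REDACTED_EMAIL] using key [REDACTED_API_KEY]
```

A production system would likely layer on classifiers or named-entity recognition rather than rely on regexes alone, but the shape is the same: transform or block the prompt before it reaches the model.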

"We think the best way to help enterprises is to define the problem in a way that makes sense (for example, safe adoption of AI) and then sell a solution that addresses the problem," Caccia said. "The CISO wants to protect the business, and WitnessAI helps them do that by ensuring data protection, preventing prompt injection and enforcing identity-based policies. The chief privacy officer wants to make sure that existing and incoming regulations are being followed, and we give them visibility and a way to report on activity and risk."

But there's one tricky thing about WitnessAI from a privacy perspective: all data passes through its platform before reaching a model. The company is transparent about this, even offering tools to monitor which models employees access, the questions they ask the models and the responses they get. But that could create its own privacy risks.

In response to questions about WitnessAI's privacy policy, Caccia said that the platform is "isolated" and encrypted to prevent customer secrets from spilling out into the open.

"We've built a millisecond-latency platform with regulatory separation built right in: a unique, isolated design that protects enterprise AI activity in a way that's fundamentally different from the usual multi-tenant software-as-a-service offerings," he said. "We create a separate instance of our platform for each customer, encrypted with their keys. Their AI activity data is isolated to them; we can't see it."

Perhaps that will allay customers' fears. As for workers worried about the surveillance potential of WitnessAI's platform, it's a tougher call.

Surveys show that people generally don't appreciate having their workplace activity monitored, whatever the reason, and believe it negatively impacts company morale. Nearly a third of respondents to a Forbes survey said they might consider leaving their jobs if their employer monitored their online activity and communications.

But Caccia asserts that interest in WitnessAI's platform has been and remains strong, with a pipeline of 25 early corporate users in its proof-of-concept phase. (It won't become generally available until Q3.) And, in a vote of confidence from VCs, WitnessAI has raised $27.5 million from Ballistic Ventures (which incubated WitnessAI) and GV, Google's corporate venture arm.

The plan is to put the tranche of funding toward growing WitnessAI's 18-person team to 40 by the end of the year. Growth will certainly be key to beating back WitnessAI's rivals in the nascent market for model compliance and governance solutions, not only tech giants like AWS, Google and Salesforce but also startups such as CalypsoAI.

"We've built our plan to get well into 2026 even if we had no sales at all, but we've already got almost 20 times the pipeline needed to hit our sales targets this year," Caccia said. "This is our initial funding round and public launch, but secure AI enablement and use is a new area, and all of our features are developing with this new market."

