AI agents are supposed to make work easier. However, they're also creating a whole new class of security nightmares.
As companies deploy AI-powered chatbots, agents, and copilots across their operations, they're facing a new risk: how do you let employees and AI agents use powerful AI tools without accidentally leaking sensitive data, violating compliance rules, or opening the door to prompt injection attacks? Witness AI just raised $58 million to find a solution, building what it calls "the confidence layer for enterprise AI."
Today on Trendster's Equity podcast, Rebecca Bellan was joined by Barmak Meftah, co-founder and partner at Ballistic Ventures, and Rick Caccia, CEO of Witness AI, to discuss what enterprises are actually worried about, why AI security could become an $800 billion to $1.2 trillion market by 2031, and what happens when AI agents start talking to other AI agents without human oversight.
Listen to the full episode to hear:
- How enterprises accidentally leak sensitive data through "shadow AI" usage.
- What CISOs are worried about right now, how the problem has evolved rapidly over the past 18 months, and what it will look like over the next 12 months.
- Why they think traditional cybersecurity approaches won't work for AI agents.
- Real examples of AI agents going rogue, including one that threatened to blackmail an employee.
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod.