As consumers, companies, and governments flock to the promise of low-cost, fast, and seemingly magical AI tools, one question keeps getting in the way: How do I keep my data private?
Tech giants like OpenAI, Anthropic, xAI, Google, and others are quietly scooping up and retaining user data to improve their models or monitor for safety and security, even in some enterprise contexts where companies assume their information is off limits. For highly regulated industries or companies building on the frontier, that gray area could be a dealbreaker. Fears about where data goes, who can see it, and how it might be used are slowing AI adoption in sectors like healthcare, finance, and government.
Enter San Francisco-based startup Confident Security, which aims to be “the Signal for AI.” The company’s product, CONFSEC, is an end-to-end encryption tool that wraps around foundational models, guaranteeing that prompts and metadata can’t be stored, seen, or used for AI training, even by the model provider or any third party.
“The second that you hand over your data to someone else, you’ve essentially reduced your privacy,” Jonathan Mortensen, founder and CEO of Confident Security, told Trendster. “And our product’s goal is to remove that trade-off.”
Confident Security came out of stealth on Thursday with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx, Trendster has exclusively learned. The company wants to serve as an intermediary vendor between AI vendors and their customers: hyperscalers, governments, and enterprises.
Even AI companies may see the value in offering Confident Security’s tool to enterprise clients as a way to unlock that market, said Mortensen. He added that CONFSEC is also well-suited for the new AI browsers hitting the market, like Perplexity’s recently launched Comet, to give customers guarantees that their sensitive data isn’t being stored on a server somewhere that the company or bad actors could access, or that their work-related prompts aren’t being used to “train AI to do your job.”
CONFSEC is modeled after Apple’s Private Cloud Compute (PCC) architecture, which Mortensen says “is 10x better than anything out there in terms of guaranteeing that Apple can’t see your data” when it runs certain AI tasks securely in the cloud.
Like Apple’s PCC, Confident Security’s system works by first anonymizing data: it is encrypted and routed through services like Cloudflare or Fastly, so servers never see the original source or content. Next, it uses advanced encryption that only allows decryption under strict conditions.
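In spirit, that first hop works like an oblivious relay: one party learns who is asking, another learns what was asked, and neither learns both. Below is a minimal Python sketch of that split-trust idea using PyNaCl sealed boxes; the function names and the ad hoc key handling are illustrative assumptions, not CONFSEC’s actual protocol.

```python
# Sketch of the split-trust idea (an assumption, not CONFSEC's real code):
# the relay learns the client's identity but not the prompt; the inference
# server learns the prompt but not who sent it. PyNaCl's SealedBox encrypts
# to the server's public key with an ephemeral sender key, so the ciphertext
# is unlinkable to the client.
from nacl.public import PrivateKey, SealedBox

# Inference-side keypair; in a real deployment the public key would be
# bound to attested hardware, not generated ad hoc like this.
server_key = PrivateKey.generate()

def client_encrypt(prompt: str) -> bytes:
    """Client seals the prompt so only the inference side can read it."""
    return SealedBox(server_key.public_key).encrypt(prompt.encode())

def relay_forward(ciphertext: bytes) -> bytes:
    """The relay (e.g., a CDN) sees a source IP and an opaque blob, nothing else."""
    return ciphertext  # client identity is stripped before forwarding

def server_decrypt(ciphertext: bytes) -> str:
    """The server recovers the prompt but has no idea which client sent it."""
    return SealedBox(server_key).decrypt(ciphertext).decode()

assert server_decrypt(relay_forward(client_encrypt("hello"))) == "hello"
```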
“So you can say you’re only allowed to decrypt this if you are not going to log the data, and you’re not going to use it for training, and you’re not going to let anybody see it,” Mortensen said.
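Condition-gated decryption of that kind can be pictured as a policy check sitting in front of key release. The broker, policy names, and claims in this sketch are all assumptions for illustration, not CONFSEC’s API:

```python
# Hypothetical key broker: it releases the data key only if the workload's
# attested policy promises no logging, no training, and no human access.
REQUIRED_POLICY = {"log_data": False, "use_for_training": False, "human_access": False}

def release_key(attested_policy: dict, data_key: bytes) -> bytes:
    """Hand over the decryption key only when every required guarantee holds."""
    for claim, required in REQUIRED_POLICY.items():
        if attested_policy.get(claim, True) != required:
            raise PermissionError(f"refusing to decrypt: {claim} not guaranteed")
    return data_key

# A compliant workload gets the key; one that logs prompts would not.
key = release_key(
    {"log_data": False, "use_for_training": False, "human_access": False},
    b"data-key",
)
```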
Finally, the software running the AI inference is publicly logged and open to review, so that experts can verify its guarantees.
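That last step echoes binary transparency: clients accept only inference software whose digest appears in a public, reviewable log, so auditors can inspect exactly what ran. A toy check, with the log modeled as a simple set of published digests (an assumption; a real log would be an append-only, signed structure):

```python
import hashlib

def verify_release(binary: bytes, published_digests: set[str]) -> bool:
    """Accept only software whose SHA-256 digest appears in the public log."""
    return hashlib.sha256(binary).hexdigest() in published_digests

# The digest set stands in for a real transparency log; a client would
# refuse to send prompts to an unverified build.
log = {hashlib.sha256(b"audited-inference-build-1.0").hexdigest()}
assert verify_release(b"audited-inference-build-1.0", log)
assert not verify_release(b"tampered-build", log)
```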
“Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself,” Jess Leão, partner at Decibel, said in a statement. “Without solutions like this, many enterprises simply can’t move forward with AI.”
It’s still early days for the year-old company, but Mortensen said CONFSEC has been tested, externally audited, and is production-ready. The team is in talks with banks, browsers, and search engines, among other potential clients, to add CONFSEC to their infrastructure stacks.
“You bring the AI, we bring the privacy,” said Mortensen.