Plugging safety holes is important to retaining generative synthetic intelligence (AI) fashions protected from dangerous actors, dangerous picture technology, and different potential misuse. To make sure a few of its newest and largest AI initiatives are as protected as potential, Adobe on Wednesday expanded its bug bounty program, which rewards safety researchers for locating and disclosing bugs, to embody Content material Credentials and Adobe Firefly.
Content material Credentials are tamper-evident metadata connected to digital content material that function a “vitamin label”, letting customers see the content material’s “components,” such because the creator’s title, creation date, any instruments used to create the picture (together with generative AI fashions), and the edits made.
Within the period of AI-generated pictures, this provenance device may also help folks decide artificial from human-made content material. This solely works, nonetheless, if Content material Credentials are tamper-proof and used as designed. Adobe is now crowdsourcing safety efforts for Content material Credentials through its bug bounty program to bolster protections towards potential abuses, resembling incorrectly attaching credentials to the improper content material.
Some AI picture mills, like Adobe Firefly, robotically connect Content material Credentials to AI-generated content material. Firefly is Adobe’s group of generative AI fashions that may create pictures from prompts, different footage, and extra. This household of fashions is quickly accessible to the general public by a standalone net utility and a few of Adobe’s hottest purposes, together with Photoshop.
The discharge says Adobe desires safety researchers to check Firefly towards Open Worldwide Utility Safety Venture (OWASP)’s high safety dangers in giant language mannequin (LLM) purposes, resembling immediate injection, delicate info disclosure, and coaching knowledge poisoning. Adobe will then use this suggestions to focus its analysis and additional efforts on addressing Firefly’s weaknesses.
“By proactively participating with the safety group, we hope to realize further insights into the safety posture of our generative AI applied sciences, which, in flip, will present priceless suggestions to our inner safety program,” Adobe mentioned in its launch.
Adobe is inviting moral hackers fascinated by taking part within the bug bounty program to go to the Adobe HackerOne web page and to use through this way, which asks questions concerning the applicant’s safety analysis and experience.
Along with Content material Credentials and Adobe Firefly, the bug bounty program is accessible for many Adobe net apps and desktop and cell variations of its Inventive Cloud apps. Yow will discover the total listing of included apps on the Adobe Bug Bounty Program webpage.
Oddly, whereas the HackerOne web page lists rewards starting from $100 to $10,000, Adobe’s webpage says that “this program doesn’t present financial rewards for bug submissions.” It is unclear whether or not this refers solely to Adobe’s non-public bug bounty program.
Individually, OpenAI additionally has a bug bounty program, by which safety researchers could make anyplace from $200 to $20,000, relying on the kind of the vulnerability.