OpenAI announces bug bounty program to address AI security risks
OpenAI, a leading artificial intelligence (AI) research lab, today announced the launch of a bug bounty program to help address growing cybersecurity risks posed by powerful language models like its own ChatGPT.
The program, run in partnership with the crowdsourced cybersecurity company Bugcrowd, invites independent researchers to report vulnerabilities in OpenAI's systems in exchange for financial rewards ranging from $200 to $20,000, depending on severity. OpenAI said the program is part of its "commitment to developing safe and advanced AI."
Concerns have mounted in recent months over vulnerabilities in AI systems that can generate synthetic text, images and other media. Researchers found a 135% increase in AI-enabled social engineering attacks from January to February, coinciding with the adoption of ChatGPT, according to AI cybersecurity firm Darktrace.
While OpenAI's announcement was welcomed by some experts, others said a bug bounty program is unlikely to fully address the wide range of cybersecurity risks posed by increasingly sophisticated AI technologies.
The program's scope is limited to vulnerabilities that could directly affect OpenAI's systems and partners. It does not appear to address broader concerns over malicious uses of such technologies, like impersonation, synthetic media or automated hacking tools. OpenAI did not immediately respond to a request for comment.
A bug bounty program with limited scope
The bug bounty program comes amid a spate of security concerns, with GPT-4 jailbreaks emerging that enable users to develop instructions on how to hack computers, and researchers discovering workarounds that let "non-technical" users create malware and phishing emails.
It also comes after a security researcher known as Rez0 allegedly used an exploit to hack ChatGPT's API and discover over 80 secret plugins.
Given these controversies, launching a bug bounty platform gives OpenAI an opportunity to address vulnerabilities in its product ecosystem, while positioning itself as an organization acting in good faith to address the security risks introduced by generative AI.
Unfortunately, OpenAI's bug bounty program is very limited in the scope of threats it addresses. For instance, the program's official page notes: "Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service."
Examples of safety issues considered out of scope include jailbreaks and safety bypasses, getting the model to "say bad things," getting the model to write malicious code, or getting the model to tell you how to do bad things.
In this sense, OpenAI's bug bounty program may help the organization improve its own security posture, but it does little to address the security risks that generative AI and GPT-4 introduce for society at large.