UPDATED 20:34 EDT / APRIL 11 2023

SECURITY

OpenAI teams with Bugcrowd to offer cybersecurity bug bounty program

OpenAI LP, the company behind ChatGPT, has teamed with crowdsourced cybersecurity startup Bugcrowd Inc. to offer a bug bounty program aimed at addressing security risks in its systems and services.

The bug bounty program offers rewards of $200 to $20,000 to security researchers who report vulnerabilities, bugs or security flaws they discover in OpenAI’s systems. The more severe the discovered bug, the higher the reward payout.
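The announcement specifies only the outer bounds of that scale. As a rough illustration of how a severity-tiered reward schedule works, the sketch below maps severity levels to payout ranges; only the $200 floor and $20,000 cap come from the announcement, while the tier names and intermediate cutoffs are invented for the example.

    # Hypothetical sketch of a severity-based reward schedule.
    # Only the $200 floor and $20,000 cap come from OpenAI's announcement;
    # the tier names and intermediate dollar cutoffs are illustrative assumptions.
    PAYOUT_RANGES_USD = {
        "low": (200, 500),
        "medium": (500, 2000),
        "high": (2000, 6500),
        "critical": (6500, 20000),
    }

    def payout_range(severity: str) -> tuple[int, int]:
        """Return the assumed (minimum, maximum) reward in USD for a severity tier."""
        return PAYOUT_RANGES_USD[severity.lower()]

    print(payout_range("critical"))  # (6500, 20000)

In practice, bug bounty platforms such as Bugcrowd publish the actual tier boundaries in each program’s brief, and final payouts are set during triage rather than by a fixed lookup.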

However, the bug bounty program does not extend to model issues or non-cybersecurity issues with the OpenAI application programming interface or ChatGPT. “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” Bugcrowd noted in a blog post today. “Addressing these issues often involves substantial research and a broader approach.”

Security researchers who wish to participate in the program must also follow “rules of engagement” that will assist OpenAI in distinguishing between good-faith hacking and malicious attacks. These include following the policy rules, reporting discovered vulnerabilities and refraining from violating privacy, disrupting systems, destroying data or harming the user experience.

Any vulnerability discovered must also be kept confidential until OpenAI’s security team authorizes its release, which the company aims to do within 90 days of receiving a report.
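For researchers tracking that disclosure window, the 90-day target is simple date arithmetic, sketched below with a hypothetical report date.

    from datetime import date, timedelta

    # Hypothetical example: OpenAI aims to authorize disclosure within
    # 90 days of receiving a report. The report date here is made up.
    report_received = date(2023, 4, 11)
    authorization_target = report_received + timedelta(days=90)
    print(authorization_target)  # 2023-07-10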

Perhaps stating the obvious, security researchers are also advised not to engage in extortion, threats or other tactics to elicit a response under duress. Should any of those occur, OpenAI will deny safe harbor for any vulnerability disclosed.

The cybersecurity community’s initial reaction to the news of OpenAI’s bug bounty program has been positive.

“While certain categories of bugs may be out-of-scope in the bug bounty, that doesn’t mean the organization isn’t prioritizing internal research and security initiatives around those categories,” Melissa Bischoping, director of endpoint security research at Tanium Inc., told SiliconANGLE. “Often, scope limitations are to help ensure the organization can triage and follow up on all bugs, and scope may be adjusted over time. Issues with ChatGPT writing malicious code or other harm or safety concerns, while definitely a risk, are not the type of issue that often qualifies as a specific ‘bug,’ and are more of an issue with the training model itself.”

Image: OpenAI
