OpenAI is offering white hat hackers up to $20,000 to find security flaws as part of its bug bounty program launched on April 11, 2023.
The ChatGPT developer announced the initiative as part of its commitment to secure artificial intelligence (AI). The company has been under scrutiny from security professionals since the launch of the ChatGPT prototype in November 2022.
Speaking to Infosecurity, Mike Thompson, information security manager at Zen Internet, said, “It is vital that OpenAI operates a bug bounty scheme as a matter of priority, as since the technology’s launch in November 2022 the crazy giddiness that has ensued has completely overshadowed the potential risk.”

Vulnerabilities in the Library
In its announcement, OpenAI acknowledged that despite its heavy investment in research and engineering to ensure its AI systems are safe and secure, vulnerabilities and flaws can emerge.
“We believe that transparency and collaboration are crucial to addressing this reality. That’s why we are inviting the global community of security researchers, ethical hackers and technology enthusiasts to help us identify and address vulnerabilities in our systems,” the company said.
On March 23, OpenAI announced it had fixed a vulnerability in ChatGPT that had allowed users to view the titles of other users’ chats during a nine-hour window on March 20. Concerns were raised that the bug in the ChatGPT open-source library could lead to privacy issues.
Read more: ChatGPT Vulnerability May Have Exposed Users’ Payment Data
“This is not the limit of vulnerabilities found nor of what will ever exist. One of the most effective ways for organizations to ensure the security posture of their products is to launch a bug bounty program. This is time-tested and true since 1995, when Netscape launched the first bug bounty program. I am glad OpenAI sees this,” Zaira Pirzada, cybersecurity advisor at Lionfish Tech, told Infosecurity.
She added that Sam Altman, CEO of OpenAI, is likely acknowledging that the public is as much a vital part of testing as they are of consumption.
The company has partnered with Bugcrowd to manage the submission and reward process.
Casey Ellis, founder and CTO of Bugcrowd, told Infosecurity, “OpenAI’s decision to actively solicit feedback from the hacker community on the security of their products is huge and continuing validation of hackers as ‘the Internet’s Immune System’, and the transparency and accountability of the approach will go a long way to continuing to build user trust in a relatively new market. I believe all emerging technology companies and brands can learn from this.”
The rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries. At the time of writing, around 10 vulnerabilities had been rewarded. As part of the program, ethical hackers are not permitted to release details about the vulnerabilities found.
The scope of the program includes OpenAI’s APIs and API keys, ChatGPT, third-party corporate targets related to OpenAI, the OpenAI research org and the OpenAI.com website. The bug bounty program covers traditional software issues, not AI model issues.
Jake Moore, global security advisor at ESET, noted that while the bug bounty program won’t address all possible attack vectors, it acts as another tool in the cybersecurity toolkit, helping to head off a new wave of threats.
Recent research by BlackBerry found that 51% of security leaders expect ChatGPT to be at the heart of a successful cyber-attack within a year. The biggest security concerns center around how the large language model could be leveraged by cyber-threat actors to launch attacks, including malware development and convincing social engineering scams.
Image credit: Koshiro K / Shutterstock.com
Some parts of this article are sourced from:
www.infosecurity-magazine.com