OpenAI's ChatGPT has reportedly created a new strain of polymorphic malware following text-based interactions with cybersecurity researchers at CyberArk.
According to a technical write-up recently shared by the company with Infosecurity, the malware created using ChatGPT could "easily evade security products and make mitigation cumbersome with very little effort or investment by the adversary."
The report, written by CyberArk security researchers Eran Shimony and Omer Tsarfati, explains that the first step to creating the malware was to bypass the content filters preventing ChatGPT from creating malicious tools.
To do so, the CyberArk researchers simply insisted, rephrasing the same request more authoritatively.
"Interestingly, by asking ChatGPT to do the same thing using multiple constraints and asking it to obey, we received a functional code," Shimony and Tsarfati said.
Further, the researchers noted that the API version of ChatGPT (as opposed to the web version) does not appear to apply its content filter.
"It is unclear why this is the case, but it makes our task much easier as the web version tends to become bogged down with more complex requests," reads the CyberArk report.
Shimony and Tsarfati then used ChatGPT to mutate the original code, thus creating multiple variations of it.
"In other words, we can mutate the output on a whim, making it unique every time. Moreover, adding constraints like changing the use of a specific API call makes security products' lives more difficult," the researchers wrote.
ChatGPT's ability to create and continually mutate injectors enabled the researchers to build a polymorphic program that is highly elusive and difficult to detect.
"By utilizing ChatGPT's ability to generate various persistence techniques, Anti-VM modules and other malicious payloads, the possibilities for malware development are vast," explained the researchers.
"While we have not delved into the details of communication with the C&C server, there are several ways that this can be done discreetly without raising suspicion."
CyberArk confirmed it will expand on this research and also aims to release some of the source code for learning purposes.
The report comes days after Check Point Research discovered ChatGPT being used to develop new malicious tools, including infostealers, multi-layer encryption tools and dark web marketplace scripts.