ChatGPT: A New Danger in the Cybersecurity Realm

ChatGPT has taken the world by storm. With more than 100 million monthly users in January, it set the record for the fastest-growing app since its launch in late 2022. This AI chatbot has a wide range of uses, from writing articles to drafting a business plan; it can even generate code. But what exactly is it, and what are the potential cybersecurity risks?

What is ChatGPT?

ChatGPT is an AI-powered natural language processing tool created by OpenAI. Designed to answer questions and assist with tasks, it is open to the public and free of charge, with additional features and functionality available through a paid subscription.

The app draws its data from textbooks, websites and articles, and uses it to shape its language and the answers to the questions presented. It is well suited to chatbots, AI conversation systems and virtual assistant applications, but it can also develop code, write articles, translate and debug, among other things.
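The same capabilities are also available to developers programmatically. As a rough illustration (not drawn from the original article), the sketch below uses OpenAI's Python SDK to ask the model to generate a small piece of code; the model name and prompt are placeholder assumptions, and an API key is assumed to be set in the environment.

```python
# Illustrative sketch only: querying the model programmatically via OpenAI's
# Python SDK. Assumes the `openai` package is installed and OPENAI_API_KEY
# is set in the environment. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

# Print the generated code returned by the model
print(response.choices[0].message.content)
```

This is the kind of integration that makes the model useful for chatbots and virtual assistants, and it is also why generated code can end up in the hands of anyone who asks for it.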

Why is ChatGPT a cybersecurity risk?

Researchers have found that ChatGPT can develop code that can be used for malicious purposes. And while ChatGPT has some content filters to limit malicious output, these filters can be bypassed.

For example, the software company CyberArk was able to bypass these filters and use the program to create polymorphic malware. They also managed to use ChatGPT to modify the code, producing malware that was highly evasive and difficult to detect, and they were able to create programs that could be used in malware and ransomware attacks. Cybersecurity solution provider Check Point also managed to use ChatGPT to create a convincing spear phishing attack.

When Forbes magazine asked the AI bot itself whether it was a cybersecurity threat, it answered that it was not, but added that “any technology can be misused.”

Since ChatGPT is built on machine learning, the threat will continue to grow alongside the demand for malicious code. As it receives more input, it will learn to generate more sophisticated responses, which could translate into more sophisticated coding abilities. With these capabilities available to the public, threat actors will need less skill to carry out these attacks.

BlackFog can help protect against these attacks.

We did our own research and found that ChatGPT is capable of writing a PowerShell attack if asked in a “non-malicious” way. Check out the video below to find out how the code was created, what happened during the attack, and how BlackFog prevented the attacker from stealing the victim’s data.

A PowerShell script is quickly generated by ChatGPT and can easily be used in an attack.

As you can see, once the script is installed on the victim’s device, data is exfiltrated every five seconds while the victim remains completely unaware that anything is happening in the background.

BlackFog, once installed, immediately stopped the attack in its tracks, and no additional data was extracted from the victim’s device. This happened automatically, without any intervention from the user. The attacker then sees that the script has stopped functioning and has no choice but to abandon the attack, while the user has peace of mind that their data is safe.

BlackFog’s Anti Data Exfiltration (ADX) technology automatically blocks all types of cyber threats and ensures that no unauthorized information leaves an organization’s devices or networks. The 24/7 protection is on the device, meaning no matter where employees work, as long as they have an internet connection, they are 100% protected.
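To make the idea of anti data exfiltration more concrete, here is a rough, simplified sketch of the kind of on-device egress monitoring such tools perform. It is not BlackFog’s actual implementation; the allow-list, polling interval and logging behavior are illustrative assumptions only, and a real product would block the transfer rather than merely log it.

```python
# Illustrative sketch only -- NOT BlackFog's implementation. It shows the
# general idea behind on-device egress monitoring: watch outbound network
# connections and flag traffic to destinations that are not on an approved
# list. The allow-list and poll interval below are made-up assumptions.
import time

import psutil  # third-party package: pip install psutil

ALLOWED_REMOTE_IPS = {"93.184.216.34"}   # hypothetical approved destinations
POLL_INTERVAL_SECONDS = 5                # hypothetical polling cadence


def find_suspicious_connections():
    """Return established outbound connections to non-approved destinations."""
    suspicious = []
    for conn in psutil.net_connections(kind="inet"):
        # Skip connections that are not established or have no remote address
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.raddr.ip not in ALLOWED_REMOTE_IPS:
            suspicious.append(conn)
    return suspicious


if __name__ == "__main__":
    while True:
        for conn in find_suspicious_connections():
            # A real ADX product would block the transfer; here we only log it.
            print(f"Unapproved outbound connection: pid={conn.pid} -> "
                  f"{conn.raddr.ip}:{conn.raddr.port}")
        time.sleep(POLL_INTERVAL_SECONDS)
```

The key design point this sketch illustrates is that the monitoring runs on the endpoint itself, so unauthorized outbound transfers can be spotted regardless of where the device is connected.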

As ChatGPT’s popularity grows and its machine learning capabilities produce ever more sophisticated code, it is inevitable that less skilled threat actors will be empowered to launch cyber attacks. To stay ahead of cyber criminals, organizations must evaluate their cybersecurity strategy and ensure they have third-generation defenses in place to combat these attacks.