IT Leaders Reveal Cyber Fears Around ChatGPT

The majority (51%) of security leaders expect ChatGPT to be at the heart of a successful cyber-attack within a year, according to new research by BlackBerry.

The survey of 1,500 IT decision makers across North America, the UK and Australia also found that 71% believe nation-states are likely to already be using the technology for malicious purposes against other countries.

ChatGPT is an artificial intelligence (AI)-powered language model developed by OpenAI, which has been deployed in a chatbot format, allowing users to submit a prompt and receive a detailed response to any question they ask. The product was launched at the end of 2022.

Cyber-Threats from ChatGPT

Despite its enormous potential, information security experts have raised concerns over its possible use by cyber-threat actors to launch attacks, including malware development and convincing social engineering scams.

There are also fears it will be used to spread misinformation online in a quicker and more convincing manner.

These concerns were highlighted in BlackBerry’s new report. While respondents in all countries acknowledged ChatGPT’s capabilities to be used for ‘good,’ 74% viewed it as a potential cybersecurity threat.

The top worry for the IT leaders was the technology’s ability to craft more believable and legitimate sounding phishing emails (53%), followed by allowing less experienced cyber-criminals to improve their technical knowledge and develop more specialized skills (49%) and its use in spreading misinformation (49%).

While IT leaders have fears about ChatGPT creating phishing emails, one expert cautioned the AI tool may not be better than what cyber-criminals are already capable of.

Speaking to Infosecurity, Allan Liska, intelligence analyst at Recorded Future, noted that ChatGPT is not necessarily very good at these types of activities. “It can be used to create phishing emails, but cyber-criminals who carry out phishing campaigns already write better emails and come up with more creative methods of carrying out phishing attacks. It can also write malware, but not good malware, at least not yet,” he explained.

However, this situation is likely to change as the technology is continually retrained and refined. Liska added: “The concerns are really twofold: ChatGPT is supposed to have guardrails that prevent it from carrying out these kinds of activities, but those guardrails are easily defeated. Eventually, it will get better at both and we don’t know what that looks like yet.”

Strengthening Cyber Defenses Through AI

Commenting on the research, Shishir Singh, CTO, cybersecurity at BlackBerry, said there is optimism that security professionals will be able to leverage ChatGPT to improve cyber defenses.

“It’s been well documented that people with malicious intent are testing the waters but, over the course of this year, we expect to see hackers get a much better handle on how to use ChatGPT successfully for nefarious purposes; whether as a tool to write better mutable malware or as an enabler to bolster their ‘skillset.’ Both cyber pros and hackers will continue to look into how they can utilize it best. Time will tell who’s more effective,” he said.

The study also revealed that 82% of IT decision makers plan to invest in AI-driven cybersecurity in the next two years with almost half (48%) planning to invest before the end of 2023. BlackBerry believes this reflects growing concern that signature-based protection solutions will no longer be effective in protecting against increasingly sophisticated attacks emanating from technologies like ChatGPT.

Speaking to Infosecurity, Singh said it is vital organizations use AI to proactively fight AI threats, particularly regarding enhancing their prevention and detection capabilities.

“One of the key advantages of using AI in cybersecurity is its ability to analyze vast amounts of data in real-time. The sheer volume of data generated by modern networks makes it impossible for humans to keep up. AI can process data much faster, making it more efficient at identifying threats,” he noted.

“As cyber-attacks become more severe and sophisticated, and threat actors evolve their tactics, techniques, and procedures (TTPs), traditional security measures become obsolete. AI can learn from previous attacks and adapt its defenses, making it more resilient against future threats.”
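The kind of automated, high-volume threat detection Singh describes can be illustrated with a minimal sketch: flagging anomalous event volumes in a log stream against a simple statistical baseline. This is a toy example in plain Python, not BlackBerry's method; the traffic figures and the 2.5-standard-deviation threshold are hypothetical.

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of per-minute event counts that deviate more than
    `threshold` standard deviations from the mean of the series.

    A real detector would use a rolling baseline and richer features;
    this only shows the core idea of baseline-vs-outlier comparison.
    """
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # flat traffic: nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical per-minute connection counts; minute 5 is a sudden spike.
traffic = [102, 98, 105, 99, 101, 950, 97, 103, 100, 96]
print(flag_anomalies(traffic))  # → [5]
```

Production systems apply the same principle at far greater scale, typically with learned models rather than a fixed z-score, which is what makes machine-driven analysis practical where manual review is not.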

Singh added that AI is also crucial in mitigating advanced persistent threats (APTs), “which are highly targeted and often difficult to detect.”

In addition to cyber-threats, privacy experts have discussed how the AI model is potentially breaching data protection rules, such as GDPR. This includes OpenAI’s methods for collecting the data ChatGPT is built upon and how it shares personal data with third parties.

