ChatGPT increases the risk of cyberattacks.
By Markus Cserna, CTO, cyan Digital Security
The whole world looks with amazement and appreciation at the achievements of the publicly available AI chatbot ChatGPT – especially since the release of GPT-4 in mid-March. But what many do not yet suspect: with the triumph of AI, the danger of cyberattacks is also growing. Even laymen now have the tools for digital attacks at their fingertips. Companies must therefore urgently prepare themselves, argues guest author Markus Cserna, CTO at cyan Digital Security. His message: companies can already benefit from intuitive solutions that offer effective protection.
When we look back on 2023, we will remember it as the year in which Artificial Intelligence (AI) became viable on a mass scale.
ChatGPT, released by the US company OpenAI in November 2022, spread around the globe at breathtaking speed. In Germany, too, curiosity and euphoria were – and remain – great, especially since the release of the software’s much more powerful successor, GPT-4.
ChatGPT: All-rounder with problematic properties?
The technology now delivers genuinely useful texts for almost every area of life within seconds, while other AI programmes produce creative, artistic images at the touch of a button. Yet despite all the euphoria, justified doubts are also growing.
“Since the upgrade to the AI language model GPT-4, the chatbot ChatGPT has been writing misinformation more frequently and more convincingly,” the media reported, citing investigations by NewsGuard, a service that monitors and analyses disinformation on the internet.
In the new version, the AI responded with false and misleading claims to 100 out of 100 leading questions it was asked – questions dealing, for example, with debunked anti-vaccine theories or conspiracy theories. The current version, GPT-4, thus generated false information more frequently than its predecessor, GPT-3.5.
According to NewsGuard, passing the US bar exam seems to be easier for the AI than recognising false information. While GPT-4 outperformed 90 per cent of human examinees on the bar admission exam, the latest version of OpenAI’s software received a critical rating in a NewsGuard test assessing its ability to avoid spreading clear misinformation.
Seizing the opportunities of AI – but with a sense of proportion
To be clear: this is anything but a blanket indictment of AI. The opportunities this new technology offers, especially to companies in Germany and Austria, are welcome in many scenarios and can be harnessed for the benefit of both business locations.
However, this must be done with a sense of proportion and a clear head. The examples cited show how closely genius and madness can lie together, even with artificial intelligence.
Caution is also advised once the human factor enters the equation. After all, the same technology can serve both honest intentions and fraudulent motives. Most AI products such as ChatGPT have safeguards built in to prevent misuse, but distinguishing legitimate from malicious requests is difficult in many cases – and those safeguards can therefore still be tricked again and again.
The chatbot is based on probabilities, reassembling familiar material at breathtaking speed. But it is precisely these reproduction capabilities that could make ChatGPT a dangerous assistant for cybercriminals.
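To make the point concrete, the following minimal Python sketch illustrates the principle of probability-based text generation: the model weighs candidate next words and samples one. The vocabulary and weights here are invented purely for illustration and do not reflect any real model.

```python
import random

# Toy illustration of probability-based text generation: the model assigns
# a weight to each candidate next token and samples from that distribution.
# It recombines learned patterns -- it does not verify facts.
# Vocabulary and weights are invented for illustration only.
NEXT_TOKEN_PROBS = {
    "helpful": 0.5,     # a plausible continuation
    "accurate": 0.3,    # another plausible continuation
    "misleading": 0.2,  # a false continuation can be statistically "likely" too
}

def sample_next_token(probs: dict) -> str:
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The answer sounds", sample_next_token(NEXT_TOKEN_PROBS))
```

The sketch also shows why fluency is no guarantee of truth: a misleading continuation can carry substantial probability and be emitted just as confidently as a correct one.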
ChatGPT turns programming amateurs into dangerous cybercriminals
Thanks to the new software, even IT laymen without deep programming knowledge can develop into capable attackers. For hackers, the triumph of AI makes it extremely easy to combine or modify malicious code so that it can no longer be detected by existing security systems.
The cyberthieves do not even have to get their own hands dirty; they can conveniently abuse AI for their criminal purposes. This is the next step for cybercrime-as-a-service: criminals no longer even need to be experts.
New attackers require a new security infrastructure
Chatbots are trained on billions of data points from every domain, from the social sciences to program code. This, too, opens the door wide to amateur hackers. IT abuse could become a mass phenomenon within the next few years.
The reaction window between a chatbot-assisted attack and its detection and neutralisation is thus likely to shrink. Attacks will be carried out more professionally and with less effort, placing new demands on cyber security.
The IT departments of affected corporations and large companies cannot guarantee comprehensive digital protection with established ways of thinking. Without rethinking old habits and overhauling the existing security infrastructure, full digital resilience will hardly be achievable in an age of disruptive technologies.
Attackers from Asia, Africa and, increasingly, Russia, for example, no longer need to fear language barriers either: with the new version of ChatGPT, they can be bridged with ease. Targets in Germany will no longer realise – or will realise too late – who they are actually dealing with. The days of clumsy spam messages riddled with grammatical errors appear to be over.
Content that cybercriminals place in malicious messages thus appears more “real”, and distinguishing legitimate from illegitimate traffic becomes far more complex. Heightened IT vigilance is therefore the order of the day for companies and organisations.
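A simplified sketch of why this matters: legacy-style filters that score messages on crude tells, such as misspellings and alarm keywords, wave fluent AI-polished text straight through. The rules, word lists and scores below are invented for illustration only and do not represent any real product.

```python
import re

# Sketch of a legacy-style spam heuristic (invented rules, for illustration):
# score a message by counting misspellings and alarm keywords.
ALARM_WORDS = {"urgent", "verify", "password", "account"}
COMMON_TYPOS = {"acount", "verfy", "pasword", "urgnet"}  # stand-in for a spell check

def naive_spam_score(message: str) -> int:
    words = re.findall(r"[a-z']+", message.lower())
    keyword_hits = sum(w in ALARM_WORDS for w in words)
    typo_hits = sum(w in COMMON_TYPOS for w in words)
    return keyword_hits + 2 * typo_hits

clumsy = "URGENT: verfy your acount password now!"
fluent = "Dear colleague, please review the attached quarterly figures by Friday."
print(naive_spam_score(clumsy))  # 6 -- flagged by the old heuristics
print(naive_spam_score(fluent))  # 0 -- a polished lure would pass unchallenged
```

A grammatically flawless phishing mail scores exactly like a legitimate one here, which is why detection increasingly has to rely on context, sender reputation and behaviour rather than surface errors.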
About the author
Markus Cserna’s work lays the foundation for cyan’s success: staying technologically ahead of internet fraudsters and competitors. He began his career as a software specialist for high-security network components before founding cyan in 2006 with the vision of protecting internet users worldwide from harm. Since then, he has led the company as CTO with a restless passion for cyber security technology, steadfastly keeping ahead of the curve in dynamic markets.
Markus Cserna can be reached online at our company website https://www.cyansecurity.com.