Cybercrime. For many, the mention of this word might raise the semi-comic image of a far-off ‘prince’ emailing vulnerable individuals to request money in exchange for grandiose rewards. But cybercrime takes many forms, is far more common than you might think, and has reached an all-time high in the last few years.
The email from a ‘prince’, or from any individual claiming to be a trusted source, that requests money or personal data or urges you to click a specific link is known as phishing. It is just one among many forms of cybercrime. Other types include:
- Social Engineering – Manipulating people into giving up access to confidential information.
- Spam – Emails or texts sent in large numbers which encourage individuals to click links that install malware on their devices.
- Malware and Ransomware – Intrusive software designed to steal data or to damage and destroy systems; ransomware blocks access to systems until a ransom is paid.
- Whaling – Targeting individuals in senior positions for sensitive information or monetary gain. A common variant sees attackers impersonate someone inside the organization, such as your boss: they use social media to build up a profile, then send employees an urgent email from ‘the boss’ concerning money matters.
- Island Hopping – Targeting an organization’s more vulnerable third-party partners to gain access to the company’s network.
Although some of these attacks might seem like something out of a spy movie, they are actually quite common occurrences. Cybercrime reached an all-time high during the pandemic, with the UK Government reporting that almost half (46%) of businesses experienced cybersecurity breaches or attacks in 2020. The increase in remote working and the relaxation of control environments likely contributed to the drastic increase in cybercrime (31%) during the peak of the pandemic.
Since the pandemic the situation has improved: the UK Government reported that 32% of businesses faced cybersecurity breaches or attacks in 2023. Yet cybercrime remains notoriously prevalent and is a serious concern for many individuals and organizations. Developments in AI and Large Language Models (‘LLMs’) have contributed to its proliferation: LLMs are being used to craft more convincing phishing messages and, through bots, to bypass automated defences designed to identify suspicious behaviour. Furthermore, the use of public LLMs like ChatGPT, and the consequent transfer of data to these third-party systems, has increased the risk of data breaches, unauthorised access, re-identification, and non-compliance.
(For more information about the risks of public LLMs and how private LLMs like SCOTi can help counter these challenges, please check out our last blog!)
Yet LLMs and generative AI can also be used to improve cybersecurity and strengthen individuals’ protection. As we have often emphasised, AI is neither inherently good nor evil; it is merely a tool, and as such it can be used either to protect or to attack.
The amount of data flowing through most organizations is far too large for any human cybersecurity professional to process or protect. This is where AI systems that constantly scan for digital anomalies and detect cyber-attacks become essential. AI’s pattern recognition abilities can also help prevent cyberattacks: by analysing past attack patterns, these systems can predict future threats and enable security teams to proactively strengthen defences.
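To make this concrete, here is a minimal sketch of the kind of anomaly detection involved, using scikit-learn’s IsolationForest on a few per-session features. The feature names, thresholds and synthetic data are our illustrative assumptions, not a production design:

```python
# A minimal sketch of AI-based anomaly detection on network activity.
# Assumptions: the features, thresholds, and synthetic data below are
# illustrative only -- a real deployment would use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [bytes_sent, failed_logins, login_hour]
normal_traffic = np.column_stack([
    rng.normal(50_000, 10_000, 1_000),   # typical data volume
    rng.poisson(0.2, 1_000),             # occasional failed logins
    rng.normal(13, 2, 1_000),            # activity centred on office hours
])

# Train on historical "known-good" behaviour, then score new sessions.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

new_sessions = np.array([
    [52_000, 0, 14],      # looks like a normal working day
    [900_000, 25, 3],     # huge transfer, many failed logins, 3 a.m.
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    verdict = "ANOMALY -- escalate to analyst" if label == -1 else "normal"
    print(session, "->", verdict)
```

The same idea scales up: a model trained on months of normal behaviour can flag the handful of suspicious sessions no human team could find by hand.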
LLMs also make the day-to-day work of cybersecurity professionals easier. Unlike humans, they can analyse and respond to cyberattacks the moment they occur, mitigating damage by isolating affected systems, deploying patches, or updating firewall rules, and they can provide real-time guidance to security analysts. They can even bait attackers, deceiving them into giving up their position or details of their attack, whether by scouring the dark web for intelligence or by creating convincing decoys such as deepfakes.
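As a rough illustration of that triage loop, the sketch below feeds an alert to a stand-in LLM call and acts on its structured answer. The `ask_llm` function, the alert format and the `block_ip` stub are hypothetical placeholders, not any real vendor’s API:

```python
# A sketch of LLM-assisted incident triage. `ask_llm` is a hypothetical
# stand-in for whatever model endpoint an organization uses; the alert
# format and the firewall stub are illustrative assumptions.
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call -- replace with your provider's client."""
    # Canned response so the sketch runs end to end without a live model.
    return json.dumps({
        "severity": "high",
        "action": "block_ip",
        "target": "203.0.113.45",
        "analyst_note": "Repeated failed SSH logins followed by success.",
    })

def block_ip(ip: str) -> None:
    # In production this would update firewall rules; here we just log it.
    print(f"[firewall] added deny rule for {ip}")

alert = "2024-05-01T03:12Z sshd: 25 failed logins then success from 203.0.113.45"
response = json.loads(ask_llm(
    "Classify this security alert and suggest one containment action "
    f"as JSON with keys severity, action, target, analyst_note:\n{alert}"
))

print("Guidance for analyst:", response["analyst_note"])
if response["severity"] == "high" and response["action"] == "block_ip":
    block_ip(response["target"])   # automated containment, human-reviewable
```

Keeping the model’s output structured, and keeping a human able to review each automated action, is what makes this kind of loop safe to run at machine speed.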
LLMs also have a pre-emptive role to play in the fight against cyberattacks. Code assistants can help reduce bugs during software development; indeed, some empirical studies suggest that AI code assistants introduce fewer vulnerabilities than human developers. Simulating phishing attacks with LLMs can also improve an organization’s overall security by giving employees hands-on security training.
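In outline, a phishing-simulation exercise of that kind might look something like the sketch below; the lure templates, employee addresses and assumed reporting rate are purely illustrative:

```python
# A minimal sketch of an internal phishing-simulation exercise. The
# templates, employee list, and reporting flow are illustrative
# assumptions; real programmes use dedicated security-training platforms.
import random

TEMPLATES = [
    "IT notice: your password expires today -- follow the link to renew.",
    "Finance: urgent invoice attached, please approve before 5 pm.",
]

employees = ["alice@example.com", "bob@example.com"]
results = {}

for address in employees:
    lure = random.choice(TEMPLATES)
    # In a real exercise the email is sent through a controlled gateway;
    # here we just simulate whether the employee reports it as phishing.
    reported = random.random() < 0.6   # assumed 60% reporting rate
    results[address] = {"lure": lure, "reported": reported}

report_rate = sum(r["reported"] for r in results.values()) / len(results)
print(f"Simulated campaign report rate: {report_rate:.0%}")
for address, r in results.items():
    follow_up = "ok" if r["reported"] else "needs refresher training"
    print(address, "->", follow_up)
```

The value lies in the follow-up: employees who miss the simulated lure get targeted training before a real attacker finds them.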
The benefits outlined in this article are not exhaustive: LLMs can clearly do a great deal to protect individuals from cybercrime, even as they contribute to a number of new threats. Given the prevalence of cybercrime, it is important for individuals and organizations to be aware of the forms it can take and the threats it poses. Cybercrime is not going anywhere, and we should all learn how to protect ourselves as best we can.
Written by Celene Sandiford, smartR AI
Photo credit: https://deepai.org/machine-learning-model/text2img