Evolving Cybersecurity: Gen AI Threats and AI-Powered Defence

Technology continues to advance at an unprecedented pace, with artificial intelligence (AI) being at the forefront of this revolution. Integrating AI into various aspects of our lives has made things more convenient and efficient. However, as with any technological tool, there are also risks and challenges that come along with it. One area where we see these challenges manifesting is in the field of cybersecurity. Today, I will explore the evolving security threats that arise due to the rapid development of AI, often referred to as GenAI.

Thanks to Generative AI, the digital world is poised to witness a surge in evolving security threats, presenting a formidable challenge for cybersecurity experts and organisations. Anticipating advanced forms of cyberattacks, hyper-personalised phishing emails, and targeting private large language models through code injections, the year ahead demands heightened vigilance and innovative cybersecurity solutions from organisations and countries.

The Evolving Landscape of AI Cyber Attacks

The ever-evolving nature of cyber threats continues to escalate, with 2024 expected to witness a proliferation of advanced cyberattacks. As AI becomes more sophisticated, cyber attackers are also finding innovative ways to exploit these advancements for malicious purposes. The traditional methods of cyberattacks, such as phishing emails and code injections, are now being infused with higher levels of intelligence, making them significantly more challenging to detect and combat. These advanced forms of cyberattacks pose a significant threat to individuals, businesses, and even entire nations.

These advanced attacks may include sophisticated ransomware, zero-day exploits, deepfake phishing, and AI-driven malware that can adapt and evolve quickly. Integrating quantum-safe encryption, explored by companies like ISARA Corporation, becomes crucial in safeguarding sensitive data against quantum computing-powered threats, adding an additional layer of defence in the face of increasingly sophisticated cyber adversaries.

As the arms race between cyber attackers and defenders escalates, individuals, organisations, and governments must stay vigilant and adapt to these advanced forms of cyberattacks. Implementing robust security measures, raising awareness about the evolving threat landscape, and investing in cutting-edge cybersecurity technologies are essential steps in safeguarding against the ever-growing sophistication of cyber threats.

AI-Powered Cyberattacks

The landscape of cybersecurity threats has witnessed a surge in sophistication, with several high-profile incidents showcasing the evolving tactics of malicious actors. Notably, the MOVEit campaign orchestrated by the Russian-speaking group Clop underscored the exploitation of a critical vulnerability in Progress' MOVEit file transfer tool. In a departure from traditional ransomware attacks, Clop opted for data extortion, threatening to leak stolen data on the dark web unless victims paid their demands. Coveware estimated potential earnings between $75 million and $100 million for Clop, impacting 2,667 organisations and nearly 84 million individuals, including major entities like IBM, Cognizant, Deloitte, and prominent government agencies.

Another significant breach unfolded with the Barracuda Email Security Gateway (ESG) Attacks, where a critical vulnerability was exploited, affecting approximately 5 per cent of active ESG appliances. Attributed to UNC4841, a group believed to be aligned with China's government, the campaign disproportionately targeted government agencies, particularly in the US. Barracuda's rare recommendation for affected customers to replace compromised appliances highlighted the severity and persistence of the attacks.

In a separate incident, the Microsoft Cloud Email Breach raised concerns about the security of cloud email accounts belonging to US government agencies. Initially discovered in June, the breach impacted high-profile figures such as Commerce Secretary Gina Raimondo and US Ambassador to China Nicholas Burns. The compromise of 60,000 emails from 10 US State Department accounts prompted a federal investigation request from US Sen. Ron Wyden.

Microsoft later disclosed additional vulnerabilities that enabled the China-linked threat actor, known as "Storm-0558," to compromise cloud email accounts of US officials. Flaws in Azure Active Directory key handling and unauthorised access through a compromised corporate account added layers of complexity to the incident. 

These recent examples illuminate the multifaceted nature of cybersecurity threats, emphasising the need for continuous vigilance, proactive measures, and collaboration across industries to fortify digital defences against evolving and persistent adversaries. Considering this intricate threat landscape, addressing vulnerabilities and implementing robust security measures become imperative to safeguard sensitive data and maintain the integrity of digital ecosystems.

Hyper-personalised Phishing Emails

The art of deception in the digital world is taking a personalised turn, with hyper-personalised phishing emails emerging as a prevalent threat. In 2024, we might expect cybercriminals to employ advanced social engineering techniques, tailoring phishing attempts to individual preferences, behaviours, and even recent activities.

Startups like Tessian, which employs machine learning to analyse human behaviour in real time, are working towards developing solutions to detect and prevent these highly targeted phishing attacks. As email remains a primary vector for cyber threats, organisations must bolster their defences by integrating AI-driven tools to discern malicious intent from seemingly innocuous communication.

Code Injections Targeting Private LLMs

The rise of LLMs introduces a new frontier for cyber threats, with code injections targeting private LLMs becoming a concerning trend. Companies like OpenAI and Microsoft, pioneers in LLM development, invest in secure-by-design principles to mitigate these risks. In 2024, cyber attackers may attempt to exploit vulnerabilities in private LLMs through injected code, leading to unauthorised access, data breaches, or manipulation of AI-generated content.

As these models become integral to various industries, from healthcare to finance, safeguarding against code injections becomes paramount, necessitating a collaborative effort between AI researchers and cybersecurity experts to fortify these critical components of our digital infrastructure. 

The peril associated with code injections in Large Language Models, irrespective of their proprietary or open-source nature, manifests as a multifaceted threat. For proprietary models crafted by industry leaders such as OpenAI and Microsoft, the unauthorised introduction of code may result in clandestine access to sensitive data and intellectual property, potentially culminating in damaging data breaches.

The repercussions around open-source LLMs are similarly profound, with injected code jeopardising access to the model's architecture and training data, impacting a broad spectrum of users and organisations relying on the collaborative knowledge pool.

A critical concern is manipulating AI-generated content, where injected code could distort the output of proprietary LLMs, yielding misleading or biased information. This has the potential to significantly influence decision-making processes dependent on AI-generated insights. In the open-source domain, the collaborative usage of LLMs amplifies this risk, as manipulated information disseminates across diverse sectors, from journalism and research to online communication.

In addition, both proprietary and open-source LLMs are susceptible to compromised functionality and performance due to code injections. In proprietary models, this compromise may disrupt crucial business processes, while in the open-source landscape, it could impact a broad user community relying on these models for diverse applications.

Beyond the immediate technical ramifications, the erosion of trust in AI systems is a pervasive consequence. Whether proprietary or open source, code injections in LLMs instil scepticism among users, casting doubt on the reliability and security of AI-generated content.

As these models play an increasingly integral role in critical industries, a collaborative approach involving secure development practices, continuous monitoring, and active engagement from AI researchers and cybersecurity experts becomes imperative. Proactive measures are essential to fortify the integrity and security of LLMs, ensuring their continued contribution to various applications without compromising trust and reliability. 

Deepfake Phishing

The proliferation of deepfake technology presents multifaceted challenges within the corporate world. Enterprises might encounter significant reputational and financial danger if they become the focal points of deepfake-centric malevolent campaigns targeting their leadership or workforce. Such activities could include impersonation or spreading false information about the company.

A case in point occurred recently when a financial operative at a global conglomerate was defrauded of $25 million via an elaborate scheme leveraging deepfake phishing technology. During a video conferencing session, the criminals mimicked the organisation's Chief Financial Officer and other key personnel, utilising advanced multi-entity deepfakes that were sufficiently convincing to circumvent the employee's initial phishing misgivings.

The incident, part of a growing trend where artificial intelligence is weaponised for financial fraud, underscores the urgent need for digital literacy and robust verification mechanisms in the corporate world. It's a vivid reminder that in the age of AI, seeing is not always believing. 

With the democratisation of deepfake capabilities, malevolent actors may harness this technology to bypass biometric authentication protocols or illicitly infiltrate protected sectors. The ramifications of such actions could be profound, impacting not only the corporations but also individuals dependent on these mechanisms for their safeguarding and integrity.

Cracking AI-Assistant Chats

Since the launch of ChatGPT in 2022, AI-powered chatbots have become the new norm. Many of us share our deepest (company) secrets with them, blissfully unaware that no secret is safe in the digital world.

Recently, researchers unveiled a method allowing hackers to decipher encrypted AI-assistant chats, exposing sensitive conversations ranging from personal health inquiries to corporate secrets.

This side-channel attack, sparing Google Gemini but ensnaring other major AI chat services, exploits token-length sequences to unveil response content with notable accuracy. As AI interactions become ingrained in our daily lives, this vulnerability challenges the perceived security of encrypted chats. It raises critical questions about the balance between technological advancement and user privacy.

This revelation pushes us to ponder: in the quest to make AI assistants more human-like and responsive, have we inadvertently stripped away their discretion?

DarkGemini: ChatGPT for Criminals

Given the above examples, it was only a matter of time before we saw a Large Language Model trained specifically for nefarious purposes. Recently, a new AI chatbot known as DarkGemini emerged, available through a subscription on the dark web. For $45 per month, this chatbot provides capabilities that include generating reverse shells, creating malware, and locating individuals from their images. Touted as a tool to democratise GenAI for potential malicious users, DarkGemini exemplifies a troubling trend in AI misuse.

0:00
/0:39

Developers from the underground community launched Dark Gemini, an AI-driven service that is presumed to manipulate input to legitimate large language models (LLMs) to circumvent their ethical safeguards. This service aims at generating malicious code and locating individuals in images, showcasing the potential misuse of AI with relatively simple adaptations.

Despite not causing significant alarm within the cybersecurity community, Dark Gemini's development is noteworthy. It illustrates how even basic alterations to existing AI frameworks can enhance traditional cyber threats like phishing. Adapting a front-end interface to bypass LLM restrictions indicates a broader trend where minimalistic yet sophisticated AI tools are crafted for malevolent purposes. 

Moreover, the dark web has seen an increase in AI chatbots designed with malevolent intent, such as FraudGPT, WormGPT, and DarkBART. These bots leverage uncensored AI models derived from sources like Llama2 and the hybrid Wizard-Vicuna, further exacerbating the risk landscape. These models and tools, available through various dark web outlets, signify an increasing commodification of AI technologies for unethical and illegal purposes. 

Using AI to Secure Your Company

Although generative AI can and will be used to cause harm to organisations and individuals, it can also defend an organisation.

In fact, there are eight emerging areas where AI could fortify defences against increasingly sophisticated cyber threats. From automating vendor risk management to enhancing security training, AI's integration is poised to revolutionise how we protect digital domains.

Penetration testing, anomaly detection, code review improvements, dependency management, defence automation, and synthetic content verification are among the critical areas identified for AI's transformative impact, offering a glimpse into a future where security operations are reactive, predictive, and proactive.

AI can be used in a wide range of domains to protect your organisation, including highlighting AI's potential to streamline security operations and combat the dynamic threat landscape.

Incorporating (generative) AI in your security organisation takes time, and as we have seen, criminals are actively using these technologies to improve their operations. Unfortunately, while 97% of IT leaders trumpet the necessity of fortifying AI and MLOps systems, a mere fraction wields the confidence that budgets will allow them to do so. The stark dichotomy between understanding the importance of security and the preparedness to tackle adversarial threats highlights a gap that could have dire repercussions.

This stark contrast between the recognised imperative to shield AI infrastructures and the scant deployment of defences against adversarial onslaughts underscores broader industry inertia, underscoring the urgency and necessity of adopting AI not only to counteract but to stay ahead of cyber attackers in the relentless tech arms race.

Final Thoughts

The rapid development of AI, particularly GenAI, poses significant security threats that demand our attention. As cyber attackers continue to exploit the power of AI for their nefarious purposes, we must stay vigilant and adapt our cybersecurity strategies to tackle these evolving threats head-on. Measures such as advanced threat detection algorithms, improved employee training programs, and integration of innovative AI frameworks hold promise in mitigating these risks and ensuring a safer digital landscape for all.

Startups and established players alike must remain agile in developing and deploying proactive cybersecurity measures to stay ahead of the ever-mutating threat landscape, ensuring the continued security of our interconnected digital world. 

This year, the cybersecurity environment is poised for significant shifts as more sophisticated attacks loom on the horizon. If these expectations materialise, the consequences could be profound, ushering in a transformative era where the balance of power might shift from companies to criminals. This pivotal juncture in cybersecurity suggests that organisations and countries must be vigilant and proactive in adapting to emerging threats, potentially reshaping the dynamics of influence and control in the digital age.

Images: Midjourney