Over the years, I’ve watched artificial intelligence (AI) evolve from the scenes of sci-fi movies into a powerful, real-world tool that’s reshaping the world as we know it, with exciting possibilities and significant risks! It’s transforming how we approach cybersecurity, requiring us to rethink our defences against new kinds of threats. During my time organising and leading AI initiatives at Deloitte, I often considered AI’s impact on organisations from a cybersecurity perspective. The concepts below aren’t new, but recent real-world examples highlight their continued relevance and the lessons we can learn.
Firstly, it’s worth elaborating a little on AI, as it’s a somewhat loosely used term these days. It seems everything claims to be “powered by AI”, even toothbrushes! So, when people refer to AI, they might actually mean a range of approaches. For example, they could be talking about statistical methods used to group data or identify anomalies and outliers, or about machine learning (ML) models that excel at recognising patterns. Others might mean generative AI (GenAI), which can craft remarkably human-like responses by predicting the next word or character, as seen in tools like ChatGPT. Then there’s artificial general intelligence (AGI), a level of AI that’s still in the future, although some believe it’s closer than we might think.
Each approach has unique capabilities and risks, especially when it comes to protecting sensitive business operations. However, the very technology that enhances our defences also introduces new vulnerabilities. Here’s a look at how AI is impacting cybersecurity today and what businesses can learn from recent real-world incidents.
Public AI models like ChatGPT, Claude, and others are incredibly powerful and easy to integrate into daily workflows. I had a colleague who used ChatGPT for a project, unknowingly feeding proprietary information into the tool. The convenience was real, but so were the risks around data retention and unauthorised access.
Organisations using these models face potential data leakage and prompt injection attacks. Since these models might retain fragments of submitted data, there’s a risk of unintentional exposure of sensitive information, especially if the AI tool is accessed by multiple users or integrated into a broad range of operations.
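To make the data-leakage point concrete, here’s a minimal sketch of how an organisation might redact obviously sensitive values before a prompt leaves its boundary for a public AI model. The patterns and names are purely illustrative (not from any engagement mentioned here), and a real deployment would lean on a proper data loss prevention tool rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real policies would be tuned to the business
# and enforced by a dedicated data loss prevention (DLP) layer.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely sensitive values with placeholders before the text
    is sent to an external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

prompt = "Summarise this: contact jane.doe@example.com, key sk-abc123def456ghi789."
print(redact(prompt))
# Summarise this: contact [REDACTED_EMAIL], key [REDACTED_API_KEY].
```

Even a crude filter like this reduces the chance of credentials or customer identifiers ending up in a third-party model’s logs, though it does nothing about the harder problem of staff pasting in whole documents.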
Developing custom AI offers businesses unmatched flexibility and control, but it also opens the door to specific security risks. I recall assisting a client with an AI model that would analyse customer data for insights. We realised how vulnerable this proprietary model was to data poisoning attacks, where an attacker could subtly introduce corrupt data to distort outputs.
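As a rough illustration of one defence against data poisoning, the sketch below (using hypothetical data and thresholds) screens incoming training records against the statistics of the existing dataset before they are accepted for retraining. Subtle poisoning is specifically designed to slip under checks like this, so in practice it sits alongside data provenance controls and human review rather than replacing them.

```python
import numpy as np

def flag_outliers(existing: np.ndarray, incoming: np.ndarray,
                  z_threshold: float = 3.0) -> np.ndarray:
    """Flag incoming records whose features sit far from the existing
    training distribution. A crude first line of defence against poisoned
    training data, not a complete one."""
    mean = existing.mean(axis=0)
    std = existing.std(axis=0) + 1e-9          # avoid division by zero
    z = np.abs((incoming - mean) / std)        # per-feature z-scores
    return (z > z_threshold).any(axis=1)       # True = suspicious record

# Hypothetical example: 2-feature customer records
existing = np.random.default_rng(0).normal(loc=[50, 1.0], scale=[5, 0.1], size=(1000, 2))
incoming = np.array([[52, 1.05],      # plausible record
                     [300, -4.0]])    # wildly out of range -> flagged
print(flag_outliers(existing, incoming))   # [False  True]
```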
In-house models are also susceptible to model extraction, where attackers reconstruct a proprietary model’s behaviour by systematically querying it and learning from its outputs. Once an AI model is compromised, the ramifications can include intellectual property theft, revenue loss, and even brand damage if the outputs of compromised models are used maliciously.
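Model extraction typically shows up as unusually heavy or systematic querying of a model endpoint, so one common mitigation is per-client query monitoring. The sketch below uses hypothetical names and thresholds and is only meant to show the shape of such a control; real limits depend on what legitimate usage looks like for the business.

```python
import time
from collections import defaultdict, deque

# Hypothetical threshold; tune to legitimate usage patterns.
MAX_QUERIES_PER_HOUR = 500

class ExtractionMonitor:
    """Tracks per-client query rates on a model endpoint. Sustained
    high-volume querying is one signature of model extraction, where an
    attacker rebuilds a model from its input/output pairs."""

    def __init__(self):
        self._history = defaultdict(deque)   # client_id -> query timestamps

    def record_query(self, client_id: str, now: float | None = None) -> bool:
        """Record a query; return True if the client should be throttled or reviewed."""
        now = time.time() if now is None else now
        window = self._history[client_id]
        window.append(now)
        while window and now - window[0] > 3600:   # keep a rolling 1-hour window
            window.popleft()
        return len(window) > MAX_QUERIES_PER_HOUR

monitor = ExtractionMonitor()
suspicious = any(monitor.record_query("client-42", now=i) for i in range(600))
print(suspicious)   # True: 600 queries inside an hour exceeds the limit
```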
We were recently engaged to develop a virtual CEO for a client, an AI agent that would provide staff with guidance and assist with decision making in the absence of the very busy CEO. The agent was trained on content provided by the CEO, and it made us wonder just how far the CEO emulation could go. AI has evolved to allow attackers to launch more convincing and complex attacks, especially through deepfake phishing, where ML-generated audio or video convincingly impersonates people. AI also powers polymorphic malware that alters its code to evade detection. This technology is now in the hands of not only lone attackers but also state-sponsored hackers and criminal organisations, making it a global challenge.