Implementing Proactive Measures for Better Cybersecurity
“Knowledge is knowing that Frankenstein is not the monster. Wisdom is knowing that Frankenstein is the monster.”
Knowledge is indeed recognizing that Frankenstein refers to the scientist who created the creature, not the creature itself. Wisdom, however, lies in understanding that the creature – often mistakenly called Frankenstein – is the tragic and misunderstood monster. In Mary Shelley’s classic novel Frankenstein, the scientist Victor Frankenstein brings the creature to life through a scientific experiment.
Nowadays, echoing the Frankenstein story, scientists have brought to life a creation that has undoubtedly revolutionized numerous industries, but has unfortunately also opened new avenues for cybercriminals. The use of artificial intelligence (AI) to launch sophisticated attacks, often referred to as AI attack vectors, is one of the most concerning phenomena of recent times.
These attack vectors cunningly leverage AI technologies such as natural language processing (NLP), machine learning, and deep learning to craft highly convincing scams, manipulate multimedia content, and deceive unsuspecting victims.
Derived from the corresponding concept of a vector in biology, attack vectors in cybersecurity are specific paths or scenarios that can be exploited to break into an IT system, thus compromising its security.
Today, the rapid progression and incorporation of AI into various sectors have not only transformed efficiency and capability, but have also opened a new frontier of cybersecurity challenges. This evolving threat landscape shaped by AI underscores the need for robust countermeasures and awareness as we adapt to a complex and rapidly changing field.
In this newsletter, we focus on the application of AI to attack systems, rather than delving into attacks specifically targeting AI systems, which is a separate topic.
Concise Definition of an AI Attack Vector
An AI-powered attack vector is a pathway or method used by a hacker to gain illegal access to a network or computer in order to exploit system vulnerabilities. Hackers use many attack vectors to launch attacks that take advantage of system weaknesses, cause data breaches, or steal login credentials.1
Such methods include distributing malware and computer viruses, malicious email attachments and web links, deceptive pop-up windows, and instant messages through which the attacker defrauds an employee.
Many such attacks are financially motivated: attackers steal money from people and organizations, or steal data such as personally identifiable information (PII) and then hold the owners to ransom. The attackers themselves are wide-ranging: organized crime, disgruntled former employees, politically motivated groups, professional hacking groups, or state-sponsored actors.
[1] FORTINET. What is an Attack Vector? Types & How to Avoid Them (fortinet.com)
Examples of AI-Powered Attacks
Phishing Emails
Attackers can leverage AI to generate convincing phishing emails that mimic the writing style and communication patterns of legitimate senders, making them more difficult to detect.
The proliferation of malevolent AI tools like WormGPT and FraudGPT has streamlined the orchestration and enhanced the efficiency of such attacks. Unlike human-written emails, AI-generated ones are remarkably error-free and consistent. Additionally, AI can craft phishing emails in multiple languages, lending an air of authenticity. Furthermore, personalized spear phishing attacks, aimed at specific individuals or organizations, are now facilitated by AI.
Identifying AI phishing emails has become increasingly difficult due to their high quality. Shockingly, according to the Egress Phishing Threat Trends Report, 71% of AI-generated email attacks go undetected. To detect potential phishing attempts, consider the following points:
- Compare the email content with previous communications from the supposed sender. Inconsistencies in tone, style, or vocabulary may raise suspicion.
- Pay attention to generic greetings (“Dear user” or “Dear customer”) instead of personalized ones.
- Be cautious if an email contains unexpected attachments; verify their legitimacy through other channels.
- Be watchful when a request comes with an urgency factor. Spear phishing emails also often insist on confidentiality. By and large, such requests deviate from the organization’s regular procedures.
The primary lesson to draw from phishing emails is to never take any email at face value. It does not cost much to confirm; the sketch below illustrates a few of these checks in code.
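To make a few of these checks concrete, here is a minimal sketch in Python that flags some of the textual warning signs listed above (generic greetings, urgency, confidentiality pressure, unexpected attachments). The keyword lists and the flag_phishing_signals helper are illustrative assumptions, not a production filter; genuine phishing detection also relies on sender history, authentication headers, and link analysis.

```python
# Illustrative sketch only: a few of the manual checks above expressed as
# simple heuristics. The keyword lists below are assumptions for illustration.
import re

GENERIC_GREETINGS = ("dear user", "dear customer", "dear client")
URGENCY_PHRASES = ("urgent", "immediately", "within 24 hours", "act now")
CONFIDENTIALITY_PHRASES = ("keep this confidential", "do not discuss", "between us")

def flag_phishing_signals(subject: str, body: str) -> list[str]:
    """Return a list of human-readable warnings for an email."""
    text = f"{subject}\n{body}".lower()
    warnings = []
    if any(text.startswith(g) or f"\n{g}" in text for g in GENERIC_GREETINGS):
        warnings.append("Generic greeting instead of a personalized one")
    if any(p in text for p in URGENCY_PHRASES):
        warnings.append("Urgency pressure detected")
    if any(p in text for p in CONFIDENTIALITY_PHRASES):
        warnings.append("Request for confidentiality detected")
    if re.search(r"\battach(ed|ment)\b", text):
        warnings.append("Mentions an attachment; verify through another channel")
    return warnings

if __name__ == "__main__":
    sample_body = (
        "Dear user, please process the attached invoice immediately "
        "and keep this confidential."
    )
    for warning in flag_phishing_signals("Urgent payment request", sample_body):
        print("-", warning)
```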
71% of AI-generated email attacks go undetected by filtering tools.
2023 Egress Phishing Threat Trends Report
Voice Cloning
AI algorithms can replicate voices with remarkable accuracy, allowing fraudsters to impersonate individuals over phone calls or voice messages to deceive victims.
For instance, voice cloning can be used to impersonate trusted individuals or authority figures, thereby increasing the likelihood of deceiving the target into following malicious instructions or revealing sensitive data. As AI continues to advance, the potential for misuse in social engineering schemes grows, highlighting the need for increased awareness and robust security measures to counteract these threats.
According to the US Federal Trade Commission, imposter scams, which may involve voice cloning, were the most frequent type of fraud reported in 2022, with over 5,000 victims losing $11 million. A global study by McAfee found that one in four people surveyed had experienced or knew someone who had encountered an AI voice cloning scam1. These statistics highlight the growing concern over the misuse of AI technologies and the importance of developing robust security measures to protect against such fraud.
To detect voice cloning schemes, consider the following points:
- Listen for unnatural intonations and speech patterns, which may indicate a synthetic voice.
- Be alert for anomalies in familiar voices, such as unexpected language choices or peculiar intonations.
- Use advanced voice analysis software to analyze audio for signs of manipulation.
- Implement real-time voice authentication systems to verify identity during calls.
- Monitor for inconsistencies in communication, such as mismatched emotions or contextually odd responses.
- Educate employees and colleagues about the risks of voice cloning and how to recognize suspicious calls.
- Establish protocols for verifying identities in situations where voice cloning is a potential risk.
These steps can help individuals and organizations identify and protect against the misuse of voice cloning technology.
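One way to operationalize the last two points is to make the verification protocol explicit rather than leaving it to the judgment of whoever receives the call. The sketch below, written under the assumption of a hypothetical internal directory of trusted callback numbers, shows the general shape of an out-of-band callback check: the recipient hangs up, calls back on a known number, and asks the requester to repeat a one-time code.

```python
# A minimal sketch, not a production system: it illustrates an out-of-band
# callback protocol for voice requests. The TRUSTED_DIRECTORY contents and
# the helper names are hypothetical; in practice this logic would live in an
# internal workflow tool, not a standalone script.
import secrets
from typing import Optional, Tuple

# Known-good contact numbers maintained independently of any incoming call.
TRUSTED_DIRECTORY = {
    "cfo@example.com": "+1-555-0100",  # hypothetical entry
}

def start_verification(requester_id: str) -> Optional[Tuple[str, str]]:
    """Return the trusted callback number and a one-time code, or None."""
    number = TRUSTED_DIRECTORY.get(requester_id)
    if number is None:
        return None  # unknown requester: escalate instead of acting
    code = secrets.token_hex(3)  # short one-time code, e.g. 'a1b2c3'
    return number, code

def confirm_verification(expected_code: str, spoken_code: str) -> bool:
    """The requester must read the code back on the callback, not the original call."""
    return secrets.compare_digest(expected_code, spoken_code)

if __name__ == "__main__":
    result = start_verification("cfo@example.com")
    if result is not None:
        number, code = result
        print(f"Hang up, call back {number} on a separate line, and ask for code {code}")
        print("Verified:", confirm_verification(code, code))
```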
Deepfake Videos
By utilizing deep learning algorithms, malicious actors can create fake videos that convincingly depict individuals saying or doing things they never did, leading to reputational damage or financial loss. Deepfake technology poses a significant threat because it enables the creation of incredibly realistic audio and video content that can be used to manipulate individuals and organizations alike.
Detecting deepfake videos involves several techniques:
- Visual Clues: Look for inconsistencies in lip-syncing, facial expressions, or lighting that may indicate manipulation.
- Audio Anomalies: Listen for irregularities in the voice or audio that do not match the visual content.
- Third-Party Verification: Use tools that analyze the content’s digital footprint or blockchain to verify its authenticity.
- Critical Analysis: Apply critical thinking and scrutinize the source of the video, especially if it makes sensational claims.
- Report Suspicious Content: If a video seems dubious, report it to the appropriate authorities or platforms for further investigation.
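As a rough illustration of the “Visual Clues” point, the sketch below samples frames from a video and compares the sharpness of detected face regions with that of the whole frame, since some manipulated videos show unnatural smoothing around faces. It assumes opencv-python is installed and that video.mp4 is a hypothetical local file; it is a crude heuristic for manual triage, not a reliable deepfake detector, and dedicated forensic tools remain necessary.

```python
# Illustrative sketch only: flag frames where the face region is noticeably
# blurrier than the rest of the scene, a possible (but far from conclusive)
# sign of face manipulation. Assumes opencv-python is installed.
import cv2

face_model = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def sharpness(img) -> float:
    """Variance of the Laplacian: higher means sharper."""
    return cv2.Laplacian(img, cv2.CV_64F).var()

def inspect_video(path: str, sample_every: int = 30) -> None:
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_model.detectMultiScale(gray, 1.1, 4)
            for (x, y, w, h) in faces:
                face_sharp = sharpness(gray[y:y + h, x:x + w])
                frame_sharp = sharpness(gray)
                if frame_sharp > 0 and face_sharp < 0.5 * frame_sharp:
                    print(f"Frame {index}: face much blurrier than scene "
                          f"({face_sharp:.1f} vs {frame_sharp:.1f}) - inspect manually")
        index += 1
    capture.release()

if __name__ == "__main__":
    inspect_video("video.mp4")  # hypothetical local file
```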
The emergence of deepfake technology also poses a significant challenge to the integrity of elections. As seen in recent incidents, deepfakes have been used to impersonate political figures and spread misinformation, potentially swaying voter perceptions and undermining trust in democratic processes.
Case Study: Deepfake CFO Scam
A recent incident reported by CNN1 sheds light on the dangers posed by AI-driven attacks.
In February 2024, according to Hong Kong police, a finance worker at a multinational firm fell victim to a cunning scam orchestrated using deepfake technology. The fraudsters used AI to create a convincing digital replica of the company’s Chief Financial Officer (CFO) and deployed it during a video conference call.
The elaborate scam saw the finance worker duped into attending a video call with what he thought were several other members of staff, all of whom were in fact deepfake recreations.
During the video call, the unsuspecting finance worker was led to believe that he was interacting with the legitimate CFO. Believing everyone else on the call was real, he agreed to remit a total of HK$200 million, about US$25.6 million.
As a result, he authorized the payment to the fraudsters. This incident serves as a stark reminder of the sophistication and effectiveness of AI-powered attack vectors.
This deepfake case is one of several recent cyberattacks in which fraudsters are believed to have used deepfake technology to modify publicly available video and other footage in order to steal money from victims.
[1] CABLE NEWS NETWORK – CNN. Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’ | CNN
Preventative Measures and Security Tips
Although AI-based attack vectors pose significant challenges, there are measures that individuals and organizations can take to mitigate the risks:
- Employee Training: Educate employees about the potential dangers of AI-based attacks, including deepfake technology, email phishing, and vishing scams. Encourage skepticism and provide training on how to identify suspicious communications and requests.
- Verification Protocols: Implement robust verification procedures for high-value financial transactions or sensitive communications. Such double-checking may include multi-factor authentication, confirmation calls, or in-person verification for critical actions (see the sketch after this list).
- AI Detection Tools: Invest in AI-powered detection tools that can identify and flag suspicious content, such as deepfake videos or AI-generated phishing emails. These tools can help organizations stay one step ahead of cybercriminals.
- Stay Informed: Keep abreast of the latest developments in AI technology and cyber threats. By staying informed, organizations can adapt their cybersecurity measures to address emerging risks effectively.
- Transparent Collaboration: Foster transparent collaboration between cybersecurity professionals, AI experts, and law enforcement agencies to develop strategies for collaboratively combatting cyberattacks that exploit AI-generated attack vectors.
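As an illustration of the Verification Protocols point above, the sketch below encodes a hypothetical payment policy: transactions above a threshold require at least two approvers other than the requester, plus an out-of-band confirmation, before they can be executed. The threshold, field names, and workflow are assumptions for illustration only, not a description of any real system.

```python
# A minimal sketch, assuming a hypothetical internal payment workflow: it
# makes the verification rules explicit instead of leaving them to the
# judgment of a single employee on a video call.
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 10_000  # illustrative threshold in USD

@dataclass
class PaymentRequest:
    amount_usd: float
    requested_by: str
    approvers: list[str] = field(default_factory=list)
    confirmed_out_of_band: bool = False  # e.g. callback to a known number

def may_execute(request: PaymentRequest) -> tuple[bool, str]:
    """Check whether a payment satisfies the verification policy."""
    if request.amount_usd < HIGH_VALUE_THRESHOLD:
        return True, "Below high-value threshold"
    distinct_approvers = set(request.approvers) - {request.requested_by}
    if len(distinct_approvers) < 2:
        return False, "Needs at least two approvers other than the requester"
    if not request.confirmed_out_of_band:
        return False, "Needs out-of-band confirmation (callback or in person)"
    return True, "All verification steps satisfied"

if __name__ == "__main__":
    request = PaymentRequest(amount_usd=25_000_000, requested_by="finance.worker")
    print(may_execute(request))  # blocked: no independent approvers yet
```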
Conclusion
Beyond the realm of finance, AI-powered phishing attacks have become increasingly sophisticated and difficult to detect. With tools like WormGPT and ChatGPT, cybercriminals can create convincing messages that mimic human communication, making it challenging for recipients to distinguish between legitimate and malicious emails.
As AI technology continues to evolve, it is imperative for organizations to adopt proactive approaches to cybersecurity and to leverage appropriate safeguards in order to enhance their resilience against AI-driven threats.
Contributions
Special thanks to the National Research Council of Canada for their financial support.
Author: Victor Oriola
Executive Editor: Alan Bernardi
Reviser, Proofreader & Translator: Ravi Jay Gunnoo