What You Need to Know:
5 Ways Cybercriminals Are Using AI
Modern cybercriminals are leveraging artificial intelligence (AI) to enhance their attacks. From phishing and deepfakes to malware generation and content localization, AI is being used in numerous ways to breach security defenses.
Let's delve into how cybercriminals use AI to steal credentials and gain unauthorized access to high-value networks:
The Value of Stolen Credentials
Stolen credentials, especially current, working sets of usernames and passwords, are invaluable to cybercriminals. These credentials let them access systems and take over accounts with less risk of triggering threat alerts. Once inside, they often carry out network reconnaissance, privilege escalation, data exfiltration, and other preparatory steps for further attacks. This can lead to ransomware deployment or a long-term foothold of the kind maintained by advanced persistent threat (APT) groups such as the notorious Volt Typhoon.
AI-Powered Password-Based Attacks
Cybercriminals don’t solely rely on stolen credentials. They also employ various methods to compromise remote access points or guess credentials. Here are some common tactics:
- Credential Stuffing: Using stolen credentials from one breach to access multiple accounts, exploiting the common practice of reusing passwords across different services.
- Password Spraying: Automated attempts to try a few common passwords against many usernames, particularly effective against cloud services and remote access points (see the detection sketch after this list).
- Brute Force Attacks: Automated processes that guess passwords using all possible combinations of characters, often starting with known default usernames and passwords.
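To see what these patterns look like from the defender's side, here is a minimal sketch that flags a possible password-spraying source: one IP address failing logins against many distinct usernames within a short window. The log fields and thresholds are illustrative, not taken from any particular product.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical log format: each event is a dict with
# 'timestamp' (datetime), 'src_ip', 'username', and 'success' (bool).
def flag_password_spraying(events, window=timedelta(minutes=30), min_usernames=20):
    """Flag source IPs that fail logins against many distinct usernames
    within a short window -- a rough signature of password spraying."""
    failures = defaultdict(list)  # src_ip -> list of (timestamp, username)
    for e in events:
        if not e["success"]:
            failures[e["src_ip"]].append((e["timestamp"], e["username"]))

    suspicious = {}
    for ip, attempts in failures.items():
        attempts.sort(key=lambda a: a[0])
        start = 0
        for end in range(len(attempts)):
            # Slide the window forward so it spans at most `window` of time.
            while attempts[end][0] - attempts[start][0] > window:
                start += 1
            distinct_users = {u for _, u in attempts[start:end + 1]}
            if len(distinct_users) >= min_usernames:
                suspicious[ip] = len(distinct_users)
                break
    return suspicious
```

Real detections would also weigh geolocation, user agents, and known-good IP ranges; the thresholds above are arbitrary.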
Attacks like these underscore the relentless efforts of cybercriminals to exploit weak or reused credentials and access sensitive information. So what is driving the increase in credential theft? Increasingly, the answer is AI.
How AI Enhances Credential Theft
Phishing and Social Engineering
AI can generate highly convincing phishing emails that mimic legitimate communications. These emails can be personalized using publicly available information about the recipient, making them far more likely to deceive people into handing over credentials or granting system access.
Deepfakes
AI is used to create realistic video or audio that impersonates trusted figures. In 2019, for example, a voice-phishing (vishing) attack used an AI-cloned executive voice to convince an employee to transfer $243,000. More recently, criminals used deepfaked audio of a company director to persuade a bank manager to authorize transfers totaling $35 million.
Malware Development
Generative AI (GenAI) can create ‘smart’ malware that adapts its code to evade detection by traditional security systems. Such malware is often used to steal credentials and other sensitive information from infected systems.
Automated Reconnaissance
AI enables quick data processing, allowing cybercriminals to identify targets and vulnerabilities rapidly. They can scan and map networks, gather information from public sources (OSINT), and identify misconfigurations and other vulnerabilities.
AI Chatbots
Threat actors deploy AI chatbots to automate social engineering attacks, such as phishing or pretexting, increasing the scale and efficiency of their operations.
Protecting Against AI-Powered Attacks
Defending against AI-enhanced credential theft requires a combination of strong security policies, user education, and advanced security solutions. Here are some key measures:
Use Strong, Unique Passwords
Employ a password manager to generate and store complex passwords, ensuring that each account has a unique password.
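If you want to script this rather than rely on a manager's built-in generator, Python's standard secrets module is designed for exactly this purpose; the length and character set below are illustrative choices.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # unique on every run
```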
Enable Multi-Factor Authentication (MFA)
MFA adds an extra layer of security by requiring a second form of verification, such as a one-time code from an authenticator app or a hardware security key, in addition to the username and password.
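Most authenticator apps implement time-based one-time passwords (TOTP, RFC 6238). As a rough sketch of what the server side verifies, using only Python's standard library (the base32 secret below is a well-known example value, not a real one):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code (RFC 6238) for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted_code: str) -> bool:
    """Constant-time comparison of the submitted code against the current one."""
    return hmac.compare_digest(totp(secret_b32), submitted_code)

# Example with an illustrative secret (never hard-code real secrets):
print(totp("JBSWY3DPEHPK3PXP"))
```

Production implementations usually also accept codes from the adjacent time step to tolerate clock drift between the server and the user's device.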
Monitor for Phishing Attacks
Always verify the source of emails and be cautious with emails requesting sensitive information or containing links.
Update and Patch Systems
Regularly update software and apply security patches promptly to protect against vulnerabilities that AI-powered scans might exploit.
Use AI-Powered Security Tools
Implement comprehensive network and endpoint protection with AI-powered security solutions to detect unusual activity and unexpected network traffic.
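As a toy illustration of the kind of anomaly detection such tools perform at far greater scale, the sketch below uses scikit-learn's IsolationForest on invented per-host traffic features; the feature names and numbers are made up for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-host features: [bytes_out_MB, distinct_destinations, failed_logins]
baseline = np.array([
    [120, 14, 0], [95, 11, 1], [150, 18, 0], [110, 12, 0], [130, 16, 2],
    [105, 13, 0], [140, 15, 1], [98, 10, 0], [125, 14, 0], [115, 12, 1],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A host suddenly exfiltrating data and spraying logins stands out:
today = np.array([[118, 13, 0], [2400, 95, 40]])
print(model.predict(today))  # 1 = looks normal, -1 = flagged as anomalous
```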
Educate Yourself and Others
Conduct security awareness campaigns to teach employees how to recognize and defend against email threats, AI-powered attacks, and social engineering.
Monitor Account Activity
Regularly check your accounts and set up alerts for unusual activity to detect potential breaches early.
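A bare-bones version of such an alert, assuming a hypothetical login-event feed with country and device fields, might simply flag anything the account has never used before.

```python
def flag_unusual_logins(history, new_event):
    """Flag a login from a country or device this account has never used
    before (the history and event fields are illustrative)."""
    known_countries = {e["country"] for e in history}
    known_devices = {e["device_id"] for e in history}
    reasons = []
    if new_event["country"] not in known_countries:
        reasons.append(f"new country: {new_event['country']}")
    if new_event["device_id"] not in known_devices:
        reasons.append(f"new device: {new_event['device_id']}")
    return reasons  # an empty list means nothing unusual was detected

history = [{"country": "US", "device_id": "laptop-1"},
           {"country": "US", "device_id": "phone-1"}]
print(flag_unusual_logins(history, {"country": "RO", "device_id": "unknown-7"}))
```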
Use Secure Connections
Avoid accessing sensitive information over public networks and ensure communications are encrypted (HTTPS).
Adopt Zero-Trust Principles
Zero trust continuously verifies the identity and health of users and devices on every access request, so access can be denied even when stolen but otherwise valid credentials are presented.
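Conceptually, a zero-trust policy decision weighs more than the password. The toy sketch below, in which every signal and rule is invented for illustration, shows how valid credentials alone can still result in a denial.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    credentials_valid: bool
    mfa_passed: bool
    device_managed: bool       # enrolled in the org's device management
    device_patched: bool
    location_expected: bool    # matches the user's usual regions

def evaluate(req: AccessRequest) -> str:
    """Toy zero-trust decision: valid credentials alone are never enough."""
    if not (req.credentials_valid and req.mfa_passed):
        return "deny"
    if not (req.device_managed and req.device_patched):
        return "deny"           # a healthy, managed device is required
    if not req.location_expected:
        return "step-up"        # ask for additional verification
    return "allow"

# Stolen credentials replayed from an unmanaged device are still denied:
print(evaluate(AccessRequest("alice", True, True, False, True, True)))  # deny
```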
While no single article can cover all the ways AI is used in cyberattacks, understanding these tactics and implementing robust security measures can help protect your organization. For more detailed insights on password protection and cybersecurity in the age of AI, check out Emre’s blog on Password Protection.