How AI is Making Your Passwords Less Safe Than You Think
Neural networks trained on billions of leaked passwords have learned how humans think. Your 'creative' substitutions, lucky numbers, and favourite years are already in the model.
From Brute Force to Brain Force
Traditional password cracking is a numbers game: try every possible combination of characters until one matches. A GPU cluster can test on the order of 10 billion guesses per second, but even that raw speed runs out against a long password, because the number of combinations grows exponentially with length.
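The exponential blow-up is easy to see with a little arithmetic. This sketch assumes the 10-billion-guesses-per-second figure above and a 95-character printable-ASCII alphabet:

```python
GUESSES_PER_SECOND = 10_000_000_000  # 10 billion, per the GPU-cluster figure above
ALPHABET = 95                        # printable ASCII characters

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for length in (8, 12, 16):
    combinations = ALPHABET ** length
    years = combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    # Each extra character multiplies the work by 95.
    print(f"{length} chars: {combinations:.2e} combinations, ~{years:.1e} years to exhaust")
```

An 8-character space falls in days; a 16-character space takes longer than the age of the universe. That is the wall brute force hits, and the wall AI routes around.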
AI changes the game entirely. Instead of trying random combinations, machine learning models are trained on billions of real passwords from previous breaches (LinkedIn, Adobe, RockYou, and hundreds more). They don't guess randomly — they guess the way humans do.
What AI Has Learned About You
After analysing hundreds of millions of leaked passwords, neural networks have identified patterns that feel personal but are statistically predictable:
- Trailing numbers: Over 60% of passwords end with 1–4 digits. The most common are 1, 123, 1234, and the current year.
- Capitalised first letter: When people use an uppercase letter, it is almost always the first character.
- Leet substitutions: @ for a, 3 for e, 0 for o, 1 for i — attackers have entire dictionaries of these mappings.
- Seasonal words: Summer, Winter, Spring, Autumn are among the most common base words globally.
- Special character at the end: When required, people add ! or . as the last character.
If your password is Sunsh1ne2024!, you are following a template that hundreds of thousands of other people also arrived at independently. The model has seen it before.
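To make the habits above concrete, here is a minimal sketch of a checker for those exact templates. The function name, leet map, and season list are illustrative; real cracking dictionaries are far larger:

```python
import re

# A handful of the leet mappings mentioned above (@->a, 3->e, 0->o, 1->i, 5->l, $->s).
LEET_MAP = str.maketrans("@3015$", "aeoils")
SEASONS = {"summer", "winter", "spring", "autumn", "fall"}

def human_patterns(password: str) -> list[str]:
    """Return which of the predictable habits above a password exhibits."""
    hits = []
    core = password.rstrip("!.")          # strip a trailing special character first
    if core != password:
        hits.append("special character at the end")
    if re.search(r"\d{1,4}$", core):
        hits.append("trailing digits")
    if password[:1].isupper() and sum(c.isupper() for c in password) == 1:
        hits.append("capitalised first letter only")
    deleeted = password.lower().translate(LEET_MAP)
    if deleeted != password.lower():
        hits.append("leet substitutions")
    if any(season in deleeted for season in SEASONS):
        hits.append("seasonal base word")
    return hits

print(human_patterns("Sunsh1ne2024!"))
```

Run against Sunsh1ne2024!, it flags the trailing special character, trailing digits, single capitalised first letter, and leet substitutions: four predictable habits in one "creative" password.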
PassGAN and Neural Network Cracking
In 2017, researchers published PassGAN — a Generative Adversarial Network trained on the RockYou dataset. Unlike rule-based tools (such as Hashcat with mangling rules), PassGAN generates password candidates autonomously by learning the underlying statistical distribution of real passwords.
It does not know the rules of password composition. It simply generates strings that look like real passwords. Tested against held-out passwords from the same dataset, it matched passwords that traditional rule-based tools had missed entirely.
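A full GAN is beyond a blog post, but the core idea — learn the statistical distribution of real passwords, then sample from it — can be illustrated with a much simpler stand-in: a character-level Markov chain. This is not PassGAN, just a toy with the same flavour, trained on an invented five-password "corpus":

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for a leaked-password corpus (illustrative only).
CORPUS = ["password1", "summer2024", "sunshine!", "password123", "winter2023"]

def train(corpus):
    """Count, for each character, which characters follow it ('^' marks the start)."""
    model = defaultdict(Counter)
    for pw in corpus:
        prev = "^"
        for ch in pw + "$":               # '$' marks end-of-password
            model[prev][ch] += 1
            prev = ch
    return model

def sample(model, rng=random.Random(0)):
    """Generate one password-like string by walking the learned transitions."""
    out, prev = [], "^"
    while True:
        chars, counts = zip(*model[prev].items())
        ch = rng.choices(chars, weights=counts)[0]
        if ch == "$":
            return "".join(out)
        out.append(ch)
        prev = ch

model = train(CORPUS)
print([sample(model) for _ in range(3)])
```

The samples are never random strings; they are recombinations of the habits in the training data. A GAN does the same thing with far more capacity, which is why its candidates look uncannily like passwords people actually chose.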
How AI Reduces the Effective Search Space
Entropy measures how many combinations must be tried in the worst case. But AI does not search the full space — it prioritises the most likely candidates first. This effectively shrinks the search space from a theoretical number to a human-behavioural one.
A password like Summer2024! has roughly 70 bits of theoretical entropy (assuming the full printable-character alphabet). But an AI model, knowing that "Season + Year + !" is a top-tier human pattern, might rank it within the first 10 million guesses — equivalent to roughly 23 bits of real-world entropy.
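The two entropy figures follow directly from the definitions: theoretical entropy is log2 of the full search space, while effective entropy is log2 of the password's rank in the attacker's guess ordering. The rank of 10 million is the assumption from the paragraph above:

```python
import math

password = "Summer2024!"
alphabet = 95                      # printable ASCII: the attacker's worst case
theoretical_bits = len(password) * math.log2(alphabet)

rank = 10_000_000                  # assumed rank in an AI model's guess ordering
effective_bits = math.log2(rank)

print(f"theoretical: {theoretical_bits:.0f} bits, effective: {effective_bits:.0f} bits")
```

Eleven characters over a 95-symbol alphabet gives about 72 bits on paper; a rank of 10 million collapses that to about 23 bits in practice. The gap between the two numbers is exactly what the model has learned about human behaviour.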
What Still Works Against AI
AI models are trained on human choices. They struggle with genuinely random data because random data has no pattern to learn. Two strategies remain robust:
- Password managers: Generate 20+ character strings of pure randomness (xK9#mP2$qL7!nR4@vZ). No pattern. No AI model can prioritise this.
- Long random passphrases: Five or more truly random, unrelated words. Not a sentence, not lyrics: genuinely random word selection. The key word is random.
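Both strategies can be implemented in a few lines with a cryptographically secure random source. This is a sketch using Python's `secrets` module; the six-word list passed to `random_passphrase` is a stub, and a real passphrase needs a large wordlist (EFF's diceware list has 7,776 words):

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Pattern-free password over the full printable set, as a manager would generate."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(wordlist: list[str], words: int = 5) -> str:
    """Truly random word selection: not a sentence, not lyrics."""
    return " ".join(secrets.choice(wordlist) for _ in range(words))

print(random_password())
print(random_passphrase(["correct", "horse", "battery", "staple", "orbit", "velvet"]))
```

The important design choice is `secrets` rather than `random`: the former draws from the operating system's CSPRNG, so there is no seed or internal state for an attacker's model to exploit.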
The Bottom Line
The threat model has changed. An 8-character "complex" password that would have taken GPU clusters years to crack in 2018 can now be prioritised by an AI model and found in hours. Length, combined with genuine randomness, is the only reliable defence.