Why Password Complexity Rules Make You Less Secure
Mandatory complexity rules cause predictable patterns: Pa$$w0rd, Summer2024!, Password1. Here is the research and what to require instead.
The rule that teaches the wrong lesson
When you tell someone their password must contain an uppercase letter, a number, and a special character, you are not improving their security. You are teaching them a formula: take a word, capitalise the first letter, append a number, add a symbol at the end.
The result is that a large share of user passwords look like this: Password1!, Summer2024!, Welcome123!. Each one satisfies every standard complexity requirement. Each one appears in every serious cracking dictionary. Each one cracks in seconds.
The failure mode of complexity rules is not that users are careless — it is that the rules are legible. When you tell someone the exact requirements a password must meet, you tell attackers exactly what patterns to target.
What the research actually shows
The empirical case against mandatory complexity rules comes from multiple independent lines of research. The most influential:
The Komanduri et al. study (Carnegie Mellon, 2011) analysed 12,000 passwords collected under different policy regimes. Passwords created under "basic complexity" policies — the uppercase/lowercase/number/symbol requirement — had lower entropy than passwords created under "length-only" policies requiring 16+ characters. The complexity-required passwords clustered on predictable patterns. The long passwords were more varied.
The Microsoft Research study (2010) found that frequent password rotation — a natural companion to complexity requirements — caused users to make minimal, predictable changes. Spring2023! becomes Summer2023! becomes Fall2023!. The password is technically new. The cracking resistance is essentially unchanged.
NIST's 2017 reversal formalised what security researchers had been saying for years. SP 800-63B, the authoritative US government guidance on digital authentication, explicitly removed mandatory complexity requirements from its recommendations. The document is blunt: "composition rules... often make it harder for people to choose strong passwords." NIST recommends checking passwords against breached credential lists instead.
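NIST's recommended alternative, breach checking, is straightforward to implement against the Have I Been Pwned range API, which uses a k-anonymity scheme: you SHA-1 the password, send only the first five hex characters of the digest, and compare the returned suffixes locally, so the password never leaves your system. The sketch below checks against a response body passed in as a string rather than making a live HTTP request, so it is self-contained; the sample response is constructed locally for illustration, not real breach data.

```python
import hashlib

def hibp_check(password: str, range_response: str) -> int:
    """Check a password against an HIBP range-API response body.

    Real workflow: SHA-1 the password, request
    https://api.pwnedpasswords.com/range/{first 5 hex chars of digest},
    then compare the returned "SUFFIX:COUNT" lines locally (k-anonymity).
    Here range_response is the already-fetched body, so no network call.
    Returns the breach count, or 0 if the password was not found.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    suffix = digest[5:]
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# Build an illustrative response body containing the suffix for
# "Password1!" plus a filler entry (not real API data).
_digest = hashlib.sha1(b"Password1!").hexdigest().upper()
sample_response = f"{_digest[5:]}:2412\n0123456789ABCDEF0123456789ABCDEF012:1"
```

A new password that returns a non-zero count has already been cracked at least once and should be rejected outright, regardless of how "complex" it looks.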
The substitution problem
Complexity requirements create a predictable substitution pattern that attackers exploit directly. Modern cracking tools — hashcat, John the Ripper — ship with rule sets specifically designed to apply common complexity-satisfying transforms to dictionary words:
- Capitalise the first letter: password → Password
- Replace 'a' with '@': Password → P@ssword
- Replace 'o' with '0': P@ssword → P@ssw0rd
- Append a common suffix: P@ssw0rd → P@ssw0rd1!
These four rules, applied in sequence to a dictionary of 10,000 common words, generate most of what users produce under standard complexity policies. The entire search space that looks "complex" is actually a few hundred thousand candidates, not the billions that true entropy would imply.
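The transform pipeline above can be made concrete with a few lines of Python. Hashcat expresses these operations in its own rule syntax; the minimal equivalent below applies the same four transforms, keeping every intermediate form as rule engines do. The word list and suffix list are small illustrative stand-ins for real cracking dictionaries.

```python
from itertools import product

# Illustrative inputs; real dictionaries hold tens of thousands of words.
WORDS = ["password", "summer", "welcome", "dragon"]
SUFFIXES = ["1", "123", "1!", "2024!", "!"]

def transforms(word: str) -> set[str]:
    """Apply the four common complexity-satisfying transforms,
    keeping every intermediate and final form as a candidate."""
    forms = {word, word.capitalize()}          # capitalise first letter
    forms |= {f.replace("a", "@") for f in set(forms)}  # leet: a -> @
    forms |= {f.replace("o", "0") for f in set(forms)}  # leet: o -> 0
    # Append each suffix (and the empty suffix) to every form.
    return {f + s for f, s in product(forms, [""] + SUFFIXES)}

candidates: set[str] = set()
for w in WORDS:
    candidates |= transforms(w)
```

Each word yields at most a few dozen candidates, so even a 10,000-word dictionary expands to only a few hundred thousand guesses, exactly the collapsed search space described above.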
What actually improves security
The research consensus on effective controls:
- Length requirements: A minimum of 12–15 characters significantly increases the brute-force search space. Longer passwords are better even without complexity requirements, because each additional character multiplies the number of candidates an attacker must try.
- Breached credential checks: Checking new passwords against known-compromised lists (the HIBP API contains 900+ million entries) directly prevents the reuse of known-bad credentials. This catches more real-world attacks than complexity requirements do.
- No mandatory rotation: Forcing rotation leads to predictable patterns. Change passwords on evidence of compromise, not on a calendar schedule.
- Password managers: The root problem is that humans are bad at generating random-looking strings. Password managers solve this by generating and storing truly random credentials. Organisations that actively support password manager use see better credential hygiene than organisations that mandate complexity.
- MFA: A phished or breached password is useless to an attacker if a second factor is required. MFA reduces the practical impact of weak passwords more than any policy change to password composition.
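The length argument above can be put in numbers. The sketch below compares the best-case brute-force search space (log2 of charset size to the power of length) for an 8-character "complex" password, a 15-character lowercase-only password, and the realistic rule-based space for pattern passwords. Treating the space as charset^length is a deliberate best-case assumption: human-chosen passwords never reach that ceiling, which is the whole point.

```python
import math

def search_space_bits(charset_size: int, length: int) -> float:
    """Best-case brute-force entropy in bits: log2(charset^length).
    Human-chosen passwords fall well short of this ceiling."""
    return length * math.log2(charset_size)

# 8 characters drawn from ~95 printable ASCII chars ("complex" policy)
complex_8 = search_space_bits(95, 8)     # ~52.6 bits
# 15 characters of lowercase letters only (length-focused policy)
lower_15 = search_space_bits(26, 15)     # ~70.5 bits
# Rule-based reality for pattern passwords: ~10,000 words x ~50 transforms
pattern_space = math.log2(10_000 * 50)   # ~18.9 bits
```

The lowercase-only 15-character password beats the fully "complex" 8-character one by roughly 18 bits, and the pattern passwords that complexity rules actually produce sit far below either figure.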
Why organisations keep the old rules
If the research is this clear, why do most organisations still enforce complexity requirements?
Three reasons: compliance theatre, audit expectations, and institutional inertia.
Many compliance frameworks — particularly older versions and healthcare-specific regulations — were written when complexity requirements were considered best practice. Auditors trained on those frameworks look for complexity rules and mark their absence as a finding, even if the control is demonstrably ineffective. Organisations that want to remove complexity requirements often face resistance from QSAs (PCI Qualified Security Assessors) or internal compliance teams who interpret the absence of a checkbox as a deficiency.
The practical path forward: implement the effective controls (length, breach checking, MFA), and document explicitly why you've moved away from complexity requirements — citing NIST SP 800-63B as the authoritative reference. "We removed mandatory complexity in favour of a 15-character minimum and HIBP breach checking, consistent with NIST 800-63B Section 5.1.1.2" is a defensible audit position.
The takeaway
Complexity requirements are not harmless. They actively degrade security by producing predictable patterns while creating the appearance of rigour. The security community has known this for over a decade. The right controls are length, randomness (ideally via a password generator), breach checking, and MFA.
The best password policy is one that maximises the probability users choose truly unpredictable credentials — which means minimising the cognitive burden of following the policy and providing tools that do the hard work of randomness for them.