bcrypt vs Argon2: choosing password hashing settings
bcrypt vs Argon2 explained: compare security traits, real-world performance costs, and how to choose safe parameters for modern web backends.

What problem password hashing is solving
Password hashing lets a backend store a password without storing the password itself. When someone signs up, the server runs the password through a one-way function and saves the result (the hash). At login, it hashes the password the user typed and compares the result with what was stored.
A hash is not encryption. There is no key, so there is nothing to decrypt: the stored value cannot be turned back into the password. That one-way property is exactly why hashing is used for passwords.
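A minimal sketch of that flow in Python. SHA-256 appears here only to show the mechanics of hash-and-compare; as the next paragraph explains, a fast hash like this is not what you should ship for real passwords:

```python
import hashlib

def make_hash(password: str) -> str:
    # One-way: easy to compute, no way to reverse.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

stored = make_hash("correct horse battery staple")  # saved at signup

# At login: hash what the user typed and compare with what was stored.
assert make_hash("correct horse battery staple") == stored
assert make_hash("wrong guess") != stored
```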
So why not use a normal fast hash like SHA-256? Because fast is what attackers want. If a database is stolen, attackers do not guess passwords by logging in one attempt at a time. They guess offline using the stolen hash list, pushing guesses as fast as their hardware allows. With GPUs, fast hashes can be tested at enormous scale. Even with unique salts, a fast hash is still cheap to brute-force.
Here is the realistic failure mode: a small web app loses its user table in a breach. The attacker gets emails and password hashes. If those hashes were made with a fast function, common passwords and small variations fall quickly. Then the attacker tries the same password on other sites (credential stuffing), or uses it to access higher-privilege features inside your app.
A good password hash makes guessing expensive. The goal is not “unbreakable.” The goal is “too slow and costly to be worth it.”
A password hashing setup should be:
- One-way (verify, not reverse)
- Slow per guess
- Expensive for parallel hardware (especially GPUs)
- Fast enough that real logins still feel normal
- Adjustable so you can raise the cost over time
bcrypt and Argon2, in one minute
When you compare bcrypt vs Argon2, you are choosing how you want to slow down password guessing after a database leak.
bcrypt is the older, widely supported option. It is designed to be expensive on the CPU, and it has one main tuning knob: the cost factor. It is also “boring” in a good way: easy to find in libraries, easy to deploy, and predictable.
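As a sketch, here is what that single knob looks like with the bcrypt package on PyPI (assumed installed via pip install bcrypt); the cost of 12 is an example value, not a recommendation:

```python
import bcrypt

password = b"correct horse battery staple"

# Hash at signup. gensalt(rounds=12) sets the cost factor; the salt and
# cost are embedded in the resulting string, so nothing else needs storing.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verify at login. checkpw reads the salt and cost back out of the hash.
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)
```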
Argon2 is newer and was designed to be memory-hard. It can force each password guess to use a meaningful amount of RAM, not just CPU. That matters because attackers often win by running huge numbers of guesses in parallel on GPUs or specialized hardware. Memory is harder and more expensive to scale at that kind of parallelism.
Argon2 has three variants:
- Argon2i: emphasizes resistance to some side-channel attacks
- Argon2d: emphasizes GPU resistance, with more side-channel considerations
- Argon2id: a practical mix of both, and the common default for password hashing
If your stack supports Argon2id and you can tune memory safely, it is usually the best modern default. If you need maximum compatibility across older systems, bcrypt is still a solid choice when configured with a high enough cost factor.
Security properties that matter most
The core question is simple: if an attacker steals the password database, how expensive is it to guess passwords at scale?
With bcrypt, you control cost (work factor). Higher cost means each guess takes longer. That slows down attackers and also slows down your own login checks, so you tune it to a point that is painful for attackers but still acceptable for users.
With Argon2id, you can add memory-hardness on top of time cost. Each guess needs CPU time and RAM accessed in a specific pattern. GPUs can be extremely fast at compute-heavy work, but they lose a lot of their advantage when each parallel guess needs substantial memory.
Salts are non-negotiable. A unique, random salt per password:
- prevents precomputed tables from being reused across your database
- ensures identical passwords do not produce identical hashes across users
Salts do not make weak passwords strong. They mainly protect you after a database leak by forcing attackers to do real work per user.
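Both properties are easy to see in code. A quick sketch with the bcrypt package, letting the library generate the salt:

```python
import bcrypt

pw = b"hunter2"

# A fresh random salt per hash; never reuse one or hardcode a global salt.
h1 = bcrypt.hashpw(pw, bcrypt.gensalt())
h2 = bcrypt.hashpw(pw, bcrypt.gensalt())

# Same password, different salts, different stored hashes.
assert h1 != h2

# Verification still works because each salt travels inside its hash string.
assert bcrypt.checkpw(pw, h1) and bcrypt.checkpw(pw, h2)
```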
bcrypt strengths and limits you should know
bcrypt is still widely used, mostly because it is easy to deploy everywhere. It tends to be a good fit when you need broad interoperability, when your stack has limited crypto options, or when you want one simple tuning lever.
The biggest “gotcha” is the 72-byte password limit. bcrypt only uses the first 72 bytes of the password and ignores the rest. This can surprise people using long passphrases or password managers.
If you choose bcrypt, make password length behavior explicit. Either enforce a maximum length (in bytes, not characters) or handle long inputs in a consistent way across all services. The main thing is to avoid silent truncation that changes what the user thinks their password is.
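One way to make that explicit, sketched with the bcrypt package; hash_password is a hypothetical helper name:

```python
import bcrypt

MAX_BCRYPT_BYTES = 72  # bcrypt only uses the first 72 bytes of input

def hash_password(password: str) -> bytes:
    raw = password.encode("utf-8")
    # Check length in BYTES, not characters: multi-byte UTF-8 characters
    # hit the limit sooner than a character count suggests.
    if len(raw) > MAX_BCRYPT_BYTES:
        raise ValueError("password exceeds bcrypt's 72-byte limit")
    return bcrypt.hashpw(raw, bcrypt.gensalt(rounds=12))
```

Rejecting over-long input is one consistent policy; whatever you choose, apply it identically across every service that touches passwords.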
bcrypt is also less resistant to modern parallel cracking hardware than memory-hard options. Its defense is still valid, but it relies heavily on choosing a cost factor that keeps each guess expensive.
If you are building a new system or you have high-value accounts (paid plans, admin roles), migrating new hashes to Argon2id while continuing to accept existing bcrypt hashes until users log in is a common, low-risk path.
Argon2 strengths and tradeoffs
Argon2 was built for password hashing. Argon2id is the variant most teams pick because it balances GPU resistance with reasonable protection against side-channel concerns.
Argon2id gives you three parameters:
- Memory (m): how much RAM each hash uses while running
- Time/iterations (t): how many passes it makes over that memory
- Parallelism (p): how many lanes it uses (helps on multi-core CPUs)
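Sketched with the argon2-cffi package, where the constructor arguments map directly onto m, t, and p; the values below are placeholders to show the shape, not tuned recommendations:

```python
from argon2 import PasswordHasher

ph = PasswordHasher(
    memory_cost=64 * 1024,  # m: RAM per hash, in KiB (64 MiB here)
    time_cost=3,            # t: passes over that memory
    parallelism=2,          # p: lanes, useful on multi-core CPUs
)

stored = ph.hash("correct horse battery staple")

# The encoded string embeds the variant and all three parameters,
# e.g. $argon2id$v=19$m=65536,t=3,p=2$<salt>$<hash>
print(stored)
```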
Memory is the main advantage. If each guess requires a meaningful amount of RAM, attackers cannot run as many guesses in parallel without paying heavily for memory capacity and bandwidth.
The downside is operational: more memory per hash means fewer concurrent logins before your servers feel pressure. If you set memory too high, login bursts can cause queuing, timeouts, or even out-of-memory failures. You also need to think about abuse: many concurrent login attempts can become a resource problem if you do not cap work.
To keep Argon2id safe and usable, tune it like a performance feature:
- benchmark on production-like hardware
- limit concurrent hashing work (worker caps, queues; see the sketch after this list)
- rate-limit login attempts and lock out repeated failures
- keep settings consistent across services so one weak endpoint does not become the target
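For the worker cap, one possible shape, assuming an asyncio-based backend and the argon2-cffi package (the limit of 8 is illustrative):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()
hash_pool = ThreadPoolExecutor(max_workers=8)  # CPU/RAM budget for hashing
hash_slots = asyncio.BoundedSemaphore(8)       # cap in-flight verifications

async def verify_password(stored_hash: str, candidate: str) -> bool:
    # Excess login attempts queue at the semaphore instead of exhausting RAM.
    async with hash_slots:
        loop = asyncio.get_running_loop()
        try:
            await loop.run_in_executor(hash_pool, ph.verify, stored_hash, candidate)
            return True
        except VerifyMismatchError:
            return False
```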
Performance costs in real web backends
With password hashing, “faster is better” is usually the wrong goal. You want each guess to be expensive for attackers while logins still feel snappy for real users.
A practical way to set this is a time budget per verification on your actual production hardware. Many teams aim for something like 100 to 300 ms per hash check, but the right number depends on your traffic and servers. The difference between bcrypt and Argon2 is what you are spending: bcrypt is mostly CPU time, while Argon2 can also reserve memory.
Pick a target time, then measure
Choose a target hash time and test it in conditions that resemble production. Measure both signup/password-change hashing and login verification, but treat login as the hot path.
A lightweight measurement plan:
- test 1, 10, and 50 concurrent login checks and record p50 and p95 latency
- repeat runs to reduce noise from caching and CPU boosting
- measure the database call separately so you know what hashing really costs
- test with the same container and CPU limits you deploy
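A rough single-machine timing sketch for the bcrypt case (concurrency still needs a real load tool, but this finds your starting cost factor):

```python
import time
import bcrypt

for rounds in (10, 11, 12, 13):
    hashed = bcrypt.hashpw(b"benchmark-password", bcrypt.gensalt(rounds=rounds))
    n = 5
    start = time.perf_counter()
    for _ in range(n):  # a few runs to smooth out CPU-boost noise
        bcrypt.checkpw(b"benchmark-password", hashed)
    avg_ms = (time.perf_counter() - start) / n * 1000
    print(f"cost={rounds}: ~{avg_ms:.0f} ms per verify")
```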
Spikes matter more than averages
Most systems fail during peaks. If a marketing email sends a wave of users to the login page, your hashing settings decide whether the system stays responsive.
If one verification takes 250 ms and your server can handle 40 in parallel before queuing, a burst of 500 login attempts can turn into multi-second waits. In that situation, a small reduction in cost plus strong rate limits can improve real security more than pushing parameters to the point where the login endpoint becomes fragile.
Keep interactive login predictable
Not every password operation needs the same urgency. Keep the interactive login cost stable, then do heavy work off the critical path. A common pattern is rehash-on-login (upgrade a user’s hash right after a successful login) or background jobs for migrations and imports.
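A sketch of rehash-on-login with argon2-cffi, whose check_needs_rehash compares the parameters embedded in a stored hash against the current settings; save_hash is a hypothetical persistence hook:

```python
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher(memory_cost=64 * 1024, time_cost=3, parallelism=2)

def login(user, candidate: str) -> bool:
    try:
        ph.verify(user.password_hash, candidate)
    except VerifyMismatchError:
        return False
    # A successful login is the one moment the plaintext is in hand,
    # so outdated hashes get upgraded here.
    if ph.check_needs_rehash(user.password_hash):
        user.password_hash = ph.hash(candidate)
        save_hash(user)  # hypothetical: persist the upgraded hash
    return True
```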
How to choose parameters step by step
Parameter tuning is about raising attacker cost per guess without making sign-ins slow or destabilizing your servers.
1. Pick an algorithm your stack supports well. If Argon2id is available and well supported, it is usually the default choice. If you need broad compatibility, bcrypt is still fine.
2. Set a target time per hash on production-like hardware. Pick something that keeps logins smooth during peak load.
3. Tune to hit that time. With bcrypt, adjust the cost factor. With Argon2id, balance memory, iterations, and parallelism. Memory is the lever that changes the attacker economics most.
4. Store algorithm and settings with the hash. Most standard hash formats embed these details (see the example after this list). Also make sure your database field is long enough so hashes are never truncated.
5. Plan upgrades with rehash-on-login. When a user logs in, if their stored hash uses weaker settings than your current policy, rehash and replace it.
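For reference, the stored strings look roughly like this (layouts abbreviated; exact details vary by library):

```
$argon2id$v=19$m=65536,t=3,p=2$<salt, base64>$<hash, base64>
$2b$12$<22-character salt><31-character hash>
```

Both formats carry the algorithm and its parameters in the prefix, which is what lets you verify old hashes while raising the cost for new ones.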
A practical starting point
If you need a baseline before measuring, start conservatively and adjust based on timing.
- For bcrypt, many teams start around cost 12 and move based on real measurements.
- For Argon2id, a common baseline is memory in the tens to a few hundred MB, time cost 2 to 4, and parallelism 1 to 2.
Treat these as starting points, not rules. The right settings are the ones that fit your traffic, hardware, and peak login bursts.
Common mistakes that weaken password storage
Most password storage failures come from setup gaps, not from a broken algorithm.
Salt mistakes are a big one. Each password needs its own unique salt stored with the hash. Reusing salts, or using one global salt for every user, makes it easier for attackers to reuse work and compare accounts.
Cost neglect is another. Teams often ship with a low cost because login feels faster, then never review it. Hardware improves, attackers scale up, and your once-okay settings become cheap.
Argon2 over-tuning is common too. Setting memory extremely high can look good on paper, then cause slow logins, request backlogs, or out-of-memory errors during real spikes.
Password length handling matters, especially with bcrypt’s 72-byte behavior. If you allow long passwords but silently truncate them, you create confusing behavior and reduce security.
A few practical habits prevent most of this:
- use unique per-password salts (let the library generate them)
- load test and revisit settings on a schedule
- tune Argon2 memory for peak traffic, not just single-login benchmarks
- make password length limits explicit and consistent
- put concurrency limits and monitoring around the login endpoint
Quick checklist for a safer setup
Keep this short list nearby when you ship and when you change infrastructure:
- Unique salt per password, generated randomly and stored with the hash
- Hashing cost that survives peak traffic, verified with load tests on production-like hardware
- Parameters stored with the hash, so you can verify old accounts and still raise cost later
- Online attack controls, including rate limits and short lockouts for repeated failures
- An upgrade path, usually rehash-on-login
A simple sanity check: run a staging test that includes a burst of logins (successful and failed) and watch end-to-end latency plus CPU and RAM usage. If the login path struggles, tune cost and tighten rate limits. Do not “fix” it by cutting essentials like salts.
A realistic example: tuning for a small web app
Picture a small SaaS app with a few thousand users. Most of the day is steady, but you see short login bursts after a newsletter or at the start of the workday. This is where the choice becomes capacity planning.
You choose Argon2id to raise the cost of offline cracking. Pick a target verification time on your real server hardware (for example, 100 to 250 ms), then tune parameters to hit it while watching RAM, because memory settings can limit how many logins you can handle at once.
A practical tuning loop looks like this:
- start with modest iterations and parallelism
- increase memory until concurrency becomes uncomfortable
- adjust iterations to fine-tune time cost
- retest with simulated bursts, not just single requests (see the sketch below)
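To make the burst step concrete, a sketch that fires 50 concurrent verifications and reports tail latency (pool size and parameters are illustrative):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

from argon2 import PasswordHasher

ph = PasswordHasher(memory_cost=64 * 1024, time_cost=3, parallelism=2)
stored = ph.hash("benchmark-password")

def one_check(_):
    start = time.perf_counter()
    ph.verify(stored, "benchmark-password")
    return (time.perf_counter() - start) * 1000  # ms

with ThreadPoolExecutor(max_workers=50) as pool:
    times = sorted(pool.map(one_check, range(50)))

print(f"p50={statistics.median(times):.0f} ms  "
      f"p95={times[int(len(times) * 0.95)]:.0f} ms")
```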
If you already have older hashes with weaker settings, keep verifying them but upgrade quietly. On successful login, rehash with your current settings and store the new value. Over time, active users move to stronger hashes without forced resets.
After release, monitor login like any other critical endpoint: tail latency (p95/p99), CPU and RAM during bursts, failed-login spikes, and how quickly old hashes are being replaced.
Next steps: ship safely and keep improving
Write your policy down and treat it as a living setting. For example: “Argon2id with X memory, Y iterations, Z parallelism” or “bcrypt cost factor N,” plus the date you chose it and when you will review it (every 6 to 12 months is a good starting cadence).
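One lightweight way to keep that policy honest is to record it next to the code; the field names here are hypothetical:

```python
HASHING_POLICY = {
    "algorithm": "argon2id",
    "memory_cost_kib": 64 * 1024,
    "time_cost": 3,
    "parallelism": 2,
    "adopted": "2024-05-01",
    "review_by": "2025-05-01",  # revisit every 6 to 12 months
}
```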
Keep an upgrade path so you are not stuck with old hashes. Rehash-on-login is simple and works well in most systems.
A strong hash helps, but it does not replace online abuse controls. Rate limits, lockouts, and careful password reset flows matter just as much for real-world security.
If you build your backend with a no-code platform like AppMaster, it is worth checking that your authentication module uses strong password hashing by default and that your hashing cost is tuned on the same kind of infrastructure you will deploy on. That small bit of upfront testing is often the difference between “secure and smooth” and “secure but unusable under load.”
FAQ
What is password hashing and why do backends use it?
Password hashing lets you verify a login without storing the actual password. You store a one-way hash, then hash the user’s input and compare the results; if your database leaks, attackers still have to guess passwords instead of reading them.
Why not encrypt passwords instead of hashing them?
Encryption is reversible with a key, so if that key is stolen or mismanaged, passwords can be recovered. Hashing is one-way by design, so even you can’t “decrypt” the stored value back into the original password.
Why is a fast hash like SHA-256 a bad choice for passwords?
Fast hashes are great for attackers because they can try guesses offline at very high speed, especially with GPUs. Password hashes should be intentionally slow (and ideally memory-hard) so large-scale guessing becomes expensive.
What is a salt and what does it protect against?
A salt is a unique, random value stored alongside each password hash. It prevents identical passwords from producing identical hashes and stops attackers from reusing precomputed tables, but it does not make weak passwords strong by itself.
Should I pick bcrypt or Argon2id?
Pick Argon2id if your stack supports it well and you can tune memory safely, because it’s designed to resist parallel cracking. Choose bcrypt when you need maximum compatibility and a simpler tuning model, and then set a sufficiently high cost factor.
What is bcrypt’s 72-byte limit?
bcrypt has a 72-byte limit: it only uses the first 72 bytes of a password and ignores the rest. To avoid surprises, enforce a clear max length in bytes or handle long inputs consistently so users don’t get silent truncation.
Why does Argon2’s memory setting matter so much?
Memory is the main lever because it limits how many guesses attackers can run in parallel without paying heavily for RAM and bandwidth. Too much memory can also hurt you by reducing how many logins your servers can process at once, so you tune for peak traffic, not just single tests.
How slow should password verification be?
Aim for a predictable verification time on your real deployment hardware, often around 100–300 ms per check, then load-test concurrency. The right setting is the one that stays responsive during login bursts while still making offline guessing costly.
How do I raise hashing cost for existing users?
Store the algorithm and its parameters with the hash so you can verify old users and raise cost later. A common approach is rehash-on-login: after a successful login, if the stored hash is weaker than your current policy, recompute it with the new settings and save it.
What are the most common password storage mistakes?
Common failures include missing or reused salts, shipping with a low cost and never revisiting it, and over-tuning Argon2 memory until logins time out during spikes. Also watch for password length handling issues (especially with bcrypt) and protect the login endpoint with rate limits and short lockouts.


