Jan 4, 2026 · 6 min read

bcrypt vs Argon2: choosing a password hashing setup

bcrypt vs Argon2 explained: a comparison of security properties, real-world performance costs, and how to choose safe parameters for a modern web backend.

What problem password hashing solves

Password hashing lets a backend store a password without storing the password itself. When someone signs up, the server runs the password through a one-way function and saves the result (the hash). At login, it hashes the password the user typed and compares the result with what was stored.

A hash is not encryption. There is no way to decrypt it. That one-way property is exactly why hashing is used for passwords.
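The store-and-verify flow can be sketched in a few lines of Python. This sketch uses `hashlib.scrypt` from the standard library as a stand-in slow, memory-hard KDF, since bcrypt and Argon2 live in third-party packages; the cost parameters here are illustrative, not recommendations.

```python
import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt; returns (salt, digest)."""
    salt = secrets.token_bytes(16)
    # scrypt is a memory-hard KDF in the stdlib; a real deployment would
    # typically call a bcrypt or Argon2 library here instead.
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the attempt with the stored salt; compare in constant time."""
    attempt = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                             n=2**14, r=8, p=1)
    return secrets.compare_digest(attempt, stored)
```

Note that only the salt and digest are stored; the plaintext password never touches the database.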

So why not use a normal fast hash like SHA-256? Because fast is what attackers want. If a database is stolen, attackers do not guess passwords by logging in one attempt at a time. They guess offline using the stolen hash list, pushing guesses as fast as their hardware allows. With GPUs, fast hashes can be tested at enormous scale. Even with unique salts, a fast hash is still cheap to brute-force.

Here is the realistic failure mode: a small web app loses its user table in a breach. The attacker gets emails and password hashes. If those hashes were made with a fast function, common passwords and small variations fall quickly. Then the attacker tries the same password on other sites (credential stuffing), or uses it to access higher-privilege features inside your app.

A good password hash makes guessing expensive. The goal is not “unbreakable.” The goal is “too slow and costly to be worth it.”

A password hashing setup should be:

  • One-way (verify, not reverse)
  • Slow per guess
  • Expensive for parallel hardware (especially GPUs)
  • Fast enough that real logins still feel normal
  • Adjustable so you can raise the cost over time

bcrypt and Argon2, in one minute

When you compare bcrypt vs Argon2, you are choosing how you want to slow down password guessing after a database leak.

bcrypt is the older, widely supported option. It is designed to be expensive on the CPU, and it has one main tuning knob: the cost factor. It is also “boring” in a good way: easy to find in libraries, easy to deploy, and predictable.

Argon2 is newer and was designed to be memory-hard. It can force each password guess to use a meaningful amount of RAM, not just CPU. That matters because attackers often win by running huge numbers of guesses in parallel on GPUs or specialized hardware. Memory is harder and more expensive to scale at that kind of parallelism.

Argon2 has three variants:

  • Argon2i: emphasizes resistance to some side-channel attacks
  • Argon2d: emphasizes GPU resistance, with more side-channel considerations
  • Argon2id: a practical mix of both, and the common default for password hashing

If your stack supports Argon2id and you can tune memory safely, it is usually the best modern default. If you need maximum compatibility across older systems, bcrypt is still a solid choice when configured with a high enough cost factor.

Security properties that matter most

The core question is simple: if an attacker steals the password database, how expensive is it to guess passwords at scale?

With bcrypt, you control cost (work factor). Higher cost means each guess takes longer. That slows down attackers and also slows down your own login checks, so you tune it to a point that is painful for attackers but still acceptable for users.

With Argon2id, you can add memory-hardness on top of time cost. Each guess needs CPU time and RAM accessed in a specific pattern. GPUs can be extremely fast at compute-heavy work, but they lose a lot of their advantage when each parallel guess needs substantial memory.

Salts are non-negotiable. A unique, random salt per password:

  • prevents precomputed tables from being reused across your database
  • ensures identical passwords do not produce identical hashes across users

Salts do not make weak passwords strong. They mainly protect you after a database leak by forcing attackers to do real work per user.
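A tiny demonstration of why unique salts matter: two users who pick the same password end up with unrelated stored hashes. Stdlib PBKDF2 stands in for bcrypt/Argon2 here, and the iteration count is illustrative.

```python
import hashlib
import secrets

password = b"hunter2"  # two users happen to pick the same password

# Each account gets its own random salt, stored next to the hash...
salt_a, salt_b = secrets.token_bytes(16), secrets.token_bytes(16)
hash_a = hashlib.pbkdf2_hmac("sha256", password, salt_a, 100_000)
hash_b = hashlib.pbkdf2_hmac("sha256", password, salt_b, 100_000)

# ...so the stored values differ and cannot be cross-matched or
# attacked with a single precomputed table.
assert hash_a != hash_b
```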

bcrypt strengths and limits you should know

bcrypt is still widely used, mostly because it is easy to deploy everywhere. It tends to be a good fit when you need broad interoperability, when your stack has limited crypto options, or when you want one simple tuning lever.

The biggest “gotcha” is the 72-byte password limit. bcrypt only uses the first 72 bytes of the password and ignores the rest. This can surprise people using long passphrases or password managers.

If you choose bcrypt, make password length behavior explicit. Either enforce a maximum length (in bytes, not characters) or handle long inputs in a consistent way across all services. The main thing is to avoid silent truncation that changes what the user thinks their password is.
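One way to make that behavior explicit is to validate length in bytes at signup and reject over-long inputs instead of letting bcrypt truncate them. A sketch; the limit constant matches bcrypt's documented 72-byte window.

```python
BCRYPT_MAX_BYTES = 72

def check_password_length(password: str) -> None:
    """Reject passwords bcrypt would silently truncate."""
    # Count bytes, not characters: multi-byte UTF-8 characters (accents,
    # emoji) consume more of bcrypt's 72-byte window than their character
    # count suggests.
    n_bytes = len(password.encode("utf-8"))
    if n_bytes > BCRYPT_MAX_BYTES:
        raise ValueError(
            f"password is {n_bytes} bytes; bcrypt only uses the first "
            f"{BCRYPT_MAX_BYTES}"
        )
```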

bcrypt is also less resistant to modern parallel cracking hardware than memory-hard options. Its defense is still valid, but it relies heavily on choosing a cost factor that keeps each guess expensive.

If you are building a new system or you have high-value accounts (paid plans, admin roles), migrating new hashes to Argon2id while continuing to accept existing bcrypt hashes until users log in is a common, low-risk path.

Argon2 strengths and tradeoffs

Argon2 was built for password hashing. Argon2id is the variant most teams pick because it balances GPU resistance with reasonable protection against side-channel concerns.

Argon2id gives you three parameters:

  • Memory (m): how much RAM each hash uses while running
  • Time/iterations (t): how many passes it makes over that memory
  • Parallelism (p): how many lanes it uses (helps on multi-core CPUs)

Memory is the main advantage. If each guess requires a meaningful amount of RAM, attackers cannot run as many guesses in parallel without paying heavily for memory capacity and bandwidth.

The downside is operational: more memory per hash means fewer concurrent logins before your servers feel pressure. If you set memory too high, login bursts can cause queuing, timeouts, or even out-of-memory failures. You also need to think about abuse: many concurrent login attempts can become a resource problem if you do not cap work.
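A quick back-of-envelope calculation shows how Argon2id's memory setting becomes a concurrency ceiling. All numbers below are illustrative assumptions about one server, not recommendations.

```python
# Hypothetical server and Argon2id settings.
memory_per_hash_mb = 64    # Argon2id memory (m) per verification
ram_budget_mb = 2048       # RAM reserved for password hashing
hash_time_ms = 200         # measured time per verification

# How many verifications can run at once before RAM runs out,
# and what login rate that sustains.
max_concurrent = ram_budget_mb // memory_per_hash_mb          # 32
logins_per_second = max_concurrent * (1000 / hash_time_ms)    # 160.0

print(f"max concurrent verifications: {max_concurrent}")
print(f"sustainable logins/second: {logins_per_second:.0f}")
```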

To keep Argon2id safe and usable, tune it like a performance feature:

  • benchmark on production-like hardware
  • limit concurrent hashing work (worker caps, queues)
  • rate-limit login attempts and lock out repeated failures
  • keep settings consistent across services so one weak endpoint does not become the target
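The "limit concurrent hashing work" item can start as simply as a semaphore around the verify call, shedding load once every slot is busy. A minimal sketch; the cap and timeout values are assumptions.

```python
import threading

MAX_CONCURRENT_HASHES = 8  # tune to your RAM/CPU budget
_hash_slots = threading.BoundedSemaphore(MAX_CONCURRENT_HASHES)

def verify_with_cap(verify_fn, *args, timeout: float = 2.0):
    """Run a hash verification only if a worker slot frees up in time."""
    if not _hash_slots.acquire(timeout=timeout):
        # Better to fail fast (e.g. HTTP 503) than queue forever and
        # let a login burst take the whole service down.
        raise TimeoutError("hashing capacity exhausted")
    try:
        return verify_fn(*args)
    finally:
        _hash_slots.release()
```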

Performance costs in real web backends

With password hashing, “faster is better” is usually the wrong goal. You want each guess to be expensive for attackers while logins still feel snappy for real users.

A practical way to set this is a time budget per verification on your actual production hardware. Many teams aim for something like 100 to 300 ms per hash check, but the right number depends on your traffic and servers. The difference between bcrypt and Argon2 is what you are spending: bcrypt is mostly CPU time, while Argon2 can also reserve memory.

Pick a target time, then measure

Choose a target hash time and test it in conditions that resemble production. Measure both signup/password-change hashing and login verification, but treat login as the hot path.

A lightweight measurement plan:

  • test 1, 10, and 50 concurrent login checks and record p50 and p95 latency
  • repeat runs to reduce noise from caching and CPU boosting
  • measure the database call separately so you know what hashing really costs
  • test with the same container and CPU limits you deploy
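The measurement plan above can be wired up with a small harness. This one uses stdlib PBKDF2 as a stand-in for the real hash function and a rough p95; swap in your actual verify call and run it on production-like hardware.

```python
import concurrent.futures
import hashlib
import statistics
import time

def timed_verify(iterations: int = 50_000) -> float:
    """One simulated hash check; returns elapsed milliseconds."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"password", b"salt0000", iterations)
    return (time.perf_counter() - start) * 1000

def bench(concurrency: int, samples: int = 20) -> tuple[float, float]:
    """Run checks at a given concurrency; return (p50, approx p95) in ms."""
    with concurrent.futures.ThreadPoolExecutor(concurrency) as pool:
        futures = [pool.submit(timed_verify) for _ in range(samples)]
        latencies = sorted(f.result() for f in futures)
    p50 = statistics.median(latencies)
    p95 = latencies[int(len(latencies) * 0.95) - 1]  # rough percentile
    return p50, p95

for c in (1, 10):
    p50, p95 = bench(c)
    print(f"concurrency={c:2d}  p50={p50:6.1f}ms  p95={p95:6.1f}ms")
```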

Spikes matter more than averages

Most systems fail during peaks. If a marketing email sends a wave of users to the login page, your hashing settings decide whether the system stays responsive.

If one verification takes 250 ms and your server can handle 40 in parallel before queuing, a burst of 500 login attempts can turn into multi-second waits. In that situation, a small reduction in cost plus strong rate limits can improve real security more than pushing parameters to the point where the login endpoint becomes fragile.
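The arithmetic behind that warning is simple ceiling division: the burst arrives in waves the size of your parallel capacity, and the last wave inherits everyone else's wait.

```python
# Numbers from the example in the text.
hash_time_s = 0.250     # one verification
parallel_slots = 40     # concurrent verifications the server can run
burst_size = 500        # login attempts arriving at once

waves = -(-burst_size // parallel_slots)   # ceiling division -> 13
worst_case_wait_s = waves * hash_time_s    # 13 * 0.25 = 3.25 s

print(f"last login in the burst waits ~{worst_case_wait_s:.2f}s")
```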

Keep interactive login predictable

Not every password operation needs the same urgency. Keep the interactive login cost stable, then do heavy work off the critical path. A common pattern is rehash-on-login (upgrade a user’s hash right after a successful login) or background jobs for migrations and imports.

How to choose parameters step by step

Parameter tuning is about raising attacker cost per guess without making sign-ins slow or destabilizing your servers.

  1. Pick an algorithm your stack supports well. If Argon2id is available and well supported, it is usually the default choice. If you need broad compatibility, bcrypt is still fine.

  2. Set a target time per hash on production-like hardware. Pick something that keeps logins smooth during peak load.

  3. Tune to hit that time. With bcrypt, adjust the cost factor. With Argon2id, balance memory, iterations, and parallelism. Memory is the lever that changes the attacker economics most.

  4. Store algorithm and settings with the hash. Most standard hash formats embed these details. Also make sure your database field is long enough so hashes are never truncated.

  5. Plan upgrades with rehash-on-login. When a user logs in, if their stored hash uses weaker settings than your current policy, rehash and replace it.
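Steps 4 and 5 combine naturally: because the parameters are stored with the hash, a login handler can detect an outdated record and upgrade it in place. A sketch with stdlib PBKDF2 standing in for bcrypt/Argon2 (real hash formats such as PHC strings embed the parameters for you); the iteration counts are illustrative.

```python
import hashlib
import secrets

CURRENT_ITERATIONS = 200_000  # today's policy; raise it over time

def hash_pw(password: str, iterations: int = CURRENT_ITERATIONS) -> dict:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Store the parameters with the hash so old records stay verifiable.
    return {"iterations": iterations, "salt": salt, "digest": digest}

def login(password: str, record: dict) -> bool:
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                  record["salt"], record["iterations"])
    if not secrets.compare_digest(attempt, record["digest"]):
        return False
    # Rehash-on-login: upgrade weak records right after a successful check,
    # while the plaintext password is briefly available.
    if record["iterations"] < CURRENT_ITERATIONS:
        record.update(hash_pw(password))
    return True
```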

A practical starting point

If you need a baseline before measuring, start conservatively and adjust based on timing.

  • For bcrypt, many teams start around cost 12 and move based on real measurements.
  • For Argon2id, a common baseline is memory in the tens to a few hundred MB, time cost 2 to 4, and parallelism 1 to 2.

Treat these as starting points, not rules. The right settings are the ones that fit your traffic, hardware, and peak login bursts.

Common mistakes that weaken password storage

Most password storage failures come from setup gaps, not from a broken algorithm.

Salt mistakes are a big one. Each password needs its own unique salt stored with the hash. Reusing salts, or using one global salt for every user, makes it easier for attackers to reuse work and compare accounts.

Cost neglect is another. Teams often ship with a low cost because login feels faster, then never review it. Hardware improves, attackers scale up, and your once-okay settings become cheap.

Argon2 over-tuning is common too. Setting memory extremely high can look good on paper, then cause slow logins, request backlogs, or out-of-memory errors during real spikes.

Password length handling matters, especially with bcrypt’s 72-byte behavior. If you allow long passwords but silently truncate them, you create confusing behavior and reduce security.

A few practical habits prevent most of this:

  • use unique per-password salts (let the library generate them)
  • load test and revisit settings on a schedule
  • tune Argon2 memory for peak traffic, not just single-login benchmarks
  • make password length limits explicit and consistent
  • put concurrency limits and monitoring around the login endpoint

Quick checklist for a safer setup

Keep this short list nearby when you ship and when you change infrastructure:

  • Unique salt per password, generated randomly and stored with the hash
  • Hashing cost that survives peak traffic, verified with load tests on production-like hardware
  • Parameters stored with the hash, so you can verify old accounts and still raise cost later
  • Online attack controls, including rate limits and short lockouts for repeated failures
  • An upgrade path, usually rehash-on-login

A simple sanity check: run a staging test that includes a burst of logins (successful and failed) and watch end-to-end latency plus CPU and RAM usage. If the login path struggles, tune cost and tighten rate limits. Do not “fix” it by cutting essentials like salts.

A realistic example: tuning for a small web app

Picture a small SaaS app with a few thousand users. Most of the day is steady, but you see short login bursts after a newsletter or at the start of the workday. This is where the choice becomes capacity planning.

You choose Argon2id to raise the cost of offline cracking. Pick a target verification time on your real server hardware (for example, 100 to 250 ms), then tune parameters to hit it while watching RAM, because memory settings can limit how many logins you can handle at once.

A practical tuning loop looks like this:

  • start with modest iterations and parallelism
  • increase memory until concurrency becomes uncomfortable
  • adjust iterations to fine-tune time cost
  • retest with simulated bursts, not just single requests

If you already have older hashes with weaker settings, keep verifying them but upgrade quietly. On successful login, rehash with your current settings and store the new value. Over time, active users move to stronger hashes without forced resets.

After release, monitor login like any other critical endpoint: tail latency (p95/p99), CPU and RAM during bursts, failed-login spikes, and how quickly old hashes are being replaced.

Next steps: ship safely and keep improving

Write your policy down and treat it as a living setting. For example: “Argon2id with X memory, Y iterations, Z parallelism” or “bcrypt cost factor N,” plus the date you chose it and when you will review it (every 6 to 12 months is a good starting cadence).

Keep an upgrade path so you are not stuck with old hashes. Rehash-on-login is simple and works well in most systems.

A strong hash helps, but it does not replace online abuse controls. Rate limits, lockouts, and careful password reset flows matter just as much for real-world security.

If you build your backend with a no-code platform like AppMaster, it is worth checking that your authentication module uses strong password hashing by default and that your hashing cost is tuned on the same kind of infrastructure you will deploy on. That small bit of upfront testing is often the difference between “secure and smooth” and “secure but unusable under load.”

Frequently asked questions

What problem does password hashing solve?

Password hashing lets you verify logins without storing the actual password. You store a one-way hash, then hash what the user types and compare; if the database leaks, attackers still have to guess passwords instead of reading them outright.

Why not just encrypt passwords instead of hashing them?

Encryption is reversible if you have the key; if that key leaks or is poorly managed, the passwords can be recovered. Hashing is one-way by design, so even you cannot "decrypt" a hash back to the original password.

Why is SHA-256 a poor fit for storing passwords?

Fast hash functions are an attacker's dream, because they allow offline guessing at very high speed, especially on GPUs. A password hash needs to be deliberately slow (and ideally memory-hungry) so that bulk guessing becomes expensive.

What is a salt, and does it really make passwords safer?

A salt is a random value, unique per password and stored alongside the hash. It keeps identical passwords from producing identical hashes and blocks reuse of precomputed tables; but a salt on its own does not make a weak password strong.

When should I choose Argon2id over bcrypt?

Choose Argon2id if your stack supports it well and you can tune memory safely, since it is designed to resist parallel attacks. Choose bcrypt when you need broad compatibility and a simpler tuning model, then set the cost factor high enough.

What is bcrypt's big drawback with long passwords?

The big issue with bcrypt is the 72-byte limit: bcrypt uses only the first 72 bytes of the password and ignores the rest. To avoid surprises, make the maximum length explicit (measured in bytes) or handle long inputs consistently.

Which Argon2id parameter matters most, and why?

The most influential parameter is memory, because it limits how many guesses an attacker can run in parallel without paying extra for RAM and bandwidth. But very high memory can also reduce how many concurrent logins you can handle, so tune for peak traffic, not just single-request benchmarks.

How slow should a password hash be in a web backend?

Aim for a predictable verification time on your real deployment hardware, typically around 100 to 300 ms per check, then load test under production-like conditions. The right configuration is the one that stays responsive during traffic spikes while still making offline attacks expensive.

How do I upgrade hashing parameters without forcing everyone to change passwords?

Store the algorithm and parameters with the hash, so you can still verify old accounts and raise the cost later. A common approach is rehash-on-login: after a successful login, if the stored hash is weaker than the current policy, recompute it with the new parameters and save it.

What common mistakes weaken password storage?

Common mistakes include missing or reused salts, launching with a low cost and never revisiting it as hardware improves, and over-tuning Argon2 so that logins stall during peaks. Also watch password length handling (especially with bcrypt) and protect the login endpoint with rate limits and temporary lockouts.
