PostgreSQL advisory locks for concurrency-safe workflows
Learn PostgreSQL advisory locks to stop double-processing in approvals, billing, and schedulers with practical patterns, SQL snippets, and simple checks.

The real problem: two processes do the same work
Double-processing is when the same item gets handled twice because two different actors both think they are responsible. In real apps, it shows up as a customer getting charged twice, an approval being applied twice, or an “invoice ready” email going out two times. Everything can look fine in testing, then break under real traffic.
It usually happens when timing gets tight and more than one thing can act:
- Two workers pick up the same job at the same time.
- A retry fires because a network call was slow, but the first attempt is still running.
- A user double-clicks Approve because the UI froze for a second.
- Two schedulers overlap after a deploy or a clock drift.
- Even one tap can become two requests if a mobile app resends after a timeout.
The painful part is that each actor is behaving “reasonably” on its own. The bug is the gap between them: neither knows the other is already processing the same record.
The goal is simple: for any given item (an order, an approval request, an invoice), only one actor should be allowed to do the critical work at a time. Everyone else should either wait briefly or back off and try again.
PostgreSQL advisory locks can help. They give you a lightweight way to say “I’m working on item X” using the database you already trust for consistency.
Set expectations, though. A lock is not a full queue system. It won’t schedule jobs for you, guarantee ordering, or store messages. It’s a safety gate around the part of the workflow that must never run twice.
What PostgreSQL advisory locks are (and are not)
PostgreSQL advisory locks are a way to make sure only one worker does a piece of work at a time. You choose a lock key (like “invoice 123”), ask the database to lock it, do the work, then release it.
The word “advisory” matters. Postgres doesn’t know what your key means, and it won’t protect anything automatically. It only tracks one fact: this key is locked or it isn’t. Your code has to agree on the key format and has to take the lock before it runs the risky part.
It also helps to compare advisory locks to row locks. Row locks (like SELECT ... FOR UPDATE) protect actual table rows. They’re great when the work maps cleanly to one row. Advisory locks protect a key you choose, which is useful when the workflow touches many tables, calls external services, or starts before a row even exists.
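A rough sketch of the difference, assuming an invoices table with an integer id (the key 123 simply reuses that id):
-- Row lock: protects an actual row, which must already exist
BEGIN;
SELECT * FROM invoices WHERE id = 123 FOR UPDATE;
-- ... update that row ...
COMMIT;
-- Advisory lock: protects a key you choose, even if no row exists yet
BEGIN;
SELECT pg_advisory_xact_lock(123); -- "I'm working on invoice 123"
-- ... touch several tables, coordinate app-side work ...
COMMIT; -- lock released automatically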
Advisory locks are useful when you need:
- One-at-a-time actions per entity (one approval per request, one charge per invoice)
- Coordination across multiple app servers without adding a separate locking service
- Protection around a workflow step that’s bigger than a single row update
They are not a replacement for other safety tools. They don’t make operations idempotent, they don’t enforce business rules, and they won’t stop duplicates if a code path forgets to take the lock.
They’re often called “lightweight” because you can use them without schema changes or extra infrastructure. In many cases, you can fix double-processing by adding one lock call around a critical section while keeping the rest of the design the same.
Lock types you will actually use
When people say “PostgreSQL advisory locks,” they usually mean a small set of functions. Picking the right one changes what happens on errors, timeouts, and retries.
Session vs transaction locks
A session-level lock (pg_advisory_lock) is held until you explicitly release it or the database connection ends. That can be convenient for long-running workers, but it also means a lock can linger if your app crashes in a way that leaves a pooled connection hanging around.
A transaction-level lock (pg_advisory_xact_lock) is tied to the current transaction. When you commit or roll back, PostgreSQL releases it automatically. For most request-response workflows (approvals, billing clicks, admin actions), this is the safer default because it’s hard to forget to release.
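A minimal sketch of the two forms; the key 42 is just a placeholder:
-- Session-level: held until you unlock it or the connection closes
SELECT pg_advisory_lock(42);
-- ... do the work ...
SELECT pg_advisory_unlock(42);
-- Transaction-level: released automatically at COMMIT or ROLLBACK
BEGIN;
SELECT pg_advisory_xact_lock(42);
-- ... do the work ...
COMMIT;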
Blocking vs try-lock
Blocking calls wait until the lock is available. Simple, but it can make a web request feel stuck if another session is holding the lock.
Try-lock calls return immediately:
- pg_try_advisory_lock (session-level)
- pg_try_advisory_xact_lock (transaction-level)
Try-lock is often better for UI actions. If the lock is taken, you can return a clear message like “Already processing” and ask the user to retry.
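A sketch of the try-lock flow inside a transaction; the boolean tells you whether you won (42 is a placeholder key):
BEGIN;
SELECT pg_try_advisory_xact_lock(42) AS got_lock;
-- got_lock = true  -> run the critical section and COMMIT
-- got_lock = false -> ROLLBACK and return "Already processing" to the caller
COMMIT;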
Shared vs exclusive
Exclusive locks are “one at a time.” Shared locks allow multiple holders but block an exclusive lock. Most double-processing problems use exclusive locks. Shared locks are useful when many readers can proceed, but a rare writer must run alone.
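A sketch of that pairing, assuming readers take the shared form and a rare maintenance task takes the exclusive one (42 is a placeholder key):
-- Many readers can hold the shared lock at the same time
SELECT pg_advisory_lock_shared(42);
-- ... read-only work ...
SELECT pg_advisory_unlock_shared(42);
-- The exclusive lock waits until every shared holder has released
SELECT pg_advisory_lock(42);
-- ... work that must run alone ...
SELECT pg_advisory_unlock(42);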
How locks are released
Release depends on the type:
- Session locks: released on disconnect, or explicitly with pg_advisory_unlock
- Transaction locks: released automatically when the transaction ends
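For session locks, the explicit release functions are worth knowing; pg_advisory_unlock returns a boolean telling you whether this session actually held the lock:
SELECT pg_advisory_unlock(42);   -- true if released, false (plus a warning) if this session did not hold it
SELECT pg_advisory_unlock_all(); -- release every session-level advisory lock held by this session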
Choosing the right lock key
An advisory lock only works if every worker tries to lock the exact same key for the exact same piece of work. If one code path locks “invoice 123” and another locks “customer 45,” you can still get duplicates.
Start by naming the “thing” you want to protect. Make it concrete: one invoice, one approval request, one scheduled task run, or one customer’s monthly billing cycle. That choice decides how much concurrency you allow.
Pick a scope that matches the risk
Most teams end up with one of these:
- Per record: safest for approvals and invoices (lock by invoice_id or request_id)
- Per customer/account: useful when actions must be serialized per customer (billing, credit changes)
- Per workflow step: when different steps can run in parallel, but each step must be one-at-a-time
Treat scope as a product decision, not a database detail. “Per record” prevents double clicks from charging twice. “Per customer” prevents two background jobs from generating overlapping statements.
Choose a stable key strategy
You generally have two options: two 32-bit integers (often used as namespace + id), or one 64-bit integer (bigint), sometimes created by hashing a string ID.
Two-int keys are easy to standardize: pick a fixed namespace number per workflow (for example, approvals vs billing), and use the record ID as the second value.
Hashing can be handy when your identifier is a UUID, but you need to accept a small collision risk and be consistent everywhere.
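Two sketches of those strategies; the namespace number 1 and the “invoice:” prefix are arbitrary conventions you would document, and hashtextextended is available in PostgreSQL 11+:
-- Two-int form: namespace 1 = billing, second value = the record ID
SELECT pg_advisory_xact_lock(1, 123);
-- Hashed form: derive a 64-bit key from a string identifier ($1 = invoice UUID as text)
SELECT pg_advisory_xact_lock(hashtextextended('invoice:' || $1, 0));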
Whatever you choose, write the format down and centralize it. “Almost the same key” in two places is a common way to reintroduce duplicates.
Step by step: a safe pattern for one-at-a-time processing
A good advisory-lock workflow is simple: lock, verify, act, record, commit. The lock isn’t the business rule by itself. It’s a guardrail that makes the rule reliable when two workers hit the same record at the same time.
A practical pattern:
- Open a transaction when the outcome must be atomic.
- Acquire the lock for the specific unit of work. Prefer a transaction-scoped lock (pg_advisory_xact_lock) so it releases automatically.
- Re-check state in the database. Don’t assume you’re first. Confirm the record is still eligible.
- Do the work and write a durable “done” marker in the database (status update, ledger entry, audit row).
- Commit and let the lock go. If you used a session-level lock, unlock before returning the connection to the pool.
Example: two app servers receive “Approve invoice #123” within the same second. Both start, but only one gets the lock for 123. The winner checks that invoice #123 is still pending, marks it approved, writes the audit/payment record, and commits. The second server either fails fast (try-lock) or waits briefly, then re-checks and exits without creating a duplicate.
Where advisory locks fit: approvals, billing, schedulers
Advisory locks fit best when the rule is straightforward: for a specific thing, only one process can do the “winning” work at a time. You keep your existing database and app code, but add a small gate that makes race conditions much harder to trigger.
Approvals
Approvals are classic concurrency traps. Two reviewers (or the same person double-clicking) can hit Approve within milliseconds. With a lock keyed to the request ID, only one transaction performs the state change. Everyone else quickly learns the outcome and can show a clear message like “already approved” or “already rejected.”
This is common in customer portals and admin panels where many people watch the same queue.
Billing
Billing usually needs a stricter rule: one payment attempt per invoice, even when retries happen. A network timeout can make a user click Pay again, or a background retry can run while the first attempt is still in flight.
A lock keyed to the invoice ID ensures only one path talks to the payment provider at a time. The second attempt can return “payment in progress” or read the latest payment status. That prevents duplicate work and reduces the risk of double charges.
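A sketch of that gate, assuming an invoices table with a status column and a payments table (table and column names are illustrative):
BEGIN;
-- One payment attempt per invoice at a time ($1 = invoice_id)
SELECT pg_try_advisory_xact_lock($1) AS got_lock;
-- If got_lock = false, return "payment in progress" without calling the provider.
SELECT status FROM invoices WHERE id = $1 FOR UPDATE;
-- If status != 'unpaid', return the existing payment status instead of charging again.
UPDATE invoices SET status = 'charging' WHERE id = $1;
INSERT INTO payments(invoice_id, state, created_at) VALUES ($1, 'pending', now());
COMMIT;
-- Call the payment provider after COMMIT, then record the outcome in payments.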
Schedulers and background workers
In multi-instance setups, schedulers can accidentally run the same window in parallel. A lock keyed to the job name plus the time window (for example, “daily-settlement:2026-01-29”) ensures only one instance runs it.
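A sketch of that gate, hashing the job name plus window into a 64-bit key (hashtextextended is available in PostgreSQL 11+; the job name is just an example):
BEGIN;
SELECT pg_try_advisory_xact_lock(hashtextextended('daily-settlement:2026-01-29', 0)) AS got_lock;
-- got_lock = false -> another instance already owns this window; skip this cycle.
-- got_lock = true  -> run the window's work inside this transaction, then COMMIT.
COMMIT;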
The same approach works for workers that pull items from a table: lock on the item ID so only one worker can process it.
Common keys people lock on include a single approval request ID, a single invoice ID, a job name plus time window, a customer ID for “one export at a time,” or a unique idempotency key for retries.
A realistic example: stopping double-approval in a portal
Picture an approval request in a portal: a purchase order is waiting, and two managers click Approve within the same second. Without protection, both requests can read “pending” and both can write “approved,” creating duplicate audit entries, duplicate notifications, or downstream work triggered twice.
PostgreSQL advisory locks give you a straightforward way to make this action one-at-a-time per approval.
The flow
When the API receives an approve action, it first takes a lock based on the approval id (so different approvals can still be processed in parallel).
A common pattern is: lock on approval_id, read current status, update status, then write an audit record, all in one transaction.
BEGIN;
-- One-at-a-time per approval_id
SELECT pg_try_advisory_xact_lock($1) AS got_lock; -- $1 = approval_id
-- If got_lock = false, return "someone else is approving, try again".
SELECT status FROM approvals WHERE id = $1 FOR UPDATE;
-- If status != 'pending', return "already processed".
UPDATE approvals
SET status = 'approved', approved_by = $2, approved_at = now()
WHERE id = $1;
INSERT INTO approval_audit(approval_id, actor_id, action, created_at)
VALUES ($1, $2, 'approved', now());
COMMIT;
What the second click experiences
The second request either can’t get the lock (so it quickly returns “Already being processed”) or it gets the lock after the first finishes, then sees the status is already approved and exits without changing anything. Either way, you avoid double-processing while keeping the UI responsive.
For debugging, log enough to trace each attempt: request id, approval id and computed lock key, actor id, outcome (lock_busy, already_approved, approved_ok), and timing.
Handling wait, timeouts, and retries without freezing the app
Waiting for a lock sounds harmless until it turns into a spinning button, a stuck worker, or a backlog that never clears. When you can’t get the lock, fail fast where a human is waiting and wait only where waiting is safe.
For user actions: try-lock and respond clearly
If someone clicks Approve or Charge, don’t block their request for seconds. Use try-lock so the app can answer right away.
A practical approach is: try to lock, and if it fails, return a clear “busy, try again” response (or refresh the item state). That reduces timeouts and discourages repeated clicks.
Keep the locked section short: validate state, apply the state change, commit.
For background jobs: blocking is OK, but cap it
For schedulers and workers, blocking can be fine because no human is waiting. But you still need limits, otherwise one slow job can stall a whole fleet.
Use timeouts so a worker can give up and move on:
SET lock_timeout = '2s';
SET statement_timeout = '30s';
SELECT pg_advisory_lock(123456);
Also set a maximum expected runtime for the job itself. If billing usually finishes in under 10 seconds, treat 2 minutes as an incident. Track start time, job id, and how long locks are held. If your job runner supports cancellation, cancel tasks that exceed the cap so the session ends and the lock is released.
Plan retries on purpose. When a lock isn’t acquired, decide what happens next: reschedule soon with backoff (and a little randomness), skip best-effort work for this cycle, or mark the item as contended if repeated failures need attention.
Common mistakes that cause stuck locks or duplicates
The most common surprise is session-level locks that never get released. Connection pools keep connections open, so a session can outlive a request. If you take a session lock and forget to unlock, the lock can stay held until that connection is recycled. Other workers will wait (or fail) and it can be hard to see why.
Another source of duplicates is locking but not checking state. A lock only ensures one worker runs the critical section at a time. It doesn’t guarantee the record is still eligible. Always re-check inside the same transaction (for example, confirm pending before you move to approved).
Lock keys also trip teams up. If one service locks on order_id and another locks on a differently computed key for the same real-world resource, you now have two locks. Both paths can run at once, which creates a false sense of safety.
Long lock holds are usually self-inflicted. If you do slow network calls while holding the lock (payment provider, email/SMS, webhooks), a short guardrail becomes a bottleneck. Keep the locked section focused on fast database work: validate state, write the new state, record what should happen next. Then trigger side effects after the transaction commits.
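One way to do that is an outbox-style record: write what should happen as a row inside the locked transaction, and let a separate worker perform it after commit. A minimal sketch, assuming a side_effects table you would define yourself:
BEGIN;
SELECT pg_advisory_xact_lock($1);                        -- $1 = approval_id
UPDATE approvals SET status = 'approved' WHERE id = $1;  -- fast state change only
INSERT INTO side_effects(kind, payload, created_at)      -- record the slow work instead of doing it here
VALUES ('send_approval_email', jsonb_build_object('approval_id', $1), now());
COMMIT;
-- A separate worker reads side_effects and makes the network calls, outside the lock.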
Finally, advisory locks don’t replace idempotency or database constraints. Treat them as a traffic light, not a proof system. Use unique constraints where they fit, and use idempotency keys for external calls.
Quick checklist before you ship
Treat advisory locks like a small contract: everyone on the team should know what the lock means, what it protects, and what is allowed to happen while it’s held.
A short checklist that catches most issues:
- One clear lock key per resource, written down and reused everywhere
- Acquire the lock before anything irreversible (payments, emails, external API calls)
- Re-check state after the lock is held and before writing changes
- Keep the locked section short and measurable (log lock wait and execution time)
- Decide what “lock busy” means for each path (UI message, retry with backoff, skip)
Next steps: apply the pattern and keep it maintainable
Pick one place where duplicates hurt the most and start there. Good first targets are actions that cost money or change state permanently, like “charge invoice” or “approve request.” Wrap only that critical section with an advisory lock, then expand to nearby steps once you trust the behavior.
Add basic observability early. Log when a worker can’t get a lock, and how long locked work takes. If lock waits spike, it usually means the critical section is too big or a slow query is hiding inside it.
Locks work best on top of data safety, not instead of it. Keep clear status fields (pending, processing, done, failed) and back them with constraints where you can. If a retry happens at the worst moment, a unique constraint or an idempotency key can be the second line of defense.
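A sketch of that second line of defense, assuming a payments table with state and idempotency_key columns; the partial unique index blocks a second successful charge even if a code path skips the lock:
-- At most one successful payment per invoice
CREATE UNIQUE INDEX one_successful_payment_per_invoice
ON payments (invoice_id)
WHERE state = 'succeeded';
-- At most one attempt per idempotency key across retries
ALTER TABLE payments ADD CONSTRAINT payments_idempotency_key_unique UNIQUE (idempotency_key);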
If you’re building workflows in AppMaster (appmaster.io), you can apply the same pattern by keeping the critical state change inside one transaction and adding a small SQL step to take a transaction-level advisory lock before the “finalize” step.
Advisory locks are a good fit until you truly need queue features (priorities, delayed jobs, dead-letter handling), you have heavy contention and need smarter parallelism, you must coordinate across databases with no shared Postgres, or you need stricter isolation rules. The goal is boring reliability: keep the pattern small, consistent, visible in logs, and backed by constraints.
FAQ
When should I use an advisory lock?
Use an advisory lock when you need “only one actor at a time” for a specific unit of work, like approving a request, charging an invoice, or running a scheduled window. It’s especially helpful when multiple app instances can touch the same item and you don’t want to add a separate locking service.
How are advisory locks different from row locks?
Row locks protect actual rows you select and are great when the whole operation maps cleanly to a single row update. Advisory locks protect a key you choose, so they work even when the workflow touches many tables, calls external services, or starts before the final row exists.
Should I use session-level or transaction-level locks?
Default to pg_advisory_xact_lock (transaction-level) for request/response actions because it releases automatically when you commit or roll back. Use pg_advisory_lock (session-level) only when you truly need the lock to outlive a transaction and you’re confident you will always unlock before returning the connection to the pool.
Should I block waiting for the lock or use try-lock?
For UI-driven actions, prefer try-lock (pg_try_advisory_xact_lock) so the request can fail fast and return a clear “already processing” response. For background workers, a blocking lock can be fine, but cap it with lock_timeout so one stuck task doesn’t stall everything.
How broad should the lock scope be?
Lock the smallest thing that must not run twice, usually “one invoice” or “one approval request.” If you lock too broadly (like per customer) you may reduce throughput; if you lock too narrowly (like per step without a shared key) you can still get duplicates.
How do I keep lock keys consistent across services?
Pick one stable key format and use it everywhere that can perform the same critical action. A common approach is two integers: a fixed namespace for the workflow plus the entity ID, so different workflows don’t accidentally block each other while still coordinating correctly.
Do advisory locks make my operations idempotent?
No. A lock only prevents concurrent execution; it doesn’t prove the operation is safe to repeat. You still need to re-check state inside the transaction (for example, verify the item is still pending) and rely on unique constraints or idempotency where they fit.
What should happen while the lock is held?
Keep the locked section short and database-focused: acquire the lock, re-check eligibility, write the new state, and commit. Do slow side effects (payments, emails, webhooks) after the commit or via an outbox-style record so you don’t hold the lock during network delays.
Why would a lock get stuck?
The most common cause is a session-level lock held by a pooled connection that never got unlocked due to a code path bug. Prefer transaction-level locks, and if you must use session locks, ensure pg_advisory_unlock runs reliably before the connection returns to the pool.
What should I log to debug lock problems?
Log the entity ID and the computed lock key, whether the lock was acquired, how long it took to acquire, and how long the transaction ran. Also log the outcome like lock_busy, already_processed, or processed_ok so you can distinguish contention from genuine duplicates.


