Triggers vs background job workers for reliable notifications
Learn when triggers vs background job workers are safer for notifications, with practical guidance on retries, transactions, and preventing duplicates.

Why notification delivery breaks in real apps
Notifications sound simple: a user does something, then an email or SMS goes out. Most real failures come down to timing and duplication. Messages get sent before the data is truly saved, or they get sent twice after a partial failure.
A "notification" can be a lot of things: email receipts, SMS one-time codes, push alerts, in-app messages, Slack or Telegram pings, or a webhook to another system. The shared problem is always the same: you're trying to coordinate a database change with something outside your app.
The outside world is messy. Providers can be slow, return timeouts, or accept a request while your app never receives the success response. Your own app can crash or restart mid-request. Even "successful" sends can be re-run because of infrastructure retries, worker restarts, or a user pressing the button again.
Common causes of broken notification delivery include network timeouts, provider outages or rate limits, app restarts at the wrong moment, retries that re-run the same send logic without a unique guard, and designs where a database write and an external send happen as one combined step.
When people ask for "reliable notifications," they usually mean one of two things:
- deliver exactly once, or
- at least never duplicate (duplicates are often worse than a delay).
Getting both fast and perfectly safe is hard, so you end up choosing tradeoffs between speed, safety, and complexity.
This is why the choice between triggers and background job workers isn't just an architecture debate. It's about when a send is allowed to happen, how failures are retried, and how you prevent duplicate emails or SMS when something goes wrong.
Triggers and background workers: what they mean
When people compare triggers to background job workers, they're really comparing where the notification logic runs and how tightly it's tied to the action that caused it.
A trigger is "do it now when X happens." In many apps, that means sending an email or SMS right after a user action, inside the same web request. Triggers can also live at the database level: a database trigger runs automatically when a row is inserted or updated. Both types feel immediate, but they inherit the timing and limits of whatever fired them.
A background worker is "do it soon, but not in the foreground." It's a separate process that pulls jobs from a queue and tries to complete them. Your main app records what should happen, then returns quickly, while the worker handles the slower, failure-prone parts like calling an email or SMS provider.
A "job" is the unit of work the worker processes. It typically includes who to notify, which template, what data to fill in, the current status (queued, processing, sent, failed), how many attempts have happened, and sometimes a scheduled time.
A typical notification flow looks like this: you prepare message details, enqueue a job, send via a provider, record the result, then decide whether to retry, stop, or alert someone.
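If it helps to picture the job as data, here is a minimal sketch of what such a record might hold. The field names are illustrative, not a required schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class NotificationJob:
    notification_id: str               # stable identity, reused on every retry
    recipient: str                     # email address or phone number
    channel: str                       # "email" or "sms"
    template: str                      # which message to render
    payload: dict                      # data used to fill the template
    status: str = "queued"             # queued -> processing -> sent / failed
    attempts: int = 0                  # how many sends have been tried
    next_attempt_at: Optional[datetime] = None   # for scheduled or retried sends
    last_error: Optional[str] = None   # short, readable reason for the last failure
```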
Transaction boundaries: when itâs actually safe to send
A transaction boundary is the line between "we tried to save it" and "it's truly saved." Until the database commits, the change can still be rolled back. That matters because notifications are hard to take back.
If you send an email or SMS before the commit, you can message someone about something that never happened. A customer might get "Your password was changed" or "Your order is confirmed," and then the write fails due to a constraint error or timeout. Now the user is confused, and support has to untangle it.
Sending from inside a database trigger looks tempting because it fires automatically when data changes. The catch is that triggers run inside the same transaction. If the transaction rolls back, you may already have called an email or SMS provider.
Database triggers also tend to be harder to observe, test, and retry safely. And when they perform slow external calls, they can hold locks longer than expected and make database issues harder to diagnose.
A safer approach is the outbox idea: record the intent to notify as data, commit it, then send it after.
You make the business change and, in the same transaction, insert an outbox row that describes the message (who, what, which channel, plus a unique key). After commit, a background worker reads pending outbox rows, sends the message, then marks them sent.
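Here is a minimal sketch of that step, assuming a hypothetical orders table and an outbox table with the columns shown. sqlite3 is used only to keep the example self-contained, and nothing is sent from inside the transaction:

```python
import json
import sqlite3

def confirm_order_and_queue_email(conn: sqlite3.Connection, order_id: int, email: str) -> None:
    # One transaction: the business change and the intent to notify commit together
    # or roll back together. Nothing is sent from inside this function.
    with conn:  # commits on success, rolls back if anything raises
        conn.execute("UPDATE orders SET status = 'confirmed' WHERE id = ?", (order_id,))
        conn.execute(
            """INSERT INTO outbox (idempotency_key, recipient, channel, template, payload, status)
               VALUES (?, ?, 'email', 'order_confirmed', ?, 'pending')""",
            (f"order-{order_id}-confirmed-email", email, json.dumps({"order_id": order_id})),
        )
    # A separate worker picks up pending outbox rows only after this commit.
```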
Immediate sends can still be fine for low-impact, informational messages where being wrong is acceptable, like "We're processing your request." For anything that must match the final state, wait until after commit.
Retries and failure handling: where each approach wins
Retries are usually the deciding factor.
Triggers: fast, but brittle on failures
Most trigger-based designs have no good retry story.
If a trigger calls an email/SMS provider and the call fails, you usually end up with two bad choices:
- fail the transaction (and block the original update), or
- swallow the error (and silently lose the notification).
Neither is acceptable when reliability matters.
Trying to loop or delay inside a trigger can make things worse by keeping transactions open longer, increasing lock time, and slowing down the database. And if the database or app dies mid-send, you often can't tell whether the provider received the request.
Background workers: designed for retries
A worker treats sending as a separate task with its own state. That makes it natural to retry only when it makes sense.
As a practical rule, you generally retry temporary failures (timeouts, transient network issues, server errors, rate limits with a longer wait). You generally don't retry permanent problems (invalid phone numbers, malformed emails, hard rejections such as unsubscribed users). For "unknown" errors, you cap attempts and make the state visible.
Backoff is what keeps retries from making things worse. Start with a short wait, then increase it each time (for example 10s, 30s, 2m, 10m), and stop after a fixed number of attempts.
To make this survive deploys and restarts, store retry state with each job: attempt count, next attempt time, last error (short and readable), last attempt time, and a clear status like pending, sending, sent, failed.
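In code, that classification plus backoff can be as small as the sketch below. The delay schedule and the set of permanent error codes are illustrative choices, not fixed rules:

```python
from datetime import datetime, timedelta
from typing import Optional, Tuple

# Illustrative backoff schedule: wait a little longer after each failed attempt.
BACKOFF = [timedelta(seconds=10), timedelta(seconds=30),
           timedelta(minutes=2), timedelta(minutes=10)]

# Illustrative set of errors that should never be retried.
PERMANENT_ERRORS = {"invalid_recipient", "unsubscribed", "hard_bounce"}

def next_step(error_code: str, attempts: int) -> Tuple[str, Optional[datetime]]:
    """Decide what happens to a failed job: schedule a retry or stop for good."""
    if error_code in PERMANENT_ERRORS:
        return "failed_permanent", None          # bad data; retrying just spams
    if attempts >= len(BACKOFF):
        return "failed_permanent", None          # attempts are capped
    return "pending", datetime.utcnow() + BACKOFF[attempts]

# Example: a second timeout schedules another try about 30 seconds out.
status, retry_at = next_step("timeout", attempts=1)
```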
If your app restarts mid-send, a worker can re-check stuck jobs (for example status = sending with an old timestamp) and retry them safely. This is where idempotency becomes essential so a retry doesn't double-send.
Preventing duplicate emails and SMS with idempotency
Idempotency means you can run the same "send notification" action more than once and the user still gets it once.
The classic duplication case is a timeout: your app calls an email or SMS provider, the request times out, and your code retries. The first request may have actually succeeded, so the retry creates a duplicate.
A practical fix is to give every message a stable key and treat that key as the single source of truth. Good keys describe what the message means, not when you tried to send it.
Common approaches include:
- a generated notification_id created when you decide "this message should exist," or
- a business-derived key like order_id + template + recipient (only if that truly defines uniqueness).
Then store a send ledger (often the outbox table itself) and make all retries consult it before sending. Keep states simple and visible: created (decided), queued (ready), sent (confirmed), failed (confirmed failure), canceled (no longer needed). The critical rule is that you allow only one active record per idempotency key.
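A minimal sketch of that rule, with a unique constraint doing the actual blocking (the table and column names are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE outbox (
           idempotency_key TEXT NOT NULL,
           channel         TEXT NOT NULL,
           status          TEXT NOT NULL DEFAULT 'created',
           UNIQUE (idempotency_key, channel)  -- one active record per key
       )"""
)

def record_intent(key: str, channel: str) -> bool:
    """Insert the message once; repeating the same decision is a no-op."""
    try:
        with conn:
            conn.execute(
                "INSERT INTO outbox (idempotency_key, channel) VALUES (?, ?)",
                (key, channel),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # already recorded; do not create a second message

print(record_intent("order-4815-shipped", "sms"))  # True: created
print(record_intent("order-4815-shipped", "sms"))  # False: duplicate blocked
```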
Provider-side idempotency can help when it's supported, but it doesn't replace your own ledger. You still need to handle your retries, deployments, and worker restarts.
Also treat "unknown" outcomes as first-class. If a request timed out, don't immediately send again. Mark it as pending confirmation and retry safely by checking provider delivery status when possible. If you can't confirm, delay and alert instead of double-sending.
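One way to express that "unknown is not failed" rule is sketched below; the outcome labels and the five-minute delay are placeholders:

```python
from datetime import datetime, timedelta

def handle_send_outcome(job: dict, outcome: str) -> None:
    """outcome is a placeholder label: 'confirmed', 'rejected', or 'timeout'."""
    if outcome == "confirmed":
        job["status"] = "sent"
    elif outcome == "rejected":
        job["status"] = "failed"  # the provider clearly said no
    else:
        # Timeout: the provider may have accepted the request, so do not resend now.
        # Check delivery status later; only then retry under the same idempotency key.
        job["status"] = "pending_confirmation"
        job["next_check_at"] = datetime.utcnow() + timedelta(minutes=5)

job = {"status": "sending"}
handle_send_outcome(job, "timeout")   # leaves the job waiting for confirmation
```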
A safe default pattern: outbox + background worker (step by step)
If you want a safe default, the outbox pattern plus a worker is hard to beat. It keeps sending outside your business transaction, while still guaranteeing the intent to notify is saved.
The flow
Treat "send a notification" as data you store, not an action you fire.
You save the business change (for example, an order status update) in your normal tables. In the same database transaction, you also insert an outbox record with recipient, channel (email/SMS), template, payload, and an idempotency key. You commit the transaction. Only after that point can anything be sent.
A background worker regularly picks up pending outbox rows, sends them, and records the result.
Add a simple claiming step so two workers don't grab the same row. This can be a status change to processing or a locked timestamp.
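A claiming step can be a single conditional update. The sketch below assumes an outbox table with pending/processing statuses and a claimed_at column, and it also reclaims rows that have sat in processing too long after a crash:

```python
import sqlite3
from datetime import datetime, timedelta

def claim_batch(conn: sqlite3.Connection, limit: int = 10) -> list:
    """Claim up to `limit` outbox rows so only one worker handles each."""
    now = datetime.utcnow().isoformat()
    stale = (datetime.utcnow() - timedelta(minutes=10)).isoformat()
    candidates = conn.execute(
        """SELECT rowid FROM outbox
           WHERE status = 'pending'
              OR (status = 'processing' AND claimed_at < ?)   -- stuck after a crash
           ORDER BY rowid LIMIT ?""",
        (stale, limit),
    ).fetchall()
    claimed = []
    for (rowid,) in candidates:
        with conn:
            # The conditional UPDATE is the claim: if another worker already took
            # this row, nothing matches and rowcount stays 0.
            cur = conn.execute(
                """UPDATE outbox SET status = 'processing', claimed_at = ?
                   WHERE rowid = ?
                     AND (status = 'pending'
                          OR (status = 'processing' AND claimed_at < ?))""",
                (now, rowid, stale),
            )
        if cur.rowcount == 1:
            claimed.append(rowid)
    return claimed
```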
Blocking duplicates and handling failures
Duplicates often happen when a send succeeds but your app crashes before it records "sent." You solve that by making the "mark sent" write safe to repeat.
Use a uniqueness rule (for example, a unique constraint on the idempotency key and channel). Retry with clear rules: limited attempts, increasing delays, and only for retryable errors. After the last retry, move the job into a dead-letter state (like failed_permanent) so someone can review and manually reprocess it.
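One way to sketch the "safe to repeat" part: the mark-sent update only moves a row forward, so re-running it changes nothing the second time. The provider_message_id and last_error columns are assumptions for illustration:

```python
import sqlite3

def mark_sent(conn: sqlite3.Connection, rowid: int, provider_message_id: str) -> None:
    # Idempotent: once a row is 'sent' it stays 'sent', so re-running this after a
    # crash, a worker restart, or a duplicate callback changes nothing.
    with conn:
        conn.execute(
            """UPDATE outbox
               SET status = 'sent', provider_message_id = ?
               WHERE rowid = ? AND status != 'sent'""",
            (provider_message_id, rowid),
        )

def mark_failed_permanent(conn: sqlite3.Connection, rowid: int, error: str) -> None:
    # Dead-letter state: no more automatic retries, but the row stays visible
    # so someone can review it and reprocess manually if needed.
    with conn:
        conn.execute(
            "UPDATE outbox SET status = 'failed_permanent', last_error = ? WHERE rowid = ?",
            (error, rowid),
        )
```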
Monitoring can stay simple: counts of pending, processing, sent, retrying, and failed_permanent, plus the oldest pending timestamp.
Concrete example: when an order moves from "Packed" to "Shipped," you update the order row and create one outbox row with idempotency key order-4815-shipped. Even if the worker crashes mid-send, reruns won't double-send because the "sent" write is protected by that unique key.
When background workers are the better choice
Database triggers are good at reacting the moment data changes. But if the job is "deliver a notification reliably under messy real-world conditions," background workers usually give you more control.
Workers are the better fit when you need time-based sends (reminders, digests), high volume with rate limits and backpressure, tolerance for provider variability (429 limits, slow responses, short outages), multi-step workflows (send, wait for delivery, then follow up), or cross-system events that need reconciliation.
A simple example: you charge a customer, then send an SMS receipt, then email an invoice. If SMS fails due to a gateway issue, you still want the order to stay paid and you want a safe retry later. Putting that logic in a trigger risks mixing "data is correct" with "a third party is available right now."
Background workers also make operational control easier. You can pause a queue during an incident, inspect failures, and retry with delays.
Common mistakes that cause missed or duplicate messages
The fastest way to get unreliable notifications is to "just send it" wherever it feels convenient, then hope retries will save you. Whether you use triggers or workers, the details around failure and state decide if users get one message, two messages, or none.
A common trap is sending from a database trigger and assuming it can't fail. Triggers run inside the database transaction, so any slow provider call can stall the write, hit timeouts, or lock tables longer than you expect. Worse, if the send fails and you roll back the transaction, you might retry later and send twice if the provider actually accepted the first call.
Mistakes that show up repeatedly:
- Retrying everything the same way, including permanent errors (bad email, blocked number).
- Not separating "queued" from "sent," so you can't tell what's safe to retry after a crash.
- Using timestamps as dedupe keys, so retries naturally bypass "uniqueness."
- Making provider calls in the user request path (checkout and form submit shouldn't wait on gateways).
- Treating provider timeouts as "not delivered," when many are actually "unknown."
A simple example: you send an SMS, the provider times out, and you retry. If the first request actually succeeded, the user gets two codes. The fix is to record a stable idempotency key (like a notification_id), mark the message as queued before sending, then mark it as sent only after a clear success response.
Quick checks before you ship notifications
Most notification bugs aren't about the tool. They're about timing, retries, and missing records.
Confirm you only send after the database write is safely committed. If you send inside the same transaction and it later rolls back, users can get a message about something that never happened.
Next, make every notification uniquely identifiable. Give each message a stable idempotency key (for example order_id + event_type + channel) and enforce it in storage so a retry can't create a second "new" notification.
Before release, check these basics:
- Sending happens after commit, not during the write.
- Each notification has a unique idempotency key, and duplicates are rejected.
- Retries are safe: the system can run the same job again and still send at most once.
- Every attempt is recorded (status, last_error, timestamps).
- Attempts are capped, and stuck items have a clear place to review and reprocess.
Test restart behavior on purpose. Kill the worker mid-send, restart it, and verify nothing double-sends. Do the same while the database is under load.
A simple scenario to validate: a user changes their phone number, then you send an SMS verification. If the SMS provider times out, your app retries. With a good idempotency key and attempt log, you either send once or safely try again later, but you don't spam.
Example scenario: order updates without double-sending
A store sends two kinds of messages: (1) an order confirmation email right after payment, and (2) SMS updates when the package is out for delivery and delivered.
Here's what goes wrong when you send too early (for example, inside a database trigger): the payment step writes an orders row, the trigger fires and emails the customer, and then the payment capture fails a second later. Now you have a "Thanks for your order" email for an order that never became real.
Now imagine the opposite problem: delivery status changes to "Out for delivery," you call your SMS provider, and the provider times out. You don't know if it sent the message. If you immediately retry, you risk two SMS messages. If you don't retry, you risk sending none.
A safer flow uses an outbox record plus a background worker. The app commits the order or status change, and in the same transaction writes an outbox row like "send template X to user Y, channel SMS, idempotency key Z." Only after commit does a worker deliver messages.
A simple timeline looks like this:
- Payment succeeds, transaction commits, outbox row for the confirmation email is saved.
- Worker sends the email, then marks the outbox as sent with a provider message ID.
- Delivery status changes, transaction commits, outbox row for the SMS update is saved.
- Provider times out, worker marks the outbox as retryable and tries again later using the same idempotency key.
On retry, the outbox row is the single source of truth. You're not creating a second "send" request, you're finishing the first one.
For support, this is clearer too. They can see messages stuck in "failed" with the last error (timeout, bad phone number, blocked email), how many attempts were made, and whether it's safe to retry without double-sending.
Next steps: pick a pattern and implement it cleanly
Pick a default and write it down. Inconsistent behavior usually comes from mixing triggers and workers randomly.
Start small with an outbox table and one worker loop. The first goal isn't speed, it's correctness: store what you intend to send, send it after commit, and only mark it sent when the provider confirms.
A simple rollout plan:
- Define events (order_paid, ticket_assigned) and which channels they can use.
- Add an outbox table with event_id, recipient, payload, status, attempts, next_retry_at, sent_at (see the schema sketch after this list).
- Build one worker that polls pending rows, sends, and updates status in one place.
- Add idempotency with a unique key per message and "do nothing if already sent."
- Split errors into retryable (timeouts, 5xx) vs not retryable (bad number, blocked email).
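Under the assumptions in that plan, a first version of the outbox table might look roughly like this (SQLite types are shown only to keep the sketch runnable; adjust to your database):

```python
import sqlite3

conn = sqlite3.connect("notifications.db")
conn.executescript(
    """
    CREATE TABLE IF NOT EXISTS outbox (
        id              INTEGER PRIMARY KEY,
        event_id        TEXT NOT NULL,                       -- e.g. 'order_paid'
        idempotency_key TEXT NOT NULL UNIQUE,                -- one record per message
        recipient       TEXT NOT NULL,
        channel         TEXT NOT NULL,                       -- 'email' or 'sms'
        payload         TEXT NOT NULL,                       -- JSON for the template
        status          TEXT NOT NULL DEFAULT 'pending',
        attempts        INTEGER NOT NULL DEFAULT 0,
        next_retry_at   TEXT,
        sent_at         TEXT,
        last_error      TEXT,
        created_at      TEXT NOT NULL DEFAULT (datetime('now'))
    );
    CREATE INDEX IF NOT EXISTS outbox_pending ON outbox (status, next_retry_at);
    """
)
```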
Before you scale volume, add basic visibility. Track the pending count, failure rate, and the age of the oldest pending message. If the oldest pending keeps growing, you likely have a stuck worker, a provider outage, or a logic bug.
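That visibility can start as a couple of queries against the outbox table sketched above; the names here are placeholders:

```python
import sqlite3

def outbox_health(conn: sqlite3.Connection) -> dict:
    """Counts per status plus the timestamp of the oldest unsent message."""
    counts = dict(
        conn.execute("SELECT status, COUNT(*) FROM outbox GROUP BY status").fetchall()
    )
    oldest = conn.execute(
        """SELECT MIN(created_at) FROM outbox
           WHERE status NOT IN ('sent', 'failed_permanent')"""
    ).fetchone()[0]
    return {"counts": counts, "oldest_unsent_created_at": oldest}

# If oldest_unsent_created_at keeps drifting further into the past, something is
# stuck: a dead worker, a provider outage, or a retry loop that never succeeds.
```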
If you're building in AppMaster (appmaster.io), this pattern maps cleanly: model the outbox in the Data Designer, write the business change and outbox row in one transaction, then run the send-and-retry logic in a separate background process. That separation is what keeps notification delivery reliable even when providers or deployments misbehave.
FAQ
Are triggers or background job workers the safer default for notifications?
Background workers are usually the safer default because sending is slow and failure-prone, and workers are built for retries and visibility. Triggers can be fast, but they're tightly coupled to the transaction or request that fired them, which makes failures and duplicates harder to handle cleanly.
Why is it dangerous to send a notification before the database commit?
It's dangerous because the database write can still roll back. You can end up notifying users about an order, password change, or payment that never actually committed, and you can't "undo" an email or SMS after it leaves your system.
What goes wrong if a database trigger calls the email or SMS provider directly?
A database trigger runs inside the same transaction as the row change. If it calls an email/SMS provider and the transaction later fails, you may have sent a real message about a change that didn't stick, or you may stall the transaction due to a slow external call.
What is the outbox pattern and why does it help?
The outbox pattern stores the intent to send as a row in your database, in the same transaction as the business change. After the commit, a worker reads pending outbox rows, sends the message, and marks it as sent, which makes timing and retries much safer.
What should happen when a provider request times out?
If the provider times out, the real outcome is often "unknown," not "failed." A good system records the attempt, delays, and retries safely using the same message identity, instead of immediately sending again and risking a duplicate.
How do I prevent duplicate emails and SMS on retries?
Use idempotency: give each notification a stable key that represents what the message means (not when you tried). Store that key in a ledger (often the outbox table) and enforce one active record per key, so retries finish the same message rather than creating a new one.
Which errors should be retried, and which should not?
Retry temporary errors like timeouts, 5xx responses, or rate limits (with a wait). Don't retry permanent errors like invalid addresses, blocked numbers, or hard bounces; mark them failed and make them visible so someone can fix the data instead of spamming retries.
How do you recover jobs that get stuck mid-send after a crash or deploy?
A background worker can scan for jobs stuck in sending past a reasonable timeout, move them back to retryable, and try again with backoff. This only works safely if every job has recorded state (attempts, timestamps, last error) and idempotency prevents double-sends.
What happens if I don't track status and attempts for each notification?
It means you can't answer "is it safe to retry?" Store clear statuses like pending, processing, sent, and failed, plus attempt count and last error. That makes support and debugging practical, and it lets your system recover without guessing.
How does this pattern work in AppMaster?
Model an outbox table in the Data Designer, write the business update and the outbox row in one transaction, then run send-and-retry logic in a separate background process. Keep one idempotency key per message and record attempts, so deploys, retries, and worker restarts don't create duplicates.


