Aug 12, 2025

Triggers vs background job workers for reliable notifications

Learn when triggers and when background job workers are the safer choice for notifications, with practical guidance on retries, transactions, and preventing duplicates.

Why notification delivery breaks in real apps

Notifications sound simple: a user does something, then an email or SMS goes out. Most real failures come down to timing and duplication. Messages get sent before the data is truly saved, or they get sent twice after a partial failure.

A “notification” can be a lot of things: email receipts, SMS one-time codes, push alerts, in-app messages, Slack or Telegram pings, or a webhook to another system. The shared problem is always the same: you’re trying to coordinate a database change with something outside your app.

The outside world is messy. Providers can be slow, return timeouts, or accept a request while your app never receives the success response. Your own app can crash or restart mid-request. Even “successful” sends can be re-run because of infrastructure retries, worker restarts, or a user pressing the button again.

Common causes of broken notification delivery include network timeouts, provider outages or rate limits, app restarts at the wrong moment, retries that re-run the same send logic without a unique guard, and designs where a database write and an external send happen as one combined step.

When people ask for “reliable notifications,” they usually mean one of two things:

  • deliver exactly once, or
  • at least never duplicate (duplicates are often worse than a delay).

Getting both speed and perfect safety is hard, so you end up making tradeoffs between speed, safety, and complexity.

This is why the choice between triggers and background job workers isn’t just an architecture debate. It’s about when a send is allowed to happen, how failures are retried, and how you prevent duplicate emails or SMS when something goes wrong.

Triggers and background workers: what they mean

When people compare triggers to background job workers, they’re really comparing where the notification logic runs and how tightly it’s tied to the action that caused it.

A trigger is “do it now when X happens.” In many apps, that means sending an email or SMS right after a user action, inside the same web request. Triggers can also live at the database level: a database trigger runs automatically when a row is inserted or updated. Both types feel immediate, but they inherit the timing and limits of whatever fired them.

A background worker is “do it soon, but not in the foreground.” It’s a separate process that pulls jobs from a queue and tries to complete them. Your main app records what should happen, then returns quickly, while the worker handles the slower, failure-prone parts like calling an email or SMS provider.

A “job” is the unit of work the worker processes. It typically includes who to notify, which template, what data to fill in, the current status (queued, processing, sent, failed), how many attempts have happened, and sometimes a scheduled time.

A typical notification flow looks like this: you prepare message details, enqueue a job, send via a provider, record the result, then decide whether to retry, stop, or alert someone.
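
As a rough sketch, a job could be modeled like this in Python (the field names and statuses are illustrative, not tied to any particular queue library):

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class NotificationJob:
        recipient: str                 # who to notify (email address or phone number)
        channel: str                   # "email" or "sms"
        template: str                  # which template to render
        payload: dict                  # data to fill into the template
        status: str = "queued"         # queued -> processing -> sent / failed
        attempts: int = 0              # how many sends have been tried so far
        scheduled_at: Optional[datetime] = None  # optional "send no earlier than" time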

Transaction boundaries: when it’s actually safe to send

A transaction boundary is the line between “we tried to save it” and “it’s truly saved.” Until the database commits, the change can still be rolled back. That matters because notifications are hard to take back.

If you send an email or SMS before the commit, you can message someone about something that never happened. A customer might get “Your password was changed” or “Your order is confirmed,” and then the write fails due to a constraint error or timeout. Now the user is confused, and support has to untangle it.

Sending from inside a database trigger looks tempting because it fires automatically when data changes. The catch is that triggers run inside the same transaction. If the transaction rolls back, you may already have called an email or SMS provider.

Database triggers also tend to be harder to observe, test, and retry safely. And when they perform slow external calls, they can hold locks longer than expected and make database issues harder to diagnose.

A safer approach is the outbox idea: record the intent to notify as data, commit it, then send it after.

You make the business change and, in the same transaction, insert an outbox row that describes the message (who, what, which channel, plus a unique key). After commit, a background worker reads pending outbox rows, sends the message, then marks them sent.
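
Here's a minimal sketch of that idea, assuming a PostgreSQL database accessed through psycopg2; the table and column names are illustrative:

    import json
    import psycopg2  # assumed driver; conn = psycopg2.connect(...) happens elsewhere

    def ship_order(conn, order_id: int, recipient: str):
        # One transaction: the status change and the intent to notify
        # either both commit or both roll back. Nothing is sent here.
        with conn:  # psycopg2 commits on success, rolls back on exception
            with conn.cursor() as cur:
                cur.execute(
                    "UPDATE orders SET status = 'shipped' WHERE id = %s",
                    (order_id,),
                )
                cur.execute(
                    """
                    INSERT INTO notification_outbox
                        (idempotency_key, recipient, channel, template, payload, status)
                    VALUES (%s, %s, 'sms', 'order_shipped', %s, 'pending')
                    """,
                    (f"order-{order_id}-shipped", recipient,
                     json.dumps({"order_id": order_id})),
                )
        # Only after the commit above does a separate worker pick the row up and send.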

Immediate sends can still be fine for low-impact, informational messages where being wrong is acceptable, like “We’re processing your request.” For anything that must match the final state, wait until after commit.

Retries and failure handling: where each approach wins

Retries are usually the deciding factor.

Triggers: fast, but brittle on failures

Most trigger-based designs have no good retry story.

If a trigger calls an email/SMS provider and the call fails, you usually end up with two bad choices:

  • fail the transaction (and block the original update), or
  • swallow the error (and silently lose the notification).

Neither is acceptable when reliability matters.

Trying to loop or delay inside a trigger can make things worse by keeping transactions open longer, increasing lock time, and slowing down the database. And if the database or app dies mid-send, you often can’t tell whether the provider received the request.

Background workers: designed for retries

A worker treats sending as a separate task with its own state. That makes it natural to retry only when it makes sense.

As a practical rule, you generally retry temporary failures (timeouts, transient network issues, server errors, rate limits with a longer wait). You generally don’t retry permanent problems (invalid phone numbers, malformed emails, hard rejections such as unsubscribed users). For “unknown” errors, you cap attempts and make the state visible.

Backoff is what keeps retries from making things worse. Start with a short wait, then increase it each time (for example 10s, 30s, 2m, 10m), and stop after a fixed number of attempts.
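
A small sketch of both rules together; the error categories, delays, and attempt cap are illustrative, not tied to any specific provider:

    # Which failures are worth retrying, and how long to wait before the next try.
    RETRYABLE = {"timeout", "connection_error", "server_error", "rate_limited"}
    PERMANENT = {"invalid_recipient", "unsubscribed", "hard_bounce"}

    # Increasing delays in seconds: 10s, 30s, 2m, 10m, then give up.
    BACKOFF_SCHEDULE = [10, 30, 120, 600]

    def next_delay(error_kind: str, failed_attempts: int):
        """Seconds to wait before the next try, or None to stop retrying."""
        if error_kind in PERMANENT:
            return None                              # bad data: retrying won't help
        if failed_attempts > len(BACKOFF_SCHEDULE):
            return None                              # attempt cap reached
        if error_kind in RETRYABLE:
            return BACKOFF_SCHEDULE[failed_attempts - 1]
        # Unknown outcome: also retry, but on the same capped schedule.
        return BACKOFF_SCHEDULE[failed_attempts - 1]

The worker stores the returned delay as the job's next attempt time rather than sleeping in memory.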

To make this survive deploys and restarts, store retry state with each job: attempt count, next attempt time, last error (short and readable), last attempt time, and a clear status like pending, sending, sent, failed.

If your app restarts mid-send, a worker can re-check stuck jobs (for example status = sending with an old timestamp) and retry them safely. This is where idempotency becomes essential so a retry doesn’t double-send.
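
A sketch of that recovery pass, again assuming a PostgreSQL outbox-style table; the 10-minute threshold and column names are only examples:

    def requeue_stuck_jobs(conn):
        # Jobs stuck in 'sending' for too long probably died mid-send.
        # Put them back in the queue; idempotency keys prevent double-sends.
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                UPDATE notification_outbox
                SET status = 'pending'
                WHERE status = 'sending'
                  AND updated_at < now() - interval '10 minutes'
                """
            )
            return cur.rowcount  # how many jobs were rescued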

Preventing duplicate emails and SMS with idempotency

Idempotency means you can run the same “send notification” action more than once and the user still gets it once.

The classic duplication case is a timeout: your app calls an email or SMS provider, the request times out, and your code retries. The first request may have actually succeeded, so the retry creates a duplicate.

A practical fix is to give every message a stable key and treat that key as the single source of truth. Good keys describe what the message means, not when you tried to send it.

Common approaches include:

  • a generated notification_id created when you decide “this message should exist,” or
  • a business-derived key like order_id + template + recipient (only if that truly defines uniqueness).

Then store a send ledger (often the outbox table itself) and make all retries consult it before sending. Keep states simple and visible: created (decided), queued (ready), sent (confirmed), failed (confirmed failure), canceled (no longer needed). The critical rule is that you allow only one active record per idempotency key.
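
In PostgreSQL terms, that rule can be a unique constraint plus an insert that does nothing on conflict. A sketch, with the same illustrative names as before:

    def create_notification(conn, idem_key: str, recipient: str, channel: str):
        # Assumes a unique constraint exists on (idempotency_key, channel), e.g.:
        #   ALTER TABLE notification_outbox
        #       ADD CONSTRAINT uniq_notification UNIQUE (idempotency_key, channel);
        # Deciding the same message twice then becomes a no-op, not a duplicate.
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                INSERT INTO notification_outbox
                    (idempotency_key, recipient, channel, status)
                VALUES (%s, %s, %s, 'created')
                ON CONFLICT (idempotency_key, channel) DO NOTHING
                """,
                (idem_key, recipient, channel),
            )
            return cur.rowcount == 1  # True only if this call created the record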

Provider-side idempotency can help when it’s supported, but it doesn’t replace your own ledger. You still need to handle your retries, deployments, and worker restarts.

Also treat “unknown” outcomes as first-class. If a request timed out, don’t immediately send again. Mark it as pending confirmation and retry safely by checking provider delivery status when possible. If you can’t confirm, delay and alert instead of double-sending.

A safe default pattern: outbox + background worker (step by step)

If you want a safe default, the outbox pattern plus a worker is hard to beat. It keeps sending outside your business transaction, while still guaranteeing the intent to notify is saved.

The flow

Treat “send a notification” as data you store, not an action you fire.

You save the business change (for example, an order status update) in your normal tables. In the same database transaction, you also insert an outbox record with recipient, channel (email/SMS), template, payload, and an idempotency key. You commit the transaction. Only after that point can anything be sent.

A background worker regularly picks up pending outbox rows, sends them, and records the result.

Add a simple claiming step so two workers don’t grab the same row. This can be a status change to processing or a locked timestamp.
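
One common way to implement that claiming step in PostgreSQL is FOR UPDATE SKIP LOCKED. A sketch, with the same illustrative names as before:

    def claim_next_job(conn):
        # Atomically pick one pending row and mark it as processing,
        # skipping rows another worker has already locked.
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                UPDATE notification_outbox
                SET status = 'processing', updated_at = now()
                WHERE id = (
                    SELECT id FROM notification_outbox
                    WHERE status = 'pending'
                      AND (next_retry_at IS NULL OR next_retry_at <= now())
                    ORDER BY id
                    LIMIT 1
                    FOR UPDATE SKIP LOCKED
                )
                RETURNING id, idempotency_key, recipient, channel, template, payload
                """
            )
            return cur.fetchone()  # None when there is nothing to send

Because locked rows are skipped rather than waited on, two workers never claim the same row and never block each other.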

Blocking duplicates and handling failures

Duplicates often happen when a send succeeds but your app crashes before it records “sent.” You solve that by making the “mark sent” write safe to repeat.

Use a uniqueness rule (for example, a unique constraint on the idempotency key and channel). Retry with clear rules: limited attempts, increasing delays, and only for retryable errors. After the last retry, move the job into a dead-letter state (like failed_permanent) so someone can review and manually reprocess it.
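
A sketch of the result-recording side, with a repeat-safe "mark sent" write and a dead-letter state after the last attempt; the attempt cap is just an example:

    MAX_ATTEMPTS = 5  # illustrative cap

    def record_result(conn, job_id: int, ok: bool, error: str = ""):
        with conn, conn.cursor() as cur:
            if ok:
                # Safe to run twice: a second "mark sent" changes nothing.
                cur.execute(
                    "UPDATE notification_outbox SET status = 'sent', sent_at = now() "
                    "WHERE id = %s AND status <> 'sent'",
                    (job_id,),
                )
            else:
                # Count the attempt; park the job for review after the last retry.
                cur.execute(
                    """
                    UPDATE notification_outbox
                    SET attempts = attempts + 1,
                        last_error = %s,
                        status = CASE WHEN attempts + 1 >= %s
                                      THEN 'failed_permanent' ELSE 'pending' END
                    WHERE id = %s
                    """,
                    (error[:200], MAX_ATTEMPTS, job_id),
                )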

Monitoring can stay simple: counts of pending, processing, sent, retrying, and failed_permanent, plus the oldest pending timestamp.
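
Those checks can be two small queries; a sketch with the same assumed table:

    def queue_health(conn):
        with conn, conn.cursor() as cur:
            # How many jobs sit in each state right now.
            cur.execute(
                "SELECT status, count(*) FROM notification_outbox GROUP BY status"
            )
            counts = dict(cur.fetchall())
            # Age of the oldest message still waiting to be sent.
            cur.execute(
                "SELECT min(created_at) FROM notification_outbox WHERE status = 'pending'"
            )
            oldest_pending = cur.fetchone()[0]
        return counts, oldest_pending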

Concrete example: when an order moves from “Packed” to “Shipped,” you update the order row and create one outbox row with idempotency key order-4815-shipped. Even if the worker crashes mid-send, reruns won’t double-send because the “sent” write is protected by that unique key.

When background workers are the better choice

Database triggers are good at reacting the moment data changes. But if the job is “deliver a notification reliably under messy real-world conditions,” background workers usually give you more control.

Workers are the better fit when you need time-based sends (reminders, digests), high volume with rate limits and backpressure, tolerance for provider variability (429 limits, slow responses, short outages), multi-step workflows (send, wait for delivery, then follow up), or cross-system events that need reconciliation.

A simple example: you charge a customer, then send an SMS receipt, then email an invoice. If SMS fails due to a gateway issue, you still want the order to stay paid and you want a safe retry later. Putting that logic in a trigger risks mixing “data is correct” with “a third party is available right now.”

Background workers also make operational control easier. You can pause a queue during an incident, inspect failures, and retry with delays.

Common mistakes that cause missed or duplicate messages

The fastest way to get unreliable notifications is to “just send it” wherever it feels convenient, then hope retries will save you. Whether you use triggers or workers, the details around failure and state decide if users get one message, two messages, or none.

A common trap is sending from a database trigger and assuming it can’t fail. Triggers run inside the database transaction, so any slow provider call can stall the write, hit timeouts, or lock tables longer than you expect. Worse, if the send fails and you roll back the transaction, you might retry later and send twice if the provider actually accepted the first call.

Mistakes that show up repeatedly:

  • Retrying everything the same way, including permanent errors (bad email, blocked number).
  • Not separating “queued” from “sent,” so you can’t tell what’s safe to retry after a crash.
  • Using timestamps as dedupe keys, so retries naturally bypass “uniqueness.”
  • Making provider calls in the user request path (checkout and form submit shouldn’t wait on gateways).
  • Treating provider timeouts as “not delivered,” when many are actually “unknown.”

A simple example: you send an SMS, the provider times out, and you retry. If the first request actually succeeded, the user gets two codes. The fix is to record a stable idempotency key (like a notification_id), mark the message as queued before sending, then mark it as sent only after a clear success response.

Quick checks before you ship notifications

Most notification bugs aren’t about the tool. They’re about timing, retries, and missing records.

Confirm you only send after the database write is safely committed. If you send inside the same transaction and it later rolls back, users can get a message about something that never happened.

Next, make every notification uniquely identifiable. Give each message a stable idempotency key (for example order_id + event_type + channel) and enforce it in storage so a retry can’t create a second “new” notification.
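
A tiny sketch of such a key; the exact format is only an example:

    def idempotency_key(order_id: int, event_type: str, channel: str) -> str:
        # Describes what the message means, never when it was attempted.
        return f"order-{order_id}-{event_type}-{channel}"  # e.g. "order-4815-shipped-sms"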

Before release, check these basics:

  • Sending happens after commit, not during the write.
  • Each notification has a unique idempotency key, and duplicates are rejected.
  • Retries are safe: the system can run the same job again and still send at most once.
  • Every attempt is recorded (status, last_error, timestamps).
  • Attempts are capped, and stuck items have a clear place to review and reprocess.

Test restart behavior on purpose. Kill the worker mid-send, restart it, and verify nothing double-sends. Do the same while the database is under load.

A simple scenario to validate: a user changes their phone number, then you send an SMS verification. If the SMS provider times out, your app retries. With a good idempotency key and attempt log, you either send once or safely try again later, but you don’t spam.

Example scenario: order updates without double-sending

A store sends two kinds of messages: (1) an order confirmation email right after payment, and (2) SMS updates when the package is out for delivery and delivered.

Here’s what goes wrong when you send too early (for example, inside a database trigger): the payment step writes an orders row, the trigger fires and emails the customer, and then the payment capture fails a second later. Now you have a “Thanks for your order” email for an order that never became real.

Now imagine the opposite problem: delivery status changes to “Out for delivery,” you call your SMS provider, and the provider times out. You don’t know if it sent the message. If you immediately retry, you risk two SMS messages. If you don’t retry, you risk sending none.

A safer flow uses an outbox record plus a background worker. The app commits the order or status change, and in the same transaction writes an outbox row like “send template X to user Y, channel SMS, idempotency key Z.” Only after commit does a worker deliver messages.

A simple timeline looks like this:

  • Payment succeeds, transaction commits, outbox row for the confirmation email is saved.
  • Worker sends the email, then marks the outbox as sent with a provider message ID.
  • Delivery status changes, transaction commits, outbox row for the SMS update is saved.
  • Provider times out, worker marks the outbox as retryable and tries again later using the same idempotency key.

On retry, the outbox row is the single source of truth. You’re not creating a second “send” request, you’re finishing the first one.

For support, this is clearer too. They can see messages stuck in “failed” with the last error (timeout, bad phone number, blocked email), how many attempts were made, and whether it’s safe to retry without double-sending.

Next steps: pick a pattern and implement it cleanly

Pick a default and write it down. Inconsistent behavior usually comes from mixing triggers and workers randomly.

Start small with an outbox table and one worker loop. The first goal isn’t speed, it’s correctness: store what you intend to send, send it after commit, and only mark it sent when the provider confirms.

A simple rollout plan:

  • Define events (order_paid, ticket_assigned) and which channels they can use.
  • Add an outbox table with event_id, recipient, payload, status, attempts, next_retry_at, sent_at (see the schema sketch after this list).
  • Build one worker that polls pending rows, sends, and updates status in one place.
  • Add idempotency with a unique key per message and “do nothing if already sent.”
  • Split errors into retryable (timeouts, 5xx) vs not retryable (bad number, blocked email).
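
To make the outbox table step concrete, here's a minimal schema sketch (PostgreSQL DDL issued from Python; the column names and types are illustrative and easy to adapt):

    OUTBOX_DDL = """
    CREATE TABLE IF NOT EXISTS notification_outbox (
        id               bigserial PRIMARY KEY,
        event_id         text NOT NULL,
        idempotency_key  text NOT NULL,
        recipient        text NOT NULL,
        channel          text NOT NULL,            -- 'email' or 'sms'
        template         text NOT NULL,
        payload          jsonb NOT NULL DEFAULT '{}',
        status           text NOT NULL DEFAULT 'pending',
        attempts         int  NOT NULL DEFAULT 0,
        last_error       text,
        next_retry_at    timestamptz,
        sent_at          timestamptz,
        created_at       timestamptz NOT NULL DEFAULT now(),
        updated_at       timestamptz NOT NULL DEFAULT now(),
        UNIQUE (idempotency_key, channel)          -- at most one record per message
    )
    """

    def create_outbox_table(conn):
        with conn, conn.cursor() as cur:
            cur.execute(OUTBOX_DDL)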

Before you scale volume, add basic visibility. Track the pending count, failure rate, and the age of the oldest pending message. If the oldest pending keeps growing, you likely have a stuck worker, a provider outage, or a logic bug.

If you’re building in AppMaster (appmaster.io), this pattern maps cleanly: model the outbox in the Data Designer, write the business change and outbox row in one transaction, then run the send-and-retry logic in a separate background process. That separation is what keeps notification delivery reliable even when providers or deployments misbehave.

FAQ

Should I use triggers or background workers for notifications?

Background workers are usually the safer default because sending is slow and failure-prone, and workers are built for retries and visibility. Triggers can be fast, but they’re tightly coupled to the transaction or request that fired them, which makes failures and duplicates harder to handle cleanly.

Why is it risky to send a notification before the database commit?

It’s dangerous because the database write can still roll back. You can end up notifying users about an order, password change, or payment that never actually committed, and you can’t “undo” an email or SMS after it leaves your system.

What’s the biggest problem with sending from a database trigger?

A database trigger runs inside the same transaction as the row change. If it calls an email/SMS provider and the transaction later fails, you may have sent a real message about a change that didn’t stick, or you may stall the transaction due to a slow external call.

What is the outbox pattern in plain terms?

The outbox pattern stores the intent to send as a row in your database, in the same transaction as the business change. After the commit, a worker reads pending outbox rows, sends the message, and marks it as sent, which makes timing and retries much safer.

What should I do when an email/SMS provider request times out?

If the provider times out, the real outcome is often “unknown,” not “failed.” A good system records the attempt, delays, and retries safely using the same message identity, instead of immediately sending again and risking a duplicate.

How do I prevent duplicate emails or SMS when retries happen?

Use idempotency: give each notification a stable key that represents what the message means (not when you tried). Store that key in a ledger (often the outbox table) and enforce one active record per key, so retries finish the same message rather than creating a new one.

Which errors should I retry vs treat as permanent?

Retry temporary errors like timeouts, 5xx responses, or rate limits (with a wait). Don’t retry permanent errors like invalid addresses, blocked numbers, or hard bounces; mark them failed and make them visible so someone can fix the data instead of spamming retries.

How do background workers handle restarts or crashes mid-send?

A background worker can scan for jobs stuck in sending past a reasonable timeout, move them back to retryable, and try again with backoff. This only works safely if every job has recorded state (attempts, timestamps, last error) and idempotency prevents double-sends.

What job data do I need to make notification delivery observable?

It means you can’t answer “is it safe to retry?” Store clear statuses like pending, processing, sent, and failed, plus attempt count and last error. That makes support and debugging practical, and it lets your system recover without guessing.

How would I implement this pattern in AppMaster?

Model an outbox table in the Data Designer, write the business update and the outbox row in one transaction, then run send-and-retry logic in a separate background process. Keep one idempotency key per message and record attempts, so deploys, retries, and worker restarts don’t create duplicates.
