May 12, 2025 · 7 min read

Idempotent endpoints in Go: keys, dedup tables, retries

Design idempotent endpoints in Go with idempotency keys, dedup tables, and retry-safe handlers for payments, imports, and webhooks.

Why retries create duplicates (and why idempotency matters)

Retries happen even when nothing is "wrong." A client times out while the server is still working. A mobile connection drops and the app tries again. A job runner gets a 502 and automatically re-sends the same request. With at-least-once delivery (common with queues and webhooks), duplicates are normal.

That’s why idempotency matters: repeated requests should lead to the same final result as a single request.

A few terms are easy to mix up:

  • Safe: calling it doesn’t change state (like a read).
  • Idempotent: calling it many times has the same effect as calling it once.
  • At-least-once: the sender retries until it “sticks,” so the receiver must handle duplicates.

Without idempotency, retries can cause real damage. A payment endpoint can charge twice if the first charge succeeded but the response never reached the client. An import endpoint can create duplicate rows when a worker retries after a timeout. A webhook handler can process the same event twice and send two emails.

The key point: idempotency is an API contract, not a private implementation detail. Clients need to know what they can retry, what key to send, and what response they can expect when a duplicate is detected. If you change behavior silently, you break retry logic and create new failure modes.

Idempotency also doesn’t replace monitoring and reconciliation. Track duplicate rates, log “replay” decisions, and periodically compare external systems (like a payment provider) with your database.

Pick the idempotency scope and rules for each endpoint

Before you add tables or middleware, decide what “same request” means and what your server promises to do when a client retries.

Most issues show up on POST because it often creates something or triggers a side effect (charge a card, send a message, start an import). PATCH can also need idempotency when it triggers side effects rather than just updating a field. GET should not change state.

Define the scope: where a key is unique

Pick a scope that matches your business rules: too broad and you block valid work, too narrow and you let duplicates through.

Common scopes:

  • Per endpoint + customer
  • Per endpoint + external object (for example, invoice_id or order_id)
  • Per endpoint + tenant (for multi-tenant systems)
  • Per endpoint + payment method + amount (only if your product rules allow it)

Example: for a “Create payment” endpoint, make the key unique per customer. For “Ingest webhook event,” scope it to the payment provider event ID (global uniqueness from the provider).

Decide what you repeat on duplicates

When a duplicate arrives, return the same outcome as the first successful attempt. In practice, that means replaying the same HTTP status code and the same response body (or at least the same resource ID and state).

Clients depend on this. If the first try succeeded but the network dropped, the retry should not create a second charge or a second import job.

Pick a retention window

Keys should expire. Keep them long enough to cover realistic retries and delayed jobs.

  • Payments: 24 to 72 hours is common.
  • Imports: a week can be reasonable if users may retry later.
  • Webhooks: match the provider’s retry policy.

Define “same request”: explicit key vs body hash

An explicit idempotency key (header or field) is usually the cleanest rule.

A body hash can help as a backstop, but it breaks easily with harmless changes (field order, whitespace, timestamps). If you use hashing, normalize the input and be strict about which fields are included.
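
As a minimal sketch of that normalization in Go, assuming the fields that matter are a customer ID, an amount, and a currency (your own field list will differ):

package idem

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
)

// chargeFields holds only the fields that define "the same request".
// The field names here are illustrative; pick the ones your product rules care about.
type chargeFields struct {
	CustomerID  string `json:"customer_id"`
	AmountCents int64  `json:"amount_cents"`
	Currency    string `json:"currency"`
}

// requestHash serializes a fixed struct (stable field order, no timestamps)
// and hashes it, so harmless changes in the raw body don't change the hash.
func requestHash(f chargeFields) (string, error) {
	b, err := json.Marshal(f)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:]), nil
}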

Idempotency keys: how they work in practice

An idempotency key is a simple contract between client and server: “If you see this key again, treat it as the same request.” It’s one of the most practical tools for retry-safe APIs.

The key can come from either side, but for most APIs it should be client-generated. The client knows when it’s retrying the same action, so it can reuse the same key across attempts. Server-generated keys help when you first create a “draft” resource (like an import job) and then let clients retry by referencing that job ID, but they don’t help with the very first request.

Use a random, unguessable string. Aim for at least 128 bits of randomness (for example, 32 hex chars or a UUID). Don’t build keys from timestamps or user IDs.
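
A sketch of key generation with the standard library, assuming the client (or an SDK you ship) is the one creating the key:

package idem

import (
	"crypto/rand"
	"encoding/hex"
)

// newIdempotencyKey returns 128 bits of randomness as 32 hex characters.
// A UUIDv4 from a library such as github.com/google/uuid works just as well.
func newIdempotencyKey() (string, error) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}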

On the server, store the key with enough context to detect misuse and replay the original result:

  • Who made the call (account or user ID)
  • Which endpoint or operation it applies to
  • A hash of the important request fields
  • Current status (in-progress, succeeded, failed)
  • The response to replay (status code and body)

A key should be scoped, typically per user (or per API token) plus endpoint. If the same key is reused with a different payload, reject it with a clear error. That prevents accidental collisions where a buggy client sends a new payment amount using an old key.

On replay, return the same result as the first successful attempt. That means the same HTTP status code and the same response body, not a fresh read that might have changed.

Dedup tables in PostgreSQL: a simple, reliable pattern

A dedicated deduplication table is one of the simplest ways to implement idempotency. The first request creates a row for the idempotency key. Every retry reads that same row and returns the stored result.

What to store

Keep the table small and focused. A common structure:

  • key: the idempotency key (text)
  • owner: who the key belongs to (user_id, account_id, or API client ID)
  • request_hash: a hash of the important request fields
  • response: the final response payload (often JSON) or a pointer to a stored result
  • created_at: when the key was first seen

The unique constraint is the core of the pattern. Enforce uniqueness on (owner, key) so one client can’t create duplicates, and two different clients don’t collide.

Also store a request_hash so you can detect key misuse. If a retry arrives with the same key but a different hash, return an error instead of mixing two different operations.

Retention and indexing

Dedup rows shouldn’t live forever. Keep them long enough to cover real retry windows, then clean them up.

For speed under load:

  • Unique index on (owner, key) for fast insert or lookup
  • Optional index on created_at to make cleanup cheap

If the response is large, store a pointer (for example, a result ID) and keep the full payload elsewhere. That reduces table bloat while keeping retry behavior consistent.
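
A sketch of a periodic cleanup job in Go, assuming the idempotency_keys table shown in the next section and an assumed 72-hour retention window:

package idem

import (
	"context"
	"database/sql"
)

// cleanupIdempotencyKeys deletes dedup rows older than the retention window.
// Run it from a cron job or a background ticker; the table name and the
// 72-hour interval are assumptions that should match your own schema and
// retry behavior.
func cleanupIdempotencyKeys(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx,
		`delete from idempotency_keys where created_at < now() - interval '72 hours'`)
	return err
}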

Step-by-step: a retry-safe handler flow in Go

A retry-safe handler needs two things: a stable way to identify “the same request again,” and a durable place to store the first outcome so you can replay it.

A practical flow for payments, imports, and webhook ingestion:

  1. Validate the request, then derive three values: an idempotency key (from a header or client field), an owner (tenant or user ID), and a request hash (hash of the important fields).

  2. Start a database transaction and try to create a dedup record. Make it unique on (owner, key). Store request_hash, status (started, completed), and placeholders for the response.

  3. If the insert conflicts, load the existing row. If it’s completed, return the stored response. If it’s started, either wait briefly (simple polling) or return 409/202 so the client retries later.

  4. Only when you successfully “own” the dedup row, run the business logic once. Write side effects inside the same transaction when possible. Persist the business result plus the HTTP response (status code and body).

  5. Commit, and log with the idempotency key and owner so support can trace duplicates.

A minimal table pattern:

create table idempotency_keys (
  owner_id text not null,
  idem_key text not null,
  request_hash text not null,
  status text not null,
  response_code int,
  response_body jsonb,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now(),
  primary key (owner_id, idem_key)
);
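
A hedged sketch of the claim-or-replay step against that table, using database/sql with Postgres-style placeholders; the status values and error handling are assumptions you would adapt to your own API:

package idem

import (
	"context"
	"database/sql"
	"encoding/json"
	"errors"
	"fmt"
)

var errInProgress = errors.New("request already in progress")

// claimOrReplay tries to own the dedup row. If a completed row already exists,
// it returns the stored response to replay; if work is still running, it
// returns errInProgress so the handler can answer 409/202.
func claimOrReplay(ctx context.Context, tx *sql.Tx, owner, key, reqHash string) (replayed bool, code int, body []byte, err error) {
	res, err := tx.ExecContext(ctx, `
		insert into idempotency_keys (owner_id, idem_key, request_hash, status)
		values ($1, $2, $3, 'started')
		on conflict (owner_id, idem_key) do nothing`, owner, key, reqHash)
	if err != nil {
		return false, 0, nil, err
	}
	if n, _ := res.RowsAffected(); n == 1 {
		return false, 0, nil, nil // we own the key: run the business logic now
	}

	var storedHash, status string
	var storedCode sql.NullInt64
	var storedBody []byte
	err = tx.QueryRowContext(ctx, `
		select request_hash, status, response_code, response_body
		from idempotency_keys where owner_id = $1 and idem_key = $2`,
		owner, key).Scan(&storedHash, &status, &storedCode, &storedBody)
	if err != nil {
		return false, 0, nil, err
	}
	if storedHash != reqHash {
		return false, 0, nil, fmt.Errorf("idempotency key reused with a different payload")
	}
	if status != "completed" {
		return false, 0, nil, errInProgress
	}
	return true, int(storedCode.Int64), storedBody, nil
}

// complete stores the outcome so later retries replay it.
func complete(ctx context.Context, tx *sql.Tx, owner, key string, code int, body any) error {
	b, err := json.Marshal(body)
	if err != nil {
		return err
	}
	_, err = tx.ExecContext(ctx, `
		update idempotency_keys
		set status = 'completed', response_code = $3, response_body = $4, updated_at = now()
		where owner_id = $1 and idem_key = $2`, owner, key, code, b)
	return err
}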

Example: a “Create payout” endpoint times out after charging. The client retries with the same key. Your handler hits the conflict, sees a completed record, and returns the original payout ID without charging again.

Payments: charge exactly once, even with timeouts

Payments are where idempotency stops being optional. Networks fail, mobile apps retry, and gateways sometimes time out after they already created the charge.

A practical rule: the idempotency key guards charge creation, and the payment provider ID (charge/intent ID) becomes the source of truth after that. Once you store a provider ID, don’t create a new charge for the same request.

A pattern that handles retries and gateway uncertainty:

  • Read and validate the idempotency key.
  • In a database transaction, create or fetch a payment row keyed by (merchant_id, idempotency_key). If it already has a provider_id, return the saved result.
  • If no provider_id exists, call the gateway to create a PaymentIntent/Charge.
  • If the gateway succeeds, persist provider_id and mark the payment as “succeeded” (or “requires_action”).
  • If the gateway times out or returns an unknown result, store status “pending” and return a consistent response that tells the client it’s safe to retry.

The key detail is how you treat timeouts: don’t assume failure. Mark the payment as pending, then confirm by querying the gateway later (or via a webhook) using the provider ID once you have it.
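
A sketch of how a handler might classify the gateway outcome, assuming the call is wrapped in a context deadline; createCharge here is a placeholder for your provider SDK call, not a real API:

package payments

import (
	"context"
	"errors"
	"net"
	"time"
)

// chargeOutcome is what gets recorded on the payment row.
type chargeOutcome struct {
	Status     string // "succeeded", "pending", "failed"
	ProviderID string // set once the gateway has confirmed the charge
}

// callGateway treats timeouts and dropped connections as "unknown", not failed.
func callGateway(ctx context.Context, createCharge func(context.Context) (string, error)) chargeOutcome {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	providerID, err := createCharge(ctx)
	switch {
	case err == nil:
		return chargeOutcome{Status: "succeeded", ProviderID: providerID}
	case errors.Is(err, context.DeadlineExceeded), isTimeout(err):
		// The charge may exist on the provider side: store "pending" and
		// reconcile later via the provider's API or a webhook.
		return chargeOutcome{Status: "pending"}
	default:
		return chargeOutcome{Status: "failed"}
	}
}

func isTimeout(err error) bool {
	var ne net.Error
	return errors.As(err, &ne) && ne.Timeout()
}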

Error responses should be predictable. Clients build retry logic around what you return, so keep status codes and error shapes stable.

Imports and batch endpoints: dedup without losing progress

Imports are where duplicates hurt most. A user uploads a CSV, your server times out at 95%, and they hit retry. Without a plan, you either create duplicate rows or force them to start over.

For batch work, think in two layers: the import job and the items inside it. Job-level idempotency stops the same request from creating multiple jobs. Item-level idempotency stops the same row from being applied twice.

A job-level pattern is to require an idempotency key per import request (or derive one from a stable request hash plus the user ID). Store it with an import_job record and return the same job ID on retries. The handler should be able to say, “I’ve seen this job, here’s its current state,” instead of “start again.”

For item-level dedup, rely on a natural key that already exists in the data. For example, each row might include an external_id from the source system, or a stable combo like (account_id, email). Enforce it with a unique constraint in PostgreSQL and use upsert behavior so retries don’t create duplicates.
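
A sketch of item-level dedup with an upsert, assuming a contacts table with a unique constraint on (account_id, external_id); the table and column names are assumptions for illustration:

package imports

import (
	"context"
	"database/sql"
)

// upsertContact inserts a row, or updates it if the natural key already exists,
// so re-running a page of the import never creates duplicates.
func upsertContact(ctx context.Context, tx *sql.Tx, accountID, externalID, email, name string) error {
	_, err := tx.ExecContext(ctx, `
		insert into contacts (account_id, external_id, email, name)
		values ($1, $2, $3, $4)
		on conflict (account_id, external_id)
		do update set email = excluded.email, name = excluded.name`,
		accountID, externalID, email, name)
	return err
}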

Before you ship, decide what a replay does when a row already exists. Keep it explicit: skip, update specific fields, or fail. Avoid “merge” unless you have very clear rules.

Partial success is normal. Instead of returning one big “ok” or “failed,” store per-row outcomes tied to the job: row number, natural key, status (created, updated, skipped, error), and an error message. On a retry, you can re-run safely while keeping the same results for rows that already finished.

To make imports restartable, add checkpoints. Process in pages (for example, 500 rows at a time), store the last processed cursor (row index or source cursor), and update it after each page commits. If the process crashes, the next attempt resumes from the last checkpoint.
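
A sketch of that checkpoint loop, assuming an import_jobs.last_row column and readPage/applyRow placeholders for your own parsing and upsert logic:

package imports

import (
	"context"
	"database/sql"
)

// row is a minimal placeholder for one parsed line of the import file.
type row struct {
	ExternalID string
	Email      string
}

// processWithCheckpoints works through the source in pages of 500 and records
// the last processed offset after each committed page, so a crash resumes
// from the checkpoint instead of from row zero.
func processWithCheckpoints(ctx context.Context, db *sql.DB, jobID string,
	readPage func(offset, limit int) ([]row, error),
	applyRow func(ctx context.Context, tx *sql.Tx, r row) error) error {

	const pageSize = 500

	var offset int
	if err := db.QueryRowContext(ctx,
		`select coalesce(last_row, 0) from import_jobs where id = $1`, jobID).Scan(&offset); err != nil {
		return err
	}

	for {
		rows, err := readPage(offset, pageSize)
		if err != nil {
			return err
		}
		if len(rows) == 0 {
			return nil // finished
		}

		tx, err := db.BeginTx(ctx, nil)
		if err != nil {
			return err
		}
		for _, r := range rows {
			if err := applyRow(ctx, tx, r); err != nil {
				tx.Rollback()
				return err
			}
		}
		offset += len(rows)
		if _, err := tx.ExecContext(ctx,
			`update import_jobs set last_row = $2 where id = $1`, jobID, offset); err != nil {
			tx.Rollback()
			return err
		}
		if err := tx.Commit(); err != nil {
			return err
		}
	}
}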

Webhook ingestion: dedup, validate, then process safely

Webhook senders retry. They also send events out of order. If your handler updates state on every delivery, you’ll eventually double-create records, double-send emails, or double-charge.

Start by choosing the best dedup key. If the provider gives you a unique event ID, use that. Treat it as the idempotency key for the webhook endpoint. Only fall back to a hash of the payload when there is no event ID.

Security comes first: verify the signature before you accept anything. If the signature fails, reject the request and don’t write a dedup record. Otherwise an attacker could “reserve” an event ID and block real events later.

A safe flow under retries:

  • Verify signature and basic shape (required headers, event ID).
  • Insert the event ID into a dedup table with a unique constraint.
  • If the insert fails because the event ID is a duplicate, return 200 immediately.
  • Store the raw payload (and headers) when it’s useful for audit and debugging.
  • Enqueue processing and return 200 quickly.
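
A minimal sketch of that flow in Go, assuming an HMAC-SHA256 signature in an X-Signature header, an X-Event-Id header, and a webhook_events table with a unique event_id column (all assumptions; match your provider and schema):

package webhooks

import (
	"crypto/hmac"
	"crypto/sha256"
	"database/sql"
	"encoding/hex"
	"io"
	"net/http"
)

type handler struct {
	db     *sql.DB
	secret []byte
	queue  func(eventID string, payload []byte) // hand off to a worker or queue
}

func (h *handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, 1<<20))
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	// 1. Verify the signature before writing anything.
	mac := hmac.New(sha256.New, h.secret)
	mac.Write(body)
	want := hex.EncodeToString(mac.Sum(nil))
	if !hmac.Equal([]byte(want), []byte(r.Header.Get("X-Signature"))) {
		http.Error(w, "invalid signature", http.StatusUnauthorized)
		return
	}

	eventID := r.Header.Get("X-Event-Id")
	if eventID == "" {
		http.Error(w, "missing event id", http.StatusBadRequest)
		return
	}

	// 2. Dedup on the provider's event ID; a duplicate is not an error.
	res, err := h.db.ExecContext(r.Context(), `
		insert into webhook_events (event_id, payload)
		values ($1, $2) on conflict (event_id) do nothing`, eventID, body)
	if err != nil {
		http.Error(w, "storage error", http.StatusInternalServerError)
		return
	}
	if n, _ := res.RowsAffected(); n == 0 {
		w.WriteHeader(http.StatusOK) // already seen: acknowledge and stop
		return
	}

	// 3. Enqueue processing and acknowledge quickly.
	h.queue(eventID, body)
	w.WriteHeader(http.StatusOK)
}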

Acknowledging quickly matters because many providers have short timeouts. Do the smallest reliable work in the request: verify, dedup, persist. Then process asynchronously (worker, queue, background job). If you can’t do async, keep processing idempotent by keying internal side effects to the same event ID.

Out-of-order delivery is normal. Don’t assume “created” arrives before “updated.” Prefer upserts by external object ID and track the last processed event timestamp or version.

Storing raw payloads helps when a customer says “we never got the update.” You can re-run processing from the stored body after you fix a bug, without asking the provider to resend.

Concurrency: staying correct under parallel requests

Retries get messy when two requests with the same idempotency key arrive at the same time. If both handlers run the “do work” step before either saves the result, you can still double charge, double import, or double enqueue.

The simplest coordination point is the database transaction. Make the first step “claim the key” and let the database decide who wins. Common options:

  • Unique insert into a dedup table (the database enforces one winner)
  • SELECT ... FOR UPDATE after creating (or finding) the dedup row
  • Transaction-level advisory locks keyed by a hash of the idempotency key
  • Unique constraints on the business record as a final backstop

For long-running work, avoid holding a row lock while you call external systems or run minutes-long imports. Instead, store a small state machine in the dedup row so other requests can exit fast.

A practical set of states:

  • in_progress with started_at
  • completed with cached response
  • failed with an error code (optional, depending on your retry policy)
  • expires_at (for cleanup)

Example: two app instances receive the same payment request. Instance A inserts the key and marks in_progress, then calls the provider. Instance B hits the conflict path, reads the dedup row, sees in_progress, and returns a quick “still processing” response (or waits briefly and rechecks). When A finishes, it updates the row to completed and stores the response body so later retries get the exact same output.
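
A sketch of the advisory-lock option from the list above, assuming Postgres and an already-open transaction; pg_advisory_xact_lock holds the lock until the transaction commits or rolls back:

package idem

import (
	"context"
	"database/sql"
	"hash/fnv"
)

// lockIdempotencyKey serializes concurrent handlers working on the same
// (owner, key) pair for the rest of the transaction. The FNV hash maps the
// string key onto the bigint that pg_advisory_xact_lock expects.
func lockIdempotencyKey(ctx context.Context, tx *sql.Tx, owner, key string) error {
	h := fnv.New64a()
	h.Write([]byte(owner + ":" + key))
	lockID := int64(h.Sum64())
	_, err := tx.ExecContext(ctx, `select pg_advisory_xact_lock($1)`, lockID)
	return err
}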

Common mistakes that break idempotency

Most idempotency bugs aren’t about fancy locking. They’re “almost correct” choices that fail under retries, timeouts, or two users doing similar actions.

A common trap is treating the idempotency key as globally unique. If you don’t scope it (by user, account, or endpoint), two different clients can collide and one will get the other’s result.

Another issue is accepting the same key with a different request body. If the first call was for $10 and the replay is for $100, you shouldn’t silently return the first result. Store a request hash (or key fields), compare on replay, and return a clear conflict error.

Clients also get confused when replays return a different response shape or status code. If the first call returned 201 with a JSON body, the replay should return the same body and a consistent status code. Changing replay behavior forces clients to guess.

Mistakes that frequently cause duplicates:

  • Relying only on an in-memory map or cache, then losing dedup state on restart.
  • Using a key without scoping (cross-user or cross-endpoint collisions).
  • Not validating payload mismatches for the same key.
  • Doing the side effect first (charge, insert, publish) and writing the dedup record after.
  • Returning a new generated ID on each retry instead of replaying the original result.

A cache can speed up reads, but the source of truth should be durable (usually PostgreSQL). Otherwise retries after a deploy can create duplicates.

Also plan cleanup. If you store every key forever, tables grow and indexes slow down. Set a retention window based on real retry behavior, delete old rows, and keep the unique index small.

Quick checklist and next steps

Treat idempotency as part of your API contract. Every endpoint that might be retried by a client, a queue, or a gateway needs a clear rule for what “same request” means and what “same result” looks like.

A checklist before shipping:

  • For each retryable endpoint, is the idempotency scope defined (per user, per account, per order, per external event) and written down?
  • Is dedup enforced by the database (a unique constraint on the idempotency key and scope), not just “checked in code”?
  • On replay, do you return the same status code and response body (or a documented stable subset), not a fresh object or a new timestamp?
  • For payments, do you handle unknown outcomes safely (timeout after submit, gateway says “processing”) without charging twice?
  • Do logs and metrics make it obvious when a request was first-seen vs replayed?

If any item is a “maybe,” fix it now. Most failures show up under stress: parallel retries, slow networks, and partial outages.

If you’re building internal tools or customer-facing apps on AppMaster (appmaster.io), it helps to design idempotency keys and the PostgreSQL dedup table early. That way, even as the platform regenerates Go backend code when requirements change, your retry behavior stays consistent.

FAQ

Why do retries create duplicate charges or duplicate records even when my API is correct?

Retries are normal because networks and clients fail in ordinary ways. A request can succeed on the server but the response never reaches the client, so the client retries and you end up doing the same work twice unless the server can recognize and replay the original result.

What should I use as an idempotency key, and who should generate it?

Send the same key on every retry of the same action. Generate it on the client as a random, unguessable string (for example, a UUID), and do not reuse it for a different action.

How should I scope idempotency keys so they don’t collide across users or tenants?

Scope it to match your business rule, usually per endpoint plus a caller identity such as user, account, tenant, or API token. This prevents two different customers from accidentally colliding on the same key and receiving each other’s results.

What should my API return when it receives a duplicate request with the same key?

Return the same outcome as the first successful attempt. In practice, replay the same HTTP status code and response body, or at least the same resource ID and state, so clients can safely retry without creating a second side effect.

What if the client accidentally reuses the same idempotency key with a different request body?

Reject it with a clear conflict-style error instead of guessing. Store and compare a hash of the important request fields, and if the key matches but the payload doesn’t, fail fast to avoid mixing two different operations under one key.

How long should I retain idempotency keys in my database?

Keep keys long enough to cover realistic retries, then delete them. A common default is 24–72 hours for payments, a week for imports, and for webhooks you should match the sender’s retry policy so late retries still dedupe correctly.

What’s the simplest PostgreSQL schema pattern for idempotency?

A dedicated dedup table works well because the database can enforce a unique constraint and survive restarts. Store the owner scope, the key, a request hash, a status, and the response to replay, then make (owner, key) unique so only one request “wins.”

How do I handle two identical requests arriving at the same time?

Claim the key inside a database transaction first, then do the side effect only if you successfully claimed it. If another request arrives in parallel, it should hit the unique constraint, see in_progress or completed, and return a wait/replay response instead of running the logic twice.

How do I prevent double-charging when the payment gateway times out?

Treat timeouts as “unknown,” not “failed.” Record a pending state and, if you have a provider ID, use it as the source of truth so retries return the same payment result instead of creating a new charge.

How can I make imports retry-safe without forcing users to start over or creating duplicates?

Dedup at two levels: job-level and item-level. Make retries return the same import job ID, and enforce a natural key for rows (like an external ID or (account_id, email)) with unique constraints or upserts so reprocessing doesn’t create duplicates.
