Cursor vs offset pagination for fast admin screen APIs
Learn cursor vs offset pagination with a consistent API contract for sorting, filters, and totals that keeps admin screens fast on web and mobile.

Why pagination can make admin screens feel slow
Admin screens often start as a simple table: load the first 25 rows, add a search box, done. It feels instant with a few hundred records. Then the dataset grows, and the same screen starts to stutter.
The usual problem isn't the UI. It's what the API has to do before it can return page 12 with sorting and filters applied. As the table gets bigger, the backend spends more time finding matches, counting them, and skipping over earlier results. If every click triggers a heavier query, the screen feels like it's thinking instead of responding.
You tend to notice it in the same places: page changes get slower over time, sorting becomes sluggish, search feels inconsistent across pages, and infinite scroll loads in bursts (fast, then suddenly slow). In busy systems you may even see duplicates or missing rows when data changes between requests.
Web and mobile UIs also push pagination in different directions. A web admin table encourages jumping to a specific page and sorting by many columns. Mobile screens usually use an infinite list that loads the next chunk, and users expect each pull to be equally quick. If your API is built only around page numbers, mobile often suffers. If it's built only around next/after, web tables can feel limited.
The goal isn't just "return 25 items." It's fast, predictable paging that stays stable as the data grows, with rules that work the same way for tables and infinite lists.
Pagination basics your UI depends on
Pagination is splitting a long list into smaller chunks so the screen can load and render quickly. Instead of asking the API for every record, the UI asks for the next slice of results.
The most important control is page size (often called limit). Smaller pages usually feel faster because the server does less work and the app draws fewer rows. But pages that are too small can feel jumpy because users must click or scroll more often. For many admin tables, 25 to 100 items is a practical range, with mobile usually preferring the lower end.
A stable sort order matters more than most teams expect. If the order can change between requests, users see duplicates or missing rows while paging. Stable sorting usually means ordering by a primary field (like created_at) plus a tie-breaker (like id). This matters whether you use offset or cursor pagination.
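In practice the database does the ordering, but the same rule can be written as a comparator to make the tie-breaker concrete. A minimal TypeScript sketch with illustrative field names:

// Newest first by createdAt (ISO timestamp), with id as the tie-breaker so
// rows that share a timestamp always come back in the same order.
type Row = { createdAt: string; id: number };

function compareRows(a: Row, b: Row): number {
  if (a.createdAt !== b.createdAt) {
    return a.createdAt < b.createdAt ? 1 : -1; // later timestamp sorts first
  }
  return b.id - a.id; // tie-breaker: higher id first
}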
From the client’s point of view, a paginated response should include the items, a next-page hint (page number or cursor token), and only the counts the UI truly needs. Some screens need an exact total for “1-50 of 12,340”. Others only need has_more.
Offset pagination: how it works and where it hurts
Offset pagination is the classic page N approach. The client asks for a fixed number of rows and tells the API how many rows to skip first. You’ll see it as limit and offset, or as page and pageSize that the server converts into an offset.
A typical request looks like this:
GET /tickets?limit=50&offset=950
"Give me 50 tickets, skipping the first 950."
It matches common admin needs: jump to page 20, scan older records, or export a big list in chunks. It’s also easy to talk about internally: “Look at page 3 and you’ll see it.”
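A minimal server-side sketch of that translation, assuming page and pageSize parameters and a cap of 100 (both values are illustrative):

// Convert 1-based page numbers into limit/offset; page 20 with pageSize 50
// becomes limit=50, offset=950, matching the request above.
function toLimitOffset(page: number, pageSize: number) {
  const limit = Math.min(Math.max(pageSize, 1), 100); // clamp the page size
  const offset = (Math.max(page, 1) - 1) * limit;     // page 1 => offset 0
  return { limit, offset };
}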
The problem shows up on deep pages. Many databases still have to walk past the skipped rows before returning your page, especially when the sort order isn’t backed by a tight index. Page 1 might be fast, but page 200 can become noticeably slower, which is exactly what makes admin screens feel laggy when users scroll or jump around.
The other problem is consistency when data changes. Imagine a support manager opens page 5 of tickets sorted by newest first. While they're looking, new tickets arrive or older tickets are deleted. Insertions push existing rows onto later pages, so some show up again as duplicates. Deletions pull rows onto earlier pages, so some quietly disappear from the user's browsing path.
Offset pagination can still be fine for small tables, stable datasets, or one-off exports. On large, active tables, the edge cases show up quickly.
Cursor pagination: how it works and why it stays steady
Cursor pagination uses a cursor as a bookmark. Instead of saying “give me page 7,” the client says “continue after this exact item.” The cursor usually encodes the last item’s sort values (for example, created_at and id) so the server can resume from the right place.
The request is usually just:
- limit: how many items to return
- cursor: an opaque token from the previous response (often called after)
The response returns items plus a new cursor that points to the end of that slice. The practical difference is that cursors don’t ask the database to count and skip rows. They ask it to start from a known position.
That’s why cursor pagination stays fast for scroll-forward lists. With a good index, the database can jump to “items after X” and then read the next limit rows. With offsets, the server often has to scan (or at least skip) more and more rows as the offset grows.
For UI behavior, cursor pagination makes “Next” natural: you take the returned cursor and send it back on the next request. “Previous” is optional and trickier. Some APIs support a before cursor, while others fetch in reverse and flip results.
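One common way to keep the cursor opaque is to base64-encode the last item's sort values and validate them strictly when they come back. A sketch assuming a Node.js Buffer and a createdAt/id sort tuple (both are assumptions, not a required format):

type CursorPayload = { createdAt: string; id: number };

function encodeCursor(c: CursorPayload): string {
  return Buffer.from(JSON.stringify(c)).toString("base64url");
}

function decodeCursor(token: string): CursorPayload {
  const parsed = JSON.parse(Buffer.from(token, "base64url").toString("utf8"));
  if (typeof parsed?.createdAt !== "string" || typeof parsed?.id !== "number") {
    throw new Error("Malformed cursor"); // reject, don't guess
  }
  return { createdAt: parsed.createdAt, id: parsed.id };
}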
When to choose cursor, offset, or a hybrid
The choice starts with how people actually use the list.
Cursor pagination fits best when users mostly move forward and speed matters most: activity logs, chats, orders, tickets, audit trails, and most mobile infinite scroll. It also behaves better when new rows are inserted or deleted while someone is browsing.
Offset pagination makes sense when users frequently jump around: classic admin tables with page numbers, go-to-page, and quick back-and-forth navigation. It’s simple to explain, but it can get slower on large datasets and less stable when the data changes underneath you.
A practical way to decide:
- Choose cursor when the main action is “next, next, next.”
- Choose offset when “jump to page N” is a real requirement.
- Treat totals as optional. Accurate totals can be expensive on huge tables.
Hybrids are common. One approach is cursor-based next/prev for speed, plus an optional page-jump mode for small, filtered subsets where offsets stay fast. Another is cursor retrieval with page numbers based on a cached snapshot, so the table feels familiar without turning every request into heavy work.
A consistent API contract that works on web and mobile
Admin UIs feel faster when every list endpoint behaves the same. The UI can change (web table with page numbers, mobile infinite scroll), but the API contract should stay steady so you don’t re-learn pagination rules for each screen.
A practical contract has three parts: rows, paging state, and optional totals. Keep the names identical across endpoints (tickets, users, orders), even if the underlying paging mode differs.
Here is a response shape that works well for both web and mobile:
{
  "data": [ { "id": "...", "createdAt": "..." } ],
  "page": {
    "mode": "cursor",
    "limit": 50,
    "nextCursor": "...",
    "prevCursor": null,
    "hasNext": true,
    "hasPrev": false
  },
  "totals": {
    "count": 12345,
    "filteredCount": 120
  }
}
A few details make this easy to reuse:
- page.mode tells the client what the server is doing without changing field names.
- limit is always the requested page size.
- nextCursor and prevCursor are present even if one is null.
- totals is optional. If it's expensive, return it only when the client asks.
A web table can still show “Page 3” by keeping its own page index and calling the API repeatedly. A mobile list can ignore page numbers and just request the next chunk.
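Pinning the contract down as shared types keeps web and mobile clients from drifting apart. A TypeScript sketch that mirrors the JSON shape above:

interface PageInfo {
  mode: "cursor" | "offset";
  limit: number;
  nextCursor: string | null;
  prevCursor: string | null;
  hasNext: boolean;
  hasPrev: boolean;
}

interface ListResponse<T> {
  data: T[];
  page: PageInfo;
  totals?: { count: number; filteredCount: number }; // only when requested
}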
If you’re building both web and mobile admin UIs in AppMaster, a stable contract like this pays off quickly. The same list behavior can be reused across screens without custom pagination logic per endpoint.
Sorting rules that keep pagination stable
Sorting is where pagination usually breaks. If the order can change between requests, users see duplicates, gaps, or “missing” rows.
Make sorting a contract, not a suggestion. Publish the allowed sort fields and directions, and reject anything else. That keeps your API predictable and prevents clients from requesting slow sorts that look harmless in development.
A stable sort needs a unique tie-breaker. If you sort by created_at and two records share the same timestamp, add id (or another unique column) as the last sort key. Without it, the database is free to return equal values in any order.
Practical rules that hold up:
- Allow sorting only on indexed, well-defined fields (for example created_at, updated_at, status, priority).
- Always include a unique tie-breaker as the final key (for example id ASC).
- Define a default sort (for example created_at DESC, id DESC) and keep it consistent across clients.
- Document how nulls sort (for example "nulls last" for dates and numbers).
Sorting also drives cursor generation. A cursor should encode the last item’s sort values in order, including the tie-breaker, so the next page can query “after” that tuple. If the sort changes, old cursors become invalid. Treat sort parameters as part of the cursor contract.
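A sketch of the allowlist idea, assuming sort values like -updated_at where a leading minus means descending (the parameter format and the field list are illustrative):

const SORTABLE = new Set(["created_at", "updated_at", "status", "priority"]);

function parseSort(raw?: string) {
  const value = raw ?? "-created_at";          // default sort
  const desc = value.startsWith("-");
  const field = desc ? value.slice(1) : value;
  if (!SORTABLE.has(field)) {
    throw new Error(`Unsupported sort field: ${field}`); // reject, don't guess
  }
  // Always append the unique tie-breaker as the final key.
  return [
    { field, direction: desc ? "DESC" : "ASC" },
    { field: "id", direction: "DESC" },
  ];
}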
Filters and totals without breaking the contract
Filters should feel separate from pagination. The UI is saying, "show me a different set of rows," and only then asking, "page through that set." If you mix filter fields into your pagination token or treat filters as optional and unvalidated, you get hard-to-debug behavior: empty pages, duplicates, or a cursor that suddenly points into a different dataset.
A simple rule: filters live in plain query parameters (or a request body for POST), and the cursor is opaque and only valid for that exact filter plus sort combination. If the user changes any filter (status, date range, assignee), the client should drop the old cursor and start from the beginning.
Be strict about what filters are allowed. It protects performance and keeps behavior predictable:
- Reject unknown filter fields (don’t silently ignore them).
- Validate types and ranges (dates, enums, IDs).
- Cap wide filters (for example, max 50 IDs in an IN list).
- Apply the same filters to data and totals (no mismatched numbers).
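A sketch of that strictness, with hypothetical filter fields and limits:

const FILTERABLE = new Set(["status", "assignee_id", "created_from", "created_to", "ids"]);
const STATUSES = new Set(["open", "pending", "closed"]);

function validateFilters(query: Record<string, string>) {
  for (const key of Object.keys(query)) {
    if (["limit", "cursor", "sort"].includes(key)) continue; // paging, not filtering
    if (!FILTERABLE.has(key)) throw new Error(`Unknown filter: ${key}`); // reject, don't ignore
  }
  if (query.status !== undefined && !STATUSES.has(query.status)) {
    throw new Error(`Invalid status: ${query.status}`);
  }
  if (query.ids !== undefined && query.ids.split(",").length > 50) {
    throw new Error("Too many ids: max 50"); // cap wide IN-style filters
  }
  return query;
}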
Totals are where many APIs get slow. Exact counts can be expensive on large tables, especially with multiple filters. You generally have three options: exact, estimated, or none. Exact is great for small datasets or when users truly need “showing 1-25 of 12,431.” Estimated is often enough for admin screens. None is fine when you only need “Load more.”
To avoid slowing every request, make totals optional: compute them only when the client asks (for example with a flag like includeTotal=true), cache them briefly per filter set, or return totals only on the first page.
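A sketch of the "only when asked, cached briefly" version; the includeTotal flag, the 30-second TTL, and countRows are all assumptions:

declare function countRows(filterKey: string): Promise<number>; // the expensive COUNT lives here

const totalsCache = new Map<string, { count: number; at: number }>();

async function getTotal(filterKey: string, includeTotal: boolean): Promise<number | null> {
  if (!includeTotal) return null; // skip the count entirely
  const cached = totalsCache.get(filterKey);
  if (cached && Date.now() - cached.at < 30_000) {
    return cached.count; // short TTL per filter set
  }
  const count = await countRows(filterKey);
  totalsCache.set(filterKey, { count, at: Date.now() });
  return count;
}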
Step by step: design and implement the endpoint
Start with defaults. A list endpoint needs a stable sort order, plus a tie-breaker for rows that share the same value. For example: createdAt DESC, id DESC. The tie-breaker (id) is what prevents duplicates and gaps when new records are added.
Define one request shape and keep it boring. Typical parameters are limit, cursor (or offset), sort, and filters. If you support both modes, make them mutually exclusive: either the client sends cursor, or it sends offset, but not both.
Keep a consistent response contract so web and mobile UIs can share the same list logic:
- items: the page of records
- nextCursor: the cursor to fetch the next page (or null)
- hasMore: boolean so the UI can decide whether to show "Load more"
- total: total matching records (null unless requested, if counting is expensive)
Implementation is where the two approaches diverge.
Offset queries are usually ORDER BY ... LIMIT ... OFFSET ..., which can slow down on large tables.
Cursor queries use seek conditions based on the last item: “give me items where (createdAt, id) is less than the last (createdAt, id)”. That keeps performance steadier because the database can use indexes.
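A sketch of that seek condition for created_at DESC, id DESC, assuming PostgreSQL-style row comparison and a generic query(sql, params) helper declared only for the types:

declare function query(sql: string, params: unknown[]): Promise<unknown[]>;

async function fetchTicketsPage(last: { createdAt: string; id: number } | null, limit: number) {
  if (last === null) {
    // First page: no cursor yet, just the stable sort and the limit.
    return query(
      "SELECT * FROM tickets ORDER BY created_at DESC, id DESC LIMIT $1",
      [limit]
    );
  }
  // Resume strictly after the last-seen (created_at, id) tuple.
  return query(
    "SELECT * FROM tickets WHERE (created_at, id) < ($1, $2) " +
      "ORDER BY created_at DESC, id DESC LIMIT $3",
    [last.createdAt, last.id, limit]
  );
}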
Before you ship, add guardrails:
- Cap limit (for example, max 100) and set a default.
- Validate sort against an allowlist.
- Validate filters by type and reject unknown keys.
- Make cursor opaque (encode the last sort values) and reject malformed cursors.
- Decide how total is requested.
Test with data changing underneath you. Create and delete records between requests, update fields that affect sorting, and verify you don’t see duplicates or missing rows.
Example: tickets list that stays fast on web and mobile
A support team opens an admin screen to review the newest tickets. They need the list to feel instant, even while new tickets arrive and agents update older ones.
On the web, the UI is a table. The default sort is by updated_at (newest first), and the team often filters to Open or Pending. The same endpoint can support both actions with a stable sort and a cursor token.
GET /tickets?status=open&sort=-updated_at&limit=50&cursor=eyJ1cGRhdGVkX2F0IjoiMjAyNi0wMS0yNVQxMTo0NTo0MloiLCJpZCI6IjE2OTMifQ==
The response stays predictable for the UI:
{
  "items": [
    { "id": 1693, "subject": "Login issue", "status": "open", "updated_at": "2026-01-25T11:45:42Z" }
  ],
  "page": { "next_cursor": "...", "has_more": true },
  "meta": { "total": 128 }
}
On mobile, the same endpoint powers infinite scroll. The app loads 20 tickets at a time, then sends next_cursor to fetch the next batch. No page-number logic, and fewer surprises when records change.
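A sketch of that "load more" call, assuming fetch() and the response shape shown above:

async function loadMoreTickets(cursor: string | null) {
  const params = new URLSearchParams({ status: "open", limit: "20" });
  if (cursor) params.set("cursor", cursor);

  const res = await fetch(`/tickets?${params}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const body = await res.json();

  return {
    items: body.items,                 // append to the list
    nextCursor: body.page.next_cursor, // save for the next pull
    hasMore: body.page.has_more,       // hide "Load more" when false
  };
}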
The key is that the cursor encodes the last-seen position (for example, updated_at plus id as a tie-breaker). If a ticket gets updated while the agent is scrolling, it may move toward the top on the next refresh, but it won’t cause duplicates or gaps in the already-scrolled feed.
Totals are useful, but expensive on large datasets. A simple rule is to return meta.total only when the user applies a filter (like status=open) or explicitly asks for it.
Common mistakes that cause duplicates, gaps, and lag
Most pagination bugs aren’t in the database. They come from small API decisions that look fine in testing, then fall apart when data changes between requests.
The most common cause of duplicates (or missing rows) is sorting on a field that isn’t unique. If you sort by created_at and two items share the same timestamp, the order can flip between requests. The fix is simple: always add a stable tie-breaker, usually the primary key, and treat the sort as a pair like (created_at desc, id desc).
Another common issue is letting clients request any page size. One large request can spike CPU, memory, and response times, which slows every admin screen. Pick a sane default and a hard max, and return an error when the client asks for more.
Totals can also hurt. Counting all matching rows on every request can become the slowest part of the endpoint, especially with filters. If the UI needs totals, fetch them only when asked (or return an estimate), and avoid blocking list scrolling on a full count.
Mistakes that most often create gaps, duplicates, and lag:
- Sorting without a unique tie-breaker (unstable order)
- Unlimited page sizes (server overload)
- Returning totals every time (slow queries)
- Mixing offset and cursor rules in one endpoint (confusing client behavior)
- Reusing the same cursor when filters or sort change (wrong results)
Reset pagination whenever filters or sorting changes. Treat a new filter as a new search: clear the cursor/offset and start from the first page.
Quick checklist before you ship
Run this once with the API and UI side by side. Most issues happen in the contract between the list screen and the server.
- Default sort is stable and includes a unique tie-breaker (for example created_at DESC, id DESC).
- Sorting fields and directions are whitelisted.
- A max page size is enforced, with a sensible default.
- Cursor tokens are opaque, and invalid cursors fail in a predictable way.
- Any filter or sort change resets pagination state.
- Totals behavior is explicit: exact, estimated, or omitted.
- The same contract supports both a table and infinite scroll without special cases.
Next steps: standardize your lists and keep them consistent
Pick one admin list people use every day and make it your gold standard. A busy table like Tickets, Orders, or Users is a good starting point. Once that endpoint feels fast and predictable, copy the same contract across the rest of your admin screens.
Write the contract down, even if it’s brief. Be explicit about what the API accepts and what it returns so the UI team doesn’t guess and accidentally invent different rules per endpoint.
A simple standard to apply to every list endpoint:
- Allowed sorts: exact field names, direction, and a clear default (plus a tie-breaker like id).
- Allowed filters: which fields can be filtered, value formats, and what happens on invalid filters.
- Totals behavior: when you return a count, when you return "unknown", and when you skip it.
- Response shape: consistent keys (items, paging info, applied sort/filters, totals).
- Error rules: consistent status codes and readable validation messages.
If you’re building these admin screens with AppMaster (appmaster.io), it helps to standardize the pagination contract early. You can reuse the same list behavior across your web app and native mobile apps, and you spend less time chasing pagination edge cases later.
FAQ
What is the difference between offset and cursor pagination?
Offset pagination uses limit plus offset (or page/pageSize) to skip rows, so deeper pages often get slower as the database has to walk past more records. Cursor pagination uses an after token based on the last item's sort values, so it can jump to a known position and stay fast as you keep moving forward.
Why does offset pagination get slower on deep pages?
Because page 1 is usually cheap, but page 200 forces the database to skip a large number of rows before it can return anything. If you also sort and filter, the work grows, so each click feels more like a new heavy query than a quick fetch.
How do I keep the sort order stable across pages?
Always use a stable sort with a unique tie-breaker, such as created_at DESC, id DESC or updated_at DESC, id DESC. Without the tie-breaker, records with the same timestamp can swap order between requests, which is a common cause of duplicates and "missing" rows.
When should I choose cursor pagination?
Use cursor pagination for lists where people mostly move forward and speed matters, like activity logs, tickets, orders, and mobile infinite scroll. It stays consistent when new rows are inserted or deleted, because the cursor anchors the next page to an exact last-seen position.
When is offset pagination the better choice?
Offset pagination fits best when "jump to page N" is a real UI feature and users regularly bounce around. It's also convenient for small tables or stable datasets, where deep-page slowdown and shifting results are unlikely to matter.
What should a paginated API response include?
Keep one response shape across endpoints and include the items, paging state, and optional totals. A practical default is returning items, a page object (with limit, nextCursor/prevCursor or offset), and a lightweight flag like hasNext so both web tables and mobile lists can reuse the same client logic.
Why shouldn't every request return an exact total?
Because exact COUNT(*) on large, filtered datasets can become the slowest part of the request and make every page change feel laggy. A good default is to make totals optional, return them only when requested, or return has_more when the UI only needs "Load more."
How should filters and cursors interact?
Treat filters as part of the dataset, and treat the cursor as valid only for that exact filter and sort combination. If a user changes any filter or sort, reset pagination and start from the first page; reusing an old cursor after changes is a common way to get empty pages or confusing results.
How do I keep sorting from causing slow or unstable results?
Whitelist allowed sort fields and directions, and reject anything else so clients can't accidentally request slow or unstable ordering. Prefer sorting on indexed fields and always append a unique tie-breaker like id to keep the order deterministic across requests.
What guardrails should every list endpoint enforce?
Enforce a maximum limit, validate filters and sort parameters, and make cursor tokens opaque and strictly validated. If you're building admin screens in AppMaster, keeping these rules consistent across all list endpoints makes it easier to reuse the same table and infinite-scroll behavior without custom pagination fixes per screen.


