Aug 04, 2025·7 min read

PostgreSQL search everywhere: full-text, trigram, partial indexes

Learn how to design a “search everywhere” experience for internal screens in PostgreSQL by combining full-text search, trigram indexes, and partial indexes for fast results.

What “search everywhere” really means for internal tools

On an internal screen, “search everywhere” usually means: “Help me find the exact record I’m thinking of, fast, even if I don’t remember it perfectly.” People aren’t browsing. They’re trying to jump straight to one customer, ticket, invoice, or device.

That’s why slow search feels worse than a slow page. A page load happens once. Search happens many times in a row, often while someone’s on a call or doing triage. If results take 2-3 seconds, users change the query, backspace, try another term, and you end up with more load and more frustration.

From one search box, users expect a bundle of behaviors: partial matches ("alex" finds "Alexander"), tolerance for small typos ("microsfot" still finds "Microsoft"), sensible “best result” ordering (exact IDs or emails float to the top), a bit of recency bias, and filters that apply by default (open tickets, active customers).

The tricky part is that one input often hides multiple intents. An agent might paste a ticket number, type a name fragment, search an email, or enter a phone number. Each intent wants a different strategy, different indexes, and sometimes a different ranking rule.

So don’t start with indexes. Start by listing the few search intents your users actually have, and separate identity fields (IDs, emails) from fuzzy fields (names, subjects) and long text (notes).

Start by naming the data and the search behaviors

Before you pick an index, write down what people actually type. “PostgreSQL search everywhere” sounds like one feature, but it’s usually a mix of very different searches.

Internal tools blend “hard” identifiers (order ID, ticket number, invoice code) with “soft” text (customer name, email, notes, tags). Those groups behave differently in PostgreSQL, so treating them the same is a fast path to slow queries.

Next, separate the behaviors:

  • Exact lookup: someone searching TKT-104883 expects one precise result.
  • Fuzzy lookup: someone typing john smth wants a forgiving match across names (and maybe emails) and will scan a short list.
  • Filter-driven search: someone selecting “Status = Open” and “Assigned to = Me” is mostly filtering; the text box is secondary.

Decide early whether results must be ranked (best matches first) or simply filtered. Ranking matters for notes and longer descriptions. For IDs and emails, ranking often feels random and adds cost.

A short checklist is usually enough:

  • Which fields get searched every day?
  • Which inputs are exact (IDs, codes), fuzzy (names), or long text (notes)?
  • Which filters apply to almost every search?
  • Do you need “best match” ordering, or is any match acceptable?
  • How fast will the table grow: thousands, hundreds of thousands, or millions?

If you name those decisions up front, the index choices later stop feeling like guesswork.

The baseline: exact matches and why ILIKE often hurts

Lock down the easy wins first. For many internal screens, a plain B-tree index already gives you instant results for exact matches like IDs, order numbers, emails, and external references.

If people paste an exact value, make sure your query is truly exact. WHERE id = ... or WHERE email = ... can be extremely fast with a normal index. A unique index on email often pays off twice: speed and better data quality.

Trouble starts when “search everywhere” quietly turns into ILIKE. A query like name ILIKE '%ann%' has a leading wildcard, so PostgreSQL can’t use a normal B-tree index. It ends up checking lots of rows, and it gets predictably slower as the table grows.

Prefix search can work, but only when the pattern is anchored at the start: name ILIKE 'ann%'. Even then, details matter (collation, case handling, and whether you indexed the same expression you query). If your UI must be case-insensitive, a common approach is to query lower(name) and create a matching index on lower(name).
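
As a sketch, assuming a customers table with email and name columns (all names here are illustrative):

    -- Exact lookups: a plain (here unique) B-tree keeps these instant.
    CREATE UNIQUE INDEX customers_email_key ON customers (email);

    SELECT id, name FROM customers WHERE email = 'ann@example.com';

    -- Case-insensitive prefix search: index the same expression you query.
    -- text_pattern_ops lets LIKE prefix patterns use the index even under
    -- non-C collations.
    CREATE INDEX customers_lower_name_idx
      ON customers (lower(name) text_pattern_ops);

    SELECT id, name FROM customers WHERE lower(name) LIKE 'ann%';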

It also helps to agree on what “snappy” means:

  • About 200 ms or less for the database work on a warm cache
  • Under 1 second end-to-end including network and rendering
  • No visible loading state for common searches

With targets like that, it’s easier to decide whether you can stick with exact and prefix matches, or whether it’s time for full-text search or trigram indexes.

When full-text search is the right tool

Full-text search is the best fit when people type natural language and expect the system to find the right items, not just exact matches. Think ticket messages, internal notes, long descriptions, knowledge base articles, and call logs.

The big win is ranking. Instead of returning a long list where the best result is buried, full-text search can sort by relevance. In internal tools, that matters: someone needs an answer in seconds, not after scanning 50 rows.

At a high level, full-text search has three moving parts:

  • A tsvector (the searchable text, stored or generated)
  • A tsquery (what the user typed, converted into a query)
  • A language configuration (how words are normalized)

Language configuration is where behavior becomes visible. PostgreSQL removes common stop words (like “the” or “and”) and applies stemming, so “pay”, “paid”, and “payment” can match. That’s great for notes and messages, but it can surprise people when they search for a short common word and get nothing.
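
A minimal sketch, assuming a tickets table with subject and body columns (names are illustrative):

    -- Keep the searchable text in a generated column so it stays in sync.
    ALTER TABLE tickets
      ADD COLUMN search_vec tsvector
      GENERATED ALWAYS AS (
        to_tsvector('english',
                    coalesce(subject, '') || ' ' || coalesce(body, ''))
      ) STORED;

    CREATE INDEX tickets_search_idx ON tickets USING gin (search_vec);

    -- websearch_to_tsquery copes with plain keywords and quoted phrases.
    SELECT id, subject
    FROM tickets
    WHERE search_vec @@ websearch_to_tsquery('english', 'password reset')
    ORDER BY ts_rank(search_vec,
                     websearch_to_tsquery('english', 'password reset')) DESC
    LIMIT 20;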

Synonyms are another decision point. They help when your company uses different words for the same thing (for example, “refund” vs “chargeback”), but they need a little care over time. Keep the synonym list short and based on what support or ops actually type.

A practical example: searching “can’t login after reset” should pull tickets where the message says “cannot log in after password reset”, even if the wording differs. That “find relevant” behavior is what full-text search is built for, and it’s usually a better choice than trying to make ILIKE act like a search engine.

When trigram indexes win

Trigram indexes are a strong choice when users type fragments, make typos, or only remember “something like it.” They shine on short text fields where full-text search is too strict: person names, company names, ticket subjects, SKUs, order numbers, and product codes.

A trigram is a 3-character chunk of text. PostgreSQL compares two strings by how many trigrams they share. That’s why it can match "Jon Smth" to "John Smith", or "ACM" to "ACME", and it can find results when the query is the middle of a word.

This is often the fastest path to a forgiving “PostgreSQL search everywhere” box when the job is “find me the right row,” not “find documents about a topic.”

Full-text search is great for longer text and ranking by meaning, but it doesn’t naturally handle partial strings and small typos on short fields. Trigram search is built for that kind of fuzziness.
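
A minimal sketch, again assuming a customers table with a name column:

    -- pg_trgm ships with PostgreSQL but must be enabled per database.
    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    CREATE INDEX customers_name_trgm_idx
      ON customers USING gin (name gin_trgm_ops);

    -- % is the trigram similarity operator; tune
    -- pg_trgm.similarity_threshold (default 0.3) if matches feel too loose.
    SELECT id, name, similarity(name, 'jon smth') AS score
    FROM customers
    WHERE name % 'jon smth'
    ORDER BY score DESC
    LIMIT 10;

The same index also accelerates ILIKE '%ann%' substring lookups, which is often the quickest fix for an existing slow screen.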

Keep write cost reasonable

Trigram indexes are larger and add overhead on writes, so be selective. Index the columns people actually use:

  • Name, email, company, username
  • Short identifiers (SKU, code, reference)
  • A concise title field (not a large notes/comments field)

If you can name the exact fields your team types into the search box, you can usually keep trigram indexing small and fast.

Partial indexes for the filters people actually use

A “search everywhere” box usually has hidden defaults. People search inside a workspace, on active items, and with deleted excluded. If those filters are present in almost every request, make the common case fast by indexing only the rows that match them.

A partial index is a normal index with a WHERE clause. PostgreSQL keeps it smaller because it only stores entries for rows you care about most. That often means fewer pages to read and better cache hit rates.

Common partial-index targets include active rows (status = 'active'), soft deletes (deleted_at IS NULL), tenant scoping, and “recent” windows (for example, last 90 days).

The key is to match your UI. If the screen always hides deleted rows, your queries should always include deleted_at IS NULL, and your partial index should use the same condition. Small mismatches, like using is_deleted = false in one place and deleted_at IS NULL in another, can stop the planner from using the index.
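
As a sketch, assuming soft deletes via a deleted_at column:

    -- Index only the rows the UI actually shows.
    CREATE INDEX customers_active_email_idx
      ON customers (email)
      WHERE deleted_at IS NULL;

    -- The query must repeat the same condition, or the planner may skip it.
    SELECT id, name FROM customers
    WHERE deleted_at IS NULL
      AND email = 'ann@example.com';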

Partial indexes also work alongside full-text search and trigram indexes. For example, indexing text search only for non-deleted rows keeps the index size under control.

Trade-off: partial indexes are less helpful for rare queries. If someone occasionally searches across deleted records or across all workspaces, PostgreSQL may fall back to a slower plan. Handle that with a separate admin-only path, or add a second index only if the rare query becomes common.

Mixing approaches without turning search into a mystery

Most teams end up mixing techniques because one search box has to handle different intents. The goal is to make the order of operations clear so results feel predictable.

A simple priority order helps, whether you implement it as separate queries or as one query with clear CASE logic.

A predictable priority ladder

Start strict, then get fuzzier only if needed:

  • Exact match first (IDs, email, ticket number, SKU) using B-tree indexes
  • Prefix match next where it makes sense
  • Trigram match after that for typos and fragments on names and titles
  • Full-text search last for longer notes, descriptions, and free-form content

When you stick to the same ladder, users learn what the box “means.” They stop thinking the system is broken when “12345” finds a ticket instantly while “refund policy” searches longer text.
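
As a sketch, the whole ladder can even live in one query: feed the same input to every tier, tag each tier with a rank, and deduplicate. This reuses the illustrative tickets table and search_vec column from earlier, with $1 as the user's input:

    SELECT id, subject, min(tier) AS tier
    FROM (
      SELECT id, subject, 1 AS tier   -- exact ticket number (B-tree)
      FROM tickets WHERE number = $1
      UNION ALL
      SELECT id, subject, 2           -- typo-tolerant subject (pg_trgm)
      FROM tickets WHERE subject % $1
      UNION ALL
      SELECT id, subject, 3           -- full-text over subject and body
      FROM tickets
      WHERE search_vec @@ websearch_to_tsquery('english', $1)
    ) AS hits
    GROUP BY id, subject
    ORDER BY tier, id
    LIMIT 20;

A prefix tier would slot in the same way, and separate queries per tier work just as well if that's easier to maintain.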

Filter first, then fuzz

Fuzzy search gets expensive when it has to consider the whole table. Narrow the candidate set with the filters people actually use (status, assigned team, date range, account), then run trigram or full-text on what remains. Even a fast trigram index can feel slow if you ask it to score millions of rows.

It’s also worth writing a one-paragraph rule that non-technical teammates can understand, like: “We match ticket number exactly, then customer name with typo tolerance, then search notes.” That shared definition prevents arguments later about why a row showed up.

Step-by-step: pick an approach and implement it safely

A fast “search everywhere” box is a set of small decisions. Write them down first, and the database work gets simpler.

  1. Define the inputs. Is it one box only, or one box plus filters (status, owner, date range)?
  2. Choose match types per field. IDs and codes want exact matches. Names and emails often need prefix or fuzzy matching. Long notes and descriptions are better with natural language search.
  3. Add the right indexes and confirm they’re used. Create the index, then check your real query with EXPLAIN (ANALYZE, BUFFERS).
  4. Add ranking or sorting that matches intent. If users type “invoice 1042”, exact matches should rise. If they type a misspelled name, similarity ranking should win.
  5. Test with real queries. Try typos, very short terms (like “al”), long pasted text, empty input, and “only filters” mode.

To ship safely, change one thing at a time and keep rollback easy. For new indexes on large tables, prefer CREATE INDEX CONCURRENTLY so you don’t block writes. If you can, ship behind a feature flag and compare latency before and after.
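
For example (index and column names are illustrative):

    -- Build the index without blocking writes; note that CONCURRENTLY
    -- cannot run inside a transaction block.
    CREATE INDEX CONCURRENTLY tickets_subject_trgm_idx
      ON tickets USING gin (subject gin_trgm_ops);

    -- Then confirm the real query uses it on realistic data.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id, subject FROM tickets
    WHERE status = 'open' AND subject % 'paymnt failed';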

A practical pattern for “PostgreSQL search everywhere” is: exact match first (fast and precise), trigram matching for “human” fields where people misspell, and full-text search for long text that benefits from ranking.

A realistic example: one search box in a support admin panel

Picture a support admin panel where the team has one search box, but expects it to find customers, tickets, and even notes. This is the classic “one input, many meanings” problem.

The first win is to make intent visible without adding friction. If the query looks like an email or phone number, treat it as a customer lookup. If it looks like a ticket ID (for example, "TKT-10482"), route it straight to tickets. Everything else falls back to text search across ticket subject and notes.
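
A rough sketch of that routing as a SQL expression, with deliberately loose, illustrative patterns:

    -- Classify the raw input ($1) before choosing a query path.
    SELECT CASE
             WHEN $1 ~ '^TKT-[0-9]+$'                    THEN 'ticket_id'
             WHEN $1 ~ '^[^@[:space:]]+@[^@[:space:]]+$' THEN 'email'
             WHEN $1 ~ '^\+?[0-9 ()-]{7,}$'              THEN 'phone'
             ELSE 'text'
           END AS intent;

In practice this check often lives in application code instead, but keeping the patterns next to the queries makes the rules easy to audit.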

For customer lookup, trigram indexes usually feel best. Names and company strings are messy, and people type fragments. A trigram index can make searches like “jon smi” or “acm” fast and forgiving.

For ticket notes, use full-text search. Notes are real sentences, and you usually want relevant matches, not “contains this substring.” Ranking helps when dozens of tickets mention the same keyword.

Filters matter more than most teams expect. If agents live in “open tickets,” add a partial index that only covers open rows. Do the same for “active customers.” It keeps indexes smaller and makes the common path fast.

Very short queries deserve rules; otherwise the database does expensive work for noise:

  • 1-2 characters: show recent open tickets and recently updated customers
  • 3+ characters: run trigram for customer fields and full-text for ticket text
  • No clear intent: show a mixed list, but cap each group (for example, 10 customers and 10 tickets)

Common mistakes that make search slow or confusing

Most “why is search slow?” bugs are self-inflicted. The goal isn’t to index everything, it’s to index what people actually do.

A common trap is adding indexes on many columns “just in case.” Reads can improve, but every insert and update now has extra work. In internal tools where records change all day (tickets, orders, users), write speed matters.

Another mistake is using full-text search when what you really need is typo-tolerant lookup on names or emails. Full-text is great for documents and descriptions. It’s not a magic fix for “Jon” vs “John” or “gmail.con” vs “gmail.com.” That’s usually trigram territory.

Filters can also quietly break your plan. If most searches happen with a fixed filter (like status = 'open' or org_id = 42), the best index may be a partial index that matches that condition. If you forget this, PostgreSQL may scan far more rows than you expect.

A few mistakes show up repeatedly:

  • Adding many indexes without measuring write cost
  • Expecting full-text search to behave like typo-tolerant autocomplete
  • Ignoring how common filters change which index can be used
  • Testing on small, clean data instead of real term frequency (common words vs rare IDs)
  • Sorting by a column with no supporting index, forcing a slow sort

Example: a support screen searches tickets by subject, customer name, and ticket number, then sorts by latest activity. If latest_activity_at isn’t indexed for the filtered set (for example, open tickets), that sort can wipe out the speed you gained from the search index.
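
A partial index that matches both the filter and the sort closes that gap:

    -- Serves "open tickets, newest activity first" without a separate sort.
    CREATE INDEX tickets_open_activity_idx
      ON tickets (latest_activity_at DESC)
      WHERE status = 'open';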

Quick checks before you ship

Before you call a “search everywhere” feature done, get concrete about the behavior you’re promising.

  • Are people trying to find a record by an exact identifier (ticket number, email)?
  • Do they expect fuzzy matching for typos?
  • Do they want ranked results from longer notes and descriptions?

If you mix modes, decide which one wins when they conflict.

Then identify the 2-3 fields that drive most searches. If 80% of searches are by email, name, and ticket ID, optimize those first and treat the rest as secondary.

A short pre-ship checklist:

  • Confirm the main match mode per field (exact lookup, fuzzy match, or ranked text)
  • List the filters users apply daily and make sure indexes match those combinations
  • Decide how to handle very short and empty queries (for example, require 2-3 characters for fuzzy search; show “recent” for empty)
  • Make ordering explainable: most recent, best text match, or a simple combined rule

Finally, test with realistic data size and timing, not just correctness. A query that feels instant with 1,000 rows can drag at 1,000,000.

Next steps: turn the plan into a fast internal search screen

A search box stays fast when the team agrees on what it should do. Write the rules in plain language: what “matches” means (exact, prefix, typo-tolerant), which fields are searched, and how filters change the result set.

Keep a small test set of real searches and treat it like a regression suite. Ten to twenty queries is usually enough: a couple of common names, a few partial emails, a typo, a long note snippet, and one “empty results” case. Run them before and after changes so performance work doesn’t quietly break relevance.

If you’re building internal tools with AppMaster (appmaster.io), it helps to define those search rules alongside the data model and business logic, so UI behavior and database choices don’t drift as requirements change.

FAQ

What does “search everywhere” usually mean in an internal tool?

Treat it as “find the exact record I mean, quickly,” not as browsing. Start by writing down the few real intents users have (ID lookup, name/email lookup with typos, long-note search) and the default filters they almost always use. Those decisions tell you which queries to run and which indexes are worth paying for.

Why does `ILIKE '%...%'` make search slow?

ILIKE '%term%' has a leading wildcard, so PostgreSQL usually can’t use a normal B-tree index and ends up scanning lots of rows. It may feel fine on small tables, then slows down sharply as data grows. If you need substring or typo-tolerant matching, plan for trigram or full-text instead of hoping ILIKE will scale.

What’s the fastest way to handle exact lookups like IDs or emails?

Use exact comparisons like WHERE id = $1 or WHERE email = $1 and back them with a B-tree (often unique for emails or codes). Exact lookups are the cheapest searches you can do, and they also make results feel predictable. If users paste a full ticket number or email, route to this path first.

How do I do case-insensitive prefix search without breaking indexes?

Prefer a prefix pattern like name ILIKE 'ann%' and keep it consistent with how you index. For reliable case-insensitive behavior, many teams query lower(name) and create an index on the same expression so the planner can use it. If you can’t anchor the pattern at the start, prefix search won’t be enough.

When should I use trigram indexes for a search box?

Use trigram indexing when users type fragments, make small typos, or only remember “something like it,” especially on short fields like names, subjects, codes, and usernames. It works well for matching the middle of a string and for near-matches such as misspellings. Be selective about which columns you index because trigram indexes add size and write overhead.

When is PostgreSQL full-text search the better choice?

Use full-text search when people search sentences or keywords in longer content like notes, messages, descriptions, or knowledge-base style text. Its big benefit is relevance ranking, so the best matches rise to the top instead of forcing users to scan a long list. Expect language behavior like stemming and stop-word removal, which is helpful for prose but can surprise users on very short common words.

How do partial indexes help “search everywhere” screens?

Add partial indexes when most searches include the same filters, such as deleted_at IS NULL, status = 'open', or a tenant/workspace constraint. Because the index only covers the common subset, it stays smaller and is often faster in real workloads. Make sure your queries use the exact same filter condition as the partial index, or PostgreSQL may ignore it.

How can I combine exact, trigram, and full-text search without confusing users?

Use a consistent priority ladder so results feel stable: exact match first for IDs/emails, then prefix where it fits, then trigram for forgiving name/title matches, and full-text for long notes and descriptions. Apply the default filters early to reduce how many rows fuzzy search has to consider. This keeps performance and relevance from feeling random as data grows.

What should I do with 1–2 character searches or empty input?

Set simple rules like requiring 3+ characters before running fuzzy search, and use short queries to show recent or commonly accessed records instead. Very short inputs create lots of noise and can trigger expensive work for low value. Also decide how to handle empty input so the UI doesn’t hammer the database with “match everything” queries.

How do I validate performance and ship search changes safely?

Create the index, then verify the real query with EXPLAIN (ANALYZE, BUFFERS) on realistic data sizes, not just a dev dataset. Roll changes out one at a time and keep rollback easy; on large tables, build new indexes concurrently to avoid blocking writes. If you’re building the screen in AppMaster, define the search rules alongside the data model and business logic so the UI behavior stays consistent as requirements change.
