Aug 09, 2025

Testing Go REST handlers: httptest and table-driven checks

Testing Go REST handlers with httptest and table-driven cases gives you a repeatable way to check auth, validation, status codes, and edge cases before release.


What you should be confident about before release

A REST handler can compile, pass a quick manual check, and still fail in production. Most failures aren’t syntax problems. They’re contract problems: the handler accepts what it should reject, returns the wrong status code, or leaks details in an error.

Manual testing helps, but it’s easy to miss edge cases and regressions. You try a happy path, maybe one obvious error, and move on. Then a small change in validation or middleware quietly breaks behavior you assumed was stable.

The goal of handler tests is simple: make the handler’s promises repeatable. That includes authentication rules, input validation, predictable status codes, and error bodies clients can safely depend on.

Go’s httptest package is a great fit because you can exercise a handler directly without starting a real server. You build an HTTP request, pass it to the handler, and inspect the response body, headers, and status code. Tests stay fast, isolated, and easy to run on every commit.

Before release, you should know (not hope) that:

  • Auth behavior is consistent for missing tokens, invalid tokens, and wrong roles.
  • Inputs are validated: required fields, types, ranges, and (if you enforce it) unknown fields.
  • Status codes match the contract (for example, 401 vs 403, 400 vs 422).
  • Error responses are safe and consistent (no stack traces, same shape every time).
  • Non-happy paths are handled: timeouts, downstream failures, and empty results.

A “Create ticket” endpoint might work when you send perfect JSON as an admin. Tests catch what you forget to try: an expired token, an extra field the client accidentally sends, a negative priority, or the difference between “not found” and “internal error” when a dependency fails.

Define the contract for each endpoint

Write down what the handler promises to do before you write tests. A clear contract keeps tests focused and stops them from turning into guesses about what the code “meant.” It also makes refactors safer because you can change internals without changing behavior.

Start with inputs. Be specific about where each value comes from and what’s required. An endpoint might take an id from the path, limit from the query string, an Authorization header, and a JSON body. Note the rules that matter: allowed formats, min/max values, required fields, and what happens when something is missing.

Then define outputs. Don’t stop at “returns JSON.” Decide what success looks like, which headers matter, and what errors look like. If clients depend on stable error codes and a predictable JSON shape, treat that as part of the contract.

A practical checklist is:

  • Inputs: path/query values, required headers, JSON fields, and validation rules
  • Outputs: status code, response headers, JSON shape for success and error
  • Side effects: what data changes and what gets created
  • Dependencies: database calls, external services, current time, generated IDs

Also decide where handler tests stop. Handler tests are strongest at the HTTP boundary: auth, parsing, validation, status codes, and error bodies. Push deeper concerns into integration tests: real database queries, network calls, and full routing.

If your backend is generated (for example, AppMaster produces Go handlers and business logic), a contract-first approach is even more useful. You can regenerate code and still verify that each endpoint keeps the same public behavior.

Set up a minimal httptest harness

A good handler test should feel like sending a real request, without starting a server. In Go, that usually means: build a request with httptest.NewRequest, capture the response with httptest.NewRecorder, and call your handler.

Calling the handler directly gives fast, focused tests. This is ideal when you want to validate behavior inside the handler: auth checks, validation rules, status codes, and error bodies. Using a router in tests is helpful when the contract depends on path params, route matching, or middleware order. Start with direct calls and add the router only when you need it.
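If a case does depend on routing, go through the router so path parameters resolve the same way they do in production. A minimal sketch, assuming Go 1.22+ pattern matching in net/http and a hypothetical getWidget handler:

// Route-level test: the contract depends on the {id} path parameter,
// so the request goes through the mux instead of the handler directly.
mux := http.NewServeMux()
mux.HandleFunc("GET /v1/widgets/{id}", getWidget) // getWidget is hypothetical

req := httptest.NewRequest(http.MethodGet, "/v1/widgets/42", nil)
rec := httptest.NewRecorder()
mux.ServeHTTP(rec, req) // inside getWidget, r.PathValue("id") == "42"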

Headers matter more than most people think. A missing Content-Type can change how the handler reads the body. Set the headers you expect in every case so failures point to logic, not test setup.

Here’s a minimal pattern you can reuse:

req := httptest.NewRequest(http.MethodPost, "/v1/widgets", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Accept", "application/json")
rec := httptest.NewRecorder()

handler.ServeHTTP(rec, req)
res := rec.Result()
defer res.Body.Close()

To keep assertions consistent, it helps to use one small helper to read and decode the response body. In most tests, check the status code first (so failures are easy to scan), then the key headers you promise (often Content-Type), then the body.
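A sketch of such a helper, assuming JSON bodies (the name decodeJSON is our choice):

// decodeJSON decodes the response body into a map so tests can assert
// individual fields instead of comparing raw strings.
func decodeJSON(t *testing.T, res *http.Response) map[string]any {
	t.Helper()
	var got map[string]any
	if err := json.NewDecoder(res.Body).Decode(&got); err != nil {
		t.Fatalf("decode body: %v", err)
	}
	return got
}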

If your backend is generated (including a Go backend produced by AppMaster), this harness still applies. You’re testing the HTTP contract users depend on, not the code style behind it.

Design table-driven cases that stay readable

Table-driven tests work best when each case reads like a tiny story: the request you send and what you expect back. You should be able to scan the table and understand coverage without jumping around the file.

A solid case usually has: a clear name, the request (method, path, headers, body), the expected status code, and a check for the response. For JSON bodies, prefer asserting a few stable fields (like an error code) instead of matching the entire JSON string, unless your contract demands strict output.

A simple case shape you can reuse

Keep the case struct focused. Put one-off setup in helpers so the table stays small.

type tc struct {
	name       string
	method     string
	path       string
	headers    map[string]string
	body       string
	wantStatus int
	wantBody   string // substring or compact JSON
}

For different inputs, use small body strings that show the difference at a glance: a valid payload, one missing field, one wrong type, and one empty string. Avoid building heavily formatted JSON in the table; it gets noisy fast.

When you see repeated setup (token creation, common headers, default body), push it into helpers like newRequest(tc) or baseHeaders().
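A sketch of such a builder, assuming the tc struct above (per-case headers override the JSON defaults):

// newRequest turns a table case into a request with the default
// headers real clients send; tc.headers can override them.
func newRequest(t *testing.T, c tc) *http.Request {
	t.Helper()
	req := httptest.NewRequest(c.method, c.path, strings.NewReader(c.body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json")
	for k, v := range c.headers {
		req.Header.Set(k, v)
	}
	return req
}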

If one table starts mixing too many ideas, split it. One table for success paths and another for error paths is often easier to read and debug.

Auth checks: the cases that usually get skipped


Auth tests often look fine on the happy path, then fail in production because one “small” case was never exercised. Treat auth as a contract: what the client sends, what the server returns, and what must never be revealed.

Start with token presence and validity. A protected endpoint should behave differently when the header is missing versus present but wrong. If you use short-lived tokens, test expiry too, even if you simulate it by injecting a validator that returns “expired.”
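One way to force expiry and other outcomes is a small fake behind your validator interface. A minimal sketch; the interface and names are illustrative, not a real library:

// fakeValidator lets each test case force a specific auth outcome
// without minting real tokens.
type fakeValidator struct {
	err  error  // set to simulate "expired" or "bad signature"
	role string // role returned when the token is accepted
}

func (f *fakeValidator) Validate(token string) (string, error) {
	if f.err != nil {
		return "", f.err // the handler should map this to 401
	}
	return f.role, nil // an insufficient role should map to 403
}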

Most gaps are covered by these cases:

  • No Authorization header -> 401 with a stable error response
  • Malformed header (wrong prefix) -> 401
  • Invalid token (bad signature) -> 401
  • Expired token -> 401 (or your chosen code) with a predictable message
  • Valid token but wrong role/permissions -> 403

The 401 vs 403 split matters. Use 401 when the caller isn’t authenticated. Use 403 when they are authenticated but not allowed. If you blur these, clients will retry needlessly or show the wrong UI.

Role checks alone aren’t enough on “user-owned” endpoints (like GET /orders/{id}). Test ownership: user A shouldn’t see user B’s order even with a valid token. That should be a clean 403 (or 404, if you intentionally hide existence), and the body shouldn’t leak anything. Keep the error generic. Don’t hint that “order belongs to user 42.”

Input rules: validate, reject, and explain clearly

Many pre-release bugs are input bugs: missing fields, wrong types, unexpected formats, or payloads that are too large.

Name every input your handler accepts: JSON body fields, query params, and path params. For each one, decide what happens when it’s missing, empty, malformed, or out of range. Then write cases that prove the handler rejects bad input early and returns the same kind of error every time.

A small set of validation cases usually covers most risk:

  • Required fields: missing vs empty string vs null (if you allow null)
  • Types and formats: number vs string, email/date/UUID formats, boolean parsing
  • Size limits: max length, max items, payload too large
  • Unknown fields: ignored vs rejected (if you enforce strict decoding)
  • Query and path params: missing, not parseable, and default behavior

Example: a POST /users handler accepts { "email": "...", "age": 0 }. Test email missing, email as 123, email as "not-an-email", age as -1, and age as "20". If you require strict JSON, also test { "email":"user@example.com", "extra":"x" } and confirm it fails.
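If you enforce strict decoding, the standard library’s json.Decoder supports it directly. A minimal sketch:

// decodeStrict turns unknown fields into a decode error instead of
// silently dropping them, so the "extra":"x" case fails as intended.
func decodeStrict(r *http.Request, dst any) error {
	dec := json.NewDecoder(r.Body)
	dec.DisallowUnknownFields()
	return dec.Decode(dst)
}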

Make validation failures predictable. Pick a status code for validation errors (some teams use 400, others use 422) and keep the error body shape consistent. Tests should assert both the status and a message (or details field) that points to the exact input that failed.

Status codes and error bodies: make them predictable


Handler tests get easier when API failures are boring and consistent. You want every error to map to a clear status code and return the same JSON shape, regardless of who wrote the handler.

Start with a small, agreed mapping from error types to HTTP status codes:

  • 400 Bad Request: malformed JSON, missing required query params
  • 404 Not Found: resource ID doesn’t exist
  • 409 Conflict: unique constraint or state conflict
  • 422 Unprocessable Entity: valid JSON but fails business rules
  • 500 Internal Server Error: unexpected failures (db down, nil pointer, third-party outage)

Then keep the error body stable. Even if message text changes later, clients should still have predictable fields to rely on:

{ "code": "user_not_found", "message": "User was not found", "details": { "id": "123" } }

In tests, assert the shape, not just the status line. A common failure is returning HTML, plain text, or an empty body on errors, which breaks clients and hides bugs.
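A sketch of a shape assertion, assuming the error contract shown above:

// Decode into a struct so the test asserts stable fields, not exact
// strings; message text can change without breaking the test.
type apiError struct {
	Code    string         `json:"code"`
	Message string         `json:"message"`
	Details map[string]any `json:"details"`
}

var got apiError
if err := json.NewDecoder(res.Body).Decode(&got); err != nil {
	t.Fatalf("error body is not valid JSON: %v", err)
}
if got.Code != "user_not_found" {
	t.Fatalf("code = %q, want %q", got.Code, "user_not_found")
}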

Also test headers and encoding for error responses:

  • Content-Type is application/json (and charset is consistent if you set it)
  • Body is valid JSON even on failures
  • code, message, and details exist (details can be empty, but shouldn’t be random)
  • Panics and unexpected errors return a safe 500 without leaking stack traces

If you add a recover middleware, include one test that forces a panic and confirms you still get a clean JSON error response.
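A minimal sketch of that test, with wrapRecover standing in for whatever your recover middleware is called:

// Force a panic and confirm the middleware converts it into a clean
// JSON 500 instead of crashing or leaking a stack trace.
panicky := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	panic("boom")
})
h := wrapRecover(panicky) // hypothetical middleware

rec := httptest.NewRecorder()
h.ServeHTTP(rec, httptest.NewRequest(http.MethodGet, "/any", nil))

if rec.Code != http.StatusInternalServerError {
	t.Fatalf("status = %d, want 500", rec.Code)
}
if !strings.Contains(rec.Header().Get("Content-Type"), "application/json") {
	t.Fatalf("Content-Type = %q, want JSON", rec.Header().Get("Content-Type"))
}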

Edge cases: failures, time, and non-happy paths


Happy-path tests prove the handler works once. Edge-case tests prove it keeps behaving when the world is messy.

Force dependencies to fail in specific, repeatable ways. If your handler calls a database, cache, or external API, you want to see what happens when those layers return errors you don’t control.

These are worth simulating at least once per endpoint (a sketch of one follows the list):

  • Timeout from a downstream call (context deadline exceeded)
  • Not found from storage when the client expected data
  • Unique constraint violation on create (duplicate email, duplicate slug)
  • Network or transport error (connection refused, broken pipe)
  • Unexpected internal error (generic “something went wrong”)
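A sketch of forcing the timeout case with a fake store; the interface and the Order type are illustrative stand-ins for your own:

// fakeStore returns whatever error the test configures, so a
// downstream timeout is one line of setup.
type fakeStore struct{ err error }

func (s *fakeStore) GetOrder(ctx context.Context, id string) (Order, error) {
	return Order{}, s.err
}

// In the test case setup:
// store := &fakeStore{err: context.DeadlineExceeded}
// then assert the handler returns your chosen timeout status, not 200.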

Keep tests stable by controlling anything that can vary between runs. A flaky test is worse than no test because it trains people to ignore failures.

Make time and randomness predictable

If the handler uses time.Now(), IDs, or random values, inject them. Pass a clock function and an ID generator into the handler or service. In tests, return fixed values so you can assert exact JSON fields and headers.
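A minimal sketch of that injection; the deps struct and its field names are our choice:

// Inject time and ID generation so responses are byte-for-byte
// predictable in tests.
type deps struct {
	now   func() time.Time
	newID func() string
}

// Production wiring passes time.Now and your real ID generator.
// Tests pin both so exact JSON assertions are possible:
d := deps{
	now:   func() time.Time { return time.Date(2025, 8, 9, 12, 0, 0, 0, time.UTC) },
	newID: func() string { return "id-123" },
}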

Use small fakes, and assert “no side effects”

Prefer tiny fakes or stubs over full mocks. A fake can record calls and let you assert that nothing happened after a failure.

For example, in a “create user” handler, if the database insert fails with a unique constraint error, assert the status code is correct, the error body is stable, and no welcome email was sent. Your fake mailer can expose a counter (sent=0) so the failure path proves it didn’t trigger side effects.
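A sketch of such a fake; the mailer interface is illustrative:

// fakeMailer records sends so failure-path tests can assert that
// nothing was triggered.
type fakeMailer struct{ sent int }

func (m *fakeMailer) SendWelcome(email string) error {
	m.sent++
	return nil
}

// After forcing the insert to fail:
// if mailer.sent != 0 { t.Fatalf("welcome email sent on a failed create") }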

Common mistakes that make handler tests unreliable

Handler tests often fail for the wrong reason: the request you build in a test isn’t the same shape as a real client request. That leads to noisy failures and false confidence.

One common issue is sending JSON without the headers your handler expects. If your code checks Content-Type: application/json, forgetting it can make the handler skip JSON decoding, return a different status code, or take a branch that never happens in production. The same goes for auth: a missing Authorization header is not the same as an invalid token. Those should be different cases.

Another trap is asserting the whole JSON response as a raw string. Small changes like field order, spacing, or new fields break tests even when the API is still correct. Decode the body into a struct or map[string]any, then assert what matters: status, error code, message, and a couple of key fields.

Tests also get unreliable when cases share mutable state. Reusing the same in-memory store, global variables, or a singleton router across table rows can leak data between cases. Each test case should start clean, or reset state in t.Cleanup.
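A sketch of per-case isolation; newMemStore and NewUsersHandler are hypothetical stand-ins for your own constructors:

// Inside the table loop: each case gets a fresh store, and t.Cleanup
// runs the teardown even when the case fails.
t.Run(tt.name, func(t *testing.T) {
	store := newMemStore()
	t.Cleanup(store.Reset)
	h := NewUsersHandler(store)
	// build the request, call h.ServeHTTP, assert as usual
})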

Patterns that usually cause brittle tests:

  • Building requests without the same headers and encoding real clients use
  • Asserting full JSON strings instead of decoding and checking fields
  • Reusing shared database/cache/global handler state across cases
  • Packing auth, validation, and business logic assertions into one oversized test

Keep each test focused. If one case fails, you should know whether it was auth, input rules, or error formatting within seconds.

A quick pre-release checklist you can reuse


Before you ship, tests should prove two things: the endpoint follows its contract, and it fails in safe, predictable ways.

Run these as table-driven cases, and make each case assert both the response and any side effects:

  • Auth: no token, bad token, wrong role, correct role (and confirm the “wrong role” case doesn’t leak details)
  • Inputs: missing required fields, wrong types, boundary sizes (min/max), unknown fields you want to reject
  • Outputs: status code, key headers (like Content-Type), required JSON fields, consistent error shape
  • Dependencies: force one downstream failure (DB, queue, payment, email), verify a safe message, confirm no partial writes
  • Idempotency: repeat the same request (or retry after a timeout) and confirm you don’t create duplicates (see the sketch after this list)
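A sketch of the idempotency case, assuming your API accepts an Idempotency-Key header and a fake store that exposes a count:

// Send the same create request twice; the second attempt must not
// produce a duplicate.
body := `{"email":"user@example.com"}`
for i := 0; i < 2; i++ {
	req := httptest.NewRequest(http.MethodPost, "/users", strings.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Idempotency-Key", "key-123") // if your API supports it
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, req)
}
if store.Count() != 1 {
	t.Fatalf("users created = %d, want 1", store.Count())
}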

After that, add the sanity assertion that usually gets skipped: confirm the handler didn’t touch what it shouldn’t. For example, in a failed validation case, verify no record was created and no email was sent.

If you build APIs with a tool like AppMaster, this same checklist still applies. The point is the same: prove the public behavior stays stable.

Example: one endpoint, a small table, and what it catches

Say you have a simple endpoint: POST /login. It accepts JSON with email and password. It returns 200 with a token on success, 400 for invalid input, 401 for wrong credentials, and 500 if the auth service is down.

A compact table like this covers most of what breaks in production.

func TestLoginHandler(t *testing.T) {
	// Fake dependency so we can force 200/401/500 without hitting real systems.
	auth := &FakeAuth{ /* configure per test */ }
	h := NewLoginHandler(auth)

	tests := []struct {
		name       string
		body       string
		authHeader string
		setup      func()
		wantStatus int
		wantBody   string
	}{
		{"success", `{"email":"[email protected]","password":"secret"}`, "", func() { auth.Mode = "ok" }, 200, `"token"`},
		{"missing password", `{"email":"[email protected]"}`, "", func() { auth.Mode = "ok" }, 400, "password"},
		{"bad email format", `{"email":"not-an-email","password":"secret"}`, "", func() { auth.Mode = "ok" }, 400, "email"},
		{"invalid JSON", `{`, "", func() { auth.Mode = "ok" }, 400, "invalid JSON"},
		{"unauthorized", `{"email":"[email protected]","password":"wrong"}`, "", func() { auth.Mode = "unauthorized" }, 401, "unauthorized"},
		{"server error", `{"email":"[email protected]","password":"secret"}`, "", func() { auth.Mode = "error" }, 500, "internal"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			tt.setup()
			req := httptest.NewRequest(http.MethodPost, "/login", strings.NewReader(tt.body))
			req.Header.Set("Content-Type", "application/json")
			if tt.authHeader != "" {
				req.Header.Set("Authorization", tt.authHeader)
			}

			rr := httptest.NewRecorder()
			h.ServeHTTP(rr, req)

			if rr.Code != tt.wantStatus {
				t.Fatalf("status = %d, want %d, body=%s", rr.Code, tt.wantStatus, rr.Body.String())
			}
			if tt.wantBody != "" && !strings.Contains(rr.Body.String(), tt.wantBody) {
				t.Fatalf("body %q does not contain %q", rr.Body.String(), tt.wantBody)
			}
		})
	}
}

Walk one case end-to-end: for “missing password,” you send a body with only email, set Content-Type, run it through ServeHTTP, then assert 400 and an error that clearly points at password. That single case proves your decoder, validator, and error response format work together.

If you want a faster way to standardize contracts, auth modules, and integrations while still shipping real Go code, AppMaster (appmaster.io) is built for that. Even then, these tests remain valuable because they lock in the behavior your clients rely on.
