Stored procedures vs visual workflows: where logic should live
Stored procedures vs visual workflows: a practical way to decide what logic belongs in the database, in drag-and-drop workflows, or in custom code.

What this decision really means
Business logic is the set of rules that decide what’s allowed, what happens next, and what the system should do when real people use it. It’s not the data itself. It’s the behavior: who can do what, under which conditions, and with which side effects.
So the stored procedures vs visual workflows argument is really about where these rules should live so they stay easy to change, hard to break, and clear to the people who own the process.
Most teams end up with three possible “homes” for logic:
- In the database (stored procedures, triggers, constraints): rules run close to the data. This can be fast and consistent, but harder for non-database specialists to change.
- In visual workflows (drag-and-drop process builders): rules are expressed as steps and decisions. This is often easier to read, review, and adjust as the process changes.
- In custom code (services, apps, scripts): rules are written in a programming language. This gives maximum flexibility, but changes usually require more engineering discipline and testing.
The choice affects day-to-day speed and long-term maintenance. Put logic in the wrong place and you get slow delivery (every change needs the one person who knows the database), more errors (rules duplicated in multiple places), and painful debugging (nobody knows why a record was rejected).
Most systems contain several kinds of business logic. Common examples include validation (required fields, allowed ranges), approvals, pricing and discounts, notifications, and access rules.
A practical way to think about it: the database is great at protecting data integrity, visual workflows are great at expressing business processes, and custom code is great when the rule is too unique or complex to model cleanly. Platforms like AppMaster sit squarely in the middle: you can model data, then implement the process as readable visual logic without scattering rules across many apps.
The criteria: what you are optimizing for
This isn’t really a taste question. It’s about what you’re trying to protect: the data, the people changing the system, and the speed of change.
The outcomes that matter most
Name the top outcomes for your project. Most teams are balancing some mix of these:
- Correctness: rules must run the same way every time, even under load.
- Clarity: someone new should understand what happens without guesswork.
- Speed: the logic must run fast and close to the data when needed.
- Auditability: you need to prove who changed what, and when it ran.
- Change rate: you expect requirements to change weekly, not yearly.
You rarely maximize all five. Pushing logic closer to the database can improve correctness and speed, but it can reduce clarity for people who don’t live in SQL.
Who will change the logic (and how often)
Be honest about who will own changes day to day. A rule that ops needs to tweak every week shouldn’t require a DBA to deploy a stored procedure. At the same time, a rule that affects money shouldn’t be editable without review.
Think in terms of change friction. If requirements change often, you want a place where updates are safe, visible, and quick to ship. Visual workflow tools (for example, AppMaster’s Business Process Editor) can work well when business owners and engineers need to collaborate on logic without editing low-level code. If changes are rare and the rule is critical, higher friction can be acceptable.
A quick way to pressure-test ownership:
- Who gets paged when it breaks at 2 a.m.?
- How fast do you need to patch a rule?
- Do changes need approvals or a paper trail?
- Will multiple apps depend on the same rule?
- Is the logic mostly data shaping, or business decisions?
Note constraints early. Some industries require strict access control, separation of duties, or detailed logs. Also consider data access limits: if only certain services should see certain fields, that affects where logic can safely run.
When logic belongs in stored procedures
Stored procedures are pieces of logic that run inside the database. In PostgreSQL, they are written in SQL (and sometimes a database language like PL/pgSQL). Instead of your app pulling rows out, looping, and pushing changes back, the database does the work right where the data lives.
A good rule is simple: put logic in the database when the main job is protecting data and doing bulk data work, not coordinating people or systems.
Where stored procedures shine
Stored procedures are a strong fit for rules that must always be true, no matter which app or integration touches the database. Think of guardrails that keep bad data out.
They also excel at set-based updates, where one statement can update thousands of rows safely and quickly. Simple calculations that are purely about data, like computing totals or applying a fixed discount formula, can also live here when it reduces round trips and keeps results consistent.
Example: when an order is marked paid, a procedure can atomically update the order status, decrement inventory, and write an audit row. If any step fails, the whole change rolls back.
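To make the all-or-nothing behavior concrete, here is a minimal Python sketch using sqlite3 from the standard library as a stand-in for PostgreSQL. The table names and the out-of-stock rule are invented for illustration; in production this logic would live in a stored procedure, but the transaction semantics are the same:

```python
import sqlite3

def mark_order_paid(conn: sqlite3.Connection, order_id: int) -> None:
    """Mark an order paid, decrement stock, and write an audit row atomically.

    `with conn:` opens a transaction that commits on success and rolls
    back on any exception -- the all-or-nothing behavior a stored
    procedure gives you.
    """
    with conn:
        conn.execute("UPDATE orders SET status = 'paid' WHERE id = ?", (order_id,))
        cur = conn.execute(
            "UPDATE inventory SET stock = stock - 1 "
            "WHERE item_id = (SELECT item_id FROM orders WHERE id = ?) AND stock > 0",
            (order_id,),
        )
        if cur.rowcount == 0:
            raise ValueError("out of stock")  # aborts the whole transaction
        conn.execute(
            "INSERT INTO audit_log (order_id, event) VALUES (?, 'paid')", (order_id,)
        )

conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, item_id INTEGER, status TEXT);"
    "CREATE TABLE inventory (item_id INTEGER PRIMARY KEY, stock INTEGER);"
    "CREATE TABLE audit_log (order_id INTEGER, event TEXT);"
    "INSERT INTO orders VALUES (1, 10, 'pending');"
    "INSERT INTO inventory VALUES (10, 0);"  # nothing in stock
)
try:
    mark_order_paid(conn, 1)
except ValueError:
    pass
# The rollback undid the status change too:
status = conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0]
```

Because the inventory update fails, the status change and audit row never land either: the order is still "pending" after the failed call.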
When stored procedures become risky
Stored procedures can be harder to test and version than application code, especially if your team doesn’t treat database changes like real releases. Logic can also become “hidden” from the app, creating coupling you only discover later.
Debugging changes too. Errors may surface as database messages with less context about what the user did. New teammates can struggle because the rules are split across the app and the database, and database logic is easy to miss during onboarding.
If you use a visual tool for most logic, reserve stored procedures for the small, stable core that must run close to the data. Keep everything else where it’s easier to read, trace, and change.
When logic fits best in visual workflows
Visual workflows are step-by-step process logic you can read like a checklist: when something happens, run these actions in this order, with clear decision points. They’re less about heavy calculation and more about how work moves through people, systems, and time.
They shine when you care about shared understanding. If product, ops, support, and engineering all need to agree on how a process works, a visual workflow makes the rules visible. That visibility is often the difference between “the system is broken” and “the process changed last week.”
Visual workflows are usually a good fit for approvals and reviews, routing, notifications and reminders, timed steps (wait 2 days, then escalate), and integrations (call Stripe, send a message, update a CRM).
Example: a customer requests a refund. The workflow checks the order age, routes to a manager if it’s over a threshold, notifies finance when approved, and sends the customer an update. Each step is easy to point to and discuss in plain language, which helps stakeholders sign off and helps new team members understand the “why.”
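The decision points in that refund flow can be written as a tiny routing function. This is a hedged Python sketch of the branching only, not the notifications or waits; the threshold and window values are invented:

```python
from dataclasses import dataclass

# Hypothetical policy values -- real numbers would come from your business.
APPROVAL_THRESHOLD = 500.00
REFUND_WINDOW_DAYS = 90

@dataclass
class RefundRequest:
    amount: float
    order_age_days: int

def route_refund(req: RefundRequest) -> str:
    """Return the next step, mirroring the branches a visual workflow
    would show as decision points."""
    if req.order_age_days > REFUND_WINDOW_DAYS:
        return "reject"          # outside the refund window
    if req.amount > APPROVAL_THRESHOLD:
        return "needs_approval"  # route to a manager
    return "auto_approve"        # small, recent refunds go straight through
```

The point of the visual version is that these same three branches are visible on a canvas, where non-engineers can read and challenge them.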
Tools like AppMaster’s Business Process Editor are built for this kind of logic: you can see the path, the conditions, and the side effects (messages, API calls, status changes) without digging through database scripts.
To keep workflows from turning into spaghetti, keep them small and readable. Give each workflow one outcome, use clear names for steps and branches, limit deeply nested decisions, and log key choices so you can answer “why did this happen?” later.
When a workflow starts doing complex data crunching or touching many tables, that’s usually a signal to move part of the logic elsewhere. Visual workflows work best as the conductor, not the entire orchestra.
When custom code is the right tool
Custom code is logic you write and maintain as software: functions, services, or small libraries that run as part of your app. It’s the most flexible option, and that’s why it should be used on purpose, not by default.
Custom code earns its place when the logic is hard to express safely in a database procedure or a drag-and-drop workflow. If you find yourself bending tools to fit the problem, code is often clearer and easier to keep correct.
Strong signals you should reach for code:
- The problem is algorithmic (pricing rules, route planning, scoring, matching, fraud checks) and has lots of edge cases.
- You need an unusual integration (a partner API with odd auth, complex retries, strict idempotency rules).
- Performance is sensitive (high-volume processing, heavy computations, careful caching) and you need tight control.
- You must share the same logic in multiple places (web, mobile, batch jobs) without copying it.
- You need strong automated tests around the logic because mistakes are expensive.
Code also makes ownership clearer. A team can be responsible for reviewing changes, keeping tests green, and documenting behavior. That beats “it lives in three workflows and nobody is sure which one runs first.”
Example: a refund decision engine that considers order history, fraud signals, shipping status, and time windows. You can still keep the approval steps in a visual workflow, but the decision itself is often better as code with unit tests and version control.
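A sketch of what that decision engine might look like as a pure function. All thresholds and field names here are illustrative assumptions, not a real refund policy; the point is that a function with no I/O is easy to unit-test, review, and version:

```python
from dataclasses import dataclass

@dataclass
class RefundContext:
    days_since_delivery: int
    fraud_score: float      # 0.0 (clean) .. 1.0 (certain fraud)
    prior_refunds: int
    item_returned: bool

def decide_refund(ctx: RefundContext) -> tuple[bool, str]:
    """Pure decision function: no database, no network, so every edge
    case can be pinned down with a one-line test."""
    if ctx.fraud_score >= 0.8:
        return False, "fraud_risk"
    if ctx.days_since_delivery > 30:
        return False, "window_expired"
    if ctx.prior_refunds >= 3 and not ctx.item_returned:
        return False, "return_required"
    return True, "approved"
```

The workflow layer calls this function and acts on the reason code; the decision itself stays in one tested place.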
The cost is real. Custom code needs engineering time, reviews, and ongoing maintenance. Changes may take longer than editing a workflow, and you need a release process. AppMaster can reduce how much code you need by covering common parts with visual logic and modules, while still letting teams export source code and extend where it’s truly necessary.
A step-by-step framework you can reuse
Teams often skip the most useful part: writing the rule clearly, then choosing a home that matches how the rule behaves.
Use this framework whenever new logic shows up:
- Write the rule as one sentence, then label it. If it’s about valid data (constraints, uniqueness, totals that must match), it’s a data rule. If it’s about steps and handoffs (approvals, waits, notifications), it’s a process rule. If it’s heavy math or complex transformations, it’s a computation rule.
- Ask who edits it and how often. If non-technical people must change it weekly, don’t bury it in SQL or a code release. If it rarely changes and must be enforced every time, the database is a stronger candidate.
- Check failure impact and the audit trail you need. If a mistake can cause money loss, compliance issues, or hard-to-fix data, prefer a place with clear logging and tight control.
- Choose the location and define the boundary. Be explicit about inputs, outputs, and errors. Example: “Given an order_id, return allowed_refund_amount or a clear reason code.” That boundary keeps logic from leaking everywhere.
- Pick one layer to stay thin. Decide what should be mostly “dumb” so you don’t duplicate rules. Common choices are: keep the database thin (data integrity only), keep workflows thin (orchestration only), or keep code thin (glue only).
Rule of thumb: put data rules closest to the data, put process rules in a workflow tool, and put computation rules where they’re easiest to test and version.
If you’re using a platform like AppMaster, you can treat the database as guardrails (tables, relationships, basic constraints) and use the visual Business Process Editor for the “who does what next” parts, while reserving custom code for the few cases that truly need it.
Common mistakes that cause messy systems
Messy systems rarely happen because of one bad choice. They happen when logic gets scattered, hidden, or copied until nobody is sure what the system actually does.
Duplication is the classic problem: the same rule exists in two places, but they drift over time. Example: the database rejects refunds over $500 unless there’s an approval record, but a workflow still sends the refund request to payments because it checks a different limit. Both “work” until the first real edge case, then support gets a mystery bug.
Hidden rules are next. Triggers, stored procedures, and quick fixes in the database can be invisible to the people building the UI or workflows. If the rule isn’t documented near the workflow or API that depends on it, changes become guesswork and testing turns into trial and error.
Overstuffed workflows create a different kind of mess. A long drag-and-drop chain with dozens of branches becomes a fragile artifact that nobody wants to touch. In tools like AppMaster, it’s easy to keep adding blocks because it’s fast, but speed today can turn into confusion later if the workflow has no clear boundaries.
Two opposite “too much” mistakes cause long-term pain:
- Too much in the database: every policy change turns into a migration project, and small product tweaks wait on database releases.
- Too much in app code: basic data rules (required fields, allowed statuses, unique constraints) get forgotten, and bad data slips in through imports, admin tools, or future integrations.
A simple habit prevents most of this: keep each rule in one primary home, and write down where it lives and why. If you can’t answer “where is this enforced?” in 10 seconds, you’re already paying the mess tax.
Quick checks: decide in 2 minutes
You’re choosing where a rule is easiest to keep correct, visible, and changeable.
Start with one question: is this rule about data correctness, meaning it must never be bypassed? If yes, push it closer to the database. If it’s about steps, approvals, or notifications, keep it closer to the workflow layer.
A fast checklist:
- Is it enforcing data correctness (preventing negative inventory, blocking duplicate “active” records)? Lean database.
- Does it touch many tables and need set-based updates (lots of rows at once)? Lean database.
- Does it need a clear, human-readable audit trail of who approved what and when? Lean workflow.
- Will non-engineers need to change it weekly or monthly? Lean workflow.
- Does it call external services (payments, messaging, AI)? Lean application or workflow, not the database.
Now think about failure. A rule that can fail should fail in a way humans can recover from.
If you need safe retries and clear error messages, prefer an orchestration layer where you can track state and handle exceptions step by step. Visual workflow tools often make this easier because each step is explicit and can be logged.
A practical tie-breaker:
- If the system must stay correct even when someone writes a new app later, enforce it in the database.
- If the process is meant to be read and reviewed by ops teams, keep it in a visual workflow.
- If it involves complex integrations, heavy computation, or special libraries, use custom code.
Example: “Refund amount cannot exceed original payment” is correctness, so enforce it near the data. “Refunds over $500 require manager approval and then send a Telegram message” is a workflow. In AppMaster, that approval chain fits naturally in the Business Process Editor, while strict constraints stay in the data model.
Example scenario: refunds with approvals
A common real-world case is a refund that needs manager approval above a certain amount, plus notifications and a clean audit trail.
Start by defining one source of truth: a single Refund record with amounts and a clear status field (for example: requested, needs_approval, approved, rejected, processing, paid, failed). Every part of the system should read and write these same fields, instead of keeping parallel states in different places.
What belongs in the database
Put the rules that protect money and data consistency closest to the data.
Use constraints (and sometimes a stored procedure) to ensure you can’t refund more than the captured payment amount, refund an order that’s already fully refunded, create two active refund requests for the same order, or change key amounts after the refund is approved.
Also keep the atomic update here: when a refund request is created, write the Refund row and update the Order totals in one transaction. If either write fails, nothing should partially update.
What fits best in a visual workflow
Approval steps are process, not data protection. A visual workflow is a good home for routing the request to the right manager, waiting for a decision, updating the status, sending reminders, and notifying the requester.
A simple flow might be: create request -> if amount is over limit, set status to needs_approval -> notify manager -> if approved, set to approved -> notify requester -> if no response in 24 hours, send a reminder.
In a tool like AppMaster, this maps cleanly to a Business Process that reacts to status changes and triggers email, SMS, or Telegram messages.
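That flow is really a small state machine, and it helps when no layer can make an illegal status jump. A Python sketch; the transition edges are taken from the flow described above, but treat the exact set as an assumption rather than a complete spec:

```python
# Legal next steps for each status.
ALLOWED_TRANSITIONS = {
    "requested": {"needs_approval", "approved"},
    "needs_approval": {"approved", "rejected"},
    "approved": {"processing"},
    "processing": {"paid", "failed"},
}

def advance(current: str, nxt: str) -> str:
    """Reject illegal status jumps so no caller can skip a step."""
    if nxt not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {nxt}")
    return nxt
```

A check like this can live wherever statuses are written, so a workflow, an admin tool, and custom code all obey the same map.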
What should be custom code
Payment providers have edge cases that don’t always fit neatly into rules or drag-and-drop steps. Keep provider-specific logic in custom code, such as partial refunds with fees or multi-capture payments, webhook reconciliation (provider says “paid” but your app says “processing”), and idempotency and retry handling when the provider times out.
The key is that custom code shouldn’t invent its own statuses. It reads the Refund record, performs the provider action, then writes back the next status and confirmed amounts so the database remains the ledger everyone trusts.
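Idempotency and retries are worth a sketch, because they are the classic reason this layer is code. This Python example uses a fake provider object; many real payment APIs accept an idempotency key in roughly this way, but the method name and behavior here are assumptions, so check your provider's documentation:

```python
import uuid

class FakeProvider:
    """Stand-in for a payment provider that honors idempotency keys."""
    def __init__(self) -> None:
        self._seen: dict[str, str] = {}

    def refund(self, amount: float, idempotency_key: str) -> str:
        if idempotency_key in self._seen:
            return self._seen[idempotency_key]  # replay, not a second refund
        refund_id = f"re_{uuid.uuid4().hex[:8]}"
        self._seen[idempotency_key] = refund_id
        return refund_id

def refund_once(provider: FakeProvider, refund_record_id: int, amount: float) -> str:
    # Derive the key from our own Refund record so a retry after a
    # timeout reuses the same key instead of refunding twice.
    key = f"refund-{refund_record_id}"
    return provider.refund(amount, idempotency_key=key)
```

Calling refund_once twice with the same record id returns the same refund id: the retry is safe because the key, not the call count, identifies the operation.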
Next steps: make the choice stick
A good decision only helps if it stays true six months later. The goal is to make your “where should this logic live?” choice easy to see, easy to test, and hard to accidentally bypass.
Create a simple logic map: a short list of your key rules and the home you picked for each one. Keep it brief and update it when a rule changes. Include the rule name, where it lives (database, workflow, custom code), why (one sentence), what it reads and writes, and who approves changes.
Write down boundaries as non-negotiables that protect your system when people add features later. A helpful format is: “The database guarantees X” and “Workflows enforce Y.” For example, the database guarantees that a refund record can’t exist without an order, while the workflow enforces that refunds over $500 require manager approval.
Plan testing before you change anything. You don’t need a big test plan, just a few cases you’ll rerun every time the rule changes:
- Happy path (expected input, expected result)
- Failure path (missing data, invalid status, duplicate request)
- Concurrency (two people triggering the same action at once)
- Security (a user trying to skip steps or call an endpoint directly)
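Those cases can be captured as plain assertions you rerun on every change. A minimal sketch against a hypothetical refund rule; note that the set here is only a single-process proxy for the concurrency case, where a real system needs a unique constraint or lock:

```python
# Hypothetical rule under test: refunds over 500 need approval; duplicate
# requests for the same order and non-positive amounts are rejected.
def request_refund(amount: float, order_id: int, active: set[int]) -> str:
    if order_id in active:
        return "duplicate"
    if amount <= 0:
        return "invalid_amount"
    active.add(order_id)
    return "needs_approval" if amount > 500 else "approved"

active: set[int] = set()
assert request_refund(100, 1, active) == "approved"       # happy path
assert request_refund(-5, 2, active) == "invalid_amount"  # failure path
assert request_refund(100, 1, active) == "duplicate"      # same action twice
```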
Set ownership and review rules too. Decide who can edit stored procedures, who can edit workflows, and what needs peer review. That’s where many systems either stay healthy or drift into “nobody knows why this works.”
If you want drag-and-drop workflows without giving up real backend structure, a platform like AppMaster (appmaster.io) can be a practical middle ground: model your data, express the process in the Business Process Editor, and regenerate and deploy as requirements change.
Pick one high-impact rule, map it, add the boundaries, and write three test cases. That single habit prevents most logic sprawl.
FAQ
Where should business logic live?
Put it where it stays correct, visible, and easy to change. Keep data integrity rules close to the database, keep step-by-step business processes in workflows, and use code when the rule is too complex or needs tight control and tests.
What are stored procedures best used for?
Use stored procedures for data protection and bulk data work: enforcing invariants across all apps, doing set-based updates, and running atomic transactions that must always be consistent. Keep them small and stable so they don’t become hidden “surprise logic.”
When do visual workflows fit best?
Visual workflows work best for process rules: approvals, routing, notifications, reminders, waiting steps, and integrations that follow a human-readable sequence. They’re ideal when non-engineers or cross-functional teams need to review and adjust how work moves through the system.
When is custom code the right choice?
Choose custom code for algorithmic or unusual logic: complex pricing, fraud decisions, matching/scoring, advanced retries and idempotency, or integrations that need special libraries and careful error handling. Code is also best when you need strong automated tests around expensive mistakes.
How should refund-style logic be split between the database and workflows?
Put the non-negotiable money and consistency rules in the database, and keep the approval and communication steps in a workflow. If you mix them, you’ll either block legitimate process changes with database releases or let bad data slip through when someone bypasses the UI.
How do I avoid duplicating rules across layers?
Keep each rule in one primary home, then make other layers call it instead of re-implementing it. Duplication is what creates “it worked in the UI but failed in the database” bugs when limits, statuses, or validation drift out of sync.
How do I keep workflows from becoming unmanageable?
Keep workflows small and focused: one clear outcome, simple branching, and readable step names. When a workflow starts doing heavy data crunching or touching many tables, split out the computation into code or move the integrity part to the database.
How do I keep database logic maintainable?
Treat database logic like real software changes: version it, review it, test it, and document where it’s enforced. Also ensure errors produce actionable messages at the workflow or API layer so people can understand what failed and what to do next.
How do I handle audit and compliance requirements?
Enforce access and integrity constraints at the data layer, then keep the process trace (who approved what and when) in the workflow layer with explicit status changes and logs. This separation makes audits easier because you can prove both the data rules and the decision trail.
Where does AppMaster fit in?
AppMaster is a practical middle ground when you want structured data plus readable process logic. You can model PostgreSQL-backed data and express business processes in a visual Business Process Editor, while still reserving stored procedures for core guardrails and code for edge cases.


