- Every script type has a fixed governance budget, from 1,000 units for User Events to 10,000 for Scheduled Scripts
- Scripts pass UAT and fail in production because data volume exposes the cumulative cost of record.load and nested searches
- record.load at 10 units is the most overused API in NetSuite; search.lookupFields costs 90 percent less, and record.submitFields half the cost of a full save
- Measure usage at key points with runtime.getCurrentScript().getRemainingUsage(), do not assume
- Map/Reduce is the only safe answer for any workload that touches more than 10,000 records
- Governance failures are silent in logs unless you opt in, which is why most teams discover them in production
Every SuiteScript you deploy runs inside a fixed governance budget. The budget is measured in units. When the budget runs out, the script terminates with SSS_USAGE_LIMIT_EXCEEDED. There is no retry, no warning, no partial commit. The record does not save. The workflow state does not advance. The queued job fails silently.
And yet governance points are the single most under-discussed subject in NetSuite development. Scripts ship to production because they passed UAT on fifty records. Six months later the vendor onboards a client with fifty thousand, and the script that worked perfectly starts failing every day at 09:00.
This article explains why governance limits exist, how to measure what your script actually consumes, and the four refactor patterns that can free up to ninety percent of a typical unit budget.
## Why governance exists at all
NetSuite is a multi-tenant platform. One customer's runaway script cannot be allowed to starve another customer's workload. Oracle implements this as a per-execution resource quota rather than as process isolation, because the script runs inside the NetSuite runtime rather than in a sandboxed container.
This means governance is enforced deterministically by the platform. Every API call costs a known, fixed number of units. The runtime counts them. When the counter hits the script type's limit, the execution terminates.
This is different from a performance budget. Your script can run for the full allowed time and still fail governance. It is a call-counting exercise, not a clock-watching one.
## Script types and their budgets
| Script Type | Governance Limit | Typical Use |
|---|---|---|
| Client Script | 1,000 units | Field-level validation, UI updates |
| User Event | 1,000 units | Before Submit / After Submit record logic |
| Suitelet | 1,000 units | Custom pages and forms |
| Portlet | 1,000 units | Dashboard widgets |
| RESTlet | 5,000 units | External integration endpoints |
| Scheduled Script | 10,000 units | Batch processing |
| Map/Reduce Script | 10,000 units per stage | High-volume processing with checkpoints |
| Mass Update | 10,000 units per record | Bulk record updates |
| Workflow Action Script | 1,000 units | Custom workflow actions |
The lowest budget is 1,000 units. That sounds like a lot. It is not. A single record.load() costs 10 units. A full record.save() costs 20 units. A search.create() plus pagination through 4 pages of results costs 25 units. Three nested searches inside a loop can consume the budget in a handful of iterations.
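To make that arithmetic concrete, here is a toy cost model using the unit prices quoted above. The constants are simplified for illustration, not an official NetSuite API:

```javascript
// Illustrative unit costs from the table above (simplified, not an API).
var COST = { load: 10, save: 20, search: 5, lookupFields: 1, submitFields: 10 };

// Units consumed by a User Event that loads, searches and saves
// once per related record.
function unitsForNaiveLoop(recordCount) {
  return recordCount * (COST.load + COST.search + COST.save);
}

console.log(unitsForNaiveLoop(28)); // 980 units: just under the 1,000 budget
console.log(unitsForNaiveLoop(29)); // 1015 units: SSS_USAGE_LIMIT_EXCEEDED
```

Twenty-nine iterations of a load-search-save loop is all it takes to kill a User Event.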
## The four ways scripts fail at scale
A script that consumed 400 units processing one record might consume 4,000 units processing ten, and 40,000 processing a hundred. The cause is almost always one of four patterns.
### Loading records when you only need a field
record.load() returns a fully hydrated record object ready for editing. This is expensive. NetSuite must fetch every field on the record, every sublist row, and every associated lookup. Cost: 10 units per call.
If all you need is one or two fields, search.lookupFields() returns the same values at 1 unit per call. A User Event that does record.load for each customer it processes can be rewritten to use lookupFields and typically drops from 3,000 units to 300.
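A minimal sketch of the swap. The `custentity_tier` field id is hypothetical, and the NetSuite modules are passed in as parameters here so the two versions can be compared side by side outside the platform; in a real script they come from `define(['N/record', 'N/search'], ...)`:

```javascript
// Before: 10 units per customer. `record` stands in for the N/record module.
function getTierExpensive(record, customerId) {
  var cust = record.load({ type: record.Type.CUSTOMER, id: customerId });
  return cust.getValue({ fieldId: 'custentity_tier' }); // hypothetical text field
}

// After: 1 unit per customer. `search` stands in for the N/search module.
function getTierCheap(search, customerId) {
  var result = search.lookupFields({
    type: search.Type.CUSTOMER,
    id: customerId,
    columns: ['custentity_tier']
  });
  return result.custentity_tier; // free-form text fields come back as strings
}
```

Same value out, one tenth of the cost.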
### Saving the whole record when only one field changed
record.save() writes every field of a dynamic record back to the database. Cost: 20 units. But if your script only modified one field, record.submitFields() writes only the named fields for 10 units, with different sourcing behaviour. For many update patterns, submitFields is both cheaper and safer.
The exception: submitFields behaves like an inline edit, so User Events on the target record fire with type xedit rather than edit, and downstream logic keyed to standard edit events will not run. If you depend on a full edit cycle, you need record.save().
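A minimal sketch of the field-level update. The `custbody_processed` checkbox is a hypothetical field, and `record` stands in for the N/record module (injected as a parameter so the function can be exercised outside NetSuite):

```javascript
// Field-level update: 10 units instead of 20, no full record hydration.
function markInvoiceProcessed(record, invoiceId) {
  return record.submitFields({
    type: record.Type.INVOICE,
    id: invoiceId,
    values: { custbody_processed: true }, // hypothetical checkbox field
    options: { enableSourcing: false, ignoreMandatoryFields: true }
  });
}
```

Disabling sourcing where you can afford to makes the write cheaper still.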
### Nested searches inside a loop
This is the killer. A User Event that runs one search to fetch a customer's contacts (5 units) and then a further search for each contact (5 units each) costs 5 + 5n units. Twenty contacts: 105 units. Five hundred contacts: 2,505 units. And that is before any record operations.
The fix: combine the logic into a single saved search with filters and results spanning the relationship. One 5-unit search replaces N 5-unit searches.
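One way the combined search can look. `search` stands in for the N/search module, injected for testability; `company` and `email` are standard Contact search ids, and the shape of the filter is one of several equivalent ways to express the join:

```javascript
// One 5-unit search over Contact, filtered by its parent Customer,
// replaces a 5-unit search per contact.
function findContactEmails(search, customerId) {
  var emails = [];
  search.create({
    type: search.Type.CONTACT,
    filters: [['company', 'anyof', customerId]],
    columns: ['email']
  }).run().each(function (result) {
    emails.push(result.getValue({ name: 'email' }));
    return true; // keep iterating
  });
  return emails;
}
```

Flat cost of 5 units whether the customer has twenty contacts or five hundred.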
### Event chains that accumulate across scripts
User Events fire other User Events. A beforeSubmit on an Invoice can trigger afterSubmit on a related Sales Order, which can trigger afterSubmit on Customer, which can fire its own secondary logic. Each script starts with a fresh budget, but the cascade can touch a dozen records in one commit, and any one of them hitting 1,000 units terminates the whole chain.
This class of failure is the hardest to diagnose because the failing script is not the one you think you are debugging.
## The measurement toolkit
Governance is invisible by default. You have to opt in to see it.
The single most useful API is runtime.getCurrentScript().getRemainingUsage(). It returns the number of units still available in the current execution. Log this value at key points:
```javascript
var script = runtime.getCurrentScript();
log.audit('GOV-START', 'Remaining: ' + script.getRemainingUsage());
// ... expensive operation ...
log.audit('GOV-MID', 'Remaining: ' + script.getRemainingUsage());
```
A User Event that starts with 1,000 units and ends with 280 consumed 720 units for that record. Multiply by expected production volume to predict whether it will hold.
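This logging pattern can be wrapped in a small helper that reports the per-phase cost rather than raw remaining values. The helper and its `GOV-` label convention are ours, not a NetSuite API; `runtime` and `log` stand in for the N/runtime and N/log modules, injected so the helper stays testable:

```javascript
// Returns a checkpoint function that logs how many units the phase
// since the previous checkpoint consumed.
function makeGovLogger(runtime, log) {
  var script = runtime.getCurrentScript();
  var last = script.getRemainingUsage();
  return function checkpoint(label) {
    var now = script.getRemainingUsage();
    log.audit('GOV-' + label, 'Remaining: ' + now + ', phase cost: ' + (last - now));
    last = now;
    return now;
  };
}
```

Phase costs multiplied by worst-case volume give the prediction you actually need.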
For long-running workloads, design for resumption before the budget runs out. The Map/Reduce framework checkpoints natively between stages. For a plain Scheduled Script, check remaining usage as you go; when it runs low, persist the resumption point (for example the last processed ID) and requeue the script through the N/task module so the next execution picks up where you left off.
## The four refactor swaps
| Expensive | Units | Better | Units | When |
|---|---|---|---|---|
| `record.load()` | 10 | `search.lookupFields()` | 1 | You only need a handful of fields |
| `record.save()` on dynamic record | 20 | `record.submitFields()` | 10 | You changed one or two fields and do not need User Events to fire |
| Nested `search.create()` in loop | 5n | Single saved search spanning the relationship | 5 | The data relationship is expressible as a search join |
| `search.runPaged().fetch()` loop | 5 + 5 per page | `search.run().each()` | 5 total | Streaming results rather than collecting them |
These four swaps applied systematically will cut typical script consumption by 70 to 90 percent. They also make the code shorter and more readable.
## When to split: Scheduled Script versus Map/Reduce
Once a script genuinely needs to touch ten thousand records, no amount of refactoring will fit it inside a Scheduled Script's 10,000-unit budget. At this point you have two choices.
Scheduled Script with self-rescheduling:
- Process records in batches of a few hundred
- Persist a bookmark (last processed ID) in a custom record
- Call `task.create({ taskType: task.TaskType.SCHEDULED_SCRIPT })` from the N/task module, then `submit()`, to requeue
- Simple to reason about, slow to complete
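The bookmark-and-requeue loop above can be sketched as follows. `runtime` and `task` stand in for N/runtime and N/task, `saveBookmark` and the script/deployment ids are hypothetical (you would back the bookmark with a custom record), and the 200-unit floor is an assumed safety margin:

```javascript
var USAGE_FLOOR = 200; // requeue before the 10,000-unit budget runs dry

// Processes records until the budget runs low, persisting a bookmark
// after each record and requeuing itself when it bails out.
// Returns the number of records processed in this run.
function processBatch(runtime, task, records, processOne, saveBookmark) {
  var script = runtime.getCurrentScript();
  for (var i = 0; i < records.length; i++) {
    processOne(records[i]);
    saveBookmark(records[i].id); // persist the resumption point
    if (script.getRemainingUsage() < USAGE_FLOOR) {
      task.create({
        taskType: task.TaskType.SCHEDULED_SCRIPT,
        scriptId: 'customscript_batch',     // hypothetical ids
        deploymentId: 'customdeploy_batch'
      }).submit();
      return i + 1;
    }
  }
  return records.length;
}
```

The next run reads the bookmark, skips everything at or before it, and continues.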
Map/Reduce:
- Framework handles chunking, parallel execution and checkpointing
- Each stage (getInputData, map, reduce, summarize) gets its own 10,000-unit budget
- Automatically parallelises across Oracle infrastructure
- Near-linear throughput scaling
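For context, a minimal Map/Reduce shape looks like this. In NetSuite the stages are returned from a `define(['N/search'], ...)` module; they are shown unwrapped here, with `search` injected as a parameter, so they can be exercised standalone. The customer filter is illustrative:

```javascript
// getInputData returns a search; the framework runs it and feeds each
// result to map() with its own fresh unit budget.
function getInputData(search) {
  return search.create({
    type: search.Type.CUSTOMER,
    filters: [['isinactive', 'is', 'F']],
    columns: ['internalid']
  });
}

// map receives one search result serialised as JSON in context.value.
function map(context) {
  var result = JSON.parse(context.value);
  context.write({ key: result.id, value: result.values });
}

// summarize sees errors from earlier stages; log and continue rather
// than letting one bad record fail the whole run.
function summarize(summary) {
  summary.mapSummary.errors.iterator().each(function (key, error) {
    // inspect key/error here (e.g. log.error(key, error))
    return true;
  });
}
```

Because each stage restarts the unit counter, the per-record logic in `map` only has to fit one record inside 10,000 units, not the whole dataset.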
The threshold we use in practice: below 1,000 records, a Scheduled Script is fine. From 1,000 to 10,000, a Scheduled Script with self-rescheduling works but is slow. Above 10,000, Map/Reduce is the only answer. Above 50,000, Map/Reduce with tuned concurrency settings is the only answer.
## The governance review checklist
Before any User Event, Client Script or Suitelet ships to production, we run it through a six-point review:
- What is the worst-case volume per execution, and what is the unit cost at that volume?
- Has `record.load` been replaced by `lookupFields` wherever possible?
- Are any searches inside loops, and if so, can they be replaced with a single joined search?
- Are events firing other events in a chain, and has the total cross-script consumption been measured?
- Is there logging of `getRemainingUsage()` at start, middle and end?
- For scheduled workloads: what is the rescheduling or Map/Reduce plan if the budget is exceeded?
A script that cannot answer all six questions is not ready for production. It may work in UAT. It may even work for months in production. But when the volume arrives, it fails silently, and you find out from an angry user call at 09:00.
## The bottom line
Governance points are not a theoretical constraint. They are the difference between a script that works forever and a script that works until it does not. The good news: the refactor patterns are mechanical, well-documented and low-risk. The better news: a script written governance-aware from the start almost never needs to be rewritten later.
If you take one practice from this article, make it the final review step. Before any script ships, someone other than the developer asks: "What is the worst case, and how do we know it fits?"
Need a governance review, architecture assessment, or custom SuiteScript delivered to this standard?
Book a Free Consultation