- One sandbox is sufficient only for trivial admin work; any organisation with two concurrent workstreams needs at least two sandboxes
- DEV and UAT must never share a sandbox - UAT contamination destroys testing credibility and delays go-live
- Sandbox refreshes overwrite everything; SDF is the rollback mechanism because it redeploys customisation after refresh
- Refresh cadence should be predictable: UAT before every test cycle, DEV on a quarterly or per-release cadence
- Freeze windows around UAT cycles are not bureaucratic - they are how you prevent the test environment from changing underneath the testers
- Additional sandboxes are billable line items from Oracle; treat them as infrastructure and budget accordingly
- The promotion path DEV -> UAT -> Production is mandatory, not optional, for any change that touches custom scripts, workflows or records
A NetSuite production environment includes one standard sandbox. For a small single-workstream organisation, this is adequate. For anyone running more than one concurrent change - which means almost every organisation six months past go-live - it is not.
The failure mode is predictable. Two projects are in flight. The first project's UAT is scheduled. The second project's development is continuing. Both share the same sandbox. UAT testers find defects that are not in the build under test - they are side effects of the other project's in-flight changes. The UAT report becomes uninterpretable. The sign-off meeting turns into a debugging meeting. The release date slips.
This article makes the case for a minimum two-sandbox topology, describes the cycle patterns that work, and lays out the refresh and promotion governance that keeps sandboxes useful rather than cluttered.
Why one sandbox is not enough
Imagine a single bathroom in an office of twenty people. It works in theory. In practice, the queue starts at 09:00.
One sandbox has the same problem. It serves multiple competing demands:
- Developers need a live, mutable environment where they can break things
- Testers need a stable, controlled environment where the only things that change are the ones being tested
- End users need a clean environment for training
- Finance may need a sandbox for period-end simulation
These purposes conflict. Developers breaking things is incompatible with testers needing stability. Testers running test cycles is incompatible with trainers needing a pristine environment.
The solution is not time-slicing. It is separate sandboxes with separate purposes.
The minimum viable topology
A two-sandbox topology separates the two most incompatible purposes: development and testing.
- SB1 = DEV: where developers work. State is messy. Data changes constantly. Refresh is infrequent (quarterly) because developers need continuity of in-progress work.
- SB2 = UAT: where testers work. State is controlled. Data is clean. Refresh is frequent (before every UAT cycle) to ensure parity with production.
This is the minimum serious topology. It is adequate for an organisation running one or two concurrent workstreams.
The three-sandbox topology
Organisations running three or more concurrent workstreams need more separation. The standard pattern:
- SB1 = Dev 1: workstream A development
- SB2 = Dev 2: workstream B development
- SB3 = UAT: shared testing environment for the workstream that is currently in UAT
The Dev sandboxes are owned by their workstreams. UAT is scheduled, with a visible calendar that workstreams book against. Only one workstream is in UAT at a time - that discipline is how testing remains interpretable.
For the biggest programmes, a fourth sandbox dedicated to end-user training is added. Training sandboxes need extreme data cleanliness because non-technical users notice every anomaly. The worst place to run training is a development sandbox whose demo data was last tidied six months ago.
The refresh cycle
A sandbox refresh copies production data and configuration into the sandbox, overwriting whatever was there. This is destructive and irreversible.
The common misconception is that a refresh wipes your custom scripts and workflows. It does not. Sandbox refreshes bring over production's custom objects - including any that are currently live. What they do wipe is sandbox-only customisation that has not yet been promoted to production.
The implication: anything in the sandbox that is not also in production, or tracked in SDF source control, is lost on refresh. This includes:
- Unreleased workflows
- Unreleased custom scripts
- Unreleased custom records
- Test data created in the sandbox
- Role configurations specific to the sandbox
All of these are recoverable only if they exist in SDF source or are re-created after the refresh.
The refresh protocol
Every refresh follows the same protocol:
1. Announce the refresh window at least 48 hours in advance
2. Confirm all in-flight changes are committed to SDF
3. Take a final snapshot of custom objects via SDF import
4. Request the refresh from the production account (Setup > Company > Sandbox Accounts)
5. Wait for completion (typically a few hours)
6. Redeploy the SDF project to recover custom objects not in production
7. Re-apply any sandbox-specific configuration (test data, sandbox-only roles)
8. Communicate availability
Skipping step two is the most common cause of post-refresh panic. Teams assume they will remember their changes, then realise after refresh that a week's work is gone because it was not in SDF.
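The SDF-dependent steps of this protocol can be sketched with the SuiteCloud CLI. This is a dry-run sketch: every command is echoed rather than executed, and the `suitecloud` invocations shown are assumptions about your tooling, not part of the protocol itself.

```shell
#!/bin/sh
# Dry-run sketch of the SDF steps around a sandbox refresh.
# Commands are echoed, not executed; nothing here touches an account.
set -eu

run() { echo "would run: $*"; }

# Before the refresh: snapshot custom objects into the SDF project
# (flags omitted; the interactive command prompts for object types).
run suitecloud object:import

# After the refresh completes: redeploy the project to recover any
# customisation that was not yet in production.
run suitecloud project:validate --server
run suitecloud project:deploy

# Sandbox-only configuration cannot be recovered from SDF.
echo "manual step: re-apply sandbox-only roles and test data"
```

The `run` wrapper is the dry-run guard: drop it only once the auth ID points at the sandbox you intend to rebuild.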
Refresh cadence
Different sandboxes refresh at different rhythms.
| Sandbox | Recommended Cadence | Trigger |
|---|---|---|
| UAT | Before every UAT cycle | Start of a test round |
| DEV | Quarterly or per-major-release | When dev state is sufficiently divergent from prod |
| Training | Before each training course | Trainer request |
| Dedicated subsidiary/feature sandboxes | As needed | Specific use case |
The sandbox that most frequently goes wrong is DEV. Teams let it diverge for too long. After six months, the sandbox is so far from production that testing in it is unreliable. The discipline is to refresh DEV on a fixed cadence - quarterly is standard - whether or not anyone asks for it.
The promotion path
Every change follows a one-way path: DEV -> UAT -> Production. No exceptions. Promotions between sandboxes use SDF, not manual configuration.
The rationale: manual configuration between environments creates drift. Even a diligent developer re-creating the same workflow in UAT as in DEV will make small unintended differences. Over time these accumulate. Eventually the configurations are not the same even though everyone believes they are.
SDF removes this class of error. The SDF project is the source of truth. Deployments to UAT and Production apply the same project definition. The only differences between environments are data-dependent references, which SDF handles through account-specific values.
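By way of illustration, an SDF account-customisation project carries its own manifest and deployment definition, which is what makes it the single source of truth. The project name below is a placeholder and the fragment is a minimal sketch, not a complete project:

```xml
<!-- manifest.xml: identifies the project that SDF treats as the
     source of truth for every environment -->
<manifest projecttype="ACCOUNTCUSTOMIZATION">
  <projectname>example-customisations</projectname>
  <frameworkversion>1.0</frameworkversion>
</manifest>

<!-- deploy.xml: the same object list is applied to UAT and to
     Production, so the environments cannot drift -->
<deploy>
  <objects>
    <path>~/Objects/*</path>
  </objects>
</deploy>
```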
Rules for the promotion path:
- Every change that passes UAT is deployed to Production from SDF, not from the UAT sandbox
- Hotfixes do not skip UAT unless explicitly authorised under incident procedures
- Anything deployed to Production must exist in the SDF repository
- The UAT sandbox must be a deployment target of the same SDF project that will deploy to Production
- Emergency access to Production for direct configuration is documented, approved and time-boxed
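Under these rules, a promotion is the same project deployed twice. A dry-run sketch follows: commands are echoed rather than executed, the auth IDs are hypothetical, and it assumes the SuiteCloud CLI convention of recording the target account as `defaultAuthId` in `project.json`.

```shell
#!/bin/sh
# Promotion sketch: one SDF project, two deployment targets.
# Dry run only - commands are printed, not executed.
set -eu

run() { echo "would run: $*"; }

deploy_to() {
    # Retarget the project at the given (hypothetical) auth ID,
    # then validate and deploy the identical project definition.
    echo "retarget project.json defaultAuthId -> $1"
    run suitecloud project:validate --server
    run suitecloud project:deploy
}

deploy_to "sb2_uat"     # first: the UAT sandbox
deploy_to "production"  # only after UAT sign-off
```

Because both targets receive the same project definition, the only per-environment differences are the account-specific values SDF resolves at deploy time.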
Freeze windows
During a UAT cycle, the UAT sandbox should not change. No new code deploys, no configuration edits, no test data resets. This is a freeze window.
Freeze windows are not optional. They are how you get a reliable UAT result. A sign-off means little if the environment changed mid-cycle: a script that passed on Wednesday proves nothing about a build that was altered on Thursday.
Typical UAT freeze window:
- Start: when UAT cycle begins (entry criteria met, testers granted access)
- End: when all test scripts complete, defects are triaged and sign-off is granted
- Scope: the UAT sandbox only - DEV and others continue as normal
- Exceptions: only defect fixes for the current release, with a re-test cycle
A well-run UAT freeze window takes discipline. The payoff is signed-off UAT that the business trusts.
Sandbox permissions
Who can push to which sandbox is a governance question as much as a technical one.
- DEV: Developer role has full access. Architect role has full access. Others read-only.
- UAT: Architect role has deployment access (via SDF). Testers have functional user access. Developers have read-only for diagnostic purposes.
- Training: Training administrator has access. Read-only for others.
- Production: Architect role with emergency override. Controlled Change Board approval for standard deployments.
The principle: the closer to production, the fewer hands that touch the environment, and the more disciplined the change process.
The business case for extra sandboxes
Additional sandboxes cost money. A second sandbox is typically a low-four-figure annual line item on the Oracle renewal. A third sandbox adds another tranche.
The question customers ask: "do we really need this?" The answer depends on cost of rework. In every programme we have run, a single contaminated UAT cycle - where testers cannot tell whether a defect is from the release under test or from another in-flight change - costs more in re-test time, lost confidence and schedule slip than the sandbox cost for the year.
Presenting the business case as "one delayed release versus one annual sandbox fee" is usually enough to win approval.
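The arithmetic behind that pitch can be made concrete. Every figure below is a hypothetical placeholder, not Oracle pricing; substitute numbers from your own renewal and day rates.

```python
# Hypothetical cost comparison: one contaminated UAT cycle versus one
# year of a second sandbox. All figures are illustrative placeholders.
sandbox_annual_fee = 4_000   # low-four-figure sandbox line item
retest_days = 10             # re-running the contaminated UAT cycle
blended_day_rate = 800       # tester + developer + PM time per day
slip_weeks = 2               # schedule slip from the failed cycle
slip_cost_per_week = 5_000   # delayed benefits, extended contracts

rework_cost = retest_days * blended_day_rate + slip_weeks * slip_cost_per_week

print(f"One contaminated UAT cycle: £{rework_cost:,}")
print(f"One year of a second sandbox: £{sandbox_annual_fee:,}")
print(f"Break-even after {sandbox_annual_fee / rework_cost:.2f} contaminated cycles")
```

With these placeholder figures the sandbox pays for itself well before a single contaminated cycle completes, which is the shape of the argument regardless of the exact numbers.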
The bottom line
Sandbox strategy is not sexy. It does not feature in sales pitches. It is infrastructure in the purest sense: invisible when it works, visible only when it fails. And when it fails, the failure is expensive.
The minimum viable topology is two sandboxes with a clean DEV/UAT split, a refresh cadence aligned to test cycles, and promotion discipline enforced through SDF. Any organisation running more than a single workstream at a time needs at least this.
If you are already running on one sandbox and experiencing UAT contamination, schedule the conversation with finance now. The second sandbox pays for itself the first time UAT is not interrupted.
Need a governance review, architecture assessment, or custom SuiteScript delivered to this standard?
Book a Free Consultation