Primentra · March 1, 2026 · 14 min read

SQL Server Staging Tables: Import Actions, Merge Modes, and Per-Field Rules


Friday afternoon. You get an email: “Here’s the updated branch list from the ERP. Can you load this into production by Monday?”

Attached: a CSV with 4,200 rows. Some are new branches. Some update existing ones. A few have blank fields where production currently has values. And one column that used to be “Region” is now labeled “Territory” with different codes.

You BULK INSERT the file into a staging table. That part takes three seconds. The hard part — deciding how to move that data into production without breaking anything — is where most DBAs spend the next four hours.

This post walks through every configuration option in Primentra’s staging engine: seven import actions, five merge modes, per-field overrides, sentinel values, and the validation layer that runs before any data touches production. If you want to automate batch processing on top of these staging tables, the staging scheduler deep-dive covers how to trigger processing automatically.

Staging pipeline — validate before you load

[Diagram: CSV, SSIS, and API sources land in the staging table stg.Branches (Code, Name, Region, Status); after validation, data flows into production dbo.Branches.]

What is a staging table?

A staging table is a permanent intermediate table where raw data lands before it gets validated, transformed, and loaded into production. It sits between your data source (a CSV, an SSIS package, an API payload) and the tables your application actually reads.

I keep staging tables in a dedicated stg schema. They mirror the production table’s column names but store everything as NVARCHAR — so you can catch type conversion errors during validation instead of during the insert.

| | Staging table | Temp table (#temp) |
|---|---|---|
| Lifetime | Permanent — persists between sessions | Gone when the session ends |
| Storage | Dedicated database/schema, own filegroup | tempdb — shared with every other process |
| Indexes | Full indexing, statistics, partitioning | Supports indexes but contends for tempdb |
| Visibility | Any process can read it | Only the creating session (unless ##global) |
| Best for | Recurring ETL, audit trails, large loads | Ad-hoc transforms, small one-off scripts |

How the staging engine works

Every entity in Primentra can have its own staging table. When you enable staging for an entity, the system creates a table in the stg schema with one column per attribute — all typed as NVARCHAR. Type casting happens during processing, not during the load. If a value can’t be cast to the expected type, the error gets logged with the original value intact.

Every staging table also gets system columns that control how each row is processed:

| Column | Purpose |
|---|---|
| Code | Business key — matches rows between staging and production |
| Name | Display name (optional, falls back to Code) |
| ImportAction | What to do with this row: insert, update, delete, etc. |
| ImportStatus | 0 = Ready, 1 = OK, 2 = Error, 3 = Processing |
| BatchId | Links the row to a processing batch for audit |
| BatchTag | Freeform label for grouping rows (e.g. source file name) |
| ErrorCode | Bitwise flags indicating which validations failed |
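Put together, a generated staging table for the Branches example might look roughly like this. This is an illustrative sketch, not the exact DDL Primentra emits — the key widths and defaults are assumptions:

```sql
-- Sketch of a staging table: attribute columns are all NVARCHAR so bad
-- values land safely and fail later, during typed validation.
CREATE TABLE stg.Branches (
    Id           BIGINT IDENTITY(1,1) PRIMARY KEY,
    -- business columns: one NVARCHAR per attribute
    Code         NVARCHAR(250)  NOT NULL,          -- business key
    Name         NVARCHAR(500)  NULL,              -- falls back to Code
    Region       NVARCHAR(4000) NULL,
    Status       NVARCHAR(4000) NULL,
    -- system columns
    ImportAction TINYINT        NOT NULL DEFAULT 0, -- 0 = Upsert
    ImportStatus TINYINT        NOT NULL DEFAULT 0, -- 0 = Ready
    BatchId      BIGINT         NULL,
    BatchTag     NVARCHAR(200)  NULL,
    ErrorCode    INT            NOT NULL DEFAULT 0  -- bitwise flags
);
```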

Seven import actions

Each staging row carries an ImportAction value that tells the engine what to do with that row. The entity configuration sets a default, but you can override it per row. A single batch can mix inserts, updates, and deletes.

| Value | Name | What it does |
|---|---|---|
| 0 | Upsert | Insert if Code is new, update if it exists. The default for most loads. |
| 1 | Insert Only | Strict insert. Row is rejected if the Code already exists in production. |
| 2 | Update Only | Strict update. Row is rejected if the Code is not found in production. |
| 3 | Delete | Delete the matching production row. Blocked if other entities reference it via domain attributes. |
| 4 | Purge | Delete the row regardless of references. Domain attributes in other entities that pointed to this row become orphaned. |
| 5 | Delete Cascade | Clear domain references in other entities (set to empty), then delete. The safe way to remove referenced records. |
| 6 | Purge Cascade | Clear domain references and delete, skipping all validation checks. The nuclear option. |

Upsert: the workhorse

ImportAction 0 routes each row based on whether its Code exists in production. The engine pre-computes a lookup table of existing codes, then runs INSERT and UPDATE as separate steps. No MERGE statement — separate INSERT/UPDATE is safer to debug and avoids the known concurrency issues with MERGE at scale.

-- Step 1: Insert rows where Code doesn't exist in production
INSERT INTO EntityRows (EntityId, Code, Name)
SELECT @EntityId, s.Code, ISNULL(s.Name, s.Code)
FROM stg.Branches s
WHERE s.BatchId = @BatchId
  AND s.ImportStatus = 3        -- Processing
  AND s.ImportAction = 0        -- Upsert
  AND NOT EXISTS (
      SELECT 1 FROM EntityRows er
      WHERE er.EntityId = @EntityId AND er.Code = s.Code
  );

-- Step 2: Update rows where Code already exists
UPDATE er
SET er.Name = ISNULL(s.Name, er.Name),
    er.ModifiedAt = GETUTCDATE()
FROM EntityRows er
JOIN stg.Branches s ON s.Code = er.Code
WHERE er.EntityId = @EntityId
  AND s.BatchId = @BatchId
  AND s.ImportStatus = 3         -- Processing (same filter as Step 1)
  AND s.ImportAction IN (0, 2);  -- Upsert or Update Only

Upsert does not delete production rows that are missing from the staging batch. If staging has 200 rows and production has 250, those 50 extra production rows stay put. There is no “sync” mode that mirrors the staging table to production. If you need to remove records, use ImportAction 3–6 on the specific rows that should be deleted.
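If you do need mirror-style behavior, you can stage the deletions yourself before processing. A sketch, assuming the staging layout described above (this is a hypothetical helper query, not a built-in sync mode):

```sql
-- Stage a delete for every production Code missing from this batch.
INSERT INTO stg.Branches (Code, ImportAction, ImportStatus, BatchId)
SELECT er.Code,
       3,                -- ImportAction 3 = Delete (use 5 for Delete Cascade)
       0,                -- Ready
       @BatchId
FROM EntityRows er
WHERE er.EntityId = @EntityId
  AND NOT EXISTS (
      SELECT 1 FROM stg.Branches s
      WHERE s.BatchId = @BatchId AND s.Code = er.Code
  );
```

Run this after loading the batch and before processing it; the engine then treats the additions as ordinary delete rows, with all the usual reference checks.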

Delete vs. Purge vs. Cascade

The four delete variants exist because master data records are often referenced by other entities. A “Region” entity row might be used as a domain attribute in Branches, Customers, and Suppliers. Deleting that region without handling the references would leave orphaned foreign keys everywhere.

| Action | References exist? | What happens |
|---|---|---|
| Delete (3) | Yes | Row is rejected with error: "Record is referenced by other entities" |
| Delete (3) | No | Row is deleted |
| Purge (4) | Yes or No | Row is deleted. References become orphaned (no cleanup). |
| Delete Cascade (5) | Yes | Domain references in other entities are set to NULL first, then the row is deleted. |
| Purge Cascade (6) | Yes or No | References cleared, row deleted, all validation skipped. |

The cascade logic runs before the delete: it finds every EntityValues.DomainValue that points to the target row and sets it to NULL. Only after the references are cleared does the row get removed. This is why Delete Cascade (5) is the recommended option for removing referenced records — it cleans up after itself.
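In outline, the two-step cascade looks like this. This is a simplified sketch of the ordering, assuming EntityValues.DomainValue stores the Id of the referenced row — the engine's actual statements may differ:

```sql
-- Step 1: clear every domain reference pointing at rows staged for cascade delete.
UPDATE ev
SET ev.DomainValue = NULL
FROM EntityValues ev
WHERE ev.DomainValue IN (
    SELECT er.Id
    FROM EntityRows er
    JOIN stg.Branches s ON s.Code = er.Code
    WHERE er.EntityId = @EntityId
      AND s.BatchId = @BatchId
      AND s.ImportAction IN (5, 6)   -- Delete Cascade / Purge Cascade
);

-- Step 2: only then delete the rows themselves.
DELETE er
FROM EntityRows er
JOIN stg.Branches s ON s.Code = er.Code
WHERE er.EntityId = @EntityId
  AND s.BatchId = @BatchId
  AND s.ImportAction IN (5, 6);
```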

Five merge modes

Import action decides whether a row gets inserted, updated, or deleted. Merge mode decides how individual fields get written during that insert or update. This is where the real granularity lives.

| Mode | Staging NULL = | Staging has a value = |
|---|---|---|
| Overwrite | Keep production value (no change) | Replace production value |
| Overwrite All | Clear production value (write NULL) | Replace production value |
| Fill Empty | No change | Write only if production field is NULL |
| Ignore | No change | No change (field is skipped entirely) |
| Overwrite with Sentinel | Keep production value | Replace (unless value = sentinel marker, which clears the field) |

Overwrite (the default)

Non-NULL staging values replace production values. NULL is ignored — production keeps what it has. A source system that sends incomplete data cannot accidentally wipe out fields.

-- Overwrite mode: only update if staging has a value
UPDATE ev
SET ev.TextValue = s.Region
FROM EntityValues ev
JOIN EntityRows er ON ev.EntityRowId = er.Id
JOIN stg.Branches s ON s.Code = er.Code
WHERE ev.AttributeId = @RegionAttrId
  AND s.BatchId = @BatchId
  AND s.Region IS NOT NULL;     -- NULL = no change

Overwrite All

Every staging value is written to production, including NULLs. If the staging column is empty, the production field is cleared. Only use this when the source owns every field and missing values are intentional.

-- Overwrite All: NULLs in staging clear production values
UPDATE ev
SET ev.TextValue = s.Region     -- NULL writes NULL
FROM EntityValues ev
JOIN EntityRows er ON ev.EntityRowId = er.Id
JOIN stg.Branches s ON s.Code = er.Code
WHERE ev.AttributeId = @RegionAttrId
  AND s.BatchId = @BatchId;     -- no NULL filter

Fill Empty

Only write to production fields that are currently NULL. If production already has a value — any value — the staging data is ignored for that field. This is for data enrichment: filling in gaps from a secondary source without overriding what the primary source established.

-- Fill Empty: only write where production is NULL
UPDATE ev
SET ev.TextValue = s.Region
FROM EntityValues ev
JOIN EntityRows er ON ev.EntityRowId = er.Id
JOIN stg.Branches s ON s.Code = er.Code
WHERE ev.AttributeId = @RegionAttrId
  AND s.BatchId = @BatchId
  AND s.Region IS NOT NULL      -- staging has something
  AND ev.TextValue IS NULL;     -- production is empty

Ignore

The field is skipped during processing. No read, no write, no comparison. Use Ignore as the entity default when you want to control exactly which fields are writable. Set individual fields to Overwrite or Fill Empty as needed. Anything not explicitly configured stays untouched.

Overwrite with Sentinel

Overwrite has no way to express “write NULL” — NULL in staging already means no change. Sentinel values solve this.

You pick a sentinel value per data type — a marker that would never appear in real data. The defaults are:

  • Text: ~NULL~
  • Number: -98765432101234567890
  • DateTime: 5555-11-22T12:34:56

All three are configurable per entity. When the staging engine sees a sentinel, it writes NULL to production. Regular values overwrite as normal. NULL means no change.

DECLARE @SentinelText NVARCHAR(20) = '~NULL~';

UPDATE ev
SET ev.TextValue = CASE
    WHEN s.Region = @SentinelText THEN NULL     -- sentinel: clear the field
    ELSE s.Region                                -- real value: overwrite
    END
FROM EntityValues ev
JOIN EntityRows er ON ev.EntityRowId = er.Id
JOIN stg.Branches s ON s.Code = er.Code
WHERE ev.AttributeId = @RegionAttrId
  AND s.BatchId = @BatchId
  AND s.Region IS NOT NULL;     -- NULL still means no change

Per-field rules

The merge mode above applies as the entity-wide default. Per-field rules let you override it at the attribute level. Each attribute can also have its own error handling strategy.

A typical configuration for a Branches entity where the ERP owns most fields but local offices manage some:

| Attribute | Merge mode | On error | Why |
|---|---|---|---|
| Region | Auto (inherits Overwrite) | Error Row | Invalid region breaks downstream reports |
| Status | Overwrite | Error Row | ERP is the authority on branch status |
| Desk Number | Fill Empty | Skip Field | Local office data, don't override |
| Classification | Overwrite with Sentinel | Error Row | Needs ability to clear obsolete codes |
| Internal Notes | Ignore | Skip Field | Not managed through staging at all |

The “Auto” merge mode means “use whatever the entity default is.” When the engine processes a field, it checks: does this attribute have a specific merge mode? If yes, use it. If it’s set to Auto, fall back to the entity-level default. This means you can change the entity default and all Auto fields follow, without reconfiguring each one individually.
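That fallback can be sketched as a single lookup. StagingConfigurations and StagingFieldRules are the configuration tables this post mentions; the column names and the convention that 0 means Auto are assumptions for illustration:

```sql
-- Resolve the effective merge mode per attribute: a field-level rule wins
-- unless it is Auto (assumed 0 here), in which case the entity default applies.
SELECT a.Id AS AttributeId,
       COALESCE(NULLIF(fr.MergeMode, 0), c.DefaultMergeMode) AS EffectiveMergeMode
FROM Attributes a
JOIN StagingConfigurations c ON c.EntityId = a.EntityId
LEFT JOIN StagingFieldRules fr ON fr.AttributeId = a.Id
WHERE a.EntityId = @EntityId;
```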

Error handling: Error Row vs. Skip Field

Each attribute has an OnValidationError setting with two options:

  • Error Row — a validation failure on this field blocks the entire row. The row gets ImportStatus 2 (Error), the ErrorCode gets the relevant bit set, and none of the row’s data touches production.
  • Skip Field — the invalid field is skipped, but the rest of the row continues processing. The error still gets logged to StagingErrorDetails for audit purposes, but the row’s ErrorCode is not set. The row ends up with ImportStatus 1 (OK) because it was processed — just without that one field.

This is per-attribute, so you can be strict on critical fields and lenient on optional ones. Region must be a valid domain reference, so Error Row makes sense — loading a branch with a broken region cascades errors into every report that joins on it. But rejecting an entire branch because the Notes field is 10 characters too long makes no sense. Skip the field, log the warning, process the row.
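As a sketch of the difference for a single length check (the column shapes on StagingErrorDetails are assumptions, not Primentra's actual internals):

```sql
-- Error Row: flag the whole row; none of its data will touch production.
UPDATE s
SET s.ImportStatus = 2,               -- Error
    s.ErrorCode = s.ErrorCode | 256   -- bit 256 = Text too long
FROM stg.Branches s
WHERE s.BatchId = @BatchId
  AND LEN(s.Region) > @MaxLen;

-- Skip Field: log the failure for audit, but leave the row processable
-- (ImportStatus and ErrorCode stay untouched).
INSERT INTO StagingErrorDetails (StagingRowId, AttributeName, StagedValue, ErrorMessage)
SELECT s.Id, 'Region', s.Region, 'Text value exceeds max length'
FROM stg.Branches s
WHERE s.BatchId = @BatchId
  AND LEN(s.Region) > @MaxLen;
```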

What gets validated before the load

All validation runs against the staging table before any data touches production. Rows that fail validation get flagged with a bitwise error code so you can see exactly which checks failed.

| Check | Error code | What it catches |
|---|---|---|
| Duplicate Code | 1 | Two rows in the same batch have the same Code |
| Empty Code | 2 | Code is NULL or blank — every row needs a business key |
| Code exists (insert-only) | 4 | ImportAction = 1 but the Code is already in production |
| Code not found | 8 | ImportAction = 2/3/4/5/6 but the Code doesn't exist |
| Invalid ImportAction | 16 | ImportAction is not 0–6 |
| Delete blocked | 64 | ImportAction = 3/4 but other entities reference this row |
| Required field empty | 128 | A required attribute is NULL on an insert/upsert row |
| Text too long | 256 | Text value exceeds the attribute's max length |
| Invalid integer | 512 | Value can't be cast to INT |
| Invalid decimal | 1024 | Value can't be cast to DECIMAL |
| Invalid datetime | 2048 | Value can't be cast to DATETIME2 |
| Invalid boolean | 4096 | Value is not 0/1/true/false/yes/no |
| Domain value not found | 8192 | Referenced Code doesn't exist in the domain entity |

Error codes are bitwise, so a row with both a duplicate code and an invalid datetime gets ErrorCode 2049 (1 | 2048). The StagingErrorDetails table stores one row per error per staging row, with the attribute name, the staged value that failed, and a human-readable message. You can export the full error report as CSV from the Batch History tab.
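Because the codes are bit flags, you can decode them in plain T-SQL with the bitwise AND operator. A minimal sketch against the Branches staging table:

```sql
-- Decode bitwise error codes: a row with ErrorCode 2049 matches both
-- checks below (1 = Duplicate Code, 2048 = Invalid datetime).
SELECT s.Code,
       s.ErrorCode,
       CASE WHEN s.ErrorCode & 1    > 0 THEN 1 ELSE 0 END AS DuplicateCode,
       CASE WHEN s.ErrorCode & 2048 > 0 THEN 1 ELSE 0 END AS InvalidDatetime
FROM stg.Branches s
WHERE s.BatchId = @BatchId
  AND s.ImportStatus = 2;            -- Error rows only
```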

Common configurations

Most staging setups fall into one of these patterns:

| Scenario | Import action | Merge mode | Why |
|---|---|---|---|
| ERP sends a full entity refresh weekly | Upsert (0) | Overwrite | New rows get inserted, existing rows get updated, NULLs in the source don't erase data |
| Secondary system enriches specific fields | Upsert (0) | Fill Empty | Only fills gaps — never overrides what the primary source set |
| Source must be able to clear fields | Upsert (0) | Overwrite with Sentinel | ~NULL~ explicitly clears a value, NULL means no change |
| Source sends the complete state of every field | Upsert (0) | Overwrite All | NULLs in staging intentionally clear production — careful with this one |
| Initial data load, no rows should exist yet | Insert Only (1) | Overwrite | Rejects duplicates — catches mapping errors before they hit production |
| Correction batch for existing records only | Update Only (2) | Overwrite | Rejects new codes — prevents accidental inserts from typos |
| Decommissioning a set of records | Delete Cascade (5) | — | Cleans up domain references before removing rows |

The safe default

If you’re setting up staging for the first time, use Upsert + Overwrite. It covers the most common case: a source that sends a mix of new and changed records, where empty fields in the source shouldn’t clear production values.

Overwrite with Sentinel is the answer when users need to blank out a field — the default Overwrite mode has no way to express “write NULL.” Fill Empty is for secondary data sources that fill gaps without overriding the primary. Per-field rules make sense when different teams own different attributes of the same entity. For a broader look at the data model that staging feeds into, models, entities, and attributes explains how Primentra structures reference data.

The delete variants are for lifecycle management: branch closures, discontinued products, retired region codes. Start with Delete Cascade (5) — it clears domain references before removing the row.

Everything in this post is configurable through the UI (Staging Config tab on any entity) or directly in the StagingConfigurations and StagingFieldRules tables. The processing stored procedure reads these settings at batch time, so you can adjust the configuration between batches without touching any code.
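For example, switching one attribute's merge mode between batches could look like this. The column names on StagingFieldRules and the numeric value for Fill Empty are assumptions — check the actual schema before running anything like it:

```sql
-- Sketch: set the Desk Number attribute to Fill Empty for the next batch.
UPDATE fr
SET fr.MergeMode = @FillEmptyMode    -- hypothetical enum value for Fill Empty
FROM StagingFieldRules fr
JOIN Attributes a ON a.Id = fr.AttributeId
WHERE a.EntityId = @EntityId
  AND a.Name = 'Desk Number';
```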


Primentra’s staging engine runs inside your SQL Server — no external ETL tool, no cloud service, no agent. If you’re writing custom T-SQL for every entity import, this is what replaces those Friday afternoons.

More from the blog

  • Inside the Staging Scheduler: How Primentra Automates Batch Processing (14 min read)
  • The Primentra REST API: read and write master data from any system (7 min read)
  • Microsoft MDS End of Life: Your Migration Checklist for SQL Server 2025 (8 min read)

Ready to migrate from Microsoft MDS?

Join the waitlist and be the first to try Primentra. All features included.
