Primentra · May 1, 2026 · 9 min read

Slowly changing dimensions in master data: not every field needs a time machine


[Figure: Customer 482, billing country. Timeline: Jan 2025: NL (historical) · Mar 2025: BE (historical) · today: DE (current). Asked "what was the value on June 15, 2025?", a system without a timeline answers "DE".]

Last quarter's revenue-by-country report quietly reclassifies the past.

The first time someone asked me what the customer's billing country had been in Q3, I had to answer with a shrug. The customer master held one address. It was the current address. The data warehouse had loaded it freshly during last night's run. Anyone pulling last quarter's revenue by country was getting today's countries against last quarter's revenue.

That mismatch is a slowly changing dimension problem, and it sits at the seam between master data and reporting. Most master data tools handle it badly. Most master data programs do not handle it at all. The good news is the answer is smaller than the textbooks make it sound.

The Kimball types still apply, just not to everything

The classical framework names three responses to a master attribute change. Most data warehouse architects know them by heart. They apply just as cleanly to master data, with one important caveat that gets ignored: the choice is per-attribute, not per-system.

  • Type 1: overwrite, the old value is gone. Use it for phone numbers, contact names, descriptions, anything no report cares about historically.
  • Type 2: insert a new row with valid_from / valid_to dates. Use it for customer billing address, classification, supplier payment terms, cost center, price list.
  • Type 3: add a "previous value" column on the master row. Use it almost never; it survives in legacy systems where adding columns was easier than adding tables.

Type 1 is the default in almost every master data system, including MDS and including most ERPs. Type 2 is what every data warehouse architect wants, because it lets reports answer point-in-time questions without rebuilding history from raw extracts. Type 3 is a compromise nobody is happy with, and it shows up mostly where adding columns was politically easier than adding rows.
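To make the Type 2 mechanics concrete, here is a minimal sketch in Python with SQLite standing in for the warehouse database. Every table and column name is illustrative, not any particular product's schema: a change closes the open version and inserts a new effective-dated row.

```python
import sqlite3

# Minimal SCD Type 2 sketch. Illustrative schema only.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE customer_country (
        customer_id INTEGER,
        country     TEXT,
        valid_from  TEXT,
        valid_to    TEXT      -- NULL marks the open, current version
    )
""")

def change_country(con, customer_id, new_country, effective_date):
    """Type 2 change: close the open version, then insert the new one."""
    con.execute(
        "UPDATE customer_country SET valid_to = ? "
        "WHERE customer_id = ? AND valid_to IS NULL",
        (effective_date, customer_id))
    con.execute(
        "INSERT INTO customer_country VALUES (?, ?, ?, NULL)",
        (customer_id, new_country, effective_date))

change_country(con, 482, "NL", "2025-01-10")
change_country(con, 482, "BE", "2025-03-04")
change_country(con, 482, "DE", "2025-11-20")

rows = con.execute(
    "SELECT country, valid_from, valid_to FROM customer_country "
    "WHERE customer_id = 482 ORDER BY valid_from").fetchall()
# NL and BE are closed versions; DE is the open current version
```

Operational systems filter on valid_to IS NULL and see exactly one row per entity; the closed rows exist only for reporting.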

Most attributes are Type 1, and that is fine

If a customer's phone number changes, no report cares about the old one. If a supplier's contact name changes, no report cares about the old one. If a product description gets a typo fix, definitely no report cares about the old one.

Type 1 is the right answer for the large majority of master data attributes. The audit trail still records who changed what and when, so compliance can reconstruct events. But the data warehouse does not need a row per version, and the reports that depend on those attributes do not get rebuilt.

The attributes that need history are usually a short list:

  • Customer billing address (sales by region depends on it)
  • Customer industry classification (segment reports depend on it)
  • Supplier payment terms (cash flow forecasts depend on them)
  • Cost center hierarchy (financial consolidation depends on it)
  • Product price lists (revenue analysis depends on them)

Five attributes, not five hundred. Decide which ones matter before you build the time machine.

The audit trail is not the same as a temporal model

A common mistake: assuming that because the master data system has an audit trail, point-in-time queries are solved. They are not.

An audit trail records change events. At 14:32 on March 4, user X changed Customer 482's country from NL to BE. That is a list of edits. To answer “what was Customer 482's country on December 31?” from an audit trail, you have to walk every change event for that record from creation to the target date, applying each one, reconstructing the state. That works for one record. It does not work for a million-row sales report.

A temporal model exposes effective-dated rows. Each version of the record has a valid_from and a valid_to. The query that asks for the country on December 31 is a single SELECT with WHERE @AsOfDate BETWEEN valid_from AND valid_to. The data warehouse can join to it directly.
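The single-SELECT claim is easy to demonstrate. A hedged sketch, again with illustrative names and SQLite in place of SQL Server; it uses a half-open interval (valid_from <= date < valid_to) rather than BETWEEN, so a boundary date matches exactly one version:

```python
import sqlite3

# Point-in-time lookup against effective-dated rows: one SELECT,
# no replay of change events. Schema and dates are illustrative.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE customer_country (customer_id, country, valid_from, valid_to)")
con.executemany("INSERT INTO customer_country VALUES (?, ?, ?, ?)", [
    (482, "NL", "2025-01-10", "2025-03-04"),
    (482, "BE", "2025-03-04", "2025-11-20"),
    (482, "DE", "2025-11-20", "9999-12-31"),  # open-ended current version
])

as_of = "2025-06-15"
country = con.execute(
    "SELECT country FROM customer_country "
    "WHERE customer_id = ? AND valid_from <= ? AND ? < valid_to",
    (482, as_of, as_of)).fetchone()[0]
# country == "BE": the value as it was on June 15, 2025
```

The same predicate works as a join condition, which is what lets a million-row fact table pick up point-in-time attributes in one pass.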

Audit trail is for compliance. Temporal model is for reporting. They are different problems with different solutions, and the same system can do both as long as the design separates the change log from the queryable timeline.

Where MDS left this unfinished

Microsoft MDS had a versioning concept, but it was a heavyweight one. A version was a snapshot of the entire model. To compare states between two points in time, you compared two whole versions of every entity. Most teams I worked with used versions for staging large changes, not for time travel.

The few teams that did try to use MDS versions for point-in-time reporting found the queries did not perform, the comparisons across versions were painful, and the warehouse team eventually gave up and built their own SCD Type 2 layer downstream. That downstream layer is where most organizations actually store their master data history today.

That works, but it pushes a governance question into a technical layer. The ETL job extracts the current state, compares it to yesterday's snapshot, and inserts a new dimension row when something changed. Nobody on the business side ever sees this. When a steward wants to know why the cost center changed in March, the warehouse cannot answer. The warehouse only knows what shifted, not why.

The cleaner pattern is to keep the temporal model in the master data system, where the steward who made the change can also annotate why it changed. The warehouse then publishes the history. It does not invent it.

A pragmatic design that ships

The pattern I keep recommending after the third “we need point-in-time master data” conversation:

  1. Keep the master record current. One row per entity, always reflecting the latest approved state. This is what every operational system queries.
  2. Mark a small set of attributes as historicized. Per attribute, opt in. When that attribute changes, write a row to a history table with the old value, the change date, and the user who made it.
  3. Publish a temporal view to the data warehouse. The view joins the master record with the history rows for the historicized attributes only, exposing valid_from and valid_to columns. The warehouse SCD Type 2 dimension becomes a thin wrapper around this view, not a parallel reconstruction effort.
  4. Keep the audit trail separate. The audit log records every edit on every field, not just the historicized ones, with full who/what/when context. Compliance hits the audit log. Reporting hits the temporal view.
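Steps 1 through 3 can be sketched end to end: a current-state master table, a per-attribute history table holding old values, and a view that derives valid_from / valid_to from the two. All names are hypothetical, and SQLite stands in for SQL Server, where this would be an ordinary view:

```python
import sqlite3

# Sketch of the pragmatic design: current master row + history rows
# for one historicized attribute, published as a temporal view.
con = sqlite3.connect(":memory:")
con.executescript("""
    -- step 1: one row per entity, always the latest approved state
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, country TEXT);

    -- step 2: one row per change to the historicized attribute,
    -- recording the OLD value, when it stopped being valid, and by whom
    CREATE TABLE customer_country_history (
        customer_id INTEGER, old_country TEXT,
        changed_at TEXT, changed_by TEXT);

    INSERT INTO customer VALUES (482, 'DE');
    INSERT INTO customer_country_history VALUES
        (482, 'NL', '2025-03-04', 'user_x'),
        (482, 'BE', '2025-11-20', 'user_x');

    -- step 3: each history row is valid from the previous change
    -- (or record creation) until its own changed_at; the master row
    -- is valid from the last change onward
    CREATE VIEW customer_country_temporal AS
    SELECT h.customer_id,
           h.old_country AS country,
           COALESCE((SELECT MAX(h2.changed_at)
                     FROM customer_country_history h2
                     WHERE h2.customer_id = h.customer_id
                       AND h2.changed_at < h.changed_at),
                    '0001-01-01') AS valid_from,
           h.changed_at AS valid_to
    FROM customer_country_history h
    UNION ALL
    SELECT c.customer_id,
           c.country,
           COALESCE((SELECT MAX(h.changed_at)
                     FROM customer_country_history h
                     WHERE h.customer_id = c.customer_id),
                    '0001-01-01'),
           '9999-12-31'
    FROM customer c;
""")

timeline = con.execute(
    "SELECT country, valid_from, valid_to FROM customer_country_temporal "
    "WHERE customer_id = 482 ORDER BY valid_from").fetchall()
```

The warehouse dimension selects from the view on its normal schedule; nothing in the ETL reconstructs history, because the master data system already published it.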

This stops the warehouse team from inventing history independently. It stops compliance from depending on the warehouse to answer regulatory questions. And it keeps operational systems simple, because they only see the current state.

The decisions that have to be explicit

Four questions come up in every implementation, and the master data team has to answer them rather than leaving them to the warehouse.

When does a change become effective?

The date of the edit, the date the user types in, or a future-dated activation. Customer addresses often need future dating because someone tells you in March that they are moving in April. Phone numbers do not. Pick a default and let the steward override per attribute.

Is a correction a new version or an overwrite?

A steward fixes a typo in last week's address change. Does the previous version stay in the timeline, or does the corrected version replace it? Both are defensible. Document the choice. The honest answer for most teams is that genuine corrections collapse the prior version, and any change that affects reported numbers gets a new row.

What happens when a record is deleted?

Hard delete removes everything, including the history, which silently breaks reports that referenced the deleted record. Soft delete keeps the timeline intact and adds a final "deleted" version. For historicized attributes, soft delete is almost always the right answer.
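One way to sketch soft delete on the effective-dated table from earlier, with an assumed is_deleted flag (illustrative, not a prescribed schema): the timeline stays intact and gains a terminal "deleted" version.

```python
import sqlite3

# Soft delete sketch: close the open version and append a final
# "deleted" version instead of removing rows. Illustrative schema.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE customer_country (
    customer_id INTEGER, country TEXT,
    valid_from TEXT, valid_to TEXT, is_deleted INTEGER DEFAULT 0)""")
con.executemany("INSERT INTO customer_country VALUES (?, ?, ?, ?, 0)", [
    (482, "BE", "2025-03-04", "2025-11-20"),
    (482, "DE", "2025-11-20", None),
])

def soft_delete(con, customer_id, deleted_at):
    # history before deleted_at is untouched, so old reports that
    # grouped by this record keep reproducing their numbers
    con.execute("UPDATE customer_country SET valid_to = ? "
                "WHERE customer_id = ? AND valid_to IS NULL",
                (deleted_at, customer_id))
    con.execute("INSERT INTO customer_country VALUES (?, NULL, ?, NULL, 1)",
                (customer_id, deleted_at))

soft_delete(con, 482, "2026-02-01")
rows = con.execute(
    "SELECT country, valid_from, valid_to, is_deleted "
    "FROM customer_country WHERE customer_id = 482 "
    "ORDER BY valid_from").fetchall()
```

A point-in-time query before February 2026 still returns "DE"; one after it returns the deleted marker, which the report can surface explicitly rather than silently dropping rows.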

Who can edit a historical row?

Usually nobody, except for legitimate corrections that themselves get logged. A historical row that gets quietly edited defeats the entire point. Permissions on the timeline matter as much as permissions on the live record.

The MDS-era answer to most of these was “we did not think about that, the warehouse handles it.” The mature answer is that the master data system owns the timeline and the warehouse subscribes to it.

What this looks like in Primentra

Primentra's approval workflows already capture the change events that feed an SCD Type 2 dimension. Every approved changeset has a timestamp, a user, and a list of attribute changes with old and new values. Combined with effective dating on the attributes you choose to historicize, that gives you a queryable timeline without standing up a separate temporal store.

The integration views Primentra publishes can include the temporal join, so the warehouse loads point-in-time master data on its existing schedule, with no extra ETL logic. A row that changed last Tuesday at 14:32 shows up in the dimension table with valid_from = 2026-04-22 14:32, and last Tuesday's reports continue to reflect the value as it was at the time.

You still have to make the per-attribute decision about which fields are worth historicizing. The tooling helps. It does not absolve you from naming the five attributes that actually matter.

What to do this week

Pick one report that is wrong because the master data is current and the report covers the past. Customer revenue by country, supplier spend by category, headcount by cost center. Any of those will do. Find the master attribute that drives the grouping. That is your first historicized attribute.

Build the temporal view for that one attribute. Wire the warehouse dimension to it. Compare last quarter's report before and after. Then do the next one. The list is short. Most teams finish it in a quarter.

If your master data system overwrites every change without a timeline, last quarter's reports cannot be reproduced. If it versions everything by default, the storage cost is real and the queries get slow. The middle path is a small set of historicized attributes, an effective-dated view for the warehouse, and an audit trail that keeps compliance honest.

Common questions

What is a slowly changing dimension in master data?

A slowly changing dimension is an attribute on a master record whose value changes over time in a way that downstream reports need to remember. The Kimball framework names three responses: Type 1 overwrites the old value, Type 2 keeps a versioned history with effective dates, Type 3 keeps one previous value alongside the current one. In master data, Type 1 is the default and the right answer for most attributes. Type 2 is the right answer for the small set of attributes that drive historical reports.

How is a master data audit trail different from a temporal model?

An audit trail logs change events for compliance: who changed what, when. A temporal model exposes the data with effective-dated rows, so a query can ask "what was the value on date X" with a single SELECT. Audit trails work for reconstructing one record at a time. Temporal models are required for reporting across millions of records. A mature master data system has both.

Why did Microsoft MDS not handle point-in-time master data well?

MDS versioning operated at the model level, not the row or attribute level. Comparing two points in time meant comparing entire versions of every entity, which was slow and required custom queries. Most teams used MDS versions for staging large changes and built their own SCD Type 2 layer in the data warehouse for actual point-in-time reporting. The history ended up living in an ETL pipeline the business never saw.

Should every master data attribute be historicized?

No. Versioning every attribute creates storage cost, query overhead, and maintenance burden that produces no business value for fields nobody reports on. Identify the five to ten attributes per domain whose history actually drives reporting. Historicize those. Let everything else stay Type 1, with the audit trail covering compliance needs.

A timeline the warehouse can subscribe to

Primentra runs on SQL Server, captures every approved change with full attribute detail, and publishes integration views the warehouse can consume directly. The 60-day trial covers the full approval and audit stack, including the import wizards for moving legacy records in.

