I've watched the same pattern play out at four different Master Data Services (MDS) shops. The project lands, the data is cleaned, the stewards are trained. Reports run green for a few weeks. Then quietly, somewhere around month four, downstream systems start complaining. Finance asks why a retired cost center is still showing up. Procurement says a supplier's email bounced. The data that was clean in January is noticeably wrong by May.
Nobody broke anything. The world just moved. And the MDM platform let it happen.
The math of decay
The numbers are not new. Gartner and Experian put B2B contact data decay at around 30% per year. SiriusDecisions has pushed the number as high as 70% in certain categories. Those are annual figures — spread across a year, they feel abstract. Break them down and they get concrete fast.
A 30% annual decay rate works out to roughly 3% per month when compounded. Seventy percent works out to closer to 10%. Pick the middle of the road — say 6% per month — and after four months, about 22% of the records in your master data set are wrong. That is every fifth row. Every fifth supplier payment. Every fifth HR lookup. Every fifth BI dimension entry.
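The arithmetic above can be checked in a few lines. This is a plain compounding calculation, nothing specific to any MDM tool:

```python
# Convert an annual decay rate to its compounded monthly equivalent,
# then project how much of a data set is stale after N months.

def monthly_rate(annual_rate: float) -> float:
    """Monthly decay rate equivalent to a given annual rate, compounded."""
    return 1 - (1 - annual_rate) ** (1 / 12)

def stale_fraction(monthly: float, months: int) -> float:
    """Fraction of records gone stale after `months` of compounding decay."""
    return 1 - (1 - monthly) ** months

print(f"{monthly_rate(0.30):.1%}")       # 30%/yr -> ~2.9% per month
print(f"{monthly_rate(0.70):.1%}")       # 70%/yr -> ~9.5% per month
print(f"{stale_fraction(0.06, 4):.1%}")  # 6%/mo over 4 months -> ~21.9%
```

The middle-of-the-road 6% monthly rate compounds to just under 22% after four months — the "every fifth row" figure in the text.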
The four-month number is not a law. It is a rule of thumb for when uncontrolled master data crosses the threshold where people stop trusting it. Some entities decay slower (country codes, currencies). Some decay faster (employee records in a growing company). The average is uncomfortable.
Why it happens faster than teams expect
Master data describes the real world. The real world does not hold still. People leave. Companies rebrand. Departments restructure. A supplier sells to another supplier. A product line gets reclassified halfway through a launch. None of these events file a ticket with the MDM team.
The feeder systems — HR, ERP, CRM, procurement — run on their own schedules. A terminated employee might be flagged in HR the day they leave, but nobody updates the employee record in MDM until an email bounces three months later. By then the data has already been wrong in twelve reports.
And then there is the editing problem. Most MDM tools — MDS included — let anyone with edit permissions save changes straight into the record. No review, no second pair of eyes, no way to roll back a bad paste. Good data gets overwritten by mistakes faster than stewards can catch them.
What MDS did, and what it missed
To be fair to MDS: it had a business rules engine. You could enforce formats, require fields, block invalid values. That stopped malformed data. It did almost nothing for stale data.
MDS had transaction logs. You could read who changed what, after the change had already gone live. Useful for forensics. Useless for prevention. The audit trail was a report, not a gate.
What MDS did not have: approval workflows in front of edits, a clean changeset concept, or a way to assign a human owner to a record and hold them accountable for keeping it current. Those gaps are where decay moves in.
What actually keeps master data usable
Three mechanisms, working together. Each one fails without the other two.
Approval workflows in front of every change
A steward proposes an edit. A reviewer approves it. Only then does it commit. Nothing bypasses the workflow — not bulk imports, not admin users, not weekend hotfixes. The moment you allow an exception, decay creeps back in through the exception.
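The propose-review-commit loop can be sketched in a few lines. This is a minimal illustration of the pattern, not Primentra's or any vendor's actual API; all names and the record shape are invented for the example:

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    PENDING = auto()
    APPROVED = auto()
    REJECTED = auto()

@dataclass
class Changeset:
    record_id: str
    edits: dict      # attribute -> proposed value
    author: str
    comment: str     # the "why" that ends up in the audit trail
    state: State = State.PENDING

class GovernedStore:
    """Every write goes through review; there is no direct-save method."""

    def __init__(self, records: dict):
        self._records = records

    def propose(self, record_id: str, edits: dict, author: str, comment: str) -> Changeset:
        return Changeset(record_id, edits, author, comment)

    def review(self, cs: Changeset, reviewer: str, approve: bool) -> None:
        if reviewer == cs.author:
            raise PermissionError("author cannot review their own change")
        if not approve:
            cs.state = State.REJECTED
            return
        self._records[cs.record_id].update(cs.edits)  # commit happens only here
        cs.state = State.APPROVED

# A steward proposes; a different person reviews; only then does it commit.
store = GovernedStore({"SUP-001": {"email": "old@acme.example"}})
cs = store.propose("SUP-001", {"email": "ap@acme.example"}, "alice", "email bounced")
store.review(cs, "bob", approve=True)
```

The design point is the absence of a setter: the only path that mutates a record is `review`, so an unreviewed edit cannot reach the store by construction.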
Clear ownership for every record
Every entity has a data steward. Every steward has a list of records they own. A record without an owner is a record nobody will update when the real world changes. Permissions make this enforceable — stewards see their records, reviewers see what needs approving, everyone else sees read-only.
Full audit trail — who, what, when, why
Every changeset carries a comment field and a reviewer signature. Six months later, when someone asks why a supplier record was changed, the answer is in the audit log. Not in an email thread. Not in someone's memory. The audit trail is the institutional memory.
How Primentra approaches this
Every change in Primentra is a changeset. A steward opens a record, edits attributes, and submits — the edits do not go live. A reviewer opens the changeset, sees every modified attribute side by side with its previous value, approves or rejects, and only then does the change commit. No admin backdoor. No bulk import that skips the workflow. If an entity is governed, every edit is reviewed.
The audit log captures the full picture: who proposed the change, who approved it, the before and after values, the comment they attached, the timestamp. Six months later, when finance asks why a cost center was renamed, the answer is a single query away.
Ownership lives in the permission model. Every entity has assigned data stewards. They see their records in their worklist. They are the named humans responsible for keeping those records current. Everybody else — including most of the IT organization — reads the data but cannot change it. An attribute without a responsible steward is a configuration bug, not a shrug-worthy reality.
None of this stops the real world from changing. Suppliers will still move. Employees will still leave. What it does is make sure the response — the updated record — goes through the same quality gate every time, so the fix arrives in a reviewed state instead of a typo.
The part nobody wants to hear
Governance does not prevent decay by itself. It makes decay visible and fixable. The last mile is a review cadence — quarterly for high-churn entities, annually for the rest. A worklist that tells each steward which of their records have not been confirmed in the last 90 days is worth more than any business rule.
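The 90-day worklist is simple to express. A minimal sketch, assuming a hypothetical record shape in which each row carries its owning steward and the date a human last confirmed it; the field names are illustrative, not any product's schema:

```python
from datetime import date, timedelta

# Hypothetical rows: id, owning steward, date of last human confirmation.
records = [
    {"id": "SUP-001", "steward": "alice", "last_confirmed": date(2024, 1, 10)},
    {"id": "SUP-002", "steward": "alice", "last_confirmed": date(2024, 5, 2)},
    {"id": "CC-100",  "steward": "bob",   "last_confirmed": date(2023, 11, 20)},
]

def worklist(records: list, today: date, max_age_days: int = 90) -> dict:
    """Group overdue records by steward so each owner gets a concrete list."""
    cutoff = today - timedelta(days=max_age_days)
    out: dict = {}
    for r in records:
        if r["last_confirmed"] < cutoff:
            out.setdefault(r["steward"], []).append(r["id"])
    return out

print(worklist(records, today=date(2024, 5, 15)))
# {'alice': ['SUP-001'], 'bob': ['CC-100']}
```

Each steward gets a short, named list rather than a vague mandate to "keep the data current" — which is the cadence the paragraph above argues for.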
The four-month cliff is not inevitable. It is the default for tools that treat master data as a one-time cleanup project instead of an ongoing practice. Pick a tool that treats every edit as a reviewed change, and the curve flattens.
Frequently asked questions
How fast does master data decay?
B2B contact data estimates range from roughly 30% per year (Gartner, Experian) to 70% per year in some categories (SiriusDecisions). Compounded, that works out to roughly 3–10% of records going stale each month. After four months, 10–30% of the master data you thought was clean is already wrong — suppliers who moved, employees who left, cost centers renamed, product codes reclassified.
Why does master data decay faster than most teams expect?
Three reasons. Master data describes the real world, and the real world changes. The sources feeding MDM are not synchronized, so updates arrive unevenly. And most MDM tools let users edit records without review — so good data gets overwritten by mistakes faster than stewards can catch them.
Did MDS prevent data decay?
Not really. MDS had business rules for formats and required fields, and transaction logs that recorded changes. What it lacked was an approval workflow in front of edits, a clear model of record ownership, and a reviewable changeset concept. The audit trail was something you read after the fact — not something that gated the change.
What is the most effective way to prevent master data decay?
Three mechanisms together: every change goes through an approval workflow so bad edits never reach production; every attribute has a named owner responsible for keeping it current; every change is fully audited. Validation rules alone stop malformed data, not stale data.
How often should master data be reviewed?
High-churn entities — customers, suppliers, employees — benefit from a quarterly review cycle. Low-churn entities — product categories, country codes, legal entities — are usually fine annually. The review is a steward confirming their records, prompted by the tool rather than remembered.
Tired of watching master data go stale?
Primentra runs every edit through an approval workflow, assigns clear ownership, and keeps a full audit trail of every change. It runs on SQL Server and deploys in a day.