Primentra · April 29, 2026 · 8 min read

How to run an MDM proof of concept that actually proves something


MDM POC: the four stages that matter

1. Setup: load your real data
2. Configure: build entities + rules
3. Stress test: break things on purpose
4. Decide: buy or walk

Skip any of the first three and you have a demo, not a POC.

Most MDM proofs of concept are theater. The vendor's solution engineer drives the keyboard, the demo data has been hand-curated for two years, and every workflow lights up green on the first click. Then the order gets signed. Six months later the same data problems are back, only now they live in the new tool.

A real POC is uncomfortable. It surfaces the things vendors do not want to show you, the things your own team has been carrying as tribal knowledge, and the things that will make or break the implementation. Here is how to run one that actually decides something.

Stop calling demos POCs

A demo is a guided tour. The vendor controls the environment, the data, and the script. They have rehearsed it forty times. The version you see has been polished by a solution engineer who only shows the workflows that work cleanly.

A POC is your team driving the tool against your data, trying to make it fail. The mental shift matters. In a demo, you are looking for things to like. In a POC, you are looking for things that break. If your evaluation never reaches the second mode, you are buying based on hope.

One rule keeps the line clear: during a POC, the vendor does not touch the keyboard. They can answer questions, point at menus, explain concepts. The moment they take over to “just show you how it works,” you have slipped back into demo mode.

Use your real data

The single highest-leverage decision in any MDM POC is what data you load. Vendors will offer to provide sample data, or to take your sample and pre-clean it. Both should be declined.

Use a raw extract from the legacy system. Whichever domain you are evaluating, pull it as it is, with the trailing whitespace, the inconsistent capitalization, the seven different ways of writing “Ltd.”, and the addresses that span three columns because someone exported them weirdly in 2014. That is the data your tool has to handle on day one of production. Anything cleaner is fiction.
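Before loading the extract, it is worth quantifying the mess so you can later check whether the tool surfaces the same problems. A minimal sketch in Python; the column values here are made up stand-ins for a real legacy export:

```python
import re
from collections import Counter

# Hypothetical supplier-name values pulled straight from a legacy extract.
# In a real POC you would read these from the raw file, not clean them first.
names = [
    "Acme Ltd. ",       # trailing whitespace
    "ACME LTD",         # inconsistent capitalization
    "Acme Limited",     # spelled-out legal form
    "acme ltd.",
    "Brightline GmbH",
]

def suffix_variants(values):
    """Count the distinct spellings of a legal-form suffix."""
    pattern = re.compile(r"(ltd\.?|limited|gmbh)\s*$", re.IGNORECASE)
    hits = Counter()
    for v in values:
        m = pattern.search(v.strip())
        if m:
            hits[m.group(1)] += 1
    return hits

trailing_ws = sum(1 for v in names if v != v.rstrip())
print("values with trailing whitespace:", trailing_ws)
print("suffix spellings found:", dict(suffix_variants(names)))
```

Five records, five different spellings of the legal form. That count, scaled up to the full extract, is the baseline the tool's matching has to beat.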

Volume matters too. Five hundred records will not stress anything. If your real population is 50,000 records, load 50,000. If it is 2 million, load at least 200,000. Tools that handle a thousand rows cleanly often start groaning around 100k. Matching, validation, and search are the bottlenecks you bought the tool for, and none of them show up at small volumes.
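One way to make the volume test measurable is to seed the load with duplicates you planted yourself, so matching recall can be scored exactly afterwards. A rough sketch, with illustrative field names rather than any particular tool's schema:

```python
import random

def seed_duplicates(records, n_seed, rng):
    """Clone n_seed records with light mutations so you know exactly
    which duplicates the matching engine should later find."""
    seeded = []
    for rec in rng.sample(records, n_seed):
        clone = dict(rec)
        clone["name"] = clone["name"].upper() + "  "   # case + whitespace noise
        clone["seeded_duplicate_of"] = rec["id"]       # ground truth for scoring
        seeded.append(clone)
    return records + seeded

rng = random.Random(42)
base = [{"id": i, "name": f"Supplier {i}"} for i in range(1000)]
loaded = seed_duplicates(base, 50, rng)
print(len(loaded))  # 1050 rows go into the POC environment
```

After the matching run, divide the seeded pairs the tool found by the 50 you planted. That ratio is a hard number, not an impression.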

Three things every MDM POC must reproduce

Beyond the data load, three concrete tasks separate a real POC from a polished walkthrough. If you cannot get all three running unaided in two weeks, the answer is probably no. Not because the tool is bad. Because the configuration tax is too high, and your team will be paying it forever.

1. Build one entity from scratch

Pick a single domain. Suppliers is usually the right test. Have a data steward, not the vendor, define the attributes, mark the required fields, set up the validation rules, and configure who can edit what. If that takes three weeks of vendor-led configuration sessions, your team will not maintain it after go-live. The whole point of buying a tool is that the day-to-day work belongs to your team, not a consulting firm on retainer.
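What "define the attributes and validation rules" amounts to can be sketched in a few lines. This is an illustrative stand-in for the kind of definition a steward should be able to produce unaided, not Primentra's or any vendor's actual configuration format:

```python
# Hypothetical supplier entity: attributes, required flags, validation rules.
SUPPLIER_ENTITY = {
    "name":   {"required": True,  "validate": lambda v: bool(v and v.strip())},
    "vat_id": {"required": True,  "validate": lambda v: len(v or "") >= 8},
    "email":  {"required": False, "validate": lambda v: v is None or "@" in v},
}

def validate(record, entity):
    """Return a list of rule violations for one record."""
    errors = []
    for field, spec in entity.items():
        value = record.get(field)
        if spec["required"] and value is None:
            errors.append(f"{field}: required field missing")
        elif not spec["validate"](value):
            errors.append(f"{field}: failed validation")
    return errors

print(validate({"name": "Acme Ltd.", "vat_id": "DE1234"}, SUPPLIER_ENTITY))
# → ['vat_id: failed validation']
```

If expressing this much in the tool takes a steward an afternoon, good. If it takes three vendor-led sessions, that is the configuration tax showing itself early.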

2. Run an approval workflow

Configure a basic approval flow: someone proposes a new supplier, a steward reviews, and the record either commits or gets rejected with a reason. Test what happens when the steward is on holiday. Test what happens when the requester edits the record while it is in review. Test the audit trail. Email-based approvals do not count. Spreadsheet logs do not count. The tool either has structured workflow or it does not.
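The edge cases above are worth scripting before the POC starts, so every vendor faces the same scenario. A minimal, tool-agnostic sketch of the behavior you are checking for; the states and transitions are illustrative, since each tool defines its own:

```python
class ApprovalError(Exception):
    pass

class SupplierRequest:
    def __init__(self, data):
        self.data = data
        self.state = "in_review"
        self.audit = [("proposed", dict(data))]  # structured trail, not free text

    def edit(self, changes):
        if self.state == "in_review":
            # The question your POC must answer: does an edit restart the
            # review, or silently change what the steward is approving?
            raise ApprovalError("record locked while in review")
        self.data.update(changes)

    def approve(self, steward):
        self.state = "committed"
        self.audit.append(("approved", steward))

    def reject(self, steward, reason):
        if not reason:
            raise ApprovalError("rejection requires a reason")
        self.state = "rejected"
        self.audit.append(("rejected", steward, reason))

req = SupplierRequest({"name": "Acme Ltd."})
try:
    req.edit({"name": "ACME"})               # requester edits mid-review
except ApprovalError as e:
    print("edit blocked:", e)
req.reject("steward_a", "duplicate of supplier 1042")
print(req.state, len(req.audit))
```

The tool does not have to behave exactly like this sketch. It has to behave *some* defined way, and the POC has to find out which.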

3. Push data to a downstream system

An MDM tool that hoards governed data is useless. Pick one consumer. The ERP, the data warehouse, a BI tool, whatever sits closest to your daily reporting. Get governed records flowing into it. Even if it is read-only, even if it is just a CSV drop into a SQL Server staging table. The integration is where most MDM projects stall, and a POC that never tests it leaves the hardest problem for go-live.
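The "CSV drop into a staging table" variant really is enough for a POC. A sketch with illustrative table and column names:

```python
import csv
import io

# Hypothetical governed records exported from the MDM tool.
governed = [
    {"supplier_id": 1042, "name": "Acme Ltd.", "vat_id": "DE123456789"},
    {"supplier_id": 1043, "name": "Brightline GmbH", "vat_id": "DE987654321"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["supplier_id", "name", "vat_id"])
writer.writeheader()
writer.writerows(governed)
csv_payload = buf.getvalue()
print(csv_payload)

# On the SQL Server side, a scheduled load into staging is enough, e.g.:
#   BULK INSERT staging.supplier FROM 'C:\drops\supplier.csv'
#       WITH (FIRSTROW = 2, FIELDTERMINATOR = ',');
```

Crude, yes. But if even this read-only flow cannot be stood up during the POC, the two-way integration planned for production is in trouble.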

Common POC traps

Each trap, and why it skews the result:

Vendor drives the keyboard: You are watching a magic show, not testing software. If your steward cannot configure a rule unaided, the answer is no.

Sanitized sample data: Clean records hide every problem you bought the tool to solve. Use the messy extract.

Happy-path scenarios only: The demo always works on the demo data. Test what happens when an import file has the wrong encoding, a duplicate VAT, and a missing required field.

No exit criteria: Without a written pass/fail list, the loudest voice in the room decides. Write the criteria before kickoff, not after.

Steering-committee evaluation: Slide-deck audiences pick whoever presents best. Hands-on users pick whoever lets them get work done.

Write the exit criteria first

Before kickoff, write down the answers you need. Not the questions. The answers. “A steward configured the supplier entity in under two days.” “The duplicate-detection rule found at least 80 percent of the seeded duplicates.” “A 50,000-row import finished in under ten minutes.” “An approval rejection generated a readable audit entry without the steward writing free text.”

When the criteria are written before the vendor enters the room, you have something concrete to score against. Without them, the loudest opinion at the closeout meeting wins, and that opinion is usually whoever was most charmed by the demo on day three.

Six to ten criteria is enough. More than that turns into a checklist nobody finishes. Fewer than that leaves too much room for interpretation.
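Written criteria are easiest to score when they are literally a pass/fail list. A sketch using the example criteria from this section, with made-up results for illustration:

```python
# Exit criteria written before kickoff; results filled in at closeout.
criteria = [
    ("Steward configured supplier entity in under two days", True),
    ("Duplicate rule found >= 80% of seeded duplicates", True),
    ("50,000-row import finished in under ten minutes", False),
    ("Rejection produced a structured audit entry", True),
]

passed = sum(ok for _, ok in criteria)
print(f"{passed}/{len(criteria)} criteria met")
for name, ok in criteria:
    print(("PASS" if ok else "FAIL"), name)
```

A 3/4 score with one named failure is a far better closeout artifact than a round of opinions, because the failure is now a concrete question for the vendor.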

What failure looks like, and why it is good

A POC that runs cleanly from start to finish is not a successful POC. It is one where the team did not push hard enough. Real evaluations turn up rough edges: a workflow that does not handle a corner case, a permission model that is more rigid than expected, an import that mangles a particular character set.

Those findings are the entire point. Better to discover them in a four-week POC than in month three of production. Vendors who acknowledge the rough edges, explain them, and either commit to a fix or document the workaround are the ones worth working with. Vendors who deflect, blame the data, or insist their solution engineer can “just take a look” are showing you the support model you are about to buy.

The boring decision rule

At the end of the POC, the steward should be able to say, without help, “I could run this every day.” The IT person should be able to say, without help, “I can integrate this without a six-month consulting engagement.” If both are true, the tool is a candidate. If either is uncertain, walk away or extend the evaluation.

That rule sounds unglamorous. It is also the only rule that survives contact with production.

Common questions

How long should an MDM proof of concept last?

Two to four weeks is enough for a focused POC on a single domain. Longer than that and you are no longer testing the tool, you are running an unpaid implementation. Shorter than that and you have not gotten past the polished demo path. The clock starts when your own data is loaded into the vendor environment, not when the kickoff call ends. If the vendor needs three weeks to load your sample file, the POC is already telling you something.

What data should I use for an MDM POC?

Your real, messy data — not a sanitized sample. The point of a POC is to see how the tool behaves on the records you actually have, with the duplicates, blanks, encoding issues, and edge cases that are the real reason you are buying an MDM tool. A clean export of 500 well-formed supplier records proves nothing. A 50,000-row extract straight from the legacy system, NULLs and all, is the test that matters.

Who should be in the room during an MDM proof of concept?

A data steward who knows the domain, an IT person who will own the integration, and one decision-maker who can say yes or no at the end. Procurement and security can be looped in for their pieces but should not be the primary evaluators. The fatal pattern is letting the vendor present to a steering committee that has no domain knowledge — those sessions reward slide quality, not tool quality. Keep the room small and the people involved hands-on.

What is the difference between an MDM demo and a proof of concept?

A demo is a guided tour through the vendor’s curated environment, with their data, their workflow, and their solution engineer at the keyboard. A proof of concept is your team driving the tool with your data, configuring your rules, and trying to break it. The difference shows up in the failure modes. A demo never fails because it is rehearsed. A POC is supposed to surface failures — that is the entire point. If your POC ran clean from start to finish, it was a demo wearing a different name.

A 60-day trial is a real POC window

Primentra installs on your SQL Server, runs against your real data, and gives your team the keyboard from day one. No solution-engineer-led demos, no curated sandbox, no consulting engagement to configure the first entity. The 60-day trial includes everything. Long enough to run the POC the way it should be run.

Start free trial →
Try the demo →

