Most MDS migration plans have the same blind spot. Teams spend months on the model — getting entities right, defining validation rules, assigning data stewards, cleaning historical records. The governance part. Then a few weeks before go-live, someone asks: how does the ERP actually get this data?
In MDS, the answer was subscriber views. SQL views in the mdm schema that downstream systems queried directly. Clunky and tightly coupled, but predictable. You knew where to look when something broke.
When you migrate away from MDS, you lose subscriber views along with everything else. The distribution question does not disappear — it just moves from "configure this in MDS" to "figure this out yourself." Most migration projects answer that question too late.
What subscriber views were
When you built an MDS model, you configured which entities and versions to expose, and MDS created SQL views with names like mdm.viw_SYSTEM_1_SUPPLIER_1. Downstream systems connected directly to the MDS SQL Server database and ran SELECT queries against these views — from a SQL Agent job, an SSIS package, or an ETL tool.
That is the entire mechanism. SQL views in a shared database. No HTTP interface, no application-level authentication, no built-in change tracking. Just direct database access.
Which worked, until it did not:
- Any model change (adding, renaming, or removing an attribute) could alter the view structure and silently break downstream queries
- Cloud-hosted consumers, SaaS tools, and systems outside your SQL Server network could not connect at all
- Getting incremental updates (records changed since last night) required building your own change-detection logic on top
- Access control was database-level permissions only — no record of which system queried what, or when
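The third limitation above is worth making concrete. With nothing but a view to query, consumers typically bolted on a watermark pattern: remember the newest modification timestamp you have seen, and on the next run keep only records newer than it. A minimal sketch, assuming a hypothetical record shape with a last_modified field:

```python
from datetime import datetime

def changed_since(records, watermark):
    """Return records modified after the watermark, plus the new watermark.

    Hypothetical record shape: each record carries a 'last_modified'
    datetime -- the change tracking a subscriber-view consumer had to
    build on top of MDS itself.
    """
    changed = [r for r in records if r["last_modified"] > watermark]
    new_watermark = max((r["last_modified"] for r in changed), default=watermark)
    return changed, new_watermark

# Example: which supplier records changed since the last nightly run?
records = [
    {"code": "SUP-001", "last_modified": datetime(2026, 4, 1, 6, 0)},
    {"code": "SUP-002", "last_modified": datetime(2026, 3, 28, 9, 0)},
    {"code": "SUP-003", "last_modified": datetime(2026, 4, 2, 7, 30)},
]
delta, watermark = changed_since(records, datetime(2026, 3, 31))
```

Every consumer that needed deltas reimplemented some variant of this, each with its own bugs around failed runs and clock skew.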
Three distribution patterns
When building or migrating your MDM system, you have three realistic options for getting master data to downstream consumers.
Subscription views
The MDM system generates SQL views that downstream systems can query directly — the closest equivalent to what MDS subscriber views provided. Systems on the same SQL Server network get a familiar interface: run a SELECT, get governed master data. In Primentra, subscription views are generated automatically per entity. The limitation is the same as with MDS: it works only for systems that can reach the SQL Server database directly.
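From the consumer's side this pattern is just a scheduled SELECT. A sketch of what such a job looks like — the view name is the one from this article, the column list and connection handling are hypothetical, and no live connection is opened here:

```python
def view_query(view_name, columns=None):
    """Build the SELECT a downstream job runs against a subscription
    view (column names here are illustrative, not a real schema)."""
    cols = ", ".join(columns) if columns else "*"
    return f"SELECT {cols} FROM {view_name}"

def fetch_suppliers(connection):
    # A scheduled job (SQL Agent, cron) would pass in a live SQL Server
    # connection, e.g. via pyodbc; this sketch never opens one.
    cursor = connection.cursor()
    cursor.execute(view_query("mdm.viw_SYSTEM_1_SUPPLIER_1", ["Code", "Name"]))
    return cursor.fetchall()

sql = view_query("mdm.viw_SYSTEM_1_SUPPLIER_1", ["Code", "Name"])
```

The simplicity is the appeal — and the trap: nothing in this code tells the MDM team that this consumer exists.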
Data dump / export
Export entity records as CSV or Excel on a schedule, drop to a shared folder or SFTP location, let the downstream system import on its own cadence. Old-fashioned and surprisingly durable — works with anything that can read a file. The main costs: near-real-time sync is impossible, and someone has to manage the last-successful-import state when a job fails partway through.
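The "manage the last-successful-import state" cost deserves emphasis: the watermark must only advance after the file is fully written, so a job that dies partway through gets retried from the previous point. A minimal sketch with hypothetical field names, writing to an in-memory buffer where a real job would write to a shared folder or SFTP drop:

```python
import csv
import io
from datetime import datetime

def export_csv(records, fieldnames):
    """Serialize records as CSV (a real job would stream this to a
    file share or SFTP location instead of a string buffer)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

def run_export(records, state):
    """Advance the last-successful-export marker only after the file
    is fully written, so a partial failure is retried next run."""
    payload = export_csv(records, ["code", "name"])
    state["last_success"] = datetime.utcnow()  # commit the watermark last
    return payload

state = {"last_success": None}
out = run_export([{"code": "SUP-001", "name": "Acme"}], state)
```

Write-then-commit ordering is the whole trick; exports that update the watermark first will silently skip records after a crash.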
REST API
The MDM system exposes an HTTP API. Downstream systems call GET /api/v1/entities/{id}/records and get back JSON. Supports authentication, works with cloud-native tooling, and handles incremental fetches via query parameters. This is where most new MDM implementations land — for good reason.
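A downstream consumer of such an API mostly needs to build the right URL. The endpoint path below is the one from this article; the host and the updatedSince parameter handling are assumptions for illustration:

```python
from urllib.parse import urlencode

BASE = "https://mdm.example.com"  # hypothetical host

def records_url(entity_id, updated_since=None):
    """Build the GET URL a downstream system calls; updatedSince is
    the incremental-fetch parameter, URL-encoded by urlencode."""
    url = f"{BASE}/api/v1/entities/{entity_id}/records"
    if updated_since:
        url += "?" + urlencode({"updatedSince": updated_since})
    return url

url = records_url("suppliers", "2026-04-01T00:00:00Z")
```

Any HTTP client — requests in Python, an ADF REST dataset, an iPaaS connector — takes it from there.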
Why REST API is the right default
Power BI has a REST connector. Azure Data Factory has a REST connector. Informatica, MuleSoft, every iPaaS platform — REST. A Python script on a cron job. An Azure Function triggered on a schedule. Any ERP vendor's middleware team knows how to consume HTTP.
SQL views, by contrast, require a direct SQL Server connection from inside the downstream system's network. That works fine inside your data center. It stops working the moment a consumer is cloud-hosted, on a separate network, or behind a firewall you do not control. That describes most organizations in 2026.
The incremental problem is cleaner with REST too. Add a ?updatedSince=2026-04-01T00:00:00Z parameter, and consumers pull only records modified after a given timestamp. No full table scans, no custom change-tracking queries, no delta files to parse and rotate.
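The consumer's side of that contract is a small loop: ask only for records changed since the last successful sync, then advance the stored watermark. A sketch with a fake fetch function standing in for the HTTP GET (response shape and field names are assumptions):

```python
def sync(fetch, state):
    """One incremental sync: request only records changed since the
    last successful run, then advance the stored watermark."""
    response = fetch(updated_since=state["last_sync"])
    state["last_sync"] = response["server_time"]
    return response["records"]

def fake_fetch(updated_since):
    # Stands in for GET /api/v1/entities/suppliers/records?updatedSince=...
    data = {"2026-04-01T00:00:00Z": [{"code": "SUP-003"}]}
    return {"records": data.get(updated_since, []),
            "server_time": "2026-04-02T01:00:00Z"}

state = {"last_sync": "2026-04-01T00:00:00Z"}
pulled = sync(fake_fetch, state)
```

Using a server-supplied timestamp as the next watermark (rather than the client's clock) sidesteps clock-skew gaps between runs.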
Authentication is built in. Each downstream system gets its own API key. You can see exactly which system called what endpoint, how many times, and when. Revoking access for one system means deleting that key. Nothing else is affected.
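The per-system key model is simple enough to sketch end to end. Key values and system names below are hypothetical; the point is that revocation is a single deletion and every call leaves an audit trail:

```python
audit_log = []

api_keys = {  # one key per downstream system (hypothetical values)
    "key-erp-7f3a": "ERP",
    "key-dwh-91bc": "DataWarehouse",
}

def authorize(key, endpoint):
    """Resolve the calling system from its key and record the call.
    Revoking a system means deleting its key; nothing else changes."""
    system = api_keys.get(key)
    if system is None:
        return None  # rejected: unknown or revoked key
    audit_log.append((system, endpoint))
    return system

caller = authorize("key-erp-7f3a", "/api/v1/entities/suppliers/records")
del api_keys["key-dwh-91bc"]  # revoke the warehouse's access
revoked = authorize("key-dwh-91bc", "/api/v1/entities/suppliers/records")
```

Contrast this with subscriber views, where "who queried what" lived, at best, in a SQL Server audit nobody had configured.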
A note on real-time distribution
Webhooks and message queues (Azure Service Bus, Kafka) are technically superior when you need sub-second propagation. If a product price change in MDM must reach an e-commerce front-end within seconds, scheduled REST polling will not work.
But mid-market MDM teams managing supplier or cost center master data rarely need that. A scheduled API sync every hour covers most use cases. Build event streaming when you actually need it, not because the architecture diagram looks more sophisticated with a message bus in it.
What most teams get wrong
Teams treat distribution as an implementation detail rather than a design decision. They finalize the data model, set up governance, run pilot testing, and somewhere near go-live start asking how to get data into the ERP.
Two things worth getting right before you get that far:
Define the consumer interface before the data model stabilizes
The fields your downstream systems need should inform which attributes you govern in MDM. If the ERP needs a standardized country ISO code and a vendor type code, model those as validated, governed attributes — not free-text fields data stewards fill in inconsistently.
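"Validated, governed attributes" means the MDM layer rejects values outside an agreed code list instead of storing free text. A minimal sketch — the code lists and rules here are illustrative, not Primentra's actual validation API:

```python
ISO_COUNTRIES = {"DE", "NL", "US", "GB"}   # subset of ISO 3166-1 alpha-2, for illustration
VENDOR_TYPES = {"GOODS", "SERVICES"}       # hypothetical governed code list

def validate_supplier(record):
    """Governed-attribute checks the MDM layer would enforce instead
    of accepting whatever a data steward types in."""
    errors = []
    if record.get("country") not in ISO_COUNTRIES:
        errors.append("country must be an ISO 3166-1 alpha-2 code")
    if record.get("vendor_type") not in VENDOR_TYPES:
        errors.append("vendor_type must be a governed code")
    return errors

ok = validate_supplier({"country": "DE", "vendor_type": "GOODS"})
bad = validate_supplier({"country": "Germany", "vendor_type": "misc"})
```

If the ERP needs "DE", govern "DE" — do not plan a cleanup step that turns "Germany" into "DE" on the way out.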
Agree on the integration protocol before consumers start building
A data warehouse team planning for direct SQL access will not appreciate learning two weeks before go-live that the new system exposes REST JSON. Integration is not just a technical choice — it determines what every downstream team needs to build on their side.
Document your subscriber views before you turn them off
Before switching off MDS, document every consumer: which systems query which views, which columns they use, which entity version and approval status filters they apply, and the job or process name that runs the query. This becomes the integration contract for the new system. Without it, you will not know which API endpoints to build, and you will miss consumers until they start failing silently.
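That inventory is worth capturing as structured data rather than a wiki page, so it can be diffed against what the new system actually serves. One possible shape — the field names and example values are suggestions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ViewConsumer:
    """One row of the integration contract: who consumes which
    subscriber view, how, and from where."""
    system: str
    view: str
    columns: list
    filters: str
    job: str

inventory = [
    ViewConsumer(
        system="DataWarehouse",
        view="mdm.viw_SYSTEM_1_SUPPLIER_1",
        columns=["Code", "Name", "Country"],
        filters="VersionName = 'VERSION_1' AND ValidationStatus = 'Validated'",
        job="nightly_dwh_load (SQL Agent)",
    ),
]
```

Each row maps directly to an API endpoint, field list, and filter the replacement system must provide before the view can be switched off.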
I have seen migrations where nobody could answer which systems were still using which views. The MDS database had 18 subscriber views defined. Three had not been queried in over a year. Four were queried daily by undocumented jobs. Turning off MDS caused those four jobs to fail. The failures did not surface immediately. The jobs ran overnight, and the data warehouse team noticed wrong numbers three days later.
Document first. Migrate second.
Frequently asked questions
What were MDS subscriber views?
Subscriber views were SQL views that Microsoft MDS created automatically in the mdm schema of the MDS SQL Server database. You configured which entities and versions to expose, and MDS generated named views — for example, mdm.viw_SYSTEM_1_SUPPLIER_1. Downstream systems connected directly to the MDS database and queried these views via SQL, usually from a scheduled job or ETL process. They were removed when Microsoft removed MDS from SQL Server 2025.
What is the best approach for distributing master data to downstream systems?
REST API is the right default for new implementations. It works with any downstream system that can make HTTP calls — which includes every modern ERP, BI tool, data warehouse, and integration platform. It supports authentication, handles incremental fetches via query parameters, and does not require direct database access. Subscription views work well when the downstream system is on the same SQL Server network. Data dumps via CSV or SFTP work when the downstream system has no API support.
How does incremental master data sync work with a REST API?
You add a query parameter to the GET endpoint — typically updatedSince or modifiedAfter — and the API returns only records modified after that timestamp. The downstream system stores the timestamp of its last successful sync and passes it on the next call. For example: GET /api/v1/entities/suppliers/records?updatedSince=2026-04-01T00:00:00Z returns only supplier records modified after April 1. This avoids full table scans and keeps sync jobs fast even for large datasets.
Do we need real-time distribution for master data?
Usually not. Master data — suppliers, products, cost centers — changes at a business pace. A new supplier record gets approved once, then may not change for months. A scheduled REST API sync every hour covers most mid-market use cases. Where near-real-time matters — a product price change that must reach the e-commerce front-end within seconds — you need webhooks or a message queue. But that is a minority of MDM scenarios.
What should I document before migrating off MDS subscriber views?
Before switching off subscriber views, document every consumer: which systems query which views, which columns they use, any filters applied (entity version, approval status), and the job or process name that runs the query. This becomes your integration contract for the new system. Without it, you will not know which API endpoints to build, and you will miss consumers until they start failing silently.
Need a REST API for your master data?
Primentra runs on SQL Server, deploys in a day, and ships a full REST API for distributing your golden records to downstream systems. Authentication, incremental sync, and per-consumer API keys included.