When a semantic model starts creating friction — slow reports, confusing measures, hard-to-trace calculations — the instinct is often to rebuild from scratch.
That is sometimes the right call. More often than teams expect, a targeted patch solves the immediate problem faster and with less risk.
The question is: how do you tell which situation you are in?
This is the question most teams never stop to answer explicitly. They patch by default because it feels safer, and the model slowly drifts further from a clean state. Or they rebuild because the current model “feels messy,” and reset stakeholder trust in the process. Both failures are avoidable. What’s needed is a framework, not an instinct.
Why the decision costs more than it looks
The reason this decision matters more than most model-level calls: both modes are expensive, and the costs are asymmetric.
A patch looks cheap — a few hours, a few DAX tweaks, no new sign-offs. But if the structural issue is real, patches compound. Each one makes the next one harder. The model ends up with more special cases, more workarounds, more one-off USERELATIONSHIP pins, and a slightly worse answer to the question “what does this measure really mean?”
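To make that concrete, here is a sketch of what a "one-off pin" typically looks like in DAX. The tables and columns are hypothetical; the pattern is the point: one measure quietly overriding the model's default filter path.

```dax
-- Hypothetical example: the active relationship runs through order date,
-- but this one measure needs ship date, so it pins the inactive path.
Shipped Revenue =
CALCULATE (
    SUM ( Sales[Amount] ),
    USERELATIONSHIP ( Sales[ShipDateKey], 'Date'[DateKey] )
)
```

One of these is a reasonable patch. A dozen of them, each pinning a different path, is the model telling you its default shape no longer matches the questions being asked.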
A redesign looks safe — everything is clean, everything is deliberate, nothing is carrying history. But redesigns reset consumer trust. Every number gets re-validated. Every sign-off gets re-done. Every stakeholder asks “why is this different from last month?” — and they are not wrong to ask. If the rebuild happens when a patch would have worked, the cost of that re-validation is pure overhead.
The decision, in other words, is not between “effort now” and “effort later.” It is between two different kinds of effort with two different kinds of risk.
Signs a patch is still viable
A patch works when the structural foundation is sound but the surface layer has accumulated noise. In practice, that looks like:
- Measure logic is correct but disorganized. Calculations return the right numbers, but naming is inconsistent, folders are messy, or documentation is missing. This is a cleanup, not a redesign.
- Relationships are stable. The star schema is intact. Tables join correctly. There is no circular dependency or ambiguous path. The model just needs pruning, not restructuring.
- Performance problems are localized. A few heavy visuals or one expensive measure dominate the load time. You can tune those without touching the rest.
- Consumers still trust the output. If stakeholders rely on the current numbers and the numbers are correct, preserving that trust is valuable. A full rebuild resets trust to zero.
- New requirements fit the existing shape. The next thing the business wants from this model extends the current grain or adds a dimension in an obvious place. The model can hold the new ask without contortions.
When these conditions hold, a targeted patch — renaming, reorganising display folders, tuning specific DAX, removing unused columns, adding a small calculated table — is lower-risk and delivers results faster.
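As an illustration of what "tuning specific DAX" can mean in practice (table and column names are hypothetical), the classic localized fix is replacing a full-table FILTER iteration with a plain column predicate, which often turns one expensive measure into a cheap one without touching anything else:

```dax
-- Before: FILTER iterates the whole fact table for every visual cell.
Returned Orders (slow) =
CALCULATE (
    COUNTROWS ( Sales ),
    FILTER ( Sales, Sales[Status] = "Returned" )
)

-- After: a plain column predicate, which the engine can push down
-- to the storage layer instead of scanning row by row.
Returned Orders =
CALCULATE (
    COUNTROWS ( Sales ),
    Sales[Status] = "Returned"
)
```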
Signs the model needs a structural redesign
A redesign becomes the right call when patching would leave the underlying problems intact. Common indicators:
- The grain is wrong. Tables are at the wrong level of detail, forcing complex workarounds in DAX. No amount of measure cleanup fixes a grain mismatch.
- Relationships require constant overrides. If you are regularly using `USERELATIONSHIP`, `CROSSFILTER`, or `TREATAS` to work around the default model shape, the schema itself needs rethinking (see the sketch after this list).
- Source tables have diverged from the model’s assumptions. Upstream schema changes broke assumptions the model was built on. Patching individual columns does not fix a structural mismatch between source and model.
- Multiple teams depend on conflicting interpretations. If different consumers expect different definitions from the same measure and the model cannot serve both without fragile branching logic, the model needs to be re-scoped — not patched.
- Performance is slow everywhere, not just in specific visuals. When every page and every slicer interaction is slow, the problem is usually at the storage or relationship layer, not at the DAX layer.
- Nobody can explain the whole model in one sitting. If the model has become opaque even to the team that owns it, patches will not restore legibility. Legibility is structural.
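Here is a minimal sketch of the override smell, with hypothetical tables: a Budget table that has no physical relationship to the Region dimension, so measures have to manufacture a virtual one at query time.

```dax
-- TREATAS builds the missing relationship on the fly: take the region
-- names visible in the current filter context and apply them to Budget.
Budget Amount =
CALCULATE (
    SUM ( Budget[Amount] ),
    TREATAS ( VALUES ( 'Region'[RegionName] ), Budget[RegionName] )
)
```

As a one-off, this is a legitimate technique. When most measures in the model carry a block like this, the relationships the measures keep rebuilding are the ones the schema should have had.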
A decision matrix
The single sharpest tool here is a short matrix. Walk the current model down the left-hand column. If most rows land in the patch column, patch. If most land in the redesign column, accept the redesign cost. If they split, the model is in the grey zone — and the worked example below is for you.
| Dimension | Patch is viable | Redesign is honest |
|---|---|---|
| Grain of fact tables | Fact grain matches the questions being asked | Fact grain forces workarounds in every measure |
| Relationship shape | Star schema intact; joins are the default active path | Regular use of USERELATIONSHIP, TREATAS, bi-directional filters to force answers |
| Measure correctness | Numbers are correct; cleanup is about naming and organisation | Measures produce different answers in different visuals without clear reason |
| Source stability | Upstream schema is stable; column meanings haven’t drifted | Upstream has changed; model is patched to hide the drift |
| Stakeholder trust | Stakeholders rely on the numbers; a reset is costly | Stakeholders already distrust the numbers; reset cost is lower |
| Performance profile | Hot spots are localised to specific visuals or measures | Performance is poor across the whole model, not just specific paths |
| Next requirement fit | Fits the existing grain with additive changes | Requires a new grain, a new table family, or a new fact at a different resolution |
| Documentation feasibility | The model can be explained to a new reviewer in an hour | Nobody on the team can explain the full model end to end |
This isn’t a scoring exercise — there’s no threshold that resolves the decision. But if the model fails on grain, relationships, or source stability, the other rows rarely redeem it. Those three are structural. Everything below them is surface.
A worked example
Consider a composite drawn from real cross-sell dashboard work (anonymised): a customer-360 Power BI model that started with three source tables — customer profile, product holdings, insurance — and was extended over a year of stakeholder feedback. Fifteen or so iterations. Multiple stakeholder groups now depend on it. Two symptoms have shown up:
- A new request — “show me cross-sell propensity by channel” — requires a filter path the current model doesn’t support.
- The weekly refresh has grown from four minutes to eighteen.
Which is this?
Walk the matrix.
- Grain: Fact tables are at customer-month level. The new ask wants customer-channel-month, which the current grain doesn’t hold. → Redesign signal.
- Relationships: Currently two inactive relationships pinned with `USERELATIONSHIP` in a handful of measures. Not pervasive. → Patch signal.
- Measure correctness: Numbers reconcile to source. No known drift. → Patch signal.
- Source stability: Stable for the last six months. → Patch signal.
- Stakeholder trust: Strong; three teams now use it in Monday reviews. → Patch signal (high reset cost).
- Performance: Refresh regression is in one fact table’s historical snapshot. Visual render times are fine. → Patch signal (localised).
- Next requirement fit: The channel ask requires a grain that doesn’t exist today. → Redesign signal.
- Documentation: Team can explain the model; one-hour walkthrough. → Patch signal.
Verdict: mostly patch-viable, but with a structural gap on grain for the specific new ask.
The honest move here is a scoped extension, not a full redesign: add a channel fact at the right grain, wire it into the existing model through the existing customer dimension, and keep the old measures untouched. The refresh regression is a separate, localised fix. Nothing about the existing sign-offs needs to reset.
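Under the assumptions of this composite, the scoped extension can stay small. Everything below is illustrative: the new fact, its columns, and the propensity definition are stand-ins, not a prescription.

```dax
-- Hypothetical new fact 'ChannelActivity' at customer-channel-month grain,
-- related to the existing Customer dimension and a new Channel dimension.
-- Existing measures never reference it, so existing sign-offs stand.
Cross-Sell Propensity =
DIVIDE (
    CALCULATE (
        DISTINCTCOUNT ( ChannelActivity[CustomerKey] ),
        ChannelActivity[ConvertedFlag] = TRUE ()
    ),
    DISTINCTCOUNT ( ChannelActivity[CustomerKey] )
)
```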
The mistake would be to see one redesign signal (grain) and rebuild the whole thing. The matrix is telling you the model is fine for the questions it currently answers — the new question needs a new artefact, not a new model.
Failure modes that catch teams out
Four anti-patterns show up repeatedly. Name them, and you’ll recognise them earlier.
1. The Drift
The team patches by default for a year. Each patch makes sense in isolation. But nobody has asked “is the patch still the cheapest option?” in a while. The model now has 40 measures that start with `_hidden_` and three tables nobody can explain. Redesign cost is now very high because of accumulated stakeholder dependencies on those hidden measures. The only way out is explicit: spend a week mapping the model’s current shape, decide consciously, and commit.
2. The Rebuild Trap
A new team member arrives, reads the model, and feels it is “a mess.” A rebuild gets scoped. Six weeks later the new model is clean but has 80% of the old measures and 40% of the old business alignment. Trust resets. Numbers are “different” in subtle ways that turn into a month of reconciliation. The patch would have been three days. The rule: aesthetics are not a technical justification. A messy but correct model beats a clean but unvalidated one.
3. The Fork
Instead of choosing, the team keeps the old model and starts a new one “for the next use case.” Both are maintained. The old one accumulates tech debt because it’s being deprecated “soon.” The new one accumulates scope because the old one is still there to catch anything that doesn’t fit. Six months later there are two models, both half-maintained, and the team is worse off than before. Forks are a deferred decision, not a solution.
4. The Silent Redesign
The team calls it a patch but quietly rewrites half the model under the hood. No new sign-offs. No stakeholder communication. Numbers shift subtly. Three weeks later a finance stakeholder notices a number is off by 0.4% — and trust is damaged for the whole model, not just the patched part. If the work changes how numbers are calculated, it is a redesign regardless of how the effort is labelled. Treat it that way.
Before you commit to either
One structural habit makes both modes safer: get the current model into source control before you start.
PBIP, TMDL, and PBIR make this practical. Once the model lives as plain text in Git, both options get cheaper:
- Patches become a reviewable diff instead of a saved `.pbix` file (sketched below).
- Redesigns can be compared to the original measure by measure, which shortens validation.
- If a patch ends up being a silent redesign, the diff makes that visible before deploy, not after.
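To make the first point concrete: once the model is serialized as TMDL, a measure-level patch shows up in review as a one-line DAX change rather than an opaque binary. The file content here is illustrative:

```diff
-measure 'Returned Orders' = CALCULATE ( COUNTROWS ( Sales ), FILTER ( Sales, Sales[Status] = "Returned" ) )
+measure 'Returned Orders' = CALCULATE ( COUNTROWS ( Sales ), Sales[Status] = "Returned" )
```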
If the model is not yet under source control, that’s the first week of work no matter which path you choose. It’s not overhead; it’s the foundation that lets the decision be honest.
What good looks like
A well-managed model goes through both modes over its lifecycle:
- Patches for surface cleanup, performance tuning, measure additions that fit the current grain.
- Scoped extensions for new requirements that need a new grain but don’t invalidate the existing shape.
- Redesigns when the structural shape no longer fits the business questions the model needs to answer.
The discipline is knowing which mode you are in — and being honest about when a patch has stopped being enough.
Checklist before your next decision
- Walk the matrix once. Note which rows land in patch vs. redesign.
- If the model fails on grain, relationships, or source stability, accept the redesign cost.
- If it only fails on the “next requirement fit” row, consider a scoped extension instead of a full redesign.
- Decide explicitly. Communicate the decision. Don’t drift.
- If the current model isn’t in source control yet, fix that first — it makes both paths safer.
The question was never really “patch or redesign?” It’s “what does this model need to be for the questions that will come over the next twelve months?” The matrix is the way to answer that question in the open, not in the comments of someone else’s PR.