Framework

The 5 E's — a working manual for Master Data Governance and Data Quality Management.

Engage, Evaluate, Evidence, Establish, Execute. Five phases I run inside every Diagnostic and Acceleration Sprint — framework-neutral and applied across the DAMA-DMBOK wheel: Data Governance operating models, policies and standards, Master & Reference Data, Stewardship and RACI, Golden Record / Mastering decisions, Metadata, and Data Quality. Most engagements touch four or five of those at once. What follows is the framework with real artefacts attached, not a methodology slide.

Where the 5 E's apply

Across the DAMA-DMBOK wheel — not just data quality

The 5 E's are framework-neutral. I've run them on operating-model redesigns, policy & standards rollouts, Master and Reference Data programmes, Golden Record / mastering decisions, metadata initiatives, S/4HANA migration prep, and yes — the data quality work the artefacts on this page lean toward.

Most engagements touch four or five of these knowledge areas at once. The DQ examples are over-represented below because that's where the most concrete tooling lives; the rest produce equally tangible artefacts — operating models, RACI matrices, policy libraries, mastering rulesets — just less screenshot-friendly ones.

For the curious: I'm one of roughly 250 DAMA CDMP® Master-certified Data Governance experts worldwide — verifiable certificate. The credential isn't why this works — but it's part of why I can run the framework with conviction across all of these areas, not just one.

Data Governance · Master & Reference Data · Data Quality · Data Architecture · Metadata Management · Stewardship & Operating Model · Policies & Standards · Golden Record / Mastering · Data Integration · S/4HANA Data Migration · Data Storage & Operations · Data Security & Privacy

Most-frequent engagement areas · the rest sit in the toolkit when needed

01

Engage — the strategic foundation

Every governance failure I've ever inherited started the same way: someone drew a target operating model before they understood the room. Engage is the critical alignment phase the rest of the framework stands on. I embed for two to four weeks, run six to eight stakeholder interviews, walk the actual data flows, and inventory who's responsible for what — formally and informally.

Governance & Architecture angle

  • Domain inventory: customer, vendor, material, finance, employee
  • System-of-record map per domain (often disputed)
  • Existing policies, standards, decision-right baselines

Stewardship & Quality angle

  • Existing roles: who owns, who stewards, who consumes
  • Source-system list — every ERP / CRM / DW per company code
  • Existing DQ rules — written down or carried in someone's head

What the phase produces

  • Stakeholder map
  • Data domain inventory
  • Source-system register
  • Existing-policy register
  • Engagement charter & scope memo

02

Evaluate — profiling the data, not the slides

Most maturity assessments rate process. I rate data. Same five-point scale (ad-hoc → managed → defined → measured → optimised), but the inputs are profiling outputs, not interview answers. I run SQL against every active source system, measure completeness, conformance, uniqueness, and cross-reference integrity per critical field, and only then map the findings against the governance maturity dimensions. This is where opinions stop and numbers start.
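
To make that concrete, here is a minimal sketch of one profiling pass. The names (a customer_master table with a tax_id field) and the format pattern are hypothetical stand-ins; the real rule library parameterises table, field, and pattern per source system.

    -- Per-field profiling: completeness, conformance, uniqueness for one
    -- critical field. customer_master / tax_id are illustrative names;
    -- the LIKE pattern is a placeholder for a real format rule.
    SELECT
        COUNT(*)                                    AS total_rows,
        SUM(CASE WHEN tax_id IS NULL
                 THEN 1 ELSE 0 END)                 AS null_count,       -- completeness
        SUM(CASE WHEN tax_id IS NOT NULL
                  AND tax_id NOT LIKE 'DE%'
                 THEN 1 ELSE 0 END)                 AS nonconforming,    -- format conformance
        COUNT(DISTINCT tax_id)                      AS distinct_values   -- uniqueness signal
    FROM customer_master;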

Governance & Mastering angle

  • Maturity heatmap across DAMA knowledge areas × N domains
  • Reference-data drift: do the lookup lists agree across systems?
  • Mastering readiness: golden-record candidates & survivorship gaps

Data Quality angle

  • Per-field profiling: NULL ratio, format conformance, distinct counts
  • Duplicate detection on natural keys (name + address blocks)
  • Orphan checks: transactions referencing missing master records (both sketched below)
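
A sketch of those last two checks, with illustrative table and column names (sales_transactions, customer_master, customer_no, name1, city); the real checks run per source system against its own schema.

    -- Orphan check: transactions pointing at customer numbers that do
    -- not exist in the master.
    SELECT t.document_no, t.customer_no
    FROM sales_transactions t
    LEFT JOIN customer_master c ON c.customer_no = t.customer_no
    WHERE c.customer_no IS NULL;

    -- Duplicate candidates on a natural-key block: the same normalised
    -- name + city appearing more than once.
    SELECT UPPER(TRIM(name1)) AS name_key,
           UPPER(TRIM(city))  AS city_key,
           COUNT(*)           AS candidate_count
    FROM customer_master
    GROUP BY UPPER(TRIM(name1)), UPPER(TRIM(city))
    HAVING COUNT(*) > 1;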

What the phase produces

  • Maturity heatmap
  • DQ profiling report per location
  • Reference-data drift report
  • Mastering readiness assessment
  • Prioritised gap list · quick-win shortlist

03

Evidence — numbers your CFO will defend

By Evidence, the work shifts from "is this bad?" to "how bad, in money?" I take the profiling output from Evaluate and convert it into something a CFO can defend in a steering committee — compliance rate per location, cost-of-poor-data attached to specific failure modes, and an ROI projection on fixing them. The number doesn't have to be perfect; it has to be defensible. Green ≥ 90%, yellow ≥ 70%, red below. Those are the thresholds I default to, and the ones the dashboard colour-codes against.
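
In query form, the scoring step looks roughly like the sketch below, where dq_results is a hypothetical table holding one pass/fail row per record per rule run; the thresholds are the defaults above.

    -- Compliance rate per location, bucketed into the default RAG bands
    -- (green >= 90%, yellow >= 70%, red below).
    SELECT location,
           compliance_pct,
           CASE WHEN compliance_pct >= 90 THEN 'GREEN'
                WHEN compliance_pct >= 70 THEN 'YELLOW'
                ELSE 'RED'
           END AS rag_status
    FROM (
        SELECT location,
               100.0 * SUM(CASE WHEN passed = 1 THEN 1 ELSE 0 END)
                     / COUNT(*) AS compliance_pct
        FROM dq_results
        GROUP BY location
    ) per_location;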

Governance & Maturity angle

  • DAMA-style maturity heatmap — knowledge area × current vs target
  • Cost-of-poor-data: returns, write-offs, blocked invoices, migration risk
  • Executive scorecard: one-page readout for the sponsor

Data Quality & Operations angle

  • Compliance rate per domain × per location, colour-coded
  • Issue count per rule, per location, per run-date
  • Top-10 worst-offending records per domain — actionable list

What the phase produces

  • Executive scorecard
  • DAMA maturity heatmap
  • Cost-of-poor-data calc
  • ROI projection on remediation
  • Steering-committee deck

04

Establish — building the running gear

This is the hands-on phase most engagements skip. I stand up the operating model — domain owners, stewards, governance council cadence, RACI for change requests — and the rule library: the SQL profile checks, the DQ scoring logic, and the orchestration that runs them on schedule. Most clients leave Establish with a working toolset and a charter, not a binder. The aim is a self-sustaining capability that outlasts my engagement, not a deliverable that proves I was there.

Operating Model & Stewardship angle

  • Data domain operating model + RACI matrix per activity
  • Steward charter, council ToR, change-request workflow
  • Business glossary tied to source-system fields
  • Policy & standards library — written, versioned, signed-off

Mastering & Quality angle

  • Golden-record rules — match keys, survivorship logic, merge protocol (survivorship sketched below)
  • DQ rule library — versioned, parameterised by location
  • Scheduled refresh: VBA SQL Runner or Optimise live agent
  • Issue-routing: red items land in the steward's queue
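
One common shape for the survivorship step, sketched below: rank the members of each match group by source-system priority, break ties on recency, and keep rank 1 as the golden record. The matched_customers table, match_group_id, and source_priority column are illustrative, not a fixed schema.

    -- Survivorship sketch: pick one golden record per match group.
    SELECT *
    FROM (
        SELECT m.*,
               ROW_NUMBER() OVER (
                   PARTITION BY m.match_group_id
                   ORDER BY m.source_priority ASC,   -- e.g. ERP outranks CRM
                            m.last_updated DESC      -- most recent wins ties
               ) AS survivor_rank
        FROM matched_customers m
    ) ranked
    WHERE survivor_rank = 1;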

What the phase produces

  • Operating model + RACI
  • Policy & standards pack
  • Golden-record ruleset
  • DQ rule library
  • Scheduled refresh job
  • Steward training materials

05

Execute — running it after I leave

Execute is what makes the difference between a deliverable and a capability. I stay on in a Retainer or Fractional shape long enough to see the first three monthly cycles land — DQ refreshes running, stewards triaging the red queue, the council reviewing trend lines instead of opinions. The goal isn't to keep me in the room; it's to make the room run without me. Most engagements close out cleanly. A few graduate into a Retainer. Either is a good outcome.
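
The trend line itself is nothing exotic. Reusing the hypothetical dq_results table from Evidence, the monthly view the council reads is roughly:

    -- Monthly compliance trend per domain: the line the council reviews
    -- instead of opinions. Assumes a run_month column on each result row.
    SELECT domain,
           run_month,
           100.0 * SUM(CASE WHEN passed = 1 THEN 1 ELSE 0 END)
                 / COUNT(*) AS compliance_pct
    FROM dq_results
    GROUP BY domain, run_month
    ORDER BY domain, run_month;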

Council & Cadence angle

  • Monthly council with the trend dashboard as the agenda
  • Quarterly stewardship review — promotions, escalations, policy updates
  • Half-yearly DAMA maturity re-score against the heatmap baseline

Mastering & Quality Operations angle

  • Scheduled refresh via VBA SQL Runner (on-prem) or Optimise (cloud)
  • Mastering rule changes versioned and reviewed quarterly
  • Issue queue triaged by domain stewards, not me

What the phase produces

  • Monthly compliance trend
  • Stewardship cadence
  • Self-running rule library
  • Half-yearly maturity re-score
  • Six-month closeout report

Tools woven in

Two ways to run the rule library:
the spreadsheet or the SaaS

The 5 E's framework is tool-neutral, but I've shipped the same DQ pattern in two forms. Both produce the same artefacts — the SaaS version just runs continuously instead of on a button-click. Use either, both, or neither: the on-prem dashboard ships as part of the engagement at no extra cost; Optimise is a standalone SaaS subscription, opt-in.

On-prem · button-click

The DQ KPI Dashboard

Excel + VBA + ODBC. One workbook, a Config sheet that lists every location, two SQL scripts per domain (Script_01 Overview, Script_03 FullScope), and a "Run All KPIs" button that paints the result green/yellow/red and exports to PowerPoint for the steering committee.
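
For a feel of what one of those scripts might look like, here is an illustrative overview-style query with a {{LOCATION}} placeholder the workbook could substitute from the Config sheet before each ODBC call. The placeholder convention and the names are assumptions, not the shipped scripts.

    -- Hypothetical overview query: headline KPIs for one domain at one
    -- location. {{LOCATION}} is swapped in per Config-sheet row.
    SELECT 'customer' AS domain,
           COUNT(*)   AS total_records,
           SUM(CASE WHEN tax_id IS NULL THEN 1 ELSE 0 END) AS missing_tax_id
    FROM customer_master
    WHERE company_code = '{{LOCATION}}';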

Used inside Establish and Execute when the data has to stay on the client's network.

Discuss a deployment →

SaaS · live

Optimise — Data Quality SaaS

The cloud-native evolution of the same workflow. Automated DQ profiling, real-time monitoring, rule-based scoring, dashboards that refresh continuously instead of on demand. Used inside several engagements to give the client live visibility between programme cycles.

Used inside Evaluate, Establish, and Execute when the client wants always-on visibility.

Explore Optimise →

Where to take it next

Want this run
against your data?

The 5 E's is the framework I run inside every Diagnostic and Acceleration Sprint — the Diagnostic compresses Engage, Evaluate, and Evidence into 2–4 weeks; the Sprint adds Establish and the first cycles of Execute over 8–12 weeks. The easiest next step is a short call.

Book a Diagnostic call · See all four offers