The Golden Rule of Enterprise AI: Golden Data In, Golden Outcomes Out

*A data leader and recovering geologist reviews Nate B. Jones's Enterprise AI Deployment Layer, and finds the bedrock everyone keeps skipping.*

John Hamlin, Founder, Truegility | Chief Operating Officer, Partner Projects | Process Excellence, Program Management, and Delivery Leadership

Source article: The Enterprise AI Deployment Layer: Why Model Access Isn't Enough by Nate B. Jones

I started my career as a geologist. We had a saying in the field: the assay is only as good as the sample. Twenty years later, working with AI agents inside finance and operations workflows, the rule has not changed. Garbage rock yields garbage numbers. Golden data yields golden outcomes. That is the rule.

Nate Jones argues that the strategic layer in enterprise AI has shifted from the model to the deployment work around it. He is right. Anthropic and OpenAI both moved on the same day to build that layer. Private equity wrote the checks. The mid-market is the target. The bet runs like this: six things must be true before AI changes a workflow, and most companies have built two.

I agree with the argument. I want to extend it.

Jones lists the six components of implementation architecture: workflow design, data access, authority, evaluation, audit trails, and recovery. Read that list as a practitioner who has spent twenty years inside finance, operations, and analytics, and one item carries the others. Data access is not one of six. It forms the bedrock. Every other component fails without it.

The model cannot fix the data

A model that reads stale records produces confident, wrong answers. A model that cannot tell the authoritative customer from the duplicate routes the refund to the wrong account. A model that lacks row-level permissions reads what it should not read and writes what it should not write. None of these failures show up in a demo. All of them show up in production, on the day the controller, the auditor, or the customer notices.

The harder truth: most mid-market companies do not have a golden record of their customers, products, vendors, employees, or chart of accounts. They have several versions of each, scattered across the ERP, the CRM, the billing system, the data warehouse, and the spreadsheets the finance team actually trusts. Humans reconcile the differences quietly, every day, by judgment. The model has no judgment. It has only the data you give it.
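To make that quiet reconciliation concrete, here is a minimal, hypothetical sketch. The system names, fields, and survivorship rules below are illustrative assumptions, not anything from the article; the point is only that a golden record exists when the rules humans apply by judgment are written down and applied the same way every time:

```python
# Hypothetical example: the same customer as three systems record it.
records = [
    {"source": "crm",     "name": "Acme Corp",        "email": "ap@acme.com",      "updated": "2025-11-02"},
    {"source": "erp",     "name": "ACME Corporation", "email": "",                 "updated": "2026-01-15"},
    {"source": "billing", "name": "Acme Corp.",       "email": "billing@acme.com", "updated": "2024-06-30"},
]

# Explicit survivorship rules replace quiet human judgment:
# trust the ERP for the legal name; otherwise take the most recent non-empty value.
def golden_record(records):
    by_recency = sorted(records, key=lambda r: r["updated"], reverse=True)
    erp = next((r for r in records if r["source"] == "erp"), None)
    return {
        "name": erp["name"] if erp else by_recency[0]["name"],
        "email": next(r["email"] for r in by_recency if r["email"]),
        "lineage": [r["source"] for r in records],  # which systems fed this record
    }

print(golden_record(records))
# → {'name': 'ACME Corporation', 'email': 'ap@acme.com', 'lineage': ['crm', 'erp', 'billing']}
```

Ten lines of rules, and the judgment is now inspectable: anyone, including an auditor or a model, can see why the record says what it says.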

This is the part the deployment conversation tends to skip. The new ventures will install agents into companies that have never agreed on the definition of a customer. The agent will run. The numbers will move. The auditor will arrive.

Core business data is the prerequisite, not the byproduct

Call it master data, golden data, ground truth, or core business data. Geologists favor "ground truth"; data leaders use the rest. Same idea, different decade. The name matters less than the discipline. A company that wants AI inside its operating processes needs three things in place first.

A governed source of truth for the entities the business runs on. Customers, products, accounts, vendors, employees, locations. One record per entity, with a clear owner, a clear definition, and a clear lineage.

A set of hierarchies and rules that match how the business actually decides. How revenue rolls up. How the chart of accounts maps to management reporting. Which products belong to which lines. Which customers belong to which segments. These are not technical artifacts. They are the company's operating logic, written down.

A validation layer that catches the bad record before the agent acts on it. Not after.
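One way to picture that layer, as a hypothetical sketch rather than anyone's actual implementation (the entity fields and checks here are illustrative assumptions): a gate the agent must clear before it is allowed to act, with failures routed to a human steward instead of into the ledger.

```python
# Hypothetical validation gate: the agent may only act on a record that
# passes every check. The checks are illustrative, not exhaustive.
def validate_customer(record, known_ids):
    errors = []
    if not record.get("customer_id"):
        errors.append("missing customer_id")
    elif known_ids.count(record["customer_id"]) > 1:
        errors.append("duplicate customer_id across systems")
    if not record.get("owner"):
        errors.append("no named data owner")
    return errors

def agent_act(record, known_ids, action):
    errors = validate_customer(record, known_ids)
    if errors:
        # Stop before the action, not after: route to a human steward.
        return {"status": "blocked", "errors": errors}
    return {"status": "done", "result": action(record)}

record = {"customer_id": "C-100", "owner": ""}
result = agent_act(record, ["C-100", "C-100"],
                   lambda r: f"refund issued to {r['customer_id']}")
print(result)
# → {'status': 'blocked', 'errors': ['duplicate customer_id across systems', 'no named data owner']}
```

The design choice that matters is the ordering: validation sits in front of the action, so a duplicated customer blocks the refund instead of misrouting it.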

Without these, the implementation architecture rests on sand. With them, the model has a chance.

What this means for the executive

Jones tells buyers to stop treating AI budgets as a model subscription. I would go further. Before you fund the agent, fund the data foundation it will stand on. The order matters. A KYC agent built on a duplicated customer master will produce duplicated KYC decisions. A month-end close agent built on an unreconciled chart of accounts will close the wrong books faster than before. A procurement agent built on a vendor list with three versions of the same supplier will commit the company to spend it cannot track.

The work is not glamorous. It rarely produces a demo. It produces something more valuable: a system the model can trust, and an auditor can defend.

For executives running mid-market companies, three questions are worth asking before the next AI pilot.

  • What is our golden record for the entity this workflow depends on, and who owns it? If the answer points to "the data warehouse" or "the ERP," dig further. Ownership belongs to a person, not a system.
  • What happens when the model reads a record that two systems disagree about? If no one has answered that question, the pilot is not ready.
  • Who decides when the definition changes? Business rules evolve. The segmentation a sales leader wants in Q1 may not match the segmentation finance wants in Q4. Someone has to govern the change, or the model will drift with it.

The implementation layer needs a data layer

Jones is right that the implementation layer is becoming the most important part of the enterprise AI stack. I would add one line to his conclusion: the implementation layer needs a data layer underneath it that is governed, validated, and owned. Anthropic and OpenAI can build the deployment capacity. The labs can install the agents. The PE firms can push the playbooks across portfolios. None of that work pays off in a company that has not decided what its own data means.

The companies that will pull ahead in the next phase of enterprise AI are not the ones with the best model access. They are the ones that already know which record is true, which hierarchy is current, and which rule applies. Everything else, including the agent, rests on that foundation.

The model is the prospector. The data is the vein. Map the vein before you sink the shaft.


About the source

This article reviews and extends The Enterprise AI Deployment Layer: Why Model Access Isn't Enough by Nate B. Jones, published May 14, 2026 on Nate's Substack. Nate writes and podcasts about how enterprises adopt and operate AI in production, with a focus on agentic systems, deployment architecture, and the practical work that separates pilots from operating change. His Substack publishes regular executive briefings, prompt kits, and implementation audits for builders, buyers, and operators working at the front edge of enterprise AI.

About the author

John Hamlin is the founder of Truegility, a governed data platform built on Microsoft Fabric and Power BI that delivers master data management, governed hierarchies, and validation for mid-market companies. He also serves as Chief Operating Officer, Partner Projects, leading process excellence, program management, and delivery for PE-backed engagements. He works with operating partners, CFOs, and data leaders on the data foundations that make AI deployment defensible.