Context Beats Data: Why a Finding Without Owner, Process, and Entity Is Just Noise
The first time I "delivered" a finding to a CEO, she handed it back. I had given her data. I had not given her context. And without context, the data was unactionable — which is to say, it was noise. What cartographers figured out 500 years before we did.
The first time I "delivered" a finding to a CEO, she handed it back.
This was about a year into consulting. I'd done what I thought was good work — sampled, walked through, identified a gap in the entity-level controls of a regional subsidiary. I wrote it up clean. Two paragraphs of finding, one paragraph of recommendation. I was proud.
The CEO read it twice. Then she slid it back across the table.
"Aisha, I don't disagree with any of this. But I genuinely don't know what you want me to do with it. Which entity? Whose process? Who do I call?"
I had given her data. I had not given her context. And without the context, the data was unactionable — which is to say, it was noise.
I think about that meeting often. It was the moment I understood that everything we do in GRC is anchoring, and the data is the easy part.
What navigators figured out 300 years before us
The longitude problem nearly broke the British navy.
For most of the 17th and 18th centuries, ships routinely sank because captains could measure their latitude (with the sun and a quadrant, later a sextant) but had no reliable way to measure their longitude. They knew, in a real sense, where they were — but only on one axis. The other axis was a guess. Whole fleets ran aground because a "where" without a reference frame is not a location. It's a feeling.
Parliament eventually offered a prize — the Longitude Act of 1714, up to £20,000 — to anyone who could solve it. John Harrison spent forty years building marine chronometers to do it. Once his chronometers proved themselves, ships stopped sinking at the rate they had been.
What changed wasn't the data. Captains had always had data. What changed was the anchoring. A coordinate became a location.
GRC is in its longitude phase. We have data. We don't have anchoring. And our ships keep running aground.
The shape of the problem
Sit in any GRC tool today and you'll see thousands of rows.
- Risks, scored 1–25.
- Controls, marked design-effective / operating-effective.
- Findings, color-coded by severity.
- KRIs, trending up or down.
- Regulatory requirements, mapped to controls.
Each row is a coordinate. Each row also drifts loose, because no two teams agree on what the row connects to. Whose risk is that? The legal entity? The business unit? The process? The system? The control owner? The director? When you scroll through a thousand-row risk register at a global organization, you cannot tell — from the row alone — whether the risk belongs to a person who is responsible, or to a category that is convenient.
Categories are convenient. Categories do not call you when things break.
In the board meetings I've sat in, the moment a question gets specific — "who owns this risk, and what process produces it?" — the team typically spends ninety seconds rooting around in adjacent systems. Sometimes they never find the answer. Sometimes they find three different answers from three different sources.
This is the audit equivalent of latitude without longitude. We feel where we are. We are not, in any operational sense, located.
What anchoring actually means
The fix is structural, not analytical. You don't fix a longitude problem by sampling more carefully — you fix it by adopting a reference frame.
In modern GRC, the reference frame is an organizational reference model. It's the same idea, regardless of vendor:
- An Entity taxonomy that mirrors how the legal and operational structure really works.
- A Process taxonomy that names every process at a granularity coarse enough to argue with.
- An Owner model that names actual humans, not roles in the abstract.
- A connecting layer that anchors every risk, control, finding, KRI, regulatory requirement, and remediation step to a specific (Entity, Process, Owner) tuple.
Once that exists, every row in every tool has a coordinate that means something.
"Risk #4172 — payment reconciliation latency — owned by Marcus Yi at Singapore Treasury, Process P-117 (T+1 settlement)."
Now I can call Marcus. Now I can find P-117's runbook. Now I can compare this row to the seven other rows in P-117 and see whether we have a pattern.
If you cannot do that from a single row in your GRC tool, you do not have GRC. You have a database.
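To make the tuple concrete, here is a minimal sketch of what an anchored row could look like, assuming a simple record model; the class names and fields are illustrative, not any particular vendor's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Anchor:
    """The (Entity, Process, Owner) tuple every artifact carries."""
    entity: str   # a node in the entity taxonomy, e.g. "Singapore Treasury"
    process: str  # a named process, e.g. "P-117 (T+1 settlement)"
    owner: str    # an actual human, e.g. "Marcus Yi"

@dataclass
class Risk:
    risk_id: str
    title: str
    anchor: Anchor  # required, not an optional tag: no anchor, no row

# Risk #4172 from the example above, expressed as an anchored record.
risk_4172 = Risk(
    risk_id="4172",
    title="Payment reconciliation latency",
    anchor=Anchor(entity="Singapore Treasury", process="P-117", owner="Marcus Yi"),
)

print(f"Call {risk_4172.anchor.owner} about {risk_4172.anchor.process} "
      f"at {risk_4172.anchor.entity}")
```

The design point is only that the anchor is a required field rather than a label someone may or may not fill in: a row without it cannot exist in the first place.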
What it changes downstream
Anchoring sounds bureaucratic until you watch what happens when you actually do it. Three things change immediately.
1. Findings stop being orphans. Every finding has a default route to a real human, a real process, and a real entity-level control owner. The CAE no longer has to choreograph the handoff for each issue. The system does it.
2. Aggregation finally works. Roll-up by entity is meaningful, because the entities are real. Roll-up by process is meaningful, because the processes are named. You can finally say, with a straight face, "Singapore Treasury has 14 open findings, all from P-117 — we have a process problem, not a control problem." Before anchoring, you couldn't have made either claim. (A small roll-up sketch follows below.)
3. AI agents become useful. This is the underappreciated one. Every agent in a modern GRC platform — RiskSentinel, EvidenceHunter, RegMapper — depends on context to operate. Give an agent a finding without an Entity / Process / Owner, and the best it can do is summarize. Give it a finding with full anchoring, and it can propose a control redesign, surface related findings across other entities, draft the remediation memo, and route the issue. Agents do not invent context. They consume it.
If you are wondering why your AI features feel disappointing, the answer is almost always: your data is not anchored.
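Here is the roll-up sketch promised above, assuming findings are exported with the same anchor fields; the sample rows and field names are made up for illustration.

```python
from collections import Counter

# Hypothetical export of open findings, each row carrying its anchor fields.
open_findings = [
    {"id": "F-201", "entity": "Singapore Treasury", "process": "P-117"},
    {"id": "F-202", "entity": "Singapore Treasury", "process": "P-117"},
    {"id": "F-305", "entity": "UK Payments", "process": "P-042"},
]

# Roll-up by (entity, process) only means something because both are real, named things.
rollup = Counter((f["entity"], f["process"]) for f in open_findings)

for (entity, process), count in rollup.most_common():
    print(f"{entity} / {process}: {count} open finding(s)")
# With anchored data, "Singapore Treasury / P-117: 14" reads as a process
# problem, not a control problem. Without anchoring, the group-by keys don't exist.
```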
What this looks like on Monday
A reframe and a test.
The reframe is simple. Every artifact in your GRC system — every risk, every control, every finding, every KRI — should be evaluated by a single question:
"If a person who has never seen this artifact before opens it, can they tell what to do, and whom to call?"
If the answer is yes, the artifact is anchored. If the answer is no, the artifact is data without context, which is to say, noise.
The test is the artifact in front of you right now. Pull a random finding out of your issue tracker. Ask the question. Watch what happens.
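If you want to run the test across the whole register rather than one artifact at a time, a minimal sketch might look like this; the required fields and the sample finding are assumptions, not a prescribed schema.

```python
REQUIRED_ANCHOR_FIELDS = ("entity", "process", "owner")

def is_anchored(artifact: dict) -> bool:
    """The single question, mechanized: could a stranger tell whom to call?"""
    return all(artifact.get(field) for field in REQUIRED_ANCHOR_FIELDS)

# A random finding pulled from a (hypothetical) issue-tracker export.
finding = {
    "id": "F-417",
    "title": "Reconciliation gap in regional subsidiary",
    "entity": "Singapore Treasury",
    "process": "P-117",
    "owner": None,  # nobody to call -> noise, not a finding
}

missing = [field for field in REQUIRED_ANCHOR_FIELDS if not finding.get(field)]
print("anchored" if is_anchored(finding) else f"noise: missing {', '.join(missing)}")
```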
The first time most teams do this, they fail. Six months in, after the work of anchoring, every artifact passes. The GRC operation becomes — for the first time — located. The ships stop running aground.
The point
That's the work. That's the reason every essay in this series eventually comes back to context. The eight tracks of the album rest on top of this one. Without context, none of the others mean what they're supposed to mean.
You can have the best risk register in the world, the sharpest control library, the most rigorous testing methodology, and the cleverest AI agents — and if the rows aren't anchored to a real Entity, a real Process, and a real Owner, you are still navigating without longitude.
Fix that first. Everything else gets easier.
