Why we built the Trust Panel
By InternalWiki Team · 18 March 2026 · 5 min read
Most AI tools give you an answer and expect you to trust it. Type a question, get a response, move on. If you're using it to brainstorm blog post ideas, that's fine. If you're using it to confirm a parental leave entitlement before telling a new hire what they're owed, "just trust it" is not an acceptable answer.
The Trust Panel exists because the cost of a wrong answer in enterprise is real. Not theoretical. Real — in pounds, in legal exposure, in broken trust between employer and employee.
The problem with unsourced answers
Imagine an HR manager asks an AI assistant: "What's our parental leave policy?" The AI responds: "Employees are entitled to 12 weeks of paid parental leave." The HR manager tells the new hire. Six months later, the employee discovers they were actually entitled to 16 weeks — the policy was updated in January 2026, but the AI was drawing from a 2022 version of the handbook.
Who's at fault? The AI didn't flag that it was citing an outdated source. The HR manager had no way to check. The new hire lost four weeks of leave they were entitled to. This is what happens when answers arrive without sources.
The real cost of wrong answers
That scenario plays out with consequences that compound. The HR manager told the new hire 12 weeks. The employee plans their life around it — childcare arrangements, their partner's leave schedule, finances for the gap period. When they discover they were owed 16 weeks, it isn't just four lost weeks. It's a formal grievance. It's legal exposure for the company. It's broken trust between employer and employee that doesn't repair easily. One wrong AI answer. Real consequences measured in tens of thousands of pounds — in legal fees, HR time, and remediation — before the reputational cost enters the picture.
This is why the Trust Panel exists. Not as a design feature. As an engineering requirement.
What the Trust Panel shows
Every answer in InternalWiki comes with three things: citations, confidence, and freshness.
Citations are claim-level, not document-level. Each factual statement maps to a specific passage in a specific document. Click the citation badge and you see the exact paragraph the AI used. One more click opens the original document in Google Drive, Slack, or SharePoint.
Confidence is a score from 0–100% based on how many sources agree, how relevant they are, and how directly they answer the question. But it's important to understand what this score actually represents. It is not the AI's certainty. It is evidence strength.
94% means multiple current sources corroborate the answer independently — the policy document, the employment contract, and the HR handbook all agree. 62% means the AI found something relevant but the evidence is weak — perhaps only one source, or a source that hasn't been updated recently. The Trust Panel at 62% tells you to verify with a human before acting. 31% means the AI is effectively guessing from tangentially related content. At this level, the Trust Panel shows a warning and suggests who in the organisation to speak to instead.
The score is calculated from four signals: the number of independent sources supporting the claim, the relevance scores from retrieval, the degree of cross-source agreement (contradictions lower the score and trigger a conflict flag), and the freshness status of the supporting documents.
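To make the combination of those four signals concrete, here is a minimal sketch in Python. The function name, the weights, and the diminishing-returns cap on source count are illustrative assumptions, not InternalWiki's actual formula; the point is only that the score is a weighted blend of evidence signals, not a model's self-reported certainty.

```python
def confidence_score(n_sources, relevance_scores, agreement, freshness_ratio):
    """Blend the four evidence signals into a 0-100 score.

    n_sources        -- number of independent supporting sources
    relevance_scores -- retrieval relevance per source, each in [0, 1]
    agreement        -- cross-source agreement in [0, 1]; contradictions push this down
    freshness_ratio  -- fraction of supporting documents still current, in [0, 1]

    NOTE: the weights below are hypothetical, for illustration only.
    """
    if not relevance_scores:
        return 0
    # More independent sources help, with diminishing returns after three.
    source_signal = min(n_sources, 3) / 3
    relevance_signal = sum(relevance_scores) / len(relevance_scores)
    score = 100 * (0.30 * source_signal
                   + 0.30 * relevance_signal
                   + 0.25 * agreement
                   + 0.15 * freshness_ratio)
    return round(score)

# Three current, relevant, agreeing sources -> high confidence
print(confidence_score(3, [0.9, 0.85, 0.92], agreement=1.0, freshness_ratio=1.0))  # 97
```

A single stale, weakly relevant source with partial agreement would land this sketch in the 30s — the band where, as described above, the panel warns and points you to a human instead.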
Freshness tells you whether each source is still current. Not by age — by content type. The system classifies every document into one of five freshness categories, each with different validity logic.
Evergreen documents — contracts, legal agreements, founding documents — are valid until explicitly superseded. A five-year-old employment contract showing a green “Still valid” badge is correct behaviour, not a bug.
Periodically updated documents — org charts, team directories, handbooks — have a natural refresh cycle. An org chart last updated four months ago might be stale; one updated last week is probably current.
Point-in-time documents — board minutes, decision records, announcements — describe a moment. They don't become less valid over time; they become historical.
Operationally live documents — sprint boards, project trackers, status pages — are only meaningful in their current state. Yesterday's version is already outdated.
Regulatory documents — compliance policies, data retention rules — are valid until the regulation they reference changes, regardless of when the document was written.
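The per-category validity logic above can be sketched as a single dispatch function. The category labels follow the five types just described; the status strings, the 90-day refresh threshold for periodic documents, and the one-day window for live documents are assumptions for illustration.

```python
from datetime import date, timedelta

def freshness_status(category, last_updated, superseded=False, today=None):
    """Return a freshness label for a document, per its content category.

    Thresholds and label names are illustrative assumptions.
    """
    today = today or date.today()
    age = today - last_updated
    if category == "evergreen":
        # Contracts, legal agreements: valid until explicitly superseded.
        return "superseded" if superseded else "current"
    if category == "periodic":
        # Org charts, handbooks: stale past an assumed 90-day refresh cycle.
        return "current" if age <= timedelta(days=90) else "stale"
    if category == "point_in_time":
        # Board minutes, announcements: never stale, just historical.
        return "historical"
    if category == "live":
        # Sprint boards, status pages: only the current state is meaningful.
        return "current" if age <= timedelta(days=1) else "outdated"
    if category == "regulatory":
        # Valid until the referenced regulation changes, regardless of age.
        return "superseded" if superseded else "current"
    raise ValueError(f"unknown category: {category}")

# A five-year-old contract is still "current" -- correct behaviour, not a bug.
print(freshness_status("evergreen", date(2021, 3, 1), today=date(2026, 3, 18)))
```

Note that age-based staleness only applies to two of the five categories; the others are governed by supersession or by the nature of the document itself, which is why a single "older than N days" rule would misclassify most of an enterprise corpus.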
The design principle
The Trust Panel is built on a simple principle: the AI should show its work, not ask for your trust.
When a financial analyst presents findings, they show their sources. When a lawyer cites precedent, they provide the case reference. When a doctor explains a diagnosis, they walk through the evidence. In every high-stakes profession, the answer comes with proof. AI in enterprise should work the same way.
// Every answer includes structured trust metadata
{
  "confidence": 0.94,
  "citations": [
    {
      "claim": "16 weeks of paid parental leave",
      "source": "Employee Handbook v4.2",
      "passage": "Section 7.1: All full-time employees...",
      "freshness": "current"
    }
  ]
}

We don't think trust is something you ask for. We think it's something you earn — by showing your sources, being honest about uncertainty, and making verification fast. That's what the Trust Panel does.
InternalWiki Team
Building the enterprise answer layer.