The Urgent Need for a Tiered Governance Model for Agentic AI
- Rajesh Kodichath
- Sep 2
- 2 min read
Updated: Sep 15
What happens when ungoverned AI agents cost millions, leak sensitive data, or misguide customers? With enterprises adopting AI agents at breakneck speed (Gartner predicts 33% of enterprise software will integrate agentic AI by 2028), we’re facing an imminent “agent sprawl” crisis, echoing the BI chaos of the 2000s.
Back then, non-technical users built BI dashboards freely, causing confusion, duplication, and mistrust. We helped enterprises solve this with a tiered governance/certification model, an approach that became widely popular:
Gold: Fully verified by IT, trusted data, validated logic.
Silver: Certified data, user-defined logic.
Bronze: Unverified—use at your own risk.
This balanced innovation with trust. Executives knew which reports to rely on for critical decisions.
Now history is repeating itself, but the stakes are far higher. Agents aren’t just showing data; they’re interpreting it, acting on it, and interacting with customers. Without proper governance, we have already seen the consequences:
Hallucinations: Air Canada’s chatbot promised a bereavement discount that didn’t exist, and a tribunal ordered the airline to honor it.
Data Leaks: Samsung engineers pasted proprietary source code into a public AI chatbot, exposing it because guardrails were missing.
Bad Decisions: Poorly tested agents fail critical tasks and disrupt operations; one benchmark found agents failing roughly 70% of office tasks (theregister.com).
Just as we did with BI, we urgently need a clear system that categorizes enterprise agents by trust level, and we are helping pioneer this approach with our enterprise clients. Here are the four critical dimensions we are starting with:
Data Quality: Trusted, verified sources.
Context & Grounding: Accurate, relevant knowledge.
Security Controls: Strict data access boundaries.
Testing & Auditing: Rigorous validation to prevent errors.
This approach balances speed and innovation with trust and accountability: business users can rapidly prototype agents, while executives and end users know exactly how much confidence to place in the outcomes.
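To make this concrete, here is a minimal sketch of how such a tier certification could be encoded as policy, assuming an enterprise keeps a registry of governance attributes for each agent. The AgentProfile fields and the certify rules below are illustrative assumptions, not a prescribed standard; each organization would define its own criteria and thresholds.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    GOLD = "gold"      # fully verified by IT: trusted data, validated logic
    SILVER = "silver"  # certified data, user-defined logic
    BRONZE = "bronze"  # unverified: use at your own risk


@dataclass
class AgentProfile:
    """Hypothetical governance record kept for each registered agent."""
    name: str
    data_sources_verified: bool   # Data Quality: trusted, verified sources
    grounding_reviewed: bool      # Context & Grounding: accurate, relevant knowledge
    access_scoped: bool           # Security Controls: strict data access boundaries
    audit_passed: bool            # Testing & Auditing: rigorous validation


def certify(agent: AgentProfile) -> Tier:
    """Map the four governance checks onto a trust tier (illustrative rules)."""
    checks = [
        agent.data_sources_verified,
        agent.grounding_reviewed,
        agent.access_scoped,
        agent.audit_passed,
    ]
    if all(checks):
        return Tier.GOLD
    # Silver mirrors the BI model: the data is certified even though the
    # agent's own logic has not been fully validated by IT.
    if agent.data_sources_verified and agent.access_scoped:
        return Tier.SILVER
    return Tier.BRONZE


# Example: certified data and scoped access, but no audit yet -> Silver.
support_bot = AgentProfile(
    name="refund-policy-assistant",
    data_sources_verified=True,
    grounding_reviewed=True,
    access_scoped=True,
    audit_passed=False,
)
print(certify(support_bot).value)  # -> "silver"
```

The point of the sketch is that tier assignment becomes a deterministic, auditable function of recorded evidence rather than a judgment call, which is what made the BI certification model trustworthy in the first place.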
Salesforce’s research on trusted AI and its acquisition of Informatica further highlight the need for robust governance. Let’s not let agent sprawl derail us.
How are you addressing this in your enterprise?