Building Trust in Healthcare AI Starts with Governance, Not Technology
By Chris Redhage, Co-Founder, and Donna Thiel, Chief Compliance Officer | ProviderTrust
When we founded ProviderTrust, our mission was simple: create safer healthcare for everyone.
For more than a decade, we’ve pursued that mission by building the most accurate and reliable dataset in healthcare — powering exclusion monitoring, credentialing, and primary source verification for the nation’s leading health systems and payers.
Today, that mission hasn’t changed. But how we fulfill it is evolving, and we’re committed to maintaining the highest standards of data integrity as our technology advances.
Artificial intelligence is rapidly becoming part of healthcare operations. And with it comes a new, unavoidable question:
Can you trust the AI being used across your organization and your vendors’ organizations, too?
The Problem: AI Is Moving Faster Than Governance
Across healthcare organizations, new expectations are emerging almost overnight:
- Compliance Officers are being asked if AI is auditable and defensible
- Credentialing teams are being asked how automated verification can be trusted
- HR teams are being asked how AI handles ambiguous screening results
- Data and Security Risk teams are being asked for model documentation and data lineage
- Procurement teams are being asked to assess AI risk in vendor contracts
In many cases, the honest answer today is: they don’t have enough visibility to know.
That’s the gap. Not innovation, but governance.
Our Approach: Start with the NIST AI Risk Management Framework
At ProviderTrust, we believe AI should not be introduced until it can be governed.
That’s why we built our AI Trust & Integrity Program grounded in the NIST AI Risk Management Framework (AI RMF), the leading standard for managing AI risk.
NIST doesn’t prescribe technology. It defines how organizations build trust into AI systems from the start through four core functions:
- Govern - Establish Accountability
We are implementing formal oversight structures, policies, and clear ownership for every AI system. This includes the creation of our AI Integrity Council, chaired by our Chief Compliance Officer, to govern all material AI decisions.
- Map - Understand Risk in Context
Not all AI carries the same risk. We classify AI use cases based on their potential impact, especially in high-stakes areas like credentialing, exclusion monitoring, and compliance workflows. This ensures the right level of control is applied before any AI is introduced (a simple illustration follows this list).
- Measure - Validate Before Deployment
AI should not be trusted because it works once — it must be proven to work consistently. We validate models for accuracy, bias and fairness, and reliability. And we document those results in ways our clients can actually use.
- Manage - Monitor and Improve Continuously
AI risk doesn’t end at deployment. We continuously monitor model performance and drift, emerging risks, and regulatory changes, and we adapt accordingly.
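To make the Map and Measure functions concrete, here is a minimal sketch, in Python, of how an AI use case might be classified by impact tier and gated on validation results before deployment. The tier names, metrics, and thresholds here are illustrative assumptions for this post, not our production logic:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"  # e.g., credentialing, exclusion monitoring, compliance workflows

@dataclass
class ValidationReport:
    accuracy: float          # measured on a held-out evaluation set
    max_subgroup_gap: float  # largest accuracy gap across demographic subgroups

# Hypothetical gates for illustration; real thresholds would be set
# and documented by a governance body such as an AI Integrity Council.
GATES = {
    RiskTier.HIGH:     {"accuracy": 0.99, "max_subgroup_gap": 0.01},
    RiskTier.MODERATE: {"accuracy": 0.95, "max_subgroup_gap": 0.03},
    RiskTier.LOW:      {"accuracy": 0.90, "max_subgroup_gap": 0.05},
}

def approved_for_deployment(tier: RiskTier, report: ValidationReport) -> bool:
    """Measure: a model ships only if it clears the gates for its Map tier."""
    gate = GATES[tier]
    return (report.accuracy >= gate["accuracy"]
            and report.max_subgroup_gap <= gate["max_subgroup_gap"])

# A high-stakes use case must clear the strictest bar.
print(approved_for_deployment(RiskTier.HIGH, ValidationReport(0.995, 0.008)))  # True
print(approved_for_deployment(RiskTier.HIGH, ValidationReport(0.97, 0.02)))    # False
```

The point of the sketch is the structure, not the numbers: classification happens first, and the bar a model must clear is a function of that classification rather than a single organization-wide default.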
Governance in Practice: The AI Integrity Council
Frameworks are only effective if they are operationalized.
That’s why we established the ProviderTrust AI Integrity Council, the first formal AI governance body in the healthcare provider, employee, and vendor data space.
The Council will:
- Provide guidance for material AI decisions
- Include cross-functional leaders across compliance, product, data, and engineering
- Incorporate founding client members from leading healthcare organizations
- Publish an annual AI Integrity Report
This is not theoretical governance. It is active, accountable, and transparent.
A Critical Principle: Trust the Data Before You Trust the AI
There is a fundamental issue in many AI conversations: organizations focus on the model, but overlook the data.
At ProviderTrust, we take the opposite approach: AI is only as trustworthy as the data behind it.
Our data foundation is built on:
- Cross-referencing all of the industry’s disparate primary sources
- Using unique identifiers for matching
- Requiring 100% verification before automation
- Routing any ambiguity to human review
This approach will continue as we introduce AI. Because preventing risk in healthcare AI doesn’t start at the model. It starts at the data layer.
No AI technology works, or should be trusted, without verified, accurate data as its foundation.
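As one illustration of that principle, here is a minimal sketch of identifier-based matching with a human-review path for anything ambiguous. The record fields and return values are hypothetical, assumed for this example rather than drawn from our actual matching logic:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderRecord:
    name: str
    npi: Optional[str]  # unique identifier (National Provider Identifier)

def match(candidate: ProviderRecord, source: ProviderRecord) -> str:
    """Return 'verified', 'no_match', or 'human_review'.

    Automation proceeds only on an exact unique-identifier match;
    anything ambiguous is routed to a person.
    """
    if candidate.npi and source.npi:
        return "verified" if candidate.npi == source.npi else "no_match"
    # Name-only comparisons are inherently ambiguous (typos, common names,
    # name changes), so they never auto-resolve.
    return "human_review"

print(match(ProviderRecord("Jane Doe", "1234567890"),
            ProviderRecord("J. Doe", "1234567890")))    # verified
print(match(ProviderRecord("Jane Doe", None),
            ProviderRecord("Jane Doe", "1234567890")))  # human_review
```

The design choice is the same one described above: the automated path is reserved for cases the data can fully verify, and everything else defaults to human judgment.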
Why This Matters Now
Healthcare is entering a new phase of accountability for AI. Regulators, accrediting bodies, and boards are no longer asking if AI is being used — they are asking:
- Is it governed?
- Is it auditable?
- Is it fair?
- Is it safe?
Our AI Trust & Integrity Program is designed to help our clients confidently answer “yes” to all of those questions. It is aligned to:
- The NIST AI Risk Management Framework
- Emerging NCQA AI standards
- The Colorado AI Act and evolving federal guidance
What This Means for Our Clients
This program isn’t just about ProviderTrust. It’s about giving every stakeholder in your organization a clear, defensible answer about AI usage as it relates to their responsibilities:
- Compliance: Documented governance, audit trails, and bias testing
- Credentialing: Verified data with human oversight preserved
- HR: Safe handling of ambiguous screening outcomes
- Data and Security Risk: Model documentation, data lineage, and validation evidence
What Comes Next
This program is just the beginning. As we move forward, you will see:
- The expansion of our AI Integrity Council
- Publication of AI System Cards for transparency
- Ongoing engagement with regulatory and industry bodies
- An annual AI Integrity Report
Trust Is Not a Feature
AI will continue to evolve. Regulations will continue to evolve. Expectations will continue to evolve.
But one thing won’t change:
Trust is not a feature. It is a foundation. And at ProviderTrust, it always has been.
Chris Redhage
Co-Founder, ProviderTrust
Chris co-founded ProviderTrust in 2010. He is also Co-Founder and minority owner of Nashville SC (MLS), Founder of FairWhistle, and a former professional soccer player.
Donna Thiel
Chief Compliance Officer, ProviderTrust
Donna leads ProviderTrust’s Compliance Program and Client Advocacy. She has more than 30 years of healthcare compliance experience and has worked with compliance professionals across the industry throughout her career. Since joining ProviderTrust nearly 10 years ago, she has worked with executives across our entire client base.