The Innovation Gap: Why Health Tech Policy Must Catch Up to AI's Real-World Risk
An Op-Ed
November 8, 2025
Author: Nadine Wilches, LCSW, CIMHP
The promise of health innovation, particularly with the rise of artificial intelligence (AI) and advanced data analytics, is personally and professionally exciting. Yet alongside these developments, we see "innovation" that fails to address tangible healthcare problems and may instead worsen patient outcomes and add stress for providers.
My experience across technology, policy, and care delivery makes these gaps evident, especially in the mental and behavioral health space where I have specific expertise. There are genuinely incredible tools being developed by clinicians to close major healthcare gaps and prevent diseases and disorders.
I am particularly optimistic about the shift toward prevention through predictive analytics and Population Health Informatics. Considering the complex interplay of biological, environmental, behavioral, and social susceptibility factors offers hope for a truly transformative future in healthcare.
I am also encouraged by students who discuss innovation, advancements, breakthroughs, failures, and the challenges that still need solving, demonstrating a drive for opportunity and knowledge.
The State of AI: Real Impact, Real Harm
AI is undeniably moving beyond the experimental phase and is beginning to make a real impact in the US, delivering better insights that increase safety at both the preventive and intervention levels. However, we are far from reaching a "do no harm" standard. The potential for real harm is evident in cases of suicidality and completed suicides linked to large language models (LLMs), and in the damaging use of AI in social media.
Currently, professional tools face greater organizational scrutiny and are outpacing less-regulated direct-to-consumer tools. Organizations, motivated by return on investment (ROI) and their own risk management, are more cautious. Direct-to-consumer tools, by contrast, have far less oversight. Consumer awareness of the dangers varies greatly, making these tools particularly risky for vulnerable groups such as children and the elderly. Moreover, AI is moving so fast that even the most educated users cannot fully protect themselves.
The sweet spot for regulation lies in policies that actively prevent harm, the threat of harm, discrimination, and fear, while simultaneously delivering a clear, positive impact on health and well-being.
The Policy Lag: Fragmentation and Misalignment
While health technology is seeing exciting speed in development, policy and regulation are struggling to keep pace, leading to friction in the system.
The biggest disconnect I see is fragmented compliance. Implementation of standards is often scattered and inconsistent, even within single sub-sectors like primary care. Fitting new technologies into old frameworks is challenging, and integrating them into newly created compliance frameworks is an equal struggle. This confusing, slow process unnecessarily impedes adoption, even when the ultimate goal of elevating product standards is shared.
Furthermore, misaligned incentives plague the system. Payors often focus on reimbursement within a short, one-year window, which clashes with the long-term view that clinical practices require to prove real patient outcomes and ROI.
While discussions about data bias are happening, in practice the issue is not examined deeply enough to truly address equity. Managing bias in data and algorithms deserves more attention and structure, because doing so offers an opportunity to close many of the healthcare gaps that stem from limitations in research and in how research is applied in healthcare settings.
Prioritizing 'Do No Harm' and Precision Equity
If policy is indeed lagging, the primary focus must be on ensuring that ‘do no harm’ is paramount. This must extend beyond clinical tools to products marketed as "non-clinical" or "wellness," especially those processing health data or engaging in health topics. A simple disclaimer is not sufficient to prevent harm when no clinical licensure or oversight is required. We cannot allow scope creep where unregulated tools effectively function as health interventions.
To drive equitable care, policymakers and health systems must leverage AI to move beyond mere speed and toward Precision for Equity. This involves using data to disaggregate health predictors and drivers by demographics, social determinants, and environmental factors, recognizing that different people have different needs.
AI can then aggregate this knowledge to examine population trends, identify high-risk groups, and prioritize the focused resources needed for widespread adoption of preventive and individualized care. Where this approach has been pursued with the right intentions, it has succeeded.
The Toughest Barriers to Scale
The promise of health tech is often killed at the scaling stage by several persistent barriers:
Fragmented IT Systems: Outdated, complex, and siloed electronic health record (EHR) systems actively disrupt workflows and present enormous customization challenges. FHIR (Fast Healthcare Interoperability Resources) solves only a small portion of the real-time data problem through APIs. We need to break out of closed-loop systems and adopt a truly interoperable process, similar to the banking industry's open framework.
Proving ROI: While the science and benefit predictions may be sound, demonstrating long-term ROI takes significant time and resources, leading to painfully long sales cycles in a system that is fundamentally change-averse.
Organizational Readiness: Innovation fails without multi-level buy-in. Securing the commitment of the right person with the leverage to drive adoption while simultaneously building relationships and reinforcing change support across a large system is a major challenge.
To improve equitable access for underserved populations, the most critical policy shift is addressing financial barriers for primary care, community healthcare non-profits, and higher-education institutions serving disadvantaged groups. Healthtech innovation is currently tied too closely to reimbursement, meaning that access inherently depends on health insurance, which excludes the very populations we need to reach. We must embed Equity-by-Design into the policies that govern technology access and literacy.
A Shift to a Lifecycle Approach
Looking ahead, I expect a necessary shift from fixed to lifecycle oversight in law and policy. Products are fluid, evolving entities. Regulation must allow for safe evolution within defined boundaries to avoid over-restriction, while still governing for safety in AI and for equity.
We need to replace fixed compliance models with a focus on ethics. Future healthcare leaders, lawyers, and policymakers must be trained in the technical mechanisms of AI/ML and APIs, and must develop computational ethics standards. Innovators must shift their focus from proving novelty to proving clinical, pre-clinical, or preventive value.
Importantly, we must eliminate companies' ability to hide behind non-industry labels like "wellness" or "chatbot" while processing health data and engaging in health topics without proper safety regulation. The path to meaningful innovation is not found in chaotically fast, unregulated development, but in a measured, ethical, and equitable process that guarantees a better future of health for everyone. Let's build that future together.
For Press or Comments: contact@mindarchhealth.com
