From the Becker's Exhibit Hall: AI Has Arrived. Now What?
April 23, 2026

Walking the exhibit hall at Becker's 16th Annual Meeting in Chicago, it didn't take long to notice the pattern. Booth after booth, session after session, the same word appeared on every banner, every slide deck, every product one-pager: AI.
That's not new. Healthcare conferences have been talking about artificial intelligence for years. What was different this April was the texture of the conversation. The hype hasn't disappeared, and there's still no shortage of vendors promising radical transformation, but something has shifted.

The most interesting sessions at Becker's weren't the triumphant ones. They were the honest ones: "Ambient AI Real Talk: Adoption, Resistance and ROI." "Forecasting AI's Role Amid Healthcare Transformation." "Strategic ROI with AI-Powered Workflow Redesign: A Real-World Use Case."

Health system leaders aren't asking whether AI matters anymore. They're asking harder questions: What happened after the pilot? Where is the ROI actually coming from? Why is my team still resistant six months in?
Those are the right questions. And as a company building AI-powered chart abstraction for academic medical centers, registries, and clinical trials, they're also the questions we think about every day.
AI Is Real, and Health Systems Know It
Let's start with what's undeniable. AI has arrived in healthcare, and the numbers are significant. Seventy-five percent of U.S. health systems are now using at least one AI application, up from 59% in 2025, according to a 2026 survey from Eliciting Insights. Clinical note-taking and ambient documentation have led the charge, but adoption is spreading fast into revenue cycle, care coordination, diagnostics, and beyond.
The Becker's AI Summit made this tangible. CommonSpirit Health presented a portfolio of 230 AI tools with a reported $100M in impact. Cleveland Clinic's Chief Digital Officer was on stage. The message from health system leaders was consistent: this technology has real potential to address some of healthcare's most stubborn problems, among them clinician burnout, administrative overload, and the cost of managing data at scale.
At Brim, we see this reflected in our own customers. The academic medical centers, registries, and research teams we work with are no longer asking "should we be thinking about AI?" They're in it. They've started projects, run pilots, seen results. What they want to know now is how to expand confidently, how to validate what the AI is actually doing, and how to build processes that hold up over time.
But "AI" Is Not a Solution; It's a Capability
Here's where the Becker's conversation gets interesting, and where we think a lot of the health system energy is well-placed.
AI is a technology. An extraordinarily capable one, but a technology nonetheless. It doesn't arrive pre-loaded with a use case. It doesn't know your registry. It doesn't know the nuances of your surgical outcomes data or the edge cases in your oncology notes. The value of AI in any given context is entirely dependent on how it's deployed: what problem it's solving, how it's been trained and validated for that problem, and whether the humans working with it can understand and trust what it's doing.
The challenge for health system executives is separating genuine AI value from the excitement around innovation and marketing hype, and carefully vetting each solution against its cost, time-to-value, and true ROI. That framing captures something important. The vendors at Becker's with "AI" on their banners were selling radically different things. AI for prior authorization, for ambient documentation, for patient scheduling, for supply chain, for chart abstraction. The underlying models may overlap, but the solutions are completely distinct. What makes each one succeed or fail is everything that surrounds the model: the workflow design, the validation methodology, and the human review that catches what the AI gets wrong.
One overarching theme from early AI implementation in healthcare is that problems arise when organizations lose sight of the need to start small and build trust. We'd add to that: problems also arise when organizations conflate the capability (AI) with the solution (the specific application, with all its surrounding infrastructure). Buying an AI product isn't a strategy, but solving a specific problem with AI, carefully and verifiably, is.
This is a distinction we've seen play out in our own domain. As we wrote in Beyond DIY AI, many teams can get a compelling AI chart abstraction demo out of a large language model with a few days of work. Scaling that into something production-ready, with consistent accuracy, auditable outputs, and processes that hold up under institutional scrutiny, is a fundamentally different challenge.
Why Brim Starts with the Problem, Not the Technology
Brim exists because chart abstraction is a real, painful, expensive problem that healthcare has been living with for decades. We believe that chart abstraction is also a problem that AI is genuinely well-suited to help solve.
Chart abstraction is the process of reading through unstructured clinical notes and extracting specific, structured data points: a diagnosis, a complication, a surgical outcome, a patient characteristic. It powers clinical registries, retrospective research, clinical trial screening, and quality improvement programs. It is also, in its traditional form, extraordinarily slow. One retrospective study estimated it would take 8–25 full-time staff to capture just one type of cancer recurrence at a single hospital. Multiply that across the number of variables most registries track, and across the number of institutions contributing data, and you start to understand why so much valuable clinical data never makes it into the analyses that could improve care.
When we built Brim, we didn't start with a model. We started with this problem. AI, and specifically, large language models, turns out to be well-matched to it, because chart abstraction is fundamentally a language task: read this note, apply this clinical definition, determine whether this criterion is met. LLMs can do that with impressive accuracy, and they can do it at a scale that no human team could match.
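To make the "language task" framing concrete, here is a minimal, purely illustrative sketch. The variable name, definition, and note text are hypothetical examples, not real Brim inputs or its actual prompting approach; the point is simply that abstraction pairs a precise clinical definition with a note and asks for a supported answer.

```python
# Illustrative sketch only: chart abstraction framed as a language task.
# All names and text below are hypothetical, not Brim's actual system.

def build_abstraction_prompt(variable_name: str, definition: str, note_text: str) -> str:
    """Pair a clinical variable definition with a note so a language model
    can read the note, apply the definition, and decide if the criterion is met."""
    return (
        f"You are abstracting the variable '{variable_name}' from a clinical note.\n"
        f"Definition: {definition}\n"
        f"Note:\n{note_text}\n"
        "Answer 'yes' or 'no', and quote the exact sentence that supports your answer."
    )

prompt = build_abstraction_prompt(
    "surgical_site_infection",
    "Any documented infection at the incision site within 30 days of surgery.",
    "POD 12: erythema and purulent drainage at the incision; wound culture sent.",
)
print(prompt)
```

Requiring a quoted supporting sentence is what makes the output reviewable rather than a bare verdict, which is the thread picked up in the next section.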
But the AI is not the solution. The solution is a system that makes AI output trustworthy enough to use in consequential clinical and research contexts. That means:
Structured variable definitions. The precision of an abstraction result is only as good as the precision of the question being asked. Brim's Abstraction Metadata Model (BAMM) provides a structured, machine-readable framework for defining exactly what the AI should extract and how, ensuring that the instructions are consistent, inspectable, and improvable over time.
Rigorous validation. AI that performs well on a pilot dataset may drift when document types change, when note styles shift, or when new edge cases emerge. We've written extensively about how to validate AI abstraction and how to keep it accurate as processes scale, because we know accuracy isn't a one-time certification; it's an ongoing discipline.
Human-in-the-loop design. This is perhaps the most important one, and it connects directly to the conversation happening at Becker's about trust and adoption. As we discussed in The Black Box Problem, healthcare AI that operates as a black box will always face resistance, and rightly so. When a clinical researcher can see exactly what text the AI used to reach a conclusion, they can evaluate it, correct it, and if appropriate, learn to trust it. That explainability is what separates AI as a research tool from AI as a liability.
The field is converging on a clear principle: AI output is only as valuable as the validation infrastructure around it.
The Questions That Matter
What struck us most at Becker's wasn't the volume of AI interest. It was the maturity of the skepticism. Health system leaders aren't rejecting AI. They're applying appropriate rigor to it, asking the same questions they'd ask of any significant clinical or operational investment: Does this actually work? How do I know? What happens when it doesn't?
That's exactly the right instinct. And it maps closely to what we've learned building Brim. The institutions we work with, from Vanderbilt to CHOP to UCSF (where Brim recently launched as an institution-wide pilot), aren't using Brim because it uses AI. They're using it because it solves a specific problem reliably, at scale, with outputs their teams can stand behind in publications and registry submissions.
If you're a health system leader thinking about where AI fits into your clinical data strategy, we'd suggest starting not with the technology but with the problem. What data is your institution generating that isn't being captured? What processes are consuming clinical staff time on tasks that AI could handle? What would it take for your team to actually trust the output?
Those questions have answers. And the answers look less like selecting an AI vendor and more like building a disciplined, validated, human-centered workflow that happens to be powered by AI. The technology is the easy part. The hard and valuable part is everything around it.
That's what we're building at Brim. We'd love to show you how it works.