When AI Pilots Show Promise, But Outcomes Stay Flat

By Mickey Baines, SVP of Market Development

March 31, 2026

As institutions begin exploring new approaches to better interpret and act on student signals, AI is often one of the first places they turn. 

In many cases, that exploration starts with pilots. 

Teams begin applying AI to specific parts of their work—drafting and refining email and text campaigns, testing subject lines, responding to inbound student questions more quickly, or summarizing student records to prepare for outreach. Advisors and student support teams use it to triage common questions, identify next steps for students who have stalled, or capture and organize notes from prior interactions. Leadership teams use it to analyze large data sets, surface trends, and move more quickly from information to decision. 

These efforts are practical, targeted, and often effective. They create immediate gains in speed, consistency, and in some cases, quality. For many institutions, they provide the first clear indication that AI can have a meaningful impact on day-to-day operations. 

But when leaders step back and look at outcomes—enrollment trends, yield, student follow-through, or operational efficiency—those results are not changing in the same way. 

Activity is increasing, tools are working, and pilots are showing promise.

Yet the broader impact remains difficult to point to with confidence. 

This is where many institutions begin to feel a familiar tension. The expectation is that successful pilots should lead to measurable improvement. If individual use cases are working, expanding those efforts should naturally produce broader results. But in practice, that progression is not automatic because most pilots are designed to improve tasks, not to change how decisions are made or how work flows across teams. 

The result? Pilots remain isolated.

An admissions team may improve the speed and quality of its communications. An advising team may respond to students more efficiently. A leadership team may gain faster access to insights. Each of these changes is valuable on its own, but without a shared structure connecting them, they do not compound. 

Instead, they accumulate, and accumulation, on its own, does not create momentum. 

When institutions begin to recognize this gap, the instinct is often to expand. More use cases. More pilots. More opportunities to apply AI across different parts of the organization. In many cases, this expansion is tied to the belief that additional tools are required to unlock additional value. 

This introduces a new constraint. 

As the number of potential use cases grows, so does the perceived need for additional platforms, licenses, and integrations. Budget becomes a limiting factor, decisions slow down, and progress becomes tied to technology selection rather than to clarity of purpose. 

At that point, momentum stalls—not because there is no opportunity to move forward, but because the path forward becomes gated by decisions that are not directly connected to outcomes. 

Teams continue to experiment within the boundaries of what they already have. New ideas are considered, but not always pursued. Activity remains high, but the broader transformation that leaders expected begins to feel increasingly out of reach. 

What’s missing is not effort, and it is not potential. 

What’s missing is a way to connect individual improvements to shared outcomes—to move from isolated gains to coordinated progress. 

The institutions that are beginning to see meaningful impact with AI are not necessarily the ones running the most pilots. They are the ones that have clarified what they are trying to change—how decisions are made, where work slows down, and where students experience friction—and are applying AI in ways that align to those priorities. 

In those environments, pilots do not exist in isolation. They build on one another. They reinforce shared direction. And over time, they begin to change how work actually happens across teams, not just within them. 

That is the difference between activity and momentum. 

And it is why some institutions can run a handful of pilots and begin to see measurable progress, while others can run dozens and still struggle to point to meaningful change. 

This is the point where institutions begin to shift from experimentation to structure—moving beyond isolated pilots toward a more intentional approach designed to turn early gains into sustained momentum. 

To see how we help institutions align priorities and better leverage AI within their operations, click here.