Better AI Output Starts Before the Question Is Asked

By Mickey Baines, SVP of Market Development

April 10, 2026

Over the past few weeks, I’ve sat with a handful of senior leaders who told me they were using AI regularly. They weren’t wrong. They were in ChatGPT, Copilot, and other tools, asking questions, generating content, and exploring ideas as part of their day-to-day work. 

Then we pulled up a few of those conversations together. 

What showed up wasn’t a lack of effort or curiosity. It was how they were starting. Prompts were broad, often missing any real context about their institution. The goal was to get to an answer quickly, which meant the setup was rushed or skipped entirely. In a few cases, they were trying to push toward strategy-level outputs without giving the AI anything specific to work with. 

The output looked fine on the surface. It read well. It sounded thoughtful. It just didn’t hold up when you tried to use it. 

That gap is showing up everywhere right now.

 

The Issue Isn’t Whether People Are Using AI 

Across higher ed, we’ve clearly moved past the question of whether AI matters. Leaders are experimenting, teams are logging in, and work is being produced. From the outside, it looks like adoption is happening at a meaningful level. 

What’s less visible is the inconsistency in what people are getting back. Two people can ask similar questions and walk away with completely different levels of value. In many cases, the difference has nothing to do with the tool. It comes down to how the interaction was structured from the start. 

Most people are still approaching AI like a faster version of search. They ask a question, review the response, and adjust the prompt to try again. That approach holds up for simple tasks, but it starts to break down when the work requires specificity, context, or real decision support. 

That’s where prompting shifts from being a convenience to something that actually shapes outcomes. 

 

Where CRIT Comes In 

The framework we’ve been using in our AI Momentum work comes from Geoff Woods. It’s called CRIT, and it gives structure to how you engage with AI in a way that’s easy to apply and, once you’ve seen the difference it makes, difficult to skip.

At a high level, it forces you to slow down and build the interaction before you ask for anything in return. That shift alone changes the quality of what comes back. 

C – Context: Give the AI Something Real to Work With 

The most common issue we see is a lack of context. Leaders assume the AI understands their situation because they are asking institution-level questions, but the model has no awareness of their enrollment pressures, financial constraints, program mix, or internal priorities. 

When context is missing, the AI fills in the gaps with general patterns. The response often sounds polished, but it stays at a level that could apply to almost any institution. 

When you take the time to provide real inputs—your data, your constraints, the situation you are actually navigating—the response starts to align with your environment. It becomes something you can work with instead of something you have to reinterpret. 

R – Role: Introduce Perspective Into the Work 

Once context is established, assigning a role changes how the AI processes the request. Instead of responding in a generic voice, it begins to operate from a defined point of view. 

In higher ed, this becomes particularly useful when you align the role to how decisions are actually made. You can ask it to think like a VP of Enrollment balancing net tuition revenue, or a CFO evaluating financial sustainability, or even a small advisory group that reflects multiple perspectives at once. 

What changes here is not just tone. It’s how the response is constructed and what it prioritizes. 

I – Interview: Surface What You Haven’t Said Yet 

This is the step that most people skip, and it’s the one that consistently improves outcomes the fastest. 

Instead of moving directly to the task, you instruct the AI to ask you questions first. Keeping it to one question at a time forces a more deliberate exchange and prevents both sides from rushing through the setup. 

What tends to happen in this step is that gaps become visible. Information that felt obvious in your head starts to come out more clearly. Assumptions get challenged. In some cases, the questions themselves shift how you were thinking about the problem before you ever reach the output. 

In our work with clients, this is often the point where the quality of the interaction changes. The AI has more to work with, and the user has clarified their own thinking in the process. 

T – Task: Define What You Actually Need 

By the time you reach the task, the AI has enough context and direction to produce something useful. Instead of a broad request, you are giving it a clear assignment with expectations around format, tone, and constraints. 

At this stage, the interaction feels less like trial and error and more like execution. The output reflects the work you put into setting it up, which is exactly where the value comes from. 

 

What This Looks Like in Practice 

Take a common example. A leader wants to understand how to improve yield. A typical prompt might ask for strategies based on current trends, and the response will usually include a set of reasonable ideas. The challenge is that those ideas are not grounded in the institution’s actual situation. 

When the same scenario is approached using CRIT, the setup changes. Context includes recent applicant behavior, discount rate pressure, and program-level variation. The role reflects someone responsible for enrollment performance in a similar environment. The interview step surfaces additional details about constraints and priorities. Only then is the task defined. 
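For readers who work in code, the same setup can be sketched as a small prompt-assembly helper. The field wording and the example details below are illustrative assumptions for demonstration, not an official template from the CRIT framework:

```python
# Sketch of assembling a CRIT-structured prompt as plain text.
# The section wording and sample inputs are illustrative assumptions,
# not part of Geoff Woods's framework itself.

def build_crit_prompt(context: str, role: str, task: str) -> str:
    """Combine the four CRIT elements into one prompt string."""
    return "\n\n".join([
        # C - Context: real inputs about the institution
        f"Context:\n{context}",
        # R - Role: the perspective the AI should adopt
        f"Role:\nAct as {role}.",
        # I - Interview: have the AI ask questions before answering,
        # one at a time, so gaps in the setup surface early
        "Interview:\nBefore you begin, interview me to fill any gaps. "
        "Ask one question at a time and wait for my answer.",
        # T - Task: the actual assignment, with format expectations
        f"Task:\n{task}",
    ])

# Hypothetical example inputs, loosely matching the yield scenario above
prompt = build_crit_prompt(
    context=("Mid-sized private university; yield has declined three "
             "cycles running; discount rate pressure; program-level "
             "variation, with strongest demand in nursing and business."),
    role="a VP of Enrollment balancing net tuition revenue and headcount",
    task=("Propose three yield-improvement moves for this cycle, as a "
          "short memo noting the tradeoffs of each."),
)
print(prompt)
```

The point of the structure is the ordering: the task comes last, after context, role, and the standing instruction to interview first, so the model has something real to work with before it produces anything.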

The difference shows up immediately. The response is more specific, the recommendations are more relevant, and the conversation moves closer to something that can be acted on.

 

Where This Changes Outcomes 

Two patterns show up consistently when teams begin using this approach more intentionally. 

The first is around context. Without it, even strong prompts produce surface-level outputs. With it, the same tools begin to reflect the institution’s reality in a way that supports actual decision-making. 

The second is around blind spots. The interview step brings forward questions that were not initially considered. In many cases, those questions reshape the direction of the work before the final response is even generated. 

These are small shifts in behavior, but they compound quickly when they are applied consistently.

 

Why This Matters Now 

In a recent conversation on how institutions are using AI in practice, one theme kept coming up. The difference in outcomes was not tied to the tool being used. It was tied to how people were engaging with it in their day-to-day work.  

That’s especially relevant for leaders. When leaders change how they interact with AI, it influences how decisions are explored, how quickly teams move, and how consistently work is executed across the organization. 

 

Where to Start 

The next time you open one of these tools, resist the urge to jump straight to the task. Take a minute to build the interaction first. Provide context that reflects your environment, assign a role that matches the perspective you need, and let the AI ask questions before you move forward. 

It does not take much longer, but it changes what you get back in a way that is immediately noticeable. 

And once you see that difference, it becomes very difficult to go back to the old way of working.