Is this AI project worth building? A framework for founders
The question is not "can AI do this?" It can do almost anything in a demo. The question is whether the ROI clears the real cost—integration time, maintenance, and the three months before it stops surprising you. Here is the two-by-two.
Someone sends me a message every week that starts with some variation of: "I have an idea for an AI agent that could automate [X]. What do you think?"
What they are usually asking is: "Is this technically feasible?" And the answer is almost always yes. You can build an AI agent to do almost anything in a demo. The more useful question—the one that determines whether you should actually build it—is whether the thing you are about to build will deliver more value than it costs, in the real world, six months from now.
That question splits along two axes into a two-by-two grid with four cells. I call it the Build-or-Skip Grid.
The two axes
The first axis is ROI clarity: do you have a concrete, measurable reason to believe this automation will deliver value, and do you know how to measure it? Not a general belief that "AI will make this faster"—a specific number. "This workflow currently takes three hours per week. If I automate it, those three hours get redeployed to the thing that actually moves revenue." Or: "We lose an estimated fifteen leads per month to slow follow-up. If this brings follow-up time from forty-eight hours to fifteen minutes, we should recover eight to ten of those leads at an average value of $400 each."
ROI clarity is not certainty. It is the presence of a hypothesis you can test. Low ROI clarity means you are building on vibes. High ROI clarity means you have a specific number you expect to move and a way to know whether it moved.
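The lead-recovery example above is just arithmetic, and it is worth doing explicitly before you build. A back-of-envelope sketch, using the numbers from that example plus an assumed monthly maintenance cost (the $500 figure is illustrative, not part of the framework):

```python
def monthly_roi(leads_recovered: float, value_per_lead: float,
                monthly_maintenance_cost: float) -> float:
    """Expected monthly value minus expected monthly running cost."""
    return leads_recovered * value_per_lead - monthly_maintenance_cost

# From the example: recover 8-10 of the lost leads at $400 each,
# against an assumed $500/month to keep the automation running.
conservative = monthly_roi(8, 400, 500)   # 2700.0
optimistic = monthly_roi(10, 400, 500)    # 3500.0
```

The point is not the precision of the output; it is that you wrote down inputs you can check against reality after thirty days.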
The second axis is maintenance cost: how hard is this thing to keep running after it is deployed? Maintenance cost is a function of three things—the fragility of the integrations it depends on, the frequency with which the underlying data or logic changes, and how many people in the organization can debug it when it breaks.
A workflow that reads from one stable Google Sheet and writes to one stable database has low maintenance cost. A workflow that talks to eight external APIs, uses an LLM for a decision that changes based on business logic updated monthly, and requires a developer to debug when it breaks has high maintenance cost. Both might be worth building. But "worth building" means something different depending on where they sit on this axis.
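The three factors above can be turned into a rough scoring heuristic. The weights and threshold here are illustrative assumptions I am adding for the sketch, not numbers from the framework; the shape of the function is what matters—integrations and logic churn push cost up, debuggers pull it down:

```python
def maintenance_cost(num_integrations: int,
                     logic_changes_per_year: int,
                     people_who_can_debug: int) -> str:
    """Rough maintenance-cost bucket from the three factors in the text.
    Weights and the threshold of 5 are illustrative assumptions."""
    score = num_integrations + logic_changes_per_year / 2 - people_who_can_debug
    return "high" if score > 5 else "low"

# One sheet + one database, yearly logic tweaks, three people can debug:
maintenance_cost(2, 1, 3)    # "low"
# Eight external APIs, monthly logic changes, one developer can debug:
maintenance_cost(8, 12, 1)   # "high"
```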
The four cells
High ROI clarity, low maintenance cost: build it. This is the clearest yes. You know what value it will deliver, you can measure it, and it will not eat your time after it ships. This is also the rarest cell—most ideas that are high-value are also high-complexity, because if they were easy and obvious, someone would have already built them.
High ROI clarity, high maintenance cost: build it with caution. Structure the first version to be simpler than your ideal version—fewer integrations, narrower scope, more conservative LLM use—because you need to prove the ROI hypothesis before you commit to the maintenance burden. A $3,000 simple version that proves the hypothesis is worth more than a $15,000 complex version that disproves it.
Low ROI clarity, low maintenance cost: experiment. Build a minimum viable version in a day or two and run it for thirty days. If it turns out to be useful, you will know. If it does not, you have lost two days, not three months. The condition for this cell is that the maintenance cost is genuinely low—if running the experiment still requires a developer to maintain it, it does not belong here.
Low ROI clarity, high maintenance cost: do not build it. This is the most common place where AI projects go wrong. The founder is excited. The demo is impressive. The integration is complex. The ROI case is vague. Twelve weeks later, the system is technically working and nobody is using it, and the team is spending four hours per week maintaining something they cannot explain the value of.
I am aware that "do not build it" is not a popular answer. It is the correct one.
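The four cells above reduce to a small lookup, which is a useful way to check your own assessment honestly—if you find yourself wanting to override the verdict, the input you are tempted to fudge is usually the ROI-clarity axis:

```python
def build_or_skip(roi_clarity: str, maintenance_cost: str) -> str:
    """Map a (roi_clarity, maintenance_cost) pair, each 'high' or 'low',
    to the verdict from the Build-or-Skip Grid."""
    grid = {
        ("high", "low"):  "build it",
        ("high", "high"): "build it with caution",
        ("low", "low"):   "experiment",
        ("low", "high"):  "do not build it",
    }
    return grid[(roi_clarity, maintenance_cost)]

build_or_skip("low", "high")  # "do not build it"
```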
The question to ask before you start
Before any AI project I take on, I ask the client one question: "What will you stop doing once this works?" Not "what will be better"—that is too vague. What specifically will change about how someone on your team spends their time, and is that time currently going to something valuable enough to justify the project cost?
If the answer is crisp—"My ops lead will stop spending three hours a day processing vendor invoices and spend that time on the partnership work that is currently not happening"—I am confident the project has a real ROI case.
If the answer is vague—"Things will be more efficient," "We will have better data," "The team will feel more supported"—I ask the question again, more specifically. Sometimes the second answer is crisp, and we are fine. Sometimes the second answer is still vague, and that is a signal that the ROI case is not actually there, and building the thing will produce a technically impressive result with no business impact.
The moral of the story is not "AI is overhyped" or "don't build things." The moral is that the value of an AI project is not in the automation; it is in what the automation makes possible. If you cannot clearly articulate what becomes possible, you have a demo, not a project.