What I actually do on a consulting call
A transparent walkthrough of the first sixty minutes: what I ask, what I find, and why most calls end with the same three fixes.
I sell consulting calls. You can book one in three clicks from my homepage. People keep asking me, before they book, what actually happens on the call. This post is the honest answer.
I am writing it partly so the people who would not get value from a call can self-select out, and partly because I think the process is interesting on its own. If you are an AI consultant, or thinking of becoming one, you may find it useful. If you are a founder thinking of hiring someone like me, you will know exactly what you are buying.
The first ten minutes are not about your code
The first ten minutes are about your problem. Specifically, which of four categories you are in, because each one gets a completely different first hour and I do not have a one-size-fits-all script. (If a consultant tells you they do, run.)
Category one: "I have a working thing, and I need it to work better." More reliable, cheaper, faster, less maintenance. You know what it does, you can demo it, something is off, and you cannot quite name what.
Category two: "I have a half-built thing and I am stuck." You know what you want it to do, you have tried two or three approaches, none of them are working, and you are not sure if the problem is your architecture, your prompt, your tool choice, or your plan.
Category three: "I have not started." You have an idea, and you want to know if it is a good idea, what stack to use, what to build first, and what to skip.
Category four: "I want to get into AI automations myself." You are not hiring me to build something. You are hiring me to tell you how to become the person who builds things. Sometimes you are a software engineer looking to pivot, sometimes an operator who is tired of doing the same workflow by hand, sometimes a freelancer trying to add AI to the services you already sell. The hour is about what to learn, in what order, what to charge, how to find the first client, and what the common traps are in the first ninety days. I am opinionated here for the same reason I am opinionated everywhere: I have watched a lot of people start this journey, and the ones who succeed do a very specific set of things, and the ones who flounder do a very specific different set of things.
For category one, the first thing I ask is to see the logs. Not summarized, raw. The thing you think is going wrong is almost never the thing actually going wrong, and the logs know. The second thing I ask is how you would know, right now, if the system had stopped working for one specific customer. About 80% of the time the answer is some version of "I would not know." That is the real problem. The flakiness is a symptom; the invisibility is the disease. The first hour usually ends with a shared plan to instrument the system properly, which, if we are moving quickly, we can start live on the call.
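The instrumentation we start with is usually nothing fancy. As a minimal sketch (the function names, the in-memory store, and the event fields are illustrative, not from any particular client's system), per-run structured logging plus a per-customer "last success" query looks like:

```python
import time

# In-memory store for illustration only; in practice this is a
# database table or an append-only log file you can actually query.
RUNS = []

def log_run(customer_id, step, ok, detail=""):
    """Record one structured event per run, per customer."""
    RUNS.append({
        "ts": time.time(),
        "customer": customer_id,
        "step": step,
        "ok": ok,
        "detail": detail,
    })

def silent_customers(all_customers, window_seconds=86400):
    """Customers with no successful run inside the window.
    This is the 'how would you know?' question, answered by a query
    instead of by a customer complaint."""
    cutoff = time.time() - window_seconds
    recently_ok = {
        r["customer"] for r in RUNS
        if r["ok"] and r["ts"] >= cutoff
    }
    return sorted(set(all_customers) - recently_ok)

log_run("acme", "fetch_invoices", ok=True)
log_run("globex", "fetch_invoices", ok=False, detail="401 from API")
print(silent_customers(["acme", "globex"]))  # → ['globex']
```

The point is not this exact code; it is that "is it working for customer X right now?" should be answerable in one query, not one debugging session.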
For category two, the first thing I do is make you draw the pipeline on a whiteboard. Not the prompt, not the code, just boxes and arrows. What calls what. What data moves between them. What happens on failure. Most of the time, within five minutes of drawing, you see the problem yourself. I am not being falsely modest about this. The act of drawing forces a clarity that writing code does not, and the times when the problem is genuinely not visible from the diagram are the times I actually earn my hourly rate, because those are the problems that need someone who has seen the failure mode before.
For category three, the first thing I ask is not about AI at all. I ask what you are going to stop doing, or stop paying someone to do, once this works. If you cannot answer that, the project does not need to get built. I am serious about this one. "We want to add AI to our product" is not a reason. "Our ops team spends eleven hours a week on X and we want to get that to under one" is a reason. The difference between those two sentences is the difference between a project that will pay for itself and one that will quietly die in a drawer six months from now.
The two modes after the diagnosis
After the first ten or fifteen minutes, the call pivots into one of two modes.
- Debugging. We open your repo or your n8n workspace or your Retell dashboard. I drive, you watch, and we find the thing. In the best case we fix it live. In the normal case we find it and leave you with a specific, scoped task you can finish in a day. I try very hard to make sure you understand why the fix works, not just what the fix is, because you are going to hit ten more of these and I will not be there for most of them.
- Architecture. We do not touch the code. We map out what the system should look like, what the build order is, what you are going to cut from v1 so v1 actually ships, and which parts need to be an LLM versus a function versus a sub-agent versus a cron job. I am opinionated in this mode. I will tell you that the thing you are most excited about is the wrong thing to build first, and that is what you are paying me for. (A consultant who will not disagree with you is not a consultant. They are a very expensive yes-button.)
The three fixes almost every call ends with
Most calls end with some combination of exactly three recommendations. I am going to list them here, not to spoil the product but because knowing them is the thing that tells you whether you should book the call in the first place.
- Your observability is not good enough. You need to be able to see, per run, what happened, and right now you cannot. Fix this first. (The long version is a post on this blog called "Build the dashboard before the agent.")
- You are using an LLM for a step that does not need one. Replace it with code. Your system will get faster, cheaper, and more reliable in a single afternoon, and you will be embarrassed you did not do it sooner. (Long version: "Most of your automation does not need an LLM.")
- You are trying to get one agent to do too many things at once. Split it into smaller agents, wired by code, each with one job. (Long version: "The single-agent trap.")
Three posts on this blog are the long versions of those three fixes, dressed up as worldview essays. That is not an accident. I wrote them because I was tired of explaining the same three things on call after call, and because writing them once meant the clients who read them before booking arrived already halfway to the answer.
Category four calls end differently. If you booked because you want to become the person who builds these systems, the hour is not about fixing your pipeline, it is about the first ninety days of your new practice. The usual three recommendations are: pick one narrow vertical and obsess over it instead of being a generalist, build a single portfolio piece that is embarrassingly specific so the right client recognizes themselves in it, and charge more than feels comfortable because the clients who balk at your price are the same clients who would have drained your time. None of those three are on the blog yet. They will be.
Things I will not do on a call
I will not build your whole system in sixty minutes. If the scope of your problem is larger than an hour, I will tell you that in the first ten minutes, and either we adjust the scope or you book Kingstone (my enterprise arm) for the longer engagement. I would rather turn you away than take your money for a call I know is going to feel rushed.
I will not tell you what you want to hear. If your idea is bad, or your architecture is wrong, or you are about to waste three months, I will say so on the call, politely, with evidence, but directly. People sometimes do not like this in the moment. They almost always thank me for it later. Close enough to always that I have stopped worrying about the ones who do not.
I will not pretend to know things I do not know. If you ask me about a tool I have not used or a model I have not benchmarked, I will tell you, and I will usually offer to look into it and send a follow-up after the call. The consulting business runs on trust, and trust does not survive bluffing.
So: if you have a system that should be working and is not, or a plan you are not sure about, or a half-built thing that is making you question your life choices, that is what the call is for. You can book one from the homepage. Bring your repo, your logs, and your honest version of the problem. I will bring the questions. The first fix is usually live. The second usually arrives a week later, in an email from you that starts with "you were right about the..." That is the whole job.