12 March 2026 by Michael and Daniel
Software Development · Business · DevOps

Michael: I get asked “how long will this take?” on nearly every project. And I have learned that the honest answer is always longer than the client wants to hear. Not because we are slow, but because building software properly has stages, and most of the time people want to skip to the bit where code gets written.

The projects that go wrong almost always went wrong before anyone wrote a line of code. Requirements that nobody pinned down. Assumptions that turned out to be wrong. A deadline based on someone’s gut feeling rather than an actual plan.

So Daniel and I decided to write this together. I will explain what each stage means for the person paying for the software. He will explain what it looks like from the other side. Between us you will get a picture of what the process actually involves and why each part earns its place.

Daniel: And I will try not to make it boring.

Working out what you actually need

Michael: This is where half the problems start. A client says “we need an app” or “we need a portal” and expects us to start building. But when I ask what it needs to do, the answer is often vague. “Something like X but simpler.” “We just need a dashboard.” I have heard “you know what we mean” more than once.

My job at this stage is to be annoying. I ask questions that feel obvious. What happens when a user forgets their password? What do you do with this data once you have it? Who else needs access? These conversations feel tedious but they are the ones that prevent a rewrite three months later.

Daniel: From my side, discovery is about turning business language into something I can build. When Michael says “the client wants to track orders,” I need to know what an order looks like. Does it have statuses? Can it be edited after submission? Does anyone need to approve it? What happens when two people edit the same order at the same time?
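The “two people edit the same order at the same time” question usually comes down to something like optimistic locking. Here is a minimal sketch in Python; the Order shape, its statuses, and the version field are hypothetical, because the real answer depends on what the client tells us in discovery:

```python
# Sketch of optimistic locking: each save must name the version it
# started from, so a second editor working from stale data is rejected.
# The Order model here is hypothetical, for illustration only.

class StaleOrderError(Exception):
    """Raised when the order changed since this editor loaded it."""

class Order:
    def __init__(self, status="draft"):
        self.status = status
        self.version = 1  # bumped on every successful save

    def update_status(self, new_status, expected_version):
        if expected_version != self.version:
            raise StaleOrderError("order changed since you loaded it")
        self.status = new_status
        self.version += 1

order = Order()
order.update_status("submitted", expected_version=1)  # first editor succeeds
try:
    order.update_status("cancelled", expected_version=1)  # second editor is stale
except StaleOrderError:
    print("second edit rejected, reload and try again")
```

Whether rejection, merging, or last-write-wins is the right behaviour is exactly the kind of thing the whiteboard conversation settles.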

I usually end up drawing diagrams. Not because I enjoy it, but because the moment you draw a flow on a whiteboard, someone in the room says “oh, that is not how we do it.” That correction at the whiteboard stage costs nothing. The same correction after two weeks of coding costs a lot.

The output of this stage is a document that describes what we are building in enough detail that both the client and I agree on what “done” looks like. It does not need to be fifty pages. For a small project it might be two pages. But it needs to exist.

Planning before anyone writes code

Michael: Clients sometimes push back on this stage. They see planning as a delay. “Why are you not coding yet?” Because if we start coding without a plan, we will build the wrong thing, realise it three weeks in, and start again. I have seen it happen on projects we inherited from other teams. Weeks of work thrown away because nobody planned the data model or thought about how the parts fit together.

What I want clients to know is that planning is the cheapest part of the project. Changing a plan costs nothing. Changing code costs time and money. Changing code that is already in production costs time, money, and trust.

Daniel: Planning for me is architecture. Where does the data live? How do the services talk to each other? What can we use off the shelf and what needs building? What are we going to regret in six months if we get it wrong now?

For small projects this takes a day or two. For bigger ones it might take a week. I break the work into pieces small enough that each one can be built, tested, and reviewed independently. That way if one piece takes longer than expected, the rest of the project does not grind to a halt.

I also make technical decisions at this stage that have budget implications, and Michael needs to be involved in those. Choosing a managed database versus running your own. Picking a framework that is well-supported versus one that is trendy but might not be around in two years. These are not just technical decisions. They affect how much it costs to run and maintain the software after we leave.

Building it

Daniel: This is the part people think is the whole job. It is usually about 40% of the total effort.

I write code in small pieces. Each piece does one thing. It gets reviewed before it goes anywhere near the main codebase. I use pull requests for this. Another engineer reads what I have written, questions the bits that do not make sense, and approves it or sends it back. I do the same for them.

Version control means every change is recorded. If something breaks, we can see exactly what changed, when, and who did it. We can undo it. This is not about blame. It is about being able to fix things quickly when they go wrong.

I commit code with clear messages about what changed and why. This sounds like a small thing but I have inherited codebases where the commit history is nothing but “fix” and “update” and it makes debugging almost impossible.

Michael: What clients should expect during this phase is regular visibility. We show working software early and often. Not slide decks, not progress reports, actual working screens. If a client has not seen their software running within two weeks of development starting, something is wrong.

The phrase I have learned to distrust is “it is nearly done.” In my experience, the last 20% of a feature takes as long as the first 80%. When Daniel says something is done, it is done. When someone says it is nearly done, I start asking what is left.

Testing

Daniel: I write tests alongside the code, not after it. Automated tests that run every time anyone pushes a change. If a test fails, the change does not get merged. This catches problems before they reach anyone else.

There are different types. Unit tests check that individual functions do what they are supposed to. Integration tests check that the pieces work together. End-to-end tests simulate a real user clicking through the application. Not every project needs all three, but every project needs some.

The tests also act as documentation. If you want to know what a function is supposed to do, read its tests. They describe the expected behaviour in concrete terms.
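To make that concrete, here is a minimal unit-test sketch in Python. The `order_total` function and its VAT rule are invented for the example; the point is that each test states one expected behaviour in terms a reader can check:

```python
# A hypothetical function and its unit tests. The tests double as
# documentation: each one names and checks a single concrete behaviour.

def order_total(items, vat_rate=0.20):
    """Sum (quantity, unit price) pairs and apply VAT, rounded to pennies."""
    net = sum(qty * price for qty, price in items)
    return round(net * (1 + vat_rate), 2)

def test_empty_order_costs_nothing():
    assert order_total([]) == 0.0

def test_vat_is_applied():
    assert order_total([(2, 10.0)]) == 24.0

test_empty_order_costs_nothing()
test_vat_is_applied()
print("all tests passed")
```

In a real project these would live in a test suite that the CI pipeline runs on every change, rather than being called by hand.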

Michael: I can tell you what happens when you skip testing because I have inherited those projects. A client comes to us with an application that is “mostly working” but they are afraid to change anything because every time they fix one thing, two other things break. That is a codebase with no tests.

The cost of adding tests to an existing project that has none is brutal. It is much cheaper to write them from the start. When clients ask whether testing is really necessary, I ask whether they would buy a car that had not been test-driven. Nobody says yes.

Getting it live

Daniel: Code that works on my laptop needs to work on a server. That gap is where a lot of things go wrong. Different operating system versions, different environment variables, missing dependencies. The fix is automation.

We use CI/CD pipelines. When code gets merged, an automated process builds it, runs the tests, and deploys it to a staging environment. Staging is a copy of production where we can test things without affecting real users. If staging looks good, we promote it to production.
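A pipeline like that is usually a short config file. The sketch below is a hypothetical GitHub Actions-style workflow, not our actual setup; the `make` targets and `deploy.sh` script are placeholders for whatever build, test, and deploy commands a project uses:

```yaml
# Hypothetical CI/CD workflow sketch (GitHub Actions style).
on:
  push:
    branches: [main]

jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build           # build the application
      - run: make test            # a failing test stops the pipeline here
      - run: ./deploy.sh staging  # deploy to the staging copy of production
      # promoting staging to production is a separate, deliberate step
```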

This means deployments are boring. That is the goal. If deploying your software is a stressful event that requires someone to stay late and babysit the process, your deployment pipeline needs work.

We also set up monitoring from day one. If the application starts throwing errors, we know about it before the users do. If the database is running slow, we see it in the graphs. If the server runs out of disk space, we get an alert. We do not wait for someone to ring us and say “the site is down.”
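The disk-space alert is the simplest of those checks to sketch. This Python version uses the standard library's `shutil.disk_usage`; the 90% threshold and the `alert` destination are hypothetical, and in production this would feed a proper alerting system rather than a callback:

```python
# Minimal sketch of a disk-space check. Threshold and alert target are
# placeholders; a real setup would page someone via an alerting service.
import shutil

def disk_usage_percent(path="/"):
    usage = shutil.disk_usage(path)
    return 100 * usage.used / usage.total

def check_disk(path="/", threshold=90.0, alert=print):
    pct = disk_usage_percent(path)
    if pct >= threshold:
        alert(f"disk at {pct:.0f}% on {path}")
    return pct

check_disk()
```

Error rates, response times, and queue depths get the same treatment: measure, compare to a threshold, alert before a user notices.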

Michael: Clients often think the project is finished when the software goes live. It is not. Launching is the beginning of a new phase, not the end of the project.

Software needs maintenance. Security patches. Dependency updates. Performance monitoring. Bug fixes for edge cases that only appear with real users and real data. If you are not budgeting for maintenance, you are building something that will slowly degrade until it needs replacing.

I always have this conversation before a project starts, not after. How much ongoing support do you need? Who is going to monitor it? What is your plan if something breaks at 2am? If the answer to all of those is “we have not thought about it,” that is the first thing we work on.

Where AI fits into this

Daniel: I use AI tools throughout the process now. Not to write the software for me, but to move faster through the parts that used to slow me down.

During planning, I use it to think through edge cases. “What happens if two users submit the same form at the same time?” I know the answer, but having something to bounce off helps me make sure I have not missed anything. It is like rubber duck debugging but the duck talks back.

During development, AI handles a lot of the tedious bits. Generating boilerplate, writing the first draft of tests. I review everything it produces. It gets things wrong often enough that blind trust is not an option, but it gets things right often enough that ignoring it would be stubborn.

Documentation is where it saves the most time. Writing up what a function does, generating API docs, creating onboarding guides for the next developer. I used to put this off because it was boring. Now I generate a first draft and clean it up. The documentation actually gets written, which is the point.

Code review is another one. Before I ask a colleague to review my code, I sometimes run it through an AI tool first to catch the obvious things. Unused variables, inconsistent naming, potential null references. It does not replace a human review but it means the human reviewer can focus on the things that matter, like whether the approach is right.

Michael: From the business side, AI has changed what is realistic for a small team. We can take on work now that would have needed a bigger team three years ago, not because the AI does the skilled work, but because it handles enough of the routine that our engineers spend more of their time on the hard problems.

It has not changed the process itself. You still need discovery, planning, testing, deployment. AI does not skip stages. It just makes each stage faster. The clients who understand that get better results than the ones who expect AI to mean everything is instant.

The bit that ties it all together

Michael: If your development team or consultancy cannot explain their process in plain language, that is worth paying attention to. A good process does not need to be complicated. It needs to be understood by everyone involved and followed consistently.

Daniel: And if someone tells you they do not need a process because they are agile, run. Agile is a process. It is just a different one. “We do not have a process” means “we make it up as we go along,” and that only works until it does not.

Michael: If you are about to commission software and want to understand how we would approach it, get in touch. We will walk you through it honestly.
