Why software projects fail
The failure modes of software projects are remarkably consistent across industries, company sizes, and project types. Understanding them is the first step to avoiding them.
Scope creep
The most common failure mode. A project that starts well-scoped gradually accumulates requirements — "while you are in there, could you also..." — until the timeline is untenable and the team is exhausted. Every change request has a cost; the discipline is making that cost visible and making explicit decisions about whether to pay it.
Ambiguous requirements
Ambiguous requirements produce ambiguous software. The phrase "the system should be user-friendly" is a design value, not a requirement. Requirements should be specific, testable, and agreed upon before development begins: "The learner can access a completed course certificate from their profile within 2 clicks."
Communication gaps
Problems discovered late are expensive. A misunderstanding about how a feature should work costs a day of rework if it surfaces during development, a week if it is discovered in user acceptance testing, and months of negotiation and remediation if a client finds it in production.
The remedy is structured, regular communication — not longer status meetings, but focused check-ins with specific artefacts (demos, updated tickets, written decisions) that create a shared record.
The toolkit
The Product Requirements Document (PRD)
A PRD answers three questions: what problem are we solving, for whom, and how will we know if we have succeeded? It does not need to be long — a well-written PRD for a focused feature set can be 2–5 pages. What it must include: the problem statement, the user personas, the core user flows, the success metrics, the explicit out-of-scope list, and the acceptance criteria.
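The required sections above can be applied as an explicit completeness check before development begins. A minimal sketch (the section names come from the list above; the check itself and the `missing_sections` helper are illustrative, not a standard tool):

```python
# Sections a PRD must include, per the list above.
REQUIRED_SECTIONS = [
    "problem_statement", "user_personas", "core_user_flows",
    "success_metrics", "out_of_scope", "acceptance_criteria",
]

def missing_sections(prd: dict) -> list:
    """Return the required sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not prd.get(s)]

# A draft PRD that has not yet stated what is out of scope
# or how completion will be verified.
draft = {
    "problem_statement": "Learners cannot retrieve certificates.",
    "user_personas": "Learner, course administrator.",
    "core_user_flows": "Profile -> Completed courses -> Certificate.",
    "success_metrics": "Certificate retrieved within 2 clicks.",
}
print(missing_sections(draft))
```

Running the check on an incomplete draft surfaces the gaps (here, the out-of-scope list and acceptance criteria) before they become assumptions.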
Sprint planning and the backlog
Work is planned in fixed-length iterations (sprints). The backlog is a prioritised list of user stories — small, estimable pieces of work that each deliver value. Stories are estimated using relative sizing (story points or t-shirt sizes), not hours. Hours give false precision; relative sizing is honest about uncertainty.
At the start of each sprint, select stories from the top of the backlog up to the team's capacity. Commit to that scope. Protect the sprint from new work — urgent items go into the next sprint unless something is explicitly swapped out.
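The selection rule above — take stories from the top of the prioritised backlog until the team's capacity is reached — can be sketched directly. This is a minimal illustration (the `Story` type, point values, and `plan_sprint` helper are assumptions for the example, not a prescribed tool):

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    points: int  # relative size (story points), deliberately not hours

def plan_sprint(backlog, capacity):
    """Select stories from the top of the backlog until capacity is reached."""
    sprint, used = [], 0
    for story in backlog:
        if used + story.points > capacity:
            break  # next-priority story does not fit; stop, do not cherry-pick
        sprint.append(story)
        used += story.points
    return sprint

# Backlog in priority order, team capacity of 9 points.
backlog = [
    Story("Certificate export", 3),
    Story("Profile page", 5),
    Story("Audit log", 8),
    Story("Fix footer typo", 1),
]
print([s.title for s in plan_sprint(backlog, capacity=9)])
```

Note the deliberate `break`: stopping at the first story that does not fit respects backlog priority, rather than cherry-picking smaller items from further down. Whether a team skips past an oversized story is a planning policy choice, not a rule.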
Definition of Done
Agree on a Definition of Done at the start of the project and apply it consistently to every story. A typical Definition of Done: code written, peer-reviewed, tests written and passing, deployed to staging, acceptance criteria verified, documentation updated. Stories that do not meet the Definition of Done do not count as complete — they create technical debt that compounds over time.
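Because the Definition of Done is a fixed checklist applied uniformly, it can be expressed as one. A small sketch (the criterion names mirror the typical list above; the `is_done` helper is illustrative):

```python
# The agreed Definition of Done, applied identically to every story.
DEFINITION_OF_DONE = [
    "code_written", "peer_reviewed", "tests_passing",
    "deployed_to_staging", "acceptance_verified", "docs_updated",
]

def is_done(story_status: dict) -> bool:
    """A story is done only when every criterion is satisfied."""
    return all(story_status.get(c, False) for c in DEFINITION_OF_DONE)

story = {c: True for c in DEFINITION_OF_DONE}
story["docs_updated"] = False  # everything shipped, documentation skipped
print(is_done(story))
```

The `all(...)` is the point: a story missing any criterion, however minor, does not count as complete.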
Risk log
Maintain a simple risk log throughout the project: what might go wrong, how likely is it, what is the impact, and what is the mitigation. Review it at every sprint retrospective. Risks that are not tracked are risks that become surprises.
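A risk log needs no tooling beyond a structured record and a review order. A minimal sketch, assuming a common likelihood-times-impact scoring on 1–5 scales (the scoring scheme and example risks are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (near-certain)
    impact: int      # 1 (minor) .. 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("Key engineer leaves mid-project", 2, 5, "Pair on critical modules"),
    Risk("Third-party API deprecated", 3, 3, "Isolate API behind an adapter"),
    Risk("Staging environment drift", 4, 2, "Rebuild staging from config weekly"),
]

# At each retrospective, review the highest-scoring risks first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:2d}  {r.description} -> {r.mitigation}")
```

Sorting by score each review keeps attention on the risks most likely to become surprises, and forces an explicit mitigation to be written down for each.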
Change request process
Every change to agreed scope should go through a lightweight change request process: describe the change, estimate the impact (time, cost), identify what it displaces (something must be deferred or descoped to accommodate new work), and get explicit sign-off. This creates a record and makes trade-offs visible to all stakeholders.
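The change request record can be kept just as lightweight as the process. A sketch of the fields the text calls for — description, impact estimate, what is displaced, and explicit sign-off (the field names and `ChangeRequest` type are assumptions for illustration):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    description: str
    estimated_days: float  # impact estimate: time, which drives cost
    displaces: str         # what is deferred or descoped to make room
    approved_by: str = ""  # empty until a stakeholder signs off
    raised_on: date = field(default_factory=date.today)

    def approve(self, stakeholder: str):
        """Explicit sign-off is what turns a request into agreed scope."""
        self.approved_by = stakeholder

cr = ChangeRequest(
    description="Add CSV export to the reports screen",
    estimated_days=3.0,
    displaces="Defer audit-log filtering to the next sprint",
)
cr.approve("Product owner")
print(cr.approved_by, "-", cr.displaces)
```

The `displaces` field is mandatory by construction: a change request cannot even be recorded without naming the trade-off, which is exactly the visibility the process is meant to create.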
The communication cadence
A minimal effective communication cadence for a software project:
- Daily standup (15 minutes): what did I do yesterday, what am I doing today, any blockers?
- Sprint review (1 hour, fortnightly): demo of working software in staging, client feedback
- Sprint retrospective (30 minutes, fortnightly): what went well, what could improve
- Monthly steering update: project status, risk log review, budget tracking
More communication is not always better. Undifferentiated status updates create noise. Structured communication at the right cadence with clear artefacts creates alignment.