Discovery: understanding the problem before writing code
Every project starts with discovery — a structured process of understanding the problem, the users, the constraints, and the success criteria before any technical decisions are made.
Discovery typically takes one to two weeks and produces: a product requirements document that defines what the system must do, a user flow diagram that maps the critical journeys, a data model draft, a list of third-party integrations and their API capabilities, a risk register, and a high-level estimate.
The most important output of discovery is alignment. Teams that skip discovery and go straight to development build the wrong thing faster.
Architecture: making the decisions that are hard to reverse
After discovery, we define the architecture — the decisions that are expensive to change later: technology stack, database design, tenancy model, authentication approach, deployment infrastructure, and API design.
For a typical full-stack web application, our default stack is:
- Next.js 14 (App Router) for the frontend — server-side rendering, built-in API routes, excellent performance
- Node.js with TypeScript for any standalone backend services
- PostgreSQL for the primary database
- Prisma as the ORM — type-safe, excellent migration management
- Redis for caching and session management
- S3-compatible storage for user uploads
- Vercel or a VPS with PM2 for deployment
We document architecture decisions as ADRs (Architecture Decision Records) — short documents that capture the decision, the alternatives considered, and the rationale. ADRs are invaluable when onboarding new team members or revisiting decisions 12 months later.
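A minimal ADR sketch, loosely following the common Nygard-style format (the exact fields vary by team, and every value below is a placeholder, not a real decision from a project):

```markdown
# ADR-NNN: <Decision title>

## Status
Proposed | Accepted | Superseded by ADR-MMM

## Context
What problem are we solving, and what constraints apply?

## Decision
The choice we made, stated in one or two sentences.

## Alternatives considered
- Option A, and why it was rejected
- Option B, and why it was rejected

## Consequences
What becomes easier or harder as a result of this decision.
```

Keeping ADRs this short is deliberate: a record that takes five minutes to write actually gets written.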
Sprint planning and development
We work in two-week sprints. Each sprint starts with a planning session where we review the prioritised backlog, clarify acceptance criteria, and commit to a sprint goal.
Each user story follows the same definition of done:
- Code written and reviewed
- Unit and integration tests covering the critical paths
- Deployed to the staging environment
- Acceptance criteria verified by a team member who did not write the code
- Documentation updated if the feature affects the API or user guide
Client demos happen at the end of every sprint — not a formal presentation, but a walkthrough of working software in the staging environment. This two-week feedback loop prevents the project from drifting in the wrong direction.
Quality and testing
We do not write tests as an afterthought. Testing is part of the definition of done for every story.
What we test
- Unit tests for business logic and utility functions (Jest)
- Integration tests for API endpoints (Supertest)
- Component tests for complex UI interactions (React Testing Library)
- End-to-end tests for critical user journeys (Playwright)
We do not aim for 100% coverage — that is a vanity metric. We aim for comprehensive coverage of the paths that, if they break in production, would cause the most user or business impact.
Deployment and handover
Infrastructure as code
All infrastructure is defined as code (Terraform or Pulumi for cloud resources, Docker Compose for local development). This means the staging and production environments are built from the same definitions, so they stay consistent and reproducible, and infrastructure changes go through the same review process as application code.
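As an illustration, a minimal Docker Compose file for local development against the default stack (PostgreSQL plus Redis) might look like this. Service names, credentials, and ports are placeholders, not a real project configuration:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app_dev
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7
    ports:
      - "6379:6379"
volumes:
  db-data:
```

A new developer runs one command to get the same dependencies the CI pipeline uses, which is most of the value of treating infrastructure as code locally.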
CI/CD pipeline
Every commit triggers a CI pipeline: lint, type check, unit tests, integration tests, build. Every merge to the main branch triggers a deployment to staging. Production deployments are promoted manually after sign-off, with automated rollback if health checks fail post-deployment.
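The stages above map naturally onto a CI workflow. A sketch in GitHub Actions syntax, assuming an npm-based project whose `lint`, `typecheck`, `test`, and `build` scripts are defined in `package.json` (those script names are assumptions, not a universal convention):

```yaml
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint        # lint
      - run: npm run typecheck   # type check
      - run: npm test            # unit and integration tests
      - run: npm run build       # build
```

The staging deployment and the gated production promotion would be separate jobs keyed off the branch and a manual approval step.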
Handover documentation
At project close, we deliver a technical handover document covering: system architecture diagram, database schema, API documentation, deployment process, monitoring and alerting setup, known limitations, and recommended next steps. Our goal is that a competent developer who was not on the project can maintain and extend the system from day one.