Employers, clients, and colleagues expect far more from developers than working code. They want solutions that cause no harm, can be verified, are released in small increments, are based on honest estimates, and are built by people who respect each other’s professionalism and never stop improving. These ten principles, drawn directly from Bob Martin’s books and complemented by my own experience across projects of varying scale, are what I call the Developer’s Decalogue.
Why do expectations of developers go beyond the code itself?
Most requirements placed on developers are never explicitly stated. The client wants the system to work, but also assumes it won’t harm their users, that it can be extended a year from now, and that no single person will be the only one who understands how it works. When one of those assumptions breaks down, the expectations suddenly become very loud – usually at the worst possible moment.
This article grew out of reading Bob Martin (primarily Clean Code and The Clean Coder) and years of hands-on project experience. I treat it as a checklist to be used thoughtfully, not a rulebook to follow blindly.
Principle 1: First, do no harm
We don’t harm society, the company, clients, or their clients. We don’t harm the structure of the code either, because who’s going to fix it later? Usually us. Three cases worth knowing:
Volkswagen – engine software deliberately falsified emissions test results. The case went to court and blame was placed on the developers. The absence of any documented objection to unethical decisions left them with no protection whatsoever.
Toyota – following the high-profile unintended acceleration case, auditors publicly described the codebase as spaghetti: so tangled that it was impossible to trace the logic of what it was doing.
Healthcare.gov – the platform rolled out under President Obama. A launch date was announced first; only then did the team start executing an unrealistic schedule, and the site famously buckled at launch.
The common thread: decisions made under pressure, without verification mechanisms, that hurt users, creators, and clients alike. When pressure mounts – and it always does – "do no harm" is a boundary that must not move. Bob Martin dedicates an entire chapter to this in The Clean Coder, complete with ready-made scripts for difficult conversations with management.
The mechanism that helps us prove we’re not causing harm is testing. Manual, exploratory tests catch edge cases nobody thought to script. Automated tests run faster and protect against regressions we didn’t anticipate.
Principle 2: Build optimally, not maximally
Code, documentation, an AI-based solution: whatever we deliver should be the best we can produce given the context and constraints. I deliberately say optimally, because "absolutely best" is an asymptotic goal, not a practical one, and often an impossible one.
Kent Beck, the creator of TDD and one of the signatories of the Agile Manifesto, puts it simply: first make it work, then make it right, then move on to the next task.
Many mature organisations have learned empirically that to go fast, you have to go steadily. A consistent, predictable pace outlasts heroic sprints followed by weeks of regression fixes.
Key tools: design patterns, deliberate architecture, and building solutions with future changes in mind.
A practical warning sign: when unit tests are hard to write, something is wrong with the architecture. Difficulty in testing is an indicator of excessive coupling and complexity. It’s a signal to stop and reconsider the design.
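A minimal sketch of what that warning sign looks like in practice (the class and function names here are hypothetical, not from any real project): a function that constructs its own dependencies forces every test to stand up real infrastructure, while the same logic with the dependency injected can be verified with a fake in a few lines.

```python
class SmtpMailer:
    """Stand-in for a real mailer; imagine it opens network connections."""
    def send(self, to, body):
        raise RuntimeError("no SMTP server available in tests")

def notify_hard_to_test(user_email):
    # Hard to test: the dependency is constructed inside the function,
    # so any test of this logic needs a working SMTP server.
    mailer = SmtpMailer()
    mailer.send(user_email, "Your report is ready")

def notify_testable(user_email, mailer):
    # Easy to test: the collaborator is injected, so a test can pass a fake.
    mailer.send(user_email, "Your report is ready")

class FakeMailer:
    """Records sent messages instead of delivering them."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

# The unit test needs no infrastructure at all:
fake = FakeMailer()
notify_testable("ada@example.com", fake)
assert fake.sent == [("ada@example.com", "Your report is ready")]
```

When writing that fake feels easy, coupling is under control; when it feels impossible, that is the signal to stop and reconsider the design.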
I worked with a client where the pace of delivering new features was clearly declining month after month. There was no room for a full redesign, but through small, iterative refactorings, we gradually rebuilt momentum, a classic application of the Scout Rule: leave the code a little better than you found it.
Principle 3: Prove that what you built actually works
In mathematics we have formal proofs. In software we have tests, and they are the closest verification mechanism available.
According to IBM Systems Sciences Institute research, fixing a bug found after production deployment costs between 4 and 100 times more than fixing the same bug caught at the design stage. This isn’t an argument for writing tests because it’s expected. It’s a financial argument.
What does an effective test pyramid look like?
| Test level | Purpose | Target execution time | When to run |
|---|---|---|---|
| Unit tests | Verify isolated components | Under 2 minutes | On every commit |
| Integration tests | Check communication between modules | 5–10 minutes | On every pull request |
| End-to-end tests | Validate full user journeys | Up to 30 minutes | On every release candidate |
| Acceptance tests | Confirm alignment with business requirements | Depends on scope | Before every deployment |
Tests that run slowly don’t get run. That’s why their speed isn’t a luxury but a prerequisite for usefulness.
The mindset shift I consider most important: QA shouldn’t be looking for bugs that developers missed. It should be verifying that acceptance criteria defined before coding were actually met. That requires upfront investment: acceptance test templates and a process where we write to a specification.
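One lightweight way to make that concrete (the criterion and function below are invented for illustration): write the acceptance criterion down as an executable check before the implementation exists, then code until it passes.

```python
# Acceptance criterion, agreed with the business before any coding:
# "An order over 100 EUR ships for free; otherwise shipping costs 9.99 EUR."

def shipping_cost(order_total):
    """Implementation written to satisfy the criterion above."""
    return 0.0 if order_total > 100 else 9.99

# The criterion itself, expressed as checks QA can run on every build:
assert shipping_cost(150.00) == 0.0   # over the threshold: free
assert shipping_cost(100.00) == 9.99  # exactly at the threshold: paid
assert shipping_cost(20.00) == 9.99   # below the threshold: paid
```

The asserts are the specification; QA’s job shifts from hunting bugs to confirming these checks still describe what the business asked for.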
At a client project, once we introduced unit tests that genuinely verified behaviour, not just turned green, releases became noticeably less stressful. When something broke during an update, the problem was isolated to one place: a quick fix, and we moved on. Before that, a single change could trigger a cascade of failures in completely unrelated parts of the system.
Principle 4: Release in small increments, continuously
One of the principles teams most often push to "later" and almost always regret. Research by Pivotal Labs found that projects using TDD and continuous integration with small increments had a 28% shorter delivery cycle than projects without this approach.
Small changes mean in practice:
- merge conflicts are rare and small – code review is actually feasible and pleasant for everyone involved
- the codebase is always in a releasable state, ready for production, test environments, or stakeholder demos
- the business can say "deploy what you have" and we can do it immediately
- we find out faster whether what we’re building actually addresses a real need
Large changes mean weeks where the main branch doesn’t move, followed by more weeks of painful integration. Having something in production earlier, even incomplete functionality, lets us observe its behaviour in real conditions, rather than discovering problems at a big launch event.
Small changes have another advantage: they encourage tidying along the way. When I add a small feature and notice poorly named variables in a neighbouring file, I just fix them. The extra cost is close to zero, and code quality goes up.
Principle 5: Maintain quality in a measurable way
Good code, tests, and clean architecture are not ends in themselves. They are means to specific, measurable outcomes. If we measure nothing, we have no way of knowing whether our practices are actually working.
Three metrics that actually matter
| Metric | What it measures | Warning signal |
|---|---|---|
| Time to deploy new functionality | Effectiveness of the entire development process | Increases month over month |
| Number of bugs and time to fix them | Code quality and test coverage | One bug triggers a cascade of others |
| New developer onboarding time | Code readability and documentation quality | More than a few days before the first independent task |
According to CISQ (Consortium for Information & Software Quality) data, poor software quality costs US companies over $2.41 trillion annually. Development teams spend an average of 30–50% of their working time fixing bugs and dealing with unplanned rework, instead of building new features.
In Agile, story points can serve as a pace metric: if tasks estimated at 5 points are consistently taking longer and longer, that’s a clear signal that something in the code structure is slowing progress.
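A sketch of how that signal could be computed from sprint data (the numbers and the three-sprint window are invented assumptions, not a standard): track how many days of work each delivered story point actually cost, and flag a run of consecutive increases.

```python
# Hypothetical sprint history: days actually spent per story point delivered.
days_per_point = [0.8, 0.9, 1.1, 1.3, 1.6]  # five consecutive sprints

def is_slowing_down(history, window=3):
    """Flag when cost per point has risen in each of the last `window` sprints."""
    recent = history[-(window + 1):]
    return all(earlier < later for earlier, later in zip(recent, recent[1:]))

assert is_slowing_down(days_per_point)             # steadily rising: warning
assert not is_slowing_down([1.0, 0.9, 1.1, 1.0])   # noisy but flat: fine
```

The exact threshold matters less than having the trend visible at all; a metric nobody plots is a metric nobody acts on.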
Tools like SonarQube can support code quality analysis (cohesion, coupling, cyclomatic complexity), but they don’t replace the metrics above. They’re a useful supplement, not a substitute.
Principle 6: Continuously improve — but thoughtfully
The Scout Rule, Bob Martin’s mantra: leave the code a little better than you found it. Rename variables, extract functions, reduce coupling as part of every task.
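As a tiny illustration of what "a little better" can mean (the function and the VAT multiplier are hypothetical), here is the same logic before and after one Scout Rule pass of renaming and extraction:

```python
# Before: cryptic names, the whole calculation buried in one function.
def calc(d):
    t = 0
    for x in d:
        t += x["p"] * x["q"]
    return t * 1.23

# After: intention-revealing names and an extracted helper.
VAT_RATE = 1.23  # assumed gross multiplier, for illustration only

def line_total(item):
    return item["price"] * item["quantity"]

def order_total_gross(items):
    return sum(line_total(item) for item in items) * VAT_RATE

# Behaviour is unchanged; only readability improved.
items = [{"price": 10.0, "quantity": 2}, {"price": 5.0, "quantity": 1}]
assert order_total_gross(items) == calc(
    [{"p": 10.0, "q": 2}, {"p": 5.0, "q": 1}]
)
```

Each step is small enough to do inside an unrelated task, which is exactly the point of the rule.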
There’s a nuance worth noting here. Martin suggests introducing not just improvements but changes, even seemingly neutral ones. Rename a class and see what happens. Not happy with a method’s structure? Change it and observe how the system responds. It’s a way of continuously verifying the flexibility of the code.
Good structure reduces the fear of change. If I know that modifying one module will produce clearly localised consequences, because the tests will tell me exactly what broke and where, I’m happy to make changes. If every change risks a domino effect in unrelated parts of the system, I stop touching anything. Stagnation is worse than imperfect refactoring.
A clean environment also means: documentation in Confluence, comments in code kept to the necessary and meaningful minimum, README files, task descriptions in Jira. If I see something described poorly, I update it. Our product isn’t just the code.
Principle 7: Maximise productivity — holistically, not just in the editor
Writing code is just one part of the job. The rest is building, testing, debugging, deploying, attending meetings, and communicating with clients. All of it affects real productivity.
How does TDD change debugging time?
Research conducted at the University of Oulu found that developers using TDD experience a 30–35% reduction in debugging time compared to traditional approaches. Microsoft reported a 50–90% reduction in defect counts in projects that adopted TDD.
I used to launch the debugger dozens of times a day. Now it’s once a month, if that. Unit tests replace most debugging sessions: when something breaks, the test shows me exactly where — no fumbling around in the dark.
What else affects productivity?
Environment automation. A Docker Compose setup that starts with a single command isn’t a luxury but a standard. Onboarding a new developer shouldn’t take a week of configuration. If it does, something needs simplifying.
IDE dependency. I’ve seen a project where everything had to go through IntelliJ. Meanwhile another team had locked themselves into a plugin that only worked on one version of Eclipse. Developers moving between teams wasted time learning tools instead of getting up to speed with the code.
Scripts and process automation. Long, multi-line terminal commands go into a script. A script is easy to run, easy to update, and doesn’t require announcing to the whole team that "you now need to run this differently".
Meetings. If something can be resolved at a whiteboard in 10 minutes, we don’t need an hour-long call. Meetings are part of our productivity too.
Principle 8: Be replaceable
Hoarding knowledge, writing code only you can understand, skipping documentation – it’s a trap. Being "irreplaceable" doesn’t protect you from a company restructuring or a management decision. It does protect you from taking a holiday without your phone ringing.
Replaceability is freedom. If the system I built can be handled by anyone on the team, I can switch projects, go on holiday, or get sick without everything falling apart.
How to achieve this in practice:
- Pair programming and code review as knowledge transfer
- Readable code as the first layer of documentation
- Tests as documentation of system behaviour
- Scripts instead of lengthy README instructions
- Regular conversations at the whiteboard or over coffee – some things only come up in casual discussion.
Principle 9: Estimate honestly and communicate progress
I heard a story about a company that was commissioned to build a fanpage with "social portal elements." They didn’t specify the scope in the contract. The client gradually added features — photos, videos, comments, reactions — always pointing back to "social portal elements." The company eventually paid a penalty, despite having spent far more of their own resources than planned. Give an inch, they’ll take a mile.
How to estimate more accurately?
Three numbers instead of one. Rather than giving a single estimate, I give three: optimistic, average, and pessimistic, each with a clear description of what conditions must hold for that scenario. If you only give the optimistic number, the client will only remember that one.
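When the client still insists on a single number, the three estimates can be combined into an expected value. The PERT formula below is one common convention for doing that, not something the article prescribes, and the feature numbers are invented:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic three-point (PERT) expected duration: weights the
    most-likely case four times as heavily as either extreme."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical feature: 5 days if all goes well, 8 typically, 20 if it doesn't.
expected = pert_estimate(5, 8, 20)
assert expected == 9.5  # (5 + 32 + 20) / 6
```

Note how the pessimistic tail pulls the expected value above the typical case; quoting only the optimistic 5 days would hide that entirely.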
There’s a limit to how deep analysis should go. At some point the time spent on analysis becomes more expensive than the risk of an inaccurate estimate. Sometimes it’s better to start with a reasonable scope and adjust as you go, rather than spend a month gathering requirements, especially when the client themselves isn’t yet sure what they want.
Statement of Work. A document that clearly defines what we are doing and what we are not. It’s an investment that pays off every time a misunderstanding arises. It’s worth recording your assumptions: if we estimated a portal with photos, not videos – that should be written down.
Continuous progress communication. With estimates in hand and a clear view of where we are against them, we can flag a change to the client early. That gives the business time to decide: adjust scope, shift the deadline, grow the team. They don’t find out about the problem on the day they were expecting the finished product.
Principle 10: Respect each other and keep improving
I once heard the phrase: professionals respect each other for their professionalism, even when they disagree. I don’t remember the source, but in a professional environment it pretty much says everything that needs to be said.
It eliminates debates about who uses which tools or prefers which approach. At work: professionalism.
Continuous improvement, but in balance. The Pragmatic Programmer by Andy Hunt and Dave Thomas suggests one technical book per month, regular blog reading, attending conferences. That’s a fine ideal. We also have lives, families, and other interests. More realistically: learn at work when you have the space. Try a new pattern on a real task.
Sharing knowledge is also learning. Research on learning effectiveness shows that when we explain something to someone else, we internalise it far more deeply than through reading alone. Hence the value of code review, internal presentations, and whiteboard sessions. It’s rubber duck debugging in social form: more than once, I’ve found the answer myself before I finished explaining the problem to a colleague.
Broaden your horizons beyond IT. Painting teaches attention to detail that transfers directly to UI work. Chess develops thinking several moves ahead, exactly what you need when designing architecture. Any mental exercise improves overall cognitive function.
When NOT to apply all principles at once?
This is a checklist, not a mandate. Context determines which principles apply and to what extent.
| Context | Approach |
|---|---|
| Startup validating a business hypothesis | Minimum viable quality — the priority is fast validation, not full test coverage |
| Healthcare system handling medical data | Most principles become requirements, not options |
| Mature product with a large user base | Full test pyramid, CI/CD, quality monitoring |
| Internal script automating a single task | It just needs to work and be readable |
The key question for every principle: what is the return on this investment in this specific context? As Kent Beck said: first make it work. Everything else is iteration.
The worst situation: we’re moving slowly because we’re following all the practices — and we’re still slowing down month after month. We’ve taken on all the downsides of a bad approach without any of the benefits of a good one. If delivery pace is declining despite applying good practices, it’s a sign we’re applying them out of habit, not with purpose.
This article is based on an internal presentation at fireup.pro, inspired by Bob Martin’s books Clean Code and The Clean Coder as well as The Pragmatic Programmer by Andy Hunt and Dave Thomas, and Clean Craftsmanship: Disciplines, Standards, and Ethics by Robert C. Martin. Practical examples come from projects delivered through Custom Software Development, System Refactoring, and Test Automation at fireup.pro.
🚀 The header photos are my own; I’ve been an amateur photographer for years. You can find more of my work at photography.mwilczek.net.

