Quality is not a process but a state. To achieve that state we need to follow a specific path, and to follow one path means not following others. Conventionally, it is this rigidity that ensures a defined standard of quality is attained. So, in effect, to achieve quality we must instill in ourselves a sense of purpose greater than that of merely finishing the task at hand to free ourselves for the next: we must achieve not just a state of completion, but a state of satisfaction.
Having said that, it is interesting to note that a state has very little measure other than its achievement! Hence most Quality Metrics aim at measuring our adherence to the defined path (the Process). But do retrospective measures prove effective in ensuring, let alone enhancing, quality? Could we actually be looking at the whole problem the wrong way round?
I believe a direct measure of the reliability of the planning done to reach a desired state would be more effective than a measure of adherence. I do not intend to downplay Tracking, but adherence to a faulty plan would logically lead to an undesired, if not faulty, result! In short, the quality achieved at the end of an activity can only be reliably judged if the corresponding planning artifacts are reliable. A rough sketch of such a measure follows, and the scenario after it should give you a clearer view of this approach.
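To make that concrete, here is a minimal sketch (purely illustrative; the task names and figures are hypothetical, not taken from any real project) of measuring planning reliability directly, as the mean magnitude of relative error between estimated and actual effort:

    tasks = [
        # (task, estimated effort in person-days, actual effort in person-days)
        ("Requirement analysis", 10, 12),
        ("Design", 15, 14),
        ("Implementation", 40, 55),
        ("Testing", 20, 24),
    ]

    def relative_error(estimated, actual):
        # Magnitude of relative error (MRE) for a single task.
        return abs(actual - estimated) / actual

    # Mean MRE across tasks: one candidate figure for "how reliable was the plan?"
    mmre = sum(relative_error(est, act) for _, est, act in tasks) / len(tasks)
    print(f"Mean magnitude of relative error: {mmre:.0%}")

The lower this figure, the more the planning artifacts can be trusted, regardless of how faithfully the team followed the prescribed process along the way.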
Assume a scenario where billing is effected on the basis of estimates (ASSUME!). It would naturally follow that teams would attempt to provide reliable estimates to guarantee steady billing. Such reliable estimates would call for a well-defined and accurate means of deriving the size of a project; such objective sizing would in turn demand a clearly defined scope; and the stress on effective Scoping would subsequently translate to better Requirement Specifications.
I know what you’re thinking: teams will end up trying to bloat estimates while the client makes it a point to shrink them to the limit. But I believe that this very conflict could result in the creation of an objective, well-defined estimation model approved by both parties. Such an approach would enhance not only the reliability of estimates but also adherence, since any positive variance would go unbilled.
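For the sake of illustration only, here is a minimal sketch of the billing rule implied above (the rate, productivity figure and sizing model are hypothetical placeholders): the estimate is derived from a sizing model both parties have approved, and any effort beyond that estimate goes unbilled:

    RATE_PER_DAY = 800          # agreed daily billing rate (hypothetical)
    DAYS_PER_SIZE_POINT = 1.5   # productivity figure both parties sign off on

    def estimated_days(size_points):
        # Effort estimate derived from an objective, agreed size measure.
        return size_points * DAYS_PER_SIZE_POINT

    def billable_amount(size_points, actual_days):
        # Bill the lesser of estimated and actual effort: overruns are absorbed.
        return min(actual_days, estimated_days(size_points)) * RATE_PER_DAY

    # A 40-point scope is estimated at 60 days but delivered in 70:
    # only the 60 estimated days are billed, so the overrun is the team's loss.
    print(billable_amount(40, actual_days=70))

Under such a rule, bloating the estimate inflates the client's exposure and shrinking it shifts risk onto the team, and that is precisely the tension that should push both sides toward an estimation model they can defend objectively.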