We all know the classic user story format, expressed as a simple sentence structured as follows:
“As a [Persona], I [want to], [so that].”
Where:
· Persona: an end-user of the system. The more acute your understanding of who this person is, whether it is a job function, relevant profile criteria, affiliation, or expertise, the better.
· Wants to: a description of the Persona’s intent with respect to SW functionality. The classic advice is that this should not describe implementation details. In principle I’d agree, but too high-level a description may lead to imprecise generalizations and missing the mark altogether.
· So that: the overall objective or goal that the Persona is trying to achieve.
For example, a user story may look something like this:
As Doug, a Project Manager, I would like to monitor the progress of cross-functional teams towards the completion of a project, so that I can feel more in control and report on progress in near real time.
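As an aside, the format is simple enough to capture as a data structure. Here is a minimal sketch in Python; the class and field names are my own illustration, not any established schema:

from dataclasses import dataclass

# A minimal model of the classic format; field names are illustrative.
@dataclass
class ClassicUserStory:
    persona: str   # who the end-user is: job function, profile, expertise
    wants_to: str  # the intent, ideally free of implementation detail
    so_that: str   # the overall goal the persona is trying to achieve

    def __str__(self) -> str:
        return f"As {self.persona}, I want to {self.wants_to}, so that {self.so_that}."

doug = ClassicUserStory(
    persona="Doug, a Project Manager",
    wants_to="monitor the progress of cross-functional teams toward project completion",
    so_that="I feel more in control and can report on progress in near real time",
)
print(doug)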
I’ve used this format a lot and it is OK, but it has limitations and I am no longer a fan of it. Describing those limitations is not the goal of this post; suffice it to say that, in my opinion, the format lacks rigor, success/failure criteria, and experimentation parameters. Besides, paraphrasing and expanding on Steve Jobs’ famous postulate: users cannot tell you what they want, but they can surely tell you what they don’t like.
I recently started using a hypothesis-driven format for user stories. I feel it is a much better model: it provides more rigor, testable success criteria, and the right level of implementation detail:
We believe that
[building this feature]
[for these people]
will achieve [this outcome], satisfying these [testable criteria, …]
We will know we are successful when we see
[this signal from the market]
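Before the worked example, here is a minimal sketch of how this format could be captured in Python, with each testable criterion modeled as a predicate so the self-testing aspect is explicit. All names here are my own illustrative assumptions, not an established schema:

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class HypothesisStory:
    feature: str              # [building this feature]
    audience: str             # [for these people]
    outcome: str              # will achieve [this outcome]
    criteria: List[Callable[[], bool]] = field(default_factory=list)  # [testable criteria, …]
    success_signal: str = ""  # [this signal from the market]

    def criteria_pass(self) -> bool:
        # The hypothesis holds only when every testable criterion passes.
        return all(check() for check in self.criteria)

The point of modeling the criteria as callables is that each one must be checkable by a concrete test, which is exactly the rigor this format is meant to enforce.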
Here is the previous user story according to this format:
We believe that
building a dynamic Work In Progress (WIP) Limit that automatically applies WIP limits to user-selected Sprint Workflow States, for cross-functional internal and external project stakeholders
will highlight blockers and bottlenecks, satisfying these testable criteria:
across desktop and mobile devices;
across Slack channels;
the WIP limit is automatically computed after the first 3 sprints, and this default value is further refined for each subsequent sprint (see the sketch after this example);
the Kanban board Activity Column is automatically highlighted if a user-selected Sprint Workflow State (In Progress, Code Review, To Do, Done) exceeds its WIP Limit;
the feature collects usage metrics and feeds a user usage report;
etc.
We will know we are successful when we see a 20% uptake in orders of our product over the current financial year, or 10K distinct user downloads in the first 3 months.
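The dynamic WIP computation in the criteria above, computed after the first 3 sprints and refined each subsequent sprint, could be sketched like this. The rolling-average refinement rule is my own assumption; the story deliberately leaves the exact computation open:

from typing import List, Optional

def dynamic_wip_limit(items_in_progress_per_sprint: List[int]) -> Optional[int]:
    # No limit is enforced until 3 sprints of history exist.
    if len(items_in_progress_per_sprint) < 3:
        return None
    # Assumed refinement rule: rolling average over all completed sprints,
    # rounded to the nearest whole item.
    return round(sum(items_in_progress_per_sprint) / len(items_in_progress_per_sprint))

def should_highlight(column_item_count: int, wip_limit: Optional[int]) -> bool:
    # Highlight a user-selected workflow state's column when it exceeds the limit.
    return wip_limit is not None and column_item_count > wip_limit

# Example: four sprints averaging 5 items in progress yield a limit of 5,
# so a column holding 7 items would be flagged.
limit = dynamic_wip_limit([4, 5, 6, 5])  # -> 5
print(should_highlight(7, limit))        # -> True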
OK, this seems like a lot more work, but creating such a hypothesis requires more precise thinking, builds in a self-testing mechanism, and provides success criteria that are not open to interpretation. This last point is the most important one, as it underlines the value of all the work we are doing: delivering value-driven SW.
Which one would you rather have your team working on?