This is what I have been putting up with lately. I don’t post this to complain (well maybe a little), but because I know that most of the people who visit here run into similar situations.
I am on the receiving end of “Do what I say, not what I do.” I work with an area in the company that does research into the HCI world and feeds that info (read: the who’s and what’s we should be designing for) to us over here on the practitioner side. The dichotomy is like that of academia (them) and practitioners (us).
Academia looks at the whole picture. They look at research that exists to see if it will apply, or how to augment the research with their own. They work at their own pace (of sorts), following the information where it takes them, coming at the issues with a hypothesis but allowing for deviation if the situation warrants.
They think that everyone will benefit from their foraging, and from each data point collected along the way. But they usually don’t think about their audience (outside of other academics) when they present their findings.
Practitioners look at the whole picture within the defined scope/context of the effort du jour. They look at research (depending on how much time there is) to see what applies, but more often than not decisions are already made in the form of style guides and recent past contextual analysis of the same or similar user group. They don’t follow the information where it takes them (again scope/time issues).
A hypothesis is usually developed en route to a solution and by that time it is too late to make changes. They don’t care what the user is thinking at every point from the moment they decide (for instance) to go to work to when they leave for the day. They care about what it takes for the person to complete the task, and perhaps how that task fits with the previous and following tasks.
Finally, though this list isn’t exhaustive, they care about how their audience will interact with their deliverables. Developers must be given specific instructions on what to build. Business partners need to understand the impact of decisions (or non-decisions) on the tasks within the user’s workflow.
I get told to do a lot of things. Some make sense, most don’t. Well, they all make sense actually, just not in the context in which I have to complete my work. How do I integrate what one area of the company tells me I must do with another area of the company that defines how much time and effort I get to spend on things?
I have a question for all’y’all’s: I will state, first, that no one knows the real number for how many people it takes to get valid usability test results. I will also offer, as an assumption, that most practitioners test between 5 and 15 people, whether on one or multiple iterations. That said, how many people does it take to get good user requirements?
For usability, one would figure that a representative sample of the user population would be in order to get valid data. So if your user group has 80,000 users and you want 95% confidence in your results (with a +/- 3% margin of error, or MOE) you would need 1053 test participants. If you tested 15 people out of your 80,000 population, your MOE would be almost 29%!
I bet most of us on the practitioner side can live with a 29% MOE, because we see the “big issues” within the first 5 participants. Is a 29% MOE valid for academia? No way. (See this site for how I came up with these numbers.)
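For the curious, here’s a minimal sketch of how those figures fall out of the standard sample-size and margin-of-error formulas (using the usual 95% z-score of 1.96, worst-case proportion p = 0.5, and a finite population correction). The linked calculator may make slightly different assumptions, so its exact MOE for n = 15 won’t necessarily match the textbook formula to the percentage point:

```python
import math

def sample_size(population, z=1.96, moe=0.03, p=0.5):
    """Required sample size for a given margin of error,
    with a finite population correction applied."""
    n0 = (z ** 2 * p * (1 - p)) / moe ** 2   # infinite-population size
    return n0 / (1 + (n0 - 1) / population)  # correct for finite population

def margin_of_error(population, n, z=1.96, p=0.5):
    """Margin of error for a sample of size n drawn from a finite population."""
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

print(round(sample_size(80_000)))            # about 1053 participants
print(margin_of_error(80_000, 15))           # roughly 0.25, i.e. ~25%+
```

Either way, the point stands: with 15 participants out of 80,000 users, the margin of error is enormous by academic standards.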
These days, thankfully, it’s not much of a problem to get buy-in for 5, 10, 15, 20, 25 usability test participants. But when you get on a project and say, “I need to talk with 1000 people in 10 different cities across the US and Canada,” what do you think the answer will be? This is what the academia side can do, because they aren’t hindered by deadlines and scope.
But the practitioner side cannot do this. (And if you can, I hate you and want to work with you. :) We get the business analyst, and perhaps a visit with 2 or 3 people in one city, in one office. And we go with what we’ve got and usually make decent decisions, which are tested by actual users.
Should academia understand the context in which people outside of academia will use their research? Should practitioners push back for more time to analyze user requirements? The answer to both questions is probably yes, but at this point, I am not sure how to make that happen. I am still in an Us and Them state of mind.
I think it will be easier for practitioners to change. We have to adapt to different situations from project to project (and even within a project), so we have practice. They sit in their smoking jackets, raise flags on issues and risks (which they leave up to us to solve), and tell us we shouldn’t be practitioners because we are doing it wrong.
I know the goal is the same: assist the user however possible within the known constraints. But the way in which we go about our business seems like an unbridgeable gap. There is no data (that I know of) that shows that a practitioner creates a better or worse solution based on the environment in which we work.
Should we, then, drop their (academia’s) asses on the side of the road until they can prove we really do need them? Boy, do I wanna. But, because we are so adaptable, we will end up being the ones to make things work. It will take much effort, become a great distraction, and leave a bitter taste in most everyone’s mouth, but it’s (probably) the right thing to do.
Anyone have a success story? I’d love to hear about it.