Four More Words

Mark Hurst's October article makes a very good point: "Don't define tasks beforehand."

If you have never tried what Mark suggests, I suggest you try it as well. However, I think an important point must be made to put the article in context.

A Listening Lab is a very useful tool in terms of design and usability. Mark notes that this is research, but equates a Listening Lab mostly to usability. Not that usability isn't research, of course, but the type of information he is looking for makes me see it as more than just usability.

I see two things happening in a Listening Lab.

Current Task Analysis (CTA). What do users do now with the site/product? However, a Listening Lab is CTA out of the context in which the user actually uses the site/product. Honestly, I don't know if this is a good or bad thing. I prefer, as you might guess, to do CTA in the context of the user.

Exploratory Usability. There are four main categories (as well as three main methods) of usability (as of today, anyway): Exploratory, Assessment, Validation, and Comparative. Generally speaking, they fit into a hierarchy or range: from do-before-designing to do-after-users-have-been-using-your-product-for-a-while.

Mark's Listening Lab falls into the first category. It involves a high level of interaction between the facilitator and the participant. It is done early in the design process. It usually involves getting more than just the generic (yet important) demographic information.

Exploratory testing is an absolute must, and yet it is skipped by most people (for many, often valid, reasons). Most people do either Assessment or Validation. Well, most good designers will do at least Assessment, and most everyone else does only Validation, if anything.

For Assessment and Validation testing you should absolutely have tasks set up that match your Future Task Analysis. You've designed how people will use the new product; you should test to see how users actually fare. For Comparative testing I can see value in using either a Listening Lab method or a scenario-based method.

My question, though, is: when during a project is it suggested to do a Listening Lab, and how many users are enough?

I have an answer to the first question, but I don't see it answered in Mark's article. I do not have an answer to the second question. If you've been in this business for more than two days, you know it's an argument best left to researchers with more time and talent than me.

1 comment

  1. I posted the following on Steve Hoffman's site, and realized I should follow up here too.

    Two things I did wrong (just two? :) with my post.

    1. I didn't make it clear enough that I was speaking of validation testing during the design phase. Validation testing can take place, as you said, well after the product is up and running. Actually, you can do all the types of testing on a full production product, but in my post (and in the context of Mark's article) I meant during a design project.

    2. My question on how many users was somewhat rhetorical, and I didn't show that properly. Depending on how many usability issues you want to find, 15 participants are probably not a good enough statistical sample for a user population of 3 million. In practice, 8-15 is enough to catch the big problems (probably), but I think both that there is no real answer to this question (because, as with everything we do, it depends) and that the answer still needs to be pursued.

    Anyway, hope that clarifies my intent with my post.
