
The Case for Remote Testing


At FreshBooks, we just started doing formal usability testing this week. In the past, it's been done guerrilla-style, on an opportunistic basis.  As we commit to having the most outstanding usability in our industry, we needed to raise the bar on how we formalize testing within the company and start testing in a way that's both cost-effective and gets the best results possible.  We started with benchmark testing (commonly referred to as "baseline testing"), where we were looking for quantitative data about the usability of our product, such as success or failure percentages and timings (for example: how long does it take a typical user in a typical situation to set up our product?).
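To make those numbers concrete, here's a minimal sketch of how benchmark results like ours could be tabulated into success rates and median timings per task. The task names, participants, and timings below are entirely made-up illustrations, not our actual data:

```python
from statistics import median

# Hypothetical benchmark results: one record per participant per task.
sessions = [
    {"task": "setup", "participant": "P1", "success": True,  "seconds": 312},
    {"task": "setup", "participant": "P2", "success": True,  "seconds": 275},
    {"task": "setup", "participant": "P3", "success": False, "seconds": 600},
    {"task": "send_invoice", "participant": "P1", "success": True, "seconds": 145},
    {"task": "send_invoice", "participant": "P2", "success": True, "seconds": 180},
    {"task": "send_invoice", "participant": "P3", "success": True, "seconds": 162},
]

def summarize(records):
    """Group records by task and report success rate and median time on task."""
    by_task = {}
    for r in records:
        by_task.setdefault(r["task"], []).append(r)
    for task, rs in sorted(by_task.items()):
        rate = sum(r["success"] for r in rs) / len(rs)
        med = median(r["seconds"] for r in rs)
        print(f"{task}: {rate:.0%} success, median {med:.0f}s (n={len(rs)})")

summarize(sessions)
# send_invoice: 100% success, median 162s (n=3)
# setup: 67% success, median 312s (n=3)
```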

So we started down a path, not knowing exactly where we might end up, by comparing the results of usability tests conducted in person here in Toronto (on-site with our target customers, using a laptop) against remote usability testing using screen-sharing and Skype.  Here's how things compared…

Recruitment of Participants

Finding people to do a remote usability test was INCREDIBLY EASY.  We posted online in a few strategic places, and within minutes we had responses from people who actually matched our screener.  Finding the right sites to post your request for usability testers may take some thought, but if you find the right one, recruitment can be free and fast!

Meeting the Participants

For this test, we split our participants 50/50 between in-person and online meetings.  By far, the most reliable participants were the ones we met with online.  It was convenient for them and very low-risk, because they were participating in our tests from a place that was comfortable for them.  It makes so much more sense to test people where they actually do their work, because you're capturing more of the context that's such an important factor in understanding your users.  We met people from Russia, from Argentina, and everywhere in between.  All spoke great English and were forthcoming with their feedback.  Not once (knock on wood) did someone try to sneak into the test without being able to give us valuable feedback on our product – awesome!

Reviewing the Results

Like I said, we did it 50/50 remote vs. in-person, and the results were alarmingly similar.  There was no discernible difference in performance between participants who were online and those we visited in person.  We saw consistent success rates and consistent timings regardless of where the participant was located.

In Conclusion…

Next time, we'll switch to 100% remote and see what happens.  With all the time we save on recruitment, and our great experience this time with remote participants, I think it's worthwhile to continue down the remote track until we at least reach its limitations.  Now, I'll admit it was perfectly suited to our current needs (a baseline/benchmark test) and may not suit everyone's, especially if you're looking for qualitative feedback or need to eliminate external factors that a lab environment would control for.

Give it a shot.  It worked for us, and it may make your life a metric ton easier as well…


