• Selling Web Budgets

    by Laurie | Sep 10, 2009

    Forrester recently identified “five mistakes to avoid when asking for Web customer experience funding,” certainly a good reminder.

    1. Letting people think the current site is “good enough”
    2. Being too vague about how they want to spend the money
    3. Assuming positive ROI will be enough to get a request approved
    4. Giving in to pressures to cut research and evaluation
    5. Giving a boring business case presentation

    Easier said than done, you might respond. So what are the five best ways to avoid these mistakes? Here’s my list, one per “mistake.”

    1)    So people are saying the site is good enough? Do a site walk-through for management and show exactly where and how bounces occur, where money is left on the table, and how your brand loses out to the competition over a single visit. Show competing sites where appropriate. As everyone around the table starts to fidget halfway through, point out that when users fidget, you may well lose them forever, even when your brand is a perfect fit.

    2)    The group or individual authorizing the budget wants to see a road map with a destination in sight, not a road to nowhere. Document the key functionality to be added. Even if look and feel will be key to improving the experience, you can identify the current “pain points” surfaced in #1 (even something as specific as “bright green makes the call to action difficult to read”) and then explain how each will be addressed. Draw straight lines, in writing, from the problem to the solution.

    3)    Dig out the numbers. What percentage of your all-channel contacts hit the Web site at some point? Do some primary research (it can be as simple as store associates asking customers at checkout) if you don’t know. What percentage of your transactions is partially attributable to the Web? It’s not just positive project-level ROI you’re looking for; it’s continued marketing ROI overall, and that won’t happen unless all the major channels are both optimized and talking with each other. In most business models, if your Web site doesn’t support the messaging in other channels, you’re not just wasting money maintaining the Web site, but also on all those e-mails, print ads, trade shows and the like. Remember, the Web has tremendous presence simply by being available 24/7. Bring that implication, and the huge dollar amount behind it, to the forefront.

    4)     The #1 constraint on research and competitive analysis budgets in Web design is a brand manager or marketing director who says, “We know our customers. We know our site. We don’t need to spend money finding out what we already know.”

    The best defense is letting this person speak, then quietly asking, “Did we gather what we think we know on the Web, or in our stores and call centers? Is your behavior when you send an angry e-mail the same when you are sitting in this room? For that matter, do you behave the same way with your mother-in-law as with your wife?” Then throw up your hands and say, “What we are asking for is the opportunity to understand how people are behaving and why, while they are communicating via the channel that we want to improve.”

    5)    What makes a business case boring? Four words: Nothing here for me. Make sure that’s not the case by first pinpointing what everyone, no matter what her silo, wants – stability and growth. While saying “if we can’t spend six figures on our site this year, we’ll go bankrupt” is hyperbole in some cases, you can note that the Web touches an increasing percentage of customers and prospects, and that audience expectations for seeing a personalized proposition have never been higher.

    Consistent, cross-channel positioning and messaging isn’t just an ideal – it’s becoming table stakes in the face of unprecedented competition for everyone’s dollar.  If you’re not getting your points across online, chances are that a competitor, or potential competitor, is doing just that.

    Sometimes I’ll say, “What people expect online comes down to the salesperson on the floor who asks, ‘Can I help you?’ and is prepared to give you personal attention. If your experience is more like one-size-fits-all, if it’s impersonal, unfriendly, stiff and unforgiving, wouldn’t you expect someone to leave that store and hit the one down the street? They do, every day. And on the Web, it’s a lot less trouble to ‘trade up.’”

    So hit hard on the reasons people are in their positions to start with, and point out that the greatest new products, highest quality service and premier staff are wasted if your message isn’t being heard, understood and acted on. Building out the Web channel robustly also offers advantages like a feedback mechanism, beta tests and so on…but at root, the reason to do it right is to offer an online hub that’s worth connecting to, interacting with and transacting with…and it’s a pretty short step from there to drawing the same conclusions about your brand.

  • When you’re not a pet rock: Six qualitative research sins, Part 3

    by Laurie | Jul 16, 2009

    A slightly different version of this article originally appeared in Quirk’s Research Review, May 2005, page 40.

    Part 3 in a 6-part series.

    Sin #3: It’s not a product; it’s a bundle of attributes.

    We could spend hours discussing how this assumption has constrained market insight for products where attributes are neither readily changed by the manufacturer nor independent (health care is an excellent example). “Which is more efficacious, drug A or drug B?” is a red herring in any setting. What qualitative can tell us is:

    Do perceived efficacy differences, if any, actually affect decision-making among drugs in this class? If so, when and why? If not, what does and how?

    Qualitative is no better a place than quantitative for the faulty assumption that all decision-makers are consciously trading off all attributes all the time. Nor is it a setting in which to “validate” attributes (domains and measures) and levels (threshold values) used to make decisions where the attributes are not universally salient and defined (two vs. three bedrooms is clear; a “crunchy” vs. “not crunchy” cereal less so). Eliciting the shortcuts used to decide between products whose attributes themselves are subjective calls for methodologies other than qualitative work, e.g., taste tests for the cereal or heuristic market research for pharmaceuticals.

  • When you’re not a pet rock: Six qualitative research sins, Part 2

    by Laurie | Apr 22, 2009

    A slightly different version of this article originally appeared in Quirk’s Research Review, May 2005, page 40.

    Part 2 in a 6-part series. 

    The second sin, or ‘Presto! Let there be quant.’
    Under the illusion of “representativeness” noted in my previous post, researchers may bring quantitative instruments into the qualitative setting and report the aggregate (or worse, subgroup) results as if they represented individual data points, thereby choosing a quicksand pit as a building site. Though elementary, my dear readers, if you interview 38 people in your “national” qualitative project, whether singly or in groups, whether they represent 38 metro areas or three, you do not have an n of 38 independent cases. Only respondents in a few areas had a non-zero chance of selection; there are more than 38 metro areas in the U.S.; three of your respondents may have signed up with the same research center as friends and so on.

    The misconception that qualitative findings should be cut-and-pasted into quant design rests on this faulty premise as well, but that’s another story.

    Qual must provide context that numbers can neither replace nor explain, or there’s no reason to do it. It’s reasonable to ask what someone would anticipate doing under certain circumstances, or how, if at all, participants would differentiate various stimuli. However, those answers are integrally connected to the “what, when, where, why, how” that presumably the rest of the interview has been about. Understanding this connection is the “beef” into which marketing can sink its teeth. If clients ask for quant instruments in exploratory settings, I politely explain why these could compromise our objectives, and then outline what the research will do.

    There’s nothing wrong with yes/no and structured or numeric questions as they might occur in real conversations. There is something wrong with aggregating the results as if they were the Harris Poll, or separating them from their context. This also argues against routine “head counts” for questions or forced differentiation. The information the client needs should be in the verbatims, not a show of hands. Just because we can force respondents to comment that layout A is very “green” doesn’t mean we learned anything.

    If we aren’t presenting stimuli that can evoke different reactions and preferences and allowing exploration as to why the responses are different, we have brought inadequate stimuli to the table; torturing the respondents all night won’t change that.

    As for the notion that using card sorts, rankings, ratings and such will “facilitate discussion”: in over 20 years of interviewing (and twice that as a conversationalist), I can’t recall ever needing a quantitative catalyst. Do you? Sometimes, perhaps, these tools are attempts to substitute for conversational skills or product category knowledge. But interviewers who look or act ill at ease should be given more prep and training, or replaced, not handed stacks of forms. Maybe good conversations aren’t as easy to sell (they sound too simple?) or even deliver. But the effort is well worth it.

    Besides wasting time, superimposing quant reroutes the discussion. Mid-conversation with a friend, do you ask, “How was your date with George? Here, do this attribute rating task so I can more fully understand your viewpoints”? When we try later to reconcile free-flowing conversation with eked-out data, we are no longer doing qual work, or anything else useful.

    In the next part of this article, I’ll explore the perils of using attributes and “trade-offs” in qualitative research.
