Wednesday, December 20, 2006

Flog Alert Repository?; Company Training About Ethics Violations

Does anyone know of a central place where people are keeping a list of all the companies that have engaged in stealth marketing (including flogs, or fake blogs)? The latest flog (now over a week old) appears to be Sony's campaign for their PSP: alliwantforxmasisapsp.com (see the links below for stories covering this development; of course, this is not the first time for Sony, as they messed around with stealth marketing with the Sony Beta 7 as well).

Sony, or at least one part of Sony, is a member of the Word of Mouth Marketing Association (for which I'm an advisory board member), so I would hope that at least some representatives from the company are learning about how social media works and the ethics involved, so that no part of Sony continues to use stealth tactics.

The viral marketing firm that created the flog for Sony is Zipatoni (not a WOMMA member). According to a MediaPost article, a person claiming to be a Zipatoni representative contends that they advised Sony there might be a backlash, but the client went ahead with it anyway (this information should be treated with caution, though, because none of it has been confirmed).

Anyway, this story, the Edelman fake blog scandal with Wal-Mart, and incidents at many other companies point to the importance of the education work that needs to be done within corporations about social media and ethics. At the latest WOMMA Summit in D.C., Rick Murray from Edelman gave an update on how his company has implemented a mandatory training program for all its employees. Further, they set up a 24/7 hotline where anyone around the world can call in with questions to make sure they are behaving in an ethically appropriate way. They also engaged in a review of all their existing programs to ensure that there are no further violations of the WOMMA Ethics Code (at the time of the presentation they were 98% done). Read this post for more details about what Edelman is doing in response.

Other links about the Sony fake blog for PSP:

Sony 'fesses to fake blog. Still gets it wrong. -- Michael O'Connor Clarke

Oh Sony, What Were You Thinking? -- David Binkowski

Sony pays PR firm to lie about wanting a PSP for Christmas -- Videogamesblogger.com

Sony's holiday marketing campaign sniffed out -- Engadget

Sony Confesses To Creating 'Flog,' Shutters Comments -- MediaPost article

And be sure to check out the following link for a YouTube video created by gamers essentially saying that Sony is stupid for thinking that gamers are stupid and wouldn't figure out the flog.

Thanks to Constantin Basturea for some of the links above.


Tuesday, December 12, 2006

FTC Response on Word of Mouth Marketing Regarding Disclosure

I'm blogging today from the Word of Mouth Marketing Association Summit in Washington, D.C. We just heard a presentation from Mary Engle, Associate Director for Advertising Practices at the Federal Trade Commission. She discussed the FTC's response to a complaint received from Commercial Alert (see background on the complaint and FTC consideration here). She summarized their complaint as follows:

Commercial Alert states that it is deceptive for marketers to pay consumers to engage in buzz without disclosure of the monetary exchange. They sought investigation of "buzz marketing" practices and asked the FTC to issue guidelines and bring cases. (NOTE: Commercial Alert should be calling this "shilling" or "stealth marketing" rather than calling this buzz marketing).

Here's the quick summary, with more details below:

- when payment is made to a consumer, that payment, by law, needs to be disclosed;
- marketers do not need to get parental permission for teens 13-18, but do need permission if the kid is under 13 (consistent with COPPA);
- non-monetary compensation (such as free samples, reward points, swag, etc.) does not need to be disclosed by law, but the FTC noted that the WOMMA ethics code requires disclosure regardless of the form of compensation.

Here are the details:

The FTC declined the request to issue specific guidelines for WOM marketing, arguing that case-by-case investigation and enforcement is adequate. However, they did issue an official response later stating that the FTC's Endorsement & Testimonial Guides are applicable to WOM marketing. The FTC states that paid WOM advocacy fits the following definition of endorsement:
"An endorsement is any advertising message that consumers believe represents the opinions, beliefs, experience, etc. of a person other than the sponsoring advertiser" (Slide 8)
The Endorsement Guides require disclosure of the relationship between a seller and endorser "that might materially affect the weight or credibility of the endorsement" (Slide 9). They define a material connection as "one that isn't reasonably expected by the audience" (Slide 9). They also provide the following examples of these relationships: 1) seller is paying endorser, 2) endorser is related to seller, and 3) endorser is business associate of seller.

Their reasoning is that consumers wouldn't normally expect that someone has been paid to talk to them about a product. Further, they suggest that consumers may give more weight to Person A's views than to Person B's views if they know that Person A is independent from a seller while Person B is getting paid. Therefore, the reasoning goes, "Under the FTC Endorsement Guides, financial tie between the seller and paid agent should be disclosed."

Ms. Engle's presentation also addressed whether disclosure is still required if the WOM program participant isn't paid. The FTC argues that it depends on whether consumers would give more weight to an endorsement if payment was or wasn't involved. It also notes that WOMMA's ethical guidelines call for disclosure even when there isn't payment. (For a research study about the potential business benefits of disclosure and guidelines for companies, please read my "To Tell Or Not To Tell?" report).

The Commercial Alert complaint also expressed concern about children's involvement in WOM marketing programs. The same disclosure requirements apply in these cases. But what about parental consent? If a marketer solicits the participation of children under 13, then the marketer needs to comply with COPPA (the Children's Online Privacy Protection Act), which means parental consent is required. But outside the scope of COPPA, the FTC doesn't enforce any other law that requires parental approval.

Download Mary Engle's Presentation from the FTC
(opens into PDF file)
Commercial Alert's Reaction (Dec 11, Dec 12)
WOMMA's Reaction
Download To Tell Or Not To Tell? Research Report (link to download page)

Disclosure: Advisory Board Member of WOMMA


Sunday, December 10, 2006

What Some Leading Academic Researchers on Word of Mouth Marketing Are Up To

On Monday, I'll be giving the "State of Word of Mouth Research and Measurement" talk at WOMMA's Research Symposium. I'll be reviewing some of the latest academic research that affects the WOM marketing industry and offering a modest proposal of research priorities for the upcoming year.

You can find a copy of my presentation handouts here.

As part of the research for my presentation I interviewed some of WOMMA's academic advisory board members, some of the leading researchers working on WOM. I asked them to give me a description of their current projects. Some of their projects have been published already while others are in the earlier stages as working papers.

Because I won't have time to discuss all of this in my presentation Monday morning, I am attaching this PDF file as a supplement to my talk. To learn more about the WOMMA Advisory Board members please visit the Advisors page on WOMMA's website. And to learn more about other industry and academic research studies please download the WOM Bibliography Project from my download page.

Please note that this is a very selective and partial list, so it's not meant to exclude anyone. Anyone else doing interesting work in this area should feel free to contribute an update in the comments section of this post.

Also be sure to check out the latest research book published by WOMMA: Measuring Word of Mouth, Volume 2. (Disclosure: I served as the workgroup leader editing this volume).

Download presentation supplement


Tuesday, December 05, 2006

WOMBP: December 2006 Update

The latest version of the WOM Marketing Communication Bibliography Project (WOMBP) is now uploaded. You can access it at my download page.

Here's the background of the project and details of the contributors.

Below are new entries in this version (these aren't necessarily new studies, they just weren't included in the last update):

Anderson, E. W., C. Fornell, et al. (2004). "Customer Satisfaction and Shareholder Value." Journal of Marketing 68: 172-185.

Chandon, P., V. G. Morwitz, et al. (2005). "Do Intentions Really Predict Behavior? Self-Generated Validity Effects in Survey Research." Journal of Marketing 69: 1-14.

Dawar, N., P. M. Parker, et al. (1996). "A Cross-Cultural Study of Interpersonal Information Exchange." Journal of International Business Studies 27(3): 36.

Dholakia, U. M. and V. G. Morwitz (2002). "The Scope and Persistence of Mere-Measurement Effects: Evidence from a Field Study of Customer Satisfaction Measurement." Journal of Consumer Research 29(2).

English, B. (2000). "Pitching her tent "word of mouth", plus author Anita Diamant's promotional moxie, make for success." Boston Globe February 24, 2000: F1.

Helm, S. (2003). "Calculating the value of customers' referrals." Managing Service Quality 13(2): 124-133.

InfoMag (2006). "Surging Interest." InfoMag.

Kiecker, P. and D. Cowles (2003). "Interpersonal Communication and Personal Influence on the Internet: A Framework for Examining Online Word-of-Mouth." Journal of Euromarketing 11(2): 71-88.

Mittal, V. and W. A. Kamakura (2001). "Satisfaction, Repurchase Intent, and Repurchase Behavior: Investigating the Moderating Effect of Customer Characteristics." Journal of Marketing Research 38(1).

Morgan, N. A. and L. L. Rego (2006). "The Value of Different Customer Satisfaction and Loyalty Metrics in Predicting Business Performance." Marketing Science 25(5): 426-439.

Nicks, S. (2006). "What Not to Do With Net Promoter." Business Week Online August 1, 2006.

Oliver, R. L. (1999). "Whence Consumer Loyalty?" Journal of Marketing 63(Special Issue): 33-44.

Stokes, D. and W. Lomax (2001). "Taking Control of Word-of-Mouth Marketing: The Case of an Entrepreneurial Hotelier." Kingston Business School 44: 1-18.

Story, L. (2006). "What We Talk About When We Talk About Brands." The New York Times November 24, 2006.

Systems, S. (2004). "Growing your business with Net Promoter." White Paper: 1-11.

TARP "Measuring the Grapevine-Consumer Response and Word-of-Mouth." TARP White Paper: 1-25.

Vox, M. (2006). "Moms: WOM's Good, So's Online." Marketing VOX Online.

Wangenheim, F. V. and T. Bayon (2003). "The effect of word of mouth on services switching; Measurement and moderating variables." European Journal of Marketing 38(9/10): 1173-1195.

Yu, L. (2005). "How Companies Turn Buzz Into Sales." MIT Sloan Management Review Winter 2005.


Monday, December 04, 2006

"One Question, and Plenty of Debate": More Scrutiny of the Net Promoter Score

Readers of my blog know I've been following the recent academic research seeking to establish a relationship between likelihood to recommend metrics and business performance measures. In these articles the Net Promoter Score (NPS) has come under intense scrutiny.

The first study was the Morgan & Rego study published in Marketing Science (see the series of posts beginning with this one for a summary). This is a very interesting study to read, but a significant limitation was that it did not calculate the NPS as the creators (Fred Reichheld, Bain, and Satmetrix) do; thus, it wasn't an apples-to-apples comparison. This same limitation was noted by other researchers as well (see this post for details), and a paper has been submitted to correct and clarify this point (Keiningham, Aksoy, Cooil, and Andreassen, 2006; see Footnote 1 below for the citation). Interestingly, this corrected study was also referenced on Fred Reichheld's Net Promoter blog.

But now it's Monday, and Monday's the day for big news to come out, so here we go... In today's Wall Street Journal Scott Thurm reports in his article "One Question, and Plenty of Debate" (subscription required) about a new study that calls into question the relationship between the NPS and business performance. The study was written by the same authors who penned the correction and clarification article in Marketing Science (Keiningham et al.) and it will be published next year in the Journal of Marketing, another very well-respected marketing journal.

According to the WSJ article, this new study was unable to replicate the claims made by Mr. Reichheld, Bain, and Satmetrix that "net promoter scores are better indicators of revenue growth than other customer-satisfaction measures." Further, for two of the industries (airlines and personal computers) cited in Mr. Reichheld's book (The Ultimate Question), the researchers found that a different customer-satisfaction measure, rather than NPS, better explains revenue growth.

Mr. Reichheld is quoted in the article making the point that too much is made of the correlation between NPS and revenue growth and that focusing on this correlation is missing the "forest for the trees." Instead, he states that NPS is effective "because it forces top executives, and other managers, to focus on creating happy customers" (this quote was paraphrased by the journalist in the article).

The WSJ article goes on to say that this is a consequential issue because many executives are making managerial decisions based on the NPS metric. Critics say they "are trying to protect executives racing to adopt the net-promoter metric," expressing concern that "the corporate boardroom is probably misinterpreting the importance of this" (the latter quotation is from V. Kumar, a marketing professor at the University of Connecticut School of Business, who will also be publishing a study soon that questions the link between NPS and business performance).

The new study by Keiningham et al., entitled "A Longitudinal Examination of 'Net Promoter' on Firm Revenue Growth," is currently embargoed (except for citation in academic journals) until its publication in 2007. However, through correspondence with the first author, I have learned that interested readers can download an executive summary here. The executive summary provides an overview and gives sufficient detail to illustrate that the methodology used was much closer to the methodology used by Mr. Reichheld, Bain, and Satmetrix.

I'll have more to say on the executive summary in a future blog post as well as what this latest research finding means for companies and the WOM industry. I'll also be referencing this debate on the NPS, among many other metrics and research traditions, in my "State of Word of Mouth Research and Measurement" presentation at the WOMMA Research Symposium on Dec. 11th in Washington, D.C.

Disclosure: I was interviewed for this Wall Street Journal article and I am on the Advisory Board for the Word of Mouth Marketing Association.

1. Keiningham, T., Aksoy, L., Cooil, B., & Andreassen, T. W. (2006). Net Promoter, Recommendations, and Business Performance: A Clarification and Correction on Morgan and Rego. Manuscript submitted for publication.


Monday, November 13, 2006

Does the Morgan & Rego Study in Marketing Science Undermine the Net Promoter Score Metric?

Since my series of blog posts about the Morgan & Rego article in Marketing Science that tested the value of different customer feedback metrics on predicting business performance, I've been asked if this study undermines the Net Promoter Score (NPS) as a useful metric for companies.

First, let me clarify that the following response is not meant to provide an endorsement or detraction regarding the use of the Net Promoter Score metric. Rather, it's to look at the available evidence from the Morgan & Rego study and assess the implications of that study.

OK, with that aside, the short answer is that we don't have sufficient data to come to a conclusion on the matter. As I noted in my series (here, here, and here) the main reason for this is that the Morgan & Rego study does not actually measure the Net Promoter Score in the same way that Reichheld, Satmetrix, and Bain & Co. do, so it's not fair to make a comparison. (This post explains the difference between how the Reichheld/Satmetrix/Bain "Net Promoter Score" is calculated and how the Morgan & Rego "net promoter" metric was calculated for the study).

Other researchers have also noted this disparity. Specifically, I recently learned that a manuscript has been submitted for publication (Keiningham et al., 2006)* whose goal is to correct and clarify the Morgan & Rego study on this point. In that manuscript they explain why the difference in calculation may actually make a difference in the conclusions that Morgan & Rego come to. Specifically, these authors contend that Morgan and Rego appear to have significantly misunderstood the data fields from which they calculated Net Promoter and Number of Recommendations. As a result, Net Promoter and Number of Recommendations were not actually examined. Therefore, conclusions regarding the effectiveness of the Net Promoter metric advocated by Reichheld on business performance cannot be accurately made from the Morgan & Rego study. Because the Keiningham et al. manuscript is under review I cannot provide more detail publicly at this time, but stay tuned!

For more details about what I see as the implications of the Morgan & Rego study on companies invested in understanding the importance of word-of-mouth marketing please read my last post.

* Keiningham, T., Aksoy, L., Cooil, B., & Andreassen, T. W. (2006). Net Promoter, Recommendations, and Business Performance: A Clarification and Correction on Morgan and Rego. Manuscript submitted for publication. (Thank you to these authors for allowing me to cite their paper in this blog post.)



Sunday, November 05, 2006

What Are the Implications of the Morgan and Rego Study for Companies Focusing on Word of Mouth Marketing?

Over the weekend I posted a series of entries summarizing an important article that tested the value of various customer feedback metrics to understand how they relate to key indicators of business performance. I reviewed what the authors saw as the implications of this study and now I want to offer some of my thoughts as well.

Let’s start with the use of the Net Promoter Score. If Morgan & Rego had computed their “net promoter” metric in the same way as the Bain/Satmetrix “Net Promoter Score” and then found the same results, this study would be a very big deal, because a compelling reason companies have adopted the NPS metric is its correlation with revenue growth. We’ll have to await future studies from these or other authors that make an apples-to-apples comparison.

I haven't heard a response from Fred Reichheld or Bain/Satmetrix, but I imagine their response might be similar to a recent blog post Fred Reichheld made on his blog.

In response to other criticisms of the Net Promoter Score (besides Morgan and Rego), Fred Reichheld makes two points in the metric’s defense. First, he argues that NPS was never based on statistical correlations but instead based on the relationship between customers’ survey scores (their likelihood to recommend to others) and their subsequent behavior. He states that “People who rate a company higher on the NPS scale buy more and refer more friends to the company than people who rate it lower” (though Morgan & Rego would also dispute this; see p. 437). Reichheld claims that the statistical correlations are “not the foundation of the NPS theory, merely supporting evidence.” According to this reasoning, then, the Morgan & Rego finding showing a non-significant correlation for the net promoter metric and business performance is far less damaging.

The second defense that Reichheld offers is that industry definition is extremely important. He says that you can’t just look for correlations between NPS and business performance in the “online retailing” industry because this is too broad (for example, Home Depot and Victoria’s Secret are both in retailing, and LandsEnd.com and Brookstone.com are both online retailers, but neither of these pairs is really in the same business).

Reichheld’s bottom line is this: “In testing the relevance of NPS to your business, avoid starting with correlations. Instead, begin with real behaviors of individual customers over time. Then, when you examine the correlations of NPS and growth rates, focus your analysis on your true competitors.” This point about industry specificity is crucial since one of the limitations that Morgan & Rego identify is that they can generalize across industries but cannot account for differences within or between industries (see p. 437).

From my perspective, even if companies find value in using the NPS, the prescription that it’s the “only” number a company needs to grow is misleading (many others have noted this as well). For example, if a company has a low NPS score it’s important to understand *why* the number is low so that the company can improve. Thus additional research to determine why people talk positively or negatively is still necessary. Additionally, companies need to focus on other business indicators besides just revenue growth (therefore Morgan & Rego’s critique that revenue growth is not the only indicator of business performance is still valid).

So, should a company that’s currently using the Net Promoter Score stop using it based on the findings from the Morgan & Rego study? For many companies, it seems that the NPS has helped them focus on creating a better product or service experience for the customer. It has also directed people’s attention to the importance of customer word of mouth.

If a company is not currently using the Net Promoter Score, should they start using it? My best advice here is to let the power of WOM loose and seek out those companies who have used NPS to find out about their experiences in their particular industry. Let the power of a peer recommendation work its magic.

OK, so setting aside the NPS, Morgan and Rego’s study is also relevant to the WOM marketing industry because it confirms the value of ensuring customers have satisfactory experiences such that they don’t generate negative WOM or complaints. Additionally, companies need to focus on customer complaints, use these as a source of consumer insight into what makes them satisfied, and also better handle the complaints to offset the deleterious effects of negative WOM.

I’d love to learn what readers think. What do you think are the implications of the Morgan & Rego study for the WOM marketing industry?


Friday, November 03, 2006

Morgan & Rego Study: Limitations & Avenues for Future Research

Limitations

First, the findings are not generalizable to smaller firms or B2B businesses since these weren’t included in the study.

Second, it's difficult to control for differences between industries, and since there may be differences in the preconditions leading to loyalty in each industry, this limitation may affect the utility of satisfaction and loyalty metrics.

Third, all customers were treated equally in the analysis. There's no data on which consumers are most relevant to a firm's success (for example, where should a firm spend the majority of its resources: on its best customers, on the largest number of customers, or on those who complain the most?).

Fourth, the study was limited to customer feedback mechanisms that are easy for employees and managers to comprehend. Further, the relationship between these metrics and a firm's performance was modeled as linear; non-linear relationships and interaction effects among the customer feedback variables might be more useful. NOTE: To understand how non-linear and asymmetrical relationships might be relevant, read this article by Anderson & Mittal (2000) [opens into PDF].

ALSO NOTE: As mentioned before, the way the net promoter score was calculated in this study is different from the Bain/Satmetrix Net Promoter Score. Stay tuned, however, for future studies from these authors that use the same terminology and method.

Avenues for Future Research

First, maybe the reason that satisfaction is more related to firm performance is that it costs more for the firm to generate positive WOM recommendations than for customers to simply be satisfied.

Second, promoters don't seem to buy more nor does their influence on potential new prospects seem to be as strong as people believe. Why? Maybe the process of getting customers to engage in positive WOM paradoxically increases their involvement in the category and their desire to seek out the variety offered by other brands for future purchases. Or maybe people who engage in positive WOM are more likely to be opinion leaders who find utility in seeking out variety in brands and companies. Thus, more research needs to be done about the impact of recommendation behaviors on not only the behaviors of others, but also the behaviors of the person doing the recommending. Also, the authors wonder if more active repurchase behaviors that indicate loyalty, like share of wallet, are better predictors of financial performance than the self-reported attitudinal indicators of loyalty which tend to be more passive.

Third, some of the correlational findings between recommendation and satisfaction measures seem to contradict service-profit chain logic (opens into PDF). There was a significant positive correlation between the number of recommendations and the proportion of customers complaining. The authors wonder to what extent WOM behaviors, both positive and negative, are driven by consumer characteristics versus the firm’s marketing actions.

So, in conclusion, then, the authors maintain that their results show the value of customer feedback metrics and their ability to predict business performance of the firm. Further, they argue that the best feedback metrics are average customer satisfaction, Top 2 Box customer satisfaction scores, proportion of customers complaining, and the repurchase likelihood loyalty metric. The authors failed to find support for the predictive value of loyalty metrics based on data from recommendation behaviors, net promoters, and the number of recommendations.

This is the end of the Morgan & Rego article summary series. Here are some additional thoughts on the implications of this study.


Morgan & Rego Study: Discussion & Implications

Discussion & Implications

Why is the study important? Here are the factors that the researchers identify. First, the authors link customer feedback measures with previously unexplored measures of financial performance, like Total Shareholder Return and Sales Growth. Their findings that Average Customer Satisfaction Scores and Top 2 Box Customer Satisfaction Scores actually do predict sales growth directly counter Reichheld’s claim that they do not. Their findings also counter previous findings that found no relationship between certain variables (for example, average customer satisfaction and gross margins were not found to be significantly related in past research but were found to be so in this research). There is also a positive relationship between customer satisfaction and market share. All of this is to say that a firm’s ability to satisfy its customers has an important impact on that firm’s business performance.

Second, this study shows the impact of customer complaining behavior on business performance. Previous research suggested that firms should try to actually increase the number of customer complaints so that the concerns of these “at-risk” customers can be better addressed. While there was one positive relationship between complaining behavior and market share (that is, more complaining behavior, more market share), the results of this study suggest that these complaints have not been adequately “heard” by the company, or, if they have been “heard,” that the firm’s attempts to address the concerns have not counteracted the negative effects of the customers’ complaining behavior on subsequent business performance. NOTE: The positive relationship between market share and complaining behavior is not surprising given Robert East’s work suggesting that companies with higher market share tend to have both more positive AND negative WOM.

Related to this point about complaining behavior, the authors note that existing research by TARP (1986) suggests that customer complaints aren't a good indicator of customer satisfaction, but the results of this study suggest that monitoring customer complaints provides insight into customer satisfaction and is valuable for predicting future business performance (the evidence for this is that the customer complaining variable and the other two satisfaction metrics were correlated with one another, and further, there were similar patterns in the regressions across all three satisfaction measures).

Third, the study sheds new light on the relationship between customer loyalty and business performance. First, this study found that repurchase likelihood is related to a firm’s business performance which increases confidence in the current managerial practice of paying attention to repurchase likelihood. Second, the study also sheds light on the significance of positive consumer recommendations. In short, the authors argue that focusing on likelihood to recommend is not as useful as focusing on repurchase likelihood. (Other authors argue that you need to focus on recommendation likelihood because repurchase likelihood gets confused by inertia, indifference, or exit barriers that the company puts in the way to make it harder for customers to switch brands [see Reichheld, 2003, p. 48]).

The good news is that customer feedback systems can help a firm implement planning and control measures, since there are metrics that help a company predict business performance. Which ones to use, though, then becomes the issue. This study finds that the three customer satisfaction metrics and the repurchase likelihood metric (loyalty) are the best ones. The average number of recommendations seems only to have a positive impact on future market share and a negative impact on future gross margins. The authors contend that the net promoter metric seems to have no predictive value at all. The data from this study suggest that increasing the number of promoters will not help the company’s business performance. Rather than focusing just on the net promoter score, companies should focus on a “scorecard” method that includes the following four metrics: average customer satisfaction, Top 2 Box satisfaction, proportion of customers complaining, and repurchase intent.

OK, now we'll discuss the limitations of this study, future avenues for research, and the researchers' conclusions.

---
Reichheld, Frederick E. 2003. The one number you need to grow. Harvard Business Review (December) 46–54.

TARP. 1986. Consumer Complaint Handling in America: An Update Study. Technical Assistance Research Programs, White House Office of Consumer Affairs, Washington, D.C.

*** Information in this post adapted from Morgan, N. & Rego, L. Marketing Science, Vol. 25, No. 5, September–October 2006, pp. 426–439.


Morgan & Rego Study: Results

Results

The researchers found that firm and industry characteristics did have an effect on financial performance (so it was good that the authors controlled for them!).

Above and beyond these characteristics, then, the six customer feedback measures also explained additional variance, from a low of 1% (for Total Shareholder Return) to a high of 16% (for market share).
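To make the "above and beyond" idea concrete, here is a minimal sketch, assuming hypothetical variable names, of how incremental variance explained is typically computed: fit a baseline regression containing only the firm- and industry-level controls, then add a single customer feedback metric and compare the R-squared values. This illustrates the general approach only; it is not the authors' code or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

def incremental_r2(df: pd.DataFrame, outcome: str, metric: str) -> float:
    """Variance in `outcome` explained by `metric` above and beyond the controls."""
    # Hypothetical firm- and industry-level control variables (illustrative names only)
    controls = "segments + adv_intensity + rd_intensity + assets + hhi + demand_growth"
    base = smf.ols(f"{outcome} ~ {controls}", data=df).fit()
    full = smf.ols(f"{outcome} ~ {controls} + {metric}", data=df).fit()
    return full.rsquared - base.rsquared

# Example with a hypothetical data frame of firm-year observations:
# incremental_r2(firm_years, "market_share", "avg_satisfaction")
```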

Below is how well each of the six customer satisfaction and loyalty metrics explained the variance in the six business performance measures. The customer feedback metrics are listed in order of their predictive power.

- Average Customer Satisfaction Score. This metric was statistically significant across all six business performance measures. It explained from 5% (net operating cash flow) to 16% (market share) of the variance in these performance measures.

- Top 2 Box Customer Satisfaction Score. This was statistically significant across 5 of the 6 performance measures and approached significance on the 6th one as well (Total Shareholder Return). It explained from 5% (net operating cash flow) to 16% (market share) of the variance in these performance measures.

- Proportion of Customers Complaining. This was statistically significant across 4 of the 6 performance measures (except Total Shareholder Return and Net Operating Cash Flow). It explained 4% (TSR) to 13% (market share).

- Repurchase Likelihood. This was statistically significant across 4 of the 6 performance measures (except Total Shareholder Return and Net Operating Cash Flow). It explained 4% (TSR) to 15% (market share).

- Number of Recommendations. This was statistically significant only across 2 of the 6 performance measures (Gross Margin & Profit Share). But the gross margin coefficient is negative! It explained 1% (TSR) to 12% (Tobin’s Q).

- Net Promoters. This was not statistically significant across any of the 6 performance measures… none! It explained from 2% (market share) to 12% (sales growth) of the variance in these performance measures.

OK, let's move to the discussion and implications of these results.

*** Information in this post adapted from Morgan, N. & Rego, L. Marketing Science, Vol. 25, No. 5, September–October 2006, pp. 426–439.


Morgan & Rego Study: Purpose and Methods for Study

OK, what follows are excerpts of my notes from reading this article.

The content was adapted from the article, and I generally indicate with quotation marks where I've pulled direct quotes, but because these are informal notes, I have often cut and pasted from the article.

Purpose of Study

To determine which of the available customer feedback systems currently used by practicing managers (based on measures of customer satisfaction and loyalty) best predict a company's financial performance.

Method

Data Set

The authors took data from the American Customer Satisfaction Index (ACSI; provided by the National Quality Research Center at the University of Michigan), which closely matches the satisfaction and loyalty data that companies would have available in their own customer feedback mechanisms. Since 1994, the ACSI has collected data annually from 50,000 consumers about 200 of the Fortune 500 companies across 40 different industries to measure consumer evaluations of these companies’ products and services. Utility companies were removed from the analysis because of their monopoly position, as were private companies, because their financial data was not available. 80 different companies were represented across 7 years (1994-2000).

Financial Performance Measures

There were 6 measures of financial performance:

- Tobin's Q. Compares a firm’s market value to the replacement cost of its assets; forward-looking financial market measure

- Net Operating Cash Flow. Measures a firm’s ability to generate cash; historical accrual accounting info-based measure

- Total Shareholder Returns. Measures firm’s ability to deliver value to shareholders by increasing price of firm’s stock and distributing dividends; forward-looking financial market measure

- Sales Growth. Measures increase/decrease in sales revenue; customer market-based measure

- Gross Margin. Ratio of gross profit to sales revenue; shorter-term

- Market Share. Percentage of sales a firm has relative to entire industry sales; customer market-based measure (a rough sketch of a few of these ratios appears after this list)
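As a quick illustration of the definitions above, here is a minimal sketch of a few of these ratios written as simple functions; the inputs are hypothetical figures, not the authors' COMPUSTAT/CRSP data.

```python
def tobins_q(market_value, replacement_cost_of_assets):
    """Tobin's Q: a firm's market value relative to the replacement cost of its assets."""
    return market_value / replacement_cost_of_assets

def gross_margin(gross_profit, sales_revenue):
    """Gross margin: ratio of gross profit to sales revenue."""
    return gross_profit / sales_revenue

def sales_growth(sales_this_period, sales_last_period):
    """Increase (or decrease) in sales revenue relative to the prior period."""
    return (sales_this_period - sales_last_period) / sales_last_period

def market_share(firm_sales, industry_sales):
    """A firm's sales as a percentage of total industry sales."""
    return 100 * firm_sales / industry_sales
```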
Customer Feedback Measures

There were 6 measures of customer feedback:
- Average Customer Satisfaction Score (Satisfaction Measure). This is the mean score on the three specific indicators used to estimate the ACSI latent satisfaction index. The three measures are 1) overall satisfaction, 2) expectancy disconfirmation, and 3) performance versus their ideal product or service in the category.

- Top 2 Box Customer Satisfaction Score (Satisfaction Measure). This refers to the two highest-scoring points on the five-point scale that firms typically use to capture customer satisfaction. Because the ACSI uses 10-point satisfaction scales, this metric was operationalized as the proportion of customers surveyed that rated the firm in the top 4 points on the 10-point single-item “overall satisfaction” ACSI scale.

- Proportion of Customers Complaining (Satisfaction Measure). Number of consumers of a firm who voice dissatisfaction with the product or service versus those who do not. This was calculated using the ACSI “voice” variable that comprises two items that ask if the consumer has either formally (as in writing or by phone to the manufacturer or service provider) or informally (as to others) complained about the product or service.

- Net Promoters (Identified as Loyalty-based Measure; could also measure advocacy). Percentage of a firm’s customers who make positive recommendations of the company brand to others minus those who do not (NOTE: this differs from Reichheld’s definition since his refers to a likelihood to make a recommendation).

The net promoter score in this study utilized ACSI data concerning consumer responses to the questions “Have you discussed your experiences with [brand or company x] with anyone?” and “Have you formally or informally complained about your experiences with [brand or company x]?” The first question captures both positive and negative recommendations, while the second captures negative recommendations. Net promoters were computed as the number of a firm’s surveyed customers who reported discussing their consumption experiences minus the number who reported formally or informally complaining, expressed as a proportion of the total number of the firm’s surveyed customers (a rough sketch of these firm-level computations appears after this list).

- Repurchase Likelihood (Loyalty-based Measure). Customer’s stated probability of purchasing from the same product or service provider in the future. This was taken from the ACSI that asks consumers to rate “How likely are you to repurchase this brand/company?”

- Number of Recommendations (Loyalty-based Measure; could also measure advocacy). This refers to the number of people to whom consumers of a firm’s product or service who engaged in positive word-of-mouth (WOM) behavior (as captured in the authors' net promoters variable) report having recommended the brand or company. The ACSI question asks: “With how many people have you discussed [brand or company x]?” The authors averaged this metric at the firm level, so it represents the average number of people to whom the surveyed customers of a firm who engaged in positive WOM have recommended the brand or company.
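Here is a minimal, hypothetical sketch of how a few of the ACSI-based feedback metrics described above could be computed at the firm level from individual survey responses. The variable names and cutoffs follow the descriptions above, but this is an illustration only, not the authors' actual code or data fields.

```python
def top_2_box(overall_satisfaction_10pt):
    """Proportion of respondents rating the firm in the top 4 points of the
    10-point single-item overall-satisfaction scale (the paper's Top 2 Box proxy)."""
    return sum(1 for s in overall_satisfaction_10pt if s >= 7) / len(overall_satisfaction_10pt)

def proportion_complaining(voiced_complaint):
    """Share of surveyed customers who formally or informally complained.
    `voiced_complaint` is a list of booleans, one per respondent."""
    return sum(voiced_complaint) / len(voiced_complaint)

def net_promoters_study_version(discussed, complained):
    """Morgan & Rego's 'net promoter' metric as described above: customers who
    discussed the brand minus customers who complained, as a proportion of all
    surveyed customers. `discussed` and `complained` are parallel boolean lists."""
    n = len(discussed)
    return (sum(discussed) - sum(complained)) / n
```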
Co-Variates and Other Industry Characteristics

To account for the effects of other factors, the authors identified a number of firm-level and industry-level co-variates: at the firm level, the number of business segments the firm competes in, the intensity of advertising and R&D expenses, and the size of assets; at the industry level, the Hirschman-Herfindahl index (HHI; because market structure can affect financial performance) and demand growth.

To control for other industry characteristics, “we included three dummy variables in our analyses: ACSI sector definitions to identify service-focused versus physical goods-focused firms, annual reports to identify firms that market direct to their end-user consumers versus those using intermediaries, and the ACSI survey data collection protocol to indicate firms that face longer versus shorter interpurchase cycles.”

OK, so basically the authors are going to take the six customer feedback metrics (3 satisfaction and 3 loyalty metrics) and see which of these best predict the six business performance measures.

We should note at this point that the "net promoter" metric in this study is not calculated in the same way as the Net Promoter Score, which is calculated by subtracting the number of detractors (those scoring 0-6 on the 0-10 scale indicating how likely a person is to recommend a company or brand to others) from the number of promoters (those scoring 9-10), disregarding the passive responses (7-8).
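For contrast, here is a minimal sketch of the Reichheld/Bain/Satmetrix calculation, assuming a list of 0-10 likelihood-to-recommend scores; again, this is illustrative only.

```python
def reichheld_nps(likelihood_scores):
    """Net Promoter Score: percentage of promoters (scores of 9-10) minus percentage
    of detractors (scores of 0-6); passives (7-8) are ignored."""
    n = len(likelihood_scores)
    promoters = sum(1 for s in likelihood_scores if s >= 9)
    detractors = sum(1 for s in likelihood_scores if s <= 6)
    return 100 * (promoters - detractors) / n

# Example: reichheld_nps([10, 9, 8, 6, 3]) -> 100 * (2 - 2) / 5 = 0.0
```

The key difference from the study's metric is that the NPS is built from a stated likelihood to recommend, whereas Morgan & Rego's proxy is built from reported discussing and complaining behavior.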


Next, the results!

*** Information in this post adapted from Morgan, N. & Rego, L. Marketing Science, Vol. 25, No. 5, September–October 2006, pp. 426–439.


Very Important Study! -- The Value of Different Customer Satisfaction and Loyalty Metrics in Predicting Business Performance

OK, this is a big one.

There’s a very important study for those interested in WOM marketing that has been published in Marketing Science, a well-respected marketing journal, about which customer satisfaction and loyalty metrics best predict a firm’s business performance. It’s an even bigger study for those invested in the use of the Net Promoter Score.

Much has been made of the Net Promoter Score (NPS) as a powerful metric for companies to get a handle on the WOM activity of consumers. According to the proponents of the NPS, asking one simple question (how likely a person is to recommend a company or brand to their friends or colleagues) will allow you to effectively monitor a firm's performance (e.g., a firm's level of revenue growth). They claim that other metrics, such as the satisfaction measures that so many firms routinely use, are far less important and perhaps not even worth using. While Reichheld and colleagues have been praised for coming up with an easy-to-implement way for companies to benchmark their success with WOM, they have also come under fire for over-simplifying the process of WOM tracking by advocating for just this one question.

At least one study has been conducted to confirm the findings of the NPS, extending it from U.S. companies to U.K. companies (opens into PDF file; see my blog post about this study), but there have been no peer-reviewed academic journal articles in which the central premises of the NPS have been studied. Until now.

Researchers Neil Morgan and Lopo Leotto Rego entitled their article "The Value of Different Customer Satisfaction and Loyalty Metrics in Predicting Business Performance."

In their manuscript they offer what amounts to a scathing critique of the NPS, arguing 1) that it's ineffective in predicting business performance and 2) that it's misguided for companies to rely on this metric alone. By the way, these same researchers published counter-point letters to the editor in the Harvard Business Review following the publication of Reichheld's influential article in the same publication.

This article deserves a careful read and consideration. However, like any research, it has important limitations that we also need to consider. Over the next few posts I'll offer a summary of the article and what I see as its implications for the use of the Net Promoter Score by companies and the WOM marketing industry.

For starters, here's the abstract (due to copyright restrictions I won't post the article here; it can be accessed at your local university library*):

Managers commonly use customer feedback data to set goals and monitor performance on metrics such as “Top 2 Box” customer satisfaction scores and “intention-to-repurchase” loyalty scores. However, analysts have advocated a number of different customer feedback metrics including average customer satisfaction scores and the number of “net promoters” among a firm’s customers. We empirically examine which commonly used and widely advocated customer feedback metrics are most valuable in predicting future business performance. Using American Customer Satisfaction Index data, we assess the linkages between six different satisfaction and loyalty metrics and COMPUSTAT and CRSP data-based measures of different dimensions of firms’ business performance over the period 1994–2000. Our results indicate that average satisfaction scores have the greatest value in predicting future business performance and that Top 2 Box satisfaction scores also have good predictive value. We also find that while repurchase likelihood and proportion of customers complaining have some predictive value depending on the specific dimension of business performance, metrics based on recommendation intentions (net promoters) and behavior (average number of recommendations) have little or no predictive value. Our results clearly indicate that recent prescriptions to focus customer feedback systems and metrics solely on customers’ recommendation intentions and behaviors are misguided.
Very exciting stuff! Let's dig in...

* You can find the article in Marketing Science, Vol. 25, No. 5, September–October 2006, pp. 426–439.

UPDATE: You can find a pre-press version of the article on Dr. Rego's site. [Thanks to Constantin Basturea for pointing this out!]


Thursday, November 02, 2006

Blogging Success Study Released!

It's been a long time in the works, but finally released!

John Cass (Backbone Media) and I conducted a research study with my class in Advanced Organizational Communication (Spring 2006) on what makes for a successful blog. We identified a number of best practices and also five themes that cut across all the interviews:


Culture: If a company has particular cultural traits worth revealing or a bad reputation it wants to repudiate, blogging can be an attractive option.
Transparency: Critical to establishing credibility and trust with an audience. People want to see an honest portrayal of a company and know that there are not ulterior motives behind the blog. Blog audiences respect a willingness to disclose all points of view on a subject.
Time: It takes a lot of time to set up, research and write a quality blog. Companies need to identify a person who has the time or whose schedule is freed up to make the time, or need to engage a group of people to share the responsibility.
Dialogue: A company’s ability and willingness to engage in a dialogue with their customer base about topics that the customer base is interested in is critical to its blogging success.
Entertaining writing style and personalization: A blogger’s writing style and how much they are willing to reveal about their life, experience and opinions brings human interest to a blog, helps build a personal connection with readers and will keep people reading.
You can download the study as a PDF (name/e-mail registration required) or read it online. The online report takes the form of a blog so that people can comment on individual sections and is hosted on the Scout blogging server.

Be sure to check out the summaries of the interviews with each blogger to learn more detail about each blogger's experience.

Thanks so much to all the bloggers who agreed to participate in the study, the students in my class who did the interviews, and the folks from Backbone Media (John Cass, Stephen Turcotte, Megan Dickinson, Kristine Munroe and Dave Alpert).

Northeastern University Press Release


Saturday, October 28, 2006

Gimci, Nicolas Cage, and WOM Marketing in South Korea

This dispatch comes from Seoul, South Korea. I have been here for the past few days at the invitation of Ms. Inus Hwang, CEO of Advantage Marketing Lab and founder of Azoomma.com, an online community dedicated to Korean housewives (currently about 600,000 members). I met Ms. Hwang last year at the 1st International WOM Marketing Conference in Hamburg, Germany. Since that time the company has changed its name from Azoomma Marketing Lab to Advantage Marketing Lab as it broadens its WOM marketing program offerings to products and services beyond Korean housewives.

For this trip I was invited to speak at the 3rd Annual Korean WOM Marketing Conference, which took place on Thursday. There were over 100 brand managers, press, and academics in attendance at the event. Apparently Seth Godin's book Purple Cow has been pretty popular here, and more and more companies are beginning to use WOM marketing techniques (see this article from the European magazine, Infomag, for a brief overview of the media and marketing landscape in South Korea including how WOM, buzz, and community marketing are being used [note that the article confuses terminology quite a bit]). There are a few other WOM firms in the country now, and at least two are WOMMA members. Additionally, about two weeks ago, there was a big article in a Korean newspaper about the Net Promoter Score, which was the first introduction to the metric for a lot of companies in South Korea.

The title of my talk was "Will the Real Word-of-Mouth Marketing Please Stand Up?". In my presentation I discussed five common misunderstandings that people have of WOM marketing:

Misunderstanding #1: WOM Marketing = Buzz Marketing = Viral Marketing (confusion about terminology and thinking that buzz and viral marketing represent the only forms of WOM marketing)
Misunderstanding #2: WOM Works Best in Stealth Mode (there have already been cases here of stealth marketing)
Misunderstanding #3: WOM Is Only Used for Launching New Products and Services (this is partially related to Misunderstanding #1 in that companies tend to focus on the shorter-term WOM strategies rather than the longer-term principles and techniques, such as community, evangelist, and grassroots marketing)
Misunderstanding #4: WOM Versus Advertising (the sense is that if you do WOM marketing you don't also use advertising or vice versa)
Misunderstanding #5: WOM Cannot Be Measured (here I discussed a number of different ways firms are measuring WOM marketing programs and ROI)
While in Korea I also learned a great deal about the culture, people, and cuisine. In terms of cuisine, for example, I have learned to appreciate Gimci, which is fermented vegetables and a staple of Korean meals. The sequencing of the foods that are eaten is also very smart (for example, eating noodles or rice after meat, followed by tea, really helps to settle the stomach). My Korean vocabulary now stands at about 10 words (hello, thank you, nice to meet you, goodbye, etc.). I learned about "bang culture," where there are a number of public rooms devoted to PC gaming, DVD viewing, conversations, and singing (see the recent article in the Sunday New York Times on PC bangs; bang literally means "room"). I have also learned that many people in Korea think I look like Nicolas Cage (I guess I should take that as a compliment!).

Ms. Hwang and her staff at Advantage Marketing Lab have been wonderful hosts and it has been a memorable experience for me. Hopefully I will have the chance to visit again soon!



Friday, October 27, 2006

Weighing in On The Edelman & Wal-Mart Flogging Scandal: Insights from Organizational Communication

Like many others, I was very disappointed to learn of Edelman's role in the Wal-Mart fake blogging (flogging) scandal, especially since Edelman is a governing member of the Word of Mouth Marketing Association. This issue is especially salient to me, not only because I am a member of the WOMMA Advisory Board and Co-Chair of the Research & Metrics Council, but because I also conduct research in the WOM marketing space, consult with organizations about how they should ethically and effectively manage WOM, and am an educator who teaches classes on the same topics.

There has been a great deal of discussion about what WOMMA's response should be. Broadly speaking, should it take more of an educational role or an enforcement role, or some combination of the two? Clearly WOMMA has an educational role to play and I struggle with what role an industry association should play in terms of enforcement. I'll speak briefly to this below, but for the time being, I would actually like to bracket (just for a few paragraphs) what WOMMA's role should be so that I can discuss what I would like to see from Edelman, or any company in this situation, but especially a company who is as well-known and respected as Edelman is in this space.

In one of the classes I teach at Northeastern -- Advanced Organizational Communication -- we study certain communication imperatives that any organization must follow in order to be ethical and effective. There are a number of imperatives, but two seem highly relevant to me: the notion of automatic responsibility and the institutionalization of dissent.

Automatic responsibility is the idea that each organizational member has the responsibility to solve a problem as they become aware of it (whether that problem is a difficulty a customer is having or an ethical violation within the organization). If they don't have the technical knowledge or skill set to solve the problem, then it is their responsibility to find somebody in the organization who can address it.

The institutionalization of dissent is the obligation of an organization to build communication systems such that organizational members are encouraged to dissent, have that dissent listened to and responded to, and are rewarded (rather than punished) for their dissent.

Upon learning of Edelman's complicity with this obvious and unfortunate violation of the WOMMA Ethics Code (for example, dishonesty about the bloggers' identities) I was left to wonder how this could have happened.

How could an organization that contributed to the creation of the ethics code and has demonstrated leadership in the social media space commit such a transgression? What was the organizational decision-making process that led to this? Did anyone ever dissent and say this (fake) blogging program wasn't a good idea (it should have been the moral obligation of each employee to exercise their automatic responsibility and do so)? If there was dissent, what happened to it? Was it ever encouraged in the first place? Was it ever listened to? Further, was it (or will it be) rewarded, or was it punished?

I read that Edelman wants to make this situation right (see Rick Murray's comment on the WOMMA discussion list) and I sincerely believe that they do. A lot is at stake, as it represents a crisis of confidence for the reputation of their company, for WOMMA and the WOM marketing industry, and for the consumer/citizen.

So what should Edelman do now? A tough question, but here's a start (see the WOMMA-facilitated discussion for other ideas): If I were advising them on what to do, I would counsel them to conduct a comprehensive, independent investigation whose purpose would be to uncover the organizational communication and decision-making before, during, and after this scandal. Specifically, the goal would be to find answers to the questions above about where the voices of dissent were and, if there were such voices, why they weren't heeded. Fundamentally interrogating this issue of dissent is paramount in a media and organizational landscape where the dominant values are discursive engagement, openness, and transparency.

The investigation board would include representatives from the following groups: consumer protection group(s), the FTC, academic expert(s) in corporate blogging and/or corporate ethics, member(s) of the WOMMA Ethics Council (and possibly other associations to which Edelman belongs), and, of course, blogosphere netizen(s) or other consumers/citizens. The board would make the results of its investigation publicly available through a written report (electronically facilitated and with a public discussion board). Individual actors and the organizational system must then be held accountable in accordance with the findings of the report. The appropriate actions for holding individuals and the organizational system responsible cannot really be decided until after the investigation is complete and the report is made public -- well, the actions can be debated, but there's still too much we don't know at this time.

I feel this would be an appropriate course of action for three reasons: it would be conducted in the spirit of openness and transparency that is highly valued not only by the blogging community but also by the principles of a democratic citizenry; it would allow Edelman to continue in the leadership position they have taken great strides to establish, by showing they are willing to be held to the same principles of openness and transparency they espouse; and it is a proactive move that would allow other practitioners and students to learn from this crisis.

I think the decision to initiate this investigation should be Edelman's while WOMMA (and again, other relevant industry associations) should continue to facilitate this process by continuing to host discussions on the matter, to contribute a representative of the Ethics Council to the investigation board, and to help with the educational effort based on the results of the independent investigation.

I would love to hear others' views on this matter in the comments, but I would also encourage trackbacks and/or comments to WOMMA's discussion board.

Many of the principles discussed in this blog post were derived from Phil Tompkins' book Apollo, Challenger, and Columbia: The Decline of the Space Program (A Study in Organizational Communication), which I use as a textbook in my class.

Addendum: I should add that part of the investigative or auditing process I describe above needs to include Wal-Mart and the role it played in this. Working out the practicalities of involving Wal-Mart makes things more complex, but it would be important for understanding the systemic dimensions.

Update (11/1/2006): WOMMA has officially placed Edelman's membership under a 90-day administrative review. Edelman needs to take the following six steps (some of which appear to already be underway) in order to reinstate its full membership:

1. Provide assurances that all inappropriate programs have been stopped.
2. Provide a briefing to the WOMMA Executive Committee to fully explain the details of the incident.
3. Implement a training program to educate all employees on ethical practices and disclosure requirements.
4. Institute systems to prevent violations from happening in the future, and to correct them if they do.
5. Formally participate in upcoming WOMMA ethics programs and comply with all new ethics requirements for members.
6. Provide detailed documentation of compliance with the above requests.


Playing 20 Questions With Ethics -- It's Not Just A Game

WOMMA is at it again on the ethics front. Building on their Ethics Code and Honesty ROI framework (Relationship, Opinion, and Identity), the Ethics Council has created a tool to help organizations evaluate whether their next WOM marketing program falls within what could be considered ethical boundaries. WOMMA has released their "20 Questions Ethics Assessment Tool" in draft form and is inviting public comment.

This document is very timely given recent revelations by Edelman about its role in the Wal-Mart flogging (fake blogging) scandal, as well as other violations of the Ethics Code such as LonelyGirl15.

Disclosure: WOMMA Advisory Board member


Talk About An Effective Use of Cause and Viral Marketing: Dove's "Evolution"

Here's an excellent example of a company using viral and cause marketing. It's part of Dove's Campaign for Real Beauty, and the viral clip is called "Evolution." [option to download or open clip in Windows Media Player]

Not only is it well crafted, but there's also an opportunity to pass along the clip, get involved with the Dove Self-Esteem Fund, and join a discussion in Dove's self-esteem forum (all available at the campaign web site).

Reflect and pass it on!

Thanks to Liz Stokoe's post to the DARG listserv for making me aware of the clip.


Wednesday, October 25, 2006

Word-of-Mouth Marketing Communication Bibliography Project

There is a tremendous amount of interest in all things word of mouth these days. But many people who are new to this topic and interested in learning more are not aware that word-of-mouth marketing and related concepts -- like loyalty, advocacy, consumer behavior, and social networks -- have been studied for decades in the academic world and by some companies.

In light of this, I conducted my own review of the literature on word-of-mouth marketing communication. Along with others who have done the same -- notably Greg Nyilasy and Martin Williams -- I have combined our bibliographies, and we are making them available at my download page as a common resource.

We make no claims that this bibliography is complete in any way. Further, what exactly counts as "word of mouth" is somewhat ambiguous, so the list includes a number of articles that are only tangentially related. We have also included, albeit unsystematically, industry white papers and news articles on the topic.

The current list is available both as a PDF and as an EndNote bibliography file (as it stands now, the list runs 32 pages). The PDF is not as useful as the EndNote file, but nearly everyone has access to PDF. You can request a copy of the EndNote file (other academics and researchers would be most interested in this, I would assume), but we ask that you agree to make a meaningful contribution to the bibliography project (for example, contribute new resources, spot-check entries, or add abstracts or summaries of the articles).

We would love for people to fill in gaps where the current list is incomplete! Simply e-mail me (w DOT carl AT neu DOT edu) with "WOMBP" in the subject line. I aim to update the file every month and post a new version. When there are significant revisions or additions I will create a new blog post.

There's surely a much better way to maintain an active bibliography, such as through the use of a wiki, and we'll probably move to that at some point. There are also other WOM-related resources to check out, such as:

- The WOMNIBUS and WOM Library hosted by the Word of Mouth Marketing Association
- Google Scholar (keyword search "word of mouth")
- SPIDER (Social Psychology of Information Diffusion -- Educational Resources)
- The New PR Wiki (managed by Constantin Basturea)
- Michael Cafferky's WOM Resource page
- Kerimcan Ozcan's website

Happy researching!

Link to download page for WOM Bibliography Project



Friday, October 13, 2006

Top Ten Sources for PR and Marketing Communication and Higher Education Blogs

I recently learned that the Word-of-Mouth Communication Study blog was included as one of the Top 10 blogs for PR, Marcom, and HigherEd. The list was compiled by Robert French from Auburn University. His blog and associated resources at Auburn Media are excellent and should be included in the Top 10 as well, but he modestly put himself at number 11.

Here's the introduction that Robert put together for the list:

Public relations from the academic point of view. These links represent some of the people teaching the future PR practitioners of our world and those practicing PR for their universities. What do they have in common? They are blogging. Their views are important to the wider public relations conversation occurring online. To the PR practitioners of the world, your concerns are important to these bloggers. Without active involvement from practitioners, how else may future PR practitioners be prepared for post-graduate life? To recent adopters, or those just exploring social media for their classes, may this list serve as a jumping off point for your efforts. Please engage these educators in conversations.
Given this description, interested readers may want to check out my class blogs: Advanced Organizational Communication and WOM, Buzz, and Viral Marketing Communication.

I encourage folks to check out the other great blogs and wikis on the list.


Wednesday, October 04, 2006

WOM Marketing Activity For Your Next Class, Presentation, or Training Session

I recently received an e-mail inquiry about exercises or activities that can be used when presenting on WOM marketing to PR and marketing professionals.

One activity I’ve found really useful when talking with groups focuses on the different approaches to WOM marketing. The goals of the activity are:

1. for people to see the broad range of strategies and techniques that fall under the umbrella of WOM marketing,
2. to understand the advantages and disadvantages of each, and
3. to understand when to use each technique.

The activity is based on my own observations and interviews with industry practitioners, as well as on the work of authors and consultants Ben McConnell and Jackie Huba at Church of the Customer, and consultant John Moore from Brand Autopsy.

First, I explain that the range of WOM marketing initiatives can be broken down along three continua (there are more, but three is a good start):
1. Degree of control over messages: consumer control versus company control
2. Outlook for future: longer-term versus shorter-term
3. End-goal: generating advocacy by developing a better brand, product, or service experience versus generating awareness through an attention-grabbing WOM activity

Second, I present case studies of actual WOM marketing initiatives and ask the participants to place each case along the continua above. In selecting cases, I make sure each one serves as a good representation of a particular WOM marketing technique. You can find a list of these techniques at the Word of Mouth Marketing Association’s website. Placing each case along these continua often generates some very good discussion.

For example, I have presented the case of the Chevy Tahoe campaign, in which Chevy hosted a contest on its site to determine who could create the best consumer-generated ad. Chevy provided folks with the tools they needed: images, audio tracks, an easy editing interface to create the ad, and a way to easily spread the message to other consumers. Many of the user-generated submissions were posted on Chevy’s own site, while others found their way to YouTube. Additionally, many of the ads were not very flattering from Chevy’s perspective, often serving as political statements about the negative effects of SUVs on the environment.

So after I explain the program, I ask the audience to place it along the first continuum: consumer versus company control. Some audience members think it should go all the way toward the consumer end, arguing that consumers could create their own ads without any input from the company and that Chevy didn’t filter or exclude submissions it didn’t agree with. Others argue that it shouldn’t sit all the way at the consumer-control end because Chevy still provided the raw materials and hosted many of the videos on its own site. Plus, the ads on Chevy’s site were taken down after the campaign, suggesting more company control over the process.

The next continuum is whether the initiative represented a longer-term strategy or a shorter-term campaign. Most of the audience sees this as a shorter-term campaign (though perhaps the initial stage of a longer-term strategy for figuring out consumer-generated media).

In terms of the third continuum, the end-goal, most audience members feel the focus was to capture attention and generate awareness of the Chevy Tahoe through an innovative activity designed to spread WOM (of course, Chevy wasn’t the first to do this; other examples include Converse, MasterCard, etc.). To support this classification, I usually chime in here to point out that the WOM activity wasn’t really designed to elicit consumer feedback about how the Chevy Tahoe could be built better, nor to build more loyalty to Chevy and, ultimately, WOM advocacy.

The contrasts among the different types of programs become clearer when you discuss a few different examples. Here are some other examples I’ve used:
1. Monitoring & Engaging in Online Conversations (Kryptonite U-locks)
2. Creating Consumer Communities & Cultivating Advocacy (Discovery Educator Network’s program for their unitedstreaming product; Church of the Customer)
3. Providing Tools for Viral Spread of Messages (Family Guy DVD program; M80)
4. Engaging Influencers (Wine Council of Ontario’s VQA program; Matchstick)
5. Stimulating WOM via Agent-Based Networks (Hershey’s Take 5 candy bar program; BzzAgent and Arnold Worldwide)

Finally, after presenting each case and discussing where it falls along the continua, I facilitate a discussion about the relative advantages and disadvantages of each type of program and when a company might want to use each one. In the facilitation I think it’s important to emphasize that each technique can be useful depending on the specific company goals, and that the techniques may in fact be complementary (here’s a link to a post by one of my students about combining the goals of generating both awareness and advocacy).
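For anyone who wants to tinker with the framework itself, here is a minimal, hypothetical sketch in Python (my own illustration for this post; the class name, case selections, and placement scores are invented assumptions, not WOMMA material or real data) showing how each case could be recorded as a point along the three continua:

    from dataclasses import dataclass

    @dataclass
    class WOMCase:
        name: str
        control: float   # -1.0 = consumer control ... +1.0 = company control
        outlook: float   # -1.0 = longer-term strategy ... +1.0 = shorter-term campaign
        end_goal: float  # -1.0 = advocacy via better experience ... +1.0 = awareness via attention

    # Illustrative placements a workshop group might arrive at (invented numbers).
    cases = [
        WOMCase("Chevy Tahoe contest", control=0.2, outlook=0.7, end_goal=0.8),
        WOMCase("Kryptonite U-locks (monitoring & engaging)", control=-0.5, outlook=-0.6, end_goal=-0.4),
    ]

    for case in cases:
        print(f"{case.name}: control={case.control:+.1f}, "
              f"outlook={case.outlook:+.1f}, end-goal={case.end_goal:+.1f}")

Even something this simple can be handy for tallying where a group places each case and for comparing placements across different sessions or audiences.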

Feel free to contribute other activities, exercises, or discussion points you might use, or want to see, in a presentation on WOM marketing.

For more information on this post:

Watch John Moore's presentation on WOM Creationists versus Evolutionists.

Read Ben & Jackie's post on the buzz versus WOM divide.

Read posts on the Chevy Tahoe campaign from Church of the Customer and from Pete Blackshaw's CGM blog and ClickZ column on consumer control and what counts as (in)authentic consumer-generated media:


Why Chevy Tahoe campaign was doomed before it launched

Chevy Tahoe campaign: Not CGM

GM Steps into the Chevy Tahoe Debate

Can Marketers Control CGM? Should They?

