Submitted by Howard Rauch
One of the most important values a publication or Web site can deliver to its readers is articles based on high-quality editorial research. High quality does not necessarily mean retaining the most expensive research service to conduct a national study for you … although using an outside agency certainly implies a measure of quality. High quality does mean:
1. You addressed a topic of considerable importance;
2. You asked significant questions;
3. You obtained a decent response;
4. Your conclusions were highly instructive to readers;
5. Whenever possible, the information reported was clearly ground-breaking.
Even the smallest magazine staff should plan on publishing a continuous flow of original statistics. So … how do you organize a research effort? How do you exploit it? What are some do’s and don’ts along the way? Here are 14 responses that fill in some of the blanks.
1. If you’re planning a major project, seek input on the questionnaire from your readers. You may be pleasantly surprised to find how responsive people are when you’re not contacting them for the conventional reason of conducting an interview.
2. Follow the established principles of making a questionnaire easy to answer. Have plenty of check-offs and multiple-choice items, but also include several open-ended questions. Also include historical questions; otherwise you lack a frame of reference, especially in terms of sales data.
3. Random sampling is for the birds. Perhaps saying so is heresy to devout researchers. But if you leave things strictly to Nth-name selection, you run the risk of winding up with lots of non-authoritative responses.
4. Watch questionnaire length. When a questionnaire is especially complex, offer an incentive to respond.
5. Establish a written timetable and stick to it. Allow for second and third mailings. Provide adequate time for your art staff to develop graphics that are easily followed by your readers.
6. Plan on conducting some interviews personally so you can confirm whether the tabulations you are seeing actually make sense. Try to do this early in the game, before the questionnaire is mass-mailed, so you can get the bugs out via some final revisions.
7. Beware of interpreting results based on straight averages. This caveat applies particularly to salary studies. “Median” is the magic word here: a handful of very high salaries can drag a straight average far above what the typical respondent earns. An additional thought about salary studies: present results by region rather than reporting a single national average.
8. You don’t need a high quantitative return if you draw a high quality response. Just remember to say that results reflect only the experience of the sample and are not projectable to your industry’s universe.
9. When you write the article based on survey results, interpret rather than recite. It is absolutely unpalatable for a reader to wade through a series of sentences of the “60% said this, 30% said that, 15% said the other” variety.
10. Proofread your charts and insist on seeing a final color key. For example, the technique of using varying shades of a single color to reflect different response segments can backfire if all those shades appear identical.
11. Don’t be guilty of publishing research that has no foundation. If the overall response is bad, or certain “cell” groups did not furnish adequate data, clearly you are in hot water.
12. Don’t rely totally on the mail. Plan to supplement the response with telephone and/or face-to-face interviews.
13. For best results, always cross-reference your data. Many significant gems are left languishing because of hasty decisions that cross-tabulation would be too time-consuming.
14. Variety is the spice of editorial research. Depending upon your field and your budget, you can mix trade and consumer projects. One way to get mileage from reader studies is to gather enough information that can be reported in monthly doses without diluting the timeliness factor.
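The warning in item 7 about straight averages is easy to demonstrate. Here is a minimal sketch in Python, using made-up salary figures (all numbers are invented for illustration), showing how a couple of outliers pull the mean well away from the median:

```python
from statistics import mean, median

# Hypothetical survey salaries: most cluster near $50K,
# but two executive outliers inflate the simple average.
salaries = [42_000, 45_000, 48_000, 50_000, 52_000, 55_000, 180_000, 250_000]

print(f"mean:   ${mean(salaries):,.0f}")    # → mean:   $90,250
print(f"median: ${median(salaries):,.0f}")  # → median: $51,000
```

The "average" salary here is nearly $40,000 above what the typical respondent actually earns, which is exactly why the median is the safer headline figure.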
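The cross-referencing urged in item 13 need not be elaborate. A minimal sketch, with invented response records (both the categories and counts are hypothetical), shows how a pattern invisible in the overall totals surfaces once two questions are tabulated against each other:

```python
from collections import Counter

# Hypothetical responses: (company size, reported sales trend)
responses = [
    ("small", "down"), ("small", "down"), ("small", "up"),
    ("large", "up"), ("large", "up"), ("large", "down"),
    ("small", "down"), ("large", "up"),
]

# Overall totals look perfectly balanced: 4 up, 4 down...
print(Counter(trend for _, trend in responses))

# ...but cross-tabulating by size reveals the real story:
# small firms are mostly down, large firms mostly up.
crosstab = Counter(responses)
for (size, trend), n in sorted(crosstab.items()):
    print(f"{size:5} {trend:4} {n}")
```

A finding like "small firms are hurting while large firms grow" lives only in the cross-tab; the single-question totals would have buried it.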
I would be remiss if I ignored the elusiveness of statistical math. To illustrate the possibilities, here are two questions excerpted from a test I’ve used during editorial research workshops:
- In 2010, there were 11.6% more hospital P.T. units than in 2009. In 2011, there were 14.7% more hospital P.T. units than in 2010. Based on this data, is it true or false that the increase reported for 10-11 was 3.1% higher than the increase reported for 09-10?
- The following facts were reported in a survey of lawncare firms: (1) Of 1,000 firms surveyed, 18% responded; (2) 48% of respondents also provide pest control services; (3) 27% of those offering pest control services consider the business to be unsuccessful; (4) lack of success was attributed to high cost of service by 8% of respondents; dissatisfied customers – 4%; diminishing customer base – 4%. What common statistical problem does the data reflect?
Everybody ought to ace this two-question exam. However, be assured that variations of the above snafus do get published from time to time.
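For the curious, the arithmetic behind both questions can be checked in a few lines. This is a hedged sketch: only the published percentages come from the text, and the reading of each snafu (percentage points mistaken for percent in the first; percentages piled on an ever-shrinking base in the second) is the conventional one:

```python
# Question 1: 3.1 is the difference in percentage POINTS, so "3.1% higher"
# is false -- relative to the 09-10 increase, 10-11 was ~27% higher.
pts_diff = 14.7 - 11.6
rel_diff = (14.7 - 11.6) / 11.6 * 100
print(round(pts_diff, 1), round(rel_diff, 1))  # → 3.1 26.7

# Question 2: each percentage rests on a smaller base than the last.
surveyed = 1000
respondents = surveyed * 0.18        # 180 firms answered
pest_control = respondents * 0.48    # ~86 firms offer pest control
unsuccessful = pest_control * 0.27   # ~23 firms call the business unsuccessful
print(round(respondents), round(pest_control), round(unsuccessful))  # → 180 86 23
```

By the final question, the impressive-sounding "27%" describes roughly two dozen firms, a base far too thin to support sweeping conclusions.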