- Describe primary research methods and the types of information they yield
- Explain the pros and cons of in-person, telephone, and online research methods
Choosing the Right Primary Research Method
When secondary research doesn’t provide all the answers, marketers often turn to primary research, which involves data collection that’s tailored to the specific problem or challenge you’re trying to address. There are many ways to conduct primary research. Which approach to take depends on the type of information you need along with the timing, budget, and resources of your project.
Quantitative vs. Qualitative Research
Qualitative research explores ideas, perceptions, and behaviors in depth with a relatively small number of research participants. It aims to answer questions with more complex, open-ended responses such as, “What does this mean to you . . . ?” or “Why do you believe . . . ?” or “How do you like to . . . ?” Qualitative research doesn’t yield data that are easily tabulated and translated into tidy percentages. Instead, it provides information that can help marketers understand the big picture of how customers perceive or experience something.
Qualitative research can also give an organization directional information. That is, it can help an organization tell whether it’s on the right track with its approach or solution to a problem. Qualitative research techniques tend to be loosely structured and less formal, since the topical exploration may head in very different directions depending on the person or group participating. These techniques can provide great insights to marketers, but because they involve relatively few participants, the results can be very subjective and idiosyncratic. The risk is in assuming what you learn from a handful of individuals pertains to your target audience as a whole.
In contrast, quantitative research collects information that can easily be counted, tabulated, and statistically analyzed. When organizations need to understand (or quantify) the exact percentage of people who believe or act in a certain way, quantitative research is necessary. Quantitative methods allow researchers to test and validate a hypothesis or what they believe is the best course of action. These methods collect enough data to provide statistically valid results, and managers use them to inform the choices they make.
Often marketing research projects start with qualitative research activities to get a more complete picture of an issue or problem and how customers/consumers are thinking about it. With a better understanding of the issue, they follow up with quantitative research that provides more specificity about what proportion of the population shares common preferences, beliefs, or behaviors. This information provides insights to help marketers refine their segmentation and targeting strategy, the marketing mix, or other considerations related to marketing effectiveness.
Qualitative Research Methods
Typical qualitative methods include behavioral observation, in-depth interviews, focus groups, and social listening. Each of these methods is described below.
Observation may be the oldest method of primary research. Since the beginning of commerce, merchants have been watching their customers and non-customers engage in a variety of behaviors. Examples include information-gathering, shopping, purchasing, product returns, complaints, and so forth. Observation can be as simple as a local fast-food restaurant manager watching the expression on customers’ faces as they eat a new sandwich.
More formal observation techniques are also employed. Researchers might record observations in a prescribed way for later analysis and reference. Video cameras, audio systems, movement tracking, biofeedback, and other technologies may be used to observe and capture information about consumers. Some observational techniques can be quite intrusive. For instance, a researcher might enter a consumer’s home and conduct an audit to take an inventory of products found. Ethnographic research requires that the researcher practically move in with the consumer to observe and record various relevant behaviors.
Observation may be the only way to capture some types of information, such as how consumers actually behave or use a product. It can provide important research insights, especially if consistent patterns are identified.
A great example of observational research is the way technology company Google works to ensure that its search-engine product functions well in every market in which it operates. One of its major markets is China. Written Chinese, though, uses a much more extensive character set than English, which makes it difficult for Chinese users to get helpful search results. Google researchers observed and video-recorded Chinese people using search engines to help them understand exactly what, when, and why problems occurred. The company used this information to develop potential solutions such as “Google Suggest,” which auto-fills search suggestions so people don’t have to type in the full search query. The research also led to Google’s “Did You Mean?” feature, which asks users if they meant to type in a different, more popular, standardized, or spell-checked search query. Experimenting with and adding these sorts of features helped the company create a much more useful product for the Chinese market. Google has also added improvements with broad appeal to its standard search-engine product in other markets.
Depending on the approach, observation can be relatively inexpensive and quick. More sophisticated observational research can be significantly more expensive, but it can also offer unique insights that marketers might otherwise miss.
In-depth interviews give marketing researchers the opportunity to delve deeply into topics of interest with the individuals they want to understand better. Research projects that use this method typically involve a fairly small number of these interviews, and they target the precise characteristics of the audiences that researchers want to understand. For example, a pharmaceutical company might want to understand a medical doctor’s reasoning when considering which drugs to prescribe for certain medical conditions. A business software company might want to have a focused discussion with a product “power-user” about the limitations they see in the current product and what improvements they would like to see.
In-depth interviews are structured around a discussion guide. The interviewer asks questions and then listens carefully to capture responses—and sometimes asks follow-up questions to gain additional clarity and insight. In-depth interviews provide the opportunity to get under the surface and probe for more thoughtful answers and nuanced responses to interviewer questions. Often these interviews help researchers identify the range of questions and responses they should include in a quantitative survey (with more participants). In-depth interviews might also be combined with behavioral observation to get a richer understanding of why people do what they do: “What were you thinking when…?” or, “Why did you do this . . . ?”
Interview length is an important consideration for in-depth interviews. It is difficult to keep people deeply engaged in a conversation for more than thirty minutes, so both the discussion guide and the interviewer must be very focused on covering key topics in the time allotted.
A primary disadvantage of in-depth interviews is cost: they tend to be quite expensive because they require not only the time of an experienced interviewer, but also some compensation, or incentives, for interview participants. Exactly how much compensation depends on the audience. To get a busy practicing lawyer to participate in an in-depth interview, researchers must offer significantly more money than they might to a flexible (and cash-strapped) college student, for example.
Focus groups are much like in-depth interviews, except that they involve small groups (usually 6–12 individuals) rather than one person at a time. Like in-depth interviews, focus groups also try to delve deeply into topics of interest with people whose perspectives the researchers want to understand better. Focus groups have the added benefit of inviting peers to talk to one another about the topics in question, so the researchers hear not just one individual’s views but also listen to and observe the group’s interactions.
Whereas in-depth interviews are fairly short, focus groups tend to be longer, running 60–90 minutes, on average. It takes more time to hear from multiple people weighing in on a topic and to build an insightful group dynamic during the discussion. Focus groups tend to be expensive because each person receives an incentive for their time and participation. Audio or video recording and transcription are often preferred, so as to capture information for later reference.
It can be difficult to control the group dynamic in focus groups: sometimes one or a few people dominate the discussion while others hang back. “Groupthink” can be a problem when a charismatic participant manages to persuade others to adopt their way of thinking instead of allowing the full range of opinions to come to light. For these reasons, focus groups require skilled facilitators who are good at listening, managing time, steering the discussion, and keeping people on track. Focus group facilitators must also scrupulously avoid biasing participants with their own views, in order to ensure that the information captured accurately represents customer views.
The following video satire shows some of the challenges in conducting focus groups effectively and why a skilled facilitator isn’t always enough:
Networks and media production companies frequently rely on focus groups to guide their decisions about which television programs to produce and how to make improvements to programs in development. Termed “audience research,” these focus groups invite people into a viewing room to watch and provide feedback on a show. All are given a feedback dial—a tool participants use to indicate when they like or dislike something in the program. If they like something, they turn the dial up, and if they dislike something, they turn it down. A computer records the audience responses and provides a second-by-second view of the program overlaid with the audience’s response. Focus group facilitators monitor this feedback and then follow up with discussion about what people did or didn’t respond to, and why.
Interpreting the feedback from this audience research is something of an art: notoriously, the hit program Seinfeld was nearly canceled because the pilot show tested poorly in focus groups. Show creators look to audiences to help them understand not only what they like or dislike, but also what is interesting or unusual, and why. According to Michael Wright, former head of programming for TBS and TNT, “It’s very rare that a test compels you to order or not order a show. All you’re looking for is interesting feedback, to get insight you didn’t have before. It’s a tool. It’s diagnostic.” The focus group insights then provide guidance about where and how to improve a program to increase the chances that it will be a hit. 
Communication strategists use this same technique to test messaging in political speeches, advertising, and other presentations. The following video, from the PR firm Luntz Mazlansky, shows the results from a focus group’s feedback-dial reactions to a Barack Obama speech. The tracking lines on the screen show reactions from audience members who lean Democratic (green line) and Republican (red line). Marketers and messaging strategists use this feedback to understand which ideas and messages generate strong positive or negative feedback from the target audience:
With the proliferation of social media comes a tremendous opportunity to learn exactly what key individuals are saying with regard to marketing-related messages. Social listening is a systematic process for tracking what is being said about a given topic in forums such as Facebook, Twitter, LinkedIn, blogs, and even mainstream media. When they engage in social listening, marketers monitor and analyze both positive and negative perspectives. Social listening helps marketers map not only who is saying what, but also who is influencing whom to help shape these opinions.
Social listening can be passive, with marketers mainly tracking which topics are trending and the prevailing sentiments around those topics. Social listening can also be conducted in a more focused, proactive way by putting questions or prompts out to a targeted group—a set of bloggers and influencers or a social media community, for instance—and saying, “Tell me what you think about . . .”
A key challenge with social listening is how best to interpret the data that are collected: there can be so much information, or chatter, that it’s hard to sift through everything to pick out the worthwhile nuggets. Marketers have a growing number of tools to help monitor and harness the power of social media for social listening, from free tools like Google Alerts and Tweetdeck to advanced social media monitoring services like Brandwatch and Social Studio.
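To make the “sifting” concrete, here is a deliberately minimal sketch of how a social listening tool might tally positive and negative brand mentions. The keyword lists and sample posts are invented for illustration; real monitoring services use far more sophisticated language analysis than keyword matching.

```python
# Toy keyword-based sentiment tally for social listening.
# Keyword lists and posts are hypothetical; real tools use trained language models.

POSITIVE = {"love", "great", "recommend", "amazing"}
NEGATIVE = {"broken", "terrible", "refund", "disappointed"}

def tally_sentiment(posts):
    """Classify each post as positive, negative, or neutral by keyword overlap."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for post in posts:
        words = set(post.lower().split())
        if words & NEGATIVE:          # negative keywords take priority
            counts["negative"] += 1
        elif words & POSITIVE:
            counts["positive"] += 1
        else:
            counts["neutral"] += 1
    return counts

posts = [
    "I love this brand, would recommend it",
    "My unit arrived broken, want a refund",
    "Just saw their new ad today",
]
print(tally_sentiment(posts))  # {'positive': 1, 'negative': 1, 'neutral': 1}
```

Even this toy version shows why interpretation is hard: a sarcastic “love it” or a neutral news mention would be miscounted, which is why human analysts review the nuggets the tools surface.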
Unlike the other research methods described here, social listening takes place in public forums rather than through private research activities and interviews. This means that anything associated with the project may garner attention from members of the community or even the media. While this can be beneficial if an organization is trying to generate awareness, it can also seem manipulative or disingenuous. Social media communities have been known to turn on companies for misjudging the difference between “observation” and “interference.”
Most marketing leaders today would argue that social listening should be an integral part of a marketer’s job all the time in order to stay abreast of what people are saying about a product, company, industry, and competitive set. At the same time, marketing research projects may target social listening in a given subject or community in order to provide additional insight about a problem the organization is trying to solve or an opportunity under exploration.
An interesting example of social listening research is the work Brandwatch provides to video gaming companies. It tracks social media conversations over time as companies announce and launch new video games and new editions to monitor what creates buzz, who are the influential voices, and what generates positive and negative reactions.
The company analyzes this information and offers insights to game creators and marketers about audience receptivity to the new games, the effectiveness of marketing campaigns and messages, product and competitive strategy, and whom to target in the future to influence market perceptions.
Quantitative Research Methods
The most common quantitative marketing research methods are surveys and experimental research. Each is explained below.
Survey research is a very popular method for collecting primary data. Surveys ask individual consumers to give responses to a questionnaire. Questions may cover a variety of topics, but the question topics, format, response options, and survey length must all be a good fit for the audience and contact method (telephone, online, mail, in-person; more on this shortly).
Survey questions and responses must always be clearly worded and unambiguous. This stands to reason: if survey respondents are confused about what a question is asking, the data collected for that question won’t be very valid. Surveys typically contain a combination of closed-ended questions and open-ended questions. Closed-ended questions (also called structured questions) are easily tabulated, with a discrete set of answers such as yes/no, multiple choice, a scale rating, or “select all that apply.” Open-ended questions (also called unstructured questions) ask for a verbal or textual response, such as “Why did you choose X?” While it may be tempting to include lots of open-ended questions in surveys, in fact it is best to use this type of question sparingly. Survey respondents find closed-ended questions easier to answer and often skip open-ended questions or supply only minimal responses. Too many open-ended questions increase the likelihood that participants will abandon the survey before it’s complete.
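The “easily tabulated” property of closed-ended questions is exactly what makes them convenient: responses fall into a fixed set of categories that can be counted directly. A small sketch (the satisfaction responses below are made up) shows the kind of tabulation involved:

```python
from collections import Counter

# Hypothetical responses to a closed-ended scale question:
# "How satisfied are you with the product?"
responses = ["Very satisfied", "Satisfied", "Satisfied", "Neutral",
             "Very satisfied", "Dissatisfied", "Satisfied"]

counts = Counter(responses)
total = len(responses)

# Tabulate each answer option as a count and a tidy percentage.
for answer, n in counts.most_common():
    print(f"{answer:15s} {n}  ({n / total:.0%})")
```

Open-ended answers (“Why did you choose X?”) have no such fixed category set, which is why they require manual reading or text analysis rather than a simple count.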
When creating a survey, marketing researchers must strike the right balance between covering enough information to gain useful data and making the questionnaire short enough that people will finish it. The longer the questionnaire, the less likely people are to take the time to answer all the questions. Most marketing researchers concur that if a questionnaire takes longer than 15 minutes to answer, odds are good that people won’t get through it.
Surveys can be conducted quickly and inexpensively. For example, a store owner can ask people visiting the store to answer a few questions verbally or with a pencil-and-paper survey. Alternatively, a company can distribute a customer satisfaction survey at little or no out-of-pocket cost using freely available online survey tools (such as Survey Monkey or Wufoo).
Some surveys may require more complex and expensive data collection. For instance, a candidate running for public office may want to poll likely voters to learn which way they are leaning and what factors might influence their vote. For the survey to be useful and accurate, a representative set of likely voters must take the survey. This requires a screening process to make sure that the survey reaches the right people: likely voters whose age, ethnicity, gender, and other characteristics are similar to the population in the voting district. In this case, marketing researchers might opt for a telephone survey rather than an online or in-person survey. A telephone survey allows an interviewer to efficiently screen respondents to make sure they fit the likely voter profile and other characteristics of the voting population.
Once data are collected, the results are tabulated and analyzed with statistical methods in order to help marketing researchers understand the views, preferences, and experiences of their target audiences. The statistical analysis confirms not only how people respond to the survey questions, but also how confident researchers can be about the results’ accuracy. A large number of completed surveys yields greater confidence that the results accurately represent the views of the general population. A smaller number of completed surveys means researchers can be less sure that the sample reflects the views of the general population.
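The link between the number of completed surveys and researchers’ confidence can be made concrete with the standard margin-of-error formula for a sample proportion. This is a textbook approximation that assumes simple random sampling; the 60% figure below is invented:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n completed surveys
    (normal approximation; assumes simple random sampling)."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose 60% of respondents favor a product concept.
# Quadrupling the sample size halves the margin of error:
for n in (100, 400, 1600):
    moe = margin_of_error(0.60, n)
    print(f"n = {n:5d}  60% ± {moe:.1%}")  # ± 9.6%, 4.8%, and 2.4% respectively
```

This is why a larger number of completed surveys yields greater confidence: the uncertainty band around each result narrows as the sample grows, though with diminishing returns.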
The brokerage and banking firm Charles Schwab takes an interesting approach to survey research. The company frequently commissions quantitative surveys to better understand various issues related to investing, such as attitudes about retirement savings among 401(k) plan participants, and the economic outlook of adults living in major metropolitan centers. The company uses these surveys for two purposes. First, they gain deeper insights into ways of winning new customers and better serving existing customers. They can adjust targeting, marketing messages, product features, pricing, and placement as a result. Second, the company publishes many of the research results through its Web channels, social media, and paid media in order to generate attention. The company views this type of content as “currency for engagement”—that is, it’s a way of starting conversations with new and current customers about ways that Charles Schwab might meet their needs.
Another quantitative research method is to conduct experiments in which some factor or set of factors is varied to yield comparative results. A typical example is A/B testing in marketing campaigns. In an A/B test, marketers develop two different versions of a marketing campaign artifact, such as a Web site landing page. Each version may use a slightly different call to action, image, or headline. The marketers send out each version to a set of target customers and then track the results to see which one is more effective. Marketers then use this information to further refine the campaign message and materials, hoping to boost results.
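One standard way to judge whether an A/B test’s winner is a real effect rather than chance is a two-proportion z-test on the conversion counts. The sketch below uses invented campaign numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing conversion rates of versions A and B
    (pooled standard error; standard two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical landing-page test: version A converts 120 of 2,400 visitors (5%),
# version B converts 168 of 2,400 (7%).
z = two_proportion_z(120, 2400, 168, 2400)
print(f"z = {z:.2f}")  # z = 2.92; |z| > 1.96 means significant at the 95% level
```

With a z-statistic near 2.92, the marketers could be reasonably confident that version B’s higher conversion rate is not just noise, and roll out that variant.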
Experimental research may also be used to investigate how individuals with one set of factors or criteria compare to another. For instance, marketing researchers for a sales consulting services company might compare the sales growth of companies using their services with that of companies that do not. Marketers might use the data from this research to demonstrate how using their company’s services is linked to improved financial performance.
Research Contact Methods: Offline vs. Online
As marketing researchers decide which type of primary research to conduct, they must also decide which contact method fits best with their needs. In some situations, offline techniques like mail, telephone, and in-person research work best. In other situations, online contact methods are preferred, using email, mobile phone, and/or Web sites to attract survey participants and capture responses.
The following table outlines advantages and disadvantages of each contact method.
Before the advent of the Internet, marketing researchers relied on a combination of mail, in-person, and telephone contact to conduct marketing research. Observation techniques and focus groups were typically carried out in person, using skilled interviewers to facilitate high-quality data collection in the processes described above. Telephone and mail were the preferred contact methods for surveys, with researchers mailing a survey packet to targeted households or making telephone calls to request that people participate in survey research.
In mail surveys, a typical packet might contain a cover letter explaining the purpose of the research, a copy of the questionnaire, a stamped self-addressed return envelope, and an incentive for compliance (cash, merchandise, contribution to charity, or copy of report). Mail questionnaires allow the researcher to ask a large number of questions over a broad range of topics. They also permit the respondents to answer the questionnaire at their leisure. Mail surveys also have disadvantages. Researchers lose control through the mail process: Did the targeted person receive and answer the questionnaire? Did the respondent understand the questions? Did they complete the questionnaire, and on what time frame? Mail surveys have been a good option for budget-conscious marketing-research projects, while, until recently, telephone surveys have been the preferred method for in-depth interviews and short, timely surveys with highly targeted audiences.
Historically, telephone surveys have offered several advantages. Names and related telephone numbers can be obtained directly from a telephone directory or from internal or external databases. Telephone survey costs are relatively low, and research companies can provide well-trained and technically supported interviewers to ensure good data collection. Telephone surveys are limited in several important ways, though, such as the difficulty of reaching the correct respondent, the problem of completing the interview if the respondent decides to hang up, and the inability to eliminate the bias introduced by not interviewing those without phones or individuals with unlisted numbers. Telephone survey respondents may lose patience rather quickly, so it is best to limit survey length as much as possible. This means only a limited number of topics can be addressed.
Digital technologies have altered the picture of marketing research data collection dramatically. Today, virtually everything that was once done in-person via telephone or mail can now be conducted digitally, often very effectively and at a lower cost. Digital tools like Skype, Google Hangouts, and a variety of other Web conferencing technologies offer effective means of conducting in-depth interviews and even focus groups. Surveys can be provided through links in email messages, pop-up windows on Web sites, online forms, and through a range of other delivery mechanisms. Even many types of observational research can be conducted in virtual settings.
However, digital data collection has limitations, as well. In the digital world, researchers have less control over who opts to participate in a survey, so there is greater potential for self-selection bias—the problem of data reflecting the views of those who choose to participate, while omitting a significant proportion of the population who choose not to participate. Digital data collection also bypasses the many individuals who spend little if any time online. Over time as the population approaches universal access to the Internet, this will become less of a factor. As long as the digital divide exists, researchers must factor in this issue when they design data collection among their target audiences.
Depending on the target audience, the quality and type of data researchers need, in-person, telephone, or mail may still be the optimal contact method. But with a growing array of sophisticated and cost-effective online data collection tools now available, it’s always sensible for marketing researchers to evaluate online options for data collection, too.
Developing Research Instruments
Every marketing research method requires an instrument—the tool used for data collection. There are three basic types of marketing research instruments: questionnaires (for surveys), discussion guides (for in-depth interviews and focus groups), and mechanical data collection techniques designed to capture data associated with a research activity such as observation or experiment.
There are several rules of thumb for designing a questionnaire. Each question should be worded carefully, concisely, and clearly, so that the respondent knows exactly what is being asked and what the response options mean. After drafting survey questions, it is always wise to have others review them and provide feedback on the question wording, clarity, and overall flow from question to question. A good questionnaire should resemble a well-written story: it should be logical, relevant, easy to follow, and interesting to the reader or respondent.
As explained above, questionnaires usually include a mix of open-ended and closed-ended questions. The figure below illustrates the forms questions can take. As a yes/no question, Question 1 is considered a closed-ended dichotomous question; i.e., the respondent must check one of two possible answers. Question 2 is considered short response; the respondent enters a brief text response of no more than a few words. Questions 3 and 4 are two different scaled questions, a type of closed-ended question. Questions 5 and 6 are open-ended, allowing the respondent to provide any answer desired. Closed-ended questions are best used when the researcher wants to capture a particular set of answers or feels the respondent is unlikely to come up with an original answer. Open-ended questions allow the respondent to provide personal answers with as much or as little detail as desired. Of course, there is a risk that the respondent will have no answer.
Another important consideration is how to sequence the questions in the questionnaire. This includes placing easier questions at the beginning to encourage people to stick with the survey and complete it, deciding whether and how to group similar questions, and choosing where to place demographic questions such as gender, age, occupation, and so forth. Typically, demographic questions are grouped at the beginning or end. Researchers must also pay attention to making questions flow logically. Again, the goal is to create a coherent questionnaire so that respondents can answer it easily and accurately.
Designing Qualitative Discussion Guides
Discussion guides for in-depth interviews and focus groups follow many of the same rules as questionnaires: Questions need to be clearly worded and logically sequenced to provide a natural flow of discussion. Because these qualitative techniques are trying to get beneath the surface and uncover more in-depth information, they typically contain fewer closed-ended questions and more open-ended questions. Closed-ended questions might preface a thoughtful discussion about why a research participant feels or acts in a certain way.
Discussion guides should leave flexibility for the interviewer to pursue a useful line of inquiry that might surface. Focus group discussion guides should include questions that spark dialogue among the participants, so the researcher can benefit from the richness of peer interaction and opinion.
Timing is always an important consideration for these research instruments: How much ground can the interviewer realistically cover in the time allotted? Researchers must also pay close attention to where questions are placed in the discussion guide to ensure that the most important topics are covered even if the interviewer runs out of time.
Using Mechanical Instruments for Marketing Research
Some marketing research techniques collect information as research participants complete a task or go through a process. The research instruments in these research activities may involve some type of mechanical device and/or activity for data collection. For instance, marketing researchers may conduct Web site user testing to understand how effectively the Web site’s design, layout, and messaging encourage desired behaviors and perceptions. This research activity may involve equipment and a research process to track the user’s eye movements, mouse/pointer movements and click stream, as well as his or her impressions of the Web site user experience. Marketing research on media and messaging may use a variety of devices to track research participants’ media usage habits or their responses to messages and images as they view an advertisement, program, or speech.
Rather than designing these research tools from the ground up, marketing researchers typically work with specialists to conduct marketing research projects using these techniques and tools. Often these techniques are used in conjunction with other qualitative or quantitative methods to understand a marketing problem and possible solutions from multiple perspectives and approaches.
Sampling: Selecting Research Participants
In most marketing research, it is not necessary or feasible to conduct a complete census—that is, to speak to 100 percent of the target segment you want to study. This would be time-consuming, expensive, and superfluous, since after you have heard from a sufficient number of individuals, you will have information that is representative of the views of the entire population. Sampling is the process of selecting the appropriate number and types of research participants so that the data you collect is sufficiently representative of the whole segment.
A sample is a group of elements (persons, stores, financial reports) chosen for research purposes from among a “total population” or “universe” of all possible participants who fit the target criteria for research subjects. The value of a research project is directly affected by how well the sample has been conceived and constructed.
The first critical question in sampling is getting the right participant profile: whom, exactly, should you talk to or study for this marketing research? For example, if a research project is about laundry soap, the sampling plan must identify the right individuals to contact: Is it the person in the household who buys laundry soap? Is it the person who usually does the laundry? Is it the supermarket inventory manager who decides which products and brands to stock? Any of these individuals could be the right research subject, depending on what problems and questions the marketing research project is trying to address.
Another essential question is sample size: How many people must participate in the research to give valid results? A small project involving in-depth interviews or focus groups might require recruiting just a dozen research participants or thereabouts. A large quantitative survey might involve hundreds or even thousands of individuals in order to yield the type of data and desired level of reliability in the results.
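For quantitative surveys, the required sample size is commonly estimated with the standard formula for a population proportion, n = z²·p·(1 − p) / e². The sketch below (illustrative only; the function name and defaults are assumptions, not from the text) shows why many surveys target roughly 400 respondents:

```python
import math

def sample_size(confidence_z=1.96, proportion=0.5, margin_of_error=0.05):
    """Estimate the sample size needed to measure a population proportion.

    Standard formula: n = z^2 * p * (1 - p) / e^2.
    Using p = 0.5 is the conservative choice: it maximizes p * (1 - p),
    so the resulting n is large enough for any true proportion.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)  # round up -- you can't survey a fraction of a person

# At 95% confidence (z = 1.96) and a +/-5% margin of error:
print(sample_size())  # 385
```

Note how tightening the margin of error to ±3% pushes the requirement past 1,000 respondents, which is one reason large national surveys cost so much more than small ones.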
Marketing researchers must also determine how to identify potential participants. For some projects, a company’s own customer and prospective-customer records provide enough names within a target segment to complete the research. For other projects, marketing researchers must purchase lists of individuals who fit the target profile, or they may pay a marketing research services company to recruit participants. Another option for some projects is to use a panel: a group of people who have been recruited by an organization to participate in periodic research projects. Although panel members are effectively professional (paid) research subjects, they can still provide useful data and perspectives if they fit the respondent profile. Because their members are pre-screened for a wide variety of criteria, panels can be extremely useful for reaching hard-to-find individuals among the general population—such as people who drive Volkswagen vehicles or parents of teenagers.
How researchers select the individuals who will participate—also known as the sampling procedure—is another important consideration. All sampling procedures can be classified as either probability samples or nonprobability samples. In a probability sample, each individual has a known chance of being selected for inclusion in the sample. The simplest version is the simple random sample, in which each individual in the research population has exactly the same chance of selection. For example, a sample of names could be selected from the company’s customer list according to a random process, such as using a randomization algorithm to order the list.
While in a probability sample the sampling units have a known chance of being selected, in a nonprobability sample the sampling units are selected arbitrarily or according to a marketing researcher’s judgment. Returning to the customer list example, instead of using a randomization algorithm to order the list, an arbitrary selection method would be to start research with the first fifty or sixty names on the list. Another method would be for researchers to select a subset of the customer list that includes known individuals or entities who are likely to participate and provide useful information.
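The contrast between the two procedures can be made concrete in a few lines of code. This is a minimal sketch, assuming a hypothetical 500-name customer list (the names and sample size of 50 are illustrative, not from the text):

```python
import random

# Hypothetical customer list of 500 names.
customers = [f"customer_{i:03d}" for i in range(1, 501)]

# Probability sample (simple random sample): every customer has the same
# known chance of selection -- here, 50 out of 500, or 10%.
random.seed(42)  # seeded only so this sketch is reproducible
probability_sample = random.sample(customers, k=50)

# Nonprobability sample (arbitrary selection): take the first 50 names.
# Anyone past position 50 has no chance of being chosen, and that chance
# is not governed by any known probability.
nonprobability_sample = customers[:50]
```

The danger of the arbitrary approach is visible in the code itself: if the list happens to be sorted by sign-up date or region, the first fifty names share a systematic bias that a random draw would avoid.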
Analyzing Primary Data
Once primary data collection is complete, these projects proceed with the process described previously for analyzing data: interpreting what it means, generating recommendations, and reporting results to the appropriate stakeholders within an organization. As noted above, qualitative research methods do not yield neat percentages and statistically reliable results, so the data from these projects can be difficult to describe. Summarizing key themes and takeaways is a useful approach, as is including verbatim comments from research participants that express important points.
Quantitative research usually has a rigorous analysis phase involving cleaning and formatting the data. Researchers apply a variety of statistical tabulations, manipulations, and tests to determine what the data are saying, which findings are truly significant, and what meaningful correlations or relationships exist to offer new insights about the target segment. A key challenge in interpreting quantitative data is sifting through many data points to determine which findings are most important and what they mean as organizations apply the results of marketing research. With this in mind, it can be helpful for marketers and researchers to look for the story the quantitative data tell: What picture do they paint of the problem, and how should managers understand the problem (and possible solutions) differently as a result of the research?
This type of approach can help managers, marketers, and teams who are stakeholders in the marketing research better understand and digest the insights provided by the research project and take action accordingly.
- https://hbr.org/2009/03/how-google-and-pg-approach-new
- Jonah Lehrer, How We Decide, pp. 108–109, https://books.google.com/books?id=f9LqaUbde2QC. See also http://www.nytimes.com/2012/05/13/arts/television/networks-rely-on-audience-research-to-choose-shows.html
- https://www.brandwatch.com/de/wp-content/uploads/2012/01/social_media_in_videogames.pdf
- https://aboutschwab.com/press/research and http://blog.news360.com/2014/05/content-marketing-all-star-qa-with-helen-loh-of-charles-schwab/