
CHAPTER SUMMARIES

WHY STUDY MARKETING RESEARCH?

A. Marketing research is the systematic inquiry that provides information to guide marketing decisions. 1). The expanded American Marketing Association definition—“a process of determining, acquiring, analyzing and synthesizing, and disseminating relevant marketing data, information, and insights to decision makers in ways that mobilize the organization to take appropriate marketing actions, that, in turn, maximize business performance.”

3). Decision makers rely on information to make more efficient and effective use of their marketing budgets. 4). At no other time in our history has so much attention been placed on measuring and enhancing return on marketing investment (ROMI). a. When we measure ROMI, we calculate the financial return for all marketing expenditures. b. Several factors (e.g., the number of qualified prospects, the rate of conversion of prospects to customers, etc.) are factored into the ROMI equation, as sketched below. c. Increasingly, organizational managers want to know what marketing strategies and tactics capture the most revenue.
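To make the ROMI idea concrete, here is a minimal sketch in Python, assuming a simple incremental-profit version of the ratio; the spend, prospect count, conversion rate, revenue, and margin figures are hypothetical and used only for illustration.

    # Hedged illustration of a simple ROMI calculation (hypothetical numbers).
    marketing_spend = 250_000.0        # total marketing expenditure for the period
    qualified_prospects = 8_000        # prospects generated by the campaign
    conversion_rate = 0.08             # share of prospects who become customers
    revenue_per_customer = 1_200.0     # average revenue per new customer
    margin = 0.40                      # contribution margin on that revenue

    customers = qualified_prospects * conversion_rate
    incremental_profit = customers * revenue_per_customer * margin
    romi = (incremental_profit - marketing_spend) / marketing_spend

    print(f"Customers won: {customers:.0f}")
    print(f"Incremental profit: ${incremental_profit:,.0f}")
    print(f"ROMI: {romi:.2f} (about {romi:.0%} return on marketing spend)")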

5). Marketing research plays an important part in the new technological environment faced by businesses. 6). Marketing research expenditures are increasingly scrutinized for their contribution to ROMI. 7). A management dilemma—the problem or opportunity that requires a marketing decision.

WHEN IS RESEARCH UNNECESSARY?

A. Value versus no value 1). Marketing research has an inherent value only to the extent that it helps make better decisions that help achieve organizational goals. 2). Information for information’s sake is not associated with professional marketing research. a. If a study does not help management select more effective, more efficient, less risky, or more profitable alternatives than would otherwise be the case, its use should be questioned. 3). Management may have insufficient resources (time, money, or skill) to conduct an appropriate study or may face a low level of risk associated with the decision at hand. 4). In the situation above, it is valid to avoid marketing research and its associated costs in time and money.


5). Marketing research finds its justification in the contribution it makes to the decision maker’s task and to the bottom line.

WHEN IS RESEARCH ESSENTIAL?

A. Introduction 1). When considering the following four examples (mini-cases), answer these questions: a. What is the marketing decision-making dilemma facing the manager? b. What must the researcher accomplish?

G. What Types of Research Should be Considered? 1). The different types of marketing research can be categorized as being: a. Reporting. b. Descriptive. c. Explanatory. d. Predictive.

2). Reporting a. A reporting study may be made only to provide an account or summation of some data or to generate some statistics. b. This task might be quite simple and the data readily available. c. A reporting study calls for knowledge of and skill with information sources and gatekeepers of information sources. d. Purists claim that reporting studies do not qualify as research. e. Others believe that investigative reporting does qualify as research.

3). Descriptive a. A descriptive study tries to discover answers to the questions who, what, when, where, and, sometimes, how. b. The researcher attempts to describe or define a subject, often by creating a profile of a group of problems, people, or events. c. Such studies may involve the collection of data and the creation of a distribution of the number of times the researcher observes a single event, act, or characteristic (known as a research variable), or they may involve relating the interaction of two or more variables. d. Descriptive studies may or may not have the potential for drawing powerful inferences. e. The descriptive study is popular in marketing research because of its versatility in numerous management dilemmas. f. Descriptive studies assist management in planning, monitoring, and evaluating.


4). Explanatory a. An explanatory study goes beyond description and attempts to explain the reasons for the phenomenon that the descriptive study only observed. b. Research that studies the relationship between two or more variables is also referred to as a correlational study. c. In an explanatory study, the researcher uses theories or at least hypotheses to account for the forces that caused a certain phenomenon to occur. d. A hypothesis can be thought of as one plausible explanation for a result.

5). Predictive a. It is desirable in marketing to be able to predict when and in what situations an event will occur—this is the predictive study. b. Managers would like to be able to control a phenomenon once they can explain and predict it. c. Being able to replicate a scenario and dictate a particular outcome is the objective of control. d. Control is the logical outcome of prediction. e. The complexity of the phenomenon and the adequacy of the prediction theory largely decide the success in a control study.

The steps that might be followed in a scientific inquiry include: a. A problem is encountered. b. There are struggles to state the problem. c. Hypotheses are proposed to explain the facts that are logically related to the problem. d. Outcomes or consequences of the hypotheses are deduced. e. Several rival hypotheses are formulated. f. The researcher devises and conducts a crucial empirical test with various possible outcomes. g. A conclusion (an inductive inference) is drawn based on acceptance or rejection of the hypotheses. h. Information is fed back into the original problem, modifying it according to the strength of the evidence.

THE LANGUAGE OF RESEARCH

A. When we do research, we seek to know what is in order to understand, explain, and predict phenomena. 1). The questions developed require the use of concepts, constructs, and definitions.

B. Concepts.


1). A concept is a bundle of meanings or characteristics associated with certain concrete, unambiguous events, objects, conditions, or situations. 2). Concepts are created by classifying and categorizing objects or events that have common characteristics beyond any single observation. 3). We abstract meanings from our experiences and use words as labels to designate them. 4). Sources of Concepts. a. We acquire concepts through personal experience. b. We have problems with an uncommon concept or a newly advanced idea. c. We try to solve this problem by borrowing from other languages (for example, gestalt) or from other fields (for example, from art, impressionism). d. Sometimes we need to adopt new meanings for words or develop new labels for concepts. 5). Importance to Research. a. Special problems grow out of the need for concept precision and inventiveness. b. The success of research hinges on: 1]. How clearly we conceptualize. 2]. How well others understand the concepts we use.

C. Constructs. 1). Concepts have progressive levels of abstraction. a. Abstract concepts are often called constructs. b. A construct is a definition specifically invented to represent an abstract phenomenon for a given research project. 2). Concepts and constructs are easily confused. 3). Hypothetical constructs are constructs inferred only from data; their presumed existence must be tested. 4). A conceptual scheme describes the interrelationships between concepts and constructs. D. Definitions. 1). Confusion about the meaning of concepts can destroy a research study’s value without the researcher or client even knowing it. a. Definitions are one way to reduce this danger. 2). Researchers struggle with two types of definitions: a. Dictionary definitions. b. Operational definitions. 3). To be of value, definitions need to be rigorous. 4). Operational Definitions. a. An operational definition defines a variable in terms of specific measurement and testing criteria.

b. Operational definitions may vary, depending on your purpose and the way you choose to measure them. c. Whether you use a dictionary definition or an operational definition, its purpose in research is basically the same—to provide an understanding and measurement of concepts.

E. Variables. 1). The term variable is used as a synonym for construct or the property being studied. 2). A variable is an event, act, characteristic, trait, or attitude that can be measured and to which we assign categorical values. a. For purposes of data entry and analysis, we assign numerical values to a variable based on the variable’s properties. b. Some variables may be dichotomous and have two values. c. Other variables may be described as being discrete since they have only certain values. d. Income, temperature, age, and test scores are examples of continuous variables, as they may take on values within a given range or, in some cases, an infinite set.

3). Independent and Dependent Variables. a. Researchers are most interested in relationships among variables. b. Independent variable (IV) or predictor variable is a variable manipulated by the researcher, thereby causing an effect on the dependent variable. c. Dependent variable (DV) or criterion variable is a measured, predicted, or otherwise monitored variable expected to be affected by manipulation of an independent variable.

4). Moderating Variables. a. In each relationship, there is at least one independent variable and a dependent variable. b. A moderating variable (MV) is a second independent variable, believed to have a significant contributory or contingent effect on the originally stated IV-DV relationship. c. Whether a given variable is treated as an independent or as a moderating variable depends on the hypothesis.

5). Extraneous Variables. a. An almost infinite number of extraneous variables (EVs) exist that might conceivably affect a given relationship. 1]. An extraneous variable (EV) is a variable that is assumed irrelevant to, or is excluded from, a research study. 2]. Fortunately, these variables have little or no effect on a given situation. b. However, there may be some extraneous variables to consider as possible confounding variables to hypothesized IV-DV relationships. c. A control variable is a variable introduced to help interpret the relationship between variables.


6). Intervening Variables. a. An intervening variable (IVV) is a factor that affects the observed phenomenon but cannot be measured or manipulated.

b. Its effect must be inferred from the effects of the independent and moderator variables on the observed phenomenon.

F. Propositions and Hypotheses. 1). A proposition is a statement about observable phenomena that may be judged as true or false. 2). When a proposition is formulated for empirical testing, it is called a hypothesis. a. As a declarative statement, a hypothesis is of a tentative and conjectural nature. b. Hypotheses have been described as statements in which we assign variables to cases. 1]. A case is the entity or thing the hypothesis talks about. 2]. The variable is the characteristic, trait, or attribute that, in the hypothesis, is imputed to the case.

3). Descriptive Hypotheses. a. Descriptive hypotheses are statements about the existence, size, form, or distribution of a variable. b. Researchers often use a research question rather than a descriptive hypothesis.

c. Descriptive hypotheses have the following advantages over the research question format (or other formats): 1]. They encourage researchers to crystallize their thinking about the likely relationships to be found. 2]. They encourage researchers to think about the implications of a supported or rejected finding. 3]. They are useful for testing statistical significance.

4). Relational Hypotheses. a. The research question format is less used with a situation calling for relational hypotheses. b. A relational hypothesis is a statement about the relationship between two variables with respect to some case. c. Relationships can be: 1]. Correlational—an unspecified relationship. 2]. Explanatory (causal)—predictable relationship. d. Correlational hypotheses are statements indicating that variables occur together in some specified manner without implying that one causes the other. e. Explanatory (causal) hypotheses are statements that describe a relationship between two variables in which one variable leads to a specified effect on the other variable. 1]. In proposing or interpreting causal hypotheses, the researcher must consider the direction of the influence. a). Sometimes the ability to identify the direction of influence depends on the research design.

5). The Role of Hypotheses. a. In research, a hypothesis serves several important functions: 1]. It guides the direction of the study. 2]. It identifies facts that are relevant and those that are not. 3]. It suggests which form of research design is likely to be most appropriate. 4]. It provides a framework for organizing the conclusions that result. b. What is a strong hypothesis? 1]. A strong hypothesis should fulfill three conditions: a). Adequate for its purpose. b). Testable. c). Better than its rivals.


THE MARKETING RESEARCH PROCESS.

A. Writers usually treat the research task as a sequential process involving several clearly defined steps. 1). The research process includes the various decision stages involved in a research project and the relationship between those stages. 2). According to the authors, with respect to the research process, the management question—its origin, selection, statement, exploration, and refinement—is the critical activity in the sequence. 3). According to Albert Einstein, “The formulation of a problem is far more essential than its solution, which may be merely a matter of mathematical or experimental skill. To raise new questions, new possibilities, to regard old problems from a new angle requires creative imagination and marks real advance in science.”

2. STAGE 1: CLARIFYING THE RESEARCH QUESTION

A. The management-research question hierarchy. 1). The management-research question hierarchy is a process of sequential question formulation that leads a manager or researcher from the management dilemma to investigative questions. 2). The process begins with the management dilemma—the problem or opportunity that requires a marketing decision. 3). The management dilemma is usually a symptom of an actual problem, such as: a. Rising costs. b. The discovery of an expensive chemical compound that would increase the efficacy of a drug. c. Increasing tenant move-outs from an apartment complex. d. Declining sales. e. A larger number of product defects during the manufacture of an automobile. f. An increasing number of letters and phone complaints about postpurchase service. 4). The management dilemma can also be triggered by an early signal of an opportunity or growing evidence that a fad may be gaining staying power. a. Identifying management dilemmas is rarely difficult. b. Choosing one dilemma on which to focus may be difficult. c. Choosing incorrectly may result in a waste of time and resources. d. Experienced managers claim that practice makes perfect in this area. e. New managers may wish to develop several management-research question hierarchies, each starting with a different management dilemma. 5). Subsequent stages of the hierarchy take the marketer and his or her research collaborator through various brainstorming and exploratory research exercises to define the following:


a. Management question—the management dilemma restated in question format. b. Research question(s)—the hypothesis that best states the objective of the research; the question(s) that focuses the researcher’s attention. c. Investigative questions—questions the researcher must answer to satisfactorily answer the research question; what the marketer feels he or she needs to know to arrive at a conclusion about the management dilemma. d. Measurement questions—the questions asked of the participants or the observations that must be recorded. 6). The definition of the management question sets the research task.

DESIGNING THE RESEARCH PROJECT.

A. Research Design. 1). The research design is the blueprint for fulfilling objectives and providing the insight to answer management’s dilemma. a. The field of marketing research offers a large variety of methods, techniques, procedures, and protocols. b. The numerous alternatives and combinations spawned by the abundance of tools may be used to construct alternative perspectives on the same problem.

B. Sampling Design. 1). Another step in planning the research project is to identify the target population (those people, events, or records that have the desired information and can answer the measurement questions) and then determine whether a sample or a census is desired. 2). A census is a count of all elements in a population. 3). A sample is a group of cases, participants, events, or records constituting a portion of the target population, carefully selected to represent that population. a. Probability sampling (every element of the target population has a known, nonzero chance of selection) and nonprobability sampling may be used to construct the sample.

C. Pilot testing. 1). The last step in a research design is often a pilot test. a. To condense the project time frame, this step can be skipped. 2). A pilot test is conducted to detect weaknesses in research methodology and the data collection instrument, as well as provide proxy data for selection of a probability sample. a. The pilot test should approximate the anticipated actual research situation (test) as closely as possible. b. A pilot test may have from 25 to 100 subjects and these subjects do not have to be statistically selected. 3). Pilot testing has saved countless survey studies from disaster by using the suggestions of the participants to identify and change confusing, awkward, or offensive questions and techniques.

5. STAGE 4: DATA COLLECTION AND PREPARATION.

A. The gathering of data can be accomplished through a variety of data collection alternatives. 1). Questionnaires, standardized tests, and observational forms (called checklists) are among the devices used to record raw data. 2). What are data? a. Data can be the facts presented to the researcher from the study’s environment. b. Data can be characterized by their abstractness, verifiability, elusiveness, and closeness to the phenomenon. 3). Data, as abstractions, are more metaphorical than real. 4). Data are processed by our senses. 5). Capturing data is elusive. 6). Data reflect their truthfulness by their closeness to the phenomenon. a. Secondary data are data originally collected to address a problem other than the one which requires the manager’s attention at the moment. b. Primary data are data the researcher collects to address the specific problem at hand—the research question. c. Data are the information collected from participants, by observation, or from secondary sources. 7). Data are edited to ensure consistency across respondents and to locate omissions. a. In the case of a survey, editing reduces errors in the recording, improves legibility, and clarifies unclear and inappropriate responses. b. Coding is used to reduce the responses to a more manageable system for processing and storage.

6. STAGE 5: DATA ANALYSIS AND INTERPRETATION.

A. Managers need information and insights, not raw data, to make appropriate marketing decisions. 1). Researchers generate information and insights by analyzing data after collection. 2). Data analysis involves editing, reducing, and summarizing data, looking for patterns, and applying statistical techniques. 3). Increasingly, managers are asking research specialists to make recommendations based on their interpretation of the data.

7. STAGE 6: REPORTING THE RESULTS.

A. As the marketing research process draws to a close, it is necessary to prepare a report and transmit the findings, insights, and recommendations to the manager for the intended purpose of decision making. 1). The researcher adjusts the style and organization of the report according to the target audience, the occasion, and the purpose of the research. a. The report should be manager-friendly and avoid technical jargon. b. Reports should be developed from the manager’s or information user’s perspective. 2). The researcher must accurately assess the manager’s needs throughout the research process and incorporate this understanding into the final product, the research report. 3). To avoid having the research report shelved with no action taken, the researcher should strive for: a. Insightful adaptation of the information to the client’s needs. b. Careful choice of words in crafting interpretations, conclusions, and recommendations. 4). When research is contracted to an outside supplier, managers and researchers increasingly collaborate to develop appropriate reporting of project results and information. 5). At a minimum, a research report should contain: a. An executive summary consisting of a synopsis of the problem, findings, and recommendations. b. An overview of the research: the problem’s background, a summary of exploratory findings drawn from secondary data sources, the actual research design and procedures, and conclusions. c. A section on implementation strategies for the recommendations. d. A technical appendix with all the materials necessary to replicate the project.


Clarifying the Research Question through Secondary Data and Exploration

1. A SEARCH STRATEGY FOR EXPLORATION

A. Exploration is particularly useful when researchers lack a clear idea of the problems they will meet during the study. 1). Through exploration researchers develop concepts more clearly, establish priorities, develop operational definitions, and improve the final research design. a. Exploration may save time and money. b. Exploration is needed when studying new phenomena or situations. c. Exploration is often, however, given less attention than it deserves.

2). The exploratory phase search strategy usually comprises one or more of the following: a. Discovery and analysis of secondary sources such as published studies, document analysis, and retrieval of information from organizations' databases. b. Interviews with those knowledgeable about the problem or its possible solutions (called expert interviews). c. Interviews with individuals involved with the problem (called individual depth interviews (IDIs)—a type of interview that encourages the participant to talk extensively, sharing as much information as possible). d. Group discussion with individuals involved with the problem or its possible solutions (including informal groups, as well as formal techniques such as focus groups or brainstorming). 3). Most researchers find a review of secondary sources critical to moving from management question to research question. 4). In the exploratory research (e.g., research to expand understanding of an issue, problem, or topic) phase of a project, the objective might be to accomplish the following: a. Expand your understanding of the management dilemma by looking for ways others have addressed and/or solved problems similar to your management dilemma or management question. b. Gather background information on your topic to refine the research question. c. Identify information that should be gathered to formulate investigative questions. d. Identify sources for and actual questions that might be used as measurement questions. e. Identify sources for and actual sample frames (lists of potential participants) that might be used in sample design.


5). In most cases, the exploration phase will begin with a literature search—a review of books, articles, research studies, or Web-published materials related to the proposed study. 6). In general, a literature search has five steps: a. Define your management dilemma or management question. b. Consult encyclopedias, dictionaries, handbooks, and textbooks to identify key terms, people, or events relevant to the management dilemma or management question. c. Apply these key terms, names of people, or events in searching indexes, bibliographies, and the Web to identify specific secondary sources. d. Locate and review specific secondary sources for relevance to your management dilemma. e. Evaluate the value of each source and its content. 7). Often the literature search leads to the research proposal. a. This proposal covers at minimum a statement of the research question and a brief description of the proposed research methodology. b. The proposal summarizes the findings of the exploratory phase of the research, usually with a bibliography of secondary sources that have led to the decision to propose a formal research study.

B. Levels of Information. 1). Information sources are generally categorized into three levels: a. Primary sources. b. Secondary sources. c. Tertiary sources. 2). Primary sources are original works of research or raw data without interpretation or pronouncements that represent an official opinion or position. a. Primary sources are always the most authoritative because the information has not been filtered or interpreted by a second party. 3). Secondary sources are interpretations of primary data. a. Nearly all reference materials fall into this category. b. A firm searching for secondary sources can search either internally or externally.

4). Tertiary sources are aids to discover primary or secondary sources or an interpretation of a secondary source. a. These sources are generally represented by indexes, bibliographies, or Internet search engines. 5). It is important to remember that all information is not of equal value. a. Primary sources are the most valuable.

2. THE QUESTION HIERARCHY: HOW AMBIGUOUS QUESTIONS BECOME ACTIONABLE RESEARCH

A. The process we call the management-research question hierarchy is designed to move the researcher through various levels of questions, each with a specific function within the overall marketing research process.

B. The Management Question. 1). The management question is seen as the management dilemma restated in question format. a. The management questions that evolve from the management dilemma are too numerous to list; however, they are categorized in Exhibit 5-7.

2). Exploration. a. Note that the exploration stage is exemplified with an illustration that describes how BankChoice goes through the exploration process. b. BankChoice ultimately decides to conduct a survey of local residents. 1). The process would most likely begin with an exploration of books and periodicals. 2). Once researchers become familiar with the literature, interviews with experts in the field would occur. c. An unstructured exploration allows the researcher to develop and revise the management question and determine what is needed to secure answers to the proposed question. C. The Research Question. 1). A research question(s) is the objective of the research study. a. It is a more specific management question that must be answered. b. Incorrectly defining the research question is the fundamental weakness in the marketing research process. c. As stated by Peter Drucker, “The most serious mistakes are not being made as a result of wrong answers. The truly dangerous thing is asking the wrong questions.” 2). Fine-Tuning the Research Question. a. Fine-tuning the question is precisely what a skillful practitioner must do after the exploration is complete. b. At this point the research project begins to crystallize in one of two ways: 1). It is apparent the question has been answered and the process is finished. 2). A question different from the one originally addressed has appeared. c. Other research-related activities that should be addressed at this stage are: 1). Examine the variables to be studied. 2). Review the research questions with the intent of breaking them down into specific second- and third-level questions. 3). If hypotheses (tentative explanations) are used, be certain they meet the quality test mentioned in Chapter 3. 4). Determine what evidence must be collected to answer the various questions and hypotheses. 5). Set the scope of the study by stating what is not a part of the research question. a). This will establish a boundary to separate contiguous problems from the primary objective.


D. Investigative Questions. 1). Investigative questions are questions the researcher must answer to satisfactorily arrive at a conclusion about the research question. 2). Typical investigative question areas include: a. Performance considerations. b. Attitudinal issues (like perceived quality). c. Behavioral issues.

E. Measurement Questions. 1). Measurement questions are the questions asked of participants or the observations that must be recorded. 2). Measurement questions should be outlined by the completion of the project planning activities but usually await pilot testing for refinement. 3). Two types of measurement questions are common in marketing research: a. Predesigned, pretested questions. b. Custom-designed questions. 4). Predesigned measurement questions are questions that have been formulated and tested previously by other researchers. a. Such questions provide enhanced validity and can reduce the cost of the project. 5). Custom-designed measurement questions are questions formulated specifically for the project at hand. a. These questions are collective insights from all the activities in the marketing research process completed to this point, particularly insights from exploration.

WHAT IS RESEARCH DESIGN?

A. The topics covered by the term research design are wide-ranging as are the definitions of research design. 1). Research design (definition used in this text) is the blueprint for fulfilling research objectives and answering questions. 2). Components of the definition include: a. An activity and time-based plan. b. A plan always based on the research question. c. A guide for selecting sources and types of information. d. A framework for specifying the relationships among the study’s variables. e. A procedural outline for every research activity.

2. CLASSIFICATION OF DESIGNS

A. Early in any research study, one faces the task of selecting the specific design to use.

B. Degree of Research Question Crystallization. 1). A study may be viewed as exploratory or formal. a. The essential distinctions between these two options are the degree of structure and the immediate objective of the study. b. Exploratory study (exploration) is a loosely structured study or a phase in a research project designed to expand understanding of a topic, provide insights and possible explanations, or discover future research tasks. 1]. The immediate purpose is usually to develop hypotheses or questions for further research. c. A formal study is a research-question driven process involving precise procedures for data collection and interpretation. 1]. The formal study begins where the exploration leaves off. 2]. The goal of a formal research design is to test the hypotheses or answer the research questions posed. 2). The exploratory-formal study dichotomy recognizes that all studies have elements of exploration in them and few studies are completely uncharted.

C. The Topical Scope. 1). A statistical study attempts to capture a population’s characteristics by making inferences from a sample’s characteristics and then testing resulting hypotheses. a. Hypotheses are tested quantitatively. b. Generalizations about findings are presented based on the representativeness of the sample and the validity of the design. 2). A case study is a full contextual analysis of a few events or conditions and their interrelations. a. Although hypotheses are often used, the reliance on qualitative data makes support or rejection more difficult. b. An emphasis on detail provides valuable insight for problem solving, evaluation, and strategy. c. Case studies have been maligned as “scientifically worthless” because they do not meet minimal design requirements for comparison. d. Marketers do find case studies useful in explaining or debunking phenomena.

D. The Purpose of the Study. 1). The essential difference between descriptive and causal studies lies in their objectives. 2). A descriptive study discovers answers to the questions who, what, when, where, or how much. 3). If the study is concerned with learning why, it is concerned with a causal study—attempts to reveal a causal relationship between variables.


E. Researcher Control of Variables. 1). The researcher’s ability to manipulate variables is divided typically between experimental and ex post facto designs. 2). In an experiment the study involves manipulation of one or more variables to determine the effect on another variable. a. An example would be a split test wherein split mailings allow direct marketers to test different approaches to transmitting their messages to consumers. 3). With an ex post facto study (design), investigators have no control over the variables in the sense of being able to manipulate them. a. This design is an after-the-fact report on what happened to the measured variable. b. Researchers can only report what has happened or what is happening. c. An example would be a study on retail store design.

F. Method of Data Collection. 1). This classification distinguishes between monitoring and the communication process. 2). Monitoring is a study in which the researcher inspects the activities of a participant or the nature of some material without eliciting responses from the participant. a. An example would be traffic counts at an intersection. 3). In the communication study, the researcher questions the participants and collects their responses by personal or impersonal means. 4). Data from a communication study may result from: a. Interviews or telephone conversations. b. Self-administered or self-reported instruments sent through the mail, left in convenient locations, or transmitted electronically or by other means. c. Instruments presented before and/or after a treatment or stimulus condition in an experiment.

G. The Time Dimension. 1). Cross-sectional study—the study is conducted only once and reveals a snapshot of one point in time. 2). Longitudinal study—a study that includes repeated measures over an extended period of time. 3). Some types of information once collected cannot be collected a second time from the same person without risk of bias. 4). While longitudinal research is important, the constraints of budget and time impose the need for cross-sectional analysis.

H. The Research Environment. 1). Field conditions—the actual environmental conditions where the dependent variable occurs. 2). Laboratory conditions—studies that occur under conditions that do not simulate actual environmental conditions.


3). Simulations—a study in which the conditions of a system or process are replicated.

3. EXPLORATORY STUDIES

A. Exploration is particularly useful when researchers lack a clear idea of the problems they will meet during the study. 1). Through exploration researchers develop concepts more clearly, establish priorities, develop operational definitions, and improve the final research design. 2). Other reasons for exploration include: a. Investigating new problem areas about which little is yet known. b. Helping to formulate hypotheses. 3). Exploration receives less attention than it deserves.

B. Exploratory Techniques. 1). The objectives of exploration may be accomplished with qualitative and quantitative techniques. 2). Qualitative techniques (nonquantitative data collection used to increase understanding of a topic) are relied on heavily. 3). A variety of approaches are adaptable for exploratory investigations of management questions: a. Interviewing. b. Participant observing. c. Films, photographs, and videotape. d. Projective techniques and psychological testing. e. Case studies. f. Street ethnography. g. Elite or expert interviewing. h. Document analysis. i. Proxemics and kinesics. 4). An exploratory study is finished when the researchers have achieved the following: a. Established the range and scope of possible management decisions. b. Established the major dimensions of the research task. c. Defined a set of subsidiary investigative questions that can be used as guides to a detailed research design. d. Developed several hypotheses about possible causes of a management dilemma. e. Learned that certain other hypotheses are such remote possibilities that they can be safely ignored in any subsequent study. f. Concluded additional research is not needed or is not feasible. 5). Secondary Data Analysis. a. The first step in an exploratory study is a search of secondary sources, sometimes called a literature search. b. Secondary data are studies done by others and for different purposes than the one for which the data are being reviewed. c. Primary data come from original research in which the data are collected specifically to answer the research question. d. Within secondary data exploration, a researcher should start first with an organization's own data archives. e. A second source of secondary data is published documents prepared by authors outside the sponsor organization.

6). Experience Survey. a. The experience survey is semistructured or unstructured interviews with experts on a topic or dimension of a topic. b. This may also be called the expert interview or key informant survey. c. The product of the experience survey may be a new hypothesis, the discarding of old notions, or information about the practicality of doing the study.

7). Focus Groups. a. A focus group is a discussion on a topic involving a small group of participants led by a trained moderator. b. A focus group usually contains 6-10 participants. c. One topical objective of a focus group is a new product or product concept. d. The most common application of focus group research continues to be in the consumer area.

4. DESCRIPTIVE STUDIES

A. Formal studies serve a variety of research objectives: 1). Descriptions of phenomena or characteristics associated with a subject population (the who, what, when, where, and how of a topic). 2). Estimates of the frequency of appearance and the population that has these characteristics. 3). Discovery of associations among different variables.

B. Correlation is the relationship by which two or more variables change together, such that systematic changes in one accompany systematic changes in the other. 1). The relationship is measured statistically with an index that represents how closely two variables covary—in unison or opposition. 2). A descriptive study may be simple or complex. 3). The simplest descriptive study concerns a question or hypothesis in which we ask or state something about the size, form, distribution, or existence of a variable.

5. CAUSAL STUDIES


A. What is causation? 1). The essential element of causation (i.e., a situation where one variable leads to a specified effect on the other variable) is that some external factor “produces” a change in the dependent variable. 2). Since causation is difficult to prove conclusively, we create inferences and these probability statements assist in predicting what will likely happen. 3). If we consider the possible relationships that can occur between two variables, we can conclude there are three possibilities: a. Symmetrical relationship—when two variables vary together but without causation. b. Reciprocal relationship—when two variables mutually influence or reinforce each other. c. Asymmetrical relationship—when a change in one variable (IV) is responsible for a change in another variable (DV). 4). Independence and dependence of variables are assessed by: a. The degree to which each variable may be altered. b. The time order between the variables. 5). Four types of asymmetrical relationships are shown in Exhibit 8-3 to be: a. Stimulus-response. b. Property-disposition. c. Disposition-behavior. d. Property-behavior.

B. Testing Causal Hypotheses. 1). When testing causal hypotheses, we seek three types of evidence: a. Covariation between A and B. b. Time order of events moving in the hypothesized direction. c. No other possible causes of B. 2). Causation and Experimental Design. a. To be convincing, inferences from experimental designs must meet two other requirements: 1]. Control—when all factors but the IV are held constant and not confounded with another variable that is not part of the study. 2]. Random assignment—uses a randomized list of participants for assigning participants to test groups. b. To balance what is being studied, control groups (a group of participants that is measured but not exposed to the independent variable being studied) are used. c. Another control mechanism is matching (an equalizing process for assigning participants to experimental and control groups). d. Still other possibilities exist for confounding variables.

3). Causation and Ex Post Facto Design. a. Many research studies cannot be carried out experimentally by manipulating variables; however, we are still interested in causation. b. The ex post facto design is widely used in marketing research and often is the only approach feasible. 1). In particular, one seeks causal explanations between variables that are impossible to manipulate and subjects that usually cannot be assigned to treatment and control groups.

Basic Sampling Concepts

1. THE NATURE OF SAMPLING

A. Most people intuitively understand the idea of sampling. 1). Sampling is the process of selecting some elements from a population to represent that population. 2). A population element is the individual participant or object on which the measurement is taken. 3). A population is all elements about which we wish to make some inferences. 4). A census is a count of all the elements in a population. 5). A sampling frame is a list of elements in the population from which the sample is actually drawn.

B. Why Sample? 1). Compelling reasons for sampling include: a. Lower Cost. b. Greater Accuracy of Results. c. Greater Speed of Data Collection. d. Availability of Population Elements. e. Sample versus Census.

C. What Is a Good Sample? 1). The ultimate test of a good sample design is how well it represents the characteristics of the population it purports to represent. a. In measurement terms, the sample must be valid. b. Validity depends on two considerations: accuracy and precision.

2). Accuracy. a. Accuracy is the degree to which bias is absent from the sample. b. For the desired effect (just as many above as there are below the measured variable) to occur, there must be enough elements in the sample. c. An accurate (unbiased) sample is one in which the underestimators offset the overestimators. d. Systematic variance is a variation that causes measurements to skew in one direction or another. 1]. Increasing the sample size can reduce systematic variance as a cause of error.


3). Precision. a. The numerical descriptors that describe samples may be expected to differ from those that describe populations because of random fluctuations inherent in the sampling process. b. This is called sampling error (or random sampling error) and reflects the influence of chance in drawing the sample members. In short, it is the error caused by the sampling process. c. Sampling error is what is left after all known sources of systematic variance have been accounted for. d. Precision is measured by the standard error of estimate, a type of standard deviation measurement; the smaller the standard error of estimate, the higher is the precision of the sample.
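A brief sketch of how precision could be quantified, assuming the standard error of the mean as the standard error of estimate described above; the sample values are invented for illustration.

    import statistics

    # Hypothetical sample of, say, monthly purchase amounts.
    sample = [42, 51, 38, 47, 55, 44, 49, 52, 40, 46]

    n = len(sample)
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)      # sample standard deviation (n - 1 denominator)
    standard_error = s / n ** 0.5     # smaller standard error -> higher precision

    print(f"n = {n}, mean = {mean:.2f}, s = {s:.2f}, SE = {standard_error:.2f}")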

D. Types of Sample Design. 1). The researcher makes several decisions when designing a sample. 2). Sampling decisions flow from two decisions made in the formation of the management-research question hierarchy: a. The nature of the management question. b. The specific investigative questions that evolve from the research question. 3). These decisions are influenced by requirements of the project and its objectives, level of risk the marketer can tolerate, budget, time, available resources, and culture.

4). Representation. a. The members of a sample are selected using probability or nonprobability procedures. b. Nonprobability sampling is an arbitrary and subjective sampling procedure where each population element does not have a known, nonzero chance of being included. 1]. Early Internet samples had all the drawbacks of nonprobability samples. c. The key difference between nonprobability and probability samples is the term random. d. Probability sampling is a controlled, randomized procedure that assures that each population element is given a known, nonzero chance of selection. 1]. This procedure is never haphazard. 2]. Only probability samples provide estimates of precision. 5). Element Selection. a. If each sample element is drawn individually from the population at large, it is an unrestricted sample. b. Restricted sampling covers all other forms of sampling.

2. STEPS IN SAMPLING DESIGN

A. Questions to answer in securing a sample include: 1). What is the target population?


2). What are the parameters of interest? 3). What is the sampling frame? 4). What is the appropriate sampling method? 5). What size sample is needed?

B. What is the Target Population? 1). The definition of the population may be apparent from the management problem or the research question(s), but often it is not. a. Does the population consist of individuals, households, or families or a combination of these?

C. What Are the Parameters of Interest? 1). Population parameters are summary descriptors of variables of interest in the population. 2). Sample statistics are descriptors of the relevant variables computed from sample data. a. Sample statistics are used as estimators of population parameters. b. Sample statistics are the basis of our inferences about the population. 3). When the variables of interest in the study are measured on interval or ratio scales, we use the sample mean to estimate the population mean and the sample standard deviation to estimate the population standard deviation. 4). The population proportion of incidence is the number of category elements divided by the number of elements in the population. a. Proportion measures are necessary for nominal data.

D. What Is the Sampling Frame? 1). The sampling frame is closely related to the population. a. It is the list of elements from which the sample is actually drawn. b. Ideally, it is a complete and correct list of population members only. c. As a practical matter, however, the sampling frame often differs from the theoretical population. 1]. A screening procedure can reduce this problem.

E. What Is the Appropriate Sampling Method? 1). The researcher faces a basic choice: a probability or nonprobability sample. 2). In choosing a probability sample, the researcher must follow appropriate procedures, so that: a. Interviewers and others cannot modify the selections made. b. Only the selected elements from the original sampling frame are included. c. Substitutions are excluded except as clearly specified and controlled according to predetermined decision rules. 3). Despite all the care, the actual sample achieved will not match perfectly the sample that is originally drawn.


F. What Size Sample is Needed? 1). In probability sampling, how large a sample should be is a function of the variation in the population parameters under study and the estimating precision needed by the researcher. 2). Cost considerations influence decisions about the size and type of sample and the data collection methods.

3. PROBABILITY SAMPLING

A. Simple Random Sampling. 1). The unrestricted, simple random sample is the purest form of probability sampling. 2). The simple random sample is a probability sample in which each element has a known and equal chance of selection. a. Formula: Probability of Selection = Sample Size/Population Size.
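A minimal sketch of drawing an unrestricted simple random sample, assuming a hypothetical sampling frame of 1,000 customer IDs; the selection-probability formula above is computed alongside.

    import random

    population = [f"cust_{i:04d}" for i in range(1, 1001)]    # hypothetical frame of 1,000 elements
    sample_size = 50

    random.seed(7)                                            # fixed seed for a reproducible illustration
    sample = random.sample(population, sample_size)           # each element has an equal chance of selection

    probability_of_selection = sample_size / len(population)  # 50 / 1,000 = 0.05
    print(f"Probability of selection: {probability_of_selection:.2%}")
    print(sample[:5])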

B. Complex Probability Sampling. 1). Simple random sampling is often impractical. 2). A more efficient sample in a statistical sense is one that provides a given precision with a smaller sample size. 3). Four methods are: a. Systematic sampling. b. Stratified sampling. c. Cluster sampling. d. Double sampling.

4). Systematic Sampling. a. Systematic sampling—probability sample drawn by applying a calculated skip interval to a sample frame. b. Skip interval—interval between sample elements drawn from a sample frame in systematic sampling. c. The major advantage of systematic sampling is its simplicity and flexibility. d. Systematic sampling can introduce subtle biases—periodicity or monotonic trend. e. While systematic sampling has some theoretical problems, from a practical point of view it is usually treated as a simple random sample. 5). Stratified Sampling. a. Most populations can be segregated into several mutually exclusive subpopulations, or strata. b. Stratified random sampling—a probability sample that includes elements from each of the mutually exclusive strata within a population. c. This method is chosen: 1]. To increase a sample’s statistical efficiency. 2]. To provide adequate data for analyzing the various subpopulations or strata.

Page 25: BRM Chapter Summaries

3]. To enable different research methods and procedures to be used in different strata. d. The more strata used, the closer you come to maximizing interstrata differences and minimizing intrastratum variances. e. The size of the strata samples is calculated with two pieces of information: 1]. How large the total sample should be. 2]. How the total sample should be allocated among strata.

f. Proportionate versus Disproportionate Sampling. 1]. Proportionate stratified sampling—each stratum’s size is proportionate to the stratum’s share of the population. a). This approach is more popular than any of the other stratified sampling procedures. 2]. Any stratification that departs from the proportionate relationship is disproportionate stratified sampling—each stratum’s size is not proportionate to the stratum’s share of the population. 3]. A researcher makes decisions regarding disproportionate sampling by considering how a sample will be allocated among strata.
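A small sketch of proportionate stratified allocation, assuming three hypothetical usage strata and an invented total sample size; a disproportionate design would simply override these calculated allocations.

    # Proportionate stratified allocation: each stratum's sample share mirrors its population share.
    population_by_stratum = {"light users": 6_000, "medium users": 3_000, "heavy users": 1_000}
    total_sample = 400

    population_total = sum(population_by_stratum.values())
    allocation = {
        stratum: round(total_sample * size / population_total)
        for stratum, size in population_by_stratum.items()
    }
    print(allocation)   # {'light users': 240, 'medium users': 120, 'heavy users': 40}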

6). Cluster Sampling. a. Cluster sampling—divides the population into subgroups (clusters) and then randomly selects some of those subgroups for study. b. Two conditions foster the use of cluster sampling: 1]. The need for more economic efficiency than can be provided by simple random sampling. 2]. The frequent unavailability of a practical sampling frame for individual elements.

c. Area Sampling. 1]. Area sampling—cluster sampling technique applied to a population with well-defined political or geographic boundaries. 2]. This method overcomes the problems of both high sampling cost and the unavailability of a practical sampling frame for individual elements.

d. Design. 1]. In designing cluster samples, including area samples, we must answer several questions: a. How homogeneous are the resulting clusters? b. Shall we seek equal-sized or unequal-sized clusters? c. How large a cluster shall we take? d. Shall we use a single-stage or multistage cluster? e. How large a sample is needed? 7). Double Sampling. a. Double sampling (sequential sampling, multiphase sampling)—a procedure for selecting a subsample from a sample.


b. Cost is a factor in using this method.

4. NONPROBABILITY SAMPLING

A. Probability sampling is technically superior to nonprobability sampling. 1). With a subjective approach like nonprobability sampling, the probability of selecting population elements is unknown. 2). Even with its disadvantages, nonprobability sampling does have reasons for usage that should be considered by researchers.

B. Practical Considerations. 1). We may use nonprobability sampling procedures because they satisfactorily meet the sampling objectives. 2). Additional reasons for choosing nonprobability over probability sampling are cost and time. 3). Sometimes because of breakdowns in probability sampling application, nonprobability sampling may be the only feasible option.

C. Methods. 1). Convenience. a. Convenience samples—nonprobability sample where element selection is based on ease of accessibility. 1]. Examples—pools of friends and neighbors or a “man-on-the-street” interview. b. While a convenience sample has no controls to ensure precision, it may still be a useful procedure.

2). Purposive Sampling. a. A nonprobability sample that conforms to certain criteria is called purposive sampling. Types are: 1]. Judgment sampling. 2]. Quota sampling. b. Judgment sampling—purposive sampling where the researcher arbitrarily selects sample units to conform to some criterion. 1]. When used in the early stages of an exploratory study, a judgment sample is appropriate. c. Quota sampling—purposive sampling in which relevant characteristics are used to stratify the sample. 1]. We use it to improve representativeness. 2]. In most quota samples, researchers specify more than one control dimension. 3]. Illustrations of types of controls: a). Precision control. b). Frequency control.


4]. Weaknesses: a). The idea that quotas on some variables assume a representativeness on others is argument by analogy. b). Controls might be outdated. c). There is a practical limit to the number of controls operating simultaneously to ensure precision. d). The choice of subjects is left to field workers to make on a judgmental basis. 5]. Despite the problems with quota sampling, it is widely used by opinion pollsters and marketing and other researchers. 6]. Advocates of quota sampling argue that while there is some danger of systematic bias, the risks are usually not that great. a). Where predictive validity has been checked (e.g., in election polls), quota sampling has been generally satisfactory.

d. Snowball. 1]. Snowball sampling—subsequent participants are referred by current sample elements. 2]. The “snowball” gathers subjects as it rolls along. 3]. Variations on snowball sampling have been used to study drug cultures, teenage gang activities, power elites, et cetera where respondents are difficult to identify and contact.

Determining Sample Size

1. THE ROLE OF STATISTICS IN SAMPLING

A. A marketing researcher must continually make decisions about sample sizes with incomplete and imperfect information. 1). Statistical analysis helps us to mine information from data and also helps assess the information’s quality. 2). Key factors from Chapter 16 that relate to this chapter: a. The greater the dispersion or variance within the population, the larger the sample must be to provide estimation precision. The concept of variability is integral to descriptive statistics. b. The greater the desired precision of the estimate—the narrower the interval range, the larger the sample must be. Estimates are based on point and interval data that allow us to make inferences about populations from samples. c. The higher the confidence level in the estimate, the larger the sample must be. Both the range, which includes the unknown population mean or proportion, and our confidence in the population estimate are based on inferences derived from sample data.
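A short sketch of how the three factors above interact when estimating a mean, assuming the conventional formula n = (z * s / E)^2; the standard deviation estimate, desired precision, and confidence level are hypothetical.

    import math

    s = 12.0    # estimated population standard deviation (e.g., from a pilot test)
    E = 2.0     # desired precision: half-width of the confidence interval
    z = 1.96    # z value for 95% confidence

    n = math.ceil((z * s / E) ** 2)
    print(n)    # 139 -> larger s, smaller E, or higher confidence all push n upward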

B. Frequencies and Distributions. 1). Frequency distribution—ordered array of all values for a variable.


2). Frequency table—arrays category codes from lowest value to highest value, with columns for count, percent, percent adjusted for missing values, and cumulative percent.

3). The proportion is the percentage of elements in the distribution that meet a criterion. 4). The normal distribution is a frequency distribution of many natural phenomena; graphically shaped like a symmetrical curve. a. The distribution of values for any variable that has a normal distribution is governed by a mathematical equation. b. A standard normal distribution is the statistical standard for describing normally distributed sample data. 1]. This distribution has a mean of 0 and a standard deviation of 1. c. A standard score (Z score) conveys how many standard deviation units a case is above or below the mean. 1]. The Z score, being standardized, allows us to compare the results of different normal distributions. d. The characteristics of central tendency, variability, and shape are useful tools for summarizing distributions. 1]. Descriptive statistics—display characteristics of the location, spread, and shape of a data array.
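As a small illustration (with made-up raw scores), the Python sketch below converts values to Z scores, which then have a mean of 0 and a standard deviation of 1 and can be compared across different normal distributions.

import numpy as np

scores = np.array([42.0, 50.0, 58.0, 61.0, 39.0])                # hypothetical raw measurements
z_scores = (scores - scores.mean()) / scores.std(ddof=1)         # distance from the mean in standard-deviation units

print(np.round(z_scores, 2))
print(round(z_scores.mean(), 6), round(z_scores.std(ddof=1), 6))  # approximately 0 and exactly 1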

5). Measures of Central Tendency. a. Summarizing information often requires the description of “typical” values. b. Central tendency—a measure of location (mean, median, mode). c. Mean—the arithmetic average of a data distribution. d. Median—the midpoint of a data distribution. e. Mode—the most frequently occurring value in a distribution. 1]. There may be more than one mode in a distribution; there may be no mode. f. When the measures of central tendency diverge, the distribution is no longer normal.

6). Measures of Variability. a. Variability—another term for measures of spread or dispersion within a data set. b. Variance—a measure of score dispersion about the mean. 1]. The greater the dispersion of scores, the greater the variance. 2]. Both the variance and standard deviation are used with interval and ratio data. c. The standard deviation is a measure of spread; the positive square root of the variance. 1]. Summarizes how far away from the average the data values typically are. 2]. It is perhaps the most frequently used measure of spread because it improves interpretability by removing the variance’s square and expressing deviations in their original units. 3]. It is an important concept for descriptive statistics because it reveals the amount of variability within the data set. d. The range is the difference between the largest and smallest scores in the distribution. 1]. Unlike the standard deviation, the range is computed from only the minimum and maximum scores; thus, it is a very rough measure of spread. 2]. The range provides useful but limited information for all data. 3]. It is mandatory for ordinal data. e. The interquartile range (IQR) measures the distance between the first and third quartiles of a distribution. 1]. It is also called the midspread. 2]. Ordinal or ranked data use this measure in conjunction with the median. 3]. It is also used with interval and ratio data when asymmetrical distributions are suspected or for exploratory analysis. f. The quartile deviation is a measure of dispersion for ordinal data involving the median and quartiles.

7). Measures of Shape. a. The measures of shape, skewness, and kurtosis describe departures from the symmetry of a distribution and its relative flatness (or peakedness), respectively. b. Deviation scores display distance of observation from the mean. c. Skewness is a measure of a distribution’s deviation from symmetry. 1]. A distribution that has cases stretching toward one tail or the other is called skewed. d. Kurtosis—a measure of a distribution’s peakedness or flatness. 1]. Distributions may be: a). Leptokurtic—piled up in the center. b). Platykurtic—flat distributions. c). Mesokurtic—neither too peaked nor too flat.
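A compact Python sketch of the location, spread, and shape measures above, using invented data; scipy is assumed to be available for the IQR, skewness, and kurtosis calculations.

import numpy as np
from statistics import mode
from scipy import stats

data = np.array([2, 3, 3, 4, 5, 5, 5, 6, 7, 12])   # hypothetical responses

# Central tendency
print("mean:", data.mean(), "median:", np.median(data), "mode:", mode(data.tolist()))

# Variability (variance and standard deviation assume interval or ratio data)
print("variance:", data.var(ddof=1), "std dev:", data.std(ddof=1))
print("range:", data.max() - data.min(), "IQR:", stats.iqr(data))

# Shape: positive skewness means a tail stretched toward the high values
print("skewness:", stats.skew(data), "kurtosis:", stats.kurtosis(data))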

C. Basic Concepts for Sampling. 1). Point Estimates. a. Point estimate—sample mean; our best predictor of the unknown population mean.

2). Interval Estimates. a. We cannot judge which sample estimate is the true mean; however, we can estimate the interval in which the true μ will fall by using any of the samples. b. The standard error of the mean is the standard deviation of the distribution of sample means. 1]. It varies directly with the standard deviation of the population from which it is drawn.


2]. It varies inversely with the square root of the sample size.

D. Estimating the Population Mean. 1). The standard error creates an interval estimate that brackets the point estimate. 2). The interval estimate is a range of values within which the true population parameter is expected to fall. 3). Recall that the area under the curve also represents the confidence estimates that we make about our results. 4). The combination of the interval range and the degree of confidence creates the confidence interval. a. Confidence interval—the combination of interval range and degree of confidence. b. The standard deviation of the distribution of sample means equals the standard error. 5). The central limit theorem is the concept that the sample means of repeatedly drawn samples will be distributed normally around the population mean. a. Even if the population is not normally distributed, the distribution of sample means will be normal if there is a large enough set of samples.
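As a rough illustration, the point estimate, standard error, and a 95 percent confidence interval can be computed from a small made-up sample as follows (scipy supplies the z value).

import numpy as np
from scipy import stats

sample = np.array([4.1, 5.0, 3.8, 4.6, 5.4, 4.9, 4.2, 5.1, 4.7, 4.4])   # hypothetical values

point_estimate = sample.mean()                           # best single predictor of the population mean
std_error = sample.std(ddof=1) / np.sqrt(len(sample))    # varies inversely with the square root of n

z = stats.norm.ppf(0.975)                                # about 1.96 for a 95% confidence level
lower, upper = point_estimate - z * std_error, point_estimate + z * std_error
print(f"mean = {point_estimate:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")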

E. Estimating the Interval for the Metro U Dining Club Sample.

F. Changing Confidence Intervals. 1). The confidence level is the probability that the interval estimate will contain the true population parameter.

2. CALCULATING THE SAMPLE SIZE FOR QUESTIONS INVOLVING MEANS

A. Information needed to calculate a desired sample size: 1). The precision desired and how to quantify it: a. The confidence level we want with our estimate. b. The size of the interval estimate. 2). The expected dispersion of the population for the investigative question used. 3). Whether a finite population adjustment is needed.

B. Precision. 1). Precision is measured by: a. The degree of confidence researchers and marketers wish to have in the estimate. b. The interval range in which they expect to find the parameter estimate. 2). The 95 percent confidence level is often used for precision, but more or less confidence may be needed in light of the risks of any given project. 3). When a smaller interval is selected, the researcher is saying that precision is vital, largely because inherent risks are high.

C. Population Dispersion. 1). The smaller the possible dispersion, the smaller will be the sample needed to give a representative picture of population members.


2). Proxies of the population dispersion may be used.

D. Size of the Population. 1). When the calculated sample size exceeds 5 percent of the population, the finite limits of the population constrain the sample size needed. 2). A correction factor formula is available. 3). In most sample calculations, population size does not have a major effect on sample size.

3. CALCULATING THE SAMPLE SIZE FOR QUESTIONS INVOLVING PROPORTIONS

A. Instead of using the arithmetic mean, p (the proportion of the population that has a given attribute) may be used. 1). The measure of dispersion of the sample statistic changes from the standard error of the mean to the standard error of the proportion. 2). We calculate a sample size based on the same two subjective decisions—deciding on an acceptable interval estimate and the degree of confidence. 3). When there are several investigative questions of strong interest, researchers calculate the sample size for each such variable. a. The researcher then chooses the calculation that generates the largest sample. b. This ensures that all data will be collected with the necessary level of precision.
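A minimal sketch of this sample-size arithmetic for a mean and for a proportion, with the finite population correction applied when the initial result exceeds roughly 5 percent of the population; the confidence level, interval widths, estimated dispersion, and population size below are all hypothetical inputs.

import math
from scipy import stats

def n_for_mean(confidence, half_width, est_std_dev):
    """Sample size so the mean is estimated within +/- half_width."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil((z * est_std_dev / half_width) ** 2)

def n_for_proportion(confidence, half_width, est_p=0.5):
    """Sample size for a proportion; p = 0.5 gives the most conservative (largest) answer."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * est_p * (1 - est_p) / half_width ** 2)

def finite_correction(n, population_size):
    """Shrink n when it is large relative to the population."""
    return math.ceil(n / (1 + (n - 1) / population_size))

n_mean = n_for_mean(confidence=0.95, half_width=0.5, est_std_dev=4.0)   # about 246
n_prop = n_for_proportion(confidence=0.95, half_width=0.05)             # about 385
print(n_mean, n_prop, finite_correction(n_prop, population_size=2000))  # 385 exceeds 5% of 2,000, so the correction matters

When several investigative questions are of strong interest, the largest of the calculated sizes would be retained, as noted above.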

2. HYPOTHESIS TESTING

A. Having detailed your hypotheses in your preliminary analysis planning, the purpose of hypothesis testing is to determine the accuracy of your hypotheses, given that you have collected sample data rather than a census. 1). We evaluate the accuracy of hypotheses by determining the statistical likelihood that the data reveal true differences—not random sampling error. 2). We evaluate the importance of a statistically significant difference by weighing the practical significance of any change that we measure. 3). Approaches: a. Classical statistics—an objective view of probability in which the hypothesis is rejected, or not, based solely on the sample data collected. 1]. The most widely used and accepted method for testing hypotheses. b. Bayesian statistics—uses subjective probability estimates based on general experience rather than on collected data alone. 1]. This method uses the sample data but also considers all other available information. a). Subjective probability estimates are stated in terms of degrees of belief. b). They are generally based on experience rather than on specific collected data.

C. Statistical Significance. 1). Following classical statistical approaches, we accept or reject a hypothesis on the basis of sampling information alone. a. Since any sample will almost surely vary somewhat from its population, we must judge whether the differences are statistically significant or insignificant. b. Statistical significance—an index of how meaningful the results of a statistical comparison are. c. Practical significance—when a statistically significant difference has real importance to the decision maker.

D. The Logic of Hypothesis Testing. 1). In classical tests of significance, two kinds of hypotheses are used: a. Null hypothesis. b. Alternative hypothesis. 2). The null hypothesis (H0) assumes that no difference exists between the sample statistic and the population parameter. a. Analysts usually test to determine whether there has been no change in the population of interest or whether a real difference exists. b. Why not state the hypothesis in a positive form? 1]. This type of hypothesis cannot be tested definitively. 3). The alternative hypothesis (HA) states that a difference does exist between the sample statistic and the population parameter to which it is compared. a. The alternative hypothesis is the logical opposite of the null hypothesis.

4). Alternative hypotheses correspond with one- and two-tailed tests. a. A two-tailed test is a nondirectional test: the null hypothesis is rejected if the sample statistic is significantly greater than or significantly less than the hypothesized population parameter. b. A one-tailed test is a directional test: it assumes that any difference between the sample statistic and the population parameter lies in only one direction. 5). In testing the hypotheses described in the chapter example, adopt this decision rule: take no corrective action if the analysis shows that one cannot reject the null hypothesis. a. It is argued that a null hypothesis can never be proved and therefore cannot be “accepted.” b. Statistical testing gives only a chance to: 1]. Disprove (reject) the hypothesis. 2]. Fail to reject the hypothesis. c. If we reject a null hypothesis, then we are accepting the alternative hypothesis. d. Incorrect decisions can come from this action.
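For illustration, the contrast between a two-tailed and a one-tailed test can be sketched with a one-sample t test in Python; the data are invented, and the alternative argument assumes scipy 1.6 or newer.

import numpy as np
from scipy import stats

sample = np.array([52, 55, 49, 53, 56, 51, 54, 50, 57, 52])   # hypothetical measurements
mu_0 = 50                                                      # null hypothesis: the mean equals 50

t_two, p_two = stats.ttest_1samp(sample, popmean=mu_0, alternative="two-sided")
t_one, p_one = stats.ttest_1samp(sample, popmean=mu_0, alternative="greater")

print(f"two-tailed p = {p_two:.4f}")   # HA: mean != 50
print(f"one-tailed p = {p_one:.4f}")   # HA: mean > 50 (difference in one direction only)
# Decision rule at alpha = .05: reject H0 when p < .05; otherwise fail to reject (never "accept")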

6). Type I error (α) is error that occurs when one rejects a true null hypothesis. a. The α value is called the level of significance and is the probability of rejecting the true null. 7). Type II error (β) is error that occurs when one fails to reject a false null hypothesis.


8). Hypothesis testing places a greater emphasis on Type I errors than on Type II errors.

9). Type I Error. a. See chapter examples and illustrations. b. Region of rejection—the area beyond the region of acceptance, set by the level of significance. c. Region of acceptance—the area between the two regions of rejection (in a two-tailed test) or above/below the single region of rejection (in a one-tailed test). d. Critical value—the dividing point(s) between the region of acceptance and the region of rejection.

e. If the probability of a Type I error is 5 percent (α = .05), the probability of a correct decision if the null hypothesis is true is 95 percent. 1]. By changing the probability of a Type I error, you move critical values either closer or farther away from the assumed parameter. 2]. You can also change the Type I error and the regions of acceptance by changing the size of the sample.

10). Type II Error. a. This kind of error is difficult to detect. b. The probability of committing a β error depends on five factors: 1]. The true value of the parameter. 2]. The α level we have selected. 3]. Whether a one- or two-tailed test was used to evaluate the hypothesis. 4]. The sample standard deviation. 5]. The size of the sample. c. The power of the test is 1 minus the probability of committing a Type II error (i.e., 1 – β). d. There are several ways to reduce a Type II error: 1]. We can shift the critical value closer to the original μ (in the example μ was 50), but to do this, we must accept a bigger α. 2]. Increase the sample size. 3]. A third method seeks to improve both α and β errors simultaneously and is difficult to accomplish.
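A rough sketch of how α, sample size, and the true parameter value jointly determine β and the power of a one-tailed z test of H0: μ = 50 (the chapter example value); the assumed population standard deviation and the "true" mean are hypothetical.

from scipy import stats

def power_one_tailed(mu0, mu_true, sigma, n, alpha):
    """Power = 1 - beta for a one-tailed (upper) z test of H0: mu = mu0."""
    se = sigma / n ** 0.5
    critical_value = mu0 + stats.norm.ppf(1 - alpha) * se          # boundary of the region of rejection
    beta = stats.norm.cdf(critical_value, loc=mu_true, scale=se)   # P(fail to reject | H0 is false)
    return 1 - beta

# Increasing n (or accepting a bigger alpha) shrinks beta and raises power
for n in (25, 64, 100):
    print(n, round(power_one_tailed(mu0=50, mu_true=52, sigma=8, n=n, alpha=0.05), 3))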

Data Collection: Surveys and Interviews

1. CHARACTERISTICS OF THE COMMUNICATION APPROACH

A. Research designs can be classified by the approach used to gather primary data. 1). Primary methods include:


a. Observation. b. Communication with participants. 2). The researcher determines the appropriate data collection approach largely by identifying the types of information needed—investigative questions the researcher must answer. a. Additionally, the characteristics of the sample unit have a bearing on the choice of research designs. 3). A researcher’s choice of a communication approach affects the following: a. The creation and selection of the measurement questions. b. Instrument design, which incorporates attempts to reduce error and create participant-screening procedures. c. Sampling issues, which drive contact and callback procedures. d. Data collection processes, which create the need for follow-up procedures (when self-administered instruments are used) and possible interviewer training (when personal or telephone surveying methods are used).

4). The communication approach is a design involving surveying or interviewing people. a. A survey is a measurement process using a highly structured interview. b. The goal of the survey is to derive comparable data across subsets of the chosen sample so similarities and differences can be found. 1]. When combined with statistical probability sampling for selecting participants, survey findings and conclusions are projectable to large and diverse populations. 2]. The great strength of the survey as a primary data-collecting approach is its versatility. 3]. Abstract information of all types can be gathered by questioning others. c. The bad news for communication research is that all communication research has some error. 1]. Understanding the various sources of error helps researchers avoid or diminish such error.

B. Error in Communication Research. 1). There are three major sources of error in communication research: a. Measurement questions and survey instruments. b. Interviewers. c. Participants. 2). Researchers cannot help a marketing decision maker answer a research question if they: a. Select or craft inappropriate questions. b. Ask them in an inappropriate order. c. Use inappropriate transitions and instructions to elicit information.

3). Interviewer Error. a. Interviewer error is error that results from interviewer influence of the participant.


b. Interviewer error is caused by: 1]. Failure to secure full participant cooperation (sampling error). 2]. Failure to record answers accurately and completely (data entry error). 3]. Failure to consistently execute interview procedures. 4]. Failure to establish appropriate interview environment. 5]. Falsification of individual answers or whole interviews. 6]. Inappropriate influencing behavior. 7]. Physical presence bias.

4). Participant Error. a. Three broad conditions must be met by participants to have a successful survey: 1]. The participant must possess the information being targeted by the investigative questions. 2]. The participant must understand his or her role in the interview as the provider of accurate information. 3]. The participant must have adequate motivation to cooperate.

b. Participant-based Errors. Three factors influence participation: 1]. The participant must believe that the experience will be pleasing and satisfying. 2]. The participant must believe that answering the survey is an important and worthwhile use of his or her time. 3]. The participant must dismiss any mental reservations that he or she might have about participation.

c. Whether the experience will be pleasant and satisfying depends heavily on the interviewer. d. The quality and quantity of information secured depends heavily on the ability and willingness of participants to cooperate. 1]. At the core of a survey or interview is an interaction between two people or between a person and a questionnaire.

e. Nonresponse error is error that develops when an interviewer cannot locate or involve the targeted participant. 1]. This occurs when the researcher: a). Cannot locate the person (the predesignated sample element) to be studied. b). Is unsuccessful in encouraging that person to participate. 2]. Despite its challenges, communicating with research participants—and the use of the survey—is the principal method of marketing research.

f. Response-based Errors. 1]. Response error occurs when the participant fails to give a correct or complete answer. 2]. Screening can be done to ensure that a participant has an adequate information level to be a participant. 3]. Participants can cause error by responding in such a way as to unconsciously or consciously misrepresent their actual behavior, attitudes, preferences, motivations, or intentions (response bias). 4]. Participants can create response bias when they modify their responses to be socially acceptable or to save face or reputation with the interviewer (social desirability bias), and sometimes even in an attempt to appear rational and logical. a). One major cause of response bias is acquiescence—the tendency to be agreeable. b). If participants choose “don’t know” as a response, be forewarned that surveys show most who answer this way really have an opinion but don’t express it. 5]. Participants may interpret a question or concept differently from what was intended by the researcher.

C. Choosing a Communication Method. 1). There are various options for researchers to choose from when they begin data collection. 2). There is a new trend toward computer-assisted data collection or survey information collection.

2. SELF-ADMINISTERED SURVEYS

A. The self-administered questionnaire is an instrument completed by the participant without additional contact with an interviewer beyond delivery. 1). Mail surveys are self-administered studies both delivered and returned by mail. a. Other delivery modalities include computer-delivered and intercept studies.

B. Evaluation of the Self-Administered Surveys. 1). Nowhere has the computer revolution been felt more strongly than in the area of the self-administered survey. a. Computer-assisted self-interview (CASI) is a computer-delivered questionnaire that is self-administered by the participant. b. Disk-by-mail (DBM) survey is a type of computer-assisted self-interview where the survey and its management software, on computer disk, are delivered by mail to participant. 2). Intercept surveys may use a traditional paper-and-pencil questionnaire or a computer-delivered survey via a kiosk.

3). Costs. a. Self-administered surveys of all types typically cost less than surveys via personal interviews. b. Computer-delivered surveys lower costs in the pre- and postnotification of participants.


4). Sample Accessibility. a. One asset of using mail self-administered surveys is that researchers can contact participants who might otherwise be inaccessible. b. The computer-delivered survey can often reach samples that are identified in no way other than their computer and Internet use. 5). Time Constraints. a. In a mail survey, the participant can take more time to collect facts, talk with others, or consider replies at length than is possible in a survey employing the telephone or in a personal interview. 6). Anonymity. a. Most of the forms of surveys that are self-administered are considered to provide anonymity. 7). Topic coverage. a. A major limitation of self-administered surveys concerns the type and amount of information that can be secured (how much information can be collected). b. A general rule of thumb is that participants should be able to answer the survey or questionnaire in less than 10 minutes.

C. Maximizing Participation in the Self-Administered Survey. 1). To maximize the overall probability of response, attention must be given to each point of the survey process where the response may break down. a. For example, the wrong address might be written or the envelope might look like junk mail. 2). The Total Design Method (TDM) suggests minimizing the burden on participants by designing questionnaires that: a. Are easy to read. b. Offer clear response directions. c. Include personalized communication. d. Provide information about the survey via advance notification. e. Encourage participants to respond. 3). Hints from the pros and contemporary research suggest conclusions: a. Preliminary or advance notification of the delivery of a self-administered questionnaire increases response rates. b. Follow-ups or reminders after the delivery of a self-administered questionnaire increase response rates. c. Clearly specified return directions and devices improve response rates. d. Monetary incentives for participation increase response rates. e. Deadline dates do not increase response rates but do encourage participants to respond sooner. f. A promise of anonymity, while important to those who do respond, does not increase response rates. g. An appeal for participation is essential.

D. Self-Administered Survey Trends.


1). Web-based questionnaire—a measurement both delivered and collected via the Internet. a. This form has the power of computer-assisted telephone interview systems, but without the expense of network administrators, specialized software, or additional hardware. b. Most products are browser-driven with design features that allow custom survey creation and modification. c. Users can choose proprietary solutions through research firms and off-the-shelf software designed for researchers. d. The computer-delivered survey has made it possible to use many of the suggestions for increasing participation.

3. SURVEY VIA TELEPHONE INTERVIEW

A. The survey via telephone interview is still the workhorse of survey research. 1). Telephone interview—a study conducted wholly by telephone contact between participant and interviewer. a. This is a popular method for many reasons, one of them being the high level of telephone penetration in the United States and Europe.

B. Evaluation of the Telephone Interview. 1). Of the advantages that telephone interviewing offers, probably none ranks higher than its moderate cost. a. Costs are often 45 to 64 percent lower than comparable personal interviewing costs. b. Interviews can be done nationally at a reasonable cost. 2). Computer-assisted telephone interviewing (CATI) is a telephone survey with computer-sequenced questions and real-time data entry. a. A software program prompts the interviewer with introductory statements, qualifying questions, and prerecorded questionnaire items that drive the survey. 3). Computer-administered telephone survey—a telephone survey via voice-synthesized computer questions. a. There is no human interviewer. b. Modes include: 1]. Touch-tone data entry (TDE). 2]. Voice recognition (VR). 3]. Automated speech recognition (ASR). 4). Noncontact rate is the ratio of noncontacts to all potential contacts. a. The refusal rate is the ratio of participants who decline the interview to all eligible contacts. b. The industry has found a growing negative perception of telephone interviewing among potential participants. c. Telephone interviewing, however, brings a faster conclusion to the study. d. Behavioral norms work to the advantage of telephone interviewing.
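As a small worked illustration with invented tallies (and treating every completed contact as eligible, a simplification), the two rates are computed as follows.

attempts = 1000          # all potential contacts dialed
noncontacts = 320        # no answer, busy signal, answering machine, etc.
refusals = 150           # people reached who declined the interview
eligible_contacts = attempts - noncontacts

noncontact_rate = noncontacts / attempts        # 0.32
refusal_rate = refusals / eligible_contacts     # about 0.22
print(round(noncontact_rate, 2), round(refusal_rate, 2))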


5). Disadvantages of telephone interviewing include: a. Inaccessible households (no telephone service or no/low contact rate). b. Inaccurate or nonfunctioning numbers. c. Limitation on interview length (fewer measurement questions). d. Limitations on use of visual or complex questions. e. Ease of interview termination. f. Less participant involvement. g. Distracting physical environment.

6). Inaccessible Households. a. Approximately 94 percent of all U.S. households have access to telephone service. b. However, not all that have access to phone service are adequately represented in telephone studies. Examples are: 1]. Rural customers. 2]. Unlisted numbers. 3]. Those that have filtering services. 4]. Secondary phone lines. 5]. Those who use cellular as the primary phone service. c. Random dialing (a computerized process that chooses phone exchanges or exchange blocks and generates random numbers within those blocks for telephone surveys) helps to reduce the bias caused by the above.

7). Inaccurate or Nonfunctioning Numbers. a. The highest incidence of unlisted numbers is in the West. b. Random dialing can help to reduce this bias. 8). Limitation on Interview Length. a. A limit on interview length is one of the disadvantages of the telephone survey, but the degree of this limitation depends on the participant’s interest in the topic. b. Ten-minute interviews are ideal; some last up to 20 minutes. 9). Limitations on Use of Visual or Complex Questions. a. The telephone survey limits the complexity of the survey and the use of complex scales or measurement techniques that is possible with personal interviewing or other methods. b. In telephone interviewing it is difficult to use maps, illustrations, and other visual aids. 10). Ease of Interview Termination. a. Some studies find that the response rate in telephone studies is lower than that for comparable face-to-face interviews. b. One reason is that participants find it easier to terminate a phone interview. c. Bogus telephone research: 1]. Sugging—sales under the guise of research. 2]. Frugging—fund-raising under the guise of research. 11). Less Participant Involvement.


a. Telephone surveys can result in less thorough responses, and persons interviewed by phone find the experience to be less rewarding than a personal interview. 12). Changes in the Physical Environment. a. Replacement of home or office phones with cellular and wireless phones also raises concerns.

4. SURVEY VIA PERSONAL INTERVIEW

A. The personal interview is a two-way communication initiated by an interviewer to obtain information from a participant.

B. Evaluation of the Personal Interview. 1). The greatest value of the personal interview lies in the depth of information and detail that can be secured. a. The quality of the information can also be improved using this method. b. Interviewers can note conditions of the interview, probe with additional questions, and gather supplemental information through observation. c. Human interviewers can have more control than in other kinds of communication studies. 1]. Computer-assisted personal interviewing (CAPI)—a personal interview with computer-sequenced questions capable of employing visualization techniques. 2). A chief disadvantage of the personal interviewing technique is its cost. a. Costs are particularly high if the study covers a wide geographic area. b. An exception is the intercept interview—a face-to-face communication that targets participants in a centralized location. 1]. Intercept interviews reduce costs associated with the need for several interviewers, training, and travel. 3). Costs have also risen because of the growing negative social climate against personal interviewing. 4). Results of surveys via personal interviews can be affected adversely by interviewers who alter the questions asked or in other ways bias the results. a. Interviewer bias is one of the three major sources of error.

5. SELECTING AN OPTIMAL SURVEY METHOD

A. The choice of a communication method is not as complicated as it might first appear. 1). When your investigative questions call for information from hard-to-reach or inaccessible participants, the telephone interview, mail survey, or computer-delivered survey should be considered. 2). If data need to be collected quickly, the mail survey would likely be ruled out because of lack of control over the returns. 3). If extensive questioning and probing is required, the survey via personal interview should be considered. 4). In some instances, methods can be combined to capitalize on strengths or diminish weaknesses. 5). Ultimately, all researchers are confronted by the practical realities of cost and deadlines. 6). Note Exhibit 11-5 for a summary of the advantages and disadvantages of each communication study alternative.

B. Outsourcing Survey Services. 1). Commercial suppliers of research services vary from full-service operations to specialty consultants. 2). Most organizations use a request for proposal (RFP) to describe their requirements and seek competitive bids. 3). Research firms also offer special advantages that their clients do not typically maintain in-house. a. Centralized-location interviewing or computer-assisted telephone facilities may be particularly desirable for certain research needs.

4). A panel is a group of potential participants who have indicated a willingness to participate in research studies. a. The panel can be used to track trends in attitudes toward issues or products, product adoption or consumption behavior, and a myriad of other research interests.

Measurement Scales

1. INTRODUCTION

A. Where the act of scaling fits in the research process. 1). Scales in marketing research are generally constructed to measure behavior, knowledge, and attitudes. 2). Attitude scales are among the most difficult to construct; they are illustrated in this chapter.

2. THE NATURE OF ATTITUDES

A. An attitude is a learned, stable predisposition to respond to oneself, other persons, objects, or issues in a consistently favorable or unfavorable way. 1). Important aspects of this definition include the learned nature of attitudes, their relative permanence, and their association with socially significant events and objects. 2). An attitude is a predisposition; therefore, it makes sense that a favorable attitude toward a product or service will lead to purchase. 3). Attitudes have several dimensions: a. Cognitive—memories, evaluations, and beliefs about the properties of the attitude object. b. Affective—feelings, intuition, values, and emotions toward the attitude object. c. Conative or behaviorally-based—expectations and behavioral intentions toward attitude object.

B. The Relationship between Attitudes and Behavior. 1). Attitudes and behavioral intentions do not always lead to actual behaviors. 2). Behaviors can influence attitudes. 3). Marketing researchers treat attitudes as hypothetical constructs because of their complexity and the fact that they are inferred from the measurement data, not actually observed. 4). Several factors have an effect on the applicability of attitudinal research for marketing: a. Specific attitudes are better predictors of behavior than general ones. b. Strong attitudes (strength is affected by accessibility or how well the object is remembered and brought to consciousness, how extreme the attitude is, or the degree of confidence in it) are better predictors of behavior than weak attitudes composed of little intensity or topical interest. c. Direct experiences with the attitude object (when the attitude is formed, during repeated exposure, or through reminders) produce behavior more reliably. d. Cognitive-based attitudes influence behaviors better than affective-based attitudes. e. Affective-based attitudes are often better predictors of consumption behaviors. f. Using multiple measurements of attitude or several behavioral assessments across time and environment improves prediction. g. The influence of reference groups (interpersonal support, urges of compliance, peer pressure) and the individual’s inclination to conform to these influences improve the attitude-behavior linkage. 5). Marketers measure and analyze the attitudes of customers because attitudes offer insights about behavior.


C. Attitude Scaling. 1). Attitude scaling is the process of assessing an attitudinal disposition using a number that represents a person’s score on an attitudinal continuum ranging from an extremely favorable disposition to an extremely unfavorable one. 2). Scaling is the assignment of numbers or symbols to a property of objects according to value or magnitude.

3. SELECTING A MEASUREMENT SCALE

A. Selecting and constructing a measurement scale requires the consideration of several factors that influence the reliability, validity, and practicality of the scale: 1). Research objectives. 2). Response types. 3). Data properties. 4). Number of dimensions. 5). Balanced or unbalanced. 6). Forced or unforced choices. 7). Number of scale points. 8). Rater errors.

B. Research Objectives. 1). Researcher’s objectives are too numerous to list. 2). Researchers, however, face two general types of scaling objectives: a. To measure characteristics of the participants who participate in the study. b. To use participants as judges of the objects or indicants presented to them.

C. Response Types. 1). Measurement scales fall into one of four general types: a. Rating scale—a scale that scores an object or property without making a direct comparison to another object or property. b. Ranking scale—a scale that scores an object or property by making a comparison and determining order among two or more objects or properties. 1]. A choice scale requires that participants choose one alternative over another. c. Categorization—participants put themselves or property indicants into groups or categories. d. Sorting—participants sort cards (representing concepts or constructs) into piles using criteria established by the researcher.

D. Data Properties. 1). Decisions about the choice of measurement scales are often made with regard to the data properties generated by each scale.


E. Number of Dimensions. 1). Measurement scales are either unidimensional or multidimensional. 2). Unidimensional scale—seeks to measure only one attribute of the participant or object. 3). Multidimensional scale—seeks to simultaneously measure more than one attribute of the participant or object.

F. Balanced or Unbalanced. 1). A balanced rating scale has an equal number of categories above and below the midpoint, or an equal number of favorable/unfavorable response choices. a. Generally, rating scales should be balanced. 2). An unbalanced rating scale has an unequal number of favorable and unfavorable response choices. a. The unbalanced rating scale can be justified when the researcher knows in advance that nearly all participants’ scores will lean in one direction or the other. b. Unbalanced scales are also considered when participants are known to be either “easy raters” or “hard raters.” c. An unbalanced scale can help compensate for the error of leniency created by such raters, although it can also be misused by the researcher.

G. Forced or Unforced Choices. 1). A forced-choice rating scale requires that participants select from available alternatives. 2). An unforced-choice rating scale provides participants with an opportunity to express no opinion when they are unable to make a choice among the alternatives offered. 3). Marketing researchers often exclude the response choice of “no opinion” or something that means about the same thing. a. Understanding neutral answers can be a real challenge for marketing researchers. 4). There can be some risk in forcing participants to choose from among specified choices without the opportunity to deviate.

H. Number of Scale Points. 1). What is the ideal number of points for a rating scale? The answer is debatable. 2). The answer we select is that the scale should be appropriate for its purpose. 3). Complexity of the purchase, amount of self-involvement, and product or financial risk would be examples of factors that might dictate simple (3 points) or complex scales (5-11 points).

I. Rater Errors. 1). The value of rating scales depends on the assumption that a person can and will make good judgments. 2). Types of errors:


a. Error of central tendency—a participant, reluctant to give extreme judgments, chooses the central point within the rating options. b. Error of leniency—when a participant consistently chooses the extreme position on one end of the scale. 3). These errors most often occur when the rater does not know the object or property being rated. 4). To address these tendencies, researchers can: a. Adjust the strength of descriptive adjectives. b. Space the intermediate descriptive phrases farther apart. c. Provide smaller differences in meaning between the steps near the ends of the scale than between the steps near the center. d. Use more points in the scale. 5). The halo effect (error) is bias introduced when the participant carries over a generalized impression of the subject from one rating scale to another.

4. RATING SCALES.

A. We use rating scales to judge the properties of objects without reference to other similar objects.

B. Simple Attitude Scales. 1). Simple category scale—a scale with two mutually exclusive response categories. a. This scale works for demographic questions or where a dichotomous response is adequate. b. Generates nominal data. 2). Multiple-choice, single-response scale—a measurement question that poses more than two category responses but seeks a single answer. a. Produces nominal data. 3). Multiple-choice, multiple-response scale—offers participant multiple options and solicits one or more answers. a. Generates nominal data. 4). Simple attitude scales are easy to develop, are inexpensive, and can be designed to be highly specific. a. The design approach is subjective.

C. Likert Scales. 1). The Likert scale is the most frequently used version of the summated rating scale. 2). The summated rating scale is used when the participant agrees or disagrees with statements that express favorable or unfavorable attitudes toward an object.

3). Advantages of the Likert scale include: a. It is easy and quick to construct. b. It is probably more reliable and provides a greater volume of data than many other scales. c. The scale produces interval data. 4). Originally, creating a Likert scale involved a procedure known as item analysis—when scales are tested with a group of participants to determine which questions discriminate between high and low raters. 5). Researchers have found that a larger number of items for each attitude object improves the reliability of the scale. 6). Although item analysis is helpful in weeding out attitudinal statements that do not discriminate well, the summation procedure causes problems for marketing researchers.
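A small Python sketch, with made-up five-point responses, of summated scoring and a simple item-to-total correlation used here as a stand-in for item analysis (the procedure described in the text compares high and low scoring groups).

import numpy as np

# Hypothetical 1-5 agreement ratings: rows are participants, columns are attitude statements
ratings = np.array([
    [5, 4, 5, 2],
    [4, 4, 5, 3],
    [2, 1, 2, 3],
    [1, 2, 1, 2],
    [4, 5, 4, 3],
])

total_score = ratings.sum(axis=1)   # summated rating for each participant

# Items whose scores move with the total discriminate well; item 4 here barely does
for item in range(ratings.shape[1]):
    r = np.corrcoef(ratings[:, item], total_score)[0, 1]
    print(f"item {item + 1}: item-total correlation = {r:.2f}")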

D. Semantic Differential Scales. 1). The semantic differential (SD) scale measures the psychological meanings of an attitude object using bipolar adjectives. a. Researchers use this scale for marketing studies of brand and institutional image. 2). The method consists of a set of bipolar rating scales, usually with 7 points, by which one or more participants rate one or more concepts on each scale item. 3). An object can have several dimensions of connotative meaning. a. The meanings are located in multidimensional property space, called semantic space. b. Connotative meanings are suggested or implied meanings, in addition to the explicit meaning of an object.

4). Osgood and his associates developed the semantic differential method to measure the psychological meanings of an object to an individual. a. Three factors contributed to meaningful judgments by participants: 1]. Evaluation. 2]. Potency. 3]. Activity. 5). Marketers have followed a somewhat different approach to SD scales than did the original study advocates. 6). The semantic differential has several advantages: a. It is an efficient and easy way to secure attitudes from a large sample. b. These attitudes can be measured in both direction and intensity. c. It is easily repeated. d. The total set of responses provides a comprehensive picture of the meaning of an object and a measure of the person doing the rating. e. It produces interval data.

E. Numerical/Multiple Rating List Scales. 1). Numerical scales are interval scales using numeric scale points between verbal anchors. a. The verbal anchors serve as the labels for the extreme points. b. Numerical scales are often 5-point scales but may have 7 or 10 points. 2). The scale’s linearity, simplicity, and production of ordinal or interval data make it popular for managers and researchers. 3). A multiple rating list scale is a single interval or ordinal numerical scale where raters respond to a series of objects. a. This scale differs from the numerical scale(s) in two ways: 1]. It accepts a circled response from the rater. 2]. The layout facilitates visualization of the results. b. This scale produces interval data.

F. Stapel Scales. 1). The Stapel scale is a numerical scale with up to 10 categories where the central position is a named attribute. 2). This scale is used as an alternative to the semantic differential, especially when it is difficult to find bipolar adjectives that match the investigative question. 3). The scale is composed of the word (or phrase) identifying the image dimension and a set of 10 response categories for each of the three attributes. 4). Stapel scales usually produce interval data. 5). It is a different way of asking respondents to express their degree of agreement or disagreement with statements or phrases reflecting an opinion, with scores from –5 to +5 or other extreme points.

G. Constant-Sum Scale. 1). In the constant-sum scale the participant allocates points to more than one attribute or property indicant, such that they total a constant sum, usually 100 or 10. 2). A participant’s ability to add is often taxed. 3). The advantage of the scale is its compatibility with percent (100 percent) and the fact that alternatives that are perceived to be equal can be so scored. 4). The scale is used to record attitudes, behavior, and behavioral intent. 5). The constant-sum scale produces interval data.

H. Graphic Rating Scales. 1). In the graphic rating scale the participant places his or her response along a line or continuum. 2). The results are treated as interval data. 3). The difficulty is in coding and analysis. 4). This scale requires more time than scales with predetermined categories. 5). This scale often uses pictures; it is often used with children.

5. RANKING SCALES

A. In ranking scales, the participant directly compares two or more objects and makes choices among them. 1). Frequently, the participant is asked to select one as the “best” or the “most preferred.” 2). However, it often results in ties if more than two choices are offered.


3). In the paired-comparison scale a participant chooses a preferred object within a pair of objects for a series of pairs, resulting in a rank order of objects.

4). Paired comparisons run the risk that participants will tire to the point that they give ill-considered answers or refuse to continue. 5). A paired comparison provides ordinal data. 6). In the forced ranking scale the participant orders several objects or properties of objects. a. This method is faster than paired comparisons and is usually easier and more motivating to the participant. 7). A drawback to the forced ranking is the number of stimuli that can be handled by this method. 8). Rank ordering produces ordinal data since the distance between preferences is unknown.
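As an illustration, paired-comparison choices can be tallied into a rank order with a few lines of Python; the objects and choices below are invented.

from collections import Counter

# Each tuple records (first object shown, second object shown, the object the participant preferred)
choices = [
    ("A", "B", "A"), ("A", "C", "C"), ("A", "D", "A"),
    ("B", "C", "C"), ("B", "D", "B"), ("C", "D", "C"),
]

wins = Counter(preferred for _, _, preferred in choices)
ranking = sorted(wins, key=wins.get, reverse=True)   # most-preferred first; D never won, so it is absent and would rank last
print(wins)       # Counter({'C': 3, 'A': 2, 'B': 1})
print(ranking)    # ['C', 'A', 'B'] -- ordinal only: the distances between ranks are unknown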

9). In a comparative scale the participant compares an object against a standard. a. Some researchers treat this data as interval data. b. This text considers such data to be ordinal data unless the linearity of the variables in question could be supported.

6. SORTING

A. Q-sorts. 1). In Q-sorts a participant sorts a deck of cards (representing properties or objects) into piles that represent points along a continuum. 2). Marketers use Q-sorts to resolve three special problems: a. Item selection. b. Structured or unstructured choices in sorting. c. Data Analysis. 3). The basic Q-sort procedure involves the selection of a set of verbal statements, phrases, single words, or photos related to the concept being studied. a. In a structured sort, the distribution of cards allowed in each pile is predetermined. b. In an unstructured sort, only the number of piles will be determined. 4). The purpose of sorting is to get a conceptual representation of the sorter’s attitude toward the attitude object and to compare the relationships between people.

7. CUMULATIVE SCALES

A. Cumulative scales. 1). In a cumulative scale a participant’s agreement with one extreme scale item endorses all other items that take a less extreme position.


Questionnaire

1. REVISITING THE RESEARCH QUESTION HIERARCHY

A. The management-research question hierarchy is the foundation of the research process and also of successful instrument development. 1). Question levels: a. Management question—the dilemma, stated in question form, that the manager needs resolved. b. Research question(s)—the fact-based translation of the question the researcher must answer to contribute to the solution of the management question. c. Investigative questions—specific questions the researcher must answer to provide sufficient detail and coverage of the research question. 1]. Within this level, there may be several questions as the researcher moves from the general to the specific. d. Measurement questions—questions participants must answer if the researcher is to gather the needed information and resolve the management question.

2). The following are prominent among the strategic concerns: a. What type of scale is needed to perform the desired analysis to answer the management question? b. What communication approach will be used? c. Should the questions be structured, unstructured, or some combination? d. Should the questioning be undisguised or disguised? If the latter, to what degree? 3). Today’s environment is affected by technology that provides advantages unseen in the past.

B. Type of Scale for Desired Analysis. 1). The analytical procedures available to the researcher are determined by the scale types used in the survey.

C. Communication Approach. 1). Decisions regarding which method to use as well as where to interact with the participant will affect the design of the instrument.

D. Disguising Objectives and Sponsors. 1). Another consideration in communication instrument design is whether the purpose of the study should be disguised. 2). A disguised question is a measurement question designed to conceal the question’s and study’s true purpose. a. Some degree of disguise is common in survey questions. b. Sponsorship is often disguised. 3). Four situations where disguise is not an issue:


a. Willingly shared, conscious-level information. b. Reluctantly shared, conscious-level information. c. Knowable, limited-conscious-level information. d. Subconscious-level information. 4). Example: Have you attended the showing of a foreign language film in the last six months?

5). Sometimes the participant knows the information we seek but is reluctant to share it for a variety of reasons. 6). Not all information is at the participant’s conscious level. a. Some motivations are subconscious. 7). Projective techniques thoroughly disguise the study objective, but they are often difficult to interpret.

E. Preliminary Analysis Plan. 1). Researchers are concerned with adequate coverage of the topic and with securing the information in its most usable form. a. Dummy tables can be devised to assist in visualization. 2). The preliminary analysis plan serves as a check on whether the planned measurement questions meet the data needs of the research question. a. This helps to determine the type of scale needed. 3). The guiding principle of survey design is always to ask only what is needed.

3. CONSTRUCTING AND REFINING THE MEASUREMENT QUESTIONS

A. Drafting or selecting questions begins once you develop a complete list of investigative questions and decide on the collection processes to be used. 1). The order, type, and wording of measurement questions, the introduction, the instructions, the transitions, and the closure in a quality communication instrument should accomplish the following: a. Encourage each participant to provide accurate responses. b. Encourage each participant to provide an adequate amount of information. c. Discourage each participant from refusing to answer specific questions. d. Discourage each participant from early discontinuation of participation. e. Leave the participant with a positive attitude about survey participation.

B. Question Categories and Structure. 1). Questionnaires and interview schedules (question list used to guide an interview) can range from those that have a great deal of structure to those that are essentially unstructured.


2). Questionnaires contain three categories of measurement questions: a. Administrative questions. b. Classification questions. c. Target questions (structured or unstructured). 3). Administrative questions—a measurement question that identifies the participant, interviewer, interview location, and conditions (nominal data). a. These questions are rarely asked of participants but are necessary for studying patterns within the data and identifying possible error sources. 4). Classification questions—measurement questions that provide sociological-demographic variables for use in grouping participants’ answers. a. These questions usually appear at the end of a questionnaire, except for filters or screens. 5). Target question—measurement question that addresses the core investigative questions of a specific study. Target questions can be: a. Target questions, structured—measurement questions that present the participant with a fixed set of categories per variable. b. Target questions, unstructured—measurement questions that present the participant with the context for participant-framed answers. 1]. Sometimes referred to as open-ended questions.

C. Question Content. 1). Question content is first and foremost dictated by the investigative questions guiding the study. 2). Four questions guide the instrument designer in selecting appropriate question content: a. Should this question be asked (does it match the study objective)? b. Is the question of proper scope and coverage? c. Can the participant adequately answer this question as asked? d. Will the participant willingly answer this question as asked?

D. Question Wording. 1). Misunderstanding a question is partially due to the lack of a shared vocabulary. 2). The dilemma (the need to be explicit, to present alternatives, and to explain meanings) contributes to longer and more involved sentences. 3). The difficulties caused by question wording exceed most other sources of distortion in surveys. 4). The diligent question designer will put a survey question through many revisions before it satisfies these criteria: a. Is the question stated in terms of a shared vocabulary? b. Does the question contain vocabulary with a single meaning? c. Does the question contain unsupported or misleading assumptions? d. Does the question contain biased wording? e. Is the question correctly personalized? f. Are adequate alternatives presented within the question?


5). Leading questions (measurement questions whose wording suggests to the participant the desired answer) can inject significant error by implying that one response should be favored over another. 6). Target questions need not be constructed solely of words; visual images may be used.

E. Response Strategy. 1). The third major decision area in question design is the degree and form of structure imposed on the participant. 2). Response strategies include: a. Unstructured response (or open-ended response, the free choice of words) occurs where participant’s response is limited only by space, layout, instructions, or time. b. Structured response (or closed response, specified alternatives provided) occurs when the participant’s response is limited to specific alternatives provided. 3). Closed responses are typically categorized as: a. Dichotomous. b. Multiple-choice. c. Checklist. d. Rating. e. Ranking response strategies. 4). Several situational factors affect the decision of whether to use open-ended or closed questions. a. Objectives of the study. b. Participant’s level of information about the topic. c. Degree to which participant has thought through the topic. d. Ease with which participant communicates. e. Participant’s motivation level to share information.

F. Response Strategies Illustrated. 1). Free-Response Strategy. a. Free-response question—measurement question where the participant chooses the words to frame the answer. 1]. Also known as open-ended questions. 2]. Researchers try to limit the number of these questions as they pose significant problems in interpretation and are costly in terms of data analysis.

2). Dichotomous Response Strategy. a. Dichotomous question—a measurement question that offers two mutually exclusive and exhaustive alternatives. b. In many two-way questions, there are potential alternatives beyond the stated two alternatives. c. Dichotomous questions can become multiple-choice or rating questions. d. Dichotomous questions generate nominal data.


3). Multiple-Choice Response Strategy. a. Multiple-choice questions—a measurement question that poses more than two category responses but seeks a single answer. b. We seek gradations of preference, interest, or agreement. c. Design problems associated with multiple-choice questions include: 1]. When one or more responses have not been anticipated. 2]. When the list of choices is not exhaustive. 3]. When the participant divides the question into several questions, each with different alternatives. a). Pretesting should reveal if a multiple-choice question is really a double-barreled question (a measurement question that includes two or more questions in one that the participant might need to answer differently). 4]. When the choices are not mutually exclusive. 5]. When the choices do not represent a one-dimensional scale. d. It is necessary to seek a fair balance in choices when a participant’s position on an issue is unknown. e. Multiple-choice questions should present reasonable alternatives. f. The order in which choices are given can become a problem. 1]. Numeric alternatives are normally presented in order of magnitude; this practice introduces bias. g. Order bias with non-numeric response categories often leads a participant to choose the first alternative (primacy effect) or the last alternative (recency effect). 1]. The split-ballot technique can counteract this effect. h. Multiple-choice questions usually generate nominal data.

4). Checklist Response Strategy. a. Checklist—a measurement question that poses numerous alternatives and encourages multiple unordered responses. b. Checklists are more efficient than asking a series of dichotomous questions. c. Checklists generate nominal data.

5). Rating Response Strategy. a. Rating questions—a question that asks the participant to position each property or object on a verbal, numeric, or graphic continuum. b. Generally, rating-scale structures generate ordinal data; some carefully crafted scales generate interval data. c. It is important to remember that the researcher should represent only one response dimension in rating-scale response questions.

6). Ranking Response Strategy. a. Ranking question—a measurement question that asks the participant to compare and order two or more objects or properties using a numeric scale.


b. Though ranking solves some problems, concerns surface. 1]. How many factors should be ranked?
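
As noted under the multiple-choice strategy above, order bias with non-numeric categories can be counteracted with the split-ballot technique. The sketch below simply rotates (or, alternatively, randomizes) a hypothetical list of response categories across instrument versions so that no alternative always appears first or last; the choices and function names are assumptions for illustration, not a prescribed procedure.

    import random

    # Hypothetical non-numeric response categories for one multiple-choice question.
    choices = ["Newspaper ads", "Television ads", "Online reviews", "Word of mouth"]

    def split_ballot_versions(options, n_versions=2):
        # Build several instrument versions, each with the options rotated,
        # so no single alternative always benefits from primacy or recency.
        versions = []
        for i in range(n_versions):
            shift = (i * len(options)) // n_versions
            versions.append(options[shift:] + options[:shift])
        return versions

    def randomized_order(options, seed=None):
        # Alternative approach: randomize option order independently per participant.
        rng = random.Random(seed)
        shuffled = options[:]
        rng.shuffle(shuffled)
        return shuffled

    for version in split_ballot_versions(choices):
        print(version)
    print(randomized_order(choices, seed=42))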

G. Response Strategies and the Web Questionnaire. 1). All strategies covered so far are applicable for use on Web questionnaires. 2). Layout options are, however, slightly different.
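
To illustrate the layout point, the same closed question can be presented on a Web questionnaire as radio buttons, a drop-down list, or checkboxes. The render_html helper and widget names below are hypothetical illustrations, not the interface of any particular survey platform.

    # Minimal sketch: one closed question mapped to different Web layouts.
    # The widget names and the render_html helper are hypothetical.
    question = {
        "text": "Which store did you visit most recently?",
        "choices": ["Store A", "Store B", "Store C"],
    }

    def render_html(q, widget="radio"):
        if widget == "dropdown":
            options = "".join(f"<option>{c}</option>" for c in q["choices"])
            return f"<label>{q['text']}</label><select>{options}</select>"
        input_type = "checkbox" if widget == "checkbox" else "radio"
        items = "".join(
            f'<label><input type="{input_type}" name="q1" value="{c}">{c}</label>'
            for c in q["choices"]
        )
        return f"<p>{q['text']}</p>{items}"

    print(render_html(question, widget="radio"))
    print(render_html(question, widget="dropdown"))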

H. Sources of Existing Questions. 1). Tools of data collection should be adapted to the problem, not the reverse. 2). Borrowing items from existing sources is not without risk. 3). Researchers whose questions or instruments you borrow may not have reported the sampling and testing procedures needed to judge the quality of the measurement scale. 4). Language, phrasing, and idioms can pose problems. 5). No matter where the questions come from, pretesting is expected.

4. DRAFTING AND REFINING THE INSTRUMENT

A. Drafting and refinement is a multi-step process: 1). Develop participant-screening process along with the introduction. 2). Arrange the measurement question sequence: a. Identify groups of target questions by topic. b. Establish a logical sequence for the question groups and questions within groups. c. Develop transitions between these question groups. 3). Prepare and insert instructions—for the interviewer or participant—including termination instructions, skip directions, and probes. 4). Create and insert a conclusion, including a survey disposition statement. 5). Pretest specific questions and the instrument as a whole.
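
One way to visualize the steps above is as the skeleton of a single instrument, with the screen, grouped target questions, transitions, instructions, and conclusion held in one structure. Every element in the sketch (section names, question labels, wording) is a hypothetical placeholder meant only to show the ordering of components.

    # Hypothetical skeleton of a survey instrument following the drafting steps:
    # screen/introduction, topically grouped questions with transitions,
    # instructions, and a conclusion with a disposition statement.
    instrument = {
        "introduction": "Hello, I'm calling on behalf of [sponsor] ...",
        "screen": {"text": "Have you shopped at this store in the past month?",
                   "qualify_on": "Yes",
                   "termination": "Thank you for your time."},
        "sections": [
            {"topic": "Shopping habits",
             "transition": "First, a few questions about how you shop.",
             "questions": ["Q1", "Q2", "Q3"]},
            {"topic": "Satisfaction",
             "transition": "Next, some questions about your recent visit.",
             "questions": ["Q4", "Q5"]},
        ],
        "instructions": ["If the participant does not qualify, terminate politely.",
                         "Skip directions and probes accompany the questions they affect."],
        "conclusion": "Those are all my questions. Thank you; your answers will help improve service.",
    }
    for section in instrument["sections"]:
        print(section["topic"], "->", ", ".join(section["questions"]))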

B. Introduction and Participant Screening. 1). The introduction must supply the sample unit with the motivation to participate in the study. 2). The introduction reveals the sponsor unless the study is disguised. 3). The introduction may include screen questions (filter questions)—questions used to qualify the participant’s knowledge about the target questions of interest or the experience necessary to participate. a. At a minimum, a phone or personal interviewer will introduce himself or herself to help establish critical rapport with the potential participant.

C. Measurement Question Sequencing. 1). The design of survey questions is influenced by the need to relate each question to the others in the instrument. a. A branched question is a measurement question sequence determined by the participant’s previous answers (a control-flow sketch appears at the end of this section).


2). The psychological order of the questions is important. 3). The basic principle used to guide sequence decisions—the nature and needs of the participant must determine the sequence of questions and the organization of the interview schedule. 4). Guidelines suggested to implement this principle: a. The question process must quickly awaken interest and motivate the participant to participate in the interview. b. The participant should not be confronted by early requests for information that might be considered personal or ego-threatening. c. The questioning process should begin with simple items and then move to the more complex. d. Changes in the frame of reference should be small and should be clearly pointed out.

5). Awaken Interest and Motivation. a. We awaken interest and stimulate motivation to participate by choosing or designing questions that are attention-getting and not controversial. 6). Sensitive and Ego-Involving Information. a. It is dangerous to ask sensitive or overly personal questions too early in the interview. b. Buffer questions (neutral measurement questions designed chiefly to establish rapport with the participant) can be used in conjunction with sensitive questions. 7). Simple to Complex. a. Deferring complex questions, or simple questions that require much thought, can help reduce the number of “don’t know” responses. 8). General to Specific. a. Moving from general to specific questions is often called the funnel approach; its objectives are to learn the participant’s frame of reference and to extract the full range of desired information while limiting the distortion effect of earlier questions on later ones. b. There is a risk of interaction when two or more questions are related.

9). Question Groups and Transitions. a. The last question-sequencing guideline suggests arranging questions to minimize shifting in subject matter and frame of reference.
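
To make the branched-question idea from item C.1.a concrete, the control-flow sketch below lets the participant’s previous answer determine which question is asked next. The question text, identifiers, and branch rules are hypothetical.

    # Minimal sketch of a branched measurement-question sequence: the answer to
    # one question determines the next question asked. All content is hypothetical.
    questions = {
        "Q1": {"text": "Did you purchase a vehicle in the past year?",
               "branch": {"Yes": "Q2", "No": "Q4"}},
        "Q2": {"text": "Was the vehicle new or used?",
               "branch": {"New": "Q3", "Used": "Q3"}},
        "Q3": {"text": "How satisfied are you with the purchase?",
               "branch": {}},   # empty branch: this question group ends here
        "Q4": {"text": "Do you expect to purchase a vehicle in the next year?",
               "branch": {}},
    }

    def next_question(current_id, answer):
        # Return the id of the next question given the participant's answer,
        # or None when the sequence (or this question group) ends.
        return questions[current_id]["branch"].get(answer)

    # A participant who answers "No" to Q1 skips directly to Q4.
    print(next_question("Q1", "No"))   # prints: Q4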

D. Instructions. 1). Instructions to the interviewer or participant attempt to ensure all participants are treated equally, thus avoiding building error into the results. 2). Two principles form the foundation for good instructions: clarity and courtesy. a. Instruction language needs to be unfailingly simple and polite. 3). Instruction topics include: a. Terminating an unqualified participant. b. Terminating a discontinued interview.


c. Moving between questions on an instrument. d. Disposing of a completed questionnaire.

E. Conclusion. 1). The role of the conclusion is to leave the participant with the impression that his or her involvement has been valuable.

F. Overcoming Instrument Problems. 1). The researcher can do several things to improve survey results: a. Build rapport with the participant. b. Redesign the questioning process. c. Explore alternative response strategies. d. Use methods other than surveying to secure the data. e. Pretest all the survey elements. 2). Build Rapport with the Participant. a. Most information can be secured by direct undisguised questioning if rapport has been developed. b. The assurance of confidentiality can increase participants’ motivation. 3). Redesign the Questioning Process. a. You can redesign the questioning process to improve the quality of answers by modifying the administrative process and the response strategy. 4). Explore Alternative Response Strategies. a. When drafting the original question, try developing positive, negative, and neutral versions of each type of question. b. Minimize nonresponse to particular questions by recognizing the sensitivity of certain topics.

G. The Value of Pretesting. 1). The final step toward improving survey results is pretesting—the assessment of questions and instruments before the start of a study. 2). Reasons for pretesting include: a. Discovering ways to increase participant interest. b. Increasing the likelihood that participants will remain engaged to the completion of the survey. c. Discovering question content, wording, and sequencing problems. d. Discovering target question groups where researcher training is needed. e. Exploring ways to improve the overall quality of survey data.