Overview of Multivariate Analysis Methods

By Hair, J.F., Bush, R.P., Ortinau, D.J.

Edited by Paul Ducham


If we use multivariate techniques to explain or predict the dependent variable on the basis of two or more independent variables, we are attempting to analyze and understand dependence. A dependence method can be defined as one in which a variable is identified as the dependent variable to be predicted or explained by other independent variables. Dependence techniques include multiple regression analysis, discriminant analysis, and MANOVA. For example, many businesses today are very interested in predicting dependent variables like customer loyalty, or high-volume customers versus light users (e.g., heavy vs. light consumers of Starbucks coffee), on the basis of numerous independent variables. Multiple discriminant analysis is a dependence technique that predicts customer usage (frequent beer drinker vs. nondrinker) based on several independent variables, such as how much is purchased, how often it is purchased, and age of purchaser.

In contrast, an interdependence method is one in which no single variable or group of variables is defined as being independent or dependent. In this case, the multivariate procedure involves the analysis of all variables in the data set simultaneously. The goal of interdependence methods is to group respondents or objects together. In this case, no one variable is to be predicted or explained by the others. Cluster analysis, factor analysis, and multidimensional scaling are the most frequently used interdependence techniques. For example, a marketing manager who wants to identify various market segments or clusters of fast-food customers (burgers, pizza, chicken customers) might utilize these techniques.


Just as with other approaches to data analysis, the nature of the measurement scales will determine which multivariate technique is appropriate to analyze the data. Selection of the appropriate multivariate method requires consideration of the types of measures used for both independent and dependent sets of variables. When the dependent variable is measured nonmetrically, the appropriate method is discriminant analysis. When the dependent variable is measured metrically, the appropriate techniques are multiple regression, ANOVA, and MANOVA. Multiple regression and discriminant analysis typically require metric independents, but they can use nonmetric dummy variables. ANOVA and MANOVA are appropriate with nonmetric independent variables. The interdependence techniques of factor analysis and cluster analysis are most frequently used with metrically measured variables, but nonmetric adaptations are possible.

In this chapter we will consider factor analysis, cluster analysis, discriminant analysis, and conjoint analysis. These statistical techniques help us to analyze marketing problems that have multiple variables. Multivariate statistical techniques help marketers make better decisions than is possible with univariate or bivariate statistics. But regardless of which type of technique is selected, the outcome of the analysis is key. Review the nearby Global Insights box to see how outcomes can change across markets.


Factor analysis is a multivariate statistical technique that is used to summarize the information contained in a large number of variables into a smaller number of subsets or factors. The purpose of factor analysis is to simplify the data. With factor analysis there is no distinction between dependent and independent variables; rather, all variables under investigation are analyzed together to identify underlying factors.

Many problems facing businesses today are often the result of a combination of several variables. For example, if the local McDonald’s franchisor is interested in assessing customer satisfaction, many variables of interest must be measured. Variables such as freshness of the food, speed of service, taste, food temperature, cleanliness, and how friendly and courteous the personnel are would all be measured by means of a number of rating questions.

Let’s look at an intuitive example of factor analysis. Customers were asked to rate a fast-food restaurant on six characteristics. On the basis of the pattern of their responses, these six measures were combined into two summary measures, or factors: service quality and food quality (see Exhibit 17.3). Marketing researchers use factor analysis to summarize the information contained in a large number of variables into a smaller number of factors. The result is that managers can then simplify their decision making because they have to consider only two broad areas—service quality and food quality—instead of six. Our example has reduced six variables to two factors, but in typical business situations marketing researchers use factor analysis to reduce, for example, 50 variables to only 10 or fewer factors, a much simpler problem to handle.

The starting point in interpreting factor analysis is factor loadings. Factor loading refers to the correlation between each of the original variables and the newly developed factors. Each factor loading is a measure of the importance of the variable in measuring each factor. Factor loadings, like correlations, can vary from +1.0 to –1.0. If variable A4 (food taste) is closely associated with factor 2, the factor loading or correlation would be high. The statistical analysis associated with factor analysis would produce factor loadings between each factor and each of the original variables. An illustration of the output of this statistical analysis is given in Exhibit 17.4. Variables A1, A2, and A3 are highly correlated with factor 1 and variables A4, A5, and A6 are highly correlated with factor 2. An analyst would say that variables A1, A2, and A3 have “high loadings” on factor 1, which means that they help define that factor. Similarly, an analyst would say that variables A4, A5, and A6 have “high loadings” on factor 2.
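The idea behind factor loadings can be sketched in Python. This is a hedged illustration, not the book's SPSS procedure: the synthetic ratings data, the seed, and the use of scikit-learn's `FactorAnalysis` (with the `rotation="varimax"` option available in recent versions) are all our assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical ratings: 200 customers rate 6 attributes (A1-A6).
# By construction, A1-A3 share a "service" factor and A4-A6 a "food" factor.
rng = np.random.default_rng(42)
service = rng.normal(size=(200, 1))
food = rng.normal(size=(200, 1))
ratings = np.hstack([
    service + rng.normal(scale=0.4, size=(200, 3)),  # A1, A2, A3
    food + rng.normal(scale=0.4, size=(200, 3)),     # A4, A5, A6
])

# Extract two factors; the loadings matrix has one row per variable
# and one column per factor, analogous to Exhibit 17.4
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(ratings)
loadings = fa.components_.T
print(np.round(loadings, 2))
```

Variables built from the same latent factor should show high loadings in the same column and near-zero loadings in the other, mirroring the pattern the text describes.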

The next step in factor analysis is to name the resulting factors. The researcher examines the variables that have high loadings on each factor. There often will be a certain consistency among the variables that load high on a given factor. For example, the ratings on waiting time (A1), cleanliness (A2), and friendly personnel (A3) all load on the same factor. We have chosen to name this factor service quality because the three variables deal with some aspect of a customer’s service experience with the restaurant. Variables A4, A5, and A6 all load highly on factor 2, which we named food quality. Naming factors is often a subjective process of combining intuition with an inspection of the variables that have high loadings on each factor.

A final aspect of factor analysis concerns the number of factors to retain. While our restaurant example dealt with two factors, many situations can involve anywhere from one factor to as many factors as there are variables. Deciding on how many factors to retain is a very complex process because there can be more than one possible solution to any factor analysis problem. A discussion of the technical aspects of this part of factor analysis is beyond the scope of this book, but we will provide an example of how an analyst can decide how many factors to retain.

An important measure to consider in deciding how many factors to retain is the percentage of the variation in the original data that is explained by each factor. A factor analysis computer program will produce a table of numbers that will give the percentage of variation explained by each factor. A simplified illustration of these numbers is presented in Exhibit 17.5. In this example, we would definitely keep the first two factors, because they explain a total of 75.7 percent of the variability in the five measures. The last three factors combined explain only 24.3 percent of the variation, and each accounts for only a small portion of the total variance. Thus, they contribute little to our understanding of the data and would not be retained. Most marketing researchers stop factoring when additional factors no longer make sense, because the variance they explain often contains a large amount of random and error variance.
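A table like Exhibit 17.5 can be approximated in Python. This is a sketch under our own assumptions (synthetic five-variable data built from two underlying factors, scikit-learn's `PCA` as a stand-in for the extraction step):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 5 rating variables driven mainly by 2 underlying factors
rng = np.random.default_rng(7)
f = rng.normal(size=(300, 2))
X = np.hstack([f[:, [0]], f[:, [0]], f[:, [1]], f[:, [1]], f[:, [1]]])
X = X + rng.normal(scale=0.5, size=(300, 5))

# Percentage of variance explained by each component, plus running total
pca = PCA().fit(X)
pct = pca.explained_variance_ratio_ * 100
for i, (p, c) in enumerate(zip(pct, np.cumsum(pct)), start=1):
    print(f"Factor {i}: {p:5.1f}% of variance (cumulative {c:5.1f}%)")
```

With data like this, the first two components account for most of the variance and the rest contribute little, which is the pattern that justifies retaining only two factors.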

Factor Analysis Applications in Marketing Research

While our fast-food example illustrated the power of factor analysis in simplifying customer perceptions toward a fast-food restaurant, the technique has many other important applications in marketing research:

• Advertising. Factor analysis can be used to better understand media habits of various customers.

• Pricing. Factor analysis can help identify the characteristics of price-sensitive and prestige-sensitive customers.

• Product. Factor analysis can be used to identify brand attributes that influence consumer choice.

• Distribution. Factor analysis can be employed to better understand channel selection criteria among distribution channel members.

SPSS Application—Factor Analysis of Restaurant Perceptions

The value of factor analysis can be demonstrated with our Santa Fe Grill database. When we look at our database we have many variables that are measured metrically. Let’s look first at variables X12 to X21, which are customers’ perceptions of The Santa Fe Grill and its competitor Jose’s on 10 dimensions. The task is to determine if we can simplify our understanding of the perceptions of the restaurant by reducing the number of restaurant perceptions variables to fewer than 10. If this is possible, the owners of the Santa Fe Grill can simplify their decision making by focusing on fewer aspects of the restaurants in developing competitive profiles as well as appropriate marketing strategies.

The SPSS click-through sequence is Analyze → Data Reduction → Factor, which leads to a dialog box where you select variables X12–X21. After you have put these variables into the Variables box, look at the data analysis options below. First click on the Descriptives box and unclick the Initial Solution box because we do not need it at this point. Now click Continue to return to the previous dialog box. Next go to the Extraction box. In this one you leave the default of principal components and unclick the unrotated factor solution under Display. We will keep the other defaults, so now click the Continue box. Next go to the Rotation box. The default is None. We want to rotate, so click on Varimax as your rotational choice and then Continue. Finally, go to the Options box and click Sorted by Size, and then change the Suppress Absolute Values from .10 to .30. These last choices eliminate unneeded information, thus making the solutions printout much easier to read. We do not need Scores at this point, so we can click on OK at the top of the dialog box to execute the factor analysis. Exhibit 17.6 shows examples of some of the dialog boxes for running this factor analysis.
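The "Suppress Absolute Values below .30" option can be imitated in a few lines of Python. The loadings matrix and variable labels below are hypothetical, chosen only to show the presentation step:

```python
import numpy as np

# Hypothetical rotated loadings for 4 variables on 2 factors
variables = ["X12 Friendly Employees", "X21 Speed of Service",
             "X18 Excellent Food Taste", "X15 Fresh Food"]
loadings = np.array([[0.92, 0.10],
                     [0.86, 0.18],
                     [0.12, 0.90],
                     [0.05, 0.84]])

# Blank out loadings with absolute value below .30, as SPSS does
suppressed = np.where(np.abs(loadings) >= 0.30, loadings, np.nan)
for name, row in zip(variables, suppressed):
    cells = ["  .  " if np.isnan(v) else f"{v:5.2f}" for v in row]
    print(f"{name:26s}", *cells)
```

The printout shows only the high loadings, so each variable's factor assignment can be read at a glance.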

The SPSS output for a factor analysis of the restaurant perceptions is shown in Exhibit 17.7. The first table you will see on the output is the Rotated Component Matrix table. Labels for the 10 variables analyzed (X12–X21) are shown in the left column. To the right are four columns of numbers containing the factor loadings for the four factors that resulted from the factor analysis of restaurant perceptions. By suppressing loadings under .30 we see only three numbers under column one (Component 1, or factor 1), three numbers under column two (Component 2, or factor 2), and two numbers under columns three and four (Components 3 and 4). For example, X12—Friendly Employees has a loading of .923 on factor 1, and X18—Excellent Food Taste has a loading of .895 on factor 2. We prefer a factor solution in which each original variable loads on only one factor, as in our example. But in many cases this does not happen.

Before trying to name the factors we must decide if four factors are enough or if we need more. Our objective here is to have as few factors as possible yet account for a reasonable amount of the information contained in the 10 original variables. To determine the number of factors, we look at information in the Total Variance Explained table (bottom of Exhibit 17.7). It shows that the four factors accounted for 83.600 percent of the variance in the original 10 variables. This is a substantial amount of the information to account for, and we have reduced the number of original variables from 10 to four. So let’s consider four factors acceptable and see if our factors seem logical.

To determine if our factors are logical, look at the information in the Rotated Component Matrix (Exhibit 17.7). First, examine which original variables combine to make each new factor. Factor 1 is made up of X12—Friendly Employees, X21—Speed of Service, and X19—Knowledgeable Employees. Factor 2 is made up of X18—Excellent Food Taste, X15—Fresh Food, and X20—Proper Food Temperature. Factor 3 is made up of X14—Large Size Portions and X16—Reasonable Prices. Factor 4 is made up of X17—Attractive Interior and X13—Fun Place to Eat. To analyze the logic of the combinations we look at the variables with the highest loadings (largest absolute size). That is why we suppressed loadings less than .30. Factor 1 seems to be related to service, whereas factor 2 is related to food. Similarly, factor 3 seems to be related to value, whereas factor 4 is related to atmosphere. Thus, we have developed a four-factor solution that accounts for a substantial amount of variance and shows logic in the combinations of the 10 original variables. With this four-factor solution, instead of having to think about 10 separate variables, the owners of the Santa Fe Grill can now think about only four variables—service, food, value, and atmosphere—when they are developing their marketing strategies.

Using Factor Analysis with Multiple Regression

Sometimes we may want to use the results of a factor analysis with another multivariate technique, such as multiple regression. This is most helpful when we use factor analysis to combine a large number of variables into a smaller set of variables. We can demonstrate this with the previous example, where we combined the 10 restaurant perceptions into four factors.

Without factor analysis, we must consider customer perceptions on 10 separate characteristics. But if we use the results of our factor analysis we have to consider only the four characteristics (factors) developed in our factor solution. To use the resulting four factors in a multiple regression, we first must calculate factor scores. Factor scores are composite scores estimated for each respondent on each of the derived factors. Return to the SPSS dialog box for the four-factor solution (if you have left the previous factor solution, follow the same instructions as before to get to this dialog box). Looking at the bottom of this dialog box you see the Scores box, which we did not use before. Click on this box and then click Save as Variables. When you do this there will be more options, but just click the Regression option. Now click Continue and then OK and you will calculate the factor scores. The result will be four factor scores for each of the 405 respondents. They will appear at the far right side of your original database and will be labeled fac1_1 (scores for factor 1), fac2_1 (scores for factor 2), and so on. See Exhibit 17.8 to view the factor scores.

Now we want to see if perceptions of the restaurant customers, as measured by the factors, are related to satisfaction. In this case, the single dependent metric variable is X22—Satisfaction and the independent variables are the factor scores. The SPSS click-through sequence is Analyze → Regression → Linear, which leads you to a dialog box where you select the variables. You should select X22 as the dependent and fac1_1, fac2_1, fac3_1, and fac4_1 as the independents. Note that the fac1_1, fac2_1, fac3_1, and fac4_1 are the factor scores of the new variables you created in the previous example. Now click on the Statistics button and check Descriptives. There are several additional types of analysis that can be selected, but at this point we will use the program defaults. Click OK at the top right of the dialog box to execute the regression. The dialog boxes for this regression are shown in Exhibit 17.9.
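The same two-step workflow (estimate factor scores, then regress satisfaction on them) can be sketched in Python. Everything here is an assumption for illustration: the synthetic data, the number of respondents, and the use of scikit-learn's `FactorAnalysis` and `LinearRegression` in place of SPSS.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Hypothetical data: two latent factors drive 6 ratings and satisfaction
f = rng.normal(size=(405, 2))
ratings = np.repeat(f, 3, axis=1) + rng.normal(scale=0.5, size=(405, 6))
satisfaction = 0.6 * f[:, 0] + 0.4 * f[:, 1] + rng.normal(scale=0.5, size=405)

# Step 1: factor scores, one composite score per respondent per factor
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
scores = fa.fit_transform(ratings)

# Step 2: regress satisfaction on the factor scores
reg = LinearRegression().fit(scores, satisfaction)
print("R-square:", round(reg.score(scores, satisfaction), 3))
print("Betas:", np.round(reg.coef_, 3))
```

Because factor scores are (nearly) uncorrelated with one another, the betas can be compared directly to judge each factor's relative importance, which is the point made with the SPSS output below.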

The Descriptive Statistics and bivariate correlations for the SPSS regression with factor scores are shown in Exhibit 17.10. Note that the factor scores are standardized with a mean of zero and a standard deviation of one. Moreover, in the correlations table you can see that the factor scores have correlations of zero with each other, but in all cases are significantly related to dependent variable satisfaction.

The Model Summary table in Exhibit 17.11 reveals that the R-square is .490 and the ANOVA table indicates it is statistically significant at the .000 level. This means that 49 percent of the variation in satisfaction (dependent variable) can be explained from the four independent variables—the factor scores. Footnote a underneath the table tells you that the regression equation included a constant and that the predictor (independent) variables were factor scores for the four variables (Atmosphere, Value, Service, and Food).

To determine if one or more of the factor score variables are significant predictors of satisfaction we must examine the Coefficients table (Exhibit 17.11). Looking at the Standardized Coefficients Beta column reveals that Factor 1—Service is .388, Factor 2—Food is .498, Factor 3—Value is .238, and Factor 4—Atmosphere is .186. The statistical significance is .000 for all four factors. Thus, we know from this regression analysis that perceptions of all restaurant factors are strong predictors of satisfaction with the two restaurants, with Factor 2 being somewhat better than the other three factors since the size of the Factor 2 Beta is the largest. Furthermore, interpreting only four variables in developing a marketing strategy is much easier than is dealing with the original 10 independent variables.

In this section we have demonstrated how you can use one multivariate technique—factor analysis—with another technique—regression—to better understand your data. It is also possible, however, to use other multivariate techniques in combination. For example, if your dependent variable is nonmetric, such as gender, then you could use discriminant analysis in a manner similar to our use of regression. Also, you could use cluster analysis in combination with regression or discriminant analysis. This will be clearer after we have covered these other techniques in this chapter.

Exhibit 17.3

Exhibit 17.4

Exhibit 17.5

Exhibit 17.6

Exhibit 17.7

Exhibit 17.8

Exhibit 17.9

Exhibit 17.10

Exhibit 17.11



Cluster analysis is another interdependence multivariate method. As the name implies, the basic purpose of cluster analysis is to classify or segment objects (customers, products, market areas) into groups so that objects within each group are similar to one another on a variety of variables. Cluster analysis seeks to classify segments or objects such that there will be as much similarity within segments and as much difference between segments as possible. Thus, this method strives to identify natural groupings or segments among many variables without designating any of the variables as a dependent variable.

We will start our discussion of cluster analysis with this intuitive example. A fast-food chain wants to open an eat-in restaurant in a new, growing suburb of a major metropolitan area. Marketing researchers have surveyed a large sample of households in this suburb and collected data on characteristics such as demographics, lifestyles, and expenditures on eating out. The fast-food chain wants to identify one or more household segments that are likely to visit its new restaurant. Once this segment is identified, advertising and services will be tailored to them.

A target segment can be identified by conducting a cluster analysis of the data researchers have gathered. Results of the cluster analysis will identify segments, each containing households that have similar characteristics but differ considerably from the other segments. Exhibit 17.12 identifies four potential clusters or segments for our fast-food chain. As our intuitive example illustrates, the growing suburb contains households that seldom visit restaurants at all (cluster 1), households that tend to frequent dine-in restaurants exclusively (cluster 2), households that tend to frequent fast-food restaurants exclusively (cluster 3), and households that frequent both dine-in and fast-food restaurants (cluster 4). By examining the characteristics associated with each of the clusters, management can decide which clusters to target and how best to reach them through marketing communications.

Statistical Procedures for Cluster Analysis

Several cluster analysis procedures are available, each based on a somewhat different set of complex computer programs. The general approach in each procedure is the same, however, and involves measuring the similarity between objects on the basis of their ratings on the various characteristics. The degree of similarity between objects is usually determined through a distance measure. This process can be illustrated with our earlier example involving two variables:

V1 = Frequency of eating out at restaurants

V2 = Frequency of eating out at fast-food restaurants

Data on V1 and V2 are shown on the two-dimensional plot in Exhibit 17.12. Five individuals are plotted with the letters A, B, C, D, and E. Each letter represents the position of one consumer with regard to the two variables V1 and V2. The distance between any pair of letters is positively related to how similar the corresponding individuals are when the two variables are considered together. Thus, individual A is more like B than like either C, D, or E. As can be seen, four distinct clusters are identified in the exhibit.
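The distance measure behind this similarity judgment is easy to compute. The coordinates below are hypothetical positions for consumers A through E, chosen only to illustrate the calculation:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical positions of consumers A-E on V1 and V2
points = {"A": (1.0, 5.0), "B": (1.5, 4.5), "C": (5.0, 5.0),
          "D": (5.0, 1.0), "E": (1.0, 1.0)}
X = np.array(list(points.values()))

# Euclidean distance between every pair of consumers
D = squareform(pdist(X))
print("Distance A-B:", round(D[0, 1], 2))
print("Distance A-C:", round(D[0, 2], 2))
```

With these coordinates, A sits much closer to B than to C, so A and B would be judged more similar, exactly the reasoning in the paragraph above.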

This analysis can inform marketing management of the proposed new fast-food restaurant that customers are to be found among those who tend to eat at both dine-in and fast-food restaurants (cluster 4). To develop a marketing strategy to reach this cluster of households, management would like to identify demographic, psychographic, and behavioral profiles of the individuals in cluster 4.

Clusters are often developed from scatter plots, as we have done with our fast-food restaurant example. This is a complex trial-and-error process. Fortunately, computer algorithms are available and must be used if the clustering is to be done in an efficient, systematic fashion. While the mathematics are beyond the scope of this chapter, the algorithms are all based on the idea of starting with some arbitrary cluster boundaries and modifying the boundaries until a point is reached where the average distances within clusters are as small as possible relative to the average distances between clusters.

Cluster Analysis Applications in Marketing Research

While our fast-food example illustrated how cluster analysis segmented groups of households, it has many other important applications in marketing research:

New-product research. Clustering brands can help a firm examine its product offerings relative to competition. Brands in the same cluster often compete more fiercely with each other than with brands in other clusters.

Test marketing. Cluster analysis groups test cities into homogeneous clusters for test marketing purposes.

Buyer behavior. Cluster analysis can be employed to identify similar groups of buyers who have similar choice criteria.

Market segmentation. Cluster analysis can develop distinct market segments on the basis of geographic, demographic, psychographic, and behavioral variables.

SPSS Application—Cluster Analysis

The value of cluster analysis can be demonstrated easily with our restaurant database. The task is to determine if there are subgroups/clusters of the 405 respondents to the customer surveys of the two restaurants. In selecting the variables to use in cluster analysis, we must use only variables that are metrically measured and logically related. There are three logical sets of metric variables to consider for the cluster analysis: the lifestyle questions (X1–X11), the restaurant perceptions questions (X12–X21), and the three relationship questions (X22–X24; X25 is nonmetric).

The owners of the Santa Fe Grill have been asking if there are subgroups of customers that exhibit different levels of loyalty to the restaurant. To answer this question, we must define what we mean by loyalty. The loyalty construct could consist of variables X22–X24, or it could include only X23 and X24, based on the assumption that satisfaction (X22) differs from loyalty. While the definition of loyalty can be debated, let’s use only variables X23 and X24 as our measure of the loyalty construct. Thus, we will apply cluster analysis using variables X23 and X24 to find loyalty clusters for the restaurant customers.

The SPSS click-through sequence is Analyze → Classify → Hierarchical Cluster, which leads to a dialog box where you select variables X23 and X24. After you have put these variables into the Variables box, look at the other options below. Unclick the Plots check in the Display window; this is not needed and will speed up the processing time. You do not need to change anything in the Statistics and Plots options below. Click on the Method box and select Ward’s under the Cluster Method (you have to scroll to the bottom of the list), but use the default of squared Euclidean distances under Measure. We do nothing with the Save option at this point, so you can click OK at the top of the dialog box to execute the cluster analysis. Exhibit 17.13 shows two of the SPSS dialog boxes for running the cluster analysis.

The SPSS output has a table called Agglomeration Schedule, a portion of which is shown in Exhibit 17.14. This table has lots of numbers in it, but we look only at the numbers in the Coefficients column (middle of table). Go to the bottom of the table and look at the numbers in the Coefficients column (inside the box). The number at the bottom will be the largest, and the numbers get smaller as you move up the table. The bottom number is 1073.042, the one right above it is 420.974, and the next above is 245.556. The coefficients in this column show how much you reduce your error by moving from one cluster to two clusters, from two clusters to three clusters, and so on. As you move from one cluster to two clusters there always will be a large drop (difference) in the coefficient of error, and from two clusters to three clusters another drop. Each time you move up the column the drop (difference) in the numbers will get smaller. What you are looking for is where the difference between two numbers gets substantially smaller. This means that going from, say, three clusters to four clusters does not reduce your error very much. You will note that in this case the change is from 245.556 to 179.094. For this solution, we definitely would choose three clusters over four because the difference between the numbers as you go from three clusters to four clusters is getting much smaller. We might also choose to use only two clusters instead of three. We could do this because the error is reduced a large amount (>50%) by going from one to two clusters, and two clusters likely will be easier to understand than three.
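An analogue of the Agglomeration Schedule can be produced with SciPy's hierarchical clustering. This is a sketch on synthetic loyalty ratings (the two group centers, sizes, and seed are our assumptions); SciPy's Ward merge heights are not numerically identical to SPSS's coefficients, but the "look for the big drop" reading is the same:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(5)
# Hypothetical loyalty ratings (X23, X24) with two underlying groups
low = rng.normal([3.5, 3.0], 0.6, size=(60, 2))
high = rng.normal([5.5, 5.0], 0.6, size=(40, 2))
X = np.vstack([low, high])

# Ward's method; column 2 of the linkage matrix holds the merge height
Z = linkage(X, method="ward")
# Reading the last rows bottom-up mirrors the Agglomeration Schedule:
# coefficients for the 1-, 2-, 3-, and 4-cluster stages
coefs = Z[-4:, 2][::-1]
print(np.round(coefs, 2))
```

With two well-separated groups planted in the data, the coefficient drops sharply going from one cluster to two and much less after that, pointing to a two-cluster solution.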

Let’s focus on the two-cluster solution because it is easier to understand. Before trying to name the two clusters, let’s make sure they are significantly different. To do so, you must first create a new variable that identifies which cluster each of the 405 respondents has been assigned to by the cluster analysis. Go back to the Cluster dialog box and click on the Save box. When you do this, you can choose to create a new cluster membership variable for a single solution or for a range of solutions. Choose the single solution, put a 2 in the box, and a group membership variable for the two-group solution will be created when you run the cluster program again. The new group membership variable will be the new variable in your data set at the far-right-hand side of your data labeled clu2_1. It will show a 1 for respondents in Cluster One and a 2 for respondents assigned to Cluster Two, as shown in Exhibit 17.15.

Now you can run a one-way ANOVA between the two clusters to see if they are statistically different. The SPSS click-through sequence is Analyze → Compare Means → One-Way ANOVA. Next you put variables X23 and X24 in the Variables box and the new Cluster Membership variable in the Factor box. This will be the new variable in your data set labeled clu2_1. Next click on the Options box and then on Descriptive under Statistics, and Continue. Now click OK and you will get an output with a Descriptives and an ANOVA table. The dialog boxes for running this procedure are shown in Exhibit 17.16.
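The cut-into-two-clusters step and the follow-up ANOVA can both be sketched in Python. The data are the same kind of hypothetical loyalty ratings as before; `fcluster` plays the role of SPSS's "Save... single solution: 2", and `scipy.stats.f_oneway` plays the role of the One-Way ANOVA procedure:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import f_oneway

rng = np.random.default_rng(9)
# Hypothetical loyalty ratings for a less-loyal and a more-loyal group
low = rng.normal([3.8, 3.1], 0.6, size=(60, 2))
high = rng.normal([5.7, 5.1], 0.6, size=(40, 2))
X = np.vstack([low, high])

# Cut the Ward's tree at two clusters to get a membership variable
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")

# One-way ANOVA on each variable across the two clusters
results = {}
for j, name in enumerate(["likely to return", "likely to recommend"]):
    groups = [X[labels == k, j] for k in (1, 2)]
    stat, p = f_oneway(*groups)
    results[name] = p
    print(f"{name}: F = {stat:.1f}, Sig. = {p:.4f}")
```

Small significance values for both variables confirm that the two clusters really do differ on the loyalty measures, which is the check the text performs next.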

The SPSS output for the ANOVA of the cluster solution is shown in Exhibit 17.17. When you look at the Descriptives table you will see the sample sizes for each cluster (N) and the means of each variable for each cluster, as well as a lot of other numbers we will not use. For example, the sample size for Cluster One is 271 and for Cluster Two it is 134. Similarly, the mean for likely to return in Cluster One is 3.86 and in Cluster Two it is 5.68, and the mean for likely to recommend in Cluster One is 3.13 and in Cluster Two it is 5.12.

We interpret the two clusters by looking at the means of the variables for each of the groups. By looking at the means we see that respondents in Cluster One are relatively less likely to return and less likely to recommend (lower mean values), and therefore less loyal. In contrast, Cluster Two respondents are relatively more likely to return and more likely to recommend the restaurants (higher mean values), and therefore more loyal. Thus, Cluster Two respondents have much more favorable perceptions of the restaurants than do respondents in Cluster One. We therefore can define Cluster Two as the more highly loyal group. A final interesting conclusion is that there are substantially more customers that are not loyal (N = 271) than there are loyal customers (N = 134).

Next, look at the ANOVA table to see if the differences between the group means are statistically significant. You will see that for both variables the differences between the means of the two clusters are highly significant (Sig. = .000) and therefore statistically different. Thus, we have two very different groups of restaurant customers with Cluster Two being relatively loyal to the restaurants and Cluster One much less loyal. Based on the mean values and significance levels, we will name Cluster One “Low Loyalty” and Cluster Two “Moderately High Loyalty.”

Exhibit 17.12

Exhibit 17.13

Exhibit 17.14

Exhibit 17.15

Exhibit 17.16

Exhibit 17.17


Discriminant analysis is a multivariate technique used for predicting group membership on the basis of two or more independent variables. There are many situations where the marketing researcher’s purpose is to classify objects or groups by a set of independent variables. Thus, the dependent variable in discriminant analysis is nonmetric or categorical. In marketing, consumers are often categorized on the basis of heavy versus light users of a product, or viewers versus nonviewers of a media vehicle such as a television commercial. Conversely, the independent variables in discriminant analysis are metric and often include characteristics such as demographics and psychographics. Additional insights into discriminant analysis can be found in the nearby A Closer Look at Research (Using Technology) box.

Let’s begin our discussion of discriminant analysis with an intuitive example. A fast-food restaurant, Back Yard Burgers (BYB), wants to see whether a lifestyle variable such as eating a nutritious meal (X1) and a demographic variable such as household income (X2) are useful in distinguishing households visiting their restaurant from those visiting other fast-food restaurants. Marketing researchers have gathered data on X1 and X2 for a random sample of households that eat at fast-food restaurants, including Back Yard Burgers. Discriminant analysis procedures would plot these data on a two-dimensional graph, as shown in Exhibit 17.18.

The scatter plot in Exhibit 17.18 yields two groups, one containing primarily Back Yard Burgers’ customers and the other containing primarily households that patronize other fast-food restaurants. From this example, it appears that X1 (Lifestyle) and X2 (Income) are critical discriminators of fast-food restaurant patronage. Although the two areas overlap, the extent of the overlap does not seem to be substantial. This minimal overlap between groups, as in Exhibit 17.18, is an important requirement for a successful discriminant analysis. What the plot tells us is that Back Yard Burgers customers are more nutrition conscious and have relatively higher incomes.

Let us now turn to the fundamental statistics of discriminant analysis. Remember, the purpose of discriminant analysis is to predict a categorical variable. From a statistical perspective, this involves finding a linear combination of the independent variables, called the discriminant function, that shows large differences in group means. Thus, discriminant analysis is a statistical tool for estimating such a linear combination and using it to predict group membership.

A linear function can be developed with our fast-food example. We will use a two-group discriminant analysis example in which the dependent variable, Y, is measured on a nominal scale (i.e., patrons of Back Yard Burgers versus other fast-food restaurants). Again, the marketing manager believes it is possible to predict whether a customer will patronize a fast-food restaurant on the basis of lifestyle (X1 ) and income (X2 ). Now the researcher must find a linear function of the independent variables that shows large differences in group means. The plots in Exhibit 17.18 show this is possible.
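Although the chapter estimates the function with SPSS, the underlying logic can be sketched in a few lines of Python. The sketch below implements Fisher's classic two-group discriminant estimate, where the weight vector is proportional to the inverse of the pooled within-group scatter matrix times the difference in group means. All of the household data are invented for illustration; they are not the BYB sample.

```python
# Fisher's two-group discriminant function: w is proportional to
# inverse(Sw) * (m1 - m2), where Sw is the pooled within-group scatter.
# All data below are hypothetical illustrations, not the BYB sample.

def mean(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def scatter(rows, m):
    # 2 x 2 within-group scatter matrix of deviations from the group mean
    s = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - m[0], r[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def inv2(a):
    # inverse of a 2 x 2 matrix
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

# hypothetical scores: [lifestyle X1 (1-7 scale), income X2 ($000s)]
byb   = [[6, 55], [7, 52], [5, 60], [6, 58]]   # Back Yard Burgers patrons
other = [[3, 42], [2, 45], [4, 38], [3, 40]]   # other fast-food patrons

m1, m2 = mean(byb), mean(other)
s1, s2 = scatter(byb, m1), scatter(other, m2)
sw_inv = inv2([[s1[i][j] + s2[i][j] for j in range(2)] for i in range(2)])
diff = [m1[0] - m2[0], m1[1] - m2[1]]
w = [sw_inv[0][0] * diff[0] + sw_inv[0][1] * diff[1],
     sw_inv[1][0] * diff[0] + sw_inv[1][1] * diff[1]]

def z(x):
    # discriminant (Z) score for one household
    return w[0] * x[0] + w[1] * x[1]

# classify by the midpoint between the two group centroids
cutoff = (z(m1) + z(m2)) / 2
```

With these well-separated hypothetical groups, every household falls on the correct side of the cutoff, which is the two-group analogue of the minimal overlap shown in the scatter plot.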

The discriminant score, or the Z score, is the basis for predicting to which group a particular individual belongs and is determined by a linear function. This Z score will be derived for each individual by means of the following equation:

Zi = b1X1i + b2X2i + ... + bnXni

where

Zi = ith individual’s discriminant score

bn = Discriminant coefficient for the nth variable

Xni = ith individual’s value on the nth independent variable

Discriminant weights (bn), or discriminant function coefficients, are estimates of the discriminatory power of a particular independent variable. These coefficients are computed by means of the discriminant analysis software, such as SPSS. The size of the coefficients associated with a particular independent variable is determined by the variance structure of the variables in the equation. Independent variables with large discriminatory power will have large weights, and those with little discriminatory power will have small weights.

Returning to our fast-food example, suppose the marketing researcher finds the standardized weights or coefficients in the equation to be

Z = b1X1 + b2X2

  = .32X1 + .47X2

These results show that income (X2) with a coefficient of .47 is the more important variable in discriminating between those patronizing Back Yard Burgers and those who patronize other fast-food restaurants. The lifestyle variable (X1) with a coefficient of .32 also represents a variable with good discriminatory power.
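Applying the estimated function is simple arithmetic. The two customers' standardized X1 and X2 values below are hypothetical, invented only to show the calculation:

```python
# Z = .32*X1 + .47*X2, using the standardized weights from the text;
# the two customers' standardized X1/X2 values are hypothetical.

def z_score(x1, x2, b1=0.32, b2=0.47):
    return b1 * x1 + b2 * x2

nutrition_conscious = z_score(1.2, 0.8)    # above average on both variables
budget_diner        = z_score(-0.9, -1.1)  # below average on both variables

# the higher discriminant score points toward the BYB patron group
```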

Another important goal of discriminant analysis is classification of objects or individuals into groups. In our example, the goal was to correctly classify consumers into Back Yard Burgers patrons and those who patronize other fast-food restaurants. To determine whether the estimated discriminant function is a good predictor, a classification (prediction) matrix is used. The classification matrix in Exhibit 17.19 shows that the discriminant function correctly classified 214 of the 216 original BYB patrons (99.1%) and all 80 of the nonpatrons (100%). The classification matrix also shows that the correctly classified consumers (214 patrons and 80 nonpatrons, or 294 out of a total of 296) represent 99.3 percent correct classification. This resulting percentage is much higher than would be expected by chance.
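The hit-ratio arithmetic behind a classification matrix can be checked directly; the counts below are the ones quoted for the BYB example:

```python
# Classification (prediction) matrix counts from the BYB example:
# actual group -> (correctly classified, group total)
matrix = {
    "BYB patrons": (214, 216),
    "nonpatrons":  (80, 80),
}

correct = sum(right for right, _ in matrix.values())
total = sum(n for _, n in matrix.values())
hit_ratio = 100 * correct / total  # percent correctly classified
```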

Discriminant Analysis Applications in Marketing Research

While our example illustrated how discriminant analysis helped classify users and nonusers of the restaurant based on independent variables, other applications include the following:

Product research. Discriminant analysis can help to distinguish between heavy, medium, and light users of a product in terms of their consumption habits and lifestyles.

Image research. Discriminant analysis can discriminate between customers who exhibit favorable perceptions of a store or company and those who do not.

Advertising research. Discriminant analysis can assist in distinguishing how market segments differ in media consumption habits.

Direct marketing. Discriminant analysis can help in distinguishing characteristics of consumers who respond to direct marketing solicitations from those who don’t.

SPSS Application—Discriminant Analysis

The usefulness of discriminant analysis can be demonstrated with our Santa Fe Grill database. Remember that with discriminant analysis the single dependent variable is a nonmetric variable and the multiple independent variables are measured metrically. In the classification variables of the database, variables X30—Distance Driven, X31—Ad Recall, and X32—Gender are nonmetric variables. The screening variable of Favorite Mexican Restaurant is also a nonmetric variable. Variables X31 and X32 are two-group variables and X30 is a three-group variable. We could use discriminant analysis to see if there are differences between perceptions of the Santa Fe Grill by male and female customers or by ad recall, or we could see if the perceptions differ depending on how far customers drove to eat at the Santa Fe Grill.

The Santa Fe Grill owners want to know how its food and service compare to Jose’s. In looking at variables X12–X21, there are three variables associated with food: variables X15, X18, and X20, and one variable measuring speed of service (X21). The task is to determine if customer perceptions of the food and service are different between the two restaurants. Another way of stating this is “Can perceptions of food and service predict which restaurant a customer ate at?” This second question is based on the primary objective of discriminant analysis: to predict group membership. In this case, can the food and service perceptions predict restaurant customer groups?

The SPSS click-through sequence is ANALYZE → CLASSIFY → DISCRIMINANT, which leads to a dialog box where you select the variables (see Exhibit 17.20). The dependent, nonmetric variable is Favorite Mexican Restaurant (screening question 4) and the independent, metric variables are X15, X18, X20, and X21. The first task is to move the favorite Mexican restaurant variable to the Grouping Variable box at the top, and then click on the Define Range box just below it. You must tell the program what the minimum and maximum numbers are for the grouping variable. In this case the minimum is 0 = Jose’s and the maximum is 1 = Santa Fe Grill, so just put these numbers in and click on Continue. Next you must transfer the food and service perceptions variables into the Independents box (X15, X18, X20, and X21). Then click on the Statistics box at the bottom and check Means, Univariate ANOVAs, and Continue. The Method default is Enter, and we will use this. Now click on Classify and Compute from group sizes. We do not know if the sample sizes are equal, so we must check this option. You should also click Summary Table and then Continue. We do not use any options under Save, so click OK to run the program. Exhibit 17.20 shows the SPSS screen where you move the dependent and independent variables into their appropriate dialog boxes as well as the Statistics and Classification boxes.

The SPSS discriminant analysis program gives you a lot of output you will not use. We will look at only five tables from the SPSS output. Information from two tables is shown in Exhibit 17.21. The first important information to consider is in the Wilks’ Lambda table. The Wilks’ Lambda is a statistic that assesses whether the discriminant function is statistically significant. If it is significant, as it is in our case (Sig. = .000), then we next look at the Classification Results table. At the bottom we see that the overall ability of our discriminant function to predict group membership is 90.4 percent. This is good because without the discriminant function we could predict with only 62.5 percent accuracy (our sample sizes are Santa Fe Grill = 253 and Jose’s = 152, so if we placed all respondents in the Santa Fe Grill group, we would predict with 253/405 = 62.5% accuracy).
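The 62.5 percent benchmark described in the passage is the maximum chance criterion: simply assign everyone to the larger group. A quick check of that arithmetic:

```python
# Maximum chance criterion for the two-restaurant sample
santa_fe, joses = 253, 152
total = santa_fe + joses                      # 405 respondents
max_chance = 100 * max(santa_fe, joses) / total

hit_ratio = 90.4                              # reported for the function
improvement = hit_ratio - max_chance          # gain over naive assignment
```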

To find out which of the independent variables help us to predict group membership we look at the information in the two tables shown in Exhibit 17.22. Results shown in the table labeled Tests of Equality of Group Means show which food and service perceptions variables differ between the two restaurants on a univariate basis. Note that variables X15, X18, X20, and X21 are all highly statistically significant (look at the numbers in the Sig. column). Thus, on a univariate basis all four food and service perceptions variables differ significantly between the restaurant customer groups.

To consider the variables from a multivariate perspective (discriminant analysis), we look at the information in the Structure Matrix table. First we compare the sizes of the numbers in the Function column. The variables with the largest numbers are the best predictors. Food taste and food freshness help predict group membership the most, but speed of service is a moderately strong predictor, and even food temperature helps predict somewhat. These findings are similar to the univariate results, in which all four perceptions variables are statistically different between the two restaurants.

To further interpret the discriminant analysis we look at the group means in the Group Statistics table (Exhibit 17.23). For all four variables (X15, X18, X20, and X21) we see that customers had more favorable perceptions of Jose’s Southwestern Café than of the Santa Fe Grill (mean values for Jose’s are all higher). Thus, perceptions of food and service are significantly more favorable for Jose’s customers than for the Santa Fe Grill’s. This finding can definitely be used by the owners of the Santa Fe Grill to further develop their plan to improve restaurant operations.

SPSS Application—Combining Discriminant Analysis and Cluster Analysis

We can use discriminant analysis in combination with other multivariate techniques. Remember the cluster analysis example earlier in the chapter in which we identified customer loyalty groups using variables X23 and X24. Of the two clusters, Cluster One respondents were least loyal while Cluster Two respondents were most loyal. We can use the results of this cluster analysis solution as the dependent variable in a discriminant analysis. Now we must identify which of the database variables we might use as metric independent variables. We have used the restaurant perceptions variables (X12–X21) in an earlier example but we have not used the lifestyle variables (X1–X11). Let’s, therefore, see if we can find a relationship between the metric lifestyle variables and the nonmetric customer loyalty clusters.

There are eleven lifestyle variables that could be used as independent variables. Three of the variables are related to nutrition: X4–Avoid Fried Foods, X8–Eat Balanced Meals, and X10–Careful about What I Eat. If we use these three variables as independents, the objective will be to determine whether nutrition is related to customer loyalty. That is, can nutrition predict whether a customer is loyal or not?

The SPSS click-through sequence is ANALYZE → CLASSIFY → DISCRIMINANT, which leads to a dialog box where you select the variables. The dependent, nonmetric variable is clu2_1, and the independent, metric variables are X4, X8, and X10. First transfer variable clu2_1 to the Grouping Variable box at the top, and then click on the Define Range box just below it. Insert the minimum and maximum numbers for the grouping variable. In this case the minimum is 1 = Cluster One and the maximum is 2 = Cluster Two, so just put these numbers in and click on Continue. Next you must transfer the nutrition lifestyle variables into the Independents box (X4, X8, and X10). Then click on the Statistics box at the bottom and check Means, Univariate ANOVAs, and Continue. The Method default is Enter, and we will use this. Now click on Classify and Compute from group sizes. We do not know if the sample sizes are equal, so we must check this option. You should also click Summary Table and then Continue. We do not use any options under Save, so click OK to run the program.

Remember the SPSS discriminant analysis program gives you a lot of output you will not use. We again will look at only five tables. The first two tables to look at are shown in Exhibit 17.24. Note that the discriminant function is highly significant (Wilks’ Lambda Sig. = .000) and that the predictive accuracy is good (77.3% correctly classified). Recall that group 1 of our cluster analysis solution had relatively fewer customers than did group 2. The number of customers classified into each loyalty group is shown in the Classification Results section of the exhibit.

To find out which of the independent variables help us to best predict group membership we look at the information in two tables (shown in Exhibit 17.25). Results shown in the table labeled Tests of Equality of Group Means show which nutrition lifestyle variables differ on a univariate basis. Note that all three predictor variables are highly significant. To consider the variables from a multivariate perspective, use the information from the Structure Matrix table. The structure matrix numbers are all quite large and can therefore be considered to be helpful in predicting group membership. Like the univariate results, all of the variables help us to predict group membership. The strongest nutrition variable is X4 (.882), the second best predictor is X10 (.818), and the least predictive but still helpful is X8 (.622).

To interpret the meaning of the discriminant analysis results we examine the means of the nutrition variables shown in the Group Statistics table of Exhibit 17.26. Note that the means for all three nutrition variables in the Most Loyal group are lower than the means in the Least Loyal group. Moreover, based on the information provided in Exhibit 17.25 we know all of the nutrition variables are significantly different. Thus, customers in the Most Loyal group are significantly less “nutrition conscious” than those in the Least Loyal group.

Recall that Cluster One was not very loyal (mean = 3.5 on a 7-point scale) and Cluster Two (less nutrition conscious) was relatively loyal (based on a combination of variables X23 and X24). Thus, the results indicate the most loyal customers are less nutrition conscious. One interpretation of this finding might be that the owners of the Santa Fe Grill should consider putting some “Heart Healthy” entrees on their menu. But before doing that they need to look at loyalty as it relates only to the Santa Fe Grill. Up to this point the analysis has been with both restaurants combined.

Exhibit 17.18

Exhibit 17.19

Exhibit 17.20

Conjoint analysis is a multivariate technique that estimates the relative importance consumers place on different attributes of a product or service, as well as the utilities or value they attach to the various levels of each attribute. This dependence method assumes that consumers choose or form preferences for products by evaluating the overall utility or value of the product. This value is composed of the individual utilities of each product feature or attribute. Conjoint analysis tries to estimate the product attribute importance weights that would best match the consumer’s indicated product choice or preference.

For example, assume that our fast-food restaurant wants to determine the best combination of features to attract customers. A marketing researcher could develop a number of descriptions or restaurant profiles, each containing different combinations of features. Exhibit 17.27 shows two examples of what these profiles might look like. Consumers would then be surveyed, shown the different profiles, and asked to rank the descriptions in order of their likelihood of patronizing the restaurant. Note that with the conjoint analysis technique, the researcher has to do a lot more work than the survey respondent. The researcher must choose the attributes that are likely to affect consumer choice or preference, and must also pick the levels of each attribute to include in the survey. All that is required of the consumer is to rank order the profiles in terms of preference.

If each of the four attributes shown in Exhibit 17.27 had two levels or values (e.g., price level: inexpensive versus moderate), there would be 16 possible combinations for consumers to rank (2 × 2 × 2 × 2 = 16). Once those data were collected, applying conjoint analysis to the responses would produce a part-worth estimate for each level of each attribute.
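The profile count can be generated mechanically. The sketch below enumerates a full-factorial design with Python's itertools; the attribute names and levels are plausible stand-ins, not the exact wording of Exhibit 17.27:

```python
# Full-factorial conjoint profiles: every combination of attribute levels.
# Attribute names and levels are illustrative stand-ins for Exhibit 17.27.
from itertools import product

attributes = {
    "price level": ["inexpensive", "moderate"],
    "menu type":   ["burgers", "varied menu"],
    "service":     ["counter", "table service"],
    "atmosphere":  ["casual", "upscale"],
}

profiles = [dict(zip(attributes, levels))
            for levels in product(*attributes.values())]
# 2 levels raised to 4 attributes = 16 profiles for respondents to rank
```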

The statistical process underlying conjoint analysis uses the customer ranking of the profiles as a target. The process then assigns a part-worth estimate for each level of each attribute. The overall utility is estimated using the following formula:

U(X) = α11 + α12 + α21 + α22 + ... + αmn

where

U(X) = Total worth for the product

α11 = Part-worth estimate for level 1 of attribute 1

α12 = Part-worth estimate for level 2 of attribute 1

α21 = Part-worth estimate for level 1 of attribute 2

α22 = Part-worth estimate for level 2 of attribute 2

αmn = Part-worth estimate for level n of attribute m

Once the total worths of the product profiles have been estimated, the process compares them to the consumer’s actual choice rankings. If the predictions are not accurate, then the individual part-worth estimates are changed and the total worths recalculated. This process continues until the predictions are as close to the consumer’s actual rankings as possible. The ability of the estimated part-worth coefficients to accurately predict the consumer rankings can be determined through inspection of the model statistics, such as r². Just as in regression, a high r² indicates a good fit to the data (i.e., the model predictions closely match the consumer rankings).
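Once part-worths are in hand, computing U(X) for a profile is just the sum in the formula above. The part-worth values below are hypothetical, chosen only to illustrate the calculation:

```python
# U(X) = sum of the part-worths of the levels making up profile X.
# All part-worth values here are hypothetical, for illustration only.

part_worths = {
    ("price", "inexpensive"): 0.90,  ("price", "moderate"): 0.10,
    ("menu", "varied"): 0.60,        ("menu", "burgers"): 0.20,
    ("service", "table"): 0.30,      ("service", "counter"): 0.20,
    ("atmosphere", "upscale"): 0.25, ("atmosphere", "casual"): 0.20,
}

def utility(profile):
    # total worth of one profile
    return sum(part_worths[(attr, level)] for attr, level in profile.items())

best = utility({"price": "inexpensive", "menu": "varied",
                "service": "table", "atmosphere": "upscale"})
worst = utility({"price": "moderate", "menu": "burgers",
                 "service": "counter", "atmosphere": "casual"})
```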

Returning to our fast-food example above, Exhibit 17.28 shows graphs of the part-worth estimates for the various levels of the four attributes. The importance of each attribute across its different levels is indicated by the range of the part-worth estimates for that attribute, that is, by subtracting the minimum part-worth for the attribute from the maximum part-worth. Looking at the graphs, we see the “price” attribute is the most important because the difference between the highest and lowest plotted part-worths is the greatest. Similarly, menu type is second most important and the two lowest are atmosphere and service level.

Once the attribute importance estimate has been determined, the relative importance of each attribute can be calculated as a percentage of the total importance scores of all the attributes in the model. The formula for the attribute importance is:

Ii = Max(αij) − Min(αij), for each attribute i

And the formula for the relative attribute importance is:

Wi = Ii / (I1 + I2 + ... + Im)

where Wi is the relative importance of attribute i and m is the number of attributes in the model.

If we take the part-worth estimates shown in Exhibit 17.28 and calculate the importance of each attribute and its relative importance, we get the results shown in Exhibit 17.29. As you can see, the price level of the potential restaurant is the most important attribute to consumers in choosing a place to eat, followed by menu type.
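The importance and relative-importance calculations can be expressed in a few lines. The part-worths below are hypothetical stand-ins for the values plotted in Exhibit 17.28, not the book's actual numbers:

```python
# Importance I_i = max(part-worth) - min(part-worth) per attribute, then
# relative importance as each attribute's share of the total importance.
# The part-worth values are hypothetical stand-ins for Exhibit 17.28.

part_worths = {
    "price":      [0.10, 0.90],
    "menu type":  [0.20, 0.60],
    "service":    [0.20, 0.30],
    "atmosphere": [0.20, 0.25],
}

importance = {attr: max(pw) - min(pw) for attr, pw in part_worths.items()}
total = sum(importance.values())
relative = {attr: 100 * imp / total for attr, imp in importance.items()}
```

With these numbers, price dominates because its part-worth range is the widest, mirroring the reasoning used to read the exhibit's graphs.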

Once the importance weights of the attributes have been estimated, it is relatively easy to make predictions about the overall preference for particular combinations of product features. Comparisons of alternative products can then be made to determine the most feasible alternative to consider bringing to market.

The main advantages of conjoint analysis techniques are (1) the low demands placed on the consumer to provide data; (2) the ability to provide utility estimates for individual levels of each product attribute; and (3) the ability to estimate nonlinear relationships among attribute levels. The limitations placed on the researcher by conjoint analysis are (1) that the researcher is responsible for choosing the appropriate attributes and attribute levels that will realistically influence consumer preferences and choice; and (2) that consumers may have difficulty making choices or indicating preferences among large numbers of profiles. Therefore, the number of attributes and levels used cannot be too large.

Conjoint Analysis Applications in Marketing Research

The fast-food example in this discussion illustrates one possible use of conjoint analysis to identify the important attributes that influence consumer restaurant choice. There are other important applications of this technique in marketing research, however, such as the following:

Market share potential. Products with different combinations of features can be compared to determine the most popular designs.

Product image analysis. The relative contributions of each product attribute can be determined for use in marketing and advertising decisions.

Segmentation analysis. Groups of potential customers who place different levels of importance on the product features can be identified for use as high and low potential market segments.