This section presents a five-step procedural system for evaluating sales force performance (see Figure 16–1). The program is thorough, but it is also expensive and time-consuming.
Step 1. Establish Some Basic Policies
Preliminary to the actual evaluation, management should set some ground rules. One question that calls for a decision is: Who will participate in the evaluation? Several executives normally are involved. One of the most likely is the salesperson’s immediate superior—perhaps a field supervisor, a district manager, or a branch manager. The boss of the immediate supervisor also is likely to be involved. Over 25 percent of companies today use an employee assessment known as 360-degree feedback, which is especially effective in team environments. This technique involves getting evaluative feedback from an employee’s peers, subordinates, and clients, as well as superiors.
Certainly, the salesperson being evaluated should participate actively, usually with some form of self-evaluation. Involving salespeople in the development of their objectives creates a greater sense of responsibility and commitment on the part of the salespeople. In some firms, the manager and salesperson identify and negotiate specific goals for the upcoming period. Then the rep and manager sign a performance agreement that specifies these goals as the performance standards. This ensures that there will be no misunderstandings about what is expected. This process is often called management by objectives.
Another policy decision concerns the frequency of evaluation. Figure 16–2 presents results of a survey of a wide cross-section of sales organizations. As shown on the chart, 19 percent of firms do not even conduct a formal performance evaluation with their salespeople. Of those sales organizations that do evaluate their salespeople, most do it at least on a quarterly basis.4 Synygy Inc., a software company, makes a point to have dozens of performance reviews with its salespeople every year. According to Synygy’s vice president of sales, “If you don’t communicate frequently, they don’t know where they stand, or how well they’re doing on performance improvement.”5 Although the time and costs required to conduct more frequent evaluations must be balanced against the benefits, the improvements in performance generally outweigh the costs.
Step 2. Select Bases for Evaluation
One key to a successful evaluation program is to appraise a sales rep’s performance on as many different bases as possible. To do otherwise is to run the risk of being misled. Let’s assume that we are rating a sales rep, Ryan, on the basis of the ratio of selling expenses to sales volume. If this percentage is very low compared to the average for the entire sales force, Ryan probably will be commended. Yet Ryan actually may have achieved that low ratio by failing to prospect for new accounts or by otherwise covering the territory inadequately. Knowing the average number of daily calls Ryan made, even in relation to the average call rate for the entire sales force, does not help us very much. By measuring Ryan’s ratio of orders per call (batting average), we learn a little more, but we still can be misled. Each additional piece of information—sales volume, plus average order size, plus presentation quality, and so on—helps give a clearer picture of Ryan’s performance.
When selecting the bases on which to evaluate salespeople, it is important to remember that the evaluation serves two purposes. One is to recognize and reward people for a job well done; the other is to develop a clear understanding of the salesperson’s performance in order to help him or her improve. Salespeople are more likely to respond to and learn from the evaluation when they perceive it to be fair. Consequently, it is important for sales managers to clearly communicate the bases on which salespeople will be evaluated. Some even feel that salespeople should be involved in selecting the bases. Studies show that when salespeople buy into the evaluation process, their satisfaction with all aspects of the job tends to be higher.
Bases of evaluation fall into two general categories: output measures and input measures. Both types of measures should be used to get a complete picture of the salesperson’s performance.
Output measures relate to the salesperson’s results—sales volume, gross margin, number of orders, and so on. A list of some output factors ordinarily used as evaluation bases is shown in Figure 16–3. These measures are often used to make some meaningful comparisons. One rep may be compared to another, performance this year may be compared to performance for last year, performance may be compared to a goal or target, the rep’s share of the market may be compared to that of competitors, and so on.
Each of these measures can be further broken down by type of product, customer type, or channel of distribution, and similar comparisons can be made. Breaking the information down by various subcategories may provide some insights into the rep’s performance that otherwise would be overlooked. If the salesperson’s performance is below average, it may be that the problem can be isolated to one type of selling situation or to one category of product. If a manager can pinpoint the cause of a performance problem, it becomes much easier to find a solution to alleviate that problem.
All of the output bases are quantitative measures. To a large extent, the use of these quantitative measures minimizes the subjectivity and biases of the evaluator. Quantitative properties are also relatively easy to measure.
However, since they consider results only, these measures may not provide an equitable base on which to compare the performance of one salesperson to another.
Problem of Data Comparability
Ideally, a salesperson should be judged only on factors he or she can control. Management should identify the uncontrollable factors and take them into consideration when appraising an individual’s performance.
The sales potential in a territory, especially in relation to size and number of customers, is a good example of an uncontrollable factor. The greater sales potential in one territory versus another may make it easier for the rep in the first territory to reach his or her goals while the rep in the second territory struggles to meet the same goals. Differences in competitive activity or physical conditions among territories also must be considered when comparing performances. Usually, there are territorial variations in the amount of advertising, sales promotional support, or home-office technical service available to customers. These and several other factors make it difficult to compare performance data. This is one of the reasons for considering information on inputs or efforts as well as results.
Two types of input measures are used in the evaluation process. The quantitative measures focus on the salesperson’s efforts or activities. The number of calls a salesperson makes in a day and the number of e-mails sent to prospects are examples of quantitative input measures. Figure 16–4 lists the more commonly used factors. Tracking these factors is considered so important by Bell South Cellular that 25 percent of its sales managers’ quarterly bonus is based on how closely they monitor their reps’ activities.
The second group of input measures is the qualitative factors. These factors measure such things as the quality of the sales rep’s presentation, product knowledge, customer relations, and the salesperson’s attitude. Figure 16–5 lists other qualitative factors that are often used in the evaluation process.
Both the quantitative and the qualitative input factors are based on behaviors that are usually under the salesperson’s control. Therefore, they are less subject to criticisms concerning inequities among the reps. But the most important value in using these measures is that they are usually critical in locating trouble spots. Assume that a salesperson’s output performance (average order size, gross margin, and so on) is unsatisfactory. Very often the cause lies in certain behaviors over which the rep has control.
Research has demonstrated that an evaluation system that emphasizes behaviors more than outcomes has a number of positive effects on the salesperson’s overall performance. For example, the more behavior-based the evaluation system, the more the salesperson is willing to cooperate as part of the sales team and the more the salesperson is committed to the organization. With such a system, the salesperson places a greater emphasis on implementing adaptive strategies. However, it also has been shown that evaluation systems that measure both inputs and outputs lead to higher sales and profits.
Most of the quantitative measures discussed above can be combined to create ratio measures that can be used for evaluative and comparative purposes. Orders/calls, expenses/sales, and sales/orders are some of the more common ratios managers use to evaluate and compare the performance of salespeople.
A quantitative evaluation of a sales rep’s performance can involve the following equation:

Sales volume = days worked × calls per day (call rate) × orders per call (batting average) × sales per order (average order size)
If the sales volume for a representative is unsatisfactory, the basic cause must rest in one or more of these four factors. An analysis such as that done in Figure 16–7 on pages 473–474 can help focus the manager’s attention on the trouble spot so that additional detailed investigation can pinpoint the rep’s exact difficulties.
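The decomposition behind these four factors can be sketched in a few lines of code. The figures below are hypothetical, invented for illustration; the point is that days worked, call rate, batting average, and average order size multiply out to sales volume, so an unsatisfactory volume must trace back to at least one of them.

```python
# A minimal sketch of the four-factor identity, with made-up numbers
# (not the book's Figure 16-7 data).

def sales_volume(days_worked, calls_per_day, orders_per_call, sales_per_order):
    """Sales = days worked x call rate x batting average x average order size."""
    return days_worked * calls_per_day * orders_per_call * sales_per_order

# Hypothetical rep: 220 selling days, 4 calls/day, 1 order per 4 calls,
# $1,500 average order.
rep = {"days_worked": 220, "calls_per_day": 4.0,
       "orders_per_call": 0.25, "sales_per_order": 1500.0}

volume = sales_volume(**rep)
print(volume)  # 330000.0
```

If this rep's volume is below quota, the manager can check each factor against its standard in turn rather than guessing at the cause.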
Sources of Information
When choosing factors to use as bases for a performance evaluation, management should select only those for which data are available at a reasonable cost. The four main sources of information are company records, the sales reps themselves, field sales managers, and customers.
Company records are the main source for data on most of the quantitative output factors. By studying sales invoices, customers’ orders, and accounting records, management can discover much about a sales rep’s volume, gross margin, average order size, and so on. Most firms fail to make optimum use of their records for evaluation purposes. In the past, the information often was not recorded in usable form for a performance evaluation. Firms found it was too expensive and time-consuming to tabulate and present the data in usable form. However, most companies today use computers in collecting, analyzing, and reporting data in a form useful for evaluation.
Reports submitted by the sales force are an important source of information, particularly for performance input factors. The regular use of call reports, activity reports, and expense reports can provide the necessary data on the salespeople’s work. The Achilles’ heel in using sales reps’ reports for evaluation is that the information is only as good as the accuracy, completeness, and punctuality of the reps’ reporting efforts. This is often a serious problem.
As a rule, sales supervisors and other sales executives regularly travel with the sales reps in the field. The managers observe the reps during sales calls on customers. This allows executives to make a firsthand appraisal of a salesperson’s performance with customers.
Customers can be used as a source of evaluation information in one of two ways. The more common method is to gather information submitted by customers on a voluntary, informal basis. Unfortunately, this usually takes the form of complaints, because customers rarely report commendatory performance by sales reps. Increasingly companies are actively soliciting opinions from customers on a regular basis. Some companies ask their customers such questions as “How well does the salesperson analyze your needs?” and “How well does the salesperson build trust?” The customer is certainly in the best position to answer these kinds of questions. However, some firms don’t use customers as a source of data. They feel that customers often give excessively good reviews to protect the salespeople they like.
Step 3. Set Performance Standards
Setting standards is one of the most difficult phases of performance evaluation. The standards serve as a benchmark, or a par for the course, against which a sales rep’s performance can be measured. Also, standards let a salesperson know what is expected and serve as a guide in planning work. Standards must be equitable and reasonable; otherwise, salespeople may lose interest in their work and confidence in management, and morale may decline. If the standards are too high or too low, using them to evaluate performance will be worthless or even harmful.
Standards for many of the output (results) factors can be tied to company goals for territories, product lines, or customer groups. Such performance measures as sales volume, gross margin, or market share probably already have been set.
It is more difficult to set performance standards for the effort (input) factors. A careful time-and-duty analysis of sales jobs should give management some basis for determining satisfactory performance for daily call rates, displays arranged, and other factors. Another approach is to use executive judgment based on the personal observations of those who work with the salespeople in the field.
To measure the efficiency of a company’s selling effort, management must balance the output against the input. Consequently, a firm should develop standards for such output/input ratios as sales volume/calls, orders/calls, gross margin/order, and sales volume/expenses.
Once the standards have been set, it is critical that these standards be communicated to the salespeople. Even if the salespeople were involved in establishing the standards, they should be formally communicated to the rep. This ensures that there are no misunderstandings about the benchmarks against which the rep’s performance will be judged.
Step 4. Compare Performance with Standards
The accumulated information must be interpreted. This step involves comparing an individual’s performance—both efforts and results—with the predetermined standards.
Interpreting Quantitative Data
Some factors ordinarily used as bases for performance appraisal were shown in Figures 16–3 and 16–4. The following discussion shows how these factors can be used with the performance standards in step 3 to evaluate a rep’s performance.
Sales Volume and Market Share The first criterion most sales managers use to judge the relative performance of salespeople is their sales volume. Some executives believe that the rep who sells the most merchandise is the best salesperson, regardless of other considerations. Unfortunately, sales volume alone may be a poor indicator of a rep’s worth because it tells the firm nothing about the rep’s contribution to profit or customer relations.
Sales volume can be a useful indicator of performance, however, if it is analyzed in sufficient detail and with discretion. For evaluation purposes, a rep’s total volume may be studied by product line, by some form of customer grouping, or by order size. Even then, the volume figures are not very meaningful unless they can be related to some predetermined standard of acceptable performance, such as a volume quota for each product line or customer group.
Another important evaluation factor is the salesperson’s market share. Firms compute this figure by dividing the rep’s sales volume by the territorial market potential. Here again, the data are more useful if share of market can be determined for each product line or customer group.
Management must be cautious when comparing market-share performance of one person with another. Sales rep A may get 20 percent of the market in his district, while sales rep B captures only 10 percent of her market. Yet B may be doing a better job. Competition may be far more severe in B’s district. Or the company may be giving A considerably more advertising support.
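The computation itself is a single division; the caution above is about interpreting the quotients. A sketch with invented sales and potential figures:

```python
def market_share(rep_sales, territory_potential):
    # Share of market = rep's sales volume / territorial market potential
    return rep_sales / territory_potential

# Hypothetical territories: A's larger share may reflect a weaker
# competitive field or more advertising support, not better selling.
share_a = market_share(200_000, 1_000_000)
share_b = market_share(150_000, 1_500_000)
print(share_a, share_b)  # 0.2 0.1
```

In practice the same calculation would be repeated per product line or customer group, since a blended share can hide offsetting strengths and weaknesses.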
Gross Profit In most firms, a sales manager is (or should be) more concerned with the amount of gross profit the salespeople generate than with their dollar sales volume. Gross margin in dollars is a much better measure of a salesperson’s effectiveness because it gives some indication of the rep’s ability to sell high-margin items. Since the prime objective of most businesses is to earn a targeted return on investment, a person’s direct contribution to profit is a logical yardstick for evaluating performance.
Management can reflect its gross margin goals by setting volume quotas for each product line. In this way, the company can motivate the sales force to achieve a desirable balance of sales among the various lines. Then, even though the reps are later evaluated on the basis of sales volume, this evaluation will automatically include gross margin considerations.
As an evaluative yardstick, gross margin has some limitations, however. When management ignores selling expenses, there is no way of knowing how much it costs to generate gross margin. Thus, sales rep A may have a higher dollar gross margin than sales rep B. But A’s selling expenses may be proportionately so much higher than B’s that A actually shows a lower contribution margin. Furthermore, a salesperson does not fully control the product mix represented in his or her total sales volume. Territorial market potential and intensity of competition vary from one district to another, and these factors can influence the sales of the various product lines.
Number and Size of Orders Another performance measure combines the number of orders and the size of orders obtained by each sales rep. The average sale is computed by dividing a rep’s total number of orders into his or her total sales volume. This calculation may be made for each class of customer to determine how the rep’s average order varies among them. This analysis discloses which reps are getting too many small, unprofitable orders, even though their total volume appears satisfactory because of a few large orders. The analysis also may show that some reps find it difficult to obtain orders from certain classes of customers but make up for this deficiency by superior performance with their other accounts.
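The arithmetic is a simple division, but performing it per customer class is what exposes the pattern. A minimal sketch with hypothetical order counts and volumes:

```python
# Average order size per customer class (made-up data for illustration).
orders_by_class = {
    "wholesalers": {"orders": 40, "volume": 120_000.0},
    "retailers":   {"orders": 160, "volume": 80_000.0},
}

for cls, d in orders_by_class.items():
    avg_order = d["volume"] / d["orders"]  # total volume / number of orders
    print(cls, avg_order)
# Wholesalers average 3000.0 per order; retailers only 500.0. A rep whose
# total volume looks fine may still be living on many small, costly orders.
```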
Call Rate A key factor in sales performance is the call rate—the number of calls made per day. A salesperson ordinarily cannot sell merchandise without calling on customers; generally, the more calls, the more sales. Sales rep A makes three calls a day, but the company average is four for sales reps who work under reasonably comparable conditions. If management can raise A’s call rate to the company average of four, his sales should increase about 33 percent.
For evaluation purposes, a salesperson’s daily (or weekly) call rate can be measured against the company average or some other predetermined standard. Discretion must be exercised in interpreting a rep’s call rate, however. Call rates are influenced by the number of miles reps must travel and by the number of customers per square mile in the territory.
Usually, in a given business, a certain desired call rate yields the best results. If the rep falls below this rate, sales decline because the rep is not seeing enough prospects. If the rep calls on too many prospects, sales also may decline since he or she probably does not spend sufficient time with each one to get the job done.
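The 33 percent figure above is simple proportional scaling, and as the paragraph notes, it holds only up to the optimal call rate. A hedged sketch of the arithmetic:

```python
def projected_sales(current_sales, current_rate, target_rate):
    # Assumes sales scale proportionally with call rate -- a rough
    # approximation that breaks down past the desired call rate,
    # where extra calls shortchange each prospect.
    return current_sales * target_rate / current_rate

current = 300_000.0  # hypothetical annual volume for rep A
lift = projected_sales(current, 3, 4) / current - 1
print(round(lift * 100, 1))  # 33.3
```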
Batting Average A salesperson’s batting average is calculated by dividing the number of orders received by the number of calls made (O/C). The number of calls made is equivalent to times at bat; the number of orders written is equivalent to the hits made. As a performance index, the batting average discloses ability to locate and call on good prospects and ability to close a sale. A salesperson’s batting average should be computed for each class of customers called on. Often, a rep varies in ability to close a sale with different types of customers.
Analysis of the call rate in relation to the order rate can be quite meaningful. If the call rate is above average, but the number of orders is below normal, perhaps the rep does not spend enough time with each customer. Or suppose the call rate and batting average are both above standard, but the average order is small. Then a field supervisor may work with the salesperson to show the rep how to make fewer but more productive calls. The idea here is to raise the size of the average order by spending more time and talking about more products with each account.
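Reading the call rate, batting average, and average order size together can be sketched as a small diagnostic. The standards and messages here are invented for illustration; a real firm would substitute its own benchmarks from step 3.

```python
def diagnose(call_rate, batting_avg, avg_order,
             std_rate=4.0, std_avg=0.25, std_order=1000.0):
    """Flag the likely trouble spot. Standards are hypothetical defaults."""
    if call_rate >= std_rate and batting_avg < std_avg:
        return "high call rate, low closes: may be rushing calls"
    if call_rate >= std_rate and batting_avg >= std_avg and avg_order < std_order:
        return "closes well but orders are small: fewer, deeper calls"
    return "no obvious pattern in these three measures"

# Above-average calls but a weak batting average:
print(diagnose(call_rate=5.0, batting_avg=0.15, avg_order=1200.0))
```

The point is not the thresholds but the habit of cross-checking the ratios before drawing a conclusion from any one of them.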
Direct-Selling Expenses Direct-selling expense is the sum of travel expenses, other business expenses, and compensation (salary, commission, bonus) for each salesperson. These total expenses may be expressed as a percentage of sales. Also, the expense-to-sales ratios for the various salespeople can be compared. Or management can compute the cost per call for each salesperson by dividing total expenses by the number of calls made.
In a performance evaluation, these various cost indexes may indicate the relative efficiency of the salespeople in the field. However, management must interpret these ratios carefully and in detail. An expense-to-sales ratio, for instance, may be above average because the salesperson is (1) doing a poor job, (2) working in a marginal territory, (3) working in a new territory doing a lot of prospecting and building a solid base for the future, or (4) working a territory that covers far more square miles than the average district. A rep with a low batting average usually has a high cost per order. Similarly, the one who makes few calls per day has a high ratio of costs per call.
Routing Efficiency Dividing the miles traveled by the number of calls made gives the average miles per call. This figure either indicates the density of the sales rep’s territory or measures routing efficiency. If a group of salespeople all have approximately the same size and density of territories, then miles per call is a significant figure for indicating each one’s routing efficiency. Suppose five salespeople selling for an office machines firm in a metropolitan area vary considerably in the number of miles traveled per call. Then the sales manager may have reason to control the routing of those who are out of line.
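The cost and routing indexes in these last two subsections are all simple quotients over the same activity data. A sketch with hypothetical per-rep figures:

```python
# Hypothetical activity data for one rep over a period.
rep = {"expenses": 45_000.0, "sales": 600_000.0,
       "calls": 800, "orders": 200, "miles": 24_000}

expense_to_sales = rep["expenses"] / rep["sales"]   # 0.075, i.e., 7.5%
cost_per_call    = rep["expenses"] / rep["calls"]   # 56.25
cost_per_order   = rep["expenses"] / rep["orders"]  # 225.0
miles_per_call   = rep["miles"]    / rep["calls"]   # 30.0

print(expense_to_sales, cost_per_call, cost_per_order, miles_per_call)
```

As the text cautions, an out-of-line ratio identifies a rep worth investigating, not a verdict: a new territory, a sparse one, or heavy prospecting can all push these quotients above average.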
Evaluating Qualitative Factors
When the evaluation is based on qualitative factors, the personal, subjective element comes into full play. This can be good or bad. With qualitative measures, sales managers can give considerable weight to factors not easy to capture quantitatively. These factors include, for example, civic virtue, sportsmanship, and other citizenship behaviors that are not part of the formal evaluation process.12 Because these are important behaviors for the sales force, the sales manager is not wrong to take them into account. So in certain cases, some might argue that qualitative measures of performance are better at capturing all aspects of a salesperson’s performance.
In other situations, subjective evaluations are less accurate than quantitative assessments. Problems can stem from either the manager’s personal bias or the type of evaluation form used. There is an almost limitless variety of evaluation forms. Often each manager develops whatever form seems appropriate for the situation. Most such subjective forms suffer from three major defects.
First is the halo effect. Evaluators may be biased by a generalized overall impression or image of the person they are evaluating. If the manager does not like the way a rep dresses, for instance, that attitude may bias all aspects of the manager’s evaluation. Similarly, the manager who is impressed with a person’s sales ability also is likely to rate other aspects of the person’s performance highly.
Second, some rating forms generally overvalue inconsequential factors and undervalue truly important ones. The sales manager should be interested in the salesperson’s ability to make money for the firm, not whether the individual is socially adept or impressively dressed. In evaluations, it is essential for the manager to keep in mind what is important and what is not. Often, when a former employee files a legal case involving discrimination in hiring, firing, and promotion, the key point is that the manager based evaluations on unimportant factors.
Third, most subjective evaluation forms force the evaluator to make judgments on some factors without a valid basis for doing so. Lacking valid information on the factor, the evaluator allows the halo effect to take over.
In addition, firms face two even more serious problems. First, many raters refuse to give poor ratings to reps who deserve them because of fear of reprisal. As one executive put it, “Who knows what the future holds? The person I downgrade today may be my boss tomorrow.” Such managers fail to see any personal advantage in giving accurate ratings. Yet, in any good management evaluation program, a manager’s ability and willingness to accurately appraise people is a key factor in that executive’s rise in management. A second serious problem is that some people just don’t get along. In these cases, evaluators have difficulty being fair.
Management writers have extolled the virtues of behaviorally anchored rating scales (BARS) as superior instruments for subjectively evaluating people. A BARS instrument contains detailed descriptions of the subject’s behavior to guide the evaluator’s numerical rating of that person. A sample of one question is shown in Figure 16–6. It is important to remember, however, that no amount of instrument sophistication can overcome the basic weaknesses inherent in subjective rating systems.
Step 5. Discuss the Evaluation with the Salesperson
Once the salesperson’s performance has been evaluated, the results should be reviewed in a conference with the sales manager. This discussion should be viewed as a counseling interview, in which the manager explains the person’s achievements on each evaluation factor and points out how the results compared with the standards. Then the manager and the salesperson together may try to determine the reasons for the performance variations above or below the standards. It is essential to discuss the manager’s ratings on the qualitative factors and to compare them with the salesperson’s self-evaluation on these points. On the basis of their review of all evaluation factors, the manager and the salesperson can then establish goals and an operating plan for the coming period.
The performance-evaluation interview can be a very sensitive occasion. It is not easy to point out a person’s shortcomings face to face. People dislike being criticized and may become quite defensive in this situation. Some sales executives resist evaluation interviews because they feel these discussions can only injure morale. They reason, “Why stir up trouble when you are basically happy with the person’s performance?” The concern is real and valid. On the other extreme, some sales managers make a point of ranking all salespeople from best to worst. (See the box titled “Ranking Salespeople from First to Last.”)
Unperceptive managers often lose sight of subordinates’ virtues and strengths and criticize unimportant factors. One key factor in management is learning to use people’s virtues to the best advantage while not allowing their weaknesses to hurt the firm.
A Sales Rep Objects to Her Evaluation
In December each year, Dasher’s sales manager Margaret Sprunger compiled information from the firm’s sales analysis files on each sales rep, added to it qualitative or subjective information she had about the person’s performance, and wrote a letter to each rep summarizing how well the rep had done during the year and what that rep should endeavor to do during the coming year.
It was not one of Sprunger’s favorite jobs, but the company’s top management was committed to formal evaluation programs. She knew there would be repercussions from her letter to Gail Zurcher, a Los Angeles area sales rep. Neither Zurcher’s numbers nor Sprunger’s observation of her performance could prompt much praise. In fact, Sprunger wanted to replace this rep but was being constrained because of company policies.
About 32 seconds after opening Sprunger’s evaluation letter, Gail Zurcher phoned her angrily.
“This is a bunch of garbage you cooked up to justify getting rid of me. If you want to fire me, then do it, but don’t insult my intelligence by expecting me to buy this rubbish!” Zurcher challenged.
“I rather expected that you would be coming in to see me. Let’s sit down and go over the items you disagree with one by one. We do have excellent records and statistics on what you have done and sold in comparison with the other sales reps,” Sprunger said calmly.
“I’m not talking numbers. I know the numbers stink and I’m not happy about them either. I am talking about comparing noncomparables. It is patently unfair to compare me with the other sales reps. Being the new kid on the block, I was handed a bad territory. Why was it open? The guy in it before me told me why he quit, and I am fighting the same lack of potential and competitive conditions,” Zurcher said.
Sprunger was well aware that one particular competitor was extremely aggressive because its plant was located there. Stalling for time, she asked, “What do you want me to do?”
“I want some understanding of my situation and consideration in my treatment. This evaluation in my file is the kiss of death for any future here. It says I stink. It says I can’t plan my work or penetrate the market. It says I can’t sell. And that isn’t so!” Zurcher fumed.
Questions: What should Margaret Sprunger say and/or do in response to Gail Zurcher’s request? What changes in the evaluation procedure might help?
Ranking Salespeople from First to Last
Some of the largest and most respected sales organizations in the United States, including Ford, General Electric, and Microsoft, are experimenting with ranking their salespeople. As part of the evaluation process at these companies, sales managers are required to rank order their salespeople—from the best to the worst.
At GE, for example, sales managers must identify the top 20 percent of salespeople, as well as the bottom 10 percent. The top group gets significant bonuses, but those at the bottom typically are fired.
Sales professionals are split on whether or not ranking salespeople is appropriate. Some say this is a heartless, cold-blooded management tactic that destroys teamwork. Others feel that it is a strong motivator. In fact, some even argue that weeding out the poor performers is an act of kindness. “Not removing the bottom 10 percent early in their years is not only a management failure, but false kindness as well—a form of cruelty,” says a top GE executive in a letter to shareholders.
Question: As a sales manager, would you rank your salespeople? Why or why not?
Output Factors Used as Evaluation Bases
• Sales volume
  In dollars and in units
  By products and customers (or customer groups)
  By mail, telephone, and personal sales calls
• Sales volume as a percentage of
  Market potential (i.e., market share)
• Gross margin by product line, customer group, and order size
• Number of orders
• Average size (dollar volume) of order
• Batting average (orders/calls)
• Number of canceled orders
• Percentage of accounts sold
• Number of new accounts
• Number of lost accounts
• Number of accounts with overdue payment
Quantitative Input Factors Used as Evaluation Bases
• Calls per day (call rate)
• Days worked
• Selling time versus nonselling time
• Direct selling expense
  As percentage of sales volume
  As percentage of quota
• Nonselling activities
  Advertising displays set up
  E-mails/letters written to prospects
  Telephone calls made to prospects
  Number of meetings held with dealers and/or distributors
  Number of service calls made
  Number of customer complaints received
Qualitative Input Factors Used as Evaluation Bases
• Personal efforts of the sales reps
  Management of their time
  Planning and preparation for calls
  Quality of sales presentations
  Ability to handle objections and to close sales
  Inputting information into firm’s SFA/CRM system
• Knowledge of
  Company and company policies
  Competitors’ products and strategies
• Customer relations
• Personal appearance and health
• Personality and attitudinal factors
  Acceptance of responsibility
  Ability to analyze logically and make decisions