Advisor Perspectives welcomes guest contributions. The views presented here do not necessarily represent those of Advisor Perspectives.
Without a doubt, 2008 exposed major weaknesses in the financial services industry. Spectacular abuses like the Bernard Madoff fraud captured the public’s attention, but the loss of 25% of all retirement and other assets was noteworthy too. Given the current legislative environment, extensive regulatory changes are all but certain. No one knows what these new standards will be, but now is an excellent time for advisors and analysts to review their current processes to make sure they are generating the best results for their investors.
Consider, for example, mutual fund evaluation and monitoring. Over the past two decades, screening has been at the core of most mutual fund evaluation processes. The advisor picks the criteria, sets a minimum or maximum level for each, and comes up with a list of funds that survive all screens. This process has several inherent flaws (a short code sketch following the list below makes them concrete):
- If a fund fails any one criterion by even a small amount, it is no longer considered acceptable. It may pass all other screens by the widest of margins, but as long as it fails just one, it is eliminated from consideration.
- For the funds that do pass all of the criteria, there is no degree of passing. The fund that passes all screens by a wide margin looks no different from the one that barely squeaked by. This simple pass/fail grading system provides too little guidance.
- All criteria carry equal importance. Not only is that counterintuitive, but research shows it to be inappropriate. Studies suggest, for example, that expenses are a major determinant of future performance, so shouldn’t low expenses carry more weight in the evaluation? Why should an advisor assume all factors are equally predictive? More on this later.
- Once the acceptable funds are identified, the advisor has a long list with no particular order of best to worst. There is no way of ranking them because at no point in the process were they compared to one another.
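To make these flaws concrete, here is a minimal sketch of a basic screening process in Python. The funds, criteria, and cutoff values are hypothetical, invented purely for illustration:

```python
# Minimal sketch of a basic screening process.
# The funds, criteria, and cutoff values below are hypothetical.
funds = {
    "Fund A": {"expense_ratio": 0.45, "manager_tenure": 12, "five_yr_return": 6.1},
    "Fund B": {"expense_ratio": 0.76, "manager_tenure": 15, "five_yr_return": 9.8},
}

# Each screen is a hard pass/fail cutoff; margin of passing and relative
# importance are ignored.
screens = {
    "expense_ratio":  lambda v: v <= 0.75,  # maximum expense ratio (%)
    "manager_tenure": lambda v: v >= 5,     # minimum manager tenure (years)
    "five_yr_return": lambda v: v >= 5.0,   # minimum 5-year return (%)
}

survivors = [name for name, stats in funds.items()
             if all(test(stats[crit]) for crit, test in screens.items())]
print(survivors)  # ['Fund A']
```

Fund B misses the expense cutoff by a hundredth of a percentage point and disappears from the list, no matter how strong its other numbers are; that is the first flaw above in miniature.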
To get around this last issue, some advisors have created “scoring” processes. These are actually derivatives of basic screening that simply assign a value based on the number of criteria a fund passes. Funds that pass eighteen of twenty screens score higher than those that pass seventeen or fewer. While the final list of funds can be sorted by the number of criteria passed, there is still no differentiation between attributes of different importance and no way of knowing which funds barely passed a given test. Even if the factors are weighted for importance, failing to measure the magnitude of passing renders the results much less useful than a multi-factor scoring model that includes both.
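A count-based “score” is only a small improvement. Continuing the same hypothetical example (a sketch, not any firm’s actual methodology), the score is simply the number of screens passed:

```python
# Count-based "scoring": the score is the number of screens passed.
# Same hypothetical funds and cutoffs as in the previous sketch.
funds = {
    "Fund A": {"expense_ratio": 0.45, "manager_tenure": 12, "five_yr_return": 6.1},
    "Fund B": {"expense_ratio": 0.76, "manager_tenure": 15, "five_yr_return": 9.8},
}
screens = {
    "expense_ratio":  lambda v: v <= 0.75,
    "manager_tenure": lambda v: v >= 5,
    "five_yr_return": lambda v: v >= 5.0,
}

scores = {name: sum(test(stats[crit]) for crit, test in screens.items())
          for name, stats in funds.items()}
print(scores)  # {'Fund A': 3, 'Fund B': 2} -- no weights, no margin of passing
```

The funds can now be sorted, but every criterion still counts equally, and a pass by a hair counts the same as a pass by a mile.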
To see why a different approach would be more effective, let’s step away from mutual funds and put ourselves in the shoes of a hypothetical hiring manager. Our task is to fill one position in our engineering department and one in our marketing department. Five recent college graduates have applied, but they have no experience – the only information we have about them is their grades in five relevant classes, shown in the first columns of Table 1. Intuitively we know that the successful engineering candidate should be more analytical while the successful marketing candidate needs strong communication skills.
One approach might be to analyze their grades with a simple screening model. Those results appear in the “Total Passed” column of Table 1. If we require that successful candidates for these positions must have passed all five classes, then only candidates 1 and 5 survive. But which is better suited for each job? Our screening technique can’t differentiate them at all. As with the survivors of a basic screening model for mutual funds, all we can say is that each is acceptable, but we know no more.
TABLE 1: GRADES AND SCORES

| Candidate | Math | Science | History | English | Writing | Total Passed | Total “Score” | Engineering Score | Marketing Score |
|-----------|------|---------|---------|---------|---------|--------------|---------------|-------------------|-----------------|
| 1 | A | D | D | D | B | 5 | 10 | 2.25 | 2.00 |
| 2 | A | B | C | F | D | 4 | 10 | 2.75 | 1.25 |
| 3 | B | F | B | D | B | 4 | 10 | 1.75 | 2.00 |
| 4 | F | C | C | B | B | 4 | 10 | 1.50 | 2.50 |
| 5 | D | D | D | A | B | 5 | 10 | 1.50 | 2.75 |

(Math through Writing are the grades; “Total Passed” and “Total ‘Score’” are the screen results; the last two columns are the weighted factor model scores.)
Perhaps “scoring” these results with a measure of magnitude will help. Using the “4.0 system,” where A=4, B=3, C=2, D=1, and F=0, the candidates receive the scores in Table 1’s “Total Score” column. Unfortunately, this didn’t help: all five candidates ended up with the same score. Adding this “scoring” element not only failed to help; one might argue we were better off before, when we had at least narrowed the field to two candidates.
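The identical totals are easy to verify. A short sketch using the grades from Table 1:

```python
# Verify the "Total Score" column of Table 1 under the 4.0 system.
points = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

# Grades per candidate, in column order: Math, Science, History, English, Writing.
grades = {1: "ADDDB", 2: "ABCFD", 3: "BFBDB", 4: "FCCBB", 5: "DDDAB"}

totals = {c: sum(points[g] for g in row) for c, row in grades.items()}
print(totals)  # every candidate totals 10; unweighted scoring cannot separate them
```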
But what if we consider adding one final factor: The importance of each of these classes for each job? We could do this based on the percentage of time the successful candidate will spend using each class skill:
TABLE 2: PERCENTAGE OF TIME USING EACH SKILL

| Job | Math | Science | History | English | Writing | Total |
|-----|------|---------|---------|---------|---------|-------|
| Engineering | 35% | 35% | 10% | 10% | 10% | 100% |
| Marketing | 10% | 10% | 10% | 35% | 35% | 100% |
By multiplying the numerical grades (A=4 through F=0) by the importance of each class for each job, then summing the results, we can calculate an overall weighted score specific to the job. The results are shown in the last two columns of Table 1. Now we have clear winners for each position: Candidate 2 is our engineer and Candidate 5 is our marketer.
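For readers who want to check the arithmetic, here is a short sketch that derives the final two columns of Table 1 from the grades and the Table 2 weights:

```python
# Reproduce Table 1's weighted factor scores from the grades and Table 2 weights.
points = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

# Grades per candidate, in column order: Math, Science, History, English, Writing.
grades = {1: "ADDDB", 2: "ABCFD", 3: "BFBDB", 4: "FCCBB", 5: "DDDAB"}

# Percentage of time using each skill (Table 2), in the same column order.
weights = {"Engineering": [0.35, 0.35, 0.10, 0.10, 0.10],
           "Marketing":   [0.10, 0.10, 0.10, 0.35, 0.35]}

for job, w in weights.items():
    scored = {c: round(sum(wt * points[g] for wt, g in zip(w, row)), 2)
              for c, row in grades.items()}
    print(job, scored, "-> winner: Candidate", max(scored, key=scored.get))
# Engineering winner: Candidate 2 (2.75); Marketing winner: Candidate 5 (2.75)
```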
Also notice that Candidate 2, who was initially eliminated for failing English (hardly an essential requirement for an engineer), emerges as the best candidate for that position. His scores in math and science far outweigh his failing grade in English. On the other hand, Candidate 1 passed all of the courses, yet her low grade in science, a skill critical to the engineering job, is not offset by her B in writing.
As this example illustrates, only by combining the degree of passing and the relative importance of each criterion can a true multi-factor scoring model be built. This is an easy case, with only five classes and five students. For mutual funds, however, there may be thousands of candidates evaluated on five, ten, or even more criteria simultaneously. At that scale, screening is even more prone to elimination errors, subjective results, and false positives.
Consider the top ten large cap growth funds as scored by the weighted factor model underlying the AlphaCycle Klein Large Growth Index.[i] The most heavily weighted factor in this model is the expense ratio, but three other factors are considered as well. Because this is a true weighted scoring model, all factors are considered together; the scores and rankings for the top ten funds in this category[ii] appear in the final two columns of Table 3. These results are equivalent to the weighted grades in the hiring example.
TABLE 3: TOP LARGE CAP GROWTH FUNDS

| Fund | Expense Ratio (%) | Screen Pass/Fail | Weighted Factor Score | Weighted Factor Rank |
|------|-------------------|------------------|-----------------------|----------------------|
| Elfun Trusts ELFNX | 0.21 | P | 73 | 1 |
| ING Evergreen Omega S IEOSX | 0.84 | F | 73 | 2 |
| Parnassus Workplace PARWX | 1.20 | F | 71 | 3 |
| Vanguard PRIMECAP Core VPCCX | 0.50 | P | 70 | 4 |
| VALIC Company I Large Cap Core VLCCX | 0.85 | F | 68 | 5 |
| Vanguard U.S. Growth VWUSX | 0.43 | P | 68 | 6 |
| Janus Twenty JAVLX | 0.84 | F | 68 | 7 |
| T. Rowe Price New America Growth PRWAX | 0.91 | F | 68 | 8 |
| Fidelity Blue Chip Growth FBGRX | 0.57 | P | 68 | 9 |
| Principal Large Cap Growth I Inst PLGIX | 0.73 | P | 67 | 10 |
[i] Klein Decisions uses a weighted factor model with four factors to select the funds that make up the AlphaCycle Klein Large Growth Index, which is listed under ticker AIRALCN on the American Stock Exchange. For more details on this or the other eight Klein Decisions AlphaCycle funds, please visit www.kleindecisions.com or www.activeindexsolutions.com.
[ii] Based on Morningstar’s Large Cap Growth Category and data as of May 31, 2009.
An advisor limiting a fund’s expense ratio to 0.75 percent or less (a common cutoff for this category) would eliminate half of the top ten funds, regardless of how they scored on other factors. Yet those funds must have scored well enough on the other factors to overcome their higher expense ratios; that is why they finish at the top of the weighted factor model rankings. Simple screening and “scoring” processes eliminate such funds all the time, and the advisors and clients who rely on those processes are missing some of the best funds available.
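The effect is easy to check against Table 3. A quick sketch applying the 0.75 percent cutoff to the ten highest-scoring funds:

```python
# Apply a 0.75% expense-ratio screen to the top ten funds from Table 3.
# Tuples are (ticker, expense ratio, weighted factor score), as listed above.
top_funds = [
    ("ELFNX", 0.21, 73), ("IEOSX", 0.84, 73), ("PARWX", 1.20, 71),
    ("VPCCX", 0.50, 70), ("VLCCX", 0.85, 68), ("VWUSX", 0.43, 68),
    ("JAVLX", 0.84, 68), ("PRWAX", 0.91, 68), ("FBGRX", 0.57, 68),
    ("PLGIX", 0.73, 67),
]

eliminated = [ticker for ticker, ratio, score in top_funds if ratio > 0.75]
print(eliminated)  # 5 of the 10 highest-scoring funds fail the expense screen
```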
Of course, a dedicated screener could always raise the expense ratio cutoff to 1 percent or even higher so that all of the top ten funds pass, but that amounts to deciding which funds you want and then changing your “selection” process to accommodate them. This is not a good approach in any environment, much less a more regulated one.
In today’s world, with investors increasingly sensitive to the quality of the financial advice they receive and the growing likelihood that all advisors will be held to a fiduciary standard, the old methods no longer suffice. In other times, it might have been acceptable to say your process has been in place for decades, but not anymore. When you look at how a mutual fund is managed, you expect the manager to constantly reassess the process used to evaluate assets and sectors. Would you be confident recommending a fund manager who simply stuck to a time-worn process regardless of its results? Of course not, and your clients shouldn’t expect any less of you. Regulatory changes will soon force us to address these issues, but now is the time for proactive advisors to initiate changes that will put them in the driver’s seat.