Friday, January 21, 2011

Shifting the Business School Hierarchy:
The Real Rankings behind the Financial Times Figures
by Philippe RUIZ, PhD and Benjamin GOURMEL, PhDc


Summary: This article aims to explain the flaws and limits of current business school rankings. After an overview of their positive effects, we emphasize the drawbacks and the negative influence on business school stakeholders before focusing on the methodological issues. We then present a new ranking methodology that complies with sound statistical principles. The results are enlightening.

In the past ten to fifteen years, rankings have taken a dominant position in the evaluation of business schools. They are used as comparison tools by many stakeholders, primarily students in their application phase and professionals considering hiring new recruits. Yet these are not the only categories influenced by this annual exercise performed by publishers. School deans and faculty are also largely under the spell of this classification practice, though they hate to admit it. Why would they? Well, first of all it brings competition to a new level, comparing their performance to that of their rivals – this is not always happily accepted. In addition, there are not only defects in the current ranking methodologies but also adverse consequences of using rankings to rate schools. Such drawbacks have been appropriately emphasized by several research publications. The purpose of this paper is to discuss current rankings and propose a new methodology providing a higher level of accountability, thus a “new ranking” that satisfies strict statistical requirements. At this stage, the new ranking focuses on mathematical and statistical rigor; the criteria themselves will be discussed in a subsequent paper. At present, we use the same criteria as the Financial Times ranking, as they represent the current standard for international higher education measurement.

1)      Ranking history/construction

Business schools appeared about one hundred years ago, with the teaching of management principles as their main objective. In 1959, a report stressed the necessity of adding research to the scope of these institutions. This was the first major shift in business schools’ area of focus. The appearance of business school rankings in the 1980s once more shook the preoccupations of business school senior executives and faculty, introducing elements of measurement that had not previously been applied to these institutions and shifting the balance of attention that the different programs receive. The consequences of this transformation have led to animated debates among all stakeholders.
The first recorded attempts at setting up rankings appeared in 1977, with the Carter Report, the Ladd & Lipset survey, and the MBA Magazine survey. These publications focused on insider perspectives (research outputs, faculty feedback on other schools’ quality, and deans’ votes on business schools) and did not enjoy wide circulation, as they were very specific to faculty and deans’ centers of interest. They focused their attention on MBA programs, which they believed were the most representative teaching programs and key elements of professional training for the benefit of corporate customers.
Rankings of business schools as they exist today appeared in the late 1980s, with Business Week’s MBA program ranking (1988) paving the way for other business-related publishers. The issues displaying this type of information have met with great success among readers, especially students and recruiting professionals, making rankings of business schools a recognised indicator of quality.
Business Week’s survey, addressing a larger audience than the early ranking attempts and shifting the focus from quality of teaching to salaries, put an emphasis on return on investment, thus setting the trend for all subsequent newcomers. The ranking measured student, recruiter and intellectual capital scores (respectively 45%, 45% and 10% of the total). The wide circulation of this issue drove other publishers to follow: US News and World Report (1990) stepped in with a slightly different methodology, basing its ranking on reputation (40%), placement success (35%) and student selectivity (25%). These two rankings being very US-centric, competitors appeared in other parts of the world, the most influential being the Financial Times ranking of international MBAs, launched in 1997. International diversity (20%) and research performance (25%) carried much more weight in this new methodology, even though salary progression (40%) and student selectivity (15%) still represented the largest part of the ranking outcome. By 2001, a host of new players had stepped into the ranking arena, including the Wall Street Journal, Forbes magazine, the Economist, the Aspen Institute, America Economica, Asia Inc., Handelsblatt, and a full range of other publishers, international or local, each with its own criteria and weightings to measure the quality of business schools.
Gradually, the scope of the surveys has grown to encompass more programs than MBAs alone, thus partly answering the early critics. Only partly, however: the rankings still measure one or several dimensions with an unsatisfactory statistical methodology, even though it is undeniable that rankings have a role to play in the arena.

2)      Rankings and the publishing business

Let us turn to the construction of rankings and the behaviour of publishing companies. Somewhat to their surprise, the first journals publishing rankings realised that these issues were bestsellers. Given that selling is their primary objective, one will agree that they are likely to publish rankings of business schools on a regular basis. Yet the rankings do not move much from one year to the next, raising the threat of boring readers after a few years. This is (understandably) a strong incentive for publishers to modify the ranking methodology in order to introduce novelty and keep their sales up. By doing so, they push schools to modify their strategy in response, as schools need to adapt to play this ever-changing ranking game. In order to create momentum, institutions wishing to go up in the hierarchy must introduce short-term modifications that will make them appear dynamic. As a result, these schools end up performing mostly cosmetic changes. This is a dangerous game to play, as schools learn that superficial changes can be equated with real improvement, at a much lower cost than deep and necessary modifications. Of course this is not a systematic reaction, but it shows how rankings can push business school leaders to extremes. And it happens more and more often, as very few schools can feel genuinely independent of and unaffected by ranking outcomes.
If the trend of focusing primarily on rankings continues, we run the risk of seeing teaching quality, and overall school quality, drop sharply (see Corley and Gioia1). This threat does not necessarily affect leading business schools, as their established reputation secures a flow of quality students and income that allows them to make independent strategic decisions. Lower-ranking schools, however, have a very strong need to appear as highly ranked as possible, as this will influence their recruitment possibilities and thus their income. This dependency on rankings can lead to decisions and priorities that are harmful to the school in the long run, as we will see later in this article.
Apart from the effect of rankings on school executives’ decision making, another issue is at stake when considering the rankings: the measured criteria and their weightings. The methodology used by publishers has met with criticism pointing to the criteria themselves, as they are added by publishers into a single measure of overall business school quality, when in fact such a measure may not really exist.
The core of the issue remains the weighting of the criteria and their aggregation into one unique dimension. It seems logical that one cannot add cars and bulldozers, as they do not belong to the same category. Yet in some ways this is what publishers do. There are ways to determine what can and cannot be added when building a statistical construct such as a ranking. Until today, however, the journals that publish rankings have failed to do this, or have done it in an unsatisfactory way.
We will presently review the strengths and weaknesses of current ranking methodologies, before we present our revised ranking methodology, and explain why it represents an improvement compared to the existing ones. Finally, we will discuss the enlightening findings obtained.

3)      What we are trying to measure and why

Who reads the business school rankings? The people interested in these issues can be divided into three main categories: a) students who need an indicator of the quality of the schools and want to know whether they will obtain an appropriate outcome if they invest their time and money in a particular institution, b) recruiters who want to hire students who perform fast and well after joining their company, and c) deans and faculty of schools, whose recruitment and pricing policies are largely dependent on their ranking scores. What improvements has the emergence of this practice triggered?
It is generally admitted that competition fosters improvement, pushing schools to innovate and develop their programs in order to gain or maintain a competitive advantage in the field of education. This has improved the relevance of programs to the contemporary work environment, as it has forced institutions to listen to the growing protests of professionals in the industry that the teachings in MBA and similar programs were out of date1. At the end of the 1970s and the beginning of the 1980s, business organisation, management and standards changed dramatically, with the emergence of Japanese-style management (Toyotism, just-in-time inventory…) and office computing. Most traditional business schools had failed to integrate those advances in a timely manner (even if a few leading institutions had integrated these changes in their teachings). A simple but powerful incentive brought about this improvement: rankings pushed business schools to compete openly on similar programs, giving them access to more information about other programs and therefore enabling them to see each other’s strengths and weaknesses. This triggered a program rework process which, thanks to progress in information technologies, increased the speed of schools’ reaction to market changes to a level more in tune with modern business cycles. Another consequence is that schools were forced to look into aspects of business they had overlooked (the value of teamwork, or the emergence of e-business and IT working tools), and to focus on change management. All of these reforms have helped business schools reshape their teachings to adapt to a modern business model, largely different from the one still existing when the first modern rankings appeared in 1988, as demonstrated by Gioia and Corley1.
It is important to note that business schools had not experienced much real competition until the emergence of rankings, as odd as it may seem. Thanks to the appearance of rankings, schools are now more reactive. The pace of changes undertaken to stay in tune with their environment and that of their stakeholders (the corporations hiring their students) has increased, in step with a society where information is transmitted ever faster. This positive effect is also largely related to the transparency that has come along with publications judging schools’ performance. To differentiate themselves from others, institutions have made strategic decisions to focus on particular domains and program priorities and are maintaining continuous efforts to stay on track.
The next point is that the rankings provide useful information for external stakeholders by offering a frame of comparison and a good approximation of the relative position of the different institutions. Even though the absolute rankings of the schools can be discussed, it is clear that schools ranked in the top 10 or 15 deliver better education services (teaching and opportunities) than their lower-tier counterparts. Furthermore, in our globalised world, it is often difficult to find reliable information on a school situated in a faraway country. For example, how can a student in Northern India learn about the teaching and faculty quality of a school based in Germany or France? It is not possible for him to visit the premises and form his own opinion. Information found on the web is not always reliable, and often gives a distorted image of reality (usually for promotional purposes). The image projected by the organisation is the only criterion a potential student (or even recruiter) can base his judgement on. That is why schools are so careful to select the information elements that best suit their purpose, thus projecting (intentionally or not) a polished image of their organisation. Rankings can provide an outsider’s point of view and an “objective” assessment of the level of the school. They represent an easily accessible and immediately comprehensible reference point for external stakeholders.
The rankings should thus be used as indicators of the overall quality and reputation of schools. Top-ranking schools certainly have a more productive faculty, better placement results and higher career prospects than their middle- or lower-ranking equivalents. Yet whether there are radical differences between school X and school Y is another question, as the differences rest on scores along different dimensions of quality. We will now review the main objections raised against the use of rankings.

4)      Questioning the rankings

The rankings have met with thorough criticism, which can be divided into two categories: the negative effects rankings have on deans’ strategies, and the methodology itself, in particular its statistical rigor.

a.       The detrimental effects of rankings
Dramatic changes have occurred in higher education in the United States as well as abroad. These changes have been triggered by internal and, mostly, external factors. In fact, one external factor – the appearance of business school rankings – has changed the game of business school marketing and reputation to the point that it has brought about a dramatic shift of policy within business schools – not necessarily to their benefit.
How far are business schools ready to go in order to appear high in the rankings? What part of their budget should they allocate to meet the criteria of the publishers? Which ranking should they favor? Where can they divert the resources from? There lies the greatest threat of these “business school rankings”: they base their study on MBA programs or, in some cases, on postgraduate programs. This casts a spotlight on a particular program, but leaves other branches of the schools in the dark and reroutes strategic resources to serve marketing purposes. Indeed, institutions are more and more focused on the image they project to the outside world, but run the risk of being trapped in that image. This focus on marketing and communication drains resources that could be used elsewhere to improve teaching quality, research, or the facilities in which students study.
Ideally, allocating more resources to promotion should bring a positive return and offset the investment through increased numbers of students. This, in turn, creates more funding to improve the teaching and the reputation of the school. Yet in reality, rankings mechanically create a glass-ceiling effect that prevents schools from climbing the ladder and keeps the financial effort from paying off. Thus, the diversion of resources is in vain, and ends up weakening the school.
On a larger, and more education oriented scale, practical teaching skills are becoming more highly valued in business schools, and faculty are under tremendous pressure to perform in their classroom in an entertaining way. This is all the more true as alumni feedback is now also part of the ranking criteria.
In their above-mentioned work, Corley and Gioia1 give an interesting view of the mechanism triggered by the ranking game: schools redesign their programs on very short cycles (5 years maximum), thus encouraging short-term thinking and change “for the sake of it”. Schools moving down the rankings and lower-ranking schools reinforce their placement offices, focus even more on their MBA program, and increase their attention to recruiters, in ways that sometimes conflict with class organisation or quality. They start to treat students as customers to ensure their positive feedback when answering surveys. This leads to extreme care in grading students, as low grades will dissatisfy them (as well as the recruiters); having students fail to obtain their diploma will indeed harm the school’s reputation. Classes are increasingly tailored to fit the students’ wishes, often at the cost of quality. Thus schools have a tendency to shift to brand and image management, rather than focusing on research and diversification.
In one of the earliest surveys examining the quality of ranking methodologies, Martin Schatz2 underlined other negative effects. Do schools recruit only students who have better chances of reaching high positions, regardless of their real performance? This could be a strategy to climb up the rankings. It is also true that business school success has a lot to do with networking, and the current methodologies favor large schools with large numbers of graduates every year (whatever their quality): their top graduates will surely outnumber those of smaller business schools that have fewer top-quality graduates.

b.      Methodological questioning
Rankings in general depend very largely on the characteristics chosen for evaluation, and even more on the weights assigned to them. The various rankings omit factors that are very important when trying to assess the quality of education, such as programme content and delivery, school resources and financial stability, research and development in teaching methodology and pedagogy, and continuous learning and improvement mechanisms. Besides, most of the information used by the rankings is provided by the schools themselves, which can lead to fraudulent or untruthful behaviour.
It is difficult to determine a generally accepted overall scale on which school quality could be measured, which leads to a multiplication of assessment methodologies. Whether single-factor (Forbes bases its analysis on “return on investment”) or multidimensional (Economist Intelligence Unit), the ranking models have different levels of complexity as well as varied weightings of the criteria, mostly based on the publishers’ prior feelings and expectations.
In fact, when comparing the different classification methodologies, one can observe that the outcomes rely mainly on one central element: the financial one. Indeed, it explains approximately 95% of the variation in the different publications, as demonstrated by a 2009 study conducted by Michael Halperin, Robert Herbert and Edward Lusk3, three researchers from U.S. universities. This indicates that the rankings of the different publishers are highly correlated or, in other words, that they are basically the same. Even though the criteria used and methodologies applied differ, most publications measure more or less the same dimension: return on investment.
When observing the order of schools in the various rankings, one can observe similarities, whatever the differences in methodologies. In spite of the differences in absolute rankings across the publications, groups of schools evolving at similar levels can be identified. In addition, top institutions tend to remain relatively stable over time, whereas the lower-ranking ones are subject to much more radical changes. This indicates that the variable sets used in the methodologies most likely reinforce the position of high-rankers rather than the others’. It seems a hard task for programs to reach higher positions, and the overall tendency is rather to see lower-ranking schools dropping than climbing up the steps to glory, as explained by Kai Peters4 from Ashridge Business School, U.K. Indeed, top schools attract the best students, charge the highest prices, have the budget to recruit the best faculty and can buy top-quality equipment. This improves their ranking, and their top-class students can be hired into high-flying positions, thus reinforcing this virtuous circle. On the other hand, lower-ranking schools cannot break through to the top.
In 1999, Ilia Dichev5, of the University of Michigan, inquired into the nature and quality of the ranking methodologies of two main publishers: U.S. News and Business Week. It appeared that they were merely aggregations of “noisy” data elements. In addition, there was a lack of consistency between the ranking changes across the publications over similar time periods. This suggested that the order changes were driven more by methodology revisions or modifications than by actual changes in the schools. Thus the rankings could not be considered a comprehensive measure of the quality of business schools, but rather a very broad indicator of return on investment for external stakeholders (mainly potential students and recruiters).
The main conclusion of this investigation was that the design of the rankings is deficient, and that redesigning the methodology could overcome this issue. Noisy data elements should be reweighted in order to decrease their disturbing effects. This is the core of the discussion: the elaboration of a good model requires discrimination in the choice of variables. When digging deeper into the different ranking methodologies, one comes across surprising findings. Even though ranking publishers are explicit about their selection criteria, and sometimes disclose how these criteria are weighted, they do not give a clear justification of these weights. Other aspects of their methodology are also questionable. For example, pre-selecting only 44 schools out of the more than 700 U.S. colleges offering such programs (as Business Week initially did) already gives grounds for discussion (on what criteria were these selected?), not to mention the fact that no justification is given for selecting that number only.

c.       Reaching a higher standard
The methodology should be revised in a way that satisfies business schools and enables publishers to provide their readers with robust and useful information, thus ensuring a large circulation of their ranking issues. Rankings should also be divided into sub-categories to provide key information about schools’ specialisations, assets and competitive advantages. In addition, the methodology should be stable over the years, allowing institutions to implement long-term improvement strategies, rather than focusing on displaying an image of transformation to adapt to shifts in ranking methodology. A constructive relationship would enhance the outputs of both sides, as publishers could highlight, as external and independent judges, schools’ improvement needs and their evolution, therefore gaining a credibility that would increase their sales.
The objective here is to identify common ground between the customers of the rankings (students and employers), their publishers and their targets, i.e. the business schools. We have noted that the criteria measured by the different ranking publishers ultimately capture the same dimension (mainly return on investment). We will not question the choice of criteria here, but we aim to provide at least a model that satisfies all parties by its mathematical robustness. We have already alluded to the issue of criteria and their weightings. How does one compare a pumpkin and an armful of apples? One needs to bring them back to a common scale, which allows comparison; otherwise they should be measured separately. What we aim to provide our readers with is a ranking methodology that has a solid statistical basis and compares elements that are demonstrably related.
To do so, we will use the Financial Times’ (FT) Masters in Management 2010 figures6 (collected yearly and published in September), as they provide the most widely agreed-upon criteria on the international scene. Therefore, no new underlying data are introduced here; only the statistical analysis distinguishes our work.


5)      Methods


The FT rankings are based on the weighted average of different criteria such as salary after graduation or the percentage of international students. A weighted average is simply an average in which each quantity to be averaged is assigned a weight. These weights determine the relative importance of each quantity in the average.
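As a minimal sketch in Python, here is how such a weighted average is computed. The criterion scores and weights below are illustrative placeholders, not the FT’s actual figures.

```python
import numpy as np

# Hypothetical scores of one school on three criteria (e.g. salary, international, research)
scores = np.array([72.0, 85.0, 60.0])
# Publisher-chosen weights for those criteria (hypothetical)
weights = np.array([0.40, 0.35, 0.25])

# Each score counts in proportion to its weight
weighted_average = np.sum(weights * scores) / np.sum(weights)
print(weighted_average)  # one aggregate number per school
```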
There are some problems with the weighted average that trained statisticians are aware of. The first is whether it is possible or not to add certain quantities to obtain a meaningful sum. An example of a meaningless sum would be the sum obtained by adding 1) the number of hairs on my neighbor’s head, 2) the number of trains leaving my local train station on a Sunday and 3) the number of stars visible from my bedroom window on a clear summer night. The resulting sum is meaningless because the three numbers have nothing in common. Only things that are more or less related can be added to obtain a meaningful sum or average. An example of an average of similar quantities is found at school, when the overall average of all the courses followed by a student is computed. In a typical business school, a Grade Point Average (GPA) is calculated this way. Adding grades from different disciplines such as Marketing, Finance or Organizational Behavior is possible because those subjects somehow measure something in common. If they were not related at all, it would be impossible to compute a meaningful average.
Statisticians use the word “correlated” to indicate that two or more variables have a tendency to “move together.” An example of two correlated variables in the human population is height and weight. Taller people have a tendency to be heavier (children are smaller and thus much lighter than adults), even if this relationship is not always true (some tall people are lighter than some shorter people, but the bigger the difference in height, the less likely this is to be true). Height and weight are said to be positively correlated (i.e. people who are taller are usually heavier). When two variables are correlated, it is often possible to add them. However, when two variables are not correlated, it is never possible to add them to obtain a meaningful sum or average. So the very first thing statisticians do before computing an average is to calculate the “coefficients of correlation” between the variables, to find out whether it is possible to add them. For example, the Employed at 3 months (%) variable used in the FT ranking is not at all correlated with the Weighted salary variable, and therefore the two quantities should never be added, even if it appears “intuitively” that the two variables should be related.
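The check itself is straightforward. Below is a minimal sketch assuming the FT criteria have been loaded into a pandas DataFrame from a hypothetical file ft_mim_2010.csv with one row per school; the column names are assumptions made for illustration.

```python
import pandas as pd

# Hypothetical CSV export of the FT criteria, one row per school
ft = pd.read_csv("ft_mim_2010.csv")

# Pearson correlation matrix between all numeric criteria
corr = ft.select_dtypes("number").corr()

# Inspect one pair before deciding whether the two variables may be averaged together;
# a coefficient near 0 means averaging them produces a meaningless aggregate.
print(corr.loc["Weighted salary (US$)", "Employed at 3 months (%)"])
```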
Once it has been established that the variables are sufficiently correlated and that their average can be computed, another problem to address is what weight should be attributed to each variable. The FT journalists seem to believe that their educated opinion can do the job, but once again, this comes at the expense of sound statistical practice. In order to establish the weights when many variables are available, the correlations of each variable with all the others in the analysis should be obtained first, using a technique called Factor Analysis (FA). The purpose of FA is to replace a given number of correlated variables by a smaller number of uncorrelated factors. FA is an advanced technique, but the basic principles are easy to understand. It allows one to identify “trends” in variables that are interrelated, and to aggregate them into factors. The strength of the links of each variable with all the others indicates its weight and importance in the study.
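As a sketch of this step, scikit-learn’s FactorAnalysis can extract such factors from the standardized criteria. Three factors are requested below only because that is what we report later; the file and column names are the same hypothetical ones as above, and complete numeric data are assumed.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Numeric FT criteria, one row per school (hypothetical file, as above)
X = pd.read_csv("ft_mim_2010.csv").select_dtypes("number")

# Put all criteria on a common scale before extracting factors
Z = StandardScaler().fit_transform(X)

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
factor_scores = fa.fit_transform(Z)          # one score per school on each factor

# Loadings: how strongly each original criterion is tied to each factor;
# strong loadings justify a large weight, weak loadings a small one.
loadings = pd.DataFrame(fa.components_.T, index=X.columns,
                        columns=["Factor 1", "Factor 2", "Factor 3"])
print(loadings.round(2))
```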
Going back to our school example above, once the GPA is obtained for the students, the GPA can be considered a factor because it integrates all the original variables; it gives an overall indication of the scholastic level of the students. This indicator, or factor, is good only if it has strong links to the original variables (strong correlations). Some courses may be highly related to it (they may have strong correlations with the GPA) while others may be only vaguely related to it (they may have low correlations with the GPA). If many courses are highly related to it, then the factor is strong; if not, the value of the factor is dubious.
Let us now suppose that Marketing has a strong relationship with the GPA (we are just making this up) and Finance has a weak relationship with the GPA. Then Marketing should be considered more important than Finance and should therefore be weighted more heavily in the average, as the sketch below illustrates.
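To make this concrete (sticking with the made-up numbers of the example), weights can simply be set in proportion to each course’s link with the factor; the loadings and grades below are purely illustrative.

```python
# Hypothetical factor loadings: how strongly each course relates to the overall GPA factor
loadings = {"Marketing": 0.80, "Finance": 0.30}
# Hypothetical grades of one student
grades = {"Marketing": 14.0, "Finance": 16.0}

# Weight each grade by its loading, then normalize by the total weight
weighted_gpa = sum(loadings[c] * grades[c] for c in grades) / sum(loadings.values())
print(weighted_gpa)   # Marketing pulls the average more because its loading is higher
```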
To sum up, even if the original FT data can be considered relevant and reliable, there are two main problems with the FT’s number crunching that make their study statistically mistaken and their rankings incorrect: 1) they did not check whether the numbers could legitimately be added (we found that most of the variables do not share enough in common to compute an overall mean) and 2) the weights they have picked are arbitrary and subjective, not the outcome of a mathematically objective optimization procedure.


6)      Results

We have thus applied correlation and factor analysis to their data and we have discovered three main dimensions, not just one as believed by the FT. It is impossible to add those dimensions/factors because they are uncorrelated (they share nothing in common). We are now going to explain them and extract the correct corresponding rankings:

Statistical Ranking of the Financial Times Masters in Management 2010 (in decreasing order of ROI score)

School name   ROI score   ROI rank   FT rank   Rank difference   International score
Indian Institute of Management, Ahmedabad (IIMA) 80,2 1 8 -7 37,4
WHU - Otto Beisheim School of Management 78,8 2 12 -10 37,5
Universität St.Gallen 77,4 3 4 -1 61,8
Stockholm School of Economics 69,1 4 14 -10 48,6
HHL-Leipzig GSM 68,1 5 38 -33 38,4
Mannheim Business School 62,6 6 13 -7 33,3
Esade Business School 59,9 7 10 -3 61,3
HEC Paris 59,1 8 3 5 55,9
Solvay Business School 58,9 9 20 -11 45,1
Rotterdam School of Management, Erasmus University 58,1 10 11 -1 49,6
IAG-Louvain School of Management 58,0 11 19 -8 41,7
Maastricht University 57,0 12 25 -13 60,5
NHH 55,9 13 40 -27 38,7
Grenoble Graduate School of Business 55,2 14 5 9 65,8
Vlerick Leuven Gent Management School 55,0 15 37 -22 46,4
Imperial College Business School 54,9 16 27 -11 64,7
London School of Economics and Political Science 54,9 17 7 10 71,5
Universiteit Antwerpen Management School 54,3 18 27 -9 48,1
ESCP Europe 54,1 19 1 18 63,8
Università Bocconi 53,8 20 33 -13 47,7
Essec Business School 53,8 21 9 12 53,4
Copenhagen Business School 52,9 22 22 0 50,7
HEC Montreal 52,9 23 34 -11 48,4
Cems 52,4 24 2 22 74,0
TiasNimbas Business School, Tilburg University 51,9 25 55 -30 54,0
Nyenrode Business Universiteit 51,2 26 53 -27 42,5
IAE Aix-en-Provence Graduate School of Management 51,1 27 45 -18 48,5
Kozminski University 50,4 28 30 -2 42,6
Shanghai Jiao Tong University, Antai 50,2 29 46 -17 33,1
University of Strathclyde Business School 50,1 30 25 5 63,6
National Chengchi University 49,7 31 47 -16 35,8
Aalto University School of Economics 49,3 32 30 2 39,3
University of Cologne, Faculty of Management 49,2 33 42 -9 39,4
Aarhus School of Business 48,3 34 51 -17 55,8
National Sun Yat-Sen University 47,8 35 63 -28 32,9
Eada 47,5 36 50 -14 63,9
Brunel University 47,2 37 59 -22 60,3
City University: Cass 47,0 38 17 21 67,9
Aston Business School 46,8 39 39 0 60,2
HEC Lausanne 46,8 40 35 5 56,5
Durham Business School 46,4 41 56 -15 58,5
Warsaw School of Economics 46,2 42 47 -5 38,3
University College Dublin: Smurfit 46,1 43 60 -17 52,5
BI Norwegian School of Management 46,0 44 64 -20 38,2
Edhec Business School 45,8 45 14 31 52,7
Nottingham University Business School 45,5 46 43 3 53,2
Lancaster University Management School 45,2 47 60 -13 56,1
EM Lyon Business School 44,6 48 5 43 56,2
Audencia Nantes 44,4 49 18 31 48,4
ESC Toulouse 41,7 50 16 34 50,2
WU (Vienna University of Economics and Business) 41,6 51 24 27 37,3
University of Bath School of Management 41,3 52 41 11 58,0
Bradford University School of Management 40,5 53 54 -1 58,8
Faculdade de Economia of the Universidade Nova de Lisboa 40,5 54 57 -3 43,7
ESC Clermont 40,2 55 47 8 43,1
Rouen Business School 39,8 56 23 33 49,3
Bem Bordeaux Management School 39,5 57 35 22 50,2
ICN Business School 39,4 58 43 15 46,2
Euromed Management 39,0 59 30 29 53,3
Reims Management School 38,0 60 21 39 50,3
Skema 37,4 61 29 32 49,1
ESC Tours-Poitiers (ESCEM) 36,3 62 52 10 42,4
Politecnico di Milano School of Management 35,0 63 65 -2 45,8
University of Economics, Prague 34,3 64 58 6 39,1
Corvinus University of Budapest 33,9 65 62 3 38,6


Notes: ROI means return on investment. ROI scores and International scores are T score transformations of the factor scores obtained by Principal Component Analysis with Varimax rotation. Rank difference is obtained by subtracting the original FT rank from the ROI rank; a positive number indicates that a school has been overrated by the FT, whereas a negative number indicates underrating.
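For readers who want to reproduce these columns, here is a minimal sketch under stated assumptions: the input file and column names are hypothetical, scikit-learn’s FactorAnalysis with a Varimax rotation is used as a stand-in for the Principal Component Analysis with Varimax rotation described above, and which extracted factor corresponds to ROI, internationalism or the third dimension must be checked by inspecting the loadings.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

ft = pd.read_csv("ft_mim_2010.csv")                      # hypothetical file, one row per school
X = ft.select_dtypes("number").drop(columns=["FT rank"]) # criteria only, not the published rank

# Extract three rotated factors from the standardized criteria
Z = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
scores = pd.DataFrame(fa.fit_transform(Z),
                      columns=["ROI", "International", "Third"])  # labels assigned after reading the loadings

# T-score transformation: rescale each factor to mean 50 and standard deviation 10
t_scores = 50 + 10 * (scores - scores.mean()) / scores.std(ddof=0)

ft["ROI score"] = t_scores["ROI"]
ft["ROI rank"] = ft["ROI score"].rank(ascending=False, method="first").astype(int)
ft["Rank difference"] = ft["ROI rank"] - ft["FT rank"]   # positive = overrated by the FT
print(ft[["School name", "ROI score", "ROI rank", "FT rank", "Rank difference"]])
```

Applying the same T-score transformation to the factor identified as international would yield the International score column in the same way.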


The first factor includes the following variables (in decreasing order of importance): Weighted salary (US$), Aims achieved (%), Women students (%), Value for money rank, Women faculty (%), and Placement success rank. This factor reflects what we traditionally mean by business school quality: added value or return on investment (ROI), but also the proportion of women students and faculty. This correlation between the proportion of women and the traditional financial criteria came as a surprise to us, but it cannot be denied, based on FA, that the presence of women is very positively linked with business school education quality.
The second factor includes the following variables (in decreasing order of importance): International students (%), International faculty (%), International mobility rank, and Course length (months). It is clearly an international factor that plays an essential role in our global economy.
The third factor includes the following variables (in decreasing order of importance): International course experience rank, Languages, Number enrolled 2009/10, and Careers rank. The size of this factor is such that it simply cannot be ignored. This last factor came as a surprise to us, because we were expecting to find the number of languages taught or “international exposure” (International course experience rank) correlated with the variables of the International factor found above. Why isn’t international exposure positively correlated with this International factor? We don’t know exactly. We analyzed this last factor carefully to find out what it was about and discovered, when we looked at the rank order of the schools on this factor, that only French schools were at the top – the first non-French school was ranked 17th. This last factor is thus a French factor that is not related in any way to the two factors identified above (ROI and internationalism, which are traditionally associated with business school quality). It means that the schools ranking high on this French factor do not necessarily have a high ROI or a high international score. This appears clearly on the graph below: the French schools, represented by black dots, are not at the top right or left of the graph, where the very best schools are located.
Because this factor is more a peculiarity than anything else (something typically French, not linked to real quality), the corresponding ranking will not be reported here. We would like to emphasize, though, that adding the variables of this French factor to all the other variables introduces a fatal bias in the FT rankings, because French schools score higher on them and therefore obtain higher scores in the aggregate FT rankings. These variables on which French business schools score higher (such as the number of languages) increase their FT rankings despite the total absence of any relationship between those variables and the two previous factors, in particular the ROI factor.
This French factor is probably due, among other things, to the display of a more multicultural attitude (as opposed to the international, English-based factor found above) that is not so popular outside of France. A thorough explanation of this factor would require further research that is beyond this article’s scope.
The graph below plots the ROI factor against the Internationalization factor. The schools scoring high on ROI are located at the top of the graph, and the best among them are represented by a triangle pointing upwards. The schools that are highly international stand on the right side of the graph and are represented by a triangle pointing to the right. The numbers displayed represent the 2010 FT Masters in Management ranks. Basically, the best schools are in the top right corner, followed by the top left and bottom right corners, which represent high performance on the ROI dimension and the international dimension respectively. The middle of the graph represents the average schools, and the level gets lower as one moves towards the bottom left corner.
French schools are represented by black dots. You can see that they are not necessarily in the highest positions, as opposed to the outcome of the FT methodology. All the other schools are represented by empty circles.
Thus, the top school given by this new methodology is the Universität St.Gallen in Switzerland (represented by a cross on the graph), closely followed, from a return-on-investment perspective, by the Indian Institute of Management, Ahmedabad in India and the WHU - Otto Beisheim School of Management in Germany, and, on the international dimension, by Cems (whose programme is taught in many different countries) and the London School of Economics in the U.K.
This reflects a completely different hierarchy from the one displayed in the latest FT ranking issue – some schools that are far apart in the FT rankings can be very close on our graph (e.g. schools 14, 43, 60). Indeed, there is a huge discrepancy between real quality as measured by ROI and the FT rankings, as already seen in the table above. The schools that have been the most severely underrated by the FT are HHL-Leipzig GSM (-33) and TiasNimbas Business School, Tilburg University (-30).

Visual representation of the schools on ROI and Internationalism (numbered after the original 2010 FT ranking)


ROI (ROI score) and Internationalism (International score) are T score transformations of the factor scores obtained by Principal Component Analysis with Varimax rotation. All T scores have a mean of 50 and a standard deviation of 10.
The numbers displayed represent the rank numbers of the 2010 FT Masters in Management ranking (see table above). For example, ‘1’ represents ESCP Europe, ‘2’ represents Cems, ‘3’ HEC Paris and ‘4’ Universität St.Gallen.



References:
1-      Gioia, D.A. and Corley, K.G. Being Good versus Looking Good: Business School Rankings and the Circean Transformation from Image to Substance. Academy of Management Learning & Education, 2002, Vol. 1, N°1, 107-120.
2-      Schatz, M. What’s wrong with MBA ranking surveys? Management Research News, 1993, Vol. 16, N°7, 15-18.
3-      Halperin, M., Hebert, R. and Lusk, E.J. Comparing the rankings of MBA curricula: do methodologies matter? Journal of Business & Finance Librarianship, 2009, Vol. 14, 47-62.
4-      Peters, K. Business school rankings: content and context. European Journal of Management Development, 2007, Vol. 26, N°1, 49-53.
5-      Dichev, I. How good are business school rankings? Journal of Business, 1999, Vol. 72, N°2, 201-213.
6-      http://rankings.ft.com/businessschoolrankings/masters-in-management
