Table of Contents
- 1. Definition & Objectives
- 2. Role of multi-criteria analysis in impact assessment
- 3. Process & Method
- 4. Combination with other methods
- 5. Strengths and weaknesses
- 6. Choosing between multi-criteria analysis methods
- 7. Software for MCDA
- 8. Further Reading
Suggested citation: Geneletti, D. (2013). Multi-criteria analysis. LIAISE Toolbox. Retrieved date, from http://beta.liaise-toolbox.eu/ia-methods/multi-criteria-analysis.
MCA structures a decision problem in terms of several possible alternatives and assesses each of them against various criteria at the same time. A number of MCA methods exist to rank, compare and/or select the most suitable options according to the chosen criteria. Depending on the method, each criterion can be measured in different ways, either qualitatively or quantitatively. In the following, the typical steps required to carry out an MCA are described. It is argued that MCA methods can effectively support the structuring of, assessment of, and decision making on complex policy issues. The main advantage of MCA methods is their capability to integrate a diversity of criteria in a multidimensional way, and to be adapted to a large variety of contexts. The procedures and results of MCA can be improved through interaction with stakeholders, and in this regard MCA methods are particularly suitable for use in combination with participatory methods.
Definition & Objectives
Multicriteria analysis (MCA) is a family of methods commonly implemented by decision support systems (DSS) to compare alternative courses of action on the basis of multiple factors, and to identify the best performing solution (Massam, 1988). These methods include techniques to structure the decision problem, perform sensitivity analysis, improve transparency, enhance result presentation and visualisation, etc. (Beinat and Nijkamp, 1998). A characteristic feature of multicriteria approaches is that the evaluation is based on a number of explicitly formulated criteria, i.e., 'standards of judging', that provide indications on the performance of the alternatives with respect to a number of objectives. The criteria, which typically differ considerably from one another in nature, are each expressed in appropriate units of measurement. The nature of MCA makes it particularly suitable for decision problems in impact assessment and sustainability appraisal. This type of decision problem involves multiple objectives and multiple criteria, which are typically non-commensurable and often conflicting.
Role of multi-criteria analysis in impact assessment
MCA plays its main role in the comparison of policy options, by identifying the effects of these options, their relative performance and the trade-offs to be made.
MCA also plays a role in the development of options ("problem design"), when it is used to evaluate a series of options in order to eliminate the most undesirable ones or, for example, when it is used in a GIS environment to identify possible solutions to a spatially explicit problem (e.g., site selection).
However, in order to apply MCA effectively in an impact assessment, the objectives first have to be made clear and the problem has to be structured in a specific way. In other words, the problem definition stage of an impact assessment has to be carried out properly for an MCA to be applied successfully.
Process & Method
A short description follows of the typical operational steps required to carry out an MCA to support decision problems. A more detailed description can be found in DCLG (2009). The starting point is the setting-up of an evaluation matrix, which contains the possible alternatives and the criteria against which they have to be evaluated. The criterion scores consist of raw measurements expressed in different scales or units (monetary units, bio-physical units, etc.).
In order to be related to the degree of 'desirability' of the alternatives under analysis, such scores need to be transformed from their original units into a value scale. Different MCA methods use different approaches for this stage (see the list of methods at the end of the page). For the purpose of a general illustration of the concept, we use here the approach followed by multiattribute decision analysis methods (DCLG, 2009). In these methods, through the step of value assessment (or normalization), the criterion scores lose their dimension and become an expression of the achievement of the evaluation objectives. This operation can be performed by constructing a value scale or by generating a value function, i.e. a curve that expresses the relationship between the criterion scores and the corresponding value scores (Beinat, 1997; Geneletti, 2005). Value functions transform the score of a given criterion into values in a pre-defined range, typically between 0 and 1, where 0 corresponds to minimum desirability and 1 to maximum desirability. They also show whether a criterion is considered a "benefit" (the higher the score, the higher the desirability) or a "cost" (the higher the score, the lower the desirability).
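As a minimal, purely illustrative sketch of this normalization step (the data and the linear form of the value function are assumptions of the example, not prescribed by the method), a benefit and a cost criterion can be rescaled onto a 0-1 value scale as follows:

```python
# Minimal sketch of linear value functions (all numbers invented for the
# example). A "benefit" criterion maps its best raw score to 1; a "cost"
# criterion maps its worst (highest) raw score to 0.
def normalize(scores, benefit=True):
    """Map raw criterion scores onto a 0-1 value scale (1 = most desirable)."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]  # all alternatives perform equally
    if benefit:
        return [(s - lo) / (hi - lo) for s in scores]
    return [(hi - s) / (hi - lo) for s in scores]

income = [200, 350, 500]   # benefit criterion (e.g., monetary units)
pollution = [10, 40, 25]   # cost criterion (e.g., bio-physical units)
print(normalize(income, benefit=True))      # [0.0, 0.5, 1.0]
print(normalize(pollution, benefit=False))  # [1.0, 0.0, 0.5]
```

In practice the value function need not be linear; any monotonic (or even non-monotonic) curve elicited from experts can replace the linear rescaling used here.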
The different evaluation criteria are usually characterised by different levels of importance, which need to be included in the evaluation. This is achieved by assigning a weight to each criterion. A weight can be defined as a value assigned to a criterion that indicates its importance relative to the other criteria under consideration (Malczewski, 1999). A survey of the methods developed to support weight assignment can be found in Herwijnen (1999). The value of expert knowledge is widely recognised in impact assessment: it allows decision makers to take decisions when knowledge based on objective observations is unavailable or insufficient.
Once the weights are assigned to each criterion, the aggregation can be performed. This is done by using a decision rule that dictates how best to order the alternatives, on the basis of the data on the alternatives (criterion scores) and the preferences of the decision-makers (value assessment and weights). The most widely used decision rule is probably the weighted linear combination. An overall score is calculated for each alternative by first multiplying the valued criterion scores by their appropriate weights, and then summing the weighted scores over all criteria. Another popular method is 'concordance analysis' (Roy, 1985), which establishes the ranking by pairwise comparison of the alternatives and is particularly suitable when many criteria are expressed by qualitative information. Another family of methods comprises the so-called ideal/reference point methods, such as TOPSIS (Malczewski, 1999). They are based on the definition of an ideal point, and on the assessment of the alternatives according to their separation from this point. The ideal point results from the most desirable combination of all evaluation criteria.
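The weighted linear combination rule can be sketched as follows (the alternatives, value scores and weights are invented for the illustration):

```python
# Weighted linear combination (all names, value scores and weights invented
# for the example). Each row of the matrix holds the normalized (0-1) value
# scores of one alternative; the weights are assumed to sum to 1.
def weighted_linear_combination(matrix, weights):
    return [sum(v * w for v, w in zip(row, weights)) for row in matrix]

alternatives = ["A", "B", "C"]
matrix = [
    [0.8, 0.2, 0.5],  # alternative A
    [0.4, 0.9, 0.6],  # alternative B
    [0.6, 0.5, 1.0],  # alternative C
]
weights = [0.5, 0.3, 0.2]

scores = weighted_linear_combination(matrix, weights)
ranking = sorted(zip(alternatives, scores), key=lambda p: p[1], reverse=True)
print(ranking)  # C first (0.65), then B (0.59), then A (0.56)
```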
Sensitivity analysis can be conducted to determine the robustness of the results with respect to uncertainties in the assigned weights, value functions and scores, as well as to changes in the aggregation method. The rationale for sensitivity analysis lies in the fact that the information available to decision-makers is often uncertain and imprecise, owing to measurement and conceptual errors, as well as limited knowledge about processes and values. Sensitivity analysis considers how, and how much, such errors and uncertainties affect the final result of the evaluation. Several techniques exist, the most popular of which test the sensitivity with respect to changes in the weights and the criterion scores. One way of performing this consists of imposing perturbations on the weights and/or the criterion scores to verify their effect on the overall performance of the alternatives. First of all, it is necessary to identify suitable uncertainty ranges for every weight/impact score. Such a range represents the maximum percentage by which we expect the weight/score to deviate from its original value, due to model limitations, data inaccuracy, disagreement among experts, etc. Afterwards, the weights/scores are recalculated a large number of times using a random number generator within the pre-defined uncertainty ranges. The multicriteria aggregation is then repeated for each weight/score set, generating frequency tables that show how often each alternative ranks in each position (examples can be found in Geneletti, 2010). Similarly, uncertainty ranges can also be considered in the construction of value functions, which then become "value regions" within which the normalized values are expected to fall (and are selected randomly or through statistical sampling techniques).
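This Monte Carlo procedure can be sketched as follows (the evaluation data and the ±20% uncertainty range are invented for the illustration):

```python
# Monte Carlo sensitivity analysis on the weights (data and the ±20%
# uncertainty range invented for the example): each weight is perturbed at
# random within its range, the set is re-normalized to sum to 1, the
# aggregation is repeated, and a frequency table records how often each
# alternative ranks first.
import random
from collections import Counter

random.seed(42)  # reproducible runs

matrix = {"A": [0.8, 0.2, 0.5], "B": [0.4, 0.9, 0.6], "C": [0.6, 0.5, 1.0]}
base_weights = [0.5, 0.3, 0.2]
uncertainty = 0.2  # maximum relative deviation of each weight
runs = 1000

first_place = Counter()
for _ in range(runs):
    perturbed = [w * random.uniform(1 - uncertainty, 1 + uncertainty)
                 for w in base_weights]
    total = sum(perturbed)
    perturbed = [w / total for w in perturbed]  # weights sum to 1 again
    scores = {a: sum(v * w for v, w in zip(row, perturbed))
              for a, row in matrix.items()}
    first_place[max(scores, key=scores.get)] += 1

for alt, count in first_place.most_common():
    print(f"{alt} ranked first in {count} of {runs} runs")
```

The same loop can be extended to tally every rank position (not just first place), yielding the frequency tables mentioned above.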
Another useful technique consists of studying the sensitivity of the ranking with respect to one of the criterion weights, by computing the rankings corresponding to all possible values of that weight. During this procedure the ratios of the remaining weights are assumed to be the same as those of the original weights. This analysis is useful to identify the reversal points, i.e., the weight values that cause a reversal in the rank order of two (or more) alternatives (see example in Figure 1).
Figure 1. Results of the analysis of the sensitivity of a hypothetical ranking of three alternatives with respect to changes in the weight of one of the evaluation criteria. The weight (represented on the horizontal axis) varies between 0 and 1. The vertical axis shows the composite scores of the three alternatives, also ranging between 0 and 1. Hence the three lines represent the trend of the scores of the alternatives over all possible values assigned to the weight. This representation makes it possible to visualize the reversal points (see the small circle in the figure), corresponding to the weight values that cause a reversal in the rank order of the alternatives. Source: modified after Geneletti, 2005.
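The one-weight sweep behind this kind of analysis can be sketched as follows (the evaluation data are invented for the illustration); each printed line brackets a reversal point of the kind circled in such a plot:

```python
# One-weight sweep (data invented for the example): the weight of criterion 0
# varies from 0 to 1 while the remaining weights keep their original 3:2
# ratio; a change in the ranking between consecutive steps brackets a
# reversal point.
matrix = {"A": [0.8, 0.2, 0.5], "B": [0.4, 0.9, 0.6], "C": [0.6, 0.5, 1.0]}
others = [0.3, 0.2]  # original weights of criteria 1 and 2

def ranking_at(w0):
    """Ranking (best first) when criterion 0 has weight w0."""
    scale = (1.0 - w0) / sum(others)  # preserve the ratio of the other weights
    weights = [w0] + [w * scale for w in others]
    scores = {a: sum(v * w for v, w in zip(row, weights))
              for a, row in matrix.items()}
    return sorted(scores, key=scores.get, reverse=True)

prev = ranking_at(0.0)
for i in range(1, 101):
    w0 = i / 100
    cur = ranking_at(w0)
    if cur != prev:  # a reversal point lies between w0 - 0.01 and w0
        print(f"rank reversal near w0 = {w0:.2f}: {prev} -> {cur}")
        prev = cur
```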
Combination with other methods
The robustness of an MCA result depends, among other things, upon the extent to which weights and values are commonly agreed upon by stakeholders. Hence, MCA is commonly employed in combination with other methods to elicit stakeholders' opinions, among which are the Delphi survey and stakeholder analysis. The Delphi survey is a technique for eliciting experts' opinions that has been extensively applied in environmental management (MacMillan and Marshall, 2006). Delphi surveys aim at soliciting the advice of a panel of experts and, whenever possible, forging a consensus. The approach is based on structured and written questionnaires, which panelists are asked to answer anonymously. All responses are summarised and reported back to the panelists, who then have the opportunity to revise their judgments. Stakeholder analysis is a focused and well-planned exercise aimed at answering questions that are directly relevant and useful to the planning and management process (Renard, 2004). It makes it possible to incorporate different values and concerns by identifying the key actors and assessing their respective interests in the decision under analysis. Geneletti (2010) and Geneletti (2008) present case studies showing the use of, respectively, Delphi surveys and stakeholder analysis to assess weights in MCA.
The spatial nature of many problems in policy assessment makes the use of Geographical Information Systems (GIS) necessary to manage the input data easily. MCA in a GIS environment (or spatial multicriteria analysis, SMCA) is a procedure to identify and compare solutions to a spatial problem, based on the combination of multiple factors that can be, at least partially, represented by maps (Malczewski, 1999). This approach takes advantage of both the capability of GIS to manage and process spatial information, and the flexibility of MCA to combine factual information (e.g., expected pollution levels) with value-based information (e.g., stakeholders' opinions, participatory surveys) (Geneletti, 2010). Taking into account both technical elements and people's values and perceptions is essential to build consensus around a decision, to reduce conflicts, and consequently to pave the way to successful policy making. Operationally, the steps of SMCA are similar to those previously described for MCA, the most relevant difference being that (some of) the criteria, as well as the final outputs, are represented by maps, i.e. spatial distributions of scores rather than individual scores (a comprehensive description of the SMCA process can be found in Herwijnen, 1999).
Strengths and weaknesses
Strengths of MCA
- Learning process that stimulates discussion and facilitates a common understanding of the decision problem
- Openness to divergent values and opinions
- Capability to tackle qualitative and intangible factors
- Accountability (systematic, transparent)
- Capability to support conflict resolution and to help reaching a political compromise
- Support for broad stakeholder participation
- Preferences are revealed in a more explicit and direct way
Weaknesses and difficulties of MCA
- Potentially time consuming and technically complex
- Perceived as a technocratic approach
- Difficult inter-comparison of case studies
- Choice of stakeholders and timing of their participation
- Experts/stakeholders may be reluctant to share their knowledge and values
(Source: modified and integrated after Gamper and Turcanu, 2007)
Choosing between multi-criteria analysis methods
A large number of MCA methods exist to rank, compare and/or select the most suitable policy options according to the chosen criteria. These methods differ in the decision rule they use (compensatory, partial-compensatory or non-compensatory) and in the type of data they can handle (quantitative, qualitative or mixed). The choice of MCA method therefore depends on the preferred decision rule and on the type of data available (see Table 1).
Table 1. Overview of MCA methods, including multi-attribute value theory and the analytic hierarchy process, classified by decision rule and by the type of data handled [table not fully reproduced].
Decision rule
A decision rule is a procedure that allows for ordering alternative policies (Starr and Zeleny, 1977; Greco et al., 2005). It integrates the data and information on the alternatives and the decision maker's preferences into an overall assessment of the alternatives. The concept of compensability is an important factor in these decision rules. Compensability refers to the possibility of compensating what is considered to be a 'bad' performance on one criterion (for example, a high environmental impact) with a 'good' performance on another criterion (for example, a high income). Depending on the extent to which different criteria can be compensated by others, three main types of methods can be distinguished in MCA: compensatory, partial-compensatory and non-compensatory methods. In a compensatory method, a weak performance on one criterion can be totally compensated by a good performance on another criterion. In a partial-compensatory method, a limit is set on the extent to which weak performances can be compensated by good ones. A non-compensatory method, finally, does not allow compensation at all.
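The practical difference between the compensatory and non-compensatory families can be illustrated with a toy example (the data are invented, and the maximin rule is used here merely as a simple stand-in for the non-compensatory family):

```python
# Toy illustration of compensability (data invented): alternative A scores
# very well on income but poorly on environment, B is balanced. A fully
# compensatory rule (weighted sum) and a non-compensatory stand-in (maximin:
# judge each alternative by its worst criterion) disagree on which is best.
alternatives = {
    "A": [0.95, 0.2],  # [income value, environmental value], 0-1 scale
    "B": [0.60, 0.5],
}
weights = [0.5, 0.5]

def weighted_sum(values):
    return sum(v * w for v, w in zip(values, weights))

def maximin(values):
    return min(values)  # a weak criterion cannot be compensated

for name, values in alternatives.items():
    print(name, round(weighted_sum(values), 3), maximin(values))
# weighted sum prefers A (0.575 > 0.55); maximin prefers B (0.5 > 0.2)
```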
Type of data
In principle, each criterion used to order policy alternatives can be measured qualitatively or quantitatively. Some MCA methods are designed to process only quantitative information on criteria (e.g., weighted summation). In practice, this limitation is not very significant, because the pluses and minuses used for qualitative assessments are often derived from underlying classes of quantitative data. With a well-chosen standardisation method, such as goal standardisation, this underlying quantitative scale can be used in the weighted summation of these scores. Other methods are designed to process qualitative data (e.g., the dominance method, Regime). Finally, there is a group of MCA methods that can handle data according to the way it is measured (those with a tick mark under the heading 'mixed data' in Table 1).
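One possible sketch of this idea (the class midpoints, the goal value and the capped-ratio form of the standardisation are all assumptions of the example, not a prescribed procedure):

```python
# Hedged sketch of goal standardisation (class midpoints and the goal value
# invented): qualitative plus/minus classes stand for underlying quantitative
# ranges, here represented by their midpoints; each midpoint is divided by
# the policy goal and capped at 1, yielding a 0-1 score usable in a
# weighted summation.
class_midpoints = {"--": 5, "-": 15, "0": 30, "+": 60, "++": 90}
goal = 80  # assumed policy target for this criterion

def goal_standardise(label):
    return min(class_midpoints[label] / goal, 1.0)

print([goal_standardise(c) for c in ["-", "0", "++"]])  # [0.1875, 0.375, 1.0]
```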
Software for MCDA
Multi-criteria methods combine factual information with policy priorities. Therefore, they can support not only decision-making with multiple objectives but also discussions and negotiations between the stakeholders involved in the decision process. Belton and Stewart (2001) state that good supporting software is essential for the effective conduct of MCA in practice. In this way the facilitator, the analyst and the decision maker (DM) are freed from the technical implementation details, and are able to focus on the fundamental value judgments and choices. Although it is possible to set up macros in a spreadsheet to achieve this, it is more convenient to make use of specially designed software. Software to be used when working directly with decision makers should be visual and interactive in order to facilitate communication about the problem and the evaluation results. Interactivity allows information on evaluations, impact scores, priorities and other parameters to be easily entered and changed. Effective visual tools can be used to reflect the information provided back to the decision makers, for example by using a graphical presentation of an inferred value function or of an aggregated evaluation result. These visual tools can thus help create an understanding of the essence of the issues. MCA software tools can be subdivided into four groups according to the type of support they provide:
1. Problem structuring for discrete choice problems
Most decision support systems assume that a problem is already structured (cause-effect relationships are known, evaluation criteria are specified, the alternatives under evaluation are well described, etc.). In practice, however, this assumption rarely holds. Therefore, a number of software tools are available to support the structuring of discrete choice problems.
2. Discrete choice problems
Once a discrete choice problem has been structured, many software tools are available to support the evaluation of the problem. Most of these tools were originally designed for individual support, or for group support in which the group shares a single set of information.
3. Discrete group choice problems
Lately, a number of developments have taken place in the field of true group systems. A group system allows more than one user to independently enter their own evaluations, and provides facilities for synthesizing and displaying this information. Group systems can therefore be used to support discussions in stakeholder sessions.
4. Discrete spatial choice problems
Another development taking place in the field of multi-criteria decision support is the integration of spatial data. Systems that allow space-dependent input data, incorporate spatial multi-criteria evaluation tools and can display the information and results spatially are called spatial decision support systems (see also Herwijnen, 1999 and Uran, 2002). Most of these systems are tailor-made for one specific problem, but tools to support general discrete spatial choice problems can sometimes be found in specific procedures incorporated in a GIS.
The table below lists software tools that can be used to support MCA in practice. Note that the list is not exhaustive and will, of course, rapidly become outdated.
1. Problem structuring for discrete choice problems
Decision Explorer 3.2
Qualitative data analysis, linking concepts through cognitive or cause maps (www.banxia.com)
Mind Manager 4.0
Structures complex situations through organising ideas and concepts, graphical visualization with icons, graphics, colors and multimedia (www.mind-map.com)
2. Discrete choice problems
Criterium DecisionPlus 3.0
Value function model based on trade-off analysis (www.infoharvest.com)
PROMETHEE, pairwise comparison
Multiattribute value functions including option for imprecise preference information, cost-benefit analysis, outranking (www.ivm.vu.nl/en/projects/Projects/spatial-analysis/DEFINITE/index.asp)
Multiattribute value functions with imprecise preference information (www.hipre.hut.fi)
Multiattribute value functions (www.enterprise-lse.co.uk)
Logical Decisions 5.1
Multiattribute value functions and the Analytical Hierarchy Process (AHP) (www.logicaldecisions.com)
Novel Approach to Imprecise Assessment and Decision Environments (NAIADE) (http://www.aiaccproject.org/meetings/Trieste_02/trieste_cd/Adaptation/Munda_short.PDF)
Multiattribute value functions, graphical interaction and presentation (http://visadecisions.com/)
3. Discrete group choice problems
Team Expert Choice
AHP, pairwise comparisons (www.expertchoice.com)
Multiattribute value functions (www.simul8.com/visa.htm)
Multiattribute value functions and AHP (www.hipre.hut.fi)
4. Discrete spatial choice problems
A GIS that includes the following decision support procedures: WEIGHT (AHP), MCE (boolean combination, weighted linear combination or ordered weighted average), RANK (rank order the cells) and MOLA (allocate pixels to multiple objectives) (www.clarklabs.org)
A GIS that includes decision support procedures to assign weights and apply MCA on various ways (www.)
Ecosystem Management Decision Support; combines ArcGISTM, NetWeaver and Criterium DecisionPlus (www.institute.redlands.edu/emds/)
Further Reading
- Beinat, E., 1997. Value Functions for Environmental Management. Kluwer, Dordrecht.
- Beinat, E, and P Nijkamp (editors) (1998), Multicriteria Analysis for Land-use Management. Kluwer, Dordrecht.
- Department for Communities and Local Government (DCLG), 2009. Multi-criteria analysis: a manual. www.communities.gov.uk
- Geneletti, D., 2010. Combining stakeholder analysis and spatial multicriteria evaluation to select and rank inert landfill sites, Waste Management 30, 328-337.
- Geneletti, D., 2008. Incorporating biodiversity assets in spatial planning: Methodological proposal and development of a planning support system. Landscape and Urban Planning 84, 252-265.
- Geneletti, D., 2005. Multicriteria analysis to compare the impact of alternative road corridors: a case study in Northern Italy. Impact Assessment and Project Appraisal 23, 135-146.
- Gamper, C.D. and Turcanu, C., 2007. On the governmental use of multi-criteria analysis. Ecological Economics 62.
- Greco, S., B. Matarazzo and R. Slowinski (2005). Decision Rule Approach. In J. Figueira, S. Greco, and M. Ehrgott, editors, Multiple Criteria Decision Analysis: State of the Art Surveys, pages 507−562. Springer Verlag, Boston, Dordrecht, London.
- Herwijnen, M. van, 1999. Spatial decision support for environmental management. PhD thesis, Vrije Universiteit Amsterdam.
- Malczewski, J. 1999. GIS and Multicriteria Decision Analysis. Wiley, New York.
- Massam, B.H., 1988. Multi-criteria decision making techniques in planning. Prog. Plann. 30, 1–84.
- MacMillan, D.C., Marshall, K., 2006. The Delphi process—an expert-based approach to ecological modelling in data-poor environments. Animal Conservation 9, 11–19.
- Renard, Y., 2004. Guidelines for Stakeholder Identification and Analysis: A Manual for Caribbean Natural Resource Managers and Planners. Caribbean Natural Resources Institute, Trinidad.
- Roy, B., 1985. Méthodologie multicritère d'aide à la décision. Economica, Paris.
- Starr, M.K. and M. Zeleny (1977). MCDM: state and future of the arts. In: M.K. Starr and M. Zeleny (eds.), Multiple criteria decision making. Amsterdam: North−Holland, pp. 5−29.
Theory and review of methods and applications
- French, S. et al., 2009. Decision Behaviour, Analysis, and Support. Cambridge: Cambridge University Press.
- Kiker et al., 2005. Application of multicriteria decision analysis in environmental decision making. Integrated Environmental Assessment and Management 1(2), 95-108.
- Lahdelma, R., Salminen, P. and Hokkanen, J., 2000. Using multicriteria methods in environmental planning and management. Environmental Management 26(6), 595-605.
- Hjortsø, C.N., Stræde, S. and Helles, F., 2006. Applying multi-criteria decision making to protected areas and buffer zone management. Journal of Forest Economics 12(2), 91-108.
- Joerin, F., Thériault, M. and Musy, A., 2001. Using GIS and outranking multicriteria analysis for land-use suitability assessment. International Journal of Geographical Information Science 15(2), 153-174.
- Keisler, J.M. and Sundell, R.C., 1997. Combining multi-attribute utility and geographic information for boundary decisions: An application to park planning. Journal of Geographic Information and Decision Analysis 1(2), 101-118.
- Wikipedia, 2013. Multi-criteria decision analysis. Retrieved from: http://en.wikipedia.org/wiki/Multi-criteria_decision_analysis