Presents recent advances in both models and systems for intelligent decision making.
Organisations often face complex decisions requiring the assessment of large amounts of data. In recent years Multicriteria Decision Aid (MCDA) and Artificial Intelligence (AI) techniques have been applied with considerable success to support decision making in a wide range of complex real-world problems.
The integration of MCDA and AI provides new capabilities relating to the structuring of complex decision problems in static and distributed environments. These include the handling of massive data sets, the modelling of ill-structured information, the construction of advanced decision models, and the development of efficient computational optimization algorithms for problem solving. This book covers a rich set of topics, including intelligent decision support technologies, data mining models for decision making, evidential reasoning, evolutionary multiobjective optimization, fuzzy modelling, as well as applications in management and engineering.
Multicriteria Decision Aid and Artificial Intelligence: Links, Theory and Applications
Table of Contents
Title Page
Copyright
Preface
Notes on Contributors
Part I: The Contributions of Intelligent Techniques in Multicriteria Decision Aiding
Chapter 1: Computational intelligence techniques for multicriteria decision aiding: An overview
1.1 Introduction
1.2 The MCDA paradigm
1.3 Computational intelligence in MCDA
1.4 Conclusions
References
Chapter 2: Intelligent decision support systems
2.1 Introduction
2.2 Fundamentals of human decision making
2.3 Decision support systems
2.4 Intelligent decision support systems
2.5 Evaluating intelligent decision support systems
2.6 Summary and future trends
Acknowledgment
References
Chapter 3: Designing distributed multi-criteria decision support systems for complex and uncertain situations
3.1 Introduction
3.2 Example applications
3.3 Key challenges
3.4 Making trade-offs: Multi-criteria decision analysis
3.5 Exploring the future: Scenario-based reasoning
3.6 Making robust decisions: Combining MCDA and SBR
3.7 Discussion
3.8 Conclusion
Acknowledgment
References
Part II: Intelligent Technologies for Decision Support and Preference Modeling
Chapter 4: Preference representation with ontologies
4.1 Introduction
4.2 Ontology-based preference models
4.3 Maintaining the user profile up to date
4.4 Decision making methods exploiting the preference information stored in ontologies
4.5 Discussion and open questions
Acknowledgments
References
Part III: Decision Models
Chapter 5: Neural networks in multicriteria decision support
5.1 Introduction
5.2 Basic concepts of neural networks
5.3 Basics in multicriteria decision aid
5.4 Neural networks and multicriteria decision support
5.5 Summary and conclusions
References
Chapter 6: Rule-based approach to multicriteria ranking
6.1 Introduction
6.2 Problem setting
6.3 Pairwise comparison table
6.4 Rough approximation of outranking and nonoutranking relations
6.5 Induction and application of decision rules
6.6 Exploitation of preference graphs
6.7 Illustrative example
6.8 Summary and conclusions
Acknowledgment
References
Appendix
Chapter 7: About the application of evidence theory in multicriteria decision aid
7.1 Introduction
7.2 Evidence theory: Some concepts
7.3 New concepts in evidence theory for MCDA
7.4 Multicriteria methods modeled by evidence theory
7.5 Discussion
7.6 Conclusion
References
Part IV: Multiobjective Optimization
Chapter 8: Interactive approaches applied to multiobjective evolutionary algorithms
8.1 Introduction
8.2 Basic concepts and notation
8.3 MOEAs based on reference point methods
8.4 MOEAs based on value function methods
8.5 Miscellaneous methods
8.6 Conclusions and future work
Acknowledgment
References
Chapter 9: Generalized data envelopment analysis and computational intelligence in multiple criteria decision making
9.1 Introduction
9.2 Generalized data envelopment analysis
9.3 Generation of Pareto optimal solutions using GDEA and computational intelligence
9.4 Summary
References
Chapter 10: Fuzzy multiobjective optimization
10.1 Introduction
10.2 Solution concepts for multiobjective programming
10.3 Interactive multiobjective linear programming
10.4 Fuzzy multiobjective linear programming
10.5 Interactive fuzzy multiobjective linear programming
10.6 Interactive fuzzy multiobjective linear programming with fuzzy parameters
10.7 Interactive fuzzy stochastic multiobjective linear programming
10.8 Related works and applications
References
Part V: Applications in Management and Engineering
Chapter 11: Multiple criteria decision aid and agents: Supporting effective resource federation in virtual organizations
11.1 Introduction
11.2 The intuition of MCDA in multi-agent systems
11.3 Resource federation applied
11.4 An illustrative example
11.5 Conclusions
References
Chapter 12: Fuzzy analytic hierarchy process using type-2 fuzzy sets: An application to warehouse location selection
12.1 Introduction
12.2 Multicriteria selection
12.3 Literature review of fuzzy AHP
12.4 Buckley's type-1 fuzzy AHP
12.5 Type-2 fuzzy sets
12.6 Type-2 fuzzy AHP
12.7 An application: Warehouse location selection
12.8 Conclusion
References
Chapter 13: Applying genetic algorithms to optimize energy efficiency in buildings
13.1 Introduction
13.2 State-of-the-art review
13.3 An example case study
13.4 Development and application of a genetic algorithm for the example case study
13.5 Conclusions
Chapter 14: Nature-inspired intelligence for Pareto optimality analysis in portfolio optimization
14.1 Introduction
14.2 Literature review
14.3 Methodological issues
14.4 Pareto optimal sets in portfolio optimization
14.5 Computational results
14.6 Conclusion
References
Index
This edition first published 2013
© 2013 John Wiley & Sons, Ltd
Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Doumpos, Michael.
Multicriteria decision aid and artificial intelligence : links, theory and applications / Michael Doumpos,
Evangelos Grigoroudis.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-119-97639-4 (hardback)
1. Multiple criteria decision making. 2. Artificial intelligence. I. Grigoroudis, Evangelos. II. Title.
T57.95.D578 2013
658.4'033–dc23
2012040171
In the rapidly evolving technological and business environment, decision making becomes increasingly complex from many perspectives. For instance, environmental and sustainable development issues have risen in prominence, but the related policies, priorities, goals, and socio-economic tradeoffs are not well defined or understood in depth. Furthermore, technological advances provide new capabilities in many areas such as telecommunications, web-based technologies, transportation and logistics, manufacturing, energy management, and worldwide trade. Finally, the economic turmoil increases the uncertainties in the global business environment, and has a direct impact on all socio-economic and technological policies.
Such a challenging environment calls for the implementation of enhanced tools, processes, and techniques for decision analysis and support. Clearly, these should take into account the aforementioned multiple diverse aspects, in combination with the specific priorities and goals set by decision and policy makers. This has always been a prerequisite for providing decision support in a realistic context. But it is not enough anymore, as decision support technologies should nowadays also accommodate a variety of new (often crucial) requirements, such as distributed decision making, the handling of massive and increasingly complex data and structures, as well as the computational difficulties that arise in building and using models and systems, which are realistic enough to represent the dynamic nature of existing problems and challenges posed by new ones.
Multicriteria decision aid (MCDA) has evolved significantly over the past decades as a major discipline in operations research, dealing with providing decision support in complex, ill-structured problems involving multiple (conflicting) criteria. MCDA is involved in all aspects of the decision process, including problem structuring, model building, formulation of recommendations, implementation and support. Issues like preference modeling, the construction of proper sets of criteria and their measurement, the characterization of different criteria aggregation models, the development of effective interactive solution techniques, and the implementation of sophisticated methods in user-friendly decision support systems, have traditionally been at the core of MCDA research.
Among the many exciting trends developing in the area of MCDA, the one focused on exploring the connections of MCDA with other disciplines is particularly interesting for providing integrated decision support in the complex context described above. In this framework, artificial intelligence (AI) has attracted much interest. Nowadays, AI is a broad field within which one can identify several major research areas, including, among others, machine learning/data mining, soft computing, evolutionary computation, knowledge engineering and management, expert systems, symbolic reasoning, cognitive systems, etc. Even though AI research is mostly focused on predictive modeling and the development of technological systems that imitate human behavior, the methods and techniques developed in this field have much to offer towards meeting the new requirements for decision support described above. This potential has been acknowledged by researchers working in MCDA and AI, and has led to a growing trend involved with a unification of ideas developed (often) independently in the two fields. Among others, the results of this unification trend can be identified in the development of new decision modeling forms and paradigms, advanced solution techniques for complex decision problems, new approaches for preference elicitation and learning, as well as implementations in integrated intelligent systems.
Nevertheless, despite the vast amount of research published on MCDA and AI, the research on their integration is scattered across different sources, which are often oriented toward different readerships from one or the other field. With that in mind, the aim set for the preparation of this edited volume was to present, in a unified manner, the various capabilities that the integration of MCDA and AI provides, from different decision support perspectives, focusing on state-of-the-art research advances in this area, comprehensive coverage of the existing literature, as well as applications in several fields. The book includes 14 chapters organized into five parts, covering all these topics in a comprehensive and rigorous manner.
The first two chapters provide detailed overviews of the use of AI methods in MCDA. In particular, the first chapter, by Doumpos and Zopounidis, is focused on the computational intelligence paradigm. The chapter begins with an introduction to the basic concepts and principles of MCDA and then proceeds with an up-to-date review of the uses of computational intelligence approaches in MCDA. The review focuses on the methodological contributions of statistical learning, fuzzy systems, and evolutionary computation in areas such as preference modeling, preference disaggregation analysis, multiobjective optimization, and decision making under uncertainty.
In the second chapter, Phillips-Wren extends the overview covering AI techniques and approaches in the context of decision support systems. The chapter first introduces the fundamentals of human decision making, followed by a presentation of the decision support systems philosophy, as computer systems that utilize data, models, and knowledge to solve complex decision problems. Then, the contribution of intelligent technologies is discussed covering areas such as neural networks, fuzzy logic, evolutionary computing, expert systems, and intelligent agents. The chapter closes with the introduction of a framework for evaluating the success of intelligent decision support systems in a multicriteria context.
The following two chapters cover AI techniques and technologies, which are particularly useful in the context of preference modeling as well as for the development of decision support systems. In particular, the chapter by Comes, Wijngaards, and Schultmann involves multicriteria decision support systems in a distributed setting. The authors focus on strategic decision making problems, which are characterized by high complexity and uncertainty. Examples of such decision situations are discussed and the key challenges are identified. To meet these challenges, the authors propose a framework combining techniques from MCDA with scenario-based reasoning. The framework is illustrated through a decision making situation involving emergency management.
The next chapter, by Valls, Moreno, and Borràs, is involved with the representation and management of user preferences in decision support systems. The chapter illustrates how a semantic-based approach can be implemented to store and exploit the personal preferences of a user in complex domains. This approach is based on ontologies, which enable the representation of the domain's elements in a machine-understandable manner. The authors analyze over 30 semantic-based recommender systems and review different ontology-based models for preference representation, as well as algorithms for learning the user profile. The chapter also discusses the way MCDM techniques can be combined with ontology-based user profiles in recommender systems.
Chapters 5–7 present the contributions of popular AI paradigms in constructing new types of decision aiding models. In Chapter 5, Hanne focuses on neural networks. Neural networks have been one of the most widely used and successful AI techniques. The chapter presents the main concepts and types of neural networks and reviews their use in various aspects of MCDA.
Chapter 6, by Szeląg, Greco, and Słowiński, is devoted to rule-based models. The authors focus on ranking problems, where the objective is to rank a set of alternatives from the best to the worst ones. Models expressed in the form of decision rules are widely used in data mining and machine learning for classification. This chapter illustrates how rough set theory, a popular machine learning technique, can be extended to multicriteria ranking problems. The proposed methodology is based on the dominance-based rough set approach and it enables the development of decision rule models from decision examples.
In Chapter 7, Boujelben and De Smet analyze the applications of evidence theory in MCDA. Evidence theory has primarily been developed in the context of AI as a generalization of subjective probability theory, which is particularly suitable for problems under uncertainty and total ignorance. Thus, evidence theory provides a convenient framework for modeling and combining imperfect information. The chapter describes five multicriteria methods based on evidence theory and presents new concepts that have been developed within this modeling approach with specific applications in MCDA.
The following three chapters of the book are devoted to computational intelligence methods for multiobjective optimization. This part of the book starts with Chapter 8, by López Jaimes and Coello Coello, which is devoted to evolutionary algorithms for multiobjective optimization. This is one of the most active research topics in operations research in general and MCDA in particular. López Jaimes and Coello Coello focus on interactive procedures, in which the decision-maker has an active role in the solution process. The chapter presents a categorized review of recent multiobjective evolutionary algorithms designed to work as interactive optimization methods.
In the following chapter, Yun and Nakayama illustrate how data envelopment analysis (DEA) can be used in the context of multiobjective optimization combined with computational intelligence techniques. DEA is a popular approach for analyzing the efficiency of decision making units. Its underlying philosophy is closely related to multiobjective optimization and Pareto optimality. The authors introduce a generalized data envelopment analysis (GDEA) model and present several methods for combining GDEA and computational intelligence techniques for generating approximate Pareto optimal solutions. The methodology also enables the identification of the most interesting part of Pareto optimal solutions, as well as the improvement of the operation of popular computational optimization techniques.
In Chapter 10, Sakawa presents a comprehensive overview of fuzzy multiobjective linear programming. The chapter introduces modeling formulations for problems where the decision-maker has fuzzy goals, as well as cases where the parameters in the description of the objective functions and the constraints are fuzzy. Interactive solution techniques are also presented for these classes of problems. The chapter also discusses stochastic multiobjective linear programming problems and illustrates how they can be transformed into deterministic ones using a probability maximization model together with chance constrained conditions.
The last four chapters of the book are application-oriented, illustrating how the combination of AI and MCDA techniques contributes to addressing complex real-world decision making problems from various fields. In Chapter 11, Delias and Matsatsinis present a multicriteria methodology for supporting resource sharing in virtual organizations. The problem is considered in a cloud computing context, where specific computing resources should be managed in order to meet the needs of the clients. The proposed methodology adopts a multi-agent system design approach, with the ultimate goal of modeling the collective preferences of the agents (clients). The methodology is illustrated through an example application to a data center that seeks to optimize the application environments that it hosts.
The following chapter, by Sarı, Öztayşi, and Kahraman, presents the combination of fuzzy sets with MCDA techniques for the evaluation of warehouse location sites. Fuzzy set theory enables the formal modeling of uncertainty, vagueness, and imprecision that characterize many ill-structured complex decision problems. The authors discuss the applicability of fuzzy sets in several MCDA techniques, and illustrate how a modeling approach combining type-2 fuzzy sets with the analytic hierarchy process can be applied to a warehouse selection problem.
In Chapter 13, Diakaki and Grigoroudis illustrate the use of genetic algorithms in the optimization of energy efficiency in buildings in a multiobjective context. The authors review the existing literature on the use of such optimization techniques in improving the energy efficiency of buildings and present an application case study regarding the minimization of the initial investment cost and the increase of energy savings in a building.
The book closes with a chapter by Vassiliadis and Dounias on the use of a computational intelligence multiobjective optimization approach for portfolio optimization. In a traditional portfolio optimization setting an investor seeks to construct a portfolio of assets that maximizes return for a given level of risk, which leads to a bi-objective quadratic optimization model. The authors consider the case where constraints are imposed on the number of assets in the portfolio and examine the use of new risk measures. In order to construct the set of efficient portfolios, a genetic algorithm combined with a local search procedure is employed. Computational results are presented using a data set involving stocks from the New York Stock Exchange.
Sincere thanks must be expressed to all the authors who have devoted considerable time and effort to prepare excellent comprehensive works of high scientific quality and value. Without their help it would have been impossible to prepare this book in line with the high standards that we set from the very beginning of this project.
Michael Doumpos, Chania, Greece
Evangelos Grigoroudis, Chania, Greece
July 2012
Part I
The Contributions of Intelligent Techniques in Multicriteria Decision Aiding
Michael Doumpos and Constantin Zopounidis
Department of Production Engineering and Management, Technical University of Crete, Greece
Real world decision-making problems are usually too complex and ill-structured to be considered through the examination of a single criterion, attribute or point of view that will lead to an ‘optimal’ decision. In fact, such a single-dimensional approach is merely an oversimplification of the actual nature of the problem at hand, and it can lead to unrealistic decisions. A more appealing approach would be the simultaneous consideration of all pertinent factors that are related to the problem. However, through this approach some very essential issues/questions emerge: how can several and often conflicting factors be aggregated into a single evaluation model? Is this evaluation model unique and/or ‘optimal’? In addressing such issues, one has to bear in mind that each decision-maker (DM) has his/her own preferences, experiences, and decision-making policy.
The field of multicriteria decision aid (MCDA) is devoted to the study of problems that fit the above context. Among others, MCDA focuses on the development and implementation of decision support tools and methodologies to confront complex decision problems involving multiple criteria, goals or objectives of conflicting nature. It has to be emphasized, though, that MCDA techniques and methodologies are not just some mathematical models aggregating criteria that enable one to make optimal decisions in an automatic manner. Instead, MCDA has a strong decision support focus. In this context the DM has an active role in the decision-modeling process, which is implemented interactively and iteratively until a satisfactory recommendation is obtained that fits the preferences and policy of a particular DM or a group of DMs.
Even though MCDA has developed as a major and well-distinguished field of operations research, its interaction with other disciplines has also received much attention. This is understood if one considers the wide range of issues related to the decision process, which the MCDA paradigm addresses. These involve among others the phases of problem structuring, preference modeling, the construction and characterization of different forms of criteria aggregation models, as well as the design of interactive solution and decision aid procedures and systems. The diverse nature of these topics often calls for an interdisciplinary approach.
A significant part of the research on the connections of MCDA with other disciplines has focused on intelligent systems. Over the past decades enormous progress has been made in the field of artificial intelligence, in areas such as expert systems, knowledge-based systems, case-based reasoning, fuzzy logic, and data mining. This chapter focuses on computational intelligence, which has emerged as a distinct sub-field of artificial intelligence involved with the study of adaptive mechanisms to enable intelligent behavior in complex and changing environments (Engelbrecht 2002). Typical computational intelligence paradigms include machine learning algorithms, evolutionary computation and nature-inspired computational methodologies, as well as fuzzy systems. We provide an overview of the main contributions of popular computational intelligence approaches in MCDA, covering areas such as multiobjective optimization, preference modeling, and model building through preference disaggregation.
The rest of the chapter is organized as follows: Section 1.2 presents an introduction to the MCDA paradigm, its main concepts and methodological streams. Section 1.3 is devoted to the overview of the connections between MCDA and computational intelligence, focusing on three main fields of computational intelligence, namely statistical learning/data mining, fuzzy set theory, and metaheuristics. Finally, Section 1.4 concludes the chapter and discusses some future research directions.
The major goal of MCDA is to provide a set of criteria aggregation methodologies that enable the development of decision support models considering the DM's preferential system and judgment policy. Achieving this goal requires the implementation of complex processes. Most commonly, these processes do not lead to optimal solutions/decisions, but to satisfactory ones that are in accordance with the DM's policy. Roy (1985) introduced a general framework that covers all aspects of the MCDA modeling philosophy (Figure 1.1).
Figure 1.1 The MCDA modeling process.
The first level of the process involves the specification of a set A of feasible alternative solutions for the decision problem at hand. This set can be continuous or discrete. In the former case, it is specified through a set of constraints. In the case where A is discrete, it is assumed that the DM can list the alternatives which will be subject to evaluation within the given decision-making framework. The form that the output of the analysis should have is also defined at this first phase of the process. This involves the selection of an appropriate decision ‘problematic', which may concern: (a) the choice of the best alternative or a set of good alternatives; (b) the ranking of the alternatives from the best to the worst ones; (c) the classification of the alternatives into predefined categories; and (d) the description of the alternatives and their characteristics.
The second stage involves the identification of all factors related to the decision. MCDA assumes that these factors have the form of criteria. A criterion is a real function f measuring the performance of the alternatives on each of their individual characteristics. The set of selected criteria must form a consistent family of criteria. A consistent family of criteria is characterized by the following properties (Bouyssou 1990):
Monotonicity: If alternative $x$ is preferred over alternative $y$, the same should also hold for any alternative $z$ such that $f_k(z) \geq f_k(x)$ for all criteria $k$.
Completeness: If $f_k(x) = f_k(y)$ for all criteria $k$, then the DM should be indifferent between alternatives $x$ and $y$.
Nonredundancy: The set of criteria satisfies the nonredundancy property if the elimination of any criterion results in the violation of monotonicity and/or completeness.
Once a consistent family of criteria has been specified, the next step is to proceed with the specification of the criteria aggregation model that meets the requirements of the problem. Finally, the last stage involves all the necessary supportive actions needed for the successful implementation of the results of the analysis and the justification of the model's recommendations.
MCDA provides a wide range of methodologies for addressing decision-making problems of different types. The differences between these methodologies involve the form of the models, the model development process, and their scope of application. On the basis of these characteristics, Pardalos et al. (1995) suggested the following four main streams in MCDA research:
Multiobjective mathematical programming.
Multiattribute utility/value theory.
Outranking relations.
Preference disaggregation analysis.
The following subsections provide a brief overview of these methodological streams.
Multiobjective mathematical programming (MMP) extends the well-known single objective mathematical programming framework to problems involving multiple objectives. Formally, a MMP problem has the following form:
$$\max_{\mathbf{x} \in A} \; \{ f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_n(\mathbf{x}) \} \qquad (1.1)$$
where $\mathbf{x}$ is the vector of the decision variables, $f_1, \ldots, f_n$ are the objective functions (in maximization form), and $A$ is the set of feasible solutions defined through multiple constraints.
In a MMP context, the objectives are assumed to be in conflict, which implies that it is not possible to find a solution that maximizes all the objectives simultaneously. In that regard, efficient solutions (Pareto optimal or nondominated solutions) are of interest. A solution $\mathbf{x}^*$ is referred to as efficient if there is no other solution $\mathbf{x} \in A$ that dominates $\mathbf{x}^*$, i.e., such that $f_k(\mathbf{x}) \geq f_k(\mathbf{x}^*)$ for all $k$ and $f_j(\mathbf{x}) > f_j(\mathbf{x}^*)$ for at least one objective $j$. An overview of the MMP theory and different techniques for finding Pareto optimal solutions can be found in the books of Steuer (1985), Miettinen (1998), Ehrgott and Gandibleux (2002), and Ehrgott (2005).
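To make the dominance test concrete, the following minimal sketch (in Python, not part of the original chapter) filters the nondominated solutions out of a finite set of candidate solutions evaluated on several maximization objectives; the data and function names are illustrative.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a dominates b (all objectives maximized):
    a is at least as good as b everywhere and strictly better somewhere."""
    return np.all(a >= b) and np.any(a > b)

def pareto_filter(F):
    """Return the indices of the nondominated rows of the objective matrix F
    (one row per candidate solution, one column per objective)."""
    n = F.shape[0]
    return [i for i in range(n)
            if not any(dominates(F[j], F[i]) for j in range(n) if j != i)]

# Illustrative data: five solutions evaluated on two maximization objectives.
F = np.array([[3.0, 1.0],
              [2.0, 2.0],
              [1.0, 3.0],
              [2.0, 1.0],    # dominated by the first two solutions
              [0.5, 0.5]])   # dominated by every other solution
print(pareto_filter(F))      # -> [0, 1, 2]
```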
An alternative approach to model multiobjective optimization problems is through goal programming formulations. In the context of goal programming a function of the deviations from some pre-specified goals is optimized. The goals are set by the DM and may represent ideal points on the objectives, some benchmark or reference points, or a set of satisfactory target levels on the objectives that should be met as closely as possible. The general form of a goal programming formulation is the following:
$$\min \; \sigma(d^-, d^+; w) \quad \text{subject to} \quad f_k(\mathbf{x}) + d_k^- - d_k^+ = g_k, \;\; k = 1, \ldots, n, \quad \mathbf{x} \in A, \;\; d_k^-, d_k^+ \geq 0 \qquad (1.2)$$
where $g_k$ is the target level (goal) set for objective $k$, $d_k^-$ and $d_k^+$ are the deviations from the target, and $\sigma$ is a function of the deviations, which is parameterized by a vector $w$ of weighting coefficients. These coefficients may either represent the trade-offs between the deviations corresponding to different objectives or indicate a lexicographic ordering of the deviations' significance (pre-emptive goal programming). An overview of the theory and applications of goal programming can be found in Aouni and Kettani (2001), Jones and Tamiz (2002), as well as in the book of Jones and Tamiz (2010).
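As an illustration of formulation (1.2), the sketch below sets up a small weighted goal programming model as an ordinary linear program using scipy; the two goals, the deviation weights, and the resource constraint are made-up numbers, and only the non-pre-emptive (weighted) variant is shown.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables x1, x2 >= 0, plus deviation variables for two goals.
# Variable order: [x1, x2, d1-, d1+, d2-, d2+]
w1, w2 = 1.0, 2.0                        # illustrative weights on the deviations
c = np.array([0, 0, w1, w1, w2, w2])     # minimize the weighted sum of deviations

# Goal constraints of the form f_k(x) + d_k^- - d_k^+ = g_k.
A_eq = np.array([[2, 1,  1, -1, 0,  0],  # 2*x1 +   x2  with goal g1 = 10
                 [1, 3,  0,  0, 1, -1]]) #   x1 + 3*x2  with goal g2 = 12
b_eq = np.array([10.0, 12.0])

# A hard resource constraint that prevents both goals from being met exactly.
A_ub = np.array([[1, 1, 0, 0, 0, 0]])
b_ub = np.array([5.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print(res.x[:2], res.fun)   # chosen decision and total weighted deviation
```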
Multiattribute utility/value theory (MAUT/MAVT) extends the traditional utility theory to the multidimensional case. MAVT has been one of the cornerstones of the development of MCDA and its practical applications. The objective of MAVT is to model and represent the DM's preferential system into a value function $V(\mathbf{x})$, where $\mathbf{x}$ is the vector with the data available over a set of $n$ evaluation criteria. The value function is defined on the criteria space, such that:
$$V(\mathbf{x}) > V(\mathbf{y}) \iff \mathbf{x} \succ \mathbf{y}, \qquad V(\mathbf{x}) = V(\mathbf{y}) \iff \mathbf{x} \sim \mathbf{y} \qquad (1.3)$$
where $\succ$ denotes preference and $\sim$ denotes indifference. The most commonly used form of value function is the additive one:
$$V(\mathbf{x}) = \sum_{k=1}^{n} w_k v_k(x_k) \qquad (1.4)$$
where $w_k$ is the trade-off constant for criterion $k$ (usually the trade-off constants are assumed to sum up to one) and $v_k$ is the corresponding marginal value function, which defines the partial value (performance score) of the alternatives on criterion $k$, on a predefined scale (e.g., in $[0, 1]$). If the marginal value functions are assumed to be linear, the additive model reduces to a simple weighted average of the criteria. Keeney and Raiffa (1993) present in detail the theoretical principles of MAVT under both certainty and uncertainty, and discuss the independence conditions that characterize different types of value models (e.g., additive, multiplicative, multi-linear).
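A minimal numeric sketch of the additive model (1.4), assuming linear marginal value functions rescaled to [0, 1] and illustrative trade-off constants (none of the data come from the chapter):

```python
import numpy as np

# Illustrative decision matrix: rows are alternatives, columns are criteria
# (all criteria in maximization form, measured on different scales).
X = np.array([[35.0, 0.8, 120.0],
              [50.0, 0.6, 150.0],
              [42.0, 0.9,  90.0]])

w = np.array([0.5, 0.3, 0.2])          # trade-off constants, summing to one

def marginal_values(X):
    """Linear marginal value functions: rescale each criterion to [0, 1]
    between the worst and best performance observed in the data."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

V = marginal_values(X) @ w             # global values V(x) = sum_k w_k * v_k(x_k)
ranking = np.argsort(-V)               # best alternative first
print(V, ranking)
```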
The foundations of the outranking relation theory (ORT) have been set by Bernard Roy during the late 1960s through the development of the ELECTRE family of methods (ELimination Et Choix Traduisant la REalité; Roy 1968). Since then, ORT has been widely used by MCDA researchers, mainly in Europe. All ORT techniques operate in two major stages. The first stage involves the development of an outranking relation, whereas the second stage involves the exploitation of the outranking relation in order to perform the evaluation of the alternatives for choice, ranking, and classification purposes.
An outranking relation can be defined as a binary relation used to estimate the strength of the preference for an alternative x over an alternative y. In comparison with MAVT, outranking techniques have two special features:
An outranking relation is not necessarily transitive: in MAVT models the evaluation results are transitive. On the other hand, models developed on the basis of outranking relations allow intransitivities.
An outranking relation is not complete: the main preference relations used in an MAVT modeling framework involve preference and indifference as defined in (1.3). In addition to these two relations, outranking methods also consider the incomparability relation, which arises when comparing alternatives with very special characteristics and diverse performance on the criteria.
The most popular methods implementing the outranking relations framework are the ELECTRE methods (Roy 1991), as well as the PROMETHEE methods (Brans and Mareschal 2005), with different variants for addressing choice, ranking and classification problems.
The development of the MCDA model can be performed through direct or indirect procedures. The former are based on structured communication sessions between the analyst and the DM, during which the analyst elicits specific information about the DM's preferences (e.g., weights, trade-offs, goals, etc.). The success of this approach is heavily based on the willingness of the DM to participate actively in the process, as well as the ability of the analyst to guide the interactive process in order to address the DM's cognitive limitations. This kind of approach is widely used in situations involving decisions of strategic character.
However, depending on the selected criteria aggregation model, a considerable amount of information may be needed by the DM. In ‘repetitive’ decisions, where time limitations exist, the above direct approach may not be applicable. Disaggregation methods (Jacquet-Lagrèze and Siskos 2001) are very helpful in this context. Disaggregation methods use regression-like techniques to infer a decision model from a set of decision examples on some reference alternatives, so that the model is as consistent as possible with the actual evaluation of the alternatives by the DM. This model inference approach provides a starting basis for the decision-aiding process. If the obtained model's parameters are in accordance with the actual preferential system of the DM, then the model can be directly applied to new decision instances. On the other hand, if the model is consistent with the sample decisions, but its parameters are inconsistent with the DM's preferential system (which may happen if, for example, the decision examples are inadequate), then the DM has a starting basis upon which he/she can provide recommendations to the analyst about the calibration of the model in the form of constraints about the parameters of the model. Thus, starting with a model that is consistent with a set of reference examples, an interactive model calibration process is invoked.
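The disaggregation idea can be sketched under strong simplifying assumptions: linear marginal value functions (so only the trade-off constants need to be inferred) and a DM-supplied ranking of three reference alternatives. The toy linear program below is only a rough illustration, not the actual UTA formulation (which uses piecewise linear marginal values and error variables); it looks for weights that reproduce the reference ranking with the largest possible value margin, and all data are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Reference alternatives already rescaled to marginal values in [0, 1]
# (rows ordered from most to least preferred according to the DM).
V_ref = np.array([[0.9, 0.4, 0.7],
                  [0.6, 0.8, 0.5],
                  [0.3, 0.6, 0.4]])
n = V_ref.shape[1]

# Variables: n criterion weights plus one margin delta to be maximized.
c = np.zeros(n + 1)
c[-1] = -1.0                                     # linprog minimizes, so maximize delta

# For each consecutive pair (r, r+1): V(a_r) - V(a_{r+1}) >= delta.
A_ub, b_ub = [], []
for r in range(V_ref.shape[0] - 1):
    diff = V_ref[r] - V_ref[r + 1]
    A_ub.append(np.concatenate([-diff, [1.0]]))  # -diff.w + delta <= 0
    b_ub.append(0.0)

A_eq = [np.concatenate([np.ones(n), [0.0]])]     # weights sum to one
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
print(res.x[:n])                                 # inferred trade-off constants
```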
Jacquet-Lagrèze and Siskos (1982) introduced the paradigm of preference disaggregation in the context of decision aiding through the development of the UTA method (UTilité Additive), which enables the development of evaluation models in the form of an additive value function for ranking purposes. A comprehensive review of this methodological approach of MCDA can be found in Jacquet-Lagrèze and Siskos (2001) and Siskos et al. (2005). Recent research has focused on extensions covering:
other types of decision models, including among others outranking models (Doumpos and Zopounidis 2002b, 2004; Mousseau et al. 2001) and rule-based models (Greco et al. 2001);
other decision problematics (e.g., classification; Doumpos and Zopounidis 2002a);
new modeling forms in the context of robustness in decision-making (Dias et al. 2002; Greco et al. 2008b).
Computational intelligence has evolved rapidly over the past couple of decades and it is now considered as a distinct sub-field that emerged within the area of artificial intelligence. Duch (2007) discusses the unique features of computational intelligence as opposed to the artificial intelligence paradigm, analyzes the multiple aspects of computational intelligence and introduces a definition of the field as ‘the science of solving non-algorithmizable problems using computers or specialized hardware.’ Craenen and Eiben (2003) view artificial intelligence and computational intelligence as two complementary fields of ‘machine intelligence.’ In their view, artificial intelligence is mostly concerned with knowledge-based approaches whereas computational intelligence is a different stream involved with non-knowledge-based principles.
In the following subsections, we focus on three major computational intelligence paradigms, namely statistical learning/data mining, fuzzy sets, and metaheuristics, all of which have been extremely popular among researchers and practitioners involved with the area of computational intelligence. We analyze the contributions of these paradigms within the context of decision-making problems by reviewing their connections with MCDA.
Hand et al. (2001) define data mining as ‘the analysis of (often large) observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner.’ Statistical learning plays an important role in the data mining process, by describing the theory that underlies the identification of such relationships and providing the necessary algorithmic procedures.
Modern statistical learning and data mining adopt an algorithmic modeling culture as described by Breiman (2001), in which the focus is shifted from data models to the characteristics and predictive performance of learning algorithms. This approach is very different from the MCDA paradigm (a discussion of the similarities and differences in the context of the preference disaggregation approach of MCDA can be found in Doumpos and Zopounidis 2011b as well as in the work of Waegeman et al. 2009). Nevertheless, the algorithmic developments in statistical learning and data mining, such as the focus on the analysis of large scale data sets, as well as the wide range of different types of generalized modeling forms employed in these fields, provide new capabilities in the context of MCDA.
Artificial neural networks (ANNs) can be considered as directed acyclic graphs with nodes (neurons) organized into layers. The most popular feed-forward architecture consists of a layer of input nodes, a layer of output nodes, and a series of intermediate processing layers. The input nodes correspond to the information that is available for every input vector, whereas the output nodes provide the recommendations of the network. The nodes in the intermediate (hidden) layers are parallel processing units that define the input–output relationship. Every neuron at a given layer receives as input the weighted average of the outputs of the neurons at the preceding layer and maps it to an output signal through a predefined transformation function.
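To make the layered computation concrete, here is a minimal forward pass through a small feed-forward network with one hidden layer; the weights are random placeholders (in practice they would be fitted by training), the layer sizes are arbitrary, and the sigmoid is just one possible transformation function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy network: 3 input nodes (e.g., criteria values), 4 hidden neurons, 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights

def forward(x):
    """Each neuron takes a weighted sum of the previous layer's outputs
    and passes it through the transformation function."""
    h = sigmoid(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)

print(forward(np.array([0.2, 0.7, 0.5])))
```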
Depending on the topology of the network and the selection of the neurons' transformation functions, a neural network can model real functions of arbitrary complexity. This flexibility has made ANNs a very popular modeling approach in addressing complex real-world problems in engineering and management. This characteristic has important implications for MCDA, mainly with respect to modeling general preference structures.
Within this context, ANNs have been successfully used for learning generalized MCDA models from decision examples in a preference disaggregation setting. Wang and Malakooti (1992) and Malakooti and Zhou (1994) used feed-forward ANN models to learn an arbitrary value function for ranking a set of alternatives, as well as to learn a relational multicriteria model based on pairwise comparisons (binary relations) among the alternatives. Generalized network decision models have a function-free form, which is less restricted by the assumptions imposed in MAVT (Keeney and Raiffa 1993). Experimental simulation results showed that ANN models performed very well in representing various forms of decision models, outperforming other popular model development techniques based on linear programming formulations. Wang et al. (1994) applied a similar ANN model to a job shop production system problem.
In a different framework compared with the aforementioned studies, Stam et al. (1996) used ANNs within the context of the analytic hierarchy process (AHP; Saaty 2006). AHP is based on a hierarchical structuring of the decision problem, with the overall goal on the top of the hierarchy and the alternatives at the bottom. With this hierarchical structure, the DM is asked to perform pairwise comparisons of the elements at each level of the hierarchy with respect to the elements of the preceding (higher) level. Stam et al. investigated two different ANN structures for accurately approximating the preference ratings of the alternatives within the context of imprecise preference judgments by the DM. They showed that a modified Hopfield network has very close connections to the mechanics of the AHP, but found that this network formulation cannot provide good results in estimating the mapping from a positive reciprocal pairwise comparison matrix to its preference rating vector. On the other hand, a feed-forward ANN model was found to provide very good approximations of the preference ratings in the presence of impreciseness. This ANN model was actually superior to the standard principal eigenvector method.
Similar ANN-based methodologies have also been used to address dynamic MCDA problems (where the DM's preferences change over time; Malakooti and Zhou 1991), to learn fuzzy preferences (Wang 1994a,b; Wang and Archer 1994) and outranking relations (Hu 2009), to provide support in group decision-making problems (Wang and Archer 1994), as well as in multicriteria clustering (Malakooti and Raman 2000).
ANNs have also been employed for preference representation and learning in multiobjective optimization. Within this context, Sun et al. (1996) proposed a feed-forward ANN model, which is trained to represent the DM's preference structure. The trained ANN model serves as a value function, which is maximized in order to identify the efficient solution that best fits the DM's preferences. Sun et al. (2000) used a similar feed-forward ANN approach to facilitate the interactive solution process in multiobjective optimization problems. Other ANN architectures have also been used as multiobjective optimizers (Gholamian et al. 2006; McMullen 2001) and hybrid evaluation systems (Raju et al. 2006; Sheu 2008).
A comprehensive overview of the contributions of ANNs in MCDA is provided by Hanne in Chapter 5.
Rule-based and decision tree models are very popular within the machine learning research community. The symbolic nature of such models makes them easy to understand, which is important in the context of decision aiding. During the last decade significant research has been devoted to the use of such approaches as preference modeling tools in MCDA.
In particular, a significant part of the research related to the use of rule-based models in MCDA has focused on rough set theory (Pawlak 1982; Pawlak and Słowiński 1994), which provides a complete and well-axiomatized methodology for constructing decision rule preference models from decision examples. Rough sets were initially introduced as a methodology to describe dependencies between attributes, to evaluate the significance of attributes, and to deal with inconsistent data in the context of machine learning. However, significant research has been conducted on the use of the rough set approach as a methodology for preference modeling in multicriteria decision problems (Greco et al. 1999, 2001). The decision rule models developed through the rough set approach for MCDA problems are built on the basis of the dominance relation. Each ‘if… then…' decision rule is composed of a condition part specifying a partial profile on a subset of criteria to which an alternative is compared using the dominance relation, and a conclusion part suggesting a decision recommendation.
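A toy illustration of such a rule (not drawn from the chapter): the condition part fixes a partial profile on two criteria, and an alternative that dominates this profile receives the recommendation in the conclusion part. The criterion names and thresholds are invented.

```python
# Hypothetical dominance-based rule:
# "if f1(x) >= 0.7 and f3(x) >= 50 then assign x to class 'at least Good'".
rule_profile = {"f1": 0.7, "f3": 50.0}

def rule_applies(alternative, profile):
    """Condition part: the alternative must be at least as good as the partial
    profile on every criterion mentioned in the rule (all criteria maximized)."""
    return all(alternative[c] >= v for c, v in profile.items())

x = {"f1": 0.8, "f2": 0.3, "f3": 55.0}
if rule_applies(x, rule_profile):
    print("conclusion: assign x to class 'at least Good'")
```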
Decision rule preference models have been initially developed in the context of multicriteria classification problems. In this case the recommendations in the conclusion part of each rule involve the assignment of the alternatives either in a specific class or a set of classes. Extensions to ranking and choice decision problems have been developed by Greco et al. (2001) and Fortemps et al. (2008), whereas Greco et al. (2008a) presented a dominance-based rough set approach for multiobjective optimization.
The decision rule preference model has also been considered in terms of conjoint measurement (Greco et al. 2004) and Bayesian decision theory (Greco et al. 2007). Greco et al. (2004) showed that there is an equivalence between a simple cancellation property, a general discriminant function, and a specific outranking relation, on the one hand, and the decision rule model, on the other hand. They also showed that the decision rule model resulting from the dominance-based rough set approach has an advantage over the usual functional and relational models because it permits the handling of inconsistent decision instances. Inconsistent decision instances often appear due to the instability of preferences, the incomplete determination of criteria, and the hesitation of the DM.
In Chapter 6, Szeląg et al. provide a comprehensive presentation of rule-based decision models, focusing on MCDA ranking problems.
Kernel methods are widely used for pattern classification, regression analysis, and density estimation. Kernel methods map the problem data to a high dimensional space (feature space), thus enabling the development of complex nonlinear decision and prediction models, using linear estimation methods (Schölkopf and Smola 2002). The data mapping process is implicitly defined through the introduction of (positive definite) kernel functions. Support vector machines (SVMs) are the most widely used class of kernel methods. Recently, they have also been used within the context of preference learning for approximating arbitrary utility/value functions and preference aggregation.
Herbrich et al. (2000) illustrated the use of kernel approaches, within the context of SVM formulations, for representing value/ranking functions of the generalized form $V(\mathbf{x}) = \mathbf{w}^\top \phi(\mathbf{x})$, where $\phi$ is a possibly infinite-dimensional and in general unknown feature mapping. The authors derived bounds on the generalization performance of the estimated ranking models, based on the margin separating objects in consecutive ranks.
Waegeman et al. (2009) extended this approach to relational models. In this case, a preference model of the form $f(\mathbf{x}_i, \mathbf{x}_j)$ is developed to represent the preference of alternative $i$ compared with alternative $j$. This framework is general enough to accommodate special modeling forms. For instance, it includes value models as a special case, and similar techniques can also be used to kernelize Choquet integrals. As an example, Waegeman et al. illustrated the potential of this framework in the case of valued concordance relations, which are used in the ELECTRE methods.
Besides the development of generalized decision models, kernel methods have also been employed for robust model inference purposes. For instance, Evgeniou et al. (2005) showed how the regularization principle (which is at the core of the theory of kernel methods) is related to the robust fitting of linear and polynomial value function models in ordinal regression problems. Doumpos and Zopounidis (2007) employed the same regularization principle for developing new improved linear programming formulations for fitting additive value functions in ranking and classification problems. The development of additive value functions was also addressed by Dembczyński et al. (2006), who presented a methodology integrating the dominance-based rough set approach and SVMs.
SVMs have also been used in the context of multiobjective optimization (Aytug and Sayin 2009; Yun et al. 2009) in order to approximate the set of Pareto optimal solutions in complex nonlinear problems. Multiobjective and goal programming formulations have also been used for training SVM models (Nakayama and Yun 2006; Nakayama et al. 2005). Finally, hybrid systems based on SVMs have been proposed. For instance, Jiao et al. (2009) combined SVMs with the UTADIS disaggregation method (Doumpos and Zopounidis 2002a) for the development of accurate multi-group classification models.
Decision making is often based on fuzzy, ambiguous, and vague judgments. Verbal expressions such as ‘almost,' ‘usually,' ‘often,' etc., are simple yet typical examples of the ambiguity and vagueness often encountered in the decision-making process. Fuzzy set theory, first introduced by Zadeh (1965), provides the necessary modeling tools for such situations. The concept of a fuzzy set is at the center of this approach. In traditional set theory, a set is considered as a collection of well-defined and distinct objects, which implies that sets have clearly defined (crisp) boundaries. Therefore, a statement of the form ‘object x belongs to set A' is either true or false. On the other hand, a fuzzy set has no crisp boundaries, and every object is associated with a degree of membership with respect to a fuzzy set.
Since its introduction, fuzzy set theory has been an extremely active research field with numerous practical applications in engineering and management. Its uses in the context of decision aiding have also attracted much interest.
The traditional multiobjective programming framework assumes that all the parameters of the problem are well-defined. However, imprecision, vagueness, and uncertainty can make the specification of goals, targets, objectives, and constraints troublesome and unclear. Bellman and Zadeh (1970) were the first to explore optimization models in the context of fuzzy set theory. Zimmermann (1976, 1978) further investigated this idea both in the case of single-objective problems as well as in the context of multiobjective optimization.
Fuzzy multiobjective programming formulations have a similar form to conventional multiobjective programming problems (i.e., the optimization of several objective functions over some constraints). The major distinction between these two approaches is that while in deterministic multiobjective programming all objective functions and constraints are specified in a crisp way, in fuzzy multiobjective programming they are specified using the fuzzy set theory through the introduction of membership functions. Fuzzy coefficients for the decision variables in the objective function and the constraints can also be introduced.
A major advantage of fuzzy multiobjective programming techniques over conventional mathematical programming with multiple objectives is that they provide a framework to address optimization problems within a less strict context regarding the sense of the imposed constraints, as well as the degree of satisfaction of the DM with the compromise solutions that are obtained (i.e., the introduction of fuzzy objectives).
The FLIP method (Słowiński 1990) for multiobjective linear programming problems is a typical example of the integration of the fuzzy set theory with multiobjective optimization techniques. FLIP considers uncertainty through the definition of all problem parameters (objective function coefficients, variables' coefficients in the constraints, right-hand side coefficients) as fuzzy numbers, each one associated with a possibility distribution. The recent book by Sakawa et al. (2011) presents a comprehensive overview of the theory of fuzzy multiobjective programming including stochastic problems, whereas Roubens and Teghem (1991) present a survey of fuzzy multiobjective programming and stochastic multiobjective optimization and perform a comparative investigation of the two areas.
A detailed presentation of the principles and techniques for fuzzy multiobjective optimization is presented by Sakawa in Chapter 10.
Preference modeling is a major research topic in MCDA. The modeling of a DM's preferences can be viewed within the context of MAVT models as well as in the context of outranking relations (Fodor and Roubens 1994; Roubens 1997). The concept of outranking relation is closely connected with the philosophy of fuzzy sets. For instance, in ELECTRE methods the outranking relation is constructed to evaluate whether alternative i is at least as good as alternative j. Similarly, in the PROMETHEE methods a preference relation is constructed to measure the preference for alternative i over alternative j. In both sets of methods the outranking/preference relations are not treated in a crisp setting. Instead, the relations are quantified by proper measures (e.g., credibility index in ELECTRE and preference index in PROMETHEE) representing the strength of the outranking/preference of one alternative over another. For instance, the credibility index used in ELECTRE methods represents the validity of the affirmation ‘alternative i outranks alternative j.’ Thus, it is a form of membership function. Perny and Roy (1992) provided a comprehensive discussion on the use of fuzzy outranking relations in preference modeling together with an analysis of the characteristics and properties of such relations.
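In the same spirit, a PROMETHEE-type preference degree on a single criterion can be viewed as a membership value; the following minimal sketch shows one common (linear) preference function, with illustrative indifference and preference thresholds q and p.

```python
def preference_degree(d, q=0.0, p=1.0):
    """Linear preference function: maps the performance difference
    d = f(a) - f(b) on a criterion to a preference degree in [0, 1]."""
    if d <= q:          # below the indifference threshold: no preference
        return 0.0
    if d >= p:          # above the preference threshold: full (strict) preference
        return 1.0
    return (d - q) / (p - q)

print(preference_degree(0.4, q=0.1, p=0.6))   # partial preference degree
```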
Despite the above fundamental connection between commonly used MCDA outranking techniques and fuzzy theory, it should be noted that traditional outranking methods consider crisp data. However, many extensions for handling fuzzy data in outranking methods have been proposed. For instance, Czyżak and Słowiński (1996) considered the evaluations of the alternatives on the criteria as fuzzy numbers in order to construct an outranking relation. Common aggregation operators (e.g., maximum and minimum) are employed to aggregate these fuzzy numbers in order to perform the necessary concordance and discordance tests similarly to the traditional outranking relations approach. Roubens (1996) presented several procedures for aggregating fuzzy criteria in an outranking context for choice and ranking problems, whereas a more recent overview of this research direction is given by Bufardi et al. (2008). Fuzzy relations can also be used to handle the fuzziness that characterizes the DM's preferences. For instance, Siskos (1982) proposed a methodology using disaggregation techniques to build a fuzzy outranking relation on the basis of the information represented in multiple additive value functions which are compatible with the DM's preferences, thus modeling the DM's fuzzy preferential system.
Fuzzy preference modeling approaches have also been developed in the context of MAVT. Grabisch (1995; 1996) introduced an approach to manage uncertainty in the MAVT framework through the consideration of the concept of fuzzy integrals initially introduced by Sugeno (1974). In the proposed approach fuzzy integrals are used instead of the additive and multiplicative aggregation operators that are commonly used in MAVT in order to aggregate all attributes into a single evaluation index (value function). The major advantageous feature of employing fuzzy integrals within the MAVT context is their ability to consider the interactions among the evaluation criteria, including redundancy and synergy. On the other hand, the major drawback of such an approach, which is a consequence of its increased complexity compared with simple aggregation procedures (e.g., the weighted average), is the increased number of parameters that must be defined, either directly by the DM or through heuristics and optimization techniques. The use of the Choquet integral as an aggregation function has also attracted much interest among MCDA researchers. Marichal and Roubens (2000) first introduced a methodology implementing this approach in a preference disaggregation context. Some work on this topic can be found in the papers of Angilella et al. (2004; 2010) and Kojadinovic (2004; 2007), while a review of this area is given in the paper by Grabisch et al. (2008). Other applications of the fuzzy set theory to MAVT are discussed in the book by Lootsma (1997).
A final class of decision models developed within fuzzy set theory that has attracted much interest in MCDA is based on the ordered weighted averaging (OWA) approach first introduced by Yager (1988). An OWA aggregation model is a particular case of the Choquet integral and resembles a simple weighted average model. However, instead of weighting the criteria, an OWA model assigns weights to the rank positions of the criterion values relative to one another (Torra 2010). In this way, OWA models allow different compensation levels to be modeled. For instance, assigning high weights to low performances leads to a noncompensatory mode, whereas compensation is allowed if higher weights are given to good performance levels. In the context of decision making under uncertainty, the OWA aggregation scheme is a generalization of the Hurwicz rule. Yager (1993) and Xu and Da (2003) provide overviews of different OWA models, whereas Yager (2004) extends this framework to consider different criteria priorities in an MCDA context.
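The following minimal Python sketch shows how OWA weights attach to rank positions rather than to specific criteria, and how different weight vectors move the aggregation between noncompensatory and compensatory behaviour; the scores and weight vectors are illustrative.

def owa(scores, weights):
    """Aggregate scores with weights attached to rank positions, not criteria."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights should sum to 1"
    ordered = sorted(scores, reverse=True)            # best value first
    return sum(w * s for w, s in zip(weights, ordered))

scores = [0.9, 0.4, 0.7]
print(owa(scores, [0.0, 0.0, 1.0]))   # 0.4  - all weight on the worst value (min-like, noncompensatory)
print(owa(scores, [1.0, 0.0, 0.0]))   # 0.9  - all weight on the best value (max-like, fully compensatory)
print(owa(scores, [1/3, 1/3, 1/3]))   # ~0.667 - even weights reproduce the arithmetic mean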
Metaheuristics have been one of the most active and rapidly evolving fields in computational intelligence and operations research. Their success and development are due to the highly complex nature of many decision problems. As a consequence, the corresponding mathematical models are often nonlinear, nonconvex, and/or combinatorial in nature, which makes them very difficult to solve with traditional optimization algorithms. Metaheuristics and evolutionary techniques have been very successful in dealing with computationally intensive optimization problems, as they make few or no assumptions about the problem and can search very large solution spaces efficiently. In the context of MCDA, such methods have primarily been used for multiobjective optimization. Their use for fitting complex decision models in a preference disaggregation setting has also attracted some interest.
Traditional multiobjective optimization techniques seek to find an efficient solution that best fits the preferences of a DM. The solution process is performed iteratively, so that the DM's preferences are progressively specified and refined until the most satisfactory solution is identified. During this process a series of optimization problems needs to be solved, which may not be easy in the case of combinatorial or highly complex nonlinear and nonconvex problems. Furthermore, in such procedures the DM is often not given a complete picture of the whole set of Pareto optimal solutions. Metaheuristics are well-suited to this context, as they are applicable to all types of computationally intensive multiobjective optimization problems and enable complex Pareto sets to be approximated in a single run of an algorithmic procedure.
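The common building block of these population-based procedures is the notion of Pareto dominance. The following minimal Python sketch (for minimization) filters a set of objective vectors down to its nondominated members; the objective vectors are illustrative.

import numpy as np

def dominates(a, b):
    """True if a dominates b: no worse on every objective and better on at least one."""
    return np.all(a <= b) and np.any(a < b)

def nondominated(points):
    """Return the points not dominated by any other point (an approximation of the Pareto set)."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

points = [np.array(v) for v in [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]]
print(nondominated(points))   # (3.0, 3.0) is removed, as it is dominated by (2.0, 2.0)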
Different classes of algorithms can be identified in this research direction. Approaches based on genetic algorithms (GAs) are probably the most popular. GAs are computational procedures that mimic the process of natural evolution for solving complex optimization problems. They implement stochastic search schemes that evolve an initial population (set) of solutions through selection, mutation, and crossover operators until a good solution is reached. The first GA-based approach for multiobjective optimization problems was proposed by Schaffer (1985). During the 1990s and 2000s many other algorithms following a similar GA approach were proposed. A comprehensive presentation of this approach can be found in the book by Deb (2001), whereas Konak et al. (2006) presented a tutorial and review of the field.
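The sketch below illustrates the basic GA operators (binary tournament selection, blend crossover, Gaussian mutation) on a simple single-objective continuous problem; it is not any of the multiobjective algorithms cited above, and all parameter values are illustrative choices.

import random

def genetic_algorithm(f, dim=3, pop_size=30, generations=100, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament: keep the better of two random individuals
            a, b = rng.sample(pop, 2)
            return a if f(a) < f(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # Crossover: blend each gene of the two parents
            child = [(x + y) / 2 for x, y in zip(p1, p2)]
            # Mutation: occasionally perturb a gene
            child = [g + rng.gauss(0, 0.5) if rng.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = children
    return min(pop, key=f)

print(genetic_algorithm(lambda x: sum(v * v for v in x)))   # minimize the sphere function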
The differential evolution (DE) algorithm has also attracted much interest for multiobjective optimization. DE was introduced by Storn and Price (1997) as a powerful alternative to GAs that is well-suited to continuous optimization problems. Similarly to a GA, DE employs evolution operators to evolve a population of solutions, but it relies on a greedy selection strategy, which ensures that the retained solutions never deteriorate from one iteration of the algorithm to the next. Abbass and Sarker (2002) presented one of the first implementations of the DE scheme in multiobjective optimization. More recent extensions have been presented by Gong et al. (2009), Krink and Paterlini (2011), and Wang and Cai (2012), whereas Mezura-Montes et al. (2008) present a review of DE-based multiobjective optimization algorithms.
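The following single-objective Python sketch illustrates the core DE operators (difference-vector mutation, binomial crossover, greedy selection); the rand/1/bin variant and all parameter settings are illustrative choices rather than any of the cited multiobjective extensions.

import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: perturb a random individual with a scaled difference vector
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)
            # Binomial crossover between the target and the mutant
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: the trial replaces the target only if it is not worse
            if (ft := f(trial)) <= fitness[i]:
                pop[i], fitness[i] = trial, ft
    best = fitness.argmin()
    return pop[best], fitness[best]

x, fx = differential_evolution(lambda x: float(np.sum(x**2)), bounds=[(-5, 5)] * 3)
print(x, fx)   # minimize the sphere function on [-5, 5]^3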
A third class of computational intelligence techniques for solving multiobjective optimization problems involves other metaheuristic algorithms, such as simulated annealing, tabu search, ant colony optimization, and particle swarm optimization, which have proved very successful in solving complex optimization problems of a combinatorial nature. The use of such algorithms in multiobjective optimization can be found in Landa Silva et al. (2004), Molina et al. (2007), Bandyopadhyay et al. (2008), Doerner et al. (2008), and Elhossini et al. (2010). Jones et al. (2002) present an overview of the field, whereas Ehrgott and Gandibleux (2008) focus on recent approaches in which metaheuristics are combined with exact methods.
In Chapter 8, Jaimes and Coello Coello present in detail different interactive methods for multiobjective optimization.
Inferring simple decision-making models (e.g., additive or linear value functions) from decision examples poses few computational problems. Most existing preference disaggregation techniques use linear programming for this purpose (Jacquet-Lagrèze and Siskos 2001; Zopounidis and Doumpos 2002). However, more complex models are difficult to construct with exact methods. Metaheuristics are well-suited to this context and have attracted the interest of MCDA researchers over the past few years.
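As an illustration of the linear programming formulation used in such disaggregation approaches, the sketch below infers the weights of a simple linear value function from two pairwise preference statements, minimizing the total violation of the stated preferences. The alternatives, the preference threshold delta, and the use of SciPy's linprog are assumptions made for the example and do not reproduce any specific method cited above.

import numpy as np
from scipy.optimize import linprog

X = np.array([[0.8, 0.2, 0.6],    # alternative a (criteria already scaled to [0, 1])
              [0.5, 0.9, 0.4],    # alternative b
              [0.3, 0.4, 0.9]])   # alternative c
pairs = [(0, 1), (1, 2)]          # DM statements: a preferred to b, b preferred to c
delta, n = 0.05, X.shape[1]

# Variables: n criterion weights followed by one error term per pairwise comparison
c = np.concatenate([np.zeros(n), np.ones(len(pairs))])      # minimize the total error
A_ub, b_ub = [], []
for k, (i, j) in enumerate(pairs):
    row = np.concatenate([-(X[i] - X[j]), np.zeros(len(pairs))])
    row[n + k] = -1.0                                        # enforce w·(x_i - x_j) + e_k >= delta
    A_ub.append(row)
    b_ub.append(-delta)
A_eq = [np.concatenate([np.ones(n), np.zeros(len(pairs))])]  # weights sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + len(pairs)))
print(res.x[:n], res.fun)                                    # inferred weights, total error

When the model family is richer (e.g., nonmonotone value functions or outranking models with thresholds and vetoes), the inference problem is no longer linear, which is where the metaheuristics discussed next become attractive.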
Most of the research in this area has focused on outranking models. Goletsis et al. (2004) used a GA to develop an outranking model based on the philosophy of the ELECTRE methods for a medical classification problem. Belacel et al. (2007) used the reduced variable neighborhood search metaheuristic to infer the parameters of the PROAFTN outranking method from a set of reference examples. Focusing on the same outranking method, Al-Obeidat et al. (2011) used a particle swarm optimization algorithm. Fernandez et al. (2009) developed a model based on a fuzzy indifference relation for classification purposes. To infer the parameters of the model from a set of reference examples, they used the NSGA-II multiobjective evolutionary algorithm (Deb et al. 2002), considering four measures related to the inconsistencies and the correct recommendations of the decision model. A similar approach was also presented by Fernandez and Navarro (2011). Doumpos et al. (2009) presented a methodology based on the differential evolution algorithm for estimating all the parameters of an ELECTRE TRI model from assignment examples in classification problems, under both the optimistic and the pessimistic assignment rules (Roy and Bouyssou 1993). Doumpos and Zopounidis (2011a) applied this methodology to a large data set for the development of credit rating models and demonstrated how the special features of ELECTRE TRI can provide useful insights into the relative importance of the credit rating criteria and the characteristics of the alternatives. Eppe et al. (2011) employed the NSGA-II algorithm for inferring the parameters of PROMETHEE II models from decision instances. The authors suggested a bi-objective approach in which the model is developed so that the number of inconsistencies with the DM's evaluation of the reference alternatives is minimized and the robustness of the model's parameter estimates is maximized. In contrast to all the aforementioned studies, which focused on outranking models, Doumpos (2012) considered the construction of a nonmonotone additive value function, assuming that the marginal value functions are quasi-convex. The differential evolution algorithm was used to infer the additive function from reference examples in a classification setting.
In a dynamic environment characterized by increasing complexity and considerable uncertainties, the interdisciplinary character of decision analysis and decision aiding is strengthened. Complex and ill-structured decision problems in engineering and management cannot be handled in a strictly defined methodological context. Instead, integrated approaches often need to be implemented, combining concepts and techniques from various research fields. In this context, the relationship between artificial intelligence and MCDA has attracted much interest among decision scientists.
This chapter presented an overview of this area, focusing on the computational intelligence paradigm. Computational intelligence has been one of the most active areas in artificial intelligence research, with numerous applications in engineering and management systems. The overview focused on the contributions of computational intelligence methodologies to decision support, covering important issues such as the introduction of new preference modeling techniques, advanced algorithmic solution procedures for complex problems, and new techniques for constructing decision models. The advances in each of these areas provide new capabilities for extending the research and practice of the MCDA paradigm, thus enabling its use in new ill-structured decision domains characterized by uncertainty, vagueness, and imprecision, complex preference and data structures, and high data dimensionality.
