Particle Swarm Optimization

Maurice Clerc

Description

This is the first book devoted entirely to Particle Swarm Optimization (PSO), a non-specific algorithm in the same family as evolutionary algorithms, tabu search and ant colony algorithms. Since its original development in 1995, PSO has mainly been applied to continuous, discrete and heterogeneous strongly non-linear numerical optimization, and it is thus used almost everywhere in the world. Its convergence rate also makes it a preferred tool in dynamic optimization.

Table of Contents

Foreword

Introduction

Part I. Particle Swarm Optimization

Chapter 1. What is a Difficult Problem?

1.1. An intrinsic definition

1.2. Estimation and practical measurement

1.3. For “amatheurs”: some estimates of difficulty

1.4. Summary

Chapter 2. On a Table Corner

2.1. Apiarian metaphor

2.2. An aside on the spreading of a rumor

2.3. Abstract formulation

2.4. What is really transmitted

2.5. Cooperation versus competition

2.6. For “amatheurs”: a simple calculation of the propagation of a rumor

2.7. Summary

Chapter 3. First Formulations

3.1. Minimal version

3.2. Two common errors

3.3. Principal drawbacks of this formulation

3.4. Manual parameter setting

3.5. For “amatheurs”: average number of informants

3.6. Summary

Chapter 4. Benchmark Set

4.1. What is the purpose of test functions?

4.2. Six reference functions

4.3. Representations and comments

4.4. For “amatheurs”: estimates of levels of difficulty

4.5. Summary

Chapter 5. Mistrusting Chance

5.1. Analysis of an anomaly

5.2. Computing randomness

5.3. Reproducibility

5.4. On numerical precision

5.5. The rare KISS

5.6. On the comparison of results

5.7. For “amatheurs”: confidence in the estimate of a rate of failure

5.8. C programs

5.9. Summary

Chapter 6. First Results

6.1. A simple program

6.2. Overall results

6.3. Robustness and performance maps

6.4. Theoretical difficulty and noted difficulty

6.5. Source code of OEP 0

6.6. Summary

Chapter 7. Swarm: Memory and Graphs of Influence

7.1. Circular neighborhood of the historical PSO

7.2. Memory-swarm

7.3. Fixed topologies

7.4. Random variable topologies

7.5. Influence of the number of informants

7.6. Influence of the number of memories

7.7. Reorganizations of the memory-swarm

7.8. For “amatheurs”: temporal connectivity in random recruitment

7.9. Summary

Chapter 8. Distributions of Proximity

8.1. The random possibilities

8.2. Review of rectangular distribution

8.3. Alternative distributions of possibilities

8.4. Some comparisons of results

8.5. For “amatheurs”

8.6. C program of isotropic distribution

8.7. Summary

Chapter 9. Optimal Parameter Settings

9.1. Defense of manual parameter setting

9.2. Better parameter settings for the benchmark set

9.3. Towards adaptation

9.4. For “amatheurs”: number of graphs of information

9.5. Summary

Chapter 10. Adaptations

10.1. Demanding criteria

10.2. Rough sketches

10.3. For “amatheurs”

10.4. Summary

Chapter 11. TRIBES or Cooperation of Tribes

11.1. Towards an ultimate program

11.2. Description of TRIBES

11.3. Results on the benchmark set

11.4. Summary

Chapter 12. On the Constraints

12.1. Some preliminary reflections

12.2. Representation of the constraints

12.3. Imperative constraints and indicative constraints

12.4. Interval confinement

12.5. Discrete variable

12.6. Granularity confinement

12.7. “all different” confinement

12.8. Confinement by dichotomy

12.9. Multicriterion treatment

12.10. Treatment by penalties

12.11. C source code. Dichotomic search in a list

12.12. For “amatheurs”

12.13. Summary

Chapter 13. Problems and Applications

13.1. Ecological niche

13.2. Typology and choice of problems

13.3. Canonical representation of a problem of optimization

13.4. Knapsack

13.5. Magic squares

13.6. Quadratic assignment

13.7. Traveling salesman

13.8. Hybrid JM

13.9. Training of a neural network

13.10. Pressure vessel

13.11. Compression spring

13.12. Moving Peaks

13.13. For “amatheurs”: the magic of squares

13.14. Summary

Chapter 14. Conclusion

14.1. End of the beginning

14.2. Mono, poly, meta

14.3. The beginning of the end?

Part II. Outlines

Chapter 15. On Parallelism

15.1. The short-sighted swarm

15.2. A parallel model

15.3. A counter-intuitive result

15.4. Qualitative explanation

15.5. For “amatheurs”: probability of questioning an improved memory

15.6. Summary

Chapter 16. Combinatorial Problems

16.1. Difficulty of chaos

16.2. Like a crystal

16.3. Confinement method

16.4. Canonical PSO

16.5. Summary

Chapter 17. Dynamics of a Swarm

17.1. Motivations and tools

17.2. An example with the magnifying glass

17.3. Energies

17.4. For experienced “amatheurs”: convergence and constriction

17.5. Summary

Chapter 18. Techniques and Alternatives

18.1. Reprise

18.2. Stop-restart/reset

18.3. Multi-swarm

18.4. Dynamic optimization

18.5. For “amatheurs”

18.6. Summary

Further Information

Bibliography

Index

First published in France in 2005 by Hermes Science/Lavoisier under the title “L’Optimisation par essaims particulaires”. First published in Great Britain and the United States in 2006 by ISTE Ltd.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 6 Fitzroy Square, London W1T 5DX, UK, www.iste.co.uk

ISTE USA, 4308 Patrice Road, Newport Beach, CA 92663, USA

© LAVOISIER, 2005

© ISTE Ltd, 2006

The right of Maurice Clerc to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Clerc, Maurice.

[Optimisation par essaims particulaires. English]

Particle swarm optimization / Maurice Clerc.

p. cm.

Includes bibliographical references and index.

ISBN-13: 978-1-905209-04-0

ISBN-10: 1-905209-04-5

1. Mathematical optimization. 2. Particles (Nuclear physics) 3. Swarm intelligence. I. Title.

QC20.7.M27C5513 2006

539.7'2--dc22

2005037211

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 10: 1-905209-04-5

ISBN 13: 978-1-905209-04-0

Foreword

Goal and limits

This book is the first to deal exclusively with particle swarm optimization. In his Swarm Intelligence [KEN 01], originally entitled Particle Swarm Optimization (PSO), my friend Jim Kennedy has devoted three chapters out of eleven to this subject, above all as an illustration of the more general concept of collective intelligence, without dwelling on the details of practical implementation.

For this book, my goal was simpler: to give you the concepts and tools necessary and sufficient for solving optimization problems, including the code of various programs.

After having assimilated the contents of the first and more important part, you should be able to apply PSO to practically any problem of minimization of an assessable function in a continuous, discrete or mixed search space. You will also be able to deal with multi-objective problems, either as such, or as methods of taking into account complex constraints of a mono-objective problem.

PSO is in constant and fast evolution, but the corpus of techniques presented here is already sufficiently reliable and particularly effective, even though, as we shall see, many interesting avenues of research are yet to be explored, particularly regarding adaptive PSO. An important international collaboration, XPS (eXtended Particle Swarms), led by the University of Essex in Great Britain, began at the end of 2004. It should lead to major breakthroughs, both theoretical and practical. As the promoters of the project put it:

“[The goal is] to include strategies inspired by a broad range of collective behavior, in biology and particle physics, to deal with many problems in engineering and to establish solid theoretical and mathematical bases […]”.

In spite of its brief history, PSO has already entered into science fiction: Michael Crichton, in his novel Prey [CRI 03], has explicitly referred to it, in particular using the concept of constriction … albeit in a form that is very different from the original one!

Organization of the book

The book is structured in two parts. The first describes PSO in detail, from a very simple primitive parametric version to an adaptive version that does not require the user to supply parameters. The discussion thread is a benchmark set of six test functions which enable us to compare the influence of the parameters and search strategies. The final chapter of this part focuses on some more realistic problems.

The second part is entitled “Outlines”, indicating that the items discussed are not dealt with in detail, as this would go beyond the scope of this book. It is primarily about parallelism, the canonical PSO (a basis, among others, of the combinatorial PSO) and the dynamics of swarms. The final chapter very briefly presents some techniques and alternatives, such as stop-restart/reset, multi-swarm and dynamic PSO (optimization of a function that changes during the search itself). The interested reader will be able to refer to the documents cited.

Many chapters end with a more mathematical part. This part specifies or justifies some of the assertions made in the body of the text but is by no means necessary for the comprehension of those ideas. It can thus be comfortably skipped if you do not have the taste or the time for it.

Various versions of PSO are studied, some in a very thorough manner, others very briefly. The diagram below shows the links between them and the levels of detail of the presentations. In particular, the significant field of problem-specific implementations of PSO is only skimmed through. It would, in itself, be worth a later work, particularly as the methods implemented are very often hybrid, i.e. they use several optimization methods jointly, in particular for difficult combinatorial problems.

Figure 1. Various versions of PSO considered. Those with a gray background and a thick continuous outline are treated in detail. A dotted outline indicates a version that is presented without implementation. The versions in the white zone are only mentioned

On the source codes

The programs used were developed under Linux and deliberately written in pure ANSI C so as to be easily compilable under any operating system. There is consequently neither a graphical interface nor specific memory management.

For certain small programs, the source codes are given explicitly. The others are available on the Internet from the following link: http://www.hermesscience.com/clerc/oep.zip. More generally, the portal of PSO is Particle Swarm Central: http://www.particleswarm.info/.

On technical terms

Normally the essence of each chapter (including some rather delicate reasoning) may be read without any deep mathematical knowledge. Nevertheless some specialized terms are used here and there, particularly for the sake of conciseness, but these are easily comprehensible. For example, “hypersphere in a space with D dimensions” will often be replaced by “D-sphere”, and “hyperparallelepiped in a space with D dimensions” will be replaced by “D-rectangle”.

To contact the author

If you wish to send comments, point out errors or make suggestions, you can contact Mr Maurice Clerc:

– by email, at [email protected];

– via the author’s website, http://www.mauriceclerc.net;

– via the editor.

Introduction

On some types of optimization

Iterative optimization is as old as life itself. Even very primitive beings act according to a simple impulse that can be summarized in a few words: “improve your situation”. Many strategies are conceivable to this end, but those we see in action every day in nature, and which prove their effectiveness by the persistence of the species that practice them, already offer a broad range of solutions. It is therefore not surprising that, explicitly or implicitly, several mathematical models of optimization take biological behaviors as their starting point and make abundant use of metaphors and terms originating from genetics, ethology, and even from ethnology or psychology.

Among these models, one can distinguish those corresponding to individual behavior from those using collective behavior. In the first case, the most obvious strategy is to seek to benefit permanently from any obvious immediate improvement. If the objective is to reach a summit, at every crossroads one will systematically take the route that seems to climb the most, for example by testing each of them over a short distance. Obviously, by doing this, one may well end up on a secondary summit, which could be only a very poor local optimum.

To compensate for the limitations of this primitive gradient strategy, it would be necessary to make a real conceptual leap and allow the situation to deteriorate, more or less and for some time, in the hope that it would improve eventually. Since this behavior could be suicidal, it is advisable to be protected by a safeguard, i.e., in practice, to remember the best position already found, in order to be able to return to it if necessary. At the same time, the individual can afford to explore according to a wider variety of rules, even straightforwardly at random, or, more intelligently, according to a chance “guided” by gradually acquired knowledge.
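
To fix ideas, here is a minimal C sketch of such a strategy: a local search that sometimes accepts a worse point, but always keeps the best position already found as a safeguard. The objective function, step size and acceptance threshold are purely illustrative choices, not taken from the programs of this book.

/* Purely illustrative sketch (not one of the book's programs): a local
   search that may accept a worse point, but always keeps the best
   position already found as a safeguard. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double f(double x) { return x * x + 3.0 * sin(5.0 * x); } /* toy objective */

static double unif(double a, double b) { /* uniform random number in [a, b] */
    return a + (b - a) * ((double) rand() / RAND_MAX);
}

int main(void) {
    double x = unif(-5.0, 5.0);        /* current position                */
    double best_x = x, best_f = f(x);  /* safeguard: best position so far */
    int t;
    for (t = 0; t < 1000; t++) {
        double cand = x + unif(-0.5, 0.5);   /* small local move                    */
        if (f(cand) < f(x) + 0.1) x = cand;  /* accept even a slightly worse point  */
        if (f(x) < best_f) { best_f = f(x); best_x = x; }
    }
    printf("best f = %f at x = %f\n", best_f, best_x);
    return 0;
}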

In the second case, i.e. collective optimization, preserving the best result found so far can be done quite naturally, since it is enough that the individual who has the best position does not move, leaving the others to explore the environment. But now, two new parameters come into play: the size of the group and its structure.

The structure relates to the way in which information is transmitted between individuals. To what is each individual related? Are these links constant or variable? Are the exchanges bidirectional or not? Is there a hierarchy? Sub-groups? The basic problem is that of the use of knowledge. One rightly feels that the more the search space is sampled by successively evaluated positions, the better one should be able to predict the areas that are interesting to explore, by making certain assumptions about the regularity of the search space. However, these forecasts have a price. Is it worthwhile?

Not always. The most obvious academic case is that of a function to be optimized that is completely random: the best strategy is then the most stupid and very cheap one, since it simply consists of generating equally random positions. More generally, the more randomness the progressive sampling of the studied function exhibits, the more the search strategy must itself call upon randomness.
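
For concreteness, this “stupid” strategy is nothing more than repeated uniform sampling while keeping the best sample, as in the following purely illustrative C sketch (the bounds and the number of trials are arbitrary choices, not taken from the book's programs).

/* Purely illustrative sketch: pure random search, i.e. repeated uniform
   sampling, keeping only the best sample found. */
#include <stdio.h>
#include <stdlib.h>

static double f(double x) {
    (void) x;                        /* a completely random function ignores x */
    return (double) rand() / RAND_MAX;
}

int main(void) {
    double best_x = 0.0, best_f = 1e30;
    int t;
    for (t = 0; t < 10000; t++) {
        double x = -5.0 + 10.0 * ((double) rand() / RAND_MAX); /* random position */
        double fx = f(x);
        if (fx < best_f) { best_f = fx; best_x = x; }
    }
    printf("best f = %f at x = %f\n", best_f, best_x);
    return 0;
}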

The size of the group can be fixed at the beginning or vary during the search. In the second case, it is necessary to define mechanisms of selection or generation, or, more often, both. Moreover, even in the first case, such mechanisms can be used, the constant size being preserved by a dynamic equilibrium in which any selection is compensated by a generation.

On PSO

Particle swarm optimization (PSO), in its historical version, is a collective, anarchic (in the original sense of the term), iterative method, with the emphasis on cooperation; it is partially random and without selection. The goal of the early chapters will be to detail these characteristics and formalize them to obtain an exploitable model that is particularly effective for strongly nonlinear problems.
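
As a foretaste of those chapters, the sketch below shows the general shape of such a method: each particle keeps a memory of its own best position and is randomly attracted both toward this memory and toward the best position known to its informants, and no particle is ever eliminated. The parameter values and the simplification of the informant group to the whole swarm (the historical version uses a circular neighborhood, see Chapter 7) are illustrative choices, not the formulations and settings developed in this book.

/* Purely illustrative PSO sketch (not the formulations developed in the
   following chapters): each particle keeps its best position and is
   randomly attracted toward it and toward the best position known to
   the swarm. No particle is ever eliminated. */
#include <stdio.h>
#include <stdlib.h>

#define D 2        /* dimensions */
#define N 20       /* swarm size */
#define T 1000     /* iterations */

static double unif(void) { return (double) rand() / RAND_MAX; }

static double f(const double x[]) {   /* toy objective: sphere function */
    double s = 0.0; int d;
    for (d = 0; d < D; d++) s += x[d] * x[d];
    return s;
}

int main(void) {
    double x[N][D], v[N][D], p[N][D], pf[N]; /* positions, velocities, best memories */
    double g[D], gf = 1e30;                  /* best position known to the swarm     */
    double w = 0.7, c = 1.5;                 /* illustrative inertia and attraction  */
    int i, d, t;

    for (i = 0; i < N; i++) {                /* random initialization in [-10, 10]^D */
        for (d = 0; d < D; d++) {
            x[i][d] = -10.0 + 20.0 * unif();
            v[i][d] = 0.0;
            p[i][d] = x[i][d];
        }
        pf[i] = f(x[i]);
        if (pf[i] < gf) { gf = pf[i]; for (d = 0; d < D; d++) g[d] = p[i][d]; }
    }
    for (t = 0; t < T; t++) {
        for (i = 0; i < N; i++) {
            double fx;
            for (d = 0; d < D; d++) {        /* random attraction toward the two memories */
                v[i][d] = w * v[i][d]
                        + c * unif() * (p[i][d] - x[i][d])
                        + c * unif() * (g[d] - x[i][d]);
                x[i][d] += v[i][d];
            }
            fx = f(x[i]);
            if (fx < pf[i]) {                /* update the particle's own memory */
                pf[i] = fx;
                for (d = 0; d < D; d++) p[i][d] = x[i][d];
                if (pf[i] < gf) { gf = pf[i]; for (d = 0; d < D; d++) g[d] = p[i][d]; }
            }
        }
    }
    printf("best value found: %f\n", gf);
    return 0;
}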

We will see initially why and how this model can treat continuous and heterogeneous optimization problems (i.e. those in which some of the variables are continuous and others discrete, possibly encoding combinatorial aspects) in a uniform way. Then we will study some alternatives. The goal here is not to make an exhaustive survey, but to work on a selection of those which either have already proved to be of interest, or seem most promising. In other words, their choice is necessarily subjective. We will look in particular at the versions known as adaptive, whose “ultimate” form, called TRIBES, does not require any parameter other than those defining the problem.

The few problems with accompanying notes should then allow you to become familiar with PSO, to better determine its domain of competence and hopefully to use it yourself later with profit, perhaps even to make improvements upon it.

PART I Particle Swarm Optimization

Chapter 1

What is a Difficult Problem?

1.1. An intrinsic definition

As regards optimization, certain problems are regarded as more difficult than others. This is the case, inter alia, for combinatorial problems. But what does that mean? Why should a combinatorial problem necessarily be more difficult than a problem in continuous variables and, if this is the case, to what extent is it so? Moreover, the concept of difficulty is very often more or less implicitly related to the degree of sophistication of the algorithms in a particular research field: if one cannot solve a particular problem, or if it takes considerable time to do so, then it is deemed difficult.
