Probabilistic Search for Tracking Targets

Irad Ben-Gal

Description

Presents a probabilistic and information-theoretic framework for a search for static or moving targets in discrete time and space.

Probabilistic Search for Tracking Targets uses an information-theoretic scheme to present a unified approach for known search methods and to allow the development of new search algorithms. The book addresses search methods under different constraints and assumptions, such as search uncertainty under incomplete information, probabilistic search schemes, observation errors, group testing, search games, distribution of search efforts, single and multiple targets and search agents, and online or offline search schemes. The proposed approach is associated with path planning techniques, optimal search algorithms, Markov decision models, decision trees, stochastic local search, artificial intelligence and heuristic information-seeking methods. Furthermore, this book presents novel methods of search for static and moving targets along with practical algorithms of partitioning and of search and screening.

Probabilistic Search for Tracking Targets includes complete material for undergraduate and graduate courses in modern applications of probabilistic search, decision-making and group testing, and provides several directions for further research in search theory.

The authors:

  • Provide a generalized information-theoretic approach to the problem of real-time search for both static and moving targets over a discrete space.
  • Present a theoretical framework that covers known information-theoretic search algorithms and forms a basis for the development and analysis of different search algorithms over probabilistic spaces.
  • Use numerous examples of group testing, search and path planning algorithms to illustrate direct implementation in the form of running routines.
  • Consider the relation of the suggested approach to known search theories and methods such as search and screening theory, search games, Markov decision process models of search, data mining methods, coding theory and decision trees.
  • Discuss relevant search applications, such as quality-control search for nonconforming units in a batch or a military search for a hidden target. 
  • Provide an accompanying website featuring the algorithms discussed throughout the book, along with practical implementation procedures.

Pages: 640

Publication year: 2013



Table of Contents

Title Page

Copyright

Dedication

List of figures

Preface

Notation and terms

Chapter 1: Introduction

1.1 Motivation and applications

1.2 General description of the search problem

1.3 Solution approaches in the literature

1.4 Methods of local search

1.5 Objectives and structure of the book

References

Chapter 2: Problem of search for static and moving targets

2.1 Methods of search and screening

2.2 Group-testing search

2.3 Path planning and search over graphs

2.4 Summary

References

Chapter 3: Models of search and decision making

3.1 Model of search based on MDP

3.2 Partially observable MDP model and dynamic programming approach

3.3 Models of moving target search with constrained paths

3.4 Game theory models of search

3.5 Summary

References

Chapter 4: Methods of information theory search

4.1 Entropy and informational distances between partitions

4.2 Static target search: Informational LRTA* algorithm

4.3 Moving target search: Informational moving target search algorithm

4.4 Remarks on programming of the ILRTA* and IMTS algorithms

4.5 Summary

References

Chapter 5: Applications and perspectives

5.1 Creating classification trees by using the recursive ILRTA* algorithm

5.2 Informational search and screening algorithm with single and multiple searchers

5.3 Application of the ILRTA* algorithm for navigation of mobile robots

5.4 Application of the IMTS algorithm for paging in cellular networks

5.5 Remark on application of search algorithms for group testing

References

Chapter 6: Final remarks

References

Index

This edition first published 2013

© 2013 John Wiley & Sons, Ltd

Registered office

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Kagan, Eugene.

Probabilistic search for tracking targets : theory and modern application / Eugene Kagan, Irad Ben-Gal.

pages cm

Includes bibliographical references and index.

ISBN 978-0-470-97393-6 (cloth)

1. Search theory. 2. Target acquisition – Mathematics. 3. Motion – Mathematical models. I. Ben-Gal, Irad.

II. Title.

T57.97.K34 2013

658.′034 – dc23

2012051596

A catalogue record for this book is available from the British Library.

ISBN: 978-0-470-97393-6

List of figures

Figure 2.1

Example of gridded sample space and its associated location probabilities

Figure 2.2

A general decision process for the probabilistic search procedure

Figure 2.3

Examples of search effort distribution

Figure 2.4

Initial and final target location density without search. (a) Initial target location density. (b) Target location density without search at time t = 100

Figure 2.5

Detection probabilities concentrated in the center of the domain and the corresponding target location probabilities. (a) Detection probabilities concentrated in the center of the domain. (b) Target location density at time t = 100

Figure 2.6

Detection probabilities concentrated at the sides of the domain and the corresponding target location probabilities. (a) Detection probabilities concentrated at the sides of the domain. (b) Target location density at time t = 100

Figure 2.7

Search agent trajectories and observed areas. (a) Continuous space and time. (b) Discrete space and time

Figure 2.8

Search agent trajectory and search density. (a) Detection probability at starting point x_{t=0} = (3, 3) and a search agent trajectory up to time t = 100. (b) Search density u_{t=100} or allocation w at time t = 100

Figure 2.9

The search density along the search trajectory and the search effort at an arbitrary point. (a) Search density along the agent's trajectory. (b) The search effort as applied to point x = (5, 5) as a function of time

Figure 2.10

Search density defined by an optimal allocation of efforts according to Equation 2.6. (a) Search density u_{t=0} = w*; available search effort K = 10. (b) Search density u_{t=0} = w*; available search effort K = 100

Figure 2.11

Dynamics of a Markovian target location density and the corresponding search density for an optimal allocation following Equation 2.6. (a) Target location density v_{t=0}. (b) Search density u_{t=0}; available search effort K = 10. (c) Target location density v_{t=10}. (d) Search density u_{t=10}; available search effort K = 100. (e) Target location density v_{t=100}. (f) Search density u_{t=100}; available search effort K

Figure 2.12

A search density that corresponds to an optimal allocation as obtained by Algorithm 2.1. (a) Search density u_{t=0} = w*; available search effort K = 10. (b) Search density u_{t=0} = w*; available search effort K = 100

Figure 2.13

Dynamics of Markovian target location density and search density for optimal allocations provided by Algorithm 2.2. (a) Target location density v_{t=0}. (b) Search density u_{t=0}; available search effort K = 10. (c) Target location density v_{t=10}. (d) Search density u_{t=10}; available search effort K = 100. (e) Target location density v_{t=100}. (f) Search density u_{t=100}; available search effort K

Figure 2.14

Dynamics of Markovian target location density and search density for optimal allocations provided by Algorithm 2.3. (a) Target location density v_{t=0}. (b) Search density u_{t=0}; available search effort K = 10. (c) Target location density v_{t=10}. (d) Search density u_{t=10}; available search effort K = 100. (e) Target location density v_{t=100}. (f) Search density u_{t=100}; available search effort K

Figure 2.15

Dynamics of the group-testing search

Figure 2.16

Example of binary search tree for static target search

Figure 2.17

Example of binary search tree for a moving target search

Figure 2.18

Example of the Dorfman group-testing search tree

Figure 2.19

Example of the Sterrett group-testing search graph

Figure 2.20

Example of binary search tree for Hwang's algorithm

Figure 2.21

Example of the search tree for Sobel and Groll's algorithm

Figure 2.22

Example of the search tree for Graff and Roeloffs' algorithm

Figure 2.23

Example of `division by half' search tree

Figure 2.24

Example of binary tree and maximal binary tree

Figure 2.25

Example of optimal binary search tree

Figure 2.26

Huffman coding tree corresponding to the optimal binary search tree

Figure 2.27

Entropy for two-point sample space

Figure 2.28

Example of the GOTA search tree

Figure 2.29

Example of the graph of states for the BF* algorithm

Figure 2.30

Example of the resulting path obtained by the BF* algorithm with a cumulative evaluation cost function

Figure 2.31

Example of the graph of states with estimated costs for the A* algorithm

Figure 2.32

Example of the expanded states and the resulting path obtained by the A* algorithm

Figure 2.33

Example of the expanded states and resulting paths obtained by two trials of the LRTA* algorithm. (a) Graph of the states with initial estimated costs and actual costs. (b) Resulting path and updated costs after the first trial. (c) Resulting path and updated costs after the second trial

Figure 2.34

Example of the graph of states for the MTS algorithm

Figure 2.35

Example of the expanded states and the resulting paths of the searcher and the target for the MTS algorithm. (a) The movements of the searcher and the target, and the updated costs after Step 1. (b) The movements of the searcher and the target, and the updated costs after Step 2. (c) The movements of the searcher and the target, and the updated costs after Step 3

Figure 3.1

Example of the decision tree for MDP

Figure 3.2

General MDP for the search procedure

Figure 3.3

Example of the Markov process for the target's movements

Figure 3.4

Target's movements in the Pollock model of search

Figure 3.5

Optimal binary search tree for initial target location probabilities

Figure 3.6

Relocation of the search efforts in the branch-and-bound procedure

Figure 3.7

Dynamics of the searcher and target in the search-evasion game. (a) Initial searcher and target location probabilities, t = 0. (b) Searcher and target location probabilities, t = 1. (c) Searcher and target location probabilities, t = 2

Figure 3.8

Dynamics of pursuit-evasion game

Figure 3.9

Diameter and radius of the graph

Figure 3.10

Examples of complete, cyclic, and path graphs. (a) Complete graph. (b) Cycle graph. (c) Path graph

Figure 3.11

Dual definitions of the target's policies. (a) Moving target and unrestricted searcher's movements. (b) Static target and restricted searcher's movements

Figure 4.1

Dependence of the Rokhlin distances on location probabilities

Figure 4.2

Actual and estimated distances and the neighborhood of the partition in the ILRTA* algorithm

Figure 4.3

Correspondence between the Huffman–Zimmerman procedure and the ILRTA* algorithm. (a) Bottom-up Huffman search tree and corresponding partitions of the sample space. (b) The path created by the ILRTA* algorithm in the partitions space with Rokhlin distances between the partitions

Figure 4.4

Correspondence between the GOTA and the ILRTA* algorithm. (a) GOTA neighborhoods and weighted distances between the partitions of the sample space. (b) Bottom-up GOTA search tree and corresponding partitions of the sample space. (c) The path created by the ILRTA* algorithm acting in the GOTA conditions

Figure 4.5

Partitions space for the search by three and four searchers. (a) Space χ(2) for m = 3 searchers. (b) Space χ(2) for m = 4 searchers

Figure 4.6

Actual and estimated distances and the neighborhoods of the partitions in the IMTS algorithm

Figure 4.7

Markov process for the Pollock model of search with a fictive point

Figure 4.8

Example of the IMTS algorithm's actions for the Pollock model of search with a fictive point. (a) Partitions of the sample space and estimated distances. (b) Start with partition θ: Step 1. (c) Choice of fictive partition ξ_3: Steps 2–4. (d) Choice of fictive partition ξ_1: Steps 5–7. (e) Choice of fictive partition ξ_2 and stop: Steps 8–9

Figure 5.1

Example of the simulation frame during the search by Algorithm 5.2. (a) The states of the searcher and the target and the probabilities at time t = 1. (b) The states of the searcher and the target and the probabilities at time t = 80

Figure 5.2

Example of the simulation frame during the search by Algorithm 5.2 with three searchers. (a) The states of the searchers and the target and probabilities at time t = 1. (b) The states of the searchers and the target and probabilities at time t = 50

Figure 5.3

Location updating initiated by a mobile station. (a) Mobile station is in location area 1 and moves to location area 2. (b) Location updating when mobile station arrives at location area 2

Preface

Humans have been involved in search activities from the beginning of their history: searching for food and shelter in prehistoric times; searching for new continents and countries in the Middle Ages; and searching for information in the digital age.

It should not be surprising to see that in the new ‘information age’ some of the world's leading companies are building their business on the basis of their search abilities. To name just two, Google has changed the world by providing search engines for users to look for specific content, and Facebook enables users to search for their own social identity. Both companies rely on dual search targets in their business models. On the one hand, they search and provide the exact information that their users are looking for. On the other hand, they search and provide lists of potential target users to their clients, such as media publishers and business organizations, according to predefined categories and queries.

It should be noted that all the above-mentioned search activities are probabilistic in nature. An ancient hunter-gatherer could only estimate where he should look for food. And although he could not formalize his thoughts in probabilistic terms and within a statistical framework, he captured information by his senses, processed it in his brain, and moved to areas where he believed that his chances of success would be higher.

Google and Facebook use an identical search scheme in a more formalized and modern manner. Information is gathered from the network and processed by sophisticated algorithms with a similar mission—to increase the probability of a successful search.

This book is about probabilistic search. We address this challenge and provide a more formalized framework for one of the most investigated questions in the history of science. A major part of this challenge is to consolidate the different research directions, schemes, notations, assumptions, and definitions that are related to probabilistic search in various research fields, such as operations research, artificial intelligence, stochastic search, information theory, statistics, and decision theory—to name but a few.

The book is intended for practitioners as well as theorists. It presents some of the main concepts and building blocks that are shared by many probabilistic search algorithms. It includes both well-known methods and our own methods for an information-driven search. It uses information theory concepts that are general enough to cope with the generalized search scheme. Moreover, it focuses on group testing, which is the theory that enables conclusions to be drawn on a group of search points simultaneously, for example, a conclusion made by the ancient hunter-gatherer to avoid a certain search area where the chances of success are lower.

We do not claim to provide a unified and general probabilistic search framework, yet we hope that this book bridges some of the gaps between the various research areas that tackle probabilistic search tasks.

In the book, Algorithm 5.1 was developed and programmed in collaboration with Oded Mery and Niv Shkolnik; Algorithm 5.2 was developed in collaboration with Boaz Golany and Gal Goren and was programmed in collaboration with Gal Goren; and the algorithms for mobile robot navigation discussed in Section 5.3 were developed and programmed in collaboration with Emanuel Salmona. Technical information regarding the cellular networks discussed in Section 5.4 was provided by Sergey Savitsky and Baruch Bar from Celcom. We would like to thank all these people for their help.

We are grateful to Havatzelet Tryster, Shmuel Gal, Yigal Gerchak, Gal Goren, Ron Kennet, Lev Levitin, and Shelemyahu Zaks, who read and commented on parts of the manuscript, and to Boaz Golany for fruitful discussions.

We would also like to express our sincere thanks to Heather Kay, Ilaria Meliconi, Richard Davies, and Prachi Sinha Sahay from John Wiley & Sons, Ltd, and Radhika Sivalingam from Laserwords, Ltd for their kind assistance and goodwill; without their help this book would not have been published. Finally, we would like to thank our families for their continuous support and love. The book is dedicated to them and to information-seekers around the world who search for good answers.

This book includes an accompanying website. Please visit www.wiley.com/go/probabilistic-search

Notation and terms

Chapter 1

Introduction

The terms ‘search’ and ‘search problem’ arise in a number of problems that appear in different fields of applied mathematics and engineering. In its literal meaning, the notion of a search problem corresponds to the class of situations in which an agent (a searcher) looks for a hidden object (either of them can be physical or abstract) by screening a certain defined area. The search problem is formulated under various restrictions on the considered search implementation, as well as on the functionalities of the agent and the target.

To illustrate the potential complexity that might be considered in a search problem let us start with a simple search example and gradually add to it various assumptions and conditions. The figure below presents a simple schematic view of a search problem where the target (a vehicle) is located at some point in a given space, while the searcher or searchers (aircraft) are looking for it. An initial classification of the problem depends on the definition of the sample space, which can be either discrete or continuous. In this book we mainly consider the former case, which implies that the target and the searcher move to well-defined points in the space. These discrete positions can also be modeled by nodes in a graph. This type of presentation is popular in the artificial intelligence (AI) literature where cost parameters or weights are often added to the edges of the graph, such that the overall cost of the search is obtained by accumulating these costs along the search path over the edges. When the weights are distributed unevenly, the search procedure can account for different considerations, such as non-homogeneous search distances or search efforts. We consider some of these cases.

A second critical feature of the problem is related to the ability of the target to move in the space. In the case of a moving target, several versions exist for representing its type and path. Some of the most popular schemes are random moves, Markovian moves, and Brownian moves. We address these assumptions in the relevant sections. In general, optimal solutions exist for static search problems but they often do not exist for dynamic search problems with a moving target. In this book we consider both cases, and in particular we propose a general search scheme that applies to both.

A third feature of the search is related to the information available to the searcher.
If the location of the target is known, then a complete-information search problem can be mapped to a simpler path-planning or chase-planning problem. These types of problems often appear in the operations research literature.
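The mapping from complete-information search to path planning can be made concrete with a standard shortest-path routine over a weighted graph of the kind described above. The sketch below is only an illustration of this reduction; the graph, node names, and edge costs are made-up assumptions, not an example from the book:

```python
import heapq

def dijkstra(edges, start, goal):
    """Minimal-cost path over a weighted graph given as {node: [(neighbor, cost), ...]}."""
    frontier = [(0, start, [start])]  # (accumulated cost, node, path so far)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return float('inf'), []

# Toy graph with uneven edge weights (illustrative values)
edges = {
    'A': [('B', 1), ('C', 4)],
    'B': [('D', 2)],
    'C': [('D', 1)],
    'D': [],
}
cost, path = dijkstra(edges, 'A', 'D')
# minimal search cost 3 along A -> B -> D
```

With the target's location fixed and known, the "search" is exactly this shortest-path computation, which is why the complete-information case is considered computationally easy.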

The origin of these deterministic search problems for a moving target was the pursuit problem that was formulated in the eighteenth century. This class of problem is computationally tractable and often focuses on capturing the target with a minimal number of search moves. In this book we focus almost entirely on the incomplete-information search, where the exact location of the target is generally unknown to the searcher. Note that there are several methodological concepts for addressing the incomplete-information search, for example, by relying on rough-set theory, fuzzy logic, or probability theory. We follow the latter probabilistic search approach, modeling the incomplete information on the target location by a function that quantifies the probability that the target is located at any point in the space (we call it the location probability). The search then becomes a probabilistic search and in many cases an adaptive one, where the results of the search up to a certain time are used to update the location probability distribution over the space, often by using a Bayesian statistics approach as we do here. A popular extension of the pursuit problem with incomplete information is known as the search and screening problem, formulated during World War II by Bernard Koopman. The problem is probabilistic not only with respect to the location of the target, but also with respect to the distribution of the search efforts that are applied in a continuous manner by the searcher to the search space.
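The Bayesian updating step mentioned above can be sketched in a few lines. This is a minimal illustration, not the book's algorithm: the per-cell detection probability `p_detect`, the three-cell space, and the prior values are all made-up assumptions. After an unsuccessful look at a cell, that cell's location probability is scaled by the miss probability and the distribution is renormalized:

```python
def bayes_update_after_miss(prior, searched_cell, p_detect):
    """Posterior location probabilities after one unsuccessful look at a cell,
    allowing a false negative with probability 1 - p_detect (standard Bayes rule)."""
    post = list(prior)
    post[searched_cell] *= (1.0 - p_detect)  # the target may have been missed
    total = sum(post)
    return [p / total for p in post]

prior = [0.5, 0.3, 0.2]   # illustrative location probabilities over three cells
post = bayes_update_after_miss(prior, 0, 0.8)
# the searched cell's probability drops while the others rise
```

Repeating this update after every look is what makes the search adaptive: the searcher's next move can always be chosen against the current posterior.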

We follow this approach, although we do not use the notion of distributed efforts. Instead, we assume that the search can be applied to discrete points of the search space. An important extension of the distributed search efforts, under a consideration of a discrete search space, is the group-testing search. In group testing the searcher can look for the target in a subspace of the search space and obtain an indication of whether the target is located somewhere in this subspace. The size of the allowed subspace is treated as an input parameter, while the search terminates if the subspace contains only a single point, thus representing complete information on the target location. In this book, we explicitly consider methods of group testing. If the target is static the search can be modeled by a coding theory process (where a code represents the location of the target in the space), while the coding procedures can be easily mapped to obtain the optimal search policy. These cases are often represented by decision trees that have become extremely popular in data-mining applications. In dynamic search, when the target is moving, such isomorphism between coding theory and search is no longer valid, so we propose a methodology that can be extended also to these cases.
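The `division by half' idea behind group testing for a static target can be sketched as follows. The oracle `contains_target` stands in for a group test over a subspace and is an assumption of this illustration, not an interface from the book; the search terminates once the candidate subspace contains a single point:

```python
def halving_search(n, contains_target):
    """Locate a static target among n points by repeatedly testing half of the
    remaining candidate interval. `contains_target(lo, hi)` reports whether the
    target lies in [lo, hi)."""
    lo, hi = 0, n
    tests = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        tests += 1
        if contains_target(lo, mid):
            hi = mid     # target is in the lower half
        else:
            lo = mid     # target is in the upper half
    return lo, tests

target = 11  # illustrative hidden location
loc, tests = halving_search(16, lambda lo, hi: lo <= target < hi)
# finds the target among 16 points in log2(16) = 4 group tests
```

The sequence of tested subspaces forms exactly the kind of binary search tree that, for a static target, corresponds to a prefix code for the target's location.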

There are several variants of the incomplete-information search. We often assume that the target is unaware of the search agent. If this is not the case, then the search process becomes a search game relying on some game theory concepts. We will shortly address these search games. Another conventional and realistic assumption is that the searcher's observations are prone to some observation errors. In these cases, two types of statistical errors have to be taken into account: missing the target even though the searcher has searched the right point (a false negative error), or falsely indicating that the target has been found at a certain point (a false positive error). Of these two errors, the false negative is the more commonly studied, and we consider a few such cases in later chapters. Another version of the incomplete-information search, which is also considered in this book, addresses the situation of side information, where the searcher obtains some (incomplete) indication during the search of where the target is located. Another natural extension to all of the above methods is obtained when assuming that there is more than one target or one searcher in the search space. In such a case the question arises of the amount of cooperation among the targets or among the search agents. A simple example of such cooperation is information sharing between the searchers in order to better estimate the location probability distribution and better utilize the joint search efforts. These extensions are discussed in the book as well.

We must stress the fact that the general formulation of the search problem as presented in this book does not distinguish between search for existing physical objects, such as cell (mobile) phones, people, and devices, and search for abstract entities, such as records in a database, an e-commerce customer on the Internet, a targeted customer type, or feasible solutions of a given problem within a predefined solution space. Some popular tools for such search procedures can be found in the data-mining and statistics literature. We draw clear lines of similarity between search procedures for physical entities and those found in problem-solving procedures, typically in stochastic local search methods that are used to obtain feasible solutions to a given schematic problem.

In any case, even from the above simple example it can be understood that the search problem in its general form can be very complex and highly variant, and can call for different solution schemes, which are partially covered in this book.

In summary, it is important to note that despite the similar properties of the above-mentioned variants of the search problem, there is no formal and unified search theory that captures all these points. Instead, one can find different search procedures and considerations in various research areas, such as operations research, coding theory, information theory, graph theory, computer science, data mining, machine learning, statistics, and AI. We do not claim to present such a unified search theory in this book, but we do try to bridge some of these gaps by formalizing the main properties, procedures, and assumptions that are related to many of these search problems and their variants.

In this chapter we start by discussing the motivation and applications of the search problem. This is done through a brief historical note on the origin of the search problem and its development. Then, we provide a general description of the search problem, focusing on three research fields in which it appears, namely, search and screening, group testing, and stochastic local search. We specifically discuss the core features of the stochastic local search, which provides the main insights on the proposed information-based search approach, and attempt to bridge some of the gaps between different search theories. We end the chapter by discussing the structure of the book.

1.1 Motivation and applications

The problem of search is one of the oldest mathematical problems, originating with the first applications of calculus in the seventeenth century. The problem arose from considering an agent that is required to determine the location of a hidden object and to intercept it. In the second half of the seventeenth century, the problem was explicitly proposed by Claude Perrault to Leibniz and studied in cooperation with Huygens. Perrault himself associated this problem with earlier studies conducted by Fermat and Newton [1].

The first essentially mathematical formulation and analysis of the search problem, according to Nahin [1], relates to the paper of Pierre Bouguer, which was submitted to the French Academy on 16 January 1732. This paper considered the pursuit of a merchant ship by a pirate ship, which is why such a problem is nowadays often called the pure pursuit problem. In it the pursuing agent has complete information about the current location of the pursued object, and the task is to find the agent's path which leads to a guaranteed interception of the object in minimal time. In Bouguer's original pursuit problem, it was assumed that the merchant's movement was directed in a straight line. In 1748 a generalization of the problem, in which the pursued object moved in circles, was formulated and published in the English Ladies' Diary Journal by John Ash in the form of a puzzle.

Formal mathematical considerations of the pursuit problem for various paths of the pursued object and different characteristics of the pursuer were proposed in a review published by John Runkle (1859) in Mathematical Monthly. Following this work, the pursuit problem was considered as an application of the theory of differential equations, and after publication of the classical monographs by Isaacs [2] and Pontryagin et al. [3], it was often considered as a variant of the optimal control problem.

Nowadays, the terms ‘search’ and ‘search problem’ arise in a number of problems that appear in different areas of applied mathematics and engineering. Some of the problems deal with essentially discrete models that can be described by walks over graphs, while other problems address continuous models that require the application of differential equations and certain variation principles. The problem of search in its literal/physical meaning appears in military applications (see, e.g., [4–6]) as the search for an object hidden by the enemy, in geological tasks as a search for minerals, as well as in other fields such as in quality control as the search for faulty units in a production batch [7, 8]. In such tasks, the searcher's detection abilities are restricted by sensor quality, and the order of search actions is often restricted by the physical abilities of the searcher.

Other variants of the search problem are obtained in search for intangible (abstract) items, such as in medical procedures when searching for the right diagnosis using decision trees, in cellular network paging in which the network controller looks for a mobile device address to redirect incoming calls [9], or in search for specific records in a file or database. In such tasks, the search can be modeled by relying on fault detection and group-testing principles [10, 11], by which the searcher is able to test parts of the system in a different order. Note that similar search properties appear in the tasks of data mining and classification, where the target of the search is defined as a certain property or data record.
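As a minimal illustration of the group-testing principle mentioned above, the following sketch locates a single faulty unit by binary splitting: each test reports whether a chosen group contains the fault, halving the candidate set at every step. The function names and the example batch are illustrative assumptions, not taken from the cited works.

```python
def find_defective(items, is_defective_in):
    """Binary-splitting group test: locate a single defective item using
    tests that report whether a *group* contains the defective unit."""
    candidates = list(items)
    tests = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        tests += 1
        if is_defective_in(half):      # fault is in the tested half
            candidates = half
        else:                          # fault is in the other half
            candidates = candidates[len(half):]
    return candidates[0], tests

# A batch of 16 units with one faulty unit at index 11 (hypothetical data);
# binary splitting needs log2(16) = 4 group tests.
defective = 11
item, n_tests = find_defective(range(16), lambda group: defective in group)
# item == 11, n_tests == 4
```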

In contrast to the above fields, in optimization and operations research search is considered a method of problem solving, where the target is defined as the required solution of a given problem. In particular, the method of local search [12, 13] for solving NP-hard problems sequentially examines candidate solutions, at each step choosing the next solution from a close neighborhood of the current one. Moreover, during recent decades search has acquired a similar meaning in the studies of AI [14, 15], where, for example, the A* family of algorithms is intensively studied and used to solve related problems.

For each of these approaches to the search problem, certain algorithms and results were obtained over the years. Nevertheless, in spite of the clear association and analogies in such problems and methods, the connections between heuristic search methods (including the AI version), search and screening methods, and group-testing methods have not been fully established to date. The present work attempts to close the gap between these approaches to the search problem.

1.2 General description of the search problem

Search problems appear in many different applications, yet share largely invariant methods, formulations, and considerations. In the most general formulation, the problem includes a searcher that moves over a search space and looks for a target that is located somewhere in that space. In this context, the searcher and the target can be considered either as physical agents, so that the goal of the searcher is to find the target in minimal time (or with minimal search effort), or as abstract objects, as in the search for the right medical diagnosis, where the searcher is considered a problem-solver while the target is the required solution of the problem.

The initial formulation of the problem dealt with the search for a hidden object by a physical agent acting in continuous space and continuous or discrete time. In such a formulation, the problem was studied within the framework of stochastic optimization and control theory [16–18], and for most cases of search for a static target, exact solutions of the problem in terms of the shortest search path or minimal search time were obtained. When considering search for a moving target, the problem was often addressed by using different approaches, including optimization methods [19, 55, 6] and game theory [20–22].

In contrast to search by physical agents, the abstract formulation of the search problem often considers the actions of the problem-solver over a discrete space of candidate solutions, where at each search step a single solution is obtained. In such a formulation, the problem appears both in the models of search over a gridded plane and in the tasks of AI [15, 23] and optimization theory [12], including group-testing methods for the processes of quality control and inspection [7, 8]. Moreover, in many studies [10], a group-testing formulation of the search problem has been considered, while in other applications the problem has been referred to as a problem of screening and detection [6].

A general formulation of the search problem in the group-testing form allows the implementation of a wide set of assumptions regarding the searcher and the target that relate to different tasks. The main assumption corresponds to the searcher's knowledge regarding the target, which determines the nature of the search space. In most cases, it is assumed that the search space consists of abstract states that represent either target locations or candidate solutions, and that the probability distribution over the search space is defined. If the probabilities of the states are not available to the searcher, then it can be assumed that this distribution is uniform or that the states are equally probable. This assumption stems naturally from the well-known maximum entropy principle [24].
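The maximum entropy principle invoked above can be illustrated numerically: among all distributions over n states, the uniform one maximizes the Shannon entropy, which is why it is the natural choice when location probabilities are unavailable. A minimal sketch (the example distributions are invented for illustration):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

n = 8
uniform = [1.0 / n] * n                                   # maximum-entropy prior
peaked = [0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03]    # any informative prior

# The uniform prior attains the maximum entropy log2(8) = 3 bits;
# every non-uniform distribution over 8 states falls strictly below it.
h_uniform, h_peaked = entropy(uniform), entropy(peaked)
```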

Other assumptions that correspond to the searcher's knowledge deal with detection and decision making. The first assumption is addressed by a detection function that defines the probability of errors of the first and second types. The Type I error corresponds to an erroneous declaration that the target has been found at a certain searched point. The Type II error corresponds to an erroneous declaration that the target is not located at a certain searched point. In most applications, it is assumed that the search is conservative – thus the search is prone to errors when checking a state where the target is located, while the detection of the absence of the target is errorless. Detection results, which can be represented either by Boolean ‘true’ or ‘false’ values or by detection probabilities, are used by the searcher to make decisions about the next state of the target and the next search action. In search by a physical agent, the detection function often corresponds to the properties of the available sensors, for example, when considering a car equipped with radar and cameras to avoid collisions with objects.
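The conservative detection model described above — a possible miss (Type II error) when inspecting the cell that contains the target, and errorless confirmation of its absence — can be sketched as a Bayesian update of the location probabilities after a 'not found' observation. The function name and the miss probability `beta` are illustrative assumptions, not the book's notation.

```python
def bayes_update_after_miss(p, i, beta):
    """Posterior target-location probabilities after a 'not found'
    observation in cell i, under a conservative detection model:
    the search misses a present target with probability beta (Type II
    error) and never reports a false find (no Type I error)."""
    q = list(p)
    q[i] *= beta              # target could be in cell i yet have been missed
    z = sum(q)                # renormalize over all cells
    return [qi / z for qi in q]

# After an unsuccessful look into cell 0, its probability mass shrinks
# and is redistributed to the unsearched cells.
prior = [0.5, 0.3, 0.2]
post = bayes_update_after_miss(prior, 0, beta=0.2)
```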

In the abstract version of the search problem, the detection function is often defined on the basis of the search space and its metric or topological properties. For example, when considering a problem-solving approach, a set of constraints can define the feasible search space.

Decision making and selection of the next search action usually depend on the detection result and on the information that is available to the searcher. In most optimization and group-testing algorithms, decision making is based on the probabilities of the states and on the search cost of the candidate decisions. The probabilities and costs are updated by the searcher during detection and support the decision making in further stages of the search [25]. In some AI methods [15], in contrast, the search is not defined by probabilistic terms – thus location probabilities are not modeled and decision making is mainly based on the distances (costs) between the states. In the popular A* algorithm of problem solving [14], the searcher uses side information about the required solution and applies an estimated distance (cost) to it, while choosing the next solution from the neighboring candidates.
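As a rough illustration of the A*-style selection described above, the following sketch runs A* on a small 4-connected grid, using the Manhattan distance as the estimated cost-to-go to the required solution. The grid, names, and costs are illustrative assumptions, not the book's model.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    The Manhattan distance serves as the admissible cost-to-go estimate."""
    def h(node):
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]       # (estimated total cost, cost so far, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                        # length of a shortest path
        if g > best_g.get(node, float('inf')):
            continue                        # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                             # goal unreachable

# A wall forces a detour: the shortest path from (0,0) to (2,0) has length 6.
maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
length = a_star(maze, (0, 0), (2, 0))
```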

Depending on the nature of the search problem, two main types of solution neighborhoods are defined. In the first type, the neighborhood includes all feasible solutions that are known at the current search step, and the candidate solution is chosen over this entire set. In this case a number of algorithms, for example, Huffman search [26] or dynamic programming search [27, 28], obtain the required solution. Nevertheless, these algorithms are mainly applicable to the search for a static target, which corresponds to a search for a solution under the assumption that the uncertainty regarding the solution depends only on the searcher's actions and not on external factors.

In the second type, the neighborhood includes a restricted subset of the search space, so that the selection procedure requires only a small number of calculations and comparisons, and a complete solution is obtained by sequential decision making and acceptance of a number of intermediate solutions that converge to the required solution. Such a search method is referred to as a local search and can be implemented in the search both for static and for moving targets. In terms of problem solving, the search for a moving target means that the uncertainty regarding the required solution changes over time and depends on external factors beyond the searcher's control. With some additional assumptions, such search procedures can be associated with a process of sequential decision making in game theory [29, 30] and, in particular, in search games [20–22], where a physical target agent can move over the space according to its internal rule, while the goal of the search agent is to find the target in minimal time or with minimal effort.
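The local search scheme described above can be sketched as a greedy walk over restricted solution neighborhoods: at each step the searcher examines only the neighbors of the current solution and accepts the best improving one. The cost function and neighborhood below are illustrative assumptions, not the book's model.

```python
def local_search(f, start, neighbors, max_steps=1000):
    """Greedy local search: repeatedly move to the best neighbor of the
    current solution; stop at a local optimum of the cost f."""
    current = start
    for _ in range(max_steps):
        candidate = min(neighbors(current), key=f)
        if f(candidate) >= f(current):
            return current          # no improving neighbor: local optimum
        current = candidate         # accept the intermediate solution
    return current

# Minimize a convex cost over the integers; the neighborhood is just the
# two adjacent integers, so the search walks downhill to the minimum at 3.
best = local_search(lambda x: (x - 3) ** 2, start=10,
                    neighbors=lambda x: [x - 1, x + 1])
```

For a convex cost this converges to the global minimum; on a multimodal cost the same procedure would stop at whichever local optimum is reachable from the starting point, which is exactly the limitation that motivates the probabilistic refinements discussed in the book.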

Finally, the formulation and methods of consideration of the search problem depend on the number of search agents and on the number of targets. The abstract search problems usually deal with a single decision-maker looking for a single solution or a group of solutions that represent this solution. In contrast, the search problems that consider the actions of physical agents depend on the number of search agents and the number of target agents and their movement. In such problems, additional questions arise regarding collective decision making, shared (or common) information, as well as communication between the searchers. Similar questions are considered regarding the target agents and their responses to the searchers' actions.

Based on the type of search problem and implemented assumptions, the problem can be addressed and solved by using different methods. Some of these methods are applicable to the problem of search in which physical agents act in a physical space; other methods are used in an abstract search for a solution of optimization or AI problems. In the next section, we consider some of the main approaches to problem-solving search that provide a context for the local search methods considered in the book.

1.3 Solution approaches in the literature

Methods for solving the search problem depend on its formulation and on the implemented assumptions and applied restrictions. For example, in the original pursuit problem or in its game theory formulation, called the pursuit-evasion problem, the pursuer has complete information on the position of the pursued object at any time. Accordingly, the goal is to determine the path of the pursuer given the velocity of the pursuer, the velocity and trajectory of the pursued object, and the governing rules of its movement. In such a formulation, the solution can be obtained by using suitable variation principles and, in most cases, it has a well-defined structure.

A variant of the pursuit problem with incomplete information – the so-called ‘dog in a fog’ problem – was initiated in 1942 by the US Navy's Antisubmarine Warfare Operations Research Group. In a focused report [4] on the problem and later in a book [19], Koopman named this problem the search and screening problem. As indicated in a later report by Frost and Stone [31], the considered theory

is the study of how to most effectively employ limited resources when trying to find an object whose location is not precisely known. The goal is to deploy search assets to maximize the probability of locating the search object with the resources available. Sometimes this goal is stated in terms of minimizing the time to find the search object.

The objective in the search and screening problem, as stated in [31], is ‘to find methods, procedures, and algorithms that describe how to achieve these goals,’ where the pursuer is called the searcher and the pursued object is the target. It is assumed that the searcher acts under uncertainty, and that the available information on the target location changes during the search. The dependence between the amount of available information and the locations of the searcher and the target is defined by a detection function