
Technische Universität München Lehrstuhl für Datenverarbeitung

Trust between Cooperating Technical Systems

With an Application on Cognitive Vehicles

Walter Bamberger

Complete reprint of the dissertation approved by the Faculty of Electrical Engineering and Information Technology of the Technische Universität München for the attainment of the academic degree of

Doktor-Ingenieur (Dr.-Ing.)

Chairman: Univ.-Prof. Dr. sc. techn. Gerhard Kramer

Examiners of the dissertation:

Univ.-Prof. Dr.-Ing. Klaus Diepold

Prof. Dr. Sandra Zilles, University of Regina, Canada

The dissertation was submitted to the Technische Universität München on 29 January 2013 and accepted by the Faculty of Electrical Engineering and Information Technology on 24 July 2013.

Walter Bamberger. Trust between Cooperating Technical Systems. With an Application on Cognitive Vehicles. Tredition, Hamburg, 2014.

You can obtain various flavours of this book: paperback (ISBN 978-3-7323-0946-7), hardcover (ISBN 978-3-7323-0947-4), e-book (ISBN 978-3-7323-0948-1), free PDF document for open access (http://nbn-resolving.org/urn:nbn:de:bvb:91-diss-20130724-1129245-0-2).

Published by tredition, Hamburg, Germany, http://www.tredition.de.

Cover design by Christian Ullermann, http://www.der-kleine-buecherladen.de.

This book features research within the Fidens project at the Institute for Data Processing, Technische Universität München (TUM), Munich, Germany. For accompanying information and new research visit http://www.ldv.ei.tum.de/fidens.

Key words: cognitive system, cooperation, Dirichlet process, infinite hidden Markov model, interpersonal trust, relational dynamic Bayesian network, reliability, reputation, requirements, self-organising system, social science, society, time-varying behaviour, transfer learning, trust model, vehicle, vehicular ad hoc network, VANET.

© 2014 Walter Bamberger

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California 94105, USA. The licence does not apply to the following parts:

Cited material and referenced work. Please refer to the original work to find the copyright holder and license.

The comic in Figure 3.4 is printed with the kind permission of Bulls Press, ©Solo Syndication/Distr. Bulls.

Figures 8.1 and 8.2 contain colour modifications of 2CV by the Openclipart user spadassin, public domain (https://openclipart.org/detail/202003/2cv-by-spadassin-202003) as well as Parkhaus Rheinauhafen by the Wikimedia user H005, public domain (http://commons.wikimedia.org/w/index.php?title=File:Parkhaus_Rheinauhafen.jpg&oldid=133724723).

On the cover, the hands making a pretzel and the feet are printed with the kind permission of Christina Bamberger. The cover contains Parkhaus Rheinauhafen too.

Abstract

Researchers from the social sciences and economics consider trust a requirement for successful cooperation between people. It helps to judge the risk in situations in which a person has the choice to rely on another one. In the future, technical systems will face similar situations. Assume, for example, that a robot at a large logistics centre should reload goods from a ship in cooperation with others. First, it must find the right partners among a set of diverse other robots. To make this selection efficiently without excessive security mechanisms, the robot needs trust. Here I consider trust a mechanism that estimates the certainty of the outcome of the partner's actions.

This dissertation formalises trust between technical systems to set the theoretical foundation for the above idea. It reviews the socio-scientific and technical literature and identifies generic requirements for the mechanism trust. Based on these requirements and further considerations, it presents a conceptual, implementation-independent framework. This new framework, called the Enfident Model, incorporates various facets of trust in the form of sub-models. Amongst others, it covers the temporal development of cooperation, the dependency on the task and bargaining, time-varying behaviour of the cooperation partner, learning from experiences, logical constraints of the present situation, and transfer learning to handle unknown situations. With these manifold features described on a conceptual level, the Enfident Model captures existing trust procedures and is suitable for designing new ones. The theoretical part is complemented with algorithms for prototyping trust in individual applications. These algorithms use statistical relational learning to combine logic, learning, clustering and statistics for trust development. They work on a relational dynamic Bayesian network.

Since trust is a social phenomenon, the evaluation features a virtual society of vehicles. These systems cooperate by exchanging information in a vehicular network. They use a trust algorithm to distinguish correct from incorrect information. The simulation shows that the identified trust requirements and the Enfident Model lead to intuitive and consistent results.

Contents

1    Introduction

1.1    Problem Statement

1.2    Motivation and Applications

1.3    Contribution of This Dissertation

1.4    Organisation

2    Clarifying the Concept of Trust

2.1    Interpersonal Trust

2.2    Functions of Interpersonal Trust

2.3    Trust in Information Security

2.4    Trust in Multi-Agent Systems Research

2.5    Definition of Trust in This Dissertation

3    An Input-Output View on a Trusting Person

3.1    Overview

3.1.1    Trust as a Dependent and Independent Variable

3.1.2    Attitude, Decision and Manifest Behaviour

3.2    External Influences – the Inputs

3.2.1    Interpersonal Perception

3.2.2    Properties of the Relationship

3.2.3    Properties of the Situation

3.3    External Effects – the Outputs

3.4    Inner Processing

3.4.1    Trust as a Generalised and a Specific Expectation

3.4.2    Matching of the Actual and the Desired Relationship

3.4.3    Self-Perception

3.5    Summary

4    Requirements and Related Work

4.1    The Causality of Trust Development

4.2    Influences

4.3    Output of a Trust Algorithm

4.3.1    Trust Representation

4.3.2    Trust and Decision

4.3.3    Trust, Risk and Utility

4.4    Reasoning Process

4.4.1    Present Constraints and Experience

4.4.2    Specialisation and Generalisation

4.4.3    Entities as a Time-Varying Process

4.5    Summary

5    Notation

5.1    Probabilistic Notation

5.2    Graphical Notation

6    The Enfident Model

6.1    Trust-Related Situations

6.2    Data Definition

6.2.1    Overview

6.2.2    Detailed Description of the Entities and Relationships

6.2.3    Conclusion

6.3    Reasoning Process

6.4    Querying

6.5    Implementation Notes

6.5.1    Determining the Attributes

6.5.2    Distinguishing Entities

6.5.3    Trusting a Group of Systems

6.5.4    Modelling the Social Structure

6.5.5    Designing the Reasoning

6.5.6    Connecting the Enfident Model to a Reputation System

7    Reasoning Algorithms for the Enfident Model

7.1    Introduction

7.2    Realisations with Finite Mixture Models

7.2.1    The Dirichlet Distribution

7.2.2    Time-Invariant Entity Types

7.2.3    Time-Varying Entity Types

7.3    Realisations with Infinite Mixture Models

7.3.1    Introduction to the Infinite Mixture Model

7.3.2    Infinite Mixture Models for Time-Invariant Entity Types

7.3.3    Infinite Mixture Models for Time-Varying Entity Types

7.4    Summary

8    Evaluation Method

8.1    The Inter-Vehicular Communication Scenario

8.2    Use Cases for the Trust Model

8.3    The Trust Problem

8.4    Simulation Environment

8.4.1    The Social Structure

8.4.2    Events and the Information Model

8.4.3    The Processing in the Vehicles

8.4.4    Post-processing with Mates

8.4.5    Further Limitations of the Simulation Environment

8.5    Simulation Scenarios

8.5.1    Verifying the Fulfilment of the Requirements

8.5.2    Sensor Quality Scenario

8.5.3    Defect Scenario

8.6    The Algorithms Used for the Evaluation

9    Evaluation Results and Their Discussion

9.1    Learning the Competence-Related Influences

9.2    Specialisation and Generalisation

9.3    Time-varying Behaviour of the Trustee

9.4    Generality of a Trust Algorithm and Its Convergence

9.5    Summary

10  Conclusion

10.1  Summary

10.2  The Enfident Model and Interpersonal Trust

10.2.1  Influencing Factors

10.2.2  Perceived Characteristics of the Cooperation Partner

10.2.3  Trust as a Generalised and Specific Expectancy

10.2.4  Trust as an Inner State or Manifest Behaviour

10.2.5  Summary

10.3  Limitations of the Enfident Model and Future Research Issues

Bibliography

1  Introduction

In a future with many self-organising systems, socio-scientific issues also apply to the society of those machines. Imagine, for example, a future scenario of robots at a large construction site. They have different shapes and abilities as they have been optimised for different purposes, like moving big and heavy items, or cutting and screwing. Some of them have worked together before; others do not know each other, because they are new or belong to different companies.

In this scenario, various complex tasks can only be executed jointly by a group of robots. Imagine a robot has got the job to carry out such a task. It looks for partners, asks them whether they would be willing to do the job, and finally performs the task with their support. Selecting the right cooperation partners is important for an optimal outcome: a partner could have insufficient abilities, be partly defective, or be manipulated to sabotage the task. Thus the organising robot should select those partners from which it expects the best outcome. This is where trust comes in.

1.1  Problem Statement

The scenario above is an example of the problem this dissertation addresses. The general setting consists of a system that wants to cooperate with another system or a group of systems. Here I understand cooperation as any form of relying on the action of another party. That setting is related to various subjects like reputation, identification of the partner, individual trust development, decision making, reciprocity and information security (see Figure 1.1 on page 3). This dissertation picks out just one. It focuses on the single problem: How can a system that wants to cooperate with another system or a group of systems predict the cooperation outcome? This prediction should take the form of beliefs in or likelihoods for all possible cooperation outcomes.
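To make the required form of this prediction concrete, the following minimal Python sketch shows a trust output as a distribution of likelihoods over a discrete set of possible cooperation outcomes. The outcome labels, the smoothing and the frequency-based estimate are purely illustrative assumptions that stand in for a real trust algorithm; only the shape of the result matters here, namely one likelihood per possible outcome rather than a single rating.

from collections import Counter

# Possible cooperation outcomes for one kind of task; the labels are illustrative.
OUTCOMES = ("success", "partial", "failure")

def predict_outcome_distribution(history, prior=1.0):
    """Return likelihoods for all possible cooperation outcomes.

    history is a list of outcomes observed with one partner so far. A
    Laplace-smoothed relative frequency stands in for a real trust algorithm."""
    counts = Counter(history)
    total = len(history) + prior * len(OUTCOMES)
    return {o: (counts[o] + prior) / total for o in OUTCOMES}

# Three successful and one failed act of cooperation observed so far.
print(predict_outcome_distribution(["success", "success", "failure", "success"]))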

The problem can also be considered from another point of view: If a system can predict cooperation outcomes, it has a certain model of the other’s manifest behaviour. It cannot look into the other system to see how that system really works. But it can obtain a limited idea of how the other system works just from observing its behaviour over several interactions. This idea is a model of the other’s manifest behaviour regarding cooperation. So the problem treated in this dissertation can also be formulated as: How can a system learn a model of other systems’ cooperation-related behaviour?

I call a mechanism that can learn such a model trust between technical systems. The term trust has different meanings in different fields. To address this fact, the next chapter introduces the views of some researchers in the social sciences, cryptology and the field of multi-agent systems. It relates them to the problem described above to clarify why I use the term trust in this dissertation. Finally it defines some trust-related terms for the present work. Chapter 4 summarises the state of the art for technical trust mechanisms. The contribution of this dissertation beyond the state of the art is compiled in Section 1.3.

More specifically, this dissertation does not try to simply solve the described problem with a certain algorithm for a specific application. Instead it collects requirements for a trust mechanism in general and derives a conceptual trust model from them. To realise and evaluate this model, an exemplary algorithm is presented. More implementations of the model and optimisations are subject to future research.

In the remainder of this section, I further detail the problem and distinguish it from selected other problems the reader may think of. For this, Figure 1.1 gives some orientation. The term cooperation is interpreted very widely in this document. It includes delegation and all sorts of relying on another party. Consider, for example, a driver who is overtaking another car on a highway. The situation seems free of risk, as no third car is around. But still, each of the drivers relies on the other one not to hit their own car (for whatever strange reason). This situation features a form of loose reliance without any explicit agreement. In this dissertation, I still consider it an implicit form of cooperation, as it constitutes a trust situation.

Figure 1.1: This figure shows some mechanisms the reader may think of when talking about trust. The blue ellipse contains modules that work on the individual level. Those in the grey part are used for the interaction with systems: the society level. This dissertation only treats the trust mechanism marked in dark blue.

Furthermore, the systems here should cooperate without human support. In particular, they should develop trust on their own. This is in contrast to systems that use humans as trust sources, like classical online reputation systems. That points to an important prerequisite: In this thesis, a trusting system must be able to assess all facets of a cooperation outcome. Only then can it learn the cooperation-related behaviour of others on its own.

Related trust methods often include mechanisms for decision making, reputation building, reciprocity enforcement as well as cryptographic data and platform security. I focus on trust development in the individual and omit society-level features like cryptographic network protocols or reputation building. Moreover, I consider decision making and reciprocity to be different from trust development (see Chapters 2 and 3).

So I propose a mechanism that just learns a model of the other's behaviour. All the tools mentioned above are related to trust and important for a trusting society; Figure 1.1 depicts this. But they are different from a trust mechanism in the strict sense that is proposed in this dissertation. Moreover, the dissertation focuses on machine-machine interaction without human intervention. Whenever cooperating and trusting systems or agents are mentioned, I refer to technical systems, unless interpersonal trust is considered explicitly.

What comes very close to a trust mechanism is a sensor model. Such a model describes how a sensor transforms the observed physical quantity into an output signal. So it reflects the behaviour of a sensor. A trust mechanism goes beyond this. It learns behavioural models for many other systems, not just one sensor, and for many tasks, not just the single task of obtaining a certain physical quantity. In addition, these other systems are unknown in advance and their basic way of functioning may vary. Still the trust mechanism should provide accurate expectations, even if only a few experiences have been made with the other systems before. Thus the trust mechanism must be able to learn various behavioural models; it must be generic. And it should involve transfer learning to quickly adapt to new situations.
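As a rough illustration of this genericity, the sketch below keeps one behavioural model per partner and task and falls back to pooled experiences with the same task when the partner is unknown. The class name, the attributes and the naive pooling rule are hypothetical stand-ins for proper transfer learning, not part of any algorithm in this dissertation.

from collections import Counter, defaultdict

class BehaviourModels:
    """One simple outcome model per (partner, task) pair; unknown partners fall
    back to pooled experiences with the same task as a crude form of transfer."""

    def __init__(self, outcomes=("success", "failure")):
        self.outcomes = outcomes
        self.counts = defaultdict(Counter)  # (partner, task) -> outcome counts

    def observe(self, partner, task, outcome):
        self.counts[(partner, task)][outcome] += 1

    def expect(self, partner, task, prior=1.0):
        counts = self.counts.get((partner, task))
        if counts is None:  # unknown combination: pool experiences with this task
            counts = Counter()
            for (_, t), c in self.counts.items():
                if t == task:
                    counts.update(c)
        total = sum(counts.values()) + prior * len(self.outcomes)
        return {o: (counts[o] + prior) / total for o in self.outcomes}

models = BehaviourModels()
models.observe("robot_a", "lift_crate", "success")
models.observe("robot_b", "lift_crate", "failure")
print(models.expect("robot_c", "lift_crate"))  # relies on pooled task experiences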

The next section introduces various scenarios in which a technical form of trust is useful. The scenarios show that the present work has relevance for the research on cognitive systems, multi-agent systems, sensor networks, vehicular networks and – to some extent – on cryptology; it features techniques from the field of statistical relational learning.

1.2  Motivation and Applications

Trust is only a minor subject in the development of today's technical systems. In contrast to this, interpersonal trust is considered important for personal relationships as well as business organisations (see Section 3.3 and, e.g., Gennerich, 2000, pp. 10–12 for an overview). It improves communication and cooperation, and it is considered a prerequisite of efficient work flows in groups. If it is so important for people, why is it used only rarely in technical systems? The main reason might be that trust is especially necessary for cooperation between self-organising agents. Strictly controlled work flows, as they are typical today for machine-to-machine interaction, make trust unnecessary. But the proposed idea is important for systems that cooperate in a self-organised way. Such systems will need a trust mechanism to handle the uncertainty when relying on other systems. As a consequence, the reader should venture a glimpse into the future to find application scenarios for trust between cooperating systems.

I use the following exemplary scenarios throughout this dissertation. The first is the scenario of a construction site as described in the previous section. It is similar to the second scenario of a large logistics centre with various kinds of robots that cooperate to reload goods from a ship. In both scenarios, the cooperation helps to extend the physical capabilities or to perform tasks more efficiently. In the third example, future cognitive vehicles are driving around while perceiving their environment. To extend their perception range, they exchange all sorts of information that some vehicles have perceived before. With this form of cooperation, they can efficiently maintain a model of their surrounding world (like a map or a model of the traffic situation) and advise the driver (e.g., where to go or what to pay attention to). The fourth scenario features virtual agents at a virtual marketplace that trade with each other. So they cooperate as substitutes for persons. These scenarios should give the reader the feeling that trust is helpful for future self-organising systems.

In general, trust supports the following reasoning tasks that appear when cooperating:

1.  Select a cooperation partner from several possible ones;

2.  Decide whether to cooperate or not if there is a choice not to cooperate at all;

3.  Know about the weaknesses of a certain act of cooperation and take their consequences into account;

4.  Decide about the correctness of received information;

5.  Decide whether the received information about a certain subject is sufficient; and if not,

6.  Decide whom to ask for a further opinion about the subject (which is related to Item 1); and finally,

7.  Decide whether to accept a cooperation request of another party (which is related to Item 2). So trust is usually needed by both the one that asks for cooperation and the one that is asked.

In summary, a self-organising cooperating system needs trust to decide on “how, when, and who to interact with” (Ramchurn et al., 2004, p. 3).
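The following sketch hints at how the first two reasoning tasks could consume a trust distribution of the kind introduced above. The utilities, the threshold and the partner names are illustrative assumptions, and the dissertation treats decision making itself as separate from trust development (see Chapters 2 and 3); the sketch merely shows how a decision component could build on the trust output.

# Hypothetical utilities per outcome; the numbers are illustrative assumptions.
UTILITY = {"success": 1.0, "partial": 0.4, "failure": -1.0}

def expected_utility(trust_distribution):
    """Expected utility of cooperating, given likelihoods for each outcome."""
    return sum(p * UTILITY[o] for o, p in trust_distribution.items())

def choose_partner(candidates, min_utility=0.0):
    """Tasks 1 and 2: pick the most promising partner, or decide not to cooperate.

    candidates maps a partner name to its predicted trust distribution."""
    best = max(candidates, key=lambda name: expected_utility(candidates[name]))
    if expected_utility(candidates[best]) < min_utility:
        return None  # better not to cooperate at all
    return best

candidates = {
    "robot_a": {"success": 0.7, "partial": 0.2, "failure": 0.1},
    "robot_b": {"success": 0.4, "partial": 0.3, "failure": 0.3},
}
print(choose_partner(candidates))  # -> robot_a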

The reader can find many scenarios in which future technical systems could perform the above reasoning tasks. To support this, I give an overview of the various forms of cooperation that can be expected in the future (based on Hirche, 2010). This classification was proposed in CoTeSys, a cluster of excellence of the Deutsche Forschungsgemeinschaft (German Research Foundation) that investigates cognitive systems.

Two systems interact solely through the environment during the cooperation.

Two systems share individual components and couple with one another via information exchange

–  to extend the perception range (joint perception),

–  to extend the physical capabilities (joint manipulation),

–  to increase the learning performance (joint learning), and

–  to find good and efficient strategies for the task execution (joint planning and decision making).

Figure 1.2 illustrates how two cognitive systems can share various components directly.

Both main forms of cooperation can also be mixed. Trust is helpful in all cases.

With this schema of cooperation forms, the reader might get an idea of the various applications we can expect of future self-organising systems. The previous list of reasoning tasks shows that a trust mechanism can strongly support the reasoning in these applications. So there is a wide range of use cases trust can be applied to. But is trust really necessary or could it be substituted with better planning and control in the scenarios? Full control over complex situations with several interested parties is difficult and, thus, expensive. Imagine, for example, a large harbour in the future. The robots there belong to different parties, have various ages and come from several manufacturers. So full control is difficult here. Avoiding strict global control is the exact idea behind self-organising systems. Thus trust enables those systems to cooperate efficiently without expensive procedures for security enforcement. This concern is similar to that of Gerck (2002), who recommends trust for the Internet because of its self-organising nature. For him, using trust instead of full surveillance has the advantages of a simpler and more modular system design as well as lower costs.

Figure 1.2: Examples of how two cognitive systems can share their components (based on Hirche, 2010). The black lines indicate data flows between the components in one system. The orange lines refer to data flows, which are realised by communication between two different systems.

Above I used the notion of a cognitive system. This kind of system has the ability to trust, because it can perceive and understand its environment in order to judge past acts of cooperation and to learn from them. And this kind of system has a need for trust, because it should engage in cooperation and reason about cooperation. Therefore cognitive systems are widely used in this dissertation, but the application of trust is not limited to them. This term is defined in CoTeSys as follows:

“Cognitive technical systems (CTS) are information processing systems equipped with artificial sensors and actuators, integrated and embedded into physical systems, and acting in a physical world. They differ from other technical systems as they perform cognitive control and have cognitive capabilities. Cognitive control orchestrates reflexive and habitual behavior in accord with longterm intentions. Cognitive capabilities such as perception, reasoning, learning, and planning turn technical systems into systems that ‘know what they are doing’.” (Buss et al., 2007, p. 25)

1.3  Contribution of This Dissertation

This dissertation has the objective to improve the understanding and modelling of trust between cooperating technical systems. To achieve this, it contributes the following to a theory of technical trust.

It discusses the term and mechanism "trust" across disciplines and introduces research on interpersonal and technical trust to compare various views. In contrast to the state of the art (e.g. Castelfranchi and Falcone, 2010; Engler, 2007; Kassebaum, 2004), this dissertation presents interpersonal trust as an input-output system. This new view makes it easier to relate trust between persons and trust between machines with each other. In addition, the presented interdisciplinary discussion is deeper than the state of the art. This leads to a different understanding of technical trust, especially regarding the following questions: What notions of trust can be distinguished (Section 2.5)? How does trust differ from related mechanisms (Chapters 1 and 2)? What influences trust development (Sections 3.2 and 6.2)? How do interpersonal trust and inter-machine trust differ from one another (Sections 3.4 and 10.2.5)? This work results in clear, well-founded technical concepts for different notions of inter-machine trust. It is necessary because the present state of the art lacks a sufficient theoretical framework for the trust model presented in this document.

The interdisciplinary research together with an analysis of future trust scenarios leads to a formalisation of trust between technical systems. This formalisation is the core contribution of this dissertation. It consists of general, application-independent requirements for a trust algorithm and a conceptual, implementation-independent model of trust. The requirements are postulated together with a review of the technical literature in Chapter 4. Formal requirements for a trust mechanism are unique in the literature: while some authors (e.g. Ramchurn et al., 2004) review the literature on trust, they do not derive requirements from it. Furthermore, the new conceptual model of trust describes various aspects of trust development and can be understood as a meta-model for creating new application-specific trust algorithms. It is presented in Chapter 6 and called the Enfident Model. The following list details its main features with a focus on those that are rarely found in other trust models.

The Enfident Model evaluates a trust situation comprehensively. It explicitly names three aspects: the cooperation partner(s), the cooperation agreement and the task to fulfil. It combines them as entity classes in a relational sub-model; each of the entity classes groups several attributes of the trust situation. Present trust models consider the attributes of one or two of those entity classes only, as Section 4.2 points out.

This relational sub-model can reunite two lines of research on technical trust, which are detailed in Section 4.2. Today, most trust algorithms rate previous cooperation outcomes and derive trust from these ratings. In contrast, the socio-cognitive trust models derive trust from beliefs about the cooperation partner in the given trust situation, basing their theory on belief-desire-intention agents. These beliefs can be located in the Enfident Model in the same way as the cooperation outcomes and contextual information.

Section 4.4.1 shows that some trust algorithms base their outcome prediction on past experiences, while others use logical constraints of the present situation. The Enfident Model addresses both information sources. This is unique in the literature.

Most trust algorithms just rate the act of cooperation. In contrast, this dissertation makes the cooperation outcome the first-class object. The subjective likelihoods of the possible cooperation outcomes (named the trust distribution) should be predicted directly and as completely as possible. If necessary, a rating can be derived from them in a subsequent step, either in the trust algorithm or in a decision algorithm (the sketch after this list derives such a rating). The trust algorithm in ElSalamouny et al., 2010 is one of the few examples that output the cooperation outcome instead of a rating.

Present trust algorithms compute specific trust for a certain purpose. The needs of a reputation system, for example, or the trust problem of an autonomous agent define that situation. The social-science literature shows, though, that people can express trust for all sorts of attribute combinations, like the trust in a certain cooperation partner or the trust regarding a certain situational setting (e.g. meeting at night) (see Section 3.4.1). The Enfident Model reflects this with the concept of querying, which is also illustrated in the sketch after this list. This concept is unique in the technical literature. It enables a system to compute trust for a specific trust situation or to exchange the trust in various objects with other systems – with just one single trust model.

The Enfident Model explicitly models trust-related changes in the mentioned entities over time. For example, a cooperation partner could change its behaviour, which means its internal way of working, because of defects or software updates. I found a related functionality only recently in the literature: ElSalamouny et al. (2010) model the time-varying behaviour of a single cooperation partner as a hidden Markov model. The Enfident Model includes similar sub-models for all entity types, not just the cooperation partner, and entangles those sub-models across entities. Moreover, the Enfident Model proposes a time-dependent likelihood for the state transitions.

Trust develops over an ordered sequence of acts of cooperation. An act of cooperation may in turn consist of an ordered sequence of interactions. The trustor can evaluate trust at any time during an act of cooperation. Some information may be known at that time, other information may be unknown and some information may change from interaction to interaction. To my knowledge, no present work contains such a comprehensive sub-model for the temporal development during a single act of cooperation.

A trust mechanism should help to handle new, uncertain situational settings. Therefore it must transfer knowledge from other, even different settings to the new one by utilising similarities (Pan and Yang, 2010). Rettinger et al., 2008 is the only present work that realises this functionality satisfactorily.

The Enfident Model combines all these features in a coherent model and shows how they can interplay with each other. Present trust models focus on few of them only. This listing also clarifies why the Enfident Model can serve as a meta-model to analyse existing trust algorithms.
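The following sketch illustrates two of the features above on toy data: querying a single experience base for an arbitrary attribute combination, and deriving a scalar rating from the resulting trust distribution in a subsequent step. The attribute names, outcome values and counting scheme are illustrative assumptions and not the Enfident Model's actual reasoning process.

from collections import Counter

# Each experience records attributes of the trust situation plus the observed
# outcome. Attribute names and values are invented for this example.
EXPERIENCES = [
    {"partner": "robot_a", "task": "lift", "daytime": "night", "outcome": "success"},
    {"partner": "robot_a", "task": "lift", "daytime": "day", "outcome": "success"},
    {"partner": "robot_b", "task": "lift", "daytime": "night", "outcome": "failure"},
    {"partner": "robot_b", "task": "weld", "daytime": "day", "outcome": "success"},
]

def query_trust(experiences, outcomes=("success", "failure"), prior=1.0, **attributes):
    """Trust distribution for an arbitrary attribute combination, for example a
    certain partner or just a situational setting such as the daytime."""
    counts = Counter(e["outcome"] for e in experiences
                     if all(e.get(k) == v for k, v in attributes.items()))
    total = sum(counts.values()) + prior * len(outcomes)
    return {o: (counts[o] + prior) / total for o in outcomes}

def rating(distribution, value_per_outcome={"success": 1.0, "failure": 0.0}):
    """Derive a scalar rating from the trust distribution in a subsequent step."""
    return sum(p * value_per_outcome[o] for o, p in distribution.items())

print(query_trust(EXPERIENCES, partner="robot_a"))  # trust in one partner
print(query_trust(EXPERIENCES, daytime="night"))    # trust regarding a setting
print(rating(query_trust(EXPERIENCES, partner="robot_a")))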

To realise this functionality, I propose algorithms that combine clustering, learning, logic and probability theory in a relational dynamic Bayesian network (e.g. Manfredotti, 2009). They are based on the algorithms in Xu, 2007 for static relational Bayesian networks and the algorithms in Van Gael, 2011 for infinite hidden Markov models.
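As a much simplified illustration of the time-varying behaviour aspect, the following sketch tracks a partner's hidden behaviour state with a plain two-state forward filter. The states, probabilities and transition matrix are invented for the example; the actual algorithms in Chapter 7 rest on relational dynamic Bayesian networks and infinite hidden Markov models rather than on a fixed model like this.

# Minimal forward filter over two hypothetical behaviour states of a partner.
STATES = ("intact", "defective")
TRANSITION = {          # P(next state | current state)
    "intact":    {"intact": 0.95, "defective": 0.05},
    "defective": {"intact": 0.10, "defective": 0.90},
}
EMISSION = {            # P(observed cooperation outcome | state)
    "intact":    {"success": 0.9, "failure": 0.1},
    "defective": {"success": 0.2, "failure": 0.8},
}

def update_belief(belief, outcome):
    """One step of Bayesian filtering: predict the possible state change, then
    weight each state by how well it explains the observed outcome."""
    predicted = {s: sum(belief[r] * TRANSITION[r][s] for r in STATES) for s in STATES}
    weighted = {s: predicted[s] * EMISSION[s][outcome] for s in STATES}
    norm = sum(weighted.values())
    return {s: w / norm for s, w in weighted.items()}

belief = {"intact": 0.5, "defective": 0.5}
for outcome in ["success", "success", "failure", "failure", "failure"]:
    belief = update_belief(belief, outcome)
    print(outcome, {s: round(p, 2) for s, p in belief.items()})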

For the evaluation, the Enfident Model is applied to the scenario of cooperating cognitive vehicles. This scenario features a whole "society" of self-organising systems. Since trust addresses a social problem, an evaluation with a realistic technical society fits best here. To my knowledge, such an evaluation is unique in the literature and was a complex undertaking.

1.4  Organisation

The organisation of this dissertation uses a methodology that follows the phases of a systematic engineering process with use cases, requirements, design, implementation and testing. At the same time, the text is organised in two parts: a generic and an application-specific part. To avoid duplication of text, some phases of the above process are detailed in one part or the other only, as described in the following.

Problem definition and use cases. Chapter 1 introduces the problem and sketches application scenarios. Chapter 2 then compiles views on trust from various fields to find a definition of trust and related terms for this dissertation. Those views and the definitions further clarify the problem. A comprehensive description of a single application together with use cases can be found in Chapter 8.

Requirements. Chapter 4 presents the requirements. They are based on a review of the socio-scientific literature on interpersonal trust in Chapter 3 and of the technical literature on trust in Chapter 4. My own considerations complement them.

Design. The requirements lead to an application- and implementation-independent design of a trust mechanism: the Enfident Model (Chapter 6). Chapters 4 and 6 together show that the Enfident Model serves as a framework to analyse existing technical trust algorithms and to design new ones. The preceding Chapter 5 introduces the notation of some mathematical tools that are used throughout the remainder of this document.

Implementation. Chapter 7 proposes implementation techniques for the Enfident Model. These techniques originate from statistical relational learning and are just implementation examples, because other techniques seem reasonable as well. Chapter 7 marks a first step towards a concrete algorithm. However the attributes are still unknown; they depend on the application. Chapter 8 then applies the model to a specific scenario. In this step, attributes can be identified and the algorithms can be completed.

Test. Chapter 8 describes the evaluation method. It introduces the application scenario of cognitive vehicles that cooperate through a vehicular network and defines the simulation environment. The evaluation results and the discussion are combined in Chapter 9, but separated in the subsections. In this way, one subject can be evaluated and discussed in one place, while the reader can still distinguish the results and their discussion.

Chapter 10 summarises the dissertation. For this purpose, it also relates the Enfident Model back to selected findings from social sciences. Finally it points out directions for future research.

2  Clarifying the Concept of Trust

Trust is a term of everyday speech. People know it and have formed it during their integration into their linguistic environment. As a consequence, the meaning of the term varies between individuals – but also between researchers on trust. Various disciplines investigate trust, and even within a single field, people have different understandings of what trust is. In contrast, a central term of a scientific paper should have a clearly delimited meaning.

As a consequence, I introduce conceptualisations of trust from different disciplines in this chapter. Because trust is primarily associated with humans, the view of social scientists is discussed first. Because interpersonal trust serves as a prototype for the trust concept in other disciplines, it is discussed more comprehensively than the other trust concepts.

Interpersonal trust is a mechanism that has not been invented for a special aim, but simply found to be there. Therefore some scientists have argued about its purposes. Their considerations are introduced in Section 2.2. Some of these purposes address the same problem as that mentioned in the introduction. This is the reason why I speak of trust between technical systems: This technical trust mechanism should provide a similar functionality as interpersonal trust, although both mechanisms might work differently.

Sections 2.3 and 2.4 cover the concept of trust in the technical fields of cryptology and multi-agent systems. Finally, Section 2.5 introduces a definition of trust between technical systems in the form in which it underlies the remainder of this dissertation.

2.1  Interpersonal Trust

In the literature, many authors choose their own definition of interpersonal trust. Often these definitions are operationalisations with only limited applicability (Narowski, 1974). In order to represent a construct that is subject to investigation, the concept must describe something observable. These observable criteria then constitute an operationalisation of the term.

This section describes the concept of interpersonal trust (German: zwischenmenschliches Vertrauen or interpersonales Vertrauen) as an attempt to integrate considerations from different authors. To avoid just another new definition of the concept, that of Kassebaum (2004) is adopted. It integrates many definitions from the literature. In particular, it incorporates the affective, behavioural and cognitive components of trust; many other authors considered only some of them (Narowski, 1974, p. 125). However, it is hardly possible to come to a common understanding of trust between you as the reader and me as the author within three sentences. For this reason, I highlight key aspects of the definition afterwards.

“Interpersonal trust is an expectation about a future behaviour of another person and an accompanying feeling of calmness, confidence, and security depending on the degree of trust and the extent of the associated risk. That other person shall behave as agreed, not agreed but loyal, or at least according to subjective expectations, although she/he has the freedom and choice to act differently, because it is impossible or voluntarily unwanted to control her/him. That other person may also be perceived as a representative of a certain group.” (Freely translated from Kassebaum, 2004, p. 21)

Most parts of the definition describe the so-called trust situation, in which someone reasons about the behaviour of another one. According to the definition, both persons may work tightly together or be loosely coupled. This is the wide understanding of cooperation that underlies this dissertation, as already mentioned in the introduction. The term cooperation is still chosen because it emphasises the relational aspect of trust between two systems.

Figure 2.1: Key aspects of the interpersonal trust definition. A trust situation involves an object to trust, the trusted person, and uncertainty about a trust subject in the future. The trusting person forms an expectation about the outcome of the trust situation. For some authors, the possible outcomes need to involve risk, for others they just may do so. The formed trust attitude can in the end result in actions (the behavioural component), feelings (the affective component) and thinking (the cognitive component).

But Kassebaum goes beyond the definition of a trust situation. He also emphasises that interpersonal trust is an expectation and a feeling. As an attitude, trust expresses itself in affect, behaviour and cognition. The affective component can be considered one difference between interpersonal trust and trust between technical systems.

In the following, key aspects of the definition are detailed and discussed with regard to the literature. Figure 2.1 visualises them.

The trusted person. Interpersonal trust involves two parties who interact with each other: On the one hand, there is the person who trusts, ego, the trustor (German: Vertrauender or Treugeber). In this document, I use the name Paula (P) for this person in many examples. On the other hand, there is the person who is trusted, alter, the trustee (German: Vertrauensperson or Treuhänder). I name this person Oliver (O). In the case that mutual trust develops over time in many interactions, both parties are ego and alter at the same time.

As part of the interaction, ego judges alter to be trustworthy or untrustworthy. Such a judgement about alter’s traits and motives is called attribution in social psychology. Studies have shown that the attribution process is very subjective (Forgas, 1985, p. 77). This is a basic finding that should be kept in mind when thinking about interpersonal trust.

The actor can perceive the other person as an individual or as a representative of a specific group. For example, one trusts a policeman in a dangerous situation because this person holds the role of a policeman, not because this person is trusted as a known and maybe familiar person. This kind of trust in the role of the other is called role trust (German: Rollenvertrauen) by Strasser and Voswinkel (1997). In showing trust in the role, the trust in the abstract system of the police becomes practical. Therefore a person can have trust in the working of a system. Luhmann (1979, Chap. 7) calls this type of trust system trust (German: Systemvertrauen). It is important for a complex society with a high degree of division of labour. Trust can be established between two persons who are unfamiliar with each other but act on behalf of a trusted system. Gennerich (2000, pp. 40–44) extends this concept to general social groups towards which a person can manifest a social identity. For example, fans of a soccer team form a community within which they trust each other to a certain extent. In contrast to system trust, Luhmann (Chap. 6) calls the trust in an individual – which is mostly based on familiarity – personal trust (German: persönliches Vertrauen).

Note that the object of trust can also be a thing or oneself (self-confidence). These forms of trust are out of the scope of this thesis, as they are not referred to as interpersonal trust.

Lack of control, complexity and uncertainty. The trusted person must be free to some extent to behave in a trustworthy or untrustworthy way (Kee and Knox, 1970). This freedom may be forced by the situation or voluntarily given by the trustor. From the point of view of the trusting person, it is a lack of control over the situation that forces her to trust.

For Luhmann, this is an important feature that characterises that kind of complexity the trust mechanism addresses: It is “that complexity which enters the world in consequence of the freedom of other human beings” (Luhmann, 1979, p. 30).

The lack of control can also be understood as a lack of knowledge. Trust is a "middle state between knowing and not-knowing" about another person (Simmel, 1968, p. 263). Luhmann (1979, Chap. 2) details this in the following way. If Paula knows how Oliver will act and how the cooperation will end up, she can make a rational decision and need not trust. If she knows nothing about the specific problem, she cannot trust but only hope. The trust decision forces the trustor to choose one out of the many possible scenarios the future offers. Altogether, the lack of control and knowledge results in an increased uncertainty about the future and, thus, in an increased complexity that is inherent in a trust situation. It comes from the trusted person being there and free to act. Trust is a mechanism to cope with this uncertainty.

Some authors think that this mechanism is an irrational process only partly based on clear evidence. It incorporates some rational decision calculus about the uncertainty but deliberately goes beyond that. "Trust always extrapolates from the available evidence" (Luhmann, 1979, p. 26). Kee and Knox speak of a "subjective probability" and an inner, not rational "certainty or uncertainty about O's trustworthiness" (1970, p. 359). This irrational process is driven by wishful thinking (Koller, 1997; Oswald, 1997). Section 3.4.3 summarises some findings about these irrational aspects of interpersonal trust.

In contrast to this, users expect a technical system to act predictably and rationally. So while both persons and technical systems need trust to handle the uncertainty, the way they do so might be different.

Subject of the trust situation and expectation. Despite the uncertainty, Paula must still act, either by relying or by not relying on Oliver. For this, she forms an expectation about the future. Burt and Knez propose “Trust is anticipated cooperation” (1995, p. 257) as a compact definition of interpersonal trust. Luhmann emphasises the anticipation of the future, as well: “To show trust is to anticipate the future. It is to behave as though the future were certain” (1979, p. 10). The future consists of many possible scenarios; only one can become present – a process of complexity reduction. Someone who trusts chooses from all the possibilities of future presents. With this choice, the trusting person simplifies her internal future.

At the same time, Paula looks back into the past too. She uses prior experiences to form an expectation about the future. This is detailed below.

The expectation also specifies what to expect: the cooperation subject of the trust situation. "Trust therefore always bears upon a critical alternative" (Luhmann, 1979, p. 24). Note that this statement already points to the proposition of Requirement 2 that a trust mechanism should output a probability distribution or a belief mass distribution over all possible future worlds (see page 79 of this document).

Paula is only able to build up a clear expectation for one of the future worlds if Oliver acts predictably. So the attribution of predictability and consistency to Oliver is a key requirement for establishing trust (Gennerich, 2000; Rempel et al., 1985). Paula takes her collected experiences from the past and transforms them into an expectation for the future. Trust is thus based on social learning (Blomqvist, 1997, pp. 280 and 283).

Risk. Luhmann adds the restriction that not every expectation is trust-related. Expectations of trust are "only those in the light of which one commits one's own actions and which if unrealized will lead one to regret one's behaviour" (Luhmann, 1979, p. 25). Thus the individual must have some interest in the outcome of the trust situation. This interest corresponds to a value. And because of the uncertainty, the value is at risk. In addition to the expectation above, risk incorporates a value arising from the trustor's own interest.

Many authors support this restriction. Some others reject it, though. For example, Jones (2002), a researcher from the field of informatics, criticises: "While it is true to say that a goal-component of this sort is often present, this is by no means always so. For example, x might trust y to pay his (y's) taxes […], even though it is not a goal of x that y pays" (p. 229). He regards trust as an expectation towards another one without the need for own interest. Thus a trust situation may or may not involve risk. Figure 2.1 depicts this with the additional arrow that bypasses the term risk. Note that the concept of interpersonal trust can only be observed. In contrast to this, trust between technical systems is designed. So whether to include the risked value in the trust computation is a design decision. It is discussed in Section 2.5.

Of what kind is the risked resource? It may be a material resource resulting in a direct financial harm, but also time, effort, and trouble. Rempel et al. give a couple of examples mostly relevant in intimate relationships: "[…] trust involves a willingness to put oneself at risk, be it through intimate disclosure, reliance on another's promises, sacrificing present rewards for future gains, and so on" (1985, p. 96). Gennerich (2000) regards one's own identity as a resource that is always at risk, sometimes more, sometimes less. I detail this – in my opinion interesting – thought in Section 2.2.

The harm arises if the other one acts in an untrustworthy way. If he fulfils the trust, though, the trustor benefits from that. Examples of the benefit are future reciprocity in the relationship, health when going to the doctor, or a monetary benefit when accepting a "good deal". The benefit is also associated with a subjective probability. Both together form a positive risk or chance. Some authors like Luhmann (1979, Chap. 4) require that the perceived risk must be larger than the perceived chance in a trust situation; otherwise the decision is a more rational one.