Safety of Computer Architectures
Description

It is currently quite easy for students or designers/engineers to find very general books on the various aspects of safety, reliability and dependability of computer system architectures, and partial treatments of the elements that comprise an effective system architecture. It is not so easy to find a single source reference for all these aspects of system design. The purpose of this book, therefore, is to present, in a single volume, a full description of all the constraints (including legal contexts around performance, reliability norms, etc.) and examples of architectures from various fields of application, including: railways, aeronautics, space, automobile and industrial automation. The content of the book is drawn from the experience of numerous people who are deeply immersed in the design and delivery (from conception to test and validation), safety (analysis of safety: FMEA, HA, etc.) and evaluation of critical systems. Real-world industrial applications are handled in such a way as to avoid problems of confidentiality, which allows for the inclusion of new, useful information (photos, architecture plans/schematics, real examples).




Table of Contents

Introduction

Chapter 1. Principles

1.1. Introduction

1.2. Presentation of the basic concepts: faults, errors and failures

1.3. Safe and/or available architecture

1.4. Resetting a processing unit

1.5. Overview of safety techniques

1.6. Conclusion

1.7. Bibliography

Chapter 2. Railway Safety Architecture

2.1. Introduction

2.2. Coded secure processor

2.3. Other applications

2.4. Regulatory and normative context

2.5. Conclusion

2.6. Bibliography

Chapter 3. From the Coded Uniprocessor to 2oo3

3.1. Introduction

3.2. From the uniprocessor to the dual processor with voter

3.3. CSD: available safety computer

3.4. DIVA evolutions

3.5. New needs and possible solutions

3.6. Conclusion

3.7. Assessment of installations

3.8. Bibliography

Chapter 4. Designing a Computerized Interlocking Module: a Key Component of Computer-Based Signal Boxes Designed by the SNCF

4.1. Introduction

4.2. Issues

4.3. Railway safety: fundamental notions

4.4. Development of the computerized interlocking module

4.5. Conclusion

4.6. Bibliography

Chapter 5. Command Control of Railway Signaling Safety: Safety at Lower Cost

5.1. Introduction

5.2. A safety coffee machine

5.3. History of the PIPC

5.4. The concept basis

5.5. Postulates for safety requirements

5.6. Description of the PIPC architecture

5.7. Description of availability principles

5.8. Software architecture

5.9. Protection against causes of common failure

5.10. Probabilistic modeling

5.11. Summary of safety concepts

5.12. Conclusion

5.13. Bibliography

Chapter 6. Dependable Avionics Architectures: Example of a Fly-by-Wire System

6.1. Introduction

6.2. System breakdowns due to physical failures

6.3. Manufacturing and design errors

6.4. Specific risks

6.5. Human factors in the development of flight controls

6.6. Conclusion

6.7. Bibliography

Chapter 7. Space Applications

7.1. Introduction

7.2. Space system

7.3. Context and statutory obligation

7.4. Specific needs

7.5. Launchers: the Ariane 5 example

7.6. Satellite architecture

7.7. Orbital transport: ATV example

7.8. Summary and conclusions

7.9. Bibliography

Chapter 8. Methods and Calculations Relative to “Safety Instrumented Systems” at TOTAL

8.1. Introduction

8.2. Specific problems to be taken into account

8.3. Example 1: 2/3 system modeled by fault trees

8.4. Example 2: 2/3 system modeled by the stochastic Petri net

8.5. Other considerations regarding HIPS

8.6. Conclusion

8.7. Bibliography

Chapter 9. Securing Automobile Architectures

9.1. Context

9.2. More environmentally-friendly vehicles involving more embedded electronics

9.3. Mastering the complexity of electronic systems

9.4. Safety concepts in the automotive field

9.5. Which safety concepts for which safety levels of the ISO 26262 standard?

9.6. Conclusion

9.7. Bibliography

Chapter 10. SIS in Industry

10.1. Introduction

10.2. Safety loop structure

10.3. Constraints and requirements of the application

10.4. Analysis of a safety loop

10.5. Conclusion

10.6. Bibliography

Chapter 11. A High-Availability Safety Computer

11.1. Introduction

11.2. Safety computer

11.3. Applicative redundancy

11.4. Integrated redundancy

11.5. Conclusion

11.6. Bibliography

Chapter 12. Safety System for the Protection of Personnel in the CERN Large Hadron Collider

12.1. Introduction

12.2. LACS

12.3. LASS

12.4. Functional safety methodology

12.5. Test strategy

12.6. Feedback

12.7. Conclusions

12.8. Bibliography

Glossary

List of Authors

Index

First published 2010 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc. Adapted and updated from Sécurisation des architectures informatiques published 2009 in France by Hermes Science/Lavoisier © LAVOISIER 2009

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2010

The rights of Jean-Louis Boulanger to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Sécurisation des architectures informatiques. English

Safety of computer architectures / edited by Jean-Louis Boulanger.

p. cm.

Includes bibliographical references and index.

ISBN 978-1-84821-197-1

1. Computer architecture. 2. Computer systems--Reliability. 3. Computer security. 4. Avionics--Safety measures. I. Boulanger, Jean-Louis. II. Title.

QA76.9.A73S4313 2010

005.8--dc22

2010016489

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-197-1

Introduction

In recent years, the use of computers, and their incorporation into systems of varying complexity, has increased steadily. This evolution affects products of daily life (household appliances, automobiles, etc.) as well as industrial products (industrial control, medical devices, financial transactions, etc.).

The malfunction of these systems can have a direct or indirect impact on physical integrity (injury, pollution, alteration of the environment) and/or on the lives of people (users, the surrounding population, etc.), or on the functioning of an organization. Moreover, industrial processes are becoming increasingly automated. All of these systems are therefore subject to dependability requirements.

Today, dependability has become a general requirement, no longer a concern confined to high-risk domains such as the nuclear or aerospace industries, much as productivity has gradually imposed itself on most industrial and technological sectors.

Dependable systems must protect against failures that may have disastrous consequences for people (injury, death), for a company (brand image, financial losses), and/or for the environment.

Systems incorporating “programmed” elements involve two types of elements: hardware elements (computing unit, central processing unit (CPU), memory, bus, field programmable gate array (FPGA), digital signal processor (DSP), programmable logic controller, etc.) and software elements (program, library, operating system, etc.). In this book, we focus on the safety of the hardware elements.

Where the severity and/or frequency associated with the risks is very high, the system is said to be “critical”. Such “critical” systems are subjected to evaluations (assessment of conformity to standards) and/or certifications (evaluation leading to a certificate of conformity to a standard). This work is carried out by teams that are independent of the development process.

This book aims to present the principles of securing computer architectures through the presentation of tangible examples.

In Chapter 1, the overall set of techniques (diversity, redundancy, recovery, encoding, etc.) for securing the hardware elements of an architecture is presented.
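
To make the redundancy idea concrete before the detailed chapters, here is a minimal, hypothetical Python sketch of a 2-out-of-3 (2oo3) vote; the function name and error handling are ours, not taken from the book. Three processing units compute the same output, and the voter accepts a value only if at least two of them agree.

def vote_2oo3(a, b, c):
    # Accept an output value only when at least two of the three
    # redundant channels agree on it.
    if a == b or a == c:
        return a
    if b == c:
        return b
    # No majority: the divergence is detected and the system can
    # move to its fallback (safe) state.
    raise RuntimeError("2oo3 vote failed: no two channels agree")

# A single faulty channel is outvoted by the two correct ones.
assert vote_2oo3(42, 42, 17) == 42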

For the railway transport field, Chapters 2, 3, 4, 5 and 11 present the applicable standards (CENELEC EN 50126, EN 50128, and EN 50129) as well as tangible examples (SACEM, SAET-METEOR, CSD, PIPC and the DIGISAFE XME architecture).

Chapters 6 and 7 cover the fields of aeronautics and space through three well-known examples: aircraft from the AIRBUS company, satellites, and the ARIANE 5 launcher. The aviation field was one of the first to establish a reference framework, currently composed of the DO 178 standard for embedded software development, a regulatory framework consisting of the FAR/JAR regulations applicable to all aircraft manufacturers, and a set of methodological guides produced by the aviation community, ARP 4754 and ARP 4761. This framework has recently been complemented by the DO 254 standard, which applies to digital components such as FPGAs and ASICs. The DO 278 standard applies to ground software.

For automation-based systems, Chapter 8 presents examples of installations in the oil industry. The IEC 61508 standard allows safety objectives (safety integrity levels, SIL) to be defined and controlled; Chapter 8 is an opportunity to revisit this standard and its use. It is supplemented by Chapter 10, which summarizes the implementation of safety instrumented systems (SIS) in industry.

It should be noted that Chapter 12 provides an example of the implementation of a rather interesting automation-based system: the Large Hadron Collider (LHC).

Finally, in Chapter 9 we present examples from the automotive field, which is currently evolving. This evolution will result in the establishment of an automotive variant of the IEC 61508 standard, called ISO 26262. This standard takes up the safety level concept (here called the automotive safety integrity level, or ASIL) and identifies recommended activities and methodologies for achieving a given safety objective. The automotive field is driven by objectives of several kinds (cost, space, weight, volume, deadlines, safety), which requires the establishment of new solutions (see Chapter 9).

It is hoped that this book will enlighten the reader as to the complexity of the systems in everyday use and the difficulty of achieving a dependable system. This encompasses not only the need to produce a dependable system, but also the need to guarantee its safety throughout the operational period, which can range from a few days to over 50 years.

Chapter 1

Principles

1.1. Introduction

The objective of this chapter is to present the different methods for ensuring the functional safety of a hardware architecture. We speak of hardware architecture because safety can rest on one or more processing units. We deliberately leave aside the software aspects.

1.2. Presentation of the basic concepts: faults, errors and failures

1.2.1. Obstruction to functional safety

As indicated in [LAP 92], the functional safety of a complex system can be compromised by three types of incidents: failures, faults, and errors. The system elements are subjected to failures, which can potentially result in accidents.

DEFINITION 1.1: FAILURE. As indicated in the IEC 61508 [IEC 98] standard, a failure is the termination of the ability of a functional unit to accomplish a specified function. Since the completion of a required function necessarily excludes certain behaviors, and since certain functions can be specified in terms of behaviors to avoid, the occurrence of a behavior to avoid is also a failure.

From the previous definition follows the need to define the concepts of normal (safe) and abnormal (unsafe) behavior, with a clear boundary between the two.

Figure 1.1. Evolution of the state of the system

Figure 1.1 shows a representation of the different states of a system (correct, incorrect) and the possible transitions between these states. The system states can be classified into three families:

correct states: there is no dangerous situation;

incorrect safe states: a failure was detected and the system is in a safe state;

incorrect states: a dangerous, uncontrolled situation; accidents are potentially reachable.

When the system reaches a fallback state, there may be a partial or complete shutdown of service. The conditions of fallback may allow a return to the correct state after a recovery action.
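
As an illustration, the states and transitions of Figure 1.1 can be summarized by a small state machine. The following Python sketch is hypothetical (the state and function names are ours): a detected failure leads to the safe fallback state, an undetected one to the dangerous state, and a recovery action may return the system from fallback to correct.

from enum import Enum, auto

class SystemState(Enum):
    CORRECT = auto()              # correct state: no dangerous situation
    INCORRECT_SAFE = auto()       # failure detected: fallback (safe) state
    INCORRECT_DANGEROUS = auto()  # failure undetected: accidents reachable

def on_failure(state: SystemState, detected: bool) -> SystemState:
    # Only a detected failure triggers the transition to the fallback state.
    if state is SystemState.CORRECT:
        return SystemState.INCORRECT_SAFE if detected else SystemState.INCORRECT_DANGEROUS
    return state

def on_recovery(state: SystemState) -> SystemState:
    # A recovery action may return the system to the correct state.
    return SystemState.CORRECT if state is SystemState.INCORRECT_SAFE else state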

Failures can be random or systematic. A random failure occurs unpredictably and is the result of damage affecting the hardware aspects of the system. In general, random failure can be quantified because of its nature (wear, aging, etc.).

A systematic failure is linked deterministically to a cause. The cause of the failure can only be eliminated by a reapplication of the production process (design, manufacture, documentation) or by recovery procedures. Given its nature, a systematic failure is not quantifiable.
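
Because random failures can be quantified, they are commonly modeled with a constant failure rate λ, giving the survival probability R(t) = exp(−λt). The short calculation below uses illustrative numbers of our own choosing, not figures from the book.

import math

lam = 1e-6    # assumed constant failure rate: 10^-6 failures per hour
t = 10_000.0  # mission time in hours

reliability = math.exp(-lam * t)  # probability of no random failure up to time t
print(f"R({t:.0f} h) = {reliability:.5f}")             # ~0.99005
print(f"failure probability = {1 - reliability:.5f}")  # ~0.00995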

A failure (definition 1.1) is the external, observable manifestation of an error (the IEC 61508 [IEC 98] standard speaks of an anomaly).

Despite all the precautions taken during the production of a component, it may be subject to design flaws, verification flaws, usage defects, operational maintenance defects, etc.

DEFINITION 1.2: ERROR. An error is the consequence of an internal defect arising during the execution of the product (for example, an erroneous variable or program state).

The notion of fault derives from that of defect: the fault is the cause of the error (e.g. a short-circuit, an electromagnetic disturbance, a design flaw).

DEFINITION 1.3: FAULT. A fault is a non-conformity inserted in the product (for example, an erroneous line of code).

In conclusion, it should be noted that confidence in the functional safety of a system may be compromised by the appearance of obstacles such as faults, errors, and failures.

Figure 1.2. Fundamental chain

Figure 1.2 shows the fundamental chain linking these obstacles: the occurrence of a failure may introduce a fault, which in turn may produce one or more errors; this (these) new error(s) may lead to the appearance of a new failure.

Figure 1.3. Propagation within a system

The link between the obstacles must be viewed throughout the entire system as shown in Figure 1.3.

The fundamental chain (Figure 1.2) can unfold within a single system (Figure 1.3), affecting the communication between components (sub-system, equipment, software, hardware), or across a system of systems (Figure 1.4), where the failure of one system generates a fault in the next.

Figure 1.4. Propagation in a system of systems

Figure 1.5 provides an example of how failures arise. As previously indicated, a failure is detected through the divergence of a system’s behavior from its specification. The failure appears at the boundary of the system because a series of internal errors has had an impact on the production of the outputs. In our case, the source of the errors is a fault in the embedded executable software. Such faults can be of three kinds: faults introduced by the programmer (bugs), faults introduced by the tools (generation of the executable, download methods, etc.), or hardware failures (memory failure, component short-circuit, external disturbance such as electromagnetic interference (EMC), etc.).
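
The chain of Figure 1.2 can be made tangible with a deliberately faulty fragment of code; the sketch below is our own illustration, not an example from the book. The fault is the erroneous constant written by the programmer, the error is the resulting wrong internal variable, and the failure is the divergence of the observable output from the specification.

def scale_by_ten_percent(value: float) -> float:
    """Specified function: return value * 0.10."""
    factor = 0.01            # FAULT: erroneous constant (a bug in the code)
    result = value * factor  # ERROR: the internal variable is now wrong
    return result            # FAILURE: the output diverges from the specification

# The failure is observable at the system boundary:
print(scale_by_ten_percent(200.0))  # specification expects 20.0; we get 2.0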
