Today, formal methods are widely recognized as an essential step in the design process of industrial safety-critical systems. In its more general definition, the term formal methods encompasses all notations having a precise mathematical semantics, together with their associated analysis methods, that allow description and reasoning about the behavior of a system in a formal manner. Growing out of more than a decade of award-winning collaborative work within the European Research Consortium for Informatics and Mathematics, Formal Methods for Industrial Critical Systems: A Survey of Applications presents a number of mainstream formal methods currently used for designing industrial critical systems, with a focus on model checking. The purpose of the book is threefold: to reduce the effort required to learn formal methods, which has been a major drawback for their industrial dissemination; to help designers to adopt the formal methods which are most appropriate for their systems; and to offer a panel of state-of-the-art techniques and tools for analyzing critical systems.
Page count: 454
Publication year: 2012
Table of Contents
COVER
TITLE PAGE
COPYRIGHT PAGE
FOREWORD
FOREWORD
PREFACE
CONTRIBUTORS
PART I: INTRODUCTION AND STATE OF THE ART
CHAPTER 1 FORMAL METHODS: APPLYING {LOGICS IN, THEORETICAL} COMPUTER SCIENCE
1.1 INTRODUCTION AND STATE OF THE ART
1.2 FUTURE DIRECTIONS
ACKNOWLEDGMENTS
PART II: MODELING PARADIGMS
CHAPTER 2 A SYNCHRONOUS LANGUAGE AT WORK: THE STORY OF LUSTRE
2.1 INTRODUCTION
2.2 A FLAVOR OF THE LANGUAGE
2.3 THE DESIGN AND DEVELOPMENT OF LUSTRE AND SCADE
2.4 SOME LESSONS FROM INDUSTRIAL USE
2.5 AND NOW . . .
CHAPTER 3 REQUIREMENTS OF AN INTEGRATED FORMAL METHOD FOR INTELLIGENT SWARMS
3.1 INTRODUCTION
3.2 SWARM TECHNOLOGIES
3.3 NASA FAST PROJECT
3.4 INTEGRATED SWARM FORMAL METHOD
3.5 CONCLUSION
ACKNOWLEDGMENTS
PART III: TRANSPORTATION SYSTEMS
CHAPTER 4 SOME TRENDS IN FORMAL METHODS APPLICATIONS TO RAILWAY SIGNALING
4.1 INTRODUCTION
4.2 CENELEC GUIDELINES
4.3 SOFTWARE PROCUREMENT IN RAILWAY SIGNALING
4.4 A SUCCESS STORY: THE B METHOD
4.5 CLASSES OF RAILWAY SIGNALING EQUIPMENT
4.6 CONCLUSIONS
CHAPTER 5 SYMBOLIC MODEL CHECKING FOR AVIONICS
5.1 INTRODUCTION
5.2 APPLICATION: THE RUNWAY SAFETY MONITOR
5.3 A DISCRETE MODEL OF RSM
5.4 DISCUSSION
PART IV: TELECOMMUNICATIONS
CHAPTER 6 APPLYING FORMAL METHODS TO TELECOMMUNICATION SERVICES WITH ACTIVE NETWORKS
6.1 OVERVIEW
6.2 ACTIVE NETWORKS
6.3 THE CAPSULE APPROACH
6.4 PREVIOUS APPROACHES ON ANALYZING ACTIVE NETWORKS
6.5 MODEL CHECKING ACTIVE NETWORKS WITH SPIN
6.6 CONCLUSIONS
CHAPTER 7 PRACTICAL APPLICATIONS OF PROBABILISTIC MODEL CHECKING TO COMMUNICATION PROTOCOLS
7.1 INTRODUCTION
7.2 PTAS
7.3 PROBABILISTIC MODEL CHECKING
7.4 CASE STUDY: CSMA/CD
7.5 DISCUSSION AND CONCLUSION
ACKNOWLEDGMENTS
PART V: INTERNET AND ONLINE SERVICES
CHAPTER 8 DESIGN FOR VERIFIABILITY: THE OCS CASE STUDY
8.1 INTRODUCTION
8.2 THE USER MODEL
8.3 THE MODELS AND THE FRAMEWORK
8.4 MODEL CHECKING
8.5 VALIDATING EMERGING GLOBAL BEHAVIOR VIA AUTOMATA LEARNING
8.6 RELATED WORK
8.7 CONCLUSION AND PERSPECTIVES
CHAPTER 9 AN APPLICATION OF STOCHASTIC MODEL CHECKING IN THE INDUSTRY: USER-CENTERED MODELING AND ANALYSIS OF COLLABORATION IN THINKTEAM®
9.1 INTRODUCTION
9.2 THINKTEAM
9.3 ANALYSIS OF THE THINKTEAM LOG FILE
9.4 THINKTEAM WITH REPLICATED VAULTS
9.5 LESSONS LEARNED
9.6 CONCLUSIONS
ACKNOWLEDGMENTS
PART VI: RUNTIME: TESTING AND MODEL LEARNING
CHAPTER 10 THE TESTING AND TEST CONTROL NOTATION TTCN-3 AND ITS USE
10.1 INTRODUCTION
10.2 THE CONCEPTS OF TTCN-3
10.3 AN INTRODUCTORY EXAMPLE
10.4 TTCN-3 SEMANTICS AND ITS APPLICATION
10.5 A DISTRIBUTED TEST PLATFORM FOR THE TTCN-3
10.6 CASE STUDY I: TESTING OF OPEN SERVICE ARCHITECTURE (OSA)/PARLAY SERVICES
10.7 CASE STUDY II: TESTING OF IP MULTIMEDIA SUBSYSTEM (IMS) EQUIPMENT
10.8 CONCLUSION
CHAPTER 11 PRACTICAL ASPECTS OF ACTIVE AUTOMATA LEARNING
11.1 INTRODUCTION
11.2 REGULAR EXTRAPOLATION
11.3 CHALLENGES IN REGULAR EXTRAPOLATION
11.4 INTERACTING WITH REAL SYSTEMS
11.5 MEMBERSHIP QUERIES
11.6 RESET
11.7 PARAMETERS AND VALUE DOMAINS
11.8 THE NGLL
11.9 CONCLUSION AND PERSPECTIVES
INDEX
Cover Photograph: Courtesy of Stefania Gnesi and Tiziana Margaria
Cover Design: John Wiley & Sons, Inc.
Copyright © 2013 by IEEE. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.
Library of Congress Cataloging-in-Publication Data:
Formal methods for industrial critical systems : a survey of applications / edited by Stefania Gnesi, Tiziana Margaria.
pages cm
Includes index.
ISBN 978-0-470-87618-3
1. Formal methods (Computer science) I. Gnesi, Stefania, 1954– II. Margaria-Steffen, Tiziana, 1964–
QA76.9.F67F654 2012
004.01'51–dc23
2012016775
FOREWORD
BY MIKE HINCHEY
The motivation for what many credit as being the forerunner to the first computer, as we know it, was a significant problem in the shipping industry in the early-to-late nineteenth century. Logarithmic tables, critical to this industry, published around that time, often contained simple—but significant—errors that resulted in lost ships, cargos, and lives. Babbage’s difference engine is generally regarded as embodying many of the concepts now standard in computer systems (including memory, programs, and even a design for a printer that would work on a principle similar to modern laser printers). The goal was to automate the printing of these tables used in the shipping industry, removing inaccuracies.
Correctness of the data being relied on for calculating position was crucial to the shipping industry. The very idea of “correctness” has plagued computer science from virtually its inception. Turing was concerned with the issue in the 1930s, before the first practical computers (as we now know them) were to come along. Computer pioneers such as Zuse and Wilkes recognized early that correctness would be an issue they would need to address.
Over the six decades or so during which we have had modern electronic computers, the related issues of dependability and reliability have arisen along with concerns over safety, security, performance, and many other issues. The field of formal methods (a term that, interestingly, itself predates the modern computer) has arisen to address correctness in software and hardware systems, along with the other related issues just referred to.
To those of us who are confirmed “formal methodists,” this has been a very welcome development, and we are pleased to see the significant developments in the field and the contributions that formal methods have made to improve our critical applications. Yet, in reality, formal methods are still not used as much in practice as we would like, being believed by many not to scale, to be prohibitively expensive, and to be too difficult to understand and use.
The formal methods community has responded by focusing on education, developing more usable notation and better (integrated) tool support, emphasizing particular aspects of a system rather than the entire system (so-called formal methods light), building user communities, and trying to encourage the use of formal methods in real-life applications. It is in critical applications that industrial adoption of formal methods has been greater. Interestingly, our definition of “critical” has evolved, to mean not just loss of life or property, or breaches of security, as a consequence of failure, but also in a business sense, meaning failure results in financial loss, loss of competitiveness, or loss of reputation.
Formal Methods in Industrial Critical Systems (FMICS) is the longest running Working Group of the European Research Consortium for Informatics and Mathematics (ERCIM), having run now since 1992. The working group consists of more than a dozen ERCIM partners, and dozens of other associated research partners and groups throughout Europe. It has focused on research on improving technologies based on formal methods and encouraging the application of formal methods in critical industrial applications through technology transfer.
This book highlights some examples of the excellent research that has been undertaken by this working group and its application in a number of critical industrial examples, in such domains as avionics, aerospace, and railway signaling (itself an industry that has become a major driver for formal methods technology). While the book addresses all aspects of formal methods from specification through implementation and verification, there is a clear emphasis on model checking, reflecting the significant advances in this area and successful industrial applications over the past decade or so.
The applications illustrate the appropriateness of the use of formal methods in industrially critical systems. While the authors are all experts in their respective fields, and we should not downplay the often significant hurdle that there can be to introducing formal methods into an organization, the message is simple: for particular domain applications and particular critical industries, formal methods are here to stay.
MIKE HINCHEY
Lero, Limerick
FOREWORD
BY ALESSANDRO FANTECHI and PEDRO MERINO
This book has a long story, which is part of the story of the working group on Formal Methods for Industrial Critical Systems (FMICS) inside the European Research Consortium for Informatics and Mathematics (ERCIM), the oldest active working group in this consortium.
The FMICS WG focuses on the development of formal verification techniques and leads activities related to verification and formal aspects of software, such as joint international projects and the series of annual FMICS workshops, begun in 1996. Moreover, several special issues of international journals have been published, collecting the best papers presented at the workshops.
These activities have promoted an ongoing scientific discussion on identifying the most efficient formal development and verification techniques, with a keen eye on their industrial applicability. Most of the members of the FMICS community have strong links with the industry and have thus directly contributed to the slow but constant introduction of formal methods in the development cycle of industrial critical systems witnessed in the last decade.
The idea of this book was born in a workshop held in Aix-les-Bains in 2004. The continuous evolution of formal methods and, in particular, the growing importance of model-checking techniques, due to the ever-increasing performance of their tools, as well as the recent emergence of model-based design, have made it particularly difficult to present an up-to-date account of the industrial application of formal methods in a book. So the idea of the book has kept shifting over the past years.
As the last coordinators of the FMICS working group, we are therefore grateful to this book’s editors, who have succeeded in capturing a collection of snapshots that document the continuous evolution of this research field, as well as the increasing application in the industrial production of software and computer-controlled systems. As such, we believe that this gallery is a witness to the success that formal methods are experiencing in a number of different domains.
ALESSANDRO FANTECHI AND PEDRO MERINO
FMICS Working Group Chairs
PREFACE
Nowadays, the necessity of formal methods as an essential step in the design process of industrial safety critical systems is widely recognized. In its more general definition, the term “formal methods” encompasses all notations that have precise mathematical semantics, together with their associated analysis methods, which describe the behavior of a system in a formal manner.
Many formal methods have emerged during the last few decades: declarative methods, process calculi for concurrent and mobile systems, and their related languages, to mention a few. Although the benefits of using these formal methods are undeniable, practical experience shows that each method is particularly adapted for handling certain aspects of a system. Therefore, the design of a complex industrial system ideally requires an expertise in several formal methods in order to describe and analyze different views of the system.
This book aims at providing a comprehensive presentation of the mainstream formal methods that are currently used for designing industrial critical systems. The purpose of this book is threefold: to reduce the learning effort of formal methods, which is a major drawback for their industrial dissemination; to help designers adopt the formal methods that are most appropriate for their systems; and to offer state-of-the-art techniques and tools for analyzing critical systems.
The book is organized into six parts. Part I presents the introductory chapter. Part II is devoted to modeling paradigms. Chapter 2 is on the development of the synchronous data-flow language LUSTRE and its industrial transfer inside the toolset SCADE; Chapter 3 is on the basic concepts of swarms, which are becoming prevalent in a variety of application domains: medical, bioinformatics, military/defense, surveillance, even Internet television broadcasting. The requirements of a formal method suitable for use with swarm-based systems are discussed.
Part III includes chapters on the use of formal methods and related tools in the development of applications in practical system domains. Chapter 4 is on transportation systems, where a survey about the current industrial usage of formal methods in railway signaling is presented, and Chapter 5 reports on the application of model-checking techniques to avionics applications.
Part IV, on telecommunication systems, begins with Chapter 6 on how to employ formal methods to increase the reliability of active services, in particular when aspects such as code mobility, routing information, a high degree of reconfiguration, interaction between services, or security policies are considered. Chapter 7 is on the applications of probabilistic model checking, in particular using probabilistic timed automata, to communication protocols, with the emphasis on an industrially relevant case study: the IEEE 802.3 (CSMA/CD) protocol.
Part V covers the Internet and online services. Chapter 8 shows how to use models first to describe and verify the single components of an online distributed decision system that is designed as a large collection of cooperating models. Automata learning is applied to find out the collective emerging behavior of the real system after implementation. Chapter 9 shows the application of model checking to verifying user awareness in a groupware system with a publish/subscribe notification service.
Part VI introduces the application of formal methods at runtime. In Chapter 10, the Testing and Test Control Notation version 3 (TTCN-3) is introduced and applied to testing web services. Chapter 11 reviews the essence of practical automata learning, its major challenges, variants, possible solutions, and illustrative case studies, in order to show how the theoretically well-studied active learning technology can be enhanced to become a powerful tool for practical system development.
We realize that this book cannot be an exhaustive representation of the application of formal methods in industrial applications, but we believe and hope that the readers may find in it some interesting hints that may be useful in promoting and stimulating the application of formal methods in practice.
STEFANIA GNESI AND TIZIANA MARGARIA
CONTRIBUTORS
GIANFRANCO CIARDO, Department of Computer Science and Engineering, University of California, Riverside, CA
MARÍA DEL MAR GALLARDO, Dpto. de Lenguajes y Ciencias de la Computacion, University of Málaga, Málaga, Spain
MARIE DUFLOT, LORIA, Equipe MOSEL/VERIDIS, Vandoeuvre-lès-Nancy, France
ALESSANDRO FANTECHI, Dipartimento di Sistemi e Informatica, Università degli Studi di Firenze, Florence, Italy; and Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo,” Consiglio Nazionale delle Ricerche, Pisa, Italy
WAN FOKKINK, Sectie Theoretische Informatica, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; and Specificatie en Verificatie van Embedded Systemen, CWI, Amsterdam, The Netherlands
STEFANIA GNESI, Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo,” Consiglio Nazionale delle Ricerche, Pisa, Italy
NICOLAS HALBWACHS, Vérimag, Grenoble, France
MIKE HINCHEY, Lero––the Irish Software Engineering Research Centre, University of Limerick, Limerick, Ireland
FALK HOWAR, Chair Programming Systems, TU Dortmund University, Dortmund, Germany
MARTA KWIATKOWSKA, Department of Computer Science, University of Oxford, Oxford, UK
DIEGO LATELLA, Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo,” Consiglio Nazionale delle Ricerche, Pisa, Italy
TIZIANA MARGARIA, Chair Service and Software Engineering, University of Potsdam, Potsdam, Germany
JESÚS MARTÍNEZ, Dpto. de Lenguajes y Ciencias de la Computacion, University of Málaga, Málaga, Spain
MIEKE MASSINK, Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo,” Consiglio Nazionale delle Ricerche, Pisa, Italy
PEDRO MERINO, Dpto. de Lenguajes y Ciencias de la Computacion, University of Málaga, Málaga, Spain
MAIK MERTEN, Chair Programming Systems, TU Dortmund University, Dortmund, Germany
ANGELO MORZENTI, Dipartimento di Elettronica e Informazione, Politecnico di Milano, Milan, Italy
JOHANNES NEUBAUER, Chair Programming Systems, TU Dortmund University, Dortmund, Germany
GETHIN NORMAN, Department of Computing Science, University of Glasgow, Glasgow, UK
DAVID PARKER, School of Computer Science, University of Birmingham, Birmingham, UK
SYLVAIN PEYRONNET, LRI, INRIA Université Paris-Sud, Orsay, France
CLAUDINE PICARONNY, LSV, CNRS & ENS de Cachan, Cachan, France
JAMES L. RASH, NASA Goddard Space Flight Center (Emeritus), Greenbelt, MD
CHRISTOPHER A. ROUFF, Near Infinity Corporation, Reston, VA
INA SCHIEFERDECKER, Fraunhofer FOKUS, Berlin, Germany
MAURIZIO SEBASTIANIS, Focus PLM srl, Ferrara, Italy
RADU I. SIMINICEANU, National Institute of Aerospace, Hampton, VA
JEREMY SPROSTON, Dipartimento di Informatica, Università degli Studi di Torino, Torino, Italy
BERNHARD STEFFEN, Chair Programming Systems, TU Dortmund University, Dortmund, Germany
MAURICE H. TER BEEK, Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo,” Consiglio Nazionale delle Ricerche, Pisa, Italy
GIANLUCA TRENTANNI, Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo,” Consiglio Nazionale delle Ricerche, Pisa, Italy
WALT F. TRUSZKOWSKI, NASA Goddard Space Flight Center (Emeritus), Greenbelt, MD
AMY K.C.S. VANDERBILT, Vanderbilt Consulting, Fairfax, VA
ALAIN-GEORGES VOUFFO-FEUDJIO, Fraunhofer FOKUS, Berlin, Germany
PART I
INTRODUCTION AND STATE OF THE ART
CHAPTER 1
FORMAL METHODS: APPLYING {LOGICS IN, THEORETICAL} COMPUTER SCIENCE
DIEGO LATELLA
ISTI-CNR, Pisa, Italy
Giving a comprehensive definition of formal methods (FMs) is a hard and risky task, since they are still the subject of ongoing research and several aspects of FMs, ranging from the foundational to the purely applicative, are in continuous change and evolution. Nevertheless, in order to fix the terms of discussion for the issues dealt with in this book, and only for the purpose of this book, by FMs we will mean all notations having a precise mathematical definition of both their syntax and semantics, together with their associated theory and analysis methods, that allow for describing and reasoning about the behavior of (computer) systems in a formal manner, assisted by automatic (software) tools. FMs play a major role in computer engineering and in the broader context of system engineering, where the interaction of machines with humans is also taken into consideration:
All engineering disciplines make progress by employing mathematically based notations and methods. Research on ‘formal methods’ follows this model and attempts to identify and develop mathematical approaches that can contribute to the task of creating computer systems (both their hardware and software components). [26]
The very origins of FMs go back to mathematical logic—or, more precisely, to that branch of mathematical logic that gave rise to logics in computer science (LICS) and theoretical computer science (TCS). An eminent example of these roots is the milestone work of Alan Turing [42], where key concepts and theoretical, intrinsic limitations of the algorithmic method—central to computer science in general and FMs in particular—are captured in precise terms. These issues of computability became one of the main subjects of scientific development in the field of automatic computation in the 1930s. Central to the scientific discussion of that period was the clarification of the key notion of formal system, with a clear separation of syntax and semantics and with the notion of a formal proof calculus defining computational steps [41]. Several fields of LICS and TCS have contributed to the development of the foundations of FMs, like language and automata theory, programming/specification language syntax and semantics, and program verification. More recently, solid contributions to the specification and analysis of systems—in particular concurrent/distributed systems—have been provided. These contributions range from the definition of specific notations, or classes of notations, to the development of solid mathematical and/or logic theories supporting such notations, to the development of techniques and efficient algorithms for the automatic or semiautomatic analysis of models of systems or of their requirements, and to the development of reliable automatic tools that implement such algorithms. In the following, we briefly describe some of the most relevant examples of (classes of) notations equipped with solid theories—process algebras; Petri nets; state-based approaches like VDM, Z, and B; and temporal logics—and analysis techniques—model checking and theorem proving.
We will also recall abstract interpretation, which, although not bound to any particular notation, constitutes another key contribution to FMs as a general theory of the approximation of semantics. We underline that what follows is not intended to be a comprehensive treatment of FMs, which would require much more space, and that it offers an overview of the field as of the time of writing:
Notations and Supporting Theories
Process Algebras
Process algebra [30] is an algebraic approach to the study of concurrent processes. Its tools are algebraic languages for the specification of processes and the formulation of statements about them, together with congruence laws on which calculi are defined for the verification of these statements. Typical process algebraic operators are sequential, nondeterministic, and parallel composition. Some of the main process algebras are CCS, CSP, and ACP.
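The operators just named can be illustrated with a toy encoding of process terms. This is a hypothetical Python sketch, not the notation of CCS, CSP, or ACP: `prefix` models sequential (action-prefix) composition, `choice` models nondeterministic composition, and parallel composition is omitted for brevity.

```python
# Minimal, illustrative encoding of CCS-style process terms (hypothetical names).
NIL = ("nil",)                 # the inactive process 0

def prefix(action, cont):      # sequential composition: a.P
    return ("prefix", action, cont)

def choice(p, q):              # nondeterministic composition: P + Q
    return ("choice", p, q)

def step(p):
    """Return the set of (action, successor) pairs the process can perform."""
    if p[0] == "nil":
        return set()
    if p[0] == "prefix":
        return {(p[1], p[2])}
    if p[0] == "choice":
        return step(p[1]) | step(p[2])

# A one-shot vending machine: coin.(tea.0 + coffee.0)
vm = prefix("coin", choice(prefix("tea", NIL), prefix("coffee", NIL)))
print({a for a, _ in step(vm)})  # {'coin'}
```

After performing `coin`, the successor process offers a nondeterministic choice between `tea` and `coffee`, exactly the branching behavior that bisimulation-style congruences are designed to compare.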
Petri Nets
Petri nets were originally proposed by Petri [33] for describing interacting finite-state machines and are constituted by a finite set of places, a finite set of transitions, a flow relation from places to transitions and from transitions to places, and weights associated to the flows. A Petri net can be given an appealing graphical representation, which makes specifications intuitively understandable. A state of a Petri net is given by marking its places, that is, associating a number of tokens to places. Rules are defined to fire a transition by moving tokens, hence changing the state of the net. Many extensions and variations of Petri nets have been proposed, for example, adding values to tokens [18], time [19, 29], or probabilities to transitions [3].
VDM, Z, B
The first widely used formal specification languages resorted to the use of traditional mathematical concepts such as sets, functions, and first-order predicate logic. VDM [5] was proposed mainly for describing the denotational semantics of programming languages. Z [38] used the same concepts for defining types, which describe the entities and the state space of the system of interest. Properties of the state space are described in Z by means of invariant predicates. State transitions are expressed by relations between inputs and outputs of the operations of the models. The B method [1] adds behavioral specifications by means of abstract machines. The language is complemented with a refinement-based development method, which includes the use of theorem proving for maintaining the consistency of refinements. Reference 2 is a nice, informal introduction to the key concepts of system modeling in a mathematical framework as above.
Temporal Logics
Temporal logic [13] is a special type of modal logic, and it provides a formal system for qualitatively describing and reasoning about how the truth values of assertions change over system computations. Typical temporal logic operators include sometimes P, which is true now if there is a future moment in time in which P becomes true, and always P, which is true now if P is true at all future moments. Specific temporal logics differ in the model of time they use (e.g., linear time vs. branching time) and/or the specific set of temporal operators they provide.
Analysis Techniques and Tools
Model Checking
Model checking is a verification technique in which efficient algorithms are used to check, in an automatic way, whether a desired property holds for a (usually finite) model of the system, typically a state-transition structure, like an automaton [4, 7]. Very powerful logics have been developed to express a great variety of system properties, and high-level languages have been designed to specify system models. Examples of the former are various variants of temporal logic, and notable examples of the latter are process algebras, imperative languages, and graphical notations. Prominent examples of model checkers are SMV [9], SPIN [22], and TLC [28].
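The core idea, exhaustively exploring a finite state-transition structure, can be sketched for the simplest class of properties, safety ("a bad state is never reached"). This is an illustrative toy, far removed from SMV, SPIN, or TLC; all names here are ours:

```python
from collections import deque

# Explicit-state safety checking: breadth-first exploration of all states
# reachable from the initial one, reporting whether a "bad" state occurs.
def check_safety(initial, successors, is_bad):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if is_bad(state):
            return False          # property violated: bad state reachable
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True                   # exhaustive search found no bad state

# Toy model: a counter modulo 4; reaching the value 10 is impossible.
print(check_safety(0, lambda s: [(s + 1) % 4], lambda s: s == 10))  # True
```

Real model checkers add counterexample reconstruction, richer temporal properties, and symbolic or partial-order techniques to cope with state-space explosion, but the reachability loop above is the conceptual starting point.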
Automated Theorem Proving
Automated theorem proving is the process of getting a computer to agree that a particular theorem is true. The theorems of interest may be in traditional mathematical domains, or they may be in other fields such as digital computer design [37, 44]. When used for system validation, the system specification S and its realization R are formulas of some appropriate logic. Checking whether the realization satisfies the specification amounts to verifying the validity of the formula S ⇒ R, and this can be done—at least partially, and sometimes completely—automatically by a computer program, the theorem prover. Classical theorem provers are PVS [39], HOL [43], and Nqthm [6].
Abstract Interpretation
Abstract interpretation is a theory of the approximation of semantics of (programming or specification) languages. It applies both to data and to control. It formalizes the idea that the semantics can be more or less precise according to the considered level of observation. If the approximation is coarse enough, the abstraction of a semantics yields a less precise but computable version. Because of the corresponding loss of information, not all questions can be answered, but all answers given by the effective computation of the approximate semantics are always correct [12].
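The classic textbook instance of this idea is the sign domain: concrete integers are approximated by their sign, and abstract operations stay sound while losing precision. The following Python sketch is illustrative only (the encoding and function names are ours):

```python
# Sign abstraction: every integer is approximated by one of four values.
NEG, ZERO, POS, TOP = "-", "0", "+", "?"   # TOP = "unknown sign"

def alpha(n):
    """Abstraction function: map a concrete integer to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_mul(a, b):
    """Abstract multiplication: exact on signs."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    """Abstract addition: POS + NEG is unknown (the loss of information)."""
    if a == b or b == ZERO:
        return a
    if a == ZERO:
        return b
    return TOP

print(abs_mul(alpha(-3), alpha(4)))  # '-'  matches alpha(-3 * 4)
print(abs_add(alpha(-3), alpha(4)))  # '?'  correct but imprecise
```

The second result shows the theory's central trade-off: the abstract answer "unknown" is less precise than the concrete one, yet it never contradicts it, which is exactly the correctness guarantee stated in [12].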
A key difference between FMs and their mother disciplines of LICS and TCS is that the former attempt to provide the (software or systems) engineer with
concepts and techniques as thinking tools, which are clean, adequate, and convenient, to support him (or her) in describing, reasoning about, and constructing complex software and hardware systems. [41]
Emphasis is thus on construction rather than reduction and on pragmatics rather than classical issues like completeness. This shift in emphasis applies in general to what W. Thomas calls logic for computer science [41]—as opposed to more traditional logic in computer science—but it certainly applies to FMs in particular. Are FMs succeeding in their mission? Although a complete answer cannot be given yet—since we have not witnessed a complete and widespread uptake of FMs by the computer engineering industry—there is a clear trend that justifies a tendency toward a positive answer, as will be briefly elaborated below.
FMs have been used extensively in the past for security, fault tolerance, general consistency, object-oriented programs, compiler correctness, protocol development, hardware verification and computer-aided design, and human safety (see References 32 and 46 for extensive bibliographies on the use of FMs in the above areas). Several IT companies, mainly larger ones like IBM, Intel, Lucent/Cadence, Motorola, and Siemens, use FM techniques for quality assurance in their production processes [10, 31], and that FMs are making their way into software development practices is clear even from the words of Bill Gates:
Things like even software verification, this has been the Holy Grail of computer science for many decades but now in some very key areas, for example, driver verification we're building tools that can do actual proof about the software and how it works in order to guarantee the reliability. [17]
FMs and related tools are also used as a supporting technology in branches of fundamental sciences, like biology (see for instance Reference 40), physics, and others [24]. Moreover, FMs are used for the design and validation of safety critical systems, for example, in areas like aerospace* [34]. International standards for the engineering of safety critical systems, like those for space or transportation, more and more recommend or mandate the use of FMs. Examples of these engineering standards are the ECSS E-40-01 [15] and the CENELEC EN 50128 [14]. The need for FMs has also been recognized in other scientific communities, in particular the dependability community, as witnessed, for instance, by the organization of a specific workshop on Model Checking for Dependable Software-Intensive Systems in one edition of the International Conference on Dependable Systems and Networks, the most respected conference in the field (for a discussion on the prominent role that FMs play—and will increasingly play in the future—in the area of dependability, see also Reference 45). Finally, several commercial organizations nowadays provide FM-oriented services (some are listed in Reference 11). Unfortunately, the question of the success of FMs cannot be fully answered yet also because many tools used in the industry are proprietary tools on which detailed information is very seldom publicly available; similarly, details on the specific methodologies and processes using such tools are difficult to obtain for the same reasons†—a notable exception to this trend is Reference 21, where the AT&T Bell Laboratories NewCoRe project is described, which clearly shows how beneficial FMs can be in the software development process.
On the other hand, one can also assess the success of a scientific or technological discipline by the advances accomplished within the discipline itself, and in the last few years there have been tremendous advances in the field of FMs. First of all, their coverage has been broadened from purely functional aspects of behavior toward nonfunctional features of systems, such as the following [27]:
Space and Mobility. Several calculi and logics—and associated support tools—have been extended or developed in order to deal explicitly with notions like the (physical, discrete) space occupied by computing elements, their migration from one place to another, and its implications for network connectivity and the communication structure; thus, space and mobility—which are essential notions when developing or reasoning about large distributed networked applications—are first-class objects in these formal frameworks.
Security. Several calculi and logics—and associated support tools—have been extended or developed specifically for the modeling of security protocols and their desirable properties, and for verifying or falsifying that the latter are satisfied by the former.
Bounded Resources. Methods based on logics, types, and calculi—and associated support tools—have been developed for modeling and controlling the allocation and deallocation of resources, and for expressing the trust involved in resource allocation and its propagation.
Continuous Time (and Hybrid Systems in General). In the last 15 years, several varieties of automata, process algebras, and logics—and associated support tools—have been extended with elements of a continuous nature, like, for instance, time, and functions of continuous variables with their derivatives.
Stochastic Behavior. Elements of stochastic behavior, like probabilistic choices and stochastic durations, have been introduced in models like automata and process algebras, and corresponding operators have been added to suitable logics, thus providing conceptual, formal, language-based tools for modeling and reasoning about performance and dependability attributes and requirements. Automatic software tools for discrete simulation (of process algebras and automata) and for model checking of Markov chains and Markov decision processes constitute their practical support counterpart.
An account of advances in the above-mentioned fields, together with a rich bibliography, can be found in Reference 31. Moreover, the field of application of FMs has been broadened too, including novel disciplines, like computational biology, to mention just one. Finally, the capabilities of the tools supporting FMs have improved dramatically. For instance, some model checking techniques "work especially well for concurrent software, which, as luck will have it, is the most difficult to debug and test with traditional means" [22], and nowadays model checkers are able to deal with system models of up to 10^20 states and, in some cases, even up to 10^100 [4, 20]. Nevertheless, there are still several open problems in the area of FMs, among which are the following:
Most specification—and especially automatic verification—techniques do not easily scale up, due to the infamous state explosion problem, which arises when the system to be modeled and verified has many independent actions that can occur in parallel.
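To see why the state space explodes, consider the following small sketch (Python, with an invented example): n two-state components that can each move independently generate a global state space of size 2^n, the product of the component state spaces.

```python
# Toy illustration (invented example): n independent two-state components
# composed in parallel. Because each component can flip on its own, every
# combination in {0,1}^n is reachable.

def reachable_states(n_components):
    """Explicit enumeration of the global state space by graph search."""
    init = (0,) * n_components
    frontier, seen = [init], {init}
    while frontier:
        state = frontier.pop()
        for i in range(n_components):      # each component may act independently
            succ = state[:i] + (1 - state[i],) + state[i + 1:]
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return seen

# Ten tiny components already yield 2**10 = 1024 global states; the growth
# is exponential in the number of parallel components.
print(len(reachable_states(10)))   # → 1024
```

Symbolic representations and abstraction exist precisely to avoid this kind of explicit enumeration.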
Although significant progress has taken place in the field of compositional specification of concurrent systems (notably through the introduction of process algebras), the same cannot be claimed for verification. In particular, compositionality is not currently fully exploited in model checking techniques.
Many specification paradigms and verification techniques have developed independently from one another, often giving rise to perhaps technically unjustified dichotomies. The lack of full integration between different paradigms—like the state-oriented ones of some specification languages versus the action/event-oriented ones of others—or between different verification techniques—like automated theorem proving versus model checking—is likely to be detrimental to each of these paradigms and techniques.
FMs should find their way toward full acceptance and use in system—and particularly software—engineering. They should be smoothly embedded into more traditional industrial production processes.
In the medium- to long-term future, the grand challenge for FMs will be large-scale industrialization, together with an intensification of fundamental research efforts. This is necessary for tackling the challenge of computer reliability—and in particular software reliability—which will be a major grand challenge in computer science and practice [11].
On a more technical level, several research directions will require particular effort. In the following list, some of them are briefly presented. The list is necessarily incomplete and its items necessarily interrelated; the order of appearance of the items in the list does not imply any relative priority or level of importance:
In the context of abstract interpretation, “general purpose, expressive and cost-effective abstractions have to be developed e.g. to handle floating point numbers, data dependencies (e.g. for parallelization), liveness properties with fairness […], timing properties for embedded software [and] probabilistic properties, etc. Present-day tools will have to be enhanced to handle higher-order compositional modular analysis and to cope with new programming paradigms involving complex data and control concepts (such as objects, concurrent threads, distributed/mobile programming, etc.), to automatically combine and locally refine abstractions in particular to cope with ‘unknown’ answers” [11]. Moreover, new ways of exploiting abstraction for (extended) static analysis, including the use of theorem proving, will need to be investigated [35].
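What such an abstraction-based analysis computes can be illustrated with a toy sketch using the classic interval domain with widening and one narrowing step (the example program, the widening threshold, and all names below are invented, not taken from any cited tool):

```python
# Toy interval abstract interpretation of:  x = 0; while x < 100: x = x + 1
# A sketch only: intervals are pairs (lo, hi), None plays the role of bottom.

WIDEN_AFTER = 3  # plain joins before widening kicks in, to keep some precision

def join(a, b):
    """Least upper bound of two intervals."""
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Interval widening: push unstable bounds to infinity to force termination."""
    if a is None:
        return b
    lo = a[0] if b[0] >= a[0] else float("-inf")
    hi = a[1] if b[1] <= a[1] else float("inf")
    return (lo, hi)

def body(inv):
    """Abstract effect of one loop iteration: assume x < 100, then x := x + 1."""
    if inv is None or inv[0] > 99:
        return None                       # loop body unreachable
    return (inv[0] + 1, min(inv[1], 99) + 1)

def analyze():
    entry = (0, 0)                        # x = 0 before the loop
    inv, it = None, 0                     # abstract value at the loop head
    while True:
        new = join(entry, body(inv))
        cand = widen(inv, new) if it >= WIDEN_AFTER else new
        if cand == inv:
            break
        inv, it = cand, it + 1
    inv = join(entry, body(inv))          # one narrowing (decreasing) step
    exit_val = (max(inv[0], 100), inv[1]) # on exit the guard x < 100 is false
    return inv, exit_val

print(analyze())   # → ((0, 100), (100, 100))
```

The analysis infers the loop invariant x ∈ [0, 100] and the exit value x = 100 without enumerating any concrete states, which is the essence of the cost/precision trade-off discussed above.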
In the context of model checking [7, 8, 24], abstraction will be a major pillar. It will be of vital help especially in overcoming the state explosion problem. Moreover, it will allow the extension of model checking to infinite models. Finally, it will make model checking of software programs (and not only their specifications) a reality [23]. Research is needed in order to embed abstraction techniques within model checking ones. The following are some of the issues that need to be further investigated: (1) how to build—possibly automatically—an abstract model from a concrete one; (2) how to identify spurious abstract counterexamples, that is, counterexamples provided by the model checking algorithm when checking an abstract model reports an error but which correspond to no concrete behavior—spurious counterexamples originate from the very fact that an abstract model necessarily contains less information than the concrete one; (3) how to refine an abstract model, when a spurious counterexample is detected, in order to eliminate it; (4) how to derive concrete counterexamples from abstract (nonspurious) ones; and (5) how to develop methods for verifying parameterized systems, that is, systems with arbitrarily many identical components. Another major pillar will be compositionality. There is no hope of being able to model check—and, in general, analyze—the complex real-life systems of the future, like global ubiquitous systems, without exploiting their structure and architecture—that is, the fact that they are composed of simpler components. Compositional reasoning and compositional model checking will play a major role in the area of automated verification of concurrent systems. Finally, a smooth integration of model checking and theorem proving will result in a quantum leap in verification, for example, by facilitating the modeling and verification of infinite-state systems, for which powerful induction principles will be available together with efficient proofs of base cases.
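Issues (1) through (3) can be illustrated by a toy counterexample-guided abstraction refinement (CEGAR) loop; the transition system, state numbers, and splitting strategy below are invented for illustration:

```python
# Toy CEGAR sketch. The error state 5 is unreachable from the initial
# state 0, and the loop proves it after two refinements of an initially
# coarse partition of the state space.

from collections import deque

TRANS = {0: [2], 1: [3], 2: [4], 3: [5], 4: [6], 5: [7], 6: [], 7: []}
INIT, ERROR = 0, 5

def block_of(partition, s):
    return next(i for i, b in enumerate(partition) if s in b)

def abstract_path(partition):
    """BFS over blocks; a block edge exists iff some concrete transition crosses it."""
    start, goal = block_of(partition, INIT), block_of(partition, ERROR)
    queue, parent = deque([start]), {start: None}
    while queue:
        b = queue.popleft()
        if b == goal:
            path = []
            while b is not None:
                path.append(b)
                b = parent[b]
            return path[::-1]
        for s in partition[b]:
            for t in TRANS[s]:
                nb = block_of(partition, t)
                if nb not in parent:
                    parent[nb] = b
                    queue.append(nb)
    return None                               # error block abstractly unreachable

def cegar():
    partition = [{ERROR}, set(TRANS) - {ERROR}]   # issue (1): initial abstraction
    while True:
        path = abstract_path(partition)
        if path is None:
            return "safe", partition
        reach = {INIT} & partition[path[0]]       # issue (2): replay the abstract
        for k in range(1, len(path)):             # counterexample concretely
            step = {t for s in reach for t in TRANS[s]} & partition[path[k]]
            if not step:                          # spurious: the replay got stuck
                blk = partition[path[k - 1]]      # issue (3): split the block that
                can = {s for s in blk             # enabled the spurious step
                       if set(TRANS[s]) & partition[path[k]]}
                partition[path[k - 1]] = blk - can
                partition.append(can)
                break
            reach = step
        else:
            return "unsafe", partition            # concretizable counterexample

print(cegar()[0])   # → safe
```

Each abstract counterexample is replayed on the concrete system; when the replay gets stuck, the abstraction was too coarse, and splitting the offending block eliminates that spurious counterexample before the next round.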
In the area of FMs for security, access to resources, and trust, we expect the development of security-oriented languages, directly derived from the theories and calculi for mobility and security mentioned in Section 1.1. Moreover, specific protocols for mobile agents to acquire and manage resources need to be developed, which will base access negotiation on an evaluation of trust, the level of which will in turn depend on knowledge and belief about agents and on how trust is propagated. Such protocols will undergo serious and intense investigation and assessment, including automatic verification [27].
In the area of hybrid systems, the basic ideas used in current tools for representing the state sets generated by complex continuous functions in hybrid automata (including timed automata) will be developed further, and related efficient algorithms will be devised, which will facilitate automated reasoning on hybrid system models. More emphasis will be put on efforts toward unification with control theory [27].
Model checking techniques for stochastic and probabilistic models and logics will be developed further in order to scale up to real-life system sizes. Particular emphasis will be put on the problem of counterexample generation and on the tailoring of the numerical analysis algorithms and techniques required for these kinds of model checking. Moreover, integration with mobile and hybrid models, as above, will be required in order to model and verify essential aspects of future global ubiquitous computing [27].
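The numerical core of this kind of model checking, computing the probability of eventually reaching a goal state in a discrete-time Markov chain by value iteration, can be sketched as follows (the four-state chain and its probabilities are invented):

```python
# A sketch of the fixpoint computation behind probabilistic reachability
# (the basic building block of PCTL model checking on Markov chains).

P = {
    "s0": {"s1": 0.5, "s2": 0.5},
    "s1": {"s1": 1.0},                 # goal, absorbing
    "s2": {"s0": 0.5, "s3": 0.5},
    "s3": {"s3": 1.0},                 # failure, absorbing
}
GOAL = {"s1"}

def reach_prob(eps=1e-12):
    """Iterate x(s) = 1 on goal states, else sum_t P(s,t) * x(t), to a fixpoint."""
    x = {s: (1.0 if s in GOAL else 0.0) for s in P}
    while True:
        nx = {s: (1.0 if s in GOAL else sum(p * x[t] for t, p in P[s].items()))
              for s in P}
        if max(abs(nx[s] - x[s]) for s in P) < eps:
            return nx
        x = nx

probs = reach_prob()
print(round(probs["s0"], 6))   # → 0.666667, i.e., 2/3
```

The exact answer here is 2/3 (solving x0 = 0.5 + 0.25·x0), and value iteration converges to it geometrically; tailoring and scaling exactly this kind of numerical computation is the research problem mentioned above.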
Game semantics is a promising approach to the formal semantics of specification/programming languages and logics, which has developed from mathematical games and logic, combinatorial game theory, and category theory. Although it lies more on the side of foundations for FMs, it will provide a solid basis for advances in model checking—especially with respect to compositionality—as well as in partial specification and modeling, in the integration of qualitative and quantitative approaches, and in the development of physics-based models of computation, for example, quantum computing [27].
Finally, in order to properly face the “engineering challenge” of FMs, a proper merging of the languages of formulas and diagrams must be devised. Attempts have been made in the software engineering community, and especially in industry, most notably with the UML, although these are not particularly impressive from the point of view of mathematical foundations. On the other hand, there are other, historical examples (like the equivalence of Boolean formulas and ordered binary decision diagrams, or of regular expressions and finite automata) which have been quite successful and which have resulted in useful engineering tools. This should push research on “[t]heories which support merging diagram-based languages with term- or formula-based ones,” since this “would help in designing better specification languages” [41].
Although the above list is far from complete, we can definitely claim that all the above lines of research are essential for properly facing the grand research challenges in information systems [25] and in TCS and its applications [36], and for fully exploiting the potential of LICS and TCS in computer science and engineering, as well as in other branches of science and engineering [24].
The author wishes to thank Peter Neumann (SRI International, USA), Rocco de Nicola (University of Florence, Italy), and Scott Smolka (SUNY, USA) for valuable suggestions; Stefania Gnesi and Alessandro Fantechi for their help on a number of issues; and Mieke Massink and Angelo Morzenti for reviewing the present report.
Notes
* A guide to the selection of an appropriate FM can be found on the official site of Formal Methods Europe [16].
† Some advocates of FMs claim that this secrecy is itself proof that FMs and related tools are considered strategic in industrial contexts, and that this by itself shows their success.
REFERENCES
1. J. R. Abrial. The B-Book. Cambridge University Press, 1996.
2. J. R. Abrial. Faultless systems: Yes we can. IEEE Computer, 42(9):30–36, 2009.
3. M. Ajmone Marsan, G. Conte, and G. Balbo. A class of generalized stochastic Petri nets for the performance evaluation of multiprocessor systems. ACM Transactions on Computer Systems, ACM Press, 2(2):93–122, 1984.
4. C. Baier and J.-P. Katoen. Principles of Model Checking. The MIT Press, 2008.
5. D. Bjorner and C. Jones. Formal Specification and Software Development. Prentice Hall International, 1982.
6. R. Boyer. Nqthm, the Boyer-Moore prover, 2003. Available at: http://www.cs.utexas.edu/users/boyer/ftp/nqthm/index.html.
7. E. Clarke, E. Emerson, and J. Sifakis. Model checking: Algorithmic verification and debugging. Turing lecture. Communications of the ACM, 52(11):75–84, 2009.
8. E. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith. Progress in the state explosion problem in model checking. In R. Wilhelm, ed., Informatics 10 Years Back 10 Years Ahead, Volume 2000 of Lecture Notes in Computer Science, pp. 176–194. Springer-Verlag, 2000.
9. E. Clarke, O. Grumberg, and D. Peled. Model Checking. MIT Press, 1999.
10. E. Clarke, J. Wing, et al. Formal methods: State of the art and future directions. ACM Computing Surveys, ACM Press, 28(4):626–643, 1996.
11. P. Cousot. Abstract interpretation based formal methods and future challenges. In R. Wilhelm, ed., Informatics 10 Years Back 10 Years Ahead, Volume 2000 of Lecture Notes in Computer Science, pp. 138–156. Springer-Verlag, 2000.
12. P. Cousot. Abstract Interpretation and Semantics, September 30, 2003. Available at: http://www.di.ens.fr/∼cousot/Equipeabsint-eg.shtml.
13. A. Emerson. Temporal and modal logics. In J. van Leeuwen, ed., Handbook of Theoretical Computer Science—Vol. B: Formal Models and Semantics, pp. 995–1072. Elsevier, 1990.
14. European Committee for Electrotechnical Standardization. CENELEC. Railway application—Communications, signalling and processing systems—Software for railway control and protection systems. CENELEC EN 50128. 2011.
15. European Cooperation for Space Standardization ECSS. Space segment software. ECSS E-40-01. 1999.
16. Formal Methods Europe. FME Home Page, September 30, 2003. Available at: http://www.fmeurope.org.
17. B. Gates. Remarks by Bill Gates. WinHEC 2002 Seattle—Washington April 18, 2002. Available at: http://www.microsoft.com/billgates/speeches/2002/04-18winhec.asp [Accessed October 6, 2003].
18. H. Genrich. Predicate/transition nets. In G. Rozenberg, ed., Advances in Petri Nets 1986, Volume 254 of Lecture Notes in Computer Science, pp. 207–247. Springer-Verlag, 1986.
19. C. Ghezzi, D. Mandrioli, S. Morasca, and M. Pezzè. A unified high-level Petri net formalism for time-critical systems. IEEE Transactions on Software Engineering, 17(2):160–172, 1991.
20. L. Hoffman. Talking model-checking technology. A conversation with the 2007 ACM A. M. Turing Award winners. Communications of the ACM, 51(7):110–112, 2008.
21. G. Holzmann. The theory and practice of a formal method: NewCoRe. In Proceedings of the 13th IFIP World Congress, pp. 35–44. IFIP, 1994.
22. G. Holzmann. The SPIN Model Checker. Primer and Reference Manual. Addison-Wesley, 2003.
23. R. Jhala and R. Majumdar. Software model checking. ACM Computing Surveys, 41(4):21:2–21:54, 2009.
24. D. Johnson. Challenges for Theoretical Computer Science, 2000. Draft Report from the Workshop on Challenges for Theoretical Computer Science held in Portland on May 19, 2000. Available at: http://www2.research.att.com/~dsj/nsflist.html.
25. A. Jones, ed. Grand research challenges in information systems. Computer Research Association, 2003.
26. C. Jones. Thinking tools for the future of computing science. In R. Wilhelm, ed., Informatics 10 Years Back 10 Years Ahead, Volume 2000 of Lecture Notes in Computer Science, pp. 112–130. Springer-Verlag, 2000.
27. M. Kwiatkowska and V. Sassone (moderators). Science for Global Ubiquitous Computing, 2003. Proposal for discussion nr. 2 in the context of the Grand Challenges for Computing Research initiative sponsored by the UK Computing Research Committee with support from EPSRC and NeSC.
28. L. Lamport. Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers. Addison-Wesley, 2002.
29. P. Merlin and D. Farber. Recoverability of communication protocols. IEEE Transactions on Communications, 24(9):1036–1043, 1976.
30. R. Milner. Operational and algebraic semantics of concurrent processes. In J. van Leeuwen, ed., Handbook of Theoretical Computer Science—Vol. B: Formal Models and Semantics, pp. 1201–1242. Elsevier, 1990.
31. R. Milner, A. Gordon, V. Sassone, P. Buneman, F. Gardner, S. Abramsky, and M. Kwiatkowska. Theories for ubiquitous processes and data. Platform for a 15-year grand challenge, 2003. This paper is written as a background for Reference [27].
32. P. Neumann. Practical architectures for survivable systems and networks. Technical Report Cont. 1-732-427-5099—Final Report, SRI International, 2000. Available at: http://www.csl.sri.com/users/neumann/ [Accessed September 30, 2003].
33. W. Reisig. Petri Nets—An Introduction, Volume 4 of EATCS Monographs on Theoretical Computer Science. Springer-Verlag, 1985.
34. J. Rushby. Formal methods and the certification of critical systems. Technical Report CSL-93-7, SRI International, 1993. Also issued under the title Formal Methods and Digital Systems Validation for Airborne Systems as NASA CR 4551.
35. K. Rustan M. Leino. Extended static checking: A ten-year perspective. In R. Wilhelm, ed., Informatics 10 Years Back 10 Years Ahead, Volume 2000 of Lecture Notes in Computer Science, pp. 157–175. Springer-Verlag, 2000.
36. A. Selman. Challenges for Theory of Computing, 1999. Report of an NSF-Sponsored Workshop on Research in Theoretical Computer Science held in Chicago on March 11–12, 1999. Available at: http://www.cse.buffalo.edu/~selman/report/.
37. N. Shankar. Automated deduction for verification. ACM Computing Surveys, 41(4):20:2–20:56, 2009.
38. J. Spivey. The Z Notation—A Reference Manual. Prentice Hall International, 1989.
39. SRI International—Computer Science Laboratory. The PVS Specification and Verification System, September 30, 2003. Available at: http://pvs.csl.sri.com/.
40. The BioSPI Project. The BioSPI Project Home Page, October 27, 2003. Available at: http://www.wisdom.weizmann.ac.il/~biopsi.
41. W. Thomas. Logic for computer science: The engineering challenge. In R. Wilhelm, ed., Informatics 10 Years Back 10 Years Ahead, Volume 2000 of Lecture Notes in Computer Science, pp. 257–267. Springer-Verlag, 2000.
42. A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(42):230–265, 1936.
43. University of Cambridge—Computer Laboratory. Automated Reasoning Group HOL page. Available at: http://www.cl.cam.ac.uk/research/hvg/HOL/.
44. Various Authors. Wikipedia, September 30, 2003. Available at: http://www.wikipedia.org
45. J. Woodcock (moderator). Dependable Systems Evolution. A Grand Challenge for Computer Science, 2003. Proposal for discussion nr. 6 in the context of the Grand Challenges for Computing Research initiative sponsored by the UK Computing Research Committee with support from EPSRC and NeSC.
46. J. Woodcock, P. G. Larsen, J. Bicarregui, and J. Fitzgerald. Formal methods: Practice and experience. ACM Computing Surveys, 41(4):19:2–19:36, 2009.
PART II: MODELING PARADIGMS
CHAPTER 2
A SYNCHRONOUS LANGUAGE AT WORK: THE STORY OF LUSTRE
NICOLAS HALBWACHS
Vérimag, Grenoble, France
The design of the synchronous language LUSTRE started more than 20 years ago, and resulted in an industrial software development tool, SCADE, which is now in use in many major companies developing embedded software (avionics, transportation, energy, etc.). It seemed to us that this success story deserves to be reported and analyzed, from the point of view of the problems raised by the industrial transfer of a new technology: Why did it succeed? How could it succeed better?
So-called embedded systems are much more fashionable nowadays than in the 1980s, when they first appeared in large industrial applications. It is now accepted that this domain concerns systems presenting one or several of the following features: (1) they have to run in strong interaction with their—possibly physical—environment (real-time systems, industrial control, etc.); (2) their development combines software and hardware aspects; (3) they are subject to strong nonfunctional constraints (execution time, memory limitations, fault tolerance, power consumption, etc.); and (4) they are safety critical—often because of (1), since they influence physical processes and devices. Because of (1) and (2), the domain of embedded systems is strongly related to both control theory and hardware design, and this is why the inspiration for synchronous languages comes both from control engineering formalisms and from hardware description languages.
One can wonder why the three main synchronous languages—ESTEREL [4], SIGNAL [27], and LUSTRE [18]—were all born in France, at approximately the same time and quite independently. Of course, at the beginning of the 1980s, the idea of synchrony was already in the air, be it in theoretical works by Milner [30] or in “almost synchronous” formalisms like Grafcet (or IEC 1131 Sequential Function Charts) [3, 13] or Statecharts [23]. But the conditions in France were particularly favorable, for both academic and industrial reasons.
On the academic side, the three involved teams mixed researchers from control theory and computer science: Jean-Paul Rigault, Jean-Paul Marmorat, and Gérard Berry for ESTEREL; Albert Benveniste and Paul Le Guernic for SIGNAL; and Paul Caspi and the author of this chapter for LUSTRE. This double competence seems to have played an important role in the design of the languages.
On the industrial side, strong needs were appearing: the French and European embedded software industry was faced with several big challenges:
For the very first time, in the new family of French nuclear reactors, called N4, the most critical functions (in particular, the emergency stop) were realized by a computer system, the SPIN (for “integrated nuclear protection system”).
At the same time, the Airbus A320 was designed, which was the very first fully “fly-by-wire” aircraft.
In the railway industry, various automatic subways were designed (e.g., the VAL [15]), and the successive versions of the French TGV (high-speed train) were more and more computerized.
Started in 1984, the development of LUSTRE benefited from these very favorable circumstances. After briefly recalling the principles of the language (Section 2.2), we detail in Section 2.3 the main stages of its development, from both academic and industrial points of view. Section 2.4 analyzes the feedback from industrial usage of the language. Finally, Section 2.5 outlines the current evolutions of the language and its associated tools.
A preliminary version of this chapter was published in Reference 16.
Let us first recall, in a simplified way, the principles of LUSTRE: A LUSTRE program operates on flows of values. Any variable (or expression) x represents a flow, that is, an infinite sequence (x0, x1, … , xn, …) of values. A program is intended to have a cyclic behavior, and xn is the value of x at the nth cycle of the execution. A program computes output flows from input flows. Output (and possibly local) flows are defined by means of equations (in the mathematical sense), an equation “x = e” meaning “∀n,xn = en.” So, an equation can be understood as a temporal invariant. LUSTRE operators operate globally on flows: For instance, “x + y” is the flow (x0 + y0, x1 + y1, … , xn + yn, …). In addition to usual arithmetic, Boolean, conditional operators—extended pointwise to flows as just shown—we will consider only two temporal operators:
The operator “pre” (“previous”) gives access to the previous value of its argument: “pre(x)” is the flow (nil, x0, … , xn−1, …), where the very first value nil is an undefined (“noninitialized”) value.
The operator “->” (“followed by”) is used to define initial values: “x -> y” is the flow (x0, y1, … , yn, …), initially equal to x, and then equal to y forever.
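Since these two operators are the temporal core of the language, their stream semantics over finite prefixes of flows can be mimicked in Python (a sketch, not LUSTRE code; NIL and the function names are invented stand-ins):

```python
# Flows are modeled as Python lists (finite prefixes of infinite sequences);
# NIL models LUSTRE's undefined value nil.

NIL = object()

def pre(x):
    """pre(x) = (nil, x0, x1, ..., x_{n-1}, ...)"""
    return [NIL] + x[:-1]

def followed_by(x, y):
    """x -> y = (x0, y1, y2, ..., yn, ...)"""
    return x[:1] + y[1:]

print(pre([1, 2, 3, 4]) == [NIL, 1, 2, 3])                      # → True
print(followed_by([1, 2, 3, 4], [10, 20, 30, 40]) == [1, 20, 30, 40])  # → True
```

In particular, the common idiom “x -> pre(y)” combines both: the first value comes from x, and every later value is the previous value of y.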
As a very simple and classical example, the program shown below is a counter of “events”: it takes as inputs two Boolean flows, “evt” (true whenever the counted “event” occurs) and “reset” (true whenever the counter should be reinitialized), and returns the number of occurrences of “events” since the last “reset”:
node Count(evt, reset: bool) returns (count: int);
let
  count = if (true -> reset) then 0
          else if evt then pre(count) + 1
          else pre(count);
tel
Intuitively, “true -> reset” is a Boolean flow which is true at the initial instant and whenever “reset” is true; when it is true, the value of “count” is 0; otherwise, when “evt” is true, “count” is incremented; otherwise, it keeps its previous value.
Once declared, such a “node” can be used anywhere in a program, as a user-defined operator. For instance, our counter can be used to generate an event “minute” every 60 “second,” by counting “second” modulo 60:
mod60 = Count(second, pre(mod60) = 59);
minute = (mod60 = 0);
Here, “mod60” is the output of a “Count” node, counting “second,” and reset whenever the previous value of “mod60” is 59; “minute” is true whenever “mod60” equals 0.
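The cycle-by-cycle behavior of “Count” and of the minute detector can be checked with a small Python simulation (a sketch mirroring the LUSTRE equations above, not LUSTRE itself; the function names are invented):

```python
# Executes the equations cycle by cycle over finite flows (Python lists).

def count_node(evt, reset):
    """Mirrors: count = if (true -> reset) then 0
                        else if evt then pre(count) + 1 else pre(count);"""
    out, prev = [], None                 # prev plays the role of pre(count)
    for n, (e, r) in enumerate(zip(evt, reset)):
        if n == 0 or r:                  # "true -> reset": true initially, then reset
            c = 0
        elif e:
            c = prev + 1
        else:
            c = prev
        out.append(c)
        prev = c
    return out

def minute_detector(second):
    """Mirrors: mod60 = Count(second, pre(mod60) = 59); minute = (mod60 = 0);"""
    minute, prev = [], None              # prev plays the role of pre(mod60)
    for n, s in enumerate(second):
        if n == 0 or prev == 59:         # reset when the previous mod60 was 59
            m = 0
        elif s:
            m = prev + 1
        else:
            m = prev
        minute.append(m == 0)
        prev = m
    return minute

print(count_node([True, False, True, True, False],
                 [False, False, False, True, False]))                  # → [0, 0, 1, 0, 0]
print([i for i, v in enumerate(minute_detector([True] * 121)) if v])   # → [0, 60, 120]
```

With “second” true at every cycle, “minute” is true at cycles 0, 60, 120, …, exactly one cycle out of 60, as intended.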
So, through the notion of node, LUSTRE naturally offers hierarchical description and component reuse. Data traveling along the “wires” of an operator network can be complex, structured information.
From a temporal point of view, industrial applications show that several processing chains, evolving at different rates, can appear in a single system. LUSTRE offers a notion of Boolean clock, allowing the activation of nodes at different rates.
The graphical counterpart of LUSTRE textual syntax is obvious; for instance, Figure 2.1 is a SCADE view of the “minute detector” described before.
FIGURE 2.1 A graphical view in SCADE.
The initial idea of LUSTRE came from our previous work [9] on modeling real-time systems by means of time functions. Such a global handling of variable “histories,” together with the inspiring proposal of the Lucid* language [2], suggested to us the design of a programming language describing variables as timed sequences of values. Moreover, from his background in control theory, Paul Caspi knew that this kind of description—we would say, today, “MATLAB/Simulink-like”—was the natural one for control engineers: from their experience with previous technologies, they were used to declarative or data-flow formalisms, that is, at high levels, differential or finite-difference equations, and, at lower levels, various kinds of graphical networks (block diagrams, analog networks, switch schemas, etc.).
Caspi’s intuition was promptly confirmed: we found, in many companies, in-house development tools based on these kinds of formalisms. The goals of such tools ranged from simple graphical description (without any idea of mechanical exploitation) to some attempts at consistency checking, simulation, and even automatic code generation. In particular, the Merlin Gerin company (now Schneider Electric), located in Grenoble, was in charge of the development of a large part of the SPIN (“integrated nuclear protection system”). Aware that they were confronted with radically new problems because of the criticality of the software, the management decided to develop their own development environment, and they naturally chose a data-flow formalism. Our great chance was to collaborate with them from the very beginning of the design of this environment, which was called SAGA (for “assisted specification and automatic generation”): SAGA was based on LUSTRE, provided with a mixed graphical/textual syntax, and offered a simple but efficient code generator. Two members of the LUSTRE team, Eric Pilaud and Jean-Louis Bergerand, were hired by Merlin Gerin to supervise the development of the tool. The use of SAGA was very successful for the design of the SPIN and several other systems.
