Computers are ubiquitous throughout all life-cycle stages of engineering, from conceptual design to manufacturing, maintenance, repair and replacement. It is essential for all engineers to be aware of the knowledge behind the computer-based tools and techniques they are likely to encounter. The computational technology that allows engineers to carry out design, modelling, visualisation, manufacturing, construction and management of products and infrastructure is known as Computer-Aided Engineering (CAE).
Engineering Informatics: Fundamentals of Computer-Aided Engineering, 2nd Edition provides the foundation knowledge of computing that is essential for all engineers. This knowledge is independent of hardware and software characteristics and is thus expected to remain valid throughout an engineering career. This second edition is enhanced with treatment of new areas such as network science and the computational complexity of distributed systems.
Key features:
Engineering Informatics: Fundamentals of Computer-Aided Engineering, 2nd Edition provides essential knowledge on computing theory in engineering contexts for students, researchers and practising engineers.
Table of Contents
Title Page
Copyright
Foreword to the First Edition
Preface to the Second Edition
Preface to the First Edition
Chapter 1: Fundamental Logic and the Definition of Engineering Tasks
1.1 Three Types of Inference
1.2 Engineering Tasks
1.3 A Model of Information and Tasks
1.4 Another Task Definition
1.5 The Five Orders of Ignorance
1.6 Summary
Exercises
References
Chapter 2: Algorithms and Complexity
2.1 Algorithms and Execution Time of Programs
2.2 ‘Big Oh’ Notation
2.3 Practical Methods for Determining the Complexity of Algorithms
2.4 P, NP and NP-Completeness
2.5 Summary
Exercises
Reference
Further Reading
Chapter 3: Data Structures
3.1 Introduction
3.2 Definitions
3.3 Derived Data Types
3.4 Abstract Data Types
3.6 Network Science
3.7 Hashing
3.8 Summary
Exercises
Further Reading
Chapter 4: Object Representation and Reasoning
4.1 Introduction
4.2 Grouping Data and Methods
4.3 Definitions and Basic Concepts
4.4 Important Characteristics of Objects
4.5 Applications Outside Programming
4.6 An Object-Oriented Design Methodology
4.7 Summary
Exercises
References
Further Reading
Chapter 5: Database Concepts
5.1 Introduction
5.2 Basic Concepts
5.3 Relational Database Systems
5.4 Relational Database Design
5.5 Transaction Processing
5.6 Other Types of Database
5.7 Summary
Exercises
Reference
Further Reading
Chapter 6: Computational Mechanics
6.1 Introduction
6.2 From Physical Principles to Practical Systems
6.3 Methods for Finding Solutions
6.4 Issues in Computer-Aided Engineering
6.5 Summary
References
Further Reading
Chapter 7: Constraint-Based Reasoning
7.1 Introduction
7.2 Terminology
7.3 Constraint-Solving Methods
7.4 Reasoning with Constraints on Discrete Variables
7.5 Reasoning with Constraints on Continuous Variables
7.6 Summary
References
Chapter 8: Optimization and Search
8.1 Introduction
8.2 Basic Concepts
8.3 Classification of Methods
8.4 Deterministic Optimization and Search
8.5 Stochastic Methods
8.6 A Closer Look at Genetic Algorithms
8.7 Summary of Methods
Exercises
References
Further Reading
Chapter 9: Knowledge Systems for Decision Support
9.1 Introduction
9.2 Important Characteristics of Knowledge Systems
9.3 Representation of Knowledge
9.4 Reasoning with Knowledge
9.5 Importance of the User Interface
9.6 Maintenance of Knowledge
9.7 Model-based Reasoning
9.8 Case-Based Reasoning
9.9 Summary
Reference
Further Reading
Chapter 10: Machine Learning
10.1 Introduction
10.2 Improving Performance with Experience
10.3 Formalizing the Learning Task
10.4 Learning Algorithms
10.5 A Closer Look at Artificial Neural Networks
10.6 Support Vector Machines
10.7 Summary
Exercises
References
Further Reading
Chapter 11: Geometric Modelling
11.1 Introduction
11.2 Engineering Applications
11.3 Mathematical Models for Representing Geometry
11.4 Representing Complex Solids
11.5 Applications
11.6 Summary
Further Reading
Chapter 12: Computer Graphics
12.1 Introduction
12.2 Tasks of Computer Graphics
12.3 Display Devices
12.4 Representing Graphics
12.5 The Graphics Pipeline
12.6 Interactive Graphics
12.7 Graphical User Interfaces (GUI) and Human–Computer Interaction (HCI)
12.8 Applications
12.9 Summary
References
Further Reading
Chapter 13: Distributed Applications and the Web
13.1 Introduction
13.2 Examples of Client–Server Applications
13.3 Distinctive Features of C/S Systems
13.4 Client–Server System Design
13.5 Advantages of Client–Server Systems
13.6 Developing Client–Server Applications
13.7 The World Wide Web
13.8 Peer-to-Peer Networks
13.9 Agent Technology
13.10 Cloud Computing
13.11 Complexity
13.12 Summary
Reference
Further Reading
Index
This edition published in 2013
© 2013, John Wiley & Sons, Ltd
First Edition published in 2003
© 2003, John Wiley & Sons, Ltd. All rights reserved.
Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloguing-in-Publication Data
Raphael, B. (Benny)
Engineering informatics / Benny Raphael, Ian Smith.—Second edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-119-95341-8 (cloth : alk. paper) 1. Engineering–Data processing. 2. Artificial intelligence. 3. Expert systems (Computer science) I. Smith, Ian F. C. II. Title.
TA345.R38 2013
620.00285—dc23
2013000149
A catalogue record for this book is available from the British Library.
Print ISBN: 978-1-119-95341-8
During the past four decades, computers and computer-based information technologies have penetrated into every aspect of our daily lives and have profoundly changed the way we perform many of our daily activities. Computers have also changed every aspect of engineering practice and research. In practice, engineers would have found it difficult, if not impossible, to respond to increased pressures without the support of computers. These pressures include, among others, the increased scope and complexity of the artefacts and projects that engineers design, produce and manage, accelerated schedules, growing regulatory and environmental demands, and the globalization of the profession – increasing opportunities for collaboration as well as competition. In engineering research, much of the experimental work that is aimed at a better understanding of physical phenomena has been translated into expanded simulation models. In parallel, the increased computerization of engineering processes has brought about research on the fundamental nature of these processes themselves.
The trends have brought about a fundamentally changed view of computing. The initial view of computing as a means for calculating has given way to viewing computing as a means for generating, communicating and sharing data, information and knowledge. Computing in this broad sense is the subject of this book.
Within the general trends sketched above, there have been a series of major changes in the way engineers acquire and use computing aids. Computer hardware has undergone three generations, defined by mode of access: from sequential batch processing, through time-shared access, to ubiquitous computing on desktop computers. More relevant to this book are the generational changes in how engineers acquire and use software. Initially, every engineer who wanted to use the computer was his or her own programmer. Later, program development groups emerged within each engineering organization, developing and maintaining the organization's software in-house or through contracted custom programming, with possibly some support from user groups. Out of these two sources, in-house and contract development groups, there emerged a highly competitive marketplace of commercial software products. Today the vast majority of engineers use commercial software.
The near-total dependence of practising engineers on commercial software has stratified their roles with respect to computing. The largest stratum consists of engineers who are users of software developed by others. A much smaller stratum is the information technology (IT) staffs of engineering organizations, the successors to the in-house development staffs, responsible for selecting, adapting, and integrating hardware, networks and software for the organizations. A second small stratum works for the commercial software companies, developing, upgrading and maintaining that software. Computing in research may be somewhat more fluid with respect to these three strata, but it generally follows the same pattern.
The computing component of engineering education, for future engineers and existing practitioners, has had to adapt to these changing use patterns. Initially, many of us argued that programming was an excellent pedagogic device for teaching engineering concepts. This view turned out to be too limited: the effort to craft a program embodying a concept, say, in statics, turned out to be counterproductive to learning the concept. While there was a preponderance of engineer-programmers, procedural programming was a skill needed for practice. Today productivity tools such as spreadsheets and equation solvers, courseware, and an array of commercial software are all available to support and augment a student's coursework. There is actually a concern about students' overdependence on such tools. In today's environment the educational value of a procedural programming course for all engineering students is being questioned. Nevertheless, in today's curriculum dominated by analysis courses, the design of a program provides one of the few opportunities for students to synthesize an artefact.
How should engineering education respond to the stratification in engineering computing? For the two smaller strata, several universities provide specialized graduate-level tracks. These two strata appear to be adequately served: as a fraction of the total number of graduates in engineering, the number of graduates on these tracks appears to be roughly in proportion to the fraction of the two strata in the engineering population at large. But there is no consensus on the educational foundations in computing needed by all students.
In this book, Raphael and Smith provide an innovative proposal: they present a carefully selected, well-integrated set of topics that constitute the core of computer-aided engineering, presented at a level suitable for engineering undergraduates. Many of the chapters in the book correspond to one or more courses in computer science, but this should not be seen as making the book shallow. On the contrary, the contents of the book reflect its multiple purposes: for future users of computing aids and for casual programmers, it is intended to provide self-sufficiency in the basics of computing and self-confidence in the use of their software; for future IT and commercial software development staff, it provides motivation and structure for further study; and for current practitioners it offers a means of self-study.
The authors have been careful to choose topics that will retain their relevance for some time, and which will not age prematurely. Will the book hold up to the future and have a reasonably long shelf life? Over the years, I have repeatedly demonstrated that I am not a reliable forecaster of computing trends. However, it is clear that there will be many further innovations in information technology and in software, and that engineers will continue to adapt and exploit these new developments. At the same time, entirely novel engineering applications of computing are to be expected. If anything, the rate of change will accelerate further. The topics included in this book have demonstrated their applicability and longevity over many previous changes, and can be expected to provide the basis for further developments as well. The science base of computer-aided engineering is likely to continue growing by spawning subdisciplines, as has already been demonstrated by topics in the book such as database theory and computational geometry. The net result will be the progressive deepening of the topics presented, rather than the addition of brand-new topics.
There is one aspect of computing that I can predict with reasonable certainty. For the largest stratum of engineers, very substantial changes in human–computer interaction, visual programming tools, a host of other support disciplines, and in the nature of engineering practice will have to occur before there can be a large-scale return from the engineer-user of today to the engineer-programmer of the past. Until these changes occur, if they ever do, I expect this book to provide the essentials of computer-aided engineering. I recommend the book to all future and present engineers and to their teachers. It has been a pleasurable and informative experience to review early drafts of this book. I acknowledge Carnegie Mellon University and the National Institute for Standards and Technology for allowing me the time to do this, and the staff of the Swiss Federal Institute of Technology, Lausanne, for hosting me.
Steven J. Fenves University Professor Emeritus, Carnegie Mellon University, USA Lausanne, Switzerland May 2002
This second edition contains several extra features, and new information has been added to many chapters. At the beginning of each chapter, text has been added stating what there is to learn, in order to bring important points to the front. Extra sections cover topics such as network science, hashing, BIM, 4D simulations, engineer–computer interaction, support vector machines, cloud computing and the complexity of distributed systems. Additional examples on topics such as linear programming and computer graphics have been introduced. Since this book covers fundamentals, much information has not changed. Nevertheless, where relevant, references have been added to provide more contemporary bibliographic sources.
This edition was prepared while the second author was in Singapore as a Principal Investigator in the Future Cities Laboratory of the Singapore-ETH Centre (SEC). Support from the Centre and its Director, Professor Gerhard Schmitt, is gratefully acknowledged.
Benny Raphael and Ian F. C. Smith Singapore March 2013
What makes computer-aided-engineering software a good return on investment? This is a more difficult question to answer than it seems; astonishingly, not many practising engineers have given it much consideration. The term ‘engineering software’ is used in this book to include commercial software as well as software that is developed by engineers at their place of work to do specific tasks. Let us briefly evaluate the returns on investment that are currently observed by practising engineers for both types of engineering software.
In-house engineering software is often written by engineers who have little or no formal training in computer science. Furthermore, development of this software often finishes before they acquire an adequate understanding of the task. Under such circumstances, software usually works correctly for a short time only and only in the presence of its developers. When developers leave the organization, utilization drops and new people have to start again. Even if key people are still present, changes in requirements may make software obsolete and difficult to revise. Such software rarely provides a good return on investment. Over the past decades, redevelopment of software has been repeated countless times in engineering firms and industries at a tremendous cost.
The use of commercial software packages is commonplace in engineering practice. University educators also employ software packages to support teaching of concepts and practical aspects related to topics such as analysis, drafting and design. Learning to use engineering software is seldom easy, and the difficulty is often exacerbated by regular releases of new versions. What is the best software package to use for a given task? Why is one package better than others? Which package is most likely to adapt to changing needs? Such questions are usually answered only by installing a trial version and testing it.
While these activities are essential, a more fundamental knowledge of software attributes does help to avoid costly mistakes. Poor application of commercial software can often be traced to a lack of understanding of the fundamentals of computer-aided engineering. When one considers the number of computer-aided engineering products that have been proposed over the years, it is easy to conclude that few commercial software products currently provide a positive return on investment, either in the classroom or in practice. Some practitioners have commented that the best investment they have ever experienced has provided only a marginally positive return on initial costs and operating expenses.
This situation is somewhat troubling. Many current engineering applications are defective and the understanding of fundamental concepts by users and commercial software developers is low. Nevertheless, the use of computers in engineering is growing exponentially. Advances are feeding an unstoppable trend, advances in areas such as internet applications, multimedia, computer-aided design, model-based diagnosis, autonomous computing, speech recognition and synthesis, adaptive control, wearable computing, concurrent engineering, robotics, computer vision and visualization, including augmented and virtual reality. Therefore, this situation may become even worse. Also, this list is hopelessly incomplete; there are many more areas that are contributing to a vast, multidimensional array of potentially useful software for engineers.
The objective of this book is to present fundamental computing concepts that have an impact on engineering tasks in all areas of traditional engineering. Here ‘traditional’ refers to those engineering areas that do not have a direct relationship with electrical and computing engineering. Examples in this book are drawn mostly from elementary engineering mechanics because this is common domain knowledge within a wide range of traditional engineering disciplines. Particular attention is given to aspects of fundamental computing that are independent of specific tasks. For those who wish to go further into individual topics, this book will clarify terminology and the basic concepts, making it easier to continue studying.
This book does not contain information related to programming languages and commercial software. It has no availability lists and product comparisons. It contains information that is expected to remain relevant for a long time, perhaps throughout a career. Also, the topics are presented according to the engineering context. For example, the chapter on knowledge systems does not cover knowledge acquisition and representation in detail since successful systems in engineering are developed by domain specialists who work with knowledge representations that are mostly predefined.
Raphael would like to thank the late C. S. Krishnamoorthy and S. Rajiv (Indian Institute of Technology, Madras) for introducing him to the field of computer-aided engineering and to B. Kumar (Glasgow Caledonian University) and I. MacLeod (University of Strathclyde) for many discussions that have contributed to the development of several themes in the book. He also gratefully acknowledges that the knowledge of software engineering practices he acquired while working at Infosys Technologies, Bangalore, has significantly influenced certain concepts.
Smith is indebted to several people who provided support over many years. M. A. Hirt and J.-C. Badoux originally placed their faith in several initial studies in the 1980s and helped obtain the first of many research contracts and industrial collaborations. In the early 1990s B. Faltings accepted Smith, coming from a traditional engineering field, into his laboratory in the Computer Science Department for a five-year period. That experience initiated some of the themes in this book. In particular, key concepts in Chapters 1, 2, 7 and 9 come from teaching and research activities carried out in collaboration with B. Faltings; some examples in Chapters 2 and 7 are taken from course notes. Finally, many insights came from collaboration with G. Schmitt (Swiss Federal Institute of Technology in Zurich) during ten years of projects.
The authors are grateful to several people who contributed directly and indirectly to this book. K. Shea (Cambridge University) contributed to the first set of lectures on these topics. S. Fenves (Carnegie Mellon University and US National Institute of Standards) reviewed most chapters and provided valuable comments. He also inspired some examples. A few examples and exercises were prepared by Y. Robert-Nicoud. T. Schumacher proofread chapters and also provided technical input. M. Jirasek contributed to concepts in Chapter 6. J.-L. Guignard prepared the final versions of all the figures. Many others at the Applied Mechanics and Computing Laboratory of the Swiss Federal Institute of Technology, Lausanne (EPFL) provided comments and suggestions.
The need for this book was identified after discussions with colleagues from around the world revealed that fundamental knowledge of important aspects of computing was hindering successful applications in traditional engineering fields. We would like to recognize the International Association for Bridge and Structural Engineering, the American Society of Civil Engineers and the European Group for Intelligent Computing in Engineering for organizing meetings where many of these discussions took place.
Benny Raphael and Ian F. C. Smith EPFL, Lausanne November 2002
What there is to learn: For many important engineering tasks computers should provide choices, not answers. The type of engineering task will determine important aspects of computer support. The ‘open-world’ nature of engineering is a difficult challenge.
The success of computer software in engineering rarely depends upon the performance of the hardware and the system-support software. Computer applications have to match engineering needs. Although the principal objective of this book is to introduce fundamental concepts of computing that are relevant to engineering tasks, it is essential to introduce first the fundamental concepts of engineering that are relevant to computing. This is the objective of this first chapter, and key concepts are described in the following sections.
From the perspective of fundamental logic, engineering reasoning is categorized into the following three types of inference: deduction, abduction and induction. The best way to understand inference types is through an example. Consider a bar of length L and area A that is fixed at one end and loaded at the other with a load Q (Figure 1.1).
Figure 1.1 An axially loaded bar
In this example there are:
two facts: Q and δ
one causal rule: if Q then δ (a causal rule has the form: if cause then effect)
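For a linearly elastic bar, this causal rule can be given a quantitative form. The sketch below assumes linear elastic behaviour and introduces the Young's modulus E, which is not among the facts stated in the example:

```latex
% Deformation of an axially loaded bar (linear elasticity assumed;
% E, the Young's modulus, is an added assumption, not an original fact)
\delta = \frac{Q L}{E A}
```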
The difference between deduction, abduction and induction originates from what is given and what is inferred. Deduction begins with one fact Q and the causal rule. With these two items as given information, we infer that a deformation δ occurs. Abduction begins with the effect δ and the causal rule. Starting with these two items, we infer that a load Q is present. Induction begins with the two facts, Q and δ, and then infers the causal rule. Table 1.1 summarizes the important elements of these three types of inference.
Table 1.1 Three types of inference
Type of inference   Given information   Inferred information
Deduction           Cause and rule      Effect
Abduction           Effect and rule     Cause
Induction           Cause and effect    Rule
Logically, deduction is the only legal inference type in an open world. The use of abduction and induction requires a closed-world hypothesis. A closed-world hypothesis involves the assumption that all facts and all rules are known for a given task. Such hypotheses are typically found at the beginning of mathematical proofs using statements such as ‘for all x …’ and ‘y is an element of …’.
For illustration, refer to the example of the loaded bar. We can state with some confidence that if there is a load Q and when the causal rule, if Q then δ, is known, we can infer that a deformation δ is expected (deduction). However, when we know the rule and only that a deformation δ is observed, we cannot necessarily say in an open world that this deformation was caused by a load Q (abduction). For example, the deformation may have been caused by an increase in temperature. Similarly, just because we observe that there is a load Q and a deformation δ, we cannot necessarily formulate the rule, if Q then δ (induction). Other facts and contextual information may justify the formulation of another rule. Abduction and induction become valid inferences for the example on hand only when we make the closed-world hypothesis that the only relevant items of information are the two facts and one rule.
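The ambiguity of abduction in an open world can be made concrete with a small runnable sketch. The encoding of causal rules as a Python dictionary is our illustration, not the book's notation:

```python
# Two causal rules that share the same effect: in an open world,
# abduction from the effect alone cannot single out the cause.
rules = {
    "load Q applied": "deformation delta",
    "temperature increase": "deformation delta",
}

def deduce(cause):
    """Deduction: given a cause and the rules, infer the effect."""
    return rules.get(cause)

def abduce(effect):
    """Abduction: given an effect, collect every cause that could explain it."""
    return [cause for cause, eff in rules.items() if eff == effect]

print(deduce("load Q applied"))     # -> 'deformation delta' (a unique effect)
print(abduce("deformation delta"))  # -> two candidate causes: choices, not a single answer
```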
Scientists and mathematicians frequently restrict themselves to closed worlds in order to identify in a formal manner kernel ideas related to particular phenomena. Also, the closed-world idea is not limited to research. Job definitions in factories, offices and many other places often only require decision-making based on closed, well-defined worlds. However, engineers (as well as other professionals such as doctors and lawyers) are not allowed to operate only in closed worlds. Society expects engineers to identify creative solutions in open worlds. For example, engineers consider social, economic and political aspects of most of their key tasks before reaching decisions. Such aspects are difficult to define formally. Moreover, attempts to specify them explicitly, even partially, are hindered by environments where conditions change frequently.
Therefore, engineers must proceed carefully when they perform abductive and inductive tasks. Since a closed-world hypothesis is not possible for most of these tasks, any supporting software needs to account for this. For example, engineering software that supports abductive tasks has to allow for frequent user interaction in order to accommodate open-world information. Also, providing good choices rather than single answers is more attractive to engineers. Differences in ‘world hypotheses’ provide the basic justification for variations between requirements of traditional computer software, such as those applied to book-keeping, and those applied to certain tasks in engineering. The next section will expand on this theme by providing a classification of engineering tasks according to their type of inference.
Important engineering tasks are simulation, diagnosis, synthesis and interpretation of information. All of these tasks transform one type of information to another through inference. Some more terminology is needed. In this discussion the word structure refers to information that is commonly represented in drawings, such as dimensions, location, topography and other spatial information, as well as properties of materials and environmental influences such as temperature, humidity and salinity of the air. This category also includes the magnitude, direction and position of all loading that is expected to be applied to the object. The word behaviour is used to include parameters that describe how the structure reacts, such as deformations, stresses, buckling, shrinkage, creep and corrosion.
In simulation, causes are applied to particular structures in order to observe effects (or more precisely, behaviour). For example, a factory process is simulated to observe output and other aspects, such as sensitivity to breakdown of a machine. Analysis is a special case of simulation. Analysis is performed when behavioural parameters are required for a given physical configuration in a particular environment (the word structure is used loosely to define this information). For example, bridges are analysed for various loading (such as wind, truck and earthquake) in order to determine behaviour that is expressed in terms of stresses and deflections. These transformations are summarized in Figure 1.2.
Figure 1.2 Four common engineering tasks. The arrows indicate transformation of information through inference
Diagnosis can be thought of as the reverse of simulation. For this task, observed effects are considered with the physical configuration and the environment to infer causes. For example, maintenance engineers observe the vibrations of motors in order to identify the most likely causes. Similarly, from an information viewpoint, synthesis is the reverse of analysis, where target behaviour is used to infer a physical configuration within an environment. For example, engineers design building structures to resist many types of loading in such a way that stresses and deflections do not exceed certain values.
Interpretation of information includes a wide range of engineering activities. For example, interpretation of information refers to tasks that infer knowledge from activities such as load tests, commissioning procedures and laboratory experiments. Test results in the form of values of behavioural parameters are combined with known structure to infer models of behaviour. The form of the model may already be known; in such cases, testing is used for model calibration. For example, engineers test aircraft components in laboratories to improve knowledge of strength against fatigue cracking. These results are often used to improve models that predict behaviour of other components. Although such models are usually expressed in terms of formulae, they are transformable into causal rules, as in the example in Figure 1.1.
Each arrow in Figure 1.2 refers to an inference process where the left-hand side is the input, and the right-hand side is the output. Reviewing the definitions for deduction, abduction and induction that are given in Section 1.1, a classification of engineering tasks in terms of inference types becomes possible. Since deduction begins with known causes and causal rules to infer effects, analysis and simulation are examples of deduction. Abduction begins with known effects and causal rules to infer causes; therefore, diagnosis and synthesis are abductive inferences. Interpretation of information begins with known causes and known effects to infer causal rules (models of behaviour), therefore this task is an example of induction. Results are summarized in Table 1.2.
Table 1.2 Engineering tasks classified into inference types
Engineering task                Inference type
Analysis                        Deduction
Simulation                      Deduction
Diagnosis                       Abduction
Synthesis                       Abduction
Interpretation of information   Induction
Therefore, important engineering tasks of diagnosis, synthesis and interpretation of information involve logical inferences that are not reliable in open worlds (abduction and induction). This result has a very important impact on requirements for software that is intended to support such tasks. This classification also provides clues to an explanation for the early success of analysis and simulation software. Nevertheless, it can be argued that engineering tasks that provide the greatest added value to society are mostly abductive and inductive. Therefore, the potential gain with appropriate computer-aided engineering support is correspondingly greater as well. It is ‘just’ harder to do things correctly.
Indeed, failure to recognize logical differences between engineering tasks has been a central reason for failures of software such as early artificial intelligence applications (expert systems) proposed during the last quarter of the twentieth century (see Chapter 7 for further discussion). More generally, software applications that support abduction and induction in open worlds are much more sensitive to changing knowledge and contexts. Therefore, maintaining such software over long periods is more difficult than software that supports deduction. The open-world nature of engineering presents important challenges to those who wish to use computers to support abductive and inductive tasks.
In Section 1.2 we explained that inference processes transform information from one type, such as structural information, to another type, such as behavioural information. How many types of information are there? What is the best way to organize engineering information so that we can obtain the most support from computer-aided engineering (CAE)?
These questions have been the subject of much discussion. A framework that has been thoroughly discussed by CAE researchers and has stood the test of time is the function–behaviour–structure schema. In this schema, information is divided into the three categories that form its name. Information in the function category includes functional requirements such as design criteria, specifications, desires of society and purpose. The categories of behaviour and structure have been defined in the previous section.
The transformations associated with these categories of information have been collected into a schema that was initially proposed by Gero (1990). For the purpose of describing this schema systematically, a design example is used in the following discussion. Subsequent discussion demonstrates the importance of this schema for a range of engineering activities.
At the beginning of a design activity, engineers identify functional requirements for future objects through contacts with clients and meetings with other people that could influence design decisions. Engineers then transform these requirements into required behaviour. For example, the functional requirement of a building to provide safe shelter for its occupants leads to behaviour requirements related to strength and stability. Behaviour requirements are different from functional requirements because behavioural parameters are explicitly mentioned. Other requirements for occupant comfort lead to serviceability requirements such as maximum beam deflections. This transformation is a task called formulation and it is shown schematically in Figure 1.3.
Figure 1.3 Formulation
In the second stage of design, parameters associated with the required behaviour, along with environmental parameters such as loading, are used to generate spatial information within the structure category. For example, the foundations and load-carrying elements of the building are chosen and sized during this stage. As introduced in the previous section, this transformation is a task called synthesis and this is described schematically in Figure 1.4.
Figure 1.4 Synthesis added to Figure 1.3
Once the as-designed structural information has been completed, the structure is then analysed in order to obtain predicted behaviour. In the building example, the structural configuration would be analysed to determine lateral deflections.1 Such predicted behaviour is then compared during an evaluation with required behaviour. Analysis and comparison tasks are included in Figure 1.5.
Figure 1.5 Analysis and evaluation
Results of the evaluation may lead to a new formulation or to new synthesis tasks and new analyses. These additional tasks are particularly likely if the evaluation reveals that predicted behaviour is unsatisfactory. For example, if predicted building deflections exceed the maximum deflections fixed during formulation, designers may reformulate the required behaviour, redo the synthesis so that the building is stiffened, or rework the analysis to provide a more accurate prediction. These three actions are often not equivalent in their ease of application and their effectiveness. For example, if the formulated requirements are parts of legal documents (such as building codes), reformulating required behaviour is not possible. Also, reworking the analysis may not provide the necessary correction to the calculations. The best course of action is usually to redo the synthesis step. If the designer is satisfied with the results of all the evaluations, the construction task is begun. Adding this to the schema results in Figure 1.6.
Figure 1.6 Adding the construction task
At this point in the process, engineers are often motivated to question the applicability of two general assumptions. The first assumption is that the as-built structure is the same as the as-designed structure. Since structural information in this schema includes information related to the environment, including loading, such an assumption is notoriously risky. The second assumption is that the results of the analysis task are sufficiently accurate for the comparison with required behaviour. Accuracy implies other, more specific assumptions related to material properties (for example, homogeneity, isotropy, linear elastic behaviour, and so on), deflections, and boundary conditions such as perfect pins and absolute fixity.
Often these two general assumptions are not acceptable. For example, engineering products are often not built exactly as they are designed; real loading may be very different from design loading; material may be non-homogeneous; deflections could lead to geometric non-linearities; finally, engineers know that all connections between elements are neither perfectly rigid nor perfectly free. Indeed, the evaluation task in Figure 1.6 may seem dangerously far away from the as-built structural information in many practical situations.
When too much doubt surrounding these assumptions exists and when engineers determine that such uncertainties may have an important influence on subsequent decisions, the structure might be monitored. The monitoring task provides a third type of behaviour, measured behaviour, and this is compared with predictions from analysis to improve models as well as the accuracy of values of parameters that quantify environmental effects. Finally, more accurate structural information may be used to improve predictions of structural behaviour. These tasks are described in Figure 1.7.
Figure 1.7 Addition of monitoring and prediction transformation tasks as well as an additional comparison task to improve models
Opportunities for increasing the details related to required behaviour, such as more specific constraints related to functional requirements, arise when predicted values of behaviour are more reliable. Such activities are often referred to as performance-based engineering.
In 1972, Newell and Simon introduced another classification of tasks that accounts for how well tasks are defined.2 Well-defined tasks are:
carried out in closed worlds
defined by goals explicitly and unambiguously expressed using, for example, a mathematical formula (this may be used as an objective function, see Chapter 8)
described by algorithms that lead to identification of solutions.
Many engineering tasks are decomposed into well-defined tasks in order to simplify matters. This is a common engineering strategy. When applied judiciously, it makes reasonable solutions to complex tasks possible. However, when such simplification is not possible, engineering tasks are usually poorly defined. Poorly-defined tasks occur when at least one of the following aspects is important to the nature of the solution:
open-world conditions prevail
goals are defined only partially
definitions of solutions and their forms are incomplete
procedures for obtaining solutions are not known completely.
In reality, most engineering tasks are poorly defined. Accommodating such conditions while being able to perform useful tasks is the job of engineers. Requirements for software that support engineering tasks are subsequently more challenging for poorly-defined tasks when compared with well-defined tasks.
When tasks begin, engineers are often not able to appreciate every subtlety associated with aspects such as requirements, interdependencies, goals and contextual information. Armour (2000) believed that the core value-added activities are those of acquiring knowledge and reducing ignorance. While much discussion has centred on acquiring knowledge, little attention has been given to its corollary, reducing ignorance. Armour proposed a classification of ignorance into the following five orders:
0th-order ignorance (0OI) – lack of ignorance. This means that answers are available and skills for completing tasks exist. For example, most engineers know how to find the roots of a polynomial equation in one variable.
1st-order ignorance (1OI) – lack of knowledge. This shortfall is exactly quantifiable, tasks required to overcome it are known, and skills exist to complete the tasks. For example, engineers who have learned one programming language can easily learn another and thereby satisfy their 1OI related to the new language.
2nd-order ignorance (2OI) – lack of awareness. This is when it is not known what is not known. For example, having 1OI and not knowing it is 2OI. Some engineers, thinking that computers are just big calculators, suffer from 2OI regarding the topics discussed in this book. They are not aware that computing is a science and not a skill.
3rd-order ignorance (3OI) – lack of process. This order occurs when it is not known how to go about finding out what is not known. For example, there are people who do not realize that going to university is the best way to find out what is not known (2OI) about important aspects of engineering knowledge and methodologies.
4th-order ignorance (4OI) – meta-ignorance. 4OI occurs when one is unaware of the other four orders of ignorance. If you have read and understood this far, then you no longer suffer from 4OI.
The first four orders have a generality that exceeds the boundaries of computer-aided engineering. Nevertheless, they are particularly pertinent for developing and using computer-aided engineering software. Engineers need mechanisms to identify and organize knowledge that is important for the task. Now that 4OI is no longer present, the goal of this book is to provide readers with capabilities to overcome 3OI, 2OI and 1OI when confronted with tasks that involve computers in engineering.
Inference is the basis of engineering reasoning.
There are three types of inference: deduction, abduction and induction.
Use of conclusions from abduction and induction implicitly requires a closed-world assumption.
Engineers work in open worlds.
In open worlds, software that provides choices and not just a single answer is often the best for abductive and inductive tasks.
Engineers deal with three categories of information: function, behaviour and structure.
Many important engineering tasks involve the transformation of this information from one category to another.
Although engineers are required to carry out both well-defined and poorly-defined tasks, the added value to society is usually greater when poorly-defined tasks are successfully completed than when well-defined tasks are successfully completed.
Development of engineering software is a process of information design and construction. This process involves setting up mechanisms through which developers become able to discover the existence of information.
1.1 Find an example of (a) deduction, (b) abduction and (c) induction in engineering.
1.2 For the example in Exercise 1.1b, formulate another causal rule that would make abduction ambiguous.
1.3 Identify two engineering tasks that would be hard to carry out with an open-world hypothesis.
1.4 Should a computer give only one answer when supporting the engineering tasks identified in Exercise 1.3?
1.5 In Figure 1.7, which tasks are deductive, abductive and inductive?
1.6 Name three tasks that are well defined.
1.7 Name three tasks that are poorly defined.
1.8 Which tasks add the most value to society when performed successfully: well-defined or poorly-defined tasks?
Armour, P.G. 2000. The five orders of ignorance. Communications of the ACM, 43, 10, 17–20.
Gero, J.S. 1990. Design prototypes: a knowledge representation schema for design. AI Magazine, 11, 4, 26–48.
Newell, A. and Simon, H. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.
1 There is often more than one synthesis–analysis cycle. For example, once the element layout is determined and section properties are estimated for a building in a synthesis task, this may be analysed to determine bending moments which are then used to fix the cross-sectional dimensions of elements. This information is then used to re-analyse the structure to obtain values for additional behavioural parameters such as deflections.
2 Newell used the terms ‘well structured’ and ‘ill structured’ to classify tasks. For the purposes of avoiding ambiguity in terminology, the word ‘structured’ is replaced by the word ‘defined’ in this text. Also, the word ‘ill’ is replaced by the word ‘poorly’ for clarity.
What there is to learn: Computers cannot complete certain well-defined algorithms in a reasonable time for full-scale engineering tasks. In some cases, faster computers will not help. The efficiency of performing some tasks, on the other hand, is nearly independent of task size. Small changes to algorithms may have big effects. Algorithms can be classified according to their ability to perform well for full-scale tasks. Engineering experience (knowledge of domain heuristics) is essential to perform difficult engineering tasks, even when these tasks are well defined.
Complexity is a central theme in computer science and an important topic in computer-aided engineering (CAE). Indicators of complexity are used to determine whether or not, and if so how well, a task can be performed by a computer. The following three types of complexity are relevant:
computational complexity;
descriptive complexity;
cognitive complexity.
Computational complexity is of interest for well-defined tasks (Chapter 1) when the goal is to find a solution efficiently. Although efficiency can be measured in terms of use of memory and hardware resources, algorithms are traditionally classified according to factors that influence trends in execution time. Such trends are generally independent of hardware characteristics.
Descriptive complexity classifies levels of difficulty associated with providing descriptive structures to information. While much engineering information can be assigned descriptive structures without difficulty (Chapter 3), it is often not easy to describe and classify information describing things such as music, paintings, economic climate and parameters related to politics. Complex engineering tasks also involve descriptive complexity challenges. For example, engineering product and process models have been standardized for only a few subdomains. The descriptive complexity of the remainder is too great to allow standardization.
Cognitive complexity metrics classify difficulties related to describing and simulating human thought for various activities. For example, the ease of simulating many tasks performed by bank tellers is demonstrated by the increased use of machines by clients wishing to perform transactions. On the other hand, tasks performed by professionals such as those in medicine, law and engineering are not easily simulated. Although this is partly due to the ‘open world’ characteristics of their tasks (Chapter 1), professional people are required to consider aspects such as multiple criteria, complex constraints and highly coupled parameters within environments that are changing and difficult to predict. The cognitive complexity associated with such professional activity is high.
This chapter describes fundamental concepts of the first type of complexity only – computational complexity. The other two types of complexity do not yet have fixed and generally accepted classification schemas. Nevertheless, they are both important areas of computer science and many aspects are important to engineers.
This chapter uses a standard notation, called the ‘Big Oh’ notation, to represent trends in execution time with respect to changes in task parameters. Through the use of examples, it is demonstrated that a small change in an engineering task can result in important changes in computational complexity of an algorithm. In the final section, a category of hard tasks is identified, leading to the description of an important fundamental and unsolved research challenge.
An algorithm is a detailed description of a method for performing a task. It consists of a series of steps involving arithmetic and logical operations, input and output, conditional statements and iterations. Iterations refer to repetition of a sequence of steps and are also known as loops. These steps can be implemented by writing a computer program. However, algorithms are usually described in less formal ways than through the syntax of a programming language. Flowcharts and pseudo-code are commonly used for this. Flowcharts provide a pictorial representation of the sequence of steps using standard notation for conditional statements, branching, and so on. However, flowcharts are cumbersome. Pseudo-code provides a more concise representation of algorithms. This is usually written in natural language using constructs similar to that in a programming language. This makes it easy for non-programmers to read and understand the method, and at the same time the steps are clearly defined for programmers. For example, it is easy to recognize loops and conditional statements in pseudo-code. In this book, pseudo-code is mostly used to describe algorithms.
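For illustration, the loop-and-conditional structure that pseudo-code typically expresses is shown below as a short runnable Python sketch; the task (finding the largest value in a list) is our own example, not one from the text:

```python
def find_largest(values):
    """Return the largest value in a non-empty list."""
    largest = values[0]           # start with the first item
    for v in values[1:]:          # iteration (a loop) over the remaining items
        if v > largest:           # conditional statement
            largest = v
    return largest

print(find_largest([3, 7, 2, 9, 4]))  # -> 9
```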
The execution time of a program depends on factors such as task formulation, desired results, algorithm optimality, hardware capacity and operating environment. The first factor is important; a task may be modelled by an unnecessarily complex algorithm, resulting in inefficiency. As will be seen in this chapter, results from one algorithm might not be obtainable within a lifetime, whereas by using a better algorithm a solution could be found within seconds. For example, several algorithms exist for general tasks such as the sorting of numbers. Some are very efficient, while the time taken by others could be several orders of magnitude higher. The most important hardware characteristic that influences execution time is the clock speed of the computer. Other characteristics include available memory, the number of central processing units (CPUs) and the disk access rate. The operating environment may also affect a program's execution time. For example, if every piece of data required by the program needs to be downloaded over the network, input and output operations might consume most of the time.
The relationships between most of these factors and execution time cannot be quantified or expressed in mathematical equations. Moreover, their effects are likely to change with inevitable improvements in hardware and software. However, there is one factor whose influence is likely to remain constant: the relationship between the size of the input and the time taken for the algorithm to terminate. In other words, trends in execution time with respect to the size of the task, as modelled by the algorithm, are largely independent of the technology employed. Certain problems are inherently complex; no amount of improvement in hardware and software is able to reduce their complexity. The following examples provide a qualitative understanding of the relationship between the size of the task and the execution time of a program.
A standard minimization task is considered here to illustrate the influence of the size of the task on the execution time of a program. The function chosen is known as Griewank's function, which is used to test the performance of the optimization algorithms described in Chapter 8. The model of the task is stated as follows:
Given a function f(x), find the value of x in a specified range such that the value of f(x) is a minimum. In order to avoid an infinite number of points in the range, we shall assume that x takes only 21 discrete values at regular intervals within the specified range.

The function is plotted on a grid consisting of 21 points (20 intervals) in Figure 2.1. The global minimum is at x = 0. Other local minima differ from it by only a small amount. In one dimension, the solution is easily found by evaluating all the points and selecting the minimum: the global minimum is approximated by evaluating 21 points. The general form of Griewank's function for n dimensions is:

f(x₁, …, xₙ) = 1 + (1/4000) Σᵢ₌₁ⁿ xᵢ² − Πᵢ₌₁ⁿ cos(xᵢ/√i)   (2.1)

In two dimensions the function is:

f(x₁, x₂) = 1 + (x₁² + x₂²)/4000 − cos(x₁) cos(x₂/√2)

Here, the grid consists of 21 × 21 points. A total of 441 evaluations are required to find the minimum. In three dimensions, the function is:

f(x₁, x₂, x₃) = 1 + (x₁² + x₂² + x₃²)/4000 − cos(x₁) cos(x₂/√2) cos(x₃/√3)

Here, the grid consists of 21 × 21 × 21 points. Generalizing, 21ⁿ evaluations are required to find the minimum of the function in n dimensions. The number of evaluations required for completing the task with 8 variables is more than one billion. Assuming that it is possible to perform 1 billion floating-point operations per second, it would take more than a century to complete the task for 15 variables!
Figure 2.1 Griewank's function in a single variable
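A brute-force grid search over Griewank's function can be sketched in a few lines of Python. The sketch below assumes the standard form of the function in (2.1); the search range of [−600, 600] is our assumption (a common choice for this benchmark), not taken from the text:

```python
import itertools
import math

def griewank(x):
    """Griewank's function for a point x = (x1, ..., xn); see equation (2.1)."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1.0 + s - p

def grid_minimum(n, lo=-600.0, hi=600.0, points=21):
    """Exhaustive search over a regular grid: points**n function evaluations."""
    axis = [lo + k * (hi - lo) / (points - 1) for k in range(points)]
    return min(itertools.product(axis, repeat=n), key=griewank)

# Feasible only for small n: the grid has 21**n points,
# e.g. 441 for n = 2 but over 37 billion for n = 8.
print(grid_minimum(2))  # -> (0.0, 0.0), the global minimum
```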
The relationship between the size of the task and the execution time is of fundamental importance in determining the feasibility of an algorithm. This relationship can be expressed in mathematical terms. Several schemes have been suggested, including (from Wilf, 2002):
Little oh, o( )
Big Oh, O( )
Theta, Θ( )
Asymptotic, ∼
Omega, Ω( )
These notations are used to express relationships between the size of input and the amount of computational resources required. Since the Big Oh notation is the most widely used for analysing computational complexity, this notation is described in more detail here.
The principal interest is how fast a function grows, rather than the actual values of the function. For example, the exact time taken to solve a problem with n variables is not very important here. Whether the time taken increases as fast as n, n² or 2ⁿ is of greater interest, since this determines how the algorithm scales up to tasks that involve greater values of n. This in turn determines whether the algorithm is feasible for solving practical tasks, since most practical tasks involve large values of n.
Define f(n) = O(g(n)) if there exists a positive integer n₀ and a positive real number constant c such that:

|f(n)| ≤ c |g(n)| for all n ≥ n₀   (2.2)

In the context of analysing computational complexity, n represents the task size (a positive integer); f(n) is the amount of computational resource required (execution time); and g(n) is a reference function against which the rate of growth of f(n) is compared. f(n) is called the real function and g(n) the reference function. The Big Oh notation provides an indication of the order of magnitude of the computational resources required to carry out the task.
For example, consider the polynomial function:

f(n) = 4n³ + 3n + 5

We can say that f(n) = O(n³) because for all n ≥ 1, there exists a positive real number constant, in this case c = 12, such that:

|4n³ + 3n + 5| ≤ 12n³   (2.3)

In order to verify this result, we proceed as follows. For n ≥ 1, the left-hand side (LHS) of (2.3) is positive. Hence we may drop the absolute value symbol and write the expression to be proven as:

4n³ + 3n + 5 ≤ 12n³

This inequality is valid for all n ≥ 1, since at n = 1 both sides equal 12 and the right-hand side grows faster thereafter; hence expression (2.3) is verified. This is generalized in Section 2.2.2.1.
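The bound can also be spot-checked numerically. A minimal sketch, using the polynomial and the constant c = 12 assumed in the example above:

```python
# Spot-check |4n^3 + 3n + 5| <= 12 * n^3 for n = 1 .. 10000,
# i.e. the instance of (2.2) used in the example above.
def f(n):
    return 4 * n**3 + 3 * n + 5

assert all(abs(f(n)) <= 12 * n**3 for n in range(1, 10001))
print("bound holds for n = 1 .. 10000")
```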
By using the Big Oh notation, we omit many details of the function and concentrate on the orders of magnitude of values. Also, we are not interested in the specific values of c and n₀, provided such values exist. Omitting these details allows classification of the level of complexity of an algorithm into categories such as linear, polynomial and exponential. This classification is described in more detail in Section 2.2.3.
The following definitions are necessary for the discussion in this section:
Upper and lower bounds of functions are estimated through mathematical analyses without finding the maximum and minimum of the function in a given range. (Finding the maximum of a function is a difficult task.) For example, consider the function f(x) = 5 − 2x² in the range [−1, 1]. A ‘loose’ upper bound is found by taking the absolute value of the maximum of each term in the expression, that is:

|f(x)| ≤ |5| + |−2x²| ≤ 5 + 2 = 7

Thus an estimate of the upper bound of the function is 7. The real maximum of the function in this range is 5. Through other procedures, it is possible to find upper bounds that are closer to the real maximum.
The Big Oh notation provides an upper bound for a function's rate of growth; that is, the rate of growth is certain not to exceed a value. However, the definition of Big Oh does not place any restriction on how close the real function is to the reference function. The difference between the real function and the reference function is the tightness of the bound. For example, the linear function f(n) = n may be written as f(n) = O(n²) (a polynomial function) as well as f(n) = O(2ⁿ) (an exponential function). However, for practical purposes, it is best to express f(n) in terms of a function having the lowest possible rate of growth, since we are interested in a ‘tight’ upper bound. The following examples demonstrate important details in complexity analysis. Although these mathematical proofs can be skipped, their results should be noted since they are used later in this chapter.
(Equations (2.4)–(2.15), a series of Big Oh results for polynomial, logarithmic and exponential functions in which the exponents are integers greater than or equal to zero, are stated and verified here; these results are used later in this chapter.)
The Big Oh notation provides an indication of the trend of increasing time taken by an algorithm as the size of the task increases and hence is used to classify the complexity of the algorithm. The following classification is useful to denote the way an algorithm scales up with increasing values of :
logarithmic, O(log n);
linear, O(n);
polynomial, O(nᵏ);
exponential, O(kⁿ);
factorial, O(n!);
double exponential, O(2^(2ⁿ)).
The growth rates of these functions are shown in Figure 2.2. A logarithmic function remains almost constant for large task sizes. The practical implication of this is that after a certain value of n, the time taken remains almost constant. This behaviour is exploited extensively in many successful software applications. The rate of growth (derivative) of a linear function is constant, and hence linear algorithms lead to solutions in many practical tasks. The rate of growth of polynomial functions increases with the degree of the polynomial. An algorithm whose execution time is a lower-degree polynomial is better than one whose execution time is a higher-degree polynomial. Exponential and factorial algorithms are not usually feasible for practical tasks.
Figure 2.2 Growth rate of functions
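The divergence between these classes can be seen by tabulating a few values; a minimal sketch (the sample values of n are arbitrary):

```python
import math

# Tabulate representative functions from each complexity class
# to show how quickly they diverge as n grows.
print(f"{'n':>4} {'log2 n':>8} {'n':>6} {'n**2':>8} {'2**n':>10} {'n!':>20}")
for n in (5, 10, 15, 20):
    print(f"{n:>4} {math.log2(n):>8.2f} {n:>6} {n**2:>8} {2**n:>10} "
          f"{math.factorial(n):>20}")
```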
A task that permits the use of a reasonable or polynomial-time algorithm is said to be tractable. If the formulation of a task permits only the use of algorithms of exponential complexity or higher, the task formulation is termed intractable. There are many engineering task formulations that are considered to be intractable. There are others that are considered to be tractable and for which very efficient algorithms have already been found. Examples of both categories are described in the following sections.
Searching for an item in a list is a common task in engineering and non-engineering contexts. We search for information such as telephone numbers and addresses in directories, book titles in library catalogues, and so on. From experience, we know that searching in an ordered list is much simpler than searching in an unordered list. In this example, it will be shown that the task of searching in an ordered list has the tightest complexity of O(log n), where n is the number of items in the list. Searching in an unordered list has complexity O(n) (Section 2.2.4.2).
Consider an ordered list (sorted in ascending order) as shown in Figure 2.3.
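The O(log n) behaviour comes from halving the search interval at every step. A minimal Python sketch of binary search over an ascending list (the sample list is our illustration):

```python
def binary_search(items, target):
    """Search an ascending list; each step halves the interval -> O(log n)."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid           # index of the target
        elif items[mid] < target:
            low = mid + 1        # discard the lower half
        else:
            high = mid - 1       # discard the upper half
    return -1                    # not found

print(binary_search([2, 5, 8, 13, 21, 34], 13))  # -> 3
```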
