Software Technology

Description

A comprehensive collection of influential articles from one of IEEE Computer magazine's most popular columns.

This book is a compendium of extended and revised publications that have appeared in the "Software Technologies" column of IEEE Computer magazine, which covers key topics in software engineering such as software development, software correctness and related techniques, cloud computing, self-managing software, and self-aware systems. Emerging properties of software technology are also discussed, which will help refine the developing framework for creating the next generation of software technologies and help readers predict future developments and challenges in the field. Software Technology provides guidance on the challenges of developing software today and points readers to where the best advances are being made. Filled with one insightful article after another, the book serves to inform the conversation about the next wave of software technology advances and applications. In addition, the book:

* Introduces the software landscape and challenges associated with emerging technologies

* Covers the life cycle of software products, including concepts, requirements, development, testing, verification, evolution, and security

* Contains rewritten and updated articles by leaders in the software industry

* Covers both theoretical and practical topics

Informative and thought-provoking throughout, Software Technology is a valuable book for everyone in the software engineering community, one that will inspire as much as it will teach all who flip through its pages.




CONTENTS

Cover

Series Page

Title Page

Copyright

Foreword

Preface

Part I: The Software Landscape

Part II: Autonomous Software Systems

Part III: Software Development and Evolution

Part IV: Software Product Lines and Variability

Part V: Formal Methods

Part VI: Cloud Computing

Acknowledgments

List of Contributors

Part I: The Software Landscape

Chapter 1: Software Crisis 2.0

1.1 Software Crisis 1.0

1.2 Software Crisis 2.0

1.3 Software Crisis 2.0: The Bottleneck

1.4 Conclusion

References

Chapter 2: Simplicity as a Driver for Agile Innovation

2.1 Motivation and Background

2.2 Important Factors

2.3 The Future

2.4 Less Is More: The 80/20 Principle

2.5 Simplicity: A Never Ending Challenge

2.6 IT Specifics

2.7 Conclusions

Acknowledgments

References

Chapter 3: Intercomponent Dependency Issues in Software Ecosystems

3.1 Introduction

3.2 Problem Overview

3.3 First Case Study: Debian

3.4 Second Case Study: The R Ecosystem

3.5 Conclusion

Acknowledgments

References

Chapter 4: Triangulating Research Dissemination Methods: A Three-Pronged Approach to Closing the Research–Practice Divide

4.1 Introduction

4.2 Meeting the Needs of Industry

4.3 The Theory–Practice Divide

4.4 Solutions: Rethinking Our Dissemination Methods

4.5 Obstacles to Research Relevance

4.6 Conclusion

Acknowledgments

References

Part II: Autonomous Software Systems

Chapter 5: Apoptotic Computing: Programmed Death by Default for Software Technologies

5.1 Biological Apoptosis

5.2 Autonomic Agents

5.3 Apoptosis within Autonomic Agents

5.4 NASA SWARM Concept Missions

5.5 The Evolving State-of-the-Art Apoptotic Computing

5.6 “This Message Will Self-Destruct”: Commercial Applications

5.7 Conclusion

Acknowledgments

References

Chapter 6: Requirements Engineering for Adaptive and Self-Adaptive Systems

6.1 Introduction

6.2 Understanding ARE

6.3 System Goals and Goals Models

6.4 Self-* Objectives and Autonomy-Assistive Requirements

6.5 Recording and Formalizing Autonomy Requirements

6.6 Conclusion

Acknowledgments

References

Chapter 7: Toward Artificial Intelligence through Knowledge Representation for Awareness

7.1 Introduction

7.2 Knowledge Representation

7.3 KnowLang

7.4 Awareness

7.5 Challenges and Conclusion

References

Part III: Software Development and Evolution

Chapter 8: Continuous Model-Driven Engineering

8.1 Introduction

8.2 Continuous Model-Driven Engineering

8.3 CMDE in Practice

8.4 Conclusion

Acknowledgment

References

Chapter 9: Rethinking Functional Requirements: A Novel Approach Categorizing System and Software Requirements

9.1 Introduction

9.2 Discussion: Classifying Requirements – Why and How

9.3 The System Model

9.4 Categorizing System Properties

9.5 Categorizing Requirements

9.6 Summary

Acknowledgments

References

Chapter 10: The Power of Ten—Rules for Developing Safety Critical Code

10.1 Introduction

10.2 Context

10.3 The Choice of Rules

10.4 Ten Rules for Safety Critical Code

10.5 Synopsis

References

Chapter 11: Seven Principles of Software Testing

11.1 Introduction

11.2 Defining Testing

11.3 Tests and Specifications

11.4 Regression Testing

11.5 Oracles

11.6 Manual and Automatic Test Cases

11.7 Testing Strategies

11.8 Assessment Criteria

11.9 Conclusion

References

Chapter 12: Analyzing the Evolution of Database Usage in Data-Intensive Software Systems

12.1 Introduction

12.2 State of the Art

12.3 Analyzing the Usage of ORM Technologies in Database-Driven Java Systems

12.4 Coarse-Grained Analysis of Database Technology Usage

12.5 Fine-Grained Analysis of Database Technology Usage

12.6 Conclusion

12.7 Future Work

Acknowledgments

References

Part IV: Software Product Lines and Variability

Chapter 13: Dynamic Software Product Lines

13.1 Introduction

13.2 Product Line Engineering

13.3 Software Product Lines

13.4 Dynamic SPLs

References

Chapter 14: Cutting-Edge Topics on Dynamic Software Variability

14.1 Introduction

14.2 The Postdeployment Era

14.3 Runtime Variability Challenges Revisited

14.4 What Industry Needs from Variability at Any Time?

14.5 Approaches and Techniques for Dynamic Variability Adoption

14.6 Summary

14.7 Conclusions

References

Part V: Formal Methods

Chapter 15: The Quest for Formal Methods in Software Product Line Engineering

15.1 Introduction

15.2 SPLE: Benefits and Limitations

15.3 Applying Formal Methods to SPLE

15.4 The Abstract Behavioral Specification Language

15.5 Model-Centric SPL Development with ABS

15.6 Remaining Challenges

15.7 Conclusion

References

Chapter 16: Formality, Agility, Security, and Evolution in Software Engineering

16.1 Introduction

16.2 Formality

16.3 Agility

16.4 Security

16.5 Evolution

16.6 Conclusion

Acknowledgments

References

Part VI: Cloud Computing

Chapter 17: Cloud Computing: An Exploration of Factors Impacting Adoption

17.1 Introduction

17.2 Theoretical Background

17.3 Research Method

17.4 Findings and Analysis

17.5 Discussion and Conclusion

References

Chapter 18: A Model-Centric Approach to the Design of Resource-Aware Cloud Applications

18.1 Capitalizing on the Cloud

18.2 Challenges

18.3 Controlling Deployment in the Design Phase

18.4 ABS: Modeling Support for Designing Resource-Aware Applications

18.5 Resource Modeling with ABS

18.6 Opportunities

18.7 Summary

Acknowledgments

References

Index

End User License Agreement

List of Tables

Table 3.1

Table 3.2

Table 3.3

Table 3.4

Table 4.1

Table 4.2

Table 4.3

Table 9.1

Table 9.2

Table 12.1

Table 12.2

Table 12.3

Table 12.4

Table 12.5

Table 14.1

Table 14.2

Table 17.1

Table 17.2

List of Illustrations

Figure 1.1

Figure 1.2

Figure 1.3

Figure 2.1

Figure 2.2

Figure 2.3

Figure 3.1

Figure 3.2

Figure 3.3

Figure 3.4

Figure 3.5

Figure 3.6

Figure 4.1

Figure 4.2

Figure 4.3

Figure 4.4

Figure 4.5

Figure 4.6

Figure 4.7

Figure 4.8

Figure 5.1

Figure 5.2

Figure 6.1

Figure 6.2

Figure 7.1

Figure 7.2

Figure 7.3

Figure 7.4

Figure 7.5

Figure 8.1

Figure 8.2

Figure 8.3

Figure 9.1

Figure 9.2

Figure 9.3

Figure 9.4

Figure 9.5

Figure 9.6

Figure 11.1

Figure 12.1

Figure 12.2

Figure 12.3

Figure 12.4

Figure 12.5

Figure 12.6

Figure 12.7

Figure 12.8

Figure 12.9

Figure 12.10

Figure 12.11

Figure 12.12

Figure 12.13

Figure 13.1

Figure 14.1

Figure 14.2

Figure 14.3

Figure 14.4

Figure 15.1

Figure 16.1

Figure 18.1

Figure 18.2

Figure 18.3

Figure 18.4

Figure 18.5


IEEE Press Editorial Board

Ekram Hossain, Editor in Chief

Giancarlo Fortino

Andreas Molisch

Linda Shafer

David Alan Grier

Saeid Nahavandi

Mohammad Shahidehpour

Donald Heirman

Ray Perez

Sarah Spurgeon

Xiaoou Li

Jeffrey Reed

Ahmet Murat Tekalp

About IEEE Computer Society

IEEE Computer Society is the world's leading computing membership organization and the trusted information and career-development source for a global workforce of technology leaders including: professors, researchers, software engineers, IT professionals, employers, and students. The unmatched source for technology information, inspiration, and collaboration, the IEEE Computer Society is the source that computing professionals trust to provide high-quality, state-of-the-art information on an on-demand basis. The Computer Society provides a wide range of forums for top minds to come together, including technical conferences, publications, and a comprehensive digital library, unique training webinars, professional training, and the TechLeader Training Partner Program to help organizations increase their staff's technical knowledge and expertise, as well as the personalized information tool myComputer. To find out more about the community for technology leaders, visit http://www.computer.org.

IEEE/Wiley Partnership

The IEEE Computer Society and Wiley partnership allows the CS Press authored book program to produce a number of exciting new titles in areas of computer science, computing, and networking with a special focus on software engineering. IEEE Computer Society members continue to receive a 35% discount on these titles when purchased through Wiley or at wiley.com/ieeecs.

To submit questions about the program or send proposals, please contact Mary Hatcher, Editor, Wiley-IEEE Press: Email: [email protected], Telephone: 201-748-6903, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774.

Software Technology

10 Years of Innovation in IEEE Computer

Edited by Mike Hinchey

This edition first published 2018

© 2018 the IEEE Computer Society, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Mike Hinchey to be identified as the author of the editorial material in this work has been asserted in accordance with law.

Registered Office

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

Editorial Office

111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data

Names: Hinchey, Michael G. (Michael Gerard), 1969- editor.

Title: Software technology : 10 years of innovation in IEEE Computer / edited by Mike Hinchey.

Description: First edition. | Hoboken, NJ : IEEE Computer Society, Inc., 2018. | Includes bibliographical references and index. |

Identifiers: LCCN 2018024346 (print) | LCCN 2018026690 (ebook) | ISBN 9781119174226 (Adobe PDF) | ISBN 9781119174233 (ePub) | ISBN 9781119174219 (hardcover)

Subjects: LCSH: Software engineering–History. | IEEE Computer Society–History.

Classification: LCC QA76.758 (ebook) | LCC QA76.758 .S6568 2018 (print) |

DDC 005.1–dc23

LC record available at https://lccn.loc.gov/2018024346

ISBN: 9781119174219

Cover image: © BlackJack3D/Getty Images

Cover design by Wiley

Foreword

Generally, you cannot claim to fully understand software engineering until you have attended at least one day-long planning session. Even then, you may not completely grasp how all the pieces work and interact. Software engineering involves a surprisingly large number of topics, ranging from the purely technical to the unquestionably human, from minute details of code and data to large interactions with other systems, from immediate problems to issues that may not appear for years.

However, if you confine yourself to a single planning meeting, you will never see the dynamic nature of the field, how software engineers learn and grow over the lifetime of a project. Like all engineering disciplines, software engineering is a process of self-education. As engineers work on the problems of building new software systems, they learn the limits of their own ideas and begin the search for more effective ways of creating software.

The great value of this book is that it exposes the educational nature of software engineering. Based on a column that appeared in IEEE Computer magazine, it shows how both researchers and practitioners were striving to learn the nuances of their field and develop ideas that would improve the nature of software. It deals with issues that were prominent during the past decade, but at the same time all of its chapters discuss ideas that have run through the entire history of software and software engineering.

The opening chapter explores the idea of a crisis in software development. Historically, software engineers have pointed to a crisis in the mid-1960s as the start of their field. At that time, all software was custom built: every customer had to develop the software needed to support its business. This approach was challenged by third-generation computers, notably the IBM 360 family, which radically expanded the computer market and created an unfulfilled demand for programmers.

The computing field responded to the crisis of the 1960s in two ways. First, it created a software industry that would sell the same piece of software to multiple customers. Second, it created the field of software engineering in order to create software products that could be run at multiple sites by different companies. Any present crisis in software, according to author Brian Fitzgerald, is driven by forces similar to those that drove the 1960s crisis. However, in the second decade of the twenty-first century, the software market is shaped by a sophisticated group of users, or "digital natives." These individuals were born into a technological world and make strong demands on software and on software engineering.

Many of the chapters that follow address the perception that we are unable to produce the amount of software that we need. Cloud computing and Agile development have taken over the position held by the software industry and the original software engineering models in the 1960s. The cloud is an approach that offers computing as a service. More than its 1960s equivalent, time-sharing services, it has the potential of delivering software to a large market and allowing customers to buy exactly the services that they need. However, cloud computing demands a new kind of engineering and design, as a single piece of software has to satisfy a large and diverse customer base. So, in this book, we find numerous chapters dealing with the complexity of software and the need to satisfy sophisticated customers without producing burdensome and inefficient programs. In these chapters, we find that many of the traditional techniques of software engineering need to be adapted to modern computing environments. The tool of software product lines, for example, is a common way of reducing complexity. It allows developers to use a common base of technology while limiting the complexity of each application. At the same time, as the chapter by Hallsteinsen, Hinchey, Park, and Schmid notes, software product lines do not always work in dynamic programming environments where many constraints are not known before runtime. Hence, developers need to recognize that the needs of customers may not be known in advance and that the development team may not always be able to identify those needs that will change.

To create software, either for the cloud or any other environment, software engineers have been turning to the ideas of Agile development since 2001, when a group of software engineers drafted the Agile Manifesto to describe a new approach to creating software. The Agile Manifesto drew heavily from the ideas of lean manufacturing, which came to prominence during the 1980s, and from the ideas of Frederick P. Brooks, the original software architect for the IBM 360 operating system. The proponents of Agile argue for a close relationship between customer and developer, a short time line for creating new code, a rigorous approach for testing software, and a distinctive minimalism. They claim that software is best produced by small, self-organized teams. The chapters in this book show how thoroughly the ideas of Agile have permeated software engineering. The chapter by Margaria and Steffen argues that the core idea of Agile is indeed managerial simplicity.

These chapters are valuable not so much for their completeness as for the way they deal with the full scope of cloud-based software and Agile development, as well as the other topics that have gained importance over the past decades. They offer some insight into key topics such as formal methods, model-based software engineering, requirements engineering, adaptive systems, data and knowledge engineering, and critical safety engineering. Above all, these chapters show how experienced software engineers have been thinking about problems, how they have been expanding the methods of the field, and how these software engineers are looking toward the future.

Washington, DC

David Alan Grier

Preface

Although not always obvious, software has become a vital part of our life in modern times. Millions of lines of software are used in many diverse areas of our daily lives ranging from agriculture to automotive systems, entertainment, FinTech, and medicine.

Software is also the major source of innovation and advancement in a large number of areas that are currently highly hyped and covered extensively in the media. These include IoT (Internet of Things), Big Data and data science, self-driving cars, AI and machine learning, robotics, and a range of other areas. Software is the enabler in cyber-physical systems, wearable devices, medical devices, smart cities, smart grids, and smart everything.

More and more industries are becoming highly software intensive and software dependent, and this is not restricted to the IT sector. Automotive manufacturers are using software at an unprecedented level. The world's largest bookstore (Amazon) is in fact a software vendor and a major provider of cloud computing services; one of the largest fleets of cars for hire (Uber) is completely dependent on apps to run its business.

Software development is far from being perfect (as covered by a number of chapters in this book), and new technologies and approaches are constantly emerging and evolving to bring new solutions, new techniques, and indeed entirely new business models.

The Software Technology column of IEEE Computer, published bimonthly for 10 years, produced 60 columns on a broad range of topics in the software arena. Some of the topics covered were short-lived, or were very much the subjective opinion of the authors. Many others, however, have had a long-term effect, or led to further developments and remained research topics for their authors.

Structured into six parts, this book brings together a collection of chapters based on enhanced, extended, and updated versions of various columns that appeared over the 10-year period. They cover a diverse range of topics, but of course all have a common theme: software technology.

Part I: The Software Landscape

In Chapter 1 Fitzgerald warns that we are approaching a new impasse in software development: The pervasiveness of software requires significant improvements in productivity, but this is at a time when we face a great shortage of appropriately skilled and trained programmers.

Margaria and Steffen in Chapter 2 argue that an approach based on simplicity – “less is more” – is an effective means of supporting innovation. They point to a number of organizations who have learned this lesson and highlight a number of technologies that may facilitate agility in software organizations.

Claes et al. in Chapter 3 consider component-based reuse in the context of the complex software ecosystems that have emerged in recent years. They analyze issues of interdependencies that affect software developers and present two case studies based on popular open-source software package ecosystems.

Beecham et al. in Chapter 4 question whether practitioners ever really read academic research outputs and whether academics care enough about practical application of their results to warrant a change in the way they disseminate their work.

Part II: Autonomous Software Systems

In Chapter 5, Hinchey and Sterritt describe an approach to developing autonomous software systems based on the concept of apoptosis, whereby software components are programmed to self-destruct unless they are given a reprieve.

In Chapter 6, Vassev and Hinchey address the issue of how to express requirements for adaptive and self-adaptive systems, recognizing that additional issues need to be addressed above and beyond nonadaptive systems. In Chapter 7, these authors further describe an approach to achieving awareness in software systems via knowledge representation in KnowLang.

Part III: Software Development and Evolution

In Chapter 8, Margaria et al. point out that agile approaches to software development mean less documentation and an emphasis on code. However, they highlight that we should focus on the level of models, not on the code.

In Chapter 9, Broy does exactly that, and approaches functional system requirements in a different way.

In Chapter 10, Holzmann gives guidance on developing high-integrity software for critical applications; in Chapter 11, Meyer presents principles of software testing. This chapter originally appeared in IEEE Computer and is simply reprinted here.

Updating their original column 5 years on, Meurice et al. in Chapter 12 describe the state of the art in the evolution of open-source Java projects that make use of relational database technology.

Part IV: Software Product Lines and Variability

In Chapter 13, Hallsteinsen et al. introduce the field of dynamic software product lines, which brings the concept of software product lines to dynamic, adaptive, and self-adaptive systems as a means of handling variability. This chapter is reprinted here exactly as it appeared in the original column. Many more papers on dynamic software product lines have been published since then, including those by Hallsteinsen et al., but this chapter is widely cited in the literature.

In Chapter 14, Capilla et al. again address dynamic software product lines in the context of the challenges, benefits, problems, and solutions offered by dynamic variability.

Part V: Formal Methods

In Chapter 15, Hähnle and Schaefer consider the role of formal methods in software product lines; in Chapter 16, Bowen et al. consider the interrelationship of formal methods, agile development methods, security, and software evolution.

Part VI: Cloud Computing

While technical aspects of cloud computing have been well-addressed, what makes cloud computing relevant and beneficial to an organization has not been well studied. In Chapter 17, Morgan and Conboy report on a field trial in 10 organizations and what influenced their uptake of cloud computing.

In Chapter 18, Hähnle and Johnsen point to the role of formal methods, executable models, and deployment modeling as a means of moving deployment decisions up the development chain to meet SLAs at lower cost and provide the client with better control of resource usage.

Acknowledgments

First and foremost, I would like to acknowledge Science Foundation Ireland for Grant 13/RC/2094 for the preparation of this book.

I am grateful to David Alan Grier, a regular columnist in IEEE Computer and former President of IEEE Computer Society, for writing such a nice introduction to the collection, to all of the authors of the chapters, and all of the authors who contributed to the column over 10 years without whom this book would not have been possible. Many thanks to Mary Hatcher, Victoria Bradshaw, Vishnu Narayanan, and all at Wiley, as well as Abhishek Sarkari at Thomson Digital, for their assistance and support in the preparation of this book.

Doris Carver was the Editor-in-Chief of IEEE Computer who invited me to edit the book in the first place. Subsequent Editors-in-Chief – Ron Vetter, Carl Chang, and Sumi Helal – were also very supportive. The editors at IEEE Computer Society Press – Chris Nelson, Bob Werner, Yu-Tzu Tsai, and Carrie Clark – did a great job of taking often scrappy notes and turning them into a polished column. Managing Editors Judi Prow and Carrie Clark were always great to work with. Dozens of people contributed to making the column, and hence this book, a success over 10 years. But it would not have been possible without the late Dr. Scott Hamilton, a great editor and a really great friend.

List of Contributors

Sean Baker

Lero – The Irish Software Research Centre

University of Limerick

Limerick

Ireland

Sarah Beecham

Lero – The Irish Software Research Centre

University of Limerick

Limerick

Ireland

Jan Bosch

Department of Computer Science and Engineering

Chalmers University of Technology

Goteborg

Sweden

Jonathan P. Bowen

School of Engineering

London South Bank University

Borough Road

London

UK

Manfred Broy

Institut für Informatik

Technische Universität München

München

Germany

Rafael Capilla

Department of Informatics

Rey Juan Carlos University

Madrid

Spain

Maëlick Claes

COMPLEXYS Research Institute

University of Mons

Belgium

Anthony Cleve

PReCISE Research Center on Information Systems Engineering

Faculty of Computer Science

University of Namur

Namur

Belgium

Kieran Conboy

Lero – The Irish Software Research Centre

NUI Galway

Galway

Ireland

Alexandre Decan

COMPLEXYS Research Institute

Software Engineering Lab

Faculty of Sciences

University of Mons

Mons

Belgium

Brian Fitzgerald

Lero – The Irish Software Research Centre

University of Limerick

Limerick

Ireland

Mathieu Goeminne

COMPLEXYS Research Institute

Software Engineering Lab

Faculty of Sciences

University of Mons

Mons

Belgium

Reiner Hähnle

Department of Computer Science

Software Engineering

Technische Universität Darmstadt

Darmstadt

Germany

Svein Hallsteinsen

SINTEF ICT

Trondheim

Norway

Mike Hinchey

Lero – The Irish Software Research Centre

University of Limerick

Limerick

Ireland

Gerard J. Holzmann

JPL Laboratory for Reliable Software

NASA

Pasadena

CA

USA

Helge Janicke

Software Technology Research Laboratory

De Montfort University

Leicester

UK

Einar Broch Johnsen

University of Oslo

Norway

Anna-Lena Lamprecht

Department of Computer Science and Information Systems

University of Limerick

and Lero – The Irish Software Research Centre

Limerick

Ireland

Tiziana Margaria

Department of Computer Science and Information Systems

University of Limerick

and Lero – The Irish Software Research Centre

Limerick

Ireland

Tom Mens

COMPLEXYS Research Institute

Software Engineering Lab

Faculty of Sciences

University of Mons

Mons

Belgium

Loup Meurice

PReCISE Research Center on Information Systems Engineering

Faculty of Computer Science

University of Namur

Namur

Belgium

Bertrand Meyer

E.T.H. Zürich

Zurich

Switzerland

Lorraine Morgan

Lero – The Irish Software Research Centre

Maynooth University

Maynooth

Ireland

Csaba Nagy

PReCISE Research Center on Information Systems Engineering

Faculty of Computer Science

University of Namur

Namur

Belgium

John Noll

Lero – The Irish Software Research Centre

University of Limerick

Limerick

Ireland

and

University of East London

London

UK

Padraig O'Leary

School of Computer Science

University of Adelaide

Australia

Sooyong Park

Center for Advanced Blockchain Research

Sogang University

Seoul

Republic of Korea

Ita Richardson

Lero – The Irish Software Research Centre

University of Limerick

Limerick

Ireland

Ina Schaefer

Institute of Software Engineering and Vehicle Informatics

Technische Universität Braunschweig

Braunschweig

Germany

Klaus Schmid

Institute of Computer Science

University of Hildesheim

Hildesheim

Germany

Ian Sommerville

School of Computer Science

University of St Andrews

Scotland

UK

Bernhard Steffen

Fakultät für Informatik

TU Dortmund University

Dortmund

Germany

Roy Sterritt

School of Computing

and Computer Science Research Institute

Ulster University

County Antrim

Northern Ireland

Emil Vassev

Lero – The Irish Software Research Centre

University of Limerick

Limerick

Ireland

Martin Ward

Software Technology Research Laboratory

De Montfort University

Leicester

UK

Hussein Zedan

Department of Computer Science

Applied Science University

Al Eker

Bahrain

Part I: The Software Landscape

1 Software Crisis 2.0

Brian Fitzgerald

Lero – The Irish Software Research Centre, University of Limerick, Limerick, Ireland

1.1 Software Crisis 1.0

In 1957, the eminent computer scientist, Edsger Dijkstra, sought to record his profession as “Computer Programmer” on his marriage certificate. The Dutch authorities, probably more progressive than most, refused on the grounds that there was no such profession. Ironically, just a decade later, the term “software crisis” had been coined, as delegates at an international conference in 1968 reported a common set of problems, namely that software took too long to develop, cost too much to develop, and the software which was eventually delivered did not meet user expectations.

In the early years of computing during the 1940s, the computer was primarily used for scientific problem solving. A computer was needed principally for its speed at mathematical calculation, useful in areas such as the calculation of missile trajectories, aerodynamics, and seismic data analysis. The users of computers at the time were typically scientific researchers with a strong mathematical or engineering background who developed their own programs to address the particular areas in which they were carrying out research. For example, one of the early computers, ENIAC (Electronic Numerical Integrator and Calculator), became operational in 1945; by the time it was taken out of service in 1955, it "had probably done more arithmetic than had been done by the whole human race prior to 1945" [1].

During the 1950s, the use of computers began to spread beyond that of scientific problem solving to address the area of business data processing [2]. These early data processing applications were concerned with the complete and accurate capture of the organization's business transactions, and with automating routine clerical tasks to make them quicker and more accurate. This trend quickly spread, and by 1960, the business data processing use of computers had overtaken the scientific one [3]. Once underway, the business use of computers accelerated at an extremely rapid rate. The extent of this rapid expansion is evidenced by the fact that in the United States, the number of computer installations increased more than twentyfold between 1960 and 1970 [4].

However, this rapid expansion did not occur without accompanying problems. The nature of business data processing was very different from the computation-intensive nature of scientific applications. Business applications involved high volumes of input and output, but the input and output peripherals at the time were very slow and inefficient. Also, memory capacity was very limited, and this led to the widespread conviction among developers that good programs were efficient programs, rather than clear, well-documented, and easily understood programs [3]. Given these problems, writing programs required much creativity and resourcefulness on the part of the programmer. Indeed, it was recognized that it was a major achievement to get a program to run at all in the early 1960s [5].

Also, there was no formal training for developers. Programming skills could only be learned through experience. Some programmers were drawn from academic and scientific environments and thus had some prior experience. However, many programmers converted from a diverse range of departments. As Friedman [3] describes it:

People were drawn from very many areas of the organization into the DP department, and many regarded it as an ‘unlikely’ accident that they became involved with computers.

Also, during the 1960s, the computer began to be applied to more complex and less-routinized business areas. Aron [6] identifies a paradox in that as the early programmers improved their skills, there was a corresponding increase in the complexity of the problem areas for which programs had to be written.

Thus, while the term “software” was only introduced in 1958 [7], within 10 years, problems in the development of software led to the coining of the phrase “software crisis” at the NATO Conference in Garmisch [8]. The software crisis referred to the reality that software took longer to develop and cost more than estimated, and did not work very well when eventually delivered.

Over the years, several studies have confirmed these three aspects of the software crisis. For example, in relation to development timescales: Flaatten et al. [9] estimated development time for the average project to be about 18 months – a conservative figure, perhaps, given that other estimates put the figure at about 3 years [10] or even up to 5 years [11]. Also, an IBM study estimated that 68% of projects overran schedules [12]. In relation to the cost, the IBM study suggested that development projects were as much as 65% over budget [12], while a Price Waterhouse study in the United Kingdom in 1988 concluded that £500 million was being lost per year through ineffective development. Furthermore, in relation to performance, the IBM study found that 88% of systems had to be radically redesigned following implementation [12]. Similarly, a UK study found that 75% of systems delivered failed to meet users' expectations. This has led to the coining of the term "shelfware" to refer to those systems that are delivered but never used.

Notwithstanding the bleak picture painted above, the initial software crisis has largely been resolved, even though the Standish Chaos Report continues to report high rates of software project failure – estimated at 68%, for example [13] (see Note 1). Although there has been no "silver bullet" advance, to use Brooks' term [16], which affords an order of magnitude improvement in software development productivity, a myriad of more incremental advances have been made, and software is now routinely delivered on time and within budget, and meets user requirements well. Software is really the success story of modern life. Everything we do – how we work, travel, communicate, and entertain ourselves – has been dramatically altered and enhanced by the capabilities provided by software.

1.2 Software Crisis 2.0

However, a new software crisis is now upon us, one that I term "Software Crisis 2.0." Software Crisis 2.0 is fuelled by a number of "push factors" and "pull factors." Push factors include advances in hardware such as those perennially afforded by Moore's law, multiprocessor and parallel computing, big memory servers, IBM's Watson platform, and quantum computing. Also, concepts such as the Internet of Things and Systems of Systems have led to unimaginable amounts of raw data that fuel the field of data analytics. Pull factors include the insatiable appetite of digital native consumers – those who have never known life without computer technology – for new applications to deliver initiatives such as the quantified self, lifelogging, and wearable computing. Also, the increasing role of software is evident in the concept of software-defined * (where * can refer to networking, infrastructure, data center, or enterprise). The Software Crisis 2.0 bottleneck arises from the inability to produce the volume of software needed to leverage the staggering increase in the volume of data being generated, allied to the enormous computational power offered by the many hardware devices now available, and both complemented by the appetite of the newly emerged "digital native" consumer in a world where software is increasingly the key enabler (see Figure 1.1).

Figure 1.1 Software Crisis 2.0.

1.2.1 Hardware Advances

There are many eye-catching figures and statistics that illustrate the enormous advances in hardware capacity over the past half-century or so. Moore's law, for example, predicted the doubling of hardware capacity roughly every 18 months. To illustrate this in a more familiar context, if one had invested just a single dollar in shares when Moore first made his prediction in 1965, and if the stock market return on those shares had kept pace with Moore's prediction, that individual's net worth would now exceed $17 billion – not bad for a $1 investment. On each occasion when hardware progress has appeared to be halted by an insurmountable challenge in the fundamental laws of physics – the impurity of atoms, the wavelength limits of the light used to sculpt chips, heat generation, or radiation-induced forgetfulness, for example – new advances have emerged to overcome these problems, and we are now moving into the quantum computing era.
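
As a quick sanity check of that compounding claim, here is a minimal Python sketch (the 1965 start date and 18-month doubling period are the figures quoted above; the 2016 end year is an assumption chosen simply to show the order of magnitude):

```python
# Back-of-the-envelope check of the compound-doubling claim above:
# $1 doubling every 18 months, starting in 1965.
def doubling_growth(years, doubling_period_years=1.5, initial=1.0):
    """Value of `initial` after doubling every `doubling_period_years` for `years` years."""
    return initial * 2 ** (years / doubling_period_years)

# Assumed end year of 2016 (not stated in the chapter): 51 years of doubling.
print(f"${doubling_growth(2016 - 1965):,.0f}")  # ~$17,179,869,184, i.e. roughly $17 billion
```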

Moore's law is paralleled by similar “laws” in relation to storage capacity (Kryder's law) and network capacity (Butter's law) that portray similar exponential performance with decreasing costs. Big memory servers are disrupting the technology landscape with servers now capable of providing terabytes of physical memory. This has led to the observation that disks have become the new magnetic tape, as it is now possible to use physical memory for random access operations and to reserve the traditional random access disk for purely sequential operations.

1.2.1.1 Parallel Processing

In the era of Software Crisis 1.0, the programming paradigm was one based on serial computation: instructions were executed one at a time on a single processor. Parallel processing allows programs to be decomposed to run on multiple processors simultaneously. Two significant advantages of parallel processing are faster execution times and lower power consumption. Certain types of processing – graphics, cryptography, and signal processing, for example – are suited to parallel decomposition.
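
As a minimal illustration of parallel decomposition (a hypothetical sketch, not taken from the chapter), the following Python fragment splits an independent workload into chunks and runs them on several processors at once using the standard multiprocessing module:

```python
# Parallel decomposition: split a workload into independent chunks
# and execute them on multiple processors simultaneously.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool(processes=4) as pool:  # four worker processes run in parallel
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as the serial loop, in less wall-clock time
```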

1.2.1.2 IBM Watson Technology Platform

The IBM Watson platform uses natural language processing and machine learning to allow a computer to appreciate context in human language, a task in which computers traditionally perform very poorly. Watson achieved widespread recognition for its ability to beat human experts in the game of Jeopardy. Configuring Watson to achieve this victory was far from trivial, as it was underpinned by 200 million pages of content (including the full text of Wikipedia), 2800 processor cores, and 6 million logic rules [17]. Watson has been deployed in a variety of contexts: as a call center operator, hotel concierge, and even a chef – it has published its own cookbook. However, IBM believes it has the power to revolutionize human–computer interaction, with many applications in domains very beneficial to society, such as medicine, where Watson technology is being used to create the ultimate physician's assistant, acting as a cancer specialist. IBM estimates that a person can generate 1 million gigabytes of health-related data across his or her lifetime – roughly the equivalent of 300 million books. Such large data problems are ones in which Watson performs well, as it can process 500 GB (the equivalent of a million books) per second.
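
A quick back-of-the-envelope check of the two figures quoted above (1 million gigabytes of lifetime health data per person, processed at 500 GB per second) gives a sense of the scale involved; the snippet below is only an illustrative calculation:

```python
# Rough arithmetic on the figures quoted above.
lifetime_health_data_gb = 1_000_000  # "1 million gigabytes" per person
throughput_gb_per_s = 500            # "500 GB ... per second"

seconds = lifetime_health_data_gb / throughput_gb_per_s
print(f"{seconds:.0f} s, i.e. about {seconds / 60:.0f} minutes")  # 2000 s, about 33 minutes
```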

1.2.1.3 Quantum Computer and the Memcomputer

Although still primarily theoretical in nature, quantum computing has significant advantages over the traditional digital computer in terms of its vastly superior performance in solving certain problems. Digital computers are so called because they are based on the binary system of two digital states, 0 and 1, which conveniently map to the electronic switching states of "off" and "on." Quantum computers, on the other hand, operate on quantum bits, or qubits for short. Qubits are not restricted to being in a state of 0 or 1; rather, they can be 0 or 1, or, through what is termed superposition, both 0 and 1 (and all points in between) at the same time. As mentioned, digital computers typically perform only one instruction at a time, whereas a quantum computer performs many calculations simultaneously and hence inherently delivers parallelism. Estimates suggest that a quantum computer could solve within seconds a problem that might take a digital computer 10,000 years to calculate [18]. Quantum computers are suited to optimization problems, an application of artificial intelligence that IBM Watson is also addressing, albeit still operating within the overall digital computer paradigm. In 2015, Google and NASA announced their collaboration on the D-Wave 2X quantum computer, which has over 1000 qubits (see Note 2).
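
The scale of superposition can be made concrete with a small calculation (an illustrative sketch, not part of the original column): a classical register of n bits is in exactly one of 2^n states at any moment, whereas n qubits can, in principle, carry amplitudes for all 2^n basis states simultaneously.

```python
import math

# A classical n-bit register holds one of 2**n states at a time;
# n qubits in superposition carry amplitudes for all 2**n basis states at once.
for n in (8, 64, 1000):                  # 1000 is roughly the D-Wave 2X qubit count noted above
    digits = int(n * math.log10(2)) + 1  # number of decimal digits in 2**n
    print(f"{n:4d} (qu)bits -> 2**{n} basis states (a {digits}-digit number)")
```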

Other alternatives to the traditional digital computer are also being investigated. These include the memcomputer, so called because it seeks to mimic the functioning of the memory cells of the human brain [19]. While the D-Wave 2X quantum computer requires an environment 150 times colder than interstellar space, the memcomputer operates at room temperature. The fundamental innovation in the memcomputer is that, like the human brain, it stores and processes information in the same physical space, thereby overcoming a central problem in traditional digital computers: the transfer of information between the central processing unit and memory. The traditional computer uses millions of times more power than the brain on such data transfer, which is ultimately extremely wasteful of time and energy, as such transfers do not add any essential value.

1.2.2 “Big Data”

While it is extremely difficult to quantify the increases in the volume of electronic data that potentially exists, there is undoubtedly a similar pattern of exponential increases paralleling that of the hardware arena. Eric Schmidt, CEO of Google, suggested in 2005 that the amount of data available electronically comprised 5 million terabytes (that is, 5 billion gigabytes), of which only 0.004% was being indexed by Google. He estimated the amount of data as doubling every five years.

Dave Evans, Chief Futurist at Cisco Systems, estimated in 2010 that there were about 35 billion devices connected to the Internet, which is more than five times the population of the planet [20]. This figure was estimated to increase to 100 billion devices by 2020. This has given rise to the concept of the "Internet of Things" (IoT) [21] or network of everything. An exemplar IoT project is HP's plan, as part of its Central Nervous System for the Earth (CeNSE) project, to place a trillion "smart dust" sensors all over the planet as a planet-wide sensing network infrastructure. These sensors would detect a wide variety of factors, including motion, vibrations, light, temperature, barometric pressure, airflow, and humidity, and have obvious applications in transportation, health, energy management, and building automation.

Similar predictions were made, such as the Wireless World Research Forum's (WWRF) forecast that there would be 7 trillion devices for the world's 7 billion people by 2017 – a thousand devices for every human being – all intelligently connected to create an individual personal network for everyone. This suggests that more structure is needed for the IoT, in that a "systems of systems" approach is necessary to govern and deliver these networks.

To cope with this proliferation of devices, a migration is underway from the IPv4 protocol, which has about four billion unique addresses, to the IPv6 protocol, which can support 2^128 addresses – enough to uniquely address every grain of sand on every beach in the world. In this brave new world, the vast majority of communications will be machine-to-machine rather than machine-to-person, thereby generating an enormous amount of electronic information that is available for processing.
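
The arithmetic behind that migration is easy to make concrete (a small illustrative calculation, using only the figures implied above):

```python
# Address-space arithmetic behind the IPv4 -> IPv6 migration.
ipv4_addresses = 2 ** 32   # about four billion unique addresses
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,}")    # 4,294,967,296
print(f"IPv6: {ipv6_addresses:.3e}")  # about 3.403e+38
print(f"Ratio: {ipv6_addresses // ipv4_addresses:.3e}")  # about 7.923e+28 times more addresses
```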

Big Data has several technological implications. For example, columnar databases, which invert the traditional row-oriented layout of relational databases, can perform search tasks much more efficiently, as the data itself effectively becomes the primary key. Columnar databases lend themselves to greater compression and therefore require less space and can achieve faster transfer rates.
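
A minimal sketch (with hypothetical data, not from the chapter) of the difference between row-oriented and column-oriented storage, and of why a scan over a single attribute touches far less data in the columnar layout:

```python
# Row-oriented layout: each record is stored contiguously.
rows = [
    {"id": 1, "country": "IE", "amount": 120},
    {"id": 2, "country": "DE", "amount": 80},
    {"id": 3, "country": "IE", "amount": 95},
]

# Column-oriented layout: each attribute is stored contiguously.
columns = {
    "id":      [1, 2, 3],
    "country": ["IE", "DE", "IE"],
    "amount":  [120, 80, 95],
}

# A search on one attribute scans only that column; repeated values stored
# together (e.g., "IE") also compress well, as noted above.
matches = [i for i, c in enumerate(columns["country"]) if c == "IE"]
total = sum(columns["amount"][i] for i in matches)
print(matches, total)  # [0, 2] 215
```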

Complementing the basic “push” factors of hardware advances and big data availability are a number of “pull” or demand factors that underpin the need for more software. These include the software-hungry “digital natives” and the trend toward software-defined *, where * can represent networking, infrastructure, datacenter, or enterprise, reflecting the fact that software is the primary mechanism mediating the modern world. These pull factors are discussed next.

1.2.3 Digital Natives, Lifelogging, and the Quantified Self

An interesting distinction has been drawn between "digital immigrants" – those who began using digital technology at some stage during their adult lives – and "digital natives" – those who have been immersed in the world of technology since birth and have as a consequence developed a natural fluency for technology [22]. By the age of 20, digital natives will have spent 20,000 hours online [23] and can cope with, and indeed even welcome, an abundance of information [24]. This category of digital native consumer represents a significant "pull" factor in seeking to take advantage of the opportunities afforded by advances in processing power and increased availability of data. The advent of wearable computing fuels big data and has led to initiatives such as lifelogging and the quantified self. With such initiatives, individuals can collect data about all aspects of their daily lives – diet, health, recreation, mood states, performance – in some cases recording a terabyte of data per annum [25].

The paradoxical success of the open-source software phenomenon has led to a broader interest in crowd science or citizen science as a collaborative model of problem analysis and solving. Notable areas of success are user-led innovation, cocreation of value, and high-profile crowdsourcing of solutions to complex R&D problems at NASA, Eli Lilly, and Du Pont, which provides real testimony to the potential of the digital native.

Mass customization has been succinctly defined as “producing goods and services to meet individual customer's needs with near mass production efficiency” [26]. While not a new concept, it resonates well with catering to the personal needs of the digital native. Also, it is now typically delivered through some form of software-mediated configurability to meet individual customer needs. The concept of automated personalization is linked to the desired benefits of big data.

1.2.4 Software-Defined*

The increasing demand for software already discussed is fuelled by the increasing capability of software to perform tasks that were previously accomplished through hardware. This is evident in phenomena such as software-defined networking [27] or software-defined infrastructure [28], even software-defined datacenters [29], right through to the concept of the software-defined enterprise that has enough intelligence to automate all business processes. It is also evident in the move beyond the IoT to Systems of Systems, where the sensors and sources of data, such as household appliances, are fully integrated into web-enabled systems capable of utilizing machine-learning techniques to offer real-time data analytics on the morass of acquired raw data. The ultimate goal is to enable societal benefits for citizens through the provision of useful and precisely customized information – the quantified self, for example.

1.3 Software Crisis 2.0: The Bottleneck

Given these "push" and "pull" factors, it is clear that a massive increase in the volume of software being produced is required to address these emerging initiatives. This creates a Software Crisis 2.0 bottleneck, as we illustrate further. There are two dimensions to this crisis. One is the massive increase in the volume of software required to fuel the demand in new domains where software has not always been of primary significance – medicine and healthcare, for example – where terabytes of raw data need to be analyzed to provide useful, actionable insights. The second dimension is more challenging, as it requires software development practitioners to acquire fundamentally different skills to take advantage of advances in hardware – parallel processing, big memory servers, and quantum computing, for example (see Figure 1.2).

Figure 1.2 Increasing volume of software and complex developer skill sets.

1.3.1 Significant Increase in Volume of Software Required

“Our organization has become a software company. The problem is that our engineers haven't realized that yet!”

This is how the Vice President for Research of a major semiconductor manufacturing company, traditionally seen as the classic hardware company, characterized the context in which software solutions were replacing hardware in delivering his company's products. This situation is replicated across several business domains, as the transformation to software has been taking place for quite some time. The telecommunications industry began the move to softwareization in the 1970s with the introduction of computerized switches, and currently the mobile telephony market is heavily software focused. The automotive industry has very noticeably been moving toward softwareization since the 1960s; today, 80–90% of innovations in the automotive industry are enabled by software [30,31]. This is evidenced in the dramatic increase in the number of software engineers being employed in proportion to the number employed in traditional engineering roles. Indeed, an extremely striking example of the growing importance of software arises in the automotive industry. In 1978, a printout of the lines of code in a Mercedes S-Class would have made a paper stack about 12 cm high. By 1995, this was already a 3-m-high stack, and by 2005, the printout was about 50 m tall. By 2013, the printout had grown to 150 m in height. By 2020, the estimate is that the stack would be a staggering 830 m tall, higher than the Burj Khalifa – the tallest man-made structure in the world [32]. This is illustrated graphically in Figure 1.3.

Figure 1.3 Height of software printout in Mercedes S-Class [32].
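
As a rough, purely illustrative translation of those stack heights into lines of code (the 0.1 mm sheet thickness and 50 lines per page used below are assumptions, not figures from the chapter):

```python
# Hypothetical conversion of printout height to an approximate line count.
SHEET_MM = 0.1        # assumed thickness of one printed sheet
LINES_PER_PAGE = 50   # assumed lines of code per page

def lines_from_height(height_m):
    pages = height_m * 1000 / SHEET_MM
    return pages * LINES_PER_PAGE

for year, height_m in [(1978, 0.12), (1995, 3), (2005, 50), (2013, 150), (2020, 830)]:
    print(f"{year}: ~{lines_from_height(height_m) / 1e6:.2f} million lines (order of magnitude only)")
```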

1.3.2 New Skill Sets Required for Software Developers

This demand for additional software is clearly replicated in several other industry domains. However, the software required for these domains is typically "more of the same," in that no major paradigm change is present that requires developers to possess new skills and techniques. In the overall backdrop to this software bottleneck, however, it is worth bearing in mind that estimates suggest that the population of professional software engineers worldwide comprises no more than 500,000 people [33]. Clearly, there are more software development practitioners in the world, and development resources may even be boosted by a general willingness for additional people to get involved based on a crowdsourcing model. However, the skills required in this brave new world are not those possessed by the average software developer.

In the area of parallel processing on multicore architectures, for example, a number of fundamental challenges emerge. The traditional programming paradigm is one of runtime task allocation and scheduling, that is, the operating system allocates tasks to processors and takes care of scheduling and load balancing. In a multicore architecture, these decisions can be made at design time or compile time, and developers need to design program threads accordingly. The analysis, design, and debug phases are significantly more challenging, and an additional optimization/tuning phase is necessary. In the analysis phase, new questions arise: not all code might benefit from parallelization. Code that is executed more frequently is likely to yield greater benefit, but code may be so simple that no performance benefit arises from any potential parallelism, or there may not be any parallelizable loops. In the design phase, issues such as the method of threading and decomposition need to be addressed. In the debug phase, the focus is on handling data races and deadlocks and implementing thread synchronization accordingly. The optimization/tuning phase considers performance issues such as the amount of code parallelism, and whether performance benefits can be achieved as the number of processors increases.
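
To make the debug-phase concerns concrete, here is a minimal Python sketch (not from the chapter) of thread synchronization: the lock turns the read-modify-write on the shared counter into an atomic step, avoiding a classic data race.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations=100_000):
    global counter
    for _ in range(iterations):
        with lock:        # synchronize access to the shared variable
            counter += 1  # without the lock, concurrent updates can be lost

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 on every run; without the lock the result can vary
```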

1.3.2.1 From Runtime to Design-Time Switch

This is an interesting issue, as much recent focus in software engineering has been on runtime adaptation, that is, delaying until runtime decisions that are normally taken at design time [34]. This is evident in work on adaptive security and privacy, for example [35]. In the case of programming for multicore processors, however, issues such as the allocation of tasks to processors and load balancing must be settled at design time.
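A small sketch can make the contrast concrete. The following Java fragment is an illustrative example only (the method names dynamicSchedule and staticSchedule and the placeholder work method are assumptions): it runs the same set of independent tasks first with a runtime decision, where threads claim the next task on demand, and then with a design-time decision, where the task-to-thread mapping is fixed in the code.

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: runtime versus design-time scheduling of N tasks.
public class SchedulingStyles {

    static void work(int taskId) { /* placeholder for real work */ }

    // Runtime decision: threads grab the next task index on demand,
    // so load balancing happens while the program is running.
    static void dynamicSchedule(int tasks, int threads) throws InterruptedException {
        AtomicInteger next = new AtomicInteger(0);
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                int i;
                while ((i = next.getAndIncrement()) < tasks) work(i);
            });
            ts[t].start();
        }
        for (Thread th : ts) th.join();
    }

    // Design-time decision: the task-to-thread mapping is fixed in the code,
    // the style the multicore discussion above refers to.
    static void staticSchedule(int tasks, int threads) throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int tid = t;
            ts[t] = new Thread(() -> {
                for (int i = tid; i < tasks; i += threads) work(i);
            });
            ts[t].start();
        }
        for (Thread th : ts) th.join();
    }

    public static void main(String[] args) throws InterruptedException {
        dynamicSchedule(1000, 4);
        staticSchedule(1000, 4);
    }
}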

Programming big memory servers is also likely to lead to significant new programming challenges. The concept of garbage collection, for example, could be extremely problematic in a single 64-terabyte RAM space. Likewise, the mechanisms for dealing with a crash, and the notions of transient and persistent memory, need to be reconceptualized when programming for a big memory server environment.
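One way current platforms already sidestep part of this problem is to keep large data regions outside the garbage-collected heap. The sketch below is an illustrative example only (the file name region.dat and class name OffHeapRegion are assumptions): it memory-maps a file from Java, so the data is never traced by the collector and survives a process crash once flushed, hinting at how the transient/persistent distinction blurs. Managing terabytes this way would of course be far harder than the toy region shown here.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative sketch: a data region outside the garbage-collected heap,
// backed by a memory-mapped file.
public class OffHeapRegion {
    public static void main(String[] args) throws Exception {
        long regionSize = 1L << 20; // 1 MiB here; a real big-memory region would be far larger
        try (RandomAccessFile file = new RandomAccessFile("region.dat", "rw");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer region =
                channel.map(FileChannel.MapMode.READ_WRITE, 0, regionSize);
            region.putLong(0, 42L);   // the write lives in the mapped file, not on the heap
            region.force();           // flush to the backing file so it survives a crash
            System.out.println(region.getLong(0));
        }
    }
}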

Quantum computing is not likely to replace traditional computing in the near future. However, understanding the quantum concepts of superposition and entanglement is far from trivial. At present, only a certain class of problem, optimization problems for example, lends itself to being solved more efficiently by a quantum computer. Analyzing and understanding such problems is clearly not the forte of the majority of software developers at present. Quantum programming languages have also been created, for example QCL (quantum computing language) [36] and Quipper [37], but the quantum operations in these languages will be completely alien to the traditionally trained developer.
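The concepts themselves can at least be illustrated classically, though only at toy sizes. The following Java sketch is an illustrative example that uses no quantum library and simply manipulates a four-element state vector: a Hadamard gate puts one qubit into superposition, and a CNOT gate then entangles it with a second qubit, producing the Bell state in which measurements of the two qubits are perfectly correlated.

// Illustrative sketch: a classical simulation of superposition and entanglement
// for two qubits, producing the Bell state (|00> + |11>)/sqrt(2).
public class TwoQubitDemo {
    public static void main(String[] args) {
        // State vector over basis |00>, |01>, |10>, |11>, initially |00>.
        double[] state = {1, 0, 0, 0};

        // Hadamard on the left qubit: |0> -> (|0> + |1>)/sqrt(2).
        double h = 1 / Math.sqrt(2);
        double[] afterH = {
            h * (state[0] + state[2]),  // |00>
            h * (state[1] + state[3]),  // |01>
            h * (state[0] - state[2]),  // |10>
            h * (state[1] - state[3])   // |11>
        };

        // CNOT with the left qubit as control: swaps the amplitudes of |10> and |11>.
        double[] bell = { afterH[0], afterH[1], afterH[3], afterH[2] };

        // Measurement is now perfectly correlated: probability 1/2 each for
        // |00> and |11>, and zero for |01> and |10>.
        String[] basis = {"|00>", "|01>", "|10>", "|11>"};
        for (int i = 0; i < 4; i++)
            System.out.printf("%s amplitude %.3f probability %.3f%n",
                              basis[i], bell[i], bell[i] * bell[i]);
    }
}

Even this tiny example shows why the paradigm is unfamiliar: the program manipulates amplitudes over all basis states at once rather than a single definite value.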

1.4 Conclusion

Given the scarcely comprehensible increases in hardware power and data capacity mentioned already, it is perhaps surprising that there has been no “silver bullet” to deliver even a single order-of-magnitude improvement in software productivity. Without wishing to deny the enormous advances that have been brought about by software, which has truly revolutionized life and society in the twentieth and twenty-first centuries, it is intriguing to imagine what life would be like if software had evolved at the same pace as hardware and data. But that has not been the case: Wirth's law [38] effectively summarizes the comparative evolution in the software domain, namely, that software is getting slower more rapidly than hardware is getting faster.

Notes

1 It is worth noting that the Chaos report findings and methodology have been challenged (e.g., [14,15]).

2 http://www.dwavesys.com/blog/2015/08/announcing-d-wave-2x-quantum-computer

References

1 B. Cohen (1988) The computer: a case study of the support by government, especially the military, of a new science and technology. Cited in Pickering, A. Cyborg history and WWII regime.

2 G. Davis and M. Olson (1985) Management Information Systems: Conceptual Foundations, Structure and Development, 2nd Edition, McGraw-Hill, New York.

3 A. Friedman (1989) Computer Systems Development: History, Organisation and Implementation, John Wiley & Sons, Ltd., Chichester.

4 C. Lecht (1977) The Waves of Change, McGraw-Hill, New York.

5 M. Shaw (1990) Prospects for an engineering discipline of software. IEEE Software, 7, 15–24.

6 J. Aron (1976) Estimating resources for large programming systems. In Naur, P., Randell, B., and Buxton, J. (eds.), Software Engineering: Concepts and Techniques, Charter Publishers, New York, 206–217.

7 I. Peterson (2000) Software's Origin. Available at http://www.maa.org/mathland/mathtrek_7_31_00.html (accessed Oct. 2011).

8 P. Naur and B. Randell (eds.) (1968) Software Engineering: A Report on a Conference Sponsored by the NATO Science Committee. Scientific Affairs Division, NATO, Brussels.

9 P. Flaatten, D. McCubbrey, P. O'Riordan, and K. Burgess (1989) Foundations of Business Systems, Dryden Press, Chicago.

10 Anonymous (1988) The software trap: automate—or else. Business Week, 142–154.

11 T. Taylor and T. Standish (1982) Initial thoughts on rapid prototyping techniques. ACM SIGSOFT Software Engineering Notes, 7(5), 160–166.

12 P. Bowen (1994) Rapid Application Development: Concepts and Principles, IBM Document No. 94283UKT0829.

13 Standish Group (2009) The CHAOS Report, The Standish Group, Boston, MA.

14 J. Eveleens and C. Verhoef (2010) The rise and fall of the Chaos reports. IEEE Software, 27, 30–36.

15 R. Glass (2006) The Standish report: does it really describe a software crisis? Communications of the ACM, 49(8), 15–16.

16 F. Brooks (1987) No silver bullet: essence and accidents of software engineering. IEEE Computer Magazine, April, 10–19.

17 I. Paul (2010) IBM Watson Wins Jeopardy, Humans Rally Back. PCWorld. Available at http://www.pcworld.com/article/219900/IBM_Watson_Wins_Jeopardy_Humans_Rally_Back.html

18 D. Basulto (2015) Why Google's new quantum computer could launch an artificial intelligence arms race. Financial Review. Available at http://www.afr.com/technology/why-googles-new-quantum-computer-could-launch-an-artificial-intelligence-arms-race-20151228-glvr7s

19 F. Traversa, C. Ramella, F. Bonani, and M. Di Ventra (2015) Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states. Science Advances, 1(6). doi: 10.1126/sciadv.1500031.

20 A. Jeffries (2010) A Sensor in Every Chicken: Cisco Bets on the Internet of Things. Available at http://www.readwriteweb.com/archives/cisco_futurist_predicts_internet_of_things_1000_co.php

21 K. Ashton (2009) That ‘Internet of Things’ Thing. RFID Journal.

22 M. Prensky (2001) Digital natives, digital immigrants. On the Horizon, 9(5), 1–2.

23 P. M. Valkenburg and J. Peter (2008) Adolescents' identity experiments on the Internet: consequences for social competence and self-concept unity. Communication Research, 35(2), 208–231.

24 S. Vodanovich, D. Sundaram, and M. Myers (2010) Digital natives and ubiquitous information systems. Information Systems Research, 21(4), 711–723.

25 C. Gurrin, A. Smeaton, and A. Doherty (2014) LifeLogging: Personal Big Data. doi: 10.1561/1500000033.

26 M. M. Tseng and J. Jiao (2001) Mass customization. In Handbook of Industrial Engineering, Technology and Operation Management, 3rd Edition, John Wiley & Sons, Inc., New York, NY.

27 K. Kirkpatrick (2013) Software-defined networking. Communications of the ACM, 56(9), 16–19.

28 B. Fitzgerald, N. Forsgren, K. Stol, J. Humble, and B. Doody (2015) Infrastructure is Software Too. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2681904

29 Dell (2015) Technology Outlook White Paper. Available at dell.com/dellresearch.

30 J. Mössinger (2010) Software in automotive systems. IEEE Software, 27(2), 92–94.

31 Swedsoft (2010) A Strategic Research Agenda for the Swedish Software Intensive Industry.

32 J. Schneider (2015) Software-innovations as key driver for a Green, Connected and Autonomous mobility. ARTEMIS-IA/ITEA Co-Summit.

33 D. Grier (2015) Do We Engineer Software in Software Engineering? Available at https://www.youtube.com/watch?v=PZcUCZhqpus

34 L. Baresi and C. Ghezzi (2010) The disappearing boundary between development-time and runtime. Future of Software Engineering Research 2010, 17–22.

35 M. Salehie, L. Pasquale, I. Omoronyia, R. Ali, and B. Nuseibeh (2012) Requirements-driven adaptive security: protecting variable assets at runtime. 20th IEEE Requirements Engineering Conference (RE), 2012.

36 B. Omer (2014) Quantum Computing Language. Available at http://www.itp.tuwien.ac.at/~oemer/qcl.html

37 P. Selinger (2015) The Quipper Language. Available at http://www.mathstat.dal.ca/~selinger/quipper/

38 N. Wirth (1995) A plea for lean software. Computer, 28(2), 64–68.