This book provides a scientific modeling approach for conducting metrics-based quantitative risk assessments of cybersecurity threats. Building on the author's previous class-tested works, it introduces the reader to current and innovative approaches for addressing maliciously human-created (rather than chance-occurring) vulnerabilities and threats, and to the cost-effective management needed to mitigate such risk. The treatment is purely statistical and data-oriented (not deterministic) and employs computationally intensive techniques such as Monte Carlo and Discrete Event Simulation. Ready-to-run JAVA applications and solutions to exercises, provided on the book's dedicated website, enable readers to work through the course-related problems.
• Enables the reader to run the applications on the book's website, examine the results, and use them to make sound budgetary decisions
• Utilizes a data analytical approach and provides clear entry points for readers of varying skill sets and backgrounds
• Developed by the author out of real in-class experience teaching advanced undergraduate and graduate courses
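To give a flavor of the Monte Carlo techniques the book applies to reliability and risk, here is a minimal, hypothetical sketch (not taken from the book's own JAVA tool suite): it estimates the reliability of a small series-parallel system by simulation and compares the estimate with the exact analytic value.

```java
import java.util.Random;

public class MonteCarloReliability {
    // Estimate the reliability of component A in series with a
    // parallel pair (B, C). Analytically:
    //   R = pA * (1 - (1 - pB) * (1 - pC))
    // Each trial draws an up/down state per component and checks
    // whether the system as a whole is up.
    static double estimate(double pA, double pB, double pC,
                           int trials, long seed) {
        Random rng = new Random(seed);
        int systemUp = 0;
        for (int i = 0; i < trials; i++) {
            boolean a = rng.nextDouble() < pA;
            boolean b = rng.nextDouble() < pB;
            boolean c = rng.nextDouble() < pC;
            if (a && (b || c)) {
                systemUp++;
            }
        }
        return (double) systemUp / trials;
    }

    public static void main(String[] args) {
        double exact = 0.9 * (1 - 0.2 * 0.2); // 0.864
        double mc = estimate(0.9, 0.8, 0.8, 1_000_000, 42L);
        System.out.printf("exact=%.4f  monteCarlo=%.4f%n", exact, mc);
    }
}
```

With a million trials the simulated estimate should agree with the exact value to within a few thousandths; the same simulate-and-compare pattern scales to networks too complex for closed-form evaluation.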
Cyber-Risk Informatics is a resource for undergraduate students, graduate students, and practitioners working on risk assessment and management for security and reliability modeling.
Mehmet Sahinoglu, a Professor (1990) Emeritus (2000), is the founder of the Informatics Institute (2009) and its SACS-accredited (2010) and NSA-certified (2013) flagship Cybersystems and Information Security (CSIS) graduate program (the first such full degree in-class program in Southeastern USA) at AUM, Auburn University’s metropolitan campus in Montgomery, Alabama. He is a fellow member of the SDPS Society, a senior member of the IEEE, and an elected member of ISI. Sahinoglu is the recipient of Microsoft's Trustworthy Computing Curriculum (TCC) award and the author of Trustworthy Computing (Wiley, 2007).
Page count: 816
Year of publication: 2016
COVER
TITLE PAGE
ABOUT THE COVER
PROLOGUE
REVIEWS
PREFACE
ACKNOWLEDGMENTS AND DEDICATION
ABOUT THE AUTHOR
1 METRICS, STATISTICAL QUALITY CONTROL, AND BASIC RELIABILITY IN CYBER-RISK
1.1 DETERMINISTIC AND STOCHASTIC CYBER-RISK METRICS
1.2 STATISTICAL RISK ANALYSIS
1.3 ACCEPTANCE SAMPLING IN QUALITY CONTROL
1.4 POISSON AND NORMAL APPROXIMATION TO BINOMIAL IN QUALITY CONTROL
1.5 BASIC STATISTICAL RELIABILITY CONCEPTS AND MC SIMULATORS
1.6 DISCUSSIONS AND CONCLUSION
1.7 EXERCISES
REFERENCES
2 COMPLEX NETWORK RELIABILITY EVALUATION AND ESTIMATION IN CYBER-RISK
2.1 INTRODUCTION
2.2 OVERLAP TECHNIQUE TO CALCULATE COMPLEX NETWORK RELIABILITY
2.3 THE OVERLAP METHOD: MONTE CARLO AND DISCRETE EVENT SIMULATION
2.4 MULTISTATE SYSTEM RELIABILITY EVALUATION
2.5 WEIBULL TIME DISTRIBUTED RELIABILITY EVALUATION
2.6 DISCUSSIONS AND CONCLUSION
APPENDIX 2.A OVERLAP ALGORITHM AND EXAMPLE
2.7 EXERCISES
REFERENCES
3 STOPPING RULES FOR RELIABILITY AND SECURITY TESTS IN CYBER-RISK
3.1 INTRODUCTION
3.2 METHODS
3.3 EXAMPLES MERGING BOTH STOPPING RULES: LGM AND CPM
3.4 STOPPING RULE FOR TESTING IN THE TIME DOMAIN
3.5 DISCUSSIONS AND CONCLUSION
3.6 EXERCISES
REFERENCES
4 SECURITY ASSESSMENT AND MANAGEMENT IN CYBER-RISK
4.1 INTRODUCTION
4.2 SECURITY METER (SM) MODEL DESIGN
4.3 VERIFICATION OF THE PROBABILISTIC SECURITY METER (SM) METHOD BY MONTE CARLO SIMULATION AND MATH-STATISTICAL TRIPLE-PRODUCT RULE
4.4 MODIFYING THE SM QUANTITATIVE MODEL FOR CATEGORICAL, HYBRID, AND NONDISJOINT DATA
4.5 MAINTENANCE PRIORITY DETERMINATION FOR 3 × 3 × 2 SM
4.6 PRIVACY METER (PM): HOW TO QUANTIFY PRIVACY BREACH
4.7 POLISH DECODING (DECOMPRESSION) ALGORITHM
4.8 DISCUSSIONS AND CONCLUSION
4.9 EXERCISES
REFERENCES
5 GAME-THEORETIC COMPUTING IN CYBER-RISK
5.1 HISTORICAL PERSPECTIVE TO GAME THEORY’S ORIGINS
5.2 APPLICATIONS OF GAME THEORY TO CYBER-SECURITY RISK
5.3 INTUITIVE BACKGROUND: CONCEPTS, DEFINITIONS, AND NOMENCLATURE
5.4 RANDOM SELECTION FOR NASH MIXED STRATEGY
5.5 ADVERSARIAL RISK ANALYSIS MODELS BY BANKS, RIOS, AND RIOS
5.6 AN ALTERNATIVE MODEL: SAHINOGLU’S SECURITY METER FOR NEUMANN AND NASH MIXED STRATEGY
5.7 OTHER INTERDISCIPLINARY APPLICATIONS OF RISK METERS
5.8 MIXED STRATEGY FOR RISK ASSESSMENT AND MANAGEMENT: UNIVERSITY SERVER AND SOCIAL NETWORK EXAMPLES
5.9 APPLICATION TO HOSPITAL HEALTHCARE SERVICE RISK
5.10 APPLICATION TO ENVIRONMETRICS AND ECOLOGY RISK
5.11 APPLICATION TO DIGITAL FORENSICS SECURITY RISK
5.12 APPLICATION TO BUSINESS CONTRACTING RISK
5.13 APPLICATION TO NATIONAL CYBERSECURITY RISK
5.14 APPLICATION TO AIRPORT SERVICE QUALITY RISK
5.15 APPLICATION TO OFFSHORE OIL-DRILLING SPILL AND SECURITY RISK
5.16 DISCUSSIONS AND CONCLUSION
5.17 EXERCISES
REFERENCES
6 MODELING AND SIMULATION IN CYBER-RISK
6.1 INTRODUCTION AND A BRIEF HISTORY TO SIMULATION
6.2 GENERIC THEORY: CASE STUDIES ON GOODNESS OF FIT FOR UNIFORM NUMBERS
6.3 WHY CRUCIAL TO MANUFACTURING AND CYBER DEFENSE
6.4 A CROSS SECTION OF MODELING AND SIMULATION IN MANUFACTURING INDUSTRY
6.5 A REVIEW OF MODELING AND SIMULATION IN CYBER-SECURITY
6.6 APPLICATION OF QUEUING THEORY AND MULTICHANNEL SIMULATION TO CYBER-SECURITY
6.7 DISCUSSIONS AND CONCLUSION
APPENDIX 6.A
6.8 EXERCISES
REFERENCES
7 CLOUD COMPUTING IN CYBER-RISK
7.1 INTRODUCTION AND MOTIVATION
7.2 CLOUD COMPUTING RISK ASSESSMENT
7.3 MOTIVATION AND METHODOLOGY
7.4 VARIOUS APPLICATIONS TO CYBER SYSTEMS
7.5 LARGE CYBER SYSTEMS USING STATISTICAL METHODS
7.6 REPAIR CREW AND PRODUCT RESERVE PLANNING TO MANAGE RISK COST EFFECTIVELY USING CYBERRISKSOLVER CLOUD MANAGEMENT JAVA TOOL
7.7 REMARKS FOR “PHYSICAL CLOUD” EMPLOYING PHYSICAL PRODUCTS (SERVERS, GENERATORS, COMMUNICATION TOWERS, ETC.)
7.8 APPLICATIONS TO “SOCIAL (HUMAN RESOURCES) CLOUD”
7.9 STOCHASTIC CLOUD SYSTEM SIMULATION
7.10 CLOUD RISK METER ANALYSIS
7.11 DISCUSSIONS AND CONCLUSION
7.12 EXERCISES
REFERENCES
8 SOFTWARE RELIABILITY MODELING AND METRICS IN CYBER-RISK
8.1 INTRODUCTION, MOTIVATION, AND METHODOLOGY
8.2 HISTORY AND CLASSIFICATION OF SOFTWARE RELIABILITY MODELS
8.3 SOFTWARE RELIABILITY MODELS IN TIME DOMAIN
8.4 SOFTWARE RELIABILITY GROWTH MODELS
8.5 NUMERICAL EXAMPLES USING PEDAGOGUES
8.6 RECENT TRENDS IN SOFTWARE RELIABILITY
8.7 DISCUSSIONS AND CONCLUSION
8.8 EXERCISES
REFERENCES
9 METRICS FOR SOFTWARE RELIABILITY FAILURE-COUNT MODELS IN CYBER-RISK
9.1 INTRODUCTION AND METHODOLOGY ON FAILURE-COUNT ESTIMATION IN SOFTWARE RELIABILITY
9.2 PREDICTIVE ACCURACY TO COMPARE FAILURE-COUNT MODELS
9.3 DISCUSSIONS AND CONCLUSION
APPENDIX 9.A
9.4 EXERCISES
REFERENCES
10 PRACTICAL HANDS-ON LAB TOPICS IN CYBER-RISK
10.1 SYSTEM HARDENING
10.2 EMAIL SECURITY
10.3 MS-DOS COMMANDS
10.4 LOGGING
10.5 FIREWALL
10.6 WIRELESS NETWORKS
10.7 DISCUSSIONS AND CONCLUSION
APPENDIX 10.A
10.8 EXERCISES
REFERENCES
WHAT THE CYBER-RISK INFORMATICS TEXTBOOK AND THE AUTHOR ARE ABOUT?
INDEX
END USER LICENSE AGREEMENT
Chapter 01
Table 1.1
Types of Errors Associated with Hypothesis Tests
Table 1.2
Utilities Related to the Cross Products of Types of Errors Associated with Tests of Hypotheses
Table 1.3
Input Parameters and Qual-C (Software) Outcomes for Example 1
Table 1.4
Power and Type II Error for the Differences, θ = μ1 − μ0
Table 1.5
Tabulations for Figure 1.2
Table 1.6
Input Spreadsheet for Example 3 on a Cyberware Test of Hypothesis
Table 1.7
Output Spreadsheet for Example 3 on a Cyberware Test of Hypothesis
Table 1.8
Output Spreadsheet for Example 3 Using EXCEL LP Solver Algorithm
Table 1.9
JAVA Input Table for Example 3
Table 1.10
Output Spreadsheet for Example 3 using Java Coding
Table 1.11
Input DATA and Qual-C Outcome for Example 4
Table 1.12
The Outcomes for Example 5 of the Poisson Approximation to the Binomial
Table 1.13
The Outcomes for Example 6 of the Normal Approximation to the Binomial Distribution
Table 1.14
Snapshot for the Common Probability Distributions and Their Reliability and Other Related Functions of 1.5.1
Table 1.15
The Commonly Used Probability Distributions and Their Monte Carlo Simulators
Table 1.16
Probability Density Functions for the Common Probability Distributions Used in Table 1.15
Table 1.17
500 (50 Rows by 10 Columns) Computer-Generated Random Numbers
Table 1.18
Pedagogues for Exercise 1.7.49
Chapter 02
Table 2.1
Five-Node/Two-Path Static Network State Enumeration Table for “1–5” in Figure 2.1
Table 2.2
Five-Node and Two-Path Static Network State Probability Table with Useful Paths for Example 1
Table 2.3
Nineteen-Node Weibull Overlap Results
Table 2.4
Nineteen-Node Linear Data for β with Least Squares Calculations
Table 2.5
Nineteen-Node Linear Data for α with Least Squares Calculations
Table 2.6
Nineteen-Node Weibull Results for Comparison
Table 2.7
Fifty-Two-Node Least Squares Sum for β
Table 2.8
Fifty-Two-Node Linear Data for α with Least Squares Calculations
Table 2.9
Fifty-Two-Node Least Squares Sum for the Intercept Alpha
Table 2.10
Fifty-Two-Node Linear Data for β with Least Squares Calculations
Table 2.11
Fifty-Two-Node Weibull Results for Comparison
Chapter 03
Table 3.1
PROC NLIN (SAS: Statistical Analysis Systems, Procedure Nonlinear)
Table 3.2
Output from SAS PROC GLM
Table 3.3
Marquardt Algorithmic Output from PROC NLIN Procedure
Table 3.4
Cost Analysis for DR5 with 50% in MESAT-1 for c = $20, b = $10, and a = $50
Table 3.5
Marquardt Algorithmic Output from SPSS for LGM for DR1–DR5 with q = 0.5
Table 3.6
MESAT-1 Cost Analysis for DR4 after LGM with c = $20, b = $10, and a = $50
Table 3.7
Historical Failure Data (from 3/9 to 2/10) for the Supercomputer XSEDE
Table 3.8
Sample of CLOUD (Supercomputer Infrastructure XSEDE) Historical Failure Data from Table 3.7 in Section 3.3.3.
Table 3.9
Time-Domain Data Set T4, as in Table 3.11 with 47 Singletons and 3 Doubles
Table 3.10
Snapshot of the application of Equation (3.42) for MESAT-2 example
Table 3.11
A Cross Section of Historical Failure Times of Data Sets T1(X, Y) to T5(X, Y) in MESAT-2
Chapter 04
Table 4.1
Probabilistic Input for the Security Meter: Home PC (12 Legs)
Table 4.2
Comparison of Results for the PC Example as in Table 4.1
Table 4.3
Probabilistic Input for the Security Meter: University Center (10 Legs)
Table 4.4
Comparison of Results for the University Center Server Example
Table 4.5
Probabilistic Input Data for the Security Meter: Privacy Risk (26 Legs)
Table 4.6
Comparison of Results for Privacy Risk Example of Section 4.3.2.3
Table 4.7
Description of Input Data
Table 4.8
Description of Input Data
Table 4.9
A Priori Existence and A Posteriori Defective Probabilities of Figure 4.21
Continue reading in the full edition!
