The definitive guide to bringing accuracy to measurement, updated and supplemented.

Adjustment Computations is the classic textbook for spatial information analysis and adjustment computations, providing clear, easy-to-understand instruction backed by real-world practicality. From the basic terms and fundamentals of errors to specific adjustment computations and spatial information analysis, this book covers the methodologies and tools that bring accuracy to surveying, GNSS, GIS, and other spatial technologies. Broad in scope yet rich in detail, the discussion avoids overly complex theory in favor of practical techniques for students and professionals. This new sixth edition has been updated to align with the latest developments in this rapidly expanding field and includes new video lessons and updated problems, including worked problems in STATS, MATRIX, ADJUST, and Mathcad.

All measurement produces some amount of error, whether from human mistakes, instrument inaccuracy, or environmental conditions; these errors must be accounted and adjusted for when accuracy is critical. This book describes how errors are identified, analyzed, measured, and corrected, with a focus on least squares adjustment, the most rigorous methodology available.

* Apply industry-standard methodologies to error analysis and adjustment
* Translate your skills to the real world with instruction focused on the practical
* Master the fundamentals as well as specific computations and analyses
* Strengthen your understanding of critical topics on the Fundamentals of Surveying licensing exam

As spatial technologies expand in both use and capability, so does our need for professionals who understand how to check and adjust for errors in spatial data. Conceptual knowledge is one thing, but practical skills are what count when accuracy is at stake; Adjustment Computations provides the real-world training you need to identify, analyze, and correct potentially crucial errors.
Page count: 925
Year of publication: 2017
COVER
TITLE PAGE
PREFACE
ACKNOWLEDGMENTS
CHAPTER 1: INTRODUCTION
1.1 INTRODUCTION
1.2 DIRECT AND INDIRECT MEASUREMENTS
1.3 MEASUREMENT ERROR SOURCES
1.4 DEFINITIONS
1.5 PRECISION VERSUS ACCURACY
1.6 REDUNDANT OBSERVATIONS IN SURVEYING AND THEIR ADJUSTMENT
1.7 ADVANTAGES OF LEAST SQUARES ADJUSTMENT
1.8 OVERVIEW OF THE BOOK
PROBLEMS
CHAPTER 2: OBSERVATIONS AND THEIR ANALYSIS
2.1 INTRODUCTION
2.2 SAMPLE VERSUS POPULATION
2.3 RANGE AND MEDIAN
2.4 GRAPHICAL REPRESENTATION OF DATA
2.5 NUMERICAL METHODS OF DESCRIBING DATA
2.6 MEASURES OF CENTRAL TENDENCY
2.7 ADDITIONAL DEFINITIONS
2.8 ALTERNATIVE FORMULA FOR DETERMINING VARIANCE
2.9 NUMERICAL EXAMPLES
2.10 ROOT MEAN SQUARE ERROR AND MAPPING STANDARDS
2.11 DERIVATION OF THE SAMPLE VARIANCE (BESSEL'S CORRECTION)
2.12 SOFTWARE
PROBLEMS
PRACTICAL EXERCISES
CHAPTER 3: RANDOM ERROR THEORY
3.1 INTRODUCTION
3.2 THEORY OF PROBABILITY
3.3 PROPERTIES OF THE NORMAL DISTRIBUTION CURVE
3.4 STANDARD NORMAL DISTRIBUTION FUNCTION
3.5 PROBABILITY OF THE STANDARD ERROR
3.6 USES FOR PERCENT ERRORS
3.7 PRACTICAL EXAMPLES
PROBLEMS
PROGRAMMING PROBLEMS
NOTE
CHAPTER 4: CONFIDENCE INTERVALS
4.1 INTRODUCTION
4.2 DISTRIBUTIONS USED IN SAMPLING THEORY
4.3 CONFIDENCE INTERVAL FOR THE MEAN: t STATISTIC
4.4 TESTING THE VALIDITY OF THE CONFIDENCE INTERVAL
4.5 SELECTING A SAMPLE SIZE
4.6 CONFIDENCE INTERVAL FOR A POPULATION VARIANCE
4.7 CONFIDENCE INTERVAL FOR THE RATIO OF TWO POPULATION VARIANCES
4.8 SOFTWARE
PROBLEMS
NOTES
CHAPTER 5: STATISTICAL TESTING
5.1 HYPOTHESIS TESTING
5.2 SYSTEMATIC DEVELOPMENT OF A TEST
5.3 TEST OF HYPOTHESIS FOR THE POPULATION MEAN
5.4 TEST OF HYPOTHESIS FOR THE POPULATION VARIANCE
5.5 TEST OF HYPOTHESIS FOR THE RATIO OF TWO POPULATION VARIANCES
5.6 SOFTWARE
PROBLEMS
NOTES
CHAPTER 6: PROPAGATION OF RANDOM ERRORS IN INDIRECTLY MEASURED QUANTITIES
6.1 BASIC ERROR PROPAGATION EQUATION
6.2 FREQUENTLY ENCOUNTERED SPECIFIC FUNCTIONS
6.3 NUMERICAL EXAMPLES
6.4 SOFTWARE
6.5 CONCLUSIONS
PROBLEMS
PRACTICAL EXERCISES
NOTE
CHAPTER 7: ERROR PROPAGATION IN ANGLE AND DISTANCE OBSERVATIONS
7.1 INTRODUCTION
7.2 ERROR SOURCES IN HORIZONTAL ANGLES
7.3 READING ERRORS
7.4 POINTING ERRORS
7.5 ESTIMATED POINTING AND READING ERRORS WITH TOTAL STATIONS
7.6 TARGET-CENTERING ERRORS
7.7 INSTRUMENT CENTERING ERRORS
7.8 EFFECTS OF LEVELING ERRORS IN ANGLE OBSERVATIONS
7.9 NUMERICAL EXAMPLE OF COMBINED ERROR PROPAGATION IN A SINGLE HORIZONTAL ANGLE
7.10 USING ESTIMATED ERRORS TO CHECK ANGULAR MISCLOSURE IN A TRAVERSE
7.11 ERRORS IN ASTRONOMICAL OBSERVATIONS FOR AZIMUTH
7.12 ERRORS IN ELECTRONIC DISTANCE OBSERVATIONS
7.13 CENTERING ERRORS WHEN USING RANGE POLES
7.14 SOFTWARE
PROBLEMS
PROGRAMMING PROBLEMS
NOTES
CHAPTER 8: ERROR PROPAGATION IN TRAVERSE SURVEYS
8.1 INTRODUCTION
8.2 DERIVATION OF ESTIMATED ERROR IN LATITUDE AND DEPARTURE
8.3 DERIVATION OF ESTIMATED STANDARD ERRORS IN COURSE AZIMUTHS
8.4 COMPUTING AND ANALYZING POLYGON TRAVERSE MISCLOSURE ERRORS
8.5 COMPUTING AND ANALYZING LINK TRAVERSE MISCLOSURE ERRORS
8.6 SOFTWARE
8.7 CONCLUSIONS
PROBLEMS
PROGRAMMING PROBLEMS
NOTES
CHAPTER 9: ERROR PROPAGATION IN ELEVATION DETERMINATION
9.1 INTRODUCTION
9.2 SYSTEMATIC ERRORS IN DIFFERENTIAL LEVELING
9.3 RANDOM ERRORS IN DIFFERENTIAL LEVELING
9.4 ERROR PROPAGATION IN TRIGONOMETRIC LEVELING
PROBLEMS
PROGRAMMING PROBLEMS
CHAPTER 10: WEIGHTS OF OBSERVATIONS
10.1 INTRODUCTION
10.2 WEIGHTED MEAN
10.3 RELATIONSHIP BETWEEN WEIGHTS AND STANDARD ERRORS
10.4 STATISTICS OF WEIGHTED OBSERVATIONS
10.5 WEIGHTS IN ANGLE OBSERVATIONS
10.6 WEIGHTS IN DIFFERENTIAL LEVELING
10.7 PRACTICAL EXAMPLES
PROBLEMS
CHAPTER 11: PRINCIPLES OF LEAST SQUARES
11.1 INTRODUCTION
11.2 FUNDAMENTAL PRINCIPLE OF LEAST SQUARES
11.3 THE FUNDAMENTAL PRINCIPLE OF WEIGHTED LEAST SQUARES
11.4 THE STOCHASTIC MODEL
11.5 FUNCTIONAL MODEL
11.6 OBSERVATION EQUATIONS
11.7 SYSTEMATIC FORMULATION OF THE NORMAL EQUATIONS
11.8 TABULAR FORMATION OF THE NORMAL EQUATIONS
11.9 USING MATRICES TO FORM THE NORMAL EQUATIONS
11.10 LEAST SQUARES SOLUTION OF NONLINEAR SYSTEMS
11.11 LEAST SQUARES FIT OF POINTS TO A LINE OR CURVE
11.12 CALIBRATION OF AN EDM INSTRUMENT
11.13 LEAST SQUARES ADJUSTMENT USING CONDITIONAL EQUATIONS
11.14 THE PREVIOUS EXAMPLE USING OBSERVATION EQUATIONS
11.15 SOFTWARE
PROBLEMS
NOTES
CHAPTER 12: ADJUSTMENT OF LEVEL NETS
12.1 INTRODUCTION
12.2 OBSERVATION EQUATION
12.3 UNWEIGHTED EXAMPLE
12.4 WEIGHTED EXAMPLE
12.5 REFERENCE STANDARD DEVIATION
12.6 ANOTHER WEIGHTED ADJUSTMENT
12.7 SOFTWARE
PROBLEMS
PROGRAMMING PROBLEMS
CHAPTER 13: PRECISIONS OF INDIRECTLY DETERMINED QUANTITIES
13.1 INTRODUCTION
13.2 DEVELOPMENT OF THE COVARIANCE MATRIX
13.3 NUMERICAL EXAMPLES
13.4 STANDARD DEVIATIONS OF COMPUTED QUANTITIES
PROBLEMS
PROGRAMMING PROBLEMS
NOTE
CHAPTER 14: ADJUSTMENT OF HORIZONTAL SURVEYS: TRILATERATION
14.1 INTRODUCTION
14.2 DISTANCE OBSERVATION EQUATION
14.3 TRILATERATION ADJUSTMENT EXAMPLE
14.4 FORMULATION OF A GENERALIZED COEFFICIENT MATRIX FOR A MORE COMPLEX NETWORK
14.5 COMPUTER SOLUTION OF A TRILATERATED QUADRILATERAL
14.6 ITERATION TERMINATION
14.7 SOFTWARE
PROBLEMS
PROGRAMMING PROBLEMS
NOTES
CHAPTER 15: ADJUSTMENT OF HORIZONTAL SURVEYS: TRIANGULATION
15.1 INTRODUCTION
15.2 AZIMUTH OBSERVATION EQUATION
15.3 ANGLE OBSERVATION EQUATION
15.4 ADJUSTMENT OF INTERSECTIONS
15.5 ADJUSTMENT OF RESECTIONS
15.6 ADJUSTMENT OF TRIANGULATED QUADRILATERALS
PROBLEMS
PROGRAMMING PROBLEMS
NOTE
CHAPTER 16: ADJUSTMENT OF HORIZONTAL SURVEYS: TRAVERSES AND HORIZONTAL NETWORKS
16.1 INTRODUCTION TO TRAVERSE ADJUSTMENTS
16.2 OBSERVATION EQUATIONS
16.3 REDUNDANT EQUATIONS
16.4 NUMERICAL EXAMPLE
16.5 MINIMUM AMOUNT OF CONTROL
16.6 ADJUSTMENT OF NETWORKS
16.7 χ² TEST: GOODNESS OF FIT
PROBLEMS
PROGRAMMING PROBLEMS
NOTE
CHAPTER 17: ADJUSTMENT OF GNSS NETWORKS
17.1 INTRODUCTION
17.2 GNSS OBSERVATIONS
17.3 GNSS ERRORS AND THE NEED FOR ADJUSTMENT
17.4 REFERENCE COORDINATE SYSTEMS FOR GNSS OBSERVATIONS
17.5 CONVERTING BETWEEN THE TERRESTRIAL AND GEODETIC COORDINATE SYSTEMS
17.6 APPLICATION OF LEAST SQUARES IN PROCESSING GNSS DATA
17.7 NETWORK PREADJUSTMENT DATA ANALYSIS
17.8 LEAST SQUARES ADJUSTMENT OF GNSS NETWORKS
PROBLEMS
PROGRAMMING PROBLEMS
NOTES
CHAPTER 18: COORDINATE TRANSFORMATIONS
18.1 INTRODUCTION
18.2 THE TWO-DIMENSIONAL CONFORMAL COORDINATE TRANSFORMATION
18.3 EQUATION DEVELOPMENT
18.4 APPLICATION OF LEAST SQUARES
18.5 TWO-DIMENSIONAL AFFINE COORDINATE TRANSFORMATION
18.6 THE TWO-DIMENSIONAL PROJECTIVE COORDINATE TRANSFORMATION
18.7 THREE-DIMENSIONAL CONFORMAL COORDINATE TRANSFORMATION
18.8 STATISTICALLY VALID PARAMETERS
PROBLEMS
PROGRAMMING PROBLEMS
CHAPTER 19: ERROR ELLIPSE
19.1 INTRODUCTION
19.2 COMPUTATION OF ELLIPSE ORIENTATION AND SEMIAXES
19.3 EXAMPLE PROBLEM OF STANDARD ERROR ELLIPSE CALCULATIONS
19.4 ANOTHER EXAMPLE PROBLEM
19.5 THE ERROR ELLIPSE CONFIDENCE LEVEL
19.6 ERROR ELLIPSE ADVANTAGES
19.7 OTHER MEASURES OF STATION UNCERTAINTY
PROBLEMS
PROGRAMMING PROBLEMS
NOTES
CHAPTER 20: CONSTRAINT EQUATIONS
20.1 INTRODUCTION
20.2 ADJUSTMENT OF CONTROL STATION COORDINATES
20.3 HOLDING CONTROL STATION COORDINATES AND DIRECTIONS OF LINES FIXED IN A TRILATERATION ADJUSTMENT
20.4 HELMERT'S METHOD
20.5 REDUNDANCIES IN A CONSTRAINED ADJUSTMENT
20.6 ENFORCING CONSTRAINTS THROUGH WEIGHTING
PROBLEMS
PRACTICAL PROBLEMS
CHAPTER 21: BLUNDER DETECTION IN HORIZONTAL NETWORKS
21.1 INTRODUCTION
21.2 A PRIORI METHODS FOR DETECTING BLUNDERS IN OBSERVATIONS
21.3 A POSTERIORI BLUNDER DETECTION
21.4 DEVELOPMENT OF THE COVARIANCE MATRIX FOR THE RESIDUALS
21.5 DETECTION OF OUTLIERS IN OBSERVATIONS: DATA SNOOPING
21.6 DETECTION OF OUTLIERS IN OBSERVATIONS: THE TAU CRITERION
21.7 TECHNIQUES USED IN ADJUSTING CONTROL
21.8 A DATA SET WITH BLUNDERS
21.9 SOME FURTHER CONSIDERATIONS
21.10 SURVEY DESIGN
21.11 SOFTWARE
PROBLEMS
PRACTICAL PROBLEMS
NOTES
CHAPTER 22: THE GENERAL LEAST SQUARES METHOD AND ITS APPLICATION TO CURVE FITTING AND COORDINATE TRANSFORMATIONS
22.1 INTRODUCTION TO GENERAL LEAST SQUARES
22.2 GENERAL LEAST SQUARES EQUATIONS FOR FITTING A STRAIGHT LINE
22.3 GENERAL LEAST SQUARES SOLUTION
22.4 TWO-DIMENSIONAL COORDINATE TRANSFORMATION BY GENERAL LEAST SQUARES
22.5 THREE-DIMENSIONAL CONFORMAL COORDINATE TRANSFORMATION BY GENERAL LEAST SQUARES
PROBLEMS
PROGRAMMING PROBLEMS
CHAPTER 23: THREE-DIMENSIONAL GEODETIC NETWORK ADJUSTMENT
23.1 INTRODUCTION
23.2 LINEARIZATION OF EQUATIONS
23.3 MINIMUM NUMBER OF CONSTRAINTS
23.4 EXAMPLE ADJUSTMENT
23.5 BUILDING AN ADJUSTMENT
23.6 COMMENTS ON SYSTEMATIC ERRORS
23.7 SOFTWARE
PROBLEMS
PROGRAMMING PROBLEMS
NOTES
CHAPTER 24: COMBINING GNSS AND TERRESTRIAL OBSERVATIONS
24.1 INTRODUCTION
24.2 THE HELMERT TRANSFORMATION
24.3 ROTATIONS BETWEEN COORDINATE SYSTEMS
24.4 COMBINING GNSS BASELINE VECTORS WITH TRADITIONAL OBSERVATIONS
24.5 ANOTHER APPROACH TO TRANSFORMING COORDINATES BETWEEN REFERENCE FRAMES
24.6 OTHER CONSIDERATIONS
PROBLEMS
PROGRAMMING PROBLEMS
NOTES
CHAPTER 25: ANALYSIS OF ADJUSTMENTS
25.1 INTRODUCTION
25.2 BASIC CONCEPTS, RESIDUALS, AND THE NORMAL DISTRIBUTION
25.3 GOODNESS OF FIT TEST
25.4 COMPARISON OF GNSS RESIDUAL PLOTS
25.5 USE OF STATISTICAL BLUNDER DETECTION
PROBLEMS
NOTES
CHAPTER 26: COMPUTER OPTIMIZATION
26.1 INTRODUCTION
26.2 STORAGE OPTIMIZATION
26.3 DIRECT FORMATION OF THE NORMAL EQUATIONS
26.4 CHOLESKY DECOMPOSITION
26.5 FORWARD AND BACK SOLUTIONS
26.6 USING THE CHOLESKY FACTOR TO FIND THE INVERSE OF THE NORMAL MATRIX
26.7 SPARSENESS AND OPTIMIZATION OF THE NORMAL MATRIX
PROBLEMS
PROGRAMMING PROBLEMS
NOTES
APPENDIX A: INTRODUCTION TO MATRICES
A.1 INTRODUCTION
A.2 DEFINITION OF A MATRIX
A.3 SIZE OR DIMENSIONS OF A MATRIX
A.4 TYPES OF MATRICES
A.5 MATRIX EQUALITY
A.6 ADDITION OR SUBTRACTION OF MATRICES
A.7 SCALAR MULTIPLICATION OF A MATRIX
A.8 MATRIX MULTIPLICATION
A.9 COMPUTER ALGORITHMS FOR MATRIX OPERATIONS
A.10 USE OF THE MATRIX SOFTWARE
PROBLEMS
PROGRAMMING PROBLEMS
NOTE
APPENDIX B: SOLUTION OF EQUATIONS BY MATRIX METHODS
B.1 INTRODUCTION
B.2 INVERSE MATRIX
B.3 THE INVERSE OF A 2 × 2 MATRIX
B.4 INVERSES BY ADJOINTS
B.5 INVERSES BY ELEMENTARY ROW TRANSFORMATIONS
B.6 EXAMPLE PROBLEM
PROBLEMS
PROGRAMMING PROBLEMS
APPENDIX C: NONLINEAR EQUATIONS AND TAYLOR'S THEOREM
C.1 INTRODUCTION
C.2 TAYLOR SERIES LINEARIZATION OF NONLINEAR EQUATIONS
C.3 NUMERICAL EXAMPLE
C.4 USING MATRICES TO SOLVE NONLINEAR EQUATIONS
C.5 SIMPLE MATRIX EXAMPLE
C.6 PRACTICAL EXAMPLE
C.7 CONCLUDING REMARKS
PROBLEMS
PROGRAMMING PROBLEMS
APPENDIX D: THE NORMAL ERROR DISTRIBUTION CURVE AND OTHER STATISTICAL TABLES
D.1 DEVELOPMENT FOR NORMAL DISTRIBUTION CURVE EQUATION
D.2 OTHER STATISTICAL TABLES
NOTE
APPENDIX E: CONFIDENCE INTERVALS FOR THE MEAN
APPENDIX F: MAP PROJECTION COORDINATE SYSTEMS
F.1 INTRODUCTION
F.2 MATHEMATICS OF THE LAMBERT CONFORMAL CONIC MAP PROJECTION
F.3 MATHEMATICS FROM THE TRANSVERSE MERCATOR
F.4 STEREOGRAPHIC MAP PROJECTION
F.5 REDUCTION OF OBSERVATIONS
NOTES
APPENDIX G: COMPANION WEBSITE
G.1 INTRODUCTION
G.2 FILE FORMATS AND MEMORY MATTERS
G.3 SOFTWARE
G.4 USING THE SOFTWARE AS AN INSTRUCTIONAL AID
APPENDIX H: ANSWERS TO SELECTED PROBLEMS
BIBLIOGRAPHY
INDEX
END USER LICENSE AGREEMENT
Chapter 2
TABLE 2.1
Fifty Readings
TABLE 2.2
Data in Ascending Order
TABLE 2.3
Frequency Table
TABLE 2.4
Data Arranged for the Solution of Example 2.1
TABLE 2.5
Data Arranged for the Solution of Example 2.2
TABLE 2.6
Frequency Table for Example 2.2
TABLE 2.7
Map Coordinates versus Surveyed Checkpoint Coordinates
Chapter 3
TABLE 3.1
Occurrence of Random Errors
TABLE 3.2
Multipliers for Various Percent Probable Errors
Chapter 4
TABLE 4.1
Population of 100 Values
TABLE 4.2
Increasing Sample Sizes
TABLE 4.3
Random Sample Sets from Population
Chapter 5
TABLE 5.1
Relationships in Statistical Testing
TABLE 5.2
Test Variables and Statistical Tests
Chapter 7
TABLE 7.1
Data for Example 7.9
Chapter 8
TABLE 8.1
Distance and Angle Observations for Figure 8.2
TABLE 8.2
Estimated Errors in the Computed Azimuths of Figure 8.2
TABLE 8.3
Latitudes and Departures for Example 8.2
TABLE 8.4
Data for Link Traverse in Example 8.3
TABLE 8.5
Computed Azimuths and Their Uncertainties
TABLE 8.6
Computed Latitudes and Departures
Chapter 10
TABLE 10.1
Adjustment of Example 10.2
TABLE 10.2
Route Data for Example 10.5
TABLE 10.3
Data for Standard Deviations in Example 10.5
Chapter 11
TABLE 11.1
Comparison of an Arbitrary and Least Squares Solution
TABLE 11.2
Tabular Formation of Normal Equations
TABLE 11.3
EDM Instrument–Reflector Calibration Data
Chapter 12
TABLE 12.1
Weights for Example in Section 12.2
Chapter 14
TABLE 14.1
Structure of the Normal Matrix for Complex Network in Figure 14.3
TABLE 14.2
Structure of the Coefficient or J Matrix for Example in Figure 14.4
Chapter 15
TABLE 15.1
Relationship between the Quadrant, C, and Azimuth
TABLE 15.2
Substitutions
TABLE 15.3
Structure of the Coefficient or J Matrix in Example 15.3
Chapter 16
TABLE 16.1
Subscript Substitution
TABLE 16.2
Data for Example 16.2
TABLE 16.3
Format for Coefficient Matrix J of Example 16.4
TABLE 16.4
Two-Tailed χ² Test on
Chapter 17
TABLE 17.1
Observed Baseline Data for the Network of Figure 17.1
TABLE 17.2
Comparisons of Observed and Fixed Baseline Components
TABLE 17.3
Comparisons of Repeat Baseline Measurements
Chapter 18
TABLE 18.1
Data for Example 18.1
TABLE 18.2
Coordinates of Points for Example 18.2
TABLE 18.3
Data for Example 18.3
TABLE 18.4
Data for a Three-Dimensional Conformal Coordinate Transformation
Chapter 19
TABLE 19.1
Selection of the Proper Quadrant for 2t
TABLE 19.2
F(α, 2, degrees of freedom) Statistics for Selected Probability Levels
TABLE 19.3
Other Measures of Two-Dimensional Positional Uncertainties
TABLE 19.4
Measures of Three-Dimensional Positional Uncertainties
TABLE 19.5
1998 FGDC Accuracy Standards: Horizontal, Ellipsoid Height, and Orthometric Height
TABLE 19.6
Coefficients for a Third-Order Polynomial Approximation of Radial Errors
TABLE 19.7
Map Coordinates versus Surveyed Checkpoint Coordinates
Chapter 20
TABLE 20.1
The J Matrix of Figure 20.3
Chapter 21
TABLE 21.1
Rejection Criteria with Corresponding Significance Levels
TABLE 21.2
Requirements for a Minimally Constrained Adjustment
Chapter 22
TABLE 22.1
Data for a Two-Dimensional Conformal Coordinate Transformation
TABLE 22.2
Control Data for Three-Dimensional Conformal Coordinate Transformation
Chapter 23
TABLE 23.1
Coefficients for Linearized Equations in Equations (23.11) through (23.13)
TABLE 23.2
Coefficients for Linearized Equation (23.14)
TABLE 23.3
Data for Figure 23.4
Chapter 24
TABLE 24.1
Defining Ellipsoidal Parameters
Chapter 26
TABLE 26.1
Creation of a Mapping Table
TABLE 26.2
Comparison of Indexing Methods
TABLE 26.3
Algorithms for Building the Normal Equations Directly from Their Observations
TABLE 26.4
Computer Algorithms for Computing Cholesky Factors of a Normal Matrix
TABLE 26.5
Computer Algorithms for Forward and Back Substitutions
TABLE 26.6
Pseudocode for Algorithm Computing the Inverse of a Cholesky Decomposed Matrix
TABLE 26.7
Computer Algorithms to Find the Inverse of a Cholesky Factored Matrix
TABLE 26.8
Comparison of Number of Operations in Computing One Column
TABLE 26.9
Connectivity Matrix
Appendix A
TABLE A.1
Addition Algorithm in BASIC, C, FORTRAN, and Pascal
TABLE A.2
Multiplication Algorithm in BASIC, C, FORTRAN, and Pascal
Appendix B
TABLE B.1
Inverse Algorithm in BASIC, C, FORTRAN, and PASCAL
Appendix D
TABLE D.1
Percentage Points for the Standard Normal Distribution Function
TABLE D.2
Critical Values for the χ² Distribution
TABLE D.3
Critical Values for the t Distribution
TABLE D.4
Critical Values for the F Distribution
Appendix E
TABLE E.1
1000 95% Confidence Intervals
Appendix G
TABLE G.1
Brief Summary of Software Options Contained in ADJUST
Chapter 1
FIGURE 1.1
Line plot of distance quantities.
FIGURE 1.2
Examples of precision versus accuracy.
Chapter 2
FIGURE 2.1
Frequency histogram.
FIGURE 2.2
Common histogram shapes.
FIGURE 2.3
Histogram for Example 2.2.
Chapter 3
FIGURE 3.1
Plots of probability versus size of errors.
FIGURE 3.2
Normal distribution curve.
FIGURE 3.3
Normal density function.
FIGURE 3.4
Area under the normal distribution curve determined by Equation (3.10).
FIGURE 3.5
Area representing the probability in Equation (3.14).
FIGURE 3.6
Area representing the probability in Equation (3.16).
FIGURE 3.7
Normal distribution curve.
FIGURE 3.8
Skewed data set.
Chapter 4
FIGURE 4.1
χ² distribution.
FIGURE 4.2
t distribution.
FIGURE 4.3
F distribution.
FIGURE 4.4
tα/2 plot.
FIGURE 4.5
Selecting the t distribution from the ADJUST statistics menu.
FIGURE 4.6
Entering the upper-tail percentage points and degrees of freedom for the t-distribution critical value.
FIGURE 4.7
Computed critical value from a t distribution for a 99.7% confidence interval with 43 degrees of freedom.
FIGURE 4.8
Entry of data from Example 4.1 into STATS to compute a confidence interval.
FIGURE 4.9
Confidence interval computed from STATS for Example 4.1.
Chapter 5
FIGURE 5.1
Graphical interpretation of Type I and Type II errors.
FIGURE 5.2
Graphical interpretation of (a) one- and (b) two-tailed tests.
FIGURE 5.3
Entry screen for performing the t test as shown in Example 5.2 in STATS.
FIGURE 5.4
Results for t test discussed in Example 5.2 in STATS.
Chapter 6
FIGURE 6.1
Rectangular tank.
FIGURE 6.2
Horizontal distance from slope observations.
FIGURE 6.3
Elevation of chimney determined using intersecting angles.
FIGURE 6.4
Partial listing of Example 6.3 calculated in Mathcad.
FIGURE 6.5
Example 6.3 performed in a spreadsheet.
FIGURE P6.17 and P6.18
FIGURE P6.24
Chapter 7
FIGURE 7.1
Possible target locations.
FIGURE 7.2
Error in angle due to target centering.
FIGURE 7.3
Error in angle due to error in instrument centering.
FIGURE 7.4
Analysis of instrument-centering error.
FIGURE 7.5
Centering errors at a station.
FIGURE 7.6
Effects of instrument-leveling error.
FIGURE 7.7
Closed-polygon traverse.
FIGURE 7.8
Excel® worksheet for computing estimated errors in angles.
Chapter 8
FIGURE 8.1
Latitude and departure uncertainties due to (a) the distance standard error (σD) and (b) the azimuth standard error (σα). Note that if either the distance or azimuth changes, both the latitude and departure are affected.
FIGURE 8.2
Link traverse example.
FIGURE 8.3
Closed-link traverse.
FIGURE 8.4
Traverse computations option dialog box.
FIGURE 8.5
ADJUST data file for Example 8.2.
Chapter 9
FIGURE 9.1
Collimation error in differential leveling.
FIGURE 9.2
Nonvertical level rod.
FIGURE 9.3
Determination of elevation difference by trigonometric leveling.
Chapter 10
FIGURE 10.1
Differential leveling network.
Chapter 11
FIGURE 11.1
Plot of e^(−x).
FIGURE 11.2
Fitting points on a line.
FIGURE 11.3
Fitting points on a parabolic curve.
FIGURE P11.18
Chapter 12
FIGURE 12.1
Differential leveling observation.
FIGURE 12.2
Interlocking leveling network.
FIGURE 12.3
Differential leveling network for Example 12.1.
FIGURE 12.4
MATRIX file for Example 12.1.
FIGURE 12.5
ADJUST file for Example 12.1.
FIGURE 12.6
Differential leveling least squares options in ADJUST.
FIGURE P12.1
FIGURE P12.4
Chapter 13
FIGURE P13.14
FIGURE P13.15
Chapter 14
FIGURE 14.1
Observation of a distance.
FIGURE 14.2
Trilateration example.
FIGURE 14.3
File format for Example in ADJUST.
FIGURE 14.4
Trilateration network.
FIGURE 14.5
Quadrilateral network.
FIGURE 14.6
Example of a spreadsheet that develops matrices for Example 14.2 for use in MATRIX.
FIGURE 14.7
File format for Example 14.2 in MATRIX.
FIGURE P14.3
Chapter 15
FIGURE 15.1
Relationship between the azimuth and the computed angle, α.
FIGURE 15.2
Relationship between an angle and two azimuths.
FIGURE 15.3
Intersection example.
FIGURE 15.4
Resection example.
FIGURE 15.5
Quadrilateral example.
FIGURE 15.6
Portion of the spreadsheet for Example 15.3.
FIGURE 15.7
Portion of the spreadsheet for Example 15.3.
FIGURE P15.1
FIGURE P15.9
FIGURE P15.11
Chapter 16
FIGURE 16.1
(a) Polygon and (b) link traverses.
FIGURE 16.2
Simple link traverse.
FIGURE 16.3
ADJUST data file for Example 16.1.
FIGURE 16.4
Horizontal network.
FIGURE P16.1
FIGURE P16.2
FIGURE P16.3
Chapter 17
FIGURE 17.1
GPS survey network.
FIGURE 17.2
Satellite reference coordinate system.
FIGURE 17.3
Earth-related, three-dimensional coordinate system used in GPS carrier-phase differencing computations.
FIGURE 17.4
Geodetic coordinates (with the Earth-centered, Earth-fixed geocentric coordinates XP, YP, and ZP superimposed).
FIGURE 17.5
Results of loop misclosure computations for example in Section 17.7.3.
FIGURE 17.6
ADJUST data file for computing loop misclosure as discussed in Section 17.7.3.
FIGURE 17.7
ADJUST data file for the example in Section 17.8.
FIGURE P17.3
FIGURE P17.5
FIGURE P17.7
FIGURE P17.11
FIGURE P17.17
FIGURE P17.20
FIGURE P17.22
Chapter 18
FIGURE 18.1
Superimposed coordinate systems.
FIGURE 18.2
Two-dimensional coordinate systems.
FIGURE 18.3
ADJUST data file for Example 18.1.
FIGURE 18.4
θ1 rotation.
FIGURE 18.5
θ2 rotation.
FIGURE 18.6
θ3 rotation.
FIGURE 18.7
ADJUST data file for Example 18.4.
Chapter 19
FIGURE 19.1
Standard error rectangle.
FIGURE 19.2
(a) Three-dimensional view and (b) contour plot of a bivariate distribution.
FIGURE 19.3
Standard error ellipse.
FIGURE 19.4
Two-dimensional rotation.
FIGURE 19.5
Graphical representation of error ellipses.
FIGURE 19.6
Graphical representation of error ellipse.
FIGURE 19.7
Network analysis using error ellipses: (a) trilateration for 19 distances; (b) triangulation for 19 angles.
FIGURE 19.8
A 95% circular error overlaying the 95% error ellipse for Station S.
Chapter 20
FIGURE 20.1
Trilateration network.
FIGURE 20.2
A, X, and L matrices partitioned.
FIGURE 20.3
Holding direction IJ fixed.
FIGURE 20.4
Holding direction AB fixed in a trilateration adjustment.
FIGURE 20.5
Differential leveling network.
FIGURE 20.6
Network for Example 20.3.
Chapter 21
FIGURE 21.1
Presence of distance blunder in computations.
FIGURE 21.2
Effects of a single blunder on the traverse closure error.
FIGURE 21.3
Distribution of residuals by sign.
FIGURE 21.4
Survey network.
FIGURE 21.5
Effects of a blunder on the t distribution.
FIGURE 21.6
Data set with blunders.
FIGURE 21.7
Standard error ellipse data for Example 21.1.
FIGURE 21.8
Horizontal least squares option screen shows blunder detection options in ADJUST.
FIGURE 21.9
Mathcad code to compute standardized residuals and redundancy numbers.
FIGURE P21.11
FIGURE P21.14
Chapter 22
FIGURE 22.1
General least squares fits of points to a line.
FIGURE 22.2
Residuals for point C.
Chapter 23
FIGURE 23.1
Relationship between the geocentric and local geodetic coordinate systems.
FIGURE 23.2
Reduction of observations in a local geodetic coordinate system.
FIGURE 23.3
Comparison of horizontal distances from opposite ends of the line.
FIGURE 23.4
Example three-dimensional geodetic network.
FIGURE 23.5
ADJUST listing of adjustment results for example problem in Figure 23.4.
FIGURE 23.6
ADJUST options for a three-dimensional geodetic network adjustment.
FIGURE 23.7
ADJUST data file for example in Section 23.4.
Chapter 25
FIGURE 25.1
The normal distribution.
FIGURE 25.2
Adjusted distances and angles from Example 16.2.
FIGURE 25.3
Readjustment of data in Example 16.2 after removing angle QTR.
FIGURE 25.4
Readjusted data from Example 16.2 with a different stochastic model.
FIGURE 25.5
Pseudorange residual plots from satellites 24 (a) and 28 (b).
Chapter 26
FIGURE 26.1
Structure of the normal matrix.
FIGURE 26.2
Horizontal network.
FIGURE 26.3
Normal matrix.
FIGURE 26.4
Reordered normal matrix.
FIGURE 26.5
Computation of the Cholesky factor.
Appendix A
FIGURE A.1
Addition of matrices.
FIGURE A.2
Multiplication of matrices.
FIGURE A.3
MATRIX data file.
FIGURE A.4
MATRIX software.
FIGURE A.5
MATRIX selection screen for adding two matrices.
Appendix B
FIGURE B.1
Cofactor of the a12 element.
FIGURE B.2
Observation of a line.
Appendix C
FIGURE C.1
Spreadsheet for Example C.3.
Appendix D
FIGURE D.1
The normal distribution curve.
FIGURE D.2
χ² distribution.
FIGURE D.3
t distribution.
FIGURE D.4
F distribution.
Appendix F
FIGURE F.1
Reduction of a distance to a mapping surface.
FIGURE F.2
Relationship of geodetic azimuth (T), grid azimuth (t), convergence angle (γ), and arc-to-chord correction (δ).
Appendix G
FIGURE G.1
Comparison of a Mathcad function and a C function.
Sixth Edition
CHARLES D. GHILANI, PhD
Professor Emeritus of Engineering
The Pennsylvania State University
Copyright © 2018 by John Wiley & Sons, Inc. All rights reserved
Published by John Wiley & Sons, Inc., Hoboken, New Jersey
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor the author shall be liable for damages arising herefrom.
For general information about our other products and services, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Cataloging-in-Publication Data is Available
ISBN 9781119385981 (Hardback)
ISBN 9781119390312 (ePDF)
ISBN 9781119390619 (ePub)
Cover Design: Wiley
Cover Image: © llhedgehogll/Getty Images
No observation is ever exact. As a corollary, every observation contains error. These statements are fundamental and accepted universally. It follows logically, therefore, that surveyors, who are measurement specialists, should have a thorough understanding of errors. They must be familiar with the different types of errors, their sources, and their expected magnitudes. Armed with this knowledge, they will be able to (1) adopt procedures for reducing error sizes when making their measurements and (2) account rigorously for the presence of errors as they analyze and adjust their data. This book is devoted to creating a better understanding of these topics.
In recent years, the least squares method of adjusting spatial data has been rapidly gaining popularity as the method used for analyzing and adjusting surveying data. This should not be surprising, because the method is the most rigorous adjustment procedure available. It is soundly based on the mathematical theory of probability; it allows for appropriate weighting of all observations in accordance with their expected precisions; and it enables complete statistical analyses to be made following adjustments so that the expected precisions of adjusted quantities can be determined. Procedures for employing the method of least squares and then statistically analyzing the results are major topics covered in this book.
In years past, least squares was seldom used for adjusting surveying data because the time required to set up and solve the necessary equations was too great for hand methods. Now computers have eliminated this disadvantage. Besides advances in computer technology, some other recent developments have also led to increased use of least squares. Prominent among these are the global navigation satellite systems (GNSSs) such as GPS and geographic information systems (GISs). These systems rely heavily on rigorous adjustment of data and statistical analysis of the results. But perhaps the most compelling of all reasons for the recent increased interest in least squares adjustment is that new accuracy standards for surveys are being developed that are based on quantities obtained from least squares adjustments. Thus, surveyors of the future will not be able to test their observations for compliance with these standards unless they adjust their data using least squares. This edition discusses these newer methods of classifying maps and control surveys, regardless of the source of the data. Clearly, modern surveyors must be able to apply the method of least squares to adjust their observed data, and they must also be able to perform a statistical evaluation of the results after making the adjustments.
In the sixth edition, the author has included instructional videos to accompany Chapters 1 through 25 and Appendixes A through C in the book. These videos provide instructional lessons on the subject matter covered in the book. They also demonstrate the solution of example problems using spreadsheets and the software that accompanies the book. Additionally, this book discusses proper procedures for computing accuracy estimates following both the Federal Geographic Data Committee (FGDC) standards and the ASPRS Positional Accuracy Standards for Digital Geospatial Data for maps and control surveys. For instructors who adopt this text in their classes, an Instructor's Manual to Accompany Adjustment Computations is available from the publisher's website. This manual includes detailed solutions to all the problems in the book along with suggested course outlines and exams, which can be used in their courses. It is available to all instructors who adopt this book. To obtain access to the manual, contact your regional Wiley representative.
The software STATS, ADJUST, and MATRIX will run on any PC-compatible computer in the Windows environment. The first package, called STATS, performs basic statistical analyses. For any given set of observed data, it will compute the mean, median, mode, and standard deviation, and develop and plot the histogram and normal distribution curve. It will also compute critical values for the t, χ², and F distributions. New features include its ability to compute critical values for the τ distribution, confidence intervals for the population mean, variance, and ratio of two variances, and to perform statistical tests for the population mean, variance, and ratio of two variances.
The second package, called ADJUST, contains programs for performing specific least squares adjustments covered in the book. When performing least squares adjustments, ADJUST allows the user to select either data snooping or the tau criterion for post-adjustment blunder detection. The program contains a variety of coordinate transformations and allows the user to fit points to a line, parabola, or circle. The third package, called MATRIX, performs basic matrix operations. A new feature in MATRIX is its ability to perform unweighted and weighted least squares adjustments with a single command. Using this program, systems of simultaneous linear equations can be solved quickly and conveniently, and the basic algorithm for doing least squares adjustments can be solved in a stepwise fashion. For those who wish to develop their own software, the book provides several helpful computer algorithms in the languages of BASIC, C, FORTRAN, and PASCAL. Additionally, the Mathcad worksheets on the companion website demonstrate the use of functions in developing modular programs.
The chapters of this book are arranged in the order found most convenient in teaching college courses on adjustment computations. The content in this book can be covered in two or three typical undergraduate, college-level courses. It is believed that this order also best serves practicing surveyors who use the book for self-study. In earlier chapters we define terms and introduce students to the fundamentals of errors and methods for analyzing them. The next several chapters are devoted to the subject of error propagation in the various types of traditional surveying measurements. Then chapters follow that describe observation weighting and introduce the least squares method for adjusting observations. Applications of least squares in adjusting basic types of surveys are then presented in separate chapters. Adjustment of level nets, trilateration, triangulation, traverses and horizontal networks, GNSS networks, and conventional three-dimensional surveys are included. The subject of error ellipses and error ellipsoids is covered in a separate chapter. Procedures for applying least squares in curve fitting and in computing coordinate transformations are also presented. The more advanced topics of blunder detection, the method of general least squares adjustments, and computer optimization are covered in the last chapters.
As with previous editions, matrix methods, which are so well adapted to adjustment computations, continue to be used in this edition. For those students who have never studied matrices, or those who wish to review this topic, an introduction to matrix methods is given in Appendixes A and B. Those students who have already studied matrices can conveniently skip this subject.
Least squares adjustments often require the formation and solution of nonlinear equations. Procedures for linearizing nonlinear equations by Taylor's theorem are therefore important in adjustment computations, and this topic is presented in Appendix C. Appendix D contains several statistical tables including the standard normal error distribution, the χ2 distribution, t distribution, and a set of F-distribution tables. These tables are described at appropriate locations in the text, and their use is demonstrated with example problems.
Basic courses in surveying, statistics, and calculus are necessary prerequisites to understanding some of the theoretical coverage and equation derivations given herein. Nevertheless, those who do not have these courses as background but who wish to learn how to apply least squares in adjusting surveying observations can follow the discussions on data analysis.
Besides being appropriate for use as a textbook in college classes, this book will be of value to practicing surveyors and geospatial information managers. With the inclusion of video lessons, it is possible for practitioners to learn the subject matter at their leisure. The author hopes that through the publication of this book, least squares adjustment and rigorous statistical analyses of surveying data will become more commonplace, as it should.
Through the years, many people have contributed to the development of this book. As noted in the preface, the book has been used in continuing education classes taught to practicing surveyors as well as in classes taken by students at the University of California–Berkeley, the University of Wisconsin–Madison, and Pennsylvania State University–Wilkes-Barre. The students in these classes have provided data for some of the example problems and have supplied numerous helpful suggestions for improvements throughout the book. The author gratefully acknowledges their contributions.
Earlier editions of the book benefited specifically from the contributions of Dr. Paul R. Wolf, who wrote the first two editions of this book, Mr. Joseph Dracup of the National Geodetic Survey, Professor Harold Welch of the University of Michigan, Professor Sandor Veress of the University of Washington, Mr. Charles Schwarz of the National Geodetic Survey, Mr. Earl Burkholder of New Mexico State University, Dr. Herbert Stoughton of Metropolitan State College, Dr. Joshua Greenfeld, Dr. Steve Johnson of Purdue University, Mr. Brian Naberezny, Mr. Preston Hartzell of the University of Houston, and Mr. Edward Connolly of TBE Group, Inc. The suggestions and contributions of these people were extremely valuable and are very much appreciated.
We currently live in what is often termed the information age. Aided by new and emerging technologies, data are being collected at unprecedented rates in all walks of life. For example, in the field of surveying, total station instruments, global navigation satellite system (GNSS) equipment, digital metric cameras, laser-scanning systems, LiDAR, mobile mapping systems, and satellite imaging systems are only some of the new instruments that are now available for rapid generation of vast quantities of observational data.
Geographic information systems (GISs) have evolved concurrently with the development of these new data acquisition instruments. GISs are now used extensively for management, planning, and design. They are being applied worldwide at all levels of government, in business and industry, by public utilities, and in private engineering and surveying offices. Implementation of a GIS depends on large quantities of data from a variety of sources, many of them consisting of observations made with new instruments such as those noted above, and others collected with instruments that are no longer used in practice.
However, before data can be utilized, whether for surveying and mapping projects, for engineering design, or for use in a geographic information system, they must be processed. One of the most important aspects of this is to account for the fact that no measurements are exact. That is, they always contain errors.
The steps involved in accounting for the existence of errors in observations consist of (1) performing statistical analyses of the observations to assess the magnitudes of their errors and studying their distributions to determine whether they are within acceptable tolerances, and, if the observations are acceptable, (2) adjusting them so they conform to exact geometric conditions or other required constraints. Procedures for performing these two steps in processing measured data are principal subjects of this text.
Measurements are defined as observations made to determine unknown quantities. They may be classified as either direct or indirect. Direct measurements are made by applying an instrument directly to the unknown quantity and observing its value, usually by reading it directly from graduated scales on the device. Determining the distance between two points by making a direct measurement using a graduated tape, or measuring an angle by making a direct observation from the graduated circle of a total station instrument are examples of direct measurements.
Indirect measurements are obtained when it is not possible or practical to make direct measurements. In such cases the quantity desired is determined from its mathematical relationship to direct measurements. For example, surveyors may observe angles and lengths of lines between points directly and use these observations to compute station coordinates. From these coordinate values, other distances and angles that were not observed directly may be derived indirectly by computation. During this procedure, the errors that were present in the original direct observations are propagated (distributed) by the computational process into the indirect values. Thus, the indirect measurements (computed station coordinates, distances, directions, and angles) contain errors that are functions of the original errors. This distribution of errors is known as error propagation. The analysis of how errors propagate is also a principal topic of this text.
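To illustrate this propagation numerically, the short C sketch below (not taken from the text; the 500 m distance, 45° azimuth, and 10″ azimuth uncertainty are assumed values chosen only for illustration) computes the latitude and departure of a course indirectly from a distance and an azimuth, and then shows how much those indirect values change when the directly observed azimuth is perturbed by its assumed error.

    /* Minimal sketch (assumed example values, not from the text): shows how an
     * error in a directly observed azimuth propagates into the indirectly
     * computed latitude and departure of a course.
     */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        double dist = 500.000;              /* observed distance (m)       */
        double az_deg = 45.0;               /* observed azimuth (degrees)  */
        double az_err_deg = 10.0 / 3600.0;  /* assumed 10-arc-second error */

        /* Indirect measurements: departure and latitude of the course */
        double az = az_deg * PI / 180.0;
        double dep = dist * sin(az);
        double lat = dist * cos(az);

        /* Recompute with the azimuth perturbed by its estimated error */
        double az2 = (az_deg + az_err_deg) * PI / 180.0;
        printf("Change in departure: %+.4f m\n", dist * sin(az2) - dep);
        printf("Change in latitude:  %+.4f m\n", dist * cos(az2) - lat);
        return 0;
    }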
It can be stated unconditionally that (1) no measurement is exact, (2) every measurement contains errors, (3) the true value of a measurement is never known, and thus (4) the exact size of the error present is always unknown. These facts can be illustrated by the following. If an angle is measured with a scale divided into degrees, its value can be read only to perhaps the nearest tenth of a degree. However if a better scale graduated in minutes were available and read under magnification, the same angle might be estimated to tenths of a minute. With a scale graduated in seconds, a reading to the nearest tenth of a second might be possible. From the foregoing, it should be clear that no matter how well the observation is taken, a better one may be possible. Obviously in this example, observational accuracy depends on the division size of the scale. But accuracy depends on many other factors, including the overall reliability and refinement of the equipment used, environmental conditions that exist when the observations are taken, and human limitations (e.g., the ability to estimate fractions of a scale division). As better equipment is developed, environmental conditions improve, and observer ability increases, observations will approach their true values more closely, but they can never be exact.
By definition, an error is the difference between a measured value for any quantity and its true value, or

ε = y − μ

where ε is the error in an observation, y the measured value, and μ its true value.
As discussed above, errors stem from three sources, which are classified as instrumental, natural, and personal. These are described as follows:
Instrumental errors. These errors are caused by imperfections in instrument construction or adjustment. For example, the divisions on a theodolite or total station instrument may not be spaced uniformly. These error sources are present whether the equipment is read manually or digitally.
Natural errors. These errors are caused by changing conditions in the surrounding environment. These include variations in atmospheric pressure, temperature, wind, gravitational fields, and magnetic fields.
Personal errors. These errors arise due to limitations in human senses, such as the ability to read a micrometer or to center a level bubble. The sizes of these errors are affected by personal ability to see and by manual dexterity. These factors may be influenced further by temperature, insects, and other physical conditions that cause humans to behave in a less precise manner than they would under ideal conditions.
From the discussion thus far it can be stated with absolute certainty that all measured values contain errors, whether due to lack of refinement in readings, instabilities in environmental conditions, instrumental imperfections, or human limitations. Some of these errors result from physical conditions that cause them to occur in a systematic way, whereas others occur with apparent randomness. Accordingly, errors are classified as either systematic or random. But before defining systematic and random errors, it is helpful to define mistakes. These three terms are defined as follows:
Mistakes. These are caused by confusion or by an observer's carelessness. They are not classified as errors and must be removed from any set of observations. Examples of mistakes include (a) forgetting to set the proper parts-per-million (ppm) correction on an EDM instrument, or failure to read the correct air temperature, (b) mistakes in reading graduated scales, and (c) mistakes in recording (i.e., writing down 27.55 for 25.75). Mistakes are also known as blunders or gross errors.
Systematic errors. These errors follow some physical law and thus can be predicted. Some systematic errors are removed by following correct observational procedures (e.g., balancing backsight and foresight distances in differential leveling to compensate for earth curvature and refraction). Others are removed by deriving corrections based on the physical conditions that were responsible for their creation (e.g., applying a computed correction for earth curvature and refraction on a trigonometric leveling observation). Additional examples of systematic errors are (a) temperature not being standard while taping, (b) an indexing error of the vertical circle of a total station instrument, and (c) use of a level rod that is not standard length. Corrections for systematic errors can be computed and applied to observations to eliminate their effects.
Random errors. These are the errors that remain after all mistakes and systematic errors have been removed from the observed values. In general, they are the result of human and instrument imperfections. They are generally small and are as likely to be negative as positive. They usually do not follow any physical law and therefore must be dealt with according to the mathematical laws of probability. Examples of random errors are (a) imperfect centering over a point during distance measurement with an EDM instrument, (b) bubble not centered at the instant a level rod is read, and (c) small errors in reading graduated scales. It is impossible to avoid random errors in measurements entirely. Although they are often called accidental errors, their occurrence should not be considered an accident.
Due to errors, repeated measurement of the same quantity will often yield different values. A discrepancy is defined as the algebraic difference between two observations of the same quantity. When small discrepancies exist between repeated observations, it is generally believed that only small errors exist. Thus, the tendency is to give higher credibility to such data and to call the observations precise. However, precise values are not necessarily accurate values. To help understand the difference between precision and accuracy, the following definitions are given:
Precision:
Precision is the degree of consistency between observations and is based on the sizes of the discrepancies in a data set. The degree of precision attainable is dependent on the stability of the environment during the time of measurement, the quality of the equipment used to make the observations, and the observer's skill with the equipment and observational procedures.
Accuracy:
Accuracy is the measure of the absolute nearness of an observed quantity to its true value. Since the true value of a quantity can never be determined, accuracy is always an unknown.
The difference between precision and accuracy can be demonstrated using distance observations. Assume that the distance between two points is paced, taped, and measured electronically and that each procedure is repeated five times. The resulting observations are:
Observation    Pacing (p)    Taping (t)    EDM (e)
     1            571          567.17       567.133
     2            563          567.08       567.124
     3            566          567.12       567.129
     4            588          567.38       567.165
     5            557          567.01       567.114
The arithmetic means for these sets of data are 569, 567.15, and 567.133, respectively. A line plot illustrating relative values of the electronically measured distances denoted by e, and the taped distances, denoted by t, is shown in Figure 1.1. Notice that although the means of the EDM data and of the taped observations are relatively close, the EDM set has smaller discrepancies. This indicates that the EDM instrument produced a higher precision. However, this higher precision does not necessarily prove that the mean of the electronically observed data is implicitly more accurate than the mean of the taped values. In fact, the opposite may be true if, for example, the reflector constant was entered incorrectly causing a large systematic error to be present in all the electronically observed distances. Because of the larger discrepancies, it is unlikely that the mean of the paced distances is as accurate as either of the other two values. But its mean could be more accurate if large systematic errors were present in both the taped and electronically measured distances.
FIGURE 1.1 Line plot of distance quantities.
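As a rough numerical companion to this comparison, the following C sketch (data copied from the table above) computes the mean of each data set together with its sample standard deviation, a measure of the spread of the discrepancies that is developed formally in Chapter 2. Running it shows that the EDM set has by far the smallest spread, that is, the highest precision, while saying nothing about accuracy.

    /* Minimal sketch: means and sample standard deviations of the pacing,
     * taping, and EDM data sets from the table above. The standard deviation
     * is used here only to quantify the sizes of the discrepancies.
     */
    #include <math.h>
    #include <stdio.h>

    static void summarize(const char *label, const double *obs, int n)
    {
        double sum = 0.0, ssq = 0.0, mean;
        int i;

        for (i = 0; i < n; i++)
            sum += obs[i];
        mean = sum / n;

        for (i = 0; i < n; i++)
            ssq += (obs[i] - mean) * (obs[i] - mean);

        printf("%-8s mean = %9.3f   S = %7.3f\n",
               label, mean, sqrt(ssq / (n - 1)));
    }

    int main(void)
    {
        double pace[] = {571, 563, 566, 588, 557};
        double tape[] = {567.17, 567.08, 567.12, 567.38, 567.01};
        double edm[]  = {567.133, 567.124, 567.129, 567.165, 567.114};

        summarize("Pacing", pace, 5);
        summarize("Taping", tape, 5);
        summarize("EDM", edm, 5);
        return 0;
    }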
Another illustration explaining differences between precision and accuracy involves target shooting, depicted in Figure 1.2. As shown, four situations can occur. If accuracy is considered as closeness of shots to the center of a target at which a marksman shoots and precision as the closeness of the shots to each other, then (1) the data may be both precise and accurate, as shown in Figure 1.2(a); (2) the data may produce an accurate mean but not be precise, as shown in Figure 1.2(b); (3) the data may be precise but not accurate, as shown in Figure 1.2(c); or (4) the data may be neither precise nor accurate, as shown in Figure 1.2(d).
FIGURE 1.2 Examples of precision versus accuracy.
Figure 1.2(a) is the desired result when observing quantities. The other cases can be attributed to the following situations. The results shown in Figure 1.2(b) occur when there is little refinement in the observational process. Someone skilled at pacing may achieve these results. Figure 1.2(c) generally occurs when systematic errors are present in the observational process. This can occur, for example, in taping if corrections are not made for tape length and temperature, or with electronic distance measurements when using the wrong combined instrument-reflector constant. Figure 1.2(d) shows results obtained when the observations are not corrected for systematic errors and are taken carelessly by the observer (or the observer is unskilled at the particular measurement procedure).
In general, when making measurements, data such as those shown in Figure 1.2(b) and 1.2(d) are undesirable. Rather, results similar to those shown in Figure 1.2(a) are preferred. However, in making measurements the results of Figure 1.2(c) can be just as acceptable if proper steps are taken to correct for the presence of the systematic errors. (This correction would be equivalent to the marksman realigning the sights after taking the shots.) To make these corrections, (1) the specific types of systematic errors that have occurred in the observations must be known, and (2) the procedures used in correcting them must be understood.
As noted earlier, errors exist in all observations. In surveying, the presence of errors is obvious in many situations where the observations must meet certain conditions. In level loops that begin and close on the same bench mark, for example, the elevation difference for the loop must equal zero. However, in practice this is hardly ever the case due to the presence of random errors. (For this discussion it is assumed that all mistakes have been eliminated from the observations and appropriate corrections have been applied to remove all systematic errors.) Other conditions that disclose errors in surveying observations are that (1) the three measured angles in a plane triangle must total 180°, (2) the sum of the angles measured around the horizon at any point must equal 360°, and (3) the algebraic sum of the latitudes (and departures) must equal zero for closed traverses that begin and end on the same station. Many other conditions could be cited; however, in any of them, the observations rarely, if ever, meet the required conditions, due to the presence of random errors.
The examples above not only demonstrate that errors are present in surveying observations but also illustrate the importance of redundant observations: measurements made in excess of the minimum number needed to determine the unknowns. Two measurements of the length of a line, for example, yield one redundant observation. The first observation would be sufficient to determine the unknown length, and the second is redundant. However, this second observation is very valuable. First, by examining the discrepancy between the two values, an assessment of the size of the error in the observations can be made. If a large discrepancy exists, a blunder or large error is likely to have occurred. In that case, observations of the line would be repeated until two values having an acceptably small discrepancy were obtained. Second, the redundant observation permits an adjustment to be made to obtain a final value for the unknown line length, and that final adjusted value will be more precise statistically than either of the individual observations. In this case, if the two observations were of equal precision, the adjusted value would be the simple mean.
Each of the specific conditions cited in the first paragraph of this section involves one redundant observation. For example, there is one redundant observation when the three angles of a plane triangle are observed. This is true because with two observed angles, say A and B, the third could be computed as C = 180° − A − B, and thus, observation of C is unnecessary. However, measuring angle C enables an assessment of the errors in the angles to be made, and it also makes an adjustment possible to obtain final angles with statistically improved precision. Assuming the angles were of equal precision, the adjustment would enforce the 180° sum for the three angles by distributing the total discrepancy in equal parts to each angle.
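A minimal C sketch of this simple adjustment is given below. The three observed angles are assumed values used only for illustration; the misclosure from 180° is computed and distributed in equal parts to the angles, as described above for angles of equal precision.

    /* Minimal sketch (assumed observed angles): enforces the 180-degree
     * condition on the three angles of a plane triangle by distributing the
     * misclosure equally among the angles.
     */
    #include <stdio.h>

    int main(void)
    {
        double a[3] = {60.002, 59.996, 60.005};   /* assumed observations (deg) */
        double misclosure = a[0] + a[1] + a[2] - 180.0;
        double correction = -misclosure / 3.0;    /* equal share to each angle  */
        int i;

        printf("Angular misclosure: %+.3f deg\n", misclosure);
        for (i = 0; i < 3; i++)
            printf("Adjusted angle %d: %.3f deg\n", i + 1, a[i] + correction);
        return 0;
    }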
