Optimal Control

Frank L. Lewis

Description

A NEW EDITION OF THE CLASSIC TEXT ON OPTIMAL CONTROL THEORY

As a superb introductory text and an indispensable reference, this new edition of Optimal Control will serve the needs of both the professional engineer and the advanced student in mechanical, electrical, and aerospace engineering. Its coverage encompasses all the fundamental topics as well as the major changes that have occurred in recent years. An abundance of computer simulations using MATLAB and relevant Toolboxes is included to give the reader the actual experience of applying the theory to real-world situations. Major topics covered include:

  • Static Optimization
  • Optimal Control of Discrete-Time Systems
  • Optimal Control of Continuous-Time Systems
  • The Tracking Problem and Other LQR Extensions
  • Final-Time-Free and Constrained Input Control
  • Dynamic Programming
  • Optimal Control for Polynomial Systems
  • Output Feedback and Structured Control
  • Robustness and Multivariable Frequency-Domain Techniques
  • Differential Games
  • Reinforcement Learning and Optimal Adaptive Control


Page count: 768

Publication year: 2012




Table of Contents

Title Page

Copyright

Dedication

Preface

Acknowledgments

Chapter 1: Static Optimization

1.1 Optimization without Constraints

1.2 Optimization with Equality Constraints

1.3 Numerical Solution Methods

Problems

Chapter 2: Optimal Control of Discrete-Time Systems

2.1 Solution of the General Discrete-Time Optimization Problem

2.2 Discrete-Time Linear Quadratic Regulator

2.3 Digital Control of Continuous-Time Systems

2.4 Steady-State Closed-Loop Control and Suboptimal Feedback

2.5 Frequency-Domain Results

Problems

Chapter 3: Optimal Control of Continuous-Time Systems

3.1 The Calculus of Variations

3.2 Solution of the General Continuous-Time Optimization Problem

3.3 Continuous-Time Linear Quadratic Regulator

3.4 Steady-State Closed-Loop Control and Suboptimal Feedback

3.5 Frequency-Domain Results

Problems

Chapter 4: The Tracking Problem and Other LQR Extensions

4.1 The Tracking Problem

4.2 Regulator with Function of Final State Fixed

4.3 Second-Order Variations in the Performance Index

4.4 The Discrete-Time Tracking Problem

4.5 Discrete Regulator with Function of Final State Fixed

4.6 Discrete Second-Order Variations in the Performance Index

Problems

Chapter 5: Final-Time-Free and Constrained Input Control

5.1 Final-Time-Free Problems

5.2 Constrained Input Problems

Problems

Chapter 6: Dynamic Programming

6.1 Bellman's Principle of Optimality

6.2 Discrete-Time Systems

6.3 Continuous-Time Systems

Problems

Chapter 7: Optimal Control for Polynomial Systems

7.1 Discrete Linear Quadratic Regulator

7.2 Digital Control of Continuous-Time Systems

Problems

Chapter 8: Output Feedback and Structured Control

8.1 Linear Quadratic Regulator with Output Feedback

8.2 Tracking a Reference Input

8.3 Tracking by Regulator Redesign

8.4 Command-Generator Tracker

8.5 Explicit Model-Following Design

8.6 Output Feedback in Game Theory and Decentralized Control

Problems

Chapter 9: Robustness and Multivariable Frequency-Domain Techniques

9.1 Introduction

9.2 Multivariable Frequency-Domain Analysis

9.3 Robust Output-Feedback Design

9.4 Observers and the Kalman Filter

9.5 LQG/Loop-Transfer Recovery

9.6 H∞ Design

Problems

Chapter 10: Differential Games

10.1 Optimal Control Derived Using Pontryagin's Minimum Principle and the Bellman Equation

10.2 Two-player Zero-sum Games

10.3 Application of Zero-sum Games to H∞ Control

10.4 Multiplayer Non-zero-sum Games

Chapter 11: Reinforcement Learning and Optimal Adaptive Control

11.1 Reinforcement Learning

11.2 Markov Decision Processes

11.3 Policy Evaluation and Policy Improvement

11.4 Temporal Difference Learning and Optimal Adaptive Control

11.5 Optimal Adaptive Control for Discrete-time Systems

11.6 Integral Reinforcement Learning for Optimal Adaptive Control of Continuous-time Systems

11.7 Synchronous Optimal Adaptive Control for Continuous-time Systems

Appendix A: Review of Matrix Algebra

A.1 Basic Definitions and Facts

A.2 Partitioned Matrices

A.3 Quadratic Forms and Definiteness

A.4 Matrix Calculus

A.5 The Generalized Eigenvalue Problem

References

Index

Wiley End User License Agreement

This book is printed on acid-free paper.

Copyright © 2012 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and the author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor the author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information about our other products and services, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Lewis, Frank L.

Optimal control / Frank L. Lewis, Draguna L. Vrabie, Vassilis L. Syrmos.—3rd ed.

p. cm.

Includes bibliographical references and index.

ISBN 978-0-470-63349-6 (cloth); ISBN 978-1-118-12263-1 (ebk); ISBN 978-1-118-12264-8 (ebk); ISBN 978-1-118-12266-2 (ebk); ISBN 978-1-118-12270-9 (ebk); ISBN 978-1-118-12271-6 (ebk); ISBN 978-1-118-12272-3 (ebk)

1. Control theory. 2. Mathematical optimization. I. Vrabie, Draguna L. II. Syrmos, Vassilis L. III. Title.

QA402.3.L487 2012

629.8'312–dc23

2011028234

To Galina, Roma, and Chris, who make every day exciting

—Frank Lewis

To my mother and my grandmother, for teaching me my potential and supporting my every choice

—Draguna Vrabie

To my father, my first teacher

—Vassilis Syrmos

Preface

This book is intended for use in a second graduate course in modern control theory. A background in the state-variable representation of systems is assumed. Matrix manipulations are the basic mathematical vehicle and, for those whose memory needs refreshing, Appendix A provides a short review.

The book is also intended as a reference. Numerous tables make it easy to find the equations needed to implement optimal controllers for practical applications.

Our interactions with nature can be divided into two categories: observation and action. While observing, we process data from an essentially uncooperative universe to obtain knowledge. Based on this knowledge, we act to achieve our goals. This book emphasizes the control of systems assuming perfect and complete knowledge. The dual problem of estimating the state of our surroundings is briefly studied in Chapter 9. A rigorous course in optimal estimation is required to conscientiously complete the picture begun in this text.

Our intention is to present optimal control theory in a clear and direct fashion. This goal naturally obscures the more subtle points and unanswered questions scattered throughout the field of modern system theory. What appears here as a completed picture is in actuality a growing body of knowledge that can be interpreted from several points of view and that takes on different personalities as new research is completed.

We have tried to show with many examples that computer simulations of optimal controllers are easy to implement and are an essential part of gaining an intuitive feel for the equations. Students should be able to write simple programs as they progress through the book, to convince themselves that the theory works and to build confidence in its practical implications.
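As an illustration of the kind of simple program meant here (the book's own examples use MATLAB; this is a rough Python equivalent, and the system matrices are made up for the sketch), the following computes a steady-state discrete-time LQR gain by iterating the Riccati difference equation and then simulates the closed loop:

```python
import numpy as np

# Illustrative discrete-time system x[k+1] = A x[k] + B u[k]
# (a double integrator sampled at dt = 0.1)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Iterate the Riccati difference equation backward to steady state:
#   K = (R + B'PB)^{-1} B'PA,   P <- Q + A'P(A - BK)
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Simulate the closed loop x[k+1] = (A - BK) x[k] from x[0] = [1, 0]'
x = np.array([[1.0], [0.0]])
for _ in range(100):
    x = (A - B @ K) @ x

print(np.linalg.norm(x))  # state decays toward zero under the optimal gain
```

Plotting the state trajectory instead of printing its final norm gives exactly the kind of intuitive feel for the equations that the examples in the book aim at.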

Relationships to classical control theory have been pointed out, and a root-locus approach to steady-state controller design is included. Chapter 9 presents some multivariable classical design techniques. A chapter on optimal control of polynomial systems is included to provide a background for further study in the field of adaptive control. A chapter on robust control is also included to expose the reader to this important area. A chapter on differential games shows how to extend the optimality concepts in the book to multiplayer optimization in interacting teams.

Optimal control relies on solving the matrix design equations developed in the book. These equations can be complicated, and exact solution of the Hamilton-Jacobi equations for nonlinear systems may not be possible. The last chapter, on optimal adaptive control, gives practical methods for solving these matrix design equations. Algorithms are given for finding approximate solutions online in real-time using adaptive learning techniques based on data measured along the system trajectories.
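The flavor of these iterative solution methods can be seen in a minimal model-based sketch of policy iteration for the discrete-time LQR (Hewer's algorithm): evaluate the current feedback policy by solving a Lyapunov equation, then improve the policy, and repeat. The online algorithms in the last chapter replace the model-based evaluation step with data measured along the trajectories; the matrices below are illustrative, not from the book.

```python
import numpy as np

# Illustrative stable discrete-time system and weights
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))  # initial stabilizing policy (A itself is stable here)
for _ in range(20):
    Ac = A - B @ K
    # Policy evaluation: solve the Lyapunov equation P = Q + K'RK + Ac'P Ac
    # by fixed-point iteration (Ac is stable, so this converges)
    P = np.zeros((2, 2))
    for _ in range(1000):
        P = Q + K.T @ R @ K + Ac.T @ P @ Ac
    # Policy improvement: K = (R + B'PB)^{-1} B'PA
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

print(K)  # approximates the optimal LQR gain
```

Each improvement step reduces the cost, and the iterates converge to the solution of the algebraic Riccati equation; the reinforcement learning algorithms of Chapter 11 perform the same two steps using measured data in place of A and B.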

The first author wants to thank his teachers: J. B. Pearson, who gave him the initial excitement and passion for the field; E. W. Kamen, who tried to teach him persistence and attention to detail; B. L. Stevens, who forced him to consider applications to real situations; R. W. Newcomb, who gave him self-confidence; and A. H. Haddad, who showed him the big picture and the humor behind it all. We owe our main thanks to our students, who force us daily to take the work seriously and become a part of it.

Acknowledgments

This work was supported by NSF grant ECCS-0801330, ARO grant W91NF-05-1-0314, and AFOSR grant FA9550-09-1-0278.