Oracle Database 12c Backup and Recovery Survival Guide

Francisco Munoz Alvarez
Description

The three main responsibilities of a successful DBA are to ensure the availability, recoverability, and performance of any database. To ensure recoverability, a DBA needs a strong backup and recovery skill set. Every DBA is always looking for a reference book that will help them solve any backup and recovery situation they may come across in their professional life.
Oracle Database 12c Backup and Recovery Survival Guide has the unique advantage of being a reference to all the Oracle backup and recovery options available, making it essential for any DBA in the world. If you are new to Oracle Database, this book will introduce you to the fantastic world of backup and recovery that is vital to your success. If you are an experienced DBA, this book will become a reference guide and will also help you learn some new skills, or give you some new ideas you were never aware of. It will also help you easily find the solution to some of the most well-known problems you could encounter during your career as a DBA. This book contains useful screenshots, scripts, and examples that you will find more than useful.
Most of the books currently available in the market concentrate only on the RMAN utility for backup and recovery. This book will be an exception to the rule and will become a must-have reference, allowing you to design a real and complete backup and recovery strategy. It covers the most important topics on Oracle Database such as backup strategies, NOLOGGING operations, new features in 12c, user-managed backups and recoveries, RMAN (including reporting, catalog management, troubleshooting, and performance tuning), advanced Data Pump, Oracle Enterprise Manager 12c, and SQL Developer.
"Oracle Database 12c Backup and Recovery Survival Guide" contains everything a DBA needs to know to keep data safe and recoverable, using real-life scenarios.

The e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 600

Publication year: 2013




Table of Contents

Oracle Database 12c Backup and Recovery Survival Guide
Credits
About the Author
Acknowledgement
About the Author
Acknowledgement
About the Reviewers
www.PacktPub.com
Support files, eBooks, discount offers and more
Why Subscribe?
Free Access for Packt account holders
Instant Updates on New Packt Books
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Understanding the Basics of Backup and Recovery
Purpose of backup and recovery
Testing backups
Protecting data
Media failure
Hardware failure
Human error
Application error
Types of backup
A physical backup
A logical backup
Backup strategies
Restore versus recovery
What is redo?
Redo generation and recoverability
The NOARCHIVELOG mode
The ARCHIVELOG mode
Understanding the ARCHIVELOG mode
Preparing for the ARCHIVELOG mode
Checking the status of the ARCHIVELOG mode
Specifying parameters
Viewing the status of archival destinations
Placing a database into the ARCHIVELOG mode
Differences between redo and undo
Facing excessive redo generation during an online backup?
Summary
2. NOLOGGING Operations
LOGGING versus NOLOGGING
Disabling redo generation (NOLOGGING)
NOLOGGING operations
Indexed organized tables – an exception
Reducing redo generation
Tips when LOGGING is in effect (not using NOLOGGING)
Backups
Bulk inserts
Bulk deletes
Bulk updates
Partitioning
Tips for developers
Tips when NOLOGGING is in effect
Partitioning
Direct path inserts
Bulk inserts
Bulk deletes
Bulk updates
Backups and NOLOGGING
Redo-related wait events
The 'log file parallel write' event
The 'log file sync' event
The 'redo log space request' event
The 'log buffer space' event
Block corruption due to NOLOGGING
Repairing NOLOGGING changes on physical and logical standby databases
Finding sessions that generate lots of redo
Some other important facts
Redo and undo for DML
Redo and temporary tables
Redo generation and materialized views
Flashback and NOLOGGING
Performance and recovery considerations
Direct path load using SQL*Loader
Some useful scripts
Redo generated since instance startup
Redo generated since session startup
Redo generated by current user sessions
Current status for redo logs
Redo log group and log switch information
NOLOGGING objects in the database
Summary
3. What is New in 12c
Pluggable database
RMAN new features and enhancements
Container and pluggable database backup and restore
Enterprise Manager Database Express
Backup privileges
SQL and DESCRIBE
Multi-section backups for incremental backups
Network-based recovery
Active Duplicate
Support for the third-party snapshot
Cross-platform data transport
Table recovery
Data Pump's new features and enhancements
Disabling LOGGING on Data Pump Import
Full transportable Export/Import
Exporting views as tables
Extended character data types
Encryption password
Compressing tables on Import
Exporting data from the data vault
Creating SecureFile LOBs on Import
Auditing Data Pump commands
Summary
4. User-managed Backup and Recovery
Cold backup
Offline backup
Hot backups
Hot backup of a whole database
Hot backup of tablespaces
Hot backup of a container database
Whole container database
Root only or individual pluggable database
Check datafile status
Control file backup
Binary backup
Text file backup
Flashback database
Recovering from a user-managed backup
Other recovery scenarios
Losing all copies of the current control file
Losing one copy of a multiplexed control file
Loss of archived redo logs or online redo logs
Loss of SPFILE
Summary
5. Understanding RMAN and Simple Backups
Why RMAN?
Getting started with RMAN
RMAN architecture
Target database
Target Control File
RMAN channels
Auxiliary database
Recovery Catalog
RMAN client
Oracle Enterprise Manager (EM) Cloud Control 12c
Media Management Library (MML)
Oracle Secure Backup
Memory requirements for RMAN
Configuring ARCHIVELOG mode and FRA
Introducing RMAN backup formats
Backup sets
Image copy
Full backup
Incremental backups
Differential incremental backup
Cumulative incremental backup
Power of one
Getting introduced to RMAN commands
Taking your first RMAN backup
File sections for backups of large data files
Using RMAN for performing incremental backups
Fast incremental backups using the block change tracking (BCT) file
Multi-section incremental backups
Incrementally updated backups
Performing backups of the control file, the SPFILE, and archived redo logs
Using RMAN compression for backups
RMAN for multitenant container databases
Performing a full backup of a multitenant CDB
Partial backup of a multitenant CDB
Performing backup of a pluggable database and root
Performing backups of backups
Restarting RMAN backups
Useful RMAN views related to backups
Summary
6. Configuring and Recovering with RMAN
RMAN configuration – an introduction
Using the V$RMAN_CONFIGURATION view
Using the SHOW ALL command
Configuring the backup retention policy
Redundancy retention policy
Recovery window retention policy
Using backup optimization
Configuring the device types for the backup
Configuring auto backup for the control file and SPFILE
Configuring RMAN channels
Creating duplexed backups
Configuring encrypted backups
Transparent encryption of backups
Creating and using Oracle Software Keystore
Password encryption
Dual-mode encryption
Configuring compression for backups
Configuring the snapshot control file
Configuring the archived log deletion policy
Configuring the FRA
Configuring authentication for RMAN
Operating system authentication
Authenticating the password file
Crash and media recovery
Key terms related to recovery
Overview of stages in the database startup
Steps involved in the crash/instance recovery
Instance recovery in container and pluggable databases
Performing media recovery
Recovery in the NOARCHIVELOG mode
Loss of a temporary file
Loss of non-system data files
Loss of system data files
Recovering whole CDBs, PDBs, and root container databases
Complete recovery of a container database
Complete recovery of a pluggable database
Complete recovery of the root container
Performing control file recovery
Performing Block Media Recovery (BMR)
Performing point-in-time recovery
Database Point-in-time Recovery (DBPITR)
Tablespace Point-in-time Recovery (TSPITR)
Table and table partition-level recovery from RMAN backups
Data recovery advisor
Summary
7. RMAN Reporting and Catalog Management
Using the control file for RMAN metadata
Using the CONTROLFILE_RECORD_KEEP_TIME parameter
What is a recovery catalog
Creating the recovery catalog
Sizing and creating a database for the recovery catalog
Creating the default tablespace for the catalog owner
Creating the catalog owner schema
Granting privileges to the catalog owner
Creating the recovery catalog
Using the recovery catalog
Resynchronizing the recovery catalog with the control file
Merging multiple recovery catalogs into one
Using virtual private catalogs
Creating and managing stored scripts
Making a recovery catalog highly available
Upgrading the recovery catalog
Unregistering databases from the recovery catalog
Dropping a recovery catalog
Views related to the recovery catalog
Reporting in RMAN
Using the LIST command
Using the REPORT command
Summary
8. RMAN Troubleshooting and Tuning
Getting started with RMAN troubleshooting
Using CHECKSYNTAX
Reading the RMAN error stack
Debugging RMAN using the DEBUG clause
Using the alert log and operating system trace files
RMAN tuning – an introduction
I/O and RMAN – two sides of one coin
Number and size of the input and output buffers
Synchronous and asynchronous I/O modes
Setting the Large Pool memory
Monitoring RMAN I/O performance using dictionary views
V$BACKUP_ASYNC_IO (for asynchronous I/O)
V$BACKUP_SYNC_IO (for synchronous I/O)
Tuning SBT (tape) performance
Monitoring RMAN sessions and operations
Stopping RMAN from being uncontrollable
Using incremental, multi-section, multiplexing, and parallelism
Troubleshooting RMAN performance using tracing
Summary
9. Understanding Data Pump
What is Data Pump?
The Data Pump architecture
New concepts with Data Pump
Methods to move the data
Datafile copy
Direct path
External tables
Conventional path
Network link
Data Pump files
Roles for Data Pump export and import
Directory objects
Creating directory objects
Data Pump scenarios
Schema export and import
Exporting and importing tables
Exporting and importing a whole database/pluggable database
Using Export to estimate space
Parallel full database export and interactive-command mode
Importing tables with only metadata
Exporting views as tables
Importing data via a network link
Summary
10. Advanced Data Pump
Data masking
Metadata repository and version control
Using SOURCE_EDITION and TARGET_EDITIONS
Cloning a user
Creating smaller copies of production
Creating your database in a different file structure
Moving all objects from one tablespace to another
Moving an object to a different schema
Migrating data for upgrade
Downgrading an Oracle Database
Transporting a tablespace
Data Pump flashback
Monitoring Data Pump job status
Some performance tuning tips
Summary
11. OEM12c and SQL Developer
Configuring backup, recovery, and catalog settings
Backup settings
Recovery settings
Catalog settings
Scheduling an RMAN backup
Using the Oracle-Suggested Backup strategy option
Using the Customized Backup option
Restore points
Export/Import with OEM12c
Executing an export operation
Monitoring the job
SQL Developer 3.2
RMAN operations
Data Pump operations
Summary
A. Scenarios and Examples – A Hands-on Lab
Configuring the database
Making sure an spfile is used to start the database
Placing the database in the archivelog mode and activating Flashback
Creating a new redo log group and associated files
Configuring RMAN
Creating the target DB RMAN backup account
Configure RMAN using the configure command
Backup database
Checking and deleting obsolete backups
Creating RMAN catalog user
Creating recovery catalog
Register your DB in the recovery catalog
Creating a virtual private catalog
Enabling Block Change tracking
Playing with RMAN, FRA, and catalog views
Monitoring a backup
Incremental backups
Multisection backups
FRA – checking number of redo switches
Check for alerts
Check FRA usage
See the archived log generated by the DB target
See the control file backups
See the corrupted list that exists in datafile backups
See block corruption in the DB, populated when backup or backup validate
See all RMAN configurations (equivalent to show all)
Monitor backup outputs (RMAN)
Offline backups with RMAN
Offline backup without using configured defaults
Using backup limits (duration)
Modifying the retention policy for a backup set (archival backups)
Archive deletion policy
Using RMAN to scan DB for physical and logical errors
Configuring tablespaces for exclusion from whole database backups
Skipping offline, inaccessible, or read-only datafiles
Forcing backups of read-only datafiles
Backup of newly added datafiles
Backup files not backed up during a specific period
General backup examples
Backup copies
Advanced RMAN
Information about fully-completed backups
Summary of the active session history
How long does it take?
V$BACKUP_ASYNC_IO
Tablespace Point-in-time Recovery (TSPITR)
Reporting from a catalog
Duplex backup
Check if the database is recoverable
Recover advisor
Magic with Data Pump
Preparing Data Pump
Data masking
Metadata repository
Cloning a user
Creating smaller copies of production
Creating your database in a different structure
Time-based flashback
Backup and recovery scenarios
Active duplication of a database to a different server with the same structure (non-OMF and non-ASM)
Duplicating a PDB
ASM backup and restore
Recovering from the loss of the SYSTEM tablespace
Recovering a lost datafile using an image from an FRA
Index

Oracle Database 12c Backup and Recovery Survival Guide

Copyright © 2013 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: September 2013

Production Reference: 1170913

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78217-120-1

www.packtpub.com

Cover Image by Dilleshwar (<[email protected]>)

Credits

Authors

Francisco Munoz Alvarez

Aman Sharma

Reviewers

Gokhan Atil

Wissem EL Khlifi

Bjorn Naessens

Arup Nanda

Alessandro Parisi

Laurent Schneider

Acquisition Editor

Vinay Argekar

Lead Technical Editor

Azharuddin Sheikh

Technical Editors

Manan Badani

Aparna Chand

Vivek Pillai

Project Coordinator

Wendell Palmer

Proofreader

Ameesha Green

Indexers

Rekha Nair

Tejal Soni

Graphics

Abhinash Sahu

Ronak Dhruv

Production Coordinator

Shantanu Zagade

Cover Work

Shantanu Zagade

About the Author

Francisco Munoz Alvarez has over two decades of experience in consulting, analysis, support, implementation, and migration of Oracle products. He is also an expert in most phases of the database life cycle, including development, stabilization, security, backup and recovery, tuning, installation, and data warehousing (ETL), with excellent implementation and support methodologies. He is a popular speaker at many Oracle conferences around the world.

He is also the President of CLOUG (Chilean Oracle Users Group), LAOUC (Latin American Oracle Users Group Community, which is the umbrella organization for all of Latin America), and NZOUG (New Zealand Oracle Users Group). He worked as an Associate Technologist for Oracle in Brazil and Chile. He was an Oracle instructor for the New Horizons Centre (Chile) and for Oracle Education (Brazil and Chile). He also worked in the first team to introduce Oracle to South America (Oracle 6 and the beta version of Oracle 7). He was the first Master Oracle 7 Database Administrator in South America, as well as the first Latin American Oracle professional to be awarded a double ACE (ACE in 2008 and ACE Director in 2009) by Oracle HQ. In 2010, he had the privilege of receiving the prestigious Oracle Magazine Editor's Choice Award as the Oracle Evangelist of the Year, a huge recognition of his outstanding achievements in the Oracle world, which include the creation and organization of the already famous OTN Tours, the biggest Oracle evangelist events in the world.

Since June 2011, Francisco has worked as the Oracle Professional Services Manager for Revera Limited, a leading provider of utility computing infrastructure and enterprise data management in New Zealand. He also maintains an Oracle blog (http://www.oraclenz.org), and you can always contact him through it or on Twitter (@fcomunoz) with any questions about this book.

Acknowledgement

Writing a book is not the accomplishment of one or two people; it involves a huge group of people behind the scenes to make it possible. For this reason, I would like to take this opportunity to acknowledge some important people in my life, without whose support and help writing this book would have been an impossible journey. I would like to start with my wonderful wife, mother of my four wonderful children: honey, without your enormous patience and support, nothing would be possible. Even though at times you don't understand this fascinating journey and this fascinating world, my love, I dedicate this book to you! Secondly, my mother, my father, and my well-remembered grandmother, for everything you all taught me in my early years and the things I have seen because of you. And finally my good friends, Thomas Kyte, Arup Nanda, Aman Sharma (your support, help, and friendship are invaluable), Gokhan Atil, Wissem EL Khlifi, Bjorn Naessens, Alessandro Parisi, Laurent Schneider, all the editors, and everyone involved in the book: without your help and support while writing this book, and indeed throughout my career, I would never have been able to finish it.

About the Author

Aman Sharma is an Oracle Database consultant and instructor. He holds a Master's degree in Computer Applications and has been working with Oracle Database for over a decade. His main focus is to understand how Oracle Database works internally. Besides the core database, he has a very strong knowledge of Linux, Solaris, Oracle RAC, Data Guard, RMAN, Oracle Exadata and Oracle Enterprise Manager.

He is an Oracle Certified Professional and also an Oracle Certified Expert in Oracle RAC, SQL, and Linux. He is also an Oracle Certified Implementation Specialist and a certified Solaris System Administrator. He loves to share his learning with the Oracle community, so when he is not delivering an Oracle-related session, he spends a lot of his time participating in OTN (Oracle Technology Network) discussion forums. He also maintains an Oracle blog (http://blog.aristadba.com) and is on Twitter (@amansharma81); you can reach him either way. He is a strong supporter of user groups, is a member of the India Oracle Users Group (http://www.aioug.org), and has been a speaker at various Oracle conferences organized by AIOUG. He is also an associate member of the IOUG (http://www.ioug.org). In 2010, he was awarded the prestigious Oracle ACE award by Oracle Corporation.

Acknowledgement

I dedicate this book to my entire family for their love, affection, and constant encouragement.

A child is just like a plain canvas. It's his parents who fill colors into it with their love, blessings, guidance, and teachings. The same is true for me as well. This book is a result of the blessings, love, affection, guidance, and support of my mom and dad. Thank you so very much, mom and dad! Dad, how I wish that you were here today, but I know you are watching over me from the heavens; your blessings are always with me and you must be very happy today! We all miss you so very much, every day, in every moment!

Writing a book is a team effort and requires a lot of hard work, but every journey starts with that very first step. For me, getting aboard to work on this book became possible only because of my dear friend and co-author Francisco Munoz Alvarez, who invited me to join him. It was a really great team effort, and working with him was an absolute delight as well as a very rewarding learning experience. I thank him not only for making me a part of this project but also for his support, help, and mentoring the entire time we were working on it.

Though a book is written by its authors, there are many other people who work behind the scenes and do tons of things that finally let a book see the sunshine. The most important among these are the technical reviewers, who do the painful job of reading the raw writing of an author, line by line, and correcting it. It's their corrections, suggestions, feedback, and questions that convert sentences in a Word document into the chapters of an actual book. I am so very thankful to the most excellent team of technical reviewers we got: Arup Nanda, Wissem EL Khlifi, Gokhan Atil, Bjorn Naessens, Laurent Schneider, and Alessandro Parisi. Without their input, feedback, and corrections, it wouldn't have been possible to find out what was missing, what needed to be removed, which topics required more explanation, and, above all, to spot the technical errors and correct them. So when you read about a topic in any chapter and like it, remember that a major contribution to it has come, in some way or another, from the technical excellence and input of the reviewers.

I also want to say a big thanks to the entire team at Packt, who did a splendid job in every aspect of the book, especially to my editor Vinay Argekar and to Wendell Palmer, the coordinator for the entire project, for being so supportive in this entire journey. Thanks to our Lead Technical Editor Azharuddin Sheikh along with his entire team: Manan Badani, Aparna Chand, and Vivek Pillai. They have all spent a lot of time ironing out the final wrinkles from the chapters, giving them the marvellous polished look you will see throughout. I also want to pass on my gratitude to all those people with whom I might not have interacted directly but who worked very hard behind the scenes to bring this book into your hands.

I would like to take this opportunity to say thanks to all those people in the Oracle community who were not involved in this project but who are always an inspiration and constantly motivate me, not only to correct and improve myself by learning as much as possible, but also to share that learning with others and help them. The list is far too long and I am going to miss more names than I will be able to include here, but still, to name a few (in no particular order): Jonathan Lewis, Hans Forbrich, Dan Morgan, Hemant Chitale, Sam Alapati, Eddie Awad, Uwe Hesse, Syed Jaffar Hussain, Kellyn Pot'Vin, Tim Gorman, Steve Adams, Kyle Hailey, Tim Hall, Mark Bobak, Cary Millsap, Howard Rodgers, Steve Karam, Tanel Poder, Alex Gorbachev, Kai Yu, Riaz Shamsudeen, Murali Vallath, Gaja Krishna Vaidyanatha, Rama Velpuri, and so many more as the list goes on and on. Many thanks to all of you!

Thanks to my friend Kamran Agayev for giving me my very first writing opportunity by inviting me to be his co-author, and to Don Burelson and the entire team at Rampant Press for their support throughout the writing of it. Thanks to Zahida Praveen, who gave me the first opportunity to step in and prove myself in the real world. I also wish to express my gratitude and thanks to all my seniors for their constant support and motivation: Arvind Dikshit, Gautham Malareddy, Usha George, Joann Too, Ajai Sahni, Shantnu Tandan, Latha Gangadharan, Kalyan Chakravarthy, and Ritesh Jain. I would like to offer my gratitude to all the delegates I get to meet in person and to those I interact with virtually over the OTN forums, for asking their questions and doubts. It's your questions that make me learn (and many times, even re-learn) new things every single day.

Finally, thanks a ton to you for making this book a part of your library. I really hope that you will like the book and that it will be useful for you in some way.

About the Reviewers

Gokhan Atil is a Senior Applications DBA working at one of the leading IT consultancy companies in Turkey. He has a strong background in UNIX systems and hands-on experience in Oracle Database versions 11g/10g/9i/8i and Oracle E-Business Suite R12/11i. He is an active member of the Oracle community and a founding member of the Turkish Oracle User Group (TROUG). He has written and presented papers at various conferences. He's one of the co-authors of the book Expert Oracle Enterprise Manager 12c, published in 2013.

Gokhan Atil holds various Oracle certifications such as Oracle Database 10g/11g Administrator Certified Professional, Oracle E-Business Suite R12 Applications DBA Certified Professional, and Oracle Exadata Database Machine X3 Administrator Certified Expert. He was honored with the Oracle ACE award in 2011. He has a blog where he has been sharing his experiences with Oracle since 2008. You can get in touch with Gokhan at http://www.gokhanatil.com.

Wissem El Khlifi is the first Oracle ACE in Spain and an Oracle Certified Professional DBA with over 12 years of IT experience.

He earned a Computer Science Engineer's degree from FST, Tunisia, and a Master's in Computer Science from the UPC, Barcelona.

His areas of interest are Linux System Administration, Oracle Database including high availability (RAC, Dataguard), and Oracle applications.

He has worked with Oracle Database since version 7.

His career has included the roles of Oracle programmer, Java analyst/programmer, Oracle DBA, architect, project manager, and team leader. He currently works as the Senior Oracle DBA for Schneider Electric/APC.

He writes numerous articles on his Oracle website: http://www.oracle-class.com.

Bjorn Naessens is a Senior Oracle DBA at Uptime Technologies N.V. based in Belgium. Prior to his work with Uptime, he was a system engineer/DBA at a leading Belgian media company. Most of Bjorn's projects are for new infrastructure setups, so using RMAN for backup and recovery has been a constant in these projects.

Besides Oracle databases, Bjorn specializes in Oracle VM and Oracle Linux and has a passion for the Oracle Database Appliance.

Arup Nanda has been an Oracle DBA for the last 19 years and counting. He works as the Principal Database Architect at a New York multinational company, has co-authored four books, has written around 300 articles, and has presented at 150 different conferences around the world. Honoring his professional accomplishments and contributions to the user community, Oracle chose him as DBA of the Year in 2003 and Enterprise Architect of the Year in 2012. He blogs frequently at http://arup.blogspot.com, is a member of the Oak Table Network, is an Oracle ACE Director, and tweets with the handle @arupnanda.

Alessandro Parisi is an Enterprise Software Architect, Data Scientist and Ethical Hacker, working as an IT Consultant for nearly 20 years now. He is keen on experimenting with non-conventional solutions to problem solving in complex and dynamic contexts, mixing new technologies with lateral thinking and a holistic approach.

Founder of InformaticaSicura.com, specializing in IT Security Consultancy, he is the curator of the Hacking Wisdom column on the blog http://www.informaticasicura.altervista.org.

He is also the author of Sicurezza Informatica e Tutela della Privacy, published by Istituto Poligrafico e Zecca dello Stato, Italy, 2006.

I would like to acknowledge Ilaria Sinisi for her support and patience. Thank you very much, Ilaria.

Laurent Schneider works as a database administrator for a leading bank in Switzerland. He is the author of the blog http://laurentschneider.com. He has written and reviewed multiple books and articles on Oracle technologies.

Laurent loves to play chess with his friends and to hike and bike with his kids Dora and Loïc in the beautiful Swiss mountains.

www.PacktPub.com

Support files, eBooks, discount offers and more

You might want to visit www.PacktPub.com for support files and downloads related to your book.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

http://PacktLib.PacktPub.com

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read and search across Packt's entire library of books.

Why Subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via web browser

Free Access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.

Instant Updates on New Packt Books

Get notified! Find out when new books are published by following @PacktEnterprise on Twitter, or the Packt Enterprise Facebook page.

Preface

Knowledge is only valuable when shared.

The three main responsibilities of a successful DBA are to ensure the availability, recoverability, and performance of any database that the DBA is accountable for. This book will focus on the recoverability set of skills, and will also include some tips and ideas regarding availability. All examples shown in this book were executed on Oracle Enterprise Linux 6.4 and Oracle Database 12.1.0.1 (also known as Oracle 12c); please be aware of these specific versions to ensure that you will be able to reproduce the results you see reflected in this book.
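If you want to confirm that your environment matches the versions used throughout the book before following along, a minimal check from SQL*Plus (assuming a running instance and an account with access to the dynamic performance views) could look like this:

```sql
-- Report the database version (run from SQL*Plus)
SELECT banner FROM v$version;

-- On the version used in this book, the first row should read something like:
-- Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
```

Earlier releases will work for many examples, but 12c-specific features (pluggable databases, table recovery, and so on) require 12.1 or later.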

To ensure the recoverability of any database, a DBA needs to have a strong backup and recovery skill set; this knowledge is essential for any good DBA. Without it, you will be in violation of the most important rule I have used throughout my entire career: "The most important rule with respect to data is to never put yourself into an unrecoverable situation". If you follow this simple tip every time you work with data, I can guarantee that you will always be protected against any possible situation that could surprise you in your daily journey, including, of course, the unexpected ones.

My key intention in writing this book is, if you are a new DBA, to introduce you to this fantastic world that is vital to your success. If you are an experienced DBA, this book will become a reference guide and will also help you to learn some possible new skills, or give you some new ideas you were never aware of. It will also help you to easily find the solution to some of the most well-known problems you could find during your career, and it is rich with screenshots, scripts, examples, and tutorials that you will find more than useful and handy.

Most of the books currently available in the market concentrate only on the RMAN utility for backup and recovery; this book will be an exception to the rule and will become a must-have reference, allowing you to design a real and complete backup and recovery strategy. It is in no way a replacement for the official Oracle documentation available at http://www.oracle.com/pls/db121/homepage; I will always recommend that any serious DBA read the complete documentation set as a complement to this book.

This book contains my knowledge of more than two decades working with Oracle technologies and also shows several topics and situations that came to my attention when speaking at several conferences around the world or helping others on Oracle forums or virtual communities.

I hope you will enjoy reading this book as much as I enjoyed writing it.

What this book covers

Chapter 1, Understanding the Basics of Backup and Recovery, covers topics such as understanding the need for creating backups, getting familiar with the different backup types, an overview of backup strategy, understanding what redo is and how it affects your database's recoverability, and understanding database operational modes and redo generation.

Chapter 2, NOLOGGING Operations, covers topics such as LOGGING versus NOLOGGING, disabling redo generation, NOLOGGING operations, how to reduce redo generation, redo log wait events, practice with some interesting scripts, and much more.

Chapter 3, What is New in 12c, covers topics such as pluggable databases, RMAN's new features and enhancements, and Data Pump's new features and enhancements.

Chapter 4, User-managed Backup and Recovery, covers backup and recovery using user-managed methods. Understanding the basics of a manual backup and recovery will help you to easily understand what is going on in the background of your database when using RMAN, and will also help you to compare and easily appreciate all the benefits of using RMAN over any other backup method when working with Oracle.

Chapter 5, Understanding RMAN and Simple Backups, describes how, as the custodians of databases, DBAs should always try to minimize the loss of data. This can be accomplished through an effective strategy that enables us to secure backups of these databases that can be accessed in case of systemic failures. However, it is rightly said that any strategy is only as good as the tool that implements it. In this chapter, we shall introduce you to a tool that's just like one of those superheroes that can fix almost any issue. So in the list of such amazing heroes as Superman and Batman, please welcome RMAN, the Recovery Manager—Oracle's one-stop solution for both backups and recoveries.

Chapter 6, Configuring and Recovering with RMAN, looks into the two abilities of RMAN, that is, how to configure it and how to use it for doing database recoveries.

Chapter 7, RMAN Reporting and Catalog Management, discusses topics such as the benefits of the recovery catalog, creation and use of the recovery catalog, using the CATALOG command, and RMAN reporting using the LIST and REPORT commands.

Chapter 8, RMAN Troubleshooting and Tuning, looks at various ways to get the best performance from RMAN and also techniques to troubleshoot it when it won't behave itself.

Chapter 9, Understanding Data Pump, describes Data Pump and its architecture, introduces new Data Pump concepts and methods to move data, and works through many Data Pump scenarios.

Chapter 10, Advanced Data Pump, covers topics such as data masking, building a metadata repository, creating a version control, cloning users (creating a new user using an existing user as a template), creating smaller copies of production, creating your database in a different file structure, moving all objects from one tablespace to another, moving an object to a different schema (a simple example: changing a table's owner), migrating data for a database upgrade, downgrading an Oracle database, transporting a tablespace, and using Data Pump with flashback.

Chapter 11, OEM12c and SQL Developer, discusses topics such as configuring our backup and recovery settings (including catalog settings) in OEM12c, scheduling backups in OEM12c, creating restore points in OEM12c, understanding database export/import operations in OEM12c, and getting familiar with SQL Developer.

Appendix: Scenarios and Examples – A Hands-on Lab, allows you to practice some of the scenarios you saw in this book (step-by-step) and learn by doing. If you have any doubt about what a command will do, please refer to the corresponding chapter in this book for more information.

What you need for this book

To be able to reproduce all scenarios in this book you will need the following software:

Oracle VirtualBox, a free virtualization tool that you can download at https://www.virtualbox.org/wiki/Downloads
Oracle Enterprise Linux 6.4, which you can download (free) from https://edelivery.oracle.com/linux
Oracle 12.1.0.1 database for Linux, which you can download (free) from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/database12c-linux-download-1959253.html

Who this book is for

This book is designed for Oracle DBAs and system administrators with basic working experience of administering Oracle databases. It will become a reference guide, help you to learn some new skills, and give you some new ideas you were never aware of, helping you to easily find the solution to some of the most well-known problems you could encounter as a DBA. The book is written to be understood even by beginners who have just started with Oracle Database, so any person with basic working experience of administering an Oracle database will be able to understand it completely.

Conventions

In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text are shown as follows: "If a directory object is not specified, a default directory object called DATA_PUMP_DIR is provided".

Any command-line input or output is written as follows:

SQL> GRANT DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE TO fcomunoz;

New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "clicking on the Next button moves you to the next screen".

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.

To send us general feedback, simply send an e-mail to <[email protected]>, and mention the book title via the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us at <[email protected]> if you are having a problem with any aspect of the book, and we will do our best to address it.

Chapter 1. Understanding the Basics of Backup and Recovery

The constant evolution of IT has, among other things, affected the role of a database administrator (DBA). Today the DBA is not merely a Database Administrator anymore, but is morphing more into the Database Architect role. If you want to become a successful DBA and be more competitive in the market, you should have a different skill set than what was normally required in the past. You need to have a wide range of understanding in architectural design, network, storage, licensing, and much more. The more knowledge you have, the better opportunities you will find.

The main idea of this chapter is to introduce you to some basic concepts regarding backup and recovery, giving you a general overview of the most important methods and tools available for you to achieve your backup goals. Therefore, in this chapter, we will cover the following topics:

Understanding the need for creating backups
Getting familiar with the different backup types
An overview of backup strategy
Understanding what redo is and how it affects your database's recoverability
Understanding database operational modes and redo generation

Purpose of backup and recovery

As a DBA, you are the person responsible for recovering the data and guarding the business continuity of your organization. Consequently, you have the key responsibility for developing, deploying, and managing an efficient backup and recovery strategy for your institution or clients that will allow them to easily recover from any possible disastrous situation. Remember, data is one of the most important assets a company can have. Most organizations would not survive after the loss of this important asset.

Testing backups

It's incredible how many corporations around the world do not have a proper disaster recovery plan (DRP) in place, and what is worse, many DBAs never even test their backups. Most of the time when auditing Oracle environments for clients, I ask the following question to the DBA team:

Are you 100 percent sure that you can trust your backups? For this question, I generally receive answers like:
I'm not 100 percent sure, since we do not recover from backups too often
We do not test our backups, so I cannot guarantee their recoverability

Another good question is the following:

Do you know how long a full recovery of your database will take? Common responses to this question are:
Probably anything between 6 and 12 hours
I don't know, because I've never done a full recovery of my database

As you can see, implementing a simple procedure to proactively test your backups at random will allow you to:

Test your backups and ensure that they are valid and recoverable: I have been called several times to help clients because their current backups were not viable. Once I was called to help a client and discovered that their backup-to-disk started every night at 10 P.M. and ended at 2 A.M. Afterwards, the backup files were copied to tape by a system administrator every morning at 4 A.M. The problem was that when this process was implemented, the database size was only 500 GB, but after a few months, the database had grown to over 1 TB. Consequently, the backup that initially finished before 2 A.M. was now finishing at 5 A.M., but the copy to tape was still being triggered at 4 A.M. by the system administrator. As a result, all backups to tape were unusable.
Know your recovery process in detail: If you test your backups, you will have the knowledge to answer questions regarding how long a full recovery will take. If you can answer that your full recovery will take around three and a half hours, but that you prefer to quote five hours to allow for any unexpected problems, you will look far more professional, and it will show that you really know what you are talking about.
Document and improve your recovery process: The complete process needs to be documented. If the process is documented and you also allow your team to practice it on a rotation basis, this will ensure that they are familiar with the course of action and have all the knowledge necessary to know what to do in case of a disaster. You will now be able to rest at home at night without being disturbed, because you are no longer the only person in the team with the experience required to perform this important task.

Good for you if you have a solid backup and recovery plan in place. But have you tested that plan? Have you verified your ability to recover?
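One simple way to start testing proactively is to ask RMAN to validate your backups without actually restoring anything. The following commands are only a minimal sketch (RMAN and its VALIDATE options are covered in detail in Chapter 5 to Chapter 8), and they assume a database that has already been backed up with RMAN:

RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;
RMAN> VALIDATE DATABASE;

The first two commands read the backup sets and confirm that the datafiles and archived redo logs could actually be restored from them, without writing any files; the last checks the live datafiles for corrupt blocks. Because nothing is overwritten, commands like these are safe to schedule regularly against production.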

Protecting data

As the main person responsible for the recovery and availability of the data, you need to have a full understanding of how to protect it against all possible situations you could come across in your daily job. The most common situations you could see are:

Media failure
Hardware failure
Human error
Application error

Let's take a closer look at each of these situations.

Media failure

Media failure occurs when a system is unable to write to or read from a physical storage device, such as a disk or a tape, due to a defect on the recording surface. This kind of failure can easily be overcome by ensuring that your data is saved on more than one disk (mirrored) using a solution such as RAID (Redundant Array of Independent Disks) or ASM (Automatic Storage Management). In the case of tapes, ensure that your backups are saved on more than one tape and, as mentioned earlier, test their recoverability.

Hardware failure

Hardware failure occurs when a physical component of your hardware, such as the server motherboard, CPU, or any other component, stops working. To overcome this kind of situation, you will need to have a high availability solution in place as part of your disaster and recovery strategy. This could include solutions such as Oracle RAC, a standby database, or even replacement hardware on the premises. If you are using Oracle Standard Edition or Standard Edition One and need to implement a proper standby database solution, I recommend that you take a closer look at the Dbvisit Standby solution for Oracle databases, currently available in the market, to fulfill this need (http://dbvisit.com).

Human error

Human error, also known as user error, is when a user interacting directly or through an application causes damage to the data stored in the database or to the database itself. The most frequent examples of human error involve changing or deleting data and even files by mistake. It is likely that this kind of error is the greatest single cause of database downtime in a company.

No one is immune to user error. Even an experienced DBA or system administrator can delete a redo log file with the .log extension by mistake, taking it for a simple log file that can be deleted to release space. Fortunately, user error can most of the time be solved by using physical backups, logical backups, and even Oracle Flashback technology.

Application error

An application error happens when a software malfunction causes data corruption at the logical or physical level. A bug in the code can easily damage data or even corrupt a data block. This kind of problem can be solved using Oracle block media recovery, which is why it is so important to properly test an application change before promoting it to any production environment.

Tip

Always take a backup before and after a change is implemented in a production environment. The before backup will allow you to roll back to the previous state in case something goes wrong. The after backup will save you from having to redo the change in case of a failure, since the change was not included in the previously available backup.

Types of backup

Now that you understand all types of possible failures that could affect your database, let's take a closer look at the definition of backup and the types of backups that are available to ensure the recoverability of our data.

A backup is a real and consistent copy of data from a database that could be used to reconstruct the data after an incident. There are two different types of backups available:

Physical backups
Logical backups

A physical backup

A physical backup is a copy of all the physical database files that are required to perform the recovery of a database. These include datafiles, control files, parameter files, and archived redo log files. As Oracle DBAs, we have different options for making a physical backup of our database. Backups can be taken using user-managed backup techniques or using Recovery Manager (RMAN). Both techniques will be discussed in more detail later in this book. Physical backups are the foundation of any serious backup and recovery strategy.

A logical backup

Oracle uses Oracle Data Pump to allow us to generate a logical backup that can be used to migrate data or even do a partial or full recovery of our database. The utilities available are the Data Pump Export program (expdp) and the Data Pump Import program (impdp).

Many people have a misconception of these tools in thinking that they can only be used to move data. Data Pump is a very flexible and powerful tool that if well utilized can easily become a DBA's best friend. It is not just for moving data. It can also play a crucial role in your backup and recovery strategy.
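As a quick taste of the command line, a schema-level export and import could look like the following sketch. The schema name hr, the fcomunoz credentials, and the dump file names are illustrative assumptions, not fixed values, and the commands require a running Oracle instance:

$ expdp fcomunoz/password schemas=hr directory=DATA_PUMP_DIR dumpfile=hr_backup.dmp logfile=hr_backup.log

$ impdp fcomunoz/password schemas=hr directory=DATA_PUMP_DIR dumpfile=hr_backup.dmp logfile=hr_restore.log

The export writes a logical backup of the hr schema to the directory object DATA_PUMP_DIR; the import reads the same dump file back, which is exactly the mechanism a logical recovery relies on.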

Chapter 9, Understanding Data Pump and Chapter 10, Advanced Data Pump will go into more detail about the use of Data Pump for logical backup and recovery.

Note

The old Import and Export utilities

In previous versions of Oracle, we used to work with similar utilities called exp and imp. The exp utility has been deprecated since Oracle 11g, but the imp utility is still supported by Oracle. The imp utility allows us to recover any backup generated by the old exp program. Just keep in mind that the use of exp is no longer supported by Oracle and using it can bring future trouble to your environment.

Backup strategies

The main purpose of a backup and recovery strategy is to protect a database against data loss; the strategy document will contain all the steps required to reconstruct the database after a disaster strikes. As the person responsible for the data of your company, it is very important to have a correct backup strategy in place to allow you to recover from any possible disaster.

Before you create a strategy, you will need to clearly understand all the Service Level Agreements (SLAs) in place within your organization regarding this topic. To that end, you will need to ask the owners of the data some simple questions:

How much data can the company lose in case of a disaster? (RPO)
How much time could the business wait to have the data restored and available again? (RTO)
How much will the loss of one hour of data cost the company?
What retention periods are required by law for the company data?

After receiving the answers to all these questions, you will be able to implement a proper backup and recovery strategy according to your real company needs and SLAs in place.

For example, if your company can only afford to lose three hours of data (RPO) but it can have the database down for up to 24 hours for a recovery process (RTO), all you will need to do to fulfill your SLA is to have a full backup of your database made daily. You will also need to make backups of all your archive logs every three hours to a tape or another network location to allow you to have all your data protected.
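For this example SLA, the schedule could be sketched in cron as follows. The script names and paths are hypothetical; each script would wrap an RMAN call such as BACKUP DATABASE or BACKUP ARCHIVELOG ALL, which are covered later in this book:

# Hypothetical crontab for a 3-hour RPO / 24-hour RTO SLA
# 01:00 daily: full database backup (for example, RMAN "BACKUP DATABASE PLUS ARCHIVELOG;")
0 1 * * * /home/oracle/scripts/full_backup.sh

# Every 3 hours: back up archived redo logs to tape or a network location
0 */3 * * * /home/oracle/scripts/archivelog_backup.sh

With this schedule, the worst case after a disaster is the loss of the redo generated since the last three-hourly archive log backup, which is exactly the 3-hour RPO agreed with the business.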

As part of creating a strategy, it is important to properly understand the concepts known as Recovery Point Objective (RPO) and Recovery Time Objective (RTO). As you can see in the following figure, the RPO reflects how much data might be lost without incurring significant risk or loss to the business, and the RTO is basically the maximum amount of time allowed to reestablish the service after an incident without seriously affecting the company.

Restore versus recovery

On several occasions, people have asked me about the difference between restore and recovery, so I will take this opportunity to explain it in simple words:

Restore: The act of restoring all the files that will be required to recover your database to a consistent state, for example, copying all backup files from a secondary location such as tape or storage to your stage area
Recovery: The process of applying all the transactions recorded in your archive logs, rolling your database forward to a point in time or until the last recorded transaction is applied, thus recovering your database to the point in time you need
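In RMAN terms (covered from Chapter 5 onwards), the two steps map directly onto two commands. This is only a sketch of the simplest complete case, assuming a mounted database and a valid backup:

RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

RESTORE copies the datafiles back from the backup location; RECOVER applies the redo from the archive logs to roll those files forward; only then can the database be opened for normal use.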

You will see some examples of how restore and recovery work later in this book. Now let's take a closer look at what the redo log is and the two possible modes your database could be operating in. This will help you understand in a bit more depth what types of backup and recovery you could use in your environment.

What is redo?

Let's look briefly at the redo process. When Oracle blocks (the smallest unit of storage in a database) are changed, including UNDO blocks, Oracle records the changes in the form of change vectors, which are grouped into redo entries, or redo records. The changes are written by the server process to the redo log buffer in the System Global Area (SGA). The redo log buffer is then flushed into the online redo logs in near real-time fashion by the log writer (LGWR) process (if the redo log buffer is too small, you will start seeing log buffer space waits during bursts of redo generation).

The redo entries are written by the LGWR to disk when:

A user issues a commit
The log buffer is one-third full
The amount of unwritten redo entries reaches 1 MB
A database checkpoint takes place
Otherwise, every three seconds

Redo entries are written to disk as soon as the first of these situations takes place. In the event of a checkpoint, the redo entries are written before the checkpoint to ensure recoverability.
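You can watch redo generation and log buffer pressure through the dynamic performance views. The following query is a small sketch; it requires a session with access to V$SYSSTAT, and the values it returns will of course depend on your workload:

SQL> SELECT name, value
  2  FROM   v$sysstat
  3  WHERE  name IN ('redo size', 'redo writes', 'redo log space requests');

A steadily growing value for redo log space requests is a classic sign that sessions are having to wait for space in the redo log, typically during log switches.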

Redo log files record changes to the database as a result of transactions and internal Oracle server actions. Redo log files protect the database from loss of integrity due to system failures caused by power outages, disk failures, and so on. Redo log files must be multiplexed using different disks (use of fast disk is preferred) to ensure that the information stored in them is not lost in the event of a disk failure.

The redo log consists of groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of that group, and each group is identified by a number. The LGWR process writes redo records from the redo log buffer to all members of a redo log group until the file is filled or a log switch operation is requested. Then, it switches and writes to the files in the next group. Redo log groups are used in a circular fashion as shown in the following figure:
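You can inspect your current group and member layout with the following queries (a sketch runnable by any user with access to the V$ views on a live instance):

SQL> SELECT group#, members, bytes/1024/1024 AS size_mb, status
  2  FROM   v$log;

SQL> SELECT group#, member
  2  FROM   v$logfile
  3  ORDER  BY group#;

The first query shows each group's size, member count, and status (the group marked CURRENT is the one LGWR is writing to); the second lists the file name of every member, which lets you verify that members of the same group really sit on separate disks.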

Tip

Redo log groups need to have at least two files per group, with the files distributed on separate disks or controllers so that no single disk failure destroys an entire log group. Also, never rely exclusively on your ASM disk group or the file system if they have mirrored disks underneath. Remember that mirroring will not protect your database in the event of your online redo log file being deleted or corrupted.

The loss of an entire log group is one of the most serious possible media failures you can come across because it can result in loss of data. The loss of a single member within a multiple-member log group is trivial and does not affect database operation, other than causing an alert to be published in the alert log.

Remember that redo logs heavily influence database performance because a commit cannot be completed until the transaction information has been written to the logs. You must place your redo log files on your fastest disks served by your fastest controllers. If possible, do not place any other database files on the same disks as your redo log files.

Tip

It's not advisable to place members of different groups on the same disk. That's because the archiving process reads the online redo log files and will end up competing with the LGWR process.

To summarize, with redo log files and redo log groups it is always important to:

Have a minimum of three redo log groups. If your database switches too often and you do not have an appropriate number of redo log groups, the LGWR process will need to wait until the next group is available before being able to overwrite it.
Keep all online redo logs and standby redo logs equal in size.
Tune your redo log file size so that redo log switches happen no less than 20 minutes apart at peak times.
Place the redo log files on high-performance disks.
Have a minimum of two redo log members per group to reduce risk, and place them on different disks away from the data.
Do not multiplex standby redo logs, to prevent additional writes in the redo transport.
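To check how often your database is actually switching logs, you can summarize V$LOG_HISTORY by hour. This query is a sketch; the 24-hour window is an arbitrary choice:

SQL> SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
  2         COUNT(*)                               AS switches
  3  FROM   v$log_history
  4  WHERE  first_time > SYSDATE - 1
  5  GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
  6  ORDER  BY 1;

If peak hours show more than about three switches per hour, the online redo logs are smaller than the 20-minute guideline above suggests they should be.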

Remember, as mentioned earlier, that not all Oracle databases will have the archive process enabled.

Redo generation and recoverability

The purpose of redo generation is to ensure recoverability. This is the reason why Oracle does not give the DBA a lot of control over redo generation. If the instance crashes, then all the changes within the SGA will be lost, and Oracle will use the redo entries in the online redo log files to bring the database back to a consistent state. Maintaining the redo log records is an expensive operation, involving latch management operations (CPU) and frequent write access to the redo log files (I/O). You can avoid redo logging for certain operations using the NOLOGGING feature. We will talk more about the NOLOGGING feature in Chapter 2, NOLOGGING Operations.
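As a brief preview of Chapter 2, a direct-path load into a table in NOLOGGING mode is the classic case of skipping most redo generation. This is a sketch: the table names are assumptions, it presumes the database or tablespace is not in FORCE LOGGING mode, and any object loaded this way must be backed up immediately afterwards because its new data cannot be recovered from redo:

SQL> ALTER TABLE sales_history NOLOGGING;

SQL> INSERT /*+ APPEND */ INTO sales_history
  2  SELECT * FROM sales_staging;

SQL> COMMIT;

The APPEND hint requests a direct-path insert; combined with the NOLOGGING attribute, the data blocks are written above the high-water mark with only minimal redo for the data dictionary changes.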

The ARCHIVELOG mode