Develop programmatic functions to create powerful database applications
If you are a PostgreSQL developer with a basic knowledge of PostgreSQL development and you want deeper knowledge to develop applications, then this book is for you. As this book does not cover basic installation and configuration, you should have PostgreSQL installed on your machine as a prerequisite.
PostgreSQL is the most advanced open source database in the world. It is easy to install, configure, and maintain by following the documentation; however, it is harder to develop applications against it using programming languages and to design databases accordingly. This book is what you need to get the most out of PostgreSQL.
You will begin with advanced SQL topics such as views, materialized views, and cursors, and learn about performing data type conversions. You will then perform trigger operations and use trigger functions in PostgreSQL. Next, we walk through data modeling, normalization concepts, and the effect of transactions and locking on the database.
The second half of the book covers the types of indexes and constraints and the concepts of table partitioning, as well as the different mechanisms and approaches available to write efficient queries and code. Later, we explore PostgreSQL extensions and large object support. Finally, you will perform database operations in PostgreSQL using PHP and Java. By the end of this book, you will have mastered all the aspects of PostgreSQL development and will be able to build efficient enterprise-grade applications with PostgreSQL by making use of these concepts.
Every chapter follows a step-by-step approach that first explains the concept and then shows you how to execute it practically, so that you can implement it in your application.
Copyright © 2016 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: September 2016
Production reference: 1200916
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-78398-900-3
www.packtpub.com
Authors
Manpreet Kaur
Baji Shaik
Copy Editor
Zainab Bootwala
Reviewers
Daniel Durante
Danny Sauer
Project Coordinator
Izzat Contractor
Commissioning Editor
Julian Ursell
Proofreader
Safis Editing
Acquisition Editor
Nitin Dasan
Indexer
Rekha Nair
Content Development Editor
Anish Sukumaran
Graphics
Jason Monteiro
Technical Editor
Sunith Shetty
Production Coordinator
Aparna Bhagat
Manpreet Kaur currently works as a business intelligence solution developer at an IT-based MNC in Chandigarh. She has over 7 years of work experience in the field of developing successful analytical solutions in data warehousing, analytics and reporting, and portal and dashboard development in the PostgreSQL and Oracle databases. She has worked on business intelligence tools such as Noetix, SSRS, Tableau, and OBIEE. She has a good understanding of ETL tools such as Informatica and Oracle Data Integrator (ODI). Currently, she works on analytical solutions using Hadoop and OBIEE 12c.
Additionally, she is very creative and enjoys oil painting. She also has a YouTube channel, Oh so homemade, where she posts easy ways to make recycled crafts.
Baji Shaik is a database administrator and developer. He is currently working as a database consultant at OpenSCG. He has an engineering degree in telecommunications, and he started his career as a C# and Java developer. He started working with databases in 2011 and, over the years, he has worked with Oracle, PostgreSQL, and Greenplum. His background spans a wide depth and breadth of expertise and experience in SQL/NoSQL database technologies. He has architected and designed many successful database solutions addressing challenging business requirements. He has provided solutions using PostgreSQL for reporting, business intelligence, data warehousing, applications, and development support. He has a good knowledge of automation, orchestration, and DevOps in a cloud environment.
He comes from a small village named Vutukutu in Andhra Pradesh and currently lives in Hyderabad. He likes to watch movies, read books, and write technical blogs. He loves to spend time with family. He has tech-reviewed Troubleshooting PostgreSQL by Packt Publishing. He is a certified PostgreSQL professional.
Thanks to my loving parents. Thanks to Packt Publishing for giving me this opportunity. Special thanks to Izzat Contractor for choosing me, and Anish Sukumaran, Nitin Dasan, and Sunith Shetty for working with me. Thanks to Dinesh Kumar for helping me write.
Daniel Durante started spending time with computers at the age of 12. He has built applications for various sectors, such as the medical industry, universities, the manufacturing industry, and the open source community. He mainly uses Golang, C, Node, or PHP for developing web applications, frameworks, tools, embedded systems, and so on. Some of his personal work can be found on GitHub and his personal website.
He has also worked on the PostgreSQL Developer's Guide, published by Packt Publishing.
I would like to thank my parents, brother, and friends, who’ve all put up with my insanity, day in and day out. I would not be here today if it weren’t for their patience, guidance, and love.
Danny Sauer has been a Linux sysadmin, software developer, security engineer, open source advocate, and general computer geek at various companies for around 20 years. He has administered, used, and programmed PostgreSQL for over half of that time. When he's not building solutions in the digital world, he and his wife enjoy restoring their antique home and teaching old cars new tricks.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
The purpose of this book is to teach you the fundamental practices and techniques of database developers for programming database applications with PostgreSQL. It is targeted at database developers using PostgreSQL who have basic experience developing database applications with the system but want a deeper understanding of how to implement programmatic functions with PostgreSQL.
Chapter 1, Advanced SQL, aims to help you understand advanced SQL topics such as views, materialized views, and cursors, and gives you a sound understanding of complex topics such as subqueries and joins.
Chapter 2, Data Manipulation, shows you how to perform data type conversions and JSON and XML operations in PostgreSQL.
Chapter 3, Triggers, explains how to perform trigger operations and use trigger functions in PostgreSQL.
Chapter 4, Understanding Database Design Concepts, explains data modeling and normalization concepts. The reader will then be able to efficiently create a robust database design.
Chapter 5, Transactions and Locking, covers the effect of transactions and locking on the database. The reader will also be able to understand isolation levels and multi-version concurrency control behavior.
Chapter 6, Indexes and Constraints, provides knowledge about the different indexes and constraints available in PostgreSQL. This knowledge will help the reader while coding and put them in a better position to choose among the different indexes and constraints, depending on the requirements, during the coding phase.
Chapter 7, Table Partitioning, gives the reader a better understanding of partitioning in PostgreSQL. The reader will be able to use the different partitioning methods available in PostgreSQL and also implement horizontal partitioning using PL/Proxy.
Chapter 8, Query Tuning and Optimization, provides knowledge about the different mechanisms and approaches available to tune a query. The reader will be able to utilize this knowledge in order to write an optimal and efficient query or piece of code.
Chapter 9, PostgreSQL Extensions and Large Object Support, familiarizes the reader with the concept of extensions in PostgreSQL and with the usage of large object data types in PostgreSQL.
Chapter 10, Using PHP in PostgreSQL, covers the basics of performing database operations in PostgreSQL using the PHP language, which helps the reader get started with PHP code.
Chapter 11, Using Java in PostgreSQL, provides knowledge about database connectivity using Java and creating and modifying objects using Java code. It also covers JDBC drivers.
You need PostgreSQL 9.4 or higher installed on your machine to test the code provided in this book. As the book covers Java and PHP, you also need Java and PHP binaries installed on your machine. All other tools covered in this book have installation procedures included, so there's no need to install them before you start reading the book.
This book is mainly for PostgreSQL developers who want to develop applications using programming languages. It is also useful for tuning databases through query optimization, indexing, and partitioning.
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Database views are created using the CREATE VIEW statement. "
A block of code is set as follows:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.sql.ResultSet;
import java.sql.SQLException;
Any command-line input or output is written as follows:
CREATE VIEW view_name AS
SELECT column1, column2
FROM table_name
WHERE [condition];
New terms and important words are shown in bold.
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books, maybe a mistake in the text or the code, we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at [email protected] with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.
A materialized view is a table that actually contains rows but behaves like a view. This feature was added in PostgreSQL 9.3. A materialized view cannot subsequently be directly updated, and the query used to create the materialized view is stored in exactly the same way as a view's query is stored. As it holds the actual data, it occupies disk space according to the filters we applied while creating the materialized view.
Before we get too deep into how to implement materialized views, let's first examine why we may want to use materialized views.
You may notice that certain queries are very slow. You may have exhausted the standard bag of techniques to speed up those queries. In the end, you will realize that getting queries to run as fast as you want simply isn't possible without completely restructuring the data.
Now, if you have an environment where you run the same type of SELECT query multiple times against the same set of tables, then you can create a materialized view for that SELECT so that, on every run, the view does not go to the actual tables to fetch the data. This obviously reduces the load on them, as you might be running Data Manipulation Language (DML) statements against your actual tables at the same time. So, basically, you take a view and turn it into a real table that holds real data rather than a gateway to a SELECT query.
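To make the idea concrete, here is a minimal sketch that assumes a hypothetical orders table with order_date and amount columns; the materialized view stores the aggregated result so that repeated reads do not touch the base table:

-- Hypothetical reporting query that is expensive to run repeatedly:
CREATE MATERIALIZED VIEW daily_order_totals AS
SELECT order_date, SUM(amount) AS total_amount
FROM orders
GROUP BY order_date;

-- Subsequent reads hit the stored result instead of the base table:
SELECT total_amount FROM daily_order_totals WHERE order_date = CURRENT_DATE;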
A materialized view can be read-only, updatable, or writeable. Users cannot perform DML statements on read-only materialized views, but they can perform them on updatable and writeable materialized views.
You can make a materialized view read-only during creation by omitting the FOR UPDATE clause or by disabling the equivalent option in the database management tool. Read-only materialized views use many mechanisms similar to updatable materialized views, except they do not need to belong to a materialized view group.
In a replication environment, a materialized view holds the table data and resides in a different database. A table that has a materialized view on it is called a master table. The master table resides on a master site and the materialized view resides on a materialized view site.
In addition, using read-only materialized views eliminates the possibility of introducing data conflicts on the master site or the master materialized view site, although this convenience means that updates cannot be made on the remote materialized view site.
The syntax to create a materialized view is as follows:
CREATE MATERIALIZED VIEW view_name AS
SELECT columns
FROM table;
The CREATE MATERIALIZED VIEW command helps us create a materialized view. The command acts in a way similar to the CREATE VIEW command, which was explained in the previous section.
Let's make a read-only materialized view for a supplier table:
CREATE MATERIALIZED VIEW suppliers_matview AS
SELECT * FROM suppliers;
This view is a read-only materialized view and will not reflect the changes to the master site.
You can make a materialized view updatable during creation by including the FOR UPDATE clause or enabling the equivalent option in the database management tool. In order for changes that have been made to an updatable materialized view to be reflected in the master site during refresh, the updatable materialized view must belong to a materialized view group.
When we say "refreshing the materialized view," we mean synchronizing the data in the materialized view with data in its master table.
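In PostgreSQL itself, this synchronization is performed explicitly with the REFRESH MATERIALIZED VIEW command. Here is a minimal sketch, reusing the suppliers_matview created earlier; the CONCURRENTLY variant assumes that at least one unique index exists on the materialized view:

-- Re-run the stored query and replace the materialized view's contents:
REFRESH MATERIALIZED VIEW suppliers_matview;

-- Refresh without blocking concurrent reads (PostgreSQL 9.4 and later);
-- this requires a unique index on the materialized view:
REFRESH MATERIALIZED VIEW CONCURRENTLY suppliers_matview;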
An updatable materialized view enables you to decrease the load on master sites because users can make changes to data on the materialized view site.
The syntax to create an updatable materialized view is as follows:
CREATE MATERIALIZED VIEW view_name
FOR UPDATE
AS
SELECT columns FROM table;
Let's make an updatable materialized view for a supplier table:
CREATE MATERIALIZED VIEW suppliers_matview
FOR UPDATE
AS
SELECT * FROM suppliers;
Whenever changes are made to suppliers_matview, they will be reflected on the master sites during refresh.
A writeable materialized view is one that is created using the FOR UPDATE clause like an updatable materialized view is, but it is not a part of a materialized view group. Users can perform DML operations on a writeable materialized view; however, if you refresh the materialized view, then these changes are not pushed back to the master site and are lost in the materialized view itself. Writeable materialized views are typically allowed wherever fast-refreshable, read-only materialized views are allowed.
A cursor in PostgreSQL is a read-only pointer to a fully executed SELECT statement's result set. Cursors are typically used within applications that maintain a persistent connection to the PostgreSQL backend. By executing a cursor and maintaining a reference to its returned result set, an application can more efficiently manage which rows to retrieve from a result set at different times without re-executing the query with different LIMIT and OFFSET clauses.
The four SQL commands involved with PostgreSQL cursors are DECLARE, FETCH, MOVE, and CLOSE.
The DECLARE command both defines and opens a cursor, in effect defining the cursor in memory, and then populates the cursor with information about the result set returned from the executed query. A cursor may be declared only within an existing transaction block, so you must execute a BEGIN command prior to declaring a cursor.
Here is the syntax for DECLARE:
DECLARE cursorname [ BINARY ] [ INSENSITIVE ] [ SCROLL ]
    CURSOR FOR query
    [ FOR { READ ONLY | UPDATE [ OF column [, ...] ] } ]
Here, cursorname is the name of the cursor to create. The optional BINARY keyword causes the output to be retrieved in binary format instead of standard ASCII; this can be more efficient, though it is only relevant to custom applications, as clients such as psql are not built to handle anything but text output. The INSENSITIVE and SCROLL keywords exist to comply with the SQL standard, though they each define PostgreSQL's default behavior and are never necessary. The INSENSITIVE SQL keyword exists to ensure that all data retrieved from the cursor remains unchanged from other cursors or connections. As PostgreSQL requires cursors to be defined within transaction blocks, this behavior is already implied. The SCROLL SQL keyword exists to specify that multiple rows at a time can be selected from the cursor. This is the default in PostgreSQL, even if it is unspecified.
CURSOR FOR query specifies the complete query whose result set will be accessible through the cursor once it is executed.
As for [ FOR { READ ONLY | UPDATE [ OF column [, ...] ] } ], PostgreSQL cursors may only be defined as READ ONLY, and the FOR clause is, therefore, superfluous.
Let's begin a transaction block with the BEGIN keyword, and open a cursor named order_cur with SELECT * FROM orders as its executed select statement:
BEGIN;

DECLARE order_cur CURSOR
FOR SELECT * FROM orders;
Once the cursor is successfully declared, it means that the rows retrieved by the query are now accessible from the order_cur cursor.
In order to retrieve rows from the open cursor, we need to use the FETCH command. The MOVE command repositions the cursor within the result set without retrieving any rows.
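As a quick sketch of how these commands fit together, reusing the orders table from the earlier example (the row counts here are arbitrary):

BEGIN;

DECLARE order_cur CURSOR
FOR SELECT * FROM orders;

FETCH 5 FROM order_cur;        -- retrieve the next five rows
MOVE FORWARD 10 IN order_cur;  -- skip ten rows without retrieving them
FETCH 5 FROM order_cur;        -- retrieve the five rows that follow the skipped ones

CLOSE order_cur;
COMMIT;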
