If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high-performance applications, you will be able to get valuable insights from this book.
You can read the e-book in Legimi apps or any app that supports the following format:
Page count: 244
Year of publication: 2015
Copyright © 2015 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing and its dealers and distributors, will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: February 2015
Production reference: 1250215
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78355-515-4
www.packtpub.com
Author
Sonal Raj
Reviewers
Roar Flolo
Dave Meehan
Kailash Nadh
Commissioning Editor
Kunal Parikh
Acquisition Editor
Shaon Basu
Content Development Editor
Akshay Nair
Technical Editor
Faisal Siddiqui
Copy Editors
Deepa Nambiar
Ashwati Thampi
Project Coordinator
Mary Alex
Proofreaders
Simran Bhogal
Maria Gould
Ameesha Green
Kevin McGowan
Jonathan Todd
Indexer
Hemangini Bari
Graphics
Abhinash Sahu
Valentina Dsilva
Production Coordinator
Alwin Roy
Cover Work
Alwin Roy
Sonal Raj is a hacker, Pythonista, big data believer, and a technology dreamer. He has a passion for design and is an artist at heart. He blogs about technology, design, and gadgets at http://www.sonalraj.com/. When not working on projects, he can be found traveling, stargazing, or reading.
He has pursued engineering in computer science and loves to work on community projects. He has been a research fellow at SERC, IISc, Bangalore, and taken up projects on graph computations using Neo4j and Storm. Sonal has been a speaker at PyCon India and local meetups on Neo4j and has also published articles and research papers in leading magazines and international journals. He has contributed to several open source projects.
Presently, Sonal works at Goldman Sachs. Prior to this, he worked at Sigmoid Analytics, a start-up where he was actively involved in the development of machine learning frameworks, NoSQL databases including MongoDB, and streaming using technologies such as Apache Spark.
I would like to thank my family for encouraging me, supporting my decisions, and always being there for me. I heartily want to thank all my friends who have always respected my passion for being part of open source projects and communities while reminding me that there is more to life than lines of code. Beyond this, I would like to thank the folks at Neo Technologies for the amazing product that can store the world in a graph. Special thanks to my colleagues for helping me validate my writings and finally the reviewers and editors at Packt Publishing without whose efforts this work would not have been possible. Merci à vous.
Roar Flolo has been developing software since 1993 when he got his first job developing video games at Funcom in Oslo, Norway. His career in video games brought him to Boston and Huntington Beach, California, where he cofounded Papaya Studio, an independent game development studio. He has worked on real-time networking, data streaming, multithreading, physics and vehicle simulations, AI, 2D and 3D graphics, and everything else that makes a game tick.
For the last 10 years, Roar has been working as a software consultant at www.flologroup.com, working on games and web and mobile apps. Recent projects include augmented reality apps and social apps for Android and iOS using the Neo4j graph database at the backend.
Dave Meehan has been working in information technology for over 15 years. His areas of specialty include website development, database administration, and security.
Kailash Nadh has been a hobbyist and professional developer for over 13 years. He has a special interest in web development, and is also a researcher with a PhD in artificial intelligence and computational linguistics.
For support files and downloads related to your book, please visit www.PacktPub.com.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.
Welcome to the connected world. In the information age, everything around us is based on entities, relations, and above all, connectivity. Data is becoming exponentially more complex, which is affecting the performance of existing data stores. The most natural form in which data is visualized is in the form of graphs. In recent years, there has been an explosion of technologies to manage, process, and analyze graphs. While companies such as Facebook and LinkedIn have been the most well-known users of graph technologies for social web properties, a quiet revolution has been spreading across other industries. More than 30 of the Forbes Global 2000 companies, and many times as many start-ups, have been quietly working to apply graphs to a wide array of business-critical use cases.
Neo4j, a graph database by Neo Technologies, is the leading player in the market for handling related data. It is not only efficient and easy to use, but it also includes all the security and reliability features of traditional tabular databases.
We are entering an era of connected data where companies that can master the connections between their data—the lines and patterns linking the dots and not just the dots—will outperform the organizations that fail to recognize connectedness. It will be a long time before relational databases ebb into oblivion. However, their role is no longer universal. Graph databases are here to stay, and for now, Neo4j is setting the standard for the rest of the market.
This book presents an insight into how Neo4j can be applied to practical industry scenarios and also includes tweaks and optimizations for developers and administrators to make their systems more efficient and high performing.
Chapter 1, Getting Started with Neo4j, introduces Neo4j, its functionality, and norms in general, briefly outlining the fundamentals. The chapter also gives an overview of graphs, NoSQL databases and their features, Neo4j in particular, ACID compliance, basic CRUD operations, and setup. So, if you are new to Neo4j and need a boost, this is your chapter.
Chapter 2, Querying and Indexing in Neo4j, deals with querying Neo4j using Cypher, and optimizations to the data model and queries for better Cypher performance. The basics of Gremlin are also touched upon. Indexing in Neo4j and its types are introduced, along with how to migrate from existing SQL stores and data import/export techniques.
Chapter 3, Efficient Data Modeling with Graphs, explores the data modeling concepts and techniques associated with graph data in Neo4j, in particular, property graph model, design constraints for Neo4j, the designing of schemas, and modeling across multiple domains.
Chapter 4, Neo4j for High-volume Applications, teaches you how to develop applications with Neo4j to handle high volumes of data. We will define how to develop an efficient architecture and transactions in a scalable way. We will also take a look at built-in graph algorithms for better traversals and introduce Spring Data Neo4j.
Chapter 5, Testing and Scaling Neo4j Applications, teaches how to test Neo4j applications using the built-in tools and the GraphAware framework for unit and performance tests. We will also discuss how a Neo4j application can scale.
Chapter 6, Neo4j Internals, takes a look under the hood of Neo4j, skimming the concepts from the core classes in the source into the internal storage structure, caching, transactions, and related operations. Finally, the chapter deals with HA functions and master election.
Chapter 7, Administering Neo4j, throws light upon some useful tools and adapters that have been built to interface Neo4j with the most popular languages and frameworks. The chapter also deals with tips and configurations for administrators to optimize the performance of the Neo4j system. The essential security aspects are also dealt with in this chapter.
Chapter 8, Use Case – Similarity-based Recommendation System, is an example-oriented chapter. It provides a demonstration on how to go about building a similarity-based recommendation system with Neo4j and highlights the utility of graph visualization.
This book is written for developers who work on machines based on Linux, Mac OS X, or Windows. All prerequisites are described in the first chapter to make sure your system is Neo4j-enabled and meets a few requirements. In general, all the examples should work on any platform.
This book assumes that you have a basic understanding of graph theory and are familiar with the fundamental concepts of Neo4j. It focuses primarily on using Neo4j for production environments and provides optimization techniques to gain better performance out of your Neo4j-based application. However, beginners can use this book as well, as we have tried to provide references to basic concepts in most chapters. You will need a server with Windows, Linux, or Mac and the Neo4j Community edition or HA installed. You will also need Python and py2neo configured.
Lastly, keep in mind that this book is not intended to replace online resources, but rather aims at complementing them. So, obviously you will need Internet access to complete your reading experience at some points, through provided links.
This book was written for developers who wish to go further in mastering the Neo4j graph database. Some sections of the book, such as the section on administering and scaling, are targeted at database admins.
It complements the usual "Introducing Neo4j" reference books and online resources and goes deeper into the internal structure and large-scale deployments.
It also explains how to write and optimize your Cypher queries. The book concentrates on providing examples with Java and Cypher. So, if you are not using graph databases or using an adapter in a different language, you will probably learn a lot through this book as it will help you to understand the working of Neo4j.
This book presents an example-oriented approach to learning the technology, where the reader can learn through the code examples and make themselves ready for practical scenarios both in development and production. The book is basically the "how-to" for those wanting a quick and in-depth learning experience of the Neo4j graph database.
While these topics are quickly evolving, this book will not become obsolete that easily because it rather focuses on whys instead of hows. So, even if a given tool presented is not used anymore, you will understand why it was useful and you will be able to pick the right tool with a critical point of view.
In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.
Code words in text are shown as follows: "This is equivalent to the assertSubGraph() method of the GraphUnit API."
A block of code is set as follows:
Any command-line input or output is written as follows:
New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "Click on Finish to complete the package addition process."
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.
Graphs and graph operations have grown into prime areas of research in computer science. One reason for this is that graphs can be useful in representing several, otherwise abstract, problems in existence today. Representing the solution space of the problem in terms of graphs can trigger innovative approaches to solving such problems. It's simple. Everything around us, that is, everything we come across in our day-to-day life can be represented as graphs, and when your whiteboard sketches can be directly transformed into data structures, the possibilities are limitless. Before we dive into the technicalities and utilities of graph databases with the topics covered in this chapter, let's understand what graphs are and how representing data in the form of graph databases makes our lives easier. The following topics are dealt with in this chapter:
Graphs are a way of representing entities and the connections between them. Mathematically, graphs can be defined as collections of nodes and edges that denote entities and relationships. The nodes are data entities whose mutual relationships are denoted with the help of edges. Undirected graphs have two-way connections between nodes, whereas a directed graph has only a one-way edge between the nodes. We can also record the value of an edge, which is referred to as the weight of the edge.
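These concepts can be sketched in a few lines of plain Python. This is only an illustration of nodes, directed edges, and weights using an adjacency list; it is not Neo4j code, and the names are our own:

```python
# Minimal directed, weighted graph using an adjacency list.
# Illustrative only -- not tied to Neo4j's data model or API.

class Graph:
    def __init__(self):
        self.edges = {}  # node -> {neighbor: weight}

    def add_node(self, node):
        self.edges.setdefault(node, {})

    def add_edge(self, source, target, weight=1):
        """Add a one-way (directed) edge; call twice for an undirected link."""
        self.add_node(source)
        self.add_node(target)
        self.edges[source][target] = weight

    def neighbors(self, node):
        return self.edges.get(node, {})

g = Graph()
g.add_edge("Alice", "Bob", weight=5)  # Alice --(weight 5)--> Bob
g.add_edge("Bob", "Carol")
print(g.neighbors("Alice"))           # {'Bob': 5}
```

An undirected edge is simply modeled as two directed edges, one in each direction, which mirrors the distinction drawn above.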
Modern datasets of science, government, or business are diverse and interrelated, and for years we have been developing data stores that have tabular schema. So, when it comes to highly connected data, tabular data stores offer slow and highly complex operability. So, we started creating data stores that store data in the raw form in which we visualize them. This not only makes it easier to transform our ideas into schemas, but the whiteboard friendliness of such data stores also makes them easy to learn, deploy, and maintain. Over the years, several databases were developed that stored their data structurally in the form of graphs. We will look into them in the next section.
Data has been growing in volume, changing more rapidly, and has become more structurally varied than what can be handled by typical relational databases. Query execution times increase drastically as the size of tables and number of joins grow. This is because the underlying data models build sets of probable answers to a query before filtering to arrive at a solution. NoSQL (often interpreted as Not only SQL) provides several alternatives to the relational model.
NoSQL represents the new class of data management technologies designed to meet the increasing volume, velocity, and variety of data that organizations are storing, processing, and analyzing. NoSQL comprises many different database technologies, and it has evolved as a response to an exponential increase in the volume of data stored about products, objects, and consumers, the access frequency of this data, along with increased processing and performance requirements. Relational databases, on the other hand, find it difficult to cope with the rapidly growing scale and agility challenges that are faced by modern applications, and they struggle to take advantage of the cheap, readily available storage and processing technologies in the market.
Often referred to as NoSQL, nonrelational databases feature elasticity and scalability. In addition, they can store big data and work with cloud computing systems. All of these factors make them extremely popular. NoSQL databases address the opportunities that the relational model does not, including the following:
In the case of relational databases, you need to define the schema before you can add your data. In other words, you need to strictly follow a format for all data you are likely to store in the future. For example, you might store data about consumers such as phone numbers, first and last names, address including the city and state—a SQL database must be told what you are storing in advance, thereby giving you no flexibility.
Agile development approaches do not fit well with static schemas, since every completion of a new feature requires the schema of your database to change. So, after a few development iterations, if you decide to store consumers' preferred items along with their contact addresses and phone numbers, a new column will need to be added to the existing database, and the complete database will then need to be migrated to an entirely new schema.
In the case of a large database, this is a time-consuming process that involves significant downtime, which might adversely affect the business as a whole. If the application data frequently changes due to rapid iterations, the downtime might occur quite often. Businesses sometimes wrongly choose relational databases in situations where the effective addressing of completely unstructured data is needed or the structure of data is unknown in advance. It is also worth noting that while most NoSQL databases support schema or structure changes throughout their lifetime, some, including graph databases, suffer a performance penalty if schema changes are made after a considerably large amount of data has been added to the graph.
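The contrast described above can be made concrete with a small sketch. Using Python's standard sqlite3 module as a stand-in relational store (the table and column names are purely illustrative), storing a new attribute requires an explicit schema migration, whereas a schema-less record can simply grow a new field:

```python
# Sketch: schema change in a relational store vs. a schema-less record.
# Table/column names are illustrative, not from any real application.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE consumers (name TEXT, phone TEXT)")
conn.execute("INSERT INTO consumers VALUES ('Alice', '555-0100')")

# Relational: the schema must change before the new attribute exists.
conn.execute("ALTER TABLE consumers ADD COLUMN preferred_item TEXT")
conn.execute(
    "UPDATE consumers SET preferred_item = 'books' WHERE name = 'Alice'"
)

# Schema-less: the record just gains a field, with no migration step.
record = {"name": "Alice", "phone": "555-0100"}
record["preferred_item"] = "books"

row = conn.execute("SELECT preferred_item FROM consumers").fetchone()
print(row[0], record["preferred_item"])  # books books
```

On a toy in-memory table the ALTER TABLE is instant; on a production table with millions of rows, that migration step is exactly the downtime the paragraph above describes.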
Because of their structure, relational databases are usually vertically scalable, that is, increasing the capacity of a single server to host more data in the database so that it is reliable and continuously available. There are limits to such scaling, both in terms of size and expense. An alternate approach is to scale horizontally by increasing the number of machines rather than the capacity of a single machine.
In most relational databases, sharding across multiple server instances is generally accomplished with Storage Area Networks (SANs) and other complicated arrangements that make multiple pieces of hardware act as a single machine. Developers have to manually deploy multiple relational databases across a cluster of machines. The application code distributes the data and queries, and aggregates the results of the queries from all instances of the database. Handling resource failures, data replication, and load balancing require customized code in the case of manual sharding.
NoSQL databases usually support autosharding out of the box, which means that they natively allow the distribution of data stores across a number of servers, abstracting it from the application, which is unaware of the server pool composition. Data and query load are balanced automatically, and in the case of a node or server failure, it can quickly replace the failed node with no performance drop.
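The routing idea behind autosharding can be sketched as hash-based placement: each key is deterministically mapped to a server, so no client needs a central directory of what lives where. The shard names below are invented for illustration, and real systems typically use consistent hashing so that adding a server does not reshuffle most keys:

```python
# Sketch of hash-based shard routing. Server names are illustrative.
import hashlib

SERVERS = ["shard-0", "shard-1", "shard-2"]

def shard_for(key: str) -> str:
    """Map a key to a shard deterministically via its hash."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Every client computes the same placement independently,
# with no coordination and no manual sharding code.
placement = {k: shard_for(k) for k in ("user:1", "user:2", "user:3")}
print(placement)
```

This simple modulo scheme shows the principle only; production stores replace it with consistent hashing or range partitioning precisely to handle the node failures and rebalancing mentioned above.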
Cloud computing platforms such as Amazon Web Services provide virtually unlimited on-demand capacity. Hence, commodity servers can now provide the same storage and processing power for a fraction of the price of a single high-end server.
There are many products available that provide a cache tier to SQL database management systems. They can improve the performance of read operations substantially, but not that of write operations, and they moreover add complexity to the deployment of the system. If read operations dominate the application, then distributed caching can be considered; but if write operations dominate, or there is an even mix of read and write operations, then a scenario with distributed caching might not be the best choice for a good end user experience.
Most NoSQL database systems come with built-in caching capabilities that use the system memory to house the most frequently used data, doing away with the need to maintain a separate caching layer.
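The effect of such an integrated cache can be approximated in a few lines using the standard library's LRU cache decorator. The `slow_lookup` function here is a made-up stand-in for an expensive read against the backing store:

```python
# Sketch: an in-memory cache in front of an expensive lookup,
# approximated with the standard library's LRU policy.
# slow_lookup is an illustrative stand-in, not a real database call.
from functools import lru_cache

@lru_cache(maxsize=128)
def slow_lookup(key):
    # Pretend this is an expensive read against the backing store.
    return key.upper()

slow_lookup("neo4j")   # first call misses and hits the "store"
slow_lookup("neo4j")   # repeat call is served from memory
print(slow_lookup.cache_info())  # one hit, one miss recorded
```

A built-in cache tier works on the same principle, except the database manages eviction and invalidation itself instead of the application bolting a cache on top.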
NoSQL databases support automatic replication, which means that you get high availability and failure recovery without the use of specialized applications to manage such operations. From the developer's perspective, the storage environment is essentially virtualized to provide a fault-tolerant experience.
At one time, the answer to all your database needs was a relational database. With the rapidly spreading NoSQL database craze, it is vital to realize that different use cases and functionality call for a different database type. Based on the purpose of use, NoSQL databases have been classified in the following areas:
Key-value database management systems are the most basic and fundamental implementation of NoSQL types. Such databases operate similar to a dictionary by mapping keys to values and do not reflect structure or relation. Key-value databases are usually used for the rapid storage of information after performing some operation, for example, a resource (memory)-intensive computation. These data stores offer extremely high performance and are efficient and easily scalable. Some examples of key-value data stores are Redis (an in-memory data store with optional persistence), MemcacheDB (a distributed, in-memory key-value store), and Riak (a highly distributed, replicated key-value store). Sounds interesting, huh? But how do you decide when to use such data stores?
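The dictionary-like interface described above boils down to three operations. Here is a minimal in-memory sketch (real stores such as Redis add networking, persistence, and expiry on top of this core):

```python
# Minimal in-memory key-value store sketch: set/get/delete, no
# structure or relations. Illustrative only -- real key-value
# stores add persistence, expiry, and distribution.

class KeyValueStore:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.set("session:42", {"user": "alice"})
print(store.get("session:42"))   # {'user': 'alice'}
store.delete("session:42")
print(store.get("session:42"))   # None
```

Note that the store knows nothing about what the values contain, which is exactly why lookups are so fast and why relationships between values cannot be queried.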
Let's take a look at some key-value data store use cases:
Column family NoSQL database systems extend the features of key-value stores to provide enhanced functionality. Although they are known to have a complex nature, column family stores operate by the simple creation of collections of key-value pairs (single or many) that match a record. Contrary to relational databases, column family NoSQL stores are schema-less. Each record has one or more columns that contain the information with variation in each column of each record.
Column-based NoSQL databases are basically 2D arrays where each key contains a single key-value pair or multiple key-value pairs associated with it, thereby providing support for large and unstructured datasets to be stored for future use. Such databases are generally used when the simple method of storing key-value pairs is not sufficient and storing large quantities of records with a lot of information is mandatory. Database systems that implement a column-based, schema-less model are extremely scalable.
These data stores are powerful and can be reliably used to store essential data of large sizes. Although they are not flexible in what constitutes the data (related objects cannot be stored, for example!), they are extremely functional and performance oriented. Some column-based data stores are HBase (an Apache Hadoop data store based on ideas from BigTable) and Cassandra (a data store based on DynamoDB and BigTable).
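The 2D, schema-less layout described above can be pictured as a dictionary of dictionaries: each row key owns its own set of column/value pairs, and rows need not share columns. The row keys and column names below are invented for illustration:

```python
# Sketch of a column family layout: row key -> {column: value}.
# Rows can carry different columns -- no shared schema.
# All names are illustrative.

users = {}  # the "column family"

def put(row_key, column, value):
    users.setdefault(row_key, {})[column] = value

put("user:1", "name", "Alice")
put("user:1", "city", "Bangalore")
put("user:2", "name", "Bob")
put("user:2", "last_login", "2015-02-25")  # a column user:1 never has

print(users["user:1"])  # {'name': 'Alice', 'city': 'Bangalore'}
```

Real column family stores such as Cassandra add partitioning, replication, and on-disk layout around this shape, but the row-key-to-columns structure is the part that makes them schema-less.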
So, when do we want to use such data stores? Let's take a look at some