



R Bioinformatics Cookbook
Use R and Bioconductor to perform RNAseq, genomics, data visualization, and bioinformatic analysis
Dan MacLean
BIRMINGHAM - MUMBAI

R Bioinformatics Cookbook

Copyright © 2019 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Sunith Shetty
Acquisition Editor: Reshma Raman
Content Development Editor: Athikho Sapuni Rishana
Senior Editor: Sofi Rogers
Technical Editor: Joseph Sunil
Copy Editor: Safis Editing
Project Coordinator: Kirti Pisat
Proofreader: Safis Editing
Indexer: Tejal Soni
Production Designer: Arvindkumar Gupta

First published: October 2019

Production reference: 1101019

Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.

ISBN 978-1-78995-069-4

www.packt.com

Packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Fully searchable for easy access to vital information

Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the author

Professor Dan MacLean has a Ph.D. in molecular biology from the University of Cambridge and gained postdoctoral experience in genomics and bioinformatics at Stanford University in California. Dan is now an Honorary Professor in the School of Computing Sciences at the University of East Anglia. He has worked in bioinformatics and plant pathogenomics, specializing in R and Bioconductor and developing analytical workflows in bioinformatics, genomics, genetics, image analysis, and proteomics at The Sainsbury Laboratory since 2006. Dan has developed and published software packages in R, Ruby, and Python with over 100,000 downloads combined.

About the reviewer

Cho-Yi Chen is an Olympic swimmer, a bioinformatician, and a computational biologist. He majored in computer science, and later devoted himself to biomedical research. He received his MS and Ph.D. degrees in bioinformatics, genomics, and systems biology from National Taiwan University. He was a founding member of the Taiwan Society of Evolution and Computational Biology. He is now a postdoctoral research fellow in the Department of Biostatistics and Computational Biology at the Dana-Farber Cancer Institute, Harvard University. He is an active scientist and software developer, striving to advance our understanding of cancer and other human diseases.

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

R Bioinformatics Cookbook

About Packt

Why subscribe?

Contributors

About the author

About the reviewer

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Sections

Getting ready

How to do it...

How it works...

There's more...

See also

Get in touch

Reviews

Performing Quantitative RNAseq

Technical requirements

Estimating differential expression with edgeR

Getting ready

How to do it...

Using edgeR from a count table

Using edgeR from an ExpressionSet object

How it works...

Using edgeR from a count table

Using edgeR from an ExpressionSet object

Estimating differential expression with DESeq2

Getting ready

How to do it...

Using DESeq2 from a count matrix

Using DESeq2 from an ExpressionSet object

How it works...

Using DESeq2 from a count matrix

Using DESeq2 from an ExpressionSet object

Power analysis with powsimR

Getting ready

How to do it...

How it works...

There's more...

Finding unannotated transcribed regions

Getting ready

How to do it...

How it works...

There's more...

Finding regions showing high expression ab initio with bumphunter

Getting ready...

How to do it...

How it works...

There's more...

Differential peak analysis

Getting ready

How to do it...

How it works...

Estimating batch effects using SVA

Getting ready

How to do it...

How it works...

Finding allele-specific expression with AllelicImbalance

Getting ready

How to do it...

How it works...

There's more...

Plotting and presenting RNAseq data

Getting ready

How to do it...

How it works...

Finding Genetic Variants with HTS Data

Technical requirements

Finding SNPs and indels from sequence data using VariantTools

Getting ready

How to do it...

How it works...

There's more...

See also

Predicting open reading frames in long reference sequences

Getting ready

How to do it...

How it works...

There's more...

Plotting features on genetic maps with karyoploteR

Getting ready

How to do it...

How it works...

There's more...

See also

Selecting and classifying variants with VariantAnnotation

Getting ready

How to do it...

How it works...

See also

Extracting information in genomic regions of interest

Getting ready

How to do it...

How it works...

There's more...

Finding phenotype and genotype associations with GWAS

Getting ready

How to do it...

How it works...

Estimating the copy number at a locus of interest

Getting ready

How to do it...

How it works...

See also

Searching Genes and Proteins for Domains and Motifs

Technical requirements

Finding DNA motifs with universalmotif

Getting ready

How to do it...

How it works...

There's more...

Finding protein domains with PFAM and bio3d

Getting ready

How to do it...

How it works...

There's more...

Finding InterPro domains

Getting ready

How to do it...

How it works...

There's more...

See also...

Performing multiple alignments of genes or proteins

Getting ready

How to do it...

How it works...

There's more...

Aligning genomic length sequences with DECIPHER

Getting ready

How to do it...

How it works...

Machine learning for novel feature detection in proteins

Getting ready

How to do it...

How it works...

3D structure protein alignment with bio3d

Getting ready

How to do it...

How it works...

There's More...

Phylogenetic Analysis and Visualization

Technical requirements

Reading and writing varied tree formats with ape and treeio

Getting ready

How to do it...

How it works...

See also

Visualizing trees of many genes quickly with ggtree

Getting ready

How to do it...

How it works...

There's more...

Quantifying differences between trees with treespace

Getting ready

How to do it...

How it works...

There's more...

Extracting and working with subtrees using ape

Getting ready

How to do it...

How it works...

There's more...

Creating dot plots for alignment visualization

Getting ready

How to do it...

How it works...

Reconstructing trees from alignments using phangorn

Getting ready

How to do it...

How it works...

Metagenomics

Technical requirements

Loading in hierarchical taxonomic data using phyloseq

Getting ready

How to do it...

How it works...

There's more...

See also

Rarefying counts and correcting for sample differences using metacoder

Getting ready

How to do it...

How it works...

There's more...

Reading amplicon data from raw reads with dada2

Getting ready

How to do it...

How it works...

See also

Visualizing taxonomic abundances with heat trees in metacoder

Getting ready

How to do it...

How it works...

Computing sample diversity with vegan

Getting ready

How to do it...

How it works...

See also...

Splitting sequence files into OTUs

Getting ready

How to do it...

How it works...

Proteomics from Spectrum to Annotation

Technical requirements

Representing raw MS data visually

Getting ready

How to do it...

How it works...

Viewing proteomics data in a genome browser

Getting ready

How to do it...

How it works...

There's more...

Visualizing distributions of peptide hit counts to find thresholds

Getting ready

How to do it...

How it works...

Converting MS formats to move data between tools

Getting ready

How to do it...

How it works...

Matching spectra to peptides for verification with protViz

Getting ready

How to do it...

How it works...

Applying quality control filters to spectra

Getting ready

How to do it...

How it works...

There's more...

Identifying genomic loci that match peptides

Getting ready

How to do it...

How it works...

Producing Publication and Web-Ready Visualizations

Technical requirements

Visualizing multiple distributions with ridgeplots

Getting ready

How to do it...

How it works...

Creating colormaps for two-variable data

Getting ready

How to do it...

How it works...

See also

Representing relational data as networks

Getting ready

How to do it...

How it works...

There's more...

Creating interactive web graphics with plotly

Getting ready

How to do it...

How it works...

Constructing three-dimensional plots with plotly

Getting ready

How to do it...

How it works...

Constructing circular genome plots of polyomic data

Getting ready

How to do it...

How it works...

Working with Databases and Remote Data Sources

Technical requirements

Retrieving gene and genome annotation from BioMart

Getting ready

How to do it...

How it works...

Retrieving and working with SNPs

Getting ready

How to do it...

How it works...

There's more...

See also

Getting gene ontology information

Getting ready

How to do it...

How it works...

Finding experiments and reads from SRA/ENA

Getting ready

How to do it...

How it works...

There's more...

Performing quality control and filtering on high-throughput sequence reads

Getting ready

How to do it...

How it works...

Completing read-to-reference alignment with external programs

Getting ready...

How to do it...

How it works...

Visualizing the quality control of read-to-reference alignments

Getting ready...

How to do it...

How it works...

Useful Statistical and Machine Learning Methods

Technical requirements

Correcting p-values to account for multiple hypotheses

Getting ready

How to do it...

How it works...

Generating a simulated dataset to represent a background

Getting ready

How to do it...

How it works...

Learning groupings within data and classifying with kNN

Getting ready

How to do it...

How it works...

Predicting classes with random forests

Getting ready

How to do it...

How it works...

There's more

Predicting classes with SVM

Getting ready

How to do it...

How it works...

Learning groups in data without prior information

Getting ready

How to do it...

How it works...

There's more

Identifying the most important variables in data with random forests

Getting ready

How to do it...

How it works...

Identifying the most important variables in data with PCA

Getting ready

How to do it...

How it works...

Programming with Tidyverse and Bioconductor

Technical requirements

Making base R objects tidy

Getting ready

How to do it...

How it works...

Using nested dataframes

Getting ready

How to do it...

How it works...

There's more...

Writing functions for use in dplyr::mutate()

Getting ready

How to do it...

How it works...

Working programmatically with Bioconductor classes

Getting ready

How to do it...

How it works...

Developing reusable workflows and reports

Getting ready

How to do it...

How it works...

Making use of the apply family of functions

Getting ready

How to do it...

How it works...

Building Objects and Packages for Code Reuse

Technical requirements

Creating simple S3 objects to simplify code

Getting ready

How to do it...

How it works...

Taking advantage of generic object functions with S3 classes

Getting ready

How to do it...

How it works...

Creating structured and formal objects with the S4 system

Getting ready

How to do it...

How it works

See also

Simple ways to package code for sharing and reuse

Getting ready

How to do it...

How it works...

Using devtools to host code from GitHub

Getting ready

How to do it...

How it works...

Building a unit test suite to ensure that functions work as you intend

Getting ready

How to do it...

How it works...

Using continuous integration with Travis to keep code tested and up to date

Getting ready

How to do it...

How it works...

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

In R Bioinformatics Cookbook, you will tackle common and not-so-common challenges in the bioinformatics domain using real-world examples.

This book uses a recipe-based approach to help you perform practical research and analysis in computational biology with R. You will learn to analyze your data effectively with the latest tools in Bioconductor, ggplot, and the tidyverse. You will be introduced to a number of essential Bioconductor tools so that you can understand and carry out protocols in RNAseq, phylogenetics, genomics, and sequence analysis. You will also learn how machine learning techniques can be used in the bioinformatics domain. Along the way, you will develop key computational skills, such as building workflows in R Markdown and designing your own packages for efficient and reproducible code reuse.

By the end of this book, you'll have a solid understanding of the most important and widely used techniques in bioinformatic analysis, as well as the tools you'll need to work with real biological data.

Who this book is for

This book is for data scientists, bioinformatics analysts, researchers, and R developers who want to address intermediate-to-advanced biological and bioinformatics problems using a recipe-based approach. Working knowledge of the R programming language and some basic understanding of bioinformatics are mandatory.

What this book covers

Chapter 1, Performing Quantitative RNAseq, teaches you how to process raw RNA sequence read data, perform quality checks, and estimate expression levels for differential gene expression detection and analysis. The chapter also describes important statistical methods and steps for estimating experimental power, an important part of determining whether particular effects can be detected. All the recipes in this chapter are based on the most popular Bioconductor tools, including limma, edgeR, DESeq, and more.

Chapter 2, Finding Genetic Variants with HTS Data, introduces you to a range of techniques for finding genetic variants in high-throughput sequencing data, including calling SNPs and indels, using them in analysis, and creating genetic visualizations. All the recipes in this chapter are based on the most popular and powerful Bioconductor tools.

Chapter 3, Searching Genes and Proteins for Domains and Motifs, teaches you to analyze sequences for features of functional interest, such as de novo DNA motifs and known domains from widely used databases. In this chapter, we'll learn about some packages for kernel-based machine learning to find protein sequence features. You will also learn some large-scale alignment techniques for many, or very long, sequences. You will use Bioconductor and other statistical learning packages.

Chapter 4, Phylogenetic Analysis and Visualization, shows us how to use Bioconductor and other R phylogenetic packages such as ape to build and manipulate trees of gene and protein sequences. You will also look at how to compare trees with tree metrics and complete genome-scale visualizations.

Chapter 5, Metagenomics, explores importing data from popular metagenomics packages into R for analysis and learning a variety of effective visualizations. You will use packages such as phyloseq, metacoder, and dada2 in Bioconductor and beyond in order to achieve an end-to-end metagenomics workflow.

Chapter 6, Proteomics from Spectrum to Annotation, teaches us how to import mass spectra and view them in external genome browsers along with genomic data. You will develop diagnostic plots and quality control procedures, and learn how to convert between various formats from different platforms.

Chapter 7, Producing Publication and Web-Ready Visualizations, teaches us how to develop high-quality visualizations that can represent large amounts of data and variables in compact and meaningful ways. You will study extensions to ggplot and the plotly package for interactive visualizations for the web and develop visualizations in the Shiny web environment.

Chapter 8, Working with Databases and Remote Data Sources, teaches us how to use web resources remotely by pulling data from commonly used data repositories. You will also examine the objects representing data in R. Methods in the Bioconductor package are heavily used in this chapter. We will also see how downloaded NGS datasets can be quality controlled for downstream use.

Chapter 9, Useful Statistical and Machine Learning Methods, demonstrates how to implement a range of approaches underlying some advanced statistical techniques including simulating data and performing multiple hypothesis tests. You will also learn some supervised and unsupervised machine learning methods to group and cluster data and samples.

Chapter 10, Programming with Tidyverse and Bioconductor, explains how to write code within tidyverse and integrate standard R functions to create pipelines that can analyze diverse datasets. You will use the biobroom package from Bioconductor and the broom package to reformat objects for use in tidy pipelines. The tidyverse set of packages will be used in functional programming and for creating reproducible, literate workflows.

Chapter 11, Building Objects and Packages for Code Reuse, demonstrates how to take developed code and apply R's object-oriented programming systems to simplify usability. You will also learn how to create a simple R package and how to share your code from GitHub so that other researchers can easily find and use what you have built.

To get the most out of this book

Good knowledge of R and its various libraries is mandatory for this book.

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packt.com.

2. Select the Support tab.

3. Click on Code Downloads.

4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/R-Bioinformatics-Cookbook. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781789950694_ColorImages.pdf.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "We'll look at using the ape and treeio packages to get tree data into and out of R. "

A block of code is set as follows:

if (!requireNamespace("BiocManager"))
    install.packages("BiocManager")
BiocManager::install()

Bold: Indicates a new term, an important word, or words that you see on screen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Some of the dependencies rely on encapsulated Java code, so you'll need to install a Java Runtime Environment (JRE) for your system."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Sections

In this book, you will find several headings that appear frequently (Getting ready, How to do it..., How it works..., There's more..., and See also).

To give clear instructions on how to complete a recipe, these sections are used as follows:

Getting ready

This section tells you what to expect in the recipe and describes how to set up any software or any preliminary settings required for the recipe.

How to do it...

This section contains the steps required to follow the recipe.

How it works...

This section usually consists of a detailed explanation of what happened in the previous section.

There's more...

This section consists of additional information about the recipe in order to make you more knowledgeable about the recipe.

See also

This section provides helpful links to other useful information for the recipe.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

Performing Quantitative RNAseq

The technology of RNAseq has revolutionized the study of transcript abundances, bringing high-sensitivity detection and high-throughput analysis. Bioinformatic analysis pipelines using RNAseq data typically start with a read quality control step followed by either alignment to a reference or the assembly of sequence reads into longer transcripts de novo. After that, transcript abundances are estimated with read counting and statistical models, and differential expression between samples is assessed. Naturally, there are many technologies available for all steps of this pipeline. The quality control and read alignment steps will usually take place outside of R, so analysis in R will begin with a file of transcript or gene annotations (such as GFF and BED files) and a file of aligned reads (such as BAM files).

The tools in R for performing analysis are powerful and flexible. Many of them are part of the Bioconductor suite and, as such, integrate together very nicely. The key question researchers wish to answer with RNAseq is usually: Which transcripts are differentially expressed? In this chapter, we'll look at some recipes for that in standard cases where we already know the genomic positions of genes we're interested in, and in cases where we need to find unannotated transcripts. We'll also look at other important recipes that help answer the questions How many replicates are enough? and Which allele is expressed more?

In this chapter, we will cover the following recipes:

Estimating differential expression with edgeR

Estimating differential expression with DESeq2

Power analysis with powsimR

Finding unannotated transcribed regions with GRanges objects

Finding regions showing high expression ab initio with bumphunter

Differential peak analysis

Estimating batch effects using SVA

Finding allele-specific expression with AllelicImbalance

Plotting and presenting RNAseq data

Technical requirements

The sample data you'll need is available from this book's GitHub repository: https://github.com/PacktPublishing/R-Bioinformatics-Cookbook. If you want to use the code examples as they are written, then you will need to make sure that this data is in a sub-directory of whatever your working directory is.

Here are the R packages that you'll need. Most of these will install with install.packages(); others are a little more complicated:

Bioconductor

AllelicImbalance

bumphunter

csaw

DESeq

edgeR

IRanges

Rsamtools

rtracklayer

sva

SummarizedExperiment

VariantAnnotation

dplyr

extRemes

forcats

magrittr

powsimR

readr

Bioconductor is huge and has its own installation manager. You can install it with the following code:

if (!requireNamespace("BiocManager"))
    install.packages("BiocManager")
BiocManager::install()

Further information is available at https://www.bioconductor.org/install/.

Normally, in R, a user will load a library and use the functions directly by name. This is great in interactive sessions but it can cause confusion when many packages are loaded. To clarify which package and function I'm using at a given moment, I will occasionally use the packageName::functionName() convention.

Sometimes, in the middle of a recipe, I'll interrupt the code so you can see some intermediate output or the structure of an object that is important to understand. Whenever that happens, you'll see a code block where each line begins with ## (double hash symbols). Consider the following command:

letters[1:5]

This will give us output as follows:

## [1] "a" "b" "c" "d" "e"

Note that the output lines are prefixed with ##.

Estimating differential expression with edgeR

edgeR is a widely used and powerful package that implements negative binomial models suitable for sparse count data, such as RNAseq data, in a generalized linear model framework, along with exact tests for multi-group experiments. These methods are well suited to describing and understanding count relationships. edgeR uses a weighted style of normalization called TMM, which is the weighted mean of log ratios between sample and control, computed after removing genes with high counts and outlying log ratios. The TMM value should be close to one, but it can be used as a correction factor to be applied to overall library sizes.
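To make the TMM idea concrete, here is a minimal, self-contained sketch showing how edgeR's calcNormFactors() produces TMM normalization factors. It uses simulated counts rather than real data, and the object and sample names are illustrative only, not part of this recipe:

library(edgeR)

set.seed(42)
# Simulated negative binomial counts standing in for an RNAseq count matrix
sim_counts <- matrix(rnbinom(6000, mu = 100, size = 1), ncol = 6,
                     dimnames = list(paste0("gene", 1:1000), paste0("sample", 1:6)))
dge <- DGEList(counts = sim_counts, group = rep(c("control", "test"), each = 3))
dge <- calcNormFactors(dge, method = "TMM")
dge$samples$norm.factors  # TMM factors, each close to 1, used to scale the library sizes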

In this recipe, we'll look at the steps that take us from an object of read counts for annotated regions to a list of differentially expressed features in a genome. Usually, there is an upstream step in which we take high-throughput sequence reads, align them to a reference, and produce files describing those alignments, such as .bam files. With those files prepared, we'd fire up R and start to analyze. So that we can concentrate on the differential expression analysis part of the process, we'll use a prepared dataset for which all of the data is ready. Chapter 8, Working with Databases and Remote Data Sources, shows you how to go from raw data to this stage if you're looking for how to do that step.

As there are many different tools and methods for producing those alignments of reads, we will look at starting the process with two common input object types. We'll use a count table, like the one we would have if we were loading data from a text file, and we'll use an ExpressionSet (eset) object, which is an object type common in Bioconductor.

Our prepared dataset will be the modencodefly data from the NHGRI encyclopedia of DNA elements project for the model organism Drosophila melanogaster. You can read about this project at www.modencode.org. The dataset contains 147 different samples for D. melanogaster, a fruit fly with an approximately 110 Mbp genome, annotated with about 15,000 gene features.

Getting ready

The data is provided as both a count matrix and an ExpressionSet object, and you can see the Appendix at the end of this book for further information on these object types. The data is in this book's code and data repository at https://github.com/PacktPublishing/R-Bioinformatics-Cookbook under datasets/ch1/modencodefly_eset.RData, datasets/ch1/modencodefly_count_table.txt, and datasets/ch1/modencodefly_phenodata.txt. We'll also use the edgeR (from Bioconductor), readr, and magrittr libraries.

How to do it...

We will see two ways of estimating differential expression with edgeR: one starting from a count table loaded from a text file and one starting from an ExpressionSet object.

How it works...

We saw two ways of estimating differential expression with edgeR. In the first half of this recipe, we used edgeR starting with our data in a text file.

Using edgeR from a count table

In step 1, we use the read_tsv() function in the readr package to load the tab-delimited text file of counts into a dataframe called count_dataframe. Then, from that, we extract the 'gene' column to a new variable, genes, and erase it from count_dataframe by assigning NULL. This is all done so we can easily convert into the count_matrix matrix with the base as.matrix() function and add the gene information back as rownames. Finally, we load the phenotype data we'll need from a file using the readr read_table2() function.
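As a rough guide, step 1 might look like the following sketch. The 'gene' column and readr functions come from the explanation above; the exact file paths under datasets/ch1/ are assumptions based on the dataset description:

library(readr)

count_dataframe <- readr::read_tsv("datasets/ch1/modencodefly_count_table.txt")
genes <- count_dataframe[['gene']]
count_dataframe[['gene']] <- NULL           # drop the gene column before conversion
count_matrix <- as.matrix(count_dataframe)  # counts only, samples in columns
rownames(count_matrix) <- genes             # restore gene IDs as row names
pheno_data <- readr::read_table2("datasets/ch1/modencodefly_phenodata.txt")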

Step 2 is concerned with working out which columns in count_matrix we want to use. We define a variable, experiments_of_interest, which holds the column names we want, and then use the %in% operator and the which() function to create a binary vector that matches the number of columns. If, say, the third entry of the columns_of_interest vector is TRUE, it indicates that the name was in the experiments_of_interest variable.
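A sketch of this selection step is shown below. The 'stage' column name and the stage values used here are assumptions for illustration; the key point is the logical vector produced by %in%, with one entry per sample column:

experiments_of_interest <- c("L1Larvae", "L2Larvae")
# Assumes pheno_data has one row per sample column and a 'stage' column naming each sample's group
columns_of_interest <- pheno_data[['stage']] %in% experiments_of_interest  # logical, one per column
which(columns_of_interest)                                                 # positions of the TRUE entries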

Step 3 begins with loading the magrittr package to get the %>% operator, which will allow piping. We then use R indexing with the logical columns_of_interest vector to select the names of the columns we want and send them to the forcats as_factor() function to get a factor object for our grouping variable. Sample grouping information is basically a factor that tells us which samples are replicates of the same thing, and it's important for the experimental design description. We need to create a grouping vector, each index of which refers to a column in the counts table. So, in the following example, the first three columns in the data would be replicates of one sample, the second three columns in the counts table would be replicates of a different sample, and so on. We can use any symbols in the grouping vector to represent the groups. The more complicated the grouping vector, the more complicated the experimental design can be. In the recipe here, we'll use a simple test/control design:

numeric_groups <- c(1, 1, 1, 2, 2, 2)
letter_groups <- c("A", "A", "A", "B", "B", "B")

A simple vector like this will do, but you can also use a factor object. The factor is R's categorical data type and is implemented as a vector of integers that have associated name labels, called levels. When a factor is displayed, the name labels are shown instead of the integers. The factor object has a memory of sorts, and even when a subset of levels is used, all of the levels that could have been used are retained so that when, for example, the levels are used as categories, empty levels can still be displayed.
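As an illustration, the grouping factor for this recipe might be built as in the first line below (again assuming a 'stage' column in pheno_data), and the short factor example that follows demonstrates the "memory" of unused levels described above:

library(magrittr)

grouping <- pheno_data[['stage']][columns_of_interest] %>% forcats::as_factor()

# A small demonstration of how factors retain unused levels
f <- factor(c("A", "A", "B"), levels = c("A", "B", "C"))
table(f)        # level C is reported with a count of 0, even though it never occurs
droplevels(f)   # removes the unused level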

In Step 4, we use indexing to extract the columns of data we want to actually analyze.

By Step 5, our preparatory work is done and we can build the DGEList object we need to do differential analysis. To start, we load the edgeR library and use the DGEList() function on counts_of_interest and our grouping object.
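Steps 4 and 5 might look like the following sketch; the object names counts_of_interest and eset_dge are taken from the surrounding explanation, not from code reproduced here:

counts_of_interest <- count_matrix[, columns_of_interest]  # step 4: keep only the selected samples

library(edgeR)
eset_dge <- edgeR::DGEList(counts = counts_of_interest, group = grouping)  # step 5: build the DGEList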

In Step 6, with the DGEList object, we can go through the edgeR process. First, we create the experimental design descriptor, the design object, with the base model.matrix() function. A model design is required to tell the functions how to compare samples; this is a common thing in R and so has a base function. We use the grouping variable we created. We must estimate the dispersions of each gene with the estimateDisp() function; then we can use that measure of variability in tests. Finally, a generalized linear model is fit with glmQLFit() using the dispersion estimates in eset_dge, and the quasi-likelihood F-test is applied with glmQLFTest() on the resulting fit object.
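A sketch of step 6 follows, assuming a two-group design and that the coefficient to test is the second column of the design matrix (as the groupingL2Larvae header in the output below suggests):

design <- model.matrix(~ grouping)                 # experimental design descriptor
eset_dge <- edgeR::estimateDisp(eset_dge, design)  # per-gene dispersion estimates
fit <- edgeR::glmQLFit(eset_dge, design)           # fit the quasi-likelihood GLM
result <- edgeR::glmQLFTest(fit, coef = 2)         # test the group coefficient
topTags(result)                                    # inspect the top differentially expressed genes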

We can use the topTags() function to see the details of differentially expressed genes. We get the following output:

## Coefficient: groupingL2Larvae
##                logFC   logCPM        F       PValue          FDR
## FBgn0027527 6.318665 11.14876 42854.72 1.132951e-41 1.684584e-37
## [ reached 'max' / getOption("max.print") -- omitted 9 rows ]

The columns show the gene name, the logFC and logCPM values for the gene, the F statistic, the p-value, and the false discovery rate (FDR). Usually, the column we want to make statistical conclusions from is FDR.

Using edgeR from an ExpressionSet object

In Step 1, we are looking at using edgeR from our prepared eset object. We first load that in using the base R load() function, as it is stored in a standard RData format file.

In Step 2, we prepare the vector of experiments of interest. This works as in step 2, except that we don't need to look at the pheno_data object we created from a file; instead, we can use the eset function, phenoData(), to extract the phenotype data straight from the eset object (note that this is one of the major differences between eset and the count matrix—see this book's Appendix for further information).
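Steps 1 and 2 of the ExpressionSet approach might look like this sketch; the object name modencodefly.eset and the 'stage' column are assumptions for illustration only:

library(Biobase)

load("datasets/ch1/modencodefly_eset.RData")  # restores the saved eset object into the session

experiments_of_interest <- c("L1Larvae", "L2Larvae")
# phenoData() pulls the sample annotations from the eset; pData() turns them into a data frame
columns_of_interest <- pData(phenoData(modencodefly.eset))[['stage']] %in% experiments_of_interest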

In Step 3, we create the grouping factor. Again, this can be done by using the phenoData() extraction function, but, as it returns a factor, we need to drop the levels that aren't selected using the droplevels() function (see the How it works... section in the Estimating differential expression with edgeR recipe, step 3