{ "metadata": { "kernelspec": { "display_name": "R", "language": "R", "name": "r" }, "language_info": { "codemirror_mode": "r", "file_extension": ".r", "mimetype": "text/x-r-source", "name": "R", "pygments_lexer": "r", "version": "4.1.0" } }, "nbformat": 4, "nbformat_minor": 5, "cells": [ { "id": "metadata", "cell_type": "markdown", "source": "
Objectives
\n- Read data with readr's read_csv
\n- Use dplyr and tidyverse functions to clean up data.\n\n**Time Estimation: 1H**\ndplyr ({% cite r-dplyr %}) is a powerful R package to transform and summarize tabular data with rows and columns. It is part of a group of packages (including ggplot2
) called the tidyverse
({% cite r-tidyverse %}), a collection of packages for data processing and visualisation. For further exploration please see the dplyr package vignette: Introduction to dplyr
\n\nComment\nThis tutorial is significantly based on GenomicsClass/labs.
\n
\n\nAgenda\nIn this tutorial, we will cover:
\n\n
The package contains a set of functions (or “verbs”) that perform common data manipulation operations such as filtering for rows, selecting specific columns, re-ordering rows, adding new columns and summarizing data.
\nIn addition, dplyr contains a useful function to perform another common task which is the “split-apply-combine” concept. We will discuss that in a little bit.
\nIf you are familiar with R, you are probably familiar with base R functions such as split(), subset(), apply(), sapply(), lapply(), tapply() and aggregate(). Compared to base functions in R, the functions in dplyr are easier to work with, are more consistent in the syntax and are targeted for data analysis around tibbles, instead of just vectors.
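To make the comparison concrete, here is one small sketch (using R's built-in iris data frame; the column and threshold are chosen purely for illustration): the same row filter written with base R's subset() and with dplyr's filter().

```r
# Base R: subset() with a logical condition
subset(iris, Sepal.Length > 7)

# dplyr: the same operation with filter()
library(dplyr)
filter(iris, Sepal.Length > 7)
```

Both return the rows of iris whose Sepal.Length exceeds 7; the dplyr version becomes even more readable once combined with the pipe operator introduced below.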
\nTo load the required packages:
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-1", "source": [ "library(tidyverse)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">\n\n\nRemember that you can install new packages by running
\n\ninstall.packages(\"tidyverse\")\n
Or by using the Install button on the RStudio Packages interface
\n
Here we’ve imported the entire suite of tidyverse packages. We’ll specifically be using:
\nPackage | \nUse | \n
---|---|
readr | \nThis provides the read_csv function, which works like read.csv except that it returns a tibble | \n
dplyr | \nAll of the useful functions we’ll be covering are part of dplyr | \n
magrittr | \nA dependency of dplyr that provides the %>% operator | \n
ggplot2 | \nThe famous plotting library which we’ll use at the very end to plot our aggregated data. | \n
The msleep (mammals sleep) data set contains the sleep times and weights for a set of mammals. This data set contains 83 rows and 11 variables.
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-3", "source": [ "url <- \"https://raw.githubusercontent.com/genomicsclass/dagdata/master/inst/extdata/msleep_ggplot2.csv\"\n", "msleep <- read_csv(url)\n", "head(msleep)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">The columns (in order) correspond to the following:
\ncolumn name | \nDescription | \n
---|---|
name | \ncommon name | \n
genus | \ntaxonomic rank | \n
vore | \ncarnivore, omnivore or herbivore? | \n
order | \ntaxonomic rank | \n
conservation | \nthe conservation status of the mammal | \n
sleep_total | \ntotal amount of sleep, in hours | \n
sleep_rem | \nrem sleep, in hours | \n
sleep_cycle | \nlength of sleep cycle, in hours | \n
awake | \namount of time spent awake, in hours | \n
brainwt | \nbrain weight in kilograms | \n
bodywt | \nbody weight in kilograms | \n
Compare the above output with the more traditional read.csv
that is built into R.
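For example, re-using the url defined above (a quick sketch; read.csv parses the same file into a plain data frame):

```r
# Base R alternative: returns a data.frame rather than a tibble
msleep_df <- read.csv(url)
head(msleep_df)
```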
This is a “data frame” and was the basis of data processing for years in R, and is still quite commonly used! But notice how dplyr
has a much prettier and more concise output. This is what is called a tibble
(like a table). We can immediately see metadata about the table: the separator that was guessed for us, the datatype of each column (dbl or chr), how many rows and columns we have, etc. The tibble
behaves much like a data frame, except it has a lot of features to integrate nicely with the dplyr
package.
That said, all of the functions you will learn about below work equally well with data frames and tibbles, but tibbles will save you from filling your screen with hundreds of rows by automatically truncating large tables unless you specifically request otherwise.
\ndplyr verbs | \nDescription | \nSQL Equivalent Operation | \n
---|---|---|
select() | \nselect columns | \nSELECT | \n
filter() | \nfilter rows | \nWHERE | \n
arrange() | \nre-order or arrange rows | \nORDER BY | \n
mutate() | \ncreate new columns | \nSELECT x, x*2 ... | \n
summarise() | \nsummarise values | \nn/a | \n
group_by() | \nallows for group operations in the “split-apply-combine” concept | \nGROUP BY | \n
The two most basic functions are select()
and filter()
, which select columns and filter rows, respectively.
Before we go any further, let’s introduce the pipe operator: %>%. dplyr imports this operator from another package (magrittr). This operator allows you to pipe the output from one function to the input of another function. Instead of nesting functions (reading from the inside to the outside), the idea of piping is to read the functions from left to right. This is a lot more like how you would write a bash
data processing pipeline and can be a lot more readable and intuitive than the nested version.
Here is the more old-fashioned way of writing the equivalent code:
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-7", "source": [ "head(select(msleep, name, sleep_total))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">Now in this case, we will pipe the msleep tibble to the function that will select two columns (name and sleep_total) and then pipe the new tibble to the function head()
, which will return the head of the new tibble.
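Written with the pipe, that same operation reads left to right:

```r
msleep %>%
  select(name, sleep_total) %>%
  head()
```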
\n\nQuestion\nHow would you rewrite the following code to use the pipe operator?
\n\nprcomp(tail(read.csv(\"file.csv\"), 10))\n
\n👁 View solution
\n\nJust read from inside to outside, starting with the innermost call, and use %>%
between each step.\nread.csv(\"file.csv\") %>% tail(10) %>% prcomp()\n
select()
Select a set of columns: the name and the sleep_total columns.
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-11", "source": [ "msleep %>% select(name, sleep_total)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">To select all the columns except a specific column, use the “-“ (subtraction) operator (also known as negative indexing):
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-13", "source": [ "msleep %>% select(-name)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">To select a range of columns by name, use the “:” (colon) operator:
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-15", "source": [ "msleep %>% select(name:order)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">To select all columns that start with the character string “sl”, use the function starts_with()
:
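A minimal example (this selects the three columns whose names begin with “sl”):

```r
# Keep only the columns starting with "sl"
msleep %>% select(starts_with("sl"))
```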
Some additional options to select columns based on specific criteria include:
\nFunction | \nUsage | \n
---|---|
ends_with() | \nSelect columns that end with a character string | \n
contains() | \nSelect columns that contain a character string | \n
matches() | \nSelect columns that match a regular expression | \n
one_of() | \nSelect column names that are from a group of names | \n
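A couple of quick sketches with these helpers (the column patterns here are chosen just for illustration):

```r
# All columns whose names contain the string "sleep"
msleep %>% select(contains("sleep"))

# All columns whose names end in "wt" (the weight columns)
msleep %>% select(ends_with("wt"))
```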
filter()
Filter the rows for mammals that sleep a total of 16 or more hours.
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-19", "source": [ "msleep %>% filter(sleep_total >= 16)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">Filter the rows for mammals that sleep a total of 16 or more hours and have a body weight of at least 1 kilogram.
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-21", "source": [ "msleep %>% filter(sleep_total >= 16, bodywt >= 1)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">Filter the rows for mammals in the Perissodactyla and Primates taxonomic order
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-23", "source": [ "msleep %>% filter(order %in% c(\"Perissodactyla\", \"Primates\"))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">You can use the boolean operators (e.g. >, <, >=, <=, !=, %in%) to create the logical tests.
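Conditions can also be combined with & (and) and | (or); for example, to keep mammals that either sleep 18 or more hours or weigh at least 2000 kilograms (thresholds picked purely for illustration):

```r
# Rows matching either condition are kept
msleep %>% filter(sleep_total >= 18 | bodywt >= 2000)
```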
\narrange()
To arrange (or re-order) rows by a particular column, such as the taxonomic order, list the name of the column you want to arrange the rows by:
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-25", "source": [ "msleep %>% arrange(order) %>% select(order, genus, name)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">Now we will select three columns from msleep, arrange the rows by the taxonomic order and then arrange the rows by sleep_total. Finally, show the final tibble:
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-27", "source": [ "msleep %>%\n", " select(name, order, sleep_total) %>%\n", " arrange(order, sleep_total)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">Same as above, except here we filter the rows for mammals that sleep for 16 or more hours, instead of showing the whole tibble:
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-29", "source": [ "msleep %>%\n", " select(name, order, sleep_total) %>%\n", " arrange(order, sleep_total) %>%\n", " filter(sleep_total >= 16)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">Something slightly more complicated: same as above, except arrange the rows in the sleep_total column in a descending order. For this, use the function desc()
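A sketch of that, sorting sleep_total from longest to shortest within each taxonomic order:

```r
msleep %>%
  select(name, order, sleep_total) %>%
  arrange(order, desc(sleep_total)) %>%
  filter(sleep_total >= 16)
```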
mutate()
The mutate()
function will add new columns to the tibble. Create a new column called rem_proportion, which is the ratio of rem sleep to total amount of sleep.
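For example (the select() is only there to make the new column easy to see):

```r
msleep %>%
  mutate(rem_proportion = sleep_rem / sleep_total) %>%
  select(sleep_total, sleep_rem, rem_proportion)
```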
You can add many new columns using mutate (separated by commas). Here we add a second column called bodywt_grams, which is the bodywt column converted to grams.
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-35", "source": [ "msleep %>%\n", " mutate(rem_proportion = sleep_rem / sleep_total,\n", " bodywt_grams = bodywt * 1000) %>%\n", " select(sleep_total, sleep_rem, rem_proportion, bodywt, bodywt_grams)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">summarise()
The summarise()
function will create summary statistics for a given column in the tibble, such as the mean. For example, to compute the average number of hours of sleep, apply the mean()
function to the column sleep_total and call the summary value avg_sleep.
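That looks like:

```r
msleep %>% summarise(avg_sleep = mean(sleep_total))
```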
There are many other summary statistics you could consider, such as sd()
, min()
, max()
, median()
, sum()
, n()
(returns the length of the vector), first()
(returns first value in vector), last()
(returns last value in vector) and n_distinct()
(number of distinct values in vector).
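Several of these can be combined in a single summarise() call; for example:

```r
msleep %>%
  summarise(avg_sleep = mean(sleep_total),
            min_sleep = min(sleep_total),
            max_sleep = max(sleep_total),
            total = n())
```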
group_by()
The group_by()
verb is an important function in dplyr. As we mentioned before, it’s related to the concept of “split-apply-combine”. We literally want to split the tibble by some variable (e.g. taxonomic order), apply a function to the individual tibbles and then combine the output.
Let’s do that: split the msleep tibble by the taxonomic order, then ask for the same summary statistics as above. We expect a set of summary statistics for each taxonomic order.
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-41", "source": [ "msleep %>%\n", " group_by(order) %>%\n", " summarise(avg_sleep = mean(sleep_total),\n", " min_sleep = min(sleep_total),\n", " max_sleep = max(sleep_total),\n", " total = n())" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">Most people want to slice and dice their data before plotting, so let’s demonstrate that quickly by plotting our last dataset.
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-43", "source": [ "library(ggplot2)\n", "msleep %>%\n", " group_by(order) %>%\n", " summarise(avg_sleep = mean(sleep_total),\n", " min_sleep = min(sleep_total),\n", " max_sleep = max(sleep_total),\n", " total = n()) %>%\n", " ggplot() + geom_point(aes(x=min_sleep, y=max_sleep, colour=order))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ ">Notice how we can just keep piping our data together; this makes it incredibly easy to experiment with our data and test out what filtering or summarisation we want and how that will plot in the end. If we wanted, or if the data processing were an especially computationally expensive step, we could save the result to an intermediate variable before playing around with plotting options, but for this small dataset that’s probably not necessary.
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "cell_type": "markdown", "id": "final-ending-cell", "metadata": { "editable": false, "collapsed": false }, "source": [ "# Key Points\n\n", "- dplyr and the tidyverse make it a lot easier to process data\n", "- The functions for selecting data are a lot easier to understand than R's built-in alternatives.\n", "\n# Congratulations on successfully completing this tutorial!\n\n", "Please [fill out the feedback on the GTN website](https://training.galaxyproject.org/training-material/topics/data-science/tutorials/r-dplyr/tutorial.html#feedback) and check there for further resources!\n" ] } ] }