# Analyses of metagenomics data - The global picture

### Overview

question Questions
• How to analyze metagenomics data?

• What information can be extracted from metagenomics data?

• What is the difference between amplicon and shotgun data?

• What are the differences between the analyses of amplicon and shotgun data?

objectives Objectives
• Choosing the best approach to analyze metagenomics data

• Selection of tools to analyze amplicon data or shotgun data

• Visualisation of a community structure

requirements Requirements
time Time estimation: 2 hours 30 minutes
Supporting Materials
last_modification Last modification: Jun 11, 2020

# Introduction

In metagenomics, information about micro-organisms in an environment can be extracted with two main techniques:

• Amplicon sequencing, which sequences only the rRNA genes (ribosomal DNA) of organisms
• Shotgun sequencing, which sequences full genomes of the micro-organisms in the environment

In this tutorial, we will introduce the two main types of analyses with their general principles and differences. For a more in-depth look at these analyses, we recommend our detailed tutorials on each analysis.

We will use two datasets (one amplicon and one shotgun) from the same project on the Argentinean agricultural pampean soils. In this project, three different geographic regions that are under different types of land uses and two soil types (bulk and rhizospheric) were analyzed using shotgun and amplicon sequencing. We will focus on data from the Argentina Anguil and Pampas Bulk Soil (the original study included one more geographical region).

### Agenda

In this tutorial, we will deal with:

1. Amplicon data
2. Shotgun metagenomics data

# Amplicon data

Amplicon sequencing is a highly targeted approach for analyzing genetic variation in specific genomic regions. In the metagenomics field, amplicon sequencing refers to capturing and sequencing the rRNA gene in a sample: the 16S gene for bacteria and archaea, or the 18S gene for eukaryotes.

### comment Background: The 16S ribosomal RNA gene

The 16S rRNA gene has several properties that make it ideally suited to our purposes:

1. Present in all living organisms
2. Highly conserved + highly variable regions
3. Huge reference databases

The highly conserved regions make it easy to target the gene across different organisms, while the highly variable regions allow us to distinguish between different species.

With amplicon data, we can determine which micro-organisms the sequences in our sample come from. This is called taxonomic assignment: we try to assign each sequence to a taxon and thereby extract the taxonomic composition of our sample.

In this analysis, we will use the mothur tool suite, but only a small portion of its tools and possibilities. To learn more in detail about how to use this tool, check out the full mothur tutorial.

## Importing the data

Our datasets come from soil samples in two different Argentinian locations, for which the 16S rDNA V4 region has been sequenced using 454 GS FLX Titanium. For this tutorial, the original fastq data have been downsampled and converted to fasta. The original data are available at EBI Metagenomics under run numbers SRR531818 and SRR651839.

1. Import the following files from Zenodo, or from the shared data library (in GTN - Material -> metagenomics -> Analyses of metagenomics data - The global picture):
• SRR531818_pampa.fasta
• SRR651839_anguil.fasta
https://zenodo.org/record/815875/files/SRR531818_pampa.fasta
https://zenodo.org/record/815875/files/SRR651839_anguil.fasta

• Open the Galaxy Upload Manager
• Select Paste/Fetch Data
• Paste the link into the text field
• Press Start

### tip Tip: Importing data from a data library

• Go into “Shared data” (top panel) then “Data libraries”
• Click on “Training data” and then “Analyses of metagenomics data”
• Select interesting file
• Click on “Import selected datasets into history”
• Import in a new history

By default, Galaxy uses the link as the dataset name, so rename the datasets.

### Preparing datasets

We will perform a multisample analysis with mothur. To do this, we will merge all reads into a single file, and create a group file, indicating which reads belong to which samples.

### hands_on Hands-on: Prepare multisample analysis

1. Merge.files tool with the following parameters
• “Merge” to fasta files
• “Inputs” to the two sample fasta files
2. Make.group tool with the following parameters
• “Method to create group file” to Manually specify fasta files and group names
• First element
• “fasta - Fasta to group” to SRR531818_pampa file
• “group - Group name” to pampa
• Second element (click on “Insert Additional”)
• “fasta - Fasta to group” to SRR651839_anguil file
• “group - Group name” to anguil

### comment Note

Because we only have a small number of samples, we used manual specification. With hundreds of samples this would quickly become tedious. The solution? Use a collection! To read more about collections in Galaxy, please see this tutorial

Have a look at the group file. It is a very simple file containing two columns: the first holds the read names, the second the group (sample) name, in our case pampa or anguil.
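The group file's logic is simple enough to sketch in a few lines of Python (an illustrative stand-in for Make.group, not mothur's actual code; the read headers below are made up):

```python
def make_group_file(fasta_groups):
    """Build a mothur-style group file: one line per read,
    "read_name<TAB>group_name". Illustrative sketch only."""
    lines = []
    for fasta_text, group in fasta_groups:
        for line in fasta_text.splitlines():
            if line.startswith(">"):
                # the read name is the header up to the first whitespace
                read_name = line[1:].split()[0]
                lines.append(read_name + "\t" + group)
    return "\n".join(lines)

pampa = ">read1 length=245\nACGTACGT\n>read2\nTTGACCAA\n"
anguil = ">read3\nCCAATTGG\n"
print(make_group_file([(pampa, "pampa"), (anguil, "anguil")]))
```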

### Optimization of files for computation

Because we are sequencing many of the same organisms, we anticipate that many of our sequences are duplicates of each other. Because it’s computationally wasteful to align the same thing a bazillion times, we’ll unique our sequences using the Unique.seqs command:

### hands_on Hands-on: Remove duplicate sequences

1. Unique.seqs tool with the following parameters
• “fasta” to the merged fasta file
• “output format” to Name File

### question Question

How many sequences were unique? How many duplicates were removed?

### solution Solution

19,502 unique sequences and 498 duplicates.

This can be determined from the number of lines in the fasta (or names) output, compared to the number of lines in the fasta file before this step.

The Unique.seqs tool outputs two files: a fasta file containing only the unique sequences, and a names file. The names file consists of two columns: the first contains the name of each unique (representative) sequence, and the second lists all other sequence names that are identical to that representative.

name          representatives
...
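The dereplication step can be sketched in Python (illustrative only; mothur's own implementation differs):

```python
def unique_seqs(records):
    """Dereplicate identical sequences, in the spirit of Unique.seqs.
    records: list of (read_name, sequence) pairs.
    Returns (unique_records, names_table) where names_table maps each
    representative read name to all read names with that sequence."""
    reps = {}  # sequence -> list of read names carrying it
    for name, seq in records:
        reps.setdefault(seq, []).append(name)
    uniques = [(names[0], seq) for seq, names in reps.items()]
    names_table = {names[0]: names for names in reps.values()}
    return uniques, names_table
```

The first read seen for each distinct sequence becomes the representative, exactly as in the names file above.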


### hands_on Hands-on: Count sequences

1. Count.seqs tool with the following parameters
• “name” to the name file from Unique.seqs
• “Use a group file” to yes
• “group” to the group file from Make.group

The Count.seqs file keeps track of the number of sequences represented by each unique representative across multiple samples. We will pass this file to many of the following tools to be used or updated as needed.
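The count table combines the names file with the group file; a hypothetical helper (not part of mothur) makes the bookkeeping concrete:

```python
def count_seqs(names_table, groups):
    """Per-group counts for each representative, like Count.seqs.
    names_table: representative read name -> list of identical read names
    groups: read name -> group (sample) name"""
    table = {}
    for rep, members in names_table.items():
        row = {}
        for read in members:
            group = groups[read]
            row[group] = row.get(group, 0) + 1
        table[rep] = row
    return table
```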

## Quality Control

The first step in any analysis should be to check and improve the quality of our data.

### comment Comment

First, let’s get a feel for our data:

### hands_on Hands-on: Summarize data

1. Summary.seqs tool with the following parameters
• “fasta” parameter to the fasta from Unique.seqs
• “count” to count table from Count.seqs
• “output logfile?” to yes

The summary output files give information per read. The logfile outputs also contain some summary statistics:

              Start    End        NBases     Ambigs   Polymer  NumSeqs
Minimum:      1        80         80         0        3        1
2.5%-tile:    1        104        104        0        3        501
25%-tile:     1        242        242        0        4        5001
Median:       1        245        245        0        4        10001
75%-tile:     1        245        245        0        4        15001
97.5%-tile:   1        247        247        0        6        19501
Maximum:      1        275        275        2        31       20000
Mean:         1        237.519    237.519    0.00495  4.24965
# of unique seqs:   19502
total # of seqs:    20000


This tells us that we have a total of 19,502 unique sequences, representing 20,000 total sequences that vary in length between 80 and 275 bases. Also, note that at least some of our sequences had ambiguous base calls. Furthermore, at least one read had a homopolymer stretch of 31 bases; this is likely a sequencing error, so we would like to filter such reads out as well.

If you are thinking that 20,000 is an oddly round number, you are correct; we downsampled the original datasets to 10,000 reads per sample for this tutorial to reduce the amount of time the analysis steps will take.

We can filter our dataset on length, base quality, and maximum homopolymer length using the Screen.seqs tool

The following tool will remove any sequences with ambiguous bases (maxambig parameter), homopolymer stretches of 9 or more bases (maxhomop parameter) and any reads longer than 275 bp or shorter than 225 bp.
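These criteria are easy to express directly; the sketch below mirrors the parameters we pass to Screen.seqs (an illustration of the filtering logic, not mothur's code):

```python
def max_homopolymer(seq):
    """Length of the longest run of a single base."""
    longest, run = 1, 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if prev == cur else 1
        longest = max(longest, run)
    return longest

def screen_seq(seq, minlength=225, maxlength=275, maxambig=0, maxhomop=8):
    """True if the read passes the same criteria we give Screen.seqs."""
    ambigs = sum(base not in "ACGT" for base in seq.upper())
    return (minlength <= len(seq) <= maxlength
            and ambigs <= maxambig
            and max_homopolymer(seq) <= maxhomop)
```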

### hands_on Hands-on: Filter reads based on quality and length

1. Screen.seqs tool with the following parameters
• “fasta” to the fasta file from Unique.seqs
• “minlength” parameter to 225
• “maxlength” parameter to 275
• “maxambig” parameter to 0
• “maxhomop” parameter to 8
• “count” to the count file from Count.seqs

### question Question

How many reads were removed in this screening step? (Hint: run the Summary.seqs tool again)

### solution Solution

1,804.

This can be determined by looking at the number of lines in the bad.accnos output of the Screen.seqs step, or by comparing the total number of seqs in the Summary.seqs logs before and after this screening step.

## Sequence Alignment

Aligning our sequences to a reference helps improve OTU assignment [Schloss et al.], so we will now align our sequences to a reference alignment of the V4 variable region of the 16S rRNA gene. This alignment has been created as described in [mothur’s MiSeq SOP] from the Silva reference database.

### hands_on Hands-on: Align sequences

1. Import the silva.v4.fasta file in your history

https://zenodo.org/record/815875/files/silva.v4.fasta

2. Align.seqs tool with the following parameters
• “fasta” to the good.fasta output from Screen.seqs
• “Select Reference Template from” to Your history
• “reference” to the silva.v4.fasta reference file
• “flip” to Yes

This step may take a few minutes, please be patient.

3. Summary.seqs tool with the following parameters
• “fasta” parameter to the aligned output from Align.seqs
• “count” parameter to count_table output from Screen.seqs

To get an idea of the quality of the alignment, we can view the log output from the summary step:

              Start    End      NBases   Ambigs   Polymer  NumSeqs
Minimum:      2391     10674    9        0        2        1
2.5%-tile:    3080     12071    234      0        4        455
25%-tile:     3080     13424    244      0        4        4545
Median:       3080     13424    245      0        4        9090
75%-tile:     3080     13424    245      0        4        13634
97.5%-tile:   3082     13424    246      0        6        17724
Maximum:      13396    13425    267      0        7        18178
Mean:         3080.6   13380    244.212  0        4.27946
# of unique seqs:   17698
total # of seqs:    18178


### question Questions

1. How many sequences have been aligned?
2. Between which positions do most of the reads align to this reference?

### solution Solution

1. 17,698 are aligned
2. From this we can see that most of our reads align nicely to positions 3080-13424 on this reference. This corresponds exactly to the V4 target region of the 16S gene.

To make sure that everything overlaps the same region we’ll re-run Screen.seqs to get sequences that start at or before position 3,080 and end at or after position 13,424.

### hands_on Hands-on: Remove poorly aligned sequences

1. Screen.seqs tool with the following parameters
• “fasta” to the aligned fasta file
• “start” to 3080
• “end” to 13424
• “count” to the group file created by the previous run of Screen.seqs

### question Question

How many sequences were removed in this step?

### solution Solution

4,579 sequences were removed. This is the number of lines in the bad.accnos output.

Now that we know our sequences overlap the same alignment coordinates, we want to make sure they only overlap that region. So we’ll filter the sequences to remove the overhangs at both ends. In addition, many columns in the alignment contain only external gap characters (i.e. “.”); these can be removed without losing any information (columns containing only internal gap characters, i.e. “-”, are kept). We’ll do all this with Filter.seqs:

### hands_on Hands-on: Filter sequences

1. Filter.seqs tool with the following parameters
• “fasta” to good.fasta output from Screen.seqs
• “trump” to .
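The effect of the trump parameter can be sketched as follows (illustrative; mothur’s Filter.seqs offers more options, such as the vertical filter):

```python
def filter_alignment(aligned_seqs, trump="."):
    """Drop every alignment column in which ANY sequence carries the
    trump character, like Filter.seqs with trump=. (sketch only).
    aligned_seqs: equal-length aligned sequences."""
    keep = [i for i in range(len(aligned_seqs[0]))
            if all(seq[i] != trump for seq in aligned_seqs)]
    return ["".join(seq[i] for i in keep) for seq in aligned_seqs]
```

Columns with internal gaps (“-”) survive; only columns touched by “.” in at least one read are removed.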

## Extraction of taxonomic information

The main questions when analyzing amplicon data are: Which micro-organisms are present in an environmental sample? In what proportions? What is the structure of the micro-organism community?

The idea is to take the sequences and assign them to taxa. To do that, we group (or cluster) sequences based on their similarity into Operational Taxonomic Units (OTUs): groups of similar sequences that can be treated as a single “genus” or “species” (depending on the clustering threshold)

### comment Background: Operational Taxonomic Units (OTUs)

In 16S metagenomics approaches, OTUs are clusters of similar sequence variants of the 16S rDNA marker gene sequence. Each of these clusters is intended to represent a taxonomic unit of a bacterial species or genus depending on the sequence similarity threshold. Typically, OTU clusters are defined by a 97% identity threshold of the 16S gene sequence variants at genus level. 98% or 99% identity is suggested for species separation.

The first thing we want to do is further de-noise our sequences from potential sequencing errors, by pre-clustering them with the Pre.cluster command, allowing up to 2 differences between sequences. This command splits the sequences by group, sorts them by abundance, and then, going from most abundant to least, identifies sequences that differ by no more than 2 nucleotides from one another; if so, they are merged. We generally recommend allowing 1 difference for every 100 basepairs of sequence:
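The greedy idea behind pre-clustering can be sketched like this (a simplification of mothur’s algorithm: no per-group split, and a naive distance function):

```python
def n_diffs(a, b):
    """Naive count of differences between two sequences."""
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def pre_cluster(seq_counts, diffs=2):
    """Walk from most to least abundant sequence and fold any sequence
    within `diffs` differences into the more abundant representative,
    in the spirit of Pre.cluster (illustrative sketch).
    seq_counts: sequence -> abundance."""
    merged = {}  # representative sequence -> total count
    for seq, count in sorted(seq_counts.items(), key=lambda kv: -kv[1]):
        for rep in merged:
            if n_diffs(seq, rep) <= diffs:
                merged[rep] += count
                break
        else:
            merged[seq] = count
    return merged
```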

### hands_on Hands-on: Perform preliminary clustering of sequences and remove undesired sequences

1. Pre.cluster tool with the following parameters
• “fasta” to the fasta output from the last Filter.seqs run
• “name file or count table” to the count table from the last Screen.seqs step
• “diffs” to 2

### question Question

How many unique sequences are we left with after this clustering of highly similar sequences?

### solution Solution

10,398

This is the number of lines in the fasta output

We would like to classify the sequences using a training set, which again is provided on [mothur’s MiSeq SOP].

### hands_on Hands-on: Classify the sequences into phylotypes

1. Import the trainset16_022016.pds.fasta and trainset16_022016.pds.tax in your history

https://zenodo.org/record/815875/files/trainset16_022016.pds.fasta
https://zenodo.org/record/815875/files/trainset16_022016.pds.tax

2. Classify.seqs tool with the following parameters

• “fasta” to the fasta output from Pre.cluster
• “Select Reference Template from” to History
• “reference” to trainset16_022016.pds.fasta from your history
• “Select Taxonomy from” to History
• “taxonomy” to trainset16_022016.pds.tax from your history
• “count” to the count table from Pre.cluster

This step may take a couple of minutes, now may be a good time to grab a cup of tea/coffee.

Have a look at the taxonomy output.

name    taxonomy
SRR651839.9109-HXY9DLL01BSHQO-2 Bacteria(99);Verrucomicrobia(99);Spartobacteria(99);unclassified;unclassified;unclassified;
SRR651839.11437-HXY9DLL01AMIPI-2    Bacteria(100);Verrucomicrobia(97);Spartobacteria(97);unclassified;unclassified;unclassified;
SRR651839.15884-HXY9DLL01BDG2F-2    Bacteria(99);unclassified;unclassified;unclassified;unclassified;unclassified;
SRR651839.12048-HXY9DLL01DSOWS-2    Bacteria(100);Verrucomicrobia(100);Spartobacteria(100);unclassified;unclassified;unclassified;
SRR651839.9029-HXY9DLL01E4W9Z-2 Bacteria(100);Verrucomicrobia(100);Spartobacteria(100);unclassified;unclassified;unclassified;
SRR651839.6283-HXY9DLL01CHX25-2 Bacteria(100);Acidobacteria(100);Acidobacteria_Gp4(100);Gp4(100);unclassified;unclassified;
SRR651839.3134-HXY9DLL01DOM67-2 Bacteria(100);Acidobacteria(100);Acidobacteria_Gp6(100);Gp6(100);unclassified;unclassified;
SRR531818.61708-G88ZSJI01AVPPR-2    Bacteria(100);Acidobacteria(99);Acidobacteria_Gp6(99);Gp6(99);unclassified;unclassified;


You will see that every read now has a classification.

The next step is to use this information to determine the abundances of the different taxa found. This consists of three steps:

1. first, all individual sequences are classified, and get assigned a confidence score (0-100%)
2. next, sequences are grouped at 97% identity threshold (not using taxonomy info)
3. finally, for each cluster, a consensus classification is determined based on the classification of the individual sequences and taking their confidence scores into account
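The consensus part (step 3) can be sketched as a count-weighted majority vote down the taxonomic ranks (a simplification of what Classify.otu does; the threshold and weighting here are assumptions for illustration):

```python
from collections import defaultdict

def consensus_taxonomy(assignments, threshold=51):
    """Majority-rule consensus over a cluster's per-read classifications,
    weighted by read count. Stops at the first rank whose best candidate
    falls below `threshold` percent agreement (illustrative sketch).
    assignments: list of (tuple of taxonomy levels, read count)."""
    total = sum(count for _, count in assignments)
    consensus = []
    for level in range(max(len(tax) for tax, _ in assignments)):
        votes = defaultdict(int)
        for tax, count in assignments:
            if level < len(tax):
                votes[tax[level]] += count
        name, n = max(votes.items(), key=lambda kv: kv[1])
        pct = 100 * n // total
        if pct < threshold:
            break
        consensus.append(f"{name}({pct})")
    return ";".join(consensus)
```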

### hands_on Hands-on: Assign sequences to OTUs

1. Cluster.split tool with the following parameters
• “Split by” to Classification using fasta
• “fasta” to the fasta output from Pre.cluster
• “taxonomy” to the taxonomy output from Classify.seqs
• “count” to the count table output from Pre.cluster
• “Clustering method” to Average Neighbour
• “cutoff” to 0.15

We obtain a table in which the columns are the different identified OTUs, the rows are the different distances, and the cells contain the ids of the sequences assigned to each OTU at each distance.

Next we want to know how many sequences are in each OTU from each group with a distance of 0.03 (97% similarity). We can do this using the Make.shared command with the 0.03 cutoff level:

### hands_on Hands-on: Estimate OTU abundance

1. Make.shared tool with the following parameters
• “Select input type” to OTU list
• “list” to list output from Cluster.split
• “name file or count table” to the count table from Pre.cluster
• “label” to 0.03

We probably also want to know the taxonomy for each of our OTUs. We can get the consensus taxonomy for each OTU using the Classify.otu command:

### hands_on Hands-on: Classify the OTUs

1. Classify.otu tool with the following parameters
• “list” to output from Cluster.split
• “count” to the count table from Pre.cluster
• “Select Taxonomy from” to History
• “taxonomy” to the taxonomy output from Classify.seqs
• “label” to 0.03

### question Questions

1. How many OTUs with taxonomic assignation are found for the Anguil sample? And for the Pampa sample?
2. What is the annotation of the first OTU and its size?

### solution Solution

1. 2,195 for Anguil and 2,472 for Pampa (“tax.summary”)
2. Otu00001 is associated with 929 sequences and assigned to Bacteria (kingdom), Verrucomicrobia (phylum), Spartobacteria (class) in the “taxonomy” file

## Visualization

We have now determined our OTUs and classified them, but looking at a long text file is not very informative. Let’s visualize our data using Krona:

### hands_on Hands-on: Krona

First we convert our mothur taxonomy file to a format compatible with Krona

• Taxonomy-to-Krona tool with the following parameters
• “Taxonomy file” to the taxonomy output from Classify.otu (note: this is a collection input)
• Krona pie chart tool with the following parameters
• “Type of input” to Tabular
• “Input file” to taxonomy output from Taxonomy-to-Krona (collection)

The result is an HTML file with an interactive visualization, for instance try clicking on one of the rings in the image or playing around with some of the settings.

This produced a single plot for both your samples, but what if you want to compare the two samples?

### hands_on Hands-on: Per-sample Krona plots

1. Classify.otu tool

Hit the rerun button on the Classify.otu job in your history and see if you can find settings that will give you per-sample taxonomy data

2. Krona tool

Now use this new output collection to create per-sample Krona plots

In this new Krona output you can switch between the combined plot and the per-sample plots via the selector in the top-left corner.

### question Question

Which soil sample had a higher percentage of Acidobacteria, anguil or pampa? What were the respective percentages?

### solution Solution

The anguil sample had a higher proportion of Acidobacteria. The exact percentages can be found by looking at the pie charts at the top right-hand corner after clicking on the label Acidobacteria. For anguil the percentage is 36%, for the pampa sample it is 26%.

To further explore the community structure, we can visualize it with dedicated tools such as Phinch.

### hands_on Hands-on: Visualization of the community structure with Phinch

1. Make.biom tool with the following parameters
• “shared” to Make.shared output
• “constaxonomy” to taxonomy output from the first run of Classify.otu (collection)
2. Expand the dataset and click on the “view biom at phinch” link

### comment Comment

3. Play with the several interactive visualisations

Once we have information about the community structure (OTUs with taxonomic assignments), we can do more analysis on it: estimation of the diversity of micro-organisms, comparison of diversity between samples, analysis of populations, etc. We will not go into detail about such analyses here, but you can follow our tutorials on amplicon data analysis to learn more about them.

# Shotgun metagenomics data

In the previous section, we saw how to analyze amplicon data to extract the community structure. Such information can also be extracted from shotgun metagenomic data.

In shotgun sequencing, the full genomes of the micro-organisms in the environment are sequenced (not only the 16S or 18S gene). We then have access to the rRNA genes (only a small part of each genome), but also to all the other genes of the micro-organisms. Using this information, we can try to answer questions such as “What are the micro-organisms doing?” in addition to the question “What micro-organisms are present?”.

In this second part, we will use a metagenomic sample of the Pampas Soil (SRR606451).

1. Create a new history
2. Import the SRR606451_pampa Fasta file from Zenodo or from the data library (in “Analyses of metagenomics data”)

https://zenodo.org/record/815875/files/SRR606451_pampa.fasta


## Extraction of taxonomic information

As for amplicon data, we can extract taxonomic and community structure information from shotgun data. Different approaches can be used:

• Same approach as for amplicon data with identification and classification of OTUs

Such an approach requires a first step of sequence sorting to extract only the 16S and 18S sequences, followed by the same tools as for amplicon data. However, rRNA sequences represent a low proportion (< 1%) of shotgun sequences, so this approach is not the most statistically robust

• Taxonomic assignment of all sequences using databases of marker genes

In this tutorial, we use the second approach with MetaPhlAn2. This tool uses a database of ~1 million unique clade-specific marker genes (not only the rRNA genes) identified from ~17,000 reference genomes (bacterial, archaeal, viral and eukaryotic).

### hands_on Hands-on: Taxonomic assignation with MetaPhlAn2

1. MetaPhlAN2 tool with
• “Input file” to the imported file
• “Database with clade-specific marker genes” to locally cached
• “Cached database with clade-specific marker genes” to MetaPhlAn2 clade-specific marker genes

This step may take a couple of minutes.

3 files are generated:

• A tabular file with the community structure

  #SampleID   Metaphlan2_Analysis
k__Bacteria 100.0
k__Bacteria|p__Proteobacteria   86.20712
k__Bacteria|p__Actinobacteria   13.79288
k__Bacteria|p__Proteobacteria|c__Gammaproteobacteria    86.20712
k__Bacteria|p__Actinobacteria|c__Actinobacteria 13.79288


Each line contains a taxon and its relative abundance in our sample. The file starts with high-level taxa (kingdom: k__) and proceeds to more precise taxa.

• A BIOM file with the same information as the previous file but in BIOM format

It can then be used by mothur and other tools requiring community structure information in BIOM format

• A SAM file with the results of the mapping of the sequences on the reference database
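The tabular community profile is straightforward to parse. This sketch assumes the tab-separated, two-column layout of the real MetaPhlAn2 output (the function name is ours, purely for illustration):

```python
def abundances_at_level(profile, prefix):
    """Pull relative abundances for one taxonomic rank out of a
    MetaPhlAn2 community profile (text of the tabular output).
    prefix is the rank marker, e.g. "p__" for phylum."""
    result = {}
    for line in profile.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip the header and blank lines
        clade, abundance = line.split("\t")
        last = clade.split("|")[-1]  # most precise rank on this line
        if last.startswith(prefix):
            result[last[len(prefix):]] = float(abundance)
    return result
```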

### question Questions

1. What is the most precise level we have access to with MetaPhlAn2?
2. What are the two orders found in our sample?
3. What is the most abundant family in our sample?

### solution Solution

2. Pseudomonadales and Solirubrobacterales are found in our sample
3. The most abundant family is Pseudomonadaceae with 86.21 % of the assigned sequences

Even if the output of MetaPhlAn2 is a bit easier to parse than the BIOM file, we still want to visualize and explore the community structure with KRONA

### hands_on Hands-on: Interactive visualization with KRONA

1. Format MetaPhlAn2 output for Krona tool with
• “Input file” to Community profile output of MetaPhlAn2
2. KRONA pie chart tool with
• “What is the type of your input data” as MetaPhlan
• “Input file” to the output of Format MetaPhlAn2

## Extraction of functional information

We would now like to answer the question “What are the micro-organisms doing?” or “Which functions are performed by the micro-organisms in the environment?”.

In the shotgun data, we have access to the gene sequences from the full genome. We use that to identify the genes, associate them with a function, build pathways, etc., to investigate the functional part of the community.

### hands_on Hands-on: Metabolism function identification

1. HUMAnN2 tool with
• “Input sequence file” to the imported sequence file
• “Use of a custom taxonomic profile” to Yes
• “Taxonomic profile file” to Community profile output of MetaPhlAn2
• “Nucleotide database” to Locally cached
• “Nucleotide database” to Full
• “Protein database” to Locally cached
• “Protein database” to Full UniRef50
• “Search for uniref50 or uniref90 gene families?” to uniref50
• “Database to use for pathway computations” to MetaCyc
• “Remove stratification from output” to Yes

This step is long so we generated the output for you!

2. Import the 3 files whose names start with “humann2”

https://zenodo.org/record/815875/files/humann2_gene_families_abundance.tsv
https://zenodo.org/record/815875/files/humann2_pathways_abundance.tsv
https://zenodo.org/record/815875/files/humann2_pathways_coverage.tsv


HUMAnN2 generates 3 files

• A file with the abundance of gene families

Gene family abundance is reported in RPK (reads per kilobase) units to normalize for gene length. It reflects the relative gene (or transcript) copy number in the community.

The “UNMAPPED” value is the total number of reads which remain unmapped after both alignment steps (nucleotide and translated search). Since other gene features in the table are quantified in RPK units, “UNMAPPED” can be interpreted as a single unknown gene of length 1 kilobase recruiting all reads that failed to map to known sequences.

• A file with the coverage of pathways

Pathway coverage provides an alternative description of the presence (1) and absence (0) of pathways in a community, independent of their quantitative abundance.

• A file with the abundance of pathways
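As a reminder of what the RPK unit means, the normalization by gene length can be written as:

```python
def rpk(read_count, gene_length_bp):
    """Reads per kilobase: divide a gene family's read count by the
    gene length in kilobases, so long genes don't dominate the table."""
    return read_count / (gene_length_bp / 1000)
```

For example, 200 reads mapped to a 2,000 bp gene give the same RPK as 100 reads mapped to a 1,000 bp gene.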

### question Questions

How many gene families and pathways have been identified?

### solution Solution

44 gene families but no pathways are identified

The RPK values for the gene families are quite difficult to interpret in terms of relative abundance, so we will normalize the values

### hands_on Hands-on: Normalize the gene family abundances

1. Renormalize a HUMAnN2 generated table tool with
• “Gene/pathway table” to the gene family table generated with HUMAnN2
• “Normalization scheme” to Relative abundance
• “Normalization level” to Normalization of all levels by community total
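The “Relative abundance” scheme divides each value by the community total; a minimal sketch:

```python
def renormalize(rpk_values):
    """Convert RPK values to relative abundances summing to 1
    (illustrative sketch of the 'Relative abundance' scheme).
    rpk_values: gene family -> RPK value."""
    total = sum(rpk_values.values())
    return {gene: value / total for gene, value in rpk_values.items()}
```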

### question Questions

1. What percentage of sequences has not been assigned to a gene family?
2. What is the most abundant gene family?

### solution Solution

1. 55% of the sequences have not been assigned to a gene family
2. The most abundant gene family, with 25% of the sequences, is a putative secreted protein

With the previous analyses, we investigated “Which micro-organisms are present in my sample?” and “What functions are performed by the micro-organisms in my sample?”. We can go further (for example, by combining functional and taxonomic results). We did not cover that in this tutorial, but you can find more in our tutorials on shotgun metagenomic data analyses.

# Conclusion

We can summarize the analyses with amplicon and shotgun metagenomic data:

Both analyses are quite complex! However, in this tutorial, we only showed simple cases of metagenomics data analysis with subsets of real data.

Check out our other tutorials to learn in more detail how to analyze metagenomics data.

### keypoints Key points

• With amplicon data, we can extract information about the studied community structure

• With shotgun data, we can extract information about the studied community structure and also the functions realised by the community

• The tools used to analyze amplicon and shotgun data are different, except for the visualisation

• Metagenomics data analyses are complex and time-consuming

# Useful literature

Further information, including links to documentation and original publications, regarding the tools, analysis techniques and the interpretation of results described in this tutorial can be found here.

# Feedback

Did you use this material as an instructor? Feel free to give us feedback on how it went.

# Citing this Tutorial

1. Saskia Hiltemann, Bérénice Batut, 2020 Analyses of metagenomics data - The global picture (Galaxy Training Materials). https://training.galaxyproject.org/archive/2021-07-01/topics/metagenomics/tutorials/general-tutorial/tutorial.html Online; accessed TODAY
2. Batut et al., 2018 Community-Driven Data Analysis Training for Biology Cell Systems 10.1016/j.cels.2018.05.012

### details BibTeX

@misc{metagenomics-general-tutorial,
author = "Saskia Hiltemann and Bérénice Batut",
title = "Analyses of metagenomics data - The global picture (Galaxy Training Materials)",
year = "2020",
month = "06",
day = "11",
url = "\url{https://training.galaxyproject.org/archive/2021-07-01/topics/metagenomics/tutorials/general-tutorial/tutorial.html}",
note = "[Online; accessed TODAY]"
}
@article{Batut_2018,
doi = {10.1016/j.cels.2018.05.012},
url = {https://doi.org/10.1016%2Fj.cels.2018.05.012},
year = 2018,
month = {jun},
publisher = {Elsevier {BV}},
volume = {6},
number = {6},
pages = {752--758.e1},
author = {B{\'{e}}r{\'{e}}nice Batut and Saskia Hiltemann and Andrea Bagnacani and Dannon Baker and Vivek Bhardwaj and Clemens Blank and Anthony Bretaudeau and Loraine Brillet-Gu{\'{e}}guen and Martin {\v{C}}ech and John Chilton and Dave Clements and Olivia Doppelt-Azeroual and Anika Erxleben and Mallory Ann Freeberg and Simon Gladman and Youri Hoogstrate and Hans-Rudolf Hotz and Torsten Houwaart and Pratik Jagtap and Delphine Larivi{\`{e}}re and Gildas Le Corguill{\'{e}} and Thomas Manke and Fabien Mareuil and Fidel Ram{\'{\i}}rez and Devon Ryan and Florian Christoph Sigloch and Nicola Soranzo and Joachim Wolff and Pavankumar Videm and Markus Wolfien and Aisanjiang Wubuli and Dilmurat Yusuf and James Taylor and Rolf Backofen and Anton Nekrutenko and Björn Grüning},
title = {Community-Driven Data Analysis Training for Biology},
journal = {Cell Systems}
}
`