Functionally Assembled Terrestrial Ecosystem Simulator (FATES)

Creative Commons License: CC-BY

Questions:
  • How to run CLM-FATES with the CLM-FATES Galaxy tool?

  • How to upload input data for running CLM-FATES?

  • How to customize your runs?

  • How to analyze your model outputs?

  • How to create a workflow?

  • How to share your workflow?

Objectives:

  • Setting up a CLM-FATES case.

  • Customizing your run.

  • Interactive visualization with Panoply.

  • Automating your analyses and visualisations of your CLM-FATES case.

  • Creating multi-case scenarios.

  • Composing, executing and publishing a CLM-FATES workflow.

Time estimation: 4 hours
Published: Oct 25, 2020
Last modification: Jun 14, 2024
License: Tutorial Content is licensed under Creative Commons Attribution 4.0 International License. The GTN Framework is licensed under MIT
Revision: 17

Terrestrial ecosystem models have been widely used in the climate modelling community to study the impact of climate change on vegetation and terrestrial biogeochemical cycles. They are also increasingly applied in ecological studies to help ecologists better understand the underlying processes. However, the technical challenges remain too high for most ecologists to use them. This practical aims at familiarizing you (ecologists in particular) with running a terrestrial ecosystem model (namely, CLM-FATES) at site level in Galaxy and analyzing the model results. It will also teach you how to create a Galaxy workflow for your site-level CLM-FATES simulations to make your research fully reproducible. We hope this tutorial will promote the use of CLM-FATES and other terrestrial ecosystem models by a broader community.


In this tutorial, we will cover:

  1. Get CLM-FATES input data
  2. Setting up a CLM-FATES simulation
  3. Quick visualization with Panoply
    1. Opening up Panoply
    2. Inspect metadata
  4. Using Galaxy tools for analysing your CLM-FATES simulation
  5. Convert your analysis history into a Galaxy workflow
  6. Change your CLM-FATES case and rerun your workflow
  7. Share your work
  8. Conclusion
Comment: Background

FATES is the “Functionally Assembled Terrestrial Ecosystem Simulator”, a vegetation demographic model (Fisher et al. 2017). FATES needs what we call a “Host Land Model” (HLM) to run; in this tutorial we will be using the Community Land Model of the Community Terrestrial Systems Model (CLM-CTSM). FATES was derived from the CLM Ecosystem Demography model (CLM(ED)), which was documented in “Taking off the training wheels: the properties of a dynamic vegetation model without climate envelopes, CLM4.5(ED)” (2015) and in Koven et al. 2020; the FATES technical note was first published as an appendix to the former paper. The FATES documentation provides further insight into FATES too.

Get CLM-FATES input data

Preparing CLM-FATES input data is out of scope for this tutorial. We assume the input data tarball contains the following folders:

atm   cpl   lnd   share

Each sub-folder contains the necessary inputs for running your CLM-FATES case. For instance, ‘atm’ contains all the meteorological forcing data, while ‘lnd’ contains the data required to describe surface conditions (e.g., soil depth) for the model. More details about the model input data can be found in the CLM and FATES documentation. For the purpose of this tutorial, input data for a single-point location (ALP1) in the Norwegian alpine tundra ecosystem (Latitude: 61.0243N, Longitude: 8.12343E, Elevation: 1208 m) has been prepared and is ready to use. This site is included in the modelling platform developed under the EMERALD project. More details about the sites can be found in Klanderud et al. 2015 and Vandvik et al. 2020.
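Before uploading, it can be useful to sanity-check a tarball's top-level layout. The sketch below (folder contents and helper name are ours, for illustration only) builds a tiny in-memory tarball with the four expected folders and verifies them using only the Python standard library:

```python
# Sanity-check the top-level folder structure of a CLM-FATES input tarball.
# A tiny in-memory tarball stands in for the real (much larger) input data.
import io
import tarfile

EXPECTED_TOP_LEVEL = {"atm", "cpl", "lnd", "share"}

def top_level_dirs(tar_bytes):
    """Return the set of top-level folder names inside a tarball."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        return {m.name.split("/")[0] for m in tar.getmembers()}

# Build a minimal demo tarball with the expected layout (file names invented).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for member in ("atm/forcing.txt", "cpl/cpl.txt", "lnd/surface.txt", "share/domain.txt"):
        data = b"demo"
        info = tarfile.TarInfo(name=member)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

missing = EXPECTED_TOP_LEVEL - top_level_dirs(buf.getvalue())
print("missing folders:", missing or "none")
```

The same `top_level_dirs` check could be pointed at a downloaded tarball by reading its bytes from disk.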

Hands-on: Data upload
  1. Create a new history for this tutorial. If you are not inspired, you can name it fates.

    To create a new history simply click the new-history icon at the top of the history panel:

    UI for creating new history

  2. Import the input data and the restart dataset from Zenodo or from the shared data library. The restart dataset is used if you want to initialize the model from existing experiments rather than running the model from a cold start, which shortens the spin-up time needed for the model.
    • Copy the link location
    • Click galaxy-upload Upload Data at the top of the tool panel

    • Select galaxy-wf-edit Paste/Fetch Data
    • Paste the link(s) into the text field

    • Press Start

    • Close the window

    As an alternative to uploading the data from a URL or your computer, the files may also have been made available from a shared data library:

    1. Go into Shared data (top panel) then Data libraries
    2. Navigate to the correct folder as indicated by your instructor.
      • On most Galaxy servers, tutorial data will be provided in a folder named GTN - Material –> Topic Name -> Tutorial Name.
    3. Select the desired files
    4. Click on Add to History galaxy-dropdown near the top and select as Datasets from the dropdown menu
    5. In the pop-up window, choose

      • “Select history”: the history you want to import the data to (or create a new one)
    6. Click on Import

  3. Check the datatype (for both files) is tar

    • Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
    • In the central panel, click galaxy-chart-select-data Datatypes tab on the top
    • In the galaxy-chart-select-data Assign Datatype, select datatypes from “New type” dropdown
      • Tip: you can start typing the datatype into the field to filter the dropdown menu
    • Click the Save button

  4. Rename galaxy-pencil datasets

    • Dataset names are the full URL, but this is not very nice to work with, and can even give errors for some tools
    • It is good practice to change the dataset names to something more meaningful and without any special characters
      • E.g. by stripping off the beginning of the URL
    • Example: rename to inputdata_version2.0.0_ALP1.tar
    • Do the same for the other dataset
    • Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
    • In the central panel, change the Name field
    • Click the Save button

Setting up a CLM-FATES simulation

We will be using the CTSM/FATES-EMERALD Galaxy tool. This tool is based on the version of CLM-FATES that has been adapted to run at the sites included in the EMERALD project. More details about this model version can be found in README_fates_emerald_api.

Comment: Tip: Finding your tool

Different Galaxy servers may have tools available under different sections, therefore it is often useful to use the search bar at the top of the tool panel to find your tool.

Additionally different servers may have multiple, similarly named tools which accomplish similar functions. When following tutorials, you should use precisely the tools that they describe. For real analyses, however, you will need to search among the various options to find the one that works for you.

Comment: Tip: Pre-selected tool parameters

When selecting a tool, Galaxy will pre-fill the tool parameters, selecting the first dataset with the corresponding type in your history. Be aware that very often this default pre-selection is incorrect and does not correspond to the required dataset, so always check the tool parameters and update them accordingly!

Hands-on: Creating a new CTSM/FATES-EMERALD case
  1. CTSM/FATES-EMERALD ( Galaxy version 2.0.1) with the following parameters:
    • param-file “inputdata for running FATES EMERALD”: inputdata_version2.0.0_ALP1.tar file from your history
    • “Name of your case”: ALP1_exp
    • In section “Customize the model run period”:
      • param-select “Determines the model run initialization type”: hybrid
        • “Reference case for hybrid or branch runs”: ALP1_refcase
        • “Reference date for hybrid or branch runs (yyyy-mm-dd)”: 2300-01-01
        • “Run start date (yyyy-mm-dd). Only used for startup or hybrid runs”: 0001-01-01
        • param-file “Restart for running FATES EMERALD”: CTSM_FATES-EMERALD_version2.0.0_ALP1_restart_2300-01-01.tar
      • “Provides a numerical count for STOP_OPTION”: 5
      • “Sets the run length along with STOP_N and STOP_DATE”: nyears
    Comment: Startup versus Hybrid

    When using startup, the FATES model will start from some arbitrary baseline state that is not linked to any previous run. Startup runs are typically initialized with a start date of 0001-01-01, unless you change it (start date option). For any scientific study, starting from an arbitrary baseline state implies you would need to run the model for a long period (between 100 and 200 years) before being able to use the model outputs. For this reason, we usually make a first simulation (spin-up) in startup mode and reuse this case as a baseline for our scientific study. We then use the hybrid run type and give additional inputs (restart files) to our simulation case. It is then important to specify the dates of your restart files. This is what we do in this tutorial.

  2. Check that the datatype galaxy-pencil of your outputs (history file) is netcdf
    • If this is not the case, please change the datatype now
    Comment: About CLM-FATES history files

    All the CLM-FATES history files are organized in a collection.

    Comment: About datatypes

    All the history files contain gridded data values written at specified times during the model run. Depending on the length of your simulation, you may have one or more history files, which you can recognize from their names (for non-monthly history files). Datatypes are, by default, guessed automatically. Here, even though the file extension is .nc, the format is not always recognized as netcdf. To cope with that, you can change the datatype manually, as shown below.

    • Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
    • In the central panel, click galaxy-chart-select-data Datatypes tab on the top
    • In the galaxy-chart-select-data Assign Datatype, select datatypes from “New type” dropdown
      • Tip: you can start typing the datatype into the field to filter the dropdown menu
    • Click the Save button

  3. Rename galaxy-pencil the output dataset (history file) to

    Our FATES model has run for 5 years only, so we get a single output file. As previously, we recommend renaming all netCDF files so that they do not contain any special characters, slashes, or dots (except for the file extension). Some tools, in particular Panoply, won’t be able to recognize your file if it is not named properly.

    • Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
    • In the central panel, change the Name field
    • Click the Save button

  4. NetCDF xarray Metadata Info ( Galaxy version 0.15.1) to get metadata information for CLM-FATES netCDF outputs:
    • param-file “Netcdf file”:
  5. Inspect galaxy-eye the generated output files
    • Identify which variables would provide you some insights about canopy transpiration.
    Question:
    1. What are the short names of the relevant variables? Which one would you pick if you want a result in mm/s?
    2. What are the dimensions of these variables?

    Solution:
    1. FCTR is the canopy transpiration in W/m^2 and QVEGT is the canopy transpiration in mm/s. Therefore, we would select the latter.
    2. These variables are stored as a function of time and lndgrid; since we have only one grid cell (lndgrid=1), each is in effect a time series.
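The two variables are linked by a simple unit conversion: FCTR is a latent-heat flux, so dividing by the latent heat of vaporization of water turns W/m^2 into kg/m^2/s, which for liquid water is numerically equal to mm/s. A minimal sketch (using a constant latent heat, a common approximation that is our assumption, not a value taken from the model):

```python
# Convert a latent-heat flux (W/m^2), such as FCTR, into a water flux (mm/s),
# comparable to QVEGT. Assumes the flux is entirely transpiration and uses a
# constant latent heat of vaporization.
LAMBDA_V = 2.5e6  # latent heat of vaporization of water, J/kg (approximate)

def latent_heat_flux_to_mm_per_s(flux_w_m2):
    # W/m^2 divided by J/kg gives kg/m^2/s; 1 kg of water spread over 1 m^2
    # forms a 1 mm layer, so kg/m^2/s is numerically equal to mm/s.
    return flux_w_m2 / LAMBDA_V

# Example: a midday canopy flux of 250 W/m^2.
print(latent_heat_flux_to_mm_per_s(250.0))
```

This is why the two variables show the same temporal pattern up to a near-constant scale factor.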

Quick visualization with Panoply

Opening up Panoply

Hands-on: Launch Panoply

Panoply plots geo-referenced and other arrays from netCDF files. It is available as a Galaxy interactive tool, but may not be offered on all Galaxy servers.

Currently, Panoply in Galaxy is available on the European Galaxy instance, in the “Interactive tools” section of the tool panel or, like all interactive tools, from the dedicated subdomain. You may have to log in again (use the same username and password as on other subdomains) and switch to the correct history.

You can access the tool by clicking here to launch it on EU

  1. Open the Panoply
  2. Check dataset selected in the netcdf input field
  3. Click Run Tool
  4. The tool will start running and will stay running permanently
  5. Click on the “User” menu at the top and go to “Active Interactive Tools” and locate the Panoply instance you started.
  6. Click on your Panoply instance
    Panoply dataset selection. Open image in new tab

    Figure 1: Select dataset
  7. Click on dataset

Inspect metadata

Hands-on: Inspect dataset
  1. Inspect dataset content

    Here you can look at the dataset and its variables (FSDS, FSA, AREA_TREE, BIOMASS_CANOPY, etc.)

    Question:
    1. What is the long name of MORTALITY?
    2. What is its physical unit?

    Solution:
    1. Rate of total mortality per PFT (Plant Functional Type)
    2. indiv/ha/yr
  2. Plot the total carbon in live plant leaves (LEAFC)

    Customize your plot and save it as a PNG file in the output folder. Remember that if you do not save to the output folder, your plot will be lost.

    Question: Can you observe any pattern? Does it make any sense?

    Solution: We can clearly see a seasonal cycle.
    Panoply LEAFC timeseries. Open image in new tab

    Figure 2: LEAFC
  3. Plot the rate of total mortality per PFT (MORTALITY)

    Select a 2D plot with time as x-axis and colored by the rate of total mortality per PFT (Plant functional type). Make sure to adjust the y-axis and save your plots in the output folder (as png file).

    Question: Can you observe any pattern? Does it make any sense?

    Solution: We can clearly see a seasonal cycle for PFT2.
    Panoply MORTALITY per PFT. Open image in new tab

    Figure 3: total mortality per PFT
    Comment: Quit Panoply properly to save your plots!

    To make sure all the plots stored in the outputs folder get exported to Galaxy, you need to quit Panoply properly: File –> Quit Panoply.

Using Galaxy tools for analysing your CLM-FATES simulation

Panoply is a great tool for exploring the results of your simulations, but we would also like to automate the generation of the plots so that we can reuse them for any simulation.

Hands-on: Select and plot LEAFC
  1. NetCDF xarray Selection ( Galaxy version 0.15.1) to select the total carbon in live plant leaves (LEAFC)
    • param-file “Input netcdf file”:
    • param-file “Tabular of variables”: Metadata info from (output of NetCDF xarray Metadata Info tool)
    • param-select “Choose the variable to extract”: LEAFC
  2. Rename galaxy-pencil dataset to NetCDF xarray Selection on

    • Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
    • In the central panel, change the Name field
    • Click the Save button

  3. Replace parts of text ( Galaxy version 1.1.3) to clean date column for plotting:
    • param-file “File to process”: NetCDF xarray Selection on
    • param-text “Find pattern”: 00:00:00
    • “Find-Pattern is a regular expression”: No
    • “Replace all occurences of the pattern”: Yes
    • “Case-Insensitive search”: No
    • “Find whole-words”: Yes
    • “Ignore first line”: Yes
    • param-select “Find and Replace text in”: entire line
  4. Rename galaxy-pencil dataset to LEAFC_clean.tabular

    • Click on the galaxy-pencil pencil icon for the dataset to edit its attributes
    • In the central panel, change the Name field
    • Click the Save button

  5. Scatterplot w ggplot2 tool to plot the total carbon in live plant leaves (LEAFC):
    • param-file “Input in tabular format”: LEAFC_clean.tabular
    • “Column to plot on x-axis”: 1
    • “Column to plot on y-axis”: 4
    • “Plot title”: Total carbon in live plant leaves
    • “Label for x axis”: Time
    • “Label for y axis”: LEAFC (kgC ha-1)
    • In Advanced Options
      • param-select “Type of plot”: Points and Lines
    • In Output options
      • “width of output”:19.0
      • “height of output”: 5.0
  6. View galaxy-eye the resulting plot:

    Snapshot of LEAFC resulting plot.
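The Replace-parts-of-text step above simply strips the time-of-day suffix from the date column so the plotting tool can parse it. The same cleanup can be sketched in plain Python (sample rows are invented; we include the leading space in the pattern so no stray space is left behind):

```python
# Strip the " 00:00:00" time suffix from the date column of a tabular export,
# keeping the header line untouched (mirroring the tool settings: replace all
# occurrences, ignore first line). The rows below are invented placeholders.
rows = [
    "time\tlndgrid\tlevgrnd\tLEAFC",
    "0001-01-01 00:00:00\t1\t1\t12.5",
    "0001-02-01 00:00:00\t1\t1\t13.1",
]

# Keep the header as-is; clean every data line.
cleaned = [rows[0]] + [line.replace(" 00:00:00", "") for line in rows[1:]]
for line in cleaned:
    print(line)
```

After this step, the first column contains plain yyyy-mm-dd dates, which is what the scatterplot tool expects on its x-axis.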

Convert your analysis history into a Galaxy workflow

Hands-on: Extract workflow
  1. Go to the History Options menu galaxy-gear menu
    • Select the Extract Workflow option.
    • Remove any unwanted steps, in particular all steps with Panoply, as we do not want interactive tools in our automated workflow.
  2. Rename the workflow to something descriptive
    • For example: CLM-FATES_ ALP1 simulation (5 years).
    • If there are any steps that shouldn’t be included in the workflow, you can uncheck them.
  3. Click “Create Workflow”
    • Click on “edit” and check your workflow
    • Check all the steps

Change your CLM-FATES case and rerun your workflow

We would like to run a CLM-FATES case where the atmospheric carbon dioxide (CO2) concentration is increased by a factor of four (from 367.0 to 1468.0 umol/mol).

Hands-on: Compare the two simulations

Using the results from your two CLM-FATES simulations and the generated plots, assess the impact of an increase in atmospheric CO2 on the outputs of the model.

  1. Open the workflow editor

    1. In the top menu bar, click on Workflows
    2. Click on the name of the workflow you want to edit Workflow drop down menu showing Edit option
    3. Select galaxy-wf-edit Edit from the dropdown menu to open the workflow in the workflow editor

  2. Edit your workflow and customize it to run your new CO2 experiment. For this you would need to:

    • In “Advanced customization”, change “Atmospheric CO2 molar ratio (by volume) only used when co2_type==constant (umol/mol)” from 367.0 to 1468.0.
    • Add an extra step to extract the first history file from the history collection: Extract Dataset and make sure to select “netcdf” in the change datatype field.
    • Generate the corresponding plot. The final workflow would be similar to the one shown below:
    Snapshot of FATES workflow. Open image in new tab

    Figure 4: FATES workflow
    Question:
    1. Is the model response to this significant increase of atmospheric CO2 what you expected? Justify your answer.
    2. Is the current workflow (in particular the variables selected for the plots) the best choice? What changes/additions would you recommend?

    Solution:
    1. Running for 5 years is already sufficient to highlight significant changes, as seen in the LEAFC plot for the 4xCO2 run.
    2. Many suggestions can be made here. One simple addition is to generate plots where both simulations are represented together.
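A quick way to quantify such a comparison is the relative change of a variable's time-series mean between the two runs. A minimal sketch with invented placeholder numbers (in practice you would read the values from the two cleaned tabular outputs):

```python
# Compare LEAFC between a control run and a 4xCO2 run by computing the
# relative change of the time-series mean. The values below are invented
# placeholders, not actual model output.
control = [12.5, 13.1, 14.0, 13.4]   # LEAFC, control run
co2x4   = [15.0, 16.2, 17.1, 16.5]   # LEAFC, 4xCO2 run

def mean(xs):
    return sum(xs) / len(xs)

def relative_change_percent(ref, exp):
    """Percent change of the mean of exp relative to the mean of ref."""
    return 100.0 * (mean(exp) - mean(ref)) / mean(ref)

print(round(relative_change_percent(control, co2x4), 1))
```

The same function could be applied per month instead of over the whole series to see whether the CO2 response differs across the seasonal cycle.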

Share your work

One of the most important features of Galaxy comes at the end of an analysis. When you have published striking findings, it is important that other researchers are able to reproduce your in-silico experiment. Galaxy enables users to easily share their workflows and histories with others.

Sharing your history allows others to import and access the datasets, parameters, and steps of your history.

Access the history sharing menu via the History Options dropdown (galaxy-history-options), and clicking “history-share Share or Publish”

  1. Share via link
    • Open the History Options galaxy-history-options menu at the top of your history panel and select “history-share Share or Publish”
      • galaxy-toggle Make History accessible
      • A Share Link will appear that you give to others
    • Anybody who has this link can view and copy your history
  2. Publish your history
    • galaxy-toggle Make History publicly available in Published Histories
    • Anybody on this Galaxy server will see your history listed under the Shared Data menu
  3. Share only with another user.
    • Click the Share with a user button at the bottom
    • Enter an email address for the user you want to share with
    • Your history will be shared only with this user.
  4. Finding histories others have shared with me
    • Click on User menu on the top bar
    • Select Histories shared with me
    • Here you will see all the histories others have shared with you directly

Note: If you want to make changes to your history without affecting the shared version, make a copy by going to History Options galaxy-history-options icon in your history and clicking Copy this History

Hands-on: Share history
  1. Share your history with your neighbour (ask for their Galaxy username).
  2. Find the history shared by your neighbour. Histories shared with specific users can be accessed by those users under their top masthead “User” menu under Histories shared with me.
Comment: Publish your history to

One step further is to share your workflow on where it will be stored in a Galaxy workflow format as well as in Common Workflow Language. It provides standardised workflow identifiers and descriptions needed for workflow discovery, reuse, preservation, interoperability and monitoring and metadata harvesting using standard protocols. Please note that is still under active development.


Conclusion

We have learnt to run single-point simulations with CLM-FATES in Galaxy and to generate workflows for multi-case scenarios.