Adding auto-generated video to your slides


question Questions
  • How can we add auto-generated video?

  • How does it work?

  • What do I need to do to make it optimal for viewers?

objectives Objectives
  • Adding a video to a set of slides

time Time estimation: 20 minutes

Supporting Materials

last_modification Last modification: Feb 19, 2021

Video Lectures

Based on the work by Delphine Larivière and James Taylor on their COVID-19 Lectures, we have implemented a similar feature in the Galaxy Training Network.


In this tutorial, we will cover:

  1. How it Works
  2. Enabling Video
    1. Writing Good Captions
    2. Enable the Video

How it Works

We wrote a short script which does the following:

Locally and in production:

  • Extracts a ‘script’ from the slides. We extract every presenter comment in the slidedeck, and turn this into a text file.
  • Every line of this text file is then narrated by Amazon Polly (if you have money) or MozillaTTS (free).
  • The slide deck is converted to a PDF, and then each slide is extracted as a PNG.
  • Captions are extracted from the audio components.
  • The narration is stitched together into an mp3 file.
  • The images are stitched together into an mp4 file.
  • The video, audio, and captions are muxed together into a final mp4 file.

In production:

  • We use Amazon Polly, paid for by the Galaxyproject.
  • The result is uploaded to an S3 bucket.
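The first step above, extracting a 'script' from the presenter comments, can be sketched as follows. GTN slide decks are remark-style Markdown, where slides are separated by `---` and presenter notes follow a `???` marker; this is a simplified illustration, not the actual GTN script.

```python
def extract_script(slide_markdown: str) -> list[str]:
    """Return one narration string per slide (empty if a slide has no notes)."""
    script = []
    for slide in slide_markdown.split("\n---\n"):
        if "\n???\n" in slide:
            # Everything after the ??? marker is the presenter comment.
            _, notes = slide.split("\n???\n", 1)
            script.append(" ".join(notes.split()))
        else:
            script.append("")
    return script

deck = """# Slide one

Some bullet points.

???
This narration is read aloud by the text-to-speech engine.
---
# Slide two

???
Every slide needs notes.
"""

for line in extract_script(deck):
    print(line)
```

Each returned line then becomes one unit of narration for the text-to-speech step.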

Enabling Video

We have attempted to simplify this process as much as possible, but making good slides that work well as video is up to you.

Writing Good Captions

Every slide must have some narration in the presenter notes; it does not make sense for students to see a slide without commentary. For each slide, you'll need to write presenter notes in complete, but short, sentences.

Sentence Structure

Use short, uncomplicated sentences whenever possible. Break ideas up into easy-to-digest pieces. Students will be listening to the spoken narration and possibly reading the captions.

The captioning process is completely automated, and very long sentences are not currently broken up into multiple captions. So please keep your sentences under ~120 characters where possible.

Good

  • Configuration management manages the configuration of machines.
  • It specifies what software should be installed, and how it should be configured.

Bad Configuration management manages the configuration of machines, it specifies what software should be installed, and how it should be configured.
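A quick check like the following can flag presenter notes whose sentences exceed the ~120-character guideline. This is an illustrative helper, not part of the GTN tooling, and the sentence splitting is deliberately naive.

```python
import re

MAX_CAPTION_LEN = 120  # rough guideline from the text above

def long_sentences(notes: str, limit: int = MAX_CAPTION_LEN) -> list[str]:
    """Split notes into sentences and return those over the limit."""
    sentences = re.split(r"(?<=[.!?])\s+", notes.strip())
    return [s for s in sentences if len(s) > limit]

good = ("Configuration management manages the configuration of machines. "
        "It specifies what software should be installed, and how it should be configured.")
bad = ("Configuration management manages the configuration of machines, "
       "it specifies what software should be installed, and how it should "
       "be configured, which makes long single-sentence captions hard to read.")

print(len(long_sentences(good)))  # 0: both short sentences fit
print(len(long_sentences(bad)))   # 1: the run-on sentence is too long
```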

Captions per Slide

Every slide must have some speaker notes in this system, NO exceptions.


Punctuation

Sentences should end with punctuation like . or ? or even ! if you're feeling excited.


Abbreviations

Abbreviations are generally fine as-is (e.g., "e.g." and "i.e." read fine, "RNA" is fine, etc.). Make sure abbreviations are in all caps, though.

Good This role deploys CVMFS.

“Weird” Names

In your captions, you will want to teach the GTN how to pronounce unusual words by editing bin/ari-map.yml to provide your own definition.


Word       | Pronunciation
---------- | -------------
SQLAlchemy | SQL alchemy
FastQC     | fast QC
nginx      | engine X
gxadmin    | GX admin
/etc       | / E T C

The same applies to the many terms we read differently from how they are written, e.g. ‘src’ vs ‘source’. Most of us would pronounce it like the latter, even though it isn’t spelt that way. Our speaking robot doesn’t know what we mean, so we need to spell it out properly.

So we write the definition in the bin/ari-map.yml file.
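Conceptually, the map is applied as whole-word substitutions over the narration text before it is sent to the text-to-speech engine. The sketch below illustrates the idea with a few entries mirroring the table above; the real map lives in bin/ari-map.yml and the actual GTN implementation may differ.

```python
import re

# A few entries mirroring the table above (illustrative subset).
PRONUNCIATIONS = {
    "SQLAlchemy": "SQL alchemy",
    "FastQC": "fast QC",
    "nginx": "engine X",
    "src": "source",
}

def apply_pronunciations(text: str, mapping: dict[str, str]) -> str:
    """Replace each mapped word with its spoken form (whole words only)."""
    for word, spoken in mapping.items():
        text = re.sub(rf"\b{re.escape(word)}\b", spoken, text)
    return text

print(apply_pronunciations("Run FastQC, then check the nginx logs.", PRONUNCIATIONS))
# → Run fast QC, then check the engine X logs.
```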

Other Considerations

(Written 2020-12-16, things may have changed since.)

Be sure to check the pronunciation of the slides. There are known issues with heteronyms, words spelt the same but with different pronunciations and meanings. Consider "read" for a classic example, or "analyses" for one that comes up often in the GTN. "She analyses data" and "Multiple analyses" are pronounced quite differently based on their usage in sentences. See the Wiktionary page on heteronyms for more information, including a list of English heteronyms you might want to be aware of.

This becomes an issue for AWS Polly and Mozilla's TTS, which both sometimes lack sufficient context to choose between the two pronunciations. You'll find that "many analyses" is pronounced correctly while "multiple analyses" isn't.

Oftentimes the services don't understand the part of speech, so by adding adjectives before "analyses" you can confuse the engine into thinking it should use the third-person-singular verb pronunciation. This is probably because it only has one or two words of context ahead of the word to be pronounced.

Enable the Video

Lastly, we need to tell the GTN framework we would like videos to be generated.

hands_on Hands-on: Enable video

  1. Edit the slides.html for your tutorial
  2. Add video: true to the top of the file

That’s it! With this, videos can be automatically generated.
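For example, the top of your slides.html might look like the following. Only the video: true line is required by this step; the other fields shown here are illustrative front-matter entries.

```yaml
---
layout: tutorial_slides   # illustrative; keep whatever layout your slides already use
title: "My Tutorial Slides"
video: true               # this is the line that enables video generation
---
```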


keypoints Key points

  • Thanks to the GTN, videos are easy to add

  • Be mindful of your captions. Short sentences are good!


Did you use this material as an instructor? Feel free to give us feedback on how it went.


Citing this Tutorial

  1. Helena Rasche, 2021. Adding auto-generated video to your slides (Galaxy Training Materials). /training-material/topics/contributing/tutorials/slides-with-video/tutorial.html. Online; accessed TODAY.
  2. Batut et al., 2018. Community-Driven Data Analysis Training for Biology. Cell Systems. 10.1016/j.cels.2018.05.012

details BibTeX

    @misc{rasche2021video,
        author = "Helena Rasche",
        title = "Adding auto-generated video to your slides (Galaxy Training Materials)",
        year = "2021",
        month = "02",
        day = "19",
        url = "\url{/training-material/topics/contributing/tutorials/slides-with-video/tutorial.html}",
        note = "[Online; accessed TODAY]"
    }

    @article{Batut_2018,
        doi = {10.1016/j.cels.2018.05.012},
        url = {},
        year = 2018,
        month = {jun},
        publisher = {Elsevier {BV}},
        volume = {6},
        number = {6},
        pages = {752--758.e1},
        author = {B{\'{e}}r{\'{e}}nice Batut and Saskia Hiltemann and Andrea Bagnacani and Dannon Baker and Vivek Bhardwaj and Clemens Blank and Anthony Bretaudeau and Loraine Brillet-Gu{\'{e}}guen and Martin {\v{C}}ech and John Chilton and Dave Clements and Olivia Doppelt-Azeroual and Anika Erxleben and Mallory Ann Freeberg and Simon Gladman and Youri Hoogstrate and Hans-Rudolf Hotz and Torsten Houwaart and Pratik Jagtap and Delphine Larivi{\`{e}}re and Gildas Le Corguill{\'{e}} and Thomas Manke and Fabien Mareuil and Fidel Ram{\'{\i}}rez and Devon Ryan and Florian Christoph Sigloch and Nicola Soranzo and Joachim Wolff and Pavankumar Videm and Markus Wolfien and Aisanjiang Wubuli and Dilmurat Yusuf and James Taylor and Rolf Backofen and Anton Nekrutenko and Björn Grüning},
        title = {Community-Driven Data Analysis Training for Biology},
        journal = {Cell Systems}
    }

congratulations Congratulations on successfully completing this tutorial!

Developing GTN training material

This tutorial is part of a series to develop GTN training material, feel free to also look at:
  1. Overview of the Galaxy Training Material
  2. Adding auto-generated video to your slides
  3. Contributing with GitHub via command-line
  4. Contributing with GitHub via its interface
  5. Creating a new tutorial
  6. Creating content in Markdown
  7. Creating Interactive Galaxy Tours
  8. Creating Slides
  9. Generating PDF artefacts of the website
  10. Including a new topic
  11. Running the Galaxy Training material website locally
  12. Tools, Data, and Workflows for tutorials