Teaching Python
Author(s): Helena Rasche
Editor(s): Bazante Sanders
Tester(s): Donny Vrins
Overview

Questions:
- What methods can I incorporate to improve student material uptake within my Python courses?
- How do I write problems that use these methodologies?

Objectives:
- Learn about four different problem design ideas, and one teaching methodology.

Time estimation: 2 hours
Published: Oct 19, 2022
Last modification: Nov 23, 2023
License: Tutorial content is licensed under the Creative Commons Attribution 4.0 International License. The GTN Framework is licensed under MIT.
PURL: https://gxy.io/GTN:T00068
Revision: 4
Improving student learning and outcomes should always be a goal of teaching. Here we present several strategies to improve student experiences during courses by focusing on how they approach specific problems, and by giving them real-world, applicable solutions to those problems.
Agenda

In this tutorial, we will cover:

- Course Management Strategies
- Problem Strategies
- Comparisons to K-12 methodologies
“Live coding”, as espoused by the Carpentries, is a fantastic strategy for communicating material to students while ensuring they simultaneously get hands-on experience. Showing what happens live on screen is well received by students, provided they can manage to watch what we type and type it themselves at the same time. However, while we at least know that our examples give the correct result, students never see anything other than correct, working code, and never have to formulate an internal model of how to write code. They end up copying and pasting without understanding why.
Predicting code behaviour without running it is a key component of work as a programmer, and much of the time we spend debugging relies on emulating the computer in our heads. Without a solid mental model of code behaviour, one cannot predict how code will function in one situation, much less in novel or non-standard situations. Planning for code to handle both good and bad inputs requires creativity and mental reasoning about the expected values at various points throughout execution.
This situation leaves students unprepared for incorrect or buggy code, whether (un)intentionally included in homework assignments or generated by themselves, if they cannot identify where code will fail without executing it.
Augmenting lessons with:
- Pair programming
- Tracing - Stepping through the internal state
- Faded examples
- Compounding examples
- Debugging intentionally broken examples
will give students enough tools to respond dynamically to failure states, with the informed experience needed to resolve issues they encounter as programmers.
The student’s mental model of the code underlies everything they do as a programmer, from conception to implementation to debugging to their self-efficacy:
> This study shows that a well-developed and accurate mental model directly affects course performance and also increases self efficacy, the other key element in course performance. Given this double impact, helping students develop good mental models should remain a goal in introductory programming courses.
Being able to think through a program step by step, understanding how the code executes, which variables exist when, and what their values should be, is a foundational skill. This mental modelling allows students to predict the behaviour of a system and, when it diverges from their prediction, to recognise potential bugs.
Course Management Strategies
Pair Programming
Complementary to the other strategies, Pair Programming or “pairing” provides a reinforcement activity in which students utilise similar skills. One person, the “driver”, writes and executes the code, while the other, the “navigator”, guides the experience, telling them what to write (Williams, Williams and Upchurch 2001). It has become a common learning model in introductory courses due to its benefits to students (Mendes et al. 2005, Mendes et al. 2006, Hannay et al. 2007). This technique has also been shown to be particularly beneficial for women in computer science, giving them better chances of success in future programming endeavours (Werner et al. 2004). Adopting this technique is promising, provided you adhere to the principles outlined by Mentz et al. 2008.
This can often be implemented as breakout rooms in which students are assigned a handful of problems to complete. After the breakout rooms end, you can have students summarise their solutions, call on individuals to explain their ideas, and so on.
Problem Strategies
Tracing Code Execution
Input: Code

```python
# Initialise our accumulator
x = 1 + 1
# Loop over our input data
for i in range(10):  # 0..9
    # In-loop temporary variable
    tmp = x * 2 + i
    # Update our accumulator
    x = tmp + 1
# Output our result
print(f'The final value is {x}')
```

Output: Trace

| Line | i   | x  | tmp |
|------|-----|----|-----|
| 2    | n/a | 2  | n/a |
| 4    | 0   | 2  | n/a |
| 6    | 0   | 2  | 4   |
| 8    | 0   | 5  | 4   |
| 4    | 1   | 5  | n/a |
| 6    | 1   | 5  | 11  |
| 8    | 1   | 12 | 11  |
| 4    | 2   | 12 | n/a |
| 6    | 2   | 12 | 26  |
| 8    | 2   | 27 | 26  |
While there is no bug in the above, when a bug is present, having students produce a table like this significantly improves their understanding of code flow and execution (Hertz and Jump 2013). “Tracing” is a valuable and easy-to-complete exercise, and the results can even be checked automatically, which lets the exercise scale well across larger classes.
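As a sketch of how such automated checking might work, a student's submitted trace rows can be compared against the expected ones. The `(line, i, x, tmp)` row format and the function name below are illustrative assumptions, not part of any existing grading tool:

```python
# Sketch: automatically checking a submitted trace against the expected one.
# The (line, i, x, tmp) row format is an assumption for illustration;
# None stands in for the "n/a" cells of the table above.
EXPECTED = [
    (2, None, 2, None),
    (4, 0, 2, None),
    (6, 0, 2, 4),
    (8, 0, 5, 4),
]

def first_mismatch(submitted, expected):
    """Return the index of the first incorrect row, or None if all match."""
    for index, (got, want) in enumerate(zip(submitted, expected)):
        if got != want:
            return index
    if len(submitted) != len(expected):
        # One trace is longer than the other
        return min(len(submitted), len(expected))
    return None
```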
Students can also use a debugger like pudb, which can follow the execution of a piece of code and show exactly how it is working.
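For example, assuming pudb is installed (`pip install pudb`), execution can be paused at a point of interest:

```python
# Drop into the pudb debugger at this point in the program;
# alternatively, run a whole script under it with `python -m pudb script.py`
import pudb
pudb.set_trace()
```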
Alternatively, teaching liberal use of the print() function, as opposed to more complicated tools like a debugger, can give students the tools they need to solve problems.
The example and trace above were generated by hexylena/auto-python, which can be reused, or contributed to if new examples are needed.
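To illustrate the print() approach, here is the trace example above instrumented with a print() call in the loop; the exact placement and message are just one possibility:

```python
# Initialise our accumulator
x = 1 + 1
for i in range(10):  # 0..9
    tmp = x * 2 + i
    x = tmp + 1
    # Expose the internal state each iteration, mirroring the trace table
    print(f'i = {i}, tmp = {tmp}, x = {x}')
print(f'The final value is {x}')
```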
Faded Examples
When teaching programming one must constantly be cognisant of the students' cognitive load. Programming is a complicated task that demands a lot from students, requiring kinds of explicit logical analysis that they may not have engaged in before. Both problem-solving-based learning and worked examples may cause high cognitive load for different audiences, and exploring alternatives is important (Retnowati 2017). Faded examples, such as those seen below, are exactly such an alternative: they start with a fully worked example and remove successive components until only a problem description requiring a full solution remains. This leads to fewer unproductive learning events (Renkl et al. 2004).
```python
### Write a function that multiplies two numbers
def multiply(a, b):
    c = a * b
    return c
```
The initial problem shows the entire solution to students.
```python
### Write a function that adds two numbers
def add(___):
    ____
    return c
```
Increased fading: here we specifically call out the blanks students should fill in, using syntactically invalid underscores.
```python
### Write a function that subtracts two numbers
```
Final fading: the entire problem is gone except for the description of what students need to do.
Faded examples, however, come at a higher implementation cost than worked examples (Zamary and Rawson 2018). They require writing the correct worked example and then determining which components to remove. This presents an additional cost during course updates: if examples are changed, they need to be double-checked to ensure they are still valid, whereas worked examples can be checked more automatically.
Compounding Problems
Compounding problems are a good strategy for homework, as you can ask multiple things of students and provide a gentle ramp-up in complexity. Start by designing a small but complex problem, like “write a FASTQ trimmer”, where students need to implement several different subtasks:
- file processing
- several utility functions
- multiple filter stages
- a single main function which combines all of the above
If designed correctly, students have the freedom to work on individual functions that aren’t dependent on each other, making sure each is correct, before building them up into a final function.
There are two alternative ways to further design the problem:
- Provide it broken down: precise, small functions students should implement.
- Describe the problem and let students determine the optimal way to break it down into small, manageable components.
Which option is preferable depends strongly on how advanced your students are. See the example homework FastQ_trimmer.html and its associated ipynb file.
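As a rough sketch of the “broken down” variant, such a homework skeleton might look like the following; every function name, filter, and threshold here is an illustrative assumption rather than the content of the linked homework:

```python
# Sketch of a compounding "FASTQ trimmer" homework skeleton.
# All names and thresholds below are illustrative assumptions.

def read_fastq(path):
    """File processing: yield (header, sequence, quality) tuples."""
    with open(path) as handle:
        while True:
            header = handle.readline().strip()
            if not header:
                break
            sequence = handle.readline().strip()
            handle.readline()  # the '+' separator line
            quality = handle.readline().strip()
            yield header, sequence, quality

def mean_quality(quality):
    """Utility function: average Phred score of a quality string (offset 33)."""
    return sum(ord(c) - 33 for c in quality) / len(quality)

def passes_length_filter(sequence, minimum=20):
    """Filter stage: discard reads shorter than `minimum`."""
    return len(sequence) >= minimum

def passes_quality_filter(quality, threshold=20):
    """Filter stage: discard reads with a low average quality."""
    return mean_quality(quality) >= threshold

def trim_fastq(path):
    """Main function combining file processing and every filter stage."""
    for header, sequence, quality in read_fastq(path):
        if passes_length_filter(sequence) and passes_quality_filter(quality):
            print(header)
            print(sequence)
            print('+')
            print(quality)
```

Each helper can be written and tested independently before `trim_fastq` combines them, which is exactly the freedom the compounding structure is meant to provide.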
Debugging
Debugging is the act of identifying and resolving “bugs”, or defects, within code; the term is popularly attributed to my personal hero, Admiral Grace Hopper:
> While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay and thereby impeding operation, whereupon she remarked that they were “debugging” the system.
Debugging also functions as a reinforcement method we can use once students have an adequate mental model of code execution, a necessary prerequisite for this activity, which can be further developed through debugging (Ramalingam et al. 2004) alongside their self-efficacy (Michaeli and Romeike 2019). Debugging activities can take many forms, but most commonly the task is to correct incorrect code, an activity that works best if students are primed with a number of debugging methods (Murphy et al. 2008) such as the “Wolf Fence” (Gauss 1982), commenting out code, or breakpoints.
```python
### Fix me!
for number in range(10):
    # use a if the number is a multiple of 3, otherwise use b
    if Number % 3 == 0:
        message = message + a
    else:
        message = message + "b"
print(message)
```
The above debugging exercise features code with numerous issues, from type confusion to variable typos to failure to initialise a variable. Students can run this example iteratively to figure out where it fails and attempt to fix it.
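One possible corrected version, as a sketch of what students might converge on, addresses each of those issues:

```python
# Fixed: the accumulator is initialised, the variable name's case matches
# the loop variable, and both branches append string literals
message = ""
for number in range(10):
    # use "a" if the number is a multiple of 3, otherwise use "b"
    if number % 3 == 0:
        message = message + "a"
    else:
        message = message + "b"
print(message)
```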
Use of more complex debugging tools is not always indicated, as the cognitive complexity may be too much for students.
Comparisons to K-12 methodologies
In K-12 teaching, this style of intervention is used to good effect. The PRIMM model (Sentance and Waite 2017) starts with the good mental model required to Predict, uses tracing while students Run and Investigate code, and uses debugging as they Modify it, all building towards students Making things themselves.
Here are some examples of how to implement the PRIMM methodology in exercises.
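As one illustrative sketch (the code and prompts below are an assumption of what such an exercise could look like, not taken from an existing course):

```python
# Predict: write down what this program prints before running it.
# Run: execute the code and compare the output with your prediction.
# Investigate: trace the value of `total` at each iteration.
# Modify: change the loop so it only sums the even numbers.
# Make: write a similar program that computes a product instead of a sum.
total = 0
for number in range(1, 5):
    total = total + number
print(total)
```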