Join us to discuss failure rates in introductory programming courses on Monday 1st February at 2pm GMT

Icons made by Freepik from flaticon.com

Following on from our discussion of ungrading, this month we’ll be discussing pass/fail rates in introductory programming courses [1]. Here is the abstract:

Vast numbers of publications in computing education begin with the premise that programming is hard to learn and hard to teach. Many papers note that failure rates in computing courses, and particularly in introductory programming courses, are higher than their institutions would like. Two distinct research projects in 2007 and 2014 concluded that average success rates in introductory programming courses world-wide were in the region of 67%, and a recent replication of the first project found an average pass rate of about 72%. The authors of those studies concluded that there was little evidence that failure rates in introductory programming were concerningly high.

However, there is no absolute scale by which pass or failure rates are measured, so whether a failure rate is concerningly high will depend on what that rate is compared against. As computing is typically considered to be a STEM subject, this paper considers how pass rates for introductory programming courses compare with those for other introductory STEM courses. A comparison of this sort could prove useful in demonstrating whether the pass rates are comparatively low, and if so, how widespread such findings are.

This paper is the report of an ITiCSE working group that gathered information on pass rates from several institutions to determine whether prior results can be confirmed, and conducted a detailed comparison of pass rates in introductory programming courses with pass rates in introductory courses in other STEM disciplines.

The group found that pass rates in introductory programming courses appear to average about 75%; that there is some evidence that they sit at the low end of the range of pass rates in introductory STEM courses; and that pass rates both in introductory programming and in other introductory STEM courses appear to have remained fairly stable over the past five years. All of these findings must be regarded with some caution, for reasons that are explained in the paper. Despite the lack of evidence that pass rates are substantially lower than in other STEM courses, there is still scope to improve the pass rates of introductory programming courses, and future research should continue to investigate ways of improving student learning in introductory programming courses.

Anyone is welcome to join us. As usual, we’ll be meeting on Zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

Thanks to Brett Becker and Joseph Allen for this month’s #paper-suggestions via our Slack channel at uk-acm-sigsce.slack.com.

References

  1. Simon, Andrew Luxton-Reilly, Vangel V. Ajanovski, Eric Fouh, Christabel Gonsalvez, Juho Leinonen, Jack Parkinson, Matthew Poole and Neena Thota (2019) Pass Rates in Introductory Programming and in other STEM Disciplines. In ITiCSE-WGR ’19: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, pages 53–71. DOI: 10.1145/3344429.3372502

Join us to discuss learning programming languages: Monday 4th May at 11am

Hieroglyphs from the tomb of Seti I, by Jon Bodsworth via Wikimedia Commons and the Egypt archive

ACM SIGCSE Journal Club returns on Monday 4th May at 11am. The paper we’re discussing this month is “Relating Natural Language Aptitude to Individual Differences in Learning Programming Languages” by Chantel Prat et al., published in Scientific Reports [1]. Here’s the abstract:

This experiment employed an individual differences approach to test the hypothesis that learning modern programming languages resembles second “natural” language learning in adulthood. Behavioral and neural (resting-state EEG) indices of language aptitude were used along with numeracy and fluid cognitive measures (e.g., fluid reasoning, working memory, inhibitory control) as predictors. Rate of learning, programming accuracy, and post-test declarative knowledge were used as outcome measures in 36 individuals who participated in ten 45-minute Python training sessions. The resulting models explained 50–72% of the variance in learning outcomes, with language aptitude measures explaining significant variance in each outcome even when the other factors competed for variance. Across outcome variables, fluid reasoning and working-memory capacity explained 34% of the variance, followed by language aptitude (17%), resting-state EEG power in beta and low-gamma bands (10%), and numeracy (2%). These results provide a novel framework for understanding programming aptitude, suggesting that the importance of numeracy may be overestimated in modern programming education environments.

The paper describes an experiment investigating the relationship between learning natural languages and learning programming languages, and it draws some interesting conclusions that provide good discussion points. Does being good at learning natural languages like English make you good at learning programming languages like Python? Do linguists make good coders?

References

  1. Prat, C.S., Madhyastha, T.M., Mottarella, M.J. et al. (2020) Relating Natural Language Aptitude to Individual Differences in Learning Programming Languages. Scientific Reports 10, 3817. DOI: 10.1038/s41598-020-60661-8

Join us to discuss student misconceptions in programming on March 23rd from 1pm to 2pm

The Scream by Edvard Munch 😱, reproduced in LEGO by Nathan Sawaya of BrickArtist.com

Join us to discuss Identifying Student Misconceptions of Programming by Lisa Kaczmarczyk et al. [1], which was voted one of the top papers of the last 50 years by SIGCSE members in 2019. Here is a summary:

Computing educators are often baffled by the misconceptions that their CS1 students hold. We need to understand these misconceptions more clearly in order to help students form correct conceptions. This paper describes one stage in the development of a concept inventory for Computing Fundamentals: investigation of student misconceptions in a series of core CS1 topics previously identified as both important and difficult. Formal interviews with students revealed four distinct themes, each containing many interesting misconceptions. Three of those misconceptions are detailed in this paper: two misconceptions about memory models, and data assignment when primitives are declared. Individual misconceptions are related, but vary widely, thus providing excellent material to use in the development of the CI. In addition, CS1 instructors are provided immediate usable material for helping their students understand some difficult introductory concepts.

In case you’re wondering, CS1 refers to the first course in the introductory sequence of a computer science major (in American parlance), roughly equivalent to a first-year undergraduate course in the UK. CI refers to a Concept Inventory: a test designed to tell teachers exactly what students know and don’t know. According to Reinventing Nerds, the paper has been influential because it was the “first to apply rigorous research methods to investigating misconceptions”.
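To give a concrete flavour of the kind of misconception the paper investigates, here is a small illustrative sketch, our own hypothetical Python example rather than anything taken from the paper: some novices expect assigning one variable to another to create a lasting link between the two, rather than copying the current value.

    # Hypothetical illustration (not from the paper) of a common novice
    # misconception about assignment of primitive values: expecting
    # "b = a" to link b to a, so that later changes to a also change b.
    a = 5
    b = a       # b receives a copy of a's current value
    a = 10      # reassigning a has no effect on b

    print(a)    # 10
    print(b)    # 5 -- not 10, contrary to the "linked variables" misconception

A concept inventory question might show a snippet like this and ask students to predict the output, with the wrong answers built from known misconceptions.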

References

  1. Kaczmarczyk, Lisa C.; Petrick, Elizabeth R.; East, J. Philip; Herman, Geoffrey L. (2010) Identifying Student Misconceptions of Programming. In SIGCSE ’10: Proceedings of the 41st ACM Technical Symposium on Computer Science Education, pages 107–111. DOI: 10.1145/1734263.1734299