Join us to discuss the implications of OpenAI Codex on introductory programming on Monday 4th July at 2pm BST


Automatic code generators have been with us for a while, but how do modern AI-powered bots perform on introductory programming assignments? Join us to discuss the implications of OpenAI Codex on introductory programming courses on Monday 4th July at 2pm BST. We’ll be discussing a paper by James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly and James Prather [1] for our monthly SIGCSE journal club meetup on Zoom. Here is the abstract:

Recent advances in artificial intelligence have been driven by an exponential growth in digitised data. Natural language processing, in particular, has been transformed by machine learning models such as OpenAI’s GPT-3 which generates human-like text so realistic that its developers have warned of the dangers of its misuse. In recent months OpenAI released Codex, a new deep learning model trained on Python code from more than 50 million GitHub repositories. Provided with a natural language description of a programming problem as input, Codex generates solution code as output. It can also explain (in English) input code, translate code between programming languages, and more. In this work, we explore how Codex performs on typical introductory programming problems. We report its performance on real questions taken from introductory programming exams and compare it to results from students who took these same exams under normal conditions, demonstrating that Codex outscores most students. We then explore how Codex handles subtle variations in problem wording using several published variants of the well-known “Rainfall Problem” along with one unpublished variant we have used in our teaching. We find the model passes many test cases for all variants. We also explore how much variation there is in the Codex generated solutions, observing that an identical input prompt frequently leads to very different solutions in terms of algorithmic approach and code length. Finally, we discuss the implications that such technology will have for computing education as it continues to evolve, including both challenges and opportunities. (see accompanying slides)
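
For readers unfamiliar with it, the Rainfall Problem mentioned in the abstract is a classic exercise from the computing-education literature. As a rough illustration of the kind of task involved, the Python sketch below pairs a docstring-style problem description (the natural-language form in which Codex is typically prompted) with one conventional solution; both the wording and the code are our own illustrative paraphrase, not text from the paper or output generated by Codex.

```python
def rainfall(readings):
    """Compute the average of the non-negative numbers in `readings`
    that appear before the first occurrence of the sentinel 99999.
    Return 0 if there are no such numbers.
    """
    total = 0
    count = 0
    for value in readings:
        if value == 99999:   # sentinel value: stop reading input
            break
        if value >= 0:       # ignore negative (invalid) readings
            total += value
            count += 1
    return total / count if count else 0

print(rainfall([1, 6, -2, 8, 99999, 4]))  # prints 5.0
```

Part of what the paper explores is how robust Codex is when details like these (the sentinel, the handling of negative values, the empty-input case) are varied across different wordings of the problem.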

All welcome; details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Jim Paterson at Glasgow Caledonian University for nominating this month’s paper.

References

  1. James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly and James Prather (2022) The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming. ACE ’22: Australasian Computing Education Conference, pages 10–19. DOI: 10.1145/3511861.3511863

Join us to discuss failure rates in introductory programming courses on Monday 1st February at 2pm GMT


Following on from our discussion of ungrading, this month we’ll be discussing pass/fail rates in introductory programming courses [1]. Here is the abstract:

Vast numbers of publications in computing education begin with the premise that programming is hard to learn and hard to teach. Many papers note that failure rates in computing courses, and particularly in introductory programming courses, are higher than their institutions would like. Two distinct research projects in 2007 and 2014 concluded that average success rates in introductory programming courses world-wide were in the region of 67%, and a recent replication of the first project found an average pass rate of about 72%. The authors of those studies concluded that there was little evidence that failure rates in introductory programming were concerningly high.

However, there is no absolute scale by which pass or failure rates are measured, so whether a failure rate is concerningly high will depend on what that rate is compared against. As computing is typically considered to be a STEM subject, this paper considers how pass rates for introductory programming courses compare with those for other introductory STEM courses. A comparison of this sort could prove useful in demonstrating whether the pass rates are comparatively low, and if so, how widespread such findings are.

This paper is the report of an ITiCSE working group that gathered information on pass rates from several institutions to determine whether prior results can be confirmed, and conducted a detailed comparison of pass rates in introductory programming courses with pass rates in introductory courses in other STEM disciplines.

The group found that pass rates in introductory programming courses appear to average about 75%; that there is some evidence that they sit at the low end of the range of pass rates in introductory STEM courses; and that pass rates both in introductory programming and in other introductory STEM courses appear to have remained fairly stable over the past five years. All of these findings must be regarded with some caution, for reasons that are explained in the paper. Despite the lack of evidence that pass rates are substantially lower than in other STEM courses, there is still scope to improve the pass rates of introductory programming courses, and future research should continue to investigate ways of improving student learning in introductory programming courses.

Anyone is welcome to join us. As usual, we’ll be meeting on Zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

Thanks to Brett Becker and Joseph Allen for this month’s #paper-suggestions via our Slack channel at uk-acm-sigsce.slack.com.

References

  1. Simon, Andrew Luxton-Reilly, Vangel V. Ajanovski, Eric Fouh, Christabel Gonsalvez, Juho Leinonen, Jack Parkinson, Matthew Poole and Neena Thota (2019) Pass Rates in Introductory Programming and in other STEM Disciplines. ITiCSE-WGR ’19: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, pages 53–71. DOI: 10.1145/3344429.3372502