Join us to discuss why LLM-enhanced Programming Error Messages are Ineffective in Practice on Monday 2nd December at 2pm GMT (UTC)

Icon by LAFS on flaticon.com

Large Language Models (LLMs) can help explain programming error messages, and these explanations tend to improve as the models behind them are trained on more source code. However, it is unknown to what extent novice programmers using tools like GitHub Copilot and ChatGPT can make effective use of these automatically generated explanations to debug their programs. Join us to discuss a paper on this by Eddie Antonio Santos and Brett Becker, which won a best paper award at UKICER.com earlier this year. We’ll be joined by the paper’s lead author, Eddie Antonio Santos, who’ll give a lightning talk to kick off our discussion. From the abstract:

The sudden emergence of large language models (LLMs) such as ChatGPT has had a disruptive impact throughout the computing education community. LLMs have been shown to excel at producing correct code to CS1 and CS2 problems, and can even act as friendly assistants to students learning how to code. Recent work shows that LLMs demonstrate unequivocally superior results in being able to explain and resolve compiler error messages—for decades, one of the most frustrating parts of learning how to code. However, LLM-generated error message explanations have only been assessed by expert programmers in artificial conditions. This work sought to understand how novice programmers resolve programming error messages (PEMs) in a more realistic scenario. We ran a within-subjects study with 𝑛 = 106 participants in which students were tasked to fix six buggy C programs. For each program, participants were randomly assigned to fix the problem using either a stock compiler error message, an expert-handwritten error message, or an error message explanation generated by GPT-4. Despite promising evidence on synthetic benchmarks, we found that GPT-4 generated error messages outperformed conventional compiler error messages in only 1 of the 6 tasks, measured by students’ time-to-fix each problem. Handwritten explanations still outperform LLM and conventional error messages, both on objective and subjective measures.
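To make the three conditions concrete, here is a purely hypothetical example of the kind of task involved (it is not one of the study’s six programs, and the messages are only indicative): a short C program in which a classic novice slip, a missing semicolon, would produce either a terse stock compiler message, an expert’s handwritten explanation, or a GPT-4 generated explanation, with students’ time-to-fix being measured.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical bug of the kind the study compares: if the semicolon at
       the end of the next line were missing, a stock compiler message would
       be terse, roughly "error: expected ';' before 'for'", whereas a
       handwritten explanation might simply say "you forgot the semicolon at
       the end of the line where n is declared". */
    int n = 5;
    int total = 0;

    for (int i = 1; i <= n; i++) {
        total += i;
    }
    printf("Sum of 1..%d is %d\n", n, total);
    return 0;
}
```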

As usual, we’ll be meeting on zoom, all welcome, details at sigcse.cs.manchester.ac.uk/join-us.

References

  1. Eddie Antonio Santos and Brett A. Becker (2024) Not the Silver Bullet: LLM-enhanced Programming Error Messages are Ineffective in Practice, UKICER ’24: Proceedings of the 2024 Conference on United Kingdom & Ireland Computing Education Research DOI:10.1145/3689535.3689554

Join us to discuss the use of AI in undergraduate programming courses on Monday 4th November at 2pm GMT (UTC)

Co-pilots still need pilots, but what’s the relationship between them? CC BY icon from flaticon.com

Students of programming are often encouraged to use AI assistants with little consideration for their perceptions and preferences. How do students’ perceptions influence their use of AI and Large Language Models (LLMs) in undergraduate programming courses? How does the use of tools like ChatGPT and GitHub Copilot relate to students’ self-belief in their own programming abilities? Join us to discuss a paper on this by Aadarsh Padiyath et al., published at ICER 2024. [1] From the abstract:

The capability of large language models (LLMs) to generate, debug, and explain code has sparked the interest of researchers and educators in undergraduate programming, with many anticipating their transformative potential in programming education. However, decisions about why and how to use LLMs in programming education may involve more than just the assessment of an LLM’s technical capabilities. Using the social shaping of technology theory as a guiding framework, our study explores how students’ social perceptions influence their own LLM usage. We then examine the correlation of self-reported LLM usage with students’ self-efficacy and midterm performances in an undergraduate programming course. Triangulating data from an anonymous end-of-course student survey (n = 158), a mid-course self-efficacy survey (n=158), student interviews (n = 10), self-reported LLM usage on homework, and midterm performances, we discovered that students’ use of LLMs was associated with their expectations for their future careers and their perceptions of peer usage. Additionally, early self-reported LLM usage in our context correlated with lower self-efficacy and lower midterm scores, while students’ perceived over-reliance on LLMs, rather than their usage itself, correlated with decreased self-efficacy later in the course.

There’s also an accompanying article and blog post to go with this paper. [2,3]

All welcome, as usual we’ll be meeting online, joining details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Aadarsh Padiyath, Xinying Hou, Amy Pang, Diego Viramontes Vargas, Xingjian Gu, Tamara Nelson-Fromm, Zihan Wu, Mark Guzdial, Barbara Ericson (2024) Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course ICER ’24: Proceedings of the 2024 ACM Conference on International Computing Education Research – Volume 1, Pages 114 – 130 DOI:10.1145/3632620.3671098 (non-paywalled version at arxiv.org/abs/2406.06451)
  2. Aadarsh Padiyath (2024) Do I have a say in this or has ChatGPT already decided for me? blog post at computinged.wordpress.com
  3. Aadarsh Padiyath (2024) Do I Have a Say in This, or Has ChatGPT Already Decided for Me? XRDS: Crossroads, The ACM Magazine for students, Volume 31, Issue 1, Pages 52 – 55, DOI:10.1145/3688090 (paywalled version only)

Join us to discuss the ability of generative AI to pass exams on 4th December at 2pm GMT

CC-licensed exam image from flaticon.com

How good is generative AI at passing exams? What does this tell us about how we could design better assessments? Join us on Monday 4th December at 2pm GMT (UTC) to discuss a paper on this by Joyce Mahon, Brian Mac Namee and Brett Becker at University College Dublin, published at UKICER earlier this year. [1] From the abstract:

We investigate the capabilities of ChatGPT (GPT-4) on second level (high-school) computer science examinations: the UK A-Level and Irish Leaving Certificate. Both are national, government-set / approved, and centrally assessed examinations. We also evaluate performance differences in exams made publicly available before and after the ChatGPT knowledge cutoff date, and investigate what types of question ChatGPT struggles with.

We find that ChatGPT is capable of achieving very high marks on both exams and that the performance differences before and after the knowledge cutoff date are minimal. We also observe that ChatGPT struggles with questions involving symbols or images, which can be mitigated when in-text information ‘fills in the gaps’. Additionally, GPT-4 performance can be negatively impacted when an initial inaccurate answer leads to further inaccuracies in subsequent parts of the same question. Finally, the element of choice on the Leaving Certificate is a significant advantage in achieving a high grade. Notably, there are minimal occurrences of hallucinations in answers and few errors in solutions not involving images.

These results reveal several strengths and weaknesses of these exams in terms of how generative AI performs on them and have implications for exam design, the construction of marking schemes, and could also shift the focus of what is examined and how.

We’ll be joined by the paper’s lead author Joyce, who will give us a lightning talk summary of her paper to start our discussion. All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Joyce Mahon, Brian Mac Namee and Brett A. Becker (2023) No More Pencils No More Books: Capabilities of Generative AI on Irish and UK Computer Science School Leaving Examinations. In The United Kingdom and Ireland Computing Education Research conference (UKICER 2023), September 07–08, 2023, Swansea, Wales, UK. ACM, New York, NY, USA, 7 pages. DOI: 10.1145/3610969.3610982

Join us on zoom to discuss the implications of programming getting easier, Monday 15th May at 2pm BST

Programming is hard, or at least it used to be. AI code generators like Amazon’s CodeWhisperer, DeepMind’s AlphaCode, GitHub’s Copilot, Replit’s Ghostwriter and many others now make programming easier, at least for some people, some of the time. What opportunities and challenges do these new tools present for educators? Join us on Zoom to discuss an award-winning paper on this very topic by Brett Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather and Eddie Antonio Santos at University College Dublin, the University of Auckland and Abilene Christian University. [1] We’ll be joined by two of the co-authors, who will present a lightning talk to kick off our discussion at our monthly ACM journal club meetup. Here’s the abstract of their paper:

The introductory programming sequence has been the focus of much research in computing education. The recent advent of several viable and freely-available AI-driven code generation tools present several immediate opportunities and challenges in this domain. In this position paper we argue that the community needs to act quickly in deciding what possible opportunities can and should be leveraged and how, while also working on overcoming or otherwise mitigating the possible challenges. Assuming that the effectiveness and proliferation of these tools will continue to progress rapidly, without quick, deliberate, and concerted efforts, educators will lose advantage in helping shape what opportunities come to be, and what challenges will endure. With this paper we aim to seed this discussion within the computing education community.

All welcome, as usual we’ll be meeting on zoom at 2pm BST (UTC+1), details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Sue Sentance at the University of Cambridge for nominating this paper for discussion.

See also linkedin.com/posts/duncanhull_ai-codewhisperer-alphacode-activity-7051921278923915264-7i_5

References

  1. Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, Eddie Antonio Santos (2023) Programming Is Hard – Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation in Proceedings of the 54th ACM Technical Symposium on Computer Science Education: SIGCSE 2023, pages 500–506, DOI: 10.1145/3545945.3569759

Join us to discuss using AI to solve simple programming problems on Monday 3rd April at 2pm BST

CC licensed pilot icon from flaticon.com

Maybe you wrote that code and maybe you didn’t. If an AI such as the OpenAI Codex behind GitHub Copilot helped you, how did it solve your problem? How much did Artificial Intelligence help or hinder your solution? Join us to discuss a paper on this very topic by Michel Wermelinger from the Open University, published at the SIGCSE Technical Symposium earlier this month. [1] We’ll be joined by Michel, who will present a lightning talk to kick off our discussion. Here’s the abstract of his paper:

The teaching and assessment of introductory programming involves writing code that solves a problem described by text. Previous research found that OpenAI’s Codex, a natural language machine learning model trained on billions of lines of code, performs well on many programming problems, often generating correct and readable Python code. GitHub’s version of Codex, Copilot, is freely available to students. This raises pedagogic and academic integrity concerns. Educators need to know what Copilot is capable of, in order to adapt their teaching to AI-powered programming assistants. Previous research evaluated the most performant Codex model quantitatively, e.g. how many problems have at least one correct suggestion that passes all tests. Here I evaluate Copilot instead, to see if and how it differs from Codex, and look qualitatively at the generated suggestions, to understand the limitations of Copilot. I also report on the experience of using Copilot for other activities asked of students in programming courses: explaining code, generating tests and fixing bugs. The paper concludes with a discussion of the implications of the observed capabilities for the teaching of programming.

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Michel Wermelinger (2023) Using GitHub Copilot to Solve Simple Programming Problems in Proceedings of the 54th ACM Technical Symposium on Computer Science Education: SIGCSE 2023, pages 172–178, DOI: 10.1145/3545945.3569830

Join us to discuss code comprehension on Monday 6th March at 2pm GMT

CC licensed puzzle icon by flaticon.com


It’s all very well getting an AI to write your code for you, but neither writing code nor reading code is the same as understanding code. So what is going on in novices’ brains when they learn to actually understand the code they are reading and writing? Join us on Monday 6th March at 2pm GMT to discuss a paper by Quintin Cutts and Maria Kallia from the University of Glasgow on this very topic. [1] From the abstract:

An approach to code comprehension in an introductory programming class is presented, drawing on the Text Surface, Functional and Machine aspects of Schulte’s Block Model, and emphasising programming as a modelling activity involving problem and machine domains. To visually connect the domains and a program, a key diagram conceptualising the three aspects lies at the approach’s heart, alongside instructional exposition and exercises, which are all presented. Students find the approach challenging initially, but most recognise its value later, and identify, unexpectedly, the value of the approach for problem decomposition, planning and coding.

We’ll be joined by one of the co-authors (Quintin Cutts), who’ll give us a lightning talk summary of the paper to kick off our journal club discussion. [1] Quintin has added: “You can’t write if you can’t read.  In just four pages the paper outlines a classroom approach to developing in novices good code comprehension right from the start of an introductory course.  There’s also some feedback on what students thought, a year later – spoiler – they seemed to get a lot from it.  Anyone teaching introductory programming might find such a short paper thought provoking, even if they don’t pick up the technique in their teaching. Worth a quick read, and coming along to listen/add to the discussion…”

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Quintin Cutts and Maria Kallia (2023) Introducing Modelling and Code Comprehension from the First Days of an Introductory Programming Class in CEP ’23: Proceedings of 7th Conference on Computing Education Practice Pages 21–24 DOI:10.1145/3573260.3573266

Join us to discuss the implications of the OpenAI Codex on introductory programming on Monday 4th July at 2pm BST


Automatic code generators have been with us a while, but how do modern AI-powered bots perform on introductory programming assignments? Join us to discuss the implications of the OpenAI Codex on introductory programming courses on Monday 4th July at 2pm BST. We’ll be discussing a paper by James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly and James Prather [1] for our monthly SIGCSE journal club meetup on zoom. Here is the abstract:

Recent advances in artificial intelligence have been driven by an exponential growth in digitised data. Natural language processing, in particular, has been transformed by machine learning models such as OpenAI’s GPT-3 which generates human-like text so realistic that its developers have warned of the dangers of its misuse. In recent months OpenAI released Codex, a new deep learning model trained on Python code from more than 50 million GitHub repositories. Provided with a natural language description of a programming problem as input, Codex generates solution code as output. It can also explain (in English) input code, translate code between programming languages, and more. In this work, we explore how Codex performs on typical introductory programming problems. We report its performance on real questions taken from introductory programming exams and compare it to results from students who took these same exams under normal conditions, demonstrating that Codex outscores most students. We then explore how Codex handles subtle variations in problem wording using several published variants of the well-known “Rainfall Problem” along with one unpublished variant we have used in our teaching. We find the model passes many test cases for all variants. We also explore how much variation there is in the Codex generated solutions, observing that an identical input prompt frequently leads to very different solutions in terms of algorithmic approach and code length. Finally, we discuss the implications that such technology will have for computing education as it continues to evolve, including both challenges and opportunities. (see accompanying slides and sigarch.org/coping-with-copilot/)
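If you haven’t come across it, the “Rainfall Problem” mentioned above is a classic benchmark in computing education research; its published variants differ in wording, but a common formulation asks for the average of the non-negative numbers entered before a sentinel value. The sketch below illustrates that common formulation only (it is written in C for illustration and is not taken from the paper, where Codex is prompted in natural language and generates Python).

```c
#include <stdio.h>

/* Illustrative sketch of one common Rainfall Problem formulation (not from
   the paper): read integers until the sentinel 99999, ignore negative
   readings, and report the average of the rest. */
int main(void)
{
    int value, sum = 0, count = 0;

    while (scanf("%d", &value) == 1 && value != 99999) {
        if (value >= 0) {
            sum += value;
            count++;
        }
    }
    if (count > 0) {
        printf("Average rainfall: %.2f\n", (double) sum / count);
    } else {
        printf("No valid rainfall readings.\n");
    }
    return 0;
}
```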

All welcome, details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Jim Paterson at Glasgow Caledonian University for nominating this month’s paper.

References

  1. James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, James Prather (2022) The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming ACE ’22: Australasian Computing Education Conference Pages 10–19 DOI:10.1145/3511861.3511863