Join us to discuss the use of AI in undergraduate programming courses on Monday 4th November at 2pm GMT (UTC)

Co-pilots still need pilots, but what’s the relationship between them? CC BY icon from flaticon.com

Students of programming are often encouraged to use AI assistants with little consideration for their perceptions and preferences. How do students' perceptions influence their usage of AI and Large Language Models (LLMs) in undergraduate programming courses? How does the use of tools like ChatGPT and GitHub Copilot relate to students' belief in their own programming abilities? Join us to discuss a paper on this by Aadarsh Padiyath et al. published at ICER 2024. [1] From the abstract:

The capability of large language models (LLMs) to generate, debug, and explain code has sparked the interest of researchers and educators in undergraduate programming, with many anticipating their transformative potential in programming education. However, decisions about why and how to use LLMs in programming education may involve more than just the assessment of an LLM’s technical capabilities. Using the social shaping of technology theory as a guiding framework, our study explores how students’ social perceptions influence their own LLM usage. We then examine the correlation of self-reported LLM usage with students’ self-efficacy and midterm performances in an undergraduate programming course. Triangulating data from an anonymous end-of-course student survey (n = 158), a mid-course self-efficacy survey (n = 158), student interviews (n = 10), self-reported LLM usage on homework, and midterm performances, we discovered that students’ use of LLMs was associated with their expectations for their future careers and their perceptions of peer usage. Additionally, early self-reported LLM usage in our context correlated with lower self-efficacy and lower midterm scores, while students’ perceived over-reliance on LLMs, rather than their usage itself, correlated with decreased self-efficacy later in the course.
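
For readers unfamiliar with this kind of analysis, here is a minimal sketch of how self-reported usage might be correlated with self-efficacy scores. The data below is invented purely for illustration, and the choice of Spearman's rank correlation is an assumption, not necessarily the statistic the authors used:

    # A minimal, hypothetical sketch of the correlation analysis the abstract
    # describes. The numbers are made up for illustration; they are not the
    # paper's data.
    from scipy.stats import spearmanr

    # Self-reported LLM usage (say, on a 1-5 Likert scale) and self-efficacy
    # scores for a handful of imaginary students.
    llm_usage = [1, 2, 2, 3, 4, 4, 5, 5]
    self_efficacy = [4.2, 4.0, 3.8, 3.5, 3.1, 3.3, 2.9, 2.7]

    # Spearman's rank correlation suits ordinal survey data like this.
    rho, p_value = spearmanr(llm_usage, self_efficacy)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

    # A negative rho here would mirror the paper's finding: higher early
    # self-reported usage going along with lower self-efficacy.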

There’s also an accompanying magazine article and blog post to go with this paper. [2, 3]

All welcome. As usual, we’ll be meeting online; joining details are at sigcse.cs.manchester.ac.uk/join-us

References

  1. Aadarsh Padiyath, Xinying Hou, Amy Pang, Diego Viramontes Vargas, Xingjian Gu, Tamara Nelson-Fromm, Zihan Wu, Mark Guzdial and Barbara Ericson (2024) Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course. In ICER ’24: Proceedings of the 2024 ACM Conference on International Computing Education Research, Volume 1, pages 114–130. DOI: 10.1145/3632620.3671098 (non-paywalled version at arxiv.org/abs/2406.06451)
  2. Aadarsh Padiyath (2024) Do I have a say in this or has ChatGPT already decided for me? Blog post at computinged.wordpress.com
  3. Aadarsh Padiyath (2024) Do I Have a Say in This, or Has ChatGPT Already Decided for Me? XRDS: Crossroads, The ACM Magazine for Students, Volume 31, Issue 1, pages 52–55. DOI: 10.1145/3688090 (paywalled version only)

Join us to discuss using AI to solve simple programming problems on Monday 3rd April at 2pm BST

CC licensed pilot icon from flaticon.com

Maybe you wrote that code and maybe you didn’t. If an AI assistant helped you, such as OpenAI’s Codex model in GitHub Copilot, how did it solve your problem? How much did artificial intelligence help or hinder your solution? Join us to discuss a paper on this very topic by Michel Wermelinger from the Open University, published at the SIGCSE Technical Symposium earlier this month. [1] We’ll be joined by Michel, who will present a lightning talk to kick off our discussion. Here’s the abstract of his paper:

The teaching and assessment of introductory programming involves writing code that solves a problem described by text. Previous research found that OpenAI’s Codex, a natural language machine learning model trained on billions of lines of code, performs well on many programming problems, often generating correct and readable Python code. GitHub’s version of Codex, Copilot, is freely available to students. This raises pedagogic and academic integrity concerns. Educators need to know what Copilot is capable of, in order to adapt their teaching to AI-powered programming assistants. Previous research evaluated the most performant Codex model quantitatively, e.g. how many problems have at least one correct suggestion that passes all tests. Here I evaluate Copilot instead, to see if and how it differs from Codex, and look qualitatively at the generated suggestions, to understand the limitations of Copilot. I also report on the experience of using Copilot for other activities asked of students in programming courses: explaining code, generating tests and fixing bugs. The paper concludes with a discussion of the implications of the observed capabilities for the teaching of programming.
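
To make the tasks concrete, here is a hypothetical sketch (not an example taken from the paper) of the prompt-and-completion workflow being evaluated: the student writes only a function signature and docstring for a typical introductory problem, and an assistant like Copilot suggests the body, and perhaps a test:

    # A hypothetical illustration of the kind of simple problem the paper
    # evaluates Copilot on; not a verbatim example from the paper.

    def count_vowels(text: str) -> int:
        """Return the number of vowels (a, e, i, o, u) in text, ignoring case."""
        # A plausible assistant-suggested completion:
        return sum(1 for ch in text.lower() if ch in "aeiou")

    # The paper also examines test generation; a plausible suggested test:
    def test_count_vowels():
        assert count_vowels("") == 0
        assert count_vowels("Programming") == 3
        assert count_vowels("AEIOU") == 5

    test_count_vowels()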

All welcome. As usual, we’ll be meeting on Zoom; details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Michel Wermelinger (2023) Using GitHub Copilot to Solve Simple Programming Problems. In SIGCSE 2023: Proceedings of the 54th ACM Technical Symposium on Computer Science Education, pages 172–178. DOI: 10.1145/3545945.3569830