Join us to discuss the use of AI in undergraduate programming courses on Monday 4th Nov at 2pm GMT (UTC)

Co-pilots still need pilots, but what’s the relationship between them? CC BY icon from flaticon.com

Students of programming are often encouraged to use AI assistants with little consideration for their perceptions and preferences. How do students' perceptions influence their usage of AI and Large Language Models (LLMs) in undergraduate programming courses? How does the use of tools like ChatGPT and GitHub Copilot relate to students' self-belief in their own programming abilities? Join us to discuss a paper by Aadarsh Padiyath et al. about this, published at ICER 2024. [1] From the abstract:

The capability of large language models (LLMs) to generate, debug, and explain code has sparked the interest of researchers and educators in undergraduate programming, with many anticipating their transformative potential in programming education. However, decisions about why and how to use LLMs in programming education may involve more than just the assessment of an LLM’s technical capabilities. Using the social shaping of technology theory as a guiding framework, our study explores how students’ social perceptions influence their own LLM usage. We then examine the correlation of self-reported LLM usage with students’ self-efficacy and midterm performances in an undergraduate programming course. Triangulating data from an anonymous end-of-course student survey (n = 158), a mid-course self-efficacy survey (n = 158), student interviews (n = 10), self-reported LLM usage on homework, and midterm performances, we discovered that students’ use of LLMs was associated with their expectations for their future careers and their perceptions of peer usage. Additionally, early self-reported LLM usage in our context correlated with lower self-efficacy and lower midterm scores, while students’ perceived over-reliance on LLMs, rather than their usage itself, correlated with decreased self-efficacy later in the course.

There’s also an accompanying article and blog post to go with this paper. [2, 3]

All welcome. As usual, we’ll be meeting online; joining details are at sigcse.cs.manchester.ac.uk/join-us

References

  1. Aadarsh Padiyath, Xinying Hou, Amy Pang, Diego Viramontes Vargas, Xingjian Gu, Tamara Nelson-Fromm, Zihan Wu, Mark Guzdial and Barbara Ericson (2024) Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course. ICER ’24: Proceedings of the 2024 ACM Conference on International Computing Education Research – Volume 1, Pages 114–130. DOI:10.1145/3632620.3671098 (non-paywalled version at arxiv.org/abs/2406.06451)
  2. Aadarsh Padiyath (2024) Do I Have a Say in This, or Has ChatGPT Already Decided for Me? Blog post at computinged.wordpress.com
  3. Aadarsh Padiyath (2024) Do I Have a Say in This, or Has ChatGPT Already Decided for Me? XRDS: Crossroads, The ACM Magazine for Students, Volume 31, Issue 1, Pages 52–55. DOI:10.1145/3688090 (paywalled version only)