Early bird registration for UKICER 2025 in Edinburgh now open until Monday 4th August


The UK and Ireland Computing Education Research (UKICER) conference takes place on Thursday 4th and Friday 5th of September 2025 in Edinburgh, UK, following on from Manchester in 2024. [1] There are also two free co-located pre-conference events taking place at the same location on Wednesday 3rd of September 2025, from 1–5pm.

UKICER will include keynotes from Keith Quille (TU Dublin) and from Judy Robertson and Serdar Abaci (University of Edinburgh). Early bird registration is now open until Monday 4th August. Upon registering, attendees can sign up to one of two free co-located pre-conference events taking place on Wednesday 3rd of September 2025: GenAI Integration in Computer Science Education with Pavlos Andreadis, and Embedding Accessibility in Computer Science Education with Aurora Constantin.

We’re taking a break from journal clubbing during August, but we’ll be back in September at UKICER in Edinburgh. If you’re going, see you there. Find out more and register at ukicer.com 🏴󠁧󠁢󠁳󠁣󠁴󠁿

CC BY Edinburgh Skyline picture by Andrew Colin on Wikimedia Commons w.wiki/Ef$W

Cite this post using DOI:10.59350/sigcse.3022

References

  1. Troy Astarte, Duncan Hull, Fiona McNeill and Faron Moller (2024) UKICER ’24: Proceedings of the 2024 Conference on United Kingdom & Ireland Computing Education Research, Manchester, UK DOI:10.1145/3689535

SIGCSE journal club posts now have Digital Object Identifiers (DOIs)

All of the posts here now have Digital Object Identifiers (DOIs), thanks to a tool called Rogue Scholar. [1] This means that details of our meetings are more:

  • Findable — every blog post is searchable via rich metadata and full-text search.
  • Citeable — every blog post is assigned a Digital Object Identifier (DOI), to make them citable and trackable. Rogue Scholar shows citations to blog posts found by Crossref.
  • Interoperable — metadata are distributed via Crossref and ORCID, and downstream services using their metadata catalogs.
  • Reusable — the full-text of every blog post is distributed under the terms of the Creative Commons Attribution 4.0 license.
  • Archivable — blog posts are archived by Rogue Scholar, and semiannually by the Internet Archive’s Archive-It service.

Find them all listed at rogue-scholar.org/communities/sigcse – there is sometimes a short lag between publication here and DOI assignment by Rogue Scholar. You can get DOIs for your own blog posts at rogue-scholar.org too. Thanks to Martin Fenner at Rogue Scholar for his support.

Cite this post using DOI:10.59350/sigcse.3020

References

  1. Lena Stoll, Patrick Vale and Rosa Morais Clark (2025) Scholarly blogs and their place in the research nexus, crossref.org blog DOI:10.64000/552ec-b8g03

Join us to discuss the effect of ChatGPT on students’ learning on Monday July 7th at 2pm BST

CC licensed wise owl image from flaticon.com

Is generative AI making students wiser and more productive, or is it encouraging lazy learners to cut too many corners? Join us on Monday 7th July at 2pm BST (UTC+1) to discuss a recently published review paper investigating the effect of ChatGPT on students’ learning, by Jin Wang and Wenxiang Fan at Hangzhou Normal University in China. [1] From the abstract:

As a new type of artificial intelligence, ChatGPT is becoming widely used in learning. However, academic consensus regarding its efficacy remains elusive. This study aimed to assess the effectiveness of ChatGPT in improving students’ learning performance, learning perception, and higher-order thinking through a meta-analysis of 51 research studies published between November 2022 and February 2025. The results indicate that ChatGPT has a large positive impact on improving learning performance (g = 0.867) and a moderately positive impact on enhancing learning perception (g = 0.456) and fostering higher-order thinking (g = 0.457). The impact of ChatGPT on learning performance was moderated by type of course (QB = 64.249, P < 0.001), learning model (QB = 76.220, P < 0.001), and duration (QB = 55.998, P < 0.001); its effect on learning perception was moderated by duration (QB = 19.839, P < 0.001); and its influence on the development of higher-order thinking was moderated by type of course (QB = 7.811, P < 0.05) and the role played by ChatGPT (QB = 4.872, P < 0.05). This study suggests that: (1) appropriate learning scaffolds or educational frameworks (e.g., Bloom’s taxonomy) should be provided when using ChatGPT to develop students’ higher-order thinking; (2) the broad use of ChatGPT at various grade levels and in different types of courses should be encouraged to support diverse learning needs; (3) ChatGPT should be actively integrated into different learning modes to enhance student learning, especially in problem-based learning; (4) continuous use of ChatGPT should be ensured to support student learning, with a recommended duration of 4–8 weeks for more stable effects; (5) ChatGPT should be flexibly integrated into teaching as an intelligent tutor, learning partner, and educational tool. 
Finally, due to the limited sample size for learning perception and higher-order thinking, and the moderately positive effect, future studies with expanded scope should further explore how to use ChatGPT more effectively to cultivate students’ learning perception and higher-order thinking.

The authors of the paper have been invited to join us to give a lightning talk summary. All welcome; the meeting URL is public at zoom.us/j/96465296256 (meeting ID 9646-5296-256) but the password is private and pinned in the Slack channel, which you can join by following the instructions at sigcse.cs.manchester.ac.uk/join-us

(Paper recommendation via Mustafa Suleyman at Microsoft, who said: “The fear: AI will make students lazy. The reality? It’s making them smarter. A new meta-analysis of 51 studies shows AI is actually boosting critical thinking, not just grades”)

References

  1. Jin Wang & Wenxiang Fan (2025) The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Humanities and Social Sciences Communications, Volume 12, No. 621, available from nature.com/articles/s41599-025-04787-y and doi.org/g9h3x3

Join us to discuss why we teach Computing at School (and University) on Monday 7th April at 2pm BST

CC licensed image via flaticon.com

Why do we even bother? What (exactly) is the point? In this age of AI, why would anyone need to learn about Computing? What value does it add, what skills do students learn, and what knowledge do students actually need to develop? Join us on Monday 7th April at 2pm BST (UTC+1) to discuss a paper co-authored by Sue Sentance and published at iticse.acm.org. [1] From the abstract:

K-12 computing education research is a rapidly growing field of research, both driven by and driving the implementation of computing as a school and extra-curricular subject globally. In the context of discipline-based education research, it is a new and emerging field, drawing on areas such as mathematics and science education research for inspiration and theoretical bases. The urgency around investigating effective teaching and learning in computing in school alongside broadening participation has led to much of the field being focused on empirical research. Less attention has been paid to the underlying philosophical assumptions informing the discipline, which might include a critical examination of the rationale for K-12 computing education, its goals and perspectives, and associated inherent values and beliefs. In this working group, we conducted an analysis of the implicit and hidden values, perspectives and goals underpinning computing education at school in order to shed light on the question of what we are talking about when we talk about K-12 computing education. To do this we used a multi-faceted approach to identify implicit rationales for K-12 computing education and examine what these might mean for the implemented curriculum. Methods used include both traditional and natural language processing techniques for examining relevant literature, alongside an examination of the theoretical literature relating to education theory. As a result we identified four traditions for K-12 computing education: algorithmic, design-making, scientific and societal. From this we have developed a framework for the exemplification of these traditions, alongside several potential use cases. We suggest that while this work may provoke some discussion and debate, it will help researchers and others to identify and express the rationales they draw on with respect to computing education.

We’ll be joined by one of the paper’s co-authors, Sue Sentance from the University of Cambridge. Sue is Director of the Raspberry Pi Computing Education Research Centre, recipient of the BCS Lovelace Medal and an editor of the book Computer Science Education: Perspectives on Teaching and Learning in School, published by Bloomsbury Academic. Sue will give us a lightning talk on the paper, which is also summarised on the computing education research blog and in the slides from her talk.

All welcome; the meeting URL is public at zoom.us/j/96465296256 (meeting ID 9646-5296-256) but the password is private and pinned in the Slack channel, which you can join by following the instructions at sigcse.cs.manchester.ac.uk/join-us

References

  1. Carsten Schulte, Sue Sentance, Sören Sparmann, Rukiye Altin, Mor Friebroon-Yesharim, Martina Landman, Michael T. Rücker, Spruha Satavlekar, Angela Siegel, Matti Tedre, Laura Tubino, Henriikka Vartiainen, J. Ángel Velázquez-Iturbide, Jane Waite and Zihan Wu (2024) What We Talk About When We Talk About K-12 Computing Education. Working Group Reports on Innovation and Technology in Computer Science Education (ITiCSE 2024), pages 226–257, DOI:10.1145/3689187.37096

Join us to discuss team based capstone projects on Monday 3rd February at 2pm UTC

CC licensed project icon from flaticon.com

There is no “I” in Team, but there is an “I” in University. Teamwork is a core skill taught in many Computing degrees. How can instructors help students improve their teamwork skills through collaborative projects? Join us on Zoom to discuss a paper investigating teamwork skills in the context of capstone projects, published in ITiCSE (iticse.acm.org). [1] From the abstract:

Team-based capstone courses are integral to many undergraduate and postgraduate degree programs in the computing field. They are designed to help students gain hands-on experience and practice professional skills such as communication, teamwork, and self-reflection as they transition into the real world. Prior research on capstone courses has focused primarily on the experiences of students. The perspectives of instructors who teach capstone courses have not been explored comprehensively. However, an instructor’s experience, motivation, and expectancy can have a significant impact on the quality of a capstone course. In this working group, we used a mixed methods approach to understand the experiences of capstone instructors. Issues such as class size, industry partnerships, managing student conflicts, and factors influencing instructor motivation were examined using a quantitative survey and semi-structured interviews with capstone teaching staff from multiple institutions across different continents. Our findings show that there are more similarities than differences across various capstone course structures. Similarities include team size, team formation methodologies, duration of the capstone course, and project sourcing. Differences in capstone courses include class sizes and institutional support. Some instructors felt that capstone courses require more time and effort than regular lecture-based courses. These instructors cited that the additional time and effort is related to class size and liaising with external stakeholders, including industry partners. Some instructors felt that their contributions were not recognized enough by the leadership at their institutions. Others acknowledged institutional support and the value that the capstone brought to their department. Overall, we found that capstone instructors were highly intrinsically motivated and enjoyed teaching the capstone course. 
Most of them agree that the course contributes to their professional development. The majority of the instructors reported positive experiences working with external partners and did not report any issues with Non-Disclosure Agreements (NDAs) or disputes about Intellectual Property (IP). In most institutions, students own the IP of their work, and clients understand that. We use the global perspective that this work has given us to provide guidelines for institutions to better support capstone instructors.

We’ll be joined by one of the co-authors, Steve Riddle from Newcastle University, who will give us a lightning talk summary to kick off our discussion. All welcome, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Sara Hooshangi, Asma Shakil, Subhasish Dasgupta, Karen C. C. Davis, Mohammed Farghally, KellyAnn Fitzpatrick, Mirela Gutica, Ryan Hardt, Steve Riddle, Mohammed Seyam (2025) Instructors’ Perspectives on Capstone Courses in Computing Fields: A Mixed-Methods Study ITiCSE 2024: 2024 Working Group Reports on Innovation and Technology in Computer Science Education, DOI:10.1145/3689187.3709608

Send us your SIGCSE journal club paper suggestions by 10th January 2025

Our next meeting will be at Computing Education Practice (CEP) in Durham on 7th January; if you’re joining us, we’ll see you there. cepconference.webspace.durham.ac.uk

CC BY licensed picture of Durham School in Snow by Teach46 on Wikimedia Commons w.wiki/CTqo

If you can’t make it to Durham this year (registration closed last week), our next UK ACM SIGCSE journal club meeting is on the first Monday in February: Monday 3rd February at 2pm GMT (UTC). What paper should we discuss? Send us your suggestions via all the usual channels:

… by 5pm GMT on Friday 10th January 2025.

In the meantime, we wish all our readers and club members a happy holiday and prosperous new year.

Join us to discuss why LLM-enhanced Programming Error Messages are Ineffective in Practice on Monday 2nd December at 2pm GMT (UTC)

Icon by LAFS on flaticon.com

Large Language Models (LLMs) can help explain programming error messages, and these explanations tend to improve as the models they are based on include more source code. However, it is unknown to what extent novice programmers can effectively use the explanations automatically generated by tools like GitHub Copilot and ChatGPT to debug their programs. Join us to discuss a paper on this by Eddie Antonio Santos and Brett Becker, which won a best paper award at UKICER.com earlier this year. [1] We’ll be joined by the paper’s lead author, Eddie Antonio Santos, who’ll give a lightning talk to kick off our discussion. From the abstract:

The sudden emergence of large language models (LLMs) such as ChatGPT has had a disruptive impact throughout the computing education community. LLMs have been shown to excel at producing correct code to CS1 and CS2 problems, and can even act as friendly assistants to students learning how to code. Recent work shows that LLMs demonstrate unequivocally superior results in being able to explain and resolve compiler error messages—for decades, one of the most frustrating parts of learning how to code. However, LLM-generated error message explanations have only been assessed by expert programmers in artificial conditions. This work sought to understand how novice programmers resolve programming error messages (PEMs) in a more realistic scenario. We ran a within-subjects study with 𝑛 = 106 participants in which students were tasked to fix six buggy C programs. For each program, participants were randomly assigned to fix the problem using either a stock compiler error message, an expert-handwritten error message, or an error message explanation generated by GPT-4. Despite promising evidence on synthetic benchmarks, we found that GPT-4 generated error messages outperformed conventional compiler error messages in only 1 of the 6 tasks, measured by students’ time-to-fix each problem. Handwritten explanations still outperform LLM and conventional error messages, both on objective and subjective measures.

As usual, we’ll be meeting on zoom, all welcome, details at sigcse.cs.manchester.ac.uk/join-us.

References

  1. Eddie Antonio Santos and Brett A. Becker (2024) Not the Silver Bullet: LLM-enhanced Programming Error Messages are Ineffective in Practice, UKICER ’24: Proceedings of the 2024 Conference on United Kingdom & Ireland Computing Education Research DOI:10.1145/3689535.3689554

Join us to discuss the use of AI in undergraduate programming courses on Monday 4th Nov at 2pm GMT (UTC)

Co-pilots still need pilots, but what’s the relationship between them? CC BY icon from flaticon.com

Students of programming are often encouraged to use AI assistants with little consideration for their perceptions and preferences. How do students’ perceptions influence their usage of AI and Large Language Models (LLMs) in undergraduate programming courses? How does the use of tools like ChatGPT and GitHub Copilot relate to students’ self-belief in their own programming abilities? Join us to discuss a paper about this by Aadarsh Padiyath et al., published at ICER 2024. [1] From the abstract:

The capability of large language models (LLMs) to generate, debug, and explain code has sparked the interest of researchers and educators in undergraduate programming, with many anticipating their transformative potential in programming education. However, decisions about why and how to use LLMs in programming education may involve more than just the assessment of an LLM’s technical capabilities. Using the social shaping of technology theory as a guiding framework, our study explores how students’ social perceptions influence their own LLM usage. We then examine the correlation of self-reported LLM usage with students’ self-efficacy and midterm performances in an undergraduate programming course. Triangulating data from an anonymous end-of-course student survey (n = 158), a mid-course self-efficacy survey (n=158), student interviews (n = 10), self-reported LLM usage on homework, and midterm performances, we discovered that students’ use of LLMs was associated with their expectations for their future careers and their perceptions of peer usage. Additionally, early self-reported LLM usage in our context correlated with lower self-efficacy and lower midterm scores, while students’ perceived over-reliance on LLMs, rather than their usage itself, correlated with decreased self-efficacy later in the course.

There’s also an accompanying article and blog post to go with this paper. [2,3]

All welcome, as usual, we’ll be meeting online joining details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Aadarsh Padiyath, Xinying Hou, Amy Pang, Diego Viramontes Vargas, Xingjian Gu, Tamara Nelson-Fromm, Zihan Wu, Mark Guzdial, Barbara Ericson (2024) Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course ICER ’24: Proceedings of the 2024 ACM Conference on International Computing Education Research – Volume 1, Pages 114 – 130 DOI:10.1145/3632620.3671098 (non-paywalled version at arxiv.org/abs/2406.06451)
  2. Aadarsh Padiyath (2024) Do I have a say in this or has ChatGPT already decided for me? blog post at computinged.wordpress.com
  3. Aadarsh Padiyath (2024) Do I Have a Say in This, or Has ChatGPT Already Decided for Me? XRDS: Crossroads, The ACM Magazine for students, Volume 31, Issue 1, Pages 52 – 55, DOI:10.1145/3688090 (paywalled version only)

In Memory of Brett Becker

We are deeply saddened to hear of Brett Becker’s tragic passing. Brett was a regular speaker, supporter and attendee at SIGCSE journal club since we started in 2020, and we often discussed Brett’s papers at our meetings. We’d planned to discuss another of Brett’s papers at our October meetup, but in light of his passing we’ve postponed that to a later date.

Brett was an accomplished researcher and an active member of the international computing education research community. Among his many professional activities and accomplishments, Brett was influential in the sigcse.org community in Europe, America and beyond, where he served as vice chair. He also served as program co-chair at the inaugural ukicer.com conference in 2019, and again in 2022, and served on the UKICER steering committee. He helped to ensure Ireland was a cornerstone of UKICER, and also co-founded sigcseire.acm.org, the SIGCSE Ireland chapter. He was an energetic and astute proponent of computing education in Ireland and globally, always a pleasure to work with, and he will be greatly missed.

Plans to honor and remember Brett will be distributed to the SIGCSE-MEMBERS@LISTSERV.ACM.ORG mailing list in due course; this is an open list that anyone can subscribe to.

Join us on 5th & 6th September for UKICER.com in Manchester

Journal club is taking a break for August; we’ll be back in September for the United Kingdom & Ireland Computing Education Research conference (UKICER.com), 5th & 6th of September in Manchester.


If you’ve any papers you’d like to discuss at future journal clubs, send us your paper nominations via the usual channels; see the nomination details at sigcse.cs.manchester.ac.uk/papers