Rather than meeting online in January, we’ll be meeting in person. So join us at Durham University for the annual Computing Education Practice (CEP) conference, which takes place on Friday 5th January, with a pre-conference dinner on the evening of Thursday 4th January.
How good is generative AI at passing exams? What does this tell us about how we could design better assessments? Join us on Monday 4th December at 2pm GMT (UTC) to discuss a paper on this by Joyce Mahon, Brian Mac Namee and Brett Becker at University College Dublin, published at UKICER earlier this year. From the abstract:
We investigate the capabilities of ChatGPT (GPT-4) on second level (high-school) computer science examinations: the UK A-Level and Irish Leaving Certificate. Both are national, government-set / approved, and centrally assessed examinations. We also evaluate performance differences in exams made publicly available before and after the ChatGPT knowledge cutoff date, and investigate what types of question ChatGPT struggles with.
We find that ChatGPT is capable of achieving very high marks on both exams and that the performance differences before and after the knowledge cutoff date are minimal. We also observe that ChatGPT struggles with questions involving symbols or images, which can be mitigated when in-text information ‘fills in the gaps’. Additionally, GPT-4 performance can be negatively impacted when an initial inaccurate answer leads to further inaccuracies in subsequent parts of the same question. Finally, the element of choice on the Leaving Certificate is a significant advantage in achieving a high grade. Notably, there are minimal occurrences of hallucinations in answers and few errors in solutions not involving images.
These results reveal several strengths and weaknesses of these exams in terms of how generative AI performs on them and have implications for exam design, the construction of marking schemes, and could also shift the focus of what is examined and how.
We’ll be joined by the paper’s lead author, Joyce, who will give us a lightning talk summary of her paper to start our discussion. All welcome; as usual we’ll be meeting on Zoom, details at sigcse.cs.manchester.ac.uk/join-us
Joyce Mahon, Brian Mac Namee and Brett A. Becker (2023) No More Pencils No More Books: Capabilities of Generative AI on Irish and UK Computer Science School Leaving Examinations. In The United Kingdom and Ireland Computing Education Research conference (UKICER 2023), September 07–08, 2023, Swansea, Wales, UK. ACM, New York, NY, USA, 7 pages. DOI: 10.1145/3610969.3610982
All the world’s a stage, and all the men and women merely players; They have their exits and their entrances. And one teacher in their time plays many parts.
As students watch academic actors enter and exit their lecture theatres on university campuses around the world, what role can drama play in their teaching and learning? How can theatre and storytelling facilitate students’ understanding of whatever it is they are supposed to be learning?
Are we walking shadows and poor players that strut and fret our hour upon the stage, and then are heard no more? Do we tell tales like an idiot, full of sound and fury but signifying nothing? In short, how much should teachers embrace theatricality, both amateur and professional, on their respective stages? Can drama and storytelling actually improve students’ learning and if so, how? 🎭
Join us on Monday 6th November at 2pm UTC for our monthly ACM SIGCSE journal club meetup on Zoom to discuss a paper on this topic by David Malan. From the abstract:
In Fall 2020, Harvard University transitioned entirely from on-campus instruction to Zoom online. But a silver lining of that time was unprecedented availability of space on campus, including the university’s own repertory theater. In healthier times, that theater would be brimming with talented artisans and weekly performances, without any computer science in sight. But with that theater’s artisans otherwise idled during COVID-19, our introductory course, CS50, had an unusual opportunity to collaborate with the same. Albeit subject to rigorous protocols, including face masks and face shields for all but the course’s instructor, along with significant social distancing, that moment in time allowed us an opportunity to experiment with lights, cameras, and action on an actual stage, bringing computer science to life in ways not traditionally possible in the course’s own classroom. Equipped with an actual prop shop in back, the team of artisans was able to actualize ideas that might otherwise only exist in slides and code. And students’ experience proved the better for it, with a supermajority of students attesting at term’s end to the efficacy of almost all of the semester’s demonstrations. We present in this work the design and implementation of the course’s theatricality along with the motivation therefor and results thereof. And we discuss how we have adapted, and others can adapt, these same moments more modestly in healthier times to more traditional classrooms, large and small.
This paper was presented at the SIGCSE 2023 Technical Symposium in Toronto; a video presentation of the paper is also available below. All welcome; as usual, we’ll be meeting on Zoom, details at sigcse.cs.manchester.ac.uk/join-us
Why do some students achieve more than others? Students’ goals, their belief in their ability to reach those goals and their prior experience are key factors. But how do they interplay? Join us for our monthly ACM SIGCSE journal club meetup on Zoom to discuss a prize-winning paper on this topic by Hannu Pesonen, Juho Leinonen, Lassi Haaranen and Arto Hellas from Aalto University in Finland and the University of Auckland. From the abstract:
We explore achievement goal orientations, self-efficacy, gender, and prior experience, and look into their interplay in order to understand their contributions to course performance. Our results provide evidence for the appropriateness of the three-factor achievement goal orientation model (performance, mastery approach, mastery avoidance) over the more pervasive four-factor model. We observe that the aspects and the model factors correlate with course achievement. However, when looking into the interplay of the aspects and the model factors, the observations change and the role of, for example, self-efficacy as an aspect contributing to course achievement diminishes. Our study highlights the need to further explore the interplay of aspects contributing to course achievement.
We’ll be joined by one of the paper’s co-authors, Hannu, who’ll give a lightning talk summary to kick off our discussion. This paper won a best paper award at ukicer.com this year. All welcome, meeting details at sigcse.cs.manchester.ac.uk/join-us
Hannu Pesonen, Juho Leinonen, Lassi Haaranen and Arto Hellas (2023) Exploring the Interplay of Achievement Goals, Self-Efficacy, Prior Experience and Course Achievement. In The United Kingdom and Ireland Computing Education Research (UKICER) conference (UKICER 2023), September 07–08, 2023, Swansea, Wales, UK. ACM, New York, NY, USA, 7 pages. DOI: 10.1145/3610969.3611178
Microcredentials are mini-qualifications that allow learners to provide evidence of their broader skills alongside their traditional academic awards. How can these awards be integrated into existing educational qualifications? Join us on Monday 4th September at 2pm BST (UTC+1) to discuss a paper on this topic by Rupert Ward, Tom Crick, James H. Davenport, Paul Hanna, Alan Hayes, Alastair Irons, Keith Miller, Faron Moller, Tom Prickett and Julie Walters. From the abstract:
Employers are increasingly selecting and developing employees based on skills rather than qualifications. Governments now have a growing focus on skilling, reskilling and upskilling the workforce through skills-based development rather than qualifications as a way of improving productivity. Both these changes are leading to a much stronger interest in digital badging and micro-credentialing that enables a more granular, skills-based development of learner-earners. This paper explores the use of an online skills profiling tool that can be used by designers, educators, researchers, employers and governments to understand how badges and micro-credentials can be incorporated within existing qualifications and how skills developed within learning can be compared and aligned to those sought in job roles. This work, and lessons learnt from the case study examples of computing-related degree programmes in the UK, also highlights exciting opportunities for educational providers to develop and accommodate personalised learning into existing formal education structures across a range of settings and contexts.
We’ll be joined by Rupert Ward and some of the other co-authors of the paper, who will give a five-minute lightning talk to kick off our discussion. All welcome, as usual we’ll be meeting on Zoom, details at sigcse.cs.manchester.ac.uk/join-us.
Rupert Ward, Tom Crick, James H. Davenport, Paul Hanna, Alan Hayes, Alastair Irons, Keith Miller, Faron Moller, Tom Prickett and Julie Walters (2023) Using Skills Profiling to Enable Badges and Micro-Credentials to be Incorporated into Higher Education Courses. Journal of Interactive Media in Education, 2023(1). Ubiquity Press. DOI: 10.5334/jime.807
What is the most dangerous course to teach in Computing? Join us on Monday 7th August at 2pm BST (UTC+1) to discuss an opinion piece by Tony Clear from Auckland University of Technology on this very subject. Tony argues that introductory programming (aka CS1) is the most dangerous course for educators to teach. Do you agree with him? From the intro to his paper:
This column reflects on some of my own experiences, observations, and research insights into CS1 teaching over more than 25 years in my own institution and others. The challenges facing first year programming educators and the inability of universities and their managers to learn from the copious literature relating to the teaching of introductory programming seem to be perennial. This places first year programming educators in some peril!
A good theory can be the most concentrated form of knowledge. By encapsulating an infinite number of cases, a theory can make predictions rather than just describing a finite number of disjointed facts. So how does theory feature in research about assessment and feedback? Join us on Monday 3rd July at 2pm BST (UTC+1) to discuss a paper investigating this question by Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai from the University of Hong Kong and Deakin University.  From the abstract of their paper:
Assessment and feedback research constitutes its own ‘silo’ amidst the higher education research field. Theory has been cast as an important but absent aspect of higher education research. This may be a particular issue in empirical assessment research which often builds on the conceptualisation of assessment as objective measurement. So, how does theory feature in assessment and feedback research? We conduct a critical review of recent empirical articles (2020, N = 56) to understand how theory is engaged with in this field. We analyse the repertoire of theories and the mechanisms for putting these theories into practice. 21 studies drew explicitly on educational theory. Theories were most commonly used to explain and frame assessment. Critical theories were notably absent, and quantitative studies engaged with theory in a largely instrumental manner. We discuss the findings through the concept of reflexivity, conceptualising engagement with theory as a practice with both benefits and pitfalls. We therefore call for further reflexivity in the field of assessment and feedback research through deeper and interdisciplinary engagement with theories to avoid further siloing of the field.
Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai (2023) How is theory used in assessment and feedback research? A critical review, Assessment & Evaluation in Higher Education, 48:1, 77-94, DOI: 10.1080/02602938.2022.2047154
The textbook has long been a mainstay of education. Although online textbooks can give students easy (and sometimes free) access to increasingly interactive resources, authors have a bewildering array of tools and publishing models to select from. Software such as asciidoctor.org, bookdown.org, leanpub.com, pretextbook.org, quarto.org, rephactor.com, runestone.academy, zybooks.com, and many others allow instructors to publish course material freed from the constraints of printed paper, monolithic Learning Management Systems (LMSs) and monolithic Massive Open Online Courses (MOOCs). Join us on Monday 12th of June at 2pm BST (UTC+1) to discuss a paper describing one example: Dive Into Systems, an undergraduate textbook on computer systems. We’ll be joined by the co-authors of the paper and corresponding textbook: Suzanne Matthews, Tia Newhall and Kevin C. Webb from Swarthmore College, Pennsylvania and the United States Military Academy at westpoint.edu, New York. 🇺🇸 From the abstract of their paper:
This paper presents our experiences, motivations, and goals for developing Dive into Systems, a new, free, online textbook that introduces computer systems, computer organisation, and parallel computing. Our book’s topic coverage is designed to give readers a gentle and broad introduction to these important topics. It teaches the fundamentals of computer systems and architecture, introduces skills for writing efficient programs, and provides necessary background to prepare students for advanced study in computer systems topics. Our book assumes only a CS1 background of the reader and is designed to be useful to a range of courses as a primary textbook for courses that introduce computer systems topics or as an auxiliary textbook to provide systems background in other courses. Results of an evaluation from students and faculty at 18 institutions who used a beta release of our book show overwhelmingly strong support for its coverage of computer systems topics, its readability, and its availability. Chapters are reviewed and edited by external volunteers from the CS education community. Their feedback, as well as that of student and faculty users, is continuously incorporated into its online content at diveintosystems.org/book
We’ll also be discussing options for adding interactivity to textbooks, see diveintosystems.org/sigcse23. So join us to find out more about what the future of textbooks might look like, using Dive Into Systems as an exemplar. All welcome, as usual, we’ll be meeting on Zoom, details at sigcse.cs.manchester.ac.uk/join-us
Suzanne J. Matthews, Tia Newhall and Kevin C. Webb (2021) Dive into Systems: A Free, Online Textbook for Introducing Computer Systems. In SIGCSE ’21: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, pages 1110–1116. DOI: 10.1145/3408877.3432514
Programming is hard, or at least it used to be. AI code generators like Amazon’s CodeWhisperer, DeepMind’s AlphaCode, GitHub’s Copilot, Replit’s Ghostwriter and many others now make programming easier, at least for some people, some of the time. What opportunities and challenges do these new tools present for educators? Join us on Zoom to discuss an award-winning paper by Brett Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather and Eddie Antonio Santos at University College Dublin, the University of Auckland and Abilene Christian University on this very topic. We’ll be joined by two of the co-authors, who will present a lightning talk to kick off our discussion, for our monthly ACM journal club meetup. Here’s the abstract of their paper:
The introductory programming sequence has been the focus of much research in computing education. The recent advent of several viable and freely-available AI-driven code generation tools presents several immediate opportunities and challenges in this domain. In this position paper we argue that the community needs to act quickly in deciding what possible opportunities can and should be leveraged and how, while also working on overcoming or otherwise mitigating the possible challenges. Assuming that the effectiveness and proliferation of these tools will continue to progress rapidly, without quick, deliberate, and concerted efforts, educators will lose advantage in helping shape what opportunities come to be, and what challenges will endure. With this paper we aim to seed this discussion within the computing education community.
Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather and Eddie Antonio Santos (2023) Programming Is Hard – Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education: SIGCSE 2023, pages 500–506. DOI: 10.1145/3545945.3569759
Maybe you wrote that code and maybe you didn’t. If AI helped you, such as the OpenAI Codex in GitHub Copilot, how did it solve your problem? How much did Artificial Intelligence help or hinder your solution? Join us to discuss a paper by Michel Wermelinger from the Open University, published at the SIGCSE Technical Symposium earlier this month, on this very topic. We’ll be joined by Michel, who will present a lightning talk to kick off our discussion. Here’s the abstract of his paper:
The teaching and assessment of introductory programming involves writing code that solves a problem described by text. Previous research found that OpenAI’s Codex, a natural language machine learning model trained on billions of lines of code, performs well on many programming problems, often generating correct and readable Python code. GitHub’s version of Codex, Copilot, is freely available to students. This raises pedagogic and academic integrity concerns. Educators need to know what Copilot is capable of, in order to adapt their teaching to AI-powered programming assistants. Previous research evaluated the most performant Codex model quantitatively, e.g. how many problems have at least one correct suggestion that passes all tests. Here I evaluate Copilot instead, to see if and how it differs from Codex, and look qualitatively at the generated suggestions, to understand the limitations of Copilot. I also report on the experience of using Copilot for other activities asked of students in programming courses: explaining code, generating tests and fixing bugs. The paper concludes with a discussion of the implications of the observed capabilities for the teaching of programming.
Michel Wermelinger (2023) Using GitHub Copilot to Solve Simple Programming Problems. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education: SIGCSE 2023, pages 172–178. DOI: 10.1145/3545945.3569830
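To make the discussion concrete, here is a sketch (our own illustration, not an example from Michel’s paper) of the kind of simple, docstring-specified CS1 exercise the paper evaluates Copilot on, along with the sort of tests a student, or Copilot itself, might be asked to generate. The function name and tests are hypothetical:

```python
# A hypothetical CS1-style exercise: given only the docstring,
# an AI assistant is prompted to complete the function body, and
# can then be asked to explain the code, generate tests, or fix bugs
# (the other course activities the paper reports on).

def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in text, ignoring case."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

# Tests of the kind a student (or Copilot) might generate:
assert count_vowels("Hello") == 2
assert count_vowels("xyz") == 0
assert count_vowels("AEIOU") == 5
```

Problems like this, fully specified by a short natural-language docstring, are exactly where the paper finds Copilot performs well, which is what raises the pedagogic and academic integrity questions we’ll be discussing.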