Join us on Monday 4th September at 2pm BST to discuss microcredentials in Higher Education

CC licensed image by iconsax on flaticon.com

Microcredentials are mini-qualifications that allow learners to provide evidence of their broader skills alongside their traditional academic awards. How can these awards be integrated into existing educational qualifications? Join us on Monday 4th September at 2pm BST (UTC+1) to discuss a paper on this topic by Rupert Ward, Tom Crick, James H. Davenport, Paul Hanna, Alan Hayes, Alastair Irons, Keith Miller, Faron Moller, Tom Prickett and Julie Walters [1]. From the abstract:

Employers are increasingly selecting and developing employees based on skills rather than qualifications. Governments now have a growing focus on skilling, reskilling and upskilling the workforce through skills-based development rather than qualifications as a way of improving productivity. Both these changes are leading to a much stronger interest in digital badging and micro-credentialing that enables a more granular, skills-based development of learner-earners. This paper explores the use of an online skills profiling tool that can be used by designers, educators, researchers, employers and governments to understand how badges and micro-credentials can be incorporated within existing qualifications and how skills developed within learning can be compared and aligned to those sought in job roles. This work, and lessons learnt from the case study examples of computing-related degree programmes in the UK, also highlights exciting opportunities for educational providers to develop and accommodate personalised learning into existing formal education structures across a range of settings and contexts.

We’ll be joined by Rupert Ward and some of the other co-authors of the paper, who will give a five-minute lightning talk to kick off our discussion. All welcome; as usual we’ll be meeting on Zoom, details at sigcse.cs.manchester.ac.uk/join-us.

References

  1. Ward, Rupert; Crick, Tom; Davenport, James H.; Hanna, Paul; Hayes, Alan; Irons, Alastair; Miller, Keith; Moller, Faron; Prickett, Tom; Walters, Julie (2023) Using Skills Profiling to Enable Badges and Micro-Credentials to be Incorporated into Higher Education Courses. Journal of Interactive Media in Education, 2023(1). Ubiquity Press. DOI:10.5334/jime.807

Join us to discuss how theory is used in assessment and feedback on Monday 3rd July at 2pm BST

Image of a test from flaticon.com

A good theory can be the most concentrated form of knowledge. By encapsulating an infinite number of cases, a theory can make predictions rather than just describing a finite number of disjointed facts. So how does theory feature in research about assessment and feedback? Join us on Monday 3rd July at 2pm BST (UTC+1) to discuss a paper investigating this question by Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai from the University of Hong Kong and Deakin University [1]. From the abstract of their paper:

Assessment and feedback research constitutes its own ‘silo’ amidst the higher education research field. Theory has been cast as an important but absent aspect of higher education research. This may be a particular issue in empirical assessment research which often builds on the conceptualisation of assessment as objective measurement. So, how does theory feature in assessment and feedback research? We conduct a critical review of recent empirical articles (2020, N = 56) to understand how theory is engaged with in this field. We analyse the repertoire of theories and the mechanisms for putting these theories into practice. 21 studies drew explicitly on educational theory. Theories were most commonly used to explain and frame assessment. Critical theories were notably absent, and quantitative studies engaged with theory in a largely instrumental manner. We discuss the findings through the concept of reflexivity, conceptualising engagement with theory as a practice with both benefits and pitfalls. We therefore call for further reflexivity in the field of assessment and feedback research through deeper and interdisciplinary engagement with theories to avoid further siloing of the field.

All welcome; as usual we’ll be meeting on Zoom, details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Jane Waite at Queen Mary, University of London, for nominating this month’s paper.

References

  1. Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai (2023) How is theory used in assessment and feedback research? A critical review. Assessment & Evaluation in Higher Education, 48(1), 77–94. DOI:10.1080/02602938.2022.2047154

Join us to discuss what counts as Computing Education Research on Monday 5th September at 2pm BST

Picture of Glasgow Cathedral (St Mungo’s) on Wikimedia Commons w.wiki/5aFU

Science is a broad church, full of narrow minds, trained to know ever more about even less. That’s according to Steve Jones [1], but in Computing Education Research (CER) are we being too narrow-minded about what counts (and what doesn’t count) as a contribution? Join us to discuss this via a paper by Steve Draper and Joseph Maguire at the University of Glasgow, recently published in TOCE [2]. From the abstract:

The overall aim of this paper is to stimulate discussion about the activities within CER, and to develop a more thoughtful and explicit perspective on the different types of research activity within CER, and their relationships with each other. While theories may be the most valuable outputs of research to those wishing to apply them, for researchers themselves there are other kinds of contribution important to progress in the field. This is what relates it to the immediate subject of this special journal issue on theory in CER. We adopt as our criterion for value “contribution to knowledge”. This paper’s main contributions are: A set of 12 categories of contribution which together indicate the extent of this terrain of contributions to research. Leading into that is a collection of ideas and misconceptions which are drawn on in defining and motivating “ground rules”, which are hints and guidance on the need for various often neglected categories. These are also helpful in justifying some additional categories which make the set as a whole more useful in combination. These are followed by some suggested uses for the categories, and a discussion assessing how the success of the paper might be judged.

All welcome; as usual we’ll be meeting on Zoom, details at sigcse.cs.manchester.ac.uk/join-us.

References

  1. Steve Jones (2007) Coral: A Pessimist in Paradise, Little Brown
  2. Steve Draper and Joseph Maguire (2022) The different types of contributions to knowledge (in CER): All needed, but not all recognised. ACM Transactions on Computing Education (TOCE). DOI:10.1145/3487053

Join us to discuss the implications of the OpenAI Codex on introductory programming on Monday 4th July at 2pm BST

Automatic code generators have been with us a while, but how do modern AI-powered bots perform on introductory programming assignments? Join us to discuss the implications of the OpenAI Codex on introductory programming courses on Monday 4th July at 2pm BST. We’ll be discussing a paper by James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly and James Prather [1] at our monthly SIGCSE journal club meetup on Zoom. Here is the abstract:

Recent advances in artificial intelligence have been driven by an exponential growth in digitised data. Natural language processing, in particular, has been transformed by machine learning models such as OpenAI’s GPT-3 which generates human-like text so realistic that its developers have warned of the dangers of its misuse. In recent months OpenAI released Codex, a new deep learning model trained on Python code from more than 50 million GitHub repositories. Provided with a natural language description of a programming problem as input, Codex generates solution code as output. It can also explain (in English) input code, translate code between programming languages, and more. In this work, we explore how Codex performs on typical introductory programming problems. We report its performance on real questions taken from introductory programming exams and compare it to results from students who took these same exams under normal conditions, demonstrating that Codex outscores most students. We then explore how Codex handles subtle variations in problem wording using several published variants of the well-known “Rainfall Problem” along with one unpublished variant we have used in our teaching. We find the model passes many test cases for all variants. We also explore how much variation there is in the Codex generated solutions, observing that an identical input prompt frequently leads to very different solutions in terms of algorithmic approach and code length. Finally, we discuss the implications that such technology will have for computing education as it continues to evolve, including both challenges and opportunities. (see accompanying slides and sigarch.org/coping-with-copilot/)
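
For readers unfamiliar with it, the “Rainfall Problem” mentioned in the abstract is a classic introductory programming exercise, famously difficult for novices, which is why the paper uses its variants to probe how Codex copes with subtle changes in problem wording. As a rough illustration only (the sentinel value, the handling of negative readings and the function signature below are our assumptions based on the commonly cited formulation, not taken from the paper’s variants), a minimal Python sketch looks like this:

    # Hypothetical sketch of the classic Rainfall Problem; the details are
    # assumptions, as the published variants differ in exact wording.
    def average_rainfall(readings):
        """Average the non-negative readings seen before the 99999 sentinel."""
        total = 0.0
        count = 0
        for value in readings:
            if value == 99999:  # sentinel marks the end of input
                break
            if value < 0:       # skip invalid (negative) readings
                continue
            total += value
            count += 1
        return total / count if count else 0

    print(average_rainfall([12, -5, 3, 0, 99999, 7]))  # -> 5.0

Deceptively simple tasks like this combine a loop, two guard conditions and an edge case (no valid input), which is exactly the kind of composition the abstract reports Codex handling well across variants.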

All welcome, details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Jim Paterson at Glasgow Caledonian University for nominating this month’s paper.

References

  1. James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly and James Prather (2022) The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming. ACE ’22: Australasian Computing Education Conference, pages 10–19. DOI:10.1145/3511861.3511863

Join us to discuss spatial skills in engineering on Monday 9th May at 2pm BST

CC BY-SA licensed image of a Rubik’s cube by Booyabazooka via Wikimedia Commons w.wiki/He9

Spatial skills can be beneficial in engineering and computing, but how are the two connected, and why do spatial abilities help engineers? Join us to discuss this via a paper on spatial skills training by Jack Parkinson and colleagues at the University of Glasgow [1]. Here is the abstract:

We have been training spatial skills for Computing Science students over several years with positive results, both in terms of the students’ spatial skills and their CS outcomes. The delivery and structure of the training has been modified over time and carried out at several institutions, resulting in variations across each intervention. This article describes six distinct case studies of training deliveries, highlighting the main challenges faced and some important takeaways. Our goal is to provide useful guidance based on our varied experience for any practitioner considering the adoption of spatial skills training for their students.

All welcome. As usual we’ll be meeting on Zoom; details are in the Slack channel and at sigcse.cs.manchester.ac.uk/join-us. Thanks to Steven Bradley for suggesting the paper.

References

  1. Jack Parkinson, Ryan Bockmon, Quintin Cutts, Michael Liut, Andrew Petersen and Sheryl Sorby (2021) Practice report: six studies of spatial skills training in introductory computer science. ACM Inroads, 12(4), 18–29. DOI:10.1145/3494574

Join us to discuss the feeling of learning ❤️ (vs. actual learning) on Monday 4th April at 2pm BST

Learning can be an emotional process, and we often don’t realise when we are actually learning. When you’re listening to an expert explain something well, it’s easy to mistake the speaker’s smooth delivery for your own understanding. You might feel like you’re learning, but actual learning is often hard work and feels uncomfortable. Join us to discuss actual learning vs. the feeling of learning via a paper by Louis Deslauriers, Logan S. McCarty, Kelly Miller, Kristina Callaghan and Greg Kestin at Harvard University [1]. Here is the abstract:

We compared students’ self-reported perception of learning with their actual learning under controlled conditions in large-enrollment introductory college physics courses taught using 1) active instruction (following best practices in the discipline) and 2) passive instruction (lectures by experienced and highly rated instructors). Both groups received identical class content and handouts, students were randomly assigned, and the instructor made no effort to persuade students of the benefit of either method. Students in active classrooms learned more (as would be expected based on prior research), but their perception of learning, while positive, was lower than that of their peers in passive environments. This suggests that attempts to evaluate instruction based on students’ perceptions of learning could inadvertently promote inferior (passive) pedagogical methods. For instance, a superstar lecturer could create such a positive feeling of learning that students would choose those lectures over active learning. Most importantly, these results suggest that when students experience the increased cognitive effort associated with active learning, they initially take that effort to signify poorer learning. That disconnect may have a detrimental effect on students’ motivation, engagement, and ability to self-regulate their own learning. Although students can, on their own, discover the increased value of being actively engaged during a semester-long course, their learning may be impaired during the initial part of the course. We discuss strategies that instructors can use, early in the semester, to improve students’ response to being actively engaged in the classroom.

From [1] and [2]

Thanks to Uli Sattler and Andrea Schalk for highlighting the paper. All welcome. As usual we’ll be meeting on Zoom; details are in the Slack channel and at sigcse.cs.manchester.ac.uk/join-us.

References

  1. Louis Deslauriers; Logan S. McCarty; Kelly Miller; Kristina Callaghan; Greg Kestin (2019) Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251–19257. DOI:10.1073/PNAS.1821936116, PMC:6765278, PMID:31484770
  2. Jill Barshay (2022) College students often don’t know when they’re learning: Harvard experiment reveals the psychological grip of lectures, The Hechinger Report

Join us to discuss conversational programming on Monday 7th March at 2pm GMT

Somewhere between the traditional division of non-programmers and programmers, there is a third category: conversational programmers. These are people who learn (or want to learn) programming so that they can speak in the “programmer’s language” and work better with software engineers. Join us to discuss conversational programming via a paper by Katie Cunningham et al. [1], which won a best paper award at SIGCSE 2022 🏆

As the number of conversational programmers grows, computing educators are increasingly tasked with a paradox: to teach programming to people who want to communicate effectively about the internals of software, but not write code themselves. Designing instruction for conversational programmers is particularly challenging because their learning goals are not well understood, and few strategies exist for teaching to their needs. To address these gaps, we analyse the research on programming learning goals of conversational programmers from survey and interview studies of this population. We identify a major theme from these learners’ goals: they often involve making connections between code’s real-world purpose and various internal elements of software. To better understand the knowledge and skills conversational programmers require, we apply the Structure Behaviour Function framework to compare their learning goals to those of aspiring professional developers. Finally, we argue that instructional strategies for conversational programmers require a focus on high-level program behaviour that is not typically supported in introductory programming courses.
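
To make the authors’ distinction concrete, here is a small illustration of our own (hypothetical, not taken from the paper): a conversational programmer’s goal is the behavioural summary in the docstring, i.e. what the function does in real-world terms, while the loop and the sort below are the internal elements an aspiring professional developer must also master.

    # Hypothetical example (ours, not the paper's) contrasting high-level
    # behaviour with the internal elements of the code.
    def top_spenders(orders, limit):
        """Return the names of the `limit` customers who spent the most."""
        totals = {}
        for customer, amount in orders:          # tally spend per customer
            totals[customer] = totals.get(customer, 0) + amount
        ranked = sorted(totals, key=totals.get, reverse=True)  # biggest first
        return ranked[:limit]

    orders = [("Ada", 120), ("Grace", 80), ("Ada", 30), ("Alan", 200)]
    print(top_spenders(orders, 2))  # -> ['Alan', 'Ada']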

All welcome. As usual we’ll be meeting on Zoom; details are in the Slack channel and at sigcse.cs.manchester.ac.uk/join-us.

References

  1. Kathryn Cunningham, Yike Qiao, Alex Feng and Eleanor O’Rourke (2022) Bringing “High-level” Down to Earth: Gaining Clarity in Conversational Programmer Learning Goals. SIGCSE 2022: Proceedings of the 53rd ACM Technical Symposium on Computer Science Education, pages 551–557. DOI:10.1145/3478431.3499370
Video summary of the paper by Katie Cunningham

Join us to re-examine inequalities in Computer Science participation on Monday 4th October at 2pm BST

Loaded scales image by Carole J. Lee on Wikimedia Commons w.wiki/42Rp

It’s no secret that both Computer Science and engineering have inequalities in their participation. Join us to re-examine and discuss these inequalities via a paper by Maria Kallia and Quintin Cutts [1] on Monday 4th October at 2pm BST. This won a best paper award at ICER 2021. From the abstract:

Concerns about participation in computer science at all levels of education continue to rise, despite the substantial efforts of research, policy, and world-wide education initiatives. In this paper, which is guided by a systematic literature review, we investigate the issue of inequalities in participation by bringing a theoretical lens from the sociology of education, and particularly, Bourdieu’s theory of social reproduction. By paying particular attention to Bourdieu’s theorising of capital, habitus, and field, we first establish an alignment between Bourdieu’s theory and what is known about inequalities in computer science (CS) participation; we demonstrate how the factors affecting participation constitute capital forms that individuals possess to leverage within the computer science field, while students’ views and dispositions towards computer science and scientists are rooted in their habitus which influences their successful assimilation in computer science fields. Subsequently, by projecting the issue of inequalities in CS participation to Bourdieu’s sociological theorisations, we explain that because most interventions do not consider the issue holistically and not in formal education settings, the reported benefits do not continue in the long-term which reproduces the problem. Most interventions have indeed contributed significantly to the issue, but they have either focused on developing some aspects of computer science capital or on designing activities that, although inclusive in terms of their content and context, attempt to re-construct students’ habitus to “fit” in the already “pathologized” computer science fields. Therefore, we argue that to contribute significantly to the equity and participation issue in computer science, research and interventions should focus on restructuring the computer science field and the rules of participation, as well as on building holistically students’ computer science capital and habitus within computer science fields.

A video of Maria Kallia presenting the paper at ICER 2021

All welcome. As usual, we’ll be meeting on Zoom. Thanks to Steven Bradley for suggesting this month’s paper.

References

  1. Maria Kallia and Quintin Cutts (2021) Re-Examining Inequalities in Computer Science Participation from a Bourdieusian Sociological Perspective. Proceedings of the 17th ACM Conference on International Computing Education Research (ICER 2021), pages 379–392. DOI:10.1145/3446871.3469763

Join us to discuss the tyranny of content on Monday 5th July at 2pm BST

CC-BY-SA image of Bill Gates by Kuhlmann MSC via Wikimedia Commons w.wiki/3W7k

If content is king, then his rule is tyrannical. Bill Gates once remarked that “Content is King”, but in the kingdom of education, how much do educators oppressively inflict content on their learners? What can be done to reduce the tyranny of content? We’ll be discussing this via a paper by Christina I. Petersen et al. [1]; here’s the abstract:

Instructors have inherited a model for conscientious instruction that suggests they must cover all the material outlined in their syllabus, and yet this model frequently diverts time away from allowing students to engage meaningfully with the content during class. We outline the historical forces that may have conditioned this teacher-centered model as well as the disciplinary pressures that inadvertently reward it. As a way to guide course revision and move to a learner-centered teaching approach, we propose three evidence-based strategies that instructors can adopt: 1) identify the core concepts and competencies for your course; 2) create an organizing framework for the core concepts and competencies; and 3) teach students how to learn in your discipline. We further outline examples of actions that instructors can incorporate to implement each of these strategies. We propose that moving from a content-coverage approach to these learner-centered strategies will help students better learn and retain information and apply it to new situations.

All welcome. As usual, we’ll be meeting on Zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. Petersen, Christina I.; Baepler, Paul; Beitz, Al; Ching, Paul; Gorman, Kristen S.; Neudauer, Cheryl L.; Rozaitis, William; Walker, J. D.; Wingert, Deb; Reiness, C. Gary (2020) The Tyranny of Content: “Content Coverage” as a Barrier to Evidence-Based Teaching Approaches and Ways to Overcome It. CBE—Life Sciences Education, 19(2), ar17. DOI:10.1187/cbe.19-04-0079

Join us to discuss cognitive load on Monday 7th June at 2pm

Cognitive Load Theory provides a basis for understanding the learning process. It has been widely used to improve the teaching and learning of many subjects, including Computer Science. But how can it help us build better collaborative learning experiences? Join us to discuss this via a paper by Paul Kirschner, John Sweller, Femke Kirschner & Jimmy Zambrano R. [1] From the abstract:

Cognitive load theory has traditionally been associated with individual learning. Based on evolutionary educational psychology and our knowledge of human cognition, particularly the relations between working memory and long-term memory, the theory has been used to generate a variety of instructional effects. Though these instructional effects also influence the efficiency and effectiveness of collaborative learning, be it computer supported or face-to-face, they are often not considered either when designing collaborative learning situations/environments or researching collaborative learning. One reason for this omission is that cognitive load theory has only sporadically concerned itself with certain particulars of collaborative learning such as the concept of a collective working memory when collaborating along with issues associated with transactive activities and their concomitant costs which are inherent to collaboration. We illustrate how and why cognitive load theory, by adding these concepts, can throw light on collaborative learning and generate principles specific to the design and study of collaborative learning.

Thanks to Nicola Looker for suggesting this month’s paper. As usual, we’ll be meeting on Zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. Kirschner, Paul A.; Sweller, John; Kirschner, Femke; Zambrano R., Jimmy (2018) From Cognitive Load Theory to Collaborative Cognitive Load Theory. International Journal of Computer-Supported Collaborative Learning, 13(2), 213–233. DOI:10.1007/s11412-018-9277-y