Join us to discuss spatial skills in engineering on Monday 9th May at 2pm BST

CC BY-SA licensed image of a Rubik’s cube by Booyabazooka via Wikimedia Commons w.wiki/He9

Spatial skills can be beneficial in engineering and computing, but how are the two connected, and why do spatial abilities help? Join us to discuss this via a paper on spatial skills training by Jack Parkinson and colleagues at the University of Glasgow. Here is the abstract:

We have been training spatial skills for Computing Science students over several years with positive results, both in terms of the students’ spatial skills and their CS outcomes. The delivery and structure of the training has been modified over time and carried out at several institutions, resulting in variations across each intervention. This article describes six distinct case studies of training deliveries, highlighting the main challenges faced and some important takeaways. Our goal is to provide useful guidance based on our varied experience for any practitioner considering the adoption of spatial skills training for their students.

see [1]

All welcome. As usual we’ll be meeting on zoom; details are in the slack channel, see sigcse.cs.manchester.ac.uk/join-us. Thanks to Steven Bradley for suggesting the paper.

References

  1. Jack Parkinson, Ryan Bockmon, Quintin Cutts, Michael Liut, Andrew Petersen and Sheryl Sorby (2021) Practice report: six studies of spatial skills training in introductory computer science, ACM Inroads Volume 12, issue 4, pp 18–29 DOI: 10.1145/3494574

Join us to discuss the feeling of learning ❤️ (vs. actual learning) on Monday 4th April at 2pm BST

Learning can be an emotional process and we often don’t realise when we are actually learning. When you’re listening to an expert explain something well, it’s easy to mistake the speaker’s smooth delivery for your own understanding. You might feel like you’re learning, but actual learning is often hard work and feels uncomfortable. Join us to discuss actual learning vs. the feeling of learning via a paper by Louis Deslauriers, Logan S. McCarty, Kelly Miller, Kristina Callaghan, and Greg Kestin at Harvard University. Here is the abstract:

We compared students’ self-reported perception of learning with their actual learning under controlled conditions in large-enrollment introductory college physics courses taught using 1) active instruction (following best practices in the discipline) and 2) passive instruction (lectures by experienced and highly rated instructors). Both groups received identical class content and handouts, students were randomly assigned, and the instructor made no effort to persuade students of the benefit of either method. Students in active classrooms learned more (as would be expected based on prior research), but their perception of learning, while positive, was lower than that of their peers in passive environments. This suggests that attempts to evaluate instruction based on students’ perceptions of learning could inadvertently promote inferior (passive) pedagogical methods. For instance, a superstar lecturer could create such a positive feeling of learning that students would choose those lectures over active learning. Most importantly, these results suggest that when students experience the increased cognitive effort associated with active learning, they initially take that effort to signify poorer learning. That disconnect may have a detrimental effect on students’ motivation, engagement, and ability to self-regulate their own learning. Although students can, on their own, discover the increased value of being actively engaged during a semester-long course, their learning may be impaired during the initial part of the course. We discuss strategies that instructors can use, early in the semester, to improve students’ response to being actively engaged in the classroom.

From [1] and [2]

Thanks to Uli Sattler and Andrea Schalk for highlighting the paper. All welcome. As usual we’ll be meeting on zoom; details are in the slack channel, see sigcse.cs.manchester.ac.uk/join-us.

References

  1. Louis Deslauriers, Logan S. McCarty, Kelly Miller, Kristina Callaghan and Greg Kestin (2019) Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom, Proceedings of the National Academy of Sciences of the United States of America 116(39): 19251–19257 DOI: 10.1073/pnas.1821936116 PMC: 6765278 PMID: 31484770
  2. Jill Barshay (2022) College students often don’t know when they’re learning: Harvard experiment reveals the psychological grip of lectures, The Hechinger Report

Join us to discuss conversational programming on Monday 7th March at 2pm GMT

Somewhere between the traditional division of non-programmers and programmers, there is a third category: conversational programmers. These are people who learn (or want to learn) programming so that they can speak the “programmer’s language” and work better with software engineers. Join us to discuss conversational programming via a paper by Katie Cunningham et al. [1], which won a best paper award at SIGCSE 2022 🏆

As the number of conversational programmers grows, computing educators are increasingly tasked with a paradox: to teach programming to people who want to communicate effectively about the internals of software, but not write code themselves. Designing instruction for conversational programmers is particularly challenging because their learning goals are not well understood, and few strategies exist for teaching to their needs. To address these gaps, we analyse the research on programming learning goals of conversational programmers from survey and interview studies of this population. We identify a major theme from these learners’ goals: they often involve making connections between code’s real-world purpose and various internal elements of software. To better understand the knowledge and skills conversational programmers require, we apply the Structure Behaviour Function framework to compare their learning goals to those of aspiring professional developers. Finally, we argue that instructional strategies for conversational programmers require a focus on high-level program behaviour that is not typically supported in introductory programming courses.

see [1] below


All welcome. As usual we’ll be meeting on zoom; details are in the slack channel, see sigcse.cs.manchester.ac.uk/join-us.

References

  1. Kathryn Cunningham, Yike Qiao, Alex Feng and Eleanor O’Rourke (2022) Bringing “High-level” Down to Earth: Gaining Clarity in Conversational Programmer Learning Goals in SIGCSE 2022: Proceedings of the 53rd ACM Technical Symposium on Computer Science Education, Pages 551–557 DOI: 10.1145/3478431.3499370
Video summary of the paper by Katie Cunningham

Join us to re-examine inequalities in Computer Science participation on Monday 4th October at 2pm BST

Loaded scales image by Carole J. Lee on Wikimedia Commons w.wiki/42Rp

It’s no secret that both Computer Science and engineering have inequalities in their participation. Join us to re-examine and discuss these inequalities via a paper by Maria Kallia and Quintin Cutts [1] on Monday 4th October at 2pm BST. This won a best paper award at ICER 2021. From the abstract:

Concerns about participation in computer science at all levels of education continue to rise, despite the substantial efforts of research, policy, and world-wide education initiatives. In this paper, which is guided by a systematic literature review, we investigate the issue of inequalities in participation by bringing a theoretical lens from the sociology of education, and particularly, Bourdieu’s theory of social reproduction. By paying particular attention to Bourdieu’s theorising of capital, habitus, and field, we first establish an alignment between Bourdieu’s theory and what is known about inequalities in computer science (CS) participation; we demonstrate how the factors affecting participation constitute capital forms that individuals possess to leverage within the computer science field, while students’ views and dispositions towards computer science and scientists are rooted in their habitus which influences their successful assimilation in computer science fields. Subsequently, by projecting the issue of inequalities in CS participation to Bourdieu’s sociological theorisations, we explain that because most interventions do not consider the issue holistically and not in formal education settings, the reported benefits do not continue in the long-term which reproduces the problem. Most interventions have indeed contributed significantly to the issue, but they have either focused on developing some aspects of computer science capital or on designing activities that, although inclusive in terms of their content and context, attempt to re-construct students’ habitus to “fit” in the already “pathologized” computer science fields. Therefore, we argue that to contribute significantly to the equity and participation issue in computer science, research and interventions should focus on restructuring the computer science field and the rules of participation, as well as on building holistically students’ computer science capital and habitus within computer science fields.

A video of Maria presenting the paper at ICER 2021


All welcome. As usual, we’ll be meeting on zoom. Thanks to Steven Bradley for suggesting this month’s paper.

References

  1. Maria Kallia and Quintin Cutts (2021) Re-Examining Inequalities in Computer Science Participation from a Bourdieusian Sociological Perspective. In Proceedings of the 17th ACM Conference on International Computing Education Research (ICER) 2021, Pages 379–392 DOI: 10.1145/3446871.3469763

Join us to discuss the tyranny of content on Monday 5th July at 2pm BST

CC-BY-SA image of Bill Gates by Kuhlmann MSC via Wikimedia Commons w.wiki/3W7k

If content is king, then his rule is tyrannical. Bill Gates once remarked that “Content is King”, but in the kingdom of education, how much content do educators oppressively inflict on their learners? What can be done to reduce the tyranny of content? We’ll be discussing this via a paper by Christina I. Petersen et al. [1]; here’s the abstract:

Instructors have inherited a model for conscientious instruction that suggests they must cover all the material outlined in their syllabus, and yet this model frequently diverts time away from allowing students to engage meaningfully with the content during class. We outline the historical forces that may have conditioned this teacher-centered model as well as the disciplinary pressures that inadvertently reward it. As a way to guide course revision and move to a learner-centered teaching approach, we propose three evidence-based strategies that instructors can adopt: 1) identify the core concepts and competencies for your course; 2) create an organizing framework for the core concepts and competencies; and 3) teach students how to learn in your discipline. We further outline examples of actions that instructors can incorporate to implement each of these strategies. We propose that moving from a content-coverage approach to these learner-centered strategies will help students better learn and retain information and apply it to new situations.

All welcome. As usual, we’ll be meeting on zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. Christina I. Petersen, Paul Baepler, Al Beitz, Paul Ching, Kristen S. Gorman, Cheryl L. Neudauer, William Rozaitis, J. D. Walker, Deb Wingert and C. Gary Reiness (2020) The Tyranny of Content: “Content Coverage” as a Barrier to Evidence-Based Teaching Approaches and Ways to Overcome It, CBE—Life Sciences Education 19(2): ar17 DOI: 10.1187/cbe.19-04-0079

Join us to discuss cognitive load on Monday 7th June at 2pm BST

Cognitive Load Theory provides a basis for understanding the learning process. It has been widely used to improve the teaching and learning of many subjects including Computer Science. But how can it help us build better collaborative learning experiences? Join us to discuss via a paper by Paul Kirschner, John Sweller, Femke Kirschner & Jimmy Zambrano R. [1] From the abstract:

Cognitive load theory has traditionally been associated with individual learning. Based on evolutionary educational psychology and our knowledge of human cognition, particularly the relations between working memory and long-term memory, the theory has been used to generate a variety of instructional effects. Though these instructional effects also influence the efficiency and effectiveness of collaborative learning, be it computer supported or face-to-face, they are often not considered either when designing collaborative learning situations/environments or researching collaborative learning. One reason for this omission is that cognitive load theory has only sporadically concerned itself with certain particulars of collaborative learning such as the concept of a collective working memory when collaborating along with issues associated with transactive activities and their concomitant costs which are inherent to collaboration. We illustrate how and why cognitive load theory, by adding these concepts, can throw light on collaborative learning and generate principles specific to the design and study of collaborative learning.

Thanks to Nicola Looker for suggesting this month’s paper. As usual, we’ll be meeting on zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. Paul A. Kirschner, John Sweller, Femke Kirschner and Jimmy Zambrano R. (2018) From Cognitive Load Theory to Collaborative Cognitive Load Theory, International Journal of Computer-Supported Collaborative Learning 13(2): 213–233 DOI: 10.1007/s11412-018-9277-y

Join us to discuss what goes on in the mind of Teaching Assistants on Monday 10th May at 2pm BST

Thinking icon via flaticon.com

Both graduate and undergraduate teaching assistants (TAs) are crucial to facilitating students’ learning. What goes on inside the mind of a teaching assistant? How can understanding this help us train TAs better for the roles they play in education? Join us to discuss this via a paper by Julia M. Markel and Philip Guo [1]. From the abstract:

As CS enrolments continue to grow, introductory courses are employing more undergraduate TAs. One of their main roles is performing one-on-one tutoring in the computer lab to help students understand and debug their programming assignments. What goes on in the mind of an undergraduate TA when they are helping students with programming? In this experience report, we present firsthand accounts from an undergraduate TA documenting her 36 hours of in-lab tutoring for a CS2 course, where she engaged in 69 one-on-one help sessions. This report provides a unique perspective from an undergraduate’s point-of-view rather than a faculty member’s. We summarise her experiences by constructing a four-part model of tutoring interactions: a) The tutor begins the session with an initial state of mind (e.g., their energy/focus level, perceived time pressure). b) They observe the student’s outward state upon arrival (e.g., how much they seem to care about learning). c) Using that observation, the tutor infers what might be going on inside the student’s mind. d) The combination of what goes on inside the tutor’s and student’s minds affects tutoring interactions, which progress from diagnosis to planning to an explain-code-react loop to post-resolution activities. We conclude by discussing ways that this model can be used to design scaffolding for training novice TAs and software tools to help TAs scale their efforts to larger classes.

This paper was one of nine best papers at SIGCSE 2021; there’s a video of the paper presentation on pathable.sigcse2021.org. All welcome. As usual, we’ll be meeting on zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. Julia M. Markel and Philip Guo (2021) Inside the Mind of a CS Undergraduate TA: A Firsthand Account of Undergraduate Peer Tutoring in Computer Labs, SIGCSE ’21: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, March 2021, Pages 502–508 DOI: 10.1145/3408877.3432533 (open access)

Join us to discuss learning sciences for computing education on Monday 12th April at 2pm BST

Scientist icon made by Eucalyp from flaticon.com

Learning sciences aims to improve our theoretical understanding of how people learn, while computing education research investigates how people learn to compute. Historically, these fields existed independently, although attempts have been made to merge them. Where do these disciplines overlap and how can they be integrated further? Join us to discuss learning sciences for computing education via a book chapter by Lauren Margulieux, Brian Dorn and Kristin Searle [1]. From the abstract:

This chapter discusses potential and current overlaps between the learning sciences and computing education research in their origins, theory, and methodology. After an introduction to learning sciences, the chapter describes how both learning sciences and computing education research developed as distinct fields from cognitive science. Despite common roots and common goals, the authors argue that the two fields are less integrated than they should be and recommend theories and methodologies from the learning sciences that could be used more widely in computing education research. The chapter selects for discussion one general learning theory from each of cognition (constructivism), instructional design (cognitive apprenticeship), social and environmental features of learning environments (sociocultural theory), and motivation (expectancy-value theory). Then the chapter describes methodology for design-based research to apply and test learning theories in authentic learning environments. The chapter emphasizes the alignment between design-based research and current research practices in computing education. Finally, the chapter discusses the four stages of learning sciences projects. Examples from computing education research are given for each stage to illustrate the shared goals and methods of the two fields and to argue for more integration between them.

There’s a five-minute summary of the chapter ten minutes into the video below:



All welcome. As usual, we’ll be meeting on zoom; see sigcse.cs.manchester.ac.uk/join-us for details. Thanks to Sue Sentance and Nicola Looker for this month’s paper suggestions.

References

  1. Lauren E. Margulieux, Brian Dorn and Kristin A. Searle (2019) Learning Sciences for Computing Education, pp 208–230 in S. A. Fincher & A. V. Robins (Eds.) The Cambridge Handbook of Computing Education Research. Cambridge, UK: Cambridge University Press DOI: 10.1017/9781108654555.009

Join us to discuss teaching social responsibility and justice in Computer Science on Monday 1st March at 2pm GMT

Scales of justice icon made by monkik from flaticon.com

With great power comes great responsibility. [1] Given their growing power in the twenty-first century, computer scientists have a duty to society to use that power responsibly and justly. How can we teach this kind of social responsibility and ethics to engineering students? Join us to discuss teaching social justice in computer science via a paper by Rodrigo Ferreira and Moshe Vardi at Rice University in Houston, Texas, published at the SIGCSE 2021 conference (sigcse2021.sigcse.org) [2]. From the abstract of the preprint:

As ethical questions around the development of contemporary computer technologies have become an increasing point of public and political concern, computer science departments in universities around the world have placed renewed emphasis on tech ethics undergraduate classes as a means to educate students on the large scale social implications of their actions. Committed to the idea that tech ethics is an essential part of the undergraduate computer science educational curriculum, at Rice University this year we piloted a redesigned version of our Ethics and Accountability in Computer Science class. This effort represents our first attempt at implementing a “deep” tech ethics approach to the course.

Incorporating elements from philosophy of technology, critical media theory, and science and technology studies, we encouraged students to learn not only ethics in a “shallow” sense, examining abstract principles or values to determine right and wrong, but rather looking at a series of “deeper” questions more closely related to present issues of social justice and relying on a structural understanding of these problems to develop potential socio-technical solutions. In this article, we report on our implementation of this redesigned approach. We describe in detail the rationale and strategy for implementing this approach, present key elements of the redesigned syllabus, and discuss final student reflections and course evaluations. To conclude, we examine course achievements, limitations, and lessons learned toward the future, particularly in regard to the number of escalating social protests and issues involving Covid-19.

This paper got me thinking:

Houston, we’ve had your problem! [3]

After we paged the authors in Houston with the message above, there was radio silence.

Beep - beep - beep [white noise] Beep - beep - beep...

Hello Manchester, this is Houston. Can we join you?

So we’re delighted to be joined LIVE by the authors of the paper, Rodrigo Ferreira and Moshe Vardi, from Houston, Texas. They’ll give a lightning talk outlining the paper before we discuss it together in smaller breakout groups.

Their paper describes a problem everyone in the world has had in teaching ethics in Computer Science recently. How can we make computing more ethical?

All welcome. As usual, we’ll be meeting on zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. Spider-Man (1962) https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility
  2. Rodrigo Ferreira and Moshe Vardi (2021) Deep Tech Ethics: An Approach to Teaching Social Justice in Computer Science in Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), March 13–20, 2021, Virtual Event, USA. ACM, New York, NY, USA. DOI: 10.1145/3408877.3432449
  3. Jack Swigert (1970) https://en.wikipedia.org/wiki/Houston,_we_have_a_problem

Join us to discuss failure rates in introductory programming courses on Monday 1st February at 2pm GMT

Icons made by freepik from flaticon.com

Following on from our discussion of ungrading, this month we’ll be discussing pass/fail rates in introductory programming courses. [1] Here is the abstract:

Vast numbers of publications in computing education begin with the premise that programming is hard to learn and hard to teach. Many papers note that failure rates in computing courses, and particularly in introductory programming courses, are higher than their institutions would like. Two distinct research projects in 2007 and 2014 concluded that average success rates in introductory programming courses world-wide were in the region of 67%, and a recent replication of the first project found an average pass rate of about 72%. The authors of those studies concluded that there was little evidence that failure rates in introductory programming were concerningly high.

However, there is no absolute scale by which pass or failure rates are measured, so whether a failure rate is concerningly high will depend on what that rate is compared against. As computing is typically considered to be a STEM subject, this paper considers how pass rates for introductory programming courses compare with those for other introductory STEM courses. A comparison of this sort could prove useful in demonstrating whether the pass rates are comparatively low, and if so, how widespread such findings are.

This paper is the report of an ITiCSE working group that gathered information on pass rates from several institutions to determine whether prior results can be confirmed, and conducted a detailed comparison of pass rates in introductory programming courses with pass rates in introductory courses in other STEM disciplines.

The group found that pass rates in introductory programming courses appear to average about 75%; that there is some evidence that they sit at the low end of the range of pass rates in introductory STEM courses; and that pass rates both in introductory programming and in other introductory STEM courses appear to have remained fairly stable over the past five years. All of these findings must be regarded with some caution, for reasons that are explained in the paper. Despite the lack of evidence that pass rates are substantially lower than in other STEM courses, there is still scope to improve the pass rates of introductory programming courses, and future research should continue to investigate ways of improving student learning in introductory programming courses.

Anyone is welcome to join us. As usual, we’ll be meeting on zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

Thanks to Brett Becker and Joseph Allen for this month’s #paper-suggestions via our slack channel at uk-acm-sigsce.slack.com.

References

  1. Simon, Andrew Luxton-Reilly, Vangel V. Ajanovski, Eric Fouh, Christabel Gonsalvez, Juho Leinonen, Jack Parkinson, Matthew Poole, Neena Thota (2019) Pass Rates in Introductory Programming and in other STEM Disciplines in ITiCSE-WGR ’19: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, Pages 53–71 DOI: 10.1145/3344429.3372502