What paper should we discuss on Monday 10th May at 2pm BST?

Our next journal club (see the upcoming events page) is scheduled for Monday 10th May at 2pm BST. What paper should we discuss? We could pick one of the nine best papers from SIGCSE 2021 last month (listed below). As usual, you can send us your paper suggestions via our slack channel, via twitter @ukicse, by email, or by posting a comment below by Friday 16th April at 6pm BST.

Best Papers for Computing Education Research

  1. Real Talk: Saturated Sites of Violence in CS Education. Yolanda A. Rankin, Florida State University; Jakita O. Thomas, Auburn University; Sheena Erete, DePaul University
  2. Investigating the Impact of the COVID-19 Pandemic on Computing Students’ Sense of Belonging. Catherine Mooney, University College Dublin; Brett A. Becker, University College Dublin
  3. Superficial Code-guise: Investigating the Impact of Surface Feature Changes on Students’ Programming Question Scores. Max Fowler, University of Illinois at Urbana-Champaign; Craig Zilles, University of Illinois at Urbana-Champaign

Best Papers for Experience Reports and Tools

  1. How a Remote Video Game Coding Camp Improved Autistic College Students’ Self-Efficacy in Communication. Andrew Begel, Microsoft Research; James Dominic, Clemson University; Conner Phillis, KeyMark, Inc.; Thomas Beeson, Clemson University; Paige Rodeghero, Clemson University
  2. Inside the Mind of a CS Undergraduate TA: A Firsthand Account of Undergraduate Peer Tutoring in Computer Labs. Julia M. Markel, UC San Diego; Philip J. Guo, UC San Diego
  3. Understanding Immersive Research Experiences that Build Community, Equity, and Inclusion. Audrey Rorrer, UNC Charlotte; Breauna Spencer, University of California, Irvine; Sloan Davis, Google; Sepi Hejazi Moghadam, Google; Deborah Holmes, UNC Charlotte; Cori Grainger, Google

Best Papers for Positions and Curriculum Initiatives

  1. Creating a Multifarious Cyber Science Major. Raymond W. Blaine, U.S. Military Academy; Jean R. S. Blair, U.S. Military Academy; Christa M. Chewar, U.S. Military Academy; Rob Harrison, U.S. Military Academy; James J. Raftery, U.S. Military Academy; Edward Sobiesk, U.S. Military Academy
  2. Confronting Inequities in Computer Science Education: A Case for Critical Theory. Aleata Hubbard Cheuoua, WestEd
  3. Developing an Interdisciplinary Data Science Program. Mariam Salloum, University of California, Riverside; Daniel Jeske, University of California, Riverside; Wenxiu Ma, University of California, Riverside; Vagelis Papalexakis, University of California, Riverside; Christian Shelton, University of California, Riverside; Vassilis Tsotras, University of California, Riverside; Shuheng Zhou, University of California, Riverside

There’s also Nicola’s suggestion from last month: What Do We Think We Think We Are Doing? Metacognition and Self-Regulation in Programming by James Prather, Brett A. Becker, Michelle Craig, Paul Denny, Dastyni Loksa, and Lauren Margulieux, from ICER 2020, doi.org/gh3qm8.

Join us to discuss learning sciences for computing education on Monday 12th April at 2pm BST

Scientist icon made by Eucalyp from flaticon.com

Learning sciences aims to improve our theoretical understanding of how people learn, while computing education research investigates how people learn to compute. Historically, these fields existed independently, although attempts have been made to merge them. Where do these disciplines overlap and how can they be integrated further? Join us to discuss learning sciences for computing education via a chapter by Lauren Margulieux, Brian Dorn and Kristin Searle [1]. From the abstract:

This chapter discusses potential and current overlaps between the learning sciences and computing education research in their origins, theory, and methodology. After an introduction to learning sciences, the chapter describes how both learning sciences and computing education research developed as distinct fields from cognitive science. Despite common roots and common goals, the authors argue that the two fields are less integrated than they should be and recommend theories and methodologies from the learning sciences that could be used more widely in computing education research. The chapter selects for discussion one general learning theory from each of cognition (constructivism), instructional design (cognitive apprenticeship), social and environmental features of learning environments (sociocultural theory), and motivation (expectancy-value theory). Then the chapter describes methodology for design-based research to apply and test learning theories in authentic learning environments. The chapter emphasizes the alignment between design-based research and current research practices in computing education. Finally, the chapter discusses the four stages of learning sciences projects. Examples from computing education research are given for each stage to illustrate the shared goals and methods of the two fields and to argue for more integration between them.

There’s a five-minute summary of the chapter ten minutes into the video below:

All welcome. As usual, we’ll be meeting on zoom, see sigcse.cs.manchester.ac.uk/join-us for details. Thanks to Sue Sentance and Nicola Looker for this month’s paper suggestions.

References

  1. Margulieux, Lauren E.; Dorn, Brian; Searle, Kristin A. (2019) “Learning Sciences for Computing Education” in S. A. Fincher & A. V. Robins (Eds.) The Cambridge Handbook of Computing Education Research, pages 208–230. Cambridge, UK: Cambridge University Press. DOI:10.1017/9781108654555.009

Join us to discuss teaching social responsibility and justice in Computer Science on Monday 1st March at 2pm GMT

Scales of justice icon made by monkik from flaticon.com

With great power comes great responsibility. [1] Given their growing power in the twenty-first century, computer scientists have a duty to society to use that power responsibly and justly. How can we teach this kind of social responsibility and ethics to engineering students? Join us to discuss teaching social justice in computer science via a paper by Rodrigo Ferreira and Moshe Vardi at Rice University in Houston, Texas, published at the SIGCSE 2021 conference (sigcse2021.sigcse.org) [2]. From the abstract of the preprint:

As ethical questions around the development of contemporary computer technologies have become an increasing point of public and political concern, computer science departments in universities around the world have placed renewed emphasis on tech ethics undergraduate classes as a means to educate students on the large scale social implications of their actions. Committed to the idea that tech ethics is an essential part of the undergraduate computer science educational curriculum, at Rice University this year we piloted a redesigned version of our Ethics and Accountability in Computer Science class. This effort represents our first attempt at implementing a “deep” tech ethics approach to the course.

Incorporating elements from philosophy of technology, critical media theory, and science and technology studies, we encouraged students to learn not only ethics in a “shallow” sense, examining abstract principles or values to determine right and wrong, but rather looking at a series of “deeper” questions more closely related to present issues of social justice and relying on a structural understanding of these problems to develop potential socio-technical solutions. In this article, we report on our implementation of this redesigned approach. We describe in detail the rationale and strategy for implementing this approach, present key elements of the redesigned syllabus, and discuss final student reflections and course evaluations. To conclude, we examine course achievements, limitations, and lessons learned toward the future, particularly in regard to the recent escalating social protests and issues involving Covid-19.

This paper got me thinking:

Houston, we’ve had your problem!

After paging the authors in Houston with the message above, there was radio silence.

Beep - beep - beep [white noise] Beep - beep - beep...

Hello Manchester, this is Houston. Can we join you?

So we’re delighted to be joined LIVE by the paper’s authors, Rodrigo Ferreira and Moshe Vardi, from Houston, Texas. They’ll give a lightning talk outlining the paper before we discuss it together in smaller breakout groups.

Their paper describes a problem everyone teaching ethics in Computer Science has faced recently: how can we make computing more ethical?

All welcome. As usual, we’ll be meeting on zoom, see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. Spider-Man (1962) https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility
  2. Rodrigo Ferreira and Moshe Vardi (2021) Deep Tech Ethics: An Approach to Teaching Social Justice in Computer Science in Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), March 13–20, 2021, Virtual Event, USA. ACM, New York, NY, USA. DOI:10.1145/3408877.3432449
  3. Jack Swigert (1970) https://en.wikipedia.org/wiki/Houston,_we_have_a_problem

Join us to discuss failure rates in introductory programming courses on Monday 1st February at 2pm GMT

Icons made by freepik from flaticon.com

Following on from our discussion of ungrading, this month we’ll be discussing pass/fail rates in introductory programming courses. [1] Here is the abstract:

Vast numbers of publications in computing education begin with the premise that programming is hard to learn and hard to teach. Many papers note that failure rates in computing courses, and particularly in introductory programming courses, are higher than their institutions would like. Two distinct research projects in 2007 and 2014 concluded that average success rates in introductory programming courses world-wide were in the region of 67%, and a recent replication of the first project found an average pass rate of about 72%. The authors of those studies concluded that there was little evidence that failure rates in introductory programming were concerningly high.

However, there is no absolute scale by which pass or failure rates are measured, so whether a failure rate is concerningly high will depend on what that rate is compared against. As computing is typically considered to be a STEM subject, this paper considers how pass rates for introductory programming courses compare with those for other introductory STEM courses. A comparison of this sort could prove useful in demonstrating whether the pass rates are comparatively low, and if so, how widespread such findings are.

This paper is the report of an ITiCSE working group that gathered information on pass rates from several institutions to determine whether prior results can be confirmed, and conducted a detailed comparison of pass rates in introductory programming courses with pass rates in introductory courses in other STEM disciplines.

The group found that pass rates in introductory programming courses appear to average about 75%; that there is some evidence that they sit at the low end of the range of pass rates in introductory STEM courses; and that pass rates both in introductory programming and in other introductory STEM courses appear to have remained fairly stable over the past five years. All of these findings must be regarded with some caution, for reasons that are explained in the paper. Despite the lack of evidence that pass rates are substantially lower than in other STEM courses, there is still scope to improve the pass rates of introductory programming courses, and future research should continue to investigate ways of improving student learning in introductory programming courses.
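A quick aside on the arithmetic (mine, not the working group’s): a headline “average pass rate” depends on how you aggregate, because averaging per-course rates treats a 20-student course the same as a 500-student one, while weighting by enrolment does not. With hypothetical figures:

```latex
% Unweighted mean of course pass rates vs. enrolment-weighted mean
% (illustrative numbers, not data from the working group)
\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i
\qquad
p_w = \frac{\sum_{i=1}^{n} N_i\, p_i}{\sum_{i=1}^{n} N_i}

% Two courses: p_1 = 0.9 with N_1 = 50 students, p_2 = 0.6 with N_2 = 450:
% \bar{p} = (0.9 + 0.6)/2 = 0.75, but p_w = (45 + 270)/500 = 0.63
```

So two studies can both report an “average” honestly yet not be directly comparable, which is worth bearing in mind when reading the 67%, 72% and 75% figures above.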

Anyone is welcome to join us. As usual, we’ll be meeting on zoom, see sigcse.cs.manchester.ac.uk/join-us for details.

Thanks to Brett Becker and Joseph Allen for this month’s #paper-suggestions via our slack channel at uk-acm-sigsce.slack.com.

References

  1. Simon, Andrew Luxton-Reilly, Vangel V. Ajanovski, Eric Fouh, Christabel Gonsalvez, Juho Leinonen, Jack Parkinson, Matthew Poole, Neena Thota (2019) Pass Rates in Introductory Programming and in other STEM Disciplines in ITiCSE-WGR ’19: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, pages 53–71. DOI:10.1145/3344429.3372502

Join us to discuss ungraded assessment on Monday 4th January at 2pm GMT

Image via Good Ware and monkik edited by Bruce The Deus, CC BY-SA 4.0, via Wikimedia Commons w.wiki/qWo

The more time students spend thinking about their grades, the less time they spend thinking about their learning.

Ungraded (pass or fail) assessment provides an alternative to letter grading (A, B, C, etc.) which can address this issue. Join us on Monday 4th January at 2pm to discuss a new paper by David Malan which describes removing traditional letter grading from CS50: An Introduction to Computer Science [1]. Here is the abstract:

In 2010, we proposed to eliminate letter grades in CS50 at Harvard University in favor of Satisfactory / Unsatisfactory (SAT / UNS), whereby students would instead receive at term’s end a grade of SAT in lieu of A through C- or UNS in lieu of D+ through E. Albeit designed to empower students without prior background to explore an area beyond their comfort zone without fear of failure, that proposal initially failed. Not only were some concentrations on campus unwilling to grant credit for SAT, the university’s program in general education (of which CS50 was part) required that all courses be taken for letter grades.

In 2013, we instead proposed, this time successfully, to allow students to take CS50 either for a letter grade or SAT/UNS. And in 2017, we made SAT/UNS the course’s default, though students could still opt out. The percentage of students taking the course SAT/UNS jumped that year to 31%, up from 9% in the year prior, with as many as 86 of the course’s 671 students (13%) reporting that they enrolled because of SAT/UNS. The percentage of women in the course also increased to 44%, a 29-year high. And 19% of students who took the course SAT/UNS subsequently reported that their concentration would be or might be CS. Despite concerns to the contrary, students taking the course SAT/UNS reported spending not less but more time on the course each week than letter-graded classmates. And, once we accounted for prior background, they performed nearly the same.

We present the challenges and results of this 10-year initiative. We argue ultimately in favor of SAT/UNS, provided students must still meet all expectations, including all work submitted, in order to be eligible for SAT.
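As a hedged sketch (the mapping below is mine, inferred only from the cut-points quoted in the abstract, with C- as the lowest satisfactory grade), the SAT/UNS policy amounts to a simple threshold on the letter-grade scale:

```java
import java.util.List;

public class SatUns {
    // Letter grades counted as satisfactory, per the abstract's cut-points:
    // A through C- maps to SAT; anything below (D+ through E) maps to UNS.
    private static final List<String> SAT_GRADES =
            List.of("A", "A-", "B+", "B", "B-", "C+", "C", "C-");

    static String toSatUns(String letterGrade) {
        return SAT_GRADES.contains(letterGrade) ? "SAT" : "UNS";
    }

    public static void main(String[] args) {
        System.out.println(toSatUns("B-")); // SAT
        System.out.println(toSatUns("D+")); // UNS
    }
}
```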

As usual, we’ll be meeting on zoom, see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. David Malan (2021) Toward an Ungraded CS50. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), March 13–20, 2021, Virtual Event, USA. ACM, New York, NY, USA. DOI:10.1145/3408877.3432461

Join us to discuss peer instruction on Monday 7th December at 2pm GMT

Peer instruction is a tried and tested teaching technique popularised by the Harvard physicist Eric Mazur. Join us to discuss the use of peer instruction in introductory computing via a paper by Leo Porter and his collaborators, [1] which was selected as one of the ACM SIGCSE Technical Symposium’s Top Ten Papers of All Time. Here is the abstract:

Peer Instruction (PI) is a student-centric pedagogy in which students move from the role of passive listeners to active participants in the classroom. Over the past five years, there have been a number of research articles regarding the value of PI in computer science. The present work adds to this body of knowledge by examining outcomes from seven introductory programming instructors: three novices to PI and four with a range of PI experience. Through common measurements of student perceptions, we provide evidence that introductory computing instructors can successfully implement PI in their classrooms. We find encouraging minimum (74%) and average (92%) levels of success as measured through student valuation of PI for their learning. This work also documents and hypothesizes reasons for comparatively poor survey results in one course, highlighting the importance of the choice of grading policy (participation vs. correctness) for new PI adopters.

As usual, we’ll be meeting on zoom, see sigcse.cs.manchester.ac.uk/join-us for details and meeting URLs.

References

  1. Porter, Leo; Bouvier, Dennis; Cutts, Quintin; Grissom, Scott; Lee, Cynthia; McCartney, Robert; Zingaro, Daniel; Simon, Beth (2016) “A Multi-institutional Study of Peer Instruction in Introductory Computing” in SIGCSE ’16: Proceedings of the 47th ACM Technical Symposium on Computing Science Education, pages 358–363. DOI:10.1145/2839509.2844642

Join us to discuss student misconceptions in programming, March 23rd from 1pm to 2pm

The Scream by Edvard Munch 😱, reproduced in LEGO by Nathan Sawaya, the BrickArtist.com

Join us to discuss Identifying Student Misconceptions of Programming by Lisa Kaczmarczyk et al. [1], which was voted one of the top papers of the last 50 years by SIGCSE members in 2019. Here is a summary:

Computing educators are often baffled by the misconceptions that their CS1 students hold. We need to understand these misconceptions more clearly in order to help students form correct conceptions. This paper describes one stage in the development of a concept inventory for Computing Fundamentals: investigation of student misconceptions in a series of core CS1 topics previously identified as both important and difficult. Formal interviews with students revealed four distinct themes, each containing many interesting misconceptions. Three of those misconceptions are detailed in this paper: two misconceptions about memory models, and data assignment when primitives are declared. Individual misconceptions are related, but vary widely, thus providing excellent material to use in the development of the CI. In addition, CS1 instructors are provided immediate usable material for helping their students understand some difficult introductory concepts.

In case you’re wondering, CS1 refers to the first course in the introductory sequence of a computer science major (in American parlance), roughly equivalent to first year undergraduate in the UK. CI refers to a Concept Inventory, a test designed to tell teachers exactly what students know and don’t know. According to Reinventing Nerds, the paper has been influential because it was the “first to apply rigorous research methods to investigating misconceptions”.
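To make the memory-model theme concrete, here is a hypothetical illustration in Java (my own sketch, not an example from the paper’s interviews) of two places where CS1 students commonly go wrong: assuming a declared primitive already holds a usable value, and assuming that assigning one object variable to another copies the object rather than the reference:

```java
public class MisconceptionDemo {
    static class Point {
        int x;
        int y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        // Misconception: "declaring a primitive gives it a value".
        // A local primitive has no value until assigned; reading it
        // before assignment is a compile-time error in Java.
        int count;      // declared, but not yet usable
        count = 0;      // only now does count hold a value
        System.out.println(count);

        // Misconception: "assignment copies the object".
        // Assigning a reference copies the reference, not the object:
        // a and b now name the same Point in memory.
        Point a = new Point(1, 2);
        Point b = a;
        b.x = 99;
        System.out.println(a.x); // prints 99, not 1: one object, two names
    }
}
```

A concept inventory question can probe exactly this: asking what `a.x` prints forces students to reveal which memory model they hold.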

References

  1. Kaczmarczyk, Lisa C.; Petrick, Elizabeth R.; East, J. Philip; Herman, Geoffrey L. (2010) “Identifying Student Misconceptions of Programming” in SIGCSE ’10: Proceedings of the 41st ACM Technical Symposium on Computer Science Education, pages 107–111. DOI:10.1145/1734263.1734299