Join us to discuss how theory is used in assessment and feedback on Monday 3rd July at 2pm BST

Image from flaticon.com

A good theory can be the most concentrated form of knowledge. By encapsulating an infinite number of cases, a theory can make predictions rather than just describing a finite number of disjointed facts. So how does theory feature in research about assessment and feedback? Join us on Monday 3rd July at 2pm BST (UTC+1) to discuss a paper investigating this question by Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai from the University of Hong Kong and Deakin University. [1] From the abstract of their paper:

Assessment and feedback research constitutes its own ‘silo’ amidst the higher education research field. Theory has been cast as an important but absent aspect of higher education research. This may be a particular issue in empirical assessment research which often builds on the conceptualisation of assessment as objective measurement. So, how does theory feature in assessment and feedback research? We conduct a critical review of recent empirical articles (2020, N = 56) to understand how theory is engaged with in this field. We analyse the repertoire of theories and the mechanisms for putting these theories into practice. 21 studies drew explicitly on educational theory. Theories were most commonly used to explain and frame assessment. Critical theories were notably absent, and quantitative studies engaged with theory in a largely instrumental manner. We discuss the findings through the concept of reflexivity, conceptualising engagement with theory as a practice with both benefits and pitfalls. We therefore call for further reflexivity in the field of assessment and feedback research through deeper and interdisciplinary engagement with theories to avoid further siloing of the field.

All welcome. As usual, we’ll be meeting on Zoom; details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Jane Waite at Queen Mary, University of London, for nominating this month’s paper.

References

  1. Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai (2023) How is theory used in assessment and feedback research? A critical review, Assessment & Evaluation in Higher Education, 48:1, 77-94, DOI: 10.1080/02602938.2022.2047154





Join us to discuss failure rates in introductory programming courses on Monday 1st February at 2pm GMT

Icons made by freepik from flaticon.com

Following on from our discussion of ungrading, this month we’ll be discussing pass/fail rates in introductory programming courses. [1] Here is the abstract:

Vast numbers of publications in computing education begin with the premise that programming is hard to learn and hard to teach. Many papers note that failure rates in computing courses, and particularly in introductory programming courses, are higher than their institutions would like. Two distinct research projects in 2007 and 2014 concluded that average success rates in introductory programming courses world-wide were in the region of 67%, and a recent replication of the first project found an average pass rate of about 72%. The authors of those studies concluded that there was little evidence that failure rates in introductory programming were concerningly high.

However, there is no absolute scale by which pass or failure rates are measured, so whether a failure rate is concerningly high will depend on what that rate is compared against. As computing is typically considered to be a STEM subject, this paper considers how pass rates for introductory programming courses compare with those for other introductory STEM courses. A comparison of this sort could prove useful in demonstrating whether the pass rates are comparatively low, and if so, how widespread such findings are.

This paper is the report of an ITiCSE working group that gathered information on pass rates from several institutions to determine whether prior results can be confirmed, and conducted a detailed comparison of pass rates in introductory programming courses with pass rates in introductory courses in other STEM disciplines.

The group found that pass rates in introductory programming courses appear to average about 75%; that there is some evidence that they sit at the low end of the range of pass rates in introductory STEM courses; and that pass rates both in introductory programming and in other introductory STEM courses appear to have remained fairly stable over the past five years. All of these findings must be regarded with some caution, for reasons that are explained in the paper. Despite the lack of evidence that pass rates are substantially lower than in other STEM courses, there is still scope to improve the pass rates of introductory programming courses, and future research should continue to investigate ways of improving student learning in introductory programming courses.

Anyone is welcome to join us. As usual, we’ll be meeting on Zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

Thanks to Brett Becker and Joseph Allen for this month’s #paper-suggestions via our slack channel at uk-acm-sigsce.slack.com.

References

  1. Simon, Andrew Luxton-Reilly, Vangel V. Ajanovski, Eric Fouh, Christabel Gonsalvez, Juho Leinonen, Jack Parkinson, Matthew Poole, Neena Thota (2019) Pass Rates in Introductory Programming and in other STEM Disciplines in ITiCSE-WGR ’19: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, Pages 53–71 DOI: 10.1145/3344429.3372502

Join us to discuss ungraded assessment on Monday 4th January at 2pm GMT

Image via Good Ware and monkik edited by Bruce The Deus, CC BY-SA 4.0, via Wikimedia Commons w.wiki/qWo

The more time students spend thinking about their grades, the less time they spend thinking about their learning.

Ungraded (pass or fail) assessment provides an alternative to letter grading (A, B, C, etc.) which can address this issue. Join us on Monday 4th January at 2pm GMT to discuss a new paper by David Malan which describes removing traditional letter grading from CS50: An Introduction to Computer Science [1]. Here is the abstract:

In 2010, we proposed to eliminate letter grades in CS50 at Harvard University in favor of Satisfactory / Unsatisfactory (SAT / UNS), whereby students would instead receive at term’s end a grade of SAT in lieu of A through C- or UNS in lieu of D+ through E. Albeit designed to empower students without prior background to explore an area beyond their comfort zone without fear of failure, that proposal initially failed. Not only were some concentrations on campus unwilling to grant credit for SAT, the university’s program in general education (of which CS50 was part) required that all courses be taken for letter grades.

In 2013, we instead proposed, this time successfully, to allow students to take CS50 either for a letter grade or SAT/UNS. And in 2017, we made SAT/UNS the course’s default, though students could still opt out. The percentage of students taking the course SAT/UNS jumped that year to 31%, up from 9% in the year prior, with as many as 86 of the course’s 671 students (13%) reporting that they enrolled because of SAT/UNS. The percentage of women in the course also increased to 44%, a 29-year high. And 19% of students who took the course SAT/UNS subsequently reported that their concentration would be or might be CS. Despite concerns to the contrary, students taking the course SAT/UNS reported spending not less but more time on the course each week than letter-graded classmates. And, once we accounted for prior background, they performed nearly the same.

We present the challenges and results of this 10-year initiative. We argue ultimately in favor of SAT/UNS, provided students must still meet all expectations, including all work submitted, in order to be eligible for SAT.

As usual, we’ll be meeting on Zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. David Malan (2021) Toward an Ungraded CS50. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), March 13–20, 2021, Virtual Event, USA. ACM, New York, NY, USA. DOI:10.1145/3408877.3432461