Learning sciences aims to improve our theoretical understanding of how people learn, while computing education research investigates how people learn to compute. Historically, these fields existed independently, although attempts have been made to merge them. Where do these disciplines overlap, and how can they be integrated further? Join us to discuss learning sciences for computing education via a paper by Lauren Margulieux, Brian Dorn and Kristin Searle. From the abstract:
This chapter discusses potential and current overlaps between the learning sciences and computing education research in their origins, theory, and methodology. After an introduction to learning sciences, the chapter describes how both learning sciences and computing education research developed as distinct fields from cognitive science. Despite common roots and common goals, the authors argue that the two fields are less integrated than they should be and recommend theories and methodologies from the learning sciences that could be used more widely in computing education research. The chapter selects for discussion one general learning theory from each of cognition (constructivism), instructional design (cognitive apprenticeship), social and environmental features of learning environments (sociocultural theory), and motivation (expectancy-value theory). Then the chapter describes methodology for design-based research to apply and test learning theories in authentic learning environments. The chapter emphasizes the alignment between design-based research and current research practices in computing education. Finally, the chapter discusses the four stages of learning sciences projects. Examples from computing education research are given for each stage to illustrate the shared goals and methods of the two fields and to argue for more integration between them.
There’s a five-minute summary of the chapter ten minutes into the video below:
Margulieux, Lauren E.; Dorn, Brian; Searle, Kristin A. (2019). “Learning Sciences for Computing Education”. In S. A. Fincher & A. V. Robins (Eds.), The Cambridge Handbook of Computing Education Research (pp. 208–230). Cambridge, UK: Cambridge University Press. DOI: 10.1017/9781108654555.009
Scales of justice icon made by monkik from flaticon.com
With great power comes great responsibility. [1] Given their growing power in the twenty-first century, computer scientists have a duty to society to use that power responsibly and justly. How can we teach this kind of social responsibility and ethics to engineering students? Join us to discuss teaching social justice in computer science via a paper by Rodrigo Ferreira and Moshe Vardi at Rice University in Houston, Texas, published at the SIGCSE 2021 conference (sigcse2021.sigcse.org). [2] From the abstract of the preprint:
As ethical questions around the development of contemporary computer technologies have become an increasing point of public and political concern, computer science departments in universities around the world have placed renewed emphasis on tech ethics undergraduate classes as a means to educate students on the large scale social implications of their actions. Committed to the idea that tech ethics is an essential part of the undergraduate computer science educational curriculum, at Rice University this year we piloted a redesigned version of our Ethics and Accountability in Computer Science class. This effort represents our first attempt at implementing a “deep” tech ethics approach to the course.
Incorporating elements from philosophy of technology, critical media theory, and science and technology studies, we encouraged students to learn not only ethics in a “shallow” sense, examining abstract principles or values to determine right and wrong, but rather to look at a series of “deeper” questions more closely related to present issues of social justice, relying on a structural understanding of these problems to develop potential socio-technical solutions. In this article, we report on our implementation of this redesigned approach. We describe in detail the rationale and strategy for implementing this approach, present key elements of the redesigned syllabus, and discuss final student reflections and course evaluations. To conclude, we examine course achievements, limitations, and lessons learned toward the future, particularly in regard to the escalating number of social protests and issues involving Covid-19.
This paper got me thinking:
Houston, we’ve had your problem!
After paging the authors in Houston with the message above, there was radio silence. Then, a reply:
Hello Manchester, this is Houston. Can we join you?
So we’re delighted to be joined LIVE by the paper’s authors, Rodrigo Ferreira and Moshe Vardi, from Houston, Texas. They’ll give a lightning talk outlining the paper before we discuss it together in smaller breakout groups.
Their paper tackles a problem everyone teaching ethics in Computer Science has grappled with recently: how can we make computing more ethical?
Following on from our discussion of ungrading, this month we’ll be discussing pass/fail rates in introductory programming courses. [1] Here is the abstract:
Vast numbers of publications in computing education begin with the premise that programming is hard to learn and hard to teach. Many papers note that failure rates in computing courses, and particularly in introductory programming courses, are higher than their institutions would like. Two distinct research projects in 2007 and 2014 concluded that average success rates in introductory programming courses world-wide were in the region of 67%, and a recent replication of the first project found an average pass rate of about 72%. The authors of those studies concluded that there was little evidence that failure rates in introductory programming were concerningly high.
However, there is no absolute scale by which pass or failure rates are measured, so whether a failure rate is concerningly high will depend on what that rate is compared against. As computing is typically considered to be a STEM subject, this paper considers how pass rates for introductory programming courses compare with those for other introductory STEM courses. A comparison of this sort could prove useful in demonstrating whether the pass rates are comparatively low, and if so, how widespread such findings are.
This paper is the report of an ITiCSE working group that gathered information on pass rates from several institutions to determine whether prior results can be confirmed, and conducted a detailed comparison of pass rates in introductory programming courses with pass rates in introductory courses in other STEM disciplines.
The group found that pass rates in introductory programming courses appear to average about 75%; that there is some evidence that they sit at the low end of the range of pass rates in introductory STEM courses; and that pass rates both in introductory programming and in other introductory STEM courses appear to have remained fairly stable over the past five years. All of these findings must be regarded with some caution, for reasons that are explained in the paper. Despite the lack of evidence that pass rates are substantially lower than in other STEM courses, there is still scope to improve the pass rates of introductory programming courses, and future research should continue to investigate ways of improving student learning in introductory programming courses.
Simon; Luxton-Reilly, Andrew; Ajanovski, Vangel V.; Fouh, Eric; Gonsalvez, Christabel; Leinonen, Juho; Parkinson, Jack; Poole, Matthew; Thota, Neena (2019). “Pass Rates in Introductory Programming and in Other STEM Disciplines”. In ITiCSE-WGR ’19: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, pages 53–71. DOI: 10.1145/3344429.3372502
Image via Good Ware and monkik edited by Bruce The Deus, CC BY-SA 4.0, via Wikimedia Commons w.wiki/qWo
The more time students spend thinking about their grades, the less time they spend thinking about their learning.
Ungraded (pass or fail) assessment provides an alternative to letter grading (A, B, C, etc.) which can address this issue. Join us on Monday 4th January at 2pm to discuss a new paper by David Malan which describes removing traditional letter grading from CS50: An Introduction to Computer Science [1]. Here is the abstract:
In 2010, we proposed to eliminate letter grades in CS50 at Harvard University in favor of Satisfactory / Unsatisfactory (SAT / UNS), whereby students would instead receive at term’s end a grade of SAT in lieu of A through C- or UNS in lieu of D+ through E. Albeit designed to empower students without prior background to explore an area beyond their comfort zone without fear of failure, that proposal initially failed. Not only were some concentrations on campus unwilling to grant credit for SAT, the university’s program in general education (of which CS50 was part) required that all courses be taken for letter grades.
In 2013, we instead proposed, this time successfully, to allow students to take CS50 either for a letter grade or SAT/UNS. And in 2017, we made SAT/UNS the course’s default, though students could still opt out. The percentage of students taking the course SAT/UNS jumped that year to 31%, up from 9% in the year prior, with as many as 86 of the course’s 671 students (13%) reporting that they enrolled because of SAT/UNS. The percentage of women in the course also increased to 44%, a 29-year high. And 19% of students who took the course SAT/UNS subsequently reported that their concentration would be or might be CS. Despite concerns to the contrary, students taking the course SAT/UNS reported spending not less but more time on the course each week than letter-graded classmates. And, once we accounted for prior background, they performed nearly the same.
We present the challenges and results of this 10-year initiative. We argue ultimately in favor of SAT/UNS, provided students must still meet all expectations, including all work submitted, in order to be eligible for SAT.
David Malan (2021) Toward an Ungraded CS50. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), March 13–20, 2021, Virtual Event, USA. ACM, New York, NY, USA. DOI:10.1145/3408877.3432461
Peer instruction is a tried and tested teaching technique popularised by the Harvard physicist Eric Mazur. Join us to discuss the use of peer instruction in introductory computing via a paper by Leo Porter and his collaborators, [1] which was recognised among the ACM SIGCSE Technical Symposium’s Top Ten Papers of All Time. Here is the abstract:
Peer Instruction (PI) is a student-centric pedagogy in which students move from the role of passive listeners to active participants in the classroom. Over the past five years, there have been a number of research articles regarding the value of PI in computer science. The present work adds to this body of knowledge by examining outcomes from seven introductory programming instructors: three novices to PI and four with a range of PI experience. Through common measurements of student perceptions, we provide evidence that introductory computing instructors can successfully implement PI in their classrooms. We find encouraging minimum (74%) and average (92%) levels of success as measured through student valuation of PI for their learning. This work also documents and hypothesizes reasons for comparatively poor survey results in one course, highlighting the importance of the choice of grading policy (participation vs. correctness) for new PI adopters.
Minimal guidance is a popular approach to teaching and learning. This technique advocates teachers taking a back seat and facilitating learning by letting their students get on with it. Minimal guidance comes in many guises, including constructivism, discovery learning, problem-based learning, experiential learning, active learning, inquiry-based learning and even lazy teaching. According to its critics, unguided and minimally guided approaches don’t work. Join us to discuss why via a paper [1] published by Paul Kirschner, John Sweller and Richard Clark. Here is the abstract:
Evidence for the superiority of guided instruction is explained in the context of our knowledge of human cognitive architecture, expert–novice differences, and cognitive load. Although unguided or minimally guided instructional approaches are very popular and intuitively appealing, the point is made that these approaches ignore both the structures that constitute human cognitive architecture and evidence from empirical studies over the past half-century that consistently indicate that minimally guided instruction is less effective and less efficient than instructional approaches that place a strong emphasis on guidance of the student learning process. The advantage of guidance begins to recede only when learners have sufficiently high prior knowledge to provide “internal” guidance. Recent developments in instructional research and instructional design models that support guidance during instruction are briefly described.
Kirschner, Paul A.; Sweller, John; Clark, Richard E. (2006). “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching”. Educational Psychologist. 41 (2): 75–86. DOI: 10.1207/s15326985ep4102_1 (see also altmetric.com/details/564640 for online attention scores)
The use of git is widespread in software engineering; however, many novices struggle to get to grips with its complex distributed information model, challenging command-line syntax and leaky abstractions. To investigate these pitfalls, we’ll be talking about a paper published by Santiago Perez De Rosso and Daniel Jackson on Purposes, Concepts, Misfits, and a Redesign of Git at OOPSLA. [1] From the abstract:
Git is a widely used version control system that is powerful but complicated. Its complexity may not be an inevitable consequence of its power but rather evidence of flaws in its design. To explore this hypothesis, we analysed the design of Git using a theory that identifies concepts, purposes, and misfits. Some well-known difficulties with Git are described, and explained as misfits in which underlying concepts fail to meet their intended purpose. Based on this analysis, we designed a reworking of Git (called Gitless) that attempts to remedy these flaws.
To correlate misfits with issues reported by users, we conducted a study of Stack Overflow questions. And to determine whether users experienced fewer complications using Gitless in place of Git, we conducted a small user study. Results suggest our approach can be profitable in identifying, analysing, and fixing design problems.
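One of the conceptual misfits the paper examines is Git’s staging area, which allows a single file to exist in two states at once. The sketch below is my own minimal illustration of that pitfall, not an example from the paper; it assumes git is installed and on your PATH, and drives it from Python in a throwaway repository:

```python
import os
import subprocess
import tempfile

# Create a throwaway repository to experiment in.
repo = tempfile.mkdtemp()

def git(*args):
    """Run a git command in the throwaway repo and return its output."""
    result = subprocess.run(["git", *args], cwd=repo,
                            capture_output=True, text=True, check=True)
    return result.stdout

git("init")

# Write a first version of a file and stage it.
path = os.path.join(repo, "hello.txt")
with open(path, "w") as f:
    f.write("version 1\n")
git("add", "hello.txt")

# Edit the file again *after* staging it.
with open(path, "w") as f:
    f.write("version 2\n")

# The file is now in two states at once: "version 1" is staged for
# commit, while "version 2" sits in the working directory. Git reports
# this as "AM hello.txt" (added to the index, then modified).
print(git("status", "--short"))
```

Committing at this point would silently record “version 1” rather than the “version 2” visible in your editor, which is exactly the kind of surprise that trips up novices.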
So what’s wrong with git?
Santiago’s presentation on What’s Wrong With Git? at Git Merge in 2017
Details of the zoom meeting have been posted on our slack workspace, see sigcse.cs.manchester.ac.uk/join-us for further information. Thanks to Juha Sorva at Aalto University for recommending this paper.
Journal club dates for your diary
We’ll be meeting on the first Monday of every month throughout autumn, so if you’d like to join us next month or a subsequent month, add these journal club dates to your diary:
Join us at 11am on Monday 7th September to discuss using theory in Computing Education Research. We’ll be talking about a paper [1] by Greg L. Nelson and Amy Ko at the University of Washington:
A primary goal of computing education research is to discover designs that produce better learning of computing. In this pursuit, we have increasingly drawn upon theories from learning science and education research, recognising the potential benefits of optimising our search for better designs by leveraging the predictions of general theories of learning. In this paper, we contribute an argument that theory can also inhibit our community’s search for better designs. We present three inhibitions: 1) our desire to both advance explanatory theory and advance design splits our attention, which prevents us from excelling at both; 2) our emphasis on applying and refining general theories of learning is done at the expense of domain-specific theories of computer science knowledge, and 3) our use of theory as a critical lens in peer review prevents the publication of designs that may accelerate design progress. We present several recommendations for how to improve our use of theory, viewing it as just one of many sources of design insight in pursuit of improving learning of computing.
Details of the zoom meeting will be posted on our slack workspace at uk-acm-sigsce.slack.com. If you don’t have access to the workspace, send me (Duncan Hull) an email to request an invite to join the workspace.
As universities transition to online teaching during the global coronavirus pandemic, there’s increasing interest in the use of pre-recorded videos to replace traditional lectures in higher education. Join us to discuss how video production affects student engagement, based on a paper by Philip Guo at the University of California, San Diego (UCSD), published at the Learning at Scale conference: How video production affects student engagement: an empirical study of MOOC videos. (MOOC stands for Massive Open Online Course.) [1] Here is the abstract:
Videos are a widely-used kind of resource for online learning. This paper presents an empirical study of how video production decisions affect student engagement in online educational videos. To our knowledge, ours is the largest-scale study of video engagement to date, using data from 6.9 million video watching sessions across four courses on the edX MOOC platform. We measure engagement by how long students are watching each video, and whether they attempt to answer post-video assessment problems.
Our main findings are that shorter videos are much more engaging, that informal talking-head videos are more engaging, that Khan-style tablet drawings are more engaging, that even high-quality pre-recorded classroom lectures might not make for engaging online videos, and that students engage differently with lecture and tutorial videos.
Based upon these quantitative findings and qualitative insights from interviews with edX staff, we developed a set of recommendations to help instructors and video producers take better advantage of the online video format. Finally, to enable researchers to reproduce and build upon our findings, we have made our anonymized video watching data set and analysis scripts public. To our knowledge, ours is one of the first public data sets on MOOC resource usage.
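To make the paper’s engagement measure concrete, here is a toy sketch of the two signals the abstract describes: the fraction of each video a student watches, and whether they attempt the post-video problem. This is my own illustration; the class and field names are assumptions, not the paper’s actual data schema:

```python
from dataclasses import dataclass

@dataclass
class Session:
    video_length_sec: float  # total length of the video
    watch_time_sec: float    # how long the student actually watched
    attempted_quiz: bool     # did they try the post-video problem?

def engagement(sessions):
    """Mean fraction of each video watched, plus quiz-attempt rate."""
    watched = [min(s.watch_time_sec / s.video_length_sec, 1.0)
               for s in sessions]
    attempts = sum(s.attempted_quiz for s in sessions)
    return sum(watched) / len(sessions), attempts / len(sessions)

# Three made-up watching sessions: a 6-minute video watched in full,
# a 15-minute video abandoned early, and another short video completed.
demo = [Session(360, 300, True), Session(900, 240, False), Session(360, 360, True)]
frac, quiz_rate = engagement(demo)
print(f"mean fraction watched: {frac:.2f}, quiz attempt rate: {quiz_rate:.2f}")
```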
Details of the zoom meeting will be posted on our slack workspace at uk-acm-sigsce.slack.com. If you don’t have access to the workspace, send me (Duncan Hull) an email to request an invite to join the workspace. The paper refers to several styles of video production; some examples are shown below.
What is innovative pedagogy? CC-BY licensed picture by Giulia Forsythe
Join us for our next journal club meeting on Monday 6th July at 3pm, the papers we’ll be discussing below come from the #paper-suggestions channel of our slack workspace at uk-acm-sigsce.slack.com.
Show me the pedagogy!
The first paper is a short chapter by Katrina Falkner and Judy Sheard which gives an overview of pedagogic approaches including active learning, collaborative learning, cooperative learning, contributing student pedagogy (CSP), blended learning and MOOCs. [1] It was published last year as chapter 15 of the Cambridge Handbook of Computing Education Research, edited by Sally Fincher and Anthony V. Robins. A lot of blended learning resources focus on technology; this chapter discusses where blended learning fits within a range of different pedagogic approaches.
A video summary of all sixteen chapters of the Cambridge Handbook of Computing Education Research, including chapter 15 which we’ll be discussing
Implementing blended learning
The second paper (suggested by Jane Waite) is Design and implementation factors in blended synchronous learning environments [2], here’s a summary from the abstract:
Increasingly, universities are using technology to provide students with more flexible modes of participation. This article presents a cross-case analysis of blended synchronous learning environments—contexts where remote students participated in face-to-face classes through the use of rich-media synchronous technologies such as video conferencing, web conferencing, and virtual worlds. The study examined how design and implementation factors influenced student learning activity and perceived learning outcomes, drawing on a synthesis of student, teacher, and researcher observations collected before, during, and after blended synchronous learning lessons. Key findings include the importance of designing for active learning, the need to select and utilise technologies appropriately to meet communicative requirements, varying degrees of co-presence depending on technological and human factors, and heightened cognitive load. Pedagogical, technological, and logistical implications are presented in the form of a Blended Synchronous Learning Design Framework that is grounded in the results of the study.
We look forward to seeing you there. Zoom details are on the slack channel; email me if you’d like to request an invitation to the slack channel. Likewise, if you don’t have access to the papers, let me know.
Short notes from the discussion
Some of the questions discussed on the day:
Inclusion raises a number of questions in terms of room management and gender balance: was this a consideration?
What effect do you think the absence of anyone F2F would have on the case studies and/or your outcomes?
How scalable is this approach? Can it be used with classes of 200 or 300 students?
Constructive alignment plays an important role in getting this kind of blended learning to work; see the work of John Biggs, e.g. his book Teaching for Quality Learning at University
Further reading from co-authors
Jacqueline Kenney, one of the co-authors of the paper we discussed, joined us for the session (thanks again, Jacqueline). Matt Bower also emailed some suggestions of related follow-on work:
See the related work “Collaborative learning across physical and virtual worlds: Factors supporting and constraining learners in a blended reality environment” (DOI: 10.1111/bjet.12435) and blendsync.org
Bower, M. (2006). Virtual classroom pedagogy. Paper presented at the Proceedings of the 37th SIGCSE technical symposium on Computer science education, Houston, Texas, USA. DOI:10.1145/1121341.1121390
Bower, M. (2006). A learning system engineering approach to developing online courses. Paper presented at the Proceedings of the 8th Australasian Conference on Computing Education – Volume 52, Hobart, Australia.
Bower, M. (2007). Groupwork activities in synchronous online classroom spaces. Paper presented at the Proceedings of the 38th SIGCSE technical symposium on Computer science education, Covington, Kentucky, USA. DOI:10.1145/1227310.1227345
Bower, M. (2007). Independent, synchronous and asynchronous: an analysis of approaches to online concept formation. Paper presented at the Proceedings of the 12th annual SIGCSE conference on Innovation and technology in computer science education, Dundee, Scotland. DOI:10.1145/1268784.1268827
Bower, M. (2008). The “instructed-teacher”: a computer science online learning pedagogical pattern. Paper presented at the Proceedings of the 13th annual conference on Innovation and technology in computer science education, Madrid, Spain. DOI:10.1145/1384271.1384323
Bower, M., & McIver, A. (2011). Continual and explicit comparison to promote proactive facilitation during second computer language learning. Paper presented at the Proceedings of the 16th annual joint conference on Innovation and technology in computer science education, Darmstadt, Germany. DOI:10.1145/1999747.1999809
Bower, M., & Richards, D. (2005). The impact of virtual classroom laboratories in CSE. Paper presented at the Proceedings of the 36th SIGCSE technical symposium on Computer science education, St. Louis, Missouri, USA. DOI:10.1145/1047344.1047447
As well, this Computers & Education paper specifically relates to a study of teaching computing online:
Bower, M., & Hedberg, J. G. (2010). A quantitative multimodal discourse analysis of teaching and learning in a web-conferencing environment: the efficacy of student-centred learning designs. Computers & Education, 54(2), 462–478.
References
Falkner, Katrina; Sheard, Judy (2019). “Pedagogic Approaches”. Chapter 15 of The Cambridge Handbook of Computing Education Research, pp. 445–480. DOI: 10.1017/9781108654555.016
Bower, Matt; Dalgarno, Barney; Kennedy, Gregor E.; Lee, Mark J.W.; Kenney, Jacqueline (2015). “Design and implementation factors in blended synchronous learning environments: Outcomes from a cross-case analysis”. Computers & Education. 86: 1–17. DOI: 10.1016/j.compedu.2015.03.006. ISSN 0360-1315.