Join us to discuss ten things engineers should learn about learning on Monday 5th February at 2pm GMT

“See one, do one, teach one” is a popular technique for teaching surgery to medical students. It has three steps:

  • You see one: by watching it, reading about it or listening to it
  • You do one: by engineering it or making it
  • You teach one: by telling others all about it


If you’re teaching engineers, what do you need to know beyond the seeing and doing? Understanding how human memory and learning work, and the differences between beginners and experts, can improve your teaching. So what practical steps can engineers take to improve the training and development of other engineers? What do engineers need to know in order to improve their own learning?

Join us on Monday 5th February at 2pm GMT (UTC) for our monthly ACM SIGCSE journal club meetup on zoom to discuss a paper on this topic by Neil Brown, Felienne Hermans and Lauren Margulieux, published in (and featured on the cover of) the January issue of Communications of the ACM. [1]

We’ll be joined by the lead author, Neil Brown of King’s College London, who will give us a lightning talk summary of the paper to kick off our discussion.

All welcome. As usual, we’ll be meeting on zoom; details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Neil C.C. Brown, Felienne F.J. Hermans and Lauren Margulieux (2024) 10 Things Software Developers Should Learn about Learning, Communications of the ACM, Volume 67, No. 1. DOI:10.1145/3584859 (see accompanying video at vimeo.com/885743448 )

Join us to discuss how theory is used in assessment and feedback on Monday 3rd July at 2pm BST

Test image from flaticon.com

A good theory can be the most concentrated form of knowledge. By encapsulating an infinite number of cases, a theory can make predictions rather than just describing a finite number of disjointed facts. So how does theory feature in research about assessment and feedback? Join us on Monday 3rd July at 2pm BST (UTC+1) to discuss a paper investigating this question by Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai from the University of Hong Kong and Deakin University. [1] From the abstract of their paper:

Assessment and feedback research constitutes its own ‘silo’ amidst the higher education research field. Theory has been cast as an important but absent aspect of higher education research. This may be a particular issue in empirical assessment research which often builds on the conceptualisation of assessment as objective measurement. So, how does theory feature in assessment and feedback research? We conduct a critical review of recent empirical articles (2020, N = 56) to understand how theory is engaged with in this field. We analyse the repertoire of theories and the mechanisms for putting these theories into practice. 21 studies drew explicitly on educational theory. Theories were most commonly used to explain and frame assessment. Critical theories were notably absent, and quantitative studies engaged with theory in a largely instrumental manner. We discuss the findings through the concept of reflexivity, conceptualising engagement with theory as a practice with both benefits and pitfalls. We therefore call for further reflexivity in the field of assessment and feedback research through deeper and interdisciplinary engagement with theories to avoid further siloing of the field.

All welcome. As usual, we’ll be meeting on zoom; details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Jane Waite at Queen Mary, University of London, for nominating this month’s paper.

References

  1. Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai (2023) How is theory used in assessment and feedback research? A critical review, Assessment & Evaluation in Higher Education, 48:1, 77-94, DOI: 10.1080/02602938.2022.2047154

Join us to discuss assisting Teaching Assistants with Automatic Code Corrections on Monday 1st August at 2pm BST

Image via Freepik

Teaching Assistants (both undergraduate UTAs and graduate GTAs) are crucial to enabling teaching and learning in higher education. How can we make their jobs easier using automatic code corrections? Join us on Monday 1st August at 2pm BST to discuss a paper recently published at CHI by Yana Malysheva and Caitlin Kelleher. [1] From the abstract:

Undergraduate Teaching Assistants (TAs) in Computer Science courses are often the first and only point of contact when a student gets stuck on a programming problem. But these TAs are often relative beginners themselves, both in programming and in teaching. In this paper, we examine the impact of availability of corrected code on TAs’ ability to find, fix, and address bugs in student code. We found that seeing a corrected version of the student code helps TAs debug code 29% faster, and write more accurate and complete student-facing explanations of the bugs (30% more likely to correctly address a given bug). We also observed that TAs do not generally struggle with the conceptual understanding of the underlying material. Rather, their difficulties seem more related to issues with working memory, attention, and overall high cognitive load.
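
To make the idea of “corrected code” concrete, here’s a minimal illustrative sketch in Python (invented for this post, not taken from the paper): a buggy student submission alongside a corrected version, the kind of side-by-side information the study made available to TAs.

    # Illustrative only: a typical student bug and a corrected version of the
    # same function, the sort of "corrected code" the study showed to TAs.

    def average_student(grades):
        """Student version: integer division, and crashes on an empty list."""
        total = 0
        for g in grades:
            total += g
        return total // len(grades)   # bug: // truncates; [] raises ZeroDivisionError

    def average_corrected(grades):
        """Corrected version: guards the empty case and uses true division."""
        if not grades:
            return 0.0
        return sum(grades) / len(grades)

    print(average_student([3, 4]))     # 3, not the expected 3.5
    print(average_corrected([3, 4]))   # 3.5
    print(average_corrected([]))       # 0.0 instead of crashing

The difference between the two versions is what lets a TA localise the bug quickly, which fits the abstract’s point that TAs’ difficulties are more about cognitive load than conceptual understanding.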

All welcome, details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Sarah Clinch for nominating this month’s paper.

References

  1. Yana Malysheva and Caitlin Kelleher (2022) Assisting Teaching Assistants with Automatic Code Corrections. CHI ’22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, April 2022, Article No. 231, Pages 1–18. DOI: 10.1145/3491102.3501820

Join us to discuss what goes on in the mind of Teaching Assistants on Monday 10th May at 2pm BST

Thinking icon via flaticon.com

Both graduate and undergraduate teaching assistants (TAs) are crucial to facilitating students’ learning. What goes on inside the mind of a teaching assistant? How can understanding this help us train TAs better for the roles they play in education? Join us to discuss a paper by Julia M. Markel and Philip Guo. [1] From the abstract:

As CS enrolments continue to grow, introductory courses are employing more undergraduate TAs. One of their main roles is performing one-on-one tutoring in the computer lab to help students understand and debug their programming assignments. What goes on in the mind of an undergraduate TA when they are helping students with programming? In this experience report, we present firsthand accounts from an undergraduate TA documenting her 36 hours of in-lab tutoring for a CS2 course, where she engaged in 69 one-on-one help sessions. This report provides a unique perspective from an undergraduate’s point-of-view rather than a faculty member’s. We summarise her experiences by constructing a four-part model of tutoring interactions: a) The tutor begins the session with an initial state of mind (e.g., their energy/focus level, perceived time pressure). b) They observe the student’s outward state upon arrival (e.g., how much they seem to care about learning). c) Using that observation, the tutor infers what might be going on inside the student’s mind. d) The combination of what goes on inside the tutor’s and student’s minds affects tutoring interactions, which progress from diagnosis to planning to an explain-code-react loop to post-resolution activities. We conclude by discussing ways that this model can be used to design scaffolding for training novice TAs and software tools to help TAs scale their efforts to larger classes.

This paper was one of nine best papers at SIGCSE 2021; there’s a video of the paper presentation on pathable.sigcse2021.org. All welcome. As usual, we’ll be meeting on zoom; see sigcse.cs.manchester.ac.uk/join-us for details.

References

  1. Markel, Julia M. and Guo, Philip (2021) Inside the Mind of a CS Undergraduate TA: A Firsthand Account of Undergraduate Peer Tutoring in Computer Labs. SIGCSE ’21: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, March 2021, Pages 502–508. DOI: 10.1145/3408877.3432533 (open access)

Join us to discuss using theory in Computing Education Research, 7th September at 11am

cc-licensed image from the thenounproject.com/term/theory/2332503/

Join us at 11am on Monday 7th September to discuss using theory in Computing Education Research. We’ll be talking about a paper [1] by Greg L. Nelson and Amy Ko at the University of Washington:

A primary goal of computing education research is to discover designs that produce better learning of computing. In this pursuit, we have increasingly drawn upon theories from learning science and education research, recognising the potential benefits of optimising our search for better designs by leveraging the predictions of general theories of learning. In this paper, we contribute an argument that theory can also inhibit our community’s search for better designs. We present three inhibitions: 1) our desire to both advance explanatory theory and advance design splits our attention, which prevents us from excelling at both; 2) our emphasis on applying and refining general theories of learning is done at the expense of domain-specific theories of computer science knowledge, and 3) our use of theory as a critical lens in peer review prevents the publication of designs that may accelerate design progress. We present several recommendations for how to improve our use of theory, viewing it as just one of many sources of design insight in pursuit of improving learning of computing.

Details of the zoom meeting will be posted on our slack workspace at uk-acm-sigsce.slack.com. If you don’t have access to the workspace, send me (Duncan Hull) an email to request an invite to join the workspace.

References

  1. Greg L. Nelson and Andrew Ko (2018) On Use of Theory in Computing Education Research in ICER ’18: Proceedings of the 2018 ACM Conference on International Computing Education Research, August 2018 Pages 31–39 DOI:10.1145/3230977.3230992

Join us to discuss how video production affects student engagement Monday 3rd August at 11am

As Universities transition to online teaching during the global coronavirus pandemic, there’s increasing interest in the use of pre-recorded videos to replace traditional lectures in higher education. Join us to discuss how video production affects student engagement, based on a paper by Philip Guo at the University of California, San Diego (UCSD), presented at the Learning at Scale conference: How video production affects student engagement: an empirical study of MOOC videos. (MOOC stands for Massive Open Online Course.) [1] Here is the abstract:

Videos are a widely-used kind of resource for online learning. This paper presents an empirical study of how video production decisions affect student engagement in online educational videos. To our knowledge, ours is the largest-scale study of video engagement to date, using data from 6.9 million video watching sessions across four courses on the edX MOOC platform. We measure engagement by how long students are watching each video, and whether they attempt to answer post-video assessment problems.

Our main findings are that shorter videos are much more engaging, that informal talking-head videos are more engaging, that Khan-style tablet drawings are more engaging, that even high-quality pre-recorded classroom lectures might not make for engaging online videos, and that students engage differently with lecture and tutorial videos.

Based upon these quantitative findings and qualitative insights from interviews with edX staff, we developed a set of recommendations to help instructors and video producers take better advantage of the online video format. Finally, to enable researchers to reproduce and build upon our findings, we have made our anonymized video watching data set and analysis scripts public. To our knowledge, ours is one of the first public data sets on MOOC resource usage.
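
As a rough illustration of the engagement measure described above (how long students watch each video, and whether they attempt the post-video problem), here is a small Python sketch. The field names and numbers are hypothetical; they are not drawn from the paper’s published dataset or analysis scripts.

    # Hypothetical illustration (not the paper's data or scripts): per-session
    # engagement as the fraction of each video watched, plus whether the
    # student attempted the post-video assessment problem.
    from statistics import mean

    # (video_length_seconds, seconds_watched, attempted_post_video_problem)
    sessions = [
        (360, 350, True),    # short video, mostly watched
        (360, 120, False),   # short video, abandoned early
        (900, 200, False),   # long video, abandoned early
        (900, 870, True),    # long video, mostly watched
    ]

    def engagement(video_length, seconds_watched, attempted):
        """Return the fraction of the video watched (capped at 1.0) and the attempt flag."""
        return min(seconds_watched / video_length, 1.0), attempted

    scores = [engagement(*s) for s in sessions]
    print("mean fraction watched:", round(mean(fraction for fraction, _ in scores), 2))
    print("post-video attempt rate:", mean(1 if attempted else 0 for _, attempted in scores))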

Details of the zoom meeting will be posted on our slack workspace at uk-acm-sigsce.slack.com. If you don’t have access to the workspace, send me (Duncan Hull) an email to request an invite to join the workspace. The paper refers to several styles of video production, some examples below.

Khan-style tablet drawings

The paper refers to Khan-style videos; this is an example, taken from the Khan Academy course on algorithms, khanacademy.org/computing/computer-science/algorithms

What is an algorithm? Video introduction to Khan Academy algorithms course by Thomas Cormen and Devin Balkcom

Talking Heads

Some examples of talking head videos:

How to frame a talking head with Tomás De Matteis

There’s more than one way to do talking head videos; see Moving to Blended Learning, Part 3: Types of Video at www.elearning.fse.manchester.ac.uk/fseta/moving-to-blended-learning-part-3-types-of-video/

Making video-friendly slides

Steve Pettifer explains how to make video-friendly slides


Lose the words! Your PowerPoint / Keynote presentation should not be a script or a handout

References

  1. Guo, Philip J.; Kim, Juho; Rubin, Rob (2014). “How video production affects student engagement: an empirical study of MOOC videos”. Proceedings of the first ACM conference on Learning @ scale conference: 41–50. DOI: 10.1145/2556325.2566239. See also altmetric.com/details/2188041 for the online attention score

Join us to discuss blended learning & pedagogy in Computer Science on Monday 6th July at 3pm

What is innovative pedagogy? CC-BY licensed picture by Giulia Forsythe

Join us for our next journal club meeting on Monday 6th July at 3pm. The papers we’ll be discussing below come from the #paper-suggestions channel of our slack workspace at uk-acm-sigsce.slack.com.

Show me the pedagogy!

The first paper is a short chapter by Katrina Falkner and Judy Sheard which gives an overview of pedagogic approaches including active learning, collaborative learning, cooperative learning, contributing student pedagogy (CSP), blended learning and MOOCs. [1] This was published last year as chapter 15 of The Cambridge Handbook of Computing Education Research, edited by Sally Fincher and Anthony V. Robins. A lot of blended learning resources focus on technology; this chapter talks about where blended learning fits with a range of different pedagogic approaches.

A video summary of all sixteen chapters of the Cambridge Handbook of Computing Education Research, including chapter 15 which we’ll be discussing

Implementing blended learning

The second paper (suggested by Jane Waite) is Design and implementation factors in blended synchronous learning environments [2]; here’s a summary from the abstract:

Increasingly, universities are using technology to provide students with more flexible modes of participation. This article presents a cross-case analysis of blended synchronous learning environments—contexts where remote students participated in face-to-face classes through the use of rich-media synchronous technologies such as video conferencing, web conferencing, and virtual worlds. The study examined how design and implementation factors influenced student learning activity and perceived learning outcomes, drawing on a synthesis of student, teacher, and researcher observations collected before, during, and after blended synchronous learning lessons. Key findings include the importance of designing for active learning, the need to select and utilise technologies appropriately to meet communicative requirements, varying degrees of co-presence depending on technological and human factors, and heightened cognitive load. Pedagogical, technological, and logistical implications are presented in the form of a Blended Synchronous Learning Design Framework that is grounded in the results of the study.

We look forward to seeing you there. Zoom details are on the slack channel; email me if you’d like to request an invitation to the slack channel. Likewise, if you don’t have access to the papers, let me know.

Short notes from the discussion

Some of the questions discussed on the day:

  • Inclusion raises a number of questions in terms of room management, gender balance – was this a consideration?
  • What effect do you think the absence of anyone F2F would have on the case studies and/or your outcomes?
  • How scalable is this approach? Can it be used with classes of 200 or 300 students?
  • Constructive alignment plays an important role in getting this kind of blended learning to work; see the work of John Biggs, e.g. his book Teaching for Quality Learning at University

Further reading from co-authors

Jacqueline Kenney, one of the co-authors of the paper we discussed, joined us for the session (thanks again, Jacqueline). Matt Bower also emailed some suggestions of work that follows on:

  • See related work Collaborative learning across physical and virtual worlds: Factors supporting and constraining learners in a blended reality environment DOI:10.1111/bjet.12435 and blendsync.org
  • Bower, M. (2006). Virtual classroom pedagogy. Paper presented at the Proceedings of the 37th SIGCSE technical symposium on Computer science education, Houston, Texas, USA. DOI:10.1145/1121341.1121390
  • Bower, M. (2006). A learning system engineering approach to developing online courses. Paper presented at the Proceedings of the 8th Australasian Conference on Computing Education – Volume 52, Hobart, Australia. 
  • Bower, M. (2007). Groupwork activities in synchronous online classroom spaces. Paper presented at the Proceedings of the 38th SIGCSE technical symposium on Computer science education, Covington, Kentucky, USA. DOI:10.1145/1227310.1227345
  • Bower, M. (2007). Independent, synchronous and asynchronous an analysis of approaches to online concept formation. Paper presented at the Proceedings of the 12th annual SIGCSE conference on Innovation and technology in computer science education, Dundee, Scotland. DOI:10.1145/1268784.1268827
  • Bower, M. (2008). The “instructed-teacher”: a computer science online learning pedagogical pattern. Paper presented at the Proceedings of the 13th annual conference on Innovation and technology in computer science education, Madrid, Spain. DOI:10.1145/1384271.1384323
  • Bower, M., & McIver, A. (2011). Continual and explicit comparison to promote proactive facilitation during second computer language learning. Paper presented at the Proceedings of the 16th annual joint conference on Innovation and technology in computer science education, Darmstadt, Germany. DOI:10.1145/1999747.1999809
  • Bower, M., & Richards, D. (2005). The impact of virtual classroom laboratories in CSE. Paper presented at the Proceedings of the 36th SIGCSE technical symposium on Computer science education, St. Louis, Missouri, USA. DOI:10.1145/1047344.1047447. As well, this Computers & Education paper specifically relates to a study of teaching computing online:
  • Bower, M., & Hedberg, J. G. (2010). A quantitative multimodal discourse analysis of teaching and learning in a web-conferencing environment–the efficacy of student-centred learning designs. Computers & education, 54(2), 462-478.

References

  1. Falkner, Katrina; Sheard, Judy (2019). “Pedagogic Approaches”: 445–480. doi:10.1017/9781108654555.016. Chapter 15 of The Cambridge Handbook of Computing Education Research
  2. Bower, Matt; Dalgarno, Barney; Kennedy, Gregor E.; Lee, Mark J.W.; Kenney, Jacqueline (2015). “Design and implementation factors in blended synchronous learning environments: Outcomes from a cross-case analysis”. Computers & Education 86: 1–17. doi:10.1016/j.compedu.2015.03.006. ISSN 0360-1315.