Join us to discuss how theory is used in assessment and feedback on Monday 3rd July at 2pm BST

Test image from flaticon.com

A good theory can be the most concentrated form of knowledge. By encapsulating an infinite number of cases, a theory can make predictions rather than just describing a finite number of disjointed facts. So how does theory feature in research about assessment and feedback? Join us on Monday 3rd July at 2pm BST (UTC+1) to discuss a paper investigating this question by Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai from the University of Hong Kong and Deakin University. [1] From the abstract of their paper:

Assessment and feedback research constitutes its own ‘silo’ amidst the higher education research field. Theory has been cast as an important but absent aspect of higher education research. This may be a particular issue in empirical assessment research which often builds on the conceptualisation of assessment as objective measurement. So, how does theory feature in assessment and feedback research? We conduct a critical review of recent empirical articles (2020, N = 56) to understand how theory is engaged with in this field. We analyse the repertoire of theories and the mechanisms for putting these theories into practice. 21 studies drew explicitly on educational theory. Theories were most commonly used to explain and frame assessment. Critical theories were notably absent, and quantitative studies engaged with theory in a largely instrumental manner. We discuss the findings through the concept of reflexivity, conceptualising engagement with theory as a practice with both benefits and pitfalls. We therefore call for further reflexivity in the field of assessment and feedback research through deeper and interdisciplinary engagement with theories to avoid further siloing of the field.

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Jane Waite at Queen Mary, University of London, for nominating this month’s paper.

References

  1. Juuso Henrik Nieminen, Margaret Bearman & Joanna Tai (2023) How is theory used in assessment and feedback research? A critical review, Assessment & Evaluation in Higher Education, 48:1, 77-94, DOI: 10.1080/02602938.2022.2047154

Join us on Zoom to dive into open online interactive textbook publishing on Monday 12th June at 2pm BST

CC licensed Scuba diver by flaticon.com

The textbook has long been a mainstay of education. Although online textbooks can give students easy (and sometimes free) access to increasingly interactive resources, authors have a bewildering array of tools and publishing models to select from. Software such as asciidoctor.org, bookdown.org, leanpub.com, pretextbook.org, quarto.org, rephactor.com, runestone.academy, zybooks.com, and many others allows instructors to publish course material freed from the constraints of printed paper, monolithic Learning Management Systems (LMSs) and Massive Open Online Courses (MOOCs). Join us on Monday 12th June at 2pm BST (UTC+1) to discuss a paper describing one example: Dive Into Systems, an undergraduate textbook on computer systems. We’ll be joined by the co-authors of the paper [1] and corresponding textbook: Suzanne Matthews, Tia Newhall and Kevin C. Webb from Swarthmore College, Pennsylvania and the United States Military Academy at westpoint.edu, New York. 🇺🇸 From the abstract of their paper:

This paper presents our experiences, motivations, and goals for developing Dive into Systems, a new, free, online textbook that introduces computer systems, computer organisation, and parallel computing. Our book’s topic coverage is designed to give readers a gentle and broad introduction to these important topics. It teaches the fundamentals of computer systems and architecture, introduces skills for writing efficient programs, and provides necessary background to prepare students for advanced study in computer systems topics. Our book assumes only a CS1 background of the reader and is designed to be useful to a range of courses as a primary textbook for courses that introduce computer systems topics or as an auxiliary textbook to provide systems background in other courses. Results of an evaluation from students and faculty at 18 institutions who used a beta release of our book show overwhelmingly strong support for its coverage of computer systems topics, its readability, and its availability. Chapters are reviewed and edited by external volunteers from the CS education community. Their feedback, as well as that of student and faculty users, is continuously incorporated into its online content at diveintosystems.org/book

We’ll also be discussing options for adding interactivity to textbooks, see diveintosystems.org/sigcse23. So join us to find out more about what the future of textbooks might look like using Dive Into Systems as an exemplar. All welcome, as usual, we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

Nominate papers you’d like us to discuss at future journal club meetings at sigcse.cs.manchester.ac.uk/papers.

References

  1. Suzanne J. Matthews, Tia Newhall and Kevin C. Webb (2021) Dive into Systems: A Free, Online Textbook for Introducing Computer Systems in SIGCSE ’21: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, pages 1110–1116, DOI: 10.1145/3408877.3432514

Join us on zoom to discuss the implications of programming getting easier, Monday 15th May at 2pm BST

Programming is hard, or at least it used to be. AI code generators like Amazon’s CodeWhisperer, DeepMind’s AlphaCode, GitHub’s Copilot, Replit’s Ghostwriter and many others now make programming easier, at least for some people, some of the time. What opportunities and challenges do these new tools present for educators? Join us on Zoom to discuss an award-winning paper on this very topic by Brett Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather and Eddie Antonio Santos at University College Dublin, the University of Auckland and Abilene Christian University. [1] We’ll be joined by two of the co-authors, who will present a lightning talk to kick off the discussion at our monthly ACM journal club meetup. Here’s the abstract of their paper:

The introductory programming sequence has been the focus of much research in computing education. The recent advent of several viable and freely-available AI-driven code generation tools presents several immediate opportunities and challenges in this domain. In this position paper we argue that the community needs to act quickly in deciding what possible opportunities can and should be leveraged and how, while also working on overcoming or otherwise mitigating the possible challenges. Assuming that the effectiveness and proliferation of these tools will continue to progress rapidly, without quick, deliberate, and concerted efforts, educators will lose advantage in helping shape what opportunities come to be, and what challenges will endure. With this paper we aim to seed this discussion within the computing education community.

All welcome, as usual we’ll be meeting on zoom at 2pm BST (UTC+1), details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Sue Sentance at the University of Cambridge for nominating this paper for discussion.

See also linkedin.com/posts/duncanhull_ai-codewhisperer-alphacode-activity-7051921278923915264-7i_5

References

  1. Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, Eddie Antonio Santos (2023) Programming Is Hard – Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation in Proceedings of the 54th ACM Technical Symposium on Computer Science Education: SIGCSE 2023, pages 500–506, DOI: 10.1145/3545945.3569759

Join us to discuss using AI to solve simple programming problems on Monday 3rd April at 2pm BST

CC licensed pilot icon from flaticon.com

Maybe you wrote that code and maybe you didn’t. If an AI such as the OpenAI Codex model behind GitHub Copilot helped you, how did it solve your problem? How much did Artificial Intelligence help or hinder your solution? Join us to discuss a paper on this very topic by Michel Wermelinger from the Open University, published at the SIGCSE technical symposium earlier this month. [1] We’ll be joined by Michel, who will present a lightning talk to kick off our discussion. Here’s the abstract of his paper:

The teaching and assessment of introductory programming involves writing code that solves a problem described by text. Previous research found that OpenAI’s Codex, a natural language machine learning model trained on billions of lines of code, performs well on many programming problems, often generating correct and readable Python code. GitHub’s version of Codex, Copilot, is freely available to students. This raises pedagogic and academic integrity concerns. Educators need to know what Copilot is capable of, in order to adapt their teaching to AI-powered programming assistants. Previous research evaluated the most performant Codex model quantitatively, e.g. how many problems have at least one correct suggestion that passes all tests. Here I evaluate Copilot instead, to see if and how it differs from Codex, and look qualitatively at the generated suggestions, to understand the limitations of Copilot. I also report on the experience of using Copilot for other activities asked of students in programming courses: explaining code, generating tests and fixing bugs. The paper concludes with a discussion of the implications of the observed capabilities for the teaching of programming.

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Michel Wermelinger (2023) Using GitHub Copilot to Solve Simple Programming Problems in Proceedings of the 54th ACM Technical Symposium on Computer Science Education: SIGCSE 2023, pages 172–178, DOI: 10.1145/3545945.3569830

Join us to discuss code comprehension on Monday 6th March at 2pm GMT

CC licensed puzzle icon by flaticon.com

It’s all very well getting an AI to write your code for you, but neither writing code nor reading code is the same as understanding code. So what is going on in novices’ brains when they learn to actually understand the code they are reading and writing? Join us on Monday 6th March at 2pm GMT to discuss a paper by Quintin Cutts and Maria Kallia from the University of Glasgow on this very topic [1], from the abstract:

An approach to code comprehension in an introductory programming class is presented, drawing on the Text Surface, Functional and Machine aspects of Schulte’s Block Model, and emphasising programming as a modelling activity involving problem and machine domains. To visually connect the domains and a program, a key diagram conceptualising the three aspects lies at the approach’s heart, alongside instructional exposition and exercises, which are all presented. Students find the approach challenging initially, but most recognise its value later, and identify, unexpectedly, the value of the approach for problem decomposition, planning and coding.

We’ll be joined by one of the co-authors (Quintin Cutts), who’ll give us a lightning talk summary of the paper to kick off our journal club discussion. [1] Quintin has added: “You can’t write if you can’t read. In just four pages the paper outlines a classroom approach to developing in novices good code comprehension right from the start of an introductory course. There’s also some feedback on what students thought, a year later – spoiler – they seemed to get a lot from it. Anyone teaching introductory programming might find such a short paper thought provoking, even if they don’t pick up the technique in their teaching. Worth a quick read, and coming along to listen/add to the discussion…”
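If you’ve not come across Schulte’s Block Model before, here is a rough sketch of its three aspects (Text Surface, Function and Machine) applied to a toy program. This is our own made-up illustration, with a hypothetical class, method and figures, and is not an example taken from the paper:

```java
// Hypothetical illustration of the three Block Model aspects named in the abstract
// (Text Surface, Function and Machine) -- not an example from the paper itself.
public class Discount {

    // Function (problem domain): what is this code for? It works out what a
    //   shopper actually pays for an item in a sale.
    // Text surface: what does the code literally say? A method called applyDiscount
    //   taking a price and a percentage, returning a double.
    // Machine (execution): what happens when it runs? The saving is computed,
    //   subtracted from the price, and a new value is returned; the caller's
    //   original price is unchanged.
    public static double applyDiscount(double price, double percentOff) {
        double saving = price * (percentOff / 100.0);
        return price - saving;
    }

    public static void main(String[] args) {
        // A 40.00 item with 25% off costs 30.00
        System.out.println(applyDiscount(40.00, 25.0));
    }
}
```

As we read the abstract, the approach in the paper is about helping novices deliberately connect these different readings of the same program, rather than stopping at the text surface.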

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Quintin Cutts and Maria Kallia (2023) Introducing Modelling and Code Comprehension from the First Days of an Introductory Programming Class in CEP ’23: Proceedings of the 7th Conference on Computing Education Practice, pages 21–24, DOI: 10.1145/3573260.3573266

Join us to discuss Collaborative Coding in the Cloud on Monday 6th February at 2pm GMT

Creative Commons cloud image by flaticon.com

More and more software development tools are available in the cloud, with Replit, CodingRooms, GitHub Codespaces, Amazon Web Services Cloud9, JetBrains and Eclipse all offering online environments for developers to code collaboratively in the cloud. Integrated Development Environments (IDEs), which have traditionally been installed as “fatter” desktop clients, are increasingly available as “thinner” web-based clients running in a browser. These tools can lower some of the barriers to installation and maintenance for their users. What are the strengths and weaknesses of these new tools for teaching introductory programming courses? Join us on Monday 6th February at 2pm GMT to discuss a paper by Phil Hackett and his colleagues at the Open University on this very topic [1], from the abstract:

This paper discusses a pilot research project, which investigated the use of online collaborative IDEs (Integrated development environments) during a first-year computing degree course. The IDEs used can be described as virtual computing labs because they replicate some of the actions possible in physical computing labs. Students were supported by a tutor with real-time help and feedback provided, whilst they were programming, without being collocated. The use of two different platforms is considered with the benefits and drawbacks discussed. Students and tutors indicated that they would like to use a virtual computing lab approach in the future.

We’ll be joined by the lead author, Phil Hackett, who’ll give us a lightning talk summary of the paper to kick off our journal club discussion. The paper was presented at Computing Education Practice (CEP) in Durham earlier this month. [1]

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Phil Hackett, Michel Wermelinger, Karen Kear and Chris Douce (2023) Using a Virtual Computing Lab to Teach Programming at a Distance in CEP ’23: Proceedings of the 7th Conference on Computing Education Practice, pages 5–8, DOI: 10.1145/3573260.3573262

Join us to discuss Computing in school in the UK & Ireland on Monday 5th December at 2pm GMT

CC licensed school image via flaticon.com

Computing is widely taught in schools in the UK and Ireland, but how does the subject vary across primary and secondary education in England, Scotland, Wales, Northern Ireland and the Republic of Ireland? Join us to discuss via a paper published at UKICER.com by Sue Sentance, Diana Kirby, Keith Quille, Elizabeth Cole, Tom Crick and Nicola Looker. [1] From the abstract:

Many countries have increased their focus on computing in primary and secondary education in recent years and the UK and Ireland are no exception. The four nations of the UK have distinct and separate education systems, with England, Scotland, Wales, and Northern Ireland offering different national curricula, qualifications, and teacher education opportunities; this is the same for the Republic of Ireland. This paper describes computing education in these five jurisdictions and reports on the results of a survey conducted with computing teachers. A validated instrument was localised and used for this study, with 512 completed responses received from teachers across all five countries. The results demonstrate distinct differences in the experiences of the computing teachers surveyed that align with the policy and provision for computing education in the UK and Ireland. This paper increases our understanding of the differences in computing education provision in schools across the UK and Ireland, and will be relevant to all those working to understand policy around computing education in school.

We’ll be joined by two of the paper’s co-authors, Sue Sentance and Diana Kirby, from the University of Cambridge and the Raspberry Pi Foundation, who will give a lightning talk summary to start our discussion.

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Joseph Maguire at the University of Glasgow for proposing this month’s paper.

References

  1. Sue Sentance, Diana Kirby, Keith Quille, Elizabeth Cole, Tom Crick and Nicola Looker (2022) Computing in School in the UK & Ireland: A Comparative Study in UKICER ’22: Proceedings of the 2022 Conference on United Kingdom & Ireland Computing Education Research, Article 5, pages 1–7, DOI: 10.1145/3555009.3555015

Join us to discuss novice use of Java on Monday 7th November at 2pm GMT

Java is widely used as a teaching language in universities around the world, but what wider problems does it present for novice programmers? Join us to discuss via a paper published in TOCE by Neil Brown, Pierre Weill-Tessier, Maksymilian Sekula, Alexandra-Lucia Costache and Michael Kölling. [1] From the abstract:

Objectives: Java is a popular programming language for use in computing education, but it is difficult to get a wide picture of the issues that it presents for novices, and most studies look only at the types or frequency of errors. In this observational study we aim to learn how novices use different features of the Java language. Participants: Users of the BlueJ development environment have been invited to opt-in to anonymously record their activity data for the past eight years. This dataset is called Blackbox, which was used as the basis for this study. BlueJ users are mostly novice programmers, predominantly male, with a median age of 16. Our data subset featured approximately 225,000 participants from around the world. Study Methods: We performed a secondary data analysis that used data from the Blackbox dataset. We examined over 320,000 Java projects collected over the course of eight years, and used source code analysis to investigate the prevalence of various specifically-selected Java programming usage patterns. As this was an observational study without specific hypotheses, we did not use significance tests; instead we present the results themselves with commentary, having applied seasonal trend decomposition to the data. Findings: We found many long-term trends in the data over the course of the eight years, most of which were monotonic. There was a notable reduction in the use of the main method (common in Java but unnecessary in BlueJ), and a general reduction in the complexity of the projects. We find that there are only a small number of frequently used types: int, String, double and boolean, but also a wide range of other infrequently used types. Conclusions: We find that programming usage patterns gradually change over a long period of time (a period where the Java language was not seeing major changes), once seasonal patterns are accounted for. Any changes are likely driven by instructors and the changing demographics of programming novices. The novices use a relatively restricted subset of Java, which implies that designers of languages specifically targeted at novices can satisfy their needs with a smaller set of language constructs and features. We provide detailed recommendations for the designers of educational programming languages and supporting development tools.
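To make those findings a little more concrete, here is a purely hypothetical sketch of the kind of small class a BlueJ novice might write: no main method (BlueJ lets you create objects and call their methods interactively from the object bench) and only the handful of types, int, String, double and boolean, that dominate in the study. The class and its contents are our own invention, not code from the paper or the Blackbox dataset:

```java
// Hypothetical illustration only -- not code from the paper or the Blackbox dataset.
// In BlueJ a novice can create a BankAccount on the object bench and invoke its
// methods directly, so no main method is needed to run this code.
public class BankAccount {
    private String owner;    // String: one of the few frequently used types
    private double balance;  // double: likewise
    private boolean frozen;  // boolean: likewise

    public BankAccount(String owner) {
        this.owner = owner;
        this.balance = 0.0;
        this.frozen = false;
    }

    public String getOwner() {
        return owner;
    }

    public void deposit(double amount) {
        if (!frozen && amount > 0) {
            balance = balance + amount;
        }
    }

    public int balanceInPence() {
        return (int) Math.round(balance * 100);  // int: the most common type in the study
    }
}
```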

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Neil C. C. Brown, Pierre Weill-Tessier, Maksymilian Sekula, Alexandra-Lucia Costache and Michael Kölling (2022) Novice use of the Java programming language ACM Transactions on Computing Education, DOI: 10.1145/3551393

Join us to discuss graduate skills for Computer Science students on Monday 3rd October at 2pm BST

Graduate cap by flaticon.com

What do employers want from Computer Science students and how good are universities at producing graduates with the skills employers need? Join us to discuss via a paper by Rosanne English and Alan Hayes from UKICER 2022. [1] From the abstract:

In preparing computing science students for industry, degree content often focuses on technical skills such as programming. Such skills are essential for a successful post-study career in industry and are popular with students. However, industry notes that students are often limited in what can be referred to as graduate attributes or transferable skills. Such skills include effective teamwork, communication, and critical thinking amongst others. Similar gaps have also been demonstrated for computing science students more specifically, resulting in industry developing their own training programmes for graduates. To address this issue, graduate attributes could be incorporated more readily into computing curricula. Within the UK this is discussed in accreditation requirements as well as higher education frameworks. However, research which aims to explore how to achieve this is still comparatively limited. Building on existing work in this area, this paper presents a thematic analysis of graduate attributes at Russell Group Universities in the UK to identify the most common attribute themes, and uses the most frequent themes to begin to consider how these could be more readily embedded in CS curricula.

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Rosanne English and Alan Hayes (2022) Towards Integrated Graduate Skills for UK Computing Science Students in UKICER ’22: Proceedings of the 2022 Conference on United Kingdom & Ireland Computing Education Research Pages 1–7 DOI:10.1145/3555009.3555018 (free version via https://pureportal.strath.ac.uk… )

Join us to discuss what counts as Computing Education Research on Monday 5th September at 2pm BST

Picture of Glasgow Cathedral (St Mungo’s) on Wikimedia Commons w.wiki/5aFU

Science is a broad church, full of narrow minds, trained to know ever more about even less. That’s according to Steve Jones [1], but in Computing Education Research (CER) are we being too narrow-minded about what counts (and what doesn’t count) as a contribution? Join us to discuss via a paper by Steve Draper and Joseph Maguire at the University of Glasgow recently published in TOCE [2]. From the abstract:

The overall aim of this paper is to stimulate discussion about the activities within CER, and to develop a more thoughtful and explicit perspective on the different types of research activity within CER, and their relationships with each other. While theories may be the most valuable outputs of research to those wishing to apply them, for researchers themselves there are other kinds of contribution important to progress in the field. This is what relates it to the immediate subject of this special journal issue on theory in CER. We adopt as our criterion for value “contribution to knowledge”. This paper’s main contributions are: A set of 12 categories of contribution which together indicate the extent of this terrain of contributions to research. Leading into that is a collection of ideas and misconceptions which are drawn on in defining and motivating “ground rules”, which are hints and guidance on the need for various often neglected categories. These are also helpful in justifying some additional categories which make the set as a whole more useful in combination. These are followed by some suggested uses for the categories, and a discussion assessing how the success of the paper might be judged.

All welcome, as usual we’ll be meeting on zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Steve Jones (2007) Coral: A Pessimist in Paradise, Little Brown
  2. Steve Draper and Joseph Maguire (2022) The different types of contributions to knowledge (in CER): All needed, but not all recognised ACM Transactions on Computing Education (TOCE), DOI: 10.1145/3487053