We’ll be watching this short video intro to blended learning.
If you’ve not got access to the workspace yet, ping me or Alcwyn Parker and we’ll add you to the group. Journal Club is part of the Association for Computing Machinery (ACM) Special Interest Group (SIG) on Computer Science Education (CSE) – all welcome!
Hieroglyphs from the tomb of Seti I, by Jon Bodsworth via Wikimedia Commons and the Egypt archive
ACM SIGCSE Journal Club returns Monday 4th May at 11am. The paper we’re discussing this month is “Relating Natural Language Aptitude to Individual Differences in Learning Programming Languages” by Chantel Prat et al., published in Scientific Reports. [1] Here’s the abstract:
This experiment employed an individual differences approach to test the hypothesis that learning modern programming languages resembles second “natural” language learning in adulthood. Behavioral and neural (resting-state EEG) indices of language aptitude were used along with numeracy and fluid cognitive measures (e.g., fluid reasoning, working memory, inhibitory control) as predictors. Rate of learning, programming accuracy, and post-test declarative knowledge were used as outcome measures in 36 individuals who participated in ten 45-minute Python training sessions. The resulting models explained 50–72% of the variance in learning outcomes, with language aptitude measures explaining significant variance in each outcome even when the other factors competed for variance. Across outcome variables, fluid reasoning and working-memory capacity explained 34% of the variance, followed by language aptitude (17%), resting-state EEG power in beta and low-gamma bands (10%), and numeracy (2%). These results provide a novel framework for understanding programming aptitude, suggesting that the importance of numeracy may be overestimated in modern programming education environments.
The paper describes an experiment investigating the relationship between learning natural languages and learning programming languages, and it draws some interesting conclusions that make for good discussion points. Does being good at learning natural languages like English make you good at learning programming languages like Python? Do linguists make good coders?
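As a concrete talking point, part of what makes the natural language analogy plausible is that idiomatic Python reads almost like English. The snippet below is our own illustration, not an excerpt from the paper’s training sessions:

```python
# Illustrative sketch only - not taken from the paper's Python sessions.
# Idiomatic Python often reads like an English sentence, which is one
# reason the "programming as language learning" analogy feels apt.
words = ["natural", "language", "aptitude"]

for word in words:
    if word.startswith("n"):
        print(word.upper())  # prints: NATURAL

# The same idea compressed into a single "sentence" (a list comprehension):
shouted = [word.upper() for word in words if word.startswith("n")]
print(shouted)  # prints: ['NATURAL']
```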
Computing educators are often baffled by the misconceptions that their CS1 students hold. We need to understand these misconceptions more clearly in order to help students form correct conceptions. This paper describes one stage in the development of a concept inventory for Computing Fundamentals: investigation of student misconceptions in a series of core CS1 topics previously identified as both important and difficult. Formal interviews with students revealed four distinct themes, each containing many interesting misconceptions. Three of those misconceptions are detailed in this paper: two misconceptions about memory models, and data assignment when primitives are declared. Individual misconceptions are related, but vary widely, thus providing excellent material to use in the development of the CI. In addition, CS1 instructors are provided immediate usable material for helping their students understand some difficult introductory concepts.
In case you’re wondering, CS1 refers to the first course in the introductory sequence of a computer science major (in American parlance), roughly equivalent to the first year of an undergraduate degree in the UK. CI refers to a Concept Inventory, a test designed to tell teachers exactly what students know and don’t know. According to Reinventing Nerds, the paper has been influential because it was the “first to apply rigorous research methods to investigating misconceptions”.
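To give a flavour of the memory-model misconceptions the abstract mentions, here is a hedged sketch in Python (our own example, not one of the interview items from the paper): novices often assume that assignment copies a value, when in fact it binds a second name to the same object.

```python
# A common CS1 misconception about memory models (our illustration,
# not an example taken from the paper): assignment in Python binds a
# name to an object; it does not copy the object.
a = [1, 2, 3]
b = a              # novices often expect b to be an independent copy
b.append(4)

print(a)           # [1, 2, 3, 4] - a changed too: a and b are the same list
print(a is b)      # True - both names refer to one object in memory

c = list(a)        # an explicit copy behaves the way novices expect
c.append(5)
print(a)           # [1, 2, 3, 4] - unchanged this time
```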