I recently finished lecturing the course "Foundations of Computer Science" to our newly arrived first-year computer scientists here at Cambridge. This was my first time lecturing the course, taking over from Anil while he's on sabbatical. Although I was very nervous indeed about it, I ended up really enjoying the experience - and I hope the students did too! This post is a little brain dump of my thoughts on how it went and how we might improve it for next year.
The course is 12 lectures long and has been lectured in a similar way since I myself was an undergraduate here, way back in 1996. There have been a few changes, not least of which is that back then it was in Standard ML rather than OCaml, but the core material has remained largely the same: lists, recursive functions, trees, higher-order functions, search and finally mutability. There are no prerequisites for the course, although all students have at least a maths A-level (or equivalent), and almost all of them have done some programming before, though the experience varies widely. Very few have done any functional programming, and even fewer have written any OCaml before.
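For a flavour of that core material, here's the sort of thing the students meet in the first few lectures (an illustrative example of my own, not one lifted from the notes):

```ocaml
(* Recursive functions over lists are the bread and butter of the early
   lectures: take a list apart with pattern matching, recurse on the tail. *)
let rec length = function
  | [] -> 0
  | _ :: xs -> 1 + length xs

(* Higher-order functions arrive a few lectures later. *)
let doubled = List.map (fun x -> x * 2) [1; 2; 3]

let () =
  assert (length [10; 20; 30] = 3);
  assert (doubled = [2; 4; 6])
```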
The notes for the course are distributed both in hard copy and as an interactive Jupyter Notebook, hosted on a JupyterHub server that I maintain. The idea is that the students can read through the notes and then play around with the code examples directly in the notebook. I don't encourage them to do much of this during the lectures, or give them time to - not because I think that would be a bad idea, but because it's a struggle to fit all the material in as it is! The notes are closely coupled to the lectures, organised into 11 chapters that correspond to the first 11 lectures, with exercises at the end of each chapter that are intended to be covered in the supervisions. We also have some assessed exercises - "Ticks" - that the students complete in their own time on the JupyterHub server via nbgrader. They are assessed automatically and very transparently: each "tick" is a Jupyter notebook with editable answer cells and read-only test cells. Overall we're aiming for the students not to have to install OCaml locally at all, though I hope many of them will choose to do so anyway.
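To give an idea of the shape of a tick - this is an invented example, not one of the real exercises - an editable answer cell asks the student to write a function, and the read-only test cell that follows simply runs assertions against it:

```ocaml
(* Answer cell (editable): the student fills in the implementation. *)
let rec sum_list = function
  | [] -> 0
  | x :: xs -> x + sum_list xs

(* Test cell (read-only): nbgrader runs this, and the tick is awarded
   only if every assertion passes. *)
let () =
  assert (sum_list [] = 0);
  assert (sum_list [1; 2; 3] = 6);
  print_endline "All tests passed!"
```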
While I didn't want them playing around with the notebook during the lectures, I did try to get them to interact by answering questions. It's pretty intimidating to stick your head above the parapet like this, so as an incentive I rewarded those who answered (rightly or wrongly) with some of the excellent stickers that Tarides has printed over the years. Everybody loves stickers!
The questions I asked varied quite a lot in difficulty, and many came in the first few minutes of each lecture, during a short 'warm-up' in which we recapped the previous lecture. These warm-ups were strongly suggested by Anil, and as well as reminding everyone where we'd left off, they gave me useful feedback on which things the students had found challenging.
During the first lecture I do actually encourage them to at least log on to the JupyterHub server, mostly to get them used to the idea of trying it. The entertaining part is that our server isn't particularly big and beefy, so with 130 students all trying to log on at once it invariably caves in under the load. At this point in the lecture I ssh to the server, run btop/htop, and we watch it die in real time!
During the lectures themselves, rather than use Keynote or PowerPoint for the slides, I decided to try Slipshow, augmented with x-ocaml to embed executable OCaml code snippets. I'm very happy with how this worked out: I was able to prepare both working and broken snippets, modify them live during the lecture, and features like type-on-hover were very useful. In a few lectures where we discussed big-O notation, I was able to run code on different input sizes and really demonstrate the difference in run time between algorithms. After the lectures, I posted the slides on the course website so that students can refer back to them, and they can also try out the live code snippets directly in the slides.
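The snippet below isn't lifted from the slides, but it's the kind of comparison that lands well live: reversing a list quadratically with repeated appends versus linearly with an accumulator, timed on the same input.

```ocaml
(* Quadratic reverse: each append re-copies the accumulated list. *)
let rec slow_rev = function
  | [] -> []
  | x :: xs -> slow_rev xs @ [x]

(* Linear reverse: an accumulator avoids the repeated appends. *)
let fast_rev xs =
  let rec go acc = function
    | [] -> acc
    | y :: ys -> go (y :: acc) ys
  in
  go [] xs

(* Run a reverse on a list of length n and report the elapsed time. *)
let time name f n =
  let input = List.init n (fun i -> i) in
  let t0 = Sys.time () in
  ignore (f input);
  Printf.printf "%s on %d elements: %.3fs\n" name n (Sys.time () -. t0)

let () =
  time "slow_rev" slow_rev 20_000;  (* O(n^2): visibly slow *)
  time "fast_rev" fast_rev 20_000   (* O(n): effectively instant *)
```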
Both Slipshow and x-ocaml are still quite young projects, so a few rough edges were inevitable, and in fact the interaction of the two revealed the biggest problem. In Slipshow's 'speaker-view' mode, where you have a separate window with notes and the current slide, the x-ocaml widgets in the two windows are effectively independent, so updating one doesn't update the other. Paul-Elliot, the author of Slipshow, already had a potential fix for this in the works when I spoke to him about it, so hopefully next time I use this I'll be able to have speaker notes on screen instead of hand-written index cards! The x-ocaml project is a lot smaller than Slipshow, so I was able to use Claude to help me add functionality I needed, such as programmatically highlighting sections of the code.
Another new thing I tried this year was going over 'tracing' of execution to help the students understand how programs run. We've always taught reduction steps in the course - which works well, since mutability only appears in the final lecture - but reductions can quickly become unwieldy, and doing them all by hand is challenging. Tracing a function tells the runtime to log every call to it and every return from it, so you just call the function on your desired input and get a fully automatic trace of the execution. Because it only shows calls and returns it doesn't tell the full story, but alongside a handwritten reduction it can help reassure students that they're on the right track. I ended up writing up a trace of a particularly complicated lazy-list evaluation using Slipshow and x-ocaml, which I posted here.
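In the OCaml toplevel this is the #trace directive; a minimal example (mine, not one from the course) looks like this:

```ocaml
(* A simple recursive function to trace. *)
let rec fact n = if n = 0 then 1 else n * fact (n - 1);;

(* Ask the runtime to log every call to fact and every return from it. *)
#trace fact;;

fact 3;;
(* The toplevel then prints something like:
     fact <-- 3
     fact <-- 2
     fact <-- 1
     fact <-- 0
     fact --> 1
     fact --> 1
     fact --> 2
     fact --> 6
   - : int = 6 *)
```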
Overall I'm very happy with how the course went this year, though in some ways it did feel a little bit like the course finished just when it had started to get to the good stuff! There's a Tripos review process going on at the moment, so maybe we'll get to expand this course a bit in future years.
While the Slipshow+x-ocaml combination worked well, the fact that we ended up with two separate systems for executing OCaml wasn't ideal. I think it'd be a really nice project to investigate just how far we can push x-ocaml, Slipshow, or some other web technology towards a true "serverless" experience, so that we can ditch the JupyterHub server entirely. By caching the x-ocaml 'execution' web worker in the browser, we could have a system that works fully offline, removing an annoyingly fragile single point of failure. Of course, we'd still need some way to run the assessed exercises, but that's a small part of a much larger problem: we really can't continue to ignore how LLMs are changing the way students approach these exercises, in both positive and negative ways. To answer this properly, we need to think hard about what the purpose of these exercises is, and look around to see what our colleagues are doing in this space.
The slide decks themselves are fully open and available on the course website: