Coordinates: Thursdays, 10 AM to noon, room 028, Ludwigstrasse 31.
Lecturer: Tom Sterkenburg. Contact me at tom.sterkenburg@lmu.de; visit me in room 126 of Ludwigstrasse 31.
Course description

Machine learning with deep neural networks has seen a tremendous rise in the last decade, with large language models like GPT being only the most recent example. This seminar focuses on philosophical questions around the capacities of deep neural networks. In particular, we will read and discuss the recent book by Cameron Buckner, From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence.

Contents and material

Our primary reading is the book by Buckner, supplemented by some background reading. See the schedule and materials below for details. (Further background reading will be added to the schedule as the course progresses, possibly also based on the participants' interests.)

Prerequisites

This is a philosophy course, and our focus will be on conceptual issues. Some prior knowledge of machine learning, and of deep learning in particular, will of course be helpful, but none is required. Throughout the course I will also spend a little time covering the essentials of deep learning techniques, mostly based on the introductory textbook by Goodfellow, Bengio & Courville (2016).

Assessment

The course is worth 9 ECTS. Your grade will be determined by a term paper at the end of the course. The term paper treats a theme we have discussed in the course and has a length of about 6000 words.

In addition, starting from the third meeting, you are required to submit at least two discussion questions by the day prior to each meeting. A discussion question points out some aspect of the text that you find problematic, implausible, or confusing, together with a brief explanation of why you think so.

Schedule

Date | Topic | Material / Assignment
Thu 19 October | Intro | Buckner, preface. Background: Goodfellow, Bengio & Courville, ch. 1.
Thu 26 October | Essentials of deep learning: deep feedforward networks | Background: Goodfellow, Bengio & Courville, ch. 6.
Thu 2 November | NO CLASS |
Thu 9 November | Empiricism vs. nativism in philosophy and in computer science | Buckner, ch. 1 up to sect. 1.3. Background: Marcus (2018b).
Thu 16 November | The new empiricist DoGMA | Buckner, ch. 1 from sect. 1.4. Background: Goyal & Bengio (2022), sects. 1-2, 5.
Thu 23 November | Deep learning and its potential | Buckner, ch. 2.
Thu 30 November | Perception: Locke and convolutional nets | Buckner, ch. 3 up to sect. 3.3. Background: Goodfellow, Bengio & Courville, sects. 9.1-9.4.
Thu 7 December | Perception: Locke and convolutional nets | Buckner, ch. 3 from sect. 3.4. Background: Goodfellow, Bengio & Courville, sects. 9.10-9.11.
Thu 14 December | Memory: Ibn Sina and deep reinforcement learning | Buckner, ch. 4 up to sect. 4.4. Background: Botvinick et al. (2019), up to p. 414.
Thu 21 December | Memory: Ibn Sina and deep reinforcement learning | Buckner, ch. 4 from sect. 4.5. Background: Botvinick et al. (2019).
CHRISTMAS BREAK.
Thu 11 January | Imagination: Hume and generative adversarial networks | Buckner, ch. 5 up to sect. 5.3. Background: Goetschalckx et al. (2021).
Thu 18 January | Imagination: Hume and generative adversarial networks | Buckner, ch. 5 from sect. 5.4. Background: Goetschalckx et al. (2021).
Thu 25 January | Attention: James and transformers | Buckner, ch. 6 up to sect. 6.4. Background: Lindsay (2020), up to sect. 2.
Thu 1 February | Attention: James and transformers | Buckner, ch. 6 from sect. 6.5. Background: Lindsay (2020), from sect. 3.
Thu 8 February | Social and moral cognition: De Grouchy and affective computing | Buckner, ch. 7.
Mon 25 March | Deadline term paper |

Material, primary

Material, listed background

(Additional listed background reading may be added as the course progresses.)

Material, further background