Abstract: Deep artificial neural network (DANN) designers often accept that the systems they construct lack interpretability and are not transparent – in other words, that they are ‘inexplicable’. It is not obvious what they mean by this. Explanations, particularly in the neurosciences, are often thought to consist of the mechanisms which underpin observed phenomena. But DANN designers have complete access to the mechanisms underpinning the systems they build – as well as access to their training sets, design parameters, training algorithms and so on. In this talk I distinguish various senses of ‘explanation’ – ontic, epistemic, objective, subjective. The aims are (1) to help map out the various questions we might be interested in, (2) to scope the limits of mechanistic approaches to the question of explanation, and (3) to try to narrow down the sense in which DANNs are supposed to be explanatorily opaque.
Biography: William has been a lecturer in Philosophy at the University of Southampton since 2016 and is part of the Philosophy of Language, Philosophy of Mind and Epistemology Research Group. Prior to this he lectured at King's College London, the University of York and Cardiff University. His research interests are centred on the epistemology of perception, social cognition and inferential knowledge.