The introduction of AI into our knowledge-gathering practices and decision-making often means that we also become (somewhat) epistemically dependent on these systems. We rely on them to make it easier for us to know and understand phenomena. This can be highly beneficial, similar to how we use instruments in many settings to acquire knowledge and understanding more easily. However, the specific nature of AI, primarily its opacity and its often unexpected lapses in reliability, poses challenges to beneficial epistemic dependence. In this talk I go over the risks to our competence and responsibility as we increase our epistemic dependence on AI, and offer some suggestions on how we might deal with them.
Stefan Buijsman is an Associate Professor of Philosophy at TU Delft, where he works at the intersection of the epistemology and ethics of AI. In addition to more theoretical research, he leads the Delft Digital Ethics Centre, which runs a wide range of applied projects that aim to translate values central to responsible AI into concrete design requirements. Among other partners, he collaborates with the Erasmus Medical Centre, the Dutch social benefits organization, and the Dutch military on the challenge of operationalizing responsible AI.
If you are interested in participating online, please register via the following form: https://forms.microsoft.com/r/W3whw0ac3B. If you would like to attend in person, please send an e-mail to udnn.ht@tu-dortmund.de.
This lecture is a special edition of the AI Colloquium at TU Dortmund University. The series investigates fundamental issues in AI from the vantage point of philosophy of science, including topics such as the transparency and interpretability of AI within scientific research, as well as the impact of AI on scientific understanding and explanation.
JProf. Florian Boge