Responsibility
Lecture
This lecture considers some abstract ways to think about and conceptualise responsibility.
This year it will be a guest lecture from Dr Fabio Tollon, a philosopher who works as part of the Bridging Responsible AI Divides (BRAID) project.
Abstract:
In recent years ‘Responsible AI’ (R-AI) has been applied to a number of contexts and research applications. On the surface this seems a good thing, as of course we want the development, deployment, and use of AI-systems to be in line with certain normative principles, and it seems the ‘responsible’ frame can give us just that. R-AI can ensure that AI-systems respect human rights and are aligned with democratic values. However, just what exactly R-AI means is contested, and often undefined. This raises problems for translating the ‘principles’ of various R-AI guidelines into meaningful ‘practices’ for those developing AI-systems.
By getting a better handle on R-AI, we can better promote a philosophically robust understanding of the concept. This means, among other things, acknowledging that there is no ‘one’ R-AI community, but rather a network of intersecting and interconnected communities. The practitioners who currently make up the R-AI ecosystem come from many different disciplines and sectors, and their interaction is what makes the R-AI ecosystem what it is. R-AI, in this framing, is not a ‘problem’ to be ‘solved’, but a process to be governed.
Slides:
Bonus:
If you're interested in the more basic version of this that I usually deliver, the slides and transcript for that are below:
Slides:
Transcript:
Reading
Required - Debates About Responsibility
"Who Should Stop Unethical A.I.?"
https://www.newyorker.com/tech/annals-of-technology/who-should-stop-unethical-ai (thanks Charlie Lee for the accessible version)
This article is not very focused (and arguably tries too hard to give "both sides"), but it raises a wide range of viewpoints on where the responsibility lies for stopping harmful tech development. I recommend focusing discussion on highlighting and summarising what these views are, then comparing their merits.
Optional - Personal Responsibility
"Directions for Future Work: From #TechWontBuildIt to #DesignJustice"
This chapter discusses a campaign of tech workers organising to refuse to create certain kinds of software. This movement leans heavily towards personal responsibility.
As does: "The Responsibility to Not Design and the Need for Citizen Professionalism"
https://techotherwise.pubpub.org/pub/vizamy14/release/1
Much shorter, but also much less informative.