Academic Handbook Course Descriptors and Programme Specifications
LPHIL7260 Responsible Artificial Intelligence Course Descriptor
Course Code | LPHIL7260 | Faculty | Philosophy |
UK Credit | 15 | US Credit | Any |
FHEQ Level | Level 7 | Date Approved | |
Core Attributes | N/A | ||
Pre-requisites | None | ||
Co-requisites | None | ||
Course Overview
This course introduces the ethical issues surrounding the responsible use of artificial intelligence (AI). It addresses the potential problems AI systems present to both individuals and society as a whole. Responsible AI involves designing, developing, deploying, and overseeing AI systems in a manner that ensures they are ethical, safe, and trustworthy. The course aims to give students insight into how philosophical thought intersects with other fields in the context of AI, enriching the creation and governance of emerging technologies. The curriculum is research-driven and draws on the specialised knowledge of Northeastern University London’s faculty. Each topic will be examined from a philosophical standpoint alongside at least one other relevant disciplinary perspective, which may range from business studies, economics, digital humanities, law, and politics to sociology, psychology, or computer science, among others. Significant emphasis is placed on the social and environmental sustainability of AI systems.
Learning Outcomes
On successful completion of the course, students will be able to:
Knowledge and Understanding
K1d | Demonstrate detailed knowledge and sophisticated understanding of key questions and contemporary debates in the field of Responsible AI from an interdisciplinary perspective. |
K4d | Evaluate the societal dimensions of AI and data practices and demonstrate a comprehensive understanding and critical awareness of key philosophical issues (ethical, cultural, privacy, or policy) surrounding data use, data processing, and AI. |
Subject Specific Skills
S3d | Develop original, practical and implementable ideas for the future development, implementation, and management of responsible AI systems. |
S4d | Understand the importance of embedding ethical considerations into the development of data applications and AI systems. |
Transferable and Employability Skills
T2d | Consistently display an excellent level of technical proficiency in written English and command of scholarly terminology, so as to be able to deal with complex issues in a sophisticated and systematic way. |
T3d | Communicate effectively, with rigorous arguments appropriate for both technical and non-technical audiences, about the development and application of responsible AI systems and the contemporary philosophical questions that surround them, through written reports. |
Teaching and Learning
This course has a dedicated Virtual Learning Environment (VLE) page with a syllabus and a range of additional resources (e.g. readings, question prompts, tasks, assignment briefs, and discussion boards) to orientate and engage students in their studies.
Teaching and learning strategies for this course will include:
- Lectures: Instructor-led classes.
- Seminars/workshops: Interactive sessions focused on applying theoretical concepts.
- Experiential Learning, which may include simulations and role-playing for hands-on experience, or guest speakers for insight from professionals.
- Online Resources: Flexible learning with additional study materials.
Faculty hold regular ‘office hours’, which are opportunities for students to drop in or sign up to explore ideas, raise questions, or seek targeted guidance or feedback, individually or in small groups.
Students are expected to attend and participate in all scheduled teaching and learning activities for this course and to manage their directed learning and independent study.
Indicative total learning hours for this course: 150, including a minimum of 16.5 scheduled hours.
Assessment
Both formative and summative assessment are used as part of this course, with formative opportunities typically embedded within interactive teaching activities delivered via the VLE.
Summative Assessments
AE: | Assessment Activity | Weighting (%) | Duration | Length |
1 | Written Assignment | 100% | N/A | 4,000 Words |
Further information about the assessments can be found in the Course Syllabus.
Feedback
Students will receive formative and summative feedback in a variety of ways, written (e.g. marked up on assignments or via the VLE) or oral (e.g. as part of interactive teaching sessions or in office hours).
Indicative Reading
Note: Comprehensive and current reading lists are produced annually in the Course Syllabus or other documentation provided to students; the indicative reading list below is provided as a general guide and forms part of the approval/modification process only.
Books
Voeneky, S., Kellmeyer, P., Mueller, O., & Burgard, W. (Eds.) (2022). The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
Dubber, M. D., Pasquale, F., & Das, S. (Eds.) (2020). The Oxford Handbook of Ethics of AI. Oxford University Press.
Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.
von Braun, J., Archer, M. S., Reichberg, G. M., & Sánchez Sorondo, M. (Eds.) (2021). Robotics, AI, and Humanity: Science, Ethics, and Policy. Springer Nature.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Sandler, R. L. (Ed.) (2014). Ethics and Emerging Technologies. Palgrave Macmillan UK.
Journals
Autor, D. H. (2015). ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’. The Journal of Economic Perspectives, 29, pp. 3–30.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
Spencer, D. A. (2022) ‘Automation and Well-Being: Bridging the Gap between Economics and Business Ethics.’ Journal of Business Ethics.
Hristov, K. (2016). ‘Artificial intelligence and the copyright dilemma’. Idea, 57, p.431.
van Wynsberghe, A. (2021). ‘Sustainable AI: AI for sustainability and the sustainability of AI’. AI and Ethics, 1, pp. 213–218. https://doi.org/10.1007/s43681-021-00043-6
Hongladarom, S. and Bandasak, J. (2023). ‘Non-western AI ethics guidelines: implications for intercultural ethics of technology’. AI & Society, pp. 1–14.
La Fors, K. (2022). ‘Toward children-centric AI: a case for a growth model in children-AI interactions’. AI & Society, pp. 1–13.
Reports
Taylor, S., Pickering, B., Boniface, M., Anderson, M., Danks, D., Følstad, A., Leese, M., Müller, V., Sorell, T., Winfield, A., & Woollard, F. (2018). Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation (1.0). Zenodo. https://doi.org/10.5281/zenodo.1303253
Electronic Resources
University of Oxford. Ethics in AI [Podcast]. Available at: https://podcasts.ox.ac.uk/series/ethics-ai
Ethical OS Toolkit. Available at: https://ethicalos.org/
Indicative Topics
- The ethics of automation
- Authorship, AI, and copyright
- AI in the classroom
- Human-computer interaction
- Human-centred AI
- Sustainable AI
- AI in healthcare
Version History
Title: LPHIL7260 Responsible Artificial Intelligence
Approved by: Academic Board | Location: Academic Handbook/Programme Specifications and Handbooks/Postgraduate Programme Specifications/MSc Computer Science Programme Specification/Course Descriptors |
Version number | Date approved | Date published | Owner | Proposed next review date | Modification (As per AQF4) & category number |
1.0 | July 2024 | July 2024 | Dr Tom Beevers | April 2029 | |