Academic Handbook AI Strategy

AI Strategy 2023-24

Vision

  1. Northeastern University London (the University) will be a national leader in advancing AI literacy and developing responsible use in the higher education setting.

Strategy Principles

  1. This strategy aims to place the University at the forefront of discovery concerning the future of AI, a tool with both ready promise and uncertain impacts. The University will use its strengths in research, teaching and learning, and external partnerships to examine the applications of AI in both educational and professional settings, and to cultivate foresight into a future of unprecedented integration between human labour and AI. This strategy recognises that AI literacy is integral to responsible AI use.
  2. This AI strategy reflects the University’s intrinsic and unique association with the humanities, the study of which develops acuity of thought, reasoning power, knowledge of methods and sources, information awareness, and communication skills, all outcomes applicable to lifelong learning and the demands of work and life after graduation (See AQF2: Overview of Teaching and Learning). It also aligns with the University’s inter- and cross-disciplinary growth. Equally, it is informed by Northeastern University’s academic vision and its unique position as a global university network with impactful and experiential education at its core, together with particular strengths in humanics.[1]
  3. This strategy proceeds from the following principles:
    1. Teaching, learning, and research at the University are tightly connected to external contexts, both within the world of work and beyond.
    2. The University has a responsibility to prepare its students for lives and careers after graduation and to deliver a competitive education by staying up to date with advances in education, industry, and society.
    3. The University recognises that any practical strategy it adopts will have an impact on education, industry, and society. This could be direct, such as in the case of influencing policy in the HE, government, or private sector, or indirect, by encouraging attitudes that University members carry into external contexts.
    4. AI technologies are advancing into professional settings, raising predictions of near-term prevalence in the workplace and the classroom.
    5. The rapid rate of change of AI technologies challenges predictions of long-term consequences for education, industry, and society.
    6. The use of AI carries immense potential. It also raises significant concerns, both for higher education and for society at large. In the context of education, AI has the potential to undermine learning, creativity and innovation, and independent thought and inquiry. It is the duty of the University to privilege applications of AI that foster these competencies. In this way, education has a crucial role to play in shaping the future global AI landscape.
    7. The use of AI raises important concerns about access, equality, bias, and discrimination. These issues must be addressed when integrating AI into learning activities, coursework and research.

Strategy Implementation

Generative AI

  1. This strategy applies to machine learning models, such as ChatGPT and Stable Diffusion, that generate text, code, audio, or graphics based on training or web-sourced data. It also applies to integrated forms of generative AI, such as Copilot, with the ability to produce substantively new content.

AI Literacy

  1. AI literacy comprises a basic understanding of how generative AI works, an awareness of its applications and limitations, and an ability to weigh ethical questions concerning its use. It builds on the content areas of humanics, a discipline that nurtures creativity and flexibility in anticipating a future of close interaction between humans and machines.[2]
    1. Technological literacy involves a familiarity with machine learning and the skills to optimise interactions with AI on the strength of this knowledge.
    2. AI literacy also comprises data literacy, which enables us to use AI for processing and organising data, and which empowers us to use our own knowledge and discernment in probing whether data is reliable or complete. Equally, data literacy allows us to integrate our knowledge and perspectives with data provided by AI for the purposes of efficiency and lateral ideation.
    3. AI literacy comprises a strong foundation in AI ethics, as well. This is the study and formation of principles that underlie AI development and use, and the judgement of AI use as morally right or wrong in specific cases. Generative AI raises issues of fairness, trustworthiness, sustainability, safety, and privacy, which human literacy, and particularly, the capacities of cultural agility and critical thinking, allows us to appreciate and resolve.
    4. AI literacy will take particular shape according to the demands of different disciplines and learning outcomes.
    5. In order to level disciplinary disparities and promote equal opportunity, the University will enable AI literacy through workshops, training sessions, and resources targeting both student and faculty audiences.
    6. The aim of advancing AI literacy is to empower faculty, students, and researchers to make informed decisions about AI use and to stimulate a proactive understanding of AI futures.

Responsible Use

  1. Being AI-literate is indispensable to using AI responsibly, although responsible use will take particular shape from the demands of different academic settings.
    1. In the context of teaching and learning, the responsible use of AI enhances and does not hinder learning. It refers to practices that do not draw attention away from foundational literacies, but instead afford us more time and energy to cultivate and fulfil scholastic potential.
    2. In a research context, using AI responsibly means at all times remaining accountable before peers, employers, funders, government, publishers, independent support agencies and networks, and the public, including future generations.[3] Where research involves the development of AI, responsible use encompasses the design, development, implementation, and regulation of AI systems that are ethical, safe, and trustworthy.
    3. In all cases, responsible use requires that the use of AI be acknowledged. This holds teaching, learning, and research to a standard of integrity rooted in transparent and accountable practices.
    4. In all activities, AI is to be used in ways that align with the values of the University, as outlined, for example, in its Equality, Diversity & Inclusion Policy and its Assessment Strategy.
    5. The University will issue clear guidance on what constitutes irresponsible use of AI and how to avoid it. Irresponsible use of AI in assessment will be defined in the Academic Misconduct Policy.

Conclusion

  1. Perhaps the most salient feature of AI at the time of writing is its potential to create new currents for discovery, one of Northeastern University’s academic plan pillars. The University is uniquely placed to enrich the forward momentum of the global university network to which it belongs by sharpening its foresight. Through its foundational strengths in the humanities, which cultivate depth and breadth of vision, and its inter- and cross-disciplinary development, the University will help bring about a future where the positive opportunities of AI become positive outcomes.

Version History

Title: AI Strategy

Approved by: Academic Board

Location: Academic Handbook / Strategies

Version Number: 23.1.0
Date Approved: July 2023
Date Published: September 2023
Owner: Associate Dean of Teaching and Learning
Proposed Next Review Date: May 2024
Referenced Documents: Academic Misconduct Policy; Equality, Diversity and Inclusion Policy; Assessment Strategy; AQF2 Overview of Teaching and Learning
External Reference Point(s): None

Footnotes

[1] See item 6 below.

[2] For an overview of humanics, see the introduction of Northeastern University President Aoun’s Robot-Proof: Higher Education in the Age of Artificial Intelligence (MIT Press, 2017), pp. xviii–xix.

[3] See the map of the “Research Integrity Landscape in the UK” produced by the UK Research Integrity Office (UKRIO): https://ukrio.org/about-us/research-integrity-landscape-in-the-uk/.