SRHE Blog

The Society for Research into Higher Education

What do artificial intelligence systems mean for academic practice?

by Mary Davis

I attended and presented at the SRHE Roundtable event ‘What do artificial intelligence systems mean for academic practice?’ on 19 July 2023. The roundtable brought together a wide range of perspectives on artificial intelligence: philosophical questions, problematic results, ethical considerations, the changing face of assessment, and practical engagement for learning and teaching. The speakers represented a range of UK HEI contexts, as well as Australia and Spain, and a variety of professional roles, including academic integrity leads, lecturers from different disciplines and emeritus professors.

The day began with Ron Barnett’s fierce defence of the value of authorship and his concerns about what it means to be a writer in a chatbot world. Ron argued that the use of AI tools can lead to an erosion of trust: the essential trust relationship between writer and reader in HE, and in wider social contexts such as law, may disintegrate, and with it, society. Ron reminded us of the pain and struggle of writing and of creating the authorial voice that is necessary for human writing. He urged us to think about frameworks of learning such as ‘deep learning’ (Ramsden), agency and internal story-making (Archer) and his own ‘Will to Learn’, all of which could be lost. His arguments challenged us to reflect on the far-reaching social consequences of AI use and opened the day of debate very powerfully.

I then presented the advice I have been giving to students at my institution, based on my analysis of student declarations of AI use, which I had categorised using a traffic light system: appropriate use (eg checking and fixing a text before submission); at-risk use (eg paraphrasing and summarising); and inappropriate use (eg using assignment briefs as prompts and submitting the output as the student’s own work). I received helpful feedback from the audience that the traffic lights provided useful navigation for students. Coincidentally, the next speaker, Angela Brew, also used a traffic light system to guide students with AI. She argued that we need to help students develop a scholarly mindset, and that staff should stop teaching as if in the 18th century, with universities as foundations of knowledge. Instead, she proposed that everyone at university should be a discoverer, a learner and a producer of knowledge, as a response to AI use.

Stergios Aidinlis provided an intriguing insight into the practical use of AI as part of a law degree. In his view, generative AI can be an opportunity to make assessment fit for purpose. He presented a three-stage model of learning with AI: in stage 1, students use AI to produce a project pre-mortem to tackle a legal problem as pre-class preparation; in stage 2, they use AI as a mentor to help them solve a legal problem in class; and in stage 3, they use AI to evaluate the technology after class. Stergios recommended Mollick and Mollick (2023) for ideas to help students learn to use AI. His presentation stood out in terms of practical ideas and made me think about the availability of suitable AI tools so that all students are able to do tasks like this.

The next session by Richard Davies, one of the roundtable convenors, took a philosophical direction in considering what a ‘student’s own work’ actually means, and how we assess a student’s contribution. David Boud returned the theme to assessment and argued that three elements are always necessary: assuring learning outcomes have been met (summative assessment), enabling students to use information to aid learning (formative assessment) and building students’ capacity to evaluate their learning (sustainable assessment). He argued for a major re-design of assessment, that still incorporates these elements but avoids tasks that are no longer viable.

Liz Newton presented guidance for students which emphasised positive ways to use AI, such as using it for planning or teaching, which concurred with my session. Maria Burke argued for ethical approaches to the use of AI that incorporate transparency, accountability, fairness and regulation, and that promote critical thinking in an AI context. Finally, Tania Alonso presented her ChatGPTeaching project, with seven student rules for the use of ChatGPT, such as proposing its use only in areas of the student’s own knowledge.

The roundtable discussion was lively and our varied perspectives and experiences added a lot to the debate; I believe we all came away with new insights and ideas. I especially appreciated the opportunity to look at AI from practical and philosophical viewpoints. I am looking forward to the ongoing sessions and forum discussions. Thanks very much to SRHE for organising this event.

Dr Mary Davis is Academic Integrity Lead and Principal Lecturer (Education and Student Experience) at Oxford Brookes University. She has been a researcher of academic integrity since 2005 and has carried out extensive research on plagiarism, the use of text-matching tools, the development of source use, proofreading and educational responses to academic conduct issues; her recent research focuses on inclusion in academic integrity. She is on the Board of Directors of the International Center for Academic Integrity and co-chair of the International Day of Action for Academic Integrity.
