Generative Artificial Intelligence – ban or embrace?

Image credit: Tim Drysdale, created using Midjourney, prompt: friendly robot standing between a happy child and an angry man –ar 16:9

In this post, Tim and Adam discuss the threats and opportunities that generative artificial intelligence tools can present in Higher Education. Highlighting an urgent need for artificial intelligence literacy, Prof Tim Drysdale and Prof Adam A. Stokes from the School of Engineering signpost resources and share their experience of how this can be a fascinating step change in teaching and learning. This post belongs to the Hot topic series: Moving forward with ChatGPT.


Reaction to ChatGPT has been polarised. Microsoft has enthusiastically packaged GPT-4 into Office 365, while some EU states have acted to ban it. Universities are split on whether to embrace it or treat its use as academic misconduct. University policies, including our own, are likely to evolve as further developments occur, both in the technology and in our understanding of it. So, what are the factors that might influence whether we ought to ban it or embrace it at The University of Edinburgh?

You can’t restrict something if you can’t control access

ChatGPT is just one of a number of generative artificial intelligence tools. Italy’s temporary ban on ChatGPT was a specific objection to suspected violations of General Data Protection Regulation laws, not a blanket ban on generative artificial intelligence. The CEO of OpenAI has stated that growing ever-larger language models is no longer the optimum direction for the field – apparently we have reached the peak size of language models. Now, the next step is diversification into more specialised tools. An example of this is ChatLlama, an open source version of GPT that is intended to help users create customised assistants. There is no credible means of banning access to generative artificial intelligence – even if we could switch off Co-pilot for our Office365 subscription, and even if the UK government banned OpenAI from operating here, there are other commercial providers of GPT-like models, and the source code is available to make an infinite number of personalised assistants (as the sketch below illustrates). Restricting access is simply not credible.
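To make the point concrete, the sketch below shows roughly how little is needed to stand up a self-hosted assistant on an open-source model, here using the Hugging Face transformers library. The model name and persona text are purely illustrative assumptions, not a recommendation; the point is that nothing in it depends on any provider that could be banned or switched off.

```python
# A minimal sketch of a "personalised assistant" built on an open-source
# language model via the Hugging Face transformers library. The model name
# ("gpt2") and the persona text are illustrative assumptions only.
from transformers import pipeline

# The model is downloaded once and then runs entirely on the user's own
# machine; no account, subscription, or institutional licence is involved.
generator = pipeline("text-generation", model="gpt2")

persona = "You are a patient tutor for first-year engineering students. "
question = "Explain Ohm's law in one paragraph."

result = generator(persona + question, max_new_tokens=120)
print(result[0]["generated_text"])
```

Swapping in a larger or more capable open-source model is a one-line change, which is exactly why restricting access at the level of a single product or provider cannot work.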

You can’t enforce a ban on something you can’t detect

Traditional plagiarism detection tools are problematic for a number of reasons [1], but at least they are known to deliver their intended function of comparing submitted work against existing work. Unfortunately, the same cannot be said of tools for detecting the output of large language models. One as-yet un-peer-reviewed study claims that as the distribution of words produced by large language models approaches that of natural human output, detection reliability falls to 50%, i.e. a coin-flip (the toy simulation below illustrates why). Approaches such as watermarking are not a solution either, because adversarial humans could infer the watermarks, add them to human-generated text, and thereby damage the reputation of those who develop or use the detection mechanisms. While far-fetched, it remains theoretically possible that an activist-minded student could take this approach, be accused of plagiarism, and win a court case against a University by showing an audit trail of how they, as a human, generated the text. Meanwhile, a number of other students using AI to generate their essays would go undetected. This does not make for an even playing field, because students choosing to act with integrity would be at a clear disadvantage – a situation similar to the various doping scandals in professional sport, where choosing not to dope was effectively choosing not to win.
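To illustrate the statistical intuition, here is a toy simulation of our own (not drawn from the cited study): a likelihood-ratio detector distinguishes machine text from human text easily while their word distributions differ, but its accuracy collapses to a coin-flip as the machine distribution converges on the human one. The vocabulary and frequencies are invented for illustration.

```python
# Toy simulation: detector accuracy falls to 50% as the machine word
# distribution (controlled by alpha) approaches the human distribution.
import math
import random

random.seed(0)
vocab = ["the", "of", "and", "to", "in"]
human = [0.30, 0.25, 0.20, 0.15, 0.10]  # invented "human" word frequencies

def machine_dist(alpha):
    """alpha=0: very different from human; alpha=1: identical to human."""
    skewed = [0.10, 0.15, 0.20, 0.25, 0.30]
    return [(1 - alpha) * s + alpha * h for s, h in zip(skewed, human)]

def says_machine(text, machine):
    # Likelihood-ratio test: is the text more probable under the machine
    # distribution than under the human one?
    llr = sum(math.log(machine[vocab.index(w)] / human[vocab.index(w)]) for w in text)
    return llr > 0

for alpha in [0.0, 0.5, 0.9, 1.0]:
    machine = machine_dist(alpha)
    correct = 0
    trials = 2000
    for _ in range(trials):
        if random.random() < 0.5:  # half the texts are machine-generated
            text = random.choices(vocab, weights=machine, k=50)
            correct += says_machine(text, machine)
        else:
            text = random.choices(vocab, weights=human, k=50)
            correct += not says_machine(text, machine)
    print(f"alpha={alpha:.1f}  detection accuracy={correct / trials:.1%}")
```

When alpha reaches 1.0 the two distributions are identical, every log-likelihood ratio is zero, and no detector, however sophisticated, can do better than chance.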

It’s worth embracing things that help us better reflect the world of work

It’s probably no surprise that graduates value the activities that most closely prepare them for the world of work [2]. A common denominator in the world of work is the use of Microsoft Office. We use it on campus, and apparently over 330,000 companies use it in the UK. Microsoft has bundled GPT-4 into Office365 and fused it with organisational data to create the “Co-pilot” tool, which can generate business-specific responses to prompts. We already know GPT-4 can produce realistic but spurious claims that are legally damaging, such as generating a fake citation to a non-existent Washington Post newspaper article to support the claim that a real professor had sexually harassed someone (on a trip they never took, at a School they never taught at). Imagine the legal consequences if a new graduate sends out a Co-pilot-generated response that seems plausible to them, but that invites legal jeopardy because it is, in fact, incorrect. This suggests an urgent need for artificial intelligence literacy in higher education [3]. We argue that this literacy cannot be achieved while simultaneously banning the use of artificial intelligence, because researchers have found that ethics training does not translate into other courses unless it is embedded [4]. Staff could potentially use AI in marking, for example with GRAIDE from Imperial or with GPT itself, and that will require understanding from all stakeholders involved, including students and institutions.

How do we embrace GPT going forward?

The three main points in this article have focussed on threats: control, enforcement, and the disconnect between education and practice. The opportunities presented by large language models represent a fascinating step-change both for teachers and students, but they challenge current pedagogical practice.

As an example, in one specific course that I (Prof Stokes) taught this year, we embraced ChatGPT: I would pose the tutorial questions directly to ChatGPT, live, in front of the class. The AI was capable of taking in a question posed in natural language, parsing the relevant points, finding the correct background materials and equations, solving the equations, and explaining each step along the way – i.e. it was able to perform much of the work that I would normally have done as a teaching academic, leaving me either to act simply as a text-to-speech module, or to add more value to the class than simply solving equations. For the rest of the lecture course I changed my tutorial style to add value over and above that which was afforded by ChatGPT: I was able to spend time drawing diagrams, discussing papers from the literature, hosting whole-class discussions, and so on. The use of the AI tool enabled a higher-quality teaching experience for the face-to-face component of the class.

Going forward, ChatGPT with well-designed prompts can act as a highly personalised AI tutor for each student, one tailored to their current level of understanding, whether that is surface level or deep [5], and to their awareness of cutting-edge research in their field. While there is no evidence that catering to different learning styles leads to better learning outcomes [6], [7], generative AI opens the door to experimenting in this area at much lower opportunity cost. Early developments such as the “Mr Ranedeer” prompt show enormous promise for augmenting the learning experience of students and changing the way that staff teach and assess them; a minimal sketch of a tutor-style prompt follows below. We as academics need to recognise the opportunities and “lean in” to the new technologies, otherwise we, and the sector, will be left behind.
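As a flavour of what such prompting looks like, here is a minimal sketch using the OpenAI Python SDK. The system prompt is our own illustration, not the actual “Mr Ranedeer” prompt, and the model name is an assumption; any capable chat model could be substituted.

```python
# A minimal sketch of a tutor-style system prompt, using the OpenAI Python
# SDK (assumes an API key in the OPENAI_API_KEY environment variable).
# The prompt wording is our own illustration, not the Mr Ranedeer prompt.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "You are a personal tutor for an undergraduate engineering student. "
    "Gauge the student's current level of understanding from their answers, "
    "pitch your explanations at that level, and finish each explanation "
    "with one follow-up question that probes for surface or deep learning."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any capable chat model could be used
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I don't understand why impedance is a complex number."},
    ],
)
print(response.choices[0].message.content)
```

The pedagogical design lives almost entirely in the system prompt, which is why prompt design, rather than programming, is the skill that staff and students most need to develop.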

References

[1] Beetham, H, Collier, A, Czerniewicz, L, Lamb, B, Lin, Y, Ross, J, Scott, A-M & Wilson, A (2022), Surveillance practices, risks and responses in the post pandemic university, Digital Culture and Education, vol. 14, no. 1, pp. 16-37.

[2] Wood, L. N., Psaros, J., French, E., & Lai, J. W. M. (2015), Learning experiences for the transition to professional work, Cogent Business & Management, 2:1, 1042099, DOI: 10.1080/23311975.2015.1042099.

[3] Laupichler, M. C., Aster, A., Schirch, J., & Raupach, T. (2022), Artificial intelligence literacy in higher and adult education: A scoping literature review, Computers and Education: Artificial Intelligence, 100101.

[4] Grosz, B. J., Grant, D. G., Vredenburgh, K., Behrends, J., Hu, L., Simmons, A., & Waldo, J. (2019), Embedded EthiCS: integrating ethics across CS education, Communications of the ACM, 62(8), 54-61.

[5] Thompson, A.R., Lake, L.P.O. (2023), Relationship between learning approach, Bloom’s taxonomy, and student performance in an undergraduate Human Anatomy course, Adv in Health Sci Educ, 1-16. 

[6] Nguyen, N. N., Mosier, W., Hines, J., & Garnett, W. (2022), Learning Styles Are Out of Style: Shifting to Multimodal Learning Experiences, Kappa Delta Pi Record, 58:2, 70-75.

[7] Riener, C., & Willingham, D. (2010), The Myth of Learning Styles, Change: The Magazine of Higher Learning, 42:5, 32-35.


Tim Drysdale

Professor Timothy Drysdale is the Chair of Technology Enhanced Science Education in the School of Engineering, having joined The University of Edinburgh in August 2018. Immediately prior to that he was a Senior Lecturer in Engineering at the Open University, where he was the founding director and lead developer of the £3M openEngineering Laboratory. The openEngineering Laboratory is a large-scale online laboratory offering real-time interaction with teaching equipment via the web, for undergraduate engineering students, which has attracted educational awards from the Times Higher Education (Outstanding Digital Innovation, 2017), The Guardian (Teaching Excellence, 2018), Global Online Labs Consortium (Remote Experiment Award, 2018), and National Instruments (Engineering Impact Award for Education in Europe, Middle East, Asia Region 2018). He is now developing an entirely new approach to online laboratories to support a mixture of non-traditional online practical work activities across multiple campuses. His discipline background is in electronics and electromagnetics.


Adam A. Stokes

Professor Adam A. Stokes is a Full Professor and Chair of Bioinspired Engineering in The School of Engineering at The University of Edinburgh. He holds degrees in engineering, biomedical science, and chemistry and he used this background to found The Soft Systems Group, an interdisciplinary research laboratory focusing on the intersection of next-generation robotics technology, bioelectronics, and bioinspired engineering. He is the Co-Lead of The National Robotarium, the UK centre of excellence in robotics, and Deputy Director of the Edinburgh Centre for Robotics. Before joining the faculty at Edinburgh, he was a Fellow in the George M. Whitesides group at Harvard University, one of the most innovative and entrepreneurial labs in the world. He is enthusiastic about translating innovation out of the lab and into people’s lives. His entrepreneurial activities have been recognised by winning the Inaugural Data Driven Entrepreneurship (DDE) Academic Entrepreneurship Award, and the Principal’s Award for Innovation. Outside of the academy, he is a founder of several companies and he is the Academic in Residence with Archangels Investors Ltd.
