SRHE Blog

The Society for Research into Higher Education

For meta or for worse…


by Paul Temple

Remember the Metaverse? Oh, come on, you must remember it: just think back a year or eighteen months, it was everywhere! Mark Zuckerberg’s new big thing, ads everywhere about how it was going to transform, well, everything! I particularly liked the ad showing a school group virtually visiting the Forum in ancient Rome via the Metaverse, which was apparently going to transform their understanding of the classical world. Well, that’s what $36bn (yes, that’s billion) buys you. Accenture were big fans back then, displaying all the wide-eyed credulity expected of a global consultancy firm when they reported in January 2023 that “Growing consumer and business interest in the Metaverse [is] expected to fuel [a] trillion dollar opportunity for commerce, Accenture finds”.

It was a little difficult, though, to find actual uses of the Metaverse, as opposed to vague speculations about its future benefits, on the Accenture website. True, they’d used it in 2022 to prepare a presentation for Tuvalu for COP27; and they’d created a virtual “Global Collaboration Village” for the 2023 Davos get-together; and we mustn’t overlook the creation of the ChangiVerse, “where visitors can access a range of fun-filled activities and social experiences” while waiting for delayed flights at Singapore’s Changi airport. So all good. Now, you can tell me that I don’t understand global business finance, but I’d still be surprised if these and comparable projects added up to a trillion dollars.

But of course that was then, in the far-off days of 2023. In 2024, we’re now in the thrilling new world of AI, do keep up! Accenture can now see that “AI is accelerating into a mega-trend, transforming industries, companies and the way we live and work…better positioned to reinvent, compete and achieve new levels of performance.” As I recall, this is pretty much what the Metaverse was promising, but never mind. Possible negative effects of AI? Sorry, how do you mean, “negative”?

It’s often been observed that every development in communications and information technology – radio, TV, computers, the internet – has produced assertions that the new technology means that the university as understood hitherto is finished. Amazon is already offering a dozen or so books published in the last six months on the impact of the various forms of AI on education, which, to go by the summaries provided, mostly seem to present it in terms of the good, the bad, and the ugly. I couldn’t spot an “end of the university as we know it” offering, but one is bound to be along soon.

You’ve probably played around with ChatGPT – perhaps you were one of its 100 million users logging on within two months of its release – maybe to see how students (or you) might use it. I found it impressive, not least because of its speed, but at the same time rather ordinary: neat B-grade summaries of topics of the kind you might produce after skimming the intro sections of a few standard texts but, honestly, nothing very interesting. Microsoft is starting to include ChatGPT in its Office products; so you might, say, ask it to list the action points from the course committee minutes over the last year, based on the Word files it has access to. In other words, to get it to undertake, quickly and accurately, a task that would be straightforward yet tedious for a person: a nice feature, but hardly transformative. (By the way, have you tried giving ChatGPT some text it produced and asking where it came from? It said to me, in essence, I don’t remember doing this, but I suppose I might have: it had an oddly evasive feel.)
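To make that minutes example a little more concrete, here is a minimal sketch of the sort of plumbing involved, assuming the openai and python-docx Python packages and a hypothetical folder of minutes files; it illustrates the general idea of handing documents to a language model with an instruction, not how Microsoft’s Office integration actually works.

    # Illustrative sketch only: pull the action points out of a year's course
    # committee minutes stored as Word files, using a chat model.
    # Assumes the python-docx and openai packages are installed and an
    # OPENAI_API_KEY is set; the folder and model names are hypothetical choices.
    from pathlib import Path

    from docx import Document
    from openai import OpenAI

    def minutes_text(folder: str) -> str:
        """Concatenate the text of every .docx file in the folder."""
        chunks = []
        for path in sorted(Path(folder).glob("*.docx")):
            doc = Document(path)
            chunks.append("\n".join(p.text for p in doc.paragraphs))
        return "\n\n".join(chunks)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "List the action points from these committee minutes, "
                        "with owners and dates where they are given."},
            {"role": "user", "content": minutes_text("course_committee_minutes")},
        ],
    )

    print(response.choices[0].message.content)

Nothing here is clever: the work is in gathering the text and asking a clear question, which is rather the point – a useful clerical assistant, not a transformation of teaching.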

So will AI transform the way teaching and learning works in higher education? A recent paper by Strzelecki (2023), reporting on an empirical study of the use of ChatGPT by Polish university students, notes both the potential benefits if it can be carefully integrated into normal teaching methods – creating material tailored to individuals’ learning needs, for example – and the obvious ethical problems that will inevitably arise. If students are able to use AI to produce work which they pass off as their own, it seems to me that that is an indictment of under-resourced, poorly-managed higher education which doesn’t allow a proper engagement between teachers and students, rather than a criticism of AI as such. Plagiarism in work that I marked really annoyed me, because the student was taking the course team for fools, assuming our knowledge of the topic was as limited as theirs. (OK, there may have been some very sophisticated plagiarism which I missed, but I doubt it: a sophisticated plagiarist is usually a contradiction in terms.)

The 2024 Consumer Electronics Show (CES), held in Las Vegas in January 2024, was all about AI. Last year it was all about the Metaverse; this year, although the Metaverse got a mention, it seemed to rank in terms of interest well below the AI-enabled cat flap on display – it stops puss coming in if it’s got a mouse in its jaws – which I’m guessing cost rather less than $36bn to develop. I’ve put my name down for one.

Dr Paul Temple is Honorary Associate Professor in the Centre for Higher Education Studies, UCL Institute of Education.


One thought on “For meta or for worse…”

  1. I have an office in a university computing school in the engineering college, so I can see the future being invented around me years, in some cases decades, before the public does. But for most of the things researchers stop in the corridor to show me, I respond with “Clever, but what is it for?” Virtual Reality was one of those; AI is not.

    Generative AI is not the end of the university, but it is going to change the university far faster than radio, TV, computers, or the internet did. AI has the potential to surprise us.
