Podcast: Broader technical landscape and key players in Generative AI – Episode 4 (20 mins)

Teaching Matters: Episode 4 - Generative AI Podcast

The fourth and final episode of the Generative AI podcast series↗️ features James Stewart, a lecturer in Science, Technology and Innovation Studies↗️ at The University of Edinburgh. This episode provides a comprehensive exploration of Generative AI, discussing its influence not only in academia but also across industries, and delving into the key players in this dynamic landscape.


Academic integrity and Generative AI

James emphasizes the need for proactive engagement with AI technologies rather than viewing them as problems to be overcome.

….as soon as we start getting things like students writing essays using chatGPT then it’s like, Oh no, we have to do something about it. So like with most technologies, most people are actually forced into thinking about it rather unwillingly and don’t come very well equipped with the how to think about it and see it more as a problem to be overcome rather than opportunity to be grasped.

James shares insights from his experience as a lecturer and the challenges posed by AI-generated content in student essays. He discusses why a student might resort to paid essay-writing services or submit AI-generated essays, and highlights the need for us to work closely with students in mitigating these challenges.

Gen AI as a personal tutor

Rather than using it as a writing assistant, James discusses a helpful use case: using this technology to help us understand what we're reading by entering into a dialogue with it.

You can upload a paper to ChatGPT and then say, I want you to ask me some questions about this paper, then you will tell me whether my answer is good and you will help me answer questions. So it’s a much more interactive way, the same as I would do as a teacher sitting down with a student talking about the paper. Actually, these language models are good at that.

Broader technical landscape – key players and value chain

The conversation delves into the broader technical landscape of AI, exploring the value chain, key players in the field, and the development of various AI models. James highlights the diverse origins of these models and the emerging industries related to data collection, cleaning, and fine-tuning. He talks about the increasing scope these technologies provide for diverse yet privileged researchers from across the world.

We often have this talk about how these are just produced by kind of privileged white men [people]. Well, actually, they’re probably produced by privileged men [people] of any colour, in any ethnicity, and a lot of different political situations.

He elaborates on the technical landscape of AI highlighting some of the key players:

  •  Those who bring in the data to train these models

…people put together whatever they can collect, normally from the Internet, and they can be collections of images… of pirated books and the sorts of things that you can download from a website… anything, almost anything that’s got text or images, so that you can train a text or an image model. The same thing with music, and in a research context… some of the main research datasets that have been used to train these early models actually are licensed (if they actually really got a license) as research-only, but they’re already being put into commercialized models.

  •  Those who are in the world of publishing data

James talks about organisations such as Getty Photographic Library and social platforms like Twitter and Reddit that are grappling with the unauthorized use of their data in AI models, prompting them to take measures such as restricting access to their databases to regain control and explore potential monetization or independent model development.

  •  Those who are cleaning the data and fine-tuning

we have a lot of companies now that are working with the core models in order to try and produce these so-called fine-tuned models. And there’s a lot of science going in about when you fine-tune it very much to do one task. It is going to be absolutely terrible at any other task. And how do you define the limits and how do you get people to make sure that they don’t use it for the wrong thing?

James touches on the ethical concerns surrounding the use of copyrighted material for training AI models and the ongoing debates about rights and permissions. He cites legal battles of artists arguing that their works have been used to train models without consent. On the flip side, once a model is trained, decisions arise about whether to release rights or keep them proprietary. The complex intersection of AI, artistic rights, and commercial interests is sparking debates about the ethical use of these technologies.

James highlights the need for collaboration as companies and organizations seek to incorporate AI applications into everyday operations, creating diverse career opportunities beyond major corporations.

Take home messages

Vasileios:

We need more of these types of conversations. They’re very much needed, as the content of what we have said indicates. I think today’s conversation highlights the need for more social science understandings of the technological content… and we need more people who are sensitized to look into broader social issues, to treat these technologies not as the illnesses of a given social context, but as the symptoms of existing social illnesses, I think. That’s something that, for me, underpins most of the comments that we have heard…

Irene:

We’ve covered many topics – the commercial and industrial side through to the academic side – and a few concerns, from both the educators’ perspective and the students’ perspective, about the use of these AI tools. Well, I think many of us have mentioned the double-edged-sword feature of AI software… and from a student perspective, I think it’s important to see AI software as a supplement rather than a substitute.

Lara:

The fact that these technologies are sort of parrots. As one famous paper states, they reproduce biases from an inherently biased world, because that’s what we have. So if we want them to function differently, then it’s up to us to introduce additional biases within these models so that they can function in a less toxic manner, based on whichever context we want them to function in… and this could happen in academia… this could happen in government settings. But really, it’s up to us to understand and decide how we want to change the functionality of this technology and what data we want to introduce.

We would like to thank our amazing guests of this series, Vasileios, James, Lara and Irene. Thank you for bringing this important discussion to the table and discussing this topic from varied perspectives. This episode marks the end of this podcast series on Generative AI. I’m sure this will be an ongoing conversation, and we’d be delighted for you to join these conversations. Reach out to Teaching Matters (teachingmatters@ed.ac.uk). Stay tuned!

Timestamps:

(2:02) Academic integrity and Generative AI

(4:47) Gen AI as a personal tutor

(6:03) Broader technical landscape – key players and value chain

(17:09) Series conclusion: Take home messages

Transcript of this episode↗️


James Stewart

James Stewart is a lecturer in Science, Technology and Innovation Studies, and part of the Edinburgh Living Lab↗️ team. You can follow him on Twitter @jamesks↗️ and @edilivinglab↗️.


Vasileios Galanos

Vasileios Galanos (it/ve/vem) is a Teaching Fellow in Science, Technology and Innovation Studies at the School of Social and Political Science, University of Edinburgh and Associate Editor of the journal, Technology Analysis and Strategic Management. Vasileios researches and publishes on the interplay of expectations and expertise in the development of AI, robotics, and internet technologies, with further interests in cybernetics, media theory, invented religions, oriental and continental philosophy, community-led initiatives, and art. Vasileios is also a book, vinyl, beer cap, and mouth harp collector – using the latter instrument to invite students back from class break.
Twitter handle: @fractaloidconvo


Lara Dalmolin

Lara is a second-year PhD student in Science, Technology and Innovation Studies at The University of Edinburgh. She is also part of the social data science research cluster at the University of Copenhagen. Her research interests lie at the intersection of language, AI and gender, asking how all of these things come together and interplay. Her research project specifically aims to integrate intersectional, and especially queer, perspectives in large language models.


Irene Xi

Irene Xi is a postgraduate student currently undertaking the Sociology and Global Change course at The University of Edinburgh. She earned a Bachelor’s degree in Communication from Monash University. She is from China and is enthusiastic about AI and online technologies.


Episode produced and edited by:

Sylvia Joshua Western

Sylvia is currently doing her PhD in Clinical Education at The University of Edinburgh and has a Master’s degree in Clinical Education. Her PhD research explores test-wise behaviours in the Objective Structured Clinical Examination (OSCE) context. Coming from a dental background, she enjoys learning about and researching clinical assessments. She works part-time as a PhD intern at Teaching Matters, the University’s largest blog and podcast platform, through the Employ.ed scheme at the Institute for Academic Development.


Joséphine Foucher

Joséphine is doing a PhD in Sociology at The University of Edinburgh. Her research looks at the intersection between art and politics in contemporary Cuba. She supports Jenny Scoles as the Teaching Matters Co-Editor and Student Engagement Officer through the PhD Intern scheme at the Institute for Academic Development.
