Chatbots as a game changer for university teaching

A conversation with ChatGPT and university teachers

Since November, anyone has been able to hold a conversation with the ChatGPT chatbot. This artificial intelligence (AI) from the company OpenAI draws on digital databases and answers users’ questions in full sentences. The software is even able to compose entire texts, presenting new challenges for teaching staff. The question of how AI-based chatbots will change university teaching is tackled in an interview with ChatGPT itself, as well as with information systems specialists Professor Oliver Müller (Data Analytics) and Junior Professor Milad Mirbabaie (Digital Society) and business education specialist Professor Tobias Jenert (University Didactics and Development) from Paderborn University.

How will ChatGPT change university teaching?

ChatGPT: ChatGPT and other AI text generation systems have the potential to support and improve university teaching by helping teachers save time and resources, automatically providing answers to frequently asked questions, giving feedback on exercises and examinations, and even creating interactive tutorials. It can also help to facilitate access to education for people in remote areas or with disabilities, by enabling personalised learning and remote access to learning materials. However, there are also concerns that it could adversely affect interaction between teachers and students, and that it is important to ensure that the use of AI in teaching is ethical and responsible.

Oliver Müller: Like previous digital tools such as Google or Wikipedia, AI-based language models will undoubtedly increase the productivity of teachers and students. However, there is currently still a major question mark regarding the veracity of the texts generated by ChatGPT. They often sound superficially plausible, but are sometimes not factually correct. This presents a risk for both teachers and students.

Tobias Jenert: AI applications will definitely be able to provide good support for teaching standard content. However, I can see two key reservations: firstly, students’ trust in the teacher is a vital factor. In one study, students preferred to receive grades and text feedback from a person rather than from an AI-based tool, even though they knew that the software provided more accurate and fairer assessments than people did. Secondly, it is important that we do not limit teaching to conveying mainstream knowledge, but also tackle the controversies and contradictions that are a typical feature of research. Admittedly, greater importance also needs to be placed on these in ‘traditional’ teaching.

Milad Mirbabaie: ChatGPT is indeed heralding a new era, including in the education sector. In the future I can see chatbots also serving as collaboration partners, moderators, or a way of increasing trust within teams. However, if texts produced by future versions become impossible to distinguish from those produced by humans, the question arises of what restructuring will be required in the education sector so that students no longer need to produce ‘simple’ texts and essays.

What happens if students use ChatGPT for their homework assignments?

ChatGPT: Students can use ChatGPT or other AI text generation systems to produce homework assignments. This can help them to generate content more quickly and easily, but it can also mean that they spend less time deepening their knowledge of the topic, and that they incorporate fewer of their own thoughts and ideas into their work. However, it is important to note that using AI-generated content for homework assignments or other academic work generally violates the rules of most universities and is considered plagiarism. It can also result in the work not being appropriately assessed and can even prompt disciplinary measures. It is important that students read and understand their university’s guidelines and that they always produce work in an ethical and responsible way.

Tobias Jenert: The problem you describe is not a new one: there have long been ways of creating plagiarised work that is difficult to identify. Years ago we already had automatically generated research papers that made it through the peer review process. Ultimately, we can only fundamentally solve this problem by making examination methods so exacting that cheating becomes more difficult. However, we will never be able to completely eliminate deliberate cheating.

What student examination methods are there that AI cannot solve?

ChatGPT: Student examination methods that AI cannot solve are oral examinations, practical examinations, project work, essays and case studies. These examination methods require an understanding of the topic as well as creative and critical thinking, communication skills and practical application, which cannot be reproduced by AI.

Oliver Müller: That is a good list. And that is exactly what we are doing for master’s programmes as well. In the Information Systems department we have very few written examinations that are not accompanied by oral presentations or by the evaluation of specific project results, e.g. programs that students have developed.

Tobias Jenert: As well as examination methods, the skills required are key. If AI methods are able to operate more or less independently in standard situations in the future, students will need different skills: creativity to actually develop new solutions to complex problems, empathy to identify the interests of different stakeholders and establish genuine compromises, or logic and reasoning to develop alternative solutions and weigh them up against each other. Furthermore, discussions at universities go beyond the use of AI and also cover topics such as electronic examinations or legal certainty. We are therefore constantly seeking to balance standards, fairness, legal certainty and of course resources.

Milad Mirbabaie: In addition, we need to ensure that technologies such as this are viewed as decision-making aids or guidance. When Google and Wikipedia caught on, we were similarly critical. However, we learned that reference materials and search engines could also be viewed as a complementary addition. I would say the same about AI-based systems, which essentially (in simple terms) do little more than combine Wikipedia and Google and then reproduce the results in text form. I would therefore advocate taking a much more open approach to this topic.

Have you already seen students using ChatGPT?

Oliver Müller: I am not currently aware of any cases where ChatGPT has been actively used. However, the use of ChatGPT to generate texts for examinations violates Paderborn University’s regulations. We make students aware of this as well. From my personal experience, I can say that writing texts helps me to understand complex issues and also hugely aids my own creative thinking process. I think that anyone who has ever written a bachelor’s or master’s thesis would agree with me. We therefore need to make students aware of this from an early stage and show them the benefits of academic writing.

Milad Mirbabaie: Generations Alpha and Beta have essentially grown up with AI systems, and using such systems is increasingly becoming a reality of life, just like social media use was for Generation Z onwards. Although we have not yet seen any cases, we will not be able to prevent it from happening. The challenge will be how to firmly embed the ethical use of AI within our curriculum.

How can teachers identify texts that have been created using AI text generation systems?

ChatGPT: Teachers can identify texts created using AI text generation systems by noting indications of irregular syntax, content discrepancies or a lack of contextual references. The lack of a personal or consistent writing style throughout an extended text can also be an indicator of automated text generation. However, it is important to note that some AI text generation systems are very advanced and are able to produce human-like texts, which can make it necessary to review multiple indicators.

Oliver Müller: Simple plagiarism checkers cannot handle ChatGPT. Its plagiarism is rarely word for word – instead, it generally produces its own texts, like a ghostwriter. However, OpenAI (the company behind ChatGPT) has announced that a future version of ChatGPT will contain ‘watermarks’ that are invisible to humans: statistical patterns in word choice that make it possible to expose texts generated by the service.
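OpenAI has not disclosed how these watermarks will work. A minimal sketch of one approach discussed in the research literature – a pseudorandom ‘green list’ of favoured words, along the lines of Kirchenbauer et al.’s proposal – might look as follows. All names here are hypothetical, and the tokenisation and vocabulary handling are deliberately simplified:

```python
import hashlib
import math

def green_list(prev_word: str, vocab: list[str]) -> set[str]:
    """Pseudorandomly mark roughly half the vocabulary as 'green',
    seeded by the previous word. A watermarking generator would
    quietly favour these words while producing text."""
    return {
        word for word in vocab
        if hashlib.sha256(f"{prev_word}:{word}".encode()).digest()[0] % 2 == 0
    }

def watermark_z_score(words: list[str], vocab: list[str]) -> float:
    """Compare the observed share of 'green' words with the 50%
    expected by chance. A large positive z-score suggests watermarked
    (machine-generated) text; ordinary human text scores near zero."""
    n = len(words) - 1  # number of word-to-word transitions
    if n < 1:
        return 0.0
    hits = sum(
        1 for prev, cur in zip(words, words[1:])
        if cur in green_list(prev, vocab)
    )
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

The appeal of such a scheme is that a detector only needs the secret seeding rule, not the model itself: on longer texts, even a slight preference for ‘green’ words accumulates into a statistically unmistakable signal.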

Milad Mirbabaie: I wonder if it is really worth investing so much effort in detection. Would we not be better off promoting discourse and critical thinking, and taking a much more open and transparent approach to technology? I think that we should talk to our students about it a lot more and stimulate discussion. I am also a huge fan of oral final exams. In final papers, we can avoid plagiarism by getting together with our students more often to discuss and jointly reflect on their substantive findings. Future versions of ChatGPT could even help with this and illustrate the current state of research. This would save us hundreds of hours spent on research, allowing us to work in a much more target-oriented way.

How can ethically responsible use of AI be incorporated into teaching?

Milad Mirbabaie: My classes already tackle the issues of ethics and AI, and the impact that artificial intelligence has on our society. It is not a new topic. Joseph Weizenbaum was examining this as early as the 1960s, when his ELIZA program convinced some users that they were talking to a human. Even then, AI was able to hold seemingly human-like conversations. ChatGPT is therefore just old wine in new bottles: it is simply now accessible to everyone, and the data situation is different. The large quantity of training data means that the system is more precise and comprehensive when answering questions. I will continue to introduce students to the topics of explainability and responsible AI, and also explain how ChatGPT works.
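For comparison, ELIZA’s entire conversational ability rested on simple pattern matching. A minimal sketch of the principle – the rules below are invented for illustration, and Weizenbaum’s original DOCTOR script was far more elaborate:

```python
import re

# A few ELIZA-style rules: a regular expression paired with a response template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please tell me more."

def eliza_reply(utterance: str) -> str:
    """Return a canned response by pattern matching alone --
    no understanding of meaning is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return FALLBACK

print(eliza_reply("I am worried about AI in teaching"))
# -> How long have you been worried about AI in teaching?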

Oliver Müller: For a few years now, all of our information systems courses have tackled the topic of AI. We examine it from various angles – technical, economic, social. We teach our students not to be passive AI users, but rather to build their own AI tools and actively help to shape the technology.

If AI will soon be able to write better texts and provide better answers than humans, what will students learn in the future?

ChatGPT: It is likely that in the future, AI will be able to operate as well as humans in many sectors, including writing texts and answering questions. However, this will not mean that there will be no need for human knowledge and skills. Students should focus on skills that are difficult or impossible for AI to automate, such as critical analysis, creative thinking and complex problem-solving. It is also important for students to learn to work with AI systems and to understand how they operate.

Oliver Müller: I agree with that statement. We need to find a good division of labour between humans and AI. However, this is nothing new. The division of labour between humans and animals, and between humans and machines, has been continually adjusted and redefined throughout human history. The same will happen again in years to come. Students do not need to worry about being replaced by AI – just looking at the current skills shortage makes this clear. I am convinced that although AI will be able to take over some tasks in the future, this could benefit all of us.

Milad Mirbabaie: Systems like this still function as a kind of black box. That will change in the future. For example, students will need to be able to modify such AI-based systems by adjusting their source code or supplying them with additional training data. Students’ work should also involve understanding complex issues; building on this, it is important for them to develop analytical thinking skills and to solve problems in a solution-oriented and team-oriented way. Technology will not be able to do this. We as a society are not standing still either, but are continuing to develop. Technology catapults us to the next level. Now we have to learn about this level and its impact, and how to protect ourselves against its dangers in advance.

Tobias Jenert: Ultimately it is not about the texts themselves, but about the problems and tasks underlying them. Take the example of AI in goods management: the AI can complete standard tasks such as inventory control, demand forecasting and purchasing fantastically well. So what will make a good retailer in the future? Creativity to tap new customer groups and expand their business model. Analytical skills to identify when AI will be unable to predict something, e.g. because there has never been a pandemic before for the AI to learn from, or if something implausible happens. We will need to address these skills even more strongly in the future. However, we are continually updating our courses of study, so I feel that we are on a good path in this regard.

Have you already started using AI to prepare for or improve your teaching activities?

Milad Mirbabaie: Not for preparation or improvement, but yes as a core topic of teaching. This might involve having students develop an AI-based system to improve communications with customers, for example, or determine the requirements for AI-based systems in hospitals via expert interviews. However, we are currently submitting several applications for third-party-funded projects tackling exactly this, including the development and use of an AI system to improve teaching or support students.

Tobias Jenert: The topic has enjoyed huge attention in recent years, including as regards university teaching and teacher training. There are some major funding programmes devoted to this. Paderborn University has an initiative to promote students’ ‘digital literacy’. In my view, discussions can sometimes tend too far towards the technological – AI and other digital technologies also have an impact in areas where technologies are not directly used. So even if I do not want to work at a company that is shaped by technology, I still need to deal with the phenomenon – and possess the associated skills.

Photo (Paderborn University): Artificial intelligence such as the ChatGPT chatbot creates both opportunities and risks for university teaching.
