The debate over the use of artificial intelligence (AI) in students' coursework is a heated one. Some camps want to ban it entirely, while others champion the technology, believing it enhances creativity and performance.

Those who foresee the worst-case scenario expect a future in which human-generated and computer-generated text are indistinguishable, essay and prose assessments become meaningless, and students and teachers alike lose the skill of academic writing.

University policies have had to evolve with the growth of AI, which raises many legitimate concerns for academic integrity. But could technology also provide much-needed checks and balances when assessing students' work? Let's take a look.

AI and its uses

Technology like ChatGPT is essentially an information aggregator. It trawls vast quantities of human-produced text and extracts data, which it then synthesises into a response to a given prompt or query.

At first glance, ChatGPT’s responses on many topics appear coherent enough that they may be superficially indistinguishable from student writing.

But some gaps show up.

Its lack of data on current events, for example, limits its capacity to respond to prompts such as “How is the war in Ukraine progressing today?” There are also guardrails set to prevent it from being used for criminal activities, though cybercrime groups are already seeking to circumvent these.

ChatGPT is also limited in its ability to source information correctly: requests for a bibliography tend to produce false or irrelevant citations. Nor is it error-free in general — users have found comical or blatant errors even when ChatGPT is asked a question as straightforward as "How do you work?"

Like all AI, it is subject to the biases of those who supply its data. And, most fundamentally, it is not true artificial intelligence. There is no indication that ChatGPT understands the questions it is asked or the answers it produces. In other words, it is not sapient: it cannot exercise judgement. This is worth keeping in mind as the debate rages over whether such tools are capable of replacing human creativity.

So, essentially, assessors, teachers and educators can pick out these gaps when evaluating a student's skill — checking for logic, comprehension and the tone of the writing.

Playing AI at its own game

Talk is rife about GPTZero, which claims to differentiate human and computer-generated text with a high degree of accuracy. Meanwhile, plagiarism-detection services such as Turnitin offer their own AI-detection tools and are working rapidly to keep pace with the newest generation of generative AI text.

Finally, and perhaps most importantly, the makers of ChatGPT are exploring "watermarking" technology to indicate when a document has been generated by the software.

Universities can also deploy AI-based detection software to determine whether students have used AI in their essays or coursework. Where evidence is found, it can support investigations into allegations of plagiarism or cheating and any resulting disciplinary action.

But we must keep in mind that students can generate the content of their work with AI and then simply paraphrase or rephrase the text in ways that make it undetectable and untraceable to such technologies.

Perhaps the solution to AI-enabled cheating lies in looking to the recent past for guidance.

For example, some universities across the UK already require students to present their work orally, where they are tested on their knowledge and understanding of the subject.

This approach is thought to be a successful way of ensuring that students have done their own research and understand the material. However, it is not foolproof: students can still cheat by using ChatGPT or similar tools to create condensed summaries of their research.

Most questions asked during oral presentations or "vivas" can then be answered from these shortened summaries. So while vivas may be a step in the right direction, they are still not enough on their own to ensure that students don't use ChatGPT to cheat.

Best of both worlds

Ultimately, the best solution is a combination of human and non-human interventions: oral presentations, written examinations or group work, alongside up-to-date technology to detect cheating. All of this, of course, should be accompanied by constant reinforcement of a strong work ethic and good study habits among students.

Perhaps the era of artificial intelligence and emerging technologies is a clear reminder that teaching methods in the educational system need to improve considerably and evolve with the times.

Universities can achieve this by providing students with improved study groups, tutors, and online courses that help them understand the material and develop their skills.