Generative artificial intelligence is a growing branch of AI that generates new and original content. ChatGPT, the most talked-about generative AI tool today, is a language model created by OpenAI that uses algorithms to produce human-like answers to user questions.
Generative AI & ChatGPT popularity
Since its advent, people have incorporated ChatGPT into chatbots and virtual assistants to offer customised and interactive user experiences.
People have also used ChatGPT to create content like letters, emails, reports, essays, poems, jokes, articles, song lyrics and so on. Many people have been mesmerised by the convincing and realistic answers that ChatGPT produces in a very short period of time.
Bill Gates, the billionaire philanthropist and co-founder of Microsoft, recently emphasised the significance of advancements in AI, referring to it as
"the most important" innovation of our time. He added,
“this will change our world”.
ChatGPT became available to the public on 30 November 2022, and attracted one million users in only five days.
Five ways generative AI can disrupt universities’ business
As an industry heavily dependent on the production and assessment of textual content, higher education will be significantly affected by Generative AI tools like ChatGPT.
Below are the five main ways in which universities are going to be challenged by Generative AI:
- Students can use Generative AI to complete assessments such as written assignments, reports, essays, projects, multiple-choice quizzes, online exams, presentation slides and so on. This puts academic integrity at high risk. Graduates who have passed their subjects by completing their assessments with Generative AI can significantly lower the quality and competence of university graduates. This not only damages the image of universities but also places incompetent professionals in various roles in society.
- Our research shows that text produced by Generative AI is not plagiarism-free. Generative AI draws on giant datasets to generate answers to user questions, and the text produced can be plagiarised to varying extents. Thus, students using Generative AI to complete their assessments, without receiving proper training on it, means a significant increase in misconduct cases in universities.
- Research students can use Generative AI to choose a research topic, formulate their research, design a methodology, collect and analyse data, produce findings, write a thesis and so on. This puts research integrity at very high risk, especially at present, three months after the introduction of ChatGPT, when there is a lack of established software capable of detecting content produced by Generative AI. University academics and research staff can misuse Generative AI to conduct research in the same way as the research students described above. This can lead to a significant growth in cases of academic misconduct, which negatively impacts universities’ reputations.
- Academics can use Generative AI to grade and provide feedback on student assignments and exams, yet no research to date has established the accuracy and legitimacy of grades and feedback produced by Generative AI. In the absence of proper training from their universities, academics are likely to use Generative AI to evaluate student assessments, leading to unfair evaluations and student complaints.
- The adoption of Generative AI by staff and students can create cybersecurity risks such as hacking, threats to data privacy and data breaches. Universities need to address this seriously in order to protect their data and systems.
Four recommendations for universities
The question here is: what should universities do to minimise the risks Generative AI can pose to them? Should they ban the use of Generative AI? The answer to the latter question is certainly no!
Even if universities limit access to generative AI on the university network, staff and students can still use these platforms on their own networks.
To minimise the risks of Generative AI, universities are advised, firstly, to offer proper training to staff and students on the ethical and responsible use of Generative AI.
Secondly, to form a working group to create the required policies and procedures on the ethical and responsible use of Generative AI.
Next, to keep up with Generative AI news, stay current with the constant changes happening in this space, and look for software that can reliably detect AI-produced content.
And lastly, to form a working group to deal with the technical challenges posed by Generative AI, such as threats to privacy and data breaches.