ChatGPT: A Tool That Will Forever Change Higher Education


ChatGPT has taken the world by storm. Its easy-to-use, non-technical interface has put the power of generative AI into the hands of students, faculty, and staff at Higher Education Institutions (HEIs) around the world. It has also made that same technology accessible to those with malicious intent. ChatGPT’s breakneck pace of development makes long-term projections about this technology’s effect on any sector of society hard to determine, and that is just as true of its effect on higher education. ChatGPT and AI appear to be at a crucial fork in the road: the policies enacted now will set the tone for AI in HEIs well into the future. ChatGPT carries both advantages and risks for any institution’s cyber security and for the quality of an HEI’s education. It is therefore important to be well informed about the current state of ChatGPT and other AI tools so that policies can more effectively strike a balance between protection and innovation.

In terms of cyber security, ChatGPT can pose an immense risk to any organization. No previous tool has been as capable of crafting social engineering campaigns or writing malware as flexibly as ChatGPT. Anyone, regardless of technical experience, can use ChatGPT to generate phishing campaigns capable of responding to victims in a manner that is both targeted to individuals and indistinguishable from a human. Professor Samuel Addington states, “The conversational nature of ChatGPT and its ability to generate realistic-sounding responses could potentially make it more effective in these types of attacks” (Addington). The age of the Nigerian prince scam is over. Now, bad actors can combine ChatGPT with personal information readily available on the internet to target individuals and increase the success rate of attacks. HEIs must double down on training and education for faculty, staff, and students regarding social engineering threats (Addington). Often, the easiest way to infiltrate an organization’s cyber domain is by manipulating people (social engineering), not computers, and with AI tools like ChatGPT that can be done more quickly and easily than ever.

At its heart, AI is a pattern recognition system: over time, its algorithms become better at accurately detecting increasingly complex patterns. Cyber professionals can leverage technology like ChatGPT to comb through large amounts of data and surface those patterns, which is useful for finding suspicious network activity, spotting phishing emails, and summarizing complex security reports. Isogent, an IT-as-a-service company, writes that AI and ChatGPT can be used to automate processes, allowing IT departments to better utilize their staff for more important projects (Locke). This can help IT departments, which are often understaffed, manage an institution’s cyber hygiene and security more effectively. Locke warns against overreliance on AI, however: a false sense of security might tempt organizations to remove humans from the loop entirely, leaving unsupervised systems capable of causing harm when errors occur (Locke). It is therefore important to explore hybrid solutions that automate IT tasks under human supervision. Policies such as this, at least for the time being, will shape the future of cyber security and data management.
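To make the hybrid idea concrete, here is a minimal, purely illustrative sketch (not from any of the cited sources, and far simpler than a real ML model): an automated scorer flags anomalous login volumes, but instead of acting on its own, it routes each flag to a human review queue, keeping an analyst in the loop.

```python
# Illustrative human-in-the-loop triage sketch. The z-score test below is a
# hypothetical stand-in for a far more sophisticated AI pattern detector.
from statistics import mean, stdev

def flag_anomalies(hourly_logins, threshold=3.0):
    """Flag hours whose login volume deviates sharply from the norm.

    Outliers are queued for a human analyst to review -- nothing is
    blocked or changed automatically.
    """
    mu, sigma = mean(hourly_logins), stdev(hourly_logins)
    review_queue = []
    for hour, count in enumerate(hourly_logins):
        z = (count - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            # Route to the analyst queue rather than auto-remediating.
            review_queue.append({"hour": hour, "count": count})
    return review_queue

# Typical campus traffic with one suspicious spike in hour 3.
counts = [120, 115, 130, 900, 125, 118, 122, 119, 121, 117, 123, 120]
print(flag_anomalies(counts))  # flags only the 900-login spike at hour 3
```

The design choice mirrors Locke’s warning: the automation narrows thousands of data points down to a handful of candidates, but a person still makes the final call.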

Aside from the security threats that ChatGPT and AI pose, HEIs also worry about the threats these technologies bring to the education of their students. From its release, students have found ChatGPT to be a gold mine for getting an easy A on assignments: simply copy and paste a prompt or question from an assignment into ChatGPT, and within seconds you have a well-written, informative, and stylized response. What students do not understand is that the technology behind ChatGPT does not guarantee factually correct information. A response may seem well researched, yet its underlying facts and premises can be completely false. UNESCO released a report on ChatGPT in the context of higher education that identifies several negative side effects of AI in education, including academic integrity violations, privacy concerns, cognitive bias in the training models, and accessibility issues (UNESCO). These findings show that ChatGPT poses a serious risk to the education of current and future students, and the methods by which institutions have taught for decades must change and adapt. One can liken this change to the arrival of the internet in education, when access to online databases and web browsers made card catalogs and printed reference collections largely obsolete. While AI is still in its infancy, one can reliably claim that it will remain a part of our society from this point on. HEIs will have to pivot their policies and curricula once again to educate students for a world with AI rather than ban its use under the guise of protecting education.

What that change in education will look like has yet to be discovered, but HEIs must begin creating policies and teaching their students best practices for AI. Businesses will most likely implement AI into their operations as a means of making employees more productive (McLaren), so an institution that refuses to teach its students how to use AI reliably does its pupils a disservice by failing to prepare them for the future workforce. UNESCO’s report on AI in HEIs also offers examples of ways institutions might teach students to use AI, including as a tutor, a study partner, a Socratic learning tool, and a source of inspiration. UNESCO provides one example of how ChatGPT can help professors understand the knowledge of their students: students write their current understanding of the course into ChatGPT, and the AI creates a profile that shows the professor how each student is learning and what information they understand (UNESCO). This could be groundbreaking in education. Rather than relying on large exams to test a student’s understanding of specific problem sets, AI can help professors see the rough spots in their students’ understanding, allowing instruction tailored to specific weaknesses and thereby increasing the quality of education. ChatGPT, and AI in general, can revolutionize the foundation of education if HEIs harness it and teach its responsible use. Moving forward, HEIs should be planning how to implement this tool so that they and their students do not fall behind.
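As a rough illustration of the profile idea described above (hypothetical, not taken from the UNESCO report): once an AI tutor has scored each student’s self-described understanding by topic, the professor-facing view is essentially an aggregation that highlights which topics need re-teaching.

```python
# Hypothetical sketch: aggregate per-student topic scores (0.0-1.0, assumed
# to come from an AI tutor's assessment) into a class-level profile.
from collections import defaultdict

def class_profile(student_scores, weak_threshold=0.6):
    """Return class-average scores per topic and a sorted list of
    topics falling below the threshold (the 'rough spots')."""
    by_topic = defaultdict(list)
    for scores in student_scores.values():
        for topic, score in scores.items():
            by_topic[topic].append(score)
    averages = {t: sum(v) / len(v) for t, v in by_topic.items()}
    weak = sorted(t for t, avg in averages.items() if avg < weak_threshold)
    return averages, weak

students = {
    "alice": {"recursion": 0.9, "pointers": 0.4},
    "bob":   {"recursion": 0.8, "pointers": 0.5},
}
averages, weak_topics = class_profile(students)
print(weak_topics)  # -> ['pointers']
```

The point is not the arithmetic but the workflow: instead of one high-stakes exam, the professor gets a continuously updated map of where the class is struggling.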

AI is, of course, new and unpredictable, and its future can seem scary and promising at the same time. The decisions HEIs make now will have a long-lasting impact on students and on AI implementation in the future. Just because AI is new does not mean HEIs should distance themselves from it and stick to the status quo. Instead, HEIs should do the opposite: utilize AI in IT and change the way students are educated by teaching them how to responsibly use one of the greatest advancements in technology since the personal computer.

Works Cited  

Addington, Samuel. ChatGPT: Cyber Security Threats and Countermeasures, 4 Apr. 2023. Accessed 9 June 2023. 

Locke, Jeff. “The Two-Faced Nature of ChatGPT: Examining the Risks and Benefits of AI in Cybersecurity.” LinkedIn, 22 Mar. 2023, www.linkedin.com/pulse/two-faced-nature-chatgpt-examining-risks-benefits-ai-cybersecurity.

McLaren, Kirk W. “The Rise of AI in the Workplace.” Forbes, 19 May 2023, www.forbes.com/sites/forbesbooksauthors/2023/05/18/the-rise-of-ai-in-the-workplace/?sh=455268762224.

UNESCO. ChatGPT and Artificial Intelligence in Higher Education: Quick Start Guide, https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf. Accessed 9 June 2023. 


About Author

I am a computer science major at Fordham University, working as an IT risk analyst assistant in the Fordham University IT department.
