The Use and Abuse of AI in Education: Navigating a Double-Edged Sword

Authored By: Kyle Conley – Tech Evangelist, AI & Cybersecurity at Assessed Intelligence

With each passing day, artificial intelligence (AI) becomes more deeply ingrained in modern society, both directly and indirectly. Regardless of one's level of understanding, perception, or use of the technology, most people now interact with some form of AI one or more times throughout the day.[1] As the various forms of AI weave themselves into daily life, human-derived, manual information collection and dissemination have declined. Increased dependence on machine-generated data (MGD) has led many across industries to rely on what is, in many cases, open-source, unstructured data for a multitude of purposes.[2] Of particular interest, and the focus of this article, is the use of MGD – or generative AI (GenAI) – as a double-edged virtual sword in education.

From the blatantly obvious abuse of tools such as ChatGPT to pass university-level exams, to student and faculty use of adaptive AI technologies for tutoring and individualized learning plans,[3] GenAI in education can be, and is being, used for both beneficial and nefarious purposes. Irrespective of the intent behind any individual use, several questions come to mind: the validity of the data itself, the potential for bias, privacy concerns, intellectual property rights, and more.[4]

A Brief History of AI in Education

Believe it or not, AI as a tool to educate humans has been around since the 1960s, when Don Bitzer, a PhD engineering student at the University of Illinois Urbana-Champaign (UIUC), decided to build what would later be called the PLATO I terminal. Bitzer's idea, soon labeled "Programmed Logic for Automatic Teaching Operations," evolved rapidly, with enhanced versions of the product released in subsequent years all the way through the late 1980s.[5]

Beginning in the 1990s and leading into the early 2000s, intelligent tutoring systems and educational software quickly gained momentum; one of the more notable releases during this period was an AI-powered system called Cognitive Tutor, developed at Carnegie Mellon University in the late 1990s.[6]

Throughout the 2010s, adaptive learning platforms such as Khan Academy and Duolingo emerged, offering dynamically adjustable educational resources to anyone with a reliable Internet connection.[7]

Fast-forward to the technologically advanced, modern-day version of the "roaring '20s," where tools such as ChatGPT, officially released in 2022, have paved the way for GenAI as it is known today. The technology was quickly made accessible to teachers, students, and the greater academic community. Although an undeniably remarkable breakthrough in AI development, the most recent trend in using these enhanced toolsets continues to raise eyebrows among scholars and skeptics alike throughout the education sector.

Common Uses of GenAI in Education

As highlighted previously, the education system has successfully integrated various forms of AI since the early 1960s, beginning with the PLATO system.[8] The practically seamless amalgamation of AI into the education realm is therefore not a new concept by any means. What is new, however, are the significant improvements in AI capabilities and the broad scope of data that can be ingested and reproduced with minimal effort and time.

With these recent developments in GenAI, educators can now offer adaptive, individually tailored learning plans and content to meet the needs of each student.[9] Not only has this vastly expanded the learning potential for special-needs student populations, it has also significantly reduced the need for staff and faculty to manually create and regularly revise multiple types of learning resources. Moreover, GenAI further assists educators by providing detailed, high-quality feedback and grading automation, delivering timely assessments to students while promoting overall efficiency in the classroom.[10] Many other benefits commonly result from the use of GenAI that, for the sake of brevity, cannot be discussed in this article. The key takeaway is that GenAI in education provides countless benefits to staff, faculty, and students alike, depending on the manner in which it is utilized.

Common Abuses of GenAI in Education

Much like anything else, GenAI has the potential to be misused by anyone with malicious intentions. Some of the more common abuses within education include using it to complete assignments – solving mathematical and scientific equations, producing research reports, writing essays, and more – in a fraction of the time it would take a human. The rapid evolution of GenAI has in turn spawned many products that claim to reliably detect plagiarism and/or AI-generated content. While the idea may seem advantageous at its core, recent testing has shown that these detectors are far from accurate, with tools advertising false-positive rates below 1 percent nonetheless flagging innocent students' work as AI-generated.[11] In fact, some well-known academic institutions, such as Vanderbilt University, have completely suspended the use of AI detection tools for the foreseeable future.[12]

Because of this, many within academia have been left with no viable option other than to develop their own strategies for identifying AI-generated or potentially plagiarized content. This, combined with the lack of standardization and regulation, has resulted in universities drafting their own AI honesty policies and codes of ethics. At universities such as the Massachusetts Institute of Technology (MIT), existing policies are commonly revised to keep students honest when using AI technologies in the classroom while promoting transparency and inclusion.[13]

Regulating Ethical and Unethical AI in Education

As AI technology continues to expand human capabilities through machine augmentation, it will become increasingly crucial for those in academia not only to be aware of AI-generated content, but also to be able to distinguish AI-generated from human-generated outputs. Furthermore, the long-term impacts of both the ethical and unethical uses of AI in education must also be considered.

Although there are many potential avenues for formalizing and regulating the ethical use of AI in education, there are currently few gatekeepers with the expertise and authority to do so effectively. While not specific to academia, a study published by the Brookings Institution identifies three main challenges to regulating AI: keeping up with the pace of AI's rapid enhancements, determining what to regulate, and deciding who should regulate and how.[14]

The Future – Best Practice Recommendations and Advocacy for Ethical Use

Since there are currently no standardized regulations or commonly used frameworks available to academia for maintaining ethical use throughout the education sector, perhaps the most viable interim solution is to adopt a tactic similar to MIT's, as described previously.

Although not foolproof by any means, collaboration and engagement between students and faculty is critical, including the implementation of a revised academic honor code.[15] This, combined with the adoption of responsibly implemented AI technologies, may be a step in the right direction. Student "rights and responsibilities" statements, like those adopted by the Academic Senate for California Community Colleges and other educational institutions, could prove an effective path forward through the emerging complexities created by the expansion of GenAI.[16]


[1] Kennedy, B., Tyson, A., & Saks, E. (2023, February 15). Public Awareness of Artificial Intelligence in Everyday Activities. Pew Research Center. https://www.pewresearch.org/science/2023/02/15/public-awareness-of-artificial-intelligence-in-everyday-activities/

[2] Madrid, J. (2024, November 13). Data for AI: Human-Generated vs. Machine-Generated. Devoteam. https://www.devoteam.com/expert-view/ai-data-human-vs-machine-generated/

[3] Quay-de la Vallee, H. (2023, March 15). Generative AI Systems in Education – Uses and Misuses. Center for Democracy and Technology. https://cdt.org/insights/generative-ai-systems-in-education-uses-and-misuses/

[4] Chapman University. (n.d.). Bias in AI. Chapman University. Retrieved May 25, 2025, from https://www.chapman.edu/ai/bias-in-ai.aspx

[5] Kaiser, C. (2023, March 17). PLATO: How an educational computer system from the ’60s shaped the future. Ars Technica. https://arstechnica.com/gadgets/2023/03/plato-how-an-educational-computer-system-from-the-60s-shaped-the-future

[6] Institute of Education Sciences. (2016). Secondary Mathematics: Cognitive Tutor (What Works Clearinghouse Intervention Report). Institute of Education Sciences. https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_cognitivetutor_062116.pdf

[7] Enders Analysis. (2014). Case study: Online Academies. Enders Analysis. https://www.ofcom.org.uk/siteassets/resources/documents/tv-radio-and-on-demand/broadcast-guidance/psb/psb3/9._online_academies.pdf?v=334490

[8] Kaiser, supra note 5.

[9] Quay-de la Vallee, supra note 3.

[10] McCreary, D. (2023, November 22). GenAI is Hyper-Personalizing Education. The Modern Scientist, Medium. https://medium.com/the-modern-scientist/genai-is-hyper-personalizing-education-e28019027944

[11] Fowler, G. (2023, April). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin

[12] Coley, M. (2023, August 16). Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector. Vanderbilt University. https://www.vanderbilt.edu/brightspace/2023/08/16/guidance-on-ai-detection-and-why-were-disabling-turnitins-ai-detector/

[13] MIT Sloan Teaching & Learning Technologies. (2025). AI Detectors Don’t Work. Here’s What to Do Instead. MIT Sloan Teaching & Learning Technologies. https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/

[14] Wheeler, T. (2023, June 15). The three challenges of AI regulation. Brookings Institute. https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/

[15] MIT Sloan Teaching & Learning Technologies, supra note 13.

[16] Eaton, J., Hecht, A., & Palmer, L. (2024). Rights and Responsibilities Regarding AI Use in Academia | ASCCC. Academic Senate for California Community Colleges. https://asccc.org/content/rights-and-responsibilities-regarding-ai-use-academia