The Problem with ChatGPT Writing Your Essay

By John Lalli (first essay)
and ChatGPT (second essay)

ChatGPT seems capable of doing anything requested of it. This power makes ChatGPT a handy tool but comes with questions, including pesky ethical ones about its role in work and education. To explore these concerns, we should first understand exactly what ChatGPT is and how it operates.  

ChatGPT is a generative artificial intelligence (AI) chatbot developed by the company OpenAI. In other words, it is a program that has learned, in a way that mimics human learning, to generate complex text or images in response to user input [1]. ChatGPT is both a fun way to spend some time and a useful tool that leverages cutting-edge technology. It can provide users with information on various topics and even carry out simple text-based tasks such as translating, editing, and writing. These uses are beneficial under the proper circumstances, but the easy accessibility of generative AI gives rise to concerns of misuse. These concerns are perhaps most dire in the field of education. With the advent of ChatGPT and other generative AI programs, educational institutions may be forced to alter their academic codes of conduct and reassess the nature of plagiarism.  

Traditionally, plagiarism is the act of taking someone else’s work or ideas and passing them off as one’s own [2]. By plagiarizing, an individual can reap benefits that rightfully belong elsewhere without doing any work of their own. Plagiarism is an inherently deceitful practice and is considered immoral in most ethics frameworks. Plagiarism is not illegal in the United States in most settings unless the specific situation breaches copyright law. Still, various professions frown upon or outright bar the practice. In education, plagiarism constitutes a serious offense because the act not only represents a breach of ethics, as it does in every industry, but also leaves the offending party without the benefits an education seeks to deliver. If an individual plagiarizes, they do not properly learn the material, much less how to synthesize information into coherent thoughts of their own. For these reasons, educational institutions customarily include a strict ban on plagiarism in their academic codes of conduct [3]. Now educators and administrators are tasked with deciding whether the use of AI represents a similar breach of those codes.  

On one hand, accessible AI represents a stunning scientific advancement and a valuable tool for students. On the other, overuse of AI may carry the same drawbacks as plagiarism and leave individuals without valuable life and professional skills. There are also further ethical considerations, such as the scope of the training data and whether generative AI is itself a form of plagiarism. Each educational institution needs to weigh the benefits and drawbacks of generative AI and decide which policy best serves its students and upholds its ethical values. Higher education often avoids making this difficult choice [4], leaving students in a contractual and moral gray area where each must decide for themselves. This essay evaluates the ethics of using ChatGPT, or other generative AI, specifically to write an essay for a course in which plagiarism has been expressly banned.    

ChatGPT and Deep Learning

To understand the ethics of using generative AI such as ChatGPT for an essay in a course that forbids plagiarism, it is necessary to gain a clear understanding of these programs, particularly how they learn. ChatGPT is a generative AI based on deep learning, a subcategory of machine learning in which input data are turned directly into outputs without human assistance in the learning process. A deep learning AI is set up by a team of programmers to mimic the human brain and is given a dataset from which to learn, called the training data. From that point on, the program is not interfered with and is allowed to develop on its own. Deep learning differs from other forms of machine learning, in which data are first formatted in specific and easily digestible ways before being fed into the program [5].  
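To make the idea concrete, the toy sketch below (written in Python with NumPy, and purely illustrative; it is not how ChatGPT itself is built) trains a tiny neural network to compute the XOR function directly from raw inputs. No one hand-crafts features or rules for it; the programmer only sets up the structure and supplies the data, and the weights adjust themselves over repeated passes through the training set.

```python
# Minimal sketch of a deep-learning training loop, assuming only NumPy
# (illustrative only; not how ChatGPT is actually implemented).
import numpy as np

rng = np.random.default_rng(0)

# Raw training data: inputs X and desired outputs y (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases; from here the network
# "develops on its own" through repeated passes over the training data.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: raw inputs flow straight through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

The point of the sketch is the division of labor: the programmers choose the architecture and the training data, but the mapping from input to output is discovered by the program itself.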

A helpful example of deep learning, illustrating both the process and the power of the technology, comes from chess. Chess has been a testing ground in computer science since the so-called father of the field and developer of primitive AI, Alan Turing, wrote his Turochamp algorithm in 1950 [6]. In 2017, Google’s DeepMind team once again used chess as a proving ground, this time for its deep learning AI, AlphaZero. As a deep learning system, AlphaZero was programmed only with the rules of chess and the objective of winning. It learned not from hundreds of hours of programming with assistance from chess masters but by playing millions of training games against itself [7]. Consequently, AlphaZero learned in a way that mimicked a human’s learning process, without any programming intervention. 
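The essence of that self-play loop can be captured in a drastically simplified sketch. The toy below learns a trivial "race to 10" game (players alternately add 1 or 2; whoever reaches 10 wins) purely by playing against itself and scoring its own moves. It is an analogy only: AlphaZero actually pairs deep neural networks with Monte Carlo tree search, and none of this is DeepMind’s code.

```python
# Toy self-play learner for a trivial game (an analogy to AlphaZero's
# self-play idea only; not DeepMind's algorithm or code).
import random
from collections import defaultdict

TARGET = 10                 # race-to-10: players add 1 or 2; reaching 10 wins
Q = defaultdict(float)      # learned value of each (total, move) pair
EPSILON, ALPHA = 0.2, 0.1   # exploration rate and learning rate

def choose(total):
    moves = [m for m in (1, 2) if total + m <= TARGET]
    if random.random() < EPSILON:
        return random.choice(moves)                 # explore a random move
    return max(moves, key=lambda m: Q[(total, m)])  # exploit learned values

def self_play_game():
    """Play one game against itself; return each player's moves and the winner."""
    total, player, history = 0, 0, {0: [], 1: []}
    while True:
        move = choose(total)
        history[player].append((total, move))
        total += move
        if total == TARGET:
            return history, player   # the current player made the winning move
        player = 1 - player

def train(games=50000):
    for _ in range(games):
        history, winner = self_play_game()
        for player, moves in history.items():
            reward = 1.0 if player == winner else -1.0
            for state_action in moves:
                # Nudge the value of every move toward the game's outcome.
                Q[state_action] += ALPHA * (reward - Q[state_action])

train()
# After training, the agent has discovered good strategy on its own:
# from a total of 6 it should prefer adding 1 (leaving the opponent at 7).
print(Q[(6, 1)], Q[(6, 2)])
```

Nothing about the game's strategy was programmed in; the move values emerge entirely from the outcomes of the agent’s own games, which is the core idea the AlphaZero example illustrates.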

To test its strength, DeepMind entered AlphaZero into a match against Stockfish, the top traditional chess engine of the time. Stockfish is built on a traditional, hand-tuned approach: its evaluations of chess positions are continually tweaked by a team of programming experts and chess masters. Stockfish also has several common opening sequences hard-coded into its database, something AlphaZero lacks. In this match, AlphaZero defeated Stockfish handily after only about nine hours of rapid training games against itself [7]. 

Though questions remain about which engine is truly more powerful due to hardware differences, this match displayed the real capabilities of deep learning models. At a high level, ChatGPT’s learning process is like AlphaZero’s: once the process begins, no human directly intervenes in how the model updates itself, though ChatGPT’s training is somewhat more complicated because of the differing nature of the task. Instead of the rules and objective of chess, ChatGPT is given an array of sources as training data and the objective of receiving a good review [8]. ChatGPT parses this training data and then attempts to respond to human requests as part of its training. Those responses are graded by the OpenAI team on how well they emulate a desirable chatbot response, and ChatGPT integrates the resulting scores into its memory, learning over many cycles to respond properly. This process is called reinforcement learning from human feedback [8]. It is important to understand that ChatGPT parses the training data itself and that this data remains the same throughout the entire process. This matters because when users prompt ChatGPT today, the AI synthesizes this same training data to produce a response. When a student uses ChatGPT to write an essay, they are arguably using the information from the training set, an action that borders on plagiarism.  
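As a rough mental model of that feedback loop (and only a mental model; OpenAI’s actual pipeline fine-tunes a large neural network and is far more involved), the sketch below treats the "model" as a set of preference weights over a few canned reply styles and stands in for the human graders with a hard-coded scoring function. Repeated rounds of respond, get graded, and adjust push the weights toward the styles that score well.

```python
# Conceptual toy of reinforcement learning from human feedback (RLHF).
# The reply styles and scores below are invented for illustration; this is
# an analogy to the grade-and-adjust cycle, not OpenAI's training pipeline.
import math
import random

REPLY_STYLES = ["helpful and cited", "helpful but vague", "rude", "off-topic"]

# Stand-in for human reviewers: a score for each reply style.
HUMAN_SCORE = {"helpful and cited": 1.0, "helpful but vague": 0.4,
               "rude": -1.0, "off-topic": -0.8}

weights = {style: 0.0 for style in REPLY_STYLES}   # the model's "memory"
LEARNING_RATE = 0.1

def sample_style():
    """Pick a reply style with probability proportional to exp(weight)."""
    totals = {s: math.exp(w) for s, w in weights.items()}
    r = random.uniform(0, sum(totals.values()))
    for style, t in totals.items():
        r -= t
        if r <= 0:
            return style
    return REPLY_STYLES[-1]

# Training loop: respond, get graded, nudge weights toward well-graded styles.
for _ in range(5000):
    style = sample_style()
    score = HUMAN_SCORE[style]               # stand-in for the human grade
    probs = {s: math.exp(w) for s, w in weights.items()}
    z = sum(probs.values())
    for s in REPLY_STYLES:
        indicator = 1.0 if s == style else 0.0
        # Policy-gradient-style update: reinforce styles that score well.
        weights[s] += LEARNING_RATE * score * (indicator - probs[s] / z)

print(max(weights, key=weights.get))   # converges to "helpful and cited"
```

The key feature carried over from the real process is that the graders never edit the model directly; they only supply scores, and the model’s own update rule does the rest.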

Ethics Overview of Plagiarism  

With a deeper understanding of ChatGPT’s learning process, it is now possible to evaluate the ethics of plagiarism in a broad sense. Some professors and administrators argue that using ChatGPT and plagiarizing are essentially the same action [4]. This conclusion stems from two main arguments, the first technical and the second semantic. The technical argument deals with how the AI has learned to generate its responses. A majority of the data included in the training dataset are owned by individuals, and each response ChatGPT builds from this information should arguably be attributed to the original authors. This means that by using a ChatGPT response for an essay, the user is plagiarizing indirectly from the sources in the training dataset. This is a complicated and contested argument rooted in the technical underpinnings of generative AI, and it calls into question generative AI’s ability to synthesize truly new information from the original training data.  

The second argument asserts that students are in fact taking the information from ChatGPT and that the program should therefore be cited as the source of that information. Plagiarism is traditionally defined as stealing information from an individual, but as Alice Dailey, chair of the Academic Integrity Program at Villanova University, explains, “Students who commit plagiarism often borrow material from a ‘somewhere’—a website, for example, that doesn’t have clear authorial attribution” [4]. Dailey and others argue that even though ChatGPT is not an author in the traditional sense, the modern definition of plagiarism has already moved past the need for a traditional author. On this view, the use of generative AI in writing an essay stands as a clear example of plagiarism. Of course, those who believe that using generative AI does not amount to plagiarism offer counterarguments of their own. For this reason, it is necessary to evaluate both the ethics of plagiarism and the ethics of using generative AI on the assumption that no true plagiarism is taking place.  

A more complete understanding of the issue begins with an evaluation of plagiarism itself, continues with responses to the view that ChatGPT-written essays do not constitute plagiarism, and finishes with an evaluation of ChatGPT use with the question of plagiarism set aside. 

Evaluation of Plagiarism 

Plagiarism is generally viewed as unethical, though the reasons vary according to the ethical system used to evaluate the action. Two main issues with plagiarism cause the largest concerns in deontological and utilitarian ethics, respectively. The first ethical issue is that the act represents a form of theft of intellectual property [2]. In deontological ethics, an action’s morality is contained directly in the action itself, not in its consequences. According to the theory, a moral action may have negative consequences and still be the correct ethical decision. The morality of an action is based upon one’s duty to someone or something, often in the form of an obligation to some higher power. Deontological ethics forms the moral foundation of major world religions like Judaism, Christianity, and Islam. The framework contains rules that individuals are not meant to break for nearly any purpose; doing so is considered a breach of ethics [9]. 

In many deontological systems there is a rule against theft; a primary example is the Bible’s commandment “thou shalt not steal.” Plagiarism is a theft of intellectual property. Though deontology traditionally concerns material goods, the concept can certainly be extended to intellectual property, because in the modern world individuals make a living from intellectual property. Consider an individual who publishes research: if they were constantly plagiarized, their work would be worthless to readers and they would be unable to make a living. Examples such as this explain why the United States and other governments have patent and copyright laws [2]. Even in situations not covered by these laws, the plagiarizing individual passes another person’s words or ideas off as their own, seeking to benefit unjustly and breaching the theft principle of deontological ethics. Plagiarism of even a single essay or phrase is not permissible according to deontology.

The second ethical issue is that plagiarism may allow individuals to pass a course without gaining the necessary knowledge. This outcome becomes an important consideration when evaluating plagiarism through a utilitarian lens. In classical utilitarian philosophy, the best acts maximize total utility. Utility is the measure of benefit gained from consuming a good or service or, in this case, the benefit that arises as the consequence of an action [10]. Ethical actions are therefore those whose outcomes maximize benefit or limit harm overall. 

When evaluating plagiarism through this system, there are some superficial benefits for the plagiarizing individual: they save significant time and effort and may receive a higher grade than they otherwise would. But plagiarizing leaves everyone who engages in it without the true benefits of schooling, as they often do not learn the material, build synthesis skills, or develop the ability to think for themselves. This harm far outweighs any added convenience or grade boost from plagiarizing, and for this reason alone plagiarism represents a great negative from a utilitarian perspective. The potential for negative utility only increases when long-term consequences are evaluated. Professionals who make their way through their education by plagiarizing can lack the skills required to succeed and, in some cases, even pose a danger to themselves and society. 

Take the example of an architectural engineer who plagiarizes several research papers about building safety requirements. It would be unwise to want that person to be the one who must certify the safety of a building; allowing them to do so could result in disaster. Although this is an extreme example, and professions put safeguards in place to prevent such disasters, it illustrates the negative consequences of plagiarism. Every plagiarized essay represents an ethical shortcoming with its own negative consequences, even if it does not immediately cause disaster. The better decision, as far as utilitarianism is concerned, is for everyone to improve their own knowledge and skills by avoiding plagiarism.  

ChatGPT’s Essay is Not Plagiarism: Responses

For some, this evaluation of plagiarism applies directly to an assessment of using ChatGPT to write an essay. Many others, however, would argue that using ChatGPT for essays falls into a gray area and does not actually constitute plagiarism. In the most technical sense, plagiarism requires someone from whom to plagiarize, and ChatGPT is certainly not a human being. For this reason, some say the ethics and shortcomings of plagiarism are not necessarily inherent to the use of ChatGPT in a classroom setting. Yet even without treating the use of generative AI as plagiarism itself, it is still possible to morally evaluate the use of ChatGPT in a classroom where plagiarism is forbidden. The most straightforward evaluation can be done using utilitarian ethics.

Using generative AI has largely the same superficial benefits and overwhelming detriments as plagiarism. Considering the benefits, it is true that an individual who uses AI to write an essay can likely pass a class with less time, effort, and energy, which benefits them by freeing those resources for use elsewhere. But using generative AI similarly leaves individuals without the skills that are the very point of an education. 

There is even the potential that using ChatGPT is worse in this regard, for two reasons. The first is that ChatGPT can complete an entire essay on its own. If someone were to plagiarize an entire essay word for word from a traditional source, they would likely be caught by a detection program or a simple Google search. ChatGPT, by contrast, can write a unique essay for each user, making it much harder to detect [11]. This could lead to an individual writing even less of the work they turn in, further diminishing the writing and research skills the assignment was designed to develop.

The second reason is that ChatGPT operates on private training data, so there is often no way to know exactly where its information came from. When asked to provide sources for the information included in its essay, ChatGPT responds, “As an AI language model, I don’t have direct access to my training data or know where it came from” [12]. Since the AI, and thus the plagiarizing individual, does not know the exact source of the information, the student is not even afforded the opportunity to skim those sources for bits of information. In short, using ChatGPT allows even less effort and time to be put into the assignment and thus even less educational benefit to be reaped. These two arguments show that using ChatGPT can be more problematic than simple plagiarism when evaluated from a utilitarian perspective. An individual who uses ChatGPT to write an essay in a class where plagiarism is not permitted does themselves a disservice, robbing themselves of educational experience and potentially becoming a less reliable member of society.  

From a deontological perspective, generative AI fills a much different role than traditional plagiarism. Though the information provided by ChatGPT comes from its training data, some argue that those sources are synthesized into a single new response and that the result therefore does not amount to plagiarism from those sources. On this view, using ChatGPT is more like receiving help from a knowledgeable person who does not want credit in one’s essay. 

Even taking this claim at face value and completely separating the ethics of plagiarism from the use of ChatGPT on an essay, problems remain. The major problem with using ChatGPT to write an essay for class is that the user is, in most cases, committing an act of deception. Most of the time, a user who has ChatGPT write an essay does not disclose this to their professor and instead passes the work off as their own. Deception, especially to skirt a responsibility, is condemned in a deontological approach. The noted philosopher Immanuel Kant held that lying under any circumstance violates what he termed the Categorical Imperative, the moral law that all people must follow regardless of circumstance [13]. For Kant this truly meant that explicit lying is not ethically permissible even in the direst situations, including those of life and death [14]. For this reason, Kant would not permit most essays written with ChatGPT. There are potential cases where this issue does not arise, for instance if a professor allows ChatGPT in class and the student informs the professor that they used the program. Though rare, in such an instance there is no issue with the action, provided the argument distancing these programs from plagiarism is taken as a given. Even in these rare situations, however, the utilitarian issues remain. Whether or not generative AI is regarded expressly as plagiarism, in the majority of cases there are a number of ethical issues with using ChatGPT on an essay in a class in which plagiarism is banned.  

When prompted to write an essay on the ethics of using ChatGPT to write an essay for class, ChatGPT responded with the following: 

Title: The Ethics of Using ChatGPT to Write an Essay for Class  

Introduction: In an era of advanced technology, artificial intelligence (AI) has made significant strides, providing new tools and opportunities for various tasks. One such advancement is ChatGPT, a language model developed by OpenAI that can generate human-like responses. While ChatGPT offers convenience and assistance, its usage raises ethical concerns, particularly when it comes to using it to write essays for academic purposes. This essay aims to explore the ethical implications of employing ChatGPT as a tool for academic writing.  

I. Academic Integrity and Authenticity: One of the primary ethical concerns associated with using ChatGPT to write essays is the violation of academic integrity. Education systems emphasize the development of critical thinking, research skills, and the ability to express oneself. By relying on ChatGPT to compose an essay, students risk bypassing the essential learning process and submitting work that is not genuinely their own. This undermines the principles of academic integrity and devalues the educational experience.  

II. Intellectual Property and Plagiarism: Plagiarism, the act of presenting someone else’s work or ideas as one’s own, is a serious offense in academia. When students employ ChatGPT to generate essays, they may unintentionally or intentionally plagiarize content. ChatGPT’s responses are generated based on vast amounts of existing text, and there is a risk of reproducing sentences or paragraphs verbatim from existing sources without proper attribution. This compromises the originality and authenticity expected in academic work.  

III. Skill Development and Personal Growth: Academic assignments serve a broader purpose beyond earning grades. They provide an opportunity for students to develop critical thinking skills, engage in research, and articulate their thoughts effectively. Using ChatGPT to write essays eliminates these learning opportunities. Students miss out on the chance to strengthen their writing abilities, conduct independent research, and engage in a creative thought process that contributes to personal growth and development.  

IV. Fairness and Equity: Another ethical concern arises from the potential disparities in access to technology. While some students may have access to ChatGPT or similar AI tools, others may not. This discrepancy can create an uneven playing field, where students who lack access to such resources may be at a disadvantage. The use of ChatGPT for essay writing may widen the existing divide between students from different socioeconomic backgrounds and compromise the principle of fairness in education.  

V. Responsible Use of Technology: Promoting responsible use of technology is crucial in today’s digital age. While ChatGPT has its merits as a tool for assistance and guidance, using it as a crutch for essay writing raises questions about dependency and the development of necessary skills. It is essential to strike a balance between utilizing AI tools for support and ensuring that students actively engage in the learning process, cultivating their own abilities and knowledge.  

Conclusion: The ethics of using ChatGPT to write an essay for class involves considerations of academic integrity, plagiarism, skill development, fairness, and responsible technology use. While AI-powered tools like ChatGPT offer convenience, they also pose risks to the educational experience. It is crucial for students, educators, and institutions to reflect on these ethical concerns and establish guidelines to preserve the integrity of academic work, promote fairness, and prioritize the development of essential skills that contribute to personal growth and success. By embracing technology responsibly, we can navigate the ethical challenges and leverage AI tools effectively within the educational landscape.  

Response by John Lalli

Comparing these two pieces of writing provides a few insights. ChatGPT is quite good at what it does: the essay written by the AI is clear and concise, and its claims are easily understood. That conciseness comes with a drawback, as ChatGPT does not go into as much detail and depth as a human writing a similar essay would. The greatest weakness of the essay written by ChatGPT is that it does not cite any sources and fails to go into specifics in its ethics evaluation. The lack of sourcing is detrimental not just for plagiarism reasons but also because the absence of citations makes it harder for the reader to verify or follow up on the information provided. This is especially evident in the ethics evaluation: instead of grounding the evaluation in concrete theory, ChatGPT explains the main concerns plainly, based on what it seems to treat as a general code of ethics. This shortcoming typifies the problem with using ChatGPT as discussed in John Lalli’s essay. 

Works Cited  

  1. “What Is Generative AI?” McKinsey & Company, 19 Jan. 2023, www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai.  
  2. “Plagiarism.” Legal Information Institute, Aug. 2022, www.law.cornell.edu/wex/plagiarism#:~:text=Plagiarism%20is%20the%20act%20of,a%20person’s%20school%20or%20workplace.  
  3. “Plagiarism.” University of Oxford Students, www.ox.ac.uk/students/academic/guidance/skills/plagiarism#:~:text=Passing%20off%20another’s%20work%20as,of%20the%20degrees%20it%20issues.  
  4. Barnett, Sofia. “ChatGPT Is Making Universities Rethink Plagiarism.” Wired, 30 Jan. 2023, www.wired.com/story/chatgpt-college-university-plagiarism/.   
  5. Martineau, Kim. “What Is Generative AI?” IBM Research Blog, 26 May 2023, www.research.ibm.com/blog/what-is-generative-AI.   
  6. Stezano, Martin. “In 1950, Alan Turing Created a Chess Computer Program That Prefigured A.I.” History.Com, 22 May 2023, www.history.com/news/in-1950-alan-turing-created-a-chess-computer-program-that-prefigured-a-i.  
  7. Silver, David, et al. “AlphaZero: Shedding New Light on Chess, Shogi, and Go.” Google DeepMind, 6 Dec. 2018, www.deepmind.com/blog/alphazero-shedding-new-light-on-chess-shogi-and-go.  
  8. Barnett, Sofia. “ChatGPT Is Making Universities Rethink Plagiarism.” Wired, 30 Jan. 2023, www.wired.com/story/chatgpt-college-university-plagiarism/.  
  9. “Deontology.” Seven Pillars Institute, 2 Jan. 2018, https://sevenpillarsinstitute.org/glossary/deontology/  
  10. “Utilitarianism.” Seven Pillars Institute, 2 Jan. 2018, https://sevenpillarsinstitute.org/glossary/utilitarianism/  
  11. Heikkilä, Melissa. “Why Detecting AI-Generated Text Is so Difficult (and What to Do about It).” MIT Technology Review, 7 Feb. 2023, www.technologyreview.com/2023/02/07/1067928/why-detecting-ai-generated-text-is-so-difficult-and-what-to-do-about-it/.  
  12. ChatGPT. “Citing My Sources.” ChatGPT, 30 July 2023, https://chat.openai.com/.  
  13. “Categorical Imperative.” Seven Pillars Institute, 2 Jan. 2018, https://sevenpillarsinstitute.org/glossary/categorical-imperative/.  
  14. Kant, Immanuel, et al. “On a Supposed Right to Lie from Philanthropy.” Practical Philosophy, Cambridge University Press, Cambridge, U.K., 2008, pp. 605–616.