Key points:
- ChatGPT is here to stay, and it’s wise to consider it a part of learning now
- In fact, every assignment moving forward must be graded with ChatGPT in mind
You may have heard of ChatGPT. A Google search turns up roughly 350,000 articles on the subject, and a significant percentage of them relate to education. With so much publicity, it is reasonable to assume that all students, from middle school through post-secondary, are aware of its power. Whether you like it or not, we have a new partner in the classroom.
Many primers on ChatGPT are available, but I want to focus on teachers’ and students’ concerns about using it in the classroom. Some schools (such as the entire NYC public school district) have attempted to ban it outright, while others, such as Yale, have taken the opposite approach. In my opinion, attempting to ban anything in a world of ubiquitous cell phones is a waste of time and effort. Students are ingenious, especially when it comes to getting around the rules. A search of both scholarly and mainstream articles turned up no prior proposal of the approach I suggest here. I came upon it while thinking about the eternal pedagogical problem: how to grade group projects.
It is well-documented and often repeated in teachers’ professional development that the right type of co-learning can deepen understanding and produce long-term knowledge gains. The critical question is, “What is the right type of co-learning?” Sometimes group projects work well. Sometimes one partner does all the work and another just coasts along for the ride. How are teachers supposed to grade these efforts? Give everyone the same grade? Let students grade each other’s contributions? Try to guess how much time each student put in? There is no perfect solution.
And that, in a nutshell, is where we find ourselves with ChatGPT. From now on, every assignment must be explicitly graded as a partner project with ChatGPT. Individual essays, science fair partner projects, group programming assignments, digital and physical art pieces: every single assignment now has a silent partner.
Of course, this does not mean that every student will use ChatGPT on every assignment. What it does mean is that we must assume that they might. We must transfer the responsibility of evaluating how much of the work is original from the teacher to the student, and we must explicitly teach students how to take on that responsibility. ChatGPT might be the partner that did everything, the partner that didn’t show up, or somewhere in between. Despite many efforts, there will never be a tool that can evaluate how much of an assignment was influenced by AI. I will even double down by saying not only will there not be such a tool, there should not be such a tool.
This leads to the most important question: If no such tool exists, how can educators know how much help the students received? How do we evaluate their knowledge? The answer: we ask them. We need to give that responsibility back to the students. We are their partners in learning, not their masters, and it is our job to help them understand what they are learning and how, not to police and punish them for using tools we don’t fully understand or feel comfortable with.
It is time for educators to treat ChatGPT as an unreliable partner in all assignments and to provide a way for students to let us know how much help they received. I specify an unreliable partner because there is no way to know where ChatGPT got its information for any single response. It uses a mathematical model of likely words, not research. It’s basically auto-complete on steroids. ChatGPT is like a classmate who has read extensively and is really confident about everything they say but can’t remember exactly where they got their information from. It could be an academic publication or it could be a conspiracy website. And that is how we should treat it – a partner who sounds like they know what they are talking about but still needs to be fact-checked.
I would like to propose the following sample rubric based on how partners might rate each other in real life:
| Category | Student-Driven | Moderate ChatGPT Help | ChatGPT-Driven |
| --- | --- | --- | --- |
| Topic Selection and Thesis Formulation | Student independently selected the essay topic and formulated the thesis. ChatGPT input (if any) was limited to guidance, suggestions, and corrections. | ChatGPT assisted in refining the essay topic or thesis statement, but the initial idea was student-generated. | The essay topic and thesis statement were primarily or entirely suggested or formulated by ChatGPT. |
| Research and Data Collection | Student conducted all research and collected supporting evidence independently or with minimal ChatGPT consultation. | ChatGPT assisted in finding sources or evidence but did not do the research for the student. | ChatGPT conducted the majority or all of the research and data collection. |
| Analysis and Argumentation | Student independently analyzed data and evidence to build arguments supporting the thesis. ChatGPT may have provided guidance on analytical methods. | ChatGPT assisted in the analysis and argumentation but did not build the argument for the student. | ChatGPT primarily or completely analyzed the data and constructed the argument. |
| Writing and Structure | The essay’s structure, including the introduction, body paragraphs, and conclusion, was formulated by the student. ChatGPT involvement was limited to feedback and suggestions. | ChatGPT assisted in structuring the essay or improving its readability, but the content and organization were student-generated. | The essay was primarily or entirely structured and written by ChatGPT. |
| Final Draft and Editing | Student independently revised and edited the essay. ChatGPT may have provided minor suggestions for improvement. | Student utilized ChatGPT for more significant revisions and editing but maintained original thought and structure. | ChatGPT conducted the majority or all of the revisions and editing. |
This rubric could easily be modified for any assignment, from a programming challenge to a play. It requires no technical knowledge about ChatGPT. In fact, we could replace the word “ChatGPT” with “Parents,” “Wikipedia,” “Google Search,” “Tutor,” or “TA.” It takes no more than a few seconds to fill out and read. And it still allows the teacher to specify how much ChatGPT is permitted for any given assignment. Even if the rule is “none at all,” the rubric is still valid. The student must still write down that they did not use the tool. It shifts the act from “I’m just tricking the teacher to save some time” to “I am explicitly lying about what I did.”
The value of this rubric is that it places the responsibility for learning back on the student’s shoulders. This proposal is not about making less work for the teacher or taking away their authority. It is about helping students develop their own moral compass. As the saying commonly attributed to C.S. Lewis goes, “Integrity is doing the right thing, even when no one is looking,” which is especially critical in the world of online learning. This rubric gives students the opportunity to show us what they did when we weren’t looking. It allows them a chance to have their integrity reinforced through practice. And if we treat this opportunity with understanding instead of punishment, it has the possibility of helping the students who need it the most.
You will notice that this rubric has no points attached. What if, instead of using it simply as another entry in the grade book, we took it as an opportunity for discussion with the student? If they are not afraid of getting a 0 for admitting that they used ChatGPT, it opens up a whole world of possible discussions, depending on their answers:
“I didn’t really understand the question, but once I did, I was fine.”
“I work every day after school and then look after my siblings… I just didn’t have time.”
“I thought my essay was really good and didn’t know what changes to make.”
If we allow students to self-evaluate without grade-based consequences, we can learn what supports they need as well as how we can improve our curricula. We can even use it as a perfect opportunity to teach students how to support themselves using tools like ChatGPT properly, without resorting to plagiarism. We could boost equity in our classrooms immensely if students could individualize the help they get at the time, place, and pace they need.
It is no use burying our heads in the sand and banning AI-based tools. These tools are becoming more and more powerful and are being used in new ways every day. We have a real chance to help students understand their own responsibility, take charge of their own learning, and use this amazing technology to improve their self-efficacy, their knowledge, their outcomes, and ultimately their lives.