Exploring ChatGPT’s Presence at Newton South

One student’s initial reaction was that he’d never have to read again. 

Another felt optimistic that this new technology could empower humanity. 

One said that her morals have never been challenged like they have been in the past few months. 

Another said that nothing that he heard about it sounded real.

All of these were real reactions from Newton South students regarding the sudden explosion of artificial intelligence. As AI's capabilities rapidly expand, ChatGPT and similar tools are drawing in more and more high school users. 

With quick, easy, and free access to these ever-so-powerful tools, students and teachers across Newton South have responded in varied ways, particularly when it comes to handling artificial intelligence morally and beneficially. 

“My honest initial reaction was that I’d never have to read again,” said one senior who asked to remain anonymous. This self-proclaimed “Chat merchant” — we’ll call him M. — explained that artificial intelligence websites alleviated some pressure in a stress-filled junior year, often even leading to better grades than he felt he deserved. “There were definitely a few assignments that I got an A or A-minus on that I realistically wouldn’t have gotten higher than a B, B-minus on without GPT.”

When OpenAI publicly released ChatGPT on November 30th of last year, it took the world by storm. According to analysis by web analytics firm Similarweb, the site reached 100 million active users within just two months of launching, becoming the fastest-growing application in history. 

Quickly, students found ways they could use this new technology to their advantage — some more reliant on it than others. According to a Study.com survey of over 1,000 students, almost 90% of students have used ChatGPT to help with homework assignments.

Senior class president Kevin Yang said that he uses ChatGPT as a way to “accelerate [his] ability to learn.” For Yang, this means using the site as a “study buddy.” 

“I asked ChatGPT, ‘Hey, can you give me a quiz about the first five chapters of this book in French?’” Yang explained that it executed flawlessly, not only creating a comprehensive practice test in seconds but also providing specific feedback on his short responses. “I was a little surprised, but it knew exactly what it was talking about.”

But not all students are using AI as a learning tool; some are letting the technology do their work for them.

“It took me a few weeks to fully grasp ChatGPT, but once I did, I was using it on most of my history and English assignments,” said M. “I’d usually have it summarize texts … but sometimes it would help me write a paragraph here or there.”

Unlike M., Yang almost never uses AI in his humanities courses. “If I write an essay, all those words are mine.” He admits that this choice has less to do with morals and more to do with the fact that while websites like ChatGPT are “great at solving technical things, they struggle with creating compelling writing.”

While some students might wish for ChatGPT to excel in all areas, there could be downsides to AI becoming too advanced, specifically in creative writing. 

“I am very concerned that students will not learn to write and think effectively,” said history department head Jennifer Morrill. “In an age of tweets and texts, we’re used to soundbites, not deep thinking. My fear with ChatGPT is that students will just type in the question and see this beautiful response. It’s a cool tool, but it shouldn’t think for us.” 

Beyond potentially hindering students’ learning and thinking, ChatGPT and other AI tools are pushing people to pay closer attention to (or ignore) difficult moral decisions.

Another Study.com survey of over 200 K-12 teachers revealed that 26% of teachers have caught a student cheating with ChatGPT.

“My use lies on the ethical border,” said an anonymous senior. “I try to use it more as a resource than a substitute for my own work, but when you know you can save two or more hours and still get the same grade, it’s hard not to fall into that trap.” 

Other students felt more sure of the ethics of their practices. “So long as there’s no plagiarism involved, it’s not unethical. I never plagiarized,” said M. When pressed to clarify his definition of plagiarism, M. replied: “I never copy and pasted a whole paragraph or anything like that. Maybe I take a few vocab words here and there, but the thoughts were mine and that’s what counts.” 

Although senior Ben Detrich didn’t admit to any misuse himself, he recalled hearing many stories of students who seemed to “abuse the technology” from his perspective.

The problem is that everyone defines their own ethical boundaries. What one person considers fair use, another may consider a clear violation.

“I don’t think the line is gray. In fact, it’s very clear,” said 11th-grade history teacher Lily Eng, who taught both M. and Yang. “Am I going to cite [ChatGPT] or not? Am I going to take these ideas?” Eng explained that while the categories of proper use and misuse of ChatGPT are clear to her, what’s difficult is figuring out how to approach situations of misuse. “I couldn’t prove it, but it clearly sounded like a robot and not what they wrote in class … I reminded them gently, ‘You should write your own work.’” Eng recommends using AI as just another source. “It’s like Google — just another way to find information.” 

As long as students aren't overly reliant on this technology, there are beneficial ways of using it — but students and teachers need to actively seek out those honest and constructive uses. 

“It’s not going away,” said Eng. “So we just need to embrace it, but also be morally aware.” 

Yang agrees. “You give man fire, you give him a knife, and there’s always the possibility of someone getting stabbed or [burned]. But you can also cut with a knife and cook with fire. [AI] is no different — the benefits outweigh the dangers; it’s on us to use it properly.”

But what does using artificial intelligence properly look like?

Morrill preaches that the best way to embrace these new technologies is to use them as brainstorming tools. She gives an example of a lesson she presented to students on the Sepoy Mutiny. Asking ChatGPT to produce an example response to her prompt, she was able to spend 30 minutes editing it rather than hours drafting from scratch. Morrill explains that this approach is much safer for someone who already knows a lot about the topic and can identify any inaccuracies and make changes when needed. 

Eng warns of cognitive dissonance — when one’s beliefs don’t line up with their actions. She explains that many students are aware of ethical and unethical uses, but even if they’re clearly using AI unethically, if they don’t feel guilty, they can justify it to themselves. “[Newton South] is a pressure cooker. People feel stressed out with all that’s going on and begin to lean on AI too heavily — often unethically. They don’t intend on being untruthful. But it sort of spirals that way.”

Eng thinks students can learn from Yang. She believes “top-achievers” are the ones who usually fall victim to reliance on tools like ChatGPT due to unmanageable stress levels. But she cites Yang as an example of someone who uses this technology in an intellectually curious and responsible way. “He does have a very different approach in his philosophy of things … [and] a very unique skill set. But that doesn’t mean that [students] can’t approach [AI] the same way he does.”

Less than a year after ChatGPT’s release, plenty of ambiguity remains about its use in educational settings. But Yang is sure about one thing: AI is here to stay. 

“I think any attempts at shutting it down are futile. It would be an injustice for teachers to deny students the ability to learn how to use this tool because this is the future. This is how humans will greatly accelerate how they work, how they study. And so we just have to embrace it.”