Illustration by Neo Clark

PSU needs a plan for ChatGPT

AI tools are here, and we’re not prepared for what comes next

ChatGPT and other generative artificial intelligence (AI) tools appear to be on track to reshape the landscape of higher education, and universities across the country are scrambling to adjust to the changes that are coming. Portland State isn’t immune to the challenges and opportunities that AI tools like ChatGPT will bring, and it’s imperative for the university to formulate a plan to deal with them now, rather than later, when they may become too overwhelming to handle.

When incoming President Ann Cudd takes over this summer, university policy regarding generative AI will be one of the most pressing issues on the docket. It’s time to give the problem the attention it deserves—we need to figure out what the guidelines are going to be as soon as possible.

First, what is ChatGPT? A GPT, or Generative Pre-trained Transformer, is essentially an algorithm that can create new data based on the data it takes in, according to Time. ChatGPT is one such model, released by the company OpenAI.

I’m no computer scientist, so here’s a basic, and likely oversimplified, explanation of what ChatGPT does and why universities are so scared of it. ChatGPT takes prompts from users, like “write me an essay on the significance of the Spanish Civil War,” and uses information collected from various sources to generate a product based on that prompt. It can do a lot more than that, too, but that’s about as deep as I’m able to go into the technical side of things.

The possibility of students using ChatGPT to write entire essays has, understandably, caused a panic among faculty and staff at many universities. Vice reports a growing divide between those who see ChatGPT and other AI tools as an easy method for committing plagiarism and those who view them as a “convenient research assistant and nothing more.”

Currently, there’s an alarming lack of clarity in PSU policy on ChatGPT and similar tools. For example, the Dean of Student Life’s webpage on “Academic Misconduct” doesn’t explicitly mention AI anywhere in its explanation of the policy. It states that “academic misconduct” may include “submitting for credit work done by someone else,” which includes plagiarism and failure to cite sources—something that could certainly be applied to an essay written entirely by ChatGPT.

However, it’s that “could” that’s the problem. I can read that policy and infer that if I simply gave ChatGPT an essay prompt and turned in the paper it churned out, that would be a clear example of plagiarism. But where’s the line? If I give ChatGPT the same prompt, but go through each paragraph of the essay it generates and rewrite them in my own words, is that still plagiarism? If I use ChatGPT to generate a structure for my essay, which I then write myself, is that acceptable? What if I ask ChatGPT to summarize a selection of research materials for me, and I then use those summaries to write my essay?

Also, it’s worth noting that the current academic misconduct policy is based on human-to-human interactions—the plagiarism policy prohibits students from submitting work done by someone else, but not necessarily something else.

These may seem like nitpicks, and maybe they are, but it’s better to be over-prepared than under-prepared. With something as unpredictable as the future of machine learning, the time is fast approaching when obscure edge cases might become the new normal.

Of course, this isn’t only a problem for PSU, but for universities across the United States and elsewhere—and the consequences of approaching generative AI tools from a place of fear can be disastrous.

Take, for example, the response to ChatGPT at one university in Texas. In May, a professor at Texas A&M University-Commerce falsely accused an entire class of using ChatGPT to write their essay assignments, temporarily denying the students their diplomas and threatening their graduation status, according to Rolling Stone. The university later walked the move back, saying in a statement that it was “investigating the incident and developing policies to address the use or misuse of AI technology in the classroom.”

More frighteningly, the University of Texas and Texas State University systems won a legal battle in April when the Texas Supreme Court ruled six to two that the universities can revoke degrees from students after graduation if they determine the degree was obtained through academic misconduct, according to Inside Higher Ed. With the unclear boundaries around the use of ChatGPT, what precedent does this set?

As a student, I am terrified of using ChatGPT or similar tools in any of my assignments, even when they might be genuinely helpful. I would guess many students feel the same way—and that’s a shame. Generative AI, for all its dangers, has the potential to be a revolutionary new technology if we approach it with curiosity rather than fear.

There need to be clearer lines around good-faith use of AI technology in classrooms. Students should know exactly what constitutes academic misconduct with AI tools, but they also shouldn’t be afraid to explore the possibilities of generative AI. If we don’t encourage the adoption of AI in positive ways, we’re going to end up cutting off a valuable and exciting new avenue for academic exploration.

For example, the University of North Texas Center for Learning Experimentation, Application and Research, housed in the university’s Division of Digital Strategy and Innovation, has this to say on its page “Positive Uses for ChatGPT in the Higher Education Classroom”: “ChatGPT can be used as a tool for creativity as well as an accommodation, and teaching approaches can utilize it to innovate teaching and learning.” The center imagines AI being used in “writing assignments in which students actively analyze the tool’s strengths and limitations,” AI-assisted draft editing, automated text-to-speech or speech-to-text accommodations for students with auditory or visual impairments and more.

There seem to be a few realistic options for PSU to take on AI tools going forward. The first is to simply ignore the problem and hope it resolves itself, which is hardly the most practical solution.

The second option is to ban the use of ChatGPT and other generative AI altogether. This might be doable, and it may even be attractive to the university administration as a relatively mess-free solution. However, it would be a mistake. Trying to hold back the tide of technological advancement has never worked in the past, and PSU would risk falling behind as AI becomes more commonplace in the world.

The third option, and the one I believe to be the most reasonable, is to come to some kind of accommodation with AI in the classroom. Nobody really knows what that means yet, but it seems clear that AI is here to stay—the least we can do is find a way to make the most of it.

Speaking about ChatGPT at his latest press conference, PSU President Stephen Percy said, “The Provost has convened a committee because it’s sort of in the academic realm, and they’re looking at that…I think they’re just beginning to figure out what the guidelines are for that.”

Once Percy hands the role over to Cudd in August, I hope she will take up the issue as one of the first priorities of her term. Cudd’s time as Provost of the University of Pittsburgh should have given her hands-on experience with these thorny questions of academic integrity and exploration, and a matter as delicate as this needs an experienced hand.

PSU, as an urban research university in the heart of Oregon’s largest city, has a unique opportunity to lead the way on how universities approach AI. In this moment, there’s no better way for PSU to live up to its motto, “Let Knowledge Serve the City,” than to tackle ChatGPT head-on and become a model for universities across the state. The time is now to take the reins of generative AI and harness it for our own good.