AI is “anytime your technology is helping you think or is doing the thinking for you in some way,” such as spell check, Alexa devices, Google Home and when your car knows where you’re going. Generative AI is when the technology is generating something for you, such as Google Translate, dictation software or assistive technology.
“If we’re not literate (in the process of AI), if we don’t help our students become literate in that process, we are failing them, not the technology,” said Dillon.
“They’re all going to be expected to know how to utilize generative AI for good rather than evil, and as professors and educators, we need to make sure that we’re helping them understand these tools, get practice using those tools, and begin to figure out how to question and probe these tools so they know how to best use them when they could or should use them,” she said.
Adrienne Forgette, vice president of Academic Affairs at Clark State College, echoed Dillon, saying, “We look at AI as a tool — if used well, it can help students learn and be prepared for jobs where knowledge of it will be expected.”
Dillon, who teaches classes focused on media, such as social media, media law and media literacy, is leading the university’s committee on generative AI along with professors from the computer science, philosophy, natural science, biology and education departments.
“We try to find representatives from all different disciplines because AI is going to affect disciplines differently as well as the same,” she said.
Clark State also has a committee that’s rewriting its academic integrity policies so students understand when it’s OK to use AI and how to acknowledge their use of it.
“We want students to be aware of how AI can interfere with their learning and may not actually be appropriate for a task, whether that’s an academic or a work-based task. AI is not going away, and faculty and students need to understand it,” Forgette said.
‘Moral panic’
With generative AI becoming more accessible and available to the public, Dillon said it has caused “moral panic” to set in.
“It is sometimes scary and sometimes it can blow us away ... I think hammers can help build houses, but they can also tear out those nails and destroy those houses. I view technology in the same way,” she said. “With each new technology, there’s a moral panic. The good thing about moral panic is it helps us take pause, and it helps us try to become literate in the new technology before we form opinions about it, or at least it should.”
Dillon said that for staff still working to embrace AI, the hesitation usually comes down to a lack of understanding, a strong value placed on original creativity, negative past experiences with AI, or simply discomfort with new technology.
The committee surveyed faculty, and those who responded were most concerned about plagiarism, or students turning in work that’s not completely their own, which is both an ethical issue and a practical concern for professors.
Dillon said they’ve been working on syllabus statements that spell out each professor’s expectations: whether students may use AI, how they should tell professors they’re using it and where AI is not allowed.
She said students who use AI need to tell her and give her a printout, whether they used it for outlining, putting together a speech or checking for grammatical mistakes, so she can submit it to the learning management system and compare it with what they turn in.
“We’re treating it like a source. Tell me you use this source or tell me you use this tool, and how you used it. But what you’re turning in is something that you have created, maybe with the assistance of a tool, but the final work is your work,” she said. “If you’re going to use AI to write your papers, you’re cheating yourself, and you’re paying a lot of money to have a free tool.”
The right time
Some professors in Dillon’s department also put together a PowerPoint presentation and a recorded video for students to watch about AI: the good, where it’s from, where it’s going and what their expectations are.
“I want my students to understand how to use these tools and when the right time is to use them, when is the right time to question them and how we can tell whoever we’re producing something for why we used it,” she said.
In her own classes, Dillon has had students use AI for brainstorming or reviewing, along with activities to help them understand how it’s put together. For example, everyone in the class will enter the same prompt into ChatGPT, see what each student got and how the results differ, look for common themes and have a conversation about it.
“If you all use the same prompts for the paper, you’re giving up that learning process to a technology, (and) we do give up some of our learning processes to technology. Back in the day, (people would say) you think you’re going to have a calculator in your back pocket, so really, generative AI and AI is the calculator of the 21st century,” she said.
Best practices
Some instructors at Clark State are also using generative AI in their classes to help students understand its uses and limits, but most are still learning about it since it’s “fairly new.”
“We are offering professional development and training to faculty so that if they want to use it, they understand more of what it is, best practices for its use, how to set up good assignments, and how to evaluate them,” Forgette said.
Dillon uses generative AI in her personal and professional life, whether it’s for recipes, crafting, putting together reports or schedules, or creating rubrics for activities.
“I’m one of those that want to invite the students into the sandbox but recognize every once in a while, the castle’s going to fall and you’re going to get dirty, so you need to understand how to best use those tools in that sandbox,” she said.
“I’m also not naive enough to think that my students are never going to turn in something AI has generated in some way. I also know that I’m not good enough of a human to detect that, (but) our AI detectors are terrible and they’re just not keeping up with the generative AI technology,” she added, referring to how hard it can be to know whether you’re talking to a human or a robot, or whether something was generated by AI or by a person.
When it comes to using AI, Dillon said it’s something people will never be able to contain, and that it’s not harmful to students, but the way it’s used can be harmful to students’ experiences.
“If we ignore it and ban it, that could be harmful to students’ experiences because they’re going into a world where it’s being used, which means they need to understand when they’re encountering it, and they’re going into a world where they’re expected to understand how to use it, ethically and correctly, and if we ban it they won’t know how to do that,” she said.