
AI in education is here to stay, Holy Family board told

Dr. Alec Couros of the University of Regina, centre, gave a presentation on the uses of AI in schools, and the implications for teachers and students.

WEYBURN – The use of artificial intelligence, or AI, in schools is a fact of life now, and to ban it would be to ban technology from the classroom, trustees of the Holy Family school board were told on Wednesday evening.

They heard from guest speaker Dr. Alec Couros of the University of Regina, who shared how AI is being used, the impact it can have on the classroom, and the implications of using tools such as ChatGPT in teaching and learning.

Dr. Couros serves as the director of the Centre for Teaching and Learning at the U of R, and is a professor of Educational Technology and Media in the Faculty of Education.

Demonstrating some of the possible applications of AI that are currently available, he said there are definite implications for education in the classroom that teachers and board members need to be aware of.

“If you decided as a board to ban AI, you’re banning technology essentially,” said Dr. Couros, noting that some boards in the U.S. that initially banned it are now allowing it again.

“There are kids who are on Snapchat, and their first friend is an AI bot,” he said, noting that AI is a major part of most programs available right now.

In one demonstration, he had a video of himself talking about AI. “In seconds I can drop it in and speak in something like 26 languages,” he said, showing how his video could be changed to French, German and Mandarin with a keystroke.

In addition, with his talking-head video, he could type in content and the software would emulate his face and voice to say anything he wanted it to say. “It isn’t even me recording it, it was totally generated by AI,” Dr. Couros pointed out. “It doesn’t look that fake.”

This has implications for educators as well as for society, political leaders or any public figure, as the technology can be used to make any person appear to say anything, true or not. He noted the only check in place was a box to confirm it was truthfully him creating the video, and such videos are difficult to authenticate.

To show what else AI’s ChatGPT is capable of, he asked his phone assistant for some points about the implications of AI, positive and negative, and within seconds the essay appeared on the screen.

While this presents “a bit of scary reality”, AI can also provide very useful teaching tools that can be used in the classroom, he said.

He demonstrated how a student could present a sheet of math problems to the AI program, ask it to solve them, and have the answers filled in.

“It’s scary and powerful technology, but at the same time I have a lot of trust in teachers, and they know these sorts of things are out there,” he said.

There have been some “incredible advancements” in ChatGPT since it was first introduced, said Dr. Couros, noting that GPT-4 is far better than version 3 was, and “this is only what’s been released so far. It’s as much as society can take at this point. Some people speculate there’s already some artificial intelligence available, but the impacts on society could be something we don’t quite understand totally.”

“It’s narrow intelligence. It’s good at some things, but not very intelligent about everything. That’s where researchers are taking things beyond general knowledge,” he added.

For teachers, the program can be used to help create lesson plans. He demonstrated how he could type in a request for a Grade 4 lesson plan on geometry, based on the Saskatchewan curriculum, and shortly the GPT program produced a lesson plan including questions and reference materials.

After having the GPT program produce an essay on a given topic, Dr. Couros cautioned, “I don’t know how good this is. The better the input you type in for your request, the better the essay is generated. It looks very impressive, the writing is good, but I don’t know how accurate it is.”

An example he showed was to ask ChatGPT to provide an argumentative five-paragraph essay at a Grade 9 level on why students shouldn’t get homework. “Of course, teachers don’t like this,” he said. He then asked for the essay to be written “so that a five-year-old can understand it”, and the program automatically revised the essay to that level. He went on to ask for an essay about “Reaganomics”, and again asked for it to be rewritten at a five-year-old’s level.

One shortfall in relying on AI to write essays is that it can draw on false information and perpetuate it.

There are also several other tools available, such as Diffit, for differentiated instruction, or Quillbot, which can paraphrase paragraphs produced by AI and reword them to sound better.

Some tools can be used as detectors of AI in students’ work, but Dr. Couros cautioned that these tools are inadequate and can give false evaluations. His concern is that students who genuinely produce their own written work can be labelled as plagiarists or accused of using AI when they have not used it.

He gave an example of some paragraphs entirely written by AI; a detector scanned them and said they were 85 per cent written by AI mixed with human-written material. He ran the text through Quillbot, and the detector then said it was 40 per cent written by AI.

“Often it over-estimates. … There is no consistent good way of detecting that it’s AI-generated. There’s far too many false positives. Some kids who are not necessarily cheaters or are not cheating may get unduly caught up in that accusation,” he said, adding that the U of R’s policy is that a teacher cannot use an AI score as a definitive reason to launch an academic investigation of submitted work.

His best hope for AI, he added, is that with good policy and an understanding of how AI works, it can be used to support the work of a teacher, helping them write outlines but not providing everything. “This is something we all have to grapple with,” he said. “The possibilities are endless, for good or for bad. … This is not something we can avoid.”