Pupils will be allowed to quote work generated by the ChatGPT artificial intelligence system in their essays, the International Baccalaureate (IB) has said.
ChatGPT is an AI chatbot capable of producing content mimicking human writing. Accessible for free, the service can be used to generate essays, technical documents, and poetry.
The chatbot has been banned in some schools worldwide after students were caught submitting automatically generated essays as their own work.
But the IB, which offers four educational programmes taken by pupils at 120 schools in the UK, said it will not ban children from using ChatGPT in their assessments as long as they credit the chatbot and do not try to pass its output off as their own work.
Matt Glanville, the qualification body’s head of assessment principles and practice, told The Times of London: “We should not think of this extraordinary new technology as a threat. Like spellcheckers, translation software and calculators, we must accept that it is going to become part of our everyday lives.”
He said: “The clear line between using ChatGPT and providing original work is exactly the same as using ideas taken from other people or the internet. As with any quote or material adapted from another source, it must be credited in the body of the text and appropriately referenced in the bibliography.”
‘Sensible Approach’
The IB’s approach has won some support in the teaching profession.

Geoff Barton, general secretary of the Association of School and College Leaders (ASCL), said: “ChatGPT potentially creates issues for any form of assessment that relies upon coursework where students have access to the internet. Allowing students to use this platform as a source with the correct attribution seems a sensible approach and in line with how other sources of information are used.
“We would caution, however, that ChatGPT itself acknowledges that some of the information it generates may not be correct and it is therefore important for students to understand the importance of cross-checking and verifying information, as is the case with all sources.
“What is important is that students do not pass off pieces of work as their own when this is not the case, and that they use sources critically and well.”
Harder to Mark Schoolwork
A survey by the British Computer Society (BCS) found that 62 percent of computing teachers said AI-powered chatbots such as ChatGPT would make it harder to mark the work of students fairly.

Julia Adamson, managing director for education and public benefit at BCS, said: “Computing teachers want their colleagues to embrace AI as a great way of improving learning in the classroom. However, they think schools will struggle to help students evaluate the answers they get from chatbots without the right technical tools and guidance.”
She said machine learning needs to be brought into mainstream teaching practice, “otherwise children will be using AI for homework unsupervised without understanding what it’s telling them.”
School Bans
The proposal to incorporate AI into teaching practices has not been accepted by all educators.

New York City Department of Education (NYCDOE) spokesperson Jenna Lyle told Chalkbeat: “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”
Dangers of AI
Many people have raised alarms over the rapid development of AI. In June of last year, Google placed a senior software engineer in its Responsible AI ethics group on paid administrative leave after he raised concerns about the human-like behavior exhibited by LaMDA, an AI program he tested.

The engineer tried to convince Google to examine the potentially serious “sentient” behavior of the AI. However, the company did not heed his words, he claimed.
Tech billionaire Elon Musk has also warned about the dangers of AI.
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees of a National Governors Association meeting in July 2017.
“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
Sam Altman, chief executive of OpenAI, the company behind ChatGPT, has voiced similar concerns.

“We also need enough time for our institutions to figure out what to do. Regulation will be critical and will take time to figure out. Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” Altman wrote on Twitter.