
Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach


When Antony Aumann, a professor of philosophy at Northern Michigan University, graded essays for his course on world religions last month, he read what he described as easily “the best paper in the class.” It examined the morality of burqa bans with clean paragraphs, apt examples and rigorous arguments.

A red flag immediately went up.

Mr. Aumann confronted his student over whether he had written the essay himself. The student admitted to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences – and that, in this case, had written the paper.

Disturbed by his discovery, Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students must explain each revision. Mr. Aumann, who may forgo essay assignments in later semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.

“What happens in class will no longer be, ‘Here are some questions — let’s talk about this between us humans,’” he said, but “it’s like, ‘What is this alien robot thinking, too?’”

Across the country, university professors like Mr. Aumann, department chairs and administrators are starting to overhaul classrooms in response to ChatGPT, prompting a potentially huge shift in teaching and learning. Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones.

The moves are part of a real-time grappling with a new technological wave known as generative artificial intelligence. ChatGPT, which was released in November by the artificial intelligence lab OpenAI, is at the forefront of the shift. The chatbot generates remarkably articulate and nuanced text in response to short prompts, with people using it to write love letters, poetry, fan fiction – and their schoolwork.

The tool has turned some middle and high schools upside down, with teachers and administrators trying to discern whether students are using the chatbot to do their schoolwork. Some public school systems, including those in New York City and Seattle, have since banned the tool on school Wi-Fi networks and devices to prevent cheating, although students can easily find workarounds to access ChatGPT.

In higher education, colleges and universities have been reluctant to ban the AI tool because administrators doubt the move would be effective and do not want to infringe on academic freedom. Instead, the way people teach is changing.

“We’re trying to establish general policies that support the faculty member’s authority to lead a class with certainty,” rather than targeting specific cheating methods, said Joe Glover, provost at the University of Florida. “This won’t be the last innovation we have to deal with.”

This is especially true as generative AI is still in its infancy. OpenAI is expected to release another tool soon, GPT-4, which can generate text better than previous versions. Google has developed LaMDA, a competing chatbot, and Microsoft is discussing a $10 billion investment in OpenAI. Silicon Valley startups, including Stability AI and Character.AI, are also working on generative AI tools.

An OpenAI spokeswoman said the lab recognized that its programs could be used to mislead people and is developing technology to help people identify text generated by ChatGPT.

ChatGPT has now jumped to the top of the agenda at many universities. Administrators are setting up task forces and hosting universitywide discussions to respond to the tool, with much of the guidance being to adapt to the technology.

At schools including George Washington University in Washington, D.C., Rutgers University in New Brunswick, N.J., and Appalachian State University in Boone, N.C., professors are phasing out open-book, take-home assignments — which became a predominant assessment method during the pandemic but now appear vulnerable to chatbots. They are opting instead for in-class assignments, handwritten work, group work and oral exams.

Gone are prompts like “write five pages about this or that.” Some professors are instead crafting questions they hope will be too clever for chatbots, and asking students to write about their own lives and current events.

Students “plagiarize this because the assignments can be plagiarized,” said Sid Dobrin, chair of the English department at the University of Florida.

Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he planned to teach newer or more niche texts that ChatGPT may have less information about, such as William Shakespeare’s early sonnets instead of A Midsummer Night’s Dream.

The chatbot can “motivate people who rely on canonical, primary texts to actually look beyond their comfort zone for things that aren’t online,” he said.

In case the changes are not enough to deter plagiarism, Mr. Aldama and other professors said they planned to institute stricter standards for what they expect from students and how they grade. It is no longer enough for an essay simply to have a thesis, an introduction, supporting paragraphs and a conclusion.

“We need to improve our game,” Mr. Aldama said. “The imagination, creativity and innovation of analysis that we normally think of as an A paper needs to seep into the B division papers.”

Universities are also aiming to educate students about the new AI tools. The University at Buffalo in New York and Furman University in Greenville, S.C., said they planned to embed a discussion of AI tools into required courses that teach undergraduate students about concepts such as academic integrity.

“We need to add a scenario to this, so students can see a concrete example,” said Kelly Ahuna, who directs the academic integrity office at the University at Buffalo. “We want to prevent things from happening rather than catch them when they happen.”

Other Universities Try to Set Boundaries for AI

Washington University in St. Louis and the University of Vermont in Burlington are drafting revisions to their academic integrity policies so that their definitions of plagiarism include generative AI.

John Dyer, vice president of enrollment services and educational technologies at Dallas Theological Seminary, said the language in his seminary’s honor code feels “a little archaic anyway.” He plans to update his definition of plagiarism to include: “using text written by a generating system as one’s own (e.g., entering a prompt into an artificial intelligence tool and using the output in a paper).”

The misuse of AI tools will most likely not end, so some professors and universities said they planned to use detectors to root out that activity. Turnitin, a plagiarism detection service, said it would add more features this year for identifying AI-generated writing, including text from ChatGPT.

More than 6,000 teachers from Harvard University, Yale University, the University of Rhode Island and others have also signed up to use GPTZero, a program that promises to quickly detect AI-generated text, said Edward Tian, its creator and a senior at Princeton University.

Some students see value in using AI tools to learn. Lizzie Shackney, 27, a student at the University of Pennsylvania’s law school and design school, has started using ChatGPT to brainstorm for papers and to debug coding problem sets.

“There are disciplines that want you to share and don’t want you to spin your wheels,” she said, describing her computer science and statistics classes. “The place where my brain is useful is in understanding what the code means.”

But she has concerns. ChatGPT, Ms. Shackney said, sometimes misstates ideas and misquotes sources. The University of Pennsylvania also has no rules governing the tool, so she does not want to rely on it in case the school bans it or deems using it to be cheating, she said.

Other students have no such qualms, posting on forums like Reddit that they have submitted assignments written and solved by ChatGPT — and sometimes done so for fellow students as well. On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems.

A video shows a student copying and pasting a multiple-choice exam into the tool, with the caption: “I don’t know about you, but I’m just letting Chat GPT do my final exams. Have fun with your studying.”

