Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach

When Antony Aumann, a professor of philosophy at Northern Michigan University, graded essays for his course on world religions last month, he read what he described as easily “the best paper in the class.” It examined the morality of burqa bans with clean paragraphs, apt examples and rigorous arguments.

A red flag immediately went up.

Mr. Aumann confronted his student, asking whether he had written the essay himself. The student admitted to using ChatGPT, a chatbot that provides information, explains concepts and generates ideas in simple sentences – and, in this case, had written the paper.

Disturbed by his discovery, Mr. Aumann decided to change how essays are written in his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students must justify each revision. Mr. Aumann, who may forgo essays altogether in subsequent semesters, also plans to incorporate ChatGPT into the classroom by asking students to evaluate the chatbot’s responses.

“What happens in class will no longer be, ‘Here are some questions — let’s talk about this between us humans,'” he said, but “it’s like, ‘What is this alien robot thinking too?'”

Across the country, university professors like Mr. Aumann, department chairs, and administrators are beginning to overhaul classrooms in response to ChatGPT, leading to a potentially huge shift in teaching and learning. Some professors are redesigning their courses entirely, making changes that include more oral exams, group work, and handwritten assessments in place of typed ones.

The moves are part of a real-time reckoning with a new wave of technology known as generative artificial intelligence. ChatGPT, released by the artificial intelligence lab OpenAI in November, is at the forefront of the shift. The chatbot generates strikingly articulate and nuanced text in response to short prompts, which people have used to write love letters, poetry, fan fiction – and their schoolwork.

This has turned some middle and high schools upside down as teachers and administrators try to determine whether students are using the chatbot for their schoolwork. Some public school systems, including those in New York City and Seattle, have since banned the tool on school Wi-Fi networks and devices to prevent cheating, although students can easily find workarounds to access ChatGPT.

In higher education, colleges and universities have hesitated to ban the AI tool because administrators doubt the move would be effective and do not want to infringe on academic freedom. This means that the way people teach is changing instead.

“We’re trying to establish general policies that support the faculty member’s authority to lead a class with certainty,” rather than targeting specific cheating methods, said Joe Glover, provost at the University of Florida. “This won’t be the last innovation we have to deal with.”
This is especially true as generative AI is still in its infancy. OpenAI is expected to release another tool soon, GPT-4, which can generate text better than previous versions. Google has developed LaMDA, a competing chatbot, and Microsoft is discussing a $10 billion investment in OpenAI. Silicon Valley startups, including Stability AI and Character.AI, are also working on generative AI tools.

An OpenAI spokeswoman said the lab recognized that its programs could be used to mislead people and is developing technology to help people identify text generated by ChatGPT.

ChatGPT has now jumped to the top of the agenda at many universities. Administrators are setting up task forces and hosting university-wide discussions to respond to the tool, with much of the guidance urging adaptation to the technology.

At schools like George Washington University in Washington, DC, Rutgers University in New Brunswick, NJ, and Appalachian State University in Boone, NC, professors are phasing out the open-book, take-home assignments that became a predominant assessment method during the pandemic but now appear vulnerable to chatbots. They are opting instead for in-class assignments, handwritten work, group work, and oral exams.

Gone are requests like “write five pages about this or that”. Some professors instead formulate questions they hope are too smart for chatbots and ask students to write about their own lives and current events.

Students “plagiarize this because the assignments can be plagiarized,” said Sid Dobrin, chair of the English department at the University of Florida.

Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he plans to teach newer or more niche texts that ChatGPT may have less information about, such as William Shakespeare’s early sonnets instead of “A Midsummer Night’s Dream.”

The chatbot can “motivate people who rely on canonical, primary texts to actually look beyond their comfort zone for things that aren’t online,” he said.

In case those changes aren’t enough to prevent plagiarism, Mr. Aldama and other professors said they plan to set stricter standards for what they expect of students and how they grade. It is no longer enough for an essay to have merely a thesis, an introduction, supporting paragraphs, and a conclusion.

“We need to improve our game,” Mr. Aldama said. “The imagination, creativity and innovation of analysis that we normally think of as an A paper needs to seep into the B division papers.”

Universities are also aiming to educate students about the new AI tools. The University at Buffalo in New York and Furman University in Greenville, SC, said they plan to embed a discussion of AI tools in required courses that teach incoming undergraduates about concepts such as academic integrity.

“We need to add a scenario to this so students can see a concrete example,” said Kelly Ahuna, who directs the University of Buffalo’s Office of Academic Integrity. “We want to prevent things from happening rather than catch them when they do.”

Other Universities Try to Set Boundaries for AI

Washington University in St. Louis and the University of Vermont in Burlington are drafting revisions to their academic integrity policies to include generative AI in their definitions of plagiarism.

John Dyer, vice president of enrollment services and educational technologies at Dallas Theological Seminary, said the language in his seminary’s honor code feels “a little archaic anyway.” He plans to update its definition of plagiarism to include: “Using text written by a generation system as your own (e.g., entering a prompt into an artificial intelligence tool and using the output in a paper).”

The misuse of AI tools will most likely not end, so some professors and universities said they plan to use detectors to root out that activity. The plagiarism detection service Turnitin said it will be adding more features for identifying AI-generated text this year, including text from ChatGPT.

More than 6,000 teachers from Harvard University, Yale University, the University of Rhode Island and others have also signed up for GPTZero, a program that promises to quickly detect AI-generated text, said Edward Tian, its creator and a senior at Princeton University.

Some students see value in using AI tools to learn. Lizzie Shackney, 27, a student in the law and design schools at the University of Pennsylvania, has started using ChatGPT to brainstorm for papers and debug coding problem sets.

“There are subjects that want you to share and don’t want you to spin your wheels,” she said, describing her computer science and statistics classes. “The place where my brain is useful is in understanding what the code means.”

But she has concerns. ChatGPT, Ms. Shackney said, sometimes misstates ideas and misquotes sources. The University of Pennsylvania also has no regulations governing the tool, so she doesn’t want to rely on it in case the school bans it or considers using it to be cheating, she said.

Other students don’t have such qualms, posting on forums like Reddit that they’ve submitted assignments written and solved by ChatGPT — and sometimes done so for fellow students as well. On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving programming problems.

A video shows a student copying and pasting a multiple-choice exam into the tool, with the caption: “I don’t know about you, but I’m just letting Chat GPT do my final exams. Have fun with your studying.”

Source: www.nytimes.com
