Tian, a 22-year-old student at Princeton University, decided to build an app to detect whether text has been generated by a machine or written by a person.
“Every human wants to know the truth,” Tian said.
Over a few days in a Toronto coffee shop during winter break, he got to work. On Jan. 2, he launched GPTZero, which analyzes different properties of a text.
Tian, who studies computer science and journalism, said he expected only a few dozen people would ever try it. But he woke up the next morning stunned by the response.
By now, the tool has drawn more than 7 million views, he said, and he has heard from people all over the world – many of them teachers, along with college admissions officers. Many people have subscribed for updates as Tian works to improve the technology; he hopes to create something that will help teachers.
ChatGPT – a conversational language model that launched in November and is free and simple to use – can swiftly produce poems, math equations or essays on topics such as the causes of the Civil War, prompting concern that students will misuse the technology. And because it doesn’t copy an existing text, there is no easy way to be certain whether a human or a bot wrote the answer.
Some school officials – including leaders of public schools in New York City and Los Angeles – have banned access to ChatGPT in classrooms.
Tian is not the only one trying to craft technology that can distinguish writing created by human thought from that generated by a machine; there are plagiarism-detection companies scrambling to do just that. The organization that launched ChatGPT also is working on ways to signal the text was produced with AI.
But the quick response to Tian’s effort highlighted the breakneck pace at which technology is changing classrooms, teaching, and the ways that people define and understand learning.
The AI is really exciting, he said, but the technology needs some safeguards.
Tian learned about developments in AI in various ways, including AI-detection research at Princeton and during a summer internship at Microsoft. He had already been using Copilot, a tool that uses artificial intelligence to help find coding solutions.
“A lot of people are like . . . ‘You’re trying to shut down a good thing we’ve got going here!’ ” he said. “That’s not the case. I am not opposed to students using AI where it makes sense. . . . It’s just we have to adopt this technology responsibly.”
Technology is evolving rapidly, said Eric Wang, vice president of artificial intelligence for Turnitin, a company that uses software to help detect plagiarism.
The company can identify multiple forms of AI-generated text, he wrote in an email. It can detect writing produced by ChatGPT in its labs, and it expects to offer the tool publicly later this year when testing is completed.
“We do expect that as these tools improve, accurately identifying text created by ChatGPT will be possible, even certain,” he said.
And OpenAI, the organization that launched with funding from Elon Musk and others and that produced ChatGPT, is working on ways to mark text created with artificial intelligence. OpenAI’s policy calls on users sharing content to clearly indicate it was generated by AI.
“We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else,” a spokesperson wrote in an email, “so we’re already developing mitigations to help anyone identify text generated by that system. We look forward to working with educators on useful solutions, and other ways to help teachers and students benefit from artificial intelligence.”
Tian’s approach measures a few properties of a text, such as its perplexity (essentially, how unpredictable the writing looks to a language model) and its burstiness (how much that complexity varies from sentence to sentence, as human writing tends to, while machine-generated text is often more uniform).
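GPTZero’s actual implementation is not described in detail here, but the two ideas can be illustrated with a deliberately simple sketch. The snippet below uses a hypothetical smoothed unigram word model (real detectors use large neural language models) to compute a perplexity score for a passage, and a crude burstiness score as the spread of per-sentence perplexities:

```python
import math
from collections import Counter

def perplexity(text, model_counts, total):
    """Perplexity under a toy unigram model: exp of the average
    negative log-probability per word. Higher = more surprising."""
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        # Laplace smoothing so unseen words don't get zero probability
        p = (model_counts.get(w, 0) + 1) / (total + len(model_counts) + 1)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(words), 1))

def burstiness(text, model_counts, total):
    """Standard deviation of per-sentence perplexity. Human writing,
    mixing plain and complex sentences, tends to score higher."""
    sents = [s for s in text.replace("!", ".").replace("?", ".").split(".")
             if s.strip()]
    scores = [perplexity(s, model_counts, total) for s in sents]
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((x - mean) ** 2 for x in scores) / len(scores))

# "Train" the toy model on a tiny reference corpus.
corpus = "the cat sat on the mat . the dog sat on the log ."
counts = Counter(corpus.lower().split())
total = sum(counts.values())

# A sentence full of familiar words scores lower than a novel one.
familiar = perplexity("the cat sat on the mat", counts, total)
novel = perplexity("zebra improvised sonnet quixotically", counts, total)
print(familiar, novel)
```

This is only a caricature: the corpus, smoothing scheme and sentence splitting are stand-ins, and a real detector would score each word against a neural model’s predicted next-token distribution. But the principle is the same, which is why text that a model finds very easy to predict raises suspicion.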
His thesis adviser, Karthik Narasimhan, an assistant professor of computer science at Princeton, said that GPTZero has worked surprisingly well for a model developed quickly and that Tian is working to improve it. He also said the efforts have research potential beyond the practical application of detecting possible plagiarism, as people try to understand what the language models are doing.
Vincent Conitzer, a professor of computer science at Carnegie Mellon University, said he has heard considerable concern from colleagues about ChatGPT.
But, he said, efforts to identify machine-generated text could create a sort of arms race, spurring repeated adjustments to the technology to avoid detection. And some systems risk false negatives and false positives – maybe a student just has a nondescript writing style that reads like typical AI-produced text, which tends to be clear but generic. “Are you going to fail somebody?” he asked.
He said the watermark idea from OpenAI could be helpful.
But he said it may also require nonscientific efforts to combat cheating, such as professors refining their essay questions to require more complex thought, or drawing on local and current information that would not be widely available. They could require students to write in the classroom, on paper. They could follow up with questions about the writing, to ensure students have a full understanding, Conitzer said.
It’s possible students could even learn from the machine-generated text, which has some positives such as clarity, Conitzer said.
Tian said it would be sad if, years from now, people mostly relied on AI and writing became far more uniform.
“There’s something implicitly beautiful in human prose,” he said, “that computers can never co-opt.”