Last November, OpenAI, a relatively small artificial intelligence lab in San Francisco (albeit one backed by a $1 billion investment from Microsoft and co-founded, in 2015, by Elon Musk), introduced a newly developed chatbot, ChatGPT, that has made impressive inroads into our understanding of both the promise and the challenges of artificial intelligence. The company released a predecessor model, GPT-3, in 2020; it was among the first AI tools to respond to prompts in viable, human-like text that is for the most part grammatically correct and, it is hoped but not guaranteed, factually correct as well.
Google has been answering queries for years, but not with the succinct conversation, commentary, and suggestions of these new chatbots. ChatGPT (short for Generative Pre-trained Transformer) now sits at the cutting edge of a fast-growing market. Its core function is to mimic a human conversationalist, and it generates eerily articulate, nuanced, relevant text: it politely apologizes when it does not understand a prompt, asks for clarification, explains concepts, poses questions, and offers related asides, all in clear, accessible language. It can also answer test and exam questions, summarize research, write poetry, plays, songs, and stories, play games, write and debug computer programs, and translate languages and dialects from around the world. And it accomplishes all of this speedily and in a convincingly human voice. What more could a struggling student, an aspiring professional, or any social butterfly want?
The immense attention on ChatGPT stems from its training on a massive corpus of high-quality text, thought to comprise hundreds of billions of words, which allows it to generate coherent, grammatically correct responses on a wide range of topics and in a plethora of styles. On the downside, its factual accuracy is uneven: it can produce plausible-sounding but incorrect or nonsensical responses, known in AI parlance as hallucinations. Of course, we are in the early innings here; the more the system is used, and the more feedback it receives, the better it is expected to become.
ChatGPT has already been caught writing papers for students. Because it sounds so much like us and produces logical answers, the education sector, caught somewhat off guard, is now scrambling to figure out how to deal with this new, very effective artificial intelligence. Many public schools have banned the chatbot on their campuses, fully aware that students can access it anywhere else. Most colleges and universities, believing that ChatGPT's potential as an educational tool far outweighs its risks, have gone another route: professors are revising their curricula and teaching methods, replacing take-home exams with in-class assignments, handwritten papers, and oral exams. They are also holding required conversations with students about academic integrity and plagiarism, and they are adopting detection tools such as GPTZero, created by a college undergraduate, and Turnitin. OpenAI itself is working on technology to identify ChatGPT-generated text.
ChatGPT is not the only generative AI chatbot. Google has built LaMDA and has called co-founders Larry Page and Sergey Brin back to help, OpenAI will soon release GPT-4, and more entrants are coming from Silicon Valley. This is just the beginning of a generative AI eruption that may, and probably will, change the way we think, work, and create.