Adam Maksl
Faculty Fellow for eLearning Design and Innovations in UITS Learning Technologies, Associate Professor of Journalism and Media at IU Southeast
Opening up the discussion about ChatGPT and other AI tools, and what they might mean for the future of higher education.
By now many of us have heard about ChatGPT and other artificial intelligence tools.
There are probably more questions than answers, not just about what this technology is and how it works, but also about appropriate use and its possible effects on creative expression and information ecosystems (including the problem of creating and spreading misinformation).
What are these tools and how do they work?
The tool most people have been talking about lately is ChatGPT, a product from OpenAI. It's part of a subset of artificial intelligence referred to as "generative AI." Generative AI can create new data, like text and images, based on patterns it has learned from the large sets of existing data on which it is trained. ChatGPT is a chatbot intended to generate text in response to requests and input from users. Tools have also been developed to generate other types of data, such as OpenAI's DALL-E 2, which can produce images based on short strings of text provided by users. Similar technologies are starting to be applied to other media, such as sound and video.
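For the technically curious, here is roughly what asking one of these tools for text looks like in code. This is a minimal sketch assuming OpenAI's official Python package and an API key; the model name is illustrative, and interface details change over time, so treat it as a rough picture rather than a definitive recipe.

```python
# A minimal sketch of requesting text from a generative AI model,
# assuming OpenAI's official Python package and an API key stored
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; available model names change
    messages=[
        {"role": "user",
         "content": "Explain generative AI in two sentences."},
    ],
)

# The generated text arrives in the first (and here, only) choice.
print(response.choices[0].message.content)
```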
Can AI-generated output be passed off as human-generated? How can we address that reality in our assignments?
In some ways, this is an extension of plagiarism: presenting someone (or, in this case, something) else's ideas as your own. One important difference, though, is that unlike "traditional plagiarism," the work produced by generative AI engines is unique. It's based on the prompt provided by the user, and even the same prompt will produce slightly different results, as the sketch below illustrates.
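Why does the same prompt yield different output each time? These models typically sample from a probability distribution over likely next words rather than always picking the single most likely one. Here is a toy Python sketch of that idea; the word list and probabilities are invented purely for illustration.

```python
import random

# Invented next-word probabilities a model might assign after a given
# prompt; the numbers are purely illustrative.
next_words = {"powerhouse": 0.85, "engine": 0.08, "center": 0.05, "core": 0.02}

words = list(next_words)
weights = list(next_words.values())

# Sampling from the distribution, rather than always taking the top
# word, is why the same prompt can yield a different result each run.
for run in range(1, 4):
    print(f"Run {run}:", random.choices(words, weights=weights, k=1)[0])
```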
This makes detecting work generated by an AI difficult. Nonetheless, companies such as Turnitin have attempted to do just that. Some are skeptical of ed tech's ability to engineer our way out of this problem, seeing it as a sort of perpetual "AI arms race" between the technology used to generate text and the technology used to detect it.
A better approach may be to redesign assignments: using fewer simple fact-based assignments; prompting students to use examples from material unique to your class; allowing students to add their own personality, perspectives, and examples to writing assignments; and scaffolding larger writing assignments into smaller parts. IU Bloomington's Center for Innovative Teaching and Learning discusses these and other strategies for redesigning assignments. Your colleagues, especially in writing disciplines and writing centers, may also prove essential resources as you explore how to build stronger writing assignments.
Can these tools produce false and/or biased information?
Yes, because they're essentially trying to predict the words or phrases most likely to come next in a response. (Think autocomplete on steroids.) ChatGPT, for instance, will try to answer a question with a plausible-sounding response, even if it is completely inaccurate.
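To see what "predicting the next word" means, here is a deliberately tiny sketch of the underlying idea: count which words follow which in some training text, then predict the most frequent follower. Real models learn billions of parameters from enormous datasets instead of keeping raw counts, but the core task is the same.

```python
from collections import Counter, defaultdict

# Tiny training corpus; a real model learns from a large slice of the web.
corpus = "the cat sat on the mat and the dog slept on the mat".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return the word seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("on"))   # -> "the"
print(predict("the"))  # -> "mat" ("mat" follows "the" most often here)
```

Notice that nothing in this process checks whether a continuation is true, only whether it is statistically likely. Scaled up, that is the root of the plausible-but-wrong answers described in the example that follows.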
For example, I asked it to summarize a theoretical framework my research partners and I have developed called the "5 Cs of news literacy." ChatGPT wrote a very plausible-sounding summary of five concepts beginning with the letter C that were quasi-related to news and media literacy. But it was not our framework, or even close to it. If you didn't know better, though, it would be easy to believe its explanation.
In a world where we already feel overwhelmed by information, including some that is intended to mislead, could generative AI make things worse? As these tools and technologies develop, finding the signal in the noise may become more difficult and will require a greater emphasis on critical thinking.
Additionally, because these systems are trained on data created by humans within societal structures that carry inherent preferences and biases, those biases will shape the data the generative AI produces. For example, if an image generator is asked to produce a photograph of a tenured college professor, it might produce more images of white men because the historical images it was trained on reflect social inequalities.
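A toy sketch makes that mechanism concrete. Suppose, with entirely invented numbers, that 70% of the "professor" images in a training set depict white men; a generator that samples in proportion to its training data will simply reproduce that skew in its output.

```python
import random
from collections import Counter

# Invented demographics for "professor" images in a training set;
# the 70/15/10/5 split is hypothetical, for illustration only.
training_labels = (["white man"] * 70 + ["white woman"] * 15
                   + ["man of color"] * 10 + ["woman of color"] * 5)

# A generator that samples in proportion to its training data simply
# reproduces whatever skew that data contains.
generated = [random.choice(training_labels) for _ in range(1000)]
print(Counter(generated).most_common())
```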
How will generative AI like ChatGPT affect the role of faculty?
Late last semester, I attended an open forum about what ChatGPT might mean for the future of higher ed. One participant said something that has stuck with me these last few weeks: much of what we do as faculty, especially in imparting facts, presenting curriculum, and even evaluating learning, is going to be questioned. In fact, when I showed ChatGPT to a family member over the holiday break, they questioned whether these technologies will replace professors.
I don't think we'll be replaced by AI, but these technologies will force us to think critically about our day-to-day work and clearly articulate our value. They will likely accelerate the adoption of active learning practices and push us to place greater emphasis on relationships, community, and interdisciplinary inquiry.
How important is it that we introduce these tools to students?
Just as in academia, generative AI will influence the economy and civil society in ways that will most certainly affect the world we're preparing students to enter. And people who know how to use AI effectively, not the AI itself, will be most valuable in the future job market.
AI literacy is one of several emerging digital skills and competencies that are essential for our students to build. We must help them develop digital literacies so they can use these new and emerging tools and technologies efficiently, effectively, and especially ethically.
At IU, we've developed the Digital Gardener Initiative (DGI), a university-wide program focused on integrating digital ways of knowing, doing, and making into our students' college experiences. A signature program of DGI has been our Faculty Fellows Program, where colleagues from across multiple campuses and disciplines come together to integrate digital literacy into their curricula. We're planning more ways to bring emerging skills like AI into our work in that program, and all faculty are encouraged to apply for future cohorts.
In the meantime, if you're exploring ways to integrate AI and tools like ChatGPT productively into your work with students, you might want to check out CITL's resource on AI-generated text. Also check with your teaching center for additional resources.
Continue the discussion
The Learning Technologies Division of UITS has convened a group to help faculty begin exploring these questions and issues. So far, we've conducted two events, and our next, a hands-on workshop with DALL-E and ChatGPT, will be on April 10. Many of these will be listed in the Upcoming Events section of Teaching.IU. Here's a video from our first event, a panel of faculty experts exploring challenges and opportunities. Also check with your campus teaching centers to learn what other programs they might be planning.