Description of the video:
>> [MUSIC] Welcome to Conversations with the Connected Professor. I'm Laurie Burns McRobbie, and today we're talking about generative AI and where things stand in higher education -- and, of course, here at Indiana University -- as we approach the nearly one-year anniversary of the emergence of ChatGPT. I'm joined here by three guests: Dr. David Crandall from the Luddy School of Informatics, Computing, and Engineering; Dr. Cindy Hmelo-Silver from the School of Education; and Dr. Sara Skrabalak from the Department of Chemistry in the College of Arts and Sciences.
>> Let's get right to our conversation. We'll start very generally, and David, maybe with you: how has your thinking evolved since the emergence of ChatGPT nearly a year ago, in terms of what its impacts will be and how it changes teaching, learning, and research?
>> Well, I'm a computer scientist, I work on the technology underlying AI, and I've been working in the field for about 25 years, but the field really goes back more like 70 years. Throughout the whole history of AI, there have been these ups and downs: moments of excitement and hype, followed by moments of crash when all the promises didn't come true. I've had that experience over and over again, actually, in just my short career, and I feel like I've had a similar experience with ChatGPT. When it first came out, I was so impressed that there was this tool out there that you could have a natural language conversation with, that seemed to know all about the world, that could help you solve problems, that could help you generate ideas, that could help you write Shakespearean sonnets about whatever you want. And yet, the more I've used it over time, the more skeptical I am about how much it really understands about the world. It's a language model that has been trained on a huge amount of the text on the Internet; it has a lot of biases because of that, and it's locked in place at the time it was trained, in 2021 or something like that, so its world view isn't being updated over time. It required a fantastically enormous amount of money, computing power, and energy to train. And so it's interesting. At the beginning I thought, oh my gosh, this changes everything. Now I feel, yes, it has a lot of impacts and we should all be looking at it, both in higher education and throughout society, but in some ways I think all of the fundamental things about the world are the same as they were a year ago. There's still misinformation, we still need critical thinking. We still need to teach our students how to do problem solving. We still need to teach them logical thinking and communication, and ChatGPT could help with some of that, but gosh, I don't think it's a solution to really anything.
>> Cindy, what are your views?
>> I think when it first came out, I was at a PI meeting for our National Science Foundation AI Institute. So it arrived at a time when everybody around me was talking about it, among people who knew a whole lot more about it than I do, because I'm an educational researcher and I study how we use technology to help people learn. I think there was a lot of hype about this as a magic bullet for education. Then there was the, oh no, what is ChatGPT going to do to education? [LAUGHTER] And now I think we're trying to figure out where we need humans and AI to work together. Where are the humans in the loop, and how do we think about that in more sophisticated ways? What are ways we can use this as an opportunity to teach people to think critically about the information that ChatGPT and other generative AI models can spit out? It's been interesting: for example, if I ask it to give me a bio of one of my colleagues, it'll tell me that they're one of my students, because we've written a lot together. So it takes lots of small reality checks. I think it's both the critical thinking and asking where we can have partnerships that are going to be productive. So, trying to find this middle ground between oh no and oh boy, yeah.
>> Sara, how would you add to this?
>> Well, I'm not an active researcher in AI, but I like new technology and trying it out; sometimes I'm a fast adopter and other times I'm not. In this case, I jumped at the opportunity to try out ChatGPT, and quite quickly I became, I guess, a little dismissive of it, because it was very clear how many errors were being introduced into a lot of the generated content. I could see that it would be useful for fairly routine tasks, but for higher-level tasks it seemed like it was going to fall short on a lot of the things we were hearing and reading about in the papers. At the same time, I am an editor-in-chief for two scientific journals and serve on the publication ethics committee for the publisher, and by January all of our conversations had turned toward the ethical use of generative AI tools in publishing. So on one hand I was seeing the shortcomings of ChatGPT; on the other, I was being completely inundated with the potential of what it could be with additional development and additional training. That has really left me with this sense that we're often quite unprepared for new technologies when they're released, and that we should be thinking ahead into the future.
>> You know, we're talking about ChatGPT here, and, David, I think you can confirm this: it took even some experts in the field by surprise, people who really weren't expecting a tool of that level of sophistication to emerge as quickly as it did. But ChatGPT isn't all that we mean when we talk about generative AI, and I wonder if we can say a little bit more about generative AI more generally. We're having this conversation because of ChatGPT, but it goes well beyond that, and depending on what field you're in, the ways those tools can be used will obviously vary greatly. And maybe, Sara, this picks up your point about what we should be looking ahead to.
>> Yeah.
>> David, starting with you, where do you think we're headed with generative AI, at least in the short term?
>> Generative AI is a very general term. As you said, with ChatGPT we're thinking about it generating text, but there's also been a lot of work that generates images. There are these tools like Midjourney and DALL-E where you can go in and type a prompt, like to create a photo of something, and it will create for you a photo, an image, or a drawing. And often those images look amazingly like a real-world image or something that a real artist would have painted or created. There are opportunities and challenges there as well; there's a lot of concern that those tools, which have been trained on the work of thousands of artists around the globe, are basically plagiarizing that work. But it's also exciting: it makes it so that a person like me can create [LAUGHTER] an artistic drawing that I wouldn't really be able to do. And there are other forms of generative AI. For example, in science, and Sara, you probably know more about this, for things like drug discovery, or for designing parts for airplanes: instead of having a part that is created by a human and then tested in a wind tunnel, maybe the AI itself could generate the part by running lots of simulations and figuring out what that part could do.
>> Yeah, and certainly in medicine, those things can be very helpful in testing drug effects in an artificial environment and not on you, for example. [LAUGHTER] Cindy, anything you're seeing, perhaps beyond text generation, that we ought to be keeping our eye on with generative AI?
>> So there's certainly talk about people using this to help generate course syllabi, for example; that's one thing I've read about. And you hopefully have people who know enough about the content to be able to look and say, this makes sense or this doesn't make sense. People are still trying to figure out what to do about writing and about written assignments; I still feel I'm reading more about what people are worrying about than what they're actually doing. But it can help faculty in generating syllabi. It might be able to help with coaching in project-based learning classes, where you've got things going on in small groups, or help summarize what's happening and give faculty the information to know what's going on, especially so they can make changes in large classes. So we might also be thinking about how it supports teaching in ways we hadn't thought about before, and in ways that faculty who don't have really strong tech backgrounds might be able to take advantage of.
>> Maybe in a basic chemistry lab: are there other things going on there that are pointing the way to new avenues to explore?
>> Yeah, people are using ChatGPT and related tools to create or generate new assignments. But one of the discussions I had recently that I thought was quite exciting was about how people were also using it to look at what products would be generated from a particular type of assignment, and then modifying or coming up with creative alternatives to the activities they would use in their classroom, so that the work would not be something so easily generated through an AI tool. There are ways in which our engagement with the generated content can help us frame what people should know, but also where we should be looking for new creative avenues as well, if that makes sense.
>> Creative avenues in the discipline, but also in the lab or the classroom.
>> Yes.
>> How students learn and maybe even how research takes place.
>> Definitely.
>> So, thinking about students, I want to ask a number of questions about this. I'm wondering if any of you are seeing a difference in what's going on. Obviously, once ChatGPT hit, it hit immediately, and it probably hit immediately with your students suddenly using it to complete assignments and so forth, maybe a little bit ahead of where each of you might have been in setting expectations and constraints on what they can use it for. In my own teaching, and I've done very little, I had to think about: can my students use ChatGPT to do this assignment? And if they do, what am I going to do about that? I'm sure those conversations went on. So I'm curious about your experiences with that, but also whether there's been any evolution with the students over the last year, because they've been out using this probably quite a lot. Are they getting more sophisticated? Are they getting bored with it? [LAUGHTER] Are they going back to the old-fashioned ways? That may be a stretch. I'm just wondering, any of you. Cindy, maybe you want to start here.
>> I think graduate students are certainly thinking not about using it to write, but maybe using it for editing, or for a jump start on how to get started writing a paper.
>> I can actually comment directly on this, and this gets at being caught off guard, because until recently I was Director of Graduate Studies in the chemistry department, and I was on a student's qualifying exam committee in February, so only shortly after ChatGPT had come out. And it was exactly what you mentioned about using it to edit a qualifying exam document. The committee was discussing how well written the document was, better than what we had anticipated, and that was when it came out that the student had used ChatGPT to enhance the language. And I'm sitting there thinking, wow, we are without any policies to really guide the student. So on the fly, the committee had to decide how we wanted to handle that, and it led to a broader discussion about graduate standards. But this is where my role as an editor-in-chief came into play, because we had already been discussing all of this at the publisher, and we were able to pull in our professional society's guidance and map it onto our graduate program documents. But I think you're exactly right that students were jumping at it very quickly and using it in ways we didn't necessarily anticipate, and we were caught off guard and had to respond to that.
>> And I think, yeah, in that situation I'm not sure what the right answer is, particularly for students who may have trouble with language and who are using it to clean up their writing, as opposed to doing the writing.
>> In this case, English might not be their first language, or they have a learning challenge or an expression challenge of some kind, and this can be exactly what they need. David, your experience here?
>> So I agree with everything that's been said, but I'm going to try to put it in a slightly optimistic light: I actually found it an opportunity to revisit my assignments and to think about what I really wanted the students to learn and what I was really measuring. I think in my own class I was overemphasizing the value of the product and not the process. I had this moment last semester where I was grading assignments; I had asked the students to write a project report, and I was quickly trying to read the reports and thinking, I bet a lot of these were written by ChatGPT. And then I was thinking, and I bet I could use ChatGPT to grade these. And I was like, what is wrong with this situation? Why am I forcing the students to write this document that I don't really want to read? Because it wasn't really measuring the thing I wanted to measure. This was about doing a research project, and instead of looking at things throughout the process, I was trying to judge everything based on the product. So I pivoted a little bit. I still required them to write a report, because I think that's very important, and I tried to give parameters around how they could and could not use ChatGPT. But I also added a poster session, and we invited the whole school to it. You stand by your poster and explain your work to me and to everybody else. Sure, they could maybe use ChatGPT to write part of the poster, but otherwise they really have to understand and be willing to explain what they did, not only to me but to all of their peers.
>> That's right.
>> And of course, it all depends on the context. It depends on the type of class and what you're actually trying to get students to do. But for me, anyway, it was this opportunity to reflect on how I'm doing as an instructor. And I think some of my laziest assignments were the ones that were easiest for ChatGPT to solve.
>> I've actually heard that from professors over the course of this last year: people who actually required students to use it, so that it levels the playing field, if you will, for the whole student body, and it's expected. But then, and again, this person said, that meant completely rethinking how they were asking students to work, doing a lot more with oral presentations and in-class assignments to get around the ability to skate through an assignment without really learning anything from it. Are any of you seeing work, whether from your experience with your colleagues or, in your case, Cindy, from what you're doing in the center that you run, on how we should prepare students not only to complete the assignments given in a classroom, but to graduate and go out into the world with a deeper understanding of the tools they've encountered in their coursework? Really interrogating the tools themselves and not just the outcome, which is a little bit of what you were talking about. Are you seeing that work going on?
>> So certainly in the K through 12 arena, we're seeing funding for AI education from elementary school through high school. And I would assume that before long, along with computer literacy, we'll want to see AI literacy more specifically, so students know: what are they looking at? Where did this come from? What kind of data is being used? What biases may be there? Whose data is not included?
>> So in a sense, data literacy becomes a pretty key part of many disciplines, I would think, if not all of them.
>> Within chemistry, we have definitely already started to build modules into our professional development course and our scientific communication course that help the graduate students understand the underlying principles of how these tools work, so that they can understand what the tools can do, what they can't do, and what they might be able to do in the future. I think that is going to be essential at all levels of instruction moving forward. You don't have to be an expert in AI, but you need to have some understanding of how these tools work and what they are capable of.
>> Anything else to add? You're certainly in the business of learning about these tools and creating them, and I know that's really what you're doing right now. But I'm wondering if any of the new tools coming on the scene are also changing what you hope your students will carry with them out into their working lives.
>> Yes and no. I think fundamentally the skills I want to give the students, no matter what the classes are really about, are foundational things: problem solving, critical thinking, logical thinking, how to make an argument, how to communicate effectively. All of those things are enduring, regardless of changes in technology. In computer science, we have new languages pop up every couple of years, and fads of lots of different kinds, new frameworks and so on. We do, of course, teach some of those, because we want students to be able to go out into the workforce and start using the tools that are being used from day one. But overall our goal is to teach these foundational skills, because we know that in five or ten years the technology is going to be completely different, and the successful students are going to be the ones with those foundational skills, able to tackle whatever comes.
>> What's culturally okay for us to be doing with these tools, and with the data that these tools are relying on?
>> I don't think there really are a lot of cultural norms at this point. It's quite a disruptive technology, so if you talk to one person you'll get one opinion, and if you talk to another person you're going to get another opinion. That's why, in the context of higher education, we've been putting emphasis, at least within my field, on disclosing use of these tools and how you're using them, and treating them as part of your methods, until some of these details get sorted out and we do have cultural norms. I've also been following very closely a lot of the concerns over the use of people's likenesses in the generation of art or music or screenplays. That, I think, gives a lot of pause, and we don't yet know what the legal framework for a lot of this is going to be.
>> Obviously, this is playing out in real time with the Screen Actors Guild and so forth; those are exactly these kinds of issues.
>> I think there's this underground norm, where people will whisper, I wrote this letter of recommendation [LAUGHTER] and used ChatGPT to help, or, I started this paper with it. But it's whispers, because I think people are still trying to figure out what these norms are and what they should be. Is it okay to use this if you did that? I have at least one little letter I wrote that I didn't end up using, because I had let ChatGPT help me and I didn't know if that was okay. So we're trying to figure it out: we can try these things, and they might save us time, and they might be okay, but is it still us writing it? Around issues of disclosure, and around what's okay and what's not okay, I think people are still struggling with the ethics of when it's okay to use these. The Screen Actors Guild and the writers are some really clear examples, but as we do the little things, I think we're still trying to figure out what those norms should be.
>> So I am on the program committee of a major conference in computer science, and I've been wrestling with some of the same issues that Sara was talking about with her journals. One of the exciting things that's already come up is that it really lowers the barrier for grammar specifically, because I think it's fundamentally unfair that I, as a native English speaker, can more easily write a paper for an international conference, while people all around the world cannot, and their papers might get rejected just because of grammar. So in this conference we've embraced it: yes, you can use ChatGPT to clean up your grammar. But everything you write in your paper is your responsibility. So when ChatGPT makes up a reference that doesn't exist, that's really serious.
>> Well, I think it does all feel a little like cheating to use it for certain things. But one of the things I did in the course I taught was simply to say that students needed to disclose that they were using it, and they needed to disclose the prompts they used to generate the text. One small observation, from the tiniest of samples: nine students in a one-credit graduate course. One student used ChatGPT, and I could tell the difference. There was something very stilted and clinical about how precisely the question got answered, with very little personality. Now, she could have gone back and added that, of course. But to some extent, certainly the conversations I hear are about transparency: about having the ability to tell, and to figure out whether something has been generated. And that's got to be absolutely crucial in peer review, because of the ability to generate not just text, but the results of a scientific experiment.
>> Well, I think that's one of the key points to get across to students and also to researchers: ChatGPT and these other tools are not authors. They cannot take responsibility for the content being put forward in whatever document you're generating. That is the distinction between the generated content and what you might turn in: if there's an error in the references, that's on you as the author. Thinking about who ultimately has responsibility for the content can, I think, help put some boundaries on how people might use these resources and how to think about the ethics underlying their use.
>> Someone, and David, it might even have been you, described ChatGPT and other tools like that as being like a really smart first-year undergraduate helping you with your research: you would not let that individual just put stuff out there without seeing it and having it go through you. You're the quality control. That's actually a useful way to think about these tools.
>> But I really like your point, David, about how it could potentially level the playing field for some individuals and remove biases based on language use, because that is definitely, I think, a concern in academia.
>> What would be a good starting point for someone who's not really sure how to proceed in terms of structuring student work, or perhaps research work? Generally, what resources would you point them to?
>> I think play with them. [LAUGHTER]
>> If you haven't yet, copy and paste your assignment into ChatGPT, and see what happens. [LAUGHTER]
>> I got very interested in this general topic simply by trying out ChatGPT and other related technology. From there, I wanted to start reading more about the underlying ethics and philosophy and enhance my understanding of how these tools work. And for that, I listened to a lot of podcasts. [LAUGHTER]
>> Have you found one that you thought was particularly useful?
>> I like the Philosophize This podcast, and I also thought some of the discussions Ezra Klein has had on AI were quite interesting and good entry points. And there are often other references and books recommended on those podcasts that you can use to go deeper.
>> So, Cindy, anything that you listen to? You're doing a lot of work in this area; maybe there's something you want to tell us about.
>> I guess a combination of doing a lot of playing, looking at what happens when I keep asking the same thing over and over and getting different answers; I think that's been interesting. Thinking about the prompts, and trying to read through a lot of the reports that have been coming out from the Department of Education, from UNESCO, and from the National Academy on using AI for research and AI for education. So I've done the playing, and then the reading, and I'm trying to think about where I stand and what the next frontier would be for me in playing with AI in education and using these kinds of large language models.
>> I echo all of that. I also think, like you said, it's probably very discipline-specific, so a lot of the very useful conversations I've had have been with colleagues, both in my department and in other departments, and about how other universities are handling these things. And if folks are looking for local resources, we have this new Luddy AI center, and there's a website for it where we're trying to collate the various events and resources around campus. There's a whole bunch of AI-related lectures, series, and symposia coming up over the next year or so.
>> Are any of you aware of further work going on about where the institution as a whole is going, or should be, with respect to general policies governing faculty, staff, and student behavior around this? Or is this still so new, so evolving, so discipline-specific that those policies still have to emerge? And of course they have to emerge outside the university as well. It's just a question of whether you're seeing it.
>> Yeah, well, I know that last year they actually reviewed the academic code to see if revision was needed to encompass anything coming from generative AI. But it's written in such a way that it withstands the test of time and the evolution of different technologies that come in and out, and there are some broad statements that would cover a lot of what could come from the use of generative AI. So I'm not aware that they're going to make big sweeping changes to that code. When I was confronted with this discussion as DGS, I immediately went and talked with that office to understand it, and this is where we, as a department, basically put into place the idea of disclosure of use: that in doing so, you're honoring the expectations of the academic code here at IU. So I can't say that they won't make further changes, but I think our role as professors is really to teach students how to use these tools, what they're capable of, and what the norms would be in terms of disclosure.
>> We're getting close to the end of our time here, so I'd like to ask each of you to do a little bit of speculating, perhaps in your own discipline or just generally, especially thinking more broadly about higher education and these kinds of technologies. What do you think you might be looking at going forward? What would you like to leave us to think about?
>> Super easy question.
>> Yeah [OVERLAPPING] [LAUGHTER]
>> So, a couple of thoughts. This new resurgence of AI interest may be best embodied by ChatGPT, but it includes other things as well. On one hand, I think it's a very important moment, with many practical implications for higher education and the way we teach. At the same time, I also think all of the foundational work we've always been doing, teaching students problem solving, critical thinking, logical thinking, and all of that, is still the mission of the university, just like it always was. If anything, ChatGPT makes that more important, so that people can really think critically about the information they're consuming, where it comes from, the tools they have available to them, and whether and how they should trust them. Up until this point, most of the history of AI has been about trying to create technology that would replace humans, that would do a job as well as humans or better. I think from this moment on, it's going to be much more about creating collaboration between AI and humans. How can we create techniques that solve the kinds of problems people want them to solve, that can learn from people, that can help people learn, that can collaborate with people, that can explain their reasoning to people, that can learn from expert humans, so that the team works better than either the AI or the human could have done individually? I think that idea underlies this whole conversation: no, you can't have ChatGPT write your whole essay for you, because that doesn't accomplish anything. But maybe it can be a collaborative tool that helps polish your grammar at the end, or that helps generate some ideas at the beginning, or that helps find sources of information, sources you maybe weren't aware of, that you can then go out and verify.
>> I definitely agree with everything David has said about students. I think there are also ways to collaborate with instructors on coming up with innovative kinds of learning experiences for students. If we're doing project-based and problem-based learning, can we come up with some really interesting problem contexts? Can we use this to help support collaborative learning in ways that have been really hard in the large classes we often teach, as well as thinking about what we can do with our smaller classes? So, thinking about the ways we can have those kinds of collaborations with instructors as well as with students.
>> It's an exciting time. I think about how a lot of this technology can enable new advances in research, and that's amazing; it wasn't quite the focus of today, but I think it's really worth bringing forward. At the same time, we heard examples of how this is fostering creativity among us, by rethinking how we do our assignments and how we can re-engage with the idea of fostering critical thinking and adaptability among students. Those are going to be the most important things, I think, when they leave IU, because what they think they're going to do may not exist in 20 years. That critical thinking and adaptability can come from how we respond to these new technologies and train students in that context. I think that could better prepare everybody, so it's exciting.
>> Yeah, very much so. Well, I want to thank all three of you for a great conversation. Lots to keep thinking about and talking about, and, as you said, very exciting. Much more to come. [MUSIC]