This article aims to help faculty and students understand Generative Artificial Intelligence tools and how to approach their use responsibly in a teaching and learning context.
- If you have less than 15 minutes to dedicate to this issue and your greatest concern is your syllabus, skip to the section on Syllabus Statements. Choose and modify whichever of the three statements fits your class best. Consider making a plan to meet with colleagues or attend a workshop as the term goes on to inform yourself more completely about Generative AI.
- If you have more time, consider some of the additional sources. Take advantage of anything you find useful, keep an open mind, and be practical about your time. There’s almost no end to the information or resources you could consume.
Generative AI
What are Generative Artificial Intelligence tools like ChatGPT, Bard, and DALL-E?
Artificial Intelligence has been shaping our lives for longer than we might realize: in everything from language resources like Grammarly to the algorithms that push new TV shows our way. And now, powerful generative AI tools like ChatGPT, Bard, Bing Chat, and others can produce polished, human-sounding text in a range of formats. Similar tools can generate images, video, music, computer code, and more. To get a sense of the scope of resources, take a look at this list of 100+ tools. Naturally, the availability of these tools affects teaching and learning and will continue to influence how we engage in the educational process.
What do Generative AI Tools mean for teaching and learning?
Some of you will be unconcerned about Generative AI: your classes already prioritize the learning process in such a way that it’s unlikely ChatGPT or similar platforms will force any kind of shift in your classroom. For others, the risks are significant: these tools reflect the biases of the data they were trained on (the Internet!), they lack external regulation, their mechanisms aren’t transparent, and they can be used to cheat. You might share John Oliver’s views in this “Last Week Tonight” segment from early 2023. Still others will see Generative AI as expansive terrain for experimentation, innovation, and adaptation. This change might be exactly the push you need to revisit research-based teaching and learning strategies and reshape your classes. Regardless of where you stand, consider the following:
- Put student interests at the center. Find the places where student backgrounds and experiences intersect with course content. The more you encourage students to incorporate their lived experience into their work, the more invested they will be. When students care about an assignment and see their identity and experience valued in the class and in the work, turning in AI-generated content becomes less appealing.
- Prioritize the learning process over the product:
- Focus on multiple drafts and revisions of writing assignments. Take advantage of Union's Writing Center.
- Encourage metacognition by asking students to reflect on their role in the learning process. This can take the form of a “Dear Reader” revision letter, turned in with updated drafts of writing assignments. You could have students write a weekly learning reflection, or a final reflection at the end of the term. It might mean using exam wrappers and two-stage exams.
- Utilize AI-resistant assessments such as projects or performances. These assignments can also be more enjoyable for students and lead to a more complete understanding of course material.
- Be cautious about returning to potentially exclusionary teaching strategies focused on “in-person only” or “pen and paper only” style activities. Although these strategies certainly have value and are somewhat AI-resistant, they can also put students with learning disabilities at a disadvantage.
Essential resources
What about Generative AI Checkers?
The short answer: they are unreliable and are not recommended.
Privacy Considerations, Specifically with ChatGPT
When designing learning activities for courses, faculty should keep in mind that ITS will not be entering into a contract with OpenAI (or other entities) to license ChatGPT for the College; OpenAI does not currently extend that option to colleges and universities. ITS normally addresses protections such as FERPA compliance as part of the process of entering into contracts with vendors. As a result, the various terms and policies on OpenAI's website apply to faculty AND students on an individual basis. If faculty want to require students to create a ChatGPT account (or an account on a similar platform), I encourage you to inform students not only of the limitations of such platforms (i.e., they can provide inaccurate or biased information, fabricated quotes, etc.) but also of potential data privacy concerns. Everyone who chooses to create an account should review the terms and privacy policy to fully understand the provisions permitting the sharing of information. It is important to note that these platforms make no provision for potential FERPA data being entered into the system; therefore, no precautions are taken to keep that data confidential or to identify it as protected.
ChatGPT itself has noted the following privacy concerns that students (and faculty) need to be aware of:
- Data Collection: ChatGPT collects and stores information about the user's interactions, which could include sensitive information such as personal details or confidential academic information. ChatGPT ignores "Do Not Track" settings.
- Data usage: ChatGPT may use the data collected for research, analysis, or commercial purposes.
- Data security: There is always a risk of data breaches and unauthorized access to the information stored by ChatGPT.
- Data retention: ChatGPT may retain data for an indefinite period of time, which could lead to privacy issues in the future.
Note that creating a ChatGPT account requires providing a phone number that can receive SMS for verification, which some people may not be comfortable sharing.
Instructional Design Perspective: Data Privacy
If faculty decide they'd like students to use ChatGPT (or a similar, "free" generative AI tool), they should intentionally design an alternative way of completing the assignment that removes any requirement that students create a personal account as part of their grade. Students should be informed of the privacy considerations mentioned above and given the agency to decide whether or not they feel comfortable creating an account.
Your syllabus is a valuable space for communicating with students about your expectations and concerns regarding Generative AI. Feel free to use and modify one of the following three statements depending on the needs of your course.
- The first is more permissive: AI use is acceptable with proper citation.
- The second is conditional: AI may be used in some contexts but not in others.
- The third is prohibitive: AI use does not coincide with the learning goals of the course and is not allowed.
And if you still aren’t sure where you stand, ask yourself: What’s my stance on genAI in this class? (Resource developed by the Gettysburg College Johnson Center for Teaching and Learning)
Statement A: open/flexible/permissive
You are generally okay with student use of generative AI. Note that this is a modified version of the statement included in our sample syllabus in the article Create a Motivational Syllabus.
[Again, if you are open to experimenting with AI in your courses, consult Cynthia Alby’s AI Prompts for Teaching: A Spellbook]
Understanding how to live and work with digital tools and platforms – from statistical software to data visualization tools to artificial intelligence tools – is an essential skill for all students in this day and age. In this course I encourage you to use all the tools available to you (and that you are familiar enough with to use efficiently and effectively) to aid your learning. This includes artificial intelligence (AI) copywriting and chatbot tools such as ChatGPT, Humata.ai, DALL-E 2, and others.
To decide whether or not AI is a worthwhile aid, consider the learning goals associated with the task. For example: Will it help deepen your understanding of the assignment? Will it make your work more meaningful, more precise, more compelling? Will it decrease certain barriers to your learning, allowing you to invest more time, energy, and thought in your work?
Be aware that large language models like ChatGPT reproduce the inaccuracies and biases of the data they are trained on. They can provide you with results that are racist and sexist or simply false. They are also known to “hallucinate,” as they synthesize, reassemble, and present information in convincing ways while misrepresenting reality. It is your responsibility to evaluate the factual accuracy and integrity of the products these tools provide.
As with any other resource you use to aid your work in this course, you must acknowledge any and all AI tools that you use in the development of your work. You must also substantially revise any writing or work produced by an AI tool before submitting it for credit in this course.
If you use an AI tool at any point in the development and/or creation of your work for this course – including discussion board posts, exams, and projects – you must include appropriate citations and the acknowledgment below in your Reference list:
Name of publisher/tool producer. (year). Name of AI tool (version date) [Large language model]. For example: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model].
You must also include a full transcript of the writing or work produced by the AI tool in an appendix to your work.
Consult the Generative AI citation resources appropriate to the class or discipline. For example: MLA (https://style.mla.org/citing-generative-ai/), APA (https://apastyle.apa.org/blog/how-to-cite-chatgpt), or Chicago (https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html). [Choose the appropriate model]
Statement B: Conditional
You are okay with students using AI tools in some situations (but not all).
Understanding how to live and work with digital tools and platforms – from statistical software to data visualization tools to artificial intelligence tools – is an essential skill for all students in this day and age. An important part of using these tools is knowing their limits. Therefore, the use of Generative AI tools isn't allowed under any circumstances for [assignment types X, Y & Z]. However, in the following course activities [example: assignment types A, B & C], I encourage you to use all the tools available to you (and that you are familiar enough with to use efficiently and effectively) to aid your learning. This includes artificial intelligence (AI) copywriting and chatbot tools such as ChatGPT, Humata.ai, DALL-E 2, and others. If you are unsure of when Generative AI tools are acceptable, please consult with the professor.
Be aware that large language models like ChatGPT reproduce the inaccuracies and biases of the data they are trained on. They can provide you with results that are racist and sexist or simply false. They are also known to “hallucinate,” as they synthesize, reassemble, and present information in convincing ways while misrepresenting reality. It is your responsibility to evaluate the factual accuracy and integrity of the products these tools provide.
If you use an AI tool at any point in the development and/or creation of your work for this course – including discussion board posts, exams, and projects – you must include appropriate citations and the acknowledgment below in your Reference list:
Name of publisher/tool producer. (year). Name of AI tool (version date) [Large language model].
You must also include a full transcript of the writing or work produced by the AI tool in an appendix to your work.
Consult the Generative AI citation resources appropriate to the class or discipline. For example: MLA (https://style.mla.org/citing-generative-ai/), APA (https://apastyle.apa.org/blog/how-to-cite-chatgpt), or Chicago (https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html). [Choose the appropriate model]
Statement C: Restrictive
You prefer that students avoid the use of AI tools entirely.
The use of generative Artificial Intelligence tools like ChatGPT is not allowed in this course under any circumstances. There’s no doubt about their broad applicability and powerful potential, but they present risks that run contrary to our course goals. Given our use of methods ______, ________, and __________, to achieve our learning goals [provide an explanation of this relationship. Students value knowing the reasoning behind your rules], the use of generative AI should be avoided. These tools are known to “hallucinate,” as they synthesize, reassemble, and present data in convincing ways while misrepresenting reality. In many instances it’s not possible to evaluate the accuracy and integrity of the products they provide. The companies that stand to make a profit on them aren’t transparent about how their mechanisms work. Similarly, there are no organizations or agencies to regulate these tools. And they reproduce the inaccuracies and biases of the data they are trained on (look up the case of Tay, Microsoft’s AI chatbot). Simply put, Generative AI can provide you with results that are racist and sexist or simply false. We still have a lot to learn before these tools can safely and ethically be integrated into this class. Thus, work produced with Generative AI in this course will be viewed as work that is not your own, and its use will be considered a violation of the Honor Code.
Additional Syllabus Resources
For a broader range of perspectives, look through the resources below. If you add an innovative policy to your syllabus and don't mind sharing it with others, please send it to collingd@union.edu and we will add it to the list below.
Union College Faculty Perspectives
Below you will find some ideas of how you can think about these tools, as well as concrete ideas for assessments from your colleagues at Union College (click on the double arrow to expand the section). If you have ideas you are willing to share with others, please send them to collingd@union.edu and we will add them to this section.
Marianna Ganapini, Assistant Professor in Philosophy
Is ChatGPT for everyone? Seeing beyond the hype toward responsible use in education
Montreal AI Ethics Institute article by Marianna Ganapini, Pamela Lirio, and Andrea Pedeferri.
Jan. 3, 2023
ChatGPT is the latest OpenAI chatbot, able to interact with human agents conversationally. You can ask questions, many of which will be answered in seconds. Syntactically this chatbot writes like a pro: its sentences are usually well-structured and grammatically correct. The tone of its writing sounds – for the most part – professional, courteous, and well-polished. Often, the answers generated sound legitimate: it feels like ChatGPT knows what it’s talking about!
But is this AI ethical? Can it be used responsibly? What harm might it generate? Click here to continue reading the full article.
Mark Dallas, Associate Professor of Political Science
I've been trying to keep up with this space, but it changes so fast! Beware: GPT-4 will be released in the next 3 to 6 months and people are saying that it will be SUBSTANTIALLY better. This is just the beginning.
Very briefly, these are a few headline points that I've garnered from a variety of different sources:
- Faculty could go to the [ChatGPT] website and enter their take-home exam or essay questions to familiarize themselves with the types of answers it returns, which can help them gauge how easily it produces answers and how good those answers are.
- Talk to students about it instead of pretending it does not exist. Discuss your expectations and do's and don'ts so they are aware of any rules you have.
- Possible methods to counteract the technology:
- The more specific the question, the harder it is for the AI to generate meaningful responses.
- As Peter Bedford suggested, require citations (but keep an eye on them: I've read articles showing that ChatGPT can also return citations, but it often makes them up out of thin air. There is no such journal article, yet it makes it look like there is).
- It has trouble with questions with pictures and graphs, etc. [Might help with art history or economics, etc]
- Consider in-class exams, oral exams or assessment where progress is tracked over time and multiple assignments (sort of like WAC classes in which writing assignments build on the prior ones). [This requires a larger overhaul of the course]
- It is trained on information before 2021, so it will have a weaker response to more recent events. This could change quickly, however.
- Possible methods to incorporate the technology: Plug your essay questions into ChatGPT, get a result from it and then give the result to students. Then:
- ask students to annotate the ChatGPT essay with their thoughts on its strengths/weaknesses.
- ask students to "correct" the essay for errors, or to interpret the essay.
- ask students to write a rebuttal (if argumentative in nature), or expand upon the various thoughts in the essay.
These methods will focus students on critical thinking.
[WARNING: students could potentially plug some of these back into ChatGPT, so you may have to check this also]
I'm less sure whether these methods will work for quantitative work or coding in CS, but I've heard some of them can.
Hope some of this helps for Winter term at least! If GPT-4 is released in Spring term, this may need to be reworked. I'm guessing that educators will eventually have to move towards incorporating the technology, but this will be controversial. IMO, it is going to transform education, and for the better, after a period of adjustment. But that's just my opinion and too complex to explain here.
Joe Johnson, Director of Writing Programs
Joe Johnson has provided the Union Community with a January 30, 2023 statement from the Association for Writing Across the Curriculum* (AWAC) Executive Committee on Artificial Intelligence Writing Tools in Writing Across the Curriculum Settings.
For over a decade, researchers and entrepreneurs have been developing Artificial Intelligence text generators. In recent years, tools such as OpenAI’s GPT-2 and GPT-3 or ChatGPT have become sophisticated enough to produce texts that some readers find difficult to distinguish from texts produced by human writers. This development raises practical, pedagogical, and ethical concerns, including in academic settings.
A fundamental tenet of Writing Across the Curriculum is that writing is a mode of learning. Students develop understanding and insights through the act of writing. Rather than writing simply being a matter of presenting existing information or furnishing products for the purpose of testing or grading, writing is a fundamental means to create deep learning and foster cognitive development. Learning to write within a field or major is also one of the most critical ways that emerging scholars and professionals become enculturated in a discourse community. We are concerned that relying on AI text generators limits student learning and enculturation.
Our Position
As scholars in the discipline of writing studies more fully explore the practical and ethical implications of AI language generators in classroom and other settings, we underscore this: Writing to learn is an intellectual activity that is crucial to the cognitive and social development of learners and writers. This vital activity cannot be replaced by AI language generators.
That said, we understand that institutions, departments, and faculty will have to decide locally what role AI text generators should play in their situations. Some learning communities might reject these technologies outright, including them, for example, in campus policies about plagiarism. Other communities might find productive pedagogical roles for this technology; indeed, some writing teachers are having students explore and experiment, in a critical fashion, with AI writing: its potential for aspects of the writing process, its limitations, its ethics, its costs.
Furthermore, in some professional fields, AI tools have been available for years, and professors in those fields have incorporated attention to them in teaching.
Context, Past and Future
The history of writing is marked by changes in technologies that have shaped how people write and what writing can accomplish: from clay tablets to papyrus, quill pens to pencils, handwriting to typing to texting, words to image to design to multimodality, physical library to the world wide web. On one hand, AI text generators are yet another technology with potential uses in various invention, drafting, and editing processes. On the other hand, their potential autonomy from human writers makes them qualitatively different from previous technologies.
While exclusively having AIs generate writing does not engage students in an essential mode of learning, it is also clear that writing scholars and WAC faculty should explore whether–and, if so, how–AI text generation tools might be integrated into writing pedagogy. The WAC Clearinghouse hosts a page of useful resources: AI Text Generators and Teaching Writing: Starting Points for Inquiry. We might pose these research questions: Might the acts of critiquing, rewriting, or discussing AI-generated text foster growth? Are there scenarios where student writing might productively be complemented, supplemented, or assisted by AI language generators? Can this happen in ways that do not preempt student learning?
It is premature to provide answers to such questions, which need thoughtful investigation. We look forward to that research.
Reaffirming Best Practices
Current AI discussions remind us, yet again, of long-established best practices in Writing Across the Curriculum, grounded in research and extant for decades: designing meaningful and specific assignments that foster learning and develop skills; focusing on processes and practices such as peer-response and revision; encouraging writing in multiple genres, including ones connected to specific disciplinary practices.
We recommend fostering the kind of deep learning and cognitive development that students gain through writing to learn and through learning to write in specific situations.
*About AWAC and this Document
The Association for Writing Across the Curriculum (AWAC) is an international organization that brings together the intellectual, human, political, and economic capital of the Writing Across the Curriculum (WAC) community to grow WAC as a global intellectual and pedagogical movement. AWAC promotes initiatives that support students’ writing across their academic careers, faculty development related to student writing and writing pedagogy, and research into writing across domains (e.g. disciplines, professions, communities, and academic levels) and transnationally.
Joining the Executive Committee (Doug Hesse, Justin Rademaekers, Ann Blakeslee, Laurie Britt-Smith, Karen Moroski-Rigney, Sherri Craig, and Paula Rosinski) in drafting this statement was Stacey Sheriff. We sent a draft version to AWAC members in mid-January, and their thoughtful comments informed revisions. Contact: Doug Hesse, AWAC Chair, at dhesse@du.edu.
Denise Snyder, Director of Learning Design and Digital Innovation
“We do not learn from experience... we learn from reflecting on experience.”
John Dewey
The rise of generative AI tools is an opportunity for faculty to take a step back and examine the instructional design of their courses, and more specifically, their assessments. I encourage faculty to ask themselves: am I structuring assessments OF understanding or am I structuring assessments FOR understanding? It is a slight yet powerful tweak in perspective that can yield learning activities that AI generation tools find difficult, if not impossible, to complete (at least on their own). What I am really talking about is going back to the basics of good instructional design. Well-designed assessments can promote learning as well as measure it.
What does that mean? Well, assessment isn't always simply evaluating the learning that has already happened, it can also help learning happen—this is what is meant by “assessment FOR learning.” Part of good instructional design is knowing how to design both assessments FOR learning and OF learning.
This is where faculty can get creative and go beyond the traditional research paper and/or exam to create multiple, engaging, formative opportunities for students to actively make their thinking visible, while also including space for students to reflect on, revisit, and revise their thinking. Requiring metacognitive reflection not only helps defeat the use of generative AI tools; more importantly, it makes learning "stick." Attending to metacognitive thinking in your assessments can improve the transfer of understanding to authentic, real-world scenarios well beyond the walls of the classroom.
The Learning Design and Digital Innovation team is here to help faculty rethink assessments at their convenience. We also encourage faculty to apply to our summer FDI course incubator, where faculty will receive support from colleagues, instructional designers, librarians, learning technologists, and more to help (re)design and develop courses.
Curated Resources
We will continue to compile a list of articles and resources to share with the Union College community about this topic on this Google Document. If you come across items you'd like to add, please send them to collingd@union.edu.
If you are having difficulty or you have unanswered questions, please contact the Help Desk through the ITS Service Catalog or call (518) 388-6400.