This article helps faculty develop a thoughtful, learning-centered approach to managing generative AI in their courses. Rather than defaulting to blanket policies, we encourage a framework grounded in your learning goals, your students, and your discipline.
In This Article
- Five Steps to Managing Generative AI in Your Course
- Why AI Detectors Are Not the Answer
- Beyond Policy: Rethinking Assessment
- Faculty Pause: Your Use of GenAI in Teaching
- Privacy Considerations
Five Steps to Managing Generative AI in Your Course
The rise of generative AI tools (like Gemini, ChatGPT, Claude, and others) requires intentional design choices. Use these five steps to integrate AI thoughtfully into your course.
Step 1: Define Learning Goals
Before setting a policy, articulate the specific knowledge, skills, and values you want students to master. Ask yourself:
- What skills are essential for this course (e.g., critical analysis, primary source reading, professional writing, quantitative problem-solving)?
- Does the use of AI enhance or undermine the practice of these essential skills?
- What do you really want students to get out of your class? What do you want them to know, or be able to do, in one, five, or ten years?
Consider using the Six Facets of Understanding (from Understanding by Design) or Fink's Taxonomy of Significant Learning to clarify what kind of understanding you’re after.
If you’d like a sounding board, book a meeting with Learning Design & Digital Innovation. You may find that by refining your goals and the structures that support them, you’ve already begun to address the challenges of generative AI.
For further experimentation, prioritize Google’s Gemini App (using your Union.edu account), or the Gemini-powered Gems and NotebookLM, all of which have enterprise-grade data protection. Gemini, Gems, and NotebookLM are the only genAI apps that ITS supports.
Step 2: Test Your Assessments
Evaluate your current assessments against your learning goals. Copy and paste your assignment prompts into a genAI tool and evaluate the results honestly:
- Would work produced this way meet your desired learning goals?
- If a student submitted this output, would it earn a B or higher?
- Could you tell it was generated by AI?
If AI can produce a passing response, the assignment is measuring the wrong thing. Redesign so students must demonstrate thinking, not just produce a product.
When redesigning, consider three types of structural moves that make thinking visible:
- Lived Experience: Connect to personal memories, campus/local data, interviews, or course-specific materials you provide
- Real-Time Verification: Oral defense of choices, in-class checkpoints, random questions on submitted work
- Disciplinary Critique: Evaluate AI output using disciplinary norms, identify methodological errors, critique evidence and assumptions
Step 3: Choose a Lane for Each Assessment
Rather than using a blanket red/yellow/green policy for your entire course, make a clear decision for each major assessment: which lane does it belong in?
| | Lane 1: Assurance | Lane 2: Human-AI Collaboration |
|---|---|---|
| Purpose | Assure capability. Supervised, in-person, controlled; AI use restricted by environment. | Develop skills and learn with AI. AI use allowed or even expected; focus on evaluating process and human thinking. |
| Examples | Skill-building fundamentals, in-class exams/quizzes, oral defenses, core writing-to-learn tasks | Iterative projects with checkpoints, AI-assisted drafts + critique, collaborative work, process portfolios |
| Design question | "What must students demonstrate they can do independently?" | "How can AI use become part of meaningful learning that expands student cognition?" |
Why two lanes instead of a stoplight? The stoplight metaphor (red/yellow/green) applies a single policy to an entire course. But different learning goals within the same course may call for different approaches. A biology course might use Lane 1 for an in-class exam on evolution’s unifying role (assuring independent understanding) while using Lane 2 for an AI-powered simulation that lets students experience natural selection from the inside (expanding cognition through interaction). The lane decision flows from learning goals, not from fear of cheating.
Your choice should be rooted in your learning goals and disciplinary context. The key is to know your reasons for each decision.
Step 4: Develop Assessments and Student-Facing Policy
At this point you will likely need to revise current assessments or develop new ones, and draft clear policy language for your syllabus.
| | Lane 1 (Secure) Policy | Lane 2 (Open) Policy |
|---|---|---|
| Template | "For [assessment type], you must complete the work without AI assistance. This is because [learning goal]. These assessments will be conducted [in class / supervised conditions]." | "For [assessment type], you may use AI tools. If you do, you must: (1) disclose which tool, (2) submit prompts / process log, and (3) explain what you verified or changed. The goal is [learning goal]." |
| Include | Which assessments, why it matters, how it's structured | Which assessments, transparency requirements, what learning looks like |
Always include the “why.” Students need to understand the pedagogical rationale behind your policy. When they understand why a particular assessment is Lane 1 or Lane 2, compliance becomes buy-in.
Consider co-creating boundaries with students: ask what feels fair, then explain your pedagogical reasoning. This kind of transparency builds trust and encourages students to take responsibility for their learning.
Note: Unclear policies create problems for the Honor Council. For more information, read the Union College Honor Council Guidance on AI Generated Content and Academic Integrity.
Step 5: Practice, Refine, and Talk About GenAI with Students
Your AI policy is not set in stone.
- Be transparent about your own professional use of AI.
- Bring 1–2 AI outputs into class and model disciplinary critique. Show students how experts evaluate AI.
- Build in low-stakes assignments that teach students how to use AI ethically and effectively as a tool in your discipline.
- Use prompts like: “What would you miss learning if AI did this for you?” and “Show me your AI conversation — what didn’t work and why?”
- Share your findings, assignment designs, and challenges with the Learning Design & Digital Innovation team so we can continue to refine guidance across the college.
Regardless of your position on generative AI, your students are thinking about it and most likely using it. You don’t need to be an expert to model how to think critically about emerging technologies. A genAI policy in your syllabus is a start, but it is important to talk about the policy before students begin work on assessments.
⚠️ A Note on the “Discomfort Warning”
AI tools may misgender students, reproduce stereotypes, or make harmful assumptions based on word choice.
- Set expectations before students use AI.
- Invite student choice: offer an opt-out path or alternative task.
- Use moments of harm as curriculum: Why does this happen? What training-data patterns are showing up?
Why AI Detectors Are Not the Answer
⚠️ Generative AI Detectors Are Not Reliable
After extensive research, our recommendation is clear: do not rely on AI detectors for academic integrity decisions. The evidence shows they are unreliable, biased, and easily bypassed. Instead, spend your time reshaping assessments and maintaining open dialogue with students.
Here is what the current research tells us:
They produce false positives that harm students
- AI detectors generally have false positive rates between 1% and 10%, meaning they regularly flag human-written work as AI-generated. Even a 1% rate applied across thousands of student essays produces hundreds of false accusations per year at a single institution.
- A 2025 study in Advances in Physiology Education (Hyatt et al.) found that human raters actually had a higher false positive rate (5%) than AI detectors (1.3%) — but neither is reliable enough for high-stakes decisions.
- The Washington Post tested Turnitin’s AI detector and found a false positive rate of approximately 50% in their sample, far higher than the company’s claimed rate.
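To make the scale of the false-positive problem concrete, here is a minimal back-of-the-envelope sketch. The 5,000-essays-per-term figure is hypothetical, chosen only for illustration; the arithmetic works the same for any campus's numbers:

```python
# Rough arithmetic behind the false-positive claim above.
# Assumed (hypothetical) scale: 5,000 human-written essays screened per term.

def expected_false_flags(num_essays: int, false_positive_rate: float) -> float:
    """Expected number of human-written essays wrongly flagged as AI."""
    return num_essays * false_positive_rate

# Even at a detector's most optimistic published rate (1%), dozens of
# students per term are falsely accused; at 10%, it is hundreds.
for rate in (0.01, 0.10):
    flags = expected_false_flags(5000, rate)
    print(f"{rate:.0%} false positive rate -> {flags:.0f} false flags per term")
```

The point is not the exact totals but that any nonzero false-positive rate, multiplied across every submission a detector screens, guarantees a steady stream of wrongly accused students.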
They are biased against vulnerable student populations
- A Stanford study (Liang et al., 2023) found that seven AI detectors unanimously misclassified over 19% of text written by non-native English speakers as AI-generated. Over 61% of TOEFL essays by non-native speakers were falsely flagged. The reason: these students use simpler vocabulary and sentence structures, which detectors interpret as low “perplexity” — the same statistical pattern AI-generated text exhibits.
- A Common Sense Media report found that Black students are more likely to be accused of AI plagiarism by their teachers.
- Students with autism and other neurodivergent conditions have been falsely accused of cheating based solely on AI detector output, because their structured or literal writing styles resemble AI patterns.
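The "perplexity" mentioned above is, roughly, how surprised a statistical language model is by a text: predictable text scores low, varied text scores high. This toy sketch (a simple unigram word model with add-one smoothing, nothing like a real detector) illustrates why plainer, more repetitive wording registers as more predictable:

```python
import math
from collections import Counter

def unigram_perplexity(train_text: str, test_text: str) -> float:
    """Perplexity of test_text under a unigram word model fit on train_text,
    with add-one smoothing. Lower values = more predictable text."""
    train = train_text.lower().split()
    test = test_text.lower().split()
    counts = Counter(train)
    vocab = set(train) | set(test)
    total = len(train)
    log_prob = 0.0
    for word in test:
        p = (counts[word] + 1) / (total + len(vocab))
        log_prob += math.log(p)
    return math.exp(-log_prob / len(test))

# Simple, repetitive wording scores lower (more "AI-like" to a detector)
# than varied wording, even though both are human-written.
corpus = "the test was good and the test was long and students took the test"
simple = "the test was good the test was long the test was hard"
varied = "examinations oscillated between tedium and bewildering intricacy"
print(unigram_perplexity(corpus, simple) < unigram_perplexity(corpus, varied))
```

This is exactly the statistical pattern Liang et al. describe: writers who favor common words and simple structures produce low-perplexity text, which detectors then conflate with machine generation.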
They are easily bypassed
- Simple paraphrasing, inserting anecdotes, or adjusting sentence structure can significantly reduce detection accuracy. Dedicated “humanizer” tools specifically designed to evade detectors are widely available.
- As LLMs improve, the statistical differences between human and AI writing continue to shrink, making reliable detection increasingly difficult.
What to do instead: Replace policing with process evidence
Rather than trying to catch AI use after the fact, design assessments that make thinking visible throughout the process:
- Drafts and revisions showing the evolution of thought
- Annotated critique of an AI-generated draft (the student marks and improves AI output)
- Prompt and verification logs documenting the student’s process
- In-class checkpoints or oral defenses (6–8 minutes) where students explain their reasoning
- AI Use Statements requiring students to name the tool, explain their purpose, assess AI’s influence, and declare what they verified or changed
Grade the thinking, not the product.
Beyond Policy: Rethinking Assessment
Having a course AI policy is a good start. You also may have found that assessments you’ve relied on for years are no longer valid in an age of AI. If you haven’t spent time thinking about the assessments in your course recently, now is the time.
Begin by reading:
Then, browse strategies:
Faculty Pause: Your Use of GenAI in Teaching
As genAI tools become more available, it’s important for faculty to reflect on how and why they use these tools in their own teaching practice. Whether creating lessons, grading, or designing courses, thoughtful use helps preserve the human-centered values of teaching and learning.
Reflection prompts:
- Where in my teaching/assessment workflow do I currently use (or plan to use) generative AI? (e.g., drafting lesson slides, creating quizzes, grading short responses, generating rubrics)
- What am I hoping to achieve by using AI in that space? (e.g., save time, increase consistency, enhance creativity)
- What might I be risking by outsourcing or automating that part? (e.g., loss of personal feedback, decreased student–instructor connection, bias or unintended messages about value of student work)
- What guardrails will I put in place to ensure the human element remains central? (e.g., human review of all AI outputs, explicit disclosure to students, iterative drafts with instructor feedback)
Privacy Considerations
The only genAI apps ITS supports are Google’s Gemini App, Gemini-powered Gems, and NotebookLM — all three have enterprise-grade data protection through your Union.edu account.
ITS does not currently license ChatGPT (or other platforms) for the College due to prohibitive enterprise cost. This means OpenAI’s terms and policies apply to faculty and students on an individual basis. If faculty want to require students to create a ChatGPT account (or use another non-supported platform), they should:
- Inform students of the limitations of such platforms (inaccurate or biased information, fabricated citations, etc.)
- Alert students to potential data privacy concerns
- Design an alternative way of completing the assignment that removes any requirement to create a personal account as part of their grade
- Give students the agency to decide whether they feel comfortable creating an account
Key privacy concerns with non-supported tools (e.g., ChatGPT)
- Data Collection: ChatGPT and similar tools collect and store information about user interactions, which could include sensitive or confidential academic information. ChatGPT ignores “Do Not Track” settings.
- Data Usage: User data may be used for research, analysis, or commercial purposes.
- Data Security: There is always a risk of data breaches and unauthorized access to stored information.
- Data Retention: Data may be retained indefinitely.
- FERPA: There is no recognition of potential FERPA data being entered into the system, so there are no precautions to keep that data confidential.
Instructional Design Recommendation
- If you expect students to use a generative AI tool, the College encourages the use of Google’s Gemini App with enterprise-grade data protection that ITS provides through the MyApps login.
- If you choose to use ChatGPT (or a similar non-supported tool), design an alternative path so that creating a personal account is never required for a grade.
- Never paste student work with identifying information (or any medium-to-high-risk data) into a tool you don’t trust.