This article helps faculty develop a thoughtful, learning-centered approach to managing generative AI in their courses. Rather than defaulting to blanket policies, we encourage a framework grounded in your learning goals, your students, and your discipline.
In This Article
- Five Steps to Managing Generative AI in Your Course
- Why AI Detectors Are Not the Answer
- Beyond Policy: Rethinking Assessment
- Faculty Pause: Your Use of GenAI in Teaching
- Privacy Considerations
Five Steps to Managing Generative AI in Your Course
The rise of generative AI tools (like Gemini, ChatGPT, Claude, and others) requires intentional design choices. Use these five steps to integrate AI thoughtfully into your course.
Step 1: Define Learning Goals
Before setting a policy, articulate the specific knowledge, skills, and values you want students to master. Ask yourself:
- What skills are essential for this course (e.g., critical analysis, primary source reading, professional writing, quantitative problem-solving)?
- Does the use of AI enhance or undermine the practice of these essential skills?
- What do you really want students to get out of your class? What do you want them to know, or be able to do, in one, five, or ten years?
- Consider using the Six Facets of Understanding (from Understanding by Design) or Fink's Taxonomy of Significant Learning to clarify what kind of understanding you're after.
If you'd like a sounding board, book a meeting with Learning Design & Digital Innovation. You may find that by refining your goals and the structures that support them, you've already begun to address the challenges of generative AI.
For further experimentation, prioritize Google's Gemini App (using your Union.edu account), Gemini-powered Gems, or NotebookLM, all of which have enterprise-grade data protection. Gemini, Gems, and NotebookLM are the only genAI apps that ITS supports.
Step 2: Test Your Assessments
Evaluate your current assessments against your learning goals. Copy and paste your assignment prompts into a genAI tool and evaluate the results honestly:
- Would work produced this way meet your desired learning goals?
- If a student submitted this output, would it earn a B or higher?
- Could you tell it was generated by AI?
If AI can produce a passing response, the assignment is measuring the wrong thing. Redesign so students must demonstrate thinking, not just produce a product.
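If you want to repeat this stress test across many assignment prompts, it can be scripted. Below is a minimal sketch, assuming Google's google-genai Python SDK; the API key placeholder, model name, and example prompt are illustrative assumptions, and pasting prompts into the Gemini web app accomplishes the same thing.

```python
# Minimal sketch: stress-test an assignment prompt against a genAI model.
# Assumes the google-genai Python SDK (pip install google-genai); the key
# placeholder, model name, and example prompt are illustrative, not prescribed.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # hypothetical placeholder

assignment_prompt = (
    "In 500 words, analyze how the author's use of unreliable narration "
    "shapes the reader's judgment of the protagonist."
)

response = client.models.generate_content(
    model="gemini-2.0-flash",  # substitute whichever model your students can access
    contents=assignment_prompt,
)

# Read the output as if a student submitted it:
# Would it earn a B or higher? Could you tell it was AI-generated?
print(response.text)
```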
When redesigning, consider three types of structural moves that make thinking visible:
- Lived Experience: Connect to personal memories, campus/local data, interviews, or course-specific materials you provide
- Real-Time Verification: Oral defense of choices, in-class checkpoints, random questions on submitted work
- Disciplinary Critique: Evaluate AI output using disciplinary norms, identify methodological errors, critique evidence and assumptions
Look at an example of these structural moves in action.
Step 3: (Re)Design Assessments Using the Five Principles
Rather than using a blanket policy for your entire course, ground each assessment decision in five emerging principles drawn from recent research and institutional practice. These principles, adapted from the Northeastern University AI Assessment Framework, provide a foundation for intentional assessment redesign. After reviewing them, you may find LDDI's Resources for Faculty for Teaching in an AI Ubiquitous World helpful, in particular the detailed reflective AI Use Statement you can copy and customize for your assignments (based on Daniel Pink's six human skills: Questioning, Taste, Iteration, Composition, Allocation, Integrity).
When AI can generate polished work that experienced graders cannot distinguish from student work, the question shifts from "Are students cheating?" to "Does this assessment measure what it claims to measure?" Use the five principles below to guide your (re)design.
Principle 1: Transparency & Clear Communication. Research reveals widespread confusion about acceptable AI use. Students construct individual ethical frameworks when institutional guidance is absent or ambiguous — creating anxiety for compliant students and cover for those circumventing learning (Corbin et al., 2025).
- Use the Transparency in Learning and Teaching (TILT) framework: describe each assignment's purpose, task, and criteria. Select appropriate AI use levels using the AI Assessment Scale (No AI → AI Planning → AI Editing → AI Collaboration → AI Exploration) based on learning goals.
- Articulate your pedagogical rationale — explain why your position serves learning goals. Communicate expectations via syllabus, assignment instructions, AND classroom discussion.
Principle 2: Validity over Detection. When students can submit AI-generated work that appears to demonstrate competence, the assessment fails to validly measure what students know and can do. Validity should take precedence over detection — this is a measurement problem before it is a moral problem (Dawson et al., 2024). For each assessment, decide between two approaches:
- Controlled / Minimize AI Use: Activities and assignments that verify human capability. Reward process over product and personal reflection/metacognition, often in supervised environments (e.g., oral presentations, in-class demonstrations, labs, exams, peer learning, service learning).
- Open to AI Experimentation: Professional applications of tool use. AI engagement is transparent and human contribution is measurable (e.g., clear AI use expectations communicated up front and an AI statement of use submitted with the learning artifact), paired with student reflection.
Principle 3: Process over Product. When AI can generate polished final products instantly, assessing only completed work fails to capture student learning. Effective assessment must capture the process — the decisions, iterations, and thinking that produced the outcome (Kickbusch et al., 2025). Make it personal, and ask students to document and reflect on their progress: peer-to-peer conversations as a starting point, draft versions showing the evolution of thinking, annotations, peer review, revision histories, reflections explaining how feedback was acted upon, and chat transcripts if AI was used. Aim to maximize student investment, and help students see that a focus on process is itself a powerful learning strategy.
Principle 4: Evaluative Judgment. As content generation becomes automated, the essential human capability is evaluative judgment: the ability to recognize quality in work produced without, with, and by AI. We must certify students' ability to judge quality, not just produce artifacts (Bearman et al., 2024). Design critique assignments where students evaluate AI output alongside high-quality human work, identify strengths, weaknesses, errors, and hallucinations, improve the work to professional standards, and then apply that same judgment to their own work.
Principle 5: Programmatic Coordination. Isolated assignment redesigns are insufficient. When AI can complete many traditional tasks, validity threats require coordinated responses across entire programs — the "Swiss cheese" approach where vulnerabilities are staggered across a student's work (Lodge et al., 2023). Consider program mapping in your department: which outcomes require independent human demonstration, which involve professional tool use, where AI skills are explicitly taught, and whether assessment diversity provides sufficient validity evidence.
Five Emerging Principles of AI Assessment
The following table summarizes the five principles referenced in Step 3 above. These are drawn from the Northeastern University AI Assessment Framework and adapted for the Union College context. For hands-on activities and ready-to-use templates for each principle, see the Navigating New Landscapes workshop workbook.
| Principle | Core Insight | Implication |
| --- | --- | --- |
| 1. Transparency | Students construct individual ethical frameworks when guidance is absent — creating anxiety for compliant students (Corbin et al., 2025) | Use the TILT framework, adopt graduated AI Assessment Scale levels, articulate a pedagogical rationale for every policy |
| 2. Validity over Detection | When AI can produce competent-looking work, assessment fails to measure what students know. This is a measurement problem before a moral one (Dawson et al., 2024) | Controlled/Minimize AI Use assessments verify independent capability; Open to AI Experimentation assessments make human contribution measurable with transparency |
| 3. Process over Product | Assessing only final products fails when AI can produce polished work instantly (Kickbusch et al., 2025) | Require drafts, revision histories, student reflections, peer review, and chat transcripts as process evidence |
| 4. Evaluative Judgment | The essential human capability is recognizing quality in work produced without, with, and by AI (Bearman et al., 2024) | Design critique assignments where students evaluate AI output before producing their own work |
| 5. Programmatic Coordination | Isolated assignment redesigns are insufficient; program-level approaches are more robust (Lodge et al., 2023) | Map assessments across programs; scaffold AI skills from first year to graduation; triangulate methods |
Step 4: Develop Student-Facing Policy Language
At this point you will likely need to revise current assessments or develop new ones, and draft clear policy language for your syllabus. For each assessment, state the approach and rationale:
Controlled / Minimize AI Use Policy:
"For [assessment type], you must complete the work without AI assistance. This is because [learning goal]. These assessments will be conducted [in class / supervised conditions]."
Open to AI Experimentation Policy (with transparency):
"For [assessment type], you may use AI tools. If you do, you must: (1) disclose which tool, (2) submit prompts / process log, and (3) explain what you verified or changed. The goal is [learning goal]."
Always include the "why." Students need to understand the pedagogical rationale behind your policy. When they understand why a particular assessment is controlled or open, compliance becomes buy-in.
Consider co-creating boundaries with students: ask what feels fair, then explain your pedagogical reasoning. This kind of transparency builds trust and encourages students to take responsibility for their learning.
Note: Unclear policies create problems for the Honor Council. For more information, read the Union College Honor Council Guidance on AI Generated Content and Academic Integrity.
Step 5: Practice, Refine, and Talk About GenAI with Students
Your AI policy is not set in stone.
- Be specific about your own professional use of AI (or non-use).
- Invite collaboration: ask your students to collectively develop AI policies for the class.
- Articulate pedagogical rationale to students — explain why your strategies serve learning goals.
- Communicate expectations continuously via syllabus, assignment instructions, AND classroom discussion.
- Set expectations before students use AI: ask them to research and understand the risks (bias, hallucinations, environmental damage).
- Allow student choice: offer an opt-out or an alternative critique task.
- Analyze moments of harm productively: Why does this happen? What does it say about the source data? What does it say about coding choices?
Regardless of your position on generative AI, your students are thinking about it and most likely using it. You don't need to be an expert to model how to think critically about emerging technologies. A genAI policy in your syllabus is a start, but it is important to talk about the policy before students begin work on assessments.
Remember: dialogue is ongoing. Revisit expectations before each major assessment, not just on day one.
Why AI Detectors Are Not the Answer
⚠️ Generative AI Detectors Are Not Reliable
After extensive research, our recommendation is clear: do not rely on AI detectors for academic integrity decisions. The evidence shows they are unreliable, biased, and easily bypassed. Instead, spend your time reshaping assessments and maintaining open dialogue with students.
Here is what the current research tells us:
They produce false positives that harm students
AI detectors generally have false positive rates between 1% and 10%, meaning they regularly flag human-written work as AI-generated. Even a 1% rate applied across thousands of student essays produces hundreds of false accusations per year at a single institution (the back-of-envelope sketch after the bullets below makes the scale concrete).
- A 2025 study in Advances in Physiology Education (Hyatt et al.) found that human raters actually had a higher false positive rate (5%) than AI detectors (1.3%) — but neither is reliable enough for high-stakes decisions.
- The Washington Post tested Turnitin's AI detector and found a false positive rate of approximately 50% in their sample, far higher than the company's claimed rate.
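The arithmetic behind that claim is worth seeing explicitly. A minimal sketch, where the screening volume is a hypothetical assumption, not institutional data:

```python
# Back-of-envelope: expected false accusations from a detector's false positive rate.
# The submission count below is a hypothetical assumption, not institutional data.
submissions_per_year = 20_000  # hypothetical: human-written essays screened annually
false_positive_rate = 0.01     # 1%, the LOW end of published detector rates

expected_false_flags = submissions_per_year * false_positive_rate
print(f"Expected human essays falsely flagged per year: {expected_false_flags:.0f}")
# -> 200, even at the best-case published rate
```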
They are biased against vulnerable student populations
- A Stanford study (Liang et al., 2023) found that seven AI detectors unanimously misclassified over 19% of text written by non-native English speakers as AI-generated. Over 61% of TOEFL essays by non-native speakers were falsely flagged. The reason: these students use simpler vocabulary and sentence structures, which detectors interpret as low "perplexity" — the same statistical pattern AI-generated text exhibits (a rough sketch of this statistic follows this list).
- A Common Sense Media report found that Black students are more likely to be accused of AI plagiarism by their teachers.
- Students with autism and other neurodivergent conditions have been falsely accused of cheating based solely on AI detector output, because their structured or literal writing styles resemble AI patterns.
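To see why simpler prose gets flagged, it helps to know what detectors measure. Many rely on perplexity: how predictable a text is to a language model. The sketch below, assuming PyTorch, the Hugging Face transformers library, and the small GPT-2 model, illustrates the statistic itself; it is not a reconstruction of any commercial detector.

```python
# Minimal sketch of the "perplexity" statistic many AI detectors rely on.
# Assumes PyTorch and transformers (pip install torch transformers);
# GPT-2 stands in for whatever model a real detector uses.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids yields the mean cross-entropy per token.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()  # perplexity = exp(mean cross-entropy)

# Plain, predictable prose scores LOW (looks "AI-like" to a detector);
# idiosyncratic prose scores HIGH (looks "human").
print(perplexity("The results show that the data is important for the study."))
print(perplexity("Grandma's kitchen always smelled of cardamom and burnt toast."))
```

Because non-native English writers tend toward the first, more predictable register, the statistic systematically mistakes their prose for machine output.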
They are easily bypassed
- Simple paraphrasing, inserting anecdotes, or adjusting sentence structure can significantly reduce detection accuracy. Dedicated "humanizer" tools specifically designed to evade detectors are widely available.
- As LLMs improve, the statistical differences between human and AI writing continue to shrink, making reliable detection increasingly difficult.
What to do instead: Replace policing with process evidence
Rather than trying to catch AI use after the fact, design assessments that make thinking visible throughout the process:
- Drafts and revisions showing the evolution of thought
- Annotated critique of an AI-generated draft (the student marks and improves AI output)
- Metacognitive prompts that make student thinking visible
- Structured group work with documentation and reflection
- In-class checkpoints or oral defenses (6–8 minutes) where students explain their reasoning
- AI Use Statements requiring students to name the tool, explain their purpose, assess AI's influence, and declare what they verified or changed
Grade the thinking, not the product.
Key Sources
- AI-Detectors Biased Against Non-Native English Writers — Stanford HAI (Liang et al., 2023)
- AI Detectors: An Ethical Minefield — NIU Center for Innovative Teaching & Learning, 2024
- Using Aggregated AI Detector Outcomes to Eliminate False Positives in STEM-Student Writing — Hyatt et al., Advances in Physiology Education, 2025
- A Critical Examination of AI Detectors in Academic Integrity Enforcement — 2025
- Can We Trust Academic AI Detectives? — Acta Neurochirurgica, 2025
Beyond Policy: Rethinking Assessment
Having a course AI policy is a good start. You also may have found that assessments you've relied on for years are no longer valid in an age of AI. If you haven't spent time thinking about the assessments in your course recently, now is the time.
Begin by watching and reading from LDDI's recommended resources, then browse strategies for redesigning your assessments.
Faculty Pause: Your Use of GenAI in Teaching
As genAI tools become more available, it's important for faculty to reflect on how and why they use these tools in their own teaching practice. Whether creating lessons, grading, or designing courses, thoughtful use helps preserve the human-centered values of teaching and learning.
Reflection prompts:
- Where in my teaching/assessment workflow do I currently use (or plan to use) generative AI? (e.g., drafting lesson slides, creating quizzes, grading short responses, generating rubrics)
- What am I hoping to achieve by using AI in that space? (e.g., save time, increase consistency, enhance creativity)
- What might I be risking by outsourcing or automating that part? (e.g., loss of personal feedback, decreased student–instructor connection, bias or unintended messages about value of student work)
- What guardrails will I put in place to ensure the human element remains central? (e.g., human review of all AI outputs, explicit disclosure to students, iterative drafts with instructor feedback)
Privacy Considerations
The only genAI apps ITS supports are Google's Gemini App, Gemini-powered Gems, and NotebookLM — all three have enterprise-grade data protection through your Union.edu account.
ITS does not currently license ChatGPT (or other platforms) for the College due to prohibitive enterprise cost. This means OpenAI's terms and policies apply to faculty and students on an individual basis. If faculty want to require students to create a ChatGPT account (or use another non-supported platform), they should:
- Inform students of the limitations of such platforms (inaccurate or biased information, fabricated citations, etc.)
- Alert students to potential data privacy concerns
- Design an alternative way of completing the assignment so that creating a personal account is never required as part of a student's grade
- Give students the agency to decide whether they feel comfortable creating an account
Key privacy concerns with non-supported tools (e.g., ChatGPT)
- Data Collection: ChatGPT and similar tools collect and store information about user interactions, which could include sensitive or confidential academic information. ChatGPT ignores "Do Not Track" settings.
- Data Usage: User data may be used for research, analysis, or commercial purposes.
- Data Security: There is always a risk of data breaches and unauthorized access to stored information.
- Data Retention: Data may be retained indefinitely.
- FERPA: There is no recognition of potential FERPA data being entered into the system, so there are no precautions to keep that data confidential.
Instructional Design Recommendation
- If you expect students to use a generative AI tool, the College encourages Google's Gemini App, which carries the enterprise-grade data protection ITS provides through the MyApps login.
- If you choose to use ChatGPT (or a similar non-supported tool), design an alternative path so that creating a personal account is never required for a grade.
- Never paste student work with identifying information (or any medium-to-high-risk data) into a tool you don't trust.