Three years on, classroom decisions about generative AI aren’t happening in committee meetings or lively hallway conversations. They’re happening quietly: while planning a course, revising an assignment, or staring at a syllabus late at night, trying to decide what feels ethical and doable. For some instructors, these decisions are also shaped by a growing fatigue from reading large volumes of AI-generated text, and by a desire to reconnect with students’ own voices and thinking.
With AI stories hard to avoid in the media, there’s often an unspoken pressure to “have a position” on the technology, even when guidance is still evolving and examples vary widely across disciplines and industries. In practice, most instructors I work with aren’t looking for definitive answers. They’re looking for reassurance that there’s more than one way forward, and that it’s okay to start small, stay cautious, or change course over time. For many, that may mean avoiding the technology in the classroom altogether.
This post offers a high-level overview of four approaches shaped by learning outcomes, student needs, and instructor comfort, with links to examples for anyone who wants to explore further.
The examples below are drawn primarily from U.S. higher education, including work by writing and philosophy instructors, but the approaches themselves are not discipline-specific and can be adapted to other fields and institutions.
Approach #1: Intentional Non-Use
Philosopher Kate Manne has written openly about choosing to keep AI tools out of her courses entirely, not as a punishment but as an intentional pedagogical choice. Her rationale for limiting writing assignments to in-class work is grounded in the belief that certain kinds of learning require sustained, “brain-only” thinking: reading closely, forming arguments, grappling with ambiguity, and developing one’s own voice.
In this approach, the focus is on protecting cognitive processes that can be easily short-circuited by generative tools. It also acknowledges that the “AI voice” can flatten individuality, which has led some instructors to redesign assignments simply because they’re exhausted by formulaic, AI-generated prose. What makes this approach work is clear communication with students: not just “don’t use AI,” but why it matters for this course and these outcomes.
Approach #2: AI-Aware but AI-Limited
Writing instructor Emily Pitts Donahoe takes a middle path. Rather than banning or fully embracing AI, she invites students to make a public commitment to one of two tracks: “AI-free” or “AI-friendly.” Students choosing the AI-friendly track may use AI for specific, limited purposes that she outlines in advance: brainstorming, outlining, or locating counterarguments, for example.
Importantly, disclosure is built into the workflow. Students must summarize how they used AI, share chat logs, and reflect on what they believe they gained or lost through that process. This structure doesn’t assume AI use is inherently beneficial or harmful; it simply makes it visible. The emphasis shifts from policing behaviour to supporting metacognition and transparency.
This approach is especially appealing to instructors who want to reduce secrecy around AI without overhauling their entire curriculum. Read her recent blog post, More on AI in the Writing Classroom, to see how she introduced the two tracks, how her students responded, and what worked and what didn’t.
Approach #3: AI as a Feedback Partner
At UC Davis, instructors piloting the PAIRR (Peer & AI Review + Reflection) framework use AI as one voice in a broader feedback ecosystem that still centres student thinking and peer review. Students begin by drafting on their own, then exchange peer feedback, and finally receive AI-based formative feedback, all before revising.
AI isn’t used to generate content or write on students’ behalf. Instead, it acts as a structured thinking partner, offering comments or suggestions that students must interpret and decide whether to adopt. Reflection is built in at multiple stages, which helps maintain authorship and accountability.
This approach is useful for instructors who want to support writing or critical thinking without automating the learning process. A version of the PAIRR framework is also being piloted at Langara this year, with instructors exploring how structured peer and AI feedback can support writing and reflection.
Approach #4: AI-Integrated by Design
Literature and composition instructor Michelle Kassorla represents a more fully integrated approach, embedding AI directly into course design. In her assignments, students are expected to use AI tools as part of the process — not as shortcuts, but as components of critical, iterative workflows. Students might analyze AI-generated texts, compare their drafts to machine-generated versions, or critique the limitations and biases of outputs.
Here, AI becomes both an object of study and a practical tool, aligning course activities with emerging professional practices. What makes this approach work is the clarity of expectations: students know when and how to use AI and how their work will be evaluated.
This model often resonates in disciplines where AI literacy will be part of students’ future workplaces, but its principles can be adapted anywhere transparency and critical analysis are central. She shares tips and resources via a Google Doc, Practical AI for Teaching, and her LinkedIn account.
As these examples illustrate, there’s no single “right” way to approach generative AI in the classroom. Approaches are shaped by context and the realities of teaching right now. Small experiments, thoughtful limits, and clear communication with students can go a long way.
If you have an approach that’s been working in your context and are open to sharing it, feel free to reach out to me, Alex Samur (asamur@langara.ca). You can also find additional AI-related resources on the TCDC website and the Edtech website.
