# 'AI' Harm Reduction

### CAMSEE (Winter 2026)

---

### Icebreaker Question

## How do you *feel* about AI?

---

### Lots of 'AI' discussions are higher-education focused

- How will AI affect teaching and learning in college classrooms?
- How can AI be used to improve, optimize, or support teaching?
- What about academic integrity issues, assessment, and grading in the AI era?
- How can we ensure that AI is not used improperly in my classroom?
- How can we ensure that students are being trained to meet the AI-saturated labor market that awaits them?
- *How can we remove, minimize, or harness the disruption of this technology to continue doing higher education?*

---

### These conversations often make questionable assumptions

- AI is as promising for academia/higher education as the hype claims
- Using AI confers an academic/grading advantage on students who use it
- Grading and ranking students based on assessments is a critical part of higher education
- AI use can be detected and (meaningfully) prohibited outside of controlled conditions
- All students have access to state-of-the-art AI tools and will continue to have access to AI tools
- All students want to use AI, and find doing so acceptable

---

### *What changes if we think about AI use from a student-well-being, harm-reduction perspective?*

---

### The four questions I'd like to discuss

- Given that AI is here and unavoidable, what are the actual harms for students who use it?
- What messaging should we give to students about the harms of *their* use of AI?
- How can we accommodate and support students who have a financial, personal, or ethical position against AI use?
- What messaging should we give to students about the harms of *our* use of AI?

---

### Proposed Ground Rules for today's discussion

- We are not allowed to talk about academic integrity, assessment, changing labor markets, 'AI tutors', grading, or 'AI-resistant' question design
- No discussion of AI detection, 'banning AI', or 'AI prevention'. Assume it is undetectable, unbannable, and inevitable
- We will avoid the perspective of the academic establishment and what *we* need, and focus only on the student perspective and what *they* need
- We won't focus on technological solutions to these problems, but on human solutions

---

### What are the harms of AI for students?

- What nuances are there around 'cognitive offloading'?
- What are the potential social/interpersonal harms of AI use?
- How is this different for students at different levels?
- Are the effects different for students with different socioeconomic, educational, personal, or language backgrounds?
- Are there genuine benefits of independent AI use for some students?

---

### What messaging can we give to our students to help mitigate these harms?

- Syllabus messaging?
- In-class messaging?
- Topics for in-class discussions?
- Are there kinds of messaging that make these harms worse?

---

### How can we accommodate and support students who have a financial, personal, or ethical position against AI use?

- What advantages and disadvantages do they face?
- Can we offer accommodations to students who object to AI use? Should we?
- Should AI use be treated and declared up front as a course cost (like textbooks or clickers) to help students with financial planning?
- Are there approaches that reduce the harms they're worried about, while still giving benefits?

---

### What messaging should we give to students about the harms of *our* use of AI?

- How do we explain our choice to use AI in the classroom to AI objectors?
- How do we discuss our choices to use AI-generated materials? Should we?
- Does our use of AI encourage some students to fall into the harmful behaviors discussed previously?
- How do we mitigate the environmental effects of our AI use? Should we?

---

### Thank you!