- AI Framework by Igor Chirkov, Berkeley
- Tidewater: Off the Page Discussions with Reneé Hosang-Alleyne, Associate Professor of Sociology, Norfolk Chair, Social Sciences Pathway, Norfolk Faculty Fellow
- AI and Course Design: Machines Can Help, but Only Humans Can Teach by Deb Adair and Whitney Kilgore
AI Framework
Igor Chirkov, a researcher at Berkeley, wrote “How Instructors Regulate AI in College: Evidence from 31,000 Course Syllabi.” He uses this framework:
- “Task displacement occurs when AI performs tasks instead of students, eliminating practice opportunities and risking skill erosion.”
- “Task augmentation occurs when AI supports task practice without displacing essential cognitive effort, potentially enhancing learning.”
- “Task reinstatement occurs when AI enables new tasks not previously feasible.”
Tidewater: Off the Page Discussions with Reneé Hosang-Alleyne, Associate Professor of Sociology, Norfolk Chair, Social Sciences Pathway, Norfolk Faculty Fellow
Today, we talked about The Opposite of Cheating by Tricia Bertram Gallant and David Rettinger. I had the opportunity to speak with Sheri Prupis, who sends us our Tech Tuesday Tips from the VCCS. All of us who are teaching are figuring out the use of AI in the classroom, and our feelings are mixed. The Opposite of Cheating was an easy read: it is not jargon heavy, it touches on many of the issues we confront in the classroom (building the syllabus, assessment, and teaching practice), and the authors frame AI use from the perspective of academic integrity. Apart from chapter one, the chapters can be read out of order based on interest. My favorite aspect of this book was how the authors positioned themselves: they describe ten principles that guided their writing. Don’t skip them 😊. I value all ten principles, but three resonate most with me: principles 2, 3, and 4.
- Principle 2 is an invitation to prevent and respond to cheating by extending our role as educators rather than acting as a “police officer.”
- Principle 3 asks us to see that “knowledge is constructed not received; therefore, as the instructor you are not the deliverer of knowledge but the facilitator of learning”.
- Principle 4 is a view that “students learn from mistakes, errors, and failures and [the] instructors’ job [is] to create a culture accepting of errors. This also implies that redemption, even from serious errors in judgement is both possible and desirable.”
I homed in on these principles because they are directly tied to teaching practice and classroom dynamics, and I would say they offer a useful framing for how to approach AI use in the classroom. These principles, like the others, are challenging, especially in light of the cultural messaging around who students are supposed to be. For example, our students feel like they cannot make mistakes, and I wonder: do we encourage them to? Do we build opportunities to learn from mistakes and respond to ambiguity? With or without AI, students need to feel like they can learn from a mistake. The pervasiveness of AI shows us that if our students feel mistakes are unacceptable, they will use AI, even irresponsibly.
I really hope you get a chance to read this book, even if it’s only a couple of chapters. It does not answer all of our questions, but it does give some practical framing. What this book may be missing is a set of exercises that can be easily adapted for the classroom; in their place, there are many examples that can be used to create our own exercises. Finally, if we are open to it, this book helps us have productive conversations, not about preventing cheating but about building academic integrity, and that is a worthy conversation.
P.S. David Rettinger is the keynote speaker at New Horizons this year, a happy coincidence.
AI and Course Design: Machines Can Help, but Only Humans Can Teach by Deb Adair and Whitney Kilgore
Cue the irony: Students love and actively use artificial intelligence (AI), but they still want humans in charge. A 2025 survey found that while 42 percent of students report using generative AI at least weekly, they overwhelmingly prefer human support when they need help. According to the survey, 84 percent of students said they primarily seek help from a person when struggling with a course concept, compared to 17 percent who primarily turn to generative AI. Despite the hype and fanfare, students continue turning to faculty they trust for guidance on academic matters and are turning to AI less and less to help them understand concepts.
It’s clear that AI is reshaping higher education. The technology is no longer knocking on the door. It’s already inside, and it’s rearranging the furniture. In faculty lounges, curriculum committees, and course design meetings, conversations about AI are urgent, often fraught, and almost always unclear. There’s excitement, but there’s also fatigue, skepticism, and confusion. Colleges and universities are seeking meaningful and practical ways to engage with the technology; however, most institutions lack a working policy. (Read the full article on the EDUCAUSE website.)