Shifting the Narrative

Imagine a classroom where AI isn’t seen as a threat but as a powerful tool that empowers students to deepen their learning. This future is not a distant dream; it’s well within our reach. Yet many educators remain preoccupied with the fear that AI will inevitably lead to cheating, focusing on banning, detection, and punishment rather than fostering meaningful engagement. As Ethan Mollick, professor at the Wharton School and author of Co-Intelligence: Living and Working with AI, points out, AI detection tools often have high false positive rates, making them an unreliable solution. They create a “gotcha” culture—something I would never want in my district. Would you?
Rather than fostering a culture of suspicion, we should focus on creating a culture of academic integrity. The aim should be to help students understand the ethical use of AI while developing the skills they need to thrive in an AI-driven world. Promoting responsible use of AI, rather than policing misuse, fosters a deeper connection to learning and prepares students for real-world challenges. As school leaders, we have the power to shift the narrative from one rooted in fear to one grounded in integrity and personal responsibility.
Shades of Gray
When it comes to academic dishonesty with AI, it isn’t black and white—there’s a wide spectrum ranging from entirely human-generated content to completely AI-generated content. The tricky part is deciding where on that spectrum dishonesty begins. Where that line is drawn will vary from school to school, teacher to teacher, and assignment to assignment.
Cheating in the Age of AI
Let’s be real—have you ever cheated? When I ask this in a room full of people, almost every hand goes up. Cheating has been around since long before AI, with students finding ways to bend the rules, whether by copying homework, sneaking a peek at a neighbor’s test, or even paying someone to complete their assignments. Interestingly, Stanford researchers have found that cheating levels among high school students have not significantly increased since the introduction of AI tools. Around 60 to 70% of students admitted to cheating before AI was widely available, and that figure hasn’t changed much post-AI.
The underlying reasons students cheat have more to do with academic pressure, fear of failure, lack of engagement or understanding, and time management issues, rather than the availability of new technologies. The reality is, AI is not the villain—it’s simply a tool, and while it can be misused, it hasn’t drastically altered overall cheating behavior. So, instead of putting all our energy into catching students who misuse AI, let’s focus on creating a school culture where academic integrity is valued.
Position vs. Policy vs. Guidelines

As we navigate the integration of AI into education, it’s important to distinguish between position, policy, and guidelines. Each serves a different purpose and contributes uniquely to fostering a culture of academic integrity. By understanding and implementing all three, we can create a balanced, ethical approach to AI that encourages responsible use while fostering student growth.
• Position
Your position defines where you stand on the use of AI in your school or district, and maintaining academic integrity requires communicating that stance clearly to both teachers and students. Your stance on AI will vary from district to district, school to school, and teacher to teacher, even down to specific assignments.
• Policy
Your policy sets the overarching rules, ensuring consistency across the board. In my opinion, an AI-specific policy isn’t necessary; we don’t do that for the internet, or any other tool for that matter. In our district, we chose to revise our current policies to focus on encouraging students to ethically leverage technology for learning and growth without specifically mentioning AI. Following the advice of ASCD/ISTE’s CEO Richard Culatta, we reframed our policy in a positive light. The big ideas in our refreshed policies are that we expect students to be respectful and kind, responsible and careful, safe and secure.
Instead of a laundry list of restrictions to warn students what not to do, we rewrote them to focus on the many positive ways we want them to safely and ethically leverage technology for learning and growth. Here is the specific language we incorporated regarding AI: “Academic Integrity: Always submit work that’s your own. Give credit to sources and practice ethical research by citing references properly. Adhere to assignment guidelines, copyright laws, and intellectual property rights when using, sharing, or creating digital content…”
• Guidelines
Guidelines are the specific, flexible frameworks that empower educators to adapt AI use to their classrooms. As leaders, we have the opportunity to foster positive and responsible technology use. My guidelines are straightforward: They keep users safe and secure while placing ownership and power in the hands of teachers. By doing so, we empower educators to become tech leaders, guiding students through the ethical and effective use of AI in their learning. Instead of focusing on if students are using AI, we should be asking how they are using it. AI’s role will vary depending on the task, so establishing clear expectations for each assignment is essential. Professional development plays a crucial role in supporting teachers as they develop their own assignment-specific guidelines.
Questions to consider:
- Where do you position yourself in this evolving conversation? Make sure to communicate that with your teachers and students.
- Take a look at your own policies: Count the number of Don’ts. Count the number of Dos. Interesting, right?!
- What language will you add to your policies?
- How will you support and empower educators to develop assignment guidelines?
Reimagining Assignments for an AI-Driven Era
One of the key areas where we need to reframe our approach is in how we design assignments. When Google became mainstream, we used to say, “If your question is ‘Google-able,’ it’s not a good question.” With generative AI, the same idea applies: If the assignment is “AI-able,” it’s not a good assignment. Think about it: if a teacher uses AI to design a task, the student uses AI to complete it, and then the teacher uses AI to grade it, what’s the purpose of the assignment?

Let’s face it: if AI can replicate student work, then perhaps it’s time to rethink the assignment itself. As we move forward, we need to focus more on the learning process than on the final product. We should ask students to apply their knowledge and show us that they comprehend the content; recalling rote facts is no longer enough. We need to decide what’s truly important and redesign our lessons around authentic assessments that help students thrive in this new era, a shift that can be uncomfortable for many educators.
This requires a paradigm shift toward tried-and-true educational frameworks and instructional practices where students are encouraged to explore, experiment, and collaborate in authentic contexts. By embracing pedagogical approaches such as project-based learning, the flipped classroom, and Universal Design for Learning (UDL), we can create more engaging, personalized, “AI-proof” learning experiences for all students. AI-proofing assignments isn’t about making them AI-resistant; it’s about using AI as a tool and designing tasks that ask students to demonstrate higher-order thinking skills—tasks that challenge them to apply, analyze, and synthesize information.
If you’re unsure if an assignment is AI-proof, run it through an AI tool yourself before asking students to complete it. I’ve taken a persuasive essay and turned it into a TED-Ed talk, where students had to pitch their ideas to parents about building something new in the community. I also revamped a biography unit into “The Perseverance Pod,” an interview-style podcast highlighting how historical figures pushed through tough times. And the best part? We used AI to support us every step of the way!
Strategies for Responsible AI Use
Ultimately, I’m suggesting that we think of ourselves as detectives (carefully checking for misinformation and hidden bias in the output) and DJs (remixing the AI-generated content to reflect our voice and style) when using AI. For example, we should:
- Be the leader: Write down our thoughts and tell the AI exactly what we need—be clear and specific.
- Be bossy: If the response isn’t what we expected, say, “No, I’m looking for ____.”
- Be authentic: The final product should always sound like us.
These strategies should be explicitly taught to all teachers and students to ensure they use AI effectively and responsibly. For example, a lesson on remixing AI-generated content can provide hands-on practice in selecting what resonates with them and personalizing the output to make it uniquely their own.
As you engage in meaningful conversations about AI in education, it’s essential to tap into the wealth of knowledge within your school community by including both teachers and students. Their involvement makes these efforts more meaningful and builds greater buy-in from everyone involved. By building professional learning communities that emphasize trust and integrity, we create an environment where everyone grows together and AI becomes a powerful tool for learning, not a challenge to overcome.
Teachers
Teachers possess valuable insights and experiences that can contribute to the successful implementation of AI. Allowing teachers and staff to determine its strengths and weaknesses on their own terms encourages a sense of ownership and investment in the technology. By identifying and empowering teacher leaders who are successfully pioneering AI integration, schools can foster a culture of collaboration and experimentation. These teacher-tech champions can lead workshops and training sessions on AI integration.
Facilitating knowledge-sharing activities at your faculty meetings is another way to build a collaborative community. For example, an “AI Exchange” might involve participants finding someone they don’t typically work with and sharing one new way they’ve used AI since the last meeting. After two to three rounds, teachers come back together and share one interesting idea they learned from a colleague. This approach encourages participants to highlight others’ successes, fostering openness, collaboration, and continuous learning.
Students
Students can take on a more active role by participating in advocacy groups like Students for Innovation (studentsforinnovation.org), giving you access to valuable insights, feedback, and the voices of your most important stakeholders. These student advocates help bridge the gap between theory and practice, creating a more inclusive learning environment where every voice is heard. It’s essential to encourage open dialogue about the ethical use of AI, empowering students to ask questions without fear of punishment.
These students can also provide valuable professional development for educators through student-led PD or one-on-one tutoring, offering a unique perspective on the role of AI and the support they need in the classroom. By involving student leaders, schools foster a collaborative culture where students and educators learn from one another to build trust, integrity, and shared ownership of learning outcomes. Involving students in the process helps teachers understand that using AI responsibly can enhance learning, not detract from it.
Leading With a Positive Perspective
AI is a powerful tool that, when used responsibly, can elevate learning and empower students. Instead of viewing it as a means of cheating, we have the opportunity to reframe the conversation. By embracing this mindset, we can better equip students for success in an AI-driven future. It’s our responsibility to prepare them for what lies ahead, and AI will undoubtedly be a part of that. Not teaching students how to use it responsibly and ethically would only widen the digital divide—one that goes beyond socioeconomic status.
The future of education lies in your competent hands, and your leadership and vision will guide your schools toward a future that responsibly and effectively embraces AI. As you rise to the challenge, remember that you are not alone. Connect with fellow educators, leaders, and educational technology experts to share ideas, experiences, and support. Build a collaborative environment that welcomes change and empowers everyone to participate.
Alana Winnick is the educational technology director and data protection officer for Pocantico Hills Central School District in Sleepy Hollow, NY, and the Hudson Valley director for the New York State Association for Computers and Technologies in Education (NYSCATE). She is the author of The Generative Age: Artificial Intelligence and the Future of Education and the host of “The Generative Age” podcast. Learn more at AlanaWinnick.com.
References
Mollick, E. (2023, October 12). What people ask me most. Also, some answers. One Useful Thing. www.oneusefulthing.org/p/what-people-ask-me-most-also-some
Spector, C. (2023, October 31). What do AI chatbots really mean for students and cheating? Stanford Graduate School of Education. ed.stanford.edu/news/what-do-ai-chatbots-really-mean-students-and-cheating
Winnick, A. (2023). The generative age: Artificial intelligence and the future of education. ConnectEDD.