Issue at a Glance
Although artificial intelligence (AI) is an increasingly useful tool in many sectors, its growing presence in schools requires close attention. While AI has the potential to personalize learning and increase academic support, its rapid growth also raises concerns related to academic integrity, data privacy, digital safety, and the development of critical thinking skills.
Tools like ChatGPT and other AI text generation assistants can help students draft essays, solve math problems, or translate texts instantly. However, this ease of access raises important questions about the line between support and academic dishonesty, particularly for students who may rely on AI in place of building essential skills. According to a 2023 ACT Education Corporation survey, roughly half of high school students reported regular use of AI tools like ChatGPT, and usage is only increasing.

AI use for non-academic purposes can also cause harm. AI-generated images that resemble real photos of individuals, also known as deepfakes, are increasingly used in cyberbullying. According to the National Education Association (NEA), between 40% and 50% of students are aware of deepfakes being circulated within their school. These images are often difficult to trace, yet they pose significant safety concerns, particularly for female students, who are more likely to be victimized by deepfake images.

Further, AI use by students, educators, and school officials alike raises significant questions about online security. In 2023, the U.S. Department of Education’s Office of Educational Technology (OET) released “Artificial Intelligence and the Future of Teaching and Learning,” which found that these tools are often not designed with specific protections for students and school-based users and may not align with existing regulations protecting student data and privacy.
Despite these drawbacks, AI remains a valuable tool in schools. While educators report considerable concerns about AI use, they also report that AI tools have been put to positive use in their classrooms. According to the NEA’s 2024 task force report, “Artificial Intelligence in Education,” some educators find that AI saves time in lesson planning by assisting with idea generation, and that AI-driven tools like scene readers can help students with disabilities navigate their classrooms. In addition, in 2025 TeachAI released the “AI Literacy Framework for Primary and Secondary Education (AILit Framework)” as a resource for the education community as AI technology in schools continues to develop.
Given this rapidly evolving landscape, schools, local education agencies, and policymakers must ensure that guidance surrounding the use of AI empowers students and educators rather than undermining learning goals or school safety.
NASSP Position
- NASSP supports the use of AI tools in the classroom as long as it does not undermine learning, particularly the development of critical thinking and writing skills that AI can easily shortcut.
- NASSP believes that students deserve to be protected from the harm of AI tools in their educational environments, whether from cyberbullying or data privacy concerns.
- NASSP recognizes the importance of schools preparing students for the rapidly evolving professional landscape, driven in part by the development of artificial intelligence.
- NASSP supports innovative technologies that enhance learning, especially diverse technology tools for students with special needs.
- NASSP urges internet providers to immediately address deepfakes and other harmful AI-generated images, which can severely impact a student’s or educator’s safety and well-being.
- NASSP supports the “Take It Down Act of 2025,” which prohibits the publication of non-consensual intimate images, including those involving minors, and requires internet providers to remove such content within 48 hours.
- NASSP supports immediate legal action against perpetrators of deepfakes or other illicit images.
- NASSP supports the AI Literacy Framework for Primary and Secondary Education (AILit Framework) which includes AI literacy assessments, competencies, and education scenarios.
Recommendations for Policymakers
- Federal lawmakers and policymakers should refrain from passing restrictions that limit states’ ability to regulate AI in ways that protect students, educators, and schools.
- Fully fund Title II, Part A of ESSA, which provides states and districts with flexible funding for school-based professional learning opportunities to implement AI technology in the classroom and to ensure digital and privacy safeguards are in place for students and educators.
- Enact AI-related policy that regulates internet providers and platforms, defined as public websites, online services, or applications that provide a forum for user-generated content.
- Policymakers must ensure that AI use in schools complies with regulations related to student data privacy, such as the Family Educational Rights and Privacy Act (FERPA), the Children’s Internet Protection Act (CIPA), the Children’s Online Privacy Protection Act (COPPA), and the Take It Down Act.
Recommendations for State Leaders
- Provide ample professional development opportunities to equip educators with the knowledge and skills to understand how AI is impacting students, how to guard against its drawbacks, and how to use it as a positive tool within the classroom.
- Partner with the federal government and the Administration to explore how AI can enhance teaching and learning, expand access, and support educators without replacing the critical role they play, in alignment with applicable statutory and regulatory requirements.
Recommendations for District Leaders
- Provide resources for school-based professional learning opportunities to train school leaders, teachers, students, and parents on the use of AI technology in the classroom.
- Collaborate with the National Parent Teacher Association to provide resources, tips, and conversation forums for families to build healthy digital habits and stay safe online.
- Foster responsible AI use by addressing topics such as online safety, screen readiness, and the prevention of online risks such as scams and deepfakes.
- Maintain regular, two-way communication between parents and educators regarding the ethical and effective use of AI tools in the classroom.
Recommendations for School Leaders
- Ensure that AI tools advance, and do not undermine, equity, given that AI systems are trained on human data that may reflect biases, particularly against students of color and students with disabilities.
- Develop or procure AI-powered instructional tools that adapt to learner needs in real time.
- Expand access to high-quality, personalized learning materials across all subjects, grade levels, and learning environments.
- Train educators, providers, and families to use AI tools effectively and responsibly.
- Create hybrid models where human tutors are complemented by AI-based learning platforms.
- Use diagnostic and scheduling tools that use AI to match learners with tutoring services based on need.
- Leverage platforms to help students identify career interests, explore pathways, and make informed choices.
- Ensure the accessibility of AI tools and systems for students, educators, and family members with disabilities or who require digital accommodations.
- Evaluate the validity of AI outputs in order to understand, and teach students, the appropriate use and navigation of AI.
- Provide digital safety systems that comply with federal privacy laws, including FERPA.