When problems arise in an organization, a memo is sure to follow.
If more intervention is needed, that notice may be followed by written expectations, recommendations, guidelines, or policies. (If it involves the government, stick legislation in after guidelines.) The first three are advisory, suggesting ways to think about a problem without mandating them. They provide wiggle room and can be walked back as one’s understanding of a situation evolves.
Policies, on the other hand, explicitly and legally identify and codify terms and required behaviors. When I was an administrator, we often said, “When in doubt, check the policy.”
Policy is intended and expected to be more enduring and forward-looking. And therein lies the challenge.
Policy’s version of “damned if you do and damned if you don’t” has three variations:
- Act too quickly and you don’t have the quality or quantity of information needed to inform your decisions. Wait too long and your stakeholders will make their own decisions, at everyone’s risk.
- Come down too hard and you stifle innovation, diversity, even compliance. Act too leniently and no one knows for sure what to do, diminishing your position as a leader.
- Write guidance too specifically (and too tied to the present moment) and it’s soon out of date. Write it too generally (especially when attempting to forecast the future) and your policy is too open to interpretation, which leads to confusion and loopholes.
Well-written policies prevent us from doing harm and protect us when we make mistakes. They represent both a legal and moral agreement. School staff and students sign off each year that they have read and will abide by district policies and codes of conduct.
Better Earlier Than Never?
When it comes to creating a policy, you want to be neither the first nor the last. A school in NJ made headlines for being an early adopter of AI guidance but included specific apps and terms that had vanished by the time its policy was board-approved. Artificial intelligence is the kind of disruptor that is diversifying and expanding so quickly that leaders often find mandates outdated before the ink has dried. At the other extreme, many schools and universities are still playing wait-and-see, to the confusion of staff and students.
Policies both empower and limit, ideally finding a sweet spot between freedom and order. But that Goldilocks zone can be a moving target. In the year following the 2022 launch of ChatGPT, almost no one moved on policy. By spring of 2024, 45 states had introduced AI bills. Most expanded on existing legislation regarding data use and privacy. Others kicked the can down the road by initiating committees and task forces to study the issue. Some directed schools and public agencies to develop their own guidelines.
Where has that left those with boots on the ground? When it comes to the use of artificial intelligence technologies, students, teachers, and the public in general are, in the absence of leadership, exploring every alternative you can imagine, for better and for worse.
Many use it daily in place of traditional search engines and to draft communications, summarize complex information (e.g., as a ten-minute podcast), and plan activities (like a recipe using what’s on hand). Others, out of concerns ranging from personal privacy to global warming, are avoiding AI as much as possible, which, of course, is just about impossible.
For the 2024-2025 school year, 22 states had “official” protocols available for the use of artificial intelligence in classrooms. These vary in scope, specificity, and the accountability they carry. New Jersey mandates training on generative AI for all state employees. Virginia’s guidelines for AI “integration” include how to train teachers. Arizona provides a two-hour course for educators specifically on using OpenAI’s ChatGPT. Oregon and a few other states address the potential benefits of AI for children with disabilities.
Most of these documents aren’t actual policies, though. For example, California’s Learning with AI, Learning About AI opens with this disclaimer:
“This document is meant to provide helpful guidance to our partners in education and is, in no way, required to be followed. The information is merely exemplary, and compliance with any information or guidance in this document is not mandatory.”
Practitioners as Leaders on AI Policy
One of the biggest problems in education, as in other large systems and industries, is a lack of consistency and interoperability across units. (I recently presented on this issue as it relates to AI in hospitals.) In states where professional groups like superintendents associations, boards of education, teacher unions, and school law advisory firms are influential, policies tend to be similar across districts. In the absence of counsel from such “insiders,” others build policies based on recommendations from “outside” tech organizations and companies.
Does a profit-based mission make the latter’s advice less trustworthy? Probably, but it’s more a matter of whose needs the policies address. Wyoming kept the focus on students by adapting Furze et al.’s 2023 continuum for AI usage, which was backed by academic field research. Does it make sense to use recommendations from UNESCO or the World Economic Forum to help craft local AI and learning policies? Are national organizations like the US Department of Education or the NEA any better? How about Google or OpenAI? Do they have our best interests in mind?
Whatever guidance is available to you, you may be feeling that the impact of AI on teaching and learning could be defined and addressed better. If so, here’s my recommendation:
Don’t write a new AI policy.
I’ve found three reasons why:
Practitioners are our best guide. School policies typically interpret legislation, which only mandates the kinds of actions one should take; neither is automagically practical. Instead, guide from the ground up, from classrooms to districts, even to the state level. I am sure some teachers in your school have already established guidelines for students using AI for research, writing papers, and taking tests. Teachers and students represent the most needed voices in policy development on AI because they are living with it. Talk to them to find out what’s working, what’s not, and why. You will likely find AI being used to augment and improve what teachers and students have already been doing and, if we’re lucky, inspiring them to replace what hasn’t been working. (I’m looking at you, homework.)
Existing policies need to be reevaluated through an AI lens. AI isn’t a boon or a bane in itself, but it can magnify (and uncover) existing positives and negatives, for which policies already exist. One of your first stops should be upgrading your student code of conduct to address AI (suggestions below). Think your existing policy language already has you covered? Now is the time to investigate how AI differs from the computers, internet, and search engines we’ve been using, and writing policies for, for over 25 years. For one, AI can make decisions for us that may not be in our best interests, if we let it.
AI’s potential impacts should be addressed in the contexts where they belong. Acceptable use, student privacy, and academic integrity policies immediately come to mind. But don’t stop there. Given how far-reaching AI’s influence is on everything in our culture and workplaces, from commerce to mental health, we ought to scan every other policy to assess what could be affected. For example, AI is fast becoming an expert diagnostician. Your Screening for Dyslexia policy could reference its likely benefits, medical privacy issues, and the potential liability and remedies for misdiagnoses.
Essential Questions on AI Policy
As you craft recommendations, guidelines, and policies to support your school community, ask your team:
Will they help stakeholders…
- choose AI tools based on their (evidence-based) capacity to positively impact learning?
- implement AI practices that protect student and staff privacy, rights, and well-being?
- monitor and evaluate the effectiveness of AI usage?
- be transparent and open about their use of AI?
- use AI ethically and be accountable for the consequences of misuse and policy violations?
Words Matter
When choosing the best descriptions of AI use, don’t fall into the trap of referring to specific technologies that may not work the same way, or even exist, tomorrow. Instead, describe the kinds of applications staff and students are using AI tools for (for better or worse). For example, set parameters for the use of generative video apps.
Instead of writing an AI policy, write AI into policy.
Focus on the needs to be met. Augment and strengthen existing policies. Put AI in its proper context. For example, equity policies are designed to ensure fair access to learning, but not all AI tools are equally accessible. Instead of describing one tool that may be free and available today but not tomorrow, articulate how any kind of generative AI tutor can be used to personalize learning (and how that should be executed).
Here are examples of add-in language options regarding AI in related policies:
If there is any reason to create a standalone AI policy, it would be to define terms and concepts and to direct the school community to the other policies affected. For example, NJ school law firm Strauss Esmay suggests this new introductory language:
The Board of Education recognizes the use of artificial intelligence (AI) may result in increased and enhanced learning opportunities for students in the school district. For the purpose of this Policy, “AI” means all types of generative AI technologies that create new content or outputs from a prompt to produce text, images, videos, or music. For the purpose of this Policy, “AI tools” means software applications and platforms that utilize AI technologies to perform specific tasks and solve problems that typically require human intelligence.
The Board recognizes the potential of AI tools to enhance and transform a student’s educational and co-curricular experience in the district. However, AI tools are not inherently knowledgeable and are trained from large amounts of data collected from various sources. Outputs generated by an AI tool may be inaccurate, inappropriate, or incomplete.
Beyond Policy
This work is about more than making sure kids don’t cheat on assignments. How will our efforts to define AI’s role in education help respect and preserve human dignity? How will they improve our culture, communities, and planet? And, by differentiating where artificial intelligence can and can’t (even should and shouldn’t) augment human intelligence, how can we let it lead us to a better appreciation of human capacity?
We need more than a memo. We need your leadership.
Banner image generated in Google Gemini by Marc Natanagara, October 2024
When duplicating this post in any form, please be sure to include the attribution in italics above.
When duplicating an infographic, be sure to include any attributions within or adjacent to the image.
©2024 Marc Natanagara, Ed.D. All rights reserved. Reprinted with permission.
Articles, services, and other resources accessible at authenticlearningllc.com
Resources and References
UNESCO: AI and Education guidance for policy makers
World Economic Forum 7 Principles
US Department of Education, Office of Educational Technology AI use guidance
National Artificial Intelligence Initiative Act of 2020
White House guidance on AI policy
National Conference of State Legislatures AI 2024 Legislation
NJDOE Department of Innovation questions for consideration
NJ School Boards Association resources
Peninsula (Washington) School District principles and beliefs statement
Strauss Esmay Policy 3265 Use of Artificial Intelligence Systems and Tools
Strauss Esmay Policy 3265 Acceptable Use of Generative Artificial Intelligence (AI)
TeachAI policy and guidance for schools
www.esparklearning.com/blog/ai-school-district-acceptable-use-policy