NYC Teachers: 3 Days to Comment on the AI Traffic-Light Rules

NYC Public Schools' new traffic-light AI policy bars AI from grading and IEP development. The public comment window closes Friday, May 8. Here's how to file in 10 minutes.

If you’re a NYC public school teacher who has quietly used ChatGPT to draft an IEP goal, generate a grading rubric you then tweaked, or translate a parent letter rather than wait three days for the language coordinator — the new rules now have a name for what you did. Some of it is yellow light. Some of it is red.

You have until Friday, May 8 to file a public comment before the rules become the citywide playbook in June.

Parents are filing in numbers. So are principals. Teachers are quiet. That’s the gap this post is trying to close.

What Just Changed

On March 24, 2026, NYC Public Schools — the largest district in the country, 78,000 teachers — released its preliminary AI guidance. (Chalkbeat)

The framework is a traffic light. Green for approved uses. Yellow for “professional judgment essential.” Red for “never, no exceptions.”

Chancellor Kamar H. Samuels framed it in his foreword:

“By releasing this guidance, we aim to set a clear standard for innovative, education-focused, equity-centered AI adoption across our entire school system. I’m proud that this has been a community-driven effort, including input from over 1,000 stakeholders.” (NYC DOE)

UFT President Michael Mulgrew said AI “must be used wisely” and cannot replace the “human connection between educators and students.” (GovTech)

A 45-day public comment window opened the same day. It closes May 8. The full Playbook lands in June.

Here’s what each light actually covers — pulled directly from the official NYC DOE document, not paraphrased from press coverage.

The Three Lights, Decoded

🔴 Red: Never, No Exceptions

| What’s prohibited | The rule references |
| --- | --- |
| Decisions about students — placement, discipline, eligibility, promotion, graduation, program access | CR A-101, A-443, A-501 |
| IEP and 504 plan development | IDEA (federal), Special Ed SOPM, CR A-710 |
| Grading and assessments — “the educator of record determines what a student knows. AI-generated data is advisory only” | CR A-501 |
| Behavioral monitoring and student surveillance | CR A-443, Students’ Bill of Rights |
| Counseling, crisis intervention, therapeutic support | CR A-411, A-755 |
| Algorithmic placement that can’t be overridden by educators | CR A-101 |
| Letting student data train AI models or be sold | Ed Law §2-d, FERPA, COPPA |

This is the list teachers need to read line-by-line. If you’ve drafted IEP goal language using ChatGPT — even as a starting draft you heavily edited — that’s red. If you’ve used AI to suggest behavioral interventions or counseling responses, that’s red. If you’ve fed student PII into a tool not on the ERMA-approved list, that’s a privacy violation, not just a yellow.

🟡 Yellow: Professional Judgment Essential

| What needs review before student-facing use | The condition |
| --- | --- |
| Surfacing patterns in student/school data | Educator interprets findings with knowledge of each student |
| Critical communications via AI translation | Must be reviewed and approved by a qualified linguist before distribution |
| Translations and transadaptations of bilingual instructional materials, accommodations, scaffolds | Reviewed by certified bilingual / ENL teachers and IEP team members |
| Student use of AI for research, exploration, creative projects | Educator guidance + critical evaluation + age-appropriate context required |

The Spanish-translation case is the one most teachers will trip on. A parent letter translated by AI and sent home is yellow — and the official rule is “all translations must be reviewed, edited, and approved by a qualified linguist prior to distribution.” (NYC DOE)

That’s a much higher bar than what most schools actually do today.

🟢 Green: Approved, Encouraged, Supported

| What’s good to go | The catch |
| --- | --- |
| Brainstorming lesson ideas, approaches, unit planning | Aligned with intellectual property guidance |
| Drafting and refining communications on any topic | Human review and ownership required before distribution |
| Scheduling, formatting, summarizing non-sensitive information | |
| Synthesizing operational data for resource planning | NYCPS Fiscal Policy applies |
| Translation of non-critical school communications | Human reviewer required (or AI-generated disclaimer + clarification path) |
| Creating accessible materials for families | Section 504, ADA, IDEA standards apply |
| Educators’ own professional development and research | |

Green is wider than people think. Lesson planning, your own PD reading, weekly newsletter drafts — all approved when you use ERMA-cleared tools and own the final output.

How to File a Comment in 10 Minutes

The comment URL is straightforward:

https://survey.alchemer.com/s3/8751093/Guidance-on-Artificial-Intelligence

The form is open to any stakeholder. The Chancellor’s foreword explicitly invites teachers. (NYC DOE)

What to do when you open it:

  1. Identify yourself as a NYC public school teacher. If there’s a role dropdown, pick teacher. In open-text fields, name your school and borough — it gives your comment more weight than anonymous feedback.
  2. Pick one or two specific items from the red/yellow/green tables above. Generic “I support / I oppose” comments are tallied but rarely shape final language. Specific ones — “the yellow-light requirement that ALL translations be reviewed by a qualified linguist will block routine parent communications in schools without ENL staff” — get cited in the final playbook.
  3. Use the open-response box to describe your actual workflow. If you draft IEP goals with AI as a starting point and then heavily revise, say so. If you’ve used Google Translate for a parent text and now realize the policy reframes that as yellow-light, name it. The DOE’s stated goal is clarity. Your real-life examples are the input that produces clarity.
  4. End with the single change you’d ask for. Something concrete. “Add ‘first-draft IEP goal language’ to the yellow-light category with an explicit human-rewrite requirement, instead of red-banning it” is a comment a policy writer can act on. “I disagree with red light” is not.

That’s the 10 minutes. Stop the timer.

Where Each NYC Educator Stands Right Now

If you’re a classroom teacher: The most consequential question is whether you’ve been doing IEP draft assistance, grading rubric generation, or behavioral-trend identification — all of which are now red or hard yellow. The policy doesn’t punish past use; it sets a forward standard. But filing a comment is the only way the final June playbook reflects what teachers actually need from AI versus what looks safe on paper.

If you’re an instructional coach or curriculum coordinator: You’re the bridge between the rule and the day-to-day. Your comment is the one place the DOE will hear “the ENL teacher review requirement assumes every school has bilingual staff, and 30% of mine don’t.” Specifics about staff capacity belong in your filing.

If you’re a principal or AP: The principals’ union has already pushed back publicly. As parent advocate Leonie Haimson summarized: “NYC Principals union agrees: NYC Schools AI guidance hugely deficient: ‘we provided 40, 50 questions that we have no idea the answers to these questions — and neither did they.’” (Class Size Matters resolution) If you have specific questions you want answered before June, file them now.

If you’re a UFT chapter leader: The union has a public position from Mulgrew, but field-level voice is missing from the comment record. Your chapter’s specific concerns about workload (every translated newsletter now needs a qualified-linguist review?) belong in the survey.

If you’re a parent reading this: The Parent Coalition for Student Privacy has filed a 9-page critique calling the guidance inadequate and asking for a moratorium. (Class Size Matters/PCSP critique) Their position is that the rules aren’t strong enough on privacy — not that AI shouldn’t be used. Your comment can argue either way; what matters is that the comment is filed.

What This Doesn’t Fix

A few honest limits worth naming.

ERMA approval is opaque. The rules require teachers to use ERMA-approved tools for any sensitive task. The NYC DOE has not published a comprehensive public list of which tools are approved. Multiple privacy advocates flagged this gap directly:

“The DOE AI guidance provides no clarity or transparency about which AI products can be used with students, or those that have gone through the DOE privacy vetting process known as ERMA.” — Parent Coalition for Student Privacy

So even if you want to follow the rules, you may not know which tools count. Note this in your comment.

No district-wide enforcement mechanism is published. The June playbook is supposed to land with implementation guidance. As of today, the practical answer to “what happens if I use AI for a red-light task” is unclear — likely a school-level conversation rather than a citywide consequence. That ambiguity itself is worth commenting on.

The DOE explicitly acknowledges gaps. The guidance document itself lists items “under development”: grade-band specifics for K-5 vs 6-8 vs 9-12, bias and equity review of AI tools, biometric/behavioral data governance, homework and academic integrity guidance, managed-tools-vs-personal-accounts rules. (NYC DOE) If your most pressing question falls in one of those buckets, your comment is what tells the DOE which gap to close first.

This is NYC only. If you teach in Yonkers, Buffalo, or any other district, the playbook here is a preview, not your rules. But the framework — traffic light, public comment window, ERMA-style approval list — is the model other large districts are watching. LAUSD, Chicago, and Houston have softer guidance and no comparable codified red list. (Pursuit policy roundup) What NYC writes next month travels.

The Bottom Line

The Friday deadline isn’t symbolic. NYC’s DOE has said the June playbook will be “informed by” the comments received. With 1,000+ stakeholders already consulted before the draft, the comment window is the second-and-final input cycle. Parents and principals are filing. The classroom voice — the one that knows whether the IEP red-light actually fits how special-ed teams write goals, whether the translation yellow-light is workable in 30%-bilingual-staff schools, whether the grading red-light is precise enough to distinguish AI-feedback from AI-grading — is missing.

You have a few minutes between dismissal and dinner. Use them.

For the broader question of which AI uses are safe across professions where stakes are high — legal, medical, education — our ChatGPT Privilege & Safe Workflow course walks through the same red/yellow/green decision pattern that the NYC DOE just adopted. AI for Teachers and Educators covers the daily green-light use cases — lesson planning, parent communication drafts, accessible material generation — that the new rules explicitly support.

Open the survey. File the comment. Tell the policy writers what you actually do.

