A Survey and Task Force: AI’s Future at Stevenson
- Kyle Figueroa-Rhudy
In the US, around 60% of K-12 teachers and 86% of students admit to using AI for school-related tasks, statistics that highlight AI's growing significance in education. On average, teachers nationally report saving 6 hours per week by using AI to generate lesson plans and varied classroom activities that reinforce the curriculum. Meanwhile, students report that generative AI enables efficient, personalized studying and improved accessibility.
It is true: AI can absolutely improve your educational experience, but the benefits depend on how it is used. Experts differentiate between executive help, where AI does the work for you, and instrumental help, where it is used to deepen learning.
A USC study found that professors who encourage responsible AI use tend to foster students who prefer instrumental use, protecting the critical thinking, memory, and creativity that executive use would dull through cognitive offloading. We see this teacher-endorsement effect at Stevenson: the AI Use survey found that Flint, used primarily for instrumental help such as Socratic tutoring, teacher-assigned interactive learning, and guided study, was the second-most-used AI tool among students.

Though it trailed the most popular tool, ChatGPT, by nearly 45%, Flint's popularity highlights the importance of teacher guidance in fostering healthy AI use. The challenge in education, then, is not figuring out how to keep students from using AI, but teaching them to use it effectively.
This challenge drives Ms. Ally Wenzel and Mr. Elijah Colby's AI task force. Like the earlier Phone Policy Committee, the task force aims to integrate student voices into Stevenson's AI policy discussion.
The committee kicked off with the AI Use survey that we students answered in advisory on January 26. Ms. Wenzel suggests Stevenson's AI policy direction will be integrative and forward-looking, acknowledging that "AI is no longer a tool; it is an environment. Stevenson must adopt a 'human-in-the-loop' approach, focusing on the critical thinking and creative synthesis that AI cannot replicate."
Like a Tesla that uses AI to make driving decisions while allowing the driver to take control at any time, a human-in-the-loop (HITL) approach offloads laborious and menial tasks to AI while the human guides, reviews, and corrects.
In practice, an HITL approach in education keeps students and teachers firmly in control of the thinking; AI simply serves to optimize the learning and planning process. Essentially, AI use in this framework should enable students and teachers to dive deeper into material rather than replace human skills.
For teachers, this might mean using AI to flag basic grammatical errors or lapses in logic in early drafts of writing, streamlining the initial round of feedback and freeing teachers to give higher-impact comments on conceptual clarity and argumentative strength. For students, AI platforms like Flint can probe reasoning, ask follow-up questions, and challenge assumptions, enabling deeper thinking and a fuller understanding of a given topic.
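For the programmers among us, here is a minimal sketch of what that division of labor could look like in code: the AI proposes, the human disposes. The function names and placeholder flags below are hypothetical illustrations of the HITL pattern, not a description of Flint or of any tool Stevenson actually uses.

```python
# A minimal human-in-the-loop (HITL) sketch: the AI flags issues in a
# draft, and a person approves or rejects each one. All names here
# (ai_flag_draft, human_review) are hypothetical.

def ai_flag_draft(draft: str) -> list[str]:
    """Stand-in for an AI pass that flags surface-level issues in an
    early draft. A real version would call a language model; this
    placeholder just returns canned flags."""
    return [
        "Possible run-on sentence in paragraph 2",
        "Claim in paragraph 3 lacks supporting evidence",
    ]

def human_review(draft: str) -> list[str]:
    """The human stays in the loop: every AI flag is accepted or
    rejected by the teacher before it shapes any feedback."""
    approved = []
    for flag in ai_flag_draft(draft):
        choice = input(f"AI flagged: {flag!r} -- keep? (y/n) ")
        if choice.strip().lower() == "y":
            approved.append(flag)  # the teacher, not the AI, decides
    return approved

if __name__ == "__main__":
    kept = human_review("An early student draft...")
    print(f"{len(kept)} flag(s) approved for the first feedback round.")
```

The key design choice is that nothing the AI suggests takes effect without a human decision, which is exactly what keeps the thinking in human hands.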
Whether or not you agree with an HITL approach, now is the time to speak up: join the AI Task Force and help decide the future of Stevenson's AI policy. What makes the stakes so high in education is that today's students are, and will remain, the greatest consumers of artificial intelligence. By modeling norms of transparency and ethical use, schools will play a significant role in deciding whether the future market favors AI that prioritizes humanity or AI that pushes us out entirely.