
When AI Provides Feedback on Student Work


I gave my third graders one question with no scaffolding, as part of a decision-making unit.

Most of my students have never used a generative AI system. Their intuitions formed before any exposure to one.

A new study finds AI models affirm users 49 percent more often than humans do, even in cases of deception and illegality.

I teach a decision-making unit to my elementary students. The point is not to tell them what to decide but to teach them how to weigh decisions. Last week, I gave them a dilemma and asked them to reason through it on their own. No warm-up. No framing. No hint of which side I take. I dropped the question on the whiteboard, gave each of them a Post-it Note, and sat down.

Should the teacher be allowed to use AI to give you feedback on your writing? What decision would you make here as a kid?

Most of my students have never used a generative AI system. They are 8 and 9 years old. They know the word AI because it has entered everyday conversation. Adults talk about it, and some parents experiment with it alongside their kids. They know what AI is from the surrounding culture, but they don't understand it conceptually. So how did they answer this question, and why does it matter for understanding children's natural intuitions?

Thoughts on a Post-it

One student wrote this.

Techers are not allow to use AI because that is not what AI suppose to do. The techers do. AI could make mistakes and AI could write something not conected.

Read it again. A third grader who has never used ChatGPT intuited two of the central concerns in the academic literature on large language models. One, hallucination. Two, a lack of grounding in the context of the student's actual work. This child did not learn this from me. I did not warn the class about AI errors. That was not the lesson. The student reached for it alone, in pencil, on a Post-it, from some combination of common sense and theory of mind.

Another student, a girl, wrote this.

If teacher are using AI why dont students just use it themselves.

She did not present this as a complaint. She presented it as an obvious consequence. If the adult gets to use the tool to do the work, the child should get to use it to do the work. The rule applies in both directions, or it is not a rule. An 8-year-old reached for the principle of consistency that the entire ed-tech industry is currently failing to apply. She did it in one line.

What the Industry Shipped the Same Week

Earlier this month, Instructure, the company that owns Canvas, rolled out an AI teaching agent inside the learning management system that runs roughly 40 percent of higher education in North America. The agent generates rubrics, reviews discussions, aligns content, and produces personalized feedback by default. It is free through June 30. After that, it moves into the paid tier.

The architect of the tool, Zach Pendleton, drew his red line at AI agents grading the work of other AI agents. That scenario he called "dystopian," and he built guardrails to stop it. Yet the tool he actually shipped generates rubrics, reviews discussions, and produces personalized feedback. In the same interview, he said, "The technological ball is not staying there." He is telling you the red line is moving. He is also the person moving it.

Now add this to the Anthropic education report from last summer, which found that 48.9 percent of the grading conversations professors had with Claude were automation-heavy. The company flagged this as concerning. Grading was the task at which educators rated Claude least effective. They automated it anyway. This is exactly what my student said would happen.

What the Feedback Actually Does

A study published in Science last month tested 11 leading AI models and found they affirmed users' actions 49 percent more often than humans did, including in cases where the users described deception, illegality, or harm (Cheng et al., 2026). Then, across three experiments with 2,405 total participants, the researchers tested how people responded to these models, including having them discuss real conflicts from their own lives. A single interaction with these models made participants more convinced they had been right and less willing to take responsibility or revise. The participants could not detect the sycophancy. They rated the affirming responses as higher quality, more objective, and more trustworthy. They wanted to use the model again.

The feature that distorted judgment was the same feature that made people prefer it.

Now let’s consider how that finding could apply to "personalized feedback." A teacher who is doing her job introduces friction at the exact moment a student's reasoning goes wrong. A system tuned to be helpful cannot. It selects against friction and chooses affirmation. Canvas did not have to automate grading for the environment to change. It only had to generate the feedback layer.

My third grader wrote that AI could write something “not connected.” The Stanford researchers tested it on adults discussing their own lives. Same finding. Different vocabulary.

The Gap Is Not Wisdom

I want to be careful here, because the easy version of this post ends with "maybe the children are wiser than we are." They are not. Pendleton has thought about this problem longer than they have. He has said the right things out loud. The gap between my third graders and the AI builders is not wisdom.

The point I want to drive home is that 8-year-olds are reasoning about the teacher-student relationship from outside an industry that has already decided what the product is going to be. Kids have no investors to please. They have no incentive to talk themselves into believing AI guardrails are enough. So they just reasoned, "No, bad decision," and then went off to music.

But the builders do have every incentive. So they ship a piece of software into 40 percent of higher education, carrying a feature the chief architect himself described as dystopian when applied one layer further. The same children who answered my question will be inside those systems in a few years. They won’t be asked whether their professors should use AI. It will already be the default.

I am not sure the reasoning they brought into my classroom will survive the reasoning the industry is about to install around them. That’s the discomfort I am actually sitting with.

Bent, D., Handa, K., Durmus, E., Tamkin, A., McCain, M., Ritchie, S., Donegan, R., Martinez, J., & Jones, J. (2025, August 26). Anthropic education report: How educators use Claude. Anthropic. https://www.anthropic.com/news/anthropic-education-report-how-educators-use-claude

Cheng, M., Lee, C., Khadpe, P., Yu, S., Han, D., & Jurafsky, D. (2026). Sycophantic AI decreases prosocial intentions and promotes dependence. Science, 391(6792), eaec8352. https://doi.org/10.1126/science.aec8352

Palmer, K. (2026, March 23). Canvas unrolls AI teaching agent. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intellig…

