AI in Schools: Policing It Will Break You. Designing With It Won’t.

If you’re a teacher in 2025, you’ve probably had one of these moments.

You’re marking something that reads a bit too polished.

You’re watching a pupil “magically” leap two writing grades overnight.

You’re in a meeting where the conversation turns into “We need to catch them”.

That instinct is understandable.

It’s also a trap.

Because policing AI is a workload strategy. And it scales horribly.

If your plan is detection, you’ve just volunteered for an arms race you cannot win. Your evenings disappear, your trust with pupils gets brittle, and the quality of learning still doesn’t improve.

There’s a better move.

Stop treating AI as something to catch. Start treating it as something to design around.

What changed (and why this isn’t a niche issue)

Microsoft AI published a large “usage at scale” report analysing 37.5 million de-identified Copilot conversations from January to September 2025.

Two details matter:

First, this is consumer usage, not school or enterprise use. Microsoft explicitly says they exclude enterprise traffic and anyone signed in with a commercial or educational enterprise account.

Second, they’re not claiming learning outcomes. They’re showing behaviour patterns. The logs are processed with “eyes-off” classifiers, meaning no human researcher reads conversation content. 

So what does that have to do with your classroom?

It tells us AI isn’t “a school tool”. It’s an everyday tool. Learners don’t experience a neat boundary between “home tech” and “school learning”. We do, because we write policies. They don’t.

And when Microsoft ranked why people use Copilot, “Learning” was the 4th most common intent (after searching, advice, and creating). 

That’s not fringe. That’s mainstream.

Here’s the bit schools keep missing: context drives AI behaviour

One of the most useful parts of Microsoft’s report is the focus on when and how people use AI. Their headline finding is that usage depends fundamentally on context and device type. 

On desktop, work and tech dominate during business hours. On mobile, the pattern shifts towards more personal topics. 

In school terms, that should ring a bell.

A lot of AI use isn’t late-night chaos. It’s “sit down, open the laptop, get help, keep going”. It looks like routine support.

Which means the real question for schools is not “how do we stop it”. It’s this:

How do we make sure AI support strengthens thinking rather than replacing it?

The school problem that FE and instructional-design conversations don’t always touch: workload

Most of the internet debate about AI in education gets stuck in ethics.

In schools, the lived problem is simpler and more brutal: time.

If you rely on policing, you create more marking, more conflict, more meetings, more rewriting of tasks, and more uncertainty for staff.

Design is the alternative.

Design gives you routines pupils can follow, boundaries you can explain, and evidence of learning you can actually assess.

The PPA Buddy model: Decide, Design, Demand

Here’s a framework you can use across primary, secondary, and sixth form without needing a “whole-school AI curriculum” first.

1) Decide what must be independent

Not everything needs AI. And not everything should allow it.

Be explicit about your “no-AI moments” and why.

These tend to be:
  • retrieval practice (so you can see what’s actually stored)
  • in-class checks for understanding
  • exam-style writing under timed conditions
  • any task where you’re assessing a specific taught method

This isn’t anti-AI. It’s clarity.

Pupils cope better with boundaries when the boundary makes sense.

2) Design one AI moment into the sequence

The mistake schools make is treating AI as “either banned or everywhere”.

Instead, choose a moment where AI support genuinely helps learning, and name it.

Examples of a useful “AI moment” are when pupils:
  • are stuck and need a hint, not an answer
  • need an explanation in different words
  • need more practice questions at the right level
  • need to test whether their reasoning holds up

You’re not handing over the lesson. You’re building a scaffold you control.

3) Demand the thinking trace

This is the part that stops everything turning into neat, empty output.

If pupils use AI, they must show the thinking that happened around it.

The simplest “thinking trace” works like this:
  • They commit to an attempt first.
  • They use AI for a specific kind of support.
  • They revise.
  • They explain what changed and how they checked it.

That last step is where learning becomes visible again.

What this looks like across phases (without making it a tech circus)

Primary: keep it teacher-led and build habits early

In most primary settings, the safest route is teacher-mediated use. That doesn’t make it weaker. It can make it better.

One of my favourite approaches is using AI as a discussion starter, not a pupil tool.

You generate two or three short explanations of the same concept, then the class decides which is clearest and why. You’re building language, metacognition, and the habit of judging quality.

You can also teach verification without any child accounts at all.

Show a short AI-generated answer next to a worked example or a trusted text. Ask pupils to spot what matches and what doesn’t. You’re quietly teaching something that will protect them later: confident writing is not the same as correct writing.

KS3: choose habits now or fight them later

KS3 is where pupils decide what AI is “for”.

If they learn it’s for shortcuts, you will spend KS4 trying to undo it. If they learn it’s for support while thinking, you get more independence, not less.

A clean routine is: first attempt, then a targeted hint, then improvement.

You can even put the hint type on the worksheet: “Ask for a worked example”, “Ask for one clue”, “Ask for the mistake in my method”. It normalises purposeful use and makes misuse easier to spot without turning you into a detective.

KS4 and KS5: raise standards by marking decisions, not polish

At GCSE and A level, the risk isn’t that pupils use AI. The risk is that we keep pretending they don’t.

For homework and non-exam practice, you can build a simple expectation: if AI was used, there’s a short appendix that shows what they asked for, what they changed, and how they verified it.

This does two things at once.

It stops the empty “perfect paragraph” problem.

And it gives you something far more assessable than tone and vocabulary: judgement.

Safeguarding and equity: the part that decides whether your approach is credible

Schools can’t copy the internet’s AI enthusiasm. We have to do this with safeguarding and fairness in mind.

Two non-negotiables:
  • First, no pupil personal data. No identifying details. No uploading student work into non-approved tools. Microsoft’s report is about consumer use, but school use has a different duty of care. 
  • Second, don’t bake in inequity. Some pupils have access at home, some don’t. Some have parents who can coach prompting, some don’t.

That’s another reason “policing” fails. It punishes the honest and rewards the hidden.

Design is fairer because you can build “thinking traces” that don’t require fancy access. Pupils can show reasoning, verification, and revision even when AI isn’t available.

If you’re still asking “Is it cheating?”, you’re already behind

Not because you’ve failed.

Because that question pulls you into the most exhausting version of this problem.

AI isn’t going away. What you can control is whether your classroom becomes a constant game of gotcha, or a place where thinking is visible and standards are clear.

Policing is a workload sink.

Design is a workload boundary.

Quick FAQs

Is using AI cheating?

Sometimes. If it replaces the thinking you’re trying to develop or assess, it’s a problem. If it supports thinking and the thinking is visible, it can be part of the learning design.

Won’t it make students lazy?

It can, if tasks reward output more than understanding. That’s why “show your thinking” structures matter.

What about primary pupils?

Primary can be teacher-led and still powerful. You can build verification habits and metacognition without giving children open access to tools.

What if my school policy is “no AI”?

Even then, you can teach AI literacy without tool use: evaluating reliability, spotting confident errors, comparing explanations, and talking openly about what pupils are encountering outside school.

Ready to stop dabbling and actually get good at this?

If you want to move from “I can use AI” to “I can design learning with AI”, that’s exactly what we do inside the PPA Buddy Skool community.

The AI Workload Revolution module closes the competent-to-confident gap. It’s not a prompt dump. It’s the practical shift from using AI to create tasks, to using it to design learning where pupils still have to think.

If you’re serious about protecting your time and keeping the bar high, join us.

Join the PPA Buddy Skool community

No fluff. No prompt hoarding. Just routines you can actually run next week.