Friction]

An AI-supported assessment platform that makes student thinking visible.

01 — The problem

Educators see the product.
They never see the process]

Generative AI has irrevocably changed how students work. Universities now ask for "AI reflections" — but the reflection itself can be fabricated by the same tool that wrote the essay.

[ sidenote 01 ]

Educators receive the final essay and nothing else — the process is invisible.

[ sidenote 02 ]

Student over-reliance on GenAI — using it to substitute for engagement rather than enhance it — has become a recognised hurdle in higher-ed assessment.

[ sidenote 03 ]

The window for building assessment systems that meaningfully evaluate AI-assisted learning is closing — fast.

Many educators are moving from prohibition toward integration — embedding AI into assessment to prepare students for an AI-literate job market.

But the shift creates a validity problem. Instructors receive the final essay and nothing else. They cannot tell whether a student engaged with the ideas or simply delegated cognition to a machine.

Assessment systems built only around final products cannot answer the question that now matters most: how was this made?

¹ Friction is the resistance between intention and insight. We think that's where learning happens — and where assessment should look.

02 — The solution

Three agents.
One visible record of thinking.

Friction is a web-based workspace that makes student–AI collaboration visible, structured, and pedagogically meaningful. A layered AI architecture observes the process and reports on it — without surveilling the student.

Layer 01 · live

Task Assistant

The student's primary workspace — a ChatGPT-style interface embedded beside a working document. Every prompt, response, paste event, and timing signal is captured as it happens.

Role · Workspace
Posture · Permissive
Captures · Every event
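The Layer 01 capture record might look something like the sketch below — a minimal TypeScript model of an append-only event trace. The event kinds and field names here are illustrative assumptions, not Friction's actual schema.

```typescript
// Hypothetical shape of a Layer 01 capture record (names are illustrative).
type WorkspaceEvent =
  | { kind: "prompt"; text: string; at: number }   // student → AI
  | { kind: "response"; text: string; at: number } // AI → student
  | { kind: "paste"; text: string; wordCount: number; at: number }
  | { kind: "edit"; chars: number; at: number };

// Append-only trace: the workspace only ever appends, never rewrites.
const trace: WorkspaceEvent[] = [];

function capture(event: WorkspaceEvent): void {
  trace.push(event);
}

capture({ kind: "prompt", text: "draft me 3 paras on duty of care for AI", at: 0 });
capture({ kind: "paste", text: "In recent years, courts have...", wordCount: 312, at: 41_000 });
```

A discriminated union like this lets later layers pattern-match on `kind` without losing type safety, and an append-only array mirrors the "sealed record" handed to Layer 03.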
Layer 02 · ambient

Companion Peer

An AI study partner — not an authority figure. It observes the full workflow and intervenes at pedagogically significant moments, asking the student to explain, verify, justify, or rethink.

Role · Study partner
Posture · Reflective
Policies · 13 configurable
Layer 03 · on submit

Friction Report

On submission, a third agent synthesises the entire interaction record into a criteria-aligned report for the educator. Engagement quality, paste behaviour, working time, process depth — all in one published artefact.

Role · Evidence
Posture · Criteria-aligned
Output · Report document
friction.ai/workspace/law5104 — essay-draft-02
The contested duty of care in algorithmic decision-making
In recent years, courts have increasingly grappled with whether the common-law duty of care extends to algorithmic decision-making systems deployed by public authorities.
The standard test articulated in Caparo requires foreseeability, proximity, and considerations of fair, just, and reasonable imposition — a structure that maps awkwardly onto opaque algorithmic systems, which operate at a scale and speed beyond traditional duty-bearers.
As established in case law, AI systems must be held accountable under existing frameworks.
The difficulty, as Turner (2019) identifies, lies in locating the defendant
Layer 01 · Task Assistant · ● live
draft me 3 paras on duty of care for AI
Here are three paragraphs on the contested duty of care in algorithmic decision-making, drawing on Caparo...
paste ↵
Layer 02 · Companion Peer — intervention
Trigger · paste > 200w · Policy 04/13
Fig. 01 — Companion Peer intervening at a paste event > 200 words. Policy 04/13 · reflection-laundering.
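The intervention shown in Fig. 01 can be sketched as a simple threshold policy. This is an illustrative reconstruction only — it assumes Policy 04 fires when a single paste exceeds 200 words; Friction's real policy engine is not shown here.

```typescript
// Illustrative sketch of one observer policy: intervene on oversized pastes.
interface PasteEvent {
  wordCount: number;
  at: number; // ms since session start
}

interface ObserverPolicy {
  id: string;
  label: string;
  shouldIntervene(event: PasteEvent): boolean;
}

// Policy 04/13 ("reflection-laundering") modelled as a 200-word paste threshold — an assumption.
const policy04: ObserverPolicy = {
  id: "04/13",
  label: "reflection-laundering",
  shouldIntervene: (e) => e.wordCount > 200,
};
```

Under this sketch, the 312-word paste in Fig. 01 would trigger the Companion Peer, while a short quotation would pass silently.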
03 — How it works

Two flows.
One continuous record.

Friction runs alongside existing assessment, not on top of it. The student experience stays familiar; the educator gets something they've never had before.

A · Student flow

01 · Writes

Student drafts in the workspace

Essay on one side, AI beside it. Prompts, pastes, and edits captured in real time.

02 · Pauses

Companion Peer intervenes

At pedagogically meaningful moments, the Peer asks the student to explain, verify, or rethink.

03 · Justifies

Student responds in their own words

The response is logged — not graded — as evidence of engagement with the material.

04 · Submits

Final submission closes the record

The full interaction trace is sealed and passed to Layer 03 for synthesis.

B · Educator flow

01 · Configures

Educator sets policies per assessment

Thirteen observer behaviours — copy-paste detection, reflection laundering, prompt injection, answer extraction, and more.

02 · Receives

Friction Report, not just a grade

A criteria-aligned account of process: working time, paste events, intervention responses, scaffold retention.

03 · Reads

Evidence is mapped to rubric

Each learning objective maps to observed behaviour — or, visibly, to "Not Observed".

04 · Acts

Formative, not forensic

Feedback the student can act on. Integrity flags where warranted, guidance where helpful.
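The "evidence is mapped to rubric" step above could be modelled along these lines — a hedged sketch in which the criterion names and evidence labels are invented for illustration, not taken from Friction's rubric format.

```typescript
// Sketch: map each rubric criterion to observed evidence, or mark it "Not Observed".
interface Evidence {
  criterion: string;
  events: string[]; // e.g. intervention responses, verification steps
}

function mapToRubric(
  criteria: string[],
  observed: Evidence[],
): Record<string, string[] | "Not Observed"> {
  const byCriterion = new Map<string, string[]>(observed.map((e) => [e.criterion, e.events]));
  const report: Record<string, string[] | "Not Observed"> = {};
  for (const c of criteria) {
    report[c] = byCriterion.get(c) ?? "Not Observed";
  }
  return report;
}

const rubricView = mapToRubric(
  ["02 · originality of analysis", "04 · engagement with sources"],
  [{ criterion: "02 · originality of analysis", events: ["intervention-response-03"] }],
);
// rubricView["04 · engagement with sources"] === "Not Observed"
```

The key design point is that absence is explicit: a criterion with no matching evidence surfaces as "Not Observed" rather than disappearing from the report.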

04 — The Friction Report

A published object,
not a dashboard]

We treat each report as an academic artefact — criteria-aligned, citable, and designed to be read.

The Friction Report arrives with the submission. It doesn't replace the essay — it sits beside it, surfacing the process the essay conceals.

Generous typography, plain language, and a structure educators already recognise from journal articles. No scores. No leaderboards. No surveillance theatre.

  • 01 · Executive summary — process in plain language
  • 02 · Criteria-aligned evidence panel
  • 03 · Working-time and engagement metrics
  • 04 · Intervention log with student responses
  • 05 · Paste behaviour and scaffold retention
  • 06 · Flagged observations (where warranted)
Friction Report · Vol. 01 · LAW 5104 — A. Reed

Friction] Report

A criteria-aligned account of process, not product.

Executive summary

Submission completed in 14 minutes of active working time across 3 sessions. 2,140 words pasted from AI across 7 events; 58% of final text retains the AI scaffold unchanged. No evidence of source verification or argument development.

[ flag ]
Criteria 02 (originality of analysis) and 04 (engagement with sources): Not Observed.
Working time · 14 min
Paste events · 7
AI scaffold retained · 58%
Peer interventions · 4
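The "AI scaffold retained" figure above could be computed along these lines — a deliberately naive sketch that counts the share of final-draft sentences appearing verbatim in pasted AI text. Friction's actual retention metric is not specified in this page.

```typescript
// Naive sketch: share of final-draft sentences that appear verbatim in pasted AI text.
function scaffoldRetention(finalText: string, pastedTexts: string[]): number {
  const sentences = finalText
    .split(/(?<=[.?!])\s+/) // split on sentence-ending punctuation
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
  if (sentences.length === 0) return 0;
  const pasted = pastedTexts.join(" ");
  const retained = sentences.filter((s) => pasted.includes(s)).length;
  return retained / sentences.length;
}
```

A real implementation would need fuzzier matching (students lightly edit pasted text), but even this crude version distinguishes wholesale retention from genuine rewriting.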
05 — Questions

Is this surveillance?

No. Friction captures interaction data inside the assessment workspace only, with the student's informed consent. There is no camera access, no keystroke logging outside the workspace, and no cross-site tracking. The Companion Peer acts as a study partner, not an invigilator — and students see exactly what the educator will see, and when.
06 — Demo

Five days.
One winning prototype.

Friction won the challenge track at the 2026 EduX Oceania Hackathon — designed, built, and deployed in five days. The demo below was the winning submission.

Fig. 02 — Live walkthrough · 2026 EduX Oceania Hackathon submission. Challenge track · winner.

See clearly.
Act meaningfully]

We're aiming to open pilots soon with a small number of law schools and AI-forward faculties. One semester, one assessment, one subject.

Log in to the live prototype · or reach out · isam.elsheikh@monash.edu