Prompt is real (not a placeholder)
What this checks: FRQ stems must not be placeholders like 'test prompt' or 'TODO'.
Why it matters: Placeholder prompts ship to students and make the course visibly broken.
How to fix: Replace the placeholder with the actual FRQ prompt.
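A minimal sketch of how such a check might work. The placeholder patterns below are illustrative only; the actual linter's list of known placeholder strings may differ.

```python
import re

# Illustrative placeholder patterns; a real checker would use the
# platform's own list of known placeholder strings.
PLACEHOLDER_PATTERNS = [
    re.compile(r"^\s*$"),                       # empty or whitespace-only stem
    re.compile(r"\btodo\b", re.IGNORECASE),     # 'TODO', 'todo: write prompt'
    re.compile(r"\btest prompt\b", re.IGNORECASE),
    re.compile(r"\blorem ipsum\b", re.IGNORECASE),
]

def is_placeholder_prompt(stem: str) -> bool:
    """Return True if the FRQ stem looks like an unfinished placeholder."""
    return any(p.search(stem) for p in PLACEHOLDER_PATTERNS)
```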
FRQ has a rubric
What this checks: Every FRQ must have a non-empty rubric.
Why it matters: Without a rubric, the AI grader has no scoring criteria and student responses cannot be graded.
How to fix: Author a rubric with point allocations and accepted answer paths.
Rubric uses criteria language
What this checks: The rubric should mention point allocations (e.g., '1 point') or scoring criteria language.
Why it matters: Free-form rubric text without explicit criteria is hard for the grader to apply consistently.
How to fix: Restructure the rubric into explicit criteria with point values.
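One way to detect criteria language is a small set of heuristic patterns. The patterns below are a sketch; the real check may recognize more phrasings than these.

```python
import re

# Heuristic patterns for scoring-criteria language; illustrative only.
CRITERIA_PATTERNS = [
    re.compile(r"\b\d+\s*(?:point|pt)s?\b", re.IGNORECASE),  # '1 point', '2 pts'
    re.compile(r"\bcriteri(?:on|a)\b", re.IGNORECASE),
    re.compile(r"\bfull credit\b|\bpartial credit\b", re.IGNORECASE),
]

def uses_criteria_language(rubric_text: str) -> bool:
    """Return True if the rubric mentions point allocations or scoring criteria."""
    return any(p.search(rubric_text) for p in CRITERIA_PATTERNS)
```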
Autograder URL is set
What this checks: Each FRQ must point to an autograder URL.
Why it matters: Without a grader URL, FRQ submissions go nowhere — students don't get scores or feedback.
How to fix: Wire the FRQ to a configured autograder.
Autograder URL is well-formed
What this checks: The autograder URL must use a valid scheme and not contain typos like the old /api/ prefix or doubled https://.
Why it matters: Malformed URLs silently fail to grade — the platform does not warn you.
How to fix: Verify the URL against the canonical grader endpoint format.
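A sketch of the URL validation, assuming standard-library parsing. The doubled-scheme and legacy /api/ checks mirror the typos named above; the canonical endpoint format itself is platform-specific and not validated here, and the hostname used in the usage example is hypothetical.

```python
from urllib.parse import urlparse

def autograder_url_problems(url: str) -> list[str]:
    """Return a list of problems with an autograder URL (empty list = OK)."""
    problems = []
    # Doubled scheme, e.g. 'https://https://grader...'
    if url.count("https://") > 1 or url.count("http://") > 1:
        problems.append("doubled scheme")
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        problems.append(f"invalid scheme: {parsed.scheme!r}")
    if not parsed.netloc:
        problems.append("missing host")
    # Legacy path prefix from the old grader routing.
    if parsed.path.startswith("/api/"):
        problems.append("legacy /api/ path prefix")
    return problems
```

For example, `autograder_url_problems("https://grader.example.com/api/frq")` flags the legacy prefix, while a clean URL returns an empty list.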
FRQ marked as required response
What this checks: The interaction element must have required="true" so students can't skip it.
Why it matters: Without required="true", students can advance without responding.
How to fix: Set the required attribute on the FRQ interaction.
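A sketch of the attribute check. It assumes a QTI-3-style `<qti-extended-text-interaction>` element without namespaces; the exact element name and namespace handling depend on how the platform authors its items.

```python
import xml.etree.ElementTree as ET

def interaction_is_required(item_xml: str) -> bool:
    """Check that the FRQ's interaction element carries required="true".

    Assumes the interaction is a <qti-extended-text-interaction>; adjust
    the tag name for other interaction types.
    """
    root = ET.fromstring(item_xml)
    interaction = root.find(".//qti-extended-text-interaction")
    return interaction is not None and interaction.get("required") == "true"
```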
Expected response length set
What this checks: The FRQ should declare an expected number of lines so the response box is sized correctly.
Why it matters: Without a sizing hint, students see a tiny box for a long-essay prompt and write less than they should.
How to fix: Add the expected-lines attribute matching the rubric's expected response length.
FRQ outcome declarations are canonical
What this checks: The FRQ should declare the canonical set of outcome variables (API_RESPONSE, FEEDBACK_VISIBILITY, GENERATED_FEEDBACK, SCORE).
Why it matters: Non-canonical outcomes prevent the grader from writing scores back or showing generated feedback to students.
How to fix: Migrate the FRQ XML to the canonical outcome declaration pattern.
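A sketch of how the canonical-outcomes check could compare declared identifiers against the set named above. It only inspects identifiers; a real migration would also verify each declaration's cardinality and base-type, which are not specified here.

```python
import xml.etree.ElementTree as ET

# Canonical outcome identifiers named by this checklist.
CANONICAL_OUTCOMES = {"API_RESPONSE", "FEEDBACK_VISIBILITY", "GENERATED_FEEDBACK", "SCORE"}

def declared_outcomes(item_xml: str) -> set:
    """Collect the identifiers of all <qti-outcome-declaration> elements."""
    root = ET.fromstring(item_xml)
    return {decl.get("identifier") for decl in root.iter("qti-outcome-declaration")}

def outcomes_are_canonical(item_xml: str) -> bool:
    """True iff the item declares exactly the canonical outcome set."""
    return declared_outcomes(item_xml) == CANONICAL_OUTCOMES
```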
Rubric is in the correct QTI location
What this checks: The rubric must live in <qti-rubric-block> inside <qti-item-body> per the QTI spec — not in metadata.rubric or metadata.modelAnswer.
Why it matters: Rubrics in non-standard fields are invisible to the platform's grading tooling and to graders who follow the spec — the rubric content might exist but won't be applied where it counts.
How to fix: Move the rubric content from metadata.rubric (or metadata.modelAnswer) into a <qti-rubric-block use="ext:criteria" view="scorer"> element inside <qti-item-body>, with one block per criterion.
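A sketch of the location check, using the `use="ext:criteria"` and `view="scorer"` attributes named above. The example item in the usage test shows the target shape; it assumes namespace-free XML, and the rubric wording in it is invented for illustration.

```python
import xml.etree.ElementTree as ET

def rubric_in_item_body(item_xml: str) -> bool:
    """Check that a <qti-rubric-block use="ext:criteria" view="scorer">
    lives directly inside <qti-item-body>."""
    root = ET.fromstring(item_xml)
    body = root.find("qti-item-body")
    if body is None:
        return False
    return any(
        block.get("use") == "ext:criteria" and block.get("view") == "scorer"
        for block in body.findall("qti-rubric-block")
    )
```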
FRQ uses autograding (preferred)
What this checks: The FRQ should be configured for autograding rather than manual/human scoring.
Why it matters: Manual scoring is allowed, but autograded FRQs scale better and give students faster feedback — autograding is preferred where feasible.
How to fix: If the FRQ is currently set to scoringType="manual" or requiresHumanScoring=true, consider building an autograder for it.
FRQ prompt is clear
What this checks: An AI reviewer confirms the prompt is unambiguous and well-specified (clear task verb, scope, expected response form).
Why it matters: Ambiguous prompts get a wide range of responses that the rubric can't fairly score.
How to fix: Tighten the prompt's task verb, add scope constraints, and specify response form (essay/list/diagram).
Rubric scores what the prompt asks
What this checks: An AI reviewer confirms the rubric criteria map to what the prompt asks (no orphan criteria, no unscored prompt parts).
Why it matters: Misalignment means students do what the prompt asks but get scored on something else.
How to fix: Walk through the prompt and rubric line by line; ensure each prompt part has a scoring path.
FRQs collectively serve the course goal
What this checks: An AI curriculum reviewer judges whether the FRQs in the course (taken together) actually work toward the course's named goal — and that no FRQ ships with broken/placeholder content.
Why it matters: Even if individual FRQs are fine, an off-target FRQ set or any single broken FRQ undermines the whole course.
How to fix: Re-author off-target FRQs to match the gap; fix or remove any broken FRQs.