Explainable AI · Assessment Intelligence

Explainable short-answer scoring

Run the dual-stage pipeline that powers this project: lexical alignment, semantic reranking, and rich evidence surfaced for every teacher–student pair.

Latency ~2.3s
Explainers SHAP · Drift
Deployments Vercel / Local

Pipeline snapshot

Stage I
Lexical alignment

Anchor extraction, edit distance, and semantic overlap establish a safe floor before reranking.
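Stage I can be sketched in plain JavaScript. This is an illustrative mock-up, not the actual `scorer.js` API: the function names (`levenshtein`, `anchorCoverage`) and the near-match tolerance are assumptions.

```javascript
// Classic Levenshtein edit distance via dynamic programming.
function levenshtein(a, b) {
  const m = a.length, n = b.length;
  const dp = Array.from({ length: m + 1 }, (_, i) => [i, ...Array(n).fill(0)]);
  for (let j = 0; j <= n; j++) dp[0][j] = j;
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[m][n];
}

// Anchor coverage: fraction of reference keywords found in the student
// answer, allowing one-character typos (edit distance <= 1).
function anchorCoverage(referenceAnchors, studentText) {
  const words = studentText.toLowerCase().split(/\W+/).filter(Boolean);
  const hits = referenceAnchors.filter(anchor =>
    words.some(w => levenshtein(anchor.toLowerCase(), w) <= 1)
  );
  return hits.length / referenceAnchors.length;
}
```

A coverage of 1.0 means every anchor from the reference appears (or nearly appears) in the student response; that floor keeps the reranker from undervaluing answers that are lexically faithful but phrased differently.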

Stage II
Semantic reranking

Concept clustering + drift scoring highlight where the student summary diverges.
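A minimal sketch of the drift-scoring idea, assuming sentence embeddings are already computed (the names `cosine` and `driftMatrix` are illustrative, not the real module API):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Drift matrix: 1 - cosine similarity for every pair of
// (reference sentence, student sentence) embeddings.
// High cells mark where the student summary diverges.
function driftMatrix(refEmbeddings, studentEmbeddings) {
  return refEmbeddings.map(r =>
    studentEmbeddings.map(s => 1 - cosine(r, s))
  );
}
```

Rows with no low-drift cell flag reference concepts the student never addressed; the demo renders this matrix as the semantic drift heatmap.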

What the live demo shows

  • Predicted grade with verdict narrative
  • SHAP bars, sentence attributions, anchor coverage
  • Semantic drift matrix, concept map, batch exports
  • 📈 Temporal learning analysis (multi-submission tracking)

Everything renders in-browser via scorer.js, the explainability modules, and the temporal drift analysis.

Live evaluation workspace

✍️ Live Scoring Demo

Paste a teacher reference and a student response. We will run the dual-stage pipeline and surface explainability artifacts instantly.

💡 Temporal Analysis: Submit multiple answers to see the learning trajectory, improvement scores, and consistency metrics.
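As a rough illustration of what the temporal metrics could look like (the actual formulas in the demo may differ; `temporalSummary` and the 0–100 score scale are assumptions):

```javascript
// Summarize a sequence of per-submission scores (assumed 0-100).
function temporalSummary(scores) {
  const n = scores.length;
  const mean = scores.reduce((a, b) => a + b, 0) / n;
  const variance = scores.reduce((a, s) => a + (s - mean) ** 2, 0) / n;
  return {
    improvement: scores[n - 1] - scores[0],     // net change, first -> last
    consistency: 1 - Math.sqrt(variance) / 100  // smaller spread = steadier learner
  };
}
```

For three submissions scored 60, 70, 80, the improvement is +20 with a high consistency value, i.e. steady upward progress rather than erratic jumps.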

📊

Fill the form to see explainable results.

Bulk evaluation

🚀 File Grade Runner

Automate grading for an entire cohort with CSV, or pair long transcripts with multi-student summaries via our XLSX workflow.

📁 Batch CSV Grading

Upload a CSV file containing columns like `question`, `reference_answer`, and `student_answer` for bulk processing.
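A minimal sketch of how rows in that schema map to grading jobs. This assumes no quoted commas inside cells; a production app would use a proper CSV library, and `parseGradingCsv` is a hypothetical name:

```javascript
// Parse a simple CSV (no quoted commas) into row objects keyed by header.
function parseGradingCsv(text) {
  const [headerLine, ...rows] = text.trim().split(/\r?\n/);
  const headers = headerLine.split(",").map(h => h.trim());
  return rows.map(line => {
    const cells = line.split(",");
    return Object.fromEntries(headers.map((h, i) => [h, (cells[i] || "").trim()]));
  });
}

const sample =
  "question,reference_answer,student_answer\n" +
  "What is osmosis?,Movement of water across a membrane,Water moving through a membrane";

const jobs = parseGradingCsv(sample);
// jobs[0] now holds { question, reference_answer, student_answer } for scoring
```

Each parsed row feeds the same dual-stage pipeline as the single-pair demo, so batch results carry the same explainability artifacts.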

📄

Drop CSV here or browse

📊 Script Evaluation (XLSX)

Grade multiple student summaries (.xlsx) against a long Meet transcript (.docx).

Teacher .docx

No file selected yet

Student .xlsx

No file selected yet