Proposal for collaborative (competitive?) subtitling

I'm prompted to post this suggestion after discussions with Bruce Scharlau (thanks for seeding these ideas, Bruce!).

General principle: Students compete in teams to subtitle lecture recordings.

Goal: increase engagement with lecture content, improve accessibility (for hearing-impaired and English-as-a-second-language students), and improve the searchability of lecture materials.

Step by step breakdown:

  • the lecturer records and uploads video recordings of lectures each week
  • the audio track of each lecture recording is automatically segmented and timecoded (uweAudioSegmentation on GitHub)
  • each segment is roughly one paragraph in length, and a corresponding draft transcript is produced by speech-recognition technology (uweDNSBatchTranscription on GitHub)
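As a minimal sketch of the last step above, timecoded segments with draft transcripts could be rendered as a WebVTT subtitle file. The segment data here is made up for illustration; in practice it would come from the segmentation and transcription tools named above.

```python
def to_timestamp(seconds):
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    ms = round((seconds - int(seconds)) * 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def segments_to_vtt(segments):
    """Render (start, end, text) segments as a WebVTT subtitle file."""
    lines = ["WEBVTT", ""]
    for start, end, text in segments:
        lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        lines.append(text)
        lines.append("")  # blank line separates cues
    return "\n".join(lines)

# hypothetical machine-transcribed segments for one lecture
draft = segments_to_vtt([
    (0.0, 12.5, "welcome to week three of the module"),
    (12.5, 30.0, "today we cover hash tables"),
])
print(draft)
```

The un-punctuated lowercase text mimics raw speech-recognition output, which is exactly what the student groups would then correct.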


  • the cohort for a given module is divided into small groups (3-5 students per group)
  • groups are allocated 'correction duty' on rotation and interact with the collaborative transcription tool (uweNOENRIEHORNIOWHEUORHWNERWE)
  • each group member is allocated N minutes of audio (contiguous segments, allocated automatically)
  • assessment incentive (e.g. a compulsory coursework component worth <= 5% of the module's total marks)
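The automatic allocation step above could work something like this sketch: walk the segments in playback order and close off each member's contiguous run once their share of the total duration is reached. The function name and data shapes are assumptions for illustration.

```python
def allocate_segments(segments, members):
    """Split timecoded segments into contiguous runs, one per group member,
    balanced roughly by total audio duration.

    segments: list of (start, end) pairs in playback order, seconds.
    members:  list of member names.
    Returns a dict mapping member -> list of (start, end) segments.
    """
    total = sum(end - start for start, end in segments)
    target = total / len(members)  # fair share of audio per member
    allocation, run, acc, idx = {}, [], 0.0, 0
    for seg in segments:
        run.append(seg)
        acc += seg[1] - seg[0]
        # close off this member's run once their share is reached
        if acc >= target and idx < len(members) - 1:
            allocation[members[idx]] = run
            run, acc, idx = [], 0.0, idx + 1
    allocation[members[idx]] = run  # remainder goes to the last member
    return allocation

segs = [(0, 60), (60, 150), (150, 200), (200, 300), (300, 360)]
alloc = allocate_segments(segs, ["alice", "bob", "cara"])
```

Because runs are contiguous, each student corrects an unbroken stretch of the lecture rather than scattered fragments, which keeps the editing context intact.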


  • collaborative submissions need not be monitored by the lecturer, and subtitles can be deployed automatically (via API).
  • draft subtitles can appear immediately after machine transcription, and student edits can be pushed to the repository for "live" updates.
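A sketch of what pushing a "live" edit might look like. The endpoint, payload fields, and identifiers here are all hypothetical placeholders, not an existing platform API; a real deployment would use whatever subtitle-update interface the video platform exposes.

```python
import json

def build_cue_update(lecture_id, cue_index, new_text, editor):
    """Assemble a JSON payload for pushing one edited subtitle cue.
    All field names are hypothetical, standing in for a real platform schema."""
    return json.dumps({
        "lecture": lecture_id,   # which recording
        "cue": cue_index,        # which subtitle segment
        "text": new_text,        # the student's corrected text
        "editor": editor,        # for attribution / engagement tracking
    })

payload = build_cue_update("week3", 17, "Today we cover hash tables.", "alice")
# this payload could then be POSTed to the platform's subtitle endpoint
```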


  • an end-of-term vote on which lecture team produced the "best" subtitles (e.g. accurate transcription, correct punctuation, grammar, spelling, and formatting)
  • it is immediately obvious when an entire group/team has not engaged
    • could insert a "[draft]" tag in segments left unedited by students?
  • it is also obvious when an individual group member has not engaged
    • could augment subtitles with attribution: "The following N subtitles were edited by [foo] of team [bar]"?
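Both engagement checks above reduce to a small annotation pass over the cues. A sketch, assuming each cue records an optional editor name (the field names are illustrative, not an existing subtitle schema):

```python
def annotate_cues(cues):
    """Tag machine-draft cues and attribute student-edited ones.

    cues: list of dicts with 'text' and an optional 'editor' name.
    Cues with no editor keep a visible "[draft]" marker, so unengaged
    teams or members are immediately obvious to viewers.
    """
    out = []
    for cue in cues:
        if cue.get("editor"):
            out.append(f'{cue["text"]} (edited by {cue["editor"]})')
        else:
            out.append(f'[draft] {cue["text"]}')
    return out

tagged = annotate_cues([
    {"text": "welcome to week three of the module"},
    {"text": "Today we cover hash tables.", "editor": "alice of team B"},
])
```

Here the first cue was never corrected, so it surfaces with the "[draft]" tag; the second carries attribution for the end-of-term vote.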

