Workflow for educators - speech recognition assisted lecture subtitling

Important

This draft was published on September 23rd, 2011. It is likely that changes will be made to improve the quality or detail of this post.

The following diagram illustrates the current workflow for educators:

[workflow diagram]

Step-by-step screenshots:

TBA

On the local machine:

  • (1.1) uploading audio via AFP
  • (1.2) configuring processing for transcription / subtitling
  • (2.1) interacting with uweRemoteTranscription.app
    • in "on campus" mode
    • in "off campus" mode
  • (2.2) connecting the remote audio system in AUNetReceive
  • (3.1) interacting with uweTranscriptionHub
    • previewing transcripts
    • downloading SRT subtitles
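Step 3.1 ends with downloading SRT subtitles from uweTranscriptionHub. For reference, this is the cue format those files use. The timings and text below are illustrative only, not real hub output:

```shell
# Write a minimal two-cue SRT file to show the format downloaded in step 3.1.
# Each cue is: a numeric index, a "start --> end" timing line
# (HH:MM:SS,mmm -- note the comma before milliseconds), the subtitle
# text, and a blank line.
cat > lecture01.srt <<'EOF'
1
00:00:00,000 --> 00:00:04,500
Welcome to today's lecture.

2
00:00:04,500 --> 00:00:09,000
We begin with a short overview.
EOF

grep -c -- '-->' lecture01.srt   # one timing line per cue: prints 2
```

Most media players (QuickTime with a plugin, VLC) will pick up a file like this automatically when it sits next to the lecture video with the same base name.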

On the remote server:

  • (2.2) interacting with uweScribeBatchTranscription.app
    • launching
    • selecting a new lecture transcription
    • resuming a previous lecture transcription
  • (2.2) interacting with the commercial speech recognition software
    • updating the voice profile
    • training during transcription
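Resuming a previous lecture transcription implies that the batch application keeps track of what has already been processed. The sketch below shows one simple way to do that with a state file; the file name and chunk names are hypothetical, and this is an illustration only, not the actual uweScribeBatchTranscription.app logic:

```shell
# Hypothetical resume logic: record each finished audio chunk in a
# state file and skip anything already listed there on later runs.
STATE=.transcription-state      # hypothetical progress file
CHUNKS="chunk01 chunk02 chunk03"

touch "$STATE"
for c in $CHUNKS; do
  if grep -qx "$c" "$STATE"; then
    echo "skipping $c (already transcribed)"
    continue
  fi
  echo "transcribing $c"        # real workflow: hand the chunk to the recognizer
  echo "$c" >> "$STATE"
done
```

Running the script a second time skips all three chunks, which is exactly the behaviour a resumed transcription needs.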

Example input / output:

TBA

Custom software components hosted on GitHub:

uweRemoteTranscription (GitHub)

  • SSH tunnels
  • OS X Screen Sharing
  • OS X AULab, AUNetSend and AUNetReceive
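These components can be combined from the command line. The sketch below shows the kind of SSH tunnel uweRemoteTranscription relies on in "off campus" mode. The hostnames are placeholders, 5900 is the standard OS X Screen Sharing (VNC) port, and the AUNetSend port is an assumption rather than a value taken from the actual app:

```shell
# Sketch of the "off campus" tunnels: forward the Screen Sharing (VNC)
# port and the network-audio port over a single ssh connection.
GATEWAY="user@campus-gateway.example.ac.uk"  # hypothetical ssh gateway
SERVER="transcription-server.local"          # hypothetical remote Mac
AUDIO_PORT=52800                             # assumed AUNetSend port

CMD="ssh -N -L 5900:$SERVER:5900 -L $AUDIO_PORT:$SERVER:$AUDIO_PORT $GATEWAY"
echo "$CMD"
# With the tunnel up: connect Screen Sharing to vnc://localhost:5900
# and point AUNetReceive at localhost on the forwarded audio port.
```

In "on campus" mode the same connections can be made directly, without the intermediate gateway hop.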


uweScribeBatchTranscription (GitHub)

  • custom AppleScript
  • MacSpeech Scribe
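uweScribeBatchTranscription drives MacSpeech Scribe via AppleScript. The snippet below shows only the generic shell-to-AppleScript plumbing ("activate" works for any scriptable application); the Scribe-specific commands in the real component are not reproduced here:

```shell
# Generic pattern for driving a macOS application from a shell script.
# The AppleScript is built as a variable so it can be inspected first.
AS='tell application "MacSpeech Scribe" to activate'
echo "$AS"
# On the remote server (OS X) this would then be executed with:
#   osascript -e "$AS"
```

Keeping the AppleScript in a variable (or a separate .scpt file) makes the batch script easier to test and to adapt if the speech recognition software changes.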

Final note:

It is likely that our approach will change and improve over time. To keep track of these changes, search for related posts using this post's tags (e.g. "speech recognition", "workflow", "subtitling"). Be sure to check the full list of tags for this post.


  • Sep 23 2011, 7:49 AM
    Marcus Lynch responded:
    Excellent stuff!