What is the source of captioning errors in professionally done sites?
I got access to lynda.com to watch some tutorials (which turned out to be quite unnecessary), and despite my first impression, I ended up finding a serious error before I had even finished a fifth of the tutorial.
It was the kind of error that we usually attribute only to machine-generated transcripts.
(Incidentally, the punctuation and the way the transcript has been broken up into captions are also substandard. No one in my program would accept such captions as accessible, even if there were no transcription mistakes.)
Since lynda.com is a professional site (or at least that is the image it projects), this leaves us with a few questions that demand an answer. First,
- were the captions machine-generated and then post-edited (after all, my first impression was that they were fairly accurate, so a human is clearly involved somewhere),
- were they transcribed by professional transcriptionists, or
- were they crowdsourced (since we can pretty much rule out normal volunteering)?
Second, if the answer is either of the first two, we are left with more questions as to why such an obvious error crept in:
- Was the pay too low?
- Were the deadlines unreasonable?
- Were the transcriptionists non-native speakers (an explanation I don't personally believe, though many people do)?
- Was it some other reason? Or
- was it a combination of the above?
I would be tempted to say a combination of the first two, but is there a way to find out? If an experiment could be designed to test this, would such an experiment be ethical? I wonder.