It doesn’t seem like a big ask to write a blog about my job.
“We want to write a piece about our multilingual AV work”,
said my colleague.
“About how it always needs to be done quickly and how complex the projects are – you’ll think of something!”
Fortunately, I love talking about subtitles. In today’s world, we’re constantly consuming movies and videos without even realizing that they might have originated in a completely different language. And the reason for that is often simply clever translation in the subtitles.
Good subtitles go unnoticed
Subtitles are everywhere these days. If they are doing their job, we don’t even notice them – and yet we realize afterward that we somehow understood everything despite not always paying attention, not understanding the language or not hearing every word.
If they are poor, however, we are just glad to have understood anything at all by the end of the clip.
What is subtitling, exactly?
“The rendering in a different language of verbal messages in filmic media, in the shape of one or more lines of written text, presented on the screen in synch with the original verbal message.”
This definition from Henrik Gottlieb, a Danish translation professor and subtitler, may be a bit long-winded and convoluted, but it does accurately sum up the task.
Despite being several decades old, Gottlieb’s definition still holds true today, although now it’s becoming increasingly common to have subtitles in the same language as the spoken original, known as intralingual subtitling.
One specific example is subtitles for the deaf and hard-of-hearing (SDH), which often include descriptive information alongside the words being spoken.
Social media also rely on intralingual subtitles as we consume more and more video content without sound. What we used to hear, we now read, although the parameters remain the same. This may seem quite straightforward, but making it happen is no easy task. Even if the target language tends to use longer, more complex words, the subtitles need to adapt to the video, not vice versa – which entails a healthy dose of puzzle work and creativity in editing.
What makes the perfect subtitle?
Fortunately, there is widespread consensus in the field about what constitutes good subtitles. Just as Gottlieb suggests in his definition, they should ideally be one or two lines long rather than three or four, and should stay in sync with the audio.
If you’ve ever watched a video that veers from these standards, it’s likely caught your attention. It’s much more natural to read the subtitle while the person is talking, not five seconds before or after.
How do we know what works best? Linguistics research and, increasingly, teaching theory have told us a lot. Numerous scientific studies and experiments on subtitle quality have shown how best to design them for maximum effect in movies.
They obviously must be easy to read and remain on screen long enough to do so. On top of that, they should not omit any relevant content or imply anything unsaid, and they should also be linguistically accurate and well-written.
Even their position on the screen makes a difference, as does their appearance: Subtitles presented in white text are useless in a movie that features lots of snowy backdrops!
Where subtitling meets the science of language
Ideally, public service broadcasters, VoD providers, language service providers and freelancers all adhere to the same linguistic rules and take a standard approach to subtitling, which involves strict parameters for characters per line, length of time on screen or text segmentation, for instance.
Even the number of characters the human brain can process per second is factored into subtitling quality standards.
Subtitlers do their utmost to help the human eye move easily from one subtitle to the next by ensuring that nothing is left to chance, from checking for typos to making sure a subtitle is only in place for the right number of frames, so that nothing distracts the viewer from what is happening on the screen.
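To give a feel for what these parameters look like in practice, here is a minimal, purely illustrative sketch in Python of the kind of checks involved. The limits it uses (roughly 42 characters per line, around 17 characters per second, one to seven seconds on screen) are common ballpark figures, not a universal standard – every broadcaster and style guide sets its own.

```python
from dataclasses import dataclass

# Illustrative limits only – real style guides differ per broadcaster and platform.
MAX_CHARS_PER_LINE = 42   # characters per line
MAX_LINES = 2             # one or two lines per subtitle
MAX_CPS = 17              # characters per second a viewer can comfortably read
MIN_DURATION = 1.0        # minimum time on screen, in seconds
MAX_DURATION = 7.0        # maximum time on screen, in seconds

@dataclass
class Cue:
    start: float        # seconds
    end: float          # seconds
    lines: list[str]    # one or two lines of text

def check_cue(cue: Cue) -> list[str]:
    """Return a list of readability issues found in a single subtitle cue."""
    issues = []
    duration = cue.end - cue.start
    chars = sum(len(line) for line in cue.lines)

    if len(cue.lines) > MAX_LINES:
        issues.append(f"too many lines ({len(cue.lines)})")
    for line in cue.lines:
        if len(line) > MAX_CHARS_PER_LINE:
            issues.append(f"line too long ({len(line)} characters): {line!r}")
    if duration < MIN_DURATION:
        issues.append(f"on screen too briefly ({duration:.2f}s)")
    elif duration > MAX_DURATION:
        issues.append(f"on screen too long ({duration:.2f}s)")
    if chars / duration > MAX_CPS:
        issues.append(f"reading speed too high ({chars / duration:.1f} cps)")
    return issues

# Example: a two-line cue shown for 3.5 seconds.
cue = Cue(start=12.0, end=15.5, lines=["Good subtitles go unnoticed,", "bad ones steal the show."])
print(check_cue(cue) or "cue passes the assumed checks")
```

Professional subtitling tools typically flag violations like these automatically as the subtitler works, so the real effort goes into the wording rather than the counting.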
Writing subtitles for movies is a very special and sophisticated skill. Linguistic creativity is a must when it comes to condensing 15 seconds of spoken language into two lines, particularly if the target language requires many more words to render what was said.
Today’s excellent subtitling tools and exacting technical standards demand a high level of training and ability.
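One concrete example of those technical standards: every subtitle is anchored to the video with precise in and out times, often counted in individual frames. The sketch below is a simplified illustration only – it assumes SRT-style timecodes and a frame rate of 25 fps, whereas real projects rely on dedicated subtitling tools and whatever frame rate the production dictates.

```python
# Simplified timing arithmetic: converting an SRT-style timecode into a frame
# number, assuming a frame rate of 25 fps (frame rates vary by production).

def timecode_to_frames(tc: str, fps: float = 25.0) -> int:
    """Convert 'HH:MM:SS,mmm' (SRT notation) into a frame count."""
    hms, millis = tc.split(",")
    hours, minutes, seconds = (int(part) for part in hms.split(":"))
    total_seconds = hours * 3600 + minutes * 60 + seconds + int(millis) / 1000
    return round(total_seconds * fps)

# A subtitle set to appear at 00:01:02,000 and disappear at 00:01:05,200
start = timecode_to_frames("00:01:02,000")   # frame 1550
end = timecode_to_frames("00:01:05,200")     # frame 1630
print(f"in: frame {start}, out: frame {end}, on screen for {end - start} frames")
```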
The art of subtitling, which entails text segmentation and time-coding (known as “spotting”), must be learned and practiced, and translating subtitles is a whole separate specialty. Ideally, all parties work closely together when it comes to multilingual subtitling projects.
After all, it’s impossible to fully separate the content of the translated written words from the content and context of what is happening on the screen. And that’s something that doesn’t escape the viewers, either.
Recently, for example, there was a media uproar surrounding the English closed captions for the Netflix series Squid Game, as they did not always render the original Korean dialog either correctly or fully, distorting both cultural aspects and character depiction.
Viewers with a command of both languages notice such subtleties, while those who rely on the translation miss out. In extreme cases, the viewers will have experienced two entirely different stories.
But they’re only subtitles – what could go wrong?
Bad subtitles can have serious consequences – not only in the entertainment industry, but also for other types of films, such as corporate videos.
Imagine you work for a large company and are involved in the rollout of a hotly anticipated new company initiative. The comms team has worked tirelessly in the ramp-up phase and kept all colleagues in the loop with emails, articles and interviews.
The pinnacle of all this hard work is the CEO speaking live, and the speech will be recorded and subtitled for distribution to all markets.
When checking the subtitles, however, you are horrified to see that the initiative is called different things in different languages and even shows up with various spellings within a single language.
As the CEO looks at their watch toward the end of the speech and starts speaking a little faster, the subtitles suddenly don’t match what is being said, and sometimes even entire chunks of sentences are missing.
Your CEO actually has a reputation for being a particularly eloquent speaker, but these subtitles don’t convey that at all!
Not all subtitles are created equal
This is a common problem, and one that is not always easy to resolve when three worlds collide!
The video producers “do their thing,” the CEO follows (or not!) the script carefully curated by the comms team, and the subtitling and translation teams do what they do best – write easy-to-read subtitles that match the video.
Anything that doesn’t line up is adjusted to fit: Filler words are removed, long sentences are shortened and grammatical errors are corrected.
The faster the speaker races through their notes, the more the text has to be pared down. If they talk slowly, subtitles left on screen for too long cause viewers to subconsciously start reading them again – which is why slower passages are broken up into more, shorter subtitles.
In the corporate world, there is the added requirement of using particular language and terminology, which is a priority for all parties.
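Part of that terminology discipline can even be supported by simple tooling. The sketch below is entirely hypothetical – the initiative name “GreenShift 2030” and its stray variants are invented for illustration – but it shows the kind of consistency check that helps keep corporate wording aligned across a set of subtitles.

```python
# Hypothetical example: flag subtitles whose wording drifts from an approved term.
# Approved spelling: "GreenShift 2030"; anything in KNOWN_VARIANTS is a deviation.
KNOWN_VARIANTS = ["Greenshift 2030", "Green Shift 2030", "GREENSHIFT 2030"]

def find_term_issues(subtitles: list[str]) -> list[tuple[int, str]]:
    """Return (subtitle index, offending variant) pairs for inconsistent spellings."""
    issues = []
    for i, text in enumerate(subtitles):
        for variant in KNOWN_VARIANTS:
            if variant in text:
                issues.append((i, variant))
    return issues

subtitles = [
    "Today we launch GreenShift 2030 across all markets.",
    "Green Shift 2030 will change the way we work.",
]
print(find_term_issues(subtitles))   # [(1, 'Green Shift 2030')]
```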
Emotional messages trump all
“Traditional” subtitling for movies, TV series or marketing material requires a certain amount of poetic license.
In addition to being linguistically correct, the translation must also be in keeping with cultural norms that cannot be transposed 1:1 and therefore require localization.
It is imperative that the message is conveyed correctly, as this is what determines the overall viewing experience.
The corporate video exception
In the corporate environment, the language parameters are stricter, and the words spoken by the company figureheads, such as the CEO, should ideally be rendered as literally as possible, even to the extent that filler words and slips of the tongue are subtitled for reasons of authenticity.
Here it is important to use specific wording that is consistent with the content of articles, interviews or other reference materials. On top of this, such videos often feature non-native and/or untrained speakers, which makes subtitling even more challenging. And we are seeing a huge increase in demand for this service, particularly post-pandemic.
We are also witnessing the emergence and growth of a new subdiscipline of subtitling that combines fast-paced film production requirements with those of high-level corporate comms.
This, in turn, requires close collaboration among the production team, corporate language owners, comms/PR, speech writers and the translation and subtitling teams. The only way such very disparate requirements can be satisfactorily fulfilled is if they all work together.
Regardless of whether they are rendering a highly charged argument between two characters in a movie, describing the flora and fauna of Banff National Park in a documentary, promoting an innovative new product in a marketing video or delivering a speech by a CEO to multiple markets, the best subtitles simply become an integral and almost unnoticed part of the video.
In the right hands, they are both wonderfully versatile and extremely effective.
One of my teachers once commented on some of my subtitling work (which was a complete disaster by today’s standards!) with the brilliantly humbling observation: “We’re not in Hollywood!”
From a geographical point of view, I have to agree, but with the increasingly fast-moving and inordinately complex projects that cross our desks every day, this ever-so-slightly dramatic element does indeed come into play, and serves to make our job in the AV team all the more interesting, exciting and varied.
A few years down the road, the world of subtitling may be facing a completely different set of challenges, and I look forward to seeing just how effortlessly it adapts.
Our AV team will be happy to answer any further questions.