At a quick glance, closed captions and subtitles look almost identical, but in fact they are not. Most people use the two terms interchangeably; however, they refer to different things. In a nutshell, captions are a transcription of the dialogue, while subtitles are a translation. Both appear as text on top of the video and typically represent the words that are spoken. In this article, we'll take a closer look at what each term means, how they differ, and when you should use closed captions or subtitles.
What is closed captioning?
As you might have guessed, besides the term Closed Caption (CC), there is also Open Caption (OC). But first, let's define what closed captions are exactly.
Closed Captioning refers to the transcription of all elements of the audio, including non-speech elements such as a ringing telephone, a barking dog, or ambient sounds. These types of captions are typically made for people who are deaf or hard of hearing. The term "closed" means that the captions live on a separate track from the video file, which allows you to turn them on or off dynamically. Most video players, such as YouTube, Vimeo, LinkedIn, or Facebook, and even your TV when watching Netflix, for example, let you toggle them on or off.
The counterpart to Closed Captions is Open Captions, which differ in that they are permanently burned into the video. You cannot remove them from the video, series, or movie you are watching. However, both Closed and Open Captions are also helpful for people who struggle to follow along with the audio of a video.
On a technical level, captions are encoded as a stream of commands, control codes, and text that ensure the captions appear at the right time and in the right place in the video.
Where are closed captions used?
The most common use of captions is to increase accessibility. The text describes everything that is happening in the video, including non-speech parts like background noises and sound effects. Even different speakers are labeled, so you know who is saying what.
As mentioned earlier, captions are made for those who are deaf or hard of hearing. Under U.S. law, major video platforms and sharing sites are required to make their content accessible and to support displaying closed captions.
Other people who benefit from captions are people who want to learn a language or are not native speakers of the language the video is in. Here, captions can demonstrably help people understand the content better and thus help them learn a language more easily.
How are subtitles different?
Unlike closed captions, subtitles assume viewers can hear the audio and are generally used when the viewer doesn't speak the language spoken in the video. Movies are a great example: you will find subtitles in all the languages in which the film was released.
Subtitles are basically timed transcriptions, or transcriptions translated directly into other languages. There are a number of different subtitle formats.
The most common subtitle formats, however, are .SRT and .VTT. You can use these files to add subtitles to your videos, depending on the platform you want to publish on. For example, YouTube, Facebook, LinkedIn, etc. allow you to upload .SRT files to add subtitles to your video. Our editor, Streamlabs Podcast Editor, can generate these files automatically. If you want to learn more about that, check out this article.
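To give you an idea of what such a file contains, here is a minimal .SRT example (the timestamps and dialogue are made up for illustration). Each cue consists of a sequence number, a start and end time separated by an arrow, and the text to display:

```
1
00:00:01,000 --> 00:00:04,000
Welcome back to the show!

2
00:00:04,500 --> 00:00:08,000
Today we're talking about captions and subtitles.
```

The .VTT (WebVTT) format looks very similar, but starts with a `WEBVTT` header and uses a period instead of a comma in the timestamps.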
From a more technical point of view, subtitles differ from Closed Captions in that they are encoded as bitmap images, a series of tiny dots or pixels, directly in the video.
Subtitles can either be burned into your video or can be toggled on and off just like Closed Captions.
With our editor, Streamlabs Podcast Editor, you can automatically generate subtitles and translate them into 30+ languages, or download them as .SRT or .VTT files. If you want to create Closed Captions, you have to modify the transcribed text that Streamlabs Podcast Editor generates for you. This requires a bit more manual work, since you have to add the text for sound effects.
As a rule, these sound effects are shown in square brackets, for example [Phone ringing] if your phone rings while you're recording.
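In an .SRT file, such a caption cue could look like this (a made-up example):

```
3
00:00:09,000 --> 00:00:11,500
[Phone ringing]
```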
Conclusion
To summarise: Closed Captions and Subtitles are not the same, even though they look very similar, since both mainly represent the spoken words in text form on top of the video.
The biggest difference between the two is that Closed Captions mainly address people who are deaf or hard of hearing, for the purpose of accessibility, while Subtitles help non-native speakers better understand the content and are therefore often available in different languages.
The formats and files are also different, but as an end consumer you won't notice how the subtitles or captions are processed in the video. You can only tell that they are not burned into the video if you are able to toggle them on or off.
If you want to learn more about how Streamlabs Podcast Editor not only automatically transcribes your video into text but also generates subtitles in multiple languages, you should check this out.