Video Encoding Basics: What is Latency and Why Does it Matter?
In the world of video streaming, you often hear promises of solutions offering not just “low” latency, but “ultra-low”, “extreme-low”, or even “zero” latency. So what do these terms mean, and why is latency so important? In this post, we’ll break it down for you with a high-level explanation of what video latency is, what causes it, when it matters, and what can be done to reduce it. Interested in reading more about video encoding basics? You can check here for the other posts in this series.
What is Video Latency?
Typically, when we talk about latency, we’re referring to end-to-end latency, also known as “glass-to-glass” latency: the amount of time it takes for a single frame of video to travel from the camera to the display. This delay can range widely, from several minutes down to a matter of milliseconds. While there is no absolute value that defines low latency, it’s often considered to be less than 1 second, while under 300 milliseconds is referred to as ultra-low latency.
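As a quick illustration of those thresholds, here is a minimal sketch in Python. The cutoffs simply mirror the figures above, and the function name is our own:

```python
def classify_latency(glass_to_glass_ms: float) -> str:
    """Bucket an end-to-end (glass-to-glass) latency measurement
    using the common industry thresholds described above."""
    if glass_to_glass_ms < 300:
        return "ultra-low latency"
    if glass_to_glass_ms < 1_000:
        return "low latency"
    return "standard/high latency"

print(classify_latency(150))    # ultra-low latency
print(classify_latency(5_000))  # standard/high latency
```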
End-to-End Latency Depends on the First Mile
The delivery, distribution, playback, and production phases of the journey of live video are all dependent on the “first mile,” otherwise known as video contribution. To best manage overall video latency, it’s critical to minimize it at the starting gate.
What Causes Video Latency?
There are many factors that contribute to latency, depending on your delivery chain and the number of video processing steps involved. While individually these delays might be minimal, cumulatively they can really add up (see the back-of-the-envelope sketch after this list). Some of the key contributors to video latency include:
Network type and speed
Whether it be the public internet, satellite, or MPLS networks, the network you choose to transmit your video impacts both latency and quality. The speed of a network is generally defined by throughput, or how many megabits or gigabits it can handle per second, while latency is also affected by the physical distance the signal must travel.
Individual components in the streaming workflow
From the camera to video encoders, and from video decoders to the final display, each individual component in a streaming workflow adds processing delay that contributes to latency in varying degrees. OTT latency, for example, is usually much higher than digital TV latency because the video needs to go through additional steps before being viewed on a device.
Streaming protocols and output formats
The choice of video protocol used for contribution, and of the distribution formats used for viewing on a device, greatly impacts video latency. In addition, the type of error correction used by the selected protocol to counter packet loss and jitter can add to latency, as can firewall traversal.
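To make the cumulative effect concrete, here is a back-of-the-envelope sketch in Python. The stage names and delay figures are illustrative assumptions for demonstration only, not measurements from any particular product or network:

```python
# Illustrative per-stage delays (in milliseconds) for a live streaming chain.
# Real values depend heavily on hardware, configuration, and network conditions.
PIPELINE_MS = {
    "camera capture": 10,
    "encoding": 50,
    "packetization + send": 5,
    "network propagation": 25,     # light in fiber travels ~5 us/km,
                                   # so ~25 ms for a ~5,000 km path
    "error-correction buffer": 120,
    "decoding": 40,
    "display refresh": 17,         # roughly one frame interval at 60 fps
}

total_ms = sum(PIPELINE_MS.values())
for stage, ms in PIPELINE_MS.items():
    print(f"{stage:<24} {ms:>5} ms")
print(f"{'end-to-end (glass-to-glass)':<24} {total_ms:>5} ms")
```

No single stage looks alarming on its own, but even with these modest assumptions the chain already lands well above the ultra-low-latency threshold.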
[Figure: Overview of latency contribution in video streaming workflows]
Why Does it Matter?
We’re often asked if video latency really matters, and the answer is: it depends on your application. For certain use cases, such as streaming previously recorded events, higher latency is perfectly acceptable, especially if it results in better picture quality through robust prevention of packet loss. In linear broadcast workflows, the delay between the live event and the broadcast feed is typically somewhere around 10 seconds. The European Broadcasting Union (EBU), for example, defines “live” as 7 seconds from glass to glass. A short delay between broadcast production and playout may even be intentional, to facilitate live subtitling and closed captioning and to prevent obscenities from airing. For OTT workflows, video latency can range anywhere from 10 seconds to as much as a minute. For live interactive and real-time applications, however, the target is nearer to 200 milliseconds: as close to real time as possible.
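As a rough summary of those figures, the sketch below encodes them as targets that a measured glass-to-glass latency could be checked against. The bucket names and exact values are our own reading of the ranges above:

```python
# Approximate glass-to-glass latency targets, drawn from the ranges above.
LATENCY_TARGETS_MS = {
    "recorded events": 60_000,          # higher latency is acceptable
    "OTT live": 30_000,                 # roughly 10 s to a minute
    "linear broadcast": 10_000,         # typical ~10 s (EBU calls 7 s "live")
    "interactive / real-time": 200,
}

def meets_target(use_case: str, measured_ms: float) -> bool:
    """Check a measured glass-to-glass latency against a use-case target."""
    return measured_ms <= LATENCY_TARGETS_MS[use_case]

print(meets_target("interactive / real-time", 150))  # True
print(meets_target("interactive / real-time", 800))  # False
```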
Spoiler Alert: High Latency Kills the Viewing Experience
For live video streaming applications, users generally want latency to be as low as possible. Whether you’re streaming live sporting events, esports, or bi-directional interviews, nothing kills the viewing experience like high latency. We’ve all watched a live broadcast from location with long, awkward pauses, or people talking over each other in interviews because of latency issues. Or perhaps you’ve watched a hockey game online while your neighbor watches live over the air, and you hear them celebrate the winning shot 10 seconds before you see it. Worse still, imagine watching election results that appear on your Twitter feed before you even see them on your TV screen. In these cases, low latency is critical to ensure an optimal viewing experience with great viewer interactivity and engagement.
Milliseconds Matter
While high video latency is a serious annoyance for viewers, in certain use cases it’s more than that. For applications such as surveillance, low latency is mission critical. And for online auctions and gambling to work at all, audiences can only be fully engaged and part of the action with very low latency video.
[Figure: Live video streaming latency]
In the Blink of an Eye
For a long time, studies suggested that the lowest perceptible limit for humans to correctly identify an image was around 100 ms. However, more recent studies suggest that the fastest rate at which humans can process incoming visual stimuli may be as low as 13 ms. To get a sense of how fast this is, try this online test, which measures human reaction time in response to a visual stimulus.
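If you’d rather measure it yourself in a terminal, here is a crude Python equivalent of such a test. It’s a sketch only: keyboard and terminal buffering will add their own latency on top of your reaction time:

```python
import random
import time

def reaction_test(trials: int = 3) -> None:
    """Crude terminal reaction-time test: press Enter as soon as 'GO!' appears."""
    times = []
    for _ in range(trials):
        input("Press Enter to arm the next trial...")
        time.sleep(random.uniform(1.0, 3.0))  # random delay so you can't anticipate
        start = time.perf_counter()
        input("GO! (press Enter)")
        elapsed_ms = (time.perf_counter() - start) * 1000
        times.append(elapsed_ms)
        print(f"Reaction time: {elapsed_ms:.0f} ms")
    print(f"Mean over {trials} trials: {sum(times) / len(times):.0f} ms")

if __name__ == "__main__":
    reaction_test()
```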
How Can Latency be Reduced?
There are several ways to minimize video latency without having to compromise on picture quality. The first is to choose a hardware encoder and decoder combination engineered to keep latency as low as possible, even over a standard internet connection. The latest generation of video encoders and decoders can maintain low latency (under 50 ms in some cases) and have enough processing power to use HEVC to compress video down to very low bitrates (under 3 Mbps), all while maintaining high picture quality.
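As a simplified illustration of that kind of low-latency, low-bitrate HEVC configuration, the sketch below shells out to FFmpeg from Python. It assumes an FFmpeg build with libx265 on your PATH, uses a synthetic test source, and the flag choices are a reasonable starting point rather than any vendor’s recipe:

```python
import subprocess

# Hypothetical low-latency HEVC encode of a synthetic test pattern,
# sent as MPEG-TS over UDP to a local decoder for 30 seconds.
cmd = [
    "ffmpeg",
    "-f", "lavfi", "-i", "testsrc2=size=1280x720:rate=30",  # synthetic source
    "-t", "30",                    # stop after 30 seconds
    "-c:v", "libx265",
    "-preset", "ultrafast",        # minimize encoder processing delay
    "-tune", "zerolatency",        # disable frame-level look-ahead buffering
    "-b:v", "3M",                  # target bitrate in the "under 3 Mbps" range
    "-f", "mpegts",
    "udp://127.0.0.1:5000",        # replace with your decoder's address
]
subprocess.run(cmd, check=True)
```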
Another key factor in achieving lower latency is to select a video transport protocol that can deliver high-quality video at low latency over noisy public networks like the internet. Successfully streaming video over the internet without compromising picture quality requires some form of error correction as part of the streaming protocol to recover from packet loss. All types of error correction introduce latency, but some more than others. The Secure Reliable Transport (SRT) open source protocol leverages ARQ (Automatic Repeat reQuest) error correction to recover lost packets while introducing less latency than alternatives such as FEC-based schemes or TCP-based protocols like RTMP.
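To get an intuition for why ARQ’s latency cost scales with the network round trip, here is a toy simulation. It’s a sketch under simplified assumptions, not a model of how SRT is actually implemented:

```python
import random

def worst_case_arq_delay_ms(packets: int, loss_prob: float,
                            one_way_ms: float, seed: int = 42) -> float:
    """Toy ARQ model: a lost packet is NACKed by the receiver and retransmitted,
    costing roughly one extra round trip per loss before it can be played out.
    The worst-case delay is what the receive buffer must absorb."""
    rng = random.Random(seed)
    rtt_ms = 2 * one_way_ms
    worst = 0.0
    for _ in range(packets):
        delay = one_way_ms
        while rng.random() < loss_prob:   # each (re)transmission may also be lost
            delay += rtt_ms               # detecting the loss + resending ~ one RTT
        worst = max(worst, delay)
    return worst

# With 2% loss and a 40 ms round trip, most packets arrive in 20 ms, but the
# occasional retransmission dominates the buffering requirement.
print(worst_case_arq_delay_ms(packets=10_000, loss_prob=0.02, one_way_ms=20))
```

In practice, SRT exposes this trade-off directly as a configurable receive latency, commonly tuned as a multiple of the measured round-trip time.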
Quality, Latency and Bandwidth: Getting the Balance Right
When looking at minimizing latency, it’s important to carefully consider how each component is configured for the use case at hand. There are always trade-offs to be made.
[Figure: Balancing Bitrate, Latency, and Picture Quality]
Achieving higher quality for the end user usually means higher resolutions and frame rates, and therefore higher bandwidth requirements. While new technology and advanced codecs strive to improve latency, finding the right balance will always be important.
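A common rule of thumb for reasoning about this triangle multiplies resolution, frame rate, and a codec- and quality-dependent bits-per-pixel factor. The factors in this sketch are illustrative assumptions, not standard constants:

```python
def estimate_bitrate_mbps(width: int, height: int, fps: float,
                          bits_per_pixel: float) -> float:
    """Back-of-the-envelope bitrate estimate: pixels per second * bits per pixel.
    bits_per_pixel is a rough quality/codec-efficiency knob (an assumption here,
    not a standard constant)."""
    return width * height * fps * bits_per_pixel / 1_000_000

# Illustrative: 1080p60 at an aggressive HEVC-like 0.02 bits/pixel versus a
# higher-quality 0.10 bits/pixel setting.
print(f"{estimate_bitrate_mbps(1920, 1080, 60, 0.02):.1f} Mbps")  # ~2.5 Mbps
print(f"{estimate_bitrate_mbps(1920, 1080, 60, 0.10):.1f} Mbps")  # ~12.4 Mbps
```

The same pixel rate can land at very different bitrates depending on how hard the encoder works, which is exactly where the latency and quality trade-offs come in.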
Ultimately, the targeted use case will determine the best balance within this triangle of video encoding and streaming considerations. For applications where latency is critical, such as video surveillance and ISR, picture quality can often be traded in favor of minimal latency. For use cases where pristine broadcast-quality video matters, however, latency can be increased slightly to support advanced video processing and error correction. By delivering the optimal combination of bandwidth efficiency, high picture quality, and low latency, viewers can enjoy a great live experience over any network, with no spoilers.