
Why We Created SRT and the Difference Between SRT and UDT

Article from:
https://www.haivision.com/blog/broadcast-video/created-srt-difference-srt-udt/

Editor’s Note: This post originally appeared on the GitHub Wiki for SRT. It has been slightly modified for formatting. 

Some people have asked us why we’re using the UDT library within our SRT protocol. Actually, some people claimed that SRT is just a slightly modified version of UDT and that UDT is known to be useless for live video transmission. Guess what, the latter is true. UDT has been designed for high throughput file transmission over public networks. However, SRT is far from being a slightly modified version of UDT. I’ll get into the details, but will start with a little bit of history.

Haivision has always been known for the lowest-latency video transmission across IP-based networks — typically MPEG-TS unicast or multicast streams over the UDP protocol. This solution is perfect for protected networks, and if packet loss became a problem, enabling forward error correction (FEC) fixed it. At some point we were asked whether it would be possible to achieve the same latency between customer sites in different locations, whether in different cities, countries, or even continents.

Of course it’s possible with satellite links or dedicated MPLS networks, but those are quite expensive solutions, so people wanted to use their public internet connectivity instead. While it’s possible to go with FEC in some cases, that’s not a reliable solution, as the amount of recoverable packet loss is limited, unless you accept a significant amount of bandwidth overhead.

After evaluating the pros and cons of different third-party solutions, we found that none satisfied all our requirements. The lack of insight into the underlying technology drove us to the decision to develop our own solution, which we could then deeply integrate into our products. That way, it would become the “glue” that enables us to transmit streams between all our different products, locally or across far distances, while maintaining our low latency proposition.

There were a few possible choices to consider:

  • The TCP-based approach. Problem for live streaming: network congestion and slow packet loss recovery.
  • The UDP-based approach. General problems: packet loss, jitter, packet re-ordering, delay.
  • Reliable UDP. Adds framing and selective retransmit.

Having had a history with UDT for data transmission, I remembered its packet loss recovery abilities and just started playing with it. Though not designed for live streaming at all, it kind of worked when using really big buffers. I handed it over to one of our extremely talented networking guys in the embedded software team (thanks, Jean!) and asked him whether he’d be able to make this a low latency live streaming solution. I didn’t hear anything back for quite a while and had almost lost hope, when he contacted me to tell me that he had rewritten the whole packet retransmission functionality so it could react to packet loss immediately when it happens, and that he had added an encryption protocol he had specified and implemented for other use cases before. Nice 🙂

We started testing sending low latency live streams back and forth between Germany and Montreal and it worked! However, we didn’t get the latency down to a level we had hoped to achieve. The problem we faced turned out to be timing related (as it often is in media).

[Figure: Bad Signal]

What happened was this: 

The characteristics of the original stream on the source network got completely changed by the transmission over the public internet. The reasons are delay, jitter, packet loss, and its recovery on the dirty network. The signal on the receiver side had completely different characteristics, which led to problems with decoding, as the audio and video decoders didn’t get the packets at the expected times. This can be handled by buffering, but that’s not what you want in low latency setups.

The solution was to come up with a mechanism that recreates the signal characteristics on the receiver side. That way we were able to dramatically reduce the buffering. This functionality is part of the SRT protocol itself, so once the data comes out of the SRT protocol on the receiver side, the stream characteristics have been properly recovered.
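The idea can be sketched in a few lines. This is not SRT’s actual implementation, just a minimal illustration of the principle under simplified assumptions: each packet carries a sender-side timestamp, and the receiver releases it to the decoder at a fixed offset from that timestamp, so the original inter-packet spacing is restored regardless of network jitter.

```python
# Minimal sketch (not SRT's real code) of receiver-side timing recovery:
# release each packet at base_time + (sender_ts - first_ts) + latency,
# so the decoder sees the original stream pacing, not the network jitter.

class TimingRecovery:
    def __init__(self, latency_ms):
        self.latency_ms = latency_ms   # fixed delivery delay budget
        self.first_ts = None           # sender timestamp of the first packet
        self.base_time = None          # local clock when the first packet arrived

    def release_time(self, sender_ts_ms, arrival_ms):
        """Local clock time at which to hand this packet to the decoder."""
        if self.first_ts is None:
            self.first_ts = sender_ts_ms
            self.base_time = arrival_ms
        return self.base_time + (sender_ts_ms - self.first_ts) + self.latency_ms


# Packets sent 20 ms apart but arriving with heavy jitter:
r = TimingRecovery(latency_ms=120)
arrivals = [(0, 50), (20, 95), (40, 71), (60, 130)]  # (sender_ts, arrival)
out = [r.release_time(ts, arr) for ts, arr in arrivals]
print(out)  # release times are evenly spaced 20 ms apart again
```

The latency parameter plays the role of the buffering mentioned above: it must be large enough to cover jitter plus retransmission round trips, but it is a fixed, configurable bound rather than an ever-growing buffer.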

The result is a happy decoder: 

[Figure: Good Signal]

We publicly showed SRT (Secure Reliable Transport) for the first time at IBC 2013, where we were the only ones to show an HEVC-encoded live stream, camera to glass, from a hotel suite outside the exhibition directly onto the show floor, using the network provided by the RAI. Everybody who has been to a show like this knows how bad these networks can get. And the network was bad. So bad that we expected the whole demo to fall apart, having pulled the first trial version of SRT directly from the labs. The excitement was huge when we realized that the transmission still worked fine!

Since then, we have added SRT to all our products, enabling us to send high quality, low latency video from and to any endpoint, including our mobile applications. Of course there were improvements to be made, and the protocol matured along the way, until NAB 2017, where we announced that SRT is now open source.

You can learn more about SRT at the SRT Alliance website here.

To view SRT on GitHub and start contributing to this open-source movement, click here!

2019-06-10 | udt, network protocols, audio/video & imaging


[Program Structure & System Architecture] Effective and Efficient Program Structure/Architecture, Part 1

First, take a look at the diagram below, which shows the topology of the NAT detection servers needed for STUN, i.e. for NAT traversal:

[Figure: NAT detection server topology]

At present, PC B and PC A need to establish a p2p connection, so the first thing to determine is which NAT type each of their networks presents. This is where the NAT detection server (STUN server) and its clients come in.

The NAT detection server’s job is actually very simple: it has 2 or more IP addresses, and each IP listens on at least 2 UDP ports, giving a total of 4 reachable endpoints (i.e., 4 sockets).

When a client communicates with the NAT server, both the server and the client learn the client’s external IP:Port as well as its internal IP:Port.
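The reason the server needs multiple endpoints is that comparing the external mappings observed toward different destinations is what reveals the NAT type. The following sketch shows the core of that comparison in the style of the classic RFC 3489 tests; the function name and the two coarse categories are simplifications of mine, not a full STUN implementation:

```python
# Sketch of the mapping test the server's 4 sockets enable: the client sends
# from ONE local socket to several server endpoints, and the server reports
# back the external IP:Port it saw each time. Comparing those mappings
# distinguishes cone-type NATs from symmetric NATs (simplified categories).

def classify_mapping(mappings):
    """mappings: list of (ext_ip, ext_port) tuples the server observed for
    the same client socket, one per server endpoint that was contacted."""
    if len(set(mappings)) == 1:
        # Same external mapping toward every destination: a cone-type NAT
        # (full/restricted/port-restricted is refined by further tests).
        return "cone"
    # Mapping changes per destination: symmetric NAT, hardest to traverse.
    return "symmetric"


# Same mapping toward both server IPs -> cone NAT:
print(classify_mapping([("203.0.113.5", 4000), ("203.0.113.5", 4000)]))
# Different external port per destination -> symmetric NAT:
print(classify_mapping([("203.0.113.5", 4000), ("203.0.113.5", 4001)]))
```

Reachability tests (whether replies from a different server IP or port get through the NAT at all) then refine the cone case further, which is why at least 2 IPs with 2 ports each are required.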

When actually designing the NAT detection server, however, a question came up: should the server record the clients’ IP:Port information? If it should, for what purpose? And if it does, how much storage and computing capacity would that require?

 

Let us consider storing and not storing separately. Storing client IP:Port information makes it possible to effectively aggregate and judge the NAT situation of the current client population (note the word “population”), and it indirectly eliminates a separate statistics-reporting step, so the timeliness and validity of the data are guaranteed. The cost is greater complexity in the server program and a performance hit, although the bandwidth cost may actually be somewhat lower than in the non-storing design.

In the non-storing case, the server can be designed very simply and focus solely on the NAT type detection task, maximizing the load capacity of a single node. The downside is that statistics collection and feedback need another path, which increases client complexity and, to some extent, bandwidth consumption (the IP:Port data must be reported separately). It also requires a complete, matching statistics-and-reporting module (server side plus client side), which to some degree makes the feedback less reliable.

 

While reading the p2p module (the NAT detection server part) of a recent WebRTC release, I noticed that the server does not store any client information at all; the client alone decides whether to store it. That is what prompted this question.

 

A NAT server should indeed not store too much client information. But in environments where strong reliability is required, the NAT detection server should still store a certain amount of client state. Leaving the computation and storage the server does not strictly need on the client side achieves a sensible division of labor.

 

For example, two situations frequently arise in NAT detection:

1. In multi-level NAT, the NATs present different NAT types at different times (irregular NAT).

2. In single-level NAT, in IP- and port-restricted scenarios, an insufficient number of detection rounds causes the IP:Port behavior to be misclassified as another NAT type.

In these two situations, when quantitative or qualitative statistical analysis is needed, keeping the data only on the client means the sample is too small (too few NAT detection rounds, and the LAN may contain more than one client). The analysis is then inconclusive, which lowers the NAT traversal success rate and indirectly hurts the share rate and sharing fairness.

Such data therefore needs to be aggregated and filtered on the server side; only then can we reliably determine which NAT type a given NAT environment is most likely to present.
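The server-side aggregation argued for above can be sketched as follows. This is a hypothetical illustration, not code from any real STUN server; the function name, the minimum-sample threshold, and the type labels are my assumptions:

```python
# Hypothetical sketch of server-side NAT-type aggregation: collect the
# NAT-type observations for one network (e.g. keyed by external IP) and
# report the dominant type only once the sample is large enough to judge.

from collections import Counter

def dominant_nat_type(samples, min_samples=10):
    """samples: NAT-type strings observed for one external IP (one LAN).
    Returns the most frequent type, or None if the sample is too small
    to support a quantitative judgment."""
    if len(samples) < min_samples:
        return None  # too few detections: withhold judgment
    counts = Counter(samples)
    return counts.most_common(1)[0][0]


obs = ["port_restricted"] * 8 + ["symmetric"] * 4  # 12 detections on one LAN
print(dominant_nat_type(obs))      # port_restricted: the majority observation
print(dominant_nat_type(obs[:5]))  # None: sample too small to decide
```

Because the server sees detections from every client behind the same NAT, it reaches the `min_samples` threshold far sooner than any single client could, which is exactly the irregular-NAT and misclassification problem described above.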

2016-12-23 | program structure/architecture, coding techniques, network protocols
