I read the introduction to HTTP/2.0 on developers.google.com, but I was still confused about request and response multiplexing. So I decided to build a small demo to understand it better.
TCP Connection Reuse

When I first read about TCP connection reuse, I had a lot of questions in my head. For instance:
How do I know whether a TCP connection was reused?
What would the network look like if the TCP connection wasn't reused?
It seems that HTTP/1.1 also supports TCP connection reuse. So what's the difference?
...
After some searching, I found that there is a Connection ID column in the Chrome DevTools Network panel. For example, here is the Network panel of baidu.com:
According to this question:
The new Connection ID Network Panel column in Canary can help indicate to you that a TCP connection was reused instead of handshaking and establishing a new one.
Combined with the image above, we can say that in the Network panel for baidu.com:
Requests to ss1.bdstatic.com use H2 (HTTP/2.0) and share the same TCP connection, because there is only one connection ID.
Requests to www.baidu.com use HTTP/1.1, and the six requests share two TCP connections, because there are two connection IDs.
So HTTP/1.1 also supports TCP connection reuse. How, then, can I demonstrate the advantages of H2, and what is the difference between connection reuse in HTTP/1.1 and HTTP/2.0? That is what used to confuse me.
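Besides the Connection ID column, the Resource Timing API gives a rough programmatic check. This is only a sketch: the URL is just an example, and cross-origin entries report zero connection timings unless the server sends a Timing-Allow-Origin header.

    // Rough check for a resource that has already been loaded on this page:
    // if connectEnd equals connectStart, no new TCP handshake happened,
    // i.e. an existing connection (or the cache) served the request.
    const [entry] = performance.getEntriesByName(
      "https://www.baidu.com/favicon.ico"
    );
    if (entry) {
      console.log("new TCP handshake:", entry.connectEnd > entry.connectStart);
    }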
Prove the Advantages of H2

I picked two requests from the network records and re-fetched them in the console. The code for the HTTP/1.1 requests is:
    Array(13)
      .fill()
      .forEach(() => {
        fetch("https://www.baidu.com/favicon.ico", {
          credentials: "omit",
          referrer: "https://www.baidu.com/",
          referrerPolicy: "unsafe-url",
          body: null,
          method: "GET",
          mode: "cors"
        })
      })
And the code for the HTTP/2.0 requests is:
    Array(13)
      .fill()
      .forEach(() => {
        fetch(
          "https://ss3.baidu.com/6ONWsjip0QIZ8tyhnq/ps_default.gif?_t=1556369856347",
          {
            credentials: "omit",
            referrer: "https://www.baidu.com/",
            referrerPolicy: "unsafe-url",
            body: null,
            method: "GET",
            mode: "cors"
          }
        )
      })
Here are the results:
Taking a closer look at the pictures, we can see that:
On HTTP/1.1, Chrome opens up to six TCP connections per host and reuses them, while on HTTP/2.0 it opens only one TCP connection per host.
Also, on HTTP/1.1, Chrome sends requests that share the same TCP connection one by one, just as developers.google.com says:
On HTTP 1.0/1.1 connections, Chrome enforces a maximum of six TCP connections per host. If you are requesting twelve items at once, the first six will begin and the last half will be queued. Once one of the original half is finished, the first item in the queue will begin its request process.
This adds extra delay as the number of requests grows.
On HTTP/2.0, by contrast, Chrome sends all the requests to the same origin simultaneously, without queuing delay.
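If you prefer numbers over screenshots, the Resource Timing API can also make the queuing visible. This is only a sketch under the setup above: run it after one of the fetch loops, adjust the URL substring to the resource you actually fetched, and keep in mind that cross-origin entries report requestStart as 0 unless the server sends a Timing-Allow-Origin header.

    // The gap between startTime and requestStart roughly covers Chrome's
    // queueing/stalled phase: on the HTTP/1.1 host it grows for the
    // requests beyond the first six, while on the HTTP/2.0 host it stays
    // near zero for all of them.
    performance
      .getEntriesByType("resource")
      .filter((entry) => entry.name.includes("favicon.ico")) // adjust substring
      .forEach((entry) => {
        console.log(
          "start", Math.round(entry.startTime),
          "stall", Math.round(entry.requestStart - entry.startTime),
          "duration", Math.round(entry.duration)
        );
      });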
Differences in TCP Connection Reuse between HTTP/1.1 and 2.0

On HTTP/1.1 connections, Chrome reuses TCP connections by default, and you can find
Connection: keep-alive
in the response headers. But according to the docs on MDN:
This connection will not stay open forever: idle connections are closed after some time (a server may use the Keep-Alive header to specify a minimum time the connection should be kept open).
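To see that idle timeout from the server side, here is a minimal Node.js sketch, assuming a local HTTP/1.1 server you control (the port and the timeout value are arbitrary):

    const http = require("http");

    const server = http.createServer((req, res) => {
      res.end("ok");
    });

    // Close idle keep-alive connections after 5 seconds. A browser that
    // comes back later must perform a new TCP handshake, so its next
    // request shows up with a different Connection ID in DevTools.
    server.keepAliveTimeout = 5000;

    server.listen(8080);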
And for HTTP/2.0, according to developers.google.com:
all http/2.0 connections are persistent, and only one connection per origin is required, which offers numerous performance benefits.
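To watch that single connection on your own origin, you can run a local HTTP/2 server and repeat the fetch experiment against it. A minimal Node.js sketch, assuming a locally trusted key and certificate (for example generated with mkcert), since browsers only speak H2 over TLS:

    const http2 = require("http2");
    const fs = require("fs");

    // The file names are assumptions; point them at your own key/cert.
    const server = http2.createSecureServer({
      key: fs.readFileSync("localhost-key.pem"),
      cert: fs.readFileSync("localhost-cert.pem"),
    });

    server.on("stream", (stream) => {
      // Every request arrives as a stream multiplexed over the same TCP
      // connection, so concurrent fetches from one page share one socket.
      stream.respond({ ":status": 200, "content-type": "text/plain" });
      stream.end(`stream id: ${stream.id}\n`);
    });

    server.listen(8443);

Opening https://localhost:8443 and firing a dozen fetches from its console should then show a single Connection ID in the Network panel, just like the ss1.bdstatic.com requests above.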
Source