Re: HTTP/2 Expression of luke-warm interest: Varnish

In message <CAP+FsNetP9uhjrNXwXkJgZNGXMbbj0U8WfJfbyHK6jBF0TnKrw@mail.gmail.com>
, Roberto Peon writes:
>The holdup is that users have bookmarks, external links, etc. and so sites
>are reasonably reluctant to change their (unfortunately complex and
>potentially order dependent rule) mappings when doing so might lose them
>traffic.

Fortunately 1Tbit/s isn't going to happen overnight either.

>It gets worse when one considers pieces of hardware which are not
>upgradable and have url-space hardwired into their firmware. 

I don't think there is any realistic prospect of retiring HTTP/1.x
entirely in the next 10 years.  In 15 years, maybe.
But if we do HTTP/2 right, the amount of traffic left on HTTP/1.x
would be cut in half about three years after HTTP/2.0 ratification.
If we do a good job, in particular on HTTP/1->HTTP/2 connection
upgrades, it will happen even sooner than that.
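
To make concrete what such an upgrade could look like: HTTP/1.1 already
has the Upgrade mechanism (RFC 2616, section 14.42), which lets a client
offer to switch protocols on an existing connection.  A minimal sketch,
with a placeholder protocol token since nothing of the sort has been
decided yet:

    GET / HTTP/1.1
    Host: example.com
    Connection: Upgrade
    Upgrade: HTTP/2.0

    HTTP/1.1 101 Switching Protocols
    Connection: Upgrade
    Upgrade: HTTP/2.0

    [connection continues under the new protocol]

Whether HTTP/2.0 ends up using this hook or some other negotiation is
exactly the kind of detail we have to get right if the traffic is going
to move that fast.
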
>Any ideas as to how to reprogram site designers/authors? :)

Is it important or even necessary to do so?

If they work for a site where performance matters, there will
be a local feedback loop to steer them in the right direction.
If they work at a place where performance doesn't matter, they would
be wasting time by optimizing for performance in the first place.

But delivering a simpler-to-understand protocol would certainly
help; for instance, many web designers have only a very sketchy idea
of how cookies and authentication actually work.
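
To illustrate: the entire cookie mechanism is just a pair of headers, a
server-set value that the client echoes back on later requests.  A
minimal sketch (names and values purely made up):

    HTTP/1.1 200 OK
    Set-Cookie: session=abc123; Path=/; HttpOnly

    GET /account HTTP/1.1
    Host: example.com
    Cookie: session=abc123

If even that much is unclear to the people building the sites, a
protocol with fewer moving parts can only help.
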
-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

Received on Tuesday, 17 July 2012 09:05:37 UTC
