From: Andy Glew
Newsgroups: grc.securitynow
Subject: https vs http - why not signed but not encrypted https?
X-Draft-From: ("nntp+news.grc.com:grc.securitynow")
Gcc: nnfolder+archive:sent.2015-05
Date: Thu, 07 May 2015 11:57:35 -0700
Message-ID:
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.4 (darwin)
Cancel-Lock: sha1:3QSNHoOOsLInTT2t9aCFbf/tYoY=
(New user in grc.securitynow. Longtime podcast listener. Usenet user
from very long ago (not so much nowadays). My apologies if this is a FAQ.)
OK, so there's a trend to encrypt all traffic - to use https, to
discourage http. If for no other reason than to make man-in-the-middle
attacks harder.
One of the big losses is caching: the ability for somebody like a school
in a bandwidth-deprived part of the world (like Africa, now; like
parts of Canada, when I grew up, although that is no longer so true) to
cache read-only pages that are used by many people. Like the website I
used to run, and which I hope to bring back up sometime soon - a
hobbyist website for computer architects. No ads. No dynamic content.
Heck, like this newsgroup would be, if it were presented as webpages.
HTTPS encryption, with a different key for each session, means that an
intermediate cache can never serve the same bytes to two users - so you
can't cache. Right?
Q: is there - or why isn't there - an HTTPS-like protocol where the
server signs the data, but where the data is not encrypted?
(I thought at first that the null cipher suites in HTTPS / TLS were
that, but apparently not: even with a null cipher, the integrity MACs
are keyed per session, so the bytes on the wire still differ for every
client, and shared caching is still impossible.)
Having the server sign the data would prevent man-in-the-middle
injection attacks.
An HTTPS-like handshake would still be needed to perform the initial
authentication, verifying that the server's key is vouched for by a
chain of trust from a CA you trust. (Bzztt.... but I won't rant about
web of trust and CA proliferation.)
Possibly you might want to encrypt the traffic from user to server,
but only sign the traffic from server to user.
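To make that concrete, here's a rough sketch in Python, using the
pyca/cryptography library, of what the client-side check might look
like once the handshake is done. The function name and the choice of an
Ed25519-keyed certificate are just my assumptions for illustration:

    from cryptography import x509
    from cryptography.exceptions import InvalidSignature

    def check_signed_response(cert_pem: bytes, body: bytes,
                              signature: bytes) -> bool:
        # Assumes the certificate chain was already validated against
        # a trusted CA during the handshake, just as in ordinary HTTPS,
        # and that the cert carries an Ed25519 key (my assumption).
        cert = x509.load_pem_x509_certificate(cert_pem)
        try:
            cert.public_key().verify(signature, body)
            return True
        except InvalidSignature:
            return False

The point is that nothing in that check depends on a per-session
secret, so any cache in the middle could store and re-serve the body
and signature unchanged.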
So, why isn't this done?
It seems to me it would solve the "HTTPS means no caching" problem.
OK, possibly I can answer part of my own question: signing uses
expensive public-key cryptography on each and every item that you might
want to sign, whereas encryption uses relatively cheap bulk encryption,
typically symmetric-key ciphers like AES.
Signing every TCP/IP packet might have been too expensive back in the
early days of the web. Not to mention issues such as packet
fragmentation and reassembly.
But note that I say "each and every item that you want to sign".
Perhaps you don't need to sign every packet. Perhaps you might only
sign every webpage. Or every N-KiB chunk of a web page.
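Here's a rough sketch of what per-chunk signing might look like, again
in Python with pyca/cryptography and an Ed25519 key. The 16 KiB chunk
size and the function names are illustrative assumptions, not any
existing protocol:

    import struct
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    CHUNK = 16 * 1024  # hypothetical signing granularity

    def sign_page(body, key):
        # Sign (chunk index || chunk) so a man in the middle cannot
        # reorder chunks without breaking a signature. A real design
        # would also have to authenticate the total length, so that
        # truncation is detected.
        signed = []
        for n, i in enumerate(range(0, len(body), CHUNK)):
            chunk = body[i:i + CHUNK]
            signed.append((chunk, key.sign(struct.pack(">I", n) + chunk)))
        return signed

    def verify_page(signed_chunks, public_key):
        # Raises InvalidSignature if any chunk was tampered with.
        body = b""
        for n, (chunk, sig) in enumerate(signed_chunks):
            public_key.verify(sig, struct.pack(">I", n) + chunk)
            body += chunk
        return body

    # Toy round trip.
    key = Ed25519PrivateKey.generate()
    page = b"<html>" + b"x" * 40_000 + b"</html>"
    assert verify_page(sign_page(page, key), key.public_key()) == page

Since Ed25519 signatures are deterministic, the (chunk, signature)
pairs come out byte-identical for every client - which is exactly the
property that lets an intermediate cache store and re-serve them.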
A browser might not want to start building a webpage for display until
it has verified the signature of the entire thing. This would get in
the way of some of the nice incremental fast-rendering approaches.
But perhaps the browser can incrementally render, just not enable
JavaScript until the signature has been verified? Or not allow such
JavaScript to make outgoing requests? I am a computer architect: CPU
hardware speculatively executes code before we know it is correct, and
cancels it if not verified. Why shouldn't web browsers do the same?
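In the same toy Python vein - where display(), enable_javascript(),
and rollback() are purely hypothetical browser hooks I am making up
for illustration - the speculate-then-commit idea might look like:

    from cryptography.exceptions import InvalidSignature

    def display(chunk): pass          # stand-in for the renderer
    def enable_javascript(): pass     # stand-in for the JS engine
    def rollback(): pass              # stand-in for tearing the page down

    def speculative_render(chunks, signature, public_key):
        # Speculate: paint each chunk as it arrives, scripts disabled.
        rendered = b""
        for chunk in chunks:
            rendered += chunk
            display(chunk)
        # Commit or cancel, like a CPU retiring or squashing speculation.
        try:
            public_key.verify(signature, rendered)
            enable_javascript()       # commit the speculation
        except InvalidSignature:
            rollback()                # squash the bad page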
I.e. I don't think latency of rendering should be an obstacle to having
cacheable, signed but not encrypted, HTTPS-like communication.
Probably the plain old computational expense would be the main
obstacle. I remember when just handling the public-key crypto involved
in opening an SSL connection was a challenge for servers. (IIRC it was
almost never a challenge for clients, except when they opened too many
connections to try to be more parallel.) What I propose would cost
even more.
But:
(1) CPUs are much faster nowadays. Would this still really be a
problem?
+ I'm a computer architect - I *love* it when people want new
computationally demanding things. Especially if I can use CPU (or GPU,
or hardware-accelerator) performance, which is relatively cheap, to
provide something with social value, like saving bandwidth in
bandwidth-challenged areas of the world (like Africa - or, heck,
perhaps one day when the web spans the solar system).
(2) Enabling caching (or, rather, keeping caching alive) saves power -
I mean power in the real, watts-and-watt-hours, sense - while
generating and verifying signatures consumes CPU cycles, and hence
power. I am not sure that the tradeoff prohibits what I propose.