But what does an SSL connection cost?
Consider the following setup: an Apache host, configured as a reverse proxy, forwards incoming requests from the internet to an internal (local) server. Using conchart, we get the following:
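Such a setup needs little more than mod_proxy; a minimal sketch of the vhost (the backend hostname is a placeholder, not the actual configuration used here):

```apache
<VirtualHost *:80>
    # Forward everything to the internal server, and rewrite the
    # Location/Content-Location headers in its responses on the way back.
    ProxyPass        / http://backend.internal/
    ProxyPassReverse / http://backend.internal/
</VirtualHost>
```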
The lower rectangle is the communication between the client and the reverse proxy, while the upper rectangle represents the communication between the proxy and the backend server.
Again we get some interesting information on the time spent in the different steps: for both connections the 3-way handshake takes a significant amount of time; the proxy waits ±0.3ms before opening the connection to the backend server, and it takes ±1ms before the client's request is actually forwarded to the backend, but only a fraction of that time is needed to forward the backend's response back to the client.
The following shows a situation where the client is fetching a page containing two other objects:
We notice that the client is using a single connection to the proxy to request all data, whereas the proxy needs 3 separate connections to the backend to fetch everything. The reason for this is that the keepalive feature of HTTP/1.1 is used between client and proxy, while the backend server knows nothing about keepalive as it only speaks HTTP/1.0.
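The effect of keepalive is easy to reproduce with a few lines of stdlib Python; the local server below is a hypothetical stand-in for the backend, not the actual setup. With protocol_version set to "HTTP/1.1" the client keeps reusing the same socket; an HTTP/1.0 server would close the connection after each response, forcing a new one per request:

```python
import threading
import http.client
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"      # advertise keepalive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Content-Length is required for HTTP/1.1 persistent connections
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):       # silence request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/a"); conn.getresponse().read()
sock_first = conn.sock                  # TCP socket after request 1
conn.request("GET", "/b"); conn.getresponse().read()
reused = conn.sock is sock_first        # same socket => connection kept alive
print(reused)                           # True
server.shutdown()
```

Changing protocol_version to "HTTP/1.0" makes the server send no keepalive, and http.client opens a fresh socket for the second request.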
We also notice large blue areas for the second and third requests: for some reason unknown to me, the proxy doesn't send a separate, empty ACK for the client's request (even though the TCP push flag was set), but immediately answers with the response once it becomes available:
...
11:58:01.468195 IP 126.96.36.199.43041 > 188.8.131.52.80: P 118:282(164) ack 560 win 218 <nop,nop,timestamp 2738597096 2740430305>
11:58:01.468447 IP 184.108.40.206.51736 > 220.127.116.11.80: SWE 1745315649:1745315649(0) win 5840 <mss 1460,sackOK,timestamp 2740430307 0,nop,wscale 6>
11:58:01.468694 IP 18.104.22.168.80 > 22.214.171.124.51736: SE 3271798259:3271798259(0) ack 1745315650 win 5792 <mss 1460,sackOK,timestamp 227469689 2740430307,nop,wscale 6>
11:58:01.468719 IP 126.96.36.199.51736 > 188.8.131.52.80: . ack 1 win 92 <nop,nop,timestamp 2740430307 227469689>
11:58:01.468799 IP 184.108.40.206.51736 > 220.127.116.11.80: P 1:261(260) ack 1 win 92 <nop,nop,timestamp 2740430307 227469689>
11:58:01.469692 IP 18.104.22.168.80 > 22.214.171.124.51736: . ack 261 win 108 <nop,nop,timestamp 227469689 2740430307>
11:58:01.470340 IP 126.96.36.199.80 > 188.8.131.52.51736: . 1:1449(1448) ack 261 win 108 <nop,nop,timestamp 227469689 2740430307>
11:58:01.470352 IP 184.108.40.206.51736 > 220.127.116.11.80: . ack 1449 win 137 <nop,nop,timestamp 2740430308 227469689>
11:58:01.470441 IP 18.104.22.168.80 > 22.214.171.124.51736: . 1449:2897(1448) ack 261 win 108 <nop,nop,timestamp 227469689 2740430307>
11:58:01.470453 IP 126.96.36.199.51736 > 188.8.131.52.80: . ack 2897 win 182 <nop,nop,timestamp 2740430308 227469689>
11:58:01.470522 IP 184.108.40.206.80 > 220.127.116.11.43041: . 560:2008(1448) ack 282 win 108 <nop,nop,timestamp 2740430308 2738597096>
11:58:01.470531 IP 18.104.22.168.80 > 22.214.171.124.43041: . 2008:3456(1448) ack 282 win 108 <nop,nop,timestamp 2740430308 2738597096>
...
Anyway, using the -w option for generating this chart gives a better representation of the facts in this case:
Shaking SSL on top
Encryption is cool, and we want some! Let's use SSL between client and proxy for increased security, but not between proxy and backend, as we consider our internal network to be sufficiently secure:
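On the Apache side this is the standard mod_ssl setup; a minimal sketch of the vhost (certificate paths and backend name are placeholders, not the actual configuration used here):

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key

    # The proxy leg stays plain HTTP: the internal network is trusted.
    ProxyPass        / http://backend.internal/
    ProxyPassReverse / http://backend.internal/
</VirtualHost>
```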
We see that 90% of the time is spent on the SSL handshake between client and server (no data is sent to the backend yet). Where a non-SSL transaction took around 6ms, we now spend 115ms. Security comes at a price!
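Handshake cost is easy to measure yourself. The sketch below (my own, assuming the `openssl` CLI is available to mint a throwaway self-signed certificate) times full TLS handshakes against a local stdlib server; the absolute numbers will of course depend heavily on the hardware:

```python
import os, socket, ssl, subprocess, tempfile, threading, time

# Mint a throwaway self-signed certificate (assumes the openssl CLI).
tmp = tempfile.mkdtemp()
cert, key = os.path.join(tmp, "cert.pem"), os.path.join(tmp, "key.pem")
subprocess.run(["openssl", "req", "-x509", "-newkey", "rsa:2048",
                "-keyout", key, "-out", cert, "-days", "1", "-nodes",
                "-subj", "/CN=localhost"],
               check=True, capture_output=True)

srv_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
srv_ctx.load_cert_chain(cert, key)
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

def serve():
    # Accept, complete the TLS handshake, hang up.
    while True:
        conn, _ = listener.accept()
        try:
            srv_ctx.wrap_socket(conn, server_side=True).close()
        except OSError:
            pass

threading.Thread(target=serve, daemon=True).start()

cli_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
cli_ctx.check_hostname = False          # self-signed: skip verification
cli_ctx.verify_mode = ssl.CERT_NONE

def handshake_ms():
    t0 = time.perf_counter()
    with socket.create_connection(("127.0.0.1", port)) as s:
        with cli_ctx.wrap_socket(s):
            pass                        # handshake happens in wrap_socket
    return (time.perf_counter() - t0) * 1000

times = [handshake_ms() for _ in range(5)]
print(f"median handshake: {sorted(times)[2]:.1f} ms")
```

Each call pays for the TCP connect plus the full TLS negotiation, which is exactly the per-object overhead visible in the chart.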
Fairness obliges me to admit that the client used above isn't exactly the most performant one in the pack. It's actually a Pentium III at 650MHz running Linux, with wget as the client. The fact that it runs Gentoo doesn't stop it from being slow, really ;-)
The same test with a Core 2 Quad CPU at 2.4GHz (running Ubuntu; I got married and had children in the meantime) gives us:
Some extra horsepower helps, but the transaction still takes more than 50ms, which is a lot compared to the 6ms we had before. Note how much less time the faster client needs for its side of the SSL computation, the time between the first red bar and the following blue one: around 70ms in the first example versus only 12ms in the latter.
So what does this situation look like when we fetch the complete page?
Security definitely comes at a price! Our client opens a new SSL connection for every object it fetches, and the total time is exactly 3 times the amount of time spent on a single transaction. How do those SSL servers get any work done?
Keeping the SSL alive
The reason the SSL hands are shaken so often is an ancient Apache setting still lingering around in the default config, which forces HTTP/1.0 for all IE browsers (or wget with a user-agent override, as in this case ;-). The absence of the keepalive feature forces the client to renegotiate the SSL connection for every HTTP request.
The lines responsible are found in the SSL-specific part of the Apache configuration:
...
BrowserMatch ".*MSIE.*" \
    nokeepalive ssl-unclean-shutdown \
    downgrade-1.0 force-response-1.0
...
This turns out to be a little crude towards our IE-using friends, as it forbids the HTTP/1.1 protocol for every IE browser while only the ancient v4 seems to have trouble. The following config turns out to be sufficient:
...
BrowserMatch "MSIE 4\.0b2;" \
    nokeepalive ssl-unclean-shutdown \
    downgrade-1.0 force-response-1.0
...
Et voilà, performance is where it should be, and we now have a server that will scale...
The complete transfer of the 3 objects clocks in at 77ms, which is not too bad considering we started at 52ms for a single object.