blags of matt
learn. do. fun.
learndowhatnow?
I spent my 2012 building http://learndofun.com/ -- what did you do?

Here's an edited-out excerpt from learndofun prelaunch ranting about existing online educational garbage:

They've taken the format of university lectures, removed the whiteboard, scaled down a writing surface to the size of two hands, and they make you watch as they write out everything. If you try to watch these videos on a large screen, the hand is like some freaky spider of knowledge spinning out ink in droning patterns. There's a voice to the video, but no human interaction. No face. No human level contact. You are basically watching them write a textbook. A very poorly formatted and untypeset textbook.


We can do better. We will do better.
learndofun, ai, machine learning, learndowhatnow, you're a vegetable, wanna be startin somethin
you don't know the worth of your personality feature vector
"In some respects, it's pretty clear that we are sleepwalking into nightmares. Absolutely. And the most distressing thing is how hard that is to explain to anybody."

"The way in which people will give away their personal details is much the same as in the children's history books about colonialism. You had native chieftains who handed over mineral rights in return for some baubles from some canny imperialist. All that stuff is happening but it's very hard to explain to anybody."

(At the 36-minute mark: http://www.youtube.com/watch?v=yUXh-GPa5dI&t=36m00 )



Thanks to Stephen for pointing me towards the video.
sharecropping, trading identity for beads
i don't know what it's like to land and not race to your door
One of the rarest John Mayer songs, presented here in all its multifaceted glory.

Original Mass Performance. Where The Light Is: 2007.


More Recent Mass Performance. BJCC: 2010.


Unbelievably good cover


Covered by an adorable guy

without your voice to tell me, I love you, take a right
cipher? i don't even know 'er!
update
Hot on the heels of hivemind devops alert: nginx sucks at ssl, I issue another hivemind devops alert: nginx does not suck at ssl!

aftermath
After circulating my previous alert through the 'tubes, it became abundantly clear: nobody has any idea how SSL performance works. A few people suggested I was running out of entropy (nice guess, but wrong), many people mentioned ssl session caching (nice try, but not relevant when testing all new connections), and a few people chimed in about keepalive (nice try, but then results get skewed depending on how many assets each client requests). Everybody seemed to care about the absolute numbers and not the relative performance differences.

A few people went with a very tactful "dude, that's just wrong. I know it works" response which is perfectly valid and appreciated. I knew something was wrong, but couldn't put my finger on it.

Then David chimed in, and it clicked. I wasn't verifying equal cipher negotiation against all servers and the benchmark utility. Even so, why would nginx pick a more computationally intensive cipher than stunnel or stud? Let's find out.

the what
The problem is that annoying line in your nginx config you copied from somewhere else and you're not entirely sure what it does. It's a lengthy security incantation. Maybe you picked a PCI compliant list? Maybe you filtered it yourself? It looks something like [a]:

ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:RC4+RSA:+HIGH;


So what does it mean? The line above gives you a nice and secure encryption setup for nginx. Unfortunately, it also includes a very computationally intensive cipher using an ephemeral Diffie-Hellman exchange for PFS. Sounds scary already, doesn't it?
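Curious which ciphers an incantation like that actually admits? You can expand it locally with the openssl CLI (exact cipher names depend on your OpenSSL build, so treat this as a diagnostic sketch):

```shell
# Expand the cipher string into the concrete list OpenSSL would negotiate from.
# -v adds columns for key exchange (Kx), authentication (Au), encryption, and MAC.
openssl ciphers -v 'ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP'
```

Every line showing Kx=DH is an ephemeral Diffie-Hellman cipher -- the computationally expensive kind.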

the how
The problem cipher is DHE-RSA-AES256-SHA [b]. It uses DH on each new connection to ensure PFS (DHE = "Diffie-Hellman Ephemeral"). The DH portion requires extra code in software using OpenSSL to enable DHE negotiation. nginx has the extra code built in. stud doesn't have it at all. stunnel has it as a compile-time/certificate-configurable option.

Ding!

nginx configures OpenSSL so completely that it enables the very-secure-yet-very-slow cipher by default! stunnel does not enable that cipher by default because it doesn't configure DH itself (though you can configure it by hand). stud does not enable DH at all.

The next-most-secure cipher is AES256-SHA, which is what stunnel and stud were using to out-perform nginx on my thundering herd connection tests.

the fix
You can force nginx to not enable the expensive cipher by excluding all DHE ciphers: add "!kEDH" to your cipher list. The ! prefix disables any cipher using Ephemeral Diffie-Hellman key exchange.
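Applied to the incantation from above, the nginx line would look something like this (a sketch only -- audit your own cipher list rather than copying this one):

```nginx
ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:RC4+RSA:+HIGH:!kEDH;
```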

the new nginx benchmarks
Now that we found the problem, let's look at nginx SSL using a sane performance cipher.

nginx (AES256-SHA) -> haproxy: 1300 requests per second
nginx (AES256-SHA with keepalive 5 5;) -> haproxy: 4300 requests per second


There is a slight speed boost from also disabling iptables. As always, do not trust these numbers. Performance depends on: your firewall config, your ciphers, your backend latency, keepalive, session caching, and how many faeries currently live in your system fans [c].

final answer
To get more performance out of nginx SSL, remove its ability to negotiate slow encryption ciphers. Add "!kEDH" to your list of allowed ciphers (unless you are passing around government secrets about aliens or are an internationally wanted arms dealer). Do it now.
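Want to sanity-check the exclusion before touching nginx? You can count DHE ciphers with and without it using the openssl CLI (this assumes openssl is installed; kEDH is OpenSSL's alias for ephemeral-DH key exchange):

```shell
# Count ciphers whose name starts with DHE, before and after the exclusion.
echo "with kEDH:    $(openssl ciphers 'ALL:!aNULL' | tr ':' '\n' | grep -c '^DHE')"
echo "without kEDH: $(openssl ciphers 'ALL:!aNULL:!kEDH' | tr ':' '\n' | grep -c '^DHE')"
```

The second count should be zero. If it isn't, your OpenSSL build spells the alias differently (newer builds also accept !kDHE).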

Curious about what cipher your install is negotiating? Test it with a quick:

openssl s_client -host HOSTNAME -port 443

Look at the Cipher: line. If it says DHE-RSA-AES256-SHA, your site could be going much faster over SSL by disabling DHE.


-Matt

[a]: If you want to understand the format (! versus + versus -), read http://www.openssl.org/docs/apps/ciphers.html.

[b]: "problem" from a website speed point of view.

[c]: Disable iptables on all web-facing boxes if you want to maximize performance. Use SSL session caching and (sane) keepalive values, but those settings aren't panaceas. Everything depends on your page composition, user habits, and individual system issues.


bonus insight about SSL benchmarks
Don't trust anybody's SSL benchmarks unless they include, at a minimum, details about: Is the OS firewall enabled? Is the benchmark being run over localhost on the same machine? Which cipher is being negotiated between the benchmark tool and the SSL server? Is keepalive on? Is the benchmark tool using keepalive? Is session resumption on? Is the benchmark tool using session resumption? Which benchmark program is being used (they all have different inherent performance problems)?

(note: I didn't mention any of those things. Do not trust my numbers. Benchmark your own systems.)

Over-reliance on benchmarking with keepalive and session resumption can yield false results unless you only ever have one client to your website and they use one keepalive connection and one SSL session constantly.

If you care about absolute numbers, require details about: How many cores? How fast? Do you have an OpenSSL-recognized hardware accelerator engine being used? What else is running on the box? What's the latency among all components?

bonus insight about social diarrhea
I launched my original post over the HN fence and to my twitter account at the same time. It quickly fell off the new page of HN. It immediately started getting re-twatted on the twitters.

Every @reply I received from twitter was supportive, helpful, understanding, or very politely confused/questioning.

Later in the day, to stop me from whining, a friend re-submitted my post to HN. This time the article shot to the #1 spot. Uh oh. If I hadn't developed such thick Internet Defense skin over the years, I would have been terribly offended by half of the HN comments.

Remember: The Internet is a big place. If you get upset when somebody isn't as perfect as you are, you'll spend your life being miserable. Be nice. Be understanding.

Final feeling: Twitter is better than HN in all social dimensions of engagement, kindness, and authenticity.

fin.
nginx, openssl, ciphers, DHE-RSA-AES256-SHA bad, AES256-SHA good, hn, twitter, glee soundtrack
in nginx russia, ssl tests you
UPDATE
The problem with nginx is resolved in nginx does not suck at ssl!

background
What do you use to serve content over SSL? mod_ssl? nginx compiled with ssl support? stunnel? A hardware accelerator-jigger?

I benchmarked a few SSL terminators in front of haproxy last week. The results may (or may not) surprise you.

initial benchmark results

(on an 8 core server...)
haproxy direct: 6,000 requests per second
stunnel -> haproxy: 430 requests per second
nginx (ssl) -> haproxy: 90 requests per second


initial benchmark results reaction
what. the. fuck.

<rhetorical>
Why is nginx almost 5 times slower than stunnel? It can't all be nginx's http processing, can it? What is crappifying nginx's SSL performance? [a]
</rhetorical>

After recovering from the shock of nginx's crap ssl performance and cleaning spewed hot chocolate off my monitor, stud strutted up to me and begged to be benchmarked too (he hates being left out). stud looks perfect -- a simple, uncrapified TLS/SSL terminator created because stunnel is old and bloated.

stud's one glaring fault is a lack of HTTP header injection support for adding X-Forward-For headers. [1]

Woe is me. How do we get around not having X-Forward-For headers? Do we sit around and complain online? Do we pay someone else to add it? Do we stay with nginx because it's "what we know?" Heck no. Write it yourself.

Now we have a stud with X-Forward-For support. [2]

More benchmarks against plain stud (factory default) and stud with http header injection added:

more benchy markys

(on the same 8-core server...)
first, results from before:
haproxy direct: 6,000 requests per second
stunnel -> haproxy: 430 requests per second
nginx (ssl) -> haproxy: 90 requests per second

now, enter stud (the -n number is how many cores are used):
stud -n 8 -> haproxy: 467 requests per second
stud-jem -n 8 -> haproxy: 475 requests per second

stud-http-jem -n 1 -> haproxy: 440 requests per second
stud-http-jem -n 7 -> haproxy: 471 requests per second
stud-http-jem -n 8 -> haproxy: 471 requests per second


We have a winner! (special note: according to my tests, running stud with jemalloc speeds it up in all cases.)

The added work of parsing, extracting bad headers, and injecting proper ones shows no practical performance impact versus factory default stud.

okay, so what did you do?
I've modified the crap out of stud and its Makefile. All changes are sitting in my add-HTTP-x-forward-for branch on my le github.

Modifications to stud so far:
Dependencies (libev, http-parser, jemalloc) automatically download during the build process. Nothing needs to be installed system-wide.

I cleaned up the build process for stud so you can configure it in a dozen different ways without rewriting the entire Makefile.

By default, everything is statically linked. You can move your one stud binary to another server without installing libev or jemalloc. [3]

The stud Makefile now builds four binaries (if you "make all"): stud, stud-jem, stud-http, and stud-http-jem. jem means "with jemalloc" and http means "automatically injects X-Forward-For and X-Forward-Proto headers." [4]

All http support is isolated in ifdef blocks. Running a non-http stud is exactly the same as stud from bumptech/stud.

in short
<LIES>
Use stud. Don't use stunnel. Never let nginx listen for SSL connections itself.
</LIES>

<TRUTH>
Keep using nginx for SSL termination. Just make sure your ciphers are set correctly. See nginx does not suck at ssl for an overview of how to fix your nginx config and why this post is wrong.
</TRUTH>


-Matt, your friendly bay area neighborhood web performance junkie.

[a]: I tested nginx as a proxy, serving static files, and serving nginx-generated redirects. I tried changing all the relevant ssl parameters I could find. All setups resulted in the same SSL performance from nginx. I even tried the setup on more than one server (the other server was quad-core; nginx got up to 75 requests per second).

[1]: Yes, stud supports writing source IP octets before a connection and now even writing haproxy's own source format before a connection, but I like routing everything back through nginx.

[2]: note: still in-progress. It works, but you can probably craft bloated headers to drop your connection (stud won't crash or segfault -- the errors just break the connection).

[3]: At the top of the Makefile you can easily twiddle static linking on and off per library (for libev and/or jemalloc).

[4]: X-Forward-For header injection is done properly. Any X-Forward-For or X-Forward-Proto headers originating from the client are removed and then replaced by stud-injected headers. We can't allow our clients to inject X-Forward headers our applications expect to be truthful.
nginx, ssl, stunnel, stud, russia, haproxy, jemalloc, libev, http-parser
bridges make barricades
Captures from Vienna Teng & Alex Wong & Friends at Great American Music Hall on Thursday, December 23, 2010. 8pm - 11:30pm.

Radio


Letter From My Lonelier Self


Everything's Fine


Okay, New York (Gonna Make it Home)


Promises


Stray Italian Greyhound



Blue Caravan



It's Not Even Christmas, It's Just Thursday Night (Christmas Song 2010)


vienna teng, alex wong, paul freeman, other dudes, illegal videos, no recording allowed, dinner took an hour to arrive, stood in balcony