technobabble by matt
http://blagomatic.com/b/matt/technobabble

learn. do. fun.
http://matt.io/technobabble/learn.__do.__fun./uv

http://learndofun.com/ -- what did you do?

Here's an edited-out excerpt from learndofun prelaunch ranting about existing online educational garbage:

They've taken the format of university lectures, removed the whiteboard, scaled down a writing surface to the size of two hands, and they make you watch as they write out everything. If you try to watch these videos on a large screen, the hand is like some freaky spider of knowledge spinning out ink in droning patterns. There's a voice to the video, but no human interaction. No face. No human level contact. You are basically watching them write a textbook. A very poorly formatted and untypeset textbook.


We can do better. We will do better.
Tue, 08 Jan 2013 12:42:47 GMT
hivemind devops alert: nginx does not suck at ssl
http://matt.io/technobabble/hivemind_devops_alert:_nginx_does_not_suck_at_ssl/ur

update
Hot on the heels of hivemind devops alert: nginx sucks at ssl, I issue another hivemind devops alert: nginx does not suck at ssl!

aftermath
After circulating my previous alert through the 'tubes, it became abundantly clear: nobody has any idea how SSL performance works. A few people suggested I was running out of entropy (nice guess, but wrong), many people mentioned ssl session caching (nice try, but not relevant when testing all new connections), and a few people chimed in about keepalive (nice try, but then results get skewed depending on how many assets each client requests). Everybody seemed to care about the absolute numbers and not the relative performance differences.

A few people went with a very tactful "dude, that's just wrong. I know it works" response which is perfectly valid and appreciated. I knew something was wrong, but couldn't put my finger on it.

Then David chimed in, and it clicked. I wasn't verifying equal cipher negotiation against all servers and the benchmark utility. Even so, why would nginx pick a more computationally intensive cipher than stunnel or stud? Let's find out.

the what
The problem is that annoying line in your nginx config you copied from somewhere else and you're not entirely sure what it does. It's a lengthy security incantation. Maybe you picked a PCI compliant list? Maybe you filtered it yourself? It looks something like [a]:

ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:RC4+RSA:+HIGH;


So what does it mean? The line above gives you a nice and secure encryption setup for nginx. Unfortunately, it also includes a very computationally intensive cipher using an ephemeral Diffie-Hellman exchange for PFS. Sounds scary already, doesn't it?

the how
The problem cipher is DHE-RSA-AES256-SHA [b]. It uses DH on each new connection to ensure PFS (DHE = "Diffie-Hellman Ephemeral"). The DH portion requires extra code in software using OpenSSL to enable DHE negotiation. nginx has the extra code built in. stud doesn't have it at all. stunnel has it as a compile time/certificate configurable option.

Ding!

nginx configures OpenSSL so completely that it enables the very-secure-yet-very-slow cipher by default! stunnel does not enable that cipher by default because it doesn't configure DH itself (you can configure it by hand though). stud does not enable DH at all.

The next-most-secure cipher is AES256-SHA, which is what stunnel and stud were using to out-perform nginx on my thundering herd connection tests.
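
You can list exactly which ciphers fall under the ephemeral Diffie-Hellman umbrella with OpenSSL's cipher tool (kEDH is OpenSSL's cipher-string keyword for ephemeral DH key exchange; DHE-RSA-AES256-SHA shows up in its output):

openssl ciphers -v 'kEDH'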

the fix
You can force nginx to not enable the expensive cipher by excluding all DHE ciphers. Add "!kEDH" to your cipher list. The ! prefix permanently disables any cipher using ephemeral Diffie-Hellman key exchange.
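
Applied to the example line from earlier, the fixed directive looks something like this (adapt it to whatever cipher list you actually run):

ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:RC4+RSA:+HIGH:!kEDH;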

the new nginx benchmarks
Now that we found the problem, let's look at nginx SSL using a sane performance cipher.

nginx (AES256-SHA) -> haproxy: 1300 requests per second
nginx (AES256-SHA with keepalive_timeout 5 5;) -> haproxy: 4300 requests per second


There is a slight speed boost from also disabling iptables. As always, do not trust these numbers. Performance depends on: your firewall config, your ciphers, your backend latency, keepalive, session caching, and how many faeries currently live in your system fans [c].

final answer
To get more performance out of nginx SSL, remove its ability to negotiate slow encryption ciphers. Add "!kEDH" to your list of allowed ciphers (unless you are passing around government secrets about aliens or are an internationally wanted arms dealer). Do it now.

Curious about what cipher your install is negotiating? Test it with a quick:

openssl s_client -host HOSTNAME -port 443

Look at the Cipher: line. If it says DHE-RSA-AES256-SHA, your site could be going much faster over SSL by disabling DHE.
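
To compare before and after, you can also force s_client to offer only a single cipher (the -cipher flag takes the same OpenSSL cipher-string format):

openssl s_client -host HOSTNAME -port 443 -cipher AES256-SHA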


-Matt

[a]: If you want to understand the format (! versus + versus -), read http://www.openssl.org/docs/apps/ciphers.html.

[b]: "problem" from a website speed point of view.

[c]: Disable iptables on all web-facing boxes if you want to maximize performance. Use SSL session caching and (sane) keepalive values, but those settings aren't panaceas. Everything depends on your page composition, user habits, and individual system issues.


bonus insight about SSL benchmarks
Don't trust anybody's SSL benchmarks unless they include at a minimum details about: Is the OS firewall enabled? Is the benchmark being run over localhost on the same machine? Which cipher is being negotiated between the benchmark tool and the SSL server? Is keepalive on? Is the benchmark tool using keepalive? Is session resumption on? Is the benchmark tool using session resumption? Which benchmark program is being used (they all have different inherent performance problems)?

(note: I didn't mention any of those things. Do not trust my numbers. Benchmark your own systems.)

Over-reliance on benchmarking keepalive and session resumption can yield false results unless you only ever have one client to your website and they use one keepalive connection and one SSL session constantly.

If you care about absolute numbers, require details about: How many cores? How fast? Do you have an OpenSSL-recognized hardware accelerator engine being used? What else is running on the box? What's the latency among all components?

bonus insight about social diarrhea
I launched my original post over the HN fence and to my twitter account at the same time. It quickly fell off the new page of HN. It immediately started getting re-twatted on the twitters.

Every @reply I received from twitter was supportive, helpful, understanding, or very politely confused/questioning.

Later in the day, to stop me from whining, a friend re-submitted my post to HN. This time the article shot to the #1 spot. Uh oh. If I hadn't developed such thick Internet Defense skin over the years, I would have been terribly offended by half of the HN comments.

Remember: The Internet is a big place. If you get upset when somebody isn't as perfect as you are, you'll spend your life being miserable. Be nice. Be understanding.

Final feeling: Twitter is better than HN in all social dimensions of engagement, kindness, and authenticity.

fin.
Wed, 13 Jul 2011 23:10:44 GMT
hivemind devops alert: nginx sucks at ssl
http://matt.io/technobabble/hivemind_devops_alert:_nginx_sucks_at_ssl/uq

UPDATE
The problem with nginx is resolved in nginx does not suck at ssl!

background
What do you use to serve content over SSL? mod_ssl? nginx compiled with ssl support? stunnel? A hardware accelerator-jigger?

I benchmarked a few SSL terminators in front of haproxy last week. The results may (or may not) surprise you.

initial benchmark results

(on an 8-core server...)
haproxy direct: 6,000 requests per second
stunnel -> haproxy: 430 requests per second
nginx (ssl) -> haproxy: 90 requests per second


initial benchmark results reaction
what. the. fuck.

<rhetorical>
Why is nginx almost 5 times slower than stunnel? It can't all be nginx's http processing, can it? What is crappifying nginx's SSL performance? [a]
</rhetorical>

After I recovered from the shock of nginx's crap ssl performance and cleaned the spewed hot chocolate off my monitor, stud strutted up to me and begged to be benchmarked too (he hates being left out). stud looks perfect -- a simple, uncrapified TLS/SSL terminator created because stunnel is old and bloated.

stud's one glaring fault is a lack of HTTP header injection support for adding X-Forward-For headers. [1]

Woe is me. How do we get around not having X-Forward-For headers? Do we sit around and complain online? Do we pay someone else to add it? Do we stay with nginx because it's "what we know"? Heck no. Write it yourself.

Now we have a stud with X-Forward-For support. [2]
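
To make the injection concrete, a request reaching your backend through the patched stud carries headers shaped roughly like this (header spellings per this post; note the de facto standard header is spelled X-Forwarded-For):

GET / HTTP/1.1
Host: example.com
X-Forward-For: 203.0.113.7
X-Forward-Proto: https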

More benchmarks against plain stud (factory default) and stud with http header injection added:

more benchy markys

(on the same 8-core server...)
first, results from before:
haproxy direct: 6,000 requests per second
stunnel -> haproxy: 430 requests per second
nginx (ssl) -> haproxy: 90 requests per second

now, enter stud (the -n number is how many cores are used):
stud -n 8 -> haproxy: 467 requests per second
stud-jem -n 8 -> haproxy: 475 requests per second

stud-http-jem -n 1 -> haproxy: 440 requests per second
stud-http-jem -n 7 -> haproxy: 471 requests per second
stud-http-jem -n 8 -> haproxy: 471 requests per second


We have a winner! (special note: according to my tests, running stud with jemalloc speeds it up in all cases.)

The added work of parsing, extracting bad headers, and injecting proper ones shows no practical performance impact versus factory default stud.

okay, so what did you do?
I've modified the crap out of stud and its Makefile. All changes are sitting in my add-HTTP-x-forward-for branch on my le github.

Modifications to stud so far:
Dependencies (libev, http-parser, jemalloc) automatically download during the build process. Nothing needs to be installed system-wide.

I cleaned up the build process for stud so you can configure it in a dozen different ways without rewriting the entire Makefile.

By default, everything is statically linked. You can move your one stud binary to another server without installing libev or jemalloc. [3]

The stud Makefile now builds four binaries (if you "make all"): stud, stud-jem, stud-http, and stud-http-jem. jem means "with jemalloc" and http means "automatically injects X-Forward-For and X-Forward-Proto headers." [4]

All http support is isolated in ifdef blocks. Running a non-http stud is exactly the same as stud from bumptech/stud.
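
A quick build-and-run sketch using the targets described above (stud's own connection flags elided; -n sets core count, as in the benchmarks):

make all                   # builds stud, stud-jem, stud-http, stud-http-jem
./stud-http-jem -n 8 ...   # http header injection + jemalloc across 8 cores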

in short
<LIES>
Use stud. Don't use stunnel. Never let nginx listen for SSL connections itself.
</LIES>

<TRUTH>
Keep using nginx for SSL termination. Just make sure your ciphers are set correctly. See nginx does not suck at ssl for an overview of how to fix your nginx config and why this post is wrong.
</TRUTH>


-Matt, your friendly bay area neighborhood web performance junkie.

[a]: I tested nginx as a proxy, serving static files, and serving nginx-generated redirects. I tried changing all the relevant ssl parameters I could find. All setups resulted in the same SSL performance from nginx. I even tried the setup on more than one server (the other server was quad-core; nginx got up to 75 requests per second).

[1]: Yes, stud supports writing source IP octets before a connection and now even writing haproxy's own source format before a connection, but I like routing everything back through nginx.

[2]: note: still in-progress. It works, but you can probably craft bloated headers to drop your connection (stud won't crash or segfault -- the errors just break the connection).

[3]: At the top of the Makefile you can easily twiddle static linking on and off per library (for libev and/or jemalloc).

[4]: X-Forward-For header injection is done properly. Any X-Forward-For or X-Forward-Proto headers originating from the client are removed and then replaced by stud-injected headers. We can't allow our clients to inject X-Forward headers our applications expect to be truthful.
Mon, 11 Jul 2011 12:02:13 GMT
It's Open Source, Bitch
http://matt.io/technobabble/It's_Open_Source,_Bitch/uo
You don't matter. Your holy of holy open source principles don't matter. The users matter.

I "upgraded" from Fedora 13 to 14 recently. Well, I tried to upgrade. The upgrade failed using their special upgrade installer. I resorted to a clean install instead.

End result: less functionality.

Flash stopped working. Drag and drop stopped working. My (very common) video driver didn't work out of the box and required waiting for an update from Fedora weeks after the GM.

Even with the fixes so far, video is more choppy than before the upgrade. Drag and drop still doesn't work. Audio seems to have a mind of its own, but for now I'll blame audio problems on Chrome vs. nspluginwrapper vs. Flash vs. pulseaudio.

What's going on with Fedora?

Their bug tracker sheds light on how screwed up the Fedora maintainers can be: https://bugzilla.redhat.com/show_bug.cgi?id=638477

The short version: someone notices Flash isn't working properly. They track it down to a glibc problem. glibc closes it on their side as NOTABUG because, technically, Flash is using memcpy incorrectly: memcpy's contract forbids overlapping source and destination regions (that's what memmove is for), and a recent glibc optimization copies backwards in some cases, breaking code that relied on overlapping forward copies. Even knowing 64 bit Flash isn't working, Fedora goes ahead with the release. As of a few weeks after the release, 64 bit Flash still doesn't work without a manual compile-your-own-shared-library hack.

Why should I continue to use Fedora as a desktop OS if it refuses to test functionality of the most common desktop software program in the wild?

Personalities in the bug thread fall into four categories: problem reporter, maintainer/admin, helpful guru, and peanut gallery (ideologues). It seems the ideologues and admins are the same group though.

The Fedora maintainer running the thread chimes in, "The only stupidity is crap software violating well known rules that have existed forever."

The official position of Fedora is to ship a distro with broken functionality because it's the "crap software violating well known rules." Screw the users; their software vendor ships buggy software. Never mind that the bug never showed up until Fedora's most recent OS release.

Linus calmly chimes in when faced with the rude outburst, saying he understands the software is broken, but asks "What's the upside of breaking things? That's all I'm asking for."

Linus declares, "Are you seriously going to do a Fedora-14 release with a known non-working flash player?"

Yes, Fedora shipped a release with broken 64 bit Flash player functionality. They don't seem to care.

Dan reiterates what Linus is trying to say: "For a CLOSED NOTABUG bug report, seems to be a lot of traffic on it. Is ANYONE actually fixing this problem either in glibc or in Flash? This is ridiculous."

Michael agrees: "If it works on Win 7 and doesn't work on Fedora, it is Fedora and Linux that take the crap. Period."

Linus continues to make points the Fedora maintainers ignore and refuse to comment about:
"Rather than make it look bad in the eyes of users who really don't care _why_ flash doesn't work, they just see it not working right.

There is no advantage to being just difficult and saying 'that app does something that it shouldn't do, so who cares?'. That's not going to help the _user_, is it?

And what was the point of making a distro again? Was it to teach everybody a lesson, or was it to give the user a nice experience?"

The same technically-right-ideology versus sanity argument is repeated in a different thread at fedorahosted.


I haven't even touched on my other two issues. Drag and drop is broken. A feature from 1980 doesn't work in 2010. My video card, using the most used video driver on desktop Linux, didn't work in the release without a manual Fedora software update. The reason? It uses a proprietary driver, so they don't test it.

What is Fedora doing?
Fri, 26 Nov 2010 16:07:57 GMT
The Key-Value Wars of the Early 21st Century
http://matt.io/technobabble/The_Key-Value_Wars_of_the_Early_21st_Century/ui
"Give us replication or... give us an acceptable alternative!"

"If I have to write another schema migration, I swear to Monty, I'm going to become a hardware store day labourer."

"Serialized JSON is the answer to everything!"
"But what if I want to search by a property of the JSON object and not just the id?"
"You just write a map-reduce function and re-reduce until you get your answer."
"Did you just tell me to go computationally fuck myself?"
"I believe I did, Bob. I believe I did." [q]

The core of their debate came down to representation versus distribution. The squealers claimed to have perfect representation, but hacked together distribution methods. The kvalers claimed to have perfect distribution, but limited representation models.

Trees sitting next to tables
What was the problem with representation? Why is perfect representation diametrically opposed to perfect distribution? Perfect representation came down to one issue: asking for ranges of things. Who is between 18 and 29? Which orders completed between last Thursday and today?

The squealers worshiped at the altar of their B+ tree, offering up mathematically perfect schemas in return for logarithmic access times to their data. Squealers ignored the one downside of their Lord of All Data Structures: B+ trees were manipulated in-place on disk, requiring huge amounts of logically contiguous disk space for large datasets. Distributing and replicating a B+ tree across multiple machines was impossible. Squealers took to chopping up their data into smaller and smaller collections until entire copies of B+ trees could be kept on multiple machines. Squealers claimed they never needed cross-table joins in the first place and they actually enjoyed the additional administration of dealing with statement based replication [a].

The kvalers worshiped at the altar of their hash table, offering up non-colliding keys in return for separation of data enabling perfect distribution and replication. The kvalers ignored the one downside of their Lord of All Data Structures: hash tables provide no data locality. Kvalers happily maintained hand-crafted indexes if range queries or search-within-the-dataset queries were required. Kvalers pretended to never have heard of ad-hoc queries or business intelligence requirements.

Who put my tree on the table?
Trees are made of nodes. Nodes must have names to be found. Traditionally, B+ trees were stored in files with nodes being referenced by positions within the B+ tree file. Arbitrarily chopping a B+ tree in half and distributing it to another node was not practical. Even with a B+ tree designed to reference outside its own file [1], the entire collection of mini-B+ trees needed to be present for operations to complete, resulting in no advantage in distribution or replication.

The B+ tree scalability problem came down to one issue: how can you name a node and later reference it independent of its storage location?

One mild summer's day in Goolgonia, historically California, someone stumbled upon the answer: a new way of naming nodes [2]. What if, instead of offsets within a file, a Central Naming Service could provide a globally unique node identifier serving as a recall key for a node's location? Instead of storing nodes within a sequentially allocated data file, B+ tree nodes could be stored on anything accepting the name of the node and storing its contents.

The kvalers squealed.

The squealers begged the kvalers, "Please, may we have access to your infinite-storage-capacity, automatically-redundant-with-self-managing-failover, and incorruptible storage system?" The response was delivered with a wry smile: "We've been waiting for you."

In a sigh of ecstasy, the squealers and kvalers realized what was born. The squealers' toes curled at the thought of being free from routine sharding, free from identifying hot spots and spinning off copies, free from arbitrarily chopping tables in half, and free from single points of failure. The kvalers' eyes rolled upwards while synthesizing thoughts of sequential data access, retrieving records by ranges, and iterating over database snapshots just by holding on to a root node.

After an exhaustive journey, the kvalers and squealers drifted off to a sound sleep that night. The story of the squealers versus kvalers came to a close, but the journey of the sqvalers had just begun.

wtfbbq? (aka Back to Life, Back to Reality)
Over the past few years I've been looking for a stable, low maintenance, zero up front decision making, scalable sorted data platform. I looked high and low to no avail. I even went as far as writing a single-purpose BigTable clone called TinyTable for my own use, but it still had annoying edge cases when I overflowed a single disk or RAID array.

A potential solution dawned on me after looking at an R-tree implementation using CouchDB's append-only storage format. To store a node, you pass in the data and get back its position. To retrieve a node, you pass in a position and get back the data. It's that simple.

All file writing and reading is completely encapsulated within the couch_file module. The two storage calls are straight forward. No seeking, block management, or any other file-level details are leaked through the interface.

{ok, Position} = couch_file:append_term(Fd, Node).
{ok, Node} = couch_file:pread_term(Fd, Position).


Important point: Position is an opaque type. The code doesn't care if Position is an integer or a tuple or a binary. Position is passed through to the storage layer on lookups and returned to the caller on appends.

Let's do this
To make CouchDB store documents remotely, we only have to replace the implementation of the two functions listed above. For our remote storage let's use Riak as our Key-Value store (because it's awesome). CouchDB persists Erlang terms to disk and Riak persists Erlang terms to disk. We get to remove redundant code from CouchDB since Riak is converting terms for us. Riak also automatically replicates everything we store, easily handles adding more machines to increase capacity, and deals with failures transparently.

append_term's implementation becomes riak_write(Bucket, Key, Value).
pread_term's implementation becomes riak_read(Bucket, Key).
We replaced Fd with a Bucket name here. Bucket is the database filename as if it were stored locally (e.g. <<"_users.couch">>).

We also have to generate unique keys for our nodes. The CouchDB file format is append-only, so we never have to worry about updating a value once it is stored [e]. Where does Key come from? Key is equivalent to Position in the original couch_file calls. Key is now simply {node(), now()}. That's the Erlang way of making a globally unique value within your cluster. node() is the name of your VM (unique within a cluster) and now() returns the current time (guaranteed to be monotonically increasing) [k].

We turn our nice {node(), now()} into a binary and use it as a key: term_to_binary({node(), now()}), et voila we have a globally unique key to store values in Riak [m].
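
A minimal sketch of the two swapped-in functions, assuming riak_write/3 and riak_read/2 are thin wrappers around the riak_kv client (the wrapper names are mine, not actual couch_file or riak_kv API):

%% hypothetical sketch -- wrapper names are illustrative
append_term(Bucket, Term) ->
    Key = term_to_binary({node(), now()}),  %% globally unique within the cluster
    ok = riak_write(Bucket, Key, Term),
    {ok, Key}.  %% Key plays the role of Position

pread_term(Bucket, Key) ->
    riak_read(Bucket, Key).  %% returns {ok, Term}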

All the tricky couch_file page alignment code is now gone. The only files written to the local file system are couch.uri and couch.log.

We did this
Does it work? Sure, it works. Check out my CouchDB Riak Integration for yourself [p]. Modify your bin/couchdb script to include Riak client libraries (riak_kv) and connect to a local Riak cluster:
-pa ../riak/deps/riak_kv/ebin -pa ../riak/deps/riak_core/ebin -setcookie riak -name 'couchdb@127.0.0.1'
Insert those after $ERL_START_OPTIONS and before -env ERL_LIBS

What did it take to convert CouchDB to use completely remote storage for documents? It took: 7 files changed, 87 insertions(+), 502 deletions(-)

See the commit log for quirks, caveats, and how to use the admin interface properly in my proof-of-concept implementation.

Patches welcome? Maybe? To make this mergeable upstream, we need a way to have both local-disk and Riak storage. The original couch_file needs to be restored with flags about when to use remote storage versus local storage per-DB. A dozen other quirks and deficiencies would have to be fixed and accounted for as well. It makes more sense to use the CouchDB B+ tree code (and/or the R-tree code) with a modified couch_file to create an independent data platform [L].


-Matt
@mattsta

You shouldn't follow @mattsta on twitter.
He only writes about annoying problems, trivialities of life, The Bay Area, and programming.
Pretty boring stuff.


Footnotes:

[q]: With apologies.

[a]: Yes, this is the shitty MySQL way of doing things. Let's not tell them about PostgreSQL's log shipping replication.

[1]: e.g. Replacing your node reference from {Offset} to {Filename, Offset} still means all referenced files must be replicated.

[2]: ANWONN: The most advanced, world-changing, node naming scheme to ever be conceived by an almost god-like being. Do you question it? Here, read a 600 page autobiography about how amazing, unique, and intelligent I am.

[3]: This provides more location independence than the Sequential Append-Only Write-Once-Read-Many Just-Try-To-Corrupt-Me-I-Dare-You B+ tree file format. [[Yes, this footnote isn't referenced in the article above. I'm not sure when the source to the footnote got cut, but I like the footnote anyway.]]

[k]: Yeah, I just re-wrote Twitter's Snowflake in one line of Erlang. And we can deal with unsigned longs too because we're Not Java. Let's ignore the fact that {node(), now()} is about 300 bits instead of 64 bits.

[m]: Yes, you can (and should) make it much more efficient storage-wise. I'm going for concise and readable here.

[p]: The CouchDB code tree is very messy. It really needs to be cleaned up. I wish they would discover Rebar already. I don't give a shit about the bulk of it; still, I keep it professional.

[L]: I'm not a fan of storing documents as JSON.

[e]: Except for the header storing root node information.
Mon, 09 Aug 2010 22:12:37 GMT
nodejs -> twitter -> nodejs -> couchdb
http://matt.io/technobabble/nodejs_->_twitter_->_nodejs_->_couchdb/uh

node client to import the public twitter timeline into couchdb running at http://127.0.0.1:5984/ (create the database at /twitter/ first).
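
One way to create the database is a quick call to CouchDB's HTTP API:

curl -X PUT http://127.0.0.1:5984/twitter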

Everybody can run 150 public API requests per hour, which nets you around 3,000 tweets per hour (150 calls at roughly 20 tweets each). The client polls the public API every 0.5 seconds.

Runs against 2010.07.16 node-v0.1.101.


#!/usr/bin/env node

// Poll Twitter's public timeline and bulk-insert each batch into CouchDB.
var http = require('http');
var couch = http.createClient(5984, '127.0.0.1');
var twitter = http.createClient(80, 'api.twitter.com');

// POST a JSON array of tweets to CouchDB's bulk document API.
function post_all_json(twitter_json_text) {
  var creq = couch.request('POST', '/twitter/_bulk_docs',
                           {'content-type': 'application/json'});
  creq.write('{"docs":' + twitter_json_text + '}');
  creq.end();
}

// Fetch the public timeline, buffer the full response body, then store it.
function runTwitterRequest() {
  var treq = twitter.request('GET', '/1/statuses/public_timeline.json',
                             {'host': 'api.twitter.com'});
  treq.end();
  treq.on('response', function(response) {
    var twitter_response = "";
    response.setEncoding('utf8');
    response.on('data', function(chunk) {
      twitter_response += chunk;
    });
    response.on('end', function() {
      var tjson = JSON.parse(twitter_response);
      console.log("Posting retrieved tweets which number " + tjson.length);
      post_all_json(twitter_response);
    });
  });
}

// Poll every 500ms (see the rate limit note above).
setInterval(runTwitterRequest, 500);

Sun, 25 Jul 2010 03:48:57 GMT
redis meetup notes
http://matt.io/technobabble/redis_meetup_notes/uf
libcluster
Redis is getting native clustering one way or another. What if we make a generic libcluster to handle the hash ring, membership, networking, failover and other clustery things? It could be helpful in other projects if libcluster is as well thought out and as well written as Redis.

zeromq
How about adding zeromq as a connection option in addition to TCP and (soon) unix domain sockets? Zeromq can give us sane udp out of the box (maybe) and possibly a more efficient tcp layer since zeromq coalesces as many outgoing messages together as possible (nb: would break a central NIH rule of antirez).

command response tags
Someone brought up the issue of multiple threads using one redis connection. What if each redis command could accept an optional tag that got echoed back in the response to the user? Client libraries can make a tag (something unique within a short time period of the app), send commands to redis as needed, then use the tags in responses to match up redis responses with who should receive them.
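
A purely hypothetical exchange to make the idea concrete (this tag syntax is my illustration, not real or proposed redis protocol):

GET mykey [tag=t42]
[tag=t42] "myvalue"

The client library keeps a map of tag -> waiting caller and routes each tagged response to whoever sent the matching command.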

Sounds good to me.

-Matt
Thu, 24 Jun 2010 15:07:56 GMT
you'll tell them everything's fine, then we'll put you back in your box
http://matt.io/technobabble/you'll_tell_them_everything's_fine,_then_we'll_put_you_back_in_your_box/tj
First, under your application drop down menu select Switch Desktop Mode and pick Classic Desktop. The Dell Launcher application was eating over 40MB RAM just to show the unnecessary animated application picker.

Next, launch a terminal and bring your old friends back:
sudo -s
apt-get update
apt-get upgrade
# go ahead and reboot if upgrade installed a new kernel
# general goodies
apt-get -y install nmap vim vim-nox vlc minicom dict rrdtool aircrack-ng
# firefox goodies
apt-get -y install adblock-plus firebug mozilla-noscript flashblock
# programming goodies
apt-get -y install erlang mercurial haskell-compiler tinyscheme ocaml
# graph/image goodies
apt-get -y install gnuplot graphviz graphicsmagick-imagemagick-compat
# book writing goodies
apt-get -y install texlive-science docbook-* # will fail eventually because of a pdf being included in the HTML docs. after everything is downloaded run again without docbook-* and it'll setup the previously downloaded non-conflicting packages.

# for virtualbox
apt-get -y install linux-kernel-dev build-essential linux-headers-`uname -r` dkms

Download the latest virtual box from:
http://www.virtualbox.org/wiki/Linux_Downloads
and install with:
dpkg --force-architecture -i <downloaded virtualbox .deb>


Sat, 11 Apr 2009 14:26:25 GMT
about
http://matt.io/technobabble/about/ti
Sat, 11 Apr 2009 12:45:47 GMT