Django and gevent

So, I've recently been playing with Django production schemes other than Apache/mod_wsgi (I'm sorry, Graham! :)

Specifically, I wanted to reduce my memory footprint, as I'm cheap, and VPSs don't come with a lot of RAM. Obviously, nginx is a great choice here, and turned out to be even simpler to configure than I'd expected.

server {
    listen 80;
    root    /var/www/$host/html/;
    # nginx docs recommend try_files over "if"
    location    /   {
        try_files $uri @proxy;
    }
    location @proxy {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://unix:/var/www/$host/server.sock;
    }
}

The main thing that had kept me from switching in the past was the lack of a comfortable way to manage the "back end" processes. Lots of systems presented themselves, but in the end I opted for runit ... it's simple, and general purpose.

First, I tried nginx/gunicorn ... it does quite well, but felt rather slow to me. According to Nicholas Piel, gevent on its own should be lighter, faster, and more reliable.

But I'd grown attached to nginx talking to my apps via unix sockets. That's ok, turns out the gevent wsgi server will take any socket you pass it.

#!/var/www/blog.tinbrain.net/app/bin/python

import sys, os

from gevent import wsgi
from gevent import socket
from gevent import monkey

sys.stdout = sys.stderr

# Just in case
monkey.patch_all()

import pwd

# Get this so we can chown/chgrp the socket and let nginx read it
pe = pwd.getpwnam('www-data')

SOCK = '/var/www/blog.tinbrain.net/server.sock'

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    os.remove(SOCK)
except OSError:
    pass
sock.bind(SOCK)
os.chown(SOCK, pe.pw_uid, pe.pw_gid)
os.chmod(SOCK, 0770)
sock.listen(256)

# Set up for Django before pulling in the handler
sys.path.insert(0, '/var/www/blog.tinbrain.net/app/')
os.environ['DJANGO_SETTINGS_MODULE'] = 'musings.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

wsgi.WSGIServer(sock, application, spawn=None).serve_forever()
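If you want to check that bind/chmod dance in isolation, the stdlib is enough. A minimal sketch, using a made-up /tmp path and skipping the chown step (that needs root); 0o770 is just the modern spelling of 0770:

```python
import os
import socket
import stat
import tempfile

# Hypothetical stand-in for /var/www/blog.tinbrain.net/server.sock
sock_path = os.path.join(tempfile.mkdtemp(), 'server.sock')

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    os.remove(sock_path)        # clear out a stale socket from a previous run
except OSError:
    pass
sock.bind(sock_path)            # bind() creates the filesystem entry
os.chmod(sock_path, 0o770)      # rwx for owner and group, nothing for others
sock.listen(256)

mode = stat.S_IMODE(os.stat(sock_path).st_mode)
print(oct(mode))                # 0o770
```

The chmod has to happen after the bind, since the socket file doesn't exist on disk until then.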

And there you have it. Does it work? Sure... I managed to hammer my test server [a snapshot of this blog] at almost 30 requests per second... not bad for half a night's work.
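If you'd rather poke the socket by hand before wheeling out ab, plain sockets will do. A rough sketch, with a one-shot dummy server standing in for the gevent app and an invented socket path; the client half is essentially what nginx does when it proxies a request:

```python
import os
import socket
import tempfile
import threading

# Invented path; in the setup above it'd be the server.sock nginx talks to.
sock_path = os.path.join(tempfile.mkdtemp(), 'server.sock')

# One-shot stand-in for the app; bind and listen here, before the
# thread starts, so the client can't race the accept.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and discard) the request
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()
    srv.close()

t = threading.Thread(target=serve_once)
t.start()

# The nginx side of the conversation: raw HTTP over the unix socket.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(sock_path)
cli.sendall(b"GET / HTTP/1.0\r\nHost: blog.tinbrain.net\r\n\r\n")
response = cli.recv(4096)
cli.close()
t.join()

print(response.split(b"\r\n")[0])  # b'HTTP/1.0 200 OK'
```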

Next: Making Postgres "green"

Addendum

I've rechecked my figures, and it's far better than I thought!

When I crank ab up to 100 concurrent requests, 1000 in total, I get:

Total transferred:      2389000 bytes
HTML transferred:       2072000 bytes
Requests per second:    280.68 [#/sec] (mean)
Time per request:       356.281 [ms] (mean)
Time per request:       3.563 [ms] (mean, across all concurrent requests)
Transfer rate:          654.82 [Kbytes/sec] received

And at a concurrency of 180:

Total transferred:      2389000 bytes
HTML transferred:       2072000 bytes
Requests per second:    463.41 [#/sec] (mean)
Time per request:       388.426 [ms] (mean)
Time per request:       2.158 [ms] (mean, across all concurrent requests)
Transfer rate:          1081.14 [Kbytes/sec] received
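Those two "time per request" lines aren't independent measurements, by the way; both fall straight out of the throughput figure. A quick check for the 100-concurrency run:

```python
rps = 280.68         # ab's mean requests per second
concurrency = 100    # parallel clients

# Mean latency each client sees: with 100 requests in flight at
# 280.68 req/s overall, a client waits concurrency/rps per request.
per_request_ms = concurrency / rps * 1000

# "Across all concurrent requests" is just the reciprocal of throughput.
across_all_ms = 1000 / rps

print(round(per_request_ms, 1))  # close to ab's 356.281 [ms]
print(round(across_all_ms, 3))   # matches ab's 3.563 [ms]
```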

Awesome!
