Django Deployment made simple!

One of the most frequent questions we get on #django is "How do I deploy my django site?"

Let's face it: deploying Django is "hard". Compared to the "just dump it in the docroot" approach that works for static sites, and even a lot of PHP sites, Django has a lot of moving parts to consider.

So, here is a guide to setting up a brand new DigitalOcean droplet, ready to host multiple Django sites at once.

(If you sign up with the link above, DigitalOcean will give you and me some credit for my referral.)

This is not the simplest solution to setting up a Django hosting droplet, but I hope you will find it clear enough that you know where to go from here.

The overview

I'll be using Debian, as it's generally current, doesn't bloat your basic install too much, and has a good track record on servers.

A basic Django site requires 4 main components: Web server, App server, DBMS (database), and MTA (mail server).

Web Server
Nginx will act as our web server, accepting HTTP requests, serving static assets and media content, and forwarding other requests to our App server.
App Server
We're going to use uWSGI. Whilst there are simpler solutions like `gunicorn`, uWSGI brings with it a plethora of features that are hard to ignore.
Database Management Server (DBMS)
Postgres. It's really the best choice.
Mail Server (MTA)
We need this so our servers can email us when there are errors, and so our sites can email their users. OpenSMTPd has the laudable goal of being simple and secure - covering the common, basic uses instead of getting complicated, because complexity is hard to secure.

We'll be keeping the web sites in /srv/apps/{project}/ with the following layout:

/venv - the virtualenv
/html - static and media
/code - where our project's code lives
/hostnames.txt - the list of hostnames this project responds to.

Then we'll add a symlink in /srv/www/ for each hostname we respond to, to the html dir of the project it's for.
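To make the layout concrete, here is a small sketch that builds that skeleton under a base directory. The function name and arguments are mine, not part of the guide; with this setup the base would be /srv:

```shell
# Sketch of the per-project layout described above (hypothetical helper).
# new_site_skeleton BASE PROJECT HOSTNAME
# e.g. new_site_skeleton /srv myproject example.com
new_site_skeleton() {
    base="$1"; project="$2"; hostname="$3"
    # the project's own tree: code, static, and media
    mkdir -p "$base/apps/$project/code" \
             "$base/apps/$project/html/static" \
             "$base/apps/$project/html/media" \
             "$base/www"
    # one symlink per hostname, pointing at the project's html dir
    ln -s "../apps/$project/html" "$base/www/$hostname"
}
```

Each extra hostname for the same project is just another symlink into /srv/www/.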

Before we begin

In this guide I'll assume some basic familiarity with using Linux and its ilk.

I will also try to abide by basic good security practices. This means we use root as little as possible.

One thing that turns up often is people who have learned somewhere that "if it doesn't work, sudo!". DON'T DO THIS. Using sudo is something you should do only when you know you absolutely must.

As a convention in this document, any commands prefixed with # are expected to be run as root, and those with $ as a normal user.

Lastly, text editors. It's a personal choice. My preference is vi, but many find this (understandably) arcane. In a default install of Debian, you will typically have a choice between vi, nano, pico, and possibly more. Use what you feel most confident with.

Setting up a Droplet

Make sure, when registering with DO, to create an SSH key and upload your public key to them. This can then be your default way to log in as root.
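If you don't have a key pair yet, you can generate one like this (a sketch; ed25519 is my choice here, the guide doesn't mandate a key type, and the empty passphrase is only so it runs unattended - use a real passphrase in practice):

```shell
# Generate an SSH key pair at the given path (hypothetical helper).
gen_ssh_key() {
    # -N "" sets an empty passphrase so this runs non-interactively;
    # prefer a real passphrase for a key you'll actually use.
    ssh-keygen -q -t ed25519 -f "$1" -N ""
}

# usage: gen_ssh_key "$HOME/.ssh/id_ed25519"
# then upload the resulting .pub file to DigitalOcean
```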

Once you've registered with DO, go to 'Create Droplet'.

Any name will do - I'll use "testing" for this case.

The basic $5 droplet may not seem like much, but I've managed to host 7 or 8 small Django sites concurrently on one.

Pick any region you like - one close to you may reduce latency, but not noticeably. You may want to consider legal jurisdiction of your data.

Select Debian, and go with the latest - 8.1 as of this writing.

For "Available Settings", you can leave them off for now, but when you get serious, do enable Backups.

Be sure to select your SSH key for installation [the box will colour in].

Now, press the big "Create Droplet" button, and about a minute later you'll have your server!


Before we set up our services, we need to do some routine house keeping.

Log into your new droplet

$ ssh root@{your-droplets-ip}

First, we'll bring all the installed packages up to date. Since OpenSMTPd isn't in the current Debian release, we'll switch to testing.

Edit /etc/apt/sources.list and replace all occurrences of "jessie" with "testing".
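If you'd rather script the edit than open an editor, a sed one-liner does the same replacement. It's wrapped in a function here (my own naming) so you can try it on a copy of the file first:

```shell
# Replace every occurrence of "jessie" with "testing" in the given file.
switch_to_testing() {
    sed -i 's/jessie/testing/g' "$1"
}

# as root: switch_to_testing /etc/apt/sources.list
```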

Now, we update the package data (remember, # means do this as root):

# apt-get update
# apt-get autoclean

First rule of securing a server is to not run network services you don't need - minimal attack surface. So we're going to remove rpcbind, as we don't use NFS.

# apt-get purge rpcbind

Now we update all our installed packages.

# apt-get dist-upgrade

Next, some packages we'll need:

# apt-get install python-virtualenv python-pip python-dev \
    zlib1g-dev libjpeg-dev libtiff-dev libpq-dev fail2ban git htop

Some of this is obvious [virtualenv, pip]. We need python-dev for building Python packages that require compilation, such as PIL and psycopg2.

PIL also requires the image libraries libjpeg and libtiff, and uses zlib for PNG.

The libpq-dev package is needed for psycopg2 to talk to Postgres.

We use fail2ban to avoid filling our logs with the endless failed login attempts from botnets.

We'll need access to our git repo with our code in it, which explains git.

And finally, I like to use htop to monitor my system - it's like top, only a lot better.

After that, we clean up any packages that are installed but no longer needed.

# apt-get autoremove

Finally, reboot to make sure the latest everything is running.

# reboot

Final step

Before we move on, we're going to create a non-root user to do our daily work as. Just like not running services we don't need, we don't use root unless we absolutely must.

This helps to prevent mistakes; we all make mistakes.

# useradd -G www-data,sudo -m {username}
# passwd {username}

This creates the user, adds them to the www-data group (so they can edit sites) and the sudo group (so they can use sudo), and then sets their password.

In a new window, try logging in to make sure this worked.


Configuring OpenSMTPd

We want to configure OpenSMTPd to accept mail from local connections only, since we have no need to receive email from outside, and don't want to be an open relay for spam.

# apt-get install opensmtpd

The default config will only accept connections on localhost, so is good enough.

Configuring Postgres

Installing Postgres is simple, and the default configuration is fine.

# apt-get install postgresql-9.4

We'll need to create a user for our apps to connect as. In an ideal world, we would create a separate user per app, for better isolation. However, for now we'll just create a "www-data" role, to match the www-data user our web sites run as.

# su - postgres -c "createuser www-data -P"

This will create a role for our www-data user, and prompt you for a password for it.

Next, we create a role for ourselves that is allowed to create databases, and is added to the www-data role so it can create databases owned by that role.

# su - postgres -c "createuser -g www-data -d {username}"

We could allow www-data to create databases, but it's safer to not. This is the principle of least privilege.

Since we're on a small memory budget, you may want to tweak some of the Postgres settings.

You can edit them in the file /etc/postgresql/9.4/main/postgresql.conf.

Here are some settings you can tune for memory use:

See the PostgreSQL documentation on resource consumption for more details.
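As a rough illustration only (these values are my assumptions for a 512MB droplet, not recommendations from this guide), the relevant knobs look like:

```
# postgresql.conf - illustrative low-memory values; tune for your workload
shared_buffers = 64MB          # main shared cache (9.4 default is 128MB)
work_mem = 4MB                 # per-sort/per-hash memory
maintenance_work_mem = 32MB    # for VACUUM, CREATE INDEX, etc.
effective_cache_size = 128MB   # planner hint, not an allocation
```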

Once you've adjusted these, restart postgres:

# systemctl restart postgresql

Configuring nginx

Debian provides 3 different builds of nginx, with different options compiled in. The smallest we can use is nginx-full [the default], since we'll be wanting uWSGI and the "gzip static" module.

# apt-get install nginx-full

Here, we will set up a single configuration that will handle the static requests for all of our sites, and forward other requests to uWSGI. The Fastrouter (see next section) will take care of passing those requests to the right workers.

We want to put the following in a new file called /etc/nginx/sites-available/multi

# Where to look for content (static and media)
root    /srv/www/$host/;

# Allow gzip compression
gzip_types text/css application/json application/x-javascript;
gzip_comp_level 6;
gzip_proxied any;
# Look for files with .gz to serve pre-compressed data
gzip_static on;

server {
    listen 80;

    # nginx docs recommend try_files over "if"
    location / {
        # Try to serve existing files first
        try_files $uri @proxy;
    }

    location @proxy {
        # Pass other requests to uWSGI
        uwsgi_pass unix:/srv/apps/_/server.sock;
        include uwsgi_params;
    }
}

We then remove the default site config, and symlink in our own

# cd /etc/nginx/sites-enabled
# rm default
# ln -s ../sites-available/multi .

Now restart nginx to take on the new config

# systemctl restart nginx

Configuring uWSGI

Bring in the parts of uWSGI that we need:

# apt-get install uwsgi-plugin-python3

or if you're still on Python 2:

# apt-get install uwsgi-plugin-python

We'll be running uWSGI in emperor mode. This way it will take care of launching new sites for us, and keeping them running, so we don't need a tool like supervisord.

First, we'll put our config files in place:

# mkdir /etc/uwsgi

Now, into /etc/uwsgi/emperor.ini put:

master = true
procname-master = Emperor

; Look for vassal configs using this pattern
emperor = /srv/apps/*/uwsgi.ini
; Don't resolve symlinks when considering reload-on-touch
emperor-nofollow = true

; Lowest privs
uid = www-data
gid = www-data

; Clean up our workers when we die.
no-orphans = true

Well, that was simple! But next we need a config for each of the sites (called "vassals" in uWSGI terms).

Into /etc/uwsgi/vassal.ini put:

master = true
procname-master = %c

; Run with lower privs
uid = www-data
gid = www-data

; :0 lets the OS assign a port
socket = 127.0.0.1:0
; Register with the FastRouter the list of hostnames
subscribe-to = /srv/apps/_/sub.sock:@hostnames.txt

; Paths are referenced relative to where the INI file is found
chdir = %d

# Task management
; Max 4 processes
processes = 4
; Each running 4 threads
threads = 4
; Reduce to 1 process when quiet
cheaper = 1
; Save some memory per thread
thread-stack-size = 512

# Logging
plugin = logfile
req-logger = file:logs/request.log
logger = file:logs/error.log
log-x-forwarded-for = true

# Python app
plugin = python
virtualenv = venv/
pythonpath = code/
module = %c.wsgi
enable-threads = true

# Don't load the app in the Master.
app-lazy = true

The placeholder variables (%d, %c, etc) allow us to reuse this same configuration for many sites.
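For instance (using a hypothetical project called myproject), a vassal config found at /srv/apps/myproject/uwsgi.ini would expand its placeholders as:

```
; %d -> /srv/apps/myproject/   (the directory containing the ini file)
; %c -> myproject              (the name of that directory)
; so "module = %c.wsgi" becomes "module = myproject.wsgi"
; and "chdir = %d" runs the app from /srv/apps/myproject/
```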

Finally, the Fastrouter. This is like a smart load balancer, but will distribute requests according to their hostnames.

Into /etc/uwsgi/router.ini put

master = true
procname-master = FastRouter

uid = www-data
gid = www-data

plugin = fastrouter
fastrouter = %d/server.sock
; run as lower privs
fastrouter-uid = www-data
fastrouter-gid = www-data
; handle the scale
fastrouter-processes = 2
; but scale down when quiet
fastrouter-cheap = true
; let other vassals subscribe to us
fastrouter-subscription-server = %d/sub.sock

# Logging
plugin = logfile
req-logger = file:logs/request.log
logger = file:logs/error.log

With the subscription server, our vassals can register with the FastRouter, telling it which hostnames they can handle, and what port they can be contacted on.

Now we need to create a service script for systemd.

Into /etc/systemd/system/emperor.uwsgi.service put

[Unit]
Description=uWSGI Emperor
After=network.target

[Service]
ExecStart=/usr/bin/uwsgi --ini /etc/uwsgi/emperor.ini
Restart=on-failure

[Install]
WantedBy=multi-user.target


And we can start it up using:

# systemctl start emperor.uwsgi

If this works without issue, we can enable it to start on boot:

# systemctl enable emperor.uwsgi

Preparing the deployment space

We need to make space for our apps to live, and the links for their hostnames.

# cd /srv
# mkdir www apps
# chown www-data:www-data www apps
# chmod g+w www apps

Creating the FastRouter

Our very first vassal will be the FastRouter. Whilst we could incorporate this into the Emperor task, I prefer to make it a vassal as it allows for reconfiguration without having to tear down all the other vassals.

Now, let's make a directory for the router. I use the name '_' to make it clear it's something other than a regular site.

# cd /srv/apps/
# mkdir _
# cd _
# mkdir logs
# chown -R www-data:www-data .

Finally, we symlink the router.ini into the dir, but call it "uwsgi.ini" so the Emperor will see it.

# ln -s /etc/uwsgi/router.ini uwsgi.ini

Soon after you do this, the Emperor will notice the file, and launch a new uWSGI instance using this config.
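One quick sanity check (a hypothetical helper, assuming the socket paths used in this guide): the FastRouter should create its server.sock shortly after the Emperor picks up the config.

```shell
# Report whether a unix socket exists at the given path.
check_socket() {
    if [ -S "$1" ]; then echo "up"; else echo "down"; fi
}

# check_socket /srv/apps/_/server.sock
# should print "up" once the router is running
```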

Creating a site

From here on, use your normal user account for all actions.

The steps for creating each site are only a little more involved than for the FastRouter.

We're going to use the name of your project a few times, so let's put it in an environment variable (remember, $ means run as your normal user, not root):

$ export NAME=myproject

This must match your Django project name, as uWSGI will use this name to look for the WSGI application.

(Remember, if you log out this will be un-set, so you will need to export it again.)

Make the directory:

$ cd /srv/apps
$ mkdir $NAME
$ cd $NAME

Make all the directories we need:

$ mkdir -p code html/static html/media logs
$ chgrp -R www-data logs html
$ chmod -R g+w logs html

Create a virtualenv, and activate it, and update pip:

$ virtualenv -p python3 venv

or if you're still on Python 2:

$ virtualenv -p python2 venv


$ . venv/bin/activate
$ pip install -U pip

Check out your project into code/

$ git clone {github url} code/

This assumes your project is at the top level of your git repo. If this is not the case, check it out into another directory, and symlink the root of the project (where manage.py lives) to code/

$ ln -s myrepo/myproject code

Install your requirements

$ pip install -r code/requirements.txt

Create a database:

$ createdb -O www-data $NAME

Migrate your DB schema:

$ cd code
$ python manage.py migrate

And create a superuser for you to log in with:

$ python manage.py createsuperuser

Your project should be configured with

STATIC_ROOT = os.path.join(os.path.dirname(BASE_DIR), 'html', 'static')
MEDIA_ROOT = os.path.join(os.path.dirname(BASE_DIR), 'html', 'media')

so that when you run:

$ python manage.py collectstatic --noinput

it will collect into /srv/apps/{NAME}/html/static/

This assumes you have STATIC_URL = '/static/' and MEDIA_URL = '/media/'. This way nginx will find /static/* when it looks in /srv/www/$HOST/.

You also need to make sure your DATABASES is configured correctly. Ensure you have psycopg2 in your requirements.txt, and set your DATABASES to something like:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'yourproject',
        'USER': 'www-data',
        'PASSWORD': 'thepasswordyouset',
        'HOST': 'localhost',
    }
}

Next, we'll pre-compress our CSS and JS. This saves the CPU and memory cost of compressing it on demand, and lets us spend more time now to compress it harder.

$ cd /srv/apps/$NAME/html/
$ find . -name "*.js" -exec gzip -9k {} ";"
$ find . -name "*.css" -exec gzip -9k {} ";"
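The two find commands can also be combined into a single pass; a sketch (the function name is mine; note that gzip's -k flag, which keeps the originals, needs gzip 1.6 or newer):

```shell
# gzip every .js and .css file under the given directory,
# keeping the uncompressed originals alongside the .gz files
precompress() {
    find "$1" \( -name '*.js' -o -name '*.css' \) -exec gzip -9kf {} +
}

# precompress /srv/apps/$NAME/html
```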

Then we add the list of hostnames we want for this site (substitute your own domains for example.com):

$ cd /srv/apps/$NAME/
$ echo example.com >> hostnames.txt
$ echo www.example.com >> hostnames.txt

Link in the host names for static/media content:

$ cd /srv/www
$ ln -s ../apps/$NAME/html example.com
$ ln -s ../apps/$NAME/html www.example.com

And... Launch!

$ cd /srv/apps/$NAME
$ ln -s /etc/uwsgi/vassal.ini uwsgi.ini

If this works fine, you can change your settings to DEBUG = False. When you do this, you will also need to set ALLOWED_HOSTS. Since both nginx and uWSGI are validating the hostname for us, we can safely set it as:

ALLOWED_HOSTS = ['*']

To reload your site after any changes, just:

$ touch -h uwsgi.ini

The Emperor will notice the change, and automatically restart the vassal.
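If you host several sites, the same trick extends to reloading every vassal at once; a sketch assuming the /srv/apps layout from this guide (the function name is mine):

```shell
# Touch every vassal's uwsgi.ini (without following symlinks, matching
# emperor-nofollow) so the Emperor restarts each one.
reload_vassals() {
    for ini in "$1"/*/uwsgi.ini; do
        # skip if the glob matched nothing
        [ -e "$ini" ] || [ -L "$ini" ] || continue
        touch -h "$ini"
        echo "reloaded: $ini"
    done
}

# reload_vassals /srv/apps
```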

How does it work?

So, let's consider different cases and how each part reacts.

  1. A request for static/media

    First, the client connects and asks for "/static/js/jquery.js", with the Host header specifying "example.com".

    Nginx will get this, and the try_files directive tells it to look under the root, which is /srv/www/example.com/. It finds /srv/www/example.com/static/js/jquery.js and returns it.

  2. A request for dynamic content.

    The browser requests "/accounts/login/" from the host "example.com".

    Nginx looks, but does not find /srv/www/example.com/accounts/login/, so it passes the request to uWSGI.

    The FastRouter looks at the Host, and passes the request to our $NAME vassal.

    The vassal then handles the request through Django, and returns the response.

  3. Anything else

    All other requests will get a 404.

In concluding...

I still find the need to repeat yourself when linking in the host names a bit tedious, and I do have a solution, not involving nginx, that doesn't suffer from this. However, it's not entirely suitable for this sort of deployment.

Finally, I'd like to thank the people who prompted me to write this page, and who devoted time to helping me test and refine it. Online I know them as mattmcc, shangxiao, and jessamynsmith, and especially ycon_, who tirelessly helped debug this document :)