Unit testing is a very important part of any software project. It helps you know that the new code you are deploying works, and isn’t going to blow up in your face. It also helps you feel good about changing large chunks of code without destroying everything you’ve done for the last 3 years.

Unit testing with django is as easy as pie. The documentation is very good, and you can learn a lot about more advanced testing methods from the python documentation. In this blog post, I aim to show a quick way to get up and running with testing your django application.

First, if you are just starting out, put a high emphasis on testing your application from day one. Otherwise you will end up with a pile of code that has never been tested, and you will find yourself writing tests for weeks just to get partial coverage of what you have already written. Starting off on the right foot is a much better approach, and you will find life much more enjoyable.

Let’s get started…

First of all, you need to define your models:

from django.db import models
from django.contrib.auth.models import User

class Post(models.Model):
    title = models.CharField(max_length=50)
    body = models.TextField()
    author = models.ForeignKey(User, related_name="news_post")
    date = models.DateTimeField()
    users_read = models.ManyToManyField(User, related_name="users", blank=True)

    def __str__(self):
        return self.title

What we have done here is create a news post model. Let’s test it!

import datetime

from django.contrib.auth.models import User
from django.test import TestCase
from django.test.client import Client

from news.models import Post  # adjust this import to wherever your Post model lives


class PostTestCase(TestCase):
    fixtures = ['test_data.json']

    def setUp(self):
        self.user = User.objects.create_user('newsposter',
                                             'newsposter@news.com', 'newspass')
        self.post = Post.objects.create(title="Test Post #1",
                                        body="Test Post #1 Body",
                                        author=self.user,
                                        date=datetime.datetime.now())
        self.c = Client()

    def test_post_creation(self):
        """
        Tests that we can create a Post
        """
        self.assertEqual(self.post.title, "Test Post #1")
        self.assertEqual(self.post.author, self.user)

    def test_user_can_read(self):
        """
        Tests that a user is allowed to read.
        """
        self.c.login(username='newsposter', password='newspass')
        response = self.c.get('/news/get_post/1/')
        self.assertEqual(response.status_code, 200)
        self.assertNotEqual(response.content, '{}')

One of the really cool things about testing with django is that it comes with a test client that allows you to make requests just like a real user would. As you can see in our test_user_can_read() method, we have used the client to make a GET request against a URL. You can make a POST request just as easily:

def test_i_read_this(self):
    """
    Tests a new user marking the story as read.
    """
    self.c.login(username='newsposter', password='newspass')
    response = self.c.post('/news/read/1/', {'add':True})
    self.assertEqual(response.status_code, 200)
    self.assertEqual(response.content, '{\n    "read": true\n}')

In the previous code sample, the client sends a POST request to /news/read/1/ with the {'add': True} data, which gets converted to form data and submitted in the POST body. The request returns JSON, which we match against what we expect it to return.
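As an aside, the exact string in that assertion is just what Python’s json module produces for {'read': True} with an indent of 4 (I’m assuming here that the view serializes its response with json.dumps). You can sanity-check the expected value without touching django at all:

```python
import json

# Reproduces the exact string the test above asserts against,
# assuming the view serializes with an indent of 4.
payload = json.dumps({'read': True}, indent=4)
print(repr(payload))  # '{\n    "read": true\n}'
```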

Here are some things to remember when you are writing your test cases:

  • setUp() gets called before every test method in your TestCase.

  • tearDown() gets called after every test method in your TestCase.

  • Test methods must start with “test” otherwise they will not be executed. It is safe to have other methods in your TestCase that do not begin with “test” if you want to abstract functionality for multiple test methods into a single function.

  • Django creates a test database for you, populates it, runs any south migrations (if you are using south), and then destroys it.

  • Do not expect that data that is available in one of your test methods will be available in another. Each test method starts with a blank data slate. If you need data instantiated before your tests are run, consider using the setUp() and tearDown() methods, or using fixtures. You can specify fixtures other than your initial_data fixtures by adding fixtures = ['test_data.json'] to your TestCase class.

The official Python unittest documentation has in-depth coverage of the assert methods available to you in different versions of Python.
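To give you a taste of what is available beyond assertEqual, here are a few of the common assert methods in a runnable (non-django) TestCase:

```python
import unittest

class AssertMethodsDemo(unittest.TestCase):
    def test_common_asserts(self):
        self.assertEqual(2 + 2, 4)
        self.assertNotEqual('spam', 'eggs')
        self.assertTrue(isinstance([], list))
        self.assertIn('read', {'read': True})  # Python 2.7+ / 3.x
        # assertRaises can take the callable and its args directly:
        self.assertRaises(KeyError, {}.__getitem__, 'missing')

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AssertMethodsDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```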

As you can see, testing with django is really really simple, but very powerful. In my next post, I will discuss how to test django with a MongoDB backend that does not use the ORM.

VIM has been my editor of choice for at least 15 years. I love how fast I can edit files, perform menial tasks, and wreak general havoc on any code project I am working on at any given moment. One of the things that I have missed about VIM from an IDE perspective has been code completion (a.k.a. “IntelliSense”). I have spent a lot of time on websites and man pages trying to figure out syntax and function names for several types of languages, and just recently discovered a long-included feature of VIM called omni completion, or Omnicomplete.

Since my life is mostly centered around django these days, I will discuss how I’ve benefited from omnicomplete and how I’ve set it up in my own environment.

First, since django is a web development framework, I want to make sure that I can get omnicompletion for HTML, Python, JavaScript and CSS. Omnicompletion works for almost any programming language that VIM has syntax highlighting support for, and these languages are no exception.

Here’s what omnicomplete looks like for CSS files, for example:

[Screenshot: omnicomplete completing CSS properties in vim]

Setting this up for your django project is as easy as pie. It is helpful to have all your django projects in one parent directory for the following setup. You can obviously customize this to your needs, but this is the way I’ve set it up in my environment.

Add the following to a script in a directory of your choosing (~/bin/vim_wrapper is where mine is):

#!/bin/bash
PROJECT=$(python -c "import os; print os.path.basename(os.getcwd())")
export PYTHONPATH="/path/to/your/projects/parent/directory/"
export DJANGO_SETTINGS_MODULE="$PROJECT.settings"
vim "$@"

Then add the following line to your ~/.bash_profile or equivalent:

alias vi="vim_wrapper"

Or, you can call your vim_wrapper script by hand. (vim_wrapper file_to_edit.py)

Next, add the following lines to your ~/.vimrc file:

filetype plugin on
autocmd FileType python set omnifunc=pythoncomplete#Complete
autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
autocmd FileType html set omnifunc=htmlcomplete#CompleteTags
autocmd FileType css set omnifunc=csscomplete#CompleteCSS

I also prefer to re-map the default key binding (<C-x><C-o>) to <C-space>, so I accomplish this by also adding the following line to my ~/.vimrc file: `inoremap <C-space> <C-x><C-o>` (note that terminal vim often sees Ctrl-Space as <Nul>, so you may need `inoremap <Nul> <C-x><C-o>` instead).

I also found this trick today while searching around…

What this function does: if no completion could happen at the cursor position, it inserts a tab. Otherwise it checks to see if there is an omnifunction available and, if so, uses it. Failing that, it falls back to dictionary completion if a dictionary is defined, and finally resorts to simple known-word completion. In general, hitting the Tab key will just do the right thing for you in any given situation.

Add the following to your ~/.vimrc and you should be good to go. It works like a charm for me.

function! SuperCleverTab()
    if strpart(getline('.'), 0, col('.') - 1) =~ '^\s*$'
        return "\<Tab>"
    else
        if &omnifunc != ''
            return "\<C-x>\<C-o>"
        elseif &dictionary != ''
            return "\<C-x>\<C-k>"
        else
            return "\<C-n>"
        endif
    endif
endfunction

inoremap <Tab> <C-R>=SuperCleverTab()<cr>

If you find yourself writing code in other languages, the following lines in your vimrc should be adequate:

filetype plugin on
set ofu=syntaxcomplete#Complete

You can now test that your installation works by changing directories to one of your django projects, firing up vim and running the following command: :python from django import db

If you do not get a horrible error, you are good to go!

You can now access code completion by the following methods:

  • <C-p> - Shows a list of all local symbols. This is good if you do not have a tags file associated with the file you are editing.

  • <C-space> - Shows a list of all available symbols. You need to set up a tags file, which is outside the scope of this blog post.

  • <C-x><C-o> - The original keystroke sequence that we re-mapped to <C-space>.

  • <Tab> - The all-powerful tab!

I hope you enjoy your new-found power with vim as much as I do!

I have been using django for web development for almost a year now, and I just recently started using South to do database migrations. To be fair, most of the work that I have been doing with databases has centered around MongoDB and schema-less document stores instead of a traditional RDBMS. Since Django does not come with any database migration tools, my standard approach was to make sure that my models are completely thought out before running the manage.py syncdb command. The lack of a good database migration tool was one of the things that originally had turned me off to django.

Enter South. South lets you manage your database in a way very similar to how Ruby on Rails works.

Converting a project to a South-managed project is very easy:

  1. Ensure that your database and models are completely synced up. (i.e. your models are not ahead of your database or vice-versa)

  2. Install South by running [sudo] pip install south

  3. Add South to your INSTALLED_APPS list in the settings.py for your django project.

  4. Run ./manage.py syncdb in your project root directory to add the South database tables to your database.

  5. If you have an existing application that you would like to convert to a South-managed application, run the following command: ./manage.py convert_to_south YOUR_APP_NAME. If not, go to the next step!

  6. Now you are ready to go! You can change one of your models and then proceed to the next step.

  7. Run the following command to get South to create an automatic migration for you: ./manage.py schemamigration YOUR_APP_NAME --auto

  8. Now you can apply your newly created migration to your database:./manage.py migrate YOUR_APP_NAME

  9. Congratulations, you have performed your first database migration using South!

South lets you apply up to or back to any migration point by running a command like ./manage.py migrate YOUR_APP_NAME 0001 (that command would take you back to your initial migration point). You can get a list of all your migrations and a description of each one by running ./manage.py migrate YOUR_APP_NAME --list. This lists all of the migrations you have available and denotes with a (*) which ones have been applied.

South is great for working in a team. All migrations are stored in YOUR_APP_NAME/migrations, so you can simply add these to your VCS and all of your team members will get all of your migrations. If there is a conflict between migrations that you and a team member have been working on, South will detect it and let you merge the conflicts.

All in all, I am really loving South. It makes working with an RDBMS and Django much more pleasant!

Ever have this problem? You just rebuilt a machine, and when you go to SSH into it, you get the following message:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

Many people just go edit their ~/.ssh/known_hosts file and carry on. But there is a faster/better way!

OpenSSH comes with a command called ssh-keygen that allows you to generate and manage all your keys, including your ssh fingerprints.

Simple usage for this would be:

ssh-keygen -R HOSTNAME
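Here is a self-contained demonstration you can run safely, using a scratch known_hosts file and a throwaway key rather than your real ~/.ssh/known_hosts:

```shell
#!/bin/sh
# Build a scratch known_hosts with an entry for "oldhost", then remove it.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key"
printf 'oldhost %s\n' "$(cut -d' ' -f1,2 "$tmp/key.pub")" > "$tmp/known_hosts"
# -f points ssh-keygen at our scratch file; omit it to edit ~/.ssh/known_hosts
ssh-keygen -R oldhost -f "$tmp/known_hosts"
grep -q '^oldhost' "$tmp/known_hosts" || echo "oldhost entry removed"
```

ssh-keygen also leaves the original file behind as known_hosts.old, so nothing is lost if you remove the wrong entry.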

One of the big problems with hosting your own database solution is that you have to do backups for it on a regular basis. Not only do you need to do backups for it, but you need to also keep backups offsite. Luckily, Amazon S3 allows a cheap and easy solution for your offsite backups.

I found a shell script solution for handling MongoDB backups, but it only does local backups. It keeps a nice history of recent backups, and rotates off the oldest ones when the threshold for age is reached. I modified the code to call a Python script that then synchronizes the newly created backup file to S3. I haven’t wired up any purging functionality yet, and I don’t know if I am going to. S3 storage is so cheap that it really doesn’t matter much. A complete solution would, of course, keep your local files and your remote off-site backups in S3 in sync, but there is also a case to be made for keeping a rich history of backups in the “cloud” so as to be able to revert to any point in history if necessary.

The script that does the magic to synchronize and purge old backups is written in Python, and uses the boto library to quickly do the work.

from boto.s3.connection import S3Connection
from boto.s3.key import Key

ACCESS_KEY = 'YOUR_ACCESS_KEY'
SECRET = 'YOUR_SECRET_KEY'
BUCKET_NAME = 'YOUR_BACKUPS_BUCKET'  # note that you need to create this bucket first

def save_file_in_s3(filename):
    conn = S3Connection(ACCESS_KEY, SECRET)
    bucket = conn.get_bucket(BUCKET_NAME)
    k = Key(bucket)
    k.key = filename
    k.set_contents_from_filename(filename)

def get_file_from_s3(filename):
    conn = S3Connection(ACCESS_KEY, SECRET)
    bucket = conn.get_bucket(BUCKET_NAME)
    k = Key(bucket)
    k.key = filename
    k.get_contents_to_filename(filename)

def list_backup_in_s3():
    conn = S3Connection(ACCESS_KEY, SECRET)
    bucket = conn.get_bucket(BUCKET_NAME)
    for i, key in enumerate(bucket.get_all_keys()):
        print "[%s] %s" % (i, key.name)

def delete_all_backups():
    # careful: this removes every key in the bucket
    conn = S3Connection(ACCESS_KEY, SECRET)
    bucket = conn.get_bucket(BUCKET_NAME)
    for i, key in enumerate(bucket.get_all_keys()):
        print "deleting %s" % (key.name)
        key.delete()

if __name__ == '__main__':
    import sys
    usage = 'Usage: %s <set|get|list|delete> [filename]' % (sys.argv[0],)
    if len(sys.argv) < 2:
        print usage
    elif sys.argv[1] == 'set' and len(sys.argv) > 2:
        save_file_in_s3(sys.argv[2])
    elif sys.argv[1] == 'get' and len(sys.argv) > 2:
        get_file_from_s3(sys.argv[2])
    elif sys.argv[1] == 'list':
        list_backup_in_s3()
    elif sys.argv[1] == 'delete':
        delete_all_backups()
    else:
        print usage

There is obviously a lot more work to be done on this script, but it’s a start.
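One piece of that remaining work is the purging mentioned above. The selection logic is just a sort and a slice, assuming your key names sort chronologically (a date-stamped naming scheme is my own assumption here, not something AutoMongoBackup or the script enforces):

```python
def keys_to_purge(key_names, keep=7):
    """Return the backup keys older than the newest `keep`,
    assuming key names sort chronologically (e.g. date-stamped)."""
    ordered = sorted(key_names)
    if len(ordered) <= keep:
        return []
    return ordered[:-keep]

# Ten nightly backups; keeping 7 means the oldest 3 get purged.
backups = ['mongodb-2011-01-%02d.tar.gz' % d for d in range(10, 20)]
print(keys_to_purge(backups, keep=7))
```

The returned names could then be fed straight to key.delete() calls via boto.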

The appropriate setup for using this script and the AutoMongoBackup utility is to create a slave MongoDB node that receives synchronizations from the master. If you can handle having your Mongo instance locked for reads/writes while a backup is performed (i.e. you have a small database that backs up quickly) then you more than likely do not need to do the slave method.

Anyway, hope this helps! I’d love to hear other ideas about how else this can be done.