view legacy/management/commands/import_old_podcasts.py @ 943:cf9918328c64
Haystack tweaks for Django 1.7.7.
I had to upgrade to Haystack 2.3.1 to get it to work with Django
1.7.7. I also had to update the Xapian backend. But I ran into
problems.
On my laptop anyway (Ubuntu 14.04), xapian gets mad when search terms
are greater than 245 chars (or something) when indexing. So I created
a custom field that simply omits terms greater than 64 chars and
used this field everywhere I previously used a CharField.
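The term-dropping behavior described above can be sketched independently of
Haystack. The 64-char limit comes from the description; the function and the
commented field subclass are hypothetical, not the actual code from this
changeset:

```python
MAX_TERM_LENGTH = 64  # terms longer than this are dropped before indexing


def drop_long_terms(text, max_length=MAX_TERM_LENGTH):
    """Return text with any whitespace-delimited term longer than
    max_length removed, to stay under Xapian's term length limit."""
    return ' '.join(
        term for term in text.split() if len(term) <= max_length)


# In Haystack this filter would live in a CharField subclass, roughly:
#
# class CharField(indexes.CharField):
#     def prepare(self, obj):
#         value = super(CharField, self).prepare(obj)
#         return drop_long_terms(value)
```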
Second, the custom search form was now broken. Something changed in
the Xapian backend and exact searches stopped working. Fortunately the
auto_query (which I was using originally and which broke during an
earlier upgrade) started working again. So I cut the search form back
over to doing an auto_query. I kept the form the same (3 fields)
because I think it's better that way.
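Since auto_query parses a Google-like query string (quoted phrases, `-`
prefixed exclusions), the three form fields can be collapsed into one string
before handing it off. A minimal sketch; the field names here are assumptions,
not the actual form's fields:

```python
def combine_fields(all_words='', exact_phrase='', exclude_words=''):
    """Combine three search-form fields (hypothetical names) into a
    single auto_query-style string: plain keywords, a quoted exact
    phrase, and -prefixed exclusions."""
    parts = []
    if all_words:
        parts.append(all_words.strip())
    if exact_phrase:
        parts.append('"%s"' % exact_phrase.strip())
    for word in exclude_words.split():
        parts.append('-' + word)
    return ' '.join(parts)
```

The combined string would then be passed to something like
`SearchQuerySet().auto_query(combine_fields(...))`.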
author   | Brian Neal <bgneal@gmail.com>
date     | Wed, 13 May 2015 20:25:07 -0500
parents  | ee87ea74d46b
children |
""" import_old_podcasts.py - For importing podcasts from SG101 1.0 as csv files. """ from __future__ import with_statement import csv import datetime from django.core.management.base import LabelCommand, CommandError from podcast.models import Channel, Item class Command(LabelCommand): args = '<filename filename ...>' help = 'Imports podcasts from the old database in CSV format' def handle_label(self, filename, **options): """ Process each line in the CSV file given by filename by creating a new weblink object and saving it to the database. """ try: self.channel = Channel.objects.get(pk=1) except Channel.DoesNotExist: raise CommandError("Need a default channel with pk=1") try: with open(filename, "rb") as f: self.reader = csv.DictReader(f) try: for row in self.reader: self.process_row(row) except csv.Error, e: raise CommandError("CSV error: %s %s %s" % ( filename, self.reader.line_num, e)) except IOError: raise CommandError("Could not open file: %s" % filename) def process_row(self, row): """ Process one row from the CSV file: create an object for the row and save it in the database. """ item = Item(channel=self.channel, title=row['title'], author=row['author'], subtitle=row['subtitle'], summary=row['summary'], enclosure_url=row['enclosure_url'], alt_enclosure_url='', enclosure_length=int(row['enclosure_length']), enclosure_type=row['enclosure_type'], guid=row['guid'], pubdate=datetime.datetime.strptime(row['pubdate'], "%Y-%m-%d %H:%M:%S"), duration=row['duration'], keywords=row['keywords'], explicit=row['explicit']) item.save()