# HG changeset patch # User Brian Neal # Date 1391139903 21600 # Node ID 7ce6393e6d30120059dab98b5f20bf84dcca868a # Parent c3115da3ff732e1bc35aa68f014074c85fd655d7 Adding converted blog posts from old blog. diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/000-blog-reboot.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/000-blog-reboot.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,65 @@ +Blog reboot with Blogofile +########################## + +:date: 2011-04-17 14:10 +:tags: Blogging, Blogofile +:slug: blog-reboot-with-blogofile +:author: Brian Neal + +Welcome to my new blog. I've been meaning to start blogging again for some time, especially since +the new version of SurfGuitar101.com_ went live almost two months ago. But the idea of dealing with +WordPress was putting me off. Don't get me wrong, WordPress really is a nice general purpose +blogging platform, but it didn't really suit me anymore. + +I considered creating a new blog in Django_, but I really want to spend all my time and energy on +improving SurfGuitar101 and not tweaking my blog. I started thinking about doing something +simpler. + +Almost by accident, I discovered Blogofile_ by seeing it mentioned in my Twitter feed. Blogofile is +a static blog generator written in Python. After playing with it for a while, I decided to use it +for a blog reboot. It is simple to use, Pythonic, and very configurable. The advantages for me to go +with a static blog are: + +1. No more dealing with WordPress and plugin updates. To be fair, WordPress is very easy to update + these days. Plugins are still a pain, and are often needed to display source code. +2. I can write my blog posts in Markdown_ or reStructuredText_ using my `favorite editor`_ instead + of some lame Javascript editor. Formatting source code is dead simple now. +3. All of my blog content is under version control. +4. Easier to work offline. +5. Easier to deploy. Very little (if any) server configuration. +6. 
I can use version control with a post-commit hook to deploy the site. + +Disadvantages: + +1. Not as "dynamic". For my blog, this isn't really a problem. Comments can be handled by a service + like Disqus_. +2. Regenerating the entire site can take time. This is only an issue if you have a huge blog with + years of content. A fresh blog takes a fraction of a second to build, and I don't anticipate + this affecting me for some time, if ever. I suspect Blogofile will be improved to include caching + and smarter rebuilds in the future. + +It should be noted that Blogofile seems to require Python 2.6 or later. My production server is +still running 2.5, and I can't easily change this for a while. This really only means I can't use +Mercurial with a *changegroup* hook to automatically deploy the site. This should only be a temporary +issue; I hope to upgrade the server in the future. + +Blogofile comes with some scripts for importing WordPress blogs. Looking over my old posts, some of +them make me cringe. I think I'll save importing them for a rainy day. + +The bottom line is, this style of blogging suits me as a programmer. I get to use all the same +tools I use to write code: a good text editor, the same markup I use for documentation, and version +control. Deployment is a snap, and I don't have a database or complicated server setup to maintain. +Hopefully this means I will blog more. + +Finally, I'd like to give a shout-out to my friend `Trevor Oke`_ who just switched to a static blog +for many of the same reasons. + + +.. _SurfGuitar101.com: http://surfguitar101.com +.. _Django: http://djangoproject.com +.. _Blogofile: http://blogofile.com +.. _Markdown: http://daringfireball.net/projects/markdown/ +.. _reStructuredText: http://docutils.sourceforge.net/rst.html +.. _favorite editor: http://www.vim.org +.. _Disqus: http://disqus.com/ +.. 
_Trevor Oke: http://trevoroke.com/2011/04/12/converting-to-jekyll.html diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/001-blogofile-rst.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/001-blogofile-rst.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,74 @@ +Blogofile, reStructuredText, and Pygments +######################################### + +:date: 2011-04-17 19:15 +:tags: Blogofile, Pygments, reStructuredText +:slug: blogofile-restructuredtext-and-pygments +:author: Brian Neal + +Blogofile_ has support out-of-the-box for reStructuredText_ and Pygments_. Blogofile's +``syntax_highlight.py`` filter wants you to mark your code blocks with a token such as +``$$code(lang=python)``. I wanted to use the method I am more familiar with, by configuring +reStructuredText with a `custom directive`_. Luckily this is very easy. Here is how I did it. + +First of all, I checked what version of Pygments I had since I used Ubuntu's package +manager to install it. I then visited `Pygments on BitBucket`_, and switched to the tag that matched +my version. I then drilled into the ``external`` directory. I then saved the ``rst-directive.py`` +file to my blog's local repository under the name ``_rst_directive.py``. I named it with a leading +underscore so that Blogofile would ignore it. If this bothers you, you could also add it to +Blogofile's ``site.file_ignore_patterns`` setting. + +Next, I tweaked the settings in ``_rst_directive.py`` by un-commenting the ``linenos`` variant. + +All we have to do now is to get Blogofile to import this module. This can be accomplished by making +use of the `pre_build() hook`_ in your ``_config.py`` file. This is a convenient place to hang +custom code that will run before your blog is built. I added the following code to my +``_config.py`` module + +.. 
sourcecode:: python + + def pre_build(): + # Register the Pygments Docutils directive + import _rst_directive + +This allows me to embed code in my ``.rst`` files with the ``sourcecode`` directive. For example, +here is what I typed to create the source code snippet above:: + + .. sourcecode:: python + + def pre_build(): + # Register the Pygments Docutils directive + import _rst_directive + +Of course to get it to look nice, we'll need some CSS. I used this Pygments command to generate +a ``.css`` file for the blog. + +.. sourcecode:: bash + + $ pygmentize -f html -S monokai -a .highlight > pygments.css + +I saved ``pygments.css`` in my ``css`` directory and updated my site template to link it in. +Blogofile will copy this file into my ``_site`` directory when I build the blog. + +Here is what I added to my blog's main ``.css`` file to style the code snippets. The important thing +for me was to add an ``overflow: auto;`` setting. This will ensure that a scrollbar will +appear on long lines instead of the code being truncated. + +.. sourcecode:: css + + .highlight { + width: 96%; + padding: 0.5em 0.5em; + border: 1px solid #00ff00; + margin: 1.0em auto; + overflow: auto; + } + +That's it! + +.. _Blogofile: http://blogofile.com +.. _reStructuredText: http://docutils.sourceforge.net/rst.html +.. _Pygments: http://pygments.org/ +.. _custom directive: http://pygments.org/docs/rstdirective/ +.. _Pygments on BitBucket: https://bitbucket.org/birkenfeld/pygments-main +.. 
_pre_build() hook: http://blogofile.com/documentation/config_file.html#pre-build diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/002-redis-whos-online.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/002-redis-whos-online.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,269 @@ +A better "Who's Online" with Redis & Python +########################################### + +:date: 2011-04-25 12:00 +:tags: Redis, Python +:slug: a-better-who-s-online-with-redis-python +:author: Brian Neal + +**Updated on December 17, 2011:** I found a better solution. Head on over to +the `new post`_ to check it out. + + +Who's What? +----------- + +My website, like many others, has a "who's online" feature. It displays the +names of authenticated users that have been seen over the course of the last ten +minutes or so. It may seem a minor feature at first, but I find it really does a lot to +"humanize" the site and make it seem more like a community gathering place. + +My first implementation of this feature used the MySQL database to update a +per-user timestamp whenever a request from an authenticated user arrived. +Actually, this seemed excessive to me, so I used a strategy involving an "online" +cookie that has a five minute expiration time. Whenever I see an authenticated +user without the online cookie I update their timestamp and then hand them back +a cookie that will expire in five minutes. In this way I don't have to hit the +database on every single request. + +This approach worked fine but it has some aspects that didn't sit right with me: + +* It seems like overkill to use the database to store temporary, trivial information like + this. It doesn't feel like a good use of a full-featured relational database + management system (RDBMS). +* I am writing to the database during a GET request. Ideally, all GET requests should + be idempotent. 
Of course if this is strictly followed, it would be + impossible to create a "who's online" feature in the first place. You'd have + to require the user to POST data periodically. However, writing to a RDBMS + during a GET request is something I feel guilty about and try to avoid when I + can. + + +Redis +----- + +Enter Redis_. I discovered Redis recently, and it is pure, white-hot +awesomeness. What is Redis? It's one of those projects that gets slapped with +the "NoSQL" label. And while I'm still trying to figure that buzzword out, Redis makes +sense to me when described as a lightweight data structure server. +Memcached_ can store key-value pairs very fast, where the value is always a string. +Redis goes one step further and stores not only strings, but data +structures like lists, sets, and hashes. For a great overview of what Redis is +and what you can do with it, check out `Simon Willison's Redis tutorial`_. + +Another reason why I like Redis is that it is easy to install and deploy. +It is straight C code without any dependencies. Thus you can build it from +source just about anywhere. Your Linux distro may have a package for it, but it +is just as easy to grab the latest tarball and build it yourself. + +I've really come to appreciate Redis for being such a small and lightweight +tool. At the same time, it is very powerful and effective for filling those +tasks that a traditional RDBMS is not good at. + +For working with Redis in Python, you'll need to grab Andy McCurdy's redis-py_ +client library. It can be installed with a simple + +.. sourcecode:: sh + + $ sudo pip install redis + + +Who's Online with Redis +----------------------- + +Now that we are going to use Redis, how do we implement a "who's online" +feature? The first step is to get familiar with the `Redis API`_. + +One approach to the "who's online" problem is to add a user name to a set +whenever we see a request from that user. 
That's fine but how do we know when +they have stopped browsing the site? We have to periodically clean out the +set in order to time people out. A cron job, for example, could delete the +set every five minutes. + +A small problem with deleting the set is that people will abruptly disappear +from the site every five minutes. In order to give more gradual behavior we +could utilize two sets, a "current" set and an "old" set. As users are seen, we +add their names to the current set. Every five minutes or so (season to taste), +we simply overwrite the old set with the contents of the current set, then clear +out the current set. At any given time, the set of who's online is the union +of these two sets. + +This approach doesn't give exact results of course, but it is perfectly fine for my site. + +Looking over the Redis API, we see that we'll be making use of the following +commands: + +* SADD_ for adding members to the current set. +* RENAME_ for copying the current set to the old, as well as destroying the + current set all in one step. +* SUNION_ for performing a union on the current and old sets to produce the set + of who's online. + +And that's it! With these three primitives we have everything we need. This is +because of the following useful Redis behaviors: + +* Performing a ``SADD`` against a set that doesn't exist creates the set and is + not an error. +* Performing a ``SUNION`` with sets that don't exist is fine; they are simply + treated as empty sets. + +The one caveat involves the ``RENAME`` command. If the key you wish to rename +does not exist, the Python Redis client treats this as an error and an exception +is thrown. + +Experimenting with algorithms and ideas is quite easy with Redis. You can either +use the Python Redis client in a Python interactive interpreter shell, or you can +use the command-line client that comes with Redis. Either way you can quickly +try out commands and refine your approach. 
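Before wiring this up to Redis, it may help to see the two-set rotation in miniature. The sketch below models the algorithm with plain Python sets standing in for the Redis keys; the class and method names are my own invention for illustration, not code from the site:

```python
# Toy model of the two-set "who's online" scheme. Plain Python sets
# stand in for the Redis keys; names here are illustrative only.

class WhosOnline(object):
    def __init__(self):
        self.current = set()   # users seen since the last rotation
        self.old = set()       # users seen in the previous interval

    def report_user(self, username):
        # SADD: add the user to the current set
        self.current.add(username)

    def tick(self):
        # RENAME: the current set replaces the old set, then starts empty
        self.old = self.current
        self.current = set()

    def users_online(self):
        # SUNION: who's online is the union of both sets
        return self.current | self.old

online = WhosOnline()
online.report_user('alice')
online.tick()                  # rotation: alice ages into the old set
online.report_user('bob')
print(sorted(online.users_online()))   # ['alice', 'bob']
online.tick()                  # alice was not seen again; she drops off
print(sorted(online.users_online()))   # ['bob']
```

A user stays visible for at most two rotation intervals after their last request, which is exactly the gradual timeout behavior described above.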
+ + +Implementation +-------------- + +My website is powered by Django_, but I am not going to show any Django specific +code here. Instead I'll show just the pure Python parts, and hopefully you can +adapt it to whatever framework, if any, you are using. + +I created a Python module to hold this functionality: +``whos_online.py``. Throughout this module I use a lot of exception handling, +mainly because if the Redis server has crashed (or if I forgot to start it, say +in development) I don't want my website to be unusable. If Redis is unavailable, +I simply log an error and drive on. Note that in my limited experience Redis is +very stable and has not crashed on me once, but it is good to be defensive. + +The first important function used throughout this module is a function to obtain +a connection to the Redis server: + +.. sourcecode:: python + + import logging + import redis + + logger = logging.getLogger(__name__) + + def _get_connection(): + """ + Create and return a Redis connection. Returns None on failure. + """ + try: + conn = redis.Redis(host=HOST, port=PORT, db=DB) + return conn + except redis.RedisError, e: + logger.error(e) + + return None + +The ``HOST``, ``PORT``, and ``DB`` constants can come from a +configuration file or they could be module-level constants. In my case they are set in my +Django ``settings.py`` file. Once we have this connection object, we are free to +use the Redis API exposed via the Python Redis client. + +To update the current set whenever we see a user, I call this function: + +.. sourcecode:: python + + # Redis key names: + USER_CURRENT_KEY = "wo_user_current" + USER_OLD_KEY = "wo_user_old" + + def report_user(username): + """ + Call this function when a user has been seen. The username will be added to + the current set. 
+ """ + conn = _get_connection() + if conn: + try: + conn.sadd(USER_CURRENT_KEY, username) + except redis.RedisError, e: + logger.error(e) + +If you are using Django, a good spot to call this function is from a piece +of `custom middleware`_. I kept my "5 minute cookie" algorithm to avoid doing this on +every request although it is probably unnecessary on my low traffic site. + +Periodically you need to "age out" the sets by destroying the old set, moving +the current set to the old set, and then emptying the current set. + +.. sourcecode:: python + + def tick(): + """ + Call this function to "age out" the old set by renaming the current set + to the old. + """ + conn = _get_connection() + if conn: + # An exception may be raised if the current key doesn't exist; if that + # happens we have to delete the old set because no one is online. + try: + conn.rename(USER_CURRENT_KEY, USER_OLD_KEY) + except redis.ResponseError: + try: + del conn[old] + except redis.RedisError, e: + logger.error(e) + except redis.RedisError, e: + logger.error(e) + +As mentioned previously, if no one is on your site, eventually your current set +will cease to exist as it is renamed and not populated further. If you attempt to +rename a non-existent key, the Python Redis client raises a ``ResponseError`` exception. +If this occurs we just manually delete the old set. In a bit of Pythonic cleverness, +the Python Redis client supports the ``del`` syntax to support this operation. + +The ``tick()`` function can be called periodically by a cron job, for example. If you are using Django, +you could create a `custom management command`_ that calls ``tick()`` and schedule cron +to execute it. Alternatively, you could use something like Celery_ to schedule a +job to do the same. (As an aside, Redis can be used as a back-end for Celery, something that I hope +to explore in the near future). 
+
+Finally, you need a way to obtain the current "who's online" set, which again is
+a union of the current and old sets.
+
+.. sourcecode:: python
+
+    def get_users_online():
+        """
+        Returns a set of user names which is the union of the current and old
+        sets.
+        """
+        conn = _get_connection()
+        if conn:
+            try:
+                # Note that keys that do not exist are considered empty sets
+                return conn.sunion([USER_CURRENT_KEY, USER_OLD_KEY])
+            except redis.RedisError, e:
+                logger.error(e)
+
+        return set()
+
+In my Django application, I call this function from a `custom inclusion template tag`_.
+
+
+Conclusion
+----------
+
+I hope this blog post gives you some idea of the usefulness of Redis. I expanded
+on this example to also keep track of non-authenticated "guest" users. I simply added
+another pair of sets to track IP addresses.
+
+If you are like me, you are probably already thinking about shifting some functions that you
+awkwardly jammed onto a traditional database to Redis and other "NoSQL"
+technologies.
+
+.. _Redis: http://redis.io/
+.. _Memcached: http://memcached.org/
+.. _Simon Willison's Redis tutorial: http://simonwillison.net/static/2010/redis-tutorial/
+.. _redis-py: https://github.com/andymccurdy/redis-py
+.. _Django: http://djangoproject.com
+.. _Redis API: http://redis.io/commands
+.. _SADD: http://redis.io/commands/sadd
+.. _RENAME: http://redis.io/commands/rename
+.. _SUNION: http://redis.io/commands/sunion
+.. _custom middleware: http://docs.djangoproject.com/en/1.3/topics/http/middleware/
+.. _custom management command: http://docs.djangoproject.com/en/1.3/howto/custom-management-commands/
+.. _Celery: http://celeryproject.org/
+.. _custom inclusion template tag: http://docs.djangoproject.com/en/1.3/howto/custom-template-tags/#inclusion-tags
+.. 
_new post: http://deathofagremmie.com/2011/12/17/who-s-online-with-redis-python-a-slight-return/ diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/003-nl2br-markdown-ext.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/003-nl2br-markdown-ext.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,100 @@ +A newline-to-break Python-Markdown extension +############################################ + +:date: 2011-05-09 22:40 +:tags: Markdown, Python +:slug: a-newline-to-break-python-markdown-extension +:author: Brian Neal + +When I launched a new version of my website, I decided the new forums would use +Markdown_ instead of BBCode_ for the markup. This decision was mainly a personal +one for aesthetic reasons. I felt that Markdown was more natural to write compared +to the clunky square brackets of BBCode. + +My new site is coded in Python_ using the Django_ framework. For a Markdown implementation +I chose `Python-Markdown`_. + +My mainly non-technical users seemed largely ambivalent to the change from +BBCode to Markdown. This was probably because I gave them a nice Javascript editor +(`MarkItUp!`_) which inserted the correct markup for them. + +However, shortly after launch, one particular feature of Markdown really riled up +some users: the default line break behavior. In strict Markdown, to create a new +paragraph, you must insert a blank line between paragraphs. Hard returns (newlines) +are simply ignored, just like they are in HTML. You can, however, force a break by +ending a line with two blank spaces. This isn't very intuitive, unlike the rest of +Markdown. + +Now I agree the default behavior is useful if you are creating an online document, like a blog post. +However, non-technical users really didn't understand this behavior at all in the context +of a forum post. For example, many of my users post radio-show playlists, formatted with +one song per line. 
When such a playlist was pasted into a forum post, Markdown made it
+all one giant run-together paragraph. This did not please my users. Arguably, they should
+have used a Markdown list. But it became clear that teaching people the new syntax wasn't
+going to work, especially when it used to work just fine in BBCode and they had created
+their playlists in the same way for several years.
+
+It turns out I am not alone in my observations (or on the receiving end of user wrath). Other,
+much larger sites, like StackOverflow_ and GitHub_, have altered their Markdown parsers
+to treat newlines as hard breaks. How can this be done with Python-Markdown?
+
+Luckily, this is really easy. Python-Markdown was designed with user customization
+in mind by offering an extension facility. The `extension documentation`_ is good,
+and you can find extension-writing help on the friendly `mailing list`_.
+
+Here is a simple extension for Python-Markdown that turns newlines into HTML ``<br />``
tags. + +.. sourcecode:: python + + """ + A python-markdown extension to treat newlines as hard breaks; like + StackOverflow and GitHub flavored Markdown do. + + """ + import markdown + + + BR_RE = r'\n' + + class Nl2BrExtension(markdown.Extension): + + def extendMarkdown(self, md, md_globals): + br_tag = markdown.inlinepatterns.SubstituteTagPattern(BR_RE, 'br') + md.inlinePatterns.add('nl', br_tag, '_end') + + + def makeExtension(configs=None): + return Nl2BrExtension(configs) + +I saved this code in a file called ``mdx_nl2br.py`` and put it on my ``PYTHONPATH``. You can then use +it in a Django template like this: + +.. sourcecode:: django + + {{ value|markdown:"nl2br" }} + +To use the extension in Python code, something like this should do the trick: + +.. sourcecode:: python + + import markdown + md = markdown.Markdown(safe_mode=True, extensions=['nl2br']) + converted_text = md.convert(text) + +**Update (June 21, 2011):** This extension is now being distributed with +Python-Markdown! See `issue 13 on github`_ for the details. Thanks to Waylan +Limberg for the help in creating the extension and for including it with +Python-Markdown. + + +.. _Markdown: http://daringfireball.net/projects/markdown/ +.. _BBCode: http://en.wikipedia.org/wiki/BBCode +.. _Python: http://python.org +.. _Django: http://djangoproject.com +.. _MarkItUp!: http://markitup.jaysalvat.com/home/ +.. _StackOverflow: http://blog.stackoverflow.com/2009/10/markdown-one-year-later/ +.. _GitHub: http://github.github.com/github-flavored-markdown/ +.. _Python-Markdown: http://www.freewisdom.org/projects/python-markdown/ +.. _extension documentation: http://www.freewisdom.org/projects/python-markdown/Writing_Extensions +.. _mailing list: http://lists.sourceforge.net/lists/listinfo/python-markdown-discuss +.. 
_issue 13 on github: https://github.com/waylan/Python-Markdown/issues/13 diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/004-fructose-contrib.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/004-fructose-contrib.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,42 @@ +I contributed to Fructose +######################### + +:date: 2011-05-31 21:40 +:tags: Fructose, C++, Python, UnitTesting +:slug: i-contributed-to-fructose +:author: Brian Neal + +At work we started using CxxTest_ as our unit testing framework. We like it because +it is very light-weight and easy to use. We've gotten a tremendous amount of benefit +from using a unit testing framework, much more than I had ever imagined. We now have +almost 700 tests, and I cannot imagine going back to the days of no unit tests or ad-hoc +testing. It is incredibly reassuring to see all the tests pass after making a significant +change to the code base. There is no doubt in my mind that our software-hardware integration +phases have gone much smoother thanks to our unit tests. + +Sadly it seems CxxTest is no longer actively supported. However this is not of great +concern to us. The code is so small we are fairly confident we could tweak it if necessary. + +I recently discovered Fructose_, a unit testing framework written by Andrew Marlow. It too +has similar goals of being small and simple to use. One thing I noticed that CxxTest had that +Fructose did not was a Python code generator that took care of creating the ``main()`` function +and registering all the tests with the framework. Since C++ has very little introspection +capabilities, C++ unit testing frameworks have historically laid the burden of registering +tests on the programmer. Some use macros to help with this chore, but littering your code +with ugly macros makes tests annoying to write. And if anything, you want your tests to be +easy to write so your colleagues will write lots of tests. 
CxxTest approached this problem by +providing first a Perl script, then later a Python script, to automate this part of the process. + +I decided it would be interesting to see if I could provide such a script for Fructose. After +a Saturday of hacking, I'm happy to say Andrew has accepted the script and it now ships with +Fructose version 1.1.0. I hope to improve the script to not only run all the tests but to also +print out a summary of the number of tests that passed and failed at the end, much like CxxTest does. +This will require some changes to the C++ code. Also on my wish list is to make the script +extensible, so that others can easily change the output and code generation to suit their needs. + +I've hosted the code for the Python script, which I call ``fructose_gen.py`` on Bitbucket_. +Feedback is greatly appreciated. + +.. _CxxTest: http://cxxtest.tigris.org/ +.. _Fructose: http://fructose.sourceforge.net/ +.. _Bitbucket: https://bitbucket.org/bgneal/fructose_gen/src diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/005-django-unicode-error-uploads.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/005-django-unicode-error-uploads.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,53 @@ +Django Uploads and UnicodeEncodeError +##################################### + +:date: 2011-06-04 20:00 +:tags: Django, Python, Linux, Unicode +:slug: django-uploads-and-unicodeencodeerror +:author: Brian Neal + +Something strange happened that I wish to document in case it helps others. I +had to reboot my Ubuntu server while troubleshooting a disk problem. After the +reboot, I began receiving internal server errors whenever someone tried to view +a certain forum thread on my Django_ powered website. After some detective work, +I determined it was because a user that had posted in the thread had an avatar +image whose filename contained non-ASCII characters. 
The image file had been
+there for months, and I still cannot explain why it just suddenly started
+happening.
+
+The traceback I was getting ended with something like this:
+
+.. sourcecode:: python
+
+    File "/django/core/files/storage.py", line 159, in _open
+        return File(open(self.path(name), mode))
+
+    UnicodeEncodeError: 'ascii' codec can't encode characters in position 72-79: ordinal not in range(128)
+
+So it appeared that the ``open()`` call was triggering the error. This led me on
+a twisty Google search with many dead ends. Eventually I found a suitable
+explanation. Apparently, Linux filesystems don't enforce a particular Unicode
+encoding for filenames. Linux applications must decide how to interpret
+filenames all on their own. The Python OS library (on Linux) uses environment
+variables to determine what locale you are in, and this chooses the encoding for
+filenames. If these environment variables are not set, Python falls back to
+ASCII (by default), and hence the source of my ``UnicodeEncodeError``.
+
+So how do you tell a Python instance that is running under Apache / ``mod_wsgi``
+about these environment variables? It turns out the answer is in the `Django
+documentation`_, albeit in the ``mod_python`` integration section.
+
+So, to fix the issue, I added the following lines to my ``/etc/apache2/envvars``
+file:
+
+.. sourcecode:: bash
+
+    export LANG='en_US.UTF-8'
+    export LC_ALL='en_US.UTF-8'
+
+Note that you must cold stop and re-start Apache for these changes to take
+effect. I got tripped up at first because I did an ``apache2ctl
+graceful``, and that was not sufficient to create a new environment.
+
+.. _Django: http://djangoproject.com
+.. 
_Django documentation: https://docs.djangoproject.com/en/1.3/howto/deployment/modpython/#if-you-get-a-unicodeencodeerror diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/006-nl2br-in-python-markdown.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/006-nl2br-in-python-markdown.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,16 @@ +My newline-to-break extension now shipping with Python-Markdown +############################################################### + +:date: 2011-06-21 22:15 +:tags: Markdown, Python +:slug: my-newline-to-break-extension-now-shipping-with-python-markdown +:author: Brian Neal + +Here is a quick update on a `previous post`_ I made about a newline-to-break +extension for `Python-Markdown`_. I'm very happy to report that the extension will +now be `shipping with Python-Markdown`_! Thanks to developer Waylan Limberg for +including it! + +.. _previous post: http://deathofagremmie.com/2011/05/09/a-newline-to-break-python-markdown-extension/ +.. _Python-Markdown: http://www.freewisdom.org/projects/python-markdown/ +.. _shipping with Python-Markdown: https://github.com/waylan/Python-Markdown/issues/13 diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/007-subversion-contrib.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/007-subversion-contrib.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,143 @@ +Contributing to open source - a success story and advice for newbies +#################################################################### + +:date: 2011-06-23 21:45 +:tags: Subversion, OpenSource +:slug: contributing-to-open-source-a-success-story-and-advice-for-newbies +:author: Brian Neal + +Recently, my team at work found a `bug in Subversion`_, I submitted a patch, and it +was accepted! This was very exciting for me so I thought I would share this +story in the hopes of inspiring others to contribute to open source projects. +It may not be as hard as you might think! 
+ +The Bug +======= + +We use Subversion_ at work for revision control. My colleague and I were trying +to merge a branch back to trunk when we ran into some strange behavior. We make +use of Subversion properties, which allow you to attach arbitrary metadata to +files and directories. Our project has to deliver our source code and +documentation to the customer in a required directory format (can you guess who +our customer is?). However not all files need to be sent to the customer. To +solve this problem we use a simple "yes/no" delivery property on each file to +control whether it is delivered or not. Before making a delivery, a script is +run that prunes out the files that have the delivery flag set to "no". + +When our merge was completed, many files were marked with having merge conflicts +on the delivery property. Looking through the logs it was discovered that after +we had made our branch, someone had changed the delivery property on some files +to "yes" on the trunk. Someone else had also changed the delivery property +independently to "yes" on the branch. When we attempted to merge the branch back +to trunk, we were getting merge conflicts, even though we were trying to change +the delivery property value to "yes" on both the trunk and branch. Why was this +a conflict? This didn't seem quite right. + +I signed up for the Subversion user's mailing list and made a short post +summarizing our issue. I later learned that it is proper etiquette to attach a +bash script that can demonstrate the problem. Despite this, a Subversion developer +took interest in my post and created a test script in an attempt to reproduce our +issue. At first it looked like he could not reproduce the problem. However another +helpful user pointed out a bug in his script. Once this was fixed, the developer +declared our problem a genuine bug and created a ticket for it in the issue tracker. 
+ +The Patch +========= + +Impressed by all of this, I thanked him for his efforts and tentatively asked if +I could help. The developer told me which file and function he thought the +problem might be in. I downloaded the Subversion source and began looking at the +code. I was fairly impressed with the code quality, so I decided I would try to +create a patch for the bug over the weekend. We really wanted that bug fixed, +and I was genuinely curious to see if I would be able to figure something out. +It would be an interesting challenge and a test of my novice open source skills. + +When the weekend came I began a more thorough examination of the Subversion +website. The Subversion team has done a great job in providing documentation on +their development process. This includes a contributing guide and patch +submittal process. I also discovered they had recently added a makefile that +downloaded the Subversion source code and the source for all of Subversion's +dependencies. The makefile then builds everything with debug turned on. Wow! It +took me a few tries to get this working, but the problems were because I did not +have all the development tools installed on my Ubuntu box. Once this was +sorted, everything went smoothly, and in a matter of minutes I had a Subversion +executable I could run under the gdb debugger. Nice! + +I studied the code for about an hour, peeking and poking at a few things in the +debugger. I used the script the developer wrote to recreate the problem. I +wasn't quite sure what I was doing, as I was brand new to this code base. But +the code was clearly written and commented well. My goal was to get a patch that +was in the 80-100% complete range. I wanted to do enough work that a core +developer would be able to see what I was doing and either commit it outright or +easily fill in the parts that I missed. After a while I thought I had a solution +and generated a patch. 
I sent it to the Subversion developer's mailing list as +per the contributing guide. + +The Wait +======== + +Next I began probably the worst part for a contributor. I had to wait and see if +I got any feedback. On some open source projects a patch may languish for months. +It all depends on the number of developers and how busy they are. My chances +didn't look good as the developers were in the initial stages of getting a +beta version of 1.7 out the door. It was also not clear to me who "owned" the +issue tracker. On some projects, the issue tracker is wide open to the +community. Was I supposed to update the ticket? I wasn't quite sure, and the +contributing guide was silent on this issue. I eventually concluded I was not; +it looked like only committers were using the tracker. Patches were being +discussed on the mailing list instead of in the tracker. This is a bit different +than some projects I am familiar with. + +I didn't have to wait long. After a few days, the original developer who +confirmed my bug took interest again. He looked at my patch, and thought I had +missed something. He suggested a change and asked for my opinion. I looked at +the code again; it seemed like a good change and I told him I agreed. I also +warned him I was brand new to the code, and to take my opinion with a grain a +salt. After running my change against the tests, he then committed my patch! +One small victory for open source! + +Final Thoughts +============== + +So what went right here? I have to hand it to the Subversion team. They have +been in business a long time, and they have excellent documentation for people +who want to contribute. The makefile they created that sets up a complete +development environment most definitely tipped the scale for me and enabled me +to create my patch. Without that I'm not sure I would have had the time or +patience to get all that unfamiliar source code built. 
The Subversion team has
+really worked hard at lowering the barrier for new contributors.
+
+My advice to people who want to contribute to open source but aren't quite sure
+how to go about doing it:
+
+- Spend some time reading the documentation. This includes any FAQs and
+  contributor guides.
+- Monitor the user and developer mailing lists to get a feel for how the
+  community operates. Each project has different customs and traditions.
+- You may also wish to hang out on the project's IRC channel for the same
+  reason.
+- When writing on the mailing lists, be extremely concise and polite.
+  You don't want to waste anyone's time, and you don't want to
+  be seen as someone who thinks they are entitled to a fix. Just remember you
+  are the new guy. You can't just barge in and make demands.
+- Ask how you can help. Nothing makes a developer happier than being asked how
+  someone can help. Remember, most of the people in the community are volunteers.
+- Open source can sometimes be "noisy". There will be people who
+  won't quite understand your issue and may hurriedly suggest an incorrect solution or give
+  incomplete advice. Study their responses and be polite. You may also wish to resist the temptation
+  to reply right away. This is especially hard when you are new and you don't
+  know who the "real" developers are. However, you should assume everyone is trying to
+  help.
+- Finally, be patient. Again, most folks are volunteers. They have real jobs,
+  families and lives. The project may also be preoccupied with other tasks, like
+  getting a beta out the door. Now may not be a good time for a brand new
+  feature, or your bug may not be considered a show stopper to the majority of
+  the community.
+
+A big thank-you to Stefan Sperling from the Subversion team who shepherded my
+bug report and patch through their process.
+
+I hope this story encourages you to contribute to open source software!
+
+.. 
_bug in Subversion: http://subversion.tigris.org/issues/show_bug.cgi?id=3919 +.. _Subversion: http://subversion.apache.org/ diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/008-oauth-python-gdata.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/008-oauth-python-gdata.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,219 @@ +Implementing OAuth using Google's Python Client Library +####################################################### + +:date: 2011-07-04 13:00 +:tags: Python, OAuth, Google, GData +:slug: implementing-oauth-using-google-s-python-client-library +:author: Brian Neal + +My Django_ powered website allows users to submit events for a site calendar +that is built upon Google Calendar. After an admin approves events, I use +Google's `Python Client Library`_ to add, delete, or update events on the Google +calendar associated with my personal Google account. I wrote this application a +few years ago, and it used the ClientLogin_ method for authentication. I +recently decided to upgrade this to the OAuth_ authentication method. The +ClientLogin method isn't very secure and it doesn't play well with Google's +`two-step verification`_. After hearing about a friend who had his GMail account +compromised and all his email deleted I decided it was long past due to get +two-step verification on my account. But first I needed to upgrade my web +application to OAuth. + +In this post I'll boil down the code I used to implement the elaborate OAuth +dance. It really isn't that much code, but the Google documentation is somewhat +confusing and scattered across a bewildering number of documents. I found at +least one error in the documentation that I will point out. Although I am using +Django, I will omit details specific to Django where I can. + +In addition to switching from ClientLogin to OAuth, I also upgraded to version +2.0 of the Google Data API. 
This had more implications for my calendar-specific +code, and perhaps I can go over that in a future post. + +Getting started and registering with Google +=========================================== + +To understand the basics of OAuth, I suggest you read `OAuth 1.0 for Web +Applications`_. I decided to go for maximum security and use RSA-SHA1 signing on +all my requests to Google. This requires that I verify my domain and then +`register my application`_ with Google, which includes uploading a security +certificate. Google provides documentation that describes how you can `create a +self-signing private key and certificate`_ using OpenSSL. + +Fetching a Request Token and authorizing access +=============================================== + +To perform the first part of the OAuth dance, you must ask Google for a request +token. When you make this request, you state the "scope" of your future work by +listing the Google resources you are going to access. In our case, this is the +calendar resources. You also provide a "consumer key" that Google assigned to +you when you registered your application. This allows Google to retrieve the +security certificate you previously uploaded when you registered. This is very +important because this request is going to be signed with your private key. +Fortunately the Python library takes care of all the signing details, you simply +must provide your private key in PEM format. And finally, you provide a +"callback URL" that Google will send your browser to after you (or your users) +have manually authorized this request. + +Once you have received the request token from Google, you have to squirrel it +away somewhere, then redirect your (or your user's) browser to a Google +authorization page. Once the user has authorized your application, Google sends +the browser to the callback URL to continue the process. Here I show the +distilled code I used that asks for a request token, then sends the user to the +authorization page. + +.. 
sourcecode:: python + + import gdata.gauth + from gdata.calendar_resource.client import CalendarResourceClient + + USER_AGENT = 'mydomain-myapp-v1' # my made up user agent string + + client = CalendarResourceClient(None, source=USER_AGENT) + + # obtain my private key that I saved previously on the filesystem: + with open(settings.GOOGLE_OAUTH_PRIVATE_KEY_PATH, 'r') as f: + rsa_key = f.read() + + # Ask for a request token: + # scopes - a list of scope strings that the request token is for. See + # http://code.google.com/apis/gdata/faq.html#AuthScopes + # callback_url - URL to send the user after authorizing our app + + scopes = ['https://www.google.com/calendar/feeds/'] + callback_url = 'http://example.com/some/url/to/callback' + + request_token = client.GetOAuthToken( + scopes, + callback_url, + settings.GOOGLE_OAUTH_CONSUMER_KEY, # from the registration process + rsa_private_key=rsa_key) + + # Before redirecting, save the request token somewhere; here I place it in + # the session (this line is Django specific): + request.session[REQ_TOKEN_SESSION_KEY] = request_token + + # Generate the authorization URL. + # Despite the documentation, don't do this: + # auth_url = request_token.generate_authorization_url(domain=None) + # Do this instead if you are not using a Google Apps domain: + auth_url = request_token.generate_authorization_url() + + # Now redirect the user somehow to the auth_url; here is how you might do + # it in Django: + return HttpResponseRedirect(auth_url) + +A couple of notes on the above: + +* You don't have to use ``CalendarResourceClient``, it just made the most sense + for me since I am doing calendar stuff later on. Any class that inherits from + ``gdata.client.GDClient`` will work. You might be able to use that class + directly. Google uses ``gdata.docs.client.DocsClient`` in their examples. +* I chose to store my private key in a file rather than the database. 
If you do
+  so, it's probably a good idea to make the file readable only by the user
+  account your webserver runs as.
+* After getting the request token you must save it somehow. You can save it in
+  the session, the database, or perhaps a file. Since this is only temporary, I
+  chose to save it in the session. The code I have here is Django-specific.
+* When generating the authorization URL, don't pass in ``domain=None`` if you
+  aren't using a Google Apps domain, like the documentation states. This appears
+  to be an error in the documentation. Just omit it and let it use the default
+  value of ``"default"`` (see the source code).
+* After using the request token to generate the authorization URL, redirect the
+  browser to it.
+
+Extracting and upgrading to an Access Token
+===========================================
+
+The user will then be taken to a Google authorization page. The page will show the
+user what parts of their Google account your application is trying to access
+using the information you provided in the ``scopes`` parameter. If the user
+accepts, Google will then redirect the browser to your callback URL where we can
+complete the process.
+
+The code running at our callback URL must retrieve the request token that we
+saved earlier, and combine that with certain ``GET`` parameters Google attached
+to our callback URL. This is all done for us by the Python library. We then send
+this new token back to Google to upgrade it to an actual access token. If this
+succeeds, we can then save this new access token in our database for use in
+subsequent Google API operations. The access token is a Python object, so you
+can serialize it using the pickle module, or use routines provided by Google
+(shown below).
+
+.. 
sourcecode:: python + + # Code running at our callback URL: + # Retrieve the request token we saved earlier in our session + saved_token = request.session[REQ_TOKEN_SESSION_KEY] + + # Authorize it by combining it with GET parameters received from Google + request_token = gdata.gauth.AuthorizeRequestToken(saved_token, + request.build_absolute_uri()) + + # Upgrade it to an access token + client = CalendarResourceClient(None, source=USER_AGENT) + access_token = client.GetAccessToken(request_token) + + # Now save access_token somewhere, e.g. a database. So first serialize it: + access_token_str = gdata.gauth.TokenToBlob(access_token) + + # Save to database (details omitted) + +Some notes on the above code: + +* Once called back, our code must retrieve the request token we saved in our + session. The code shown is specific to Django. +* We then combine this saved request token with certain ``GET`` parameters that + Google added to our callback URL. The ``AuthorizeRequestToken`` function takes care of + those details for us. The second argument to that function requires the full URL + including ``GET`` parameters as a string. Here I populate that argument by + using a Django-specific method of retrieving that information. +* Finally, you upgrade your token to an access token by making one last call to + Google. You should now save a serialized version of this access token in your + database for future use. + +Using your shiny new Access Token +================================= + +Once you have saved your access token, you won't have to do this crazy dance +again until the token either expires, or the user revokes your application's +access to the Google account. To use it in a calendar operation, for example, +you simply retrieve it from your database, deserialize it, and then use it to +create a ``CalendarClient``. + +.. sourcecode:: python + + from gdata.calendar.client import CalendarClient + + # retrieve access token from the database: + access_token_str = ... 
+ access_token = gdata.gauth.TokenFromBlob(access_token_str) + + client = CalendarClient(source=USER_AGENT, auth_token=access_token) + + # now use client to make calendar operations... + +Conclusion +========== + +The main reason I wrote this blog post is I wanted to show a concrete example of +using RSA-SHA1 and version 2.0 of the Google API together. All of the +information I have presented is in the Google documentation, but it is spread +across several documents and jumbled up with example code for version 1.0 and +HMAC-SHA1. Do not be afraid to look at the source code for the Python client +library. Despite Google's strange habit of ignoring PEP-8_ and using +LongJavaLikeMethodNames, the code is logical and easy to read. Their library is +built up in layers, and you may have to dip down a few levels to find out what +is going on, but it is fairly straightforward to read if you combine it with +their online documentation. + +I hope someone finds this useful. Your feedback is welcome. + + +.. _Django: http://djangoproject.com +.. _Python Client Library: http://code.google.com/apis/calendar/data/2.0/developers_guide_python.html +.. _ClientLogin: http://code.google.com/apis/calendar/data/2.0/developers_guide_python.html#AuthClientLogin +.. _OAuth: http://code.google.com/apis/gdata/docs/auth/oauth.html +.. _two-step verification: http://googleblog.blogspot.com/2011/02/advanced-sign-in-security-for-your.html +.. _OAuth 1.0 for Web Applications: http://code.google.com/apis/accounts/docs/OAuth.html +.. _register my application: http://code.google.com/apis/accounts/docs/RegistrationForWebAppsAuto.html +.. _create a self-signing private key and certificate: http://code.google.com/apis/gdata/docs/auth/oauth.html#GeneratingKeyCert +.. 
_PEP-8: http://www.python.org/dev/peps/pep-0008/ diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/009-windows-trac-upgrade.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/009-windows-trac-upgrade.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,43 @@ +Upgrading Trac on Windows Gotchas +################################# + +:date: 2011-09-12 22:15 +:tags: Python, Trac, Subversion, Windows +:slug: upgrading-trac-on-windows-gotchas +:author: Brian Neal + +At work, we are outfitted with Windows servers. Despite this obstacle, I managed +to install Trac_ and Subversion_ a few years ago. During a break in the action, +we decided to update Subversion (SVN) and Trac. Since we are on Windows, this +means we have to rely on the `kindness of strangers`_ for Subversion binaries. I +ran into a couple of gotchas I'd like to document here to help anyone else who +runs into these. + +I managed to get Subversion and Trac up and running without any real problems. +However when Trac needed to access SVN to display changesets or a timeline, for +example, I got an error: + +``tracerror: unsupported version control system "svn" no module named _fs`` + +After some googling, I finally found that this issue is `documented on the Trac +wiki`_, but it was kind of hard to find. To fix this problem, you have to rename +the Python SVN binding's DLLs to ``*.pyd``. Specifically, change the +``libsvn/*.dll`` files to ``libsvn/*.pyd``, but don't change the name of +``libsvn_swig_py-1.dll``. I'd really like to hear an explanation of why one +needs to do this. Why doesn't the Python-Windows build process do this for you? + +The second problem I ran into dealt with mod_wsgi_ on Windows. Originally, a few +years ago, I setup Trac to run under mod_python_. mod_python has long been +out of favor, so I decided to cut over to mod_wsgi. On my Linux boxes, I always +run mod_wsgi in daemon mode. 
When I tried to configure this on Windows, Apache +complained about an unknown directive ``WSGIDaemonProcess``. It turns out that +`this mode is not supported on Windows`_. You'll have to use the embedded mode on +Windows. + +.. _Trac: http://trac.edgewall.org/ +.. _Subversion: http://subversion.apache.org/ +.. _kindness of strangers: http://sourceforge.net/projects/win32svn/ +.. _documented on the Trac wiki: http://trac.edgewall.org/wiki/TracSubversion +.. _mod_wsgi: http://code.google.com/p/modwsgi/ +.. _mod_python: http://www.modpython.org/ +.. _this mode is not supported on Windows: http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/010-redis-whos-online-return.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/010-redis-whos-online-return.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,100 @@ +Who's Online with Redis & Python, a slight return +################################################# + +:date: 2011-12-17 19:05 +:tags: Redis, Python +:slug: who-s-online-with-redis-python-a-slight-return +:author: Brian Neal + +In a `previous post`_, I blogged about building a "Who's Online" feature using +Redis_ and Python_ with redis-py_. I've been integrating Celery_ into my +website, and I stumbled across this old code. Since I made that post, I +discovered yet another cool feature in Redis: sorted sets. So here is an even +better way of implementing this feature using Redis sorted sets. + +A sorted set in Redis is like a regular set, but each member has a numeric +score. When you add a member to a sorted set, you also specify the score for +that member. You can then retrieve set members if their score falls into a +certain range. You can also easily remove members outside a given score range. + +For a "Who's Online" feature, we need a sorted set to represent the set +of all users online. 
Whenever we see a user, we insert that user into the set
+along with the current time as their score. This is accomplished with the Redis
+zadd_ command. If the user is already in the set, zadd_ simply updates
+their score with the current time.
+
+To obtain the current list of who's online, we use the zrangebyscore_ command to
+retrieve the list of users whose score (time) lies between, say, 15 minutes ago
+and now.
+
+Periodically, we need to remove stale members from the set. This can be
+accomplished by using the zremrangebyscore_ command. This command will remove
+all members that have a score between minimum and maximum values. In this case,
+we can use the beginning of time for the minimum, and 15 minutes ago for the
+maximum.
+
+That's really it in a nutshell. This is much simpler than my previous
+solution, which used two sets.
+
+So let's look at some code. The first problem we need to solve is how to
+convert a Python ``datetime`` object into a score. This can be accomplished by
+converting the ``datetime`` into a POSIX timestamp integer, which is the number
+of seconds from the UNIX epoch of January 1, 1970.
+
+.. sourcecode:: python
+
+    import datetime
+    import time
+
+    def to_timestamp(dt):
+        """
+        Turn the supplied datetime object into a UNIX timestamp integer.
+
+        """
+        return int(time.mktime(dt.timetuple()))
+
+With that handy function, here are some examples of the operations described
+above.
+
+.. 
sourcecode:: python + + import redis + + # Redis set keys: + USER_SET_KEY = "whos_online:users" + + # the period over which we collect who's online stats: + MAX_AGE = datetime.timedelta(minutes=15) + + # obtain a connection to redis: + conn = redis.StrictRedis() + + # add/update a user to the who's online set: + + username = "sally" + ts = to_timestamp(datetime.datetime.now()) + conn.zadd(USER_SET_KEY, ts, username) + + # retrieve the list of users who have been active in the last MAX_AGE minutes + + now = datetime.datetime.now() + min = to_timestamp(now - MAX_AGE) + max = to_timestamp(now) + + whos_online = conn.zrangebyscore(USER_SET_KEY, min, max) + + # e.g. whos_online = ['sally', 'harry', 'joe'] + + # periodically remove stale members + + cutoff = to_timestamp(datetime.datetime.now() - MAX_AGE) + conn.zremrangebyscore(USER_SET_KEY, 0, cutoff) + +.. _previous post: http://deathofagremmie.com/2011/04/25/a-better-who-s-online-with-redis-python/ +.. _Redis: http://redis.io/ +.. _Python: http://www.python.org +.. _redis-py: https://github.com/andymccurdy/redis-py +.. _Celery: http://celeryproject.org +.. _zadd: http://redis.io/commands/zadd +.. _zrangebyscore: http://redis.io/commands/zrangebyscore +.. _zremrangebyscore: http://redis.io/commands/zremrangebyscore diff -r c3115da3ff73 -r 7ce6393e6d30 content/Coding/011-ts3-python-javascript.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/content/Coding/011-ts3-python-javascript.rst Thu Jan 30 21:45:03 2014 -0600 @@ -0,0 +1,311 @@ +A TeamSpeak 3 viewer with Python & Javascript +############################################# + +:date: 2012-01-20 19:15 +:tags: Python, Javascript, TeamSpeak +:slug: a-teamspeak-3-viewer-with-python-javascript +:author: Brian Neal + +The Problem +=========== + +My gaming clan started using `TeamSpeak 3`_ (TS3) for voice communications, so +it wasn't long before we wanted to see who was on the TS3 server from the clan's +server status page. 
Long ago, before I met Python, I had built the clan a server +status page in PHP. This consisted of cobbling together various home-made and +3rd party PHP scripts for querying game servers (Call of Duty, Battlefield) and +voice servers (TeamSpeak 2 and Mumble). But TeamSpeak 3 was a new one for us, +and I didn't have anything to query that. My interests in PHP are long behind +me, but we needed to add a TS3 viewer to the PHP page. The gaming clan's web +hosting is pretty vanilla; in other words PHP is the first class citizen. If I +really wanted to host a Python app I probably could have resorted to Fast CGI or +something. But I had no experience in that and no desire to go that way. + +I briefly thought about finding a 3rd party PHP library to query a TS3 server. +The libraries are out there, but they are as you might expect: overly +complicated and/or pretty amateurish (no public source code repository). I even +considered writing my own PHP code to do the job, so I started looking for any +documentation on the TS3 server query protocol. Luckily, there is a `TS3 +query protocol document`_, and it is fairly decent. + +But, I just could not bring myself to write PHP again. On top of this, the +gaming clan's shared hosting blocks non-standard ports. If I did have a PHP +solution, the outgoing query to the TS3 server would have been blocked by the +host's firewall. It is a hassle to contact their technical support and try to +find a person who knows what a port is and get it unblocked (we've had to do +this over and over as each game comes out). Thus it ultimately boiled down to me +wanting to do this in Python. For me, life is too short to write PHP scripts. + +I started thinking about writing a query application in Python using my +dedicated server that I use to host a few Django_ powered websites. At first I +thought I'd generate the server status HTML on my server and display it in an +``