Game Review: Flash Tactics

One of the things I want to do with this blog is showcase some of the best examples of Python game development. There are plenty of candidates in pygame.org's extensive database. I'm a bit of an RTS fan - among many other genres - so I was curious to see whether there are any polished RTS games in the database.

One extremely polished example I turned up is Flash Tactics - a real-time tank fighting game.

The player controls three tanks of slightly different types, which can be moved forwards or backwards across the landscape and told to target specific enemies. Each tank fires automatically as long as it has a shot, and that depends a lot on the angle of the terrain on which it is sitting. Thus a lot of the game is about choosing good firing positions for your tanks. Your tanks respawn after 10 seconds, so you always have three to play with, but if all are killed at once the game is over.

It is also possible to purchase modifications for your tanks, including the awesome armour mod that makes your tank twice as strong but twice the size; earning these involves building up a combo by killing the enemy tanks as fast as possible. Modification points are only awarded at the end of a level so they are quite difficult to obtain.

This game has a very polished appearance, with superb smoke effects and explosions; the only flaws are that some of the effects are a little flickery, and the heavy use of nearest-neighbour rotation leaves the sprites heavily pixellated. I particularly like the comic chat messages the tanks spout, such as "I didn't sign up for this!" and "He had a friggin family!"

It's also worth pointing out the excellent levels, drawn in a delightful cartoonish style with good attention to foreground and background detail, each offering an interesting mix of combat terrain.

What other Python games have you enjoyed playing lately? Leave a note in the comments.

Screenshot Code

If you want people to look at your game, screenshots are the thing most likely to pique their interest.

It's sometimes possible to take screenshots using a generic screen grab tool, but that's not always the case, and it's not always quick and easy. It's best to set up a specific key you can press to take screenshots quickly and easily (I invariably use F12).

Fortunately doing this from Python code is pretty formulaic.

Pygame

Pygame can directly save a surface as an image. The code to do this just needs to be dropped into your event handling.



import datetime

import pygame
from pygame.locals import KEYDOWN, K_F12


def screenshot_path():
    # Use dashes rather than colons in the timestamp; colons are not valid in Windows filenames.
    return datetime.datetime.now().strftime('screenshot_%Y-%m-%d_%H-%M-%S.%f.png')

...

# in your event loop
if event.type == KEYDOWN:
    if event.key == K_F12:
        pygame.image.save(screen_surface, screenshot_path())
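
For context, here is a minimal sketch of how that snippet slots into a complete main loop; the window size, frame rate and drawing code are arbitrary stand-ins for whatever your game already does:

import datetime

import pygame
from pygame.locals import KEYDOWN, K_F12, QUIT


def screenshot_path():
    return datetime.datetime.now().strftime('screenshot_%Y-%m-%d_%H-%M-%S.%f.png')


def main():
    pygame.init()
    screen_surface = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == QUIT:
                running = False
            elif event.type == KEYDOWN and event.key == K_F12:
                # Save whatever is currently drawn on the display surface
                pygame.image.save(screen_surface, screenshot_path())
        screen_surface.fill((0, 0, 0))
        # ... draw your game here ...
        pygame.display.flip()
        clock.tick(60)


if __name__ == '__main__':
    main()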




Pyglet

In OpenGL, you have to read back the colour buffer to an image and save that. As you generally don't want the colour buffer's alpha channel to be saved if it has one, there are a couple of OpenGL calls to force every pixel to be read as opaque. Pyglet can handle reading and saving the colour buffer though.



import datetime

import pyglet
from pyglet import gl
from pyglet.window import key


def screenshot_path():
    # Use dashes rather than colons in the timestamp; colons are not valid in Windows filenames.
    return datetime.datetime.now().strftime('screenshot_%Y-%m-%d_%H-%M-%S.%f.png')

...

def on_key_press(symbol, modifiers):
    """This is your registered on_key_press handler.

    window is the pyglet.window.Window object to grab.
    """
    if symbol == key.F12:
        gl.glPixelTransferf(gl.GL_ALPHA_BIAS, 1.0)  # don't transfer alpha channel
        image = pyglet.image.ColorBufferImage(0, 0, window.width, window.height)
        image.save(screenshot_path())
        gl.glPixelTransferf(gl.GL_ALPHA_BIAS, 0.0)  # restore alpha channel transfer
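
To hook that up, the handler just needs to be attached to the window. A minimal sketch, assuming on_key_press is the function above and the window size is arbitrary:

window = pyglet.window.Window(640, 480)
window.push_handlers(on_key_press)  # or decorate the handler with @window.event

pyglet.app.run()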

One-Hour Games

After Pyweek 12, a few contestants, fuelled by the drug of Python game coding, set about a new challenge: writing a game in one hour, solo.

superjoe and John opened the competition with hour-game and space_bombers.

Chard followed up with an addictive little avoid-the-laser game, futility.

My attempt was a simple destroy-the-planet game.

camel, made by Cosmologicon in just 30 minutes, was created according to a randomly selected theme.

And that's all I've seen, so far. If anyone would like to participate, simply set your timer for one hour, knock up a game from scratch, in Python, and pop a note with a link to a download in the comments.

Pyggy Awards, July 2011

Off the back of Pyweek 12, Greg Ewing has announced the 6th Pyggy awards, to be judged in July 2011.

This time the two-month-long contest is open to any open-source Python game, as Greg explains:

If you've had a Python game project on the back burner, you no longer have any excuse for not finishing it off! Get going!

Previously the Pyggy awards were only open to games building upon Pyweek entries or themes.

MVC Controllers

MVC as a pattern has many interpretations, but an interpretation I think is common goes as follows:

  • Model - database abstraction/persistence
  • View - templates
  • Controller - all the code that interprets the request, interacts with models, and sets up and renders a template

There is a distinct problem with this: the controller becomes bloated with all manner of code - things which don't seem bottom-up enough to be part of the model, or code to pre-chew model data for the benefit of dumb templates. This leads to controllers dozens of lines long that are difficult to read as a whole.

In my code (Django code, where controllers are called views) I have been trying to avoid this for a long while now. Since controllers constitute the glue between the request, model interaction and setting up of the view, I read them frequently to determine how the application is glued together. Therefore I've found them to benefit from being as short and transparent as possible.

These facts should be immediately visible on reading the code of a controller:

  • What are the input parameters from the request (GET/POST/Cookies/Headers/Session Vars)
  • How those parameters are interpreted/validated (their domain)
  • What operation the request performs
  • What variables are passed to the template system

I think it's worth formalising this. My take would be something like the following:

A controller should contain no code other than these distinct phases:

  • Unpacking the request parameters/validating the request.
  • Invoking an operation, defined elsewhere.
  • Setting up the context for template rendering.

These can even be labelled as such.

I've conflated unpacking/validating because these can generally be stated succinctly together, using trivial code. For example, in Django, you'd see code like this:

def category(request, category_id):
    category = get_object_or_404(Category, id=category_id)

which I think succinctly comprises these facts:

  • The controller named category receives a parameter category_id
  • The valid domain of category_id is the set of all Category ids
  • If category_id is outside that domain, Http404 is raised

As for the business logic of a view, I've found that once the code enacting it grows to more than a couple of lines, it is best moved either into a form's .save() method (so that a form basically defines one operation on a mixed bag of unvalidated input), into model methods (usually for simple model manipulation), or into a separate class that defines more complicated business logic.

When it comes to the template context setup, I want to know, while writing templates, what variables I have available. This should therefore be explicit in the code of the controller I'm looking at.
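
To make the three phases above concrete, here is a sketch of how a small Django view might look with the phases labelled. The app name myapp, the Category model, the RenameCategoryForm form, the 'category' URL name and the template path are all hypothetical stand-ins:

from django.shortcuts import get_object_or_404, redirect, render

from myapp.forms import RenameCategoryForm  # hypothetical form encapsulating the operation
from myapp.models import Category           # hypothetical model


def rename_category(request, category_id):
    # Unpack/validate: category_id must be the id of an existing Category
    category = get_object_or_404(Category, id=category_id)
    form = RenameCategoryForm(request.POST or None, instance=category)

    # Invoke the operation, which is defined on the form, not here
    if request.method == 'POST' and form.is_valid():
        form.save()
        return redirect('category', category_id=category.id)

    # Set up the context for template rendering
    return render(request, 'categories/rename.html', {
        'category': category,
        'form': form,
    })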

Dev versus DevOps

Having spent the past few months working in ops I have learned a wide range of new skills in server and network infrastructure. I found that my skills as a developer augmented what were, at the time, merely competent ops skills. Coming back to full-time development now, I was expecting to find that my infrastructure skills would improve my development. What I wasn't expecting was such an early and staggering example.

A year ago, I solved a problem that I was then experiencing - querying Active Directory with python-ldap under Debian. There was an incompatibility between GnuTLS and AD that made this impossible: AD lacks TLS 1.1 support, and GnuTLS would not fall back from TLS 1.1 to TLS 1.0. This would happen:

$ gnutls-cli -p 636 ad.example.com
Resolving 'ad.example.com'...
Connecting to 'ad.example.com:636'...
*** Fatal error: A TLS packet with unexpected length was received.
*** Handshake has failed
GNUTLS ERROR: A TLS packet with unexpected length was received.

This worked when disabling TLS 1.1 in GnuTLS, but libldap does not expose a way to set GnuTLS options, and so nor does python-ldap.

My Developer Solution

My 2010 workaround was to recompile libldap against OpenSSL (which I tested and found to work with AD). This is how it is done:

Build instructions for libldap

  1. check that source repos are available in /etc/apt/sources.list
  2. $ apt-get source openldap
  3. # apt-get build-dep openldap
  4. # apt-get install libssl-dev
  5. cd to the openldap-* directory
  6. $ CPPFLAGS=-D_GNU_SOURCE ./configure --prefix=<where> --with-tls=openssl (see http://www.openldap.org/its/index.cgi/Build?id=5464 for the reasons behind the _GNU_SOURCE flag)
  7. $ make -j <number_of_cpus> depend
  8. $ cd libraries
  9. $ make -j <number_of_cpus>
  10. $ make install
  11. $ cd ../include
  12. $ make install

Build instructions for python-ldap

libldap may provide the same binary interface (ABI) whether it's compiled with GnuTLS or OpenSSL, but there is a chance that it may differ, so recompiling python-ldap against the new libldap is recommended.

  1. # apt-get install python-dev
  2. Obtain source for stable python-ldap from http://pypi.python.org/pypi/python-ldap/
  3. Extract archive and enter extracted directory
  4. Edit setup.cfg: in the [_ldap] section, add <where>/lib after library_dirs =, add <where>/include after include_dirs =, and add a line extra_link_args = -L<where>/lib -rpath <where>/lib.
  5. $ mkdir -p <where>/lib/python<version>/site-packages (where <version> is eg. 2.5, 2.6)
  6. $ PYTHONPATH=<where>/lib/python<version>/site-packages/ python setup.py install --prefix=<where>
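
For illustration, after step 4 the [_ldap] section of setup.cfg might look something like this; <where> is the placeholder from above, and any directories already listed in your copy of setup.cfg should be kept alongside the new entries:

[_ldap]
library_dirs = <where>/lib /usr/lib
include_dirs = <where>/include /usr/include
extra_link_args = -L<where>/lib -rpath <where>/lib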

Running Python

To use the recompiled version of the libraries:

$ PYTHONPATH=<where>/lib/python<version>/site-packages/ python example.py

My DevOps Solution

Using stunnel it is possible to "unwrap" the SSL layer and present python-ldap with an unencrypted local endpoint. stunnel is compiled against OpenSSL and thus doesn't suffer from the GnuTLS bug.


$ sudo stunnel -c -d 127.0.0.1:389 -r ad.example.com:636
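
The application then just speaks plain LDAP to the local end of the tunnel. A minimal python-ldap sketch - the bind DN, password, base DN and search filter are placeholders:

import ldap

# Connect to the local, unencrypted end of the tunnel;
# stunnel forwards this to ad.example.com:636 over SSL.
conn = ldap.initialize('ldap://127.0.0.1:389')
conn.protocol_version = ldap.VERSION3
conn.simple_bind_s('CN=Service Account,DC=example,DC=com', 'password')

results = conn.search_s(
    'DC=example,DC=com',
    ldap.SCOPE_SUBTREE,
    '(sAMAccountName=jsmith)',
)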

As a developer, I feel a certain hesitance about introducing another independent service into the system. It feels like weakening the chain, going from one point of failure to many - potential bugs or misconfigurations in the adapting component itself, or misconfigurations of the server that is supposed to host it.

As a DevOp, with the tools and experience to maintain infrastructure systems that involve vastly more components than this, I see it as robust - no non-standard components, just an easy-to-configure off-the-shelf tool doing what it is intended for.

Puppet

Over the last couple of months I have been getting to grips with Puppet, a client-server system for applying configurations to remote machines. It is a powerful tool for network administration, allowing the configuration for your entire network to be stored in one versioned repository and applied with little effort. There are other, similar tools in this space, but Puppet seems to be particularly popular at the moment.

Puppet includes a master server that serves configuration and files, and a client daemon that connects back to the master using client-certified SSL, and applies configuration on the machine on which it runs. Configurations are defined in Puppet's own declarative language.

While Puppet is described as a tool that will apply configurations to remote machines, the fact that Puppet manifests comprise definitive knowledge about the network configuration should not be overlooked. As you configure services, the Puppet rules that you write serve as documentation of the process that you followed.

This is not to say Puppet is without problems. The client is very heavy, and can consume lots of memory to apply a configuration - this can be a showstopper on an otherwise very light VM. The Puppet language is clean for simple cases, but restrictions in its syntax - stemming either from incompleteness or from deliberate limits intended to enforce configuration sanity - can defeat attempts to write complicated, re-usable recipes.

It is also difficult to test Puppet recipes. You can run them on a VM, but it's time-consuming to ensure that they apply correctly, first time, given an out-of-the-box install. It's somewhat likely that you would need to run Puppet once, then run apt-get update, and then run Puppet again.

From my point of view as a developer who generally works with normalised databases, what I find ugly is that the Puppet repository is not one fact, one place. Puppet recipes most frequently just copy configuration files onto the client, and the particulars of a configuration file may implicitly depend on facts buried in many other configuration files or Puppet manifests. For example, the IPs listed in DNS zone files must match the IPs assigned in each host's network configuration.

To avoid some of these problems, a future Puppet-like tool could perhaps take the form of a comprehensive and extensible network information system (eg. RDF), and a suite of tools and recipes for compiling that information into something as lightweight as a bash script to run on each remote machine.

Fonts and font-family

Yesterday, on Twitter, I watched a discussion emerge as one person I follow pointed out that another person's hosted wordpress.com blog was illegible on her computer, with all of the content appearing in ugly bold italics.

While we never got to the bottom of that issue (I couldn't reproduce it), it's worth backing up and examining font use on the web.

Fonts, unlike any other aspect of web browser rendering, depend on the platform, not the browser version.

The reason is simple: fonts are not bundled with the browser, but with the operating system, or installed with some creative applications.

If you select fonts based on how they look on your computer, they will look different on another computer with a different set of fonts installed. Also, fonts are matched by name in CSS, so when you write

font-family: "Helvetica", "Arial", sans-serif;

you are requesting a font named "Helvetica", then one named "Arial", then the default sans-serif fonts. This is a very common thing for people to write, because Helvetica is a popular choice on Mac, Arial is a popular choice on Windows, and sans-serif is a catch-all. The intention is to select a nice sans-serif font on each platform.

Unfortunately, "Helvetica" can exist on Windows and Linux as well as Mac. Helvetica has been around as a typeface since 1957, and there are different versions of it around - by what route, or with what degree of intellectual property infringement, I do not know. There are also a fair number of variants that your computer might consider, if they are installed.

On Linux, Helvetica was historically an X bitmap font (ugly, impractical things that are now effectively dead). These days it is generally a fontconfig alias for a free sans-serif font, but it renders with iffy hinting and kerning, perhaps to conform to the original font's metrics (i.e. it has been shoehorned into exactly the same space, so that printed publications don't come out wrong). I actually find this font quite uncomfortable to read.

On Windows, you may occasionally find that Helvetica exists, perhaps even as the same font, installed on its own or with some software suite. But if it does, several browsers on Windows will render it with Microsoft's ClearType renderer, which is optimised for legibility rather than the quality-optimised rendering of the Mac (also used by Safari 3 on Windows). Microsoft's own fonts have been tweaked to work well with ClearType - others may not have been. Linux is (as ever) more flexible: it's possible to configure the amount of hinting through fontconfig, although most users will keep their distribution's defaults.

Ultimately it's an impenetrable picture - you cannot be sure that the fonts you list will give anything like the browsing experience you were expecting. The same overall picture applies with serif fonts and monospace fonts.

The best solution (unless you want to try downloadable fonts, which I wouldn't recommend for body fonts) is to side-step the specifics of fonts entirely and delegate to the user/browser/operating system. There are three suitable aliases for font families: sans-serif, serif, and monospace. These will reliably give you a good font of that category. There are two other aliases, cursive and fantasy, which are too poorly defined - you could get practically anything.

Is this really the only option? If you're prepared to go to the enormous lengths required, can you not pick a list of named fonts, test broadly and claim it works? Well, yes, if you test broadly enough you can get, say, 99.9% coverage. Unfortunately, that's not always good enough.

The topic of a site turns out to significantly affect the statistics of the users who visit it. For example, a site about Linux will get more Linux hits. A site about using Photoshop will get most of its hits from people with Adobe Creative Suite installed, and that comes with fonts. So as a theme designer, what was 99%+ coverage for you could be 90% for some of the people who use your theme.

So, in summary, stick to the safe fonts: sans-serif, serif and monospace. Fonts that are ubiquitous and designed for the screen - Arial and Verdana - are also quite safe. You might be able to find some other safe choices by consulting statistics if you are feeling creatively hemmed in. But please, don't make font assumptions.

The Virtual Revolution

Last night's BBC documentary The Virtual Revolution, available on iPlayer now, is exactly typical of all internet documentaries I have seen, from the generic title (pick one of "The Digital", "The Cyber", "The Virtual" and one of "Revolution", "Renaissance", "Tomorrow" etc.) to using "web" and "internet" interchangeably, to cutting to shots of computer screens showing something internetty, like repeatedly typing www.com into a browser's address bar (it is a valid domain, but it is enormously more likely to be typed through incompetence), to the intentionality ascribed to the entire edifice, which, they alleged, was deliberately designed to democratise everything ever.

The story was woven into a history of the internet as told by "key players and pioneers" including Sir Tim, YouTube, Wikipedia and Arianna Huffington of the frequently alt-med-promoting rag HuffPo, thus neatly side-stepping the role of the millions of faceless bloggers and web users who pump content into the web and Web 2.0 sites and who are in truth responsible for what the web is today.

Actually, I say sidestepping - blogs were mentioned.

The world of blogging is going through a crisis. Of the more than 130 million blogs active since 2002, it's estimated that over 90% are now dormant.

Ok, a lot of people set up blogs and stop posting to them. But ignore that: reverse the statistic and it tells you there are 13 million active blogs on the web. That is a HUGE number. It means there is one active blog for every 130 Internet users.

YouTube in particular is noteworthy only for being the most popular video distribution site. It is neither pioneering nor unique, and you wonder how its CEO's opinion could possibly be more valuable than that of its more popular video bloggers. Incidentally, unlike many sites, such as Facebook, there's almost no drawback to switching to a competitor such as Vimeo.

Jimmy Wales, founder of Wikipedia, on the other hand, is truly visionary. Nobody would have thought a wiki could scale to the size of an encyclopaedia and beyond without its quality suffering a lot more than Wikipedia's actually does. The result is the most useful site on the Internet outside of Google. Wales did not, of course, invent the wiki or prove the wiki concept itself.

But the main thing this programme gets wrong is simple definitions. The whole episode laments the fact that the internet was supposed to be democratic, but claims it isn't, because everyone uses Facebook or YouTube, and sites like HuffPo get more traffic than your average blog. The word oligarchy was used.

Wrong. People can choose which websites to use or not use. Remember Myspace? Owned by News Corp, one of the world's biggest media companies? What happened to that? I suppose, as oligarchs, they must have decided for us that we weren't going to use it any more, right?

I won't bother with the rest of the series.