Pyweek Game Jam is 19th-26th February

The 23rd Pyweek games programming competition will run from the 19th to 26th of February 2017, from midnight to midnight (UTC).

The Pyweek rules, in short, are:

  • Develop a game
  • In Python (mostly, at least!)
  • As an individual or with a team
  • In exactly one week (or less!)
  • From "scratch" - no personal codebases, only public, documented libraries
  • On a theme that is selected by vote, announced at the moment the contest starts.

Python has great libraries for programming games, both 2D and 3D, and the flexibility of the language means you can achieve a lot in just 7 days. Pyweek is open to programmers of all ages and experience levels, from anywhere in the world, and it's a great way to challenge yourself and improve your skills.

Games are scored by other entrants, on criteria of fun, production and innovation, and you'll have to think about all three to be in with a chance of winning! It's a free competition though, so your prize is recognition :-)

Here's what you need to do if you want to take part:

  1. Sign up/login at the Pyweek website, and then "Register an entry" at the top-right.
  2. Familiarise yourself with some of the libraries and resources you can use to make games.
  3. Join the IRC Channel, #pyweek on Freenode, for discussion and advice.
  4. Play some of the previous entries at the site.
  5. Why not post a diary entry (from your entry page), introducing yourself and your team?
  6. Put a "save the date" in your calendar! Theme voting starts 2017-02-12, and the competition the week after, 2017-02-19 00:00 UTC.

Have fun, and good luck!

Scaling software development without monorepos

Google, Twitter and Facebook all famously use monorepos for the bulk of their development. A monorepo is a single version control repository shared by all of an organisation's code. That's not to say that a monorepo should be just an unstructured mess of code in a single repository; that would be chaos. It's usually a collection of components - apps, services, libraries etc - all stored alongside each other in a single conceptual codebase.

A monorepo obeys two rules:

  • Whenever you build, test, run or release software, all the code used comes from the same version of the whole repo.
  • Code can only be pushed to the repo if all tests it could possibly affect have passed. Conceptually the entire repo is always passing all tests and is ready to release at any time.
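To see what the second rule implies, here's a sketch of how a build system might compute which components (and hence which tests) a change could possibly affect - everything that transitively depends on the changed component. The component graph here is made up for illustration:

```python
from collections import defaultdict, deque

# A made-up component graph: each component lists its direct dependencies.
deps = {
    "app": ["libA", "libB"],
    "libA": ["libC"],
    "libB": ["libC"],
    "libC": [],
}

def affected_by(changed, deps):
    """Everything that transitively depends on `changed`, and so must be
    re-tested before the change can be pushed."""
    rdeps = defaultdict(set)
    for comp, ds in deps.items():
        for d in ds:
            rdeps[d].add(comp)
    seen, queue = set(), deque([changed])
    while queue:
        for dependent in rdeps[queue.popleft()]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(affected_by("libC", deps))  # a change to libC implicates libA, libB and app
```

Even in this toy graph, a change to the lowest-level library implicates every other component's tests, which hints at why monorepo builds get expensive.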

Crucially, a monorepo bans any flexibility to pick and choose versions of libraries code depends on. This is seen as the big advantage of monorepos - avoiding dependency hell.

A typical dependency hell situation is this:

  • You depend on two libraries A and B.
  • The latest A depends on C version 1.1.
  • The latest B depends on C version 1.2.
  • You therefore can't use the latest A and latest B.

There may be a solution - downgrade A, or B, or both - or there may not.

Dependency hell is a problem, and one that becomes much more problematic as the organisation scales - as the number of libraries increases. Monorepos avoid dependency hell by enforcing that A and B will always depend on the same version of C - the latest version.

Another advantage is that monorepos can help avoid using stale code. Once you get your code into the monorepo, any future release of any product will be using that code. Of course, it's the same amount of effort to port code to use newer versions of dependencies, but that work has to be done before the new version can be pushed.

However, it's not without huge downsides.

Even if you can calculate which tests could possibly be affected, you can find yourself rerunning huge swathes of the organisation's tests to guarantee the codebase is always ready to release. In response, organisations drop extravagances such as integration tests and mandate fast-running unit tests only.

Making breaking changes to an API in a monorepo is hard, because to push code into the repo it already has to be passing all (unit) tests. There are several responses that drop out, all sensible but undesirable:

  • Only make backwards-compatible changes - bad, because we accrue debt and cruft, shims and hacks
  • Introduce feature flags - bad because we introduce codepaths that may mean combinations of flags run in production that haven't been tested
  • Take heroic steps to try to patch all the code in the organisation - bad, because this involves developers changing other teams' code, which can lead to mistakes that may slip through code review

It can be extremely hard to use third-party libraries in a monorepo, because code developed under the assumption of versioned library releases is completely unprepared for the breaking-changes issue.

Also, if things break in ways that aren't caught by unit tests, finding what changed can be hard - everything is changing all the time.

Put simply, monorepos neglect how valuable it is to have fully tested, versioned releases of library code accompanied by CHANGELOGs describing what changed.

Versioned releases mean a developer using a library is decoupled from any breaking changes to a library. There can be multiple branches in development at once, say a 1.6 maintenance release and a 2.0, letting developers upgrade as time allows.

An alternative

I believe a better alternative to monorepos can be found using traditional component versioning and releasing.

Let's go back to the two problems we were trying to solve:

  1. We want to help solve dependency hell.
  2. We want to drive developers towards using up-to-date versions of libraries.

Rather than going to the effort of building a monorepo system (because tooling for these isn't readily available off-the-shelf), could we build tooling that tackles these problems, using a standard assumption that libraries will be released independently, their code fully tested?

Driving users to upgrade

Ensuring that developers work towards staying current with the latest versions of libraries is perhaps the easier problem.

If we let developers develop libraries which are released with semantic versions, we can build a system to keep track of which versions are supported.

I envisage this looking very much like the existing web services that let Github users see whether the open-source libraries they depend on are up-to-date.

Conceived as an internal release management tool, we simply let library maintainers set the status of each released version, as one of:

  • Up-to-date - green
  • Out-of-date - amber - prefer not to release against this
  • End-of-life - red - only release against this as last resort. You could have special red statuses for "insecure" and "buggy"

The system should be able to calculate, for any build of any library, whether it is up-to-date.

Something like semantic versioning would of course be recommended; in principle semantic versioning would make it possible to automatically update which versions are out-of-date.
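As a sketch of what that automation might look like (the status rule here is my own assumption, not an established convention): treat the newest release overall as up-to-date, the newest release of each older major series as out-of-date, and everything else as end-of-life:

```python
def statuses(versions):
    """Assign a freshness status to each semver-style release."""
    parsed = sorted(tuple(map(int, v.split("."))) for v in versions)
    latest = parsed[-1]
    newest_in_major = {v[0]: v for v in parsed}  # sorted, so last write wins
    result = {}
    for v in parsed:
        if v == latest:
            status = "up-to-date"
        elif v == newest_in_major[v[0]]:
            status = "out-of-date"
        else:
            status = "end-of-life"
        result[".".join(map(str, v))] = status
    return result

print(statuses(["1.5.0", "1.6.0", "1.6.1", "2.0.0"]))
# {'1.5.0': 'end-of-life', '1.6.0': 'end-of-life', '1.6.1': 'out-of-date', '2.0.0': 'up-to-date'}
```

A real tool would let maintainers override these computed statuses by hand.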

With this system we can easily communicate to developers when they need to take action, without making it painful. Maintainers could quickly kill a buggy patch release by marking it "end-of-life".

Solving dependency hell

Dependency hell can be relieved by being more agnostic about the versions of code we can support against. This is much easier in dynamic languages such as Python that have strong introspection capabilities. This allows for code compatibility across a range of versions of libraries.

  • You depend on two libraries A and B.
  • The latest A depends on C version 1.0 to 1.1.
  • The latest B depends on C version 1.1 to 1.2.
  • You can therefore use the latest A and latest B, with C version 1.1.
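A toy sketch of why ranges help, using the numbers above (the release list is invented): intersecting each consumer's acceptable set of C versions finds the shared release.

```python
RELEASES = ["1.0", "1.1", "1.2"]  # known releases of C, oldest first

def versions(lo, hi):
    """The set of C releases in the inclusive range [lo, hi]."""
    return set(RELEASES[RELEASES.index(lo):RELEASES.index(hi) + 1])

a_wants = versions("1.0", "1.1")  # the latest A accepts C 1.0-1.1
b_wants = versions("1.1", "1.2")  # the latest B accepts C 1.1-1.2
print(a_wants & b_wants)  # {'1.1'}: one version of C satisfies both
```

With exact pins, each set would be a singleton and the intersection would be empty - that's dependency hell.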

This is so innate to Python that even pip, Python's package installer, doesn't currently fully resolve dependency graph conflicts - for each dependency, the first version specification encountered as pip walks the dependency graph is the only one guaranteed to be satisfied.

This kind of flexibility is not impossible in other languages, however. In C and C++ this is sometimes achievable through preprocessor directives. It's a little harder in Java and C# - mostly you'd have to explicitly expose compatible interfaces - but that is something that we're often doing anyway.
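In Python, that flexibility can be as simple as feature-detecting the object you're handed. A contrived sketch, in which a hypothetical dependency renamed a method between versions:

```python
class ClientV11:
    """Stands in for the dependency at version 1.1."""
    def fetch(self):
        return "data"

class ClientV12:
    """Stands in for version 1.2, which renamed fetch() to get()."""
    def get(self):
        return "data"

def load(client):
    # Introspect the API rather than requiring one specific version.
    if hasattr(client, "get"):
        return client.get()
    return client.fetch()

print(load(ClientV11()), load(ClientV12()))  # data data
```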

Even without this flexibility, you could perhaps create a point release of a library to add compatibility with current versions of dependencies.

Here's my suggestion for our build tool:

  1. Libraries should be flexible about the release versions of dependencies they build against. (On the other hand, applications - the leaves of the dependency graph, that nothing else depends on - should pin very specific versions of dependencies.)
  2. If we're not running hordes of unit tests on every single push, but instead running a full test suite only on a release, we can use some of those test farm resources to "try out" combinations of library dependencies. Even if this doesn't find solutions, it can give developers information on what breaks in different scenarios, before they come to need it.

In short, we should try to encourage solutions to dependency hell problems to exist, and then precompute those solutions.

The build tool itself would effectively write the requirements.txt that describes what works with what.
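A sketch of that precomputation (the package names, versions and known-good pairs are all invented): enumerate candidate combinations, keep those in which every pairing has already passed tests on the farm, and emit the survivors as pins:

```python
import itertools

# Pairs the test farm has already shown to work together (invented data).
known_good = {
    ("A-2.0", "C-1.0"), ("A-2.0", "C-1.1"),
    ("B-3.0", "C-1.1"), ("B-3.0", "C-1.2"),
}

def solutions(a_versions, b_versions, c_versions):
    """Yield (A, B, C) combinations in which every pairing is known-good."""
    for a, b, c in itertools.product(a_versions, b_versions, c_versions):
        if (a, c) in known_good and (b, c) in known_good:
            yield a, b, c

for combo in solutions(["A-2.0"], ["B-3.0"], ["C-1.0", "C-1.1", "C-1.2"]):
    # Write the combination out as requirements.txt-style pins.
    print("\n".join(pkg.replace("-", "==") for pkg in combo))
# A==2.0
# B==3.0
# C==1.1
```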

Combining these

These ideas come together very nicely into a single view of reasoning about versioned releases of code.

  • You can see what versions of dependencies are available.
  • Query the system for dependency hell solutions.
  • See whether dependency hell solutions push you into the territory of having to use out-of-date code.
  • See where effort needs to be spent to add compatibility with or port to newer library versions.

Maybe this system could show changelog information as well, for better visibility of what is going on to cause version conflicts and test failures.

I can't say for sure whether this system would work, because as far as I know it has not yet been built. But given the wealth of pain I've always felt as a Python developer in organisations that are embracing monorepos, I long for the comfort and convenience of open-source Python development, where there's no monorepo pain. I hope we can work towards doing that kind of development at scale inside large organisations.

Chopsticks - a Pythonic orchestration library

In my current role, we've been using Ansible for our orchestration and configuration management. Ansible is okay, but after several months wrestling with its extension API and being frustrated by YAML syntax ideas started popping into my head for something better.

Chopsticks (docs) represents my vision of Pythonic orchestration. It's not an orchestration framework in itself. It's more the transport layer for other orchestration systems. It's a remote procedure call (RPC) system that relies on no agent on the remote host: the agent is built dynamically on the remote host by feeding code over the SSH tunnel to the system Python.

For example, you can create an SSH Tunnel to a remote host:

from chopsticks.tunnel import Tunnel
tun = Tunnel('remote.example.com')  # substitute a host you can SSH to

Then you can pass a function (any pickleable Python function) to be called on the remote host. Here I'm just calling the standard time.time() function.

import time
print('Time on %s:' % tun.host, tun.call(time.time))

Of course, you might want to do this in parallel on a number of hosts, and this is also built-in to Chopsticks:

from chopsticks.group import Group
from chopsticks.facts import ip

group = Group(['web1.example.com', 'web2.example.com'])  # example hostnames

for host, t in group.call(time.time).items():
    print('%s time:' % host, t)

for host, addr in group.call(ip).items():
    print('%s ip:' % host, addr)

Note that the code for chopsticks.facts does not need to be installed on the remote hosts. They will load it on demand from the orchestration host.

Effectively, Chopsticks gives you the ability to write Python programs that straddle a number of machines, all sharing a single codebase.
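The core trick can be sketched in a few lines. This is only an illustration of the idea - a local subprocess stands in for the remote host, and this is not Chopsticks' actual wire protocol:

```python
import pickle
import subprocess
import sys
import time

# Bootstrap fed to a bare interpreter: read a pickled callable from stdin,
# call it, and pickle the result back over stdout.
BOOTSTRAP = (
    "import pickle, sys;"
    "func = pickle.load(sys.stdin.buffer);"
    "pickle.dump(func(), sys.stdout.buffer)"
)

def call_remote(func):
    """Run a picklable zero-argument callable in a fresh interpreter."""
    proc = subprocess.Popen(
        [sys.executable, "-c", BOOTSTRAP],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    out, _ = proc.communicate(pickle.dumps(func))
    return pickle.loads(out)

print("Time in the other interpreter:", call_remote(time.time))
```

Swap the subprocess for a pipe to a Python on another machine and you have the shape of an agentless transport; the hard parts Chopsticks solves on top are importing arbitrary code on demand and multiplexing many hosts.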

See the README for a summary of how this works.

SSH tunnels are not the only connection type Chopsticks supports. It communicates over stdin/stdout pipes, so it can work with any system that supports these without interference - such as Docker (this is on Github, but not PyPI yet):

from chopsticks.tunnel import Docker
from import Group
from chopsticks.facts import python_version

group = Group([
    Docker('worker-1', image='python:3.4'),
    Docker('worker-2', image='python:3.5'),
    Docker('worker-3', image='python:3.6'),
])

for host, ver in group.call(python_version).items():
    print('%s Python version:' % host, ver)

Why "Chopsticks"?

Chopsticks gives fine control at a distance - like chopsticks do.

Chopsticks vs ...

It's natural to draw comparisons between Chopsticks and various existing tools, but I would point out that Chopsticks is a library, not an orchestration framework, and I'd invite you to think whether other tools could benefit from using and building on it.


Perhaps the immediate comparison is with Ansible, because it is frustrations with this that inspired Chopsticks.

Ansible feels a lot like Bash scripting across hosts, but in a warty YAML syntax. So first and foremost, I'm attracted to the idea of describing plays in nice, clean Python code. Python code is also more easily testable, and there are great documentation tools you can use.

Ansible's remote execution model involves dropping scripts, calling them, and deleting them. In Ansible 2.1, some of Ansible's support code for Python-based Ansible plugins gets shipped over SSH as part of a zipped bundle; but this doesn't extend to your own code extensions. So Chopsticks is more easily and naturally extensible: write your code how you like and let Chopsticks deal with getting it running on the remote machine.


Fabric is perhaps more similar to Chopsticks - it's a thin framework around the SSH transport, that allows scripting across hosts in Python syntax.

The big difference between Fabric and Chopsticks is that Fabric will only execute shell commands on the remote host, not Python callables. Of course you can drop Python scripts and call them, but then you're back in Ansible territory.

The difference in concept goes deeper: Fabric tries to be "of SSH", exploiting all the cool SSH tunnelling features. Chopsticks doesn't care about SSH specifically; it only cares about Python and pipes. This is what allows it to work identically with Docker as with remote SSH hosts.


As I was sharing Chopsticks on the Twitters, people pointed out the similarity to execnet, which I had not heard of.

Chopsticks is similar to execnet, but from what little I've read, execnet works in a very different way (by shipping selected code fragments), and does not allow importing arbitrary code from the orchestration host (ie. it lacks full import hooks).

Future of Chopsticks

Chopsticks is open source under the Apache 2 license, and at the time of writing, is at a very early stage - barely more than a proof-of-concept - but under very active development.

It currently has support for:

  • SSH, Docker and subprocess tunnels
  • Python 2.6-2.7 and 3.3-3.6
  • Parallel execution
  • Error handling
  • Proxying of stderr (with hostname prepended)

It needs:

  • Tests
  • Send/receive file streams
  • Higher-level orchestration functions

If Chopsticks looks interesting to you, I'd appreciate your feedback, and I welcome any pull requests.

Updated 2016-07-24: Updated to reflect improvements since original posting.

Pyweek 21 is one week away, 28th February-6th March 2016

The next Pyweek competition will run from 00:00 UTC on Sunday 28th February to 00:00 UTC on Sunday 6th March. That's next week!

Pyweek is a week-long games programming competition in which participants are challenged to write a game, from scratch, in a week, in Python.

This week, we vote on themes! The possible themes can be interpreted however you like:

  • Jump in line - you could do a Dance Dance Revolution-style game with cute animals
  • Showtime! - perhaps a business sim in which you run a TV network?
  • The aftermath - clean up the mess from a house party before your parents get home.
  • The incantation - players enter a TV talent show to prove they are the best spell-caster. Each week one is voted off!
  • In the model - this naturally lends itself to a totally awesome game in which you have to develop a Python script with sklearn to solve exciting big data problems! With cute animals!

But seriously, if these theme ideas get your creative juices flowing, and you have spare time to write a game next week, why not register an entry and give it a go?

Get a Kanban! (or Scrum board)

I continue to be staggered at the effectiveness, as a process management technique, of simply sticking cards representing tasks onto a whiteboard. Whatever your industry or project management methodology, the ability it offers to visualise the flow of work is immensely powerful. It lets us plan the current work and the near future to maximise our productivity.


It's valuable whether you're working on your own or working as a team. When working as a team, it can be used to schedule work among team members. When on your own, it merely helps with clarity of thought (we'll look at why a little later).

Yet this is largely unknown outside of software development. All sorts of industries would benefit from this approach, from farming to law.


There's lots of variation in the terminology around kanbans, so let me lay out the terms as I use them.

The idea of a kanban originates in manufacturing in Japan. The word itself means sign board and refers to the board itself. Specific processes built around a kanban are called kanban methodologies. Scrum calls the kanban a "Scrum Board" and naturally there are all manner of other terms and practices for using a similar approach in other methodologies too.

Onto the kanban we need to stick cards representing tasks - small pieces of work that are easy to pick up and get done. Sometimes tasks will relate to bigger projects. Some call these bigger projects epics, and may use additional cards to represent the relationship of tasks to epics.

A backlog is the totality of the work yet to do (and again, terms differ; some practices may exclude work that is already scheduled).

How to run a kanban

First of all, get yourself a real, physical whiteboard. If you can get a magnetic whiteboard, you can stick task cards to it with magnets, which is nice and clean. But otherwise your tasks can be cards stuck to the board with blu-tak, or post-it notes. I favour index cards of a weighty paper density, about the size of your hand when flat. This lets you write large, clear letters on them, which are easier to see from a distance, and they are somewhat resistant to being scuffed as you stack them into a deck and riffle through it.

Next, you need to come up with your backlog. If you're in the middle of a piece of work, you can start by braindumping the current state. Otherwise, get into a quiet room, with the appropriate people if necessary, and a stack of index cards, and write out cards, or break them down, or tear them up, until you have a set of concrete tasks that will get you to your goal. Make sure everyone agrees the cards are correct.

The cards can include all kinds of extra information that will help you plan the work. For example, you might include deadlines or an estimate (in hours, days or your own unit - I like "ideal hours").


Sometimes tasks are easy to describe on a card but if you were to pick up the card as something to work on, it wouldn't be immediately obvious where to start. These should be broken down into smaller pieces of work during this planning phase. This allows you to see with better granularity how much of the large piece of work is done. I like tasks that are of an appropriate size for each person to do several of them in a week. However, it's OK to break down the card into smaller tasks later if the task is probably going to be something to tackle further in the future.

Now, divide the whiteboard into columns. You will need at least two: something like backlog, and in progress. But you could have many more. Kanban is about flow. Tasks flow through the columns. The flow represents the phases of working on a task. You might start by quoting for work and finish by billing for it. Or you might start by making sure you have all the raw materials required and finish by taking inventory of materials used.


None of these practices are set in stone - you can select them and reselect them as your practices evolve. For example, you could focus on longer-range planning.


So with your whiteboard drawn, you can put your tasks on the board. Naturally many of your cards may not fit, so you can keep your backlog stack somewhere else. Choosing what to put on the board becomes important.

Now, just move the cards to reflect the current state. When a task is done, you update the board and choose the next most valuable task to move forward. You might put initials by a card to indicate who is working on it.

Visit the kanban regularly, as a team. Stop and replan frequently - anything from a couple of times a week up to a couple of times a day - especially when new information becomes available. This might involve pulling cards from the backlog onto the board, writing new cards, tearing up cards that have become redundant, and rearranging the board to reprioritise. Make sure the right people are present every time if possible.

Less frequently you might make a bigger planning effort: pick up all the cards from your backlog pile or column, and sit down again with the relevant people to replan these and reassess all their priorities. Some of the cards may be redundant and some new work may have been identified.

The value of the kanban will then naturally begin to flow:

  • Higher productivity as you see how what you're working on fits into a whole
  • A greater ability to reschedule - for example, to park work in progress to tackle something urgent
  • Team collaboration around tasks that seem to be problematic
  • Estimates of when something might get done or which deadlines are at risk


A physical whiteboard seems to be very important. A lot of the practices don't seem to evolve properly if you use some sort of digital version of a kanban. There are lots of reasons for this. One obvious one is that physical whiteboards offer the ability to annotate the kanban with little hints, initials, or whatever. Another one is that an online whiteboard doesn't beg to be looked at; a physical whiteboard up in your workplace is something to notice frequently, as well as offer a designated place to get away from a screen and plan work.

Naturally, having a physical whiteboard is only possible if your team is not geographically distributed. Geographically distributed teams are challenging for a whole host of reasons, and this is just one. A digital version of a kanban may be a good approach in those cases. Or perhaps frequent photos of a physical whiteboard elsewhere in the world can help to keep things in sync.

Readability from a distance helps get value from your kanban. Write in capital letters because these are more readable from a distance. Use a broad felt pen. Use differently coloured index cards or magnets to convey additional information.

It's somewhat important to ensure that the kanban captures all streams of work. There's a tendency to think "This isn't part of the project we're planning; let's not get distracted by it". But that reduces the value of the kanban in tracking what is actually happening in your workflow. Obviously, different streams of work can be put in a different place on the kanban, or use differently coloured cards.

You can also track obstacles to delivering work on the board. I like to reserve red cards to indicate obstacles. Removing those obstacles may require work!

Why Kanbans work

Kanbans are certainly a form of process visualisation. Being able to visualise how tasks are flowing lets you spot problems in the process, such as too much work building up that only a certain team member can do. You can design workarounds to problems like this right there on the kanban, too.

Stepping back from this, the reason I've found having a kanban useful even for solo work may be related to the psychological idea of transactive memory, where we use our memory not as a primary store of information, but as an index over other stores of information, such as those in other people's heads, or on paper. The model of thought is then very much like a database transaction - we might "read" a number of facts from different sources into working memory, generate some new insight, and "write" that insight back to an external source.

By committing our understanding of our backlog of work to index cards, we can free our memories to focus on the task at hand. And when that task is done, we waste no time in switching back to a view of our workflow that can tell us immediately "what's next". Or say we encounter new information that we suspect affects something in the backlog - being able to go straight back to that card and recover exactly how we defined the task turns out to be useful: it allows us to quickly assess the impact of new information to our existing ideas and plans.

The final reason I believe kanbans work so well is that both the kanban and the stack of cards that represent your backlog are artifacts that are constructed collaboratively in a group. Taking some concrete artifact out of a meeting as a record of what was said cuts down a lot on misremembered conclusions afterwards. Some people try to take "action points" out of meetings for the same reason, and then quote them back to everyone by e-mail afterwards. This doesn't seem to work as well - I often find myself thinking "I don't recall agreeing that!" One reason for this is that the record of the action points is not written down for all to see and approve/veto, but a personal list written by the person taking the minutes.

Writing tasks out on index cards in front of people, and reading them out repeatedly or handing them around (or laying them out on the table for people to move around and reorganise - related in principle to CRC Cards), means that everyone gets a chance to internalise or reject the wording on the card.

Similarly, the organisation of kanban is not only a concrete artifact that is modified with other people standing around: it is ever-present to consult and correct. Nobody can have an excuse to leave the kanban in an incorrect state. Thus the kanban is a reliable source of truth.

So whatever your industry, whatever your process methodology, set yourself up a kanban and give it a try. Happy kanbanning!

Pygame Zero 1.1 is out!

Pygame Zero 1.1 is released today! Pygame Zero is a zero-boilerplate games programming framework for education.

This release brings a number of bug fixes and usability enhancements, as well as one or two new features. The full changelog is available in the documentation, but here are a couple of the highlights for me:

  • A new spell checker will point out hook or parameter names that have been misspelled when the program starts. This goes towards helping beginner programmers understand what they have done wrong in cases where normally no feedback would be given.
  • We fixed a really bad bug on Mac OS X where Pygame Zero's window can't be focused when it is installed in a virtualenv.
  • Various contributors have contributed open-source implementations of classic retro games using Pygame Zero. This is an ongoing project, but there are now implementations of Snake, Pong, Lunar Lander and Minesweeper included in the examples/ directory. These can be used as a reference or turned into course material for teaching with Pygame Zero.
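As an aside, the heart of the spell-checking idea can be sketched with difflib from the standard library. The hook list and messages here are illustrative, not Pygame Zero's actual implementation:

```python
import difflib

KNOWN_HOOKS = ["draw", "update", "on_mouse_down", "on_key_down"]

def check_hooks(names):
    """Warn about top-level names that look like misspelled hooks."""
    warnings = []
    for name in names:
        if name not in KNOWN_HOOKS:
            close = difflib.get_close_matches(name, KNOWN_HOOKS, n=1)
            if close:
                warnings.append("%s: did you mean %s?" % (name, close[0]))
    return warnings

print(check_hooks(["drow", "update", "score"]))  # ['drow: did you mean draw?']
```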

Pygame Zero was well-received at Europython. Carrie-Anne Philbin covered Pygame Zero in her keynote; I gave a lightning talk introducing the library and its new spellchecker features; and the sprints introduced several new collaborators to the project, who worked to deliver several of the features and bugfixes that are being released today.

A big thank you to everyone who helped make this release happen!

Pyweek 20 announced, 9th-15th August 2015

The next Pyweek competition has been announced, and will run from 00:00 UTC on Sunday 9th August to 00:00 UTC on Sunday 16th August.

Pyweek is a week-long games programming competition in which participants are challenged to write a game, from scratch, in a week. You can enter as a team or as an individual, and it's a great way to improve your experience with Python and express your creativity at the same time.

If writing a game seems like a daunting challenge, check out Pygame Zero, a zero-boilerplate game framework that can help you get up and running more quickly.

Due to various circumstances this has been delayed somewhat, and is now being announced at somewhat short notice. Be aware that this means that theme voting begins this Sunday, 2nd August.

Pygame Zero, a zero-boilerplate game framework for education

Pygame Zero (docs) is a library I'm releasing today. It's a remastering of Pygame's APIs, intended first and foremost for use in education. It gives teachers a way to teach programming concepts without having to explain things like game loops and event queues (until later).

Pygame Zero was inspired by conversations with teachers at the Pycon UK Education Track. Teachers told us that they need to be able to break course material down into tiny steps that can be spoon-fed to their students: our simplest working Pygame programs might be too complicated for their weakest students to grasp in a single lesson.

They also told us to make it Python 3 - so this is Python 3 only. Pygame on Python 3 works [1] already, though there has been no official release as yet.

A Quick Tour

The idea is that rather than asking kids to write a complete Pygame program including an event loop and resource loading, we give them a runtime (pgzrun) that is the game framework, and let them plug handlers into it.

So your first program might be:

def draw():
    screen.fill((128, 0, 0))  # paint the whole window red

That's the complete program: screen is a built-in and doesn't have to be imported from anywhere. Then you run it with the pgzrun runner, passing the name of your script:

pgzrun mygame.py
Image loading is similarly boilerplate-free; there are a couple of ways to do it but the one we recommend:

# Load images/panda.png (or .jpg) and position its center at (300, 200)
panda = Actor('panda', pos=(300, 200))

def draw():
    panda.draw()

More appropriate for sounds and static images, the images/ and sounds/ directories appear as built-in "resource package" objects:

def draw():
    screen.blit(images.background, (0, 0))

def on_mouse_down():
    sounds.eep.play()

We use introspection magic to call the event handlers in the script with whatever arguments they are defined to take. Each of the following will "do the right thing":

def on_mouse_down(pos):
    print("You clicked at", pos)
def on_mouse_down(button):
    print("You clicked the", button, "mouse button")
def on_mouse_down(button, pos):
    print("You clicked", button, "at", pos)
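The introspection trick itself can be sketched in a few lines (an illustration of the approach, not Pygame Zero's actual dispatch code):

```python
import inspect

def dispatch(handler, **available):
    """Call `handler` with only the arguments its signature asks for."""
    wanted = inspect.signature(handler).parameters
    return handler(**{name: available[name] for name in wanted})

def on_mouse_down(pos):
    return "You clicked at %s" % (pos,)

print(dispatch(on_mouse_down, pos=(300, 200), button=1))
# You clicked at (300, 200)
```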

Batteries Included

Pygame Zero is also useful for more seasoned developers. Though the APIs have been designed to be friendly to novices, they also help you get up-and-running faster with a larger project. The framework includes a weakreffing clock, a property interpolation system, and a built-in integration of Christopher Night's pygame-text. These are the kinds of things you want in your toolkit no matter how expert you are.

It's not hard to "reach behind the curtain" into Pygame proper, when you outgrow the training wheels offered by Pygame Zero.

Portable and distributable

I've discovered in previous Pyweeks that sticking to Pygame as a single dependency is just the simplest way to distribute Python games. OpenGL may offer better performance, but users frequently encounter platform and driver bugs. The AVBin library used by Pyglet has been buggy in recent Ubuntu versions. So Pygame gives much better confidence that others will be able to play your game exactly as intended.

Pygame Zero has been built to constrain where users store their images, sounds and fonts. It also disallows them being named in a way that will cause platform portability problems (eg. case sensitivity).

Hopefully this will help schoolkids share their Pygame Zero games easily. I'd be interested in pursuing this to make it even easier, for example for users without Python installed, or with a hosting system like a simplified pip/PyPI.


I'd welcome your feedback if you are able to install Pygame Zero and try it out. It is, of course, on my Bitbucket if you would like to submit pull requests. If you would like to get involved in the project, writing easy-to-understand demos and tutorials would be much appreciated; or there's a short feature wishlist in the issue tracker.

[1] There's a bit of a showstopper bug in Pygame for Python 3 on some Macs - but it's not reproducible on the Mac I have to hand right now. If anyone can help get this fixed it would be enormously appreciated.

New, free Python Jobs board

Recently we've been on a recruitment drive, trying to fill a number of roles for experienced Python developers. The jobs board has been frozen for a while, so to assist us in meeting new candidates we tossed around ideas for a free, community-run jobs board: it would have to be a static site; it would have to be on Github; employers should be able to list a job just by submitting a pull request. And then Steve went ahead and wrote it!

Please, please bookmark it, tweet it, reblog it, even if you're not looking for a job right now. It only works if it gets eyeballs. And of course, it's completely free, for everyone, forever. It's by Pythonistas, for Pythonistas.

If you are hiring (and are not a recruitment agent), knock up a Markdown file describing the role you're looking to fill (plus some metadata) and send us a pull request. Instructions are in the Github README.

We'll accept job listings from anywhere in the world. Sure it's not very easy to navigate by region yet. That may be the next job. Perhaps you could help out - pull requests don't have to be limited to new job postings, hint, hint! (Build machinery/templates are in this repo).

On a personal note I want everyone in this community to be employed, happy, and making a comfortable living. Perhaps this site can help make that happen? I'd love to hear your feedback/experiences; use the Disqus gizmos below.

Update 9:45pm UTC: Talking to the team, I discover I'm mistaken: we're actually going to allow recruiters to post job opportunities, providing they do all the work in sending us a pull request and include full relevant details such as the identity of the employer.