Battleships AIs
Of the projects we have tackled at the Reading Python Dojo, one of my favourites was programming AIs for the game Battleships, which came up in January 2013.
The dojo is not usually a competition, but in this case we waived that principle and split into two teams, each writing an AI to play off against the other. I set down the basic Battleships rules that we would compete under:
- The grid is 10x10.
- Each team has the following ships:

| Ship             | Length |
|------------------|--------|
| Aircraft carrier | 5      |
| Battleship       | 4      |
| Submarine        | 3      |
| Destroyer        | 3      |
| Patrol boat      | 2      |
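For anyone tempted to write an AI of their own, the fleet above boils down to a tiny data structure. The mapping below is just an illustrative sketch, not taken from the dojo scripts:

```python
# The fleet from the rules table above, keyed by ship name.
# This representation is illustrative only.
SHIPS = {
    "Aircraft carrier": 5,
    "Battleship": 4,
    "Submarine": 3,
    "Destroyer": 3,
    "Patrol boat": 2,
}
```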
The teams were not tasked with drawing a board or placing the ships. We simply drew the grids up on the whiteboard, manually placed the ships, and then had the computers call the moves. The computers were given feedback on whether each shot had hit, missed, or hit and sunk a ship.
Team A's AI was entirely deterministic, sweeping the grid in a checkerboard pattern from the bottom-right corner to the top left until it scored a hit, at which point it would strafe along each possible orientation of the ship in turn until the ship was sunk. It would then resume the sweep from where it had left off.
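A minimal sketch of that sweep order (my own reconstruction, not the dojo code) is below; it assumes row 0 is the top of the grid. Only every other square needs visiting, because the shortest ship is two squares long and so must cover one square of each checkerboard colour, which is also why the worst case is 50 moves.

```python
GRID = 10

def checkerboard_sweep():
    """Yield shots from the bottom-right corner towards the top left,
    visiting only one colour of the checkerboard (50 squares in all)."""
    for y in range(GRID - 1, -1, -1):        # bottom row first
        for x in range(GRID - 1, -1, -1):    # right-hand column first
            if (x + y) % 2 == 0:             # keep one checkerboard colour
                yield (x, y)

# The strafing-on-hit logic is omitted; this shows only the sweep ordering.
```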
Team Alpha's AI was stochastic, choosing grid squares at random until it scored a hit, then working outward like a flood fill to carpet-bomb the ship completely. A square surrounded by misses on all sides cannot contain a ship, since every ship is at least two squares long and would have to extend into one of those misses, so the AI would never pick such a square.
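For comparison, here is a rough sketch of the random-plus-flood-fill idea. It is not the dojo script; the class and method names are mine.

```python
import random

GRID = 10

class RandomFloodAI:
    """Sketch of Team Alpha's approach: shoot at random until a hit,
    then flood outwards from hits; never pick a square whose
    neighbours are all known misses."""

    def __init__(self):
        self.untried = {(x, y) for x in range(GRID) for y in range(GRID)}
        self.misses = set()
        self.frontier = []  # squares adjacent to known hits

    def neighbours(self, sq):
        x, y = sq
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < GRID and 0 <= ny < GRID:
                yield (nx, ny)

    def surrounded_by_misses(self, sq):
        return all(n in self.misses for n in self.neighbours(sq))

    def next_shot(self):
        # Prefer squares adjacent to known hits (the flood-fill behaviour).
        while self.frontier:
            sq = self.frontier.pop()
            if sq in self.untried:
                return sq
        # Otherwise shoot at random, skipping squares ringed by misses.
        candidates = [s for s in self.untried
                      if not self.surrounded_by_misses(s)]
        return random.choice(candidates or list(self.untried))

    def feedback(self, sq, hit):
        self.untried.discard(sq)
        if hit:
            self.frontier.extend(self.neighbours(sq))
        else:
            self.misses.add(sq)
```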
On the night, Team A finally won, after some astonishingly unlucky misses from Team Alpha. Team Alpha did benefit from Team A's worst-case performance, having luckily placed a ship in the top-left corner of the grid, where Team A's sweep would take the maximum 50 moves to find it. The randomness of Team Alpha's AI injected a tension that at any point it could stumble across Team A's last ship and win, even as Team A's AI swept inexorably towards Team Alpha's final ship in the corner.
After the dojo I began to wonder just how often Team A's AI would beat Team Alpha's; Team Alpha could get lucky and find all of Team A's ships more rapidly than Team A could sweep the grid. To answer the question I wrote BattleRunner, which runs the unmodified AI scripts as subprocesses over thousands of games, albeit with a simple random ship placement strategy. It was actually my first Twisted program! While I normally use gevent for async I/O, I hit a snag very early on with Python's output buffering and wondered whether switching to Twisted would solve it. It didn't; the solution was to call python with -u, which forces the child process's output to be unbuffered (the alternative was to modify the AI scripts, which I was keen not to do).
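BattleRunner itself isn't reproduced here, but the shape of the fix is easy to sketch. The snippet below is illustrative only: team_a.py and the one-line-per-move protocol are assumptions, not the real BattleRunner interface. The important part is the -u flag, which keeps the child's stdout unbuffered so each move arrives as soon as the AI prints it.

```python
import subprocess

# Illustrative sketch only: "team_a.py" and the line-based protocol
# are hypothetical, not the real BattleRunner interface.
proc = subprocess.Popen(
    ["python", "-u", "team_a.py"],   # -u: unbuffered output from the child
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

move = proc.stdout.readline().strip()  # the AI announces its next shot
proc.stdin.write("miss\n")             # report the result back to the AI
proc.stdin.flush()
```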
The answer is that Team A beats Team Alpha about 64% of the time; Team Alpha wins 36% of the time.
BattleRunner also let me test improvements to the AIs; I was able to add an optimisation to how Team A's AI detects a ship's orientation. The improved Team A+ AI beats the original AI 54% of the time (and loses 46% of the time) - a small but significant improvement.
Perhaps you can do better than Team A's AI! There are lots of optimisations left to be had. Why not clone the repo and give it a try?