Poor Bear Update 4: Collision Detection

I have been working on adding tricks to PoorBear over the last week. Trevor has sent us a ton of crazy animations for tricks (I will try to throw up a video preview of some of them soon), and as a result I was in desperate need of a way to generate collision verts other than plotting them by hand (yeah, I plotted and translated the verts for one animation by hand and it took about an hour). I will go over the method I used to solve this problem.

The problem

I am using the Chipmunk physics engine in PoorBear, with very basic collision shapes for all objects, to try to keep it as fast as possible. You can make very complex objects with Chipmunk by combining many circles and polygons, but I feel that a single convex poly will provide accurate enough collision for this particular game. So, PoorBear’s body and scooter are represented with seven verts, depicted by the orange dots below:

Up until we decided to add tricks to the game, these vertices were all that was needed to provide collision detection for PoorBear’s body and scooter. The only variation in the animation was the movement of his scarf, which didn’t need any collision detection, so the verts remained static. Since we want to be able to do some pretty crazy tricks, these verts are no longer sufficient because the trick animations aren’t close to the shape of these verts. For example, this is a frame from one of the tricks:

As you can see, not only is it a totally different shape, but the image is a different size, and the wheels and shocks of the scooter are also included. This particular animation has ten frames, and each frame is different enough to warrant unique collision verts. We have around 15 different trick animations with as many as 40 frames each, which is why it became unrealistic to plot the verts by hand.

I browsed the internet for advice on how to solve this problem, but being new to game development I wasn’t even sure what to search for. While I was working on the level editor for PoorBear, someone mentioned to me that I could use a single-color image to generate terrain, so I started thinking about how I could apply something like that to this problem. The solution turned out to be simpler than I had imagined and has saved me a ton of time.

The solution

My solution was to open each frame in Photoshop, create a new blank layer, and draw a 1px black (could be any color) dot where I wanted each vert to be. Then simply hide the original layer and save the layer with the dots to a file. Then I wrote a little script in my scripting language of choice, Python, to grab each pixel and convert it to coordinates I can use with the physics engine. I have since extended the script so that I can generate the verts for multiple frames and animations at once, because all the animations for PoorBear are compiled into 1024×1024 images to cut down on the number of texture swaps. However, I will just go over the basic steps of generating verts for a single image.
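For reference, the multi-frame extension mostly amounts to cropping each frame out of the atlas and running the same pixel scan on it. Here is a minimal sketch of that idea, assuming the frames sit on a fixed grid inside the atlas (the grid layout, frame size, and function name are placeholders of mine, not the actual extended script):

from PIL import Image

FRAME_W, FRAME_H = 150, 150  # hypothetical frame dimensions

def grab_points_per_frame(path):
    atlas = Image.open(path)
    frames = []
    for top in xrange(0, atlas.size[1], FRAME_H):
        for left in xrange(0, atlas.size[0], FRAME_W):
            # crop out one frame so its pixel coordinates start at (0, 0)
            frame = atlas.crop((left, top, left + FRAME_W, top + FRAME_H))
            pixels = frame.load()
            points = [[x, y] for x in xrange(FRAME_W)
                             for y in xrange(FRAME_H)
                             if pixels[x, y][0] == 0]
            if points:  # skip empty cells in the atlas
                frames.append(points)
    return frames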

Below is a sample of the previous image with several points drawn to form a loose polygon for collision detection. The points are overlaid on the image so that it is obvious what they represent, and they are drawn large for the sake of clarity. In the real image, the red dots have to be 1px.

Once the verts are pulled out of the image and converted to something the physics engine can understand, they will represent something like the following in the game:

The script

There is a great image-handling module for Python called the Python Imaging Library (PIL), which is required for this code to work.
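One gotcha worth mentioning: pixels[x, y] only returns an (R, G, B) tuple when the image is in RGB mode; a PNG saved with an alpha channel or as a palette image will hand back differently shaped values. A small defensive sketch (this check is my own addition, not part of the original script):

from PIL import Image

image = Image.open("/path/to/image/animation0000.png")
if image.mode != "RGB":
    # normalize RGBA or palette images so pixels[x, y]
    # is always a plain (R, G, B) tuple
    image = image.convert("RGB")
pixels = image.load()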

First we need to open the image saved from Photoshop and pull out the pixels that were drawn. This can be done with the following code:

def grab_points():
    image = Image.open("/path/to/image/animation0000.png")
    pixels = image.load()
    width = image.size[0]   # the size property is a tuple (width, height)
    height = image.size[1]
    points = []

    for x in xrange(width):
        for y in xrange(height):
            # pixels[x, y] returns the tuple (R, G, B); a red channel of 0
            # matches our black dots (we could just as easily detect any color)
            if pixels[x, y][0] == 0:
                points.append([x, y])

    return points

This code opens the image, loads the pixel data into memory, and steps through each pixel, adding the ones with a red channel value of 0 (the black dots) to the points list. We could have also used a list comprehension, which most likely runs more efficiently but is considerably less readable. This is what it would look like:

def grab_points():
    image = Image.open("/path/to/image/animation0000.png")
    pixels = image.load()

    points = [[x, y] for x in xrange(image.size[0])
                     for y in xrange(image.size[1])
                     if pixels[x, y][0] == 0]
    return points

Now that we have the location of all the pixels we drew in Photoshop, we need to convert them to something the physics engine can understand. Getting the pixel data was very simple thanks to PIL, and at this step the points could be used with any physics engine given the right translations. These next steps will be more and more specific to my situation (Chipmunk physics and the iPhone) but can be adjusted to almost any project.

Chipmunk expects the verts to be in clockwise order and to form a convex poly. Currently, the verts are ordered by their x value. Given the image below, we need the verts in the order ABCDE but they are in the order ABECD right now.

I developed a simple algorithm which arranges the verts in the correct order. It has four basic steps:

1. Iterate over all verts, excluding the first and last
2. Remove the verts whose y value is greater than half the height of the image (the bottom half, since image y grows downwards), saving them in a temporary list
3. Reverse the order of the temporary list
4. Append the temporary list onto the end of the original list

This is the code that does that:

def sort_points(points):
    length = len(points) - 1
    temp = []

    i = 1
    while i < length:
        y = points[i][1]
        if y > HEIGHT / 2:  # HEIGHT is the pixel height of the image
            temp.append(points.pop(i))
            # we are editing the list in place; since we popped
            # a value, decrement the length
            length -= 1
        else:
            i += 1

    temp.reverse()
    points.extend(temp)

At this point, we have pulled the pixel data out of the original image and sorted the points into an order the physics engine will understand. Now we just need to translate the points to the coordinate system used by the physics engine. The pixels were stored linearly, with the first pixel representing the top-left corner of the image and the last the bottom-right. This can logically be thought of as a coordinate system with the origin in the top left and the positive y-axis growing downwards. Chipmunk uses the traditional coordinate system, with the origin located in the center (of the image, in our case) and the y-axis growing upwards. We just need to loop back over every point and transform it into coordinates Chipmunk understands. This code will do that:

def transform_points(points):
    for point in points:
        x = point[0]
        y = point[1]
        point[0] = x - OFFSET_X  # shift the origin to the image center
        point[1] = OFFSET_Y - y  # ...and flip the y-axis so it grows upwards
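To make the transform concrete: with a 150×150 image (so OFFSET_X and OFFSET_Y are both 75), the pixel at (10, 20) near the top-left corner maps to (-65, 55), and the pixel at (140, 130) near the bottom-right maps to (65, -55).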

Now we have the list in an order and format that Chipmunk can use. Since PoorBear runs on the iPhone, I just format this data as a multidimensional C array and copy/paste it into the code for the game. There are better ways to get the data over, but copy/pasting is good enough for now.
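That formatting step is just string building. Here is a rough sketch of what it could look like (the function name and the float-array shape are my own; the real script's output format may differ):

def format_as_c_array(points, name="frame0Verts"):
    # turns [[-65, 55], [40, 70]] into:
    # float frame0Verts[2][2] = { {-65.0f, 55.0f}, {40.0f, 70.0f} };
    rows = ", ".join("{%.1ff, %.1ff}" % (x, y) for x, y in points)
    return "float %s[%d][2] = { %s };" % (name, len(points), rows)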

The full script is below; I just chained each function together for simplicity.

from PIL import Image

OFFSET_X = 75  # image width / 2
OFFSET_Y = 75  # image height / 2

def grab_points():
    image = Image.open("/path/to/image/animation0000.png")
    pixels = image.load()

    points = [[x, y] for x in xrange(image.size[0])
                     for y in xrange(image.size[1])
                     if pixels[x, y][0] == 0]

    sort_points(points)

def sort_points(points):
    length = len(points) - 1
    temp = []

    i = 1
    while i < length:
        y = points[i][1]
        if y > OFFSET_Y:
            temp.append(points.pop(i))
            length -= 1
        else:
            i += 1

    temp.reverse()
    points.extend(temp)

    transform_points(points)

def transform_points(points):
    for point in points:
        x = point[0]
        y = point[1]
        point[0] = x - OFFSET_X  # shift the origin to the image center
        point[1] = OFFSET_Y - y  # ...and flip the y-axis so it grows upwards

    # format the list if needed
    output = open('output.txt', 'w')
    output.write(str(points))
    output.close()

grab_points()
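Running the script dumps the transformed list to output.txt as plain Python, something like [[-65, 55], [65, -55], ...] (the actual values depend on your image), which can then be run through a formatter like the sketch above before being pasted into the game code.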