Advanced target visibility working

GuiltySpark now extracts the map’s BSP and performs ray casting for better detection of target visibility. In fact, it can determine whether the line between any two arbitrary points is obstructed by any part of the level (minus the scenery, at this point). The concept is simple: imagine the map as a number line from 1 to 10 instead of a 3D space, and suppose the two points I want to check are 2 and 5. The BSP works by recursively subdividing the space into two halves. One half holds the set {1, 2, 3, 4, 5} and the other holds {6, 7, 8, 9, 10}. Both points are in the first set, so half of the entire map can be ignored immediately. The next subdivision gives the sets {1, 2, 3} and {4, 5}. The points, 2 and 5, now lie in different sets, so we stop here: both {1, 2, 3} and {4, 5} may contain something between 2 and 5 that obstructs the ray. We can’t consider only the first set, because we would miss 4, and likewise we’d miss 3 if we considered only the second. We next consider all the smallest subdivisions of these two groups: {1}, {2}, {3}, {4}, {5}. These are called leaves, and they represent surfaces. A leaf’s surface may be located somewhere between 2 and 5, but that doesn’t guarantee it intersects the ray, so each leaf in the range {2} to {5} is checked for ray intersection.
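The number-line analogy can be sketched in a few lines of Python. This is a toy model only: Halo’s real BSP nodes store planes and child indices, not integer ranges, and `build`/`candidate_leaves` are names I made up for illustration.

```python
# Toy model of the number-line analogy: a BSP over the values 1..10, built by
# recursively halving the range. Leaves stand in for surfaces.
def build(lo, hi):
    if lo == hi:
        return lo                                     # leaf: one "surface"
    mid = (lo + hi) // 2
    return (mid, build(lo, mid), build(mid + 1, hi))  # (split, front, back)

def candidate_leaves(node, a, b):
    """Collect the leaves that could lie on the ray between points a and b."""
    lo, hi = min(a, b), max(a, b)
    if not isinstance(node, tuple):                   # reached a leaf
        return [node] if lo <= node <= hi else []
    mid, front, back = node
    if hi <= mid:                                     # both points on one side:
        return candidate_leaves(front, a, b)          # ignore the other half
    if lo > mid:
        return candidate_leaves(back, a, b)
    # the split divides the points, so both halves may hold an obstruction
    return candidate_leaves(front, a, b) + candidate_leaves(back, a, b)

print(candidate_leaves(build(1, 10), 2, 5))           # -> [2, 3, 4, 5]
```

Only the leaves {2} through {5} survive the descent, which is exactly the set that still needs a real intersection test.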

Last night was a total code-a-thon and I ended up writing all the ray-BSP intersection code. I didn’t get a chance to test it last night due to a couple of bugs, which I resolved just this morning at 11:50 (morning for me). One bug was forgetting to check whether a BSP node’s child indices are -1. I was already checking whether they had their 0x80000000 bit set, which indicates that they are a leaf containing BSP2D references rather than another BSP3D node, but an index of -1 means there is no node at all on that side of the plane. I think this comes from the way Halo creates its BSPs. When building a BSP, it’s difficult to know where to place the dividing planes, and I suspect the developers chose arbitrary polygons and used the planes they lie on. If such a polygon defines the exterior of the geometry at a concave part of the level, there’s a chance its plane won’t intersect any other part of the map, so one side simply faces outside the map. In practice this happens rarely, meaning Bungie chose a good heuristic. The next bug was not handling flagged plane indices in surfaces, which caused an out-of-bounds error when accessing my planes array. I’m not sure what a flagged plane index means, but clearing the flag bit seems to have no negative effect.
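In Python, my current understanding of the child-index decoding looks like this. The masks come from my own reversing notes, so treat them as assumptions rather than a confirmed format:

```python
def classify_child(index):
    """Decode a BSP3D child index: -1 means nothing on that side of the plane,
    the 0x80000000 flag marks a leaf of BSP2D references, and anything else
    indexes another BSP3D node. (Assumed encoding from my own notes.)"""
    index &= 0xFFFFFFFF                      # treat as an unsigned 32-bit value
    if index == 0xFFFFFFFF:                  # the -1 case that bit me
        return ("empty", None)
    if index & 0x80000000:
        return ("leaf", index & 0x7FFFFFFF)  # strip the flag to get the index
    return ("node", index)

def plane_index(raw):
    """Surfaces may carry a flagged plane index; clearing the flag bit keeps
    the lookup in bounds (I still don't know what the flag means)."""
    return raw & 0x7FFFFFFF
```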

In any case, the number of flagged planes is probably so low that any problems caused by it would only rarely affect GuiltySpark’s ability to determine if the target is visible or not. Worst case scenario: it shoots at a few walls accidentally. This new method is still a vast improvement over the old target visibility detection. The BSP is extracted in the blink of an eye, so GuiltySpark can do it during gameplay without any hiccups. I thought I would have to read Halo’s memory in large chunks to extract it quickly, but it turns out that wasn’t necessary. GuiltySpark extracts the BSP only once when the target visibility data source is initially requested by the AI, at which point it’s stored internally until you restart the AI. The actual calculations for the ray intersection are fast too. I found plenty of opportunities to prune my options and limit the amount of calculation required.

To calculate whether the ray intersects a polygon, I find the point of intersection between the ray and the polygon’s plane. I then project the polygon and the intersection point into 2D by dropping the dominant component of the plane’s normal, which guarantees the projection isn’t edge-on and the polygon keeps a usable shape. The Jordan curve theorem is used next: if an arbitrary ray from the intersection point crosses an even number of the polygon’s edges, the intersection point lies outside the polygon.
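Here’s a minimal Python sketch of that sequence, assuming planes are stored as n·x = d and polygons as an ordered vertex loop. The names and data layout are mine for illustration, not Halo’s:

```python
def ray_hits_polygon(origin, direction, normal, dist, vertices, eps=1e-7):
    """Does the ray from `origin` along `direction` pass through the polygon?"""
    # 1) Intersect the ray with the polygon's plane (plane: n.x = d)
    denom = sum(n * d for n, d in zip(normal, direction))
    if abs(denom) < eps:
        return False                        # ray is parallel to the plane
    t = (dist - sum(n * o for n, o in zip(normal, origin))) / denom
    if t < 0:
        return False                        # plane is behind the ray origin
    # (for a player-to-target segment, you would also reject t past the target)
    p = [o + t * d for o, d in zip(origin, direction)]

    # 2) Drop the dominant component of the normal; the 2D projection then
    #    cannot be edge-on, so the polygon keeps a usable shape
    drop = max(range(3), key=lambda i: abs(normal[i]))
    u, v = [i for i in range(3) if i != drop]
    poly2d = [(vert[u], vert[v]) for vert in vertices]
    px, py = p[u], p[v]

    # 3) Jordan curve test: cast a 2D ray in +x from p and count edge
    #    crossings; an odd count means p lies inside the polygon
    inside = False
    for (x0, y0), (x1, y1) in zip(poly2d, poly2d[1:] + poly2d[:1]):
        if (y0 > py) != (y1 > py):          # edge spans the horizontal line
            x_cross = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
            if x_cross > px:
                inside = not inside
    return inside
```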

So what does this mean for users of GuiltySpark? The bot now knows whether a target is visible regardless of distance or how far the aimbot is leading them. You could also turn off the aimbot when the target is not visible, so no more locking on through walls. All in all, this new addition makes for a more believable and human-like bot. With a half-decent AI script, nobody will be able to tell they are playing against a program.


BSP extraction

Over the past while I’ve made a lot of progress reversing Halo’s BSP. Now that I can extract the information, I just need to write the algorithms that work with it. As I’ve mentioned previously, one goal is determining target visibility by checking whether any polygon in the BSP intersects the ray between your player and the target. To do this, I’ll find the smallest node of the BSP that contains but does not divide the two points. Every leaf under this node may contain ray-intersecting polygons.
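The descent might look like this in Python; the `Node` class and the plane representation are stand-ins I invented for illustration, not Halo’s actual structures:

```python
class Node:
    """Stand-in BSP node: a splitting plane plus front/back children
    (children may be further Nodes or leaf objects)."""
    def __init__(self, plane, front, back):
        self.plane, self.front, self.back = plane, front, back

def signed_dist(plane, point):
    (nx, ny, nz), d = plane          # plane stored as (normal, distance): n.x = d
    x, y, z = point
    return nx * x + ny * y + nz * z - d

def tightest_node(node, a, b):
    """Walk down from the root while both points sit on the same side of the
    node's plane; stop at the first node whose plane separates them."""
    while hasattr(node, "plane"):    # leaves have no splitting plane
        sa, sb = signed_dist(node.plane, a), signed_dist(node.plane, b)
        if sa >= 0 and sb >= 0:
            node = node.front
        elif sa < 0 and sb < 0:
            node = node.back
        else:
            break                    # this plane divides the two points
    return node
```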

Here’s what I learned about Halo’s BSPs:

Target visibility

As you may know, when you aim at an enemy ingame, their name fades in near the top of the screen. This name won’t show up when obstacles block the target from sight, and it has a little bit of leeway in that you don’t have to be aiming exactly at them. This made it ideal for determining target visibility. Unfortunately, it has some drawbacks. Firstly, the name won’t show up when the target is too far away, even if they’re in plain sight. Secondly, the leeway I mentioned is limited: in laggy servers or when using slow-moving projectiles like rockets, the aimbot needs to aim far ahead of the target to get a hit, and then the name won’t show up anymore. A third drawback is that opening the ingame framerate monitor (ctrl+F12) interferes with the name display.

I see 3 alternate solutions:

  1. Use whatever function the game uses to determine target visibility
  2. Compare target distance with the depth buffer at their location
  3. Extract the level geometry and perform ray casting

I’m not a fan of option 2 because it requires a DirectX wrapper and the depth buffer is only so precise. Furthermore, I want something that works even if the target isn’t within the view frustum. I see option 1 as a plan B, because option 3 gives me the most options and opportunities: it would allow me to automatically generate node graphs or display the map within GuiltySpark. Mind you, those improvements are a long way off and I might not get to them. The priority is target visibility.

Sky told me the map cache data is at 0x40440000, so I fired up Cheat Engine and checked it out:

A list of tags starts 0x10 from the start. It wasn’t too hard to figure out the tag reference structure. What I need to do is scan through the list to find the sbsp tag, then follow its data pointer to get the actual BSP data. The next step is to figure out what I need from the BSP to do target visibility calculations. The picture shows that what’s in memory is basically the same as what’s in the .map file. If what’s in the .map file is similar to the structure of the various tag files, then examining the .scenario_structure_bsp tags with Guerilla (part of Halo CE’s editing kit) will make it easier to understand the sbsp tag in memory.
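As a rough Python sketch of that scan, with the entry size and field offsets as placeholder guesses (the real tag reference layout still has to be confirmed in Cheat Engine; only the “find sbsp, follow its data pointer” idea is from above):

```python
import struct

# Hypothetical layout: a fixed-size tag entry whose first field is the class
# FourCC and which holds the tag-data pointer at some fixed offset. These
# three constants are guesses for illustration, not the confirmed format.
ENTRY_SIZE = 0x20          # assumed size of one tag reference
CLASS_OFF  = 0x0           # assumed offset of the primary class FourCC
DATA_OFF   = 0x14          # assumed offset of the tag-data pointer

def find_sbsp_pointer(tag_list, tag_count):
    """Scan the raw tag list bytes for the sbsp tag; return its data pointer."""
    for i in range(tag_count):
        entry = tag_list[i * ENTRY_SIZE:(i + 1) * ENTRY_SIZE]
        if entry[CLASS_OFF:CLASS_OFF + 4] == b"sbsp":
            return struct.unpack_from("<I", entry, DATA_OFF)[0]  # little-endian
    return None
```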

While we’re at it, here’s a list of changes made to GuiltySpark since 1.0.20:

  • Added random number data source ($48)
  • The last 10 commands are now stored and can be listed with the “history” command. The command itself is not stored
  • Added “!!” command to re-execute the last command and “!<#>” to re-execute the <#>th last command. Not stored in history
  • Pressing the up arrow key in the command input box cycles through previous commands
  • Renamed “clear” command to “deleteall” in case people confuse “clear” for “cls”
  • Added a “where <#>” command to focus the graph view on the given node
  • Node numbers are no longer drawn over by links and are 1 pt larger for easier reading

Introduction to GuiltySpark

For my undergraduate directed project, I have decided to continue the development of my program GuiltySpark. As a requirement of the project, I will be using this blog to record the details of my progress.

I’m an avid player of the PC game Halo, and I’ve been involved in its modding scene for around 4-5 years. During that time, I’ve picked up 3D modeling and texturing skills, and I’ve joined the staff of the community’s most active site. These days, I mostly create custom skyboxes for mapping teams, which I enjoy because I love drawing with my tablet. It was only natural for me to try programming applications for Halo after going through first-year CSC. Programming for Halo usually involves reading from or writing to the running game’s memory, which opens up possibilities for players and developers alike that didn’t exist before. For example, a community member developed a wrapper API for the game engine that allows extensions to be added to the game, such as postprocessing effects and the removal of various limits that once held map-makers back.

I was inspired by the work of these community members, but I always lacked the experience to make these applications myself. Near the end of CSC 115, I wondered if I could make a program that, using only the player’s coordinates, navigated the player around a map (one of the ingame levels). So I began work: first a failed Java attempt, then I decided to learn C#, as it would be easier to perform “memory hacking” with. I worked all through the summer and when I could during the Fall 2010 semester. What I’ve ended up with is a fully automated, user-programmable bot. Not only will it travel around the map, but it will actually target and shoot at other players. In other words, it plays Halo by itself.

How does it work? Users create a special node graph for each map, which the bot uses for path finding and path following. GuiltySpark contains all the tools a user needs to create these node graphs, and I believe it’s quite user-friendly. Once the bot could travel around, I knew I could take the program a step further and add some sort of aiming and AI system. The AI needed to react to changes in the environment and make decisions based on information from it, with some tasks taking higher priority at times than others. Using these requirements, I made up a sort of programming language to define the bot’s behaviour. Users of GuiltySpark can write these AI files themselves, and each file is “compiled” into a usable form when loaded into the program. It’s easiest to think of the result as a tree of tasks, with each tree node having a priority. Priorities can incorporate data from the game environment. At each step of the AI, it finds the highest-priority path down the tree; this path represents a decision.
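A minimal sketch of that decision step (GuiltySpark itself is written in C#; this Python, and the node names below, are invented purely for illustration). Priorities are callables so they can pull live data from the game environment each time the AI steps:

```python
class TaskNode:
    """One node of the task tree: a name, a priority function, and children."""
    def __init__(self, name, priority, children=()):
        self.name, self.priority, self.children = name, priority, children

def decide(root):
    """Follow the highest-priority branch from the root down to a leaf task;
    the resulting path is the bot's decision for this AI step."""
    node, path = root, [root.name]
    while node.children:
        node = max(node.children, key=lambda child: child.priority())
        path.append(node.name)
    return path
```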

GuiltySpark is currently in beta testing. I have 8 testers from the Halo community looking for bugs and requesting features. It’s about a month or two from release. However, for my undergraduate directed project I will be further working on the program before and after release, and including my changes in a future version.

If you’ve read this far, you’re probably wondering what exactly I want to do for GuiltySpark as my project. Right now, target visibility is checked using a cheap trick. When you aim at a target ingame, their name shows up on screen. The text won’t show up when you aim at someone through obstacles, so I can check whether that text is visible to determine if the bot has a clear shot. The problem is that the text doesn’t show up if the target is too far away, even if visible. My bot can also aim ahead of moving targets to compensate for projectile travel time and network latency, but leading too far ahead makes the text disappear. I need a better solution.

The better solution is to extract the level geometry from Halo’s memory, then perform some kind of ray casting on it. This is something totally new to me and I’m excited to try it. It will be a challenge to correctly extract the geometry and work with it in real time. Luckily, my program already has a small overhead of about 2-4% CPU usage (by task manager) when running the AI, aimbot, and path following. I’ve made sure to optimize where I can because I didn’t know how expensive the AI system would be, especially since users can write their own AI behaviour files to program the bot however they want. I tried to safeguard the program against users consuming too many resources with complicated AI files.

Anyway, once this project is cleared and official I will start doing my research. A lot of work needs to be done before I can even work with the level geometry, and further work needs to be done to integrate this all seamlessly into my existing program.

I will try to update this blog at least once per week.