As you may know, when you aim at an enemy ingame, their name fades in near the top of the screen. The name won’t show up when obstacles block the target from sight, and it has a little bit of leeway in that you don’t have to be aiming exactly at them. This made it ideal for determining target visibility. Unfortunately, it has some drawbacks. Firstly, the name won’t show up when the target is too far away, even if they’re in range. Secondly, the leeway I mentioned is limited. On laggy servers, or when using slow-moving projectiles like rockets, the aimbot needs to aim far ahead of the target to get a hit, which means the name won’t show up anymore. A third drawback is that opening the ingame framerate monitor (ctrl+F12) interferes with the detection.
I see 3 alternative solutions:
- Use whatever function the game uses to determine target visibility
- Compare target distance with the depth buffer at their location
- Extract the level geometry and perform ray casting
I’m not a fan of option 2 because it requires a DirectX wrapper and the depth buffer is only so precise. Furthermore, I want something that will work even if the target isn’t within the view frustum. I see option 1 as a plan B because option 3 gives me the most options and opportunities: it would allow me to automatically generate node graphs or display the map within GuiltySpark. Mind you, those improvements are a long way off and I might not get to them. The priority is target visibility.
Sky told me the map cache data is at 0x40440000, so I fired up Cheat Engine and checked it out:
A list of tags starts 0x10 from the start. It wasn’t too hard to figure out the tag reference structure. What I need to do is scan through the list to find the sbsp tag, then follow its data pointer to get the actual BSP data. The next step is to figure out what I need from the BSP to do target visibility calculations. The picture shows that what’s in memory is basically the same as what’s in the .map file. If what’s in the .map file is similar to the structure of the various tag files, then examining the .scenario_structure_bsp tags with Guerilla (part of Halo CE’s editing kit) will make it easier to understand the sbsp tag in memory.
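For illustration, here’s roughly what the scan could look like in Python, operating on a bytes snapshot of the tag list. The 32-byte entry layout (three class fourccs, tag id, name pointer, data pointer, padding) and the byte-reversed fourccs are assumptions I’m making for this sketch, not a confirmed spec:

```python
import struct

TAG_ENTRY_SIZE = 32  # assumed: 3 class fourccs, tag id, name ptr, data ptr, 8 bytes pad

def find_tag(tag_list, tag_count, fourcc):
    """Scan the tag list (a bytes snapshot starting at base + 0x10) for the
    first entry whose primary class matches fourcc; return (tag_id, data_ptr)."""
    for i in range(tag_count):
        cls0, cls1, cls2, tag_id, name_ptr, data_ptr = struct.unpack_from(
            '<4s4s4sIII', tag_list, i * TAG_ENTRY_SIZE)
        # class fourccs appear byte-reversed in memory ('sbsp' -> b'psbs')
        if cls0 == fourcc[::-1]:
            return tag_id, data_ptr
    return None
```

From there, following the returned data pointer would give the start of the actual sbsp data.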
While we’re at it, here’s a list of changes made to GuiltySpark since 1.0.20
- Added random number data source ($48)
- The last 10 commands are now stored and can be listed with the “history” command. The command itself is not stored
- Added “!!” command to re-execute the last command and “!<#>” to re-execute the <#>th last command. Not stored in history
- Pressing the up arrow key in the command input box cycles through previous commands
- Renamed “clear” command to “deleteall” in case people confuse “clear” for “cls”
- Added a “where <#>” command to focus the graph view on the given node
- Node numbers are no longer drawn over by links and are 1 pt larger for easier reading
Last night I gave the beta testers version 1.0.20. It should drastically increase the abilities of the bot; weapon management is one of the big improvements. I can’t wait to see what people do with this. I should make a video showcasing everything GuiltySpark has to offer. Not only is it a multiplayer bot, but the AI system is configurable enough to do almost anything. GuiltySpark could be very useful in Machinima when background actors are needed, removing the need to use a keyboard with your feet (you know who you are). It can help other Halo app developers who need someone to test with or to perform simple tasks. Server operators can use it to fill slots in an otherwise empty server without appearing AFK.
Here’s a list of changes in 1.0.20:
- Strafing mode no longer has an effect when following jump or look-ahead links
- Fixed a case where GuiltySpark would encounter an exception if Halo closed first
- FIDs with boolean parameters (such as for enabling/disabling modes) now interpret 0 as false and anything else as true (as opposed to strictly 1 = true)
- START/PAUSE_AIMBOT were replaced with a single FID TOGGLE_AIMBOT (11) with parameter 0 meaning pause, otherwise start
- Added exchange weapon, action, crouch, jump, switch weapon, switch grenade, melee, reload, backpack reload, zoom, and flashlight FIDs
- MOUSE1 and MOUSE2 FIDs take a different parameter format now; -1 is click down, 0 is click up, and any value >0 performs a full click with that number of milliseconds between button down and up
- Loading an AI file with a missing included file now results in loading failure
- The AI output text window now automatically scrolls as new lines are added, and the auto-scrolling for the graph output window is smoother
- Added ZOOM_LEVEL ($36) and FLASHLIGHT ($37) data sources
- Added support for new operators in postfix expressions: compare (=, >, <), logic (|, &), and math (^, %, `, ~)
- FID 0 (print) now uses its task name as a label when printing
- Added data sources and FIDs for weapon management
- Fixed the bot getting temporarily stuck following a path after the AI had found a newer one
- Fixed a conflict between look-ahead links and smooth aiming
- Fixed an exception that occurred when pathfinding failed but the walking thread continued, following an empty path
- Added hotkeys to start (F11) and stop (F12) the AI
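As an aside, the new MOUSE parameter format is simple to interpret. Here’s a Python sketch of the convention (the function and callback names are made up for illustration; this isn’t GuiltySpark’s actual code):

```python
import time

def execute_mouse_fid(param, press, release, sleep=time.sleep):
    """Interpret a MOUSE1/MOUSE2 parameter: -1 holds the button down,
    0 releases it, and any value > 0 performs a full click with `param`
    milliseconds between the down and the up."""
    if param == -1:
        press()
    elif param == 0:
        release()
    elif param > 0:
        press()
        sleep(param / 1000.0)
        release()
```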
This might be the last beta version. The list of features to add is getting smaller even though I keep adding to it. As much as I want to keep improving it, I know a lot of people are eagerly waiting. I’ll try not to keep you waiting too long. I’m actually looking forward to all the documentation and tutorials I have to make because it’ll be a nice change.
Research into extracting the BSP has already begun, more on that later.
My beta testers have been quite helpful so far by requesting lots of features. I had a chance to implement a handful of them today and put up the new version for download. They haven’t really found any glitches yet. Is this a good sign?
Here’s a list of changes in GuiltySpark v1.0.8:
- Added random node data source ($35)
- Zoom level goes up to 25 now
- The size of drawn links and circles is now adjustable
- Setting the aimbot’s projectile velocity to 0 now means ignore travel time
- Fixed a case where setting view angles incorrectly resulted in an ingame glitch
- Enforced default properties upon starting the AI: storage values are 0, arc mode is off, gravity scale is 1, projectile velocity is 0, look-ahead mode is off, and strafe mode is off
- Aimbot wobble increased slightly
- ENABLE/DISABLE_LOOK_AHEAD replaced with a single FID SET_LOOK_AHEAD_MODE (14); a parameter of 0 means off, anything else means on
- Added strafe mode FID (24); a parameter of 0 means off, anything else means on
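To illustrate what the projectile velocity setting does, here’s a rough sketch of a lead calculation in Python (not GuiltySpark’s actual code): a velocity of 0 short-circuits to direct aim, and otherwise a few fixed-point iterations estimate the intercept point:

```python
import math

def aim_point(shooter_pos, target_pos, target_vel, projectile_speed):
    """Estimate where to aim so a projectile meets a moving target.
    A projectile_speed of 0 means "ignore travel time": aim at the
    target directly. Otherwise, repeatedly compute the travel time to
    the current estimate and lead the target by velocity * time."""
    if projectile_speed == 0:
        return target_pos
    point = target_pos
    for _ in range(3):  # a few fixed-point iterations converge quickly
        t = math.dist(shooter_pos, point) / projectile_speed
        point = tuple(p + v * t for p, v in zip(target_pos, target_vel))
    return point
```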
I’ve still got about 7 items left on the beta to-do list. After those and other unforeseen requests are done, I’ll finish up by adding some pre-release goodies. The improved target detection will have to be branched off at some point.
For my undergraduate directed project, I have decided to continue the development of my program GuiltySpark. As a requirement of the project, I will be using this blog to record the details of my progress.
I’m an avid player of the PC game Halo, and I’ve been involved in its modding scene for around 4-5 years. During that time, I’ve picked up 3D modeling and texturing skills, and I’ve joined the staff of the community’s most active site. These days, I mostly create custom skyboxes for mapping teams, which I enjoy because I love drawing with my tablet. It was only natural for me to try programming applications for Halo after going through first-year CSC. Programming for Halo usually involves reading from or writing to the running game’s memory. This opens up new possibilities for players and developers alike that didn’t exist before. For example, a community member developed a wrapper API for the game engine that allows the addition of extensions to the game, such as postprocessing and the removal of various limits that once held map-makers back.
I was inspired by the work of these community members, but I always lacked the experience to make these applications myself. Near the end of CSC 115, I wondered if I could make a program that, using only the player’s coordinates, navigated the player around a map (one of the ingame levels). So I began work. First came a failed Java attempt; then I decided to learn C#, as it would be easier to perform “memory hacking” with. I worked all through the summer and when I could during the Fall 2010 semester. What I’ve ended up with is a fully automated, user-programmable bot. Not only will it travel around the map, but it will actually target and shoot at other players. In other words, it plays Halo by itself.
How does it work? Users create a special node graph for each map, which the bot uses for path finding and path following. GuiltySpark contains all the tools a user needs to create these node graphs, and I believe it’s quite user-friendly. Once the bot could travel around, I knew I could take the program a step further and add some sort of aiming and AI system. I began to brainstorm: the AI needed to react to changes in the environment and make decisions based on information from it, and some tasks should have a higher priority at times than others. Using these requirements, I made up a sort of programming language to define the bot’s behaviour. Users of GuiltySpark can write these AI files themselves, and each file is actually “compiled” into a usable form when loaded into the program. It’s easiest to think of it as a tree of tasks, each tree node having a priority. Priorities can incorporate data from the game environment. At each step of the AI, it finds the highest-priority path down the tree. This represents a decision.
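As a toy sketch of the idea (not GuiltySpark’s actual implementation, and all names here are made up), the highest-priority descent looks something like this in Python:

```python
class TaskNode:
    """One node in the behaviour tree. `priority` is a function of the
    game state, so decisions can incorporate environment data."""
    def __init__(self, name, priority, children=()):
        self.name = name
        self.priority = priority
        self.children = list(children)

def decide(node, state):
    """Descend from the root, always taking the highest-priority child.
    The path of names reached is this step's decision."""
    path = [node.name]
    while node.children:
        node = max(node.children, key=lambda c: c.priority(state))
        path.append(node.name)
    return path

# e.g. fight when an enemy is visible, otherwise patrol:
root = TaskNode('root', lambda s: 1, [
    TaskNode('fight', lambda s: 10 if s['enemy_visible'] else 0,
             [TaskNode('shoot', lambda s: 1)]),
    TaskNode('patrol', lambda s: 5),
])
```

Running `decide` on this tree picks the fight branch only while an enemy is visible, which is the same shape of reasoning a user’s AI file expresses.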
GuiltySpark is currently in beta testing. I have 8 testers from the Halo community looking for bugs and requesting features. It’s about a month or two from release. However, for my undergraduate directed project I will be further working on the program before and after release, and including my changes in a future version.
If you’ve read this far, you’re probably wondering what exactly I want to do for GuiltySpark as my project. Right now, target visibility is checked using a cheap trick. When you aim at a target ingame, their name shows up on screen. The text won’t show up when you aim at someone through obstacles, so I can see if that text is visible or not to determine if the bot has a clear shot. The problem is that the text doesn’t show up if the target is too far away, even if visible. My bot can also aim ahead of moving targets to compensate for projectile travel time and network latency. Leading too far ahead makes the text disappear. I need a better solution.
The better solution is to extract the level geometry from Halo’s memory, then perform some kind of ray casting on it. This is something totally new to me and I’m excited to try it. It will be a challenge to correctly extract the geometry and work with it in a real-time fashion. Luckily, my program already has a small overhead of about 2-4% CPU usage (by task manager) when running the AI, aimbot, and path following. I’ve made sure to optimize where I can because I didn’t know how expensive the AI system would be, especially since users can write their own AI behaviour files to program the bot however they want. I tried to safeguard the program against them using too many resources with complicated AI files.
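For the ray casting itself, the standard Möller–Trumbore ray/triangle test is the likely building block. A minimal Python sketch (plain tuples for vectors; a real implementation would use the BSP’s own spatial partitioning rather than testing every triangle):

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection: returns the distance t
    along the ray to the hit point, or None if the ray misses."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:           # ray is parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None
```

A target would count as visible when no level triangle is hit at a distance shorter than the distance to the target along the bot-to-target ray.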
Anyway, once this project is cleared and official I will start doing my research. A lot of work needs to be done before I can even work with the level geometry, and further work needs to be done to integrate this all seamlessly into my existing program.
I will try to update this blog at least once per week.
For the next while, I’ll be using this blog to report on the progress of my undergraduate directed project at UVic.