More on Movement System

The movement range is based on move points: points that you expend as you move.

I also made it so that movement can be restricted based on the angle of the terrain being walked on.

Orange lines mean you can’t go further because the angle of the ground is too steep or too high.

Green lines mean he can’t move farther in that direction, but only because he ran out of move points. He has to end his turn to go further.
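The rules above (expend move points per step, block steps that are too steep) boil down to a shortest-path flood fill. Here's a minimal sketch in Python over a height grid; the game itself is Unity/C# on 3D terrain, so the grid, costs, and slope threshold here are all made up for illustration:

```python
import heapq

def move_range(heights, start, move_points, step_cost=10, max_slope=2):
    """Dijkstra-style flood fill over a height grid.
    A step is blocked outright when the height difference is too
    steep (the 'orange line' case); otherwise it is allowed only
    while the accumulated cost stays within move_points (running
    out of points is the 'green line' case)."""
    rows, cols = len(heights), len(heights[0])
    best = {start: 0}                     # cheapest known cost per cell
    frontier = [(0, start)]
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if cost > best.get((r, c), float("inf")):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            # Too steep: blocked no matter how many points remain.
            if abs(heights[nr][nc] - heights[r][c]) > max_slope:
                continue
            new_cost = cost + step_cost
            if new_cost <= move_points and new_cost < best.get((nr, nc), float("inf")):
                best[(nr, nc)] = new_cost
                heapq.heappush(frontier, (new_cost, (nr, nc)))
    return set(best)                      # every reachable cell
```

Drawing the border is then a matter of finding edges between reachable and unreachable cells, colored by the reason the step failed.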


 

Here’s how it looks as I increase the available move points.

At 75 move points:

[Image: moveRange2-2]

 

 

At 100 move points:

[Image: moveRange2-3]

He can go farther up the hill’s path, and farther down to the tower.

But notice all those orange lines. That means he’s not allowed to climb near-vertical slopes, nor can he jump down from the hill. Later on I may implement climbing and jumping down.

 

I can also make moving upwards more costly, making the move range shorten on upward paths only.

[Image: moveRange2-4]

 

Such properties are meant for heavy units with bulky armor; they get tired faster walking upwards.

[Image: moveRange2-5]

 

Here, upward movement is greatly restricted, but notice downward movement hasn’t changed.

[Image: moveRange2-6]

 

Contrast that with units that can move upward easily, like a Scout or Pathfinder type of unit: lightly protected, with low survivability in thick battles, but able to move easily where heavy units have trouble.

[Image: moveRange2-3]
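This uphill penalty boils down to an asymmetric per-step cost function: heavy units get a large uphill factor, scouts a factor near 1, and downhill steps are left untouched. A Python sketch (the linear formula and the numbers are just assumptions for illustration):

```python
def step_cost(height_from, height_to, base_cost=10, uphill_factor=1.0):
    """Cost of one movement step. Climbing adds a penalty scaled by
    the unit's uphill_factor (bulky armor -> big factor, scout -> ~1);
    flat and downhill steps always cost the base amount, which is why
    only the upward side of the move range shrinks."""
    climb = height_to - height_from
    if climb > 0:
        return base_cost + climb * uphill_factor * base_cost
    return base_cost
```

Plugging a function like this into the range calculation, per unit type, gives each unit its own asymmetric movement range.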

 


Refactoring

[Image: 20130808_230559]

With my plans on changing the combat system, a big refactoring effort on my code is needed.

I received good feedback on my article about a loosely-coupled system for code in games. So I think I should apply the principles back to my code in Tactics Ensemble.

Low-level

I should start with revisiting my unit low-level classes.

UnitAnimation:

My previous code was a thin layer over Unity’s Animation class. Users of the class had to compute things themselves: specifying blend times between animations, checking animation playback positions on their own, etc. I need to change this into a fire-and-forget scenario: just send a request to play a certain animation, and let the UnitAnimation class figure out the little details by itself.
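The fire-and-forget idea, roughly: callers only name the animation they want, and the class keeps the blend-time bookkeeping to itself. A toy sketch in Python (the real class would wrap Unity’s Animation component; the method names here are invented):

```python
class UnitAnimation:
    """Fire-and-forget animation facade: users request an animation
    by name and the class resolves blend times internally, instead
    of every caller specifying them."""

    def __init__(self, default_blend=0.2):
        self._default_blend = default_blend
        self._blend_overrides = {}   # per-animation blend time tweaks
        self.current = None
        self.log = []                # (name, blend) pairs actually played

    def set_blend(self, name, seconds):
        """Configured once at setup time, not at every call site."""
        self._blend_overrides[name] = seconds

    def play(self, name):
        blend = self._blend_overrides.get(name, self._default_blend)
        # The real implementation would cross-fade via the engine here.
        self.log.append((name, blend))
        self.current = name
```

The point of the sketch is the shape of the API: play() takes nothing but a name, and everything else is the class’s problem.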

AnimationsUsed:

A simple class should specify which default animation to use for idle, which hurt animation to play when the unit takes damage, which animation to play when the unit is selected, and whatnot.

UnitMovement:

This is a good time to revisit the arguments between Character Controller and Rigidbody: which one should I use now? I should also redo pathfinding, obstacle avoidance, and pushing other units in a cleaner way. And I should consider how this class is used by both human player input and the AI system, and how differently those two users should be handled.

UnitAttack:

This is the class that was missing from my systems. A separate class should handle the little details of turning on or off melee collisions, and launching of projectiles at the right time.

UnitGui:

This is also something I need to add. A class that handles GUI for the unit, meaning what effect should happen when the unit is highlighted, when the mouse is hovered over it, when it is selected, etc.

The healthbar and any other icons floating above this unit should also be handled by this class.

It should also communicate with the GUI HUD so that this unit’s actions, portrait, name, etc. get displayed on the HUD.

UnitReplay:

Something I could add later on. It would basically take note of all incoming function calls to the low-level classes, recording when each was called. When watching a replay, we simply go through the list and call the appropriate functions at the proper times. One instance only handles the replay for a single unit, so we have one UnitReplay per unit in the battle.
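Recording calls with timestamps and re-issuing them later might look like this sketch (one instance per unit, as described; the API names are hypothetical):

```python
class UnitReplay:
    """Records (time, function, args) for one unit's low-level calls,
    then re-issues the same calls once their timestamps are reached."""

    def __init__(self):
        self._pending = []

    def record(self, time, func, *args):
        self._pending.append((time, func, args))

    def replay(self, current_time):
        """Fire every recorded call whose time has been reached."""
        due = [e for e in self._pending if e[0] <= current_time]
        self._pending = [e for e in self._pending if e[0] > current_time]
        for _, func, args in sorted(due, key=lambda e: e[0]):
            func(*args)
```

During the battle every order gets record()-ed as a side effect; during playback, replay() is pumped with the current playback time each frame.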

Unit Facade Class

I should make the Unit class refer to the low-level classes via interfaces only. I’d need to clean this up as well to accommodate the new combat system.

[Image: 20130808_230638]

Actions

I think I also need to question my system on Actions and Effects.

It would be beneficial to convert my Effects system to just use behaviour trees instead. That would make editing more consistent, and add flexibility to attacks; you could add conditional effects easily (e.g. if my health > enemy health, deal 2x damage).
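The conditional-effect example (2x damage if my health > enemy health) maps directly to a condition node guarding an effect node. A minimal sketch, with made-up stat names:

```python
def conditional_effect(condition, effect, fallback=None):
    """Behaviour-tree-style effect: run `effect` when `condition`
    holds for (attacker, defender), else run the fallback if given."""
    def run(attacker, defender):
        if condition(attacker, defender):
            return effect(attacker, defender)
        return fallback(attacker, defender) if fallback else 0
    return run

# "If my health > enemy health, deal 2x damage" as data, not code:
double_if_healthier = conditional_effect(
    lambda a, d: a["hp"] > d["hp"],      # condition
    lambda a, d: a["atk"] * 2,           # conditional effect
    lambda a, d: a["atk"],               # normal damage otherwise
)
```

Because the condition, effect, and fallback are plugged in as data, a designer can rewire them in an editor without touching code, which is the consistency argument above.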

Actions (i.e. attacks) will get a revamp to accommodate the new combat system being experimented on. I also need to consider whether they can be improved to integrate better with the AI system.

Local Player Singletons

This is the set of singletons used to facilitate player input from mouse and keyboard.

They manage selecting a friendly unit on behalf of the player, relaying orders to it, facilitating when the game is asking the player for a destination/target to click on, etc. Basically it handles player input from the local machine (as opposed to input from across the network in a multiplayer game).

I think I should separate this into a low level that handles mouse and keyboard directly, and a high level that sits in the problem domain. With that, I should be able to add touch-screen controls later without messing up the existing code.

UnitSelector

The class that handles which unit is currently selected. In the normal flow of how a player plays the game, this is where it all starts.

Since it manages which unit is selected, this dictates which unit will be given orders by the player.

This is one of the things that probably needs decoupling: a UnitSelectorMouseKeyboard that listens for left-clicks and keypresses, and a high-level UnitSelector that fires OnSelected events, reports which unit is currently selected to whoever asks, and whatnot.
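The split might look like this: the high-level UnitSelector below knows nothing about input devices, while a UnitSelectorMouseKeyboard (or a future touch variant) would simply call select() on it. A sketch only, with assumed names:

```python
class UnitSelector:
    """High-level selection state. Fires OnSelected-style callbacks
    and answers 'which unit is selected?'; the low-level input class
    merely calls select() when it decodes a click or keypress."""

    def __init__(self):
        self.selected = None
        self._listeners = []

    def on_selected(self, callback):
        self._listeners.append(callback)

    def select(self, unit):
        if unit is self.selected:
            return                      # re-selecting is a no-op
        self.selected = unit
        for callback in self._listeners:
            callback(unit)
```

Anything that cares about selection (the HUD, the camera, unit highlights) subscribes via on_selected instead of polling the input code.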

GUI HUD

Accepting input from GUI buttons should also be handled by a separate, low-level class: BattleNGUI. This would be the one directly dealing with NGUI.

This low-level class will listen in on unit selection events, so it can make sure the currently selected unit’s action buttons, portrait, name, etc. get displayed (in NGUI). Each action button would somehow be able to fire off requests to activate that action of that unit.

A BattleHotkey class or something similar would essentially do the same, allowing requests to perform an action of the selected unit, but via keyboard hotkey presses instead.

Camera System

Camera movement would normally be independent from everything else, being controlled by the player only.

However, there are times when the camera would be controlled by the game.

When an enemy unit is moving on its turn, the camera should center on it (which I haven’t implemented yet).

And some actions or events may call for showing cinematic shots of characters.

Effects like camera shake would also be handled by this system.

Decoupling is also in order here. A CameraControlMouseKeyboard would listen for keyboard and mouse input while CameraControl provides the actual moving, rotating, zooming. There would be a CameraControlTouch for touchscreen devices.

AI Players

Fairly rudimentary right now, and I don’t think I’ll be changing this soon. When the time comes, I may experiment on giving AI players a central behaviour tree to use.

Right now, each AI unit thinks for itself. But coordination may be better if I give the AI player its own behaviour tree. It would look at the battlefield map from a tactician’s point of view, then it would tell each unit what role they should be in (go to the defensive, attack from this side, scout here, etc.).

Battle Manager Singleton

This singleton acts as umpire, deciding when the battle has ended, who won and who lost. It manages a list of all players in the game, so it can decide whose turn is next. Very simple piece of code, and I probably don’t need to do anything here.

 

If I knew how to make UML diagrams I probably would make one now.

Changing Things

[Image: 20130808_230210]

So, here’s the thing.

One of the experiments I did with my game was having movement also consume Action Points, the same as in the old XCOM and Fallout games.

However, one thing I noticed is that I am poor at judging how many Action Points I should spend, i.e. “Oops, I made my unit move too far; now he doesn’t have enough AP to actually attack anymore! He’s just a sitting duck.”

This wasn’t so much of a problem with XCOM; AP in that game came in whole numbers, normally amounting to less than 100 points. It’s even smaller in Fallout, usually in the 10-20 range. (Whole numbers were feasible for them because those games had grids for movement.)

Being small whole numbers, it was easy to think ahead about how much AP you’d need, e.g. “Oh, swinging this big hammer costs 3 AP, so I’d better use only up to 7 AP (my character having 10 AP).”

In this regard, I’ve decided to experiment with a different system. It will closely resemble Skulls of the Shogun or the Arc the Lad games on the PS2, where movement has no cost. Instead, you can move as much as you’d like within your movement range (i.e. a circular perimeter).

There will still be AP, but it is only needed for attacks or other actions instead now.

It also feels like something that may resemble Valkyria Chronicles, in the end, in that it feels like an action-game, only that you control many people, one at a time.

My Scrum Burndown Chart

I use the methodology called Scrum when working on Tactics Ensemble. It’s very appropriate for any videogame project where there are a lot of unknowns at the start.

The most useful thing it can provide you is what they call the burndown chart.

It’s a chart showing how many tasks you’ve done versus how many you should have done by this time. From there you can predict whether you’re going to miss the deadline.

I use OpenOffice Calc with a bunch of formulae to automate much of the calculations.

Burndown charts let you see this at a glance. The blue line is the ideal path: the number of outstanding tasks should hit zero by the deadline. The red line is where I actually am.

From the image you can see I was on a roll back in July. That was when I made melee attacks and the attack editor.

Right now I’m cutting it close, but the progress is good.

Modding Options

So for modding support, I’m running the options through my head.

1. Mono dynamic code generation
· Mono/.NET allows compiling code from within code, via the CodeDOM compiler
· so the idea is to let modders create C# code and the game will compile those
· compilation results in either a compiled DLL file that gets loaded, or code kept in memory; if you don’t save it as a DLL, you’d need to recompile the user’s C# file every time the game is run (not really a problem)
+ possible to let end-users create C# code that runs just as fast as the C# code in the game
– potentially allows end-users to cheat the single-player game by editing values or forcing commands on the game, similar to Game Genie
– potentially encourages end-users to share DLL files with each other, an avenue for a security breach on a user’s PC
– won’t work on web or mobile versions, as they are sandboxed environments that do not allow compiling at runtime

2. KopiLuaInterface
· a Lua interpreter written purely in C#
· modders will make Lua scripts
+ a completely sandboxed environment; users will not be able to write malicious code
+ will work on web versions
+ will also work on mobile versions
– running a Lua script is slower than running C# code, even more so since KopiLua is slower than standard Lua[1]

3. LuaJIT + LuaInterface
· Unity (via Mono) can load a DLL of compiled C/C++ code (native code)
· modders will make Lua scripts
+ a completely sandboxed environment; users will not be able to write malicious code
+ runs faster than KopiLuaInterface, even faster than standard Lua thanks to Just-In-Time compiling
+ can work on mobile versions with a little extra work
– will not work in web builds, as they are sandboxed environments that do not allow loading native DLLs
– works only on Unity Pro
– running a Lua script is slower than running C# code[1]

LuaJIT is a no-go only because I don’t have Unity Pro.

Mono is the fastest for this situation, but it also has more potential for malicious code. I’m only targeting Windows standalone, so web and mobile versions are irrelevant for the moment. But if I ever decide later on to move some stuff to web/mobile, then mods made in C# won’t work, and that’s a loss.

So I’m going to use KopiLuaInterface for mod scripts.

When I get Pro, it’ll finally be possible for me to load native DLLs. It should then be possible to convert what I have to use LuaJIT instead, at least when making the standalone/mobile version of the game. If a web build is ever released, it would just fall back to using KopiLua. The nice thing is that end-users’ Lua scripts will work regardless of whether the game is using LuaJIT or KopiLua.


[1]: The speed difference between Lua and C# should be largely trivial, depending on what the Lua scripts do and what they are used for. Scripts shouldn’t be executed in tight loops, but for handling events they’ll be fine.

W.U. 10: Basic Attack AI

Here I have the enemy A.I. spamming the Lunge attack. UPDATE (2012Aug18 1540 PHT): It turns out my enemy A.I. was erroneously allowing the enemies to Lunge even when they didn’t have enough stamina points to do so. This has been fixed.

One thing to note was that, as I suspected, it was starting to become a pixel-hunting process for me to find the proper place to move to for an attack. This is why I wanted to make an attack preview where you see a ‘ghost’ of your unit doing the attack before you confirm the command. The ghost’s attack would highlight which units will get hit.

I also need a better stamina cost display.

Now, about the A.I., here’s the behaviour tree used for the enemies in the video:

It goes basically like this:

  1. Retreat if I need to
  2. Find nearest enemy
  3. Check if I still have enough stamina to get near him and attack
  4. Do the actual attack if I can
  5. Just close in if I can’t move and attack in the same turn

(I really wouldn’t need to add that additional check in the third step because of the way behaviour trees work. But it’s there because I plan on adding more to the tree.)

One thing I did not anticipate the A.I. doing was that, after attacking, they spend time backtracking to the proper range for the lunge attack, so they’d be in the right position come their next turn. I was surprised that the A.I. was that clever, and yet all it did was follow the instructions I gave it. There it was in step no. 5 (i.e. close in if I can’t attack yet). It’s just that I didn’t realize it would also do that immediately after an attack.

Editor

As for the behaviour tree editor, I was thinking of making the display look more like a folder layout, given the cramped space it needs to be shown on.

Here’s roughly how that looks:

And here’s how my initial trial of the code for that looks:

The purple, stretched hexagons are selectors, while the large rounded boxes are sequences. Sequences are allowed to have different colors for easier recognition.

Selectors arrange their children vertically, and sequences arrange theirs horizontally.

The white hexagons are conditions, while the white boxes are actions.

Watching Traversals

With my goals for modding support, I want to be able to show a behaviour tree in-game, so I made sure to use plain GUI code and not EditorGUI. With that I can show tree traversals as they happen. This is meant for debugging.

Stamina Cost Check Hack

The “CanLungeNow?” node is a special (i.e. hack) type of sequence that I was forced to make. It accumulates the stamina costs of its child actions and returns failure if the A.I. agent (the unit) doesn’t have enough stamina to do all those actions. I’m in the process of redoing it with decorators instead.

Static Tree

I designed my trees so that multiple agents can traverse one tree simultaneously. This means I can have only one tree that powers the A.I. of multiple units. While my game is turn-based and wouldn’t need such a feature, I may, however, add squadron units who will move and act together (goblin horde units).

This means nodes in the tree do not keep information themselves. Whenever they want to store information, they store it in the A.I. agent. If a node returns `Running`, it stores that state in the agent, not in the tree. It’s stored in a hashtable, so nodes can put anything they want there.

If a leaf node doesn’t have parameters, it might as well be a static class! If it does have parameters, those are most likely never changed at runtime. There could be a rare case where an action needs to edit the tree it is in (self-learning A.I.?).
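Keeping per-agent state out of the shared tree might look like this sketch: the node is a shared, effectively-immutable object, and each agent’s blackboard (a plain hashtable here) stores the `Running` progress keyed by the node:

```python
RUNNING, SUCCESS = "running", "success"

class WaitTurns:
    """A leaf that takes several ticks to complete. The node itself
    holds only its build-time parameter; per-agent progress lives in
    the agent's blackboard (a dict), keyed by the node, so any number
    of agents can traverse the same tree simultaneously."""

    def __init__(self, turns):
        self.turns = turns               # parameter, never mutated

    def tick(self, agent):
        remaining = agent.setdefault(self, self.turns)
        if remaining <= 1:
            agent.pop(self, None)        # clear state for the next traversal
            return SUCCESS
        agent[self] = remaining - 1      # stored in the agent, not the tree
        return RUNNING
```

Two agents ticking the same WaitTurns instance each keep their own countdown, which is exactly the property needed for a horde sharing one tree.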

Flattening A Tree

Using Mono’s dynamic compilation features (allowing code to compile code), I can convert a behaviour tree from what it is right now, a bunch of classes arranged in a tree, to just one class with selectors and sequences converted to a bunch of nested if-else-if chains.

Most likely the leaf nodes (actions and conditions) need to stay as classes, but at least tree traversal will surely be faster, because there is no tree to traverse anymore; traversal is hardcoded for the most part. This is most likely what AngryAnt’s Behave library does as well.
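The same trick can be shown in miniature with Python’s runtime compilation standing in for Mono’s CodeDOM. Selectors become `or` chains and sequences become `and` chains (success = truthy, failure = falsy), leaving only the leaf calls; there is no tree left to walk:

```python
def flatten(node):
    """Compile a ('sel'|'seq'|'leaf', ...) tuple tree into a single
    generated function. Leaves stay as callables; composites are
    hardcoded into short-circuiting or/and expressions."""
    env, counter = {}, [0]

    def emit(n):
        if n[0] == "leaf":
            name = f"leaf{counter[0]}"
            counter[0] += 1
            env[name] = n[1]
            return f"{name}(agent)"
        joiner = " or " if n[0] == "sel" else " and "
        return "(" + joiner.join(emit(child) for child in n[1:]) + ")"

    source = f"def tree(agent):\n    return {emit(node)}\n"
    exec(source, env)                 # runtime code generation
    return env["tree"]

# A tiny tree: eat when hungry, otherwise idle.
tree = ("sel",
        ("seq", ("leaf", lambda a: a["hungry"]),
                ("leaf", lambda a: a.update(did="eat") or True)),
        ("leaf", lambda a: a.update(did="idle") or True))
```

Calling flatten(tree) yields one function whose body is a nested boolean expression, i.e. the hardcoded if-else-if chain described above.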

Notes on the A.I.

Here’s some notes I was taking down while watching the behaviour tree video by Alex J. Champandard (part 1, part 2, and part 3).

Goals for the A.I.

  1. “Intelligent” enough to know how to preserve itself
  2. Competent enough to finish goals assigned to it
    1. Adaptable to situations
      1. If it suddenly can’t do what is assigned to it, it should do some fallback action, perhaps wait until such time that it can do it again.
      2. If it can do multiple ways of achieving a goal assigned to it, it should look for the best way it can do it (HTN Planner)
    2. Fuzzy goals: give percentage as to how much attention it should give to each goal assigned to it (priority)
  3. Coordinate with allies to do joint operations (semaphores?)

Goals for the A.I. System

  1. Easy to use/understand (don’t we all want this)
  2. No need to bug the programmer every time: Once the system has been coded, let designers design behaviour without more coding (as much as possible). (How? By making the system composed of basic building blocks that can express up to the most complex ideas, like Lego, so the designer can use them in creative ways without asking the programmer to hard-code something all the time.)
  3. Easy to reuse:
    1. Reuse behaviour: Reference a (reusable) sub-behaviour within a behaviour.
    2. Use the concept of cascading style sheets/inheritance: reuse generic behaviours and just override for specific functionality. Can even use the strategy pattern in lieu of this (override behaviours at runtime).
    3. Encourage creating small, modular, connectable behaviours.

A.I. Systems

My first try at an A.I. system was when I was making Death Zone Zero. I was inspired by the A.I. system in Dragon Age: Origins.

If you’re a programmer, one look at these screenshots should explain how the whole thing is set up.

To be clear about it, it works like one long if-then-else-if chain, but the important distinction is that you can add, remove, and change those ifs at runtime.

So it’s basically a list of conditions, with an action to do for each when the corresponding condition is met.

You can also describe it as a priority list. It checks from the top first; if it finds something to do at the top, the program won’t bother looking at the conditions below. Of course, the program continually re-evaluates this long if-then-else-if chain (always starting from the top), so there is a chance the bottom entries can be triggered. It depends on exactly how lenient the conditions are.

Here is an example:

  1. Am I dying? Then I should escape from the enemy.
  2. (Assuming I’m not dying,) Is an enemy in attack range? Then I should retaliate.
  3. (Assuming I’m not dying and not being attacked,) Did I detect an enemy? Then I should pursue him.
  4. (Assuming I’m not dying, not attacked, & not seen an enemy) Then I have nothing to do, I’ll just wander/patrol the area.

That is basically the most common A.I. you’ll ever find in videogames. I’ll reword everything to make that clear:

  1. Am I losing? Then assume a defensive position.
  2. (Assuming I’m not losing,) Is an immediate threat found? Then do something about it.
  3. (Assuming I’m not losing and no immediate threats,) Am I seeing a potential threat on the horizon? Then take it into account.
  4. (Assuming I’m not losing, and no immediate or potential threats) Then improve myself and keep on guard.

You’ll notice each condition expects that the ones before it failed. This is essential in making it a priority list.

You’ll also notice the entries are ordered from most important to least. This is also needed to make sure the A.I. works as intended.

Building upon this, I added the ability to attach more than one condition to an action. The conditions are then essentially joined with AND operators: if one condition fails, the program won’t bother with the other conditions and moves on to the next entry in the list.
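The whole structure, a top-down list of (conditions, action) entries with AND-ed conditions, fits in a few lines. A sketch using the escape/retaliate/pursue/patrol example (the thresholds and flag names are made up):

```python
def run_priority_list(entries, agent):
    """Check entries top-down; the first entry whose conditions ALL
    pass wins and its action runs. An empty condition list always
    passes, which makes a natural catch-all at the bottom."""
    for conditions, action in entries:
        if all(cond(agent) for cond in conditions):
            return action(agent)
    return None

brain = [
    ([lambda a: a["hp"] < 3],          lambda a: "escape"),     # am I dying?
    ([lambda a: a["enemy_in_range"]],  lambda a: "retaliate"),  # under attack?
    ([lambda a: a["enemy_seen"]],      lambda a: "pursue"),     # spotted one?
    ([],                               lambda a: "patrol"),     # nothing to do
]
```

Because the entries are plain data, they can be added, removed, or reordered at runtime, which is the key distinction from a hardcoded if-chain.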

Improvements

What I had problems with back then, was how to make enemy A.I. do coordinated attacks (flanking, one distracts while the other attacks from the target’s blind spot), and to make an A.I. switch between different priority lists.

Swapping Priority Lists

Say the enemy can act stealthily if he wanted to. His abilities for stealth essentially mirror what he does while not in stealth (escape, attack, pursue, and patrol), but all as stealthy variations. What I had been thinking then was to add an action that makes an A.I. switch to a different priority list when the situation calls for it.

Moving On

Then a lot of things happened and Death Zone Zero is currently on hold. Now that I am tackling A.I. systems again, I’m taking a pause and wondering how to do things again. I’ve heard about behaviour trees and how it’s very well suited for videogame A.I.

I ended up reading this #AltDevBlogADay article on behaviour trees, and chanced upon one of his references: the A.I. system in the PSN game, Swords and Soldiers (part 1, and part 2).

The more I read it, the more I realized, “This isn’t a behaviour tree, this is more like the priority list system I made up long ago.”

The nice thing about behaviour trees is you can create one that works exactly like a priority list.

The truth, though, is that they can have a more sophisticated structure that potentially makes them work a lot more “intelligently”. If the priority list is like an if-then-else chain, then a behaviour tree can add more: a switch statement, nested ifs, even loops. Behaviour trees are like a visual programming language, but made for a more specific purpose (i.e. videogame artificial intelligence). Part of their charm is making the design of A.I. a lot more accessible to non-programmers, the same way my priority list does.

The only problem is it’s a lot harder to understand. For that, I recommend the article I mentioned above.

There’s also an hour-long video about behaviour trees on aiGameDev.com. An account is required, but registration is free. You can, however, watch that same video for free, without registering, in this three-part series: part 1, part 2, and part 3.

Normal Map Problems in Blender

Normal map baking in Blender is very simple and straightforward, but there are numerous pitfalls, not apparent to the user, that can break things.

No objects or images found to bake to

Generally speaking, if either the high-poly object or the low-poly object you are baking to is set as not renderable, Blender can’t find it and thus can’t bake anything. There are several ways an object can end up not renderable:

  1. Set it as invisible from the Outliner
  2. Have either object in a layer that isn’t set to be rendered. In the Render tab, there’s a section named “Layers” that determines which layers will be rendered.

It doesn’t make sense to me at all that normal map baking has to depend on these things as if we’re rendering to a movie file.

The other possible reason is that the low-poly object doesn’t have an image assigned to it. In my experience, you can just create a new image from Blender and assign it to the low-poly’s UVs; you don’t even need to save it yet. That is enough to make normal map baking work.

Overlapping UVs

Take care when baking into objects that have overlapping UVs. If you have a head whose UVs’ left side is mirrored to its right side, this can cause the baking to overlap as well.

Here, the objects have multiple copies, thus, it made sense for me to simply make them use the same UV space. I was baking the high-poly object you see in the bottommost copy of that object.

However, the normal map baking produced those lines you can see in the left side of the screenshot. Again, this is because the UVs are overlapping.

What I did was to temporarily set aside the UVs of all the other copies of the object:

Then bake again. This will produce a clean result like the screenshot below. Then simply move the UVs back to their original positions.

Flipped Normal Maps

Things looked fine until I saw my object up close:

The left side of my character’s armor seems to be flipped from the right side. This is because in the normal map (shown on the right), the UVs were split in half, and the right half was rotated upside down (because I was saving space).

Here is how it looks in Blender:

However, when you preview the normal map from Blender, it looks fine.

In the end, I was forced to remap the UV layout so there’d be as few seams as possible.

This resulted in having to reduce the size of my UVs to fit in the image, but so far, that’s the only way I see to fix it.