Latest News

NPC AI updates 2021-02-04

One of the major changes I have been working on is a revamp of the NPC AI architecture. This is the AI code that runs enemy ships. The initial version was very shallow, with just enough logic to barely complete a battle: basically a prototype. It was time to rewrite it into something long-term, and after a few different stabs at the architecture I think I have finally arrived at something close to final.

The first version

The first AI was implemented as essentially a single large function with a lot of if/switch statements, making decisions based on what was going on in the world: decisions like "what weapon should I fire" and "which ship should I attack". Even though it did only a fraction of what would ultimately be required, this code was already 500+ lines of almost unreadable conditionals. It obviously wasn't going to scale, so I set out to refactor it into something better.
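The shape of that first version looked roughly like this. This is a hypothetical sketch, not the game's actual code; the `Ship` fields and decision rules are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Ship:
    shields: int
    has_lasers: bool
    in_battle: bool

def run_ship_ai(ship):
    # One big function of nested conditionals: the first version's shape.
    if ship.in_battle:
        if ship.shields < 20:
            return "flee"
        if ship.has_lasers:
            return "fire_lasers"
        # ...in the real thing, hundreds more lines of conditionals
        # for mining, trading, power management, and so on...
    return "idle"
```

Every new decision means another branch somewhere inside the pile, which is how it grows to 500+ lines so quickly.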

The second version

For the second version, I decomposed that single function into a set of sub-steps: "ChooseWeapon", "ChooseShipComponent", "ChooseShipToFireAgainst", etc. The overall logic was still top-down, but the individual implementations moved into independent classes/functions. This made the logic a little easier to follow (more of the higher-level conditionals could be seen at once), but it was still essentially the same top-down algorithm. There are currently over 60 behaviors a Ship can perform that all need to interact: ordering mining drones, firing weapons, fluctuating power, orbiting planets, and more. There is no way I can understand code like that in a top-down manner and incorporate all of these interacting behaviors. In addition, some behaviors conflict: should I fire my weapon or should I flee? Should I trade with this enemy or battle them? These questions have to be resolved in a dynamic, fluid environment, and the answers change over time.
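The second version can be sketched like this. The function names mirror the ones mentioned above, but the heuristics inside them are my own illustrative guesses, not the game's logic:

```python
def choose_ship_to_fire_against(enemies):
    # Illustrative heuristic: attack the enemy with the weakest hull.
    return min(enemies, key=lambda e: e["hull"]) if enemies else None

def choose_weapon(target):
    # Illustrative rule: missiles at long range, lasers up close.
    return "missiles" if target["distance"] > 50 else "lasers"

def run_ship_ai(enemies):
    # The driver is still top-down; it just delegates each decision
    # to a named sub-step instead of inlining the conditionals.
    target = choose_ship_to_fire_against(enemies)
    if target is None:
        return None
    return (choose_weapon(target), target["name"])
```

Each sub-step is now readable on its own, but the driver still has to know about, and sequence, every decision, which is what stops this from scaling past a few dozen behaviors.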

How do you implement an AI with dozens of behaviors that all interact, sometimes conflict, and may have mutually compatible or mutually exclusive goals? For this I went to the other extreme.

The current version

For the current, and hopefully final, architecture I inverted the control entirely. Instead of a top-down algorithm that descends into child logic, I pushed all decision making into each behavior. Behaviors such as "FireWeapon", "CancelBattleOrders", and "FleeBattle" are now independent Actors or State Machines that decide entirely on their own whether they are going to act. The overall AI simply loops through each behavior, and each behavior decides to act or not. In addition, since only one behavior actually executes at a time, behaviors need a priority of execution: each behavior publishes a weight from 1 to 100 expressing its own desire to run.
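A minimal sketch of this inverted architecture, assuming each behavior exposes a weight and an action. The class names come from the text; the weighting rules and ship fields are illustrative:

```python
class FleeBattle:
    def weight(self, ship):
        # Desire to flee rises as shields fall (illustrative rule).
        return 90 if ship["shields"] < 20 else 0

    def act(self, ship):
        return "fleeing"

class FireWeapon:
    def weight(self, ship):
        return 50 if ship["in_battle"] else 0

    def act(self, ship):
        return "firing"

def run_ai(ship, behaviors):
    # The AI loop knows nothing about any behavior's internals: it
    # just asks each one for its weight and runs the top volunteer.
    scored = [(b.weight(ship), b) for b in behaviors]
    weight, best = max(scored, key=lambda pair: pair[0])
    return best.act(ship) if weight > 0 else None
```

The key property is that adding a 61st behavior means writing one new class with its own weight logic, with no changes to the loop or to any other behavior.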

The benefit of this architecture is that I can test each behavior in isolation and make each one really complex while keeping a handle on the logic. The downside is that coordination between behaviors now has to happen through messages between them. But with 60+ behaviors to write, most of which don't have to coordinate directly, I think this is the best tradeoff so far.
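The message-based coordination might look something like this sketch. All names here are assumptions for illustration; the point is only that behaviors announce events rather than calling each other:

```python
class MessageBus:
    """Shared queue the behaviors use instead of direct references."""

    def __init__(self):
        self.messages = []

    def post(self, msg):
        self.messages.append(msg)

class FleeBattle:
    def act(self, bus):
        # Announce what happened rather than reaching into peers.
        bus.post("battle_abandoned")
        return "fleeing"

class CancelBattleOrders:
    def wants_to_run(self, bus):
        # Reacts to the announcement; never references FleeBattle itself.
        return "battle_abandoned" in bus.messages

    def act(self, bus):
        return "orders_cancelled"
```

Because both behaviors only know about the bus, either one can still be unit-tested alone by posting or inspecting messages directly.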

More News

State of the Game 2021 2021-01-04
Hello World 2021-01-01