My goals for this system:

- Allocate resources from one server to work on another.
- Weaken and grow first, before beginning to hack.
- Allocate resources toward the most efficient available task, subject to some allowances for early progression.
- Minimize RAM usage (scheduling overhead of around 30GB).

I designed a system with three main components: a spider to find and root nodes, a distributor to coordinate work among the available owned servers, and the workers that do the actual hacking.

The spider is very straightforward, as you will see below in spider2.js. It uses a breadth-first search across the nodes starting from home, hacking any nodes we have the capability to. It stores the hacked node list in a newline-separated file, so that other scripts don't have to invoke a function or spend precious CPU time reconstructing the list.
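A sketch of the shape of spider2.js: the breadth-first walk and the newline-separated output file are as described above, while the output file name (nodes.txt) and the port-cracking bookkeeping are placeholders of mine.

```js
/** @param {NS} ns */
export async function main(ns) {
  // Port crackers we actually own; the lambda wrappers keep the ns calls intact.
  const crackers = [
    ["BruteSSH.exe", (h) => ns.brutessh(h)],
    ["FTPCrack.exe", (h) => ns.ftpcrack(h)],
    ["relaySMTP.exe", (h) => ns.relaysmtp(h)],
    ["HTTPWorm.exe", (h) => ns.httpworm(h)],
    ["SQLInject.exe", (h) => ns.sqlinject(h)],
  ].filter(([file]) => ns.fileExists(file, "home"));

  const hacked = [];
  const seen = new Set(["home"]);
  const queue = ["home"]; // breadth-first search starting from home

  while (queue.length > 0) {
    const host = queue.shift();
    for (const next of ns.scan(host)) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
    if (host === "home") continue;

    // Hack any node we have the capability to: open what ports we can, then nuke.
    if (!ns.hasRootAccess(host) && ns.getServerNumPortsRequired(host) <= crackers.length) {
      for (const [, crack] of crackers) crack(host);
      ns.nuke(host);
    }
    if (ns.hasRootAccess(host)) hacked.push(host);
  }

  // Newline-separated, so consumers can read() and split() without rescanning.
  ns.write("nodes.txt", hacked.join("\n"), "w");
}
```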
The distributor is the most interesting part. It awaits a signal that something material has changed, cancels all existing distributor-controlled workers, and then schedules a fresh set of workers from scratch.

We cancel all existing workers because it is easier to solve this problem if you don't have to keep track of state. Netscript's programming capabilities are some of the most challenging and inconsistent I've ever worked with, so I want to write as little complex code as possible. We'll be able to spend more time thinking about algorithmic improvements if we don't have to do fiddly things like managing state. Cancelling all our existing workers has some minor drawbacks in terms of performance, but what it wins us in simplicity dominates those considerations.
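The outer loop, then, is just wait → kill → reschedule. A minimal sketch, assuming a port as the signal channel and fixed worker script names (both are my choices, not from the original):

```js
/** @param {NS} ns */
export async function main(ns) {
  const signal = ns.getPortHandle(1); // assumed: anything on port 1 means "re-plan"
  const WORKERS = ["weaken.js", "grow.js", "hack.js"]; // assumed worker scripts

  while (true) {
    // Await a signal that something material has changed.
    while (signal.empty()) await ns.sleep(1000);
    signal.clear();

    // The spider's file means no rescanning here: read, split, done.
    const nodes = ns.read("nodes.txt").split("\n").filter((n) => n.length > 0);

    // Cancel all existing distributor-controlled workers. They are recognized
    // by script name alone, so nothing about them has to be tracked between runs.
    for (const host of ["home", ...nodes]) {
      for (const proc of ns.ps(host)) {
        if (WORKERS.includes(proc.filename)) ns.kill(proc.pid);
      }
    }

    // Schedule a fresh set of workers from scratch (first priority sketched below).
  }
}
```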
The new worker scheduling algorithm currently has two basic priorities. The first priority is to focus on weakening the weakest pending node. It iterates through the targets in the order the spider observed them (i.e. breadth-first from home, so the nearest and generally weakest nodes come first) and settles on the first one still above its minimum security.
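A sketch of that first priority, under my reading that "pending" means above minimum security; the thread math and the assumption that weaken.js already sits on every runner are mine:

```js
/** @param {NS} ns */
export async function main(ns) {
  const nodes = ns.read("nodes.txt").split("\n").filter((n) => n.length > 0);

  // The file preserves the spider's breadth-first order, so the first node
  // still above its minimum security is also (roughly) the weakest pending one.
  const target = nodes.find(
    (h) => ns.getServerSecurityLevel(h) > ns.getServerMinSecurityLevel(h)
  );
  if (target === undefined) return; // nothing pending: fall through to priority two

  // Pour every free thread on every runner into weakening that one node.
  // Assumes weaken.js has already been scp'd to each runner.
  const ramPerThread = ns.getScriptRam("weaken.js");
  for (const runner of ["home", ...nodes]) {
    const free = ns.getServerMaxRam(runner) - ns.getServerUsedRam(runner);
    const threads = Math.floor(free / ramPerThread);
    if (threads > 0) ns.exec("weaken.js", runner, threads, target);
  }
}
```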
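The workers themselves stay tiny, which is where the RAM goal is won: each one is a single blocking call. This weaken.js is my assumption of the shape, matching the scheduler sketch above.

```js
/** weaken.js — assumed worker: a single blocking call keeps per-thread
 *  RAM at the floor (1.6GB script base + 0.15GB for weaken). @param {NS} ns */
export async function main(ns) {
  await ns.weaken(ns.args[0]); // target host passed in by the distributor
}
```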