This PR implements a worker thread pool. It uses a fixed number of threads in a pool and allows scheduling tasks
that can be run on those threads (and then waited for). It satisfies the following use cases:
* On HTML5 the thread count is fixed (and similar restrictions are known on consoles), so threads need to be reused.
* Thread spawning is slow in general, so reusing threads is faster anyway.
* This implementation supports recursive waiting for tasks, making it less prone to deadlocks when tasks running on pool threads themselves wait for other tasks.
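To illustrate the technique (this is a minimal sketch, not Godot's actual implementation, and all names are hypothetical), here is a fixed-size pool with task IDs and recursive waiting: a pool thread that waits on an unfinished task keeps executing other pending tasks instead of blocking.

```cpp
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <unordered_set>
#include <vector>

class WorkerPool {
public:
	using TaskID = uint64_t;

	explicit WorkerPool(size_t p_thread_count) {
		for (size_t i = 0; i < p_thread_count; i++) {
			threads.emplace_back([this] { worker_loop(); });
		}
	}

	~WorkerPool() {
		// Note: queued-but-unstarted tasks are dropped on shutdown in this sketch.
		{
			std::lock_guard<std::mutex> lock(mutex);
			exiting = true;
		}
		tasks_cv.notify_all();
		for (std::thread &t : threads) {
			t.join();
		}
	}

	TaskID add_task(std::function<void()> p_fn) {
		std::lock_guard<std::mutex> lock(mutex);
		TaskID id = next_id++;
		queue.push_back({ id, std::move(p_fn) });
		tasks_cv.notify_one();
		return id;
	}

	// Blocks until the task finishes. If called from a pool thread, it keeps
	// executing other pending tasks while waiting (recursive waiting), so the
	// pool cannot deadlock on tasks that wait for sub-tasks.
	void wait_for_task(TaskID p_id) {
		std::unique_lock<std::mutex> lock(mutex);
		while (!done.count(p_id)) {
			if (is_pool_thread && !queue.empty()) {
				run_one_locked(lock); // Help out instead of sleeping.
			} else {
				done_cv.wait(lock);
			}
		}
		done.erase(p_id);
	}

private:
	struct Task {
		TaskID id;
		std::function<void()> fn;
	};

	void worker_loop() {
		is_pool_thread = true;
		std::unique_lock<std::mutex> lock(mutex);
		while (true) {
			tasks_cv.wait(lock, [this] { return exiting || !queue.empty(); });
			if (exiting) {
				return;
			}
			run_one_locked(lock);
		}
	}

	// Pops and runs one task; the lock is held on entry and exit.
	void run_one_locked(std::unique_lock<std::mutex> &p_lock) {
		Task task = std::move(queue.front());
		queue.pop_front();
		p_lock.unlock();
		task.fn();
		p_lock.lock();
		done.insert(task.id);
		done_cv.notify_all();
	}

	std::mutex mutex;
	std::condition_variable tasks_cv;
	std::condition_variable done_cv;
	std::deque<Task> queue;
	std::unordered_set<TaskID> done; // IDs of finished tasks, kept until waited for.
	std::vector<std::thread> threads;
	TaskID next_id = 1;
	bool exiting = false;
	inline static thread_local bool is_pool_thread = false;
};
```

The real class is more featureful; this sketch only shows why recursive waiting avoids the classic deadlock where all pool threads block waiting on queued sub-tasks that no free thread is left to run.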
After this is approved and merged, subsequent PRs will be needed to replace ThreadWorkPool usage with this class.
This PR is a continuation of #50381 (which was implemented exactly a year ago!)
* Add a visual interface to select which classes should not be built into Godot (well, they are still compiled if something else uses them, but if unused the optimizer will strip them out).
* Add a detection system to scan the project and figure out which classes are actually used.
* Add the ability for SCons to load build profiles.
A simple test with a couple of nodes in the scene resulted in a 25% reduction in the final binary size.
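As a toy illustration of the detection idea (not the actual editor implementation), the sketch below scans text-based scene files for node type declarations, which appear as `type="ClassName"` in `.tscn` files, and collects the set of used classes. The file names are made up, and real detection would also need to cover scripts and resources.

```cpp
#include <fstream>
#include <iostream>
#include <regex>
#include <set>
#include <string>
#include <vector>

std::set<std::string> collect_used_classes(const std::vector<std::string> &p_scene_paths) {
	// Matches e.g.: [node name="Player" type="CharacterBody2D" parent="."]
	static const std::regex type_re("type=\"([A-Za-z0-9_]+)\"");
	std::set<std::string> used;
	for (const std::string &path : p_scene_paths) {
		std::ifstream f(path);
		std::string line;
		while (std::getline(f, line)) {
			std::smatch m;
			if (std::regex_search(line, m, type_re)) {
				used.insert(m[1]);
			}
		}
	}
	return used;
}

int main() {
	// Hypothetical scene files; the result is the set of classes to keep.
	for (const std::string &cls : collect_used_classes({ "main.tscn", "player.tscn" })) {
		std::cout << cls << "\n";
	}
}
```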
TODO:
* Script languages need to implement used class detection (left for another PR).
* Options to disable servers or server functionality (like 2D or 3D physics, navigation, etc.) are missing; those should also greatly aid in reducing binary size.
* Options to disable some modules would be desirable.
* More options to disable drivers (OpenGL, Vulkan, etc.) would also be desirable.
In general, this PR is a starting point for more contributors to improve and extend this functionality.
Read/write ops for this implementation are done through the Java layer via JNI, so for good performance it's key to avoid numerous repeated small read/write ops, due to the per-call JNI overhead.
The alternative is to allocate a (conservatively sized) large buffer to reduce the number of read/write ops crossing the JNI boundary.
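A minimal sketch of the buffering idea, assuming a hypothetical `jni_read()` that stands in for the expensive crossing into Java (the real code calls Java methods over JNI):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stand-in for a read that crosses the JNI boundary; each call
// pays a fixed overhead, so fewer, larger calls are cheaper. This stub
// simulates a 1 MiB stream of zero bytes.
size_t jni_read(uint8_t *p_dst, size_t p_max_bytes) {
	static size_t remaining = 1 << 20;
	size_t n = std::min(p_max_bytes, remaining);
	memset(p_dst, 0, n);
	remaining -= n;
	return n;
}

class BufferedReader {
public:
	explicit BufferedReader(size_t p_buffer_size = 64 * 1024) :
			buffer(p_buffer_size) {}

	// Serve small reads from the local buffer, refilling it with one large
	// JNI call only when it runs dry.
	size_t read(uint8_t *p_dst, size_t p_bytes) {
		size_t total = 0;
		while (total < p_bytes) {
			if (pos == filled) { // Buffer exhausted: one big refill.
				filled = jni_read(buffer.data(), buffer.size());
				pos = 0;
				if (filled == 0) {
					break; // End of stream.
				}
			}
			size_t n = std::min(p_bytes - total, filled - pos);
			memcpy(p_dst + total, buffer.data() + pos, n);
			pos += n;
			total += n;
		}
		return total;
	}

private:
	std::vector<uint8_t> buffer;
	size_t pos = 0; // Next unread byte in the buffer.
	size_t filled = 0; // Valid bytes currently in the buffer.
};
```

With a 64 KiB buffer, reading a file one byte at a time costs one JNI crossing per 65536 reads instead of one per byte.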
MultiplayerSynchronizers can now be configured to limit their visibility to a subset of the connected peers. If the synchronized node was spawned by a MultiplayerSpawner (either automatically or via custom spawn), the given node will also be despawned remotely for peers that cannot see it.
The replication system doesn't have the logic to handle subspawns (nested spawns) directly, but it is possible to handle them appropriately by manually updating the visibility of the parent before changing that of the nested spawns via the "update_visibility" function.
The visibility of each MultiplayerSynchronizer can be controlled by adding or removing filters via "[add|remove]_visibility_filter(callable)".
To further optimize the network code, visibility filters can be configured to be updated automatically during the idle or physics frame, or set to always require a manual update (via the "update_visibility" function).
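To make the mechanism concrete, here is a minimal sketch of the filter idea (not Godot's actual implementation; the names and the team-based filter are made up): the synchronizer keeps a list of filter callbacks, a peer sees the object only if every filter accepts it, and recomputing visibility is where spawn/despawn messages would be emitted.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <set>
#include <vector>

class Synchronizer {
public:
	using Filter = std::function<bool(int64_t p_peer)>;

	void add_visibility_filter(Filter p_filter) {
		filters.push_back(std::move(p_filter));
	}

	// Recompute visibility for every known peer; in the engine this can run
	// every idle/physics frame or only on manual calls.
	void update_visibility(const std::vector<int64_t> &p_peers) {
		for (int64_t peer : p_peers) {
			bool visible = true;
			for (const Filter &f : filters) {
				visible = visible && f(peer); // All filters must pass.
			}
			bool was_visible = visible_peers.count(peer) != 0;
			if (visible && !was_visible) {
				visible_peers.insert(peer);
				std::cout << "spawn on peer " << peer << "\n";
			} else if (!visible && was_visible) {
				visible_peers.erase(peer);
				std::cout << "despawn on peer " << peer << "\n";
			}
		}
	}

private:
	std::vector<Filter> filters;
	std::set<int64_t> visible_peers;
};

int main() {
	Synchronizer sync;
	// Toy filter: only peers with odd IDs ("team 1") see the node.
	sync.add_visibility_filter([](int64_t p_peer) { return p_peer % 2 == 1; });
	sync.update_visibility({ 1, 2, 3 }); // Spawns on peers 1 and 3 only.
}
```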
This was done by refactoring directory and file access handling for the Android platform so that all general filesystem access goes through the Android layer.
This allows us to validate whether the access is unrestricted or whether it falls under scoped storage, and thus act appropriately.
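A minimal sketch of that routing, with hypothetical names and a toy classification heuristic; in the real code the check happens in the Java layer and is queried over JNI:

```cpp
#include <cstdio>
#include <string>

enum class AccessType {
	UNRESTRICTED,
	SCOPED,
};

// Hypothetical stand-in for the query to the Android layer (done over JNI in
// the real code). The prefix check is a toy heuristic: app-internal storage
// is unrestricted, everything else is treated as scoped storage.
AccessType classify_path(const std::string &p_path) {
	return p_path.rfind("/data/user/", 0) == 0 ? AccessType::UNRESTRICTED : AccessType::SCOPED;
}

// Single entry point for file access: validate first, then act accordingly.
FILE *open_file(const std::string &p_path, const char *p_mode) {
	switch (classify_path(p_path)) {
		case AccessType::UNRESTRICTED:
			return fopen(p_path.c_str(), p_mode); // Direct native access is fine.
		case AccessType::SCOPED:
			// The real implementation would route this through the Android
			// storage APIs via JNI instead of opening the file directly.
			fprintf(stderr, "scoped storage path, deferring to Java layer: %s\n", p_path.c_str());
			return nullptr;
	}
	return nullptr;
}
```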