Initial octahedral compression incorrectly wrote the tangent to the buffer
using an offset of 3 rather than 4, losing the sign of the tangent
vector, which is needed for things like tangent-space texture mapping
GLES3 renderer used `remove_custom_define` rather than `set_conditional` to
restore the scene shader to the default conditional state it should be in
Implement Octahedral Compression for normal/tangent vectors
* Oct32 for uncompressed vectors
* Oct16 for compressed vectors
Reduces vertex size for each attribute by:
* Uncompressed: 12 bytes, vec4<float32> -> vec2<unorm16>
* Compressed: 2 bytes, vec4<unorm8> -> vec2<unorm8>
Binormal sign is encoded in the y coordinate of the encoded tangent
Added conversion functions to go from octahedral mapping to cartesian
for normal and tangent vectors (a sketch of the mapping follows below)
sprite_3d and soft_body meshes write to their vertex buffer memory
directly and need to convert their normals and tangents to the new oct
format before writing
Created a new mesh flag to specify whether a mesh is using octahedral
compression or not
Updated documentation to discuss new flag/defaults
Created shader flags to specify whether octahedral or cartesian vectors
are being used
Updated importers to use octahedral representation as the default format
for importing meshes
Updated ShaderGLES2 to support 64-bit version codes, as we hit the limit
of the 32-bit integer that was previously used as a bitset to store
enabled/disabled flags
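For reference, a minimal sketch of the standard octahedral mapping behind the conversion functions mentioned above; the engine's actual function names, quantization, and tangent-sign packing differ:

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Project a unit vector onto the octahedron, then fold the lower
// hemisphere over the upper one so the result fits in a 2D square.
Vec2 oct_encode(Vec3 n) {
    float inv_l1 = 1.0f / (std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z));
    Vec2 p = { n.x * inv_l1, n.y * inv_l1 };
    if (n.z < 0.0f) {
        Vec2 folded = {
            (1.0f - std::fabs(p.y)) * (p.x >= 0.0f ? 1.0f : -1.0f),
            (1.0f - std::fabs(p.x)) * (p.y >= 0.0f ? 1.0f : -1.0f)
        };
        p = folded;
    }
    // Components land in [-1, 1]; they are remapped to [0, 1] and
    // quantized to unorm16/unorm8 when written to the vertex buffer.
    return p;
}

// Inverse mapping back to a (re-normalized) cartesian vector.
Vec3 oct_decode(Vec2 p) {
    Vec3 n = { p.x, p.y, 1.0f - std::fabs(p.x) - std::fabs(p.y) };
    if (n.z < 0.0f) {
        float nx = (1.0f - std::fabs(n.y)) * (n.x >= 0.0f ? 1.0f : -1.0f);
        float ny = (1.0f - std::fabs(n.x)) * (n.y >= 0.0f ? 1.0f : -1.0f);
        n.x = nx;
        n.y = ny;
    }
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}
```

For tangents, the binormal sign is additionally folded into the stored y component, as noted above.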
Implemented splitting of vertex positions and attributes in the vertex
buffer
Positions are sequential at the start of the buffer, followed by the
additional attributes which are interleaved
Made a project setting which enables/disables this buffer format
throughout the project
Implemented in both GLES2 and GLES3
This improves performance, particularly on tile-based GPUs, as well as
cache performance for passes like shadow mapping that only need
position data
Updated Docs and Project Setting
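As a rough sketch of the resulting layout (the sizes and names here are hypothetical, not the engine's actual constants):

```cpp
#include <cstdint>

// Hypothetical per-vertex sizes, in bytes.
const uint32_t POSITION_SIZE = 12;  // e.g. vec3<float32>
const uint32_t ATTRIB_STRIDE = 20;  // remaining interleaved attributes

// Positions are tightly packed at the start of the buffer, so passes
// that only need positions (e.g. shadow mapping) touch one contiguous block.
uint32_t position_offset(uint32_t i) {
    return i * POSITION_SIZE;
}

// The interleaved attributes begin after the whole position block.
uint32_t attributes_offset(uint32_t i, uint32_t vertex_count) {
    return vertex_count * POSITION_SIZE + i * ATTRIB_STRIDE;
}
```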
This is an older, easier-to-implement variant of CAS (contrast-adaptive
sharpening), written as a pure fragment shader. It doesn't support
upscaling, but we don't make use of upscaling anyway (at least for now).
The sharpening intensity can be adjusted on a per-Viewport basis.
For the root viewport, it can be adjusted in the Project Settings.
Since `textureLodOffset()` isn't available in GLES2, there is no
way to support contrast-adaptive sharpening in GLES2.
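For illustration, a simplified CPU sketch of a contrast-adaptive sharpening kernel on a single-channel image; the real implementation is a GLSL fragment shader, and AMD's exact weighting formula differs:

```cpp
#include <algorithm>
#include <cmath>

float sample(const float *img, int w, int h, int x, int y) {
    x = std::clamp(x, 0, w - 1);
    y = std::clamp(y, 0, h - 1);
    return img[y * w + x];
}

// sharpness in [0, 1]; 0 disables the effect.
float cas_pixel(const float *img, int w, int h, int x, int y, float sharpness) {
    // Cross-shaped neighborhood around the current pixel.
    float c = sample(img, w, h, x, y);
    float n = sample(img, w, h, x, y - 1);
    float s = sample(img, w, h, x, y + 1);
    float e = sample(img, w, h, x + 1, y);
    float wst = sample(img, w, h, x - 1, y);

    // The "contrast-adaptive" part: the closer the local min/max get to
    // the representable range, the less sharpening is applied.
    float mn = std::min({c, n, s, e, wst});
    float mx = std::max({c, n, s, e, wst});
    float amount = std::sqrt(std::max(0.0f, std::min(mn, 1.0f - mx) / std::max(mx, 1e-5f)));

    // Negative-lobe weight for the four neighbors, scaled by intensity.
    float wgt = -amount * sharpness * 0.125f;
    return (c + (n + s + e + wst) * wgt) / (1.0f + 4.0f * wgt);
}
```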
Add two new functions to the IP class that return all addresses/aliases associated with a given address.
This is a cherry-pick merge from 010a3433df, which was merged in 2.1, and has been updated to build with the latest code.
This merge adds two new methods, IP.resolve_hostname_addresses and IP.get_resolve_item_addresses, that return a List of all addresses returned by the DNS request.
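A hedged usage sketch on the C++ side, assuming the internal method returns a `List<IP_Address>` (the exact signatures that were merged may differ):

```cpp
// Resolve every address for a host, not just the first one.
// Assumes: List<IP_Address> IP::resolve_hostname_addresses(const String &).
List<IP_Address> addresses = IP::get_singleton()->resolve_hostname_addresses("example.com");
for (List<IP_Address>::Element *E = addresses.front(); E; E = E->next()) {
    print_line(String(E->get()));
}
```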
The error check was added for `FileAccessUnix`, but it's not an error when both
`p_src` and `p_length` are zero.
Added correct error checks to all implementations to prevent the actual
erroneous case: `p_src` is nullptr but `p_length > 0` (risk of null pointer
indexing).
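The corrected guard, sketched with Godot's error-check macro (the exact signature and write path may differ from the real implementations):

```cpp
void FileAccessUnix::store_buffer(const uint8_t *p_src, int p_length) {
    // A null source is only an error when there is actually data to write.
    ERR_FAIL_COND(!p_src && p_length > 0);
    if (p_length == 0) {
        return; // nothing to do, and not an error
    }
    fwrite(p_src, 1, p_length, f); // `f` is the underlying FILE*
}
```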
Fixes #33564.
(cherry picked from commit 01d5c463be)
This changes the types of a large number of variables.
General rules:
- Using `uint64_t` in general. We also considered `int64_t` but eventually
settled on keeping it unsigned, which is also closer to what one would expect
with `size_t`/`off_t`.
- We only keep `int64_t` for `seek_end` (takes a negative offset from the end)
and for the `Variant` bindings, since `Variant::INT` is `int64_t`. This means
we only need to guard against passing negative values in `core_bind.cpp`.
- Using `uint32_t` integers for concepts not needing such a huge range, like
pages, blocks, etc.
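Sketched, these rules yield an interface roughly like this (not the exact header):

```cpp
#include <cstdint>

class FileAccess {
public:
    virtual void seek(uint64_t p_position) = 0;
    virtual void seek_end(int64_t p_position = 0) = 0; // negative offset from the end
    virtual uint64_t get_position() const = 0;
    virtual uint64_t get_len() const = 0;
    virtual uint64_t get_buffer(uint8_t *p_dst, uint64_t p_length) const = 0;
    virtual void store_buffer(const uint8_t *p_src, uint64_t p_length) = 0;
};
```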
In addition:
- Improve usage of integer types in some related places; namely, `DirAccess`,
core binds.
Note:
- On Windows, `_ftelli64` reports invalid values when using 32-bit MinGW with
version < 8.0. This was an upstream bug fixed in 8.0. It breaks support for
big files on 32-bit Windows builds made with that toolchain. We might add a
workaround.
Fixes #44363.
Fixes godotengine/godot-proposals#400.
Co-authored-by: Rémi Verschelde <rverschelde@gmail.com>
In the legacy renderer, unrigged polys would display with no transform applied, whereas software skinning didn't deal with these at all (it output them at position zero). This PR simply copies the source verts to the destination verts, replicating the legacy behaviour.
All my earlier test cases for software skinning had the polys' parent transform set to identity. This works fine until you hit cases where the user has moved the transform of the parent nodes of skinned polys.
This PR fixes this situation by taking into account the final (concatenated) transform of the polys RELATIVE to the skeleton base transform. It does this by applying the inverse skeleton base transform to the poly's final transform.
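In code, the fix amounts to something like this (names are illustrative, not the engine's):

```cpp
// Express the poly's final transform relative to the skeleton's base
// transform, so skinning happens in the skeleton's space.
Transform2D skeleton_relative = skeleton_base_xform.affine_inverse() * poly_final_xform;
// The bone transforms then apply in that space.
Vector2 skinned = bone_xform.xform(skeleton_relative.xform(vertex));
```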
Since we clone the environments to build thirdparty code, we don't get an
explicit dependency on the build objects produced by that environment.
So when we update thirdparty code, Godot code using it is not necessarily
rebuilt (I think it is for changed headers, but not for changed .c/.cpp files),
which can lead to an invalid compilation output (linking old Godot .o files
with a newer, potentially ABI breaking version of thirdparty code).
This was only seen as really problematic with bullet updates (leading to
crashes when rebuilding Godot after a bullet update without cleaning .o files),
but it's safer to fix it everywhere, even if it's a LOT of hacky boilerplate.
(cherry picked from commit c7b53c03ae)
We've been using standard C library functions `memcpy`/`memset` for these since
2016 with 67f65f6639.
There was still the possibility for third-party platform ports to override the
definitions with a custom header, but this doesn't seem useful anymore.
Backport of #48239.
The final_modulate was incorrectly being set in the uniform on light passes in GLES3 in situations where color was baked in the vertices. This was already correct in GLES2. This PR prevents setting final_modulate in this situation.
The translation to larger vertex formats was assuming that batches were rects, and not accounting for num_commands having a different meaning for lines and polys, so the calculation of the number of vertices to translate was incorrect in these cases.
Also prevents an infinite loop if a single polygon has too many vertices to fit in the batch buffer.
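One way to picture the fix (the types and the exact meaning of `num_commands` here are illustrative, not the engine's real code):

```cpp
#include <cstdint>

struct Batch {
    enum Type { TYPE_RECT, TYPE_LINE, TYPE_POLY };
    Type type;
    uint32_t num_commands;
};

// Vertices to translate per batch; previously every batch was treated
// as if it contained rects.
uint32_t batch_vertex_count(const Batch &p_batch) {
    switch (p_batch.type) {
        case Batch::TYPE_RECT: return p_batch.num_commands * 4; // 4 corners per rect
        case Batch::TYPE_LINE: return p_batch.num_commands * 2; // 2 endpoints per line
        case Batch::TYPE_POLY: return p_batch.num_commands;     // already a vertex count
    }
    return 0;
}
```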
When users create an invalid shader, the shader->valid flag is set to false. Batching previously assumed that shaders are valid, and this could result in primitives with an invalid shader being joined, causing visual errors.
This PR prevents joining items that have invalid shaders.
Allows users to override the default API usage, in order to get the best performance on different platforms.
Also changes the default legacy flags to use STREAM rather than DYNAMIC.
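Sketched against the GL API (the function and parameter names here are hypothetical; the GL enums are real):

```cpp
#include <GLES2/gl2.h>

// Pick the usage hint for the batching vertex buffer, defaulting the
// legacy flags to STREAM rather than DYNAMIC.
void upload_vertex_buffer(GLuint buffer, GLsizeiptr size, bool p_use_stream) {
    GLenum usage = p_use_stream ? GL_STREAM_DRAW : GL_DYNAMIC_DRAW;
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferData(GL_ARRAY_BUFFER, size, nullptr, usage); // orphan, then fill
}
```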
When using modulate_fvf, final_modulate was still being applied on the CPU in some circumstances, AS WELL as in the shader. This double application resulted in the wrong color.
This PR prevents CPU multiplication of final_modulate when modulate_fvf is in use.
It also ORs each item's flags into the joined item flags as items are joined. This fixes a bug where a smaller FVF was used than required, resulting in incorrect colors.
In rare cases, default batches could occur containing commands that were not owned by the first item referenced by the joined item. This had been assumed to be the case, and the code would read the wrong command, or crash.
Instead, for safety, this PR stores a pointer to the parent item in default batches, and uses this to determine the correct command list instead of assuming.
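Roughly (member names are illustrative, not the engine's):

```cpp
#include <cstdint>

struct Item; // canvas item owning a command list

// Default batches now record which item owns their commands instead of
// assuming it is the first item referenced by the joined item.
struct BatchDefault {
    uint32_t first_command;
    uint32_t num_commands;
    Item *parent_item; // used to look up the correct command list
};
```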
An earlier PR #46898 had flipped the rotation basis polarity. This turns out to also need a corresponding flip for the light angles for the lighting to make sense.
The editor, under certain circumstances, passes invalid polys to the renderer. This should be fixed upstream, but just in case, this PR adds fault tolerance for invalid indices.
Trying to use the old `hardware_transform` flag in combination with the new large_fvf has led to several bugs. So here the logic is broken out into 2 separate components, single item and large_fvf.
The old `hardware_transform` name also no longer makes sense, as there are now 3 transform paths:
Software (CPU)
Hardware (uniform)
Hardware (attribute)
Large FVF, which encodes the transform in a vertex attribute, is triggered by reading from VERTEX in a custom shader. This means that the local vertex position must be available in the shader, so the only way to batch is to also pass the transform as an attribute.
The large FVF path already disabled CPU transform in the case of rects, but not in other primitives, which this PR fixes.
Note that large FVF is incompatible with 2d software skinning. So reading from VERTEX in a custom shader when using skinning will not work.
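The separated state looks roughly like this (names are illustrative, not the engine's):

```cpp
// The three transform paths now in play.
enum TransformMode {
    TM_SOFTWARE,           // verts pre-transformed on the CPU
    TM_HARDWARE_UNIFORM,   // transform sent as a uniform (single item per call)
    TM_HARDWARE_ATTRIBUTE, // large FVF: transform packed per-vertex
};

// The two concerns previously conflated in `hardware_transform`.
struct ItemBatchingState {
    bool single_item; // item cannot be merged with others
    bool large_fvf;   // custom shader reads VERTEX, needs the attribute path
};
```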
- Fix objects with no material being considered as fully transparent by the lightmapper.
- Added "environment_min_light" property: gives artistic control over the shadow color.
- Fixed "Custom Color" environment mode, it was ignored before.
- Added "interior" property to BakedLightmapData: controls whether dynamic capture objects receive environment light or not.
- Automatically update dynamic capture objects when the capture data changes (also works for "energy" which used to require object movement to trigger the update).
- Added "use_in_baked_light" property to GridMap: controls whether the GridMap will be included in BakedLightmap bakes.
- Set "flush zero" and "denormal zero" mode for SSE2 instructions in the Embree raycaster. According to Embree docs it should give a performance improvement.
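The flush-zero/denormal-zero setup on that last point is the standard SSE pair of intrinsics; a sketch:

```cpp
#include <xmmintrin.h> // _MM_SET_FLUSH_ZERO_MODE
#include <pmmintrin.h> // _MM_SET_DENORMALS_ZERO_MODE

// Treat denormal floats as zero during raycasting; the Embree docs
// recommend this for performance. Applies to the calling thread.
void set_fast_float_mode() {
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}
```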
As part of the improvements to batch more cases, batching can store final_modulate as an attribute in the vertex format rather than sending as a uniform. This allows draw calls with different final_modulate to be batched together.
However, custom shader code was reading only from the final_modulate uniform, and not from the attribute when it was in use, which led to visual errors.
This is tricky to solve, because we cannot use the same name for the attribute in the vertex and fragment shaders (one is an attribute and one is a varying, whereas a uniform is accessible anywhere). To get around this, a macro is used which translates to the most appropriate variable, depending on whether a uniform, attribute, or varying is required.
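The trick looks roughly like this (GLSL shares C's preprocessor; the define and variable names here are illustrative, not the engine's):

```cpp
// Custom shader code always refers to one alias; the preprocessor maps
// it to whichever source of final_modulate is active for this pass.
#ifdef MODULATE_IS_ATTRIBUTE
// attribute in the vertex shader, matching varying in the fragment shader
#define final_modulate_alias modulate_interp
#else
#define final_modulate_alias final_modulate // the plain uniform
#endif
```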