This provides more realistic lighting with a very small performance cost.
The option is available in both GLES3 and GLES2, and can be enabled in
the Project Settings. This goes well with the ACES Fitted tonemapping mode
that was recently added.
When enabled, this also makes upgrading Godot 3.x projects to Godot 4.0 easier,
since lighting in 3.x will better match how it'll look in Godot 4.0.
Change the existing DEV_ASSERT macro to be switched on and off by the DEV_ENABLED define. DEV_ASSERT breaks into the debugger as soon as it is hit.
Add error macros DEV_CHECK and DEV_CHECK_ONCE as an alternative check that prints an error via ERR_PRINT when a condition fails, again only enabled in DEV_ENABLED builds.
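A simplified sketch of the pattern, not the exact engine definitions (the real macros differ in wording and also provide the `_ONCE` variant):

```cpp
#ifdef DEV_ENABLED
#define DEV_ASSERT(m_cond)                                                  \
	if (unlikely(!(m_cond))) {                                              \
		ERR_PRINT("DEV_ASSERT failed: \"" _STR(m_cond) "\" is false.");     \
		GENERATE_TRAP; /* break into the debugger as soon as it is hit */   \
	} else                                                                  \
		((void)0)

#define DEV_CHECK(m_cond)                                                   \
	if (unlikely(!(m_cond))) {                                              \
		ERR_PRINT("DEV_CHECK failed: \"" _STR(m_cond) "\" is false.");      \
	} else                                                                  \
		((void)0)
#else
/* Compiled out entirely in non-DEV builds, so there is zero runtime cost. */
#define DEV_ASSERT(m_cond)
#define DEV_CHECK(m_cond)
#endif
```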
- Update mesh_surface_get_format_stride and mesh_surface_make_offsets_from_format to return an array of offsets and an array of strides in order to support vertex stream splitting.
- Update _get_array_from_surface to also support vertex stream splitting.
- Add a condition on split stream usage to ensure it does not get used on dynamic meshes.
- Handle the case where Tangent is compressed but Normal is not.
- Make the stream splitting option require a restart in the settings.
- Update SoftBody and Sprite3D to support and use the strides and offsets returned by the updated visual_server functions.
- Update Sprite3D to use the dynamic mesh flag.
This is only available on the GLES3 backend.
This can be useful for advanced shaders, but it should generally
not be enabled otherwise as full precision has a performance cost.
For general-purpose rendering, the built-in debanding filter should
be used to reduce banding instead.
This restores Windows platform file handling back to opening files non-exclusively by default, as was the case before October 2018. (See b902a2f2a7)
Back then, while fixing warnings for MSVC, the function used for opening files was changed from _wfopen() to _wfopen_s(), as suggested by warning C4996. ("This function may be unsafe, consider using _wfopen_s instead.")
This new function:
1. did parameter validation, and thus avoided some possible security issues due to nil pointers or wrongly terminated strings;
2. also changed the default file sharing for opened files from _SH_DENYNO (which was the implicit default for the previous _wfopen()) to _SH_SECURE.
_SH_DENYNO means every opened file can be opened by other calls (as is the default on other operating systems).
_SH_SECURE means that if the file is opened with READ access, others can still read the same file, but if it is opened with WRITE access, others can't open it at all, not even to read.
This led to rarely occurring bugs on Windows, e.g. due to random access by antivirus processes, or Godot/Windows not closing a file handle fast enough while trying to open it again elsewhere (e.g. project.godot, which would show the Project Manager instead, or when saving shaders/debugging the game).
What this PR does is change the file access to a third method, _wfsopen(). This is still secure, doing parameter validation and thus avoiding the warning, but it allows us to actually SET the file sharing parameter. We set it to _SH_DENYNO, as it implicitly was before the change. (And as it currently is on all non-Windows platforms, where file sharing restrictions don't exist by default.)
Warning C4996 should really have been pointing this out. It should've been _wfsopen() all along. Let's hope this banishes those annoying, rare errors for all eternity.
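In essence, the change amounts to something like the following sketch (the `path`/`mode` values are placeholders; the real code builds the mode string from the requested access flags):

```cpp
#include <share.h> // _SH_DENYNO
#include <stdio.h> // _wfsopen / _wfopen_s

const wchar_t *path = L"project.godot";
const wchar_t *mode = L"rb";

// Before: secure variant, but the sharing mode is forced to _SH_SECURE.
// FILE *f = nullptr;
// errno_t err = _wfopen_s(&f, path, mode);

// After: still validates parameters, but lets us pick the sharing mode explicitly.
FILE *f = _wfsopen(path, mode, _SH_DENYNO); // allow other handles to the same file
```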
Fixes #28036.
(cherry picked from commit b48cbb5da9)
This backports the high quality glow mode from the `master` branch.
Previously, during downsample, every second row was ignored.
Now, when high-quality is used, we sample two rows at once to ensure
that no pixel is missed. It is slower, but looks much better and has
a much higher stability while moving.
High quality also takes an additional horizontal sample so that the width of the
horizontal blur matches the height of the vertical blur.
With octahedral compression, we had attributes with a size of 2 bytes,
which potentially caused performance regressions on iOS/Mac.
Now we add padding to the normal/tangent buffer.
For octahedral, the normal will always be oct32 encoded
UNLESS the tangent exists and is also compressed;
in that case both are oct16 encoded and packed into a vec4<GL_BYTE>
attribute.
The initial octahedral compression incorrectly wrote the tangent to the buffer
using an offset of 3 rather than 4, losing the sign of the tangent
vector needed for things like tangent space for texture mapping.
The GLES3 renderer used remove_custom_define rather than set_conditional to
change the scene shader back to the default conditional state it should be
in.
Implement Octahedral Compression for normal/tangent vectors
* Oct32 for uncompressed vectors
* Oct16 for compressed vectors
Reduces vertex size for each attribute by:
* Uncompressed: 12 bytes, vec4<float32> -> vec2<unorm16>
* Compressed: 2 bytes, vec4<unorm8> -> vec2<unorm8>
Binormal sign is encoded in the y coordinate of the encoded tangent
Added conversion functions to go from octahedral mapping to cartesian
for normal and tangent vectors
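For reference, the standard octahedral encoding looks roughly like this (a conceptual sketch using Godot's Vector2/Vector3 math types, not the exact engine code; the engine also remaps the result to the storage formats listed above):

```cpp
// Maps a unit vector to the [-1, 1]^2 square; the result is then quantized to
// 16-bit components (oct32) or 8-bit components (oct16) in the vertex buffer.
Vector2 octahedron_encode(Vector3 n) {
	// Project onto the octahedron by L1-normalizing.
	n /= Math::abs(n.x) + Math::abs(n.y) + Math::abs(n.z);
	Vector2 o;
	if (n.z >= 0.0f) {
		o = Vector2(n.x, n.y);
	} else {
		// Fold the lower hemisphere over the diagonals.
		o.x = (1.0f - Math::abs(n.y)) * (n.x >= 0.0f ? 1.0f : -1.0f);
		o.y = (1.0f - Math::abs(n.x)) * (n.y >= 0.0f ? 1.0f : -1.0f);
	}
	return o;
}
```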
sprite_3d and soft_body meshes write to their vertex buffer memory
directly and need to convert their normals and tangents to the new oct
format before writing
Created a new mesh flag to specify whether a mesh is using octahedral
compression or not
Updated documentation to discuss new flag/defaults
Created shader flags to specify whether octahedral or cartesian vectors
are being used
Updated importers to use octahedral representation as the default format
for importing meshes
Updated ShaderGLES2 to support 64-bit version codes as we hit the limit
of the 32-bit integer that was previously used as a bitset to store
enabled/disabled flags
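The gist of the limit, as an illustrative fragment (not the actual ShaderGLES2 code):

```cpp
int conditional_index = 40;                       // e.g. a conditional beyond the first 32
uint64_t version_code = 0;
version_code |= uint64_t(1) << conditional_index; // a 32-bit bitset (1u << i) overflows past bit 31
```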
Implemented splitting of vertex positions and attributes in the vertex
buffer
Positions are sequential at the start of the buffer, followed by the
additional attributes which are interleaved
Made a project setting which enables/disables the buffer formatting
throughout the project
Implemented in both GLES2 and GLES3
This improves performance particularly on tile-based GPUs as well as
cache performance for something like shadow mapping which only needs
position data
Updated Docs and Project Setting
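Conceptually, the split layout looks like this (sizes and names below are illustrative, not the exact engine format):

```cpp
// Split-stream vertex buffer:
//   [ P0 P1 P2 ... Pn-1 | A0 A1 A2 ... An-1 ]
// Positions are tightly packed at the start; the remaining attributes stay
// interleaved in a second block, so position-only passes (e.g. shadow maps)
// read a smaller, contiguous range of memory.
void example_split_layout(uint32_t vertex_count) {
	const uint32_t position_stride = 3 * sizeof(float);                   // stride within the position block
	const uint32_t normal_size = 4, color_size = 4, uv_size = 8;          // example attribute sizes in bytes
	const uint32_t attribute_stride = normal_size + color_size + uv_size; // stride within the attribute block
	const uint32_t attributes_base = position_stride * vertex_count;      // attributes start after all positions
	const uint32_t color_offset = attributes_base + normal_size;          // e.g. offset of vertex 0's color
}
```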
This is an older, easier to implement variant of CAS as a pure
fragment shader. It doesn't support upscaling, but we won't make
use of it (at least for now).
The sharpening intensity can be adjusted on a per-Viewport basis.
For the root viewport, it can be adjusted in the Project Settings.
Since `textureLodOffset()` isn't available in GLES2, there is no
way to support contrast-adaptive sharpening in GLES2.
Add two new functions to the IP class that return all addresses/aliases associated with a given address.
This is a cherry-pick merge from 010a3433df which was merged in 2.1, and has been updated to build with the latest code.
This merge adds two new methods, IP.resolve_hostname_addresses and IP.get_resolve_item_addresses, that return a List of all addresses returned from the DNS request.
The error check was added for `FileAccessUnix` but it's not an error when both
`p_src` and `p_length` are zero.
Added correct error checks to all implementations to prevent the actual
erroneous case: `p_src` is nullptr but `p_length > 0` (risk of null pointer
indexing).
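A sketch of the kind of guard described, using Godot's error macros (the enclosing function and its signature are illustrative):

```cpp
void store_buffer(const uint8_t *p_src, int p_length) {
	// (nullptr, 0) is a valid no-op; only a null source with a non-zero length is an error.
	ERR_FAIL_COND(!p_src && p_length > 0);
	// ... write p_length bytes from p_src ...
}
```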
Fixes #33564.
(cherry picked from commit 01d5c463be)
This changes the types of a large number of variables.
General rules:
- Using `uint64_t` in general. We also considered `int64_t` but eventually
settled on keeping it unsigned, which is also closer to what one would expect
with `size_t`/`off_t`.
- We only keep `int64_t` for `seek_end` (takes a negative offset from the end)
and for the `Variant` bindings, since `Variant::INT` is `int64_t`. This means
we only need to guard against passing negative values in `core_bind.cpp`.
- Using `uint32_t` integers for concepts not needing such a huge range, like
pages, blocks, etc.
In addition:
- Improve usage of integer types in some related places; namely, `DirAccess`,
core binds.
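A simplified sketch of the resulting interface types (illustrative; the actual declarations differ in detail per class):

```cpp
class FileAccess {
public:
	virtual void seek(uint64_t p_position) = 0;        // absolute offsets are unsigned 64-bit
	virtual void seek_end(int64_t p_position = 0) = 0;  // negative offset from the end stays signed
	virtual uint64_t get_position() const = 0;
	virtual uint64_t get_len() const = 0;                // file length
	virtual uint64_t get_buffer(uint8_t *p_dst, uint64_t p_length) const = 0;
	virtual void store_buffer(const uint8_t *p_src, uint64_t p_length) = 0;
};
```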
Note:
- On Windows, `_ftelli64` reports invalid values when using 32-bit MinGW with
version < 8.0. This was an upstream bug fixed in 8.0. It breaks support for
big files on 32-bit Windows builds made with that toolchain. We might add a
workaround.
Fixes #44363.
Fixes godotengine/godot-proposals#400.
Co-authored-by: Rémi Verschelde <rverschelde@gmail.com>
In the legacy renderer, unrigged polys would display with no transform applied, whereas the software skinning didn't deal with these at all (it output them with position zero). This PR simply copies the source verts to the destination verts, replicating the legacy behaviour.
All my earlier test cases for software skinning had the polys' parent transform set to identity. This works fine until you hit cases where the user has moved the transform of the parent nodes of skinned polys.
This PR fixes this situation by taking into account the final (concatenated) transform of the polys RELATIVE to the skeleton base transform. It does this by applying the inverse skeleton base transform to the poly's final transform.
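A minimal sketch of the idea (variable names and accessors are hypothetical; Transform2D is Godot's 2D transform type):

```cpp
// Express the poly's final (concatenated) transform relative to the skeleton base.
Transform2D skeleton_base_xform = skeleton->get_global_transform(); // hypothetical accessor
Transform2D poly_final_xform = item_final_transform;                // concatenated parent transforms
Transform2D relative_xform = skeleton_base_xform.affine_inverse() * poly_final_xform;
// Software skinning then operates on verts expressed in this skeleton-relative space.
```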
Since we clone the environments to build thirdparty code, we don't get an
explicit dependency on the build objects produced by that environment.
So when we update thirdparty code, Godot code using it is not necessarily
rebuilt (I think it is for changed headers, but not for changed .c/.cpp files),
which can lead to an invalid compilation output (linking old Godot .o files
with a newer, potentially ABI breaking version of thirdparty code).
This was only seen as really problematic with bullet updates (leading to
crashes when rebuilding Godot after a bullet update without cleaning .o files),
but it's safer to fix it everywhere, even if it's a LOT of hacky boilerplate.
(cherry picked from commit c7b53c03ae)
We've been using standard C library functions `memcpy`/`memset` for these since
2016 with 67f65f6639.
There was still the possibility for third-party platform ports to override the
definitions with a custom header, but this doesn't seem useful anymore.
Backport of #48239.
The final_modulate was incorrectly being set in the uniform on light passes in GLES3 in situations where color was baked into the vertices. This was already correct in GLES2. This PR prevents setting final_modulate in this situation.
The translation to larger vertex formats was assuming that batches were rects, and not accounting for num_commands having a different meaning for lines and polys, so the calculation of the number of vertices to translate was incorrect in these cases.
Also prevents an infinite loop if a single polygon has too many vertices to fit in the batch buffer.
When users create an invalid shader, the shader->valid flag is set to false. Batching previously assumed that shaders are valid, and this can result in primitives with an invalid shader being joined, causing visual errors.
This PR prevents joining items that have invalid shaders.
Allows users to override default API usage, in order to get best performance on different platforms.
Also changes the default legacy flags to use STREAM rather than DYNAMIC.
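At the GL level, the override boils down to the usage hint passed on upload; a minimal sketch (buffer binding and setup omitted, variable names illustrative):

```cpp
// Legacy batching buffers now default to STREAM, since they are rewritten every frame.
glBufferData(GL_ARRAY_BUFFER, size_bytes, data, GL_STREAM_DRAW);

// Previous default; a different hint can still be chosen via the project setting override.
glBufferData(GL_ARRAY_BUFFER, size_bytes, data, GL_DYNAMIC_DRAW);
```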
When using modulate_fvf, final_modulate was still being applied on CPU in some circumstances, AS WELL as in the shader. This double application resulted in the wrong color.
This PR prevents CPU multiplication of final_modulate when modulate_fvf is in use.
It also ORs the flags of each joined item into the joined item flags. This fixes a bug where a smaller FVF than required was used, resulting in incorrect colors.
In rare cases, default batches could occur containing commands that were not owned by the first item referenced by the joined item. The code had assumed this was always the case, and would read the wrong command, or crash.
Instead, for safety, this PR now stores a pointer to the parent item in default batches, and uses this to determine the correct command list instead of assuming.
An earlier PR #46898 had flipped the rotation basis polarity. This turns out to also need a corresponding flip for the light angles for the lighting to make sense.
The editor under certain circumstances is passing invalid polys to the renderer. This should be fixed upstream but just in case this PR adds fault tolerance for invalid indices.
Trying to make the old `hardware_transform` flag also cover the new large_fvf has led to several bugs. So here the logic is broken out into 2 separate components, single item and large_fvf.
The old `hardware_transform` name also no longer makes sense, as there are now 3 transform paths:
- Software (CPU)
- Hardware (uniform)
- Hardware (attribute)
Large FVF which encodes the transform in a vertex attribute is triggered by reading from VERTEX in a custom shader. This means that the local vertex position must be available in the shader, so the only way to batch is to also pass the transform as an attribute.
The large FVF path already disabled CPU transform in the case of rects, but not in other primitives, which this PR fixes.
Note that large FVF is incompatible with 2d software skinning. So reading from VERTEX in a custom shader when using skinning will not work.
- Fix objects with no material being considered as fully transparent by the lightmapper.
- Added "environment_min_light" property: gives artistic control over the shadow color.
- Fixed "Custom Color" environment mode, it was ignored before.
- Added "interior" property to BakedLightmapData: controls whether dynamic capture objects receive environment light or not.
- Automatically update dynamic capture objects when the capture data changes (also works for "energy" which used to require object movement to trigger the update).
- Added "use_in_baked_light" property to GridMap: controls whether the GridMap will be included in BakedLightmap bakes.
- Set "flush zero" and "denormal zero" mode for SSE2 instructions in the Embree raycaster. According to Embree docs it should give a performance improvement.
As part of the improvements to batch more cases, batching can store final_modulate as an attribute in the vertex format rather than sending as a uniform. This allows draw calls with different final_modulate to be batched together.
However custom shader code was reading from only the final_modulate uniform, and not the attribute when it was in use. This was leading to visual errors.
This is tricky to solve, because we cannot use the same name for the attribute in the vertex and fragment shaders, since one is an attribute and the other a varying, whereas a uniform is accessible anywhere. To get around this, a macro is used which translates to the most appropriate variable depending on whether a uniform, attribute or varying is required.
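A conceptual sketch of the macro indirection in the shader source (the names here are illustrative, not necessarily the actual ones):

```cpp
// When the modulate is batched as a vertex attribute, fragment code reads the
// interpolated varying; otherwise it reads the plain uniform. Custom shader
// code always goes through the alias.
#ifdef USE_ATTRIB_MODULATE
#define FINAL_MODULATE_ALIAS modulate_interp // varying fed from the per-vertex attribute
#else
#define FINAL_MODULATE_ALIAS final_modulate // regular uniform
#endif
```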
This is something that I missed from the initial implementation of large FVF. In large FVF the transform is sent per vertex in an attribute, and the vertex position is the original vertex position. This is so that the original vertex position can be read and modified in a custom shader.
This whole system is therefore incompatible with the legacy hardware transform method, whereby the transform is sent in a uniform. The shader already correctly ignores the uniform transform, but there are some parts of the CPU side logic that can be confused treating large FVF batches as if they were hardware transform.
This PR completes the logic by making the CPU treat large FVF as though it was software transform.