diff --git a/COPYRIGHT.txt b/COPYRIGHT.txt
index 7b212485baa..07b5d4042d8 100644
--- a/COPYRIGHT.txt
+++ b/COPYRIGHT.txt
@@ -120,6 +120,11 @@ Copyright: 2007, Starbreeze Studios
2014-2021, Godot Engine contributors.
License: Expat and Zlib
+Files: ./thirdparty/amd-fsr/
+Comment: AMD FidelityFX Super Resolution
+Copyright: 2021, Advanced Micro Devices, Inc.
+License: Expat
+
Files: ./thirdparty/basis_universal/
Comment: Basis Universal
Copyright: 2019, Binomial LLC.
diff --git a/doc/classes/ProjectSettings.xml b/doc/classes/ProjectSettings.xml
index 69ee51ca996..7fdac7ccd44 100644
--- a/doc/classes/ProjectSettings.xml
+++ b/doc/classes/ProjectSettings.xml
@@ -1506,12 +1506,8 @@
-
- Scales the 3D render buffer based on the viewport size and displays the result with linear filtering. Values lower than [code]1.0[/code] can be used to speed up 3D rendering at the cost of quality (undersampling). Values greater than [code]1.0[/code] can be used to improve 3D rendering quality at a high performance cost (supersampling). See also [member rendering/anti_aliasing/quality/msaa] for multi-sample antialiasing, which is significantly cheaper but only smoothens the edges of polygons.
- [b]Note:[/b] This property is only read when the project starts. To change the 3D rendering resolution scale at runtime, set [member Viewport.scale_3d] instead.
-
- Sets the number of MSAA samples to use (as a power of two). MSAA is used to reduce aliasing around the edges of polygons. A higher MSAA value results in smoother edges but can be significantly slower on some hardware. See also [member rendering/3d/viewport/scale] for supersampling, which provides higher quality but is much more expensive.
+ Sets the number of MSAA samples to use (as a power of two). MSAA is used to reduce aliasing around the edges of polygons. A higher MSAA value results in smoother edges but can be significantly slower on some hardware. See also bilinear 3D scaling ([member rendering/scaling_3d/mode]) for supersampling, which provides higher quality but is much more expensive.
Sets the screen-space antialiasing mode for the default screen [Viewport]. Screen-space antialiasing works by selectively blurring edges in a post-process shader. It differs from MSAA which takes multiple coverage samples while rendering objects. Screen-space AA methods are typically faster than MSAA and will smooth out specular aliasing, but tend to make scenes appear blurry.
@@ -1718,6 +1714,18 @@
Lower-end override for [member rendering/reflections/sky_reflections/texture_array_reflections] on mobile devices, due to performance concerns or driver support.
+
+ Affects the final texture sharpness by reading from a lower or higher mipmap. Negative values make textures sharper, while positive values make textures blurrier. When using FSR, this value is used to adjust the mipmap bias calculated internally, which is based on the selected quality. The formula for this is [code]-log2(1.0 / scale) + mipmap_bias[/code].
+
+
+ Determines how sharp the upscaled image will be when using the FSR upscaling mode. Sharpness halves with every whole number increase. Values range from 0.0 (sharpest) to 2.0; values above 2.0 won't make a visible difference.
+
+
+ Sets the scaling 3D mode. Bilinear scaling renders at a different resolution to either undersample or supersample the viewport. FidelityFX Super Resolution 1.0, abbreviated to FSR, is an upscaling technology that produces high-quality images at fast framerates by using a spatially aware upscaling algorithm. FSR is slightly more expensive than bilinear scaling, but it produces significantly higher image quality and should be used where possible.
+
+
+ Scales the 3D render buffer based on the viewport size and uses the image filter specified in [member rendering/scaling_3d/mode] to scale the output image to the full viewport size. Values lower than [code]1.0[/code] can be used to speed up 3D rendering at the cost of quality (undersampling). Values greater than [code]1.0[/code] are only valid for bilinear mode and can be used to improve 3D rendering quality at a high performance cost (supersampling). See also [member rendering/anti_aliasing/quality/msaa] for multi-sample antialiasing, which is significantly cheaper but only smoothens the edges of polygons.
+
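The mipmap bias formula above can be sanity-checked with a small standalone calculation. This is an illustrative sketch, not part of the patch; the helper name combined_fsr_mipmap_bias and the sample scale/bias values are made up for the example (the 0.5 and 0.77 scales match the AMD presets quoted later in this patch).

#include <cmath>
#include <cstdio>

// Combined bias applied when FSR is active, following the documented formula:
// -log2(1.0 / scale) compensates for the lower internal resolution, and the
// user-configured fsr_mipmap_bias is added on top.
static float combined_fsr_mipmap_bias(float p_scale, float p_fsr_mipmap_bias) {
	return -std::log2(1.0f / p_scale) + p_fsr_mipmap_bias;
}

int main() {
	std::printf("%.3f\n", combined_fsr_mipmap_bias(0.5f, 0.0f)); // -1.000 (one mip sharper at half resolution)
	std::printf("%.3f\n", combined_fsr_mipmap_bias(0.77f, 0.0f)); // -0.377
	std::printf("%.3f\n", combined_fsr_mipmap_bias(0.5f, 0.25f)); // -0.750 (positive user bias pushes toward blurrier)
	return 0;
}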
diff --git a/doc/classes/RenderingServer.xml b/doc/classes/RenderingServer.xml
index 7f4d5cf1cd7..0700650a915 100644
--- a/doc/classes/RenderingServer.xml
+++ b/doc/classes/RenderingServer.xml
@@ -3100,6 +3100,22 @@
If [code]true[/code], rendering of a viewport's environment is disabled.
+
+
+
+
+
+ Affects the final texture sharpness by reading from a lower or higher mipmap. Negative values make textures sharper, while positive values make textures blurrier. When using FSR, this value is used to adjust the mipmap bias calculated internally, which is based on the selected quality. The formula for this is [code]-log2(1.0 / scale) + mipmap_bias[/code].
+
+
+
+
+
+
+
+ Determines how sharp the upscaled image will be when using the FSR upscaling mode. Sharpness halves with every whole number increase. Values range from 0.0 (sharpest) to 2.0; values above 2.0 won't make a visible difference.
+
+
@@ -3151,12 +3167,21 @@
If [code]true[/code], render the contents of the viewport directly to screen. This allows a low-level optimization where you can skip drawing a viewport to the root viewport. While this optimization can result in a significant increase in speed (especially on older devices), it comes at a cost of usability. When this is enabled, you cannot read from the viewport or from the [code]SCREEN_TEXTURE[/code]. You also lose the benefit of certain window settings, such as the various stretch modes. Another consequence to be aware of is that in 2D the rendering happens in window coordinates, so if you have a viewport that is double the size of the window, and you set this, then only the portion that fits within the window will be drawn, no automatic scaling is possible, even if your game scene is significantly larger than the window size.
-
+
+
+
+
+
+ Sets the scaling 3D mode. Bilinear scaling renders at a different resolution to either undersample or supersample the viewport. FidelityFX Super Resolution 1.0, abbreviated to FSR, is an upscaling technology that produces high-quality images at fast framerates by using a spatially aware upscaling algorithm. FSR is slightly more expensive than bilinear scaling, but it produces significantly higher image quality and should be used where possible.
+
+
+
- Sets the scale at which we render 3D contents.
+ Scales the 3D render buffer based on the viewport size and uses the image filter specified in [enum ViewportScaling3DMode] to scale the output image to the full viewport size. Values lower than [code]1.0[/code] can be used to speed up 3D rendering at the cost of quality (undersampling). Values greater than [code]1.0[/code] are only valid for bilinear mode and can be used to improve 3D rendering quality at a high performance cost (supersampling). See also [enum ViewportMSAA] for multi-sample antialiasing, which is significantly cheaper but only smoothens the edges of polygons.
+ When using FSR upscaling, AMD recommends exposing the following values as preset options to users: "Ultra Quality: 0.77", "Quality: 0.67", "Balanced: 0.59", "Performance: 0.5", instead of exposing the entire scale.
@@ -3844,6 +3869,14 @@
[FogVolume] will have no shape, will cover the whole world and will not be culled.
+
+ Enables bilinear scaling on 3D viewports. The amount of scaling can be set using [member Viewport.scaling_3d_scale]. Values less than [code]1.0[/code] will result in undersampling, while values greater than [code]1.0[/code] will result in supersampling. A value of [code]1.0[/code] disables scaling.
+
+
+ Enables FSR upscaling on 3D viewports. The amount of scaling can be set using [member Viewport.scaling_3d_scale]. Values less than [code]1.0[/code] will result in the viewport being upscaled using FSR. Values greater than [code]1.0[/code] are not supported and bilinear supersampling will be used instead. A value of [code]1.0[/code] disables scaling.
+
+
+
Do not update the viewport.
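To make the preset recommendation above concrete, the following standalone sketch (not part of the patch) prints the internal render resolution each recommended scale yields for an assumed 1920x1080 output; the preset names and scale values are the ones quoted in the description.

#include <cstdio>

int main() {
	// Preset names and scales as quoted in the description; 1920x1080 is an
	// arbitrary example output size.
	const struct {
		const char *name;
		float scale;
	} presets[] = {
		{ "Ultra Quality", 0.77f },
		{ "Quality", 0.67f },
		{ "Balanced", 0.59f },
		{ "Performance", 0.5f },
	};
	const int output_w = 1920, output_h = 1080;
	for (const auto &p : presets) {
		// The 3D buffer renders at scale * output size; FSR then upscales the
		// result back to the full viewport size.
		std::printf("%-13s (scale %.2f) -> %dx%d internal\n", p.name, p.scale,
				(int)(output_w * p.scale), (int)(output_h * p.scale));
	}
	return 0;
}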
diff --git a/doc/classes/Viewport.xml b/doc/classes/Viewport.xml
index d83645a8af0..0418f298083 100644
--- a/doc/classes/Viewport.xml
+++ b/doc/classes/Viewport.xml
@@ -194,6 +194,14 @@
Disable 3D rendering (but keep 2D rendering).
+
+ Affects the final texture sharpness by reading from a lower or higher mipmap when using FSR. Mipmap bias does nothing when FSR is not being used. Negative values make textures sharper, while positive values make textures blurrier. This value is used to adjust the mipmap bias calculated internally, which is based on the selected quality. The formula for this is [code]-log2(1.0 / scale) + mipmap_bias[/code]. This updates the rendering server's mipmap bias when called.
+ To control this property on the root viewport, set the [member ProjectSettings.rendering/scaling_3d/fsr_mipmap_bias] project setting.
+
+
+ Determines how sharp the upscaled image will be when using the FSR upscaling mode. Sharpness halves with every whole number increase. Values range from 0.0 (sharpest) to 2.0; values above 2.0 won't make a visible difference.
+ To control this property on the root viewport, set the [member ProjectSettings.rendering/scaling_3d/fsr_sharpness] project setting.
+
The global canvas transform of the viewport. The canvas transform is relative to this.
@@ -210,7 +218,7 @@
- The multisample anti-aliasing mode. A higher number results in smoother edges at the cost of significantly worse performance. A value of 2 or 4 is best unless targeting very high-end systems. See also [member scale_3d] for supersampling, which provides higher quality but is much more expensive.
+ The multisample anti-aliasing mode. A higher number results in smoother edges at the cost of significantly worse performance. A value of 2 or 4 is best unless targeting very high-end systems. See also bilinear 3D scaling ([member scaling_3d_mode]) for supersampling, which provides higher quality but is much more expensive.
If [code]true[/code], the viewport will use the [World3D] defined in [member world_3d].
@@ -218,9 +226,14 @@
If [code]true[/code], the objects rendered by viewport become subjects of mouse picking process.
-
- Scales the 3D render buffer based on the viewport size and displays the result with linear filtering. Values lower than [code]1.0[/code] can be used to speed up 3D rendering at the cost of quality (undersampling). Values greater than [code]1.0[/code] can be used to improve 3D rendering quality at a high performance cost (supersampling). See also [member msaa] for multi-sample antialiasing, which is significantly cheaper but only smoothens the edges of polygons.
- To control this property on the root viewport, set the [member ProjectSettings.rendering/3d/viewport/scale] project setting.
+
+ Sets the scaling 3D mode. Bilinear scaling renders at a different resolution to either undersample or supersample the viewport. FidelityFX Super Resolution 1.0, abbreviated to FSR, is an upscaling technology that produces high-quality images at fast framerates by using a spatially aware upscaling algorithm. FSR is slightly more expensive than bilinear scaling, but it produces significantly higher image quality and should be used where possible.
+ To control this property on the root viewport, set the [member ProjectSettings.rendering/scaling_3d/mode] project setting.
+
+
+ Scales the 3D render buffer based on the viewport size and uses the image filter specified in [member ProjectSettings.rendering/scaling_3d/mode] to scale the output image to the full viewport size. Values lower than [code]1.0[/code] can be used to speed up 3D rendering at the cost of quality (undersampling). Values greater than [code]1.0[/code] are only valid for bilinear mode and can be used to improve 3D rendering quality at a high performance cost (supersampling). See also [member ProjectSettings.rendering/anti_aliasing/quality/msaa] for multi-sample antialiasing, which is significantly cheaper but only smoothens the edges of polygons.
+ When using FSR upscaling, AMD recommends exposing the following values as preset options to users: "Ultra Quality: 0.77", "Quality: 0.67", "Balanced: 0.59", "Performance: 0.5", instead of exposing the entire scale.
+ To control this property on the root viewport, set the [member ProjectSettings.rendering/scaling_3d/scale] project setting.
Sets the screen-space antialiasing method used. Screen-space antialiasing works by selectively blurring edges in a post-process shader. It differs from MSAA which takes multiple coverage samples while rendering objects. Screen-space AA methods are typically faster than MSAA and will smooth out specular aliasing, but tend to make scenes appear blurry.
@@ -306,6 +319,15 @@
Represents the size of the [enum ShadowAtlasQuadrantSubdiv] enum.
+
+ Enables bilinear scaling on 3D viewports. The amount of scaling can be set using [member scaling_3d_scale]. Values less than [code]1.0[/code] will result in undersampling, while values greater than [code]1.0[/code] will result in supersampling. A value of [code]1.0[/code] disables scaling.
+
+
+ Enables FSR upscaling on 3D viewports. The amount of scaling can be set using [member scaling_3d_scale]. Values less than [code]1.0[/code] will result in the viewport being upscaled using FSR. Values greater than [code]1.0[/code] are not supported and bilinear supersampling will be used instead. A value of [code]1.0[/code] disables scaling.
+
+
+ Represents the size of the [enum Scaling3DMode] enum.
+
Multisample antialiasing mode disabled. This is the default value, and is also the fastest setting.
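The "sharpness halves with every whole number" behaviour documented for [member Viewport.fsr_sharpness] corresponds to an exp2(-sharpness) attenuation of the sharpening strength. The loop below is a hypothetical standalone illustration of that mapping, not engine code.

#include <cmath>
#include <cstdio>

int main() {
	// Each whole-number increase of fsr_sharpness halves the applied
	// sharpening, so 2.0 leaves a quarter of the maximum strength.
	for (float s = 0.0f; s <= 2.01f; s += 0.5f) {
		std::printf("fsr_sharpness %.1f -> relative sharpening %.3f\n", s, std::exp2(-s));
	}
	return 0;
}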
diff --git a/drivers/gles3/rasterizer_scene_gles3.cpp b/drivers/gles3/rasterizer_scene_gles3.cpp
index e66bde90fed..c7753e2c5cb 100644
--- a/drivers/gles3/rasterizer_scene_gles3.cpp
+++ b/drivers/gles3/rasterizer_scene_gles3.cpp
@@ -421,7 +421,7 @@ RID RasterizerSceneGLES3::render_buffers_create() {
return RID();
}
-void RasterizerSceneGLES3::render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_width, int p_height, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) {
+void RasterizerSceneGLES3::render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_internal_width, int p_internal_height, int p_width, int p_height, float p_fsr_sharpness, float p_fsr_mipmap_bias, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) {
}
void RasterizerSceneGLES3::gi_set_use_half_resolution(bool p_enable) {
diff --git a/drivers/gles3/rasterizer_scene_gles3.h b/drivers/gles3/rasterizer_scene_gles3.h
index e8d257087f6..14ab0eaa4a8 100644
--- a/drivers/gles3/rasterizer_scene_gles3.h
+++ b/drivers/gles3/rasterizer_scene_gles3.h
@@ -203,7 +203,7 @@ public:
void set_debug_draw_mode(RS::ViewportDebugDraw p_debug_draw) override;
RID render_buffers_create() override;
- void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_width, int p_height, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) override;
+ void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_internal_width, int p_internal_height, int p_width, int p_height, float p_fsr_sharpness, float p_fsr_mipmap_bias, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) override;
void gi_set_use_half_resolution(bool p_enable) override;
void screen_space_roughness_limiter_set_active(bool p_enable, float p_amount, float p_curve) override;
diff --git a/drivers/vulkan/rendering_device_vulkan.cpp b/drivers/vulkan/rendering_device_vulkan.cpp
index 4cae051302c..952ee50074c 100644
--- a/drivers/vulkan/rendering_device_vulkan.cpp
+++ b/drivers/vulkan/rendering_device_vulkan.cpp
@@ -8822,6 +8822,7 @@ void RenderingDeviceVulkan::initialize(VulkanContext *p_context, bool p_local_de
// get info about further features
VulkanContext::MultiviewCapabilities multiview_capabilies = p_context->get_multiview_capabilities();
device_capabilities.supports_multiview = multiview_capabilies.is_supported && multiview_capabilies.max_view_count > 1;
+ device_capabilities.supports_fsr_half_float = p_context->get_shader_capabilities().shader_float16_is_supported && p_context->get_storage_buffer_capabilities().storage_buffer_16_bit_access_is_supported;
}
context = p_context;
diff --git a/drivers/vulkan/vulkan_context.cpp b/drivers/vulkan/vulkan_context.cpp
index c178a682361..5912f481ec5 100644
--- a/drivers/vulkan/vulkan_context.cpp
+++ b/drivers/vulkan/vulkan_context.cpp
@@ -535,6 +535,24 @@ Error VulkanContext::_check_capabilities() {
multiview_capabilities.is_supported = multiview_features.multiview;
multiview_capabilities.geometry_shader_is_supported = multiview_features.multiviewGeometryShader;
multiview_capabilities.tessellation_shader_is_supported = multiview_features.multiviewTessellationShader;
+
+ VkPhysicalDeviceShaderFloat16Int8FeaturesKHR shader_features;
+ shader_features.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_FLOAT16_INT8_FEATURES_KHR;
+ shader_features.pNext = NULL;
+
+ device_features.pNext = &shader_features;
+
+ device_features_func(gpu, &device_features);
+ shader_capabilities.shader_float16_is_supported = shader_features.shaderFloat16;
+
+ VkPhysicalDevice16BitStorageFeaturesKHR storage_feature;
+ storage_feature.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_16BIT_STORAGE_FEATURES_KHR;
+ storage_feature.pNext = NULL;
+
+ device_features.pNext = &storage_feature;
+
+ device_features_func(gpu, &device_features);
+ storage_buffer_capabilities.storage_buffer_16_bit_access_is_supported = storage_feature.storageBuffer16BitAccess;
}
// check extended properties
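The two feature queries added above could also be issued in a single call by chaining both extension structs through pNext, as in the sketch below. This is an illustration of the standard Vulkan pattern, not the patch's code, and it assumes the vkGetPhysicalDeviceFeatures2 entry point (or its KHR alias, which device_features_func presumably resolves to) is available.

#include <vulkan/vulkan.h>

// Query float16 shader support and 16-bit storage buffer access in one call
// by linking both extension structs through pNext.
static void query_fsr_half_float_support(VkPhysicalDevice gpu, bool &r_shader_float16, bool &r_storage_16_bit) {
	VkPhysicalDevice16BitStorageFeaturesKHR storage_feature = {};
	storage_feature.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_16BIT_STORAGE_FEATURES_KHR;

	VkPhysicalDeviceShaderFloat16Int8FeaturesKHR shader_features = {};
	shader_features.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_FLOAT16_INT8_FEATURES_KHR;
	shader_features.pNext = &storage_feature;

	VkPhysicalDeviceFeatures2 device_features = {};
	device_features.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
	device_features.pNext = &shader_features;

	vkGetPhysicalDeviceFeatures2(gpu, &device_features);

	r_shader_float16 = shader_features.shaderFloat16 == VK_TRUE;
	r_storage_16_bit = storage_feature.storageBuffer16BitAccess == VK_TRUE;
}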
diff --git a/drivers/vulkan/vulkan_context.h b/drivers/vulkan/vulkan_context.h
index ae7c697be82..ab2f6a3eb53 100644
--- a/drivers/vulkan/vulkan_context.h
+++ b/drivers/vulkan/vulkan_context.h
@@ -66,6 +66,14 @@ public:
uint32_t max_instance_count;
};
+ struct ShaderCapabilities {
+ bool shader_float16_is_supported;
+ };
+
+ struct StorageBufferCapabilities {
+ bool storage_buffer_16_bit_access_is_supported;
+ };
+
private:
enum {
MAX_EXTENSIONS = 128,
@@ -88,6 +96,8 @@ private:
uint32_t vulkan_patch = 0;
SubgroupCapabilities subgroup_capabilities;
MultiviewCapabilities multiview_capabilities;
+ ShaderCapabilities shader_capabilities;
+ StorageBufferCapabilities storage_buffer_capabilities;
String device_vendor;
String device_name;
@@ -239,6 +249,8 @@ public:
uint32_t get_vulkan_minor() const { return vulkan_minor; };
SubgroupCapabilities get_subgroup_capabilities() const { return subgroup_capabilities; };
MultiviewCapabilities get_multiview_capabilities() const { return multiview_capabilities; };
+ ShaderCapabilities get_shader_capabilities() const { return shader_capabilities; };
+ StorageBufferCapabilities get_storage_buffer_capabilities() const { return storage_buffer_capabilities; };
VkDevice get_device();
VkPhysicalDevice get_physical_device();
diff --git a/glsl_builders.py b/glsl_builders.py
index 57aaed5f9fd..0926212e509 100644
--- a/glsl_builders.py
+++ b/glsl_builders.py
@@ -29,6 +29,10 @@ def include_file_in_rd_header(filename, header_data, depth):
while line:
+ index = line.find("//")
+ if index != -1:
+ line = line[:index]
+
if line.find("#[vertex]") != -1:
header_data.reading = "vertex"
line = fs.readline()
@@ -55,7 +59,14 @@ def include_file_in_rd_header(filename, header_data, depth):
import os.path
- included_file = os.path.relpath(os.path.dirname(filename) + "/" + includeline)
+ included_file = ""
+
+ if includeline.startswith("thirdparty/"):
+ included_file = os.path.relpath(includeline)
+
+ else:
+ included_file = os.path.relpath(os.path.dirname(filename) + "/" + includeline)
+
if not included_file in header_data.vertex_included_files and header_data.reading == "vertex":
header_data.vertex_included_files += [included_file]
if include_file_in_rd_header(included_file, header_data, depth + 1) is None:
diff --git a/scene/main/viewport.cpp b/scene/main/viewport.cpp
index 48a672b3103..7e35d3633bc 100644
--- a/scene/main/viewport.cpp
+++ b/scene/main/viewport.cpp
@@ -55,6 +55,7 @@
#include "scene/resources/world_2d.h"
#include "scene/scene_string_names.h"
#include "servers/audio_server.h"
+#include "servers/rendering/rendering_server_globals.h"
void ViewportTexture::setup_local_to_scene() {
Node *local_scene = get_local_scene();
@@ -3473,17 +3474,60 @@ bool Viewport::is_using_xr() {
return use_xr;
}
-void Viewport::set_scale_3d(float p_scale_3d) {
+void Viewport::set_scaling_3d_mode(Scaling3DMode p_scaling_3d_mode) {
+ if (scaling_3d_mode == p_scaling_3d_mode) {
+ return;
+ }
+
+ scaling_3d_mode = p_scaling_3d_mode;
+ RS::get_singleton()->viewport_set_scaling_3d_mode(viewport, (RS::ViewportScaling3DMode)(int)p_scaling_3d_mode);
+}
+
+Viewport::Scaling3DMode Viewport::get_scaling_3d_mode() const {
+ return scaling_3d_mode;
+}
+
+void Viewport::set_scaling_3d_scale(float p_scaling_3d_scale) {
// Clamp to reasonable values that are actually useful.
// Values above 2.0 don't serve a practical purpose since the viewport
// isn't displayed with mipmaps.
- scale_3d = CLAMP(p_scale_3d, 0.1, 2.0);
+ scaling_3d_scale = CLAMP(p_scaling_3d_scale, 0.1, 2.0);
- RS::get_singleton()->viewport_set_scale_3d(viewport, scale_3d);
+ RS::get_singleton()->viewport_set_scaling_3d_scale(viewport, scaling_3d_scale);
}
-float Viewport::get_scale_3d() const {
- return scale_3d;
+float Viewport::get_scaling_3d_scale() const {
+ return scaling_3d_scale;
+}
+
+void Viewport::set_fsr_sharpness(float p_fsr_sharpness) {
+ if (fsr_sharpness == p_fsr_sharpness) {
+ return;
+ }
+
+ if (p_fsr_sharpness < 0.0f) {
+ p_fsr_sharpness = 0.0f;
+ }
+
+ fsr_sharpness = p_fsr_sharpness;
+ RS::get_singleton()->viewport_set_fsr_sharpness(viewport, p_fsr_sharpness);
+}
+
+float Viewport::get_fsr_sharpness() const {
+ return fsr_sharpness;
+}
+
+void Viewport::set_fsr_mipmap_bias(float p_fsr_mipmap_bias) {
+ if (fsr_mipmap_bias == p_fsr_mipmap_bias) {
+ return;
+ }
+
+ fsr_mipmap_bias = p_fsr_mipmap_bias;
+ RS::get_singleton()->viewport_set_fsr_mipmap_bias(viewport, p_fsr_mipmap_bias);
+}
+
+float Viewport::get_fsr_mipmap_bias() const {
+ return fsr_mipmap_bias;
}
#endif // _3D_DISABLED
@@ -3611,12 +3655,20 @@ void Viewport::_bind_methods() {
ClassDB::bind_method(D_METHOD("set_use_xr", "use"), &Viewport::set_use_xr);
ClassDB::bind_method(D_METHOD("is_using_xr"), &Viewport::is_using_xr);
- ClassDB::bind_method(D_METHOD("set_scale_3d", "scale"), &Viewport::set_scale_3d);
- ClassDB::bind_method(D_METHOD("get_scale_3d"), &Viewport::get_scale_3d);
+ ClassDB::bind_method(D_METHOD("set_scaling_3d_mode", "scaling_3d_mode"), &Viewport::set_scaling_3d_mode);
+ ClassDB::bind_method(D_METHOD("get_scaling_3d_mode"), &Viewport::get_scaling_3d_mode);
+
+ ClassDB::bind_method(D_METHOD("set_scaling_3d_scale", "scale"), &Viewport::set_scaling_3d_scale);
+ ClassDB::bind_method(D_METHOD("get_scaling_3d_scale"), &Viewport::get_scaling_3d_scale);
+
+ ClassDB::bind_method(D_METHOD("set_fsr_sharpness", "fsr_sharpness"), &Viewport::set_fsr_sharpness);
+ ClassDB::bind_method(D_METHOD("get_fsr_sharpness"), &Viewport::get_fsr_sharpness);
+
+ ClassDB::bind_method(D_METHOD("set_fsr_mipmap_bias", "fsr_mipmap_bias"), &Viewport::set_fsr_mipmap_bias);
+ ClassDB::bind_method(D_METHOD("get_fsr_mipmap_bias"), &Viewport::get_fsr_mipmap_bias);
ADD_PROPERTY(PropertyInfo(Variant::BOOL, "disable_3d"), "set_disable_3d", "is_3d_disabled");
ADD_PROPERTY(PropertyInfo(Variant::BOOL, "use_xr"), "set_use_xr", "is_using_xr");
- ADD_PROPERTY(PropertyInfo(Variant::FLOAT, "scale_3d", PROPERTY_HINT_RANGE, "0.25,2.0,0.01"), "set_scale_3d", "get_scale_3d");
ADD_PROPERTY(PropertyInfo(Variant::BOOL, "audio_listener_enable_3d"), "set_as_audio_listener_3d", "is_audio_listener_3d");
ADD_PROPERTY(PropertyInfo(Variant::BOOL, "own_world_3d"), "set_use_own_world_3d", "is_using_own_world_3d");
ADD_PROPERTY(PropertyInfo(Variant::OBJECT, "world_3d", PROPERTY_HINT_RESOURCE_TYPE, "World3D"), "set_world_3d", "get_world_3d");
@@ -3633,6 +3685,13 @@ void Viewport::_bind_methods() {
ADD_PROPERTY(PropertyInfo(Variant::BOOL, "use_occlusion_culling"), "set_use_occlusion_culling", "is_using_occlusion_culling");
ADD_PROPERTY(PropertyInfo(Variant::FLOAT, "lod_threshold", PROPERTY_HINT_RANGE, "0,1024,0.1"), "set_lod_threshold", "get_lod_threshold");
ADD_PROPERTY(PropertyInfo(Variant::INT, "debug_draw", PROPERTY_HINT_ENUM, "Disabled,Unshaded,Overdraw,Wireframe"), "set_debug_draw", "get_debug_draw");
+#ifndef _3D_DISABLED
+ ADD_GROUP("Scaling 3D", "");
+ ADD_PROPERTY(PropertyInfo(Variant::INT, "scaling_3d_mode", PROPERTY_HINT_ENUM, "Disabled (Slowest),Bilinear (Fastest),FSR (Fast)"), "set_scaling_3d_mode", "get_scaling_3d_mode");
+ ADD_PROPERTY(PropertyInfo(Variant::FLOAT, "scaling_3d_scale", PROPERTY_HINT_RANGE, "0.25,2.0,0.01"), "set_scaling_3d_scale", "get_scaling_3d_scale");
+ ADD_PROPERTY(PropertyInfo(Variant::FLOAT, "fsr_mipmap_bias", PROPERTY_HINT_RANGE, "-2,2,0.1"), "set_fsr_mipmap_bias", "get_fsr_mipmap_bias");
+ ADD_PROPERTY(PropertyInfo(Variant::FLOAT, "fsr_sharpness", PROPERTY_HINT_RANGE, "0,2,0.1"), "set_fsr_sharpness", "get_fsr_sharpness");
+#endif
ADD_GROUP("Canvas Items", "canvas_item_");
ADD_PROPERTY(PropertyInfo(Variant::INT, "canvas_item_default_texture_filter", PROPERTY_HINT_ENUM, "Nearest,Linear,Linear Mipmap,Nearest Mipmap"), "set_default_canvas_item_texture_filter", "get_default_canvas_item_texture_filter");
ADD_PROPERTY(PropertyInfo(Variant::INT, "canvas_item_default_texture_repeat", PROPERTY_HINT_ENUM, "Disabled,Enabled,Mirror"), "set_default_canvas_item_texture_repeat", "get_default_canvas_item_texture_repeat");
@@ -3669,6 +3728,10 @@ void Viewport::_bind_methods() {
BIND_ENUM_CONSTANT(SHADOW_ATLAS_QUADRANT_SUBDIV_1024);
BIND_ENUM_CONSTANT(SHADOW_ATLAS_QUADRANT_SUBDIV_MAX);
+ BIND_ENUM_CONSTANT(SCALING_3D_MODE_BILINEAR);
+ BIND_ENUM_CONSTANT(SCALING_3D_MODE_FSR);
+ BIND_ENUM_CONSTANT(SCALING_3D_MODE_MAX);
+
BIND_ENUM_CONSTANT(MSAA_DISABLED);
BIND_ENUM_CONSTANT(MSAA_2X);
BIND_ENUM_CONSTANT(MSAA_4X);
@@ -3772,7 +3835,16 @@ Viewport::Viewport() {
ProjectSettings::get_singleton()->set_custom_property_info("gui/timers/tooltip_delay_sec", PropertyInfo(Variant::FLOAT, "gui/timers/tooltip_delay_sec", PROPERTY_HINT_RANGE, "0,5,0.01,or_greater")); // No negative numbers
#ifndef _3D_DISABLED
- set_scale_3d(GLOBAL_GET("rendering/3d/viewport/scale"));
+ Viewport::Scaling3DMode scaling_3d_mode = (Viewport::Scaling3DMode)(int)GLOBAL_GET("rendering/scaling_3d/mode");
+ set_scaling_3d_mode(scaling_3d_mode);
+
+ set_scaling_3d_scale(GLOBAL_GET("rendering/scaling_3d/scale"));
+
+ float fsr_sharpness = GLOBAL_GET("rendering/scaling_3d/fsr_sharpness");
+ set_fsr_sharpness(fsr_sharpness);
+
+ float fsr_mipmap_bias = GLOBAL_GET("rendering/scaling_3d/fsr_mipmap_bias");
+ set_fsr_mipmap_bias(fsr_mipmap_bias);
#endif // _3D_DISABLED
set_sdf_oversize(sdf_oversize); // Set to server.
diff --git a/scene/main/viewport.h b/scene/main/viewport.h
index 11b76b32ebb..38d43e1e597 100644
--- a/scene/main/viewport.h
+++ b/scene/main/viewport.h
@@ -89,6 +89,12 @@ class Viewport : public Node {
GDCLASS(Viewport, Node);
public:
+ enum Scaling3DMode {
+ SCALING_3D_MODE_BILINEAR,
+ SCALING_3D_MODE_FSR,
+ SCALING_3D_MODE_MAX
+ };
+
enum ShadowAtlasQuadrantSubdiv {
SHADOW_ATLAS_QUADRANT_SUBDIV_DISABLED,
SHADOW_ATLAS_QUADRANT_SUBDIV_1,
@@ -284,6 +290,11 @@ private:
MSAA msaa = MSAA_DISABLED;
ScreenSpaceAA screen_space_aa = SCREEN_SPACE_AA_DISABLED;
+
+ Scaling3DMode scaling_3d_mode = SCALING_3D_MODE_BILINEAR;
+ float scaling_3d_scale = 1.0;
+ float fsr_sharpness = 0.2f;
+ float fsr_mipmap_bias = 0.0f;
bool use_debanding = false;
float lod_threshold = 1.0;
bool use_occlusion_culling = false;
@@ -504,6 +515,18 @@ public:
void set_screen_space_aa(ScreenSpaceAA p_screen_space_aa);
ScreenSpaceAA get_screen_space_aa() const;
+ void set_scaling_3d_mode(Scaling3DMode p_scaling_3d_mode);
+ Scaling3DMode get_scaling_3d_mode() const;
+
+ void set_scaling_3d_scale(float p_scaling_3d_scale);
+ float get_scaling_3d_scale() const;
+
+ void set_fsr_sharpness(float p_fsr_sharpness);
+ float get_fsr_sharpness() const;
+
+ void set_fsr_mipmap_bias(float p_fsr_mipmap_bias);
+ float get_fsr_mipmap_bias() const;
+
void set_use_debanding(bool p_use_debanding);
bool is_using_debanding() const;
@@ -586,7 +609,6 @@ public:
#ifndef _3D_DISABLED
bool use_xr = false;
- float scale_3d = 1.0;
friend class AudioListener3D;
AudioListener3D *audio_listener_3d = nullptr;
Set<AudioListener3D *> audio_listener_3d_set;
@@ -657,9 +679,6 @@ public:
void set_use_xr(bool p_use_xr);
bool is_using_xr();
-
- void set_scale_3d(float p_scale_3d);
- float get_scale_3d() const;
#endif // _3D_DISABLED
Viewport();
@@ -714,6 +733,7 @@ public:
SubViewport();
~SubViewport();
};
+VARIANT_ENUM_CAST(Viewport::Scaling3DMode);
VARIANT_ENUM_CAST(SubViewport::UpdateMode);
VARIANT_ENUM_CAST(Viewport::ShadowAtlasQuadrantSubdiv);
VARIANT_ENUM_CAST(Viewport::MSAA);
diff --git a/servers/rendering/rasterizer_dummy.h b/servers/rendering/rasterizer_dummy.h
index cecb009aa09..3451ea2d393 100644
--- a/servers/rendering/rasterizer_dummy.h
+++ b/servers/rendering/rasterizer_dummy.h
@@ -195,7 +195,7 @@ public:
void set_debug_draw_mode(RS::ViewportDebugDraw p_debug_draw) override {}
RID render_buffers_create() override { return RID(); }
- void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_width, int p_height, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) override {}
+ void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_internal_width, int p_internal_height, int p_width, int p_height, float p_fsr_sharpness, float p_fsr_mipmap_bias, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) override {}
void gi_set_use_half_resolution(bool p_enable) override {}
void screen_space_roughness_limiter_set_active(bool p_enable, float p_amount, float p_curve) override {}
diff --git a/servers/rendering/renderer_compositor.h b/servers/rendering/renderer_compositor.h
index 1971c3e7815..4526354d176 100644
--- a/servers/rendering/renderer_compositor.h
+++ b/servers/rendering/renderer_compositor.h
@@ -67,6 +67,7 @@ private:
protected:
static RendererCompositor *(*_create_func)();
+ bool back_end = false;
public:
static RendererCompositor *create();
@@ -88,7 +89,7 @@ public:
virtual uint64_t get_frame_number() const = 0;
virtual double get_frame_delta_time() const = 0;
- virtual bool is_low_end() const = 0;
+ _FORCE_INLINE_ virtual bool is_low_end() const { return back_end; };
virtual bool is_xr_enabled() const;
RendererCompositor();
diff --git a/servers/rendering/renderer_rd/effects_rd.cpp b/servers/rendering/renderer_rd/effects_rd.cpp
index fdd6939a8b8..cf943901d48 100644
--- a/servers/rendering/renderer_rd/effects_rd.cpp
+++ b/servers/rendering/renderer_rd/effects_rd.cpp
@@ -237,6 +237,43 @@ RID EffectsRD::_get_compute_uniform_set_from_image_pair(RID p_texture1, RID p_te
return uniform_set;
}
+void EffectsRD::fsr_upscale(RID p_source_rd_texture, RID p_secondary_texture, RID p_destination_texture, const Size2i &p_internal_size, const Size2i &p_size, float p_fsr_upscale_sharpness) {
+ memset(&FSR_upscale.push_constant, 0, sizeof(FSRUpscalePushConstant));
+
+ int dispatch_x = (p_size.x + 15) / 16;
+ int dispatch_y = (p_size.y + 15) / 16;
+
+ RD::ComputeListID compute_list = RD::get_singleton()->compute_list_begin();
+ RD::get_singleton()->compute_list_bind_compute_pipeline(compute_list, FSR_upscale.pipeline);
+
+ FSR_upscale.push_constant.resolution_width = p_internal_size.width;
+ FSR_upscale.push_constant.resolution_height = p_internal_size.height;
+ FSR_upscale.push_constant.upscaled_width = p_size.width;
+ FSR_upscale.push_constant.upscaled_height = p_size.height;
+ FSR_upscale.push_constant.sharpness = p_fsr_upscale_sharpness;
+
+ //FSR EASU (Edge Adaptive Spatial Upsampling) pass
+ FSR_upscale.push_constant.pass = FSR_UPSCALE_PASS_EASU;
+ RD::get_singleton()->compute_list_bind_uniform_set(compute_list, _get_compute_uniform_set_from_texture(p_source_rd_texture), 0);
+ RD::get_singleton()->compute_list_bind_uniform_set(compute_list, _get_uniform_set_from_image(p_secondary_texture), 1);
+
+ RD::get_singleton()->compute_list_set_push_constant(compute_list, &FSR_upscale.push_constant, sizeof(FSRUpscalePushConstant));
+
+ RD::get_singleton()->compute_list_dispatch(compute_list, dispatch_x, dispatch_y, 1);
+ RD::get_singleton()->compute_list_add_barrier(compute_list);
+
+ //FSR RCAS (Robust Contrast Adaptive Sharpening) pass
+ FSR_upscale.push_constant.pass = FSR_UPSCALE_PASS_RCAS;
+ RD::get_singleton()->compute_list_bind_uniform_set(compute_list, _get_compute_uniform_set_from_texture(p_secondary_texture), 0);
+ RD::get_singleton()->compute_list_bind_uniform_set(compute_list, _get_uniform_set_from_image(p_destination_texture), 1);
+
+ RD::get_singleton()->compute_list_set_push_constant(compute_list, &FSR_upscale.push_constant, sizeof(FSRUpscalePushConstant));
+
+ RD::get_singleton()->compute_list_dispatch(compute_list, dispatch_x, dispatch_y, 1);
+
+ RD::get_singleton()->compute_list_end(compute_list);
+}
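A note on the dispatch size computed above: the EASU and RCAS shaders each operate on 16x16 pixel tiles, so the group counts are the upscaled size divided by 16, rounded up. The standalone sketch below (illustrative only, using an assumed 1080p target) shows that ceiling division.

#include <cstdio>

// Ceiling division used for the dispatch: one compute workgroup covers a
// 16x16 pixel tile of the upscaled image.
static int group_count(int p_pixels) {
	return (p_pixels + 15) / 16;
}

int main() {
	const int upscaled_w = 1920, upscaled_h = 1080;
	std::printf("dispatch %d x %d workgroups\n", group_count(upscaled_w), group_count(upscaled_h)); // 120 x 68
	return 0;
}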
+
void EffectsRD::copy_to_atlas_fb(RID p_source_rd_texture, RID p_dest_framebuffer, const Rect2 &p_uv_rect, RD::DrawListID p_draw_list, bool p_flip_y, bool p_panorama) {
memset(&copy_to_fb.push_constant, 0, sizeof(CopyToFbPushConstant));
@@ -1888,6 +1925,27 @@ void EffectsRD::sort_buffer(RID p_uniform_set, int p_size) {
}
EffectsRD::EffectsRD(bool p_prefer_raster_effects) {
+ {
+ Vector<String> FSR_upscale_modes;
+
+#if defined(OSX_ENABLED) || defined(IPHONE_ENABLED)
+ // MoltenVK does not support some of the operations used by the normal mode of FSR. Fallback works just fine though.
+ FSR_upscale_modes.push_back("\n#define MODE_FSR_UPSCALE_FALLBACK\n");
+#else
+ // Everyone else can use normal mode when available.
+ if (RD::get_singleton()->get_device_capabilities()->supports_fsr_half_float) {
+ FSR_upscale_modes.push_back("\n#define MODE_FSR_UPSCALE_NORMAL\n");
+ } else {
+ FSR_upscale_modes.push_back("\n#define MODE_FSR_UPSCALE_FALLBACK\n");
+ }
+#endif
+
+ FSR_upscale.shader.initialize(FSR_upscale_modes);
+
+ FSR_upscale.shader_version = FSR_upscale.shader.version_create();
+ FSR_upscale.pipeline = RD::get_singleton()->compute_pipeline_create(FSR_upscale.shader.version_get_shader(FSR_upscale.shader_version, 0));
+ }
+
prefer_raster_effects = p_prefer_raster_effects;
if (prefer_raster_effects) {
@@ -2523,6 +2581,7 @@ EffectsRD::~EffectsRD() {
RD::get_singleton()->free(index_buffer); //array gets freed as dependency
RD::get_singleton()->free(filter.coefficient_buffer);
+ FSR_upscale.shader.version_free(FSR_upscale.shader_version);
if (prefer_raster_effects) {
blur_raster.shader.version_free(blur_raster.shader_version);
bokeh.raster_shader.version_free(blur_raster.shader_version);
diff --git a/servers/rendering/renderer_rd/effects_rd.h b/servers/rendering/renderer_rd/effects_rd.h
index 551e50ed254..6037127e823 100644
--- a/servers/rendering/renderer_rd/effects_rd.h
+++ b/servers/rendering/renderer_rd/effects_rd.h
@@ -45,6 +45,7 @@
#include "servers/rendering/renderer_rd/shaders/cubemap_filter_raster.glsl.gen.h"
#include "servers/rendering/renderer_rd/shaders/cubemap_roughness.glsl.gen.h"
#include "servers/rendering/renderer_rd/shaders/cubemap_roughness_raster.glsl.gen.h"
+#include "servers/rendering/renderer_rd/shaders/fsr_upscale.glsl.gen.h"
#include "servers/rendering/renderer_rd/shaders/luminance_reduce.glsl.gen.h"
#include "servers/rendering/renderer_rd/shaders/luminance_reduce_raster.glsl.gen.h"
#include "servers/rendering/renderer_rd/shaders/resolve.glsl.gen.h"
@@ -69,6 +70,28 @@ class EffectsRD {
private:
bool prefer_raster_effects;
+ enum FSRUpscalePass {
+ FSR_UPSCALE_PASS_EASU = 0,
+ FSR_UPSCALE_PASS_RCAS = 1
+ };
+
+ struct FSRUpscalePushConstant {
+ float resolution_width;
+ float resolution_height;
+ float upscaled_width;
+ float upscaled_height;
+ float sharpness;
+ int pass;
+ int _unused0, _unused1;
+ };
+
+ struct FSRUpscale {
+ FSRUpscalePushConstant push_constant;
+ FsrUpscaleShaderRD shader;
+ RID shader_version;
+ RID pipeline;
+ } FSR_upscale;
+
enum BlurRasterMode {
BLUR_MIPMAP,
@@ -754,6 +777,7 @@ private:
public:
bool get_prefer_raster_effects();
+ void fsr_upscale(RID p_source_rd_texture, RID p_secondary_texture, RID p_destination_texture, const Size2i &p_internal_size, const Size2i &p_size, float p_fsr_upscale_sharpness);
void copy_to_fb_rect(RID p_source_rd_texture, RID p_dest_framebuffer, const Rect2i &p_rect, bool p_flip_y = false, bool p_force_luminance = false, bool p_alpha_to_zero = false, bool p_srgb = false, RID p_secondary = RID());
void copy_to_rect(RID p_source_rd_texture, RID p_dest_texture, const Rect2i &p_rect, bool p_flip_y = false, bool p_force_luminance = false, bool p_all_source = false, bool p_8_bit_dst = false, bool p_alpha_to_one = false);
void copy_cubemap_to_panorama(RID p_source_cube, RID p_dest_panorama, const Size2i &p_panorama_size, float p_lod, bool p_is_array);
diff --git a/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.cpp b/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.cpp
index 6d85c1f4c19..03ce2690bf2 100644
--- a/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.cpp
+++ b/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.cpp
@@ -1954,7 +1954,9 @@ void RenderForwardClustered::_base_uniforms_changed() {
}
void RenderForwardClustered::_update_render_base_uniform_set() {
- if (render_base_uniform_set.is_null() || !RD::get_singleton()->uniform_set_is_valid(render_base_uniform_set) || (lightmap_texture_array_version != storage->lightmap_array_get_version())) {
+ if (render_base_uniform_set.is_null() || !RD::get_singleton()->uniform_set_is_valid(render_base_uniform_set) || (lightmap_texture_array_version != storage->lightmap_array_get_version()) || base_uniform_set_updated) {
+ base_uniform_set_updated = false;
+
if (render_base_uniform_set.is_valid() && RD::get_singleton()->uniform_set_is_valid(render_base_uniform_set)) {
RD::get_singleton()->free(render_base_uniform_set);
}
@@ -1969,18 +1971,18 @@ void RenderForwardClustered::_update_render_base_uniform_set() {
u.binding = 1;
u.ids.resize(12);
RID *ids_ptr = u.ids.ptrw();
- ids_ptr[0] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
- ids_ptr[1] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
- ids_ptr[2] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
- ids_ptr[3] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
- ids_ptr[4] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
- ids_ptr[5] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
- ids_ptr[6] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
- ids_ptr[7] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
- ids_ptr[8] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
- ids_ptr[9] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
- ids_ptr[10] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
- ids_ptr[11] = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
+ ids_ptr[0] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ ids_ptr[1] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ ids_ptr[2] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ ids_ptr[3] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ ids_ptr[4] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ ids_ptr[5] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ ids_ptr[6] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
+ ids_ptr[7] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
+ ids_ptr[8] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
+ ids_ptr[9] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
+ ids_ptr[10] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
+ ids_ptr[11] = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED);
uniforms.push_back(u);
}
@@ -1999,19 +2001,19 @@ void RenderForwardClustered::_update_render_base_uniform_set() {
RID sampler;
switch (decals_get_filter()) {
case RS::DECAL_FILTER_NEAREST: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
case RS::DECAL_FILTER_NEAREST_MIPMAPS: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
case RS::DECAL_FILTER_LINEAR: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
case RS::DECAL_FILTER_LINEAR_MIPMAPS: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
case RS::DECAL_FILTER_LINEAR_MIPMAPS_ANISOTROPIC: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
}
@@ -2026,19 +2028,19 @@ void RenderForwardClustered::_update_render_base_uniform_set() {
RID sampler;
switch (light_projectors_get_filter()) {
case RS::LIGHT_PROJECTOR_FILTER_NEAREST: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
case RS::LIGHT_PROJECTOR_FILTER_NEAREST_MIPMAPS: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
case RS::LIGHT_PROJECTOR_FILTER_LINEAR: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
case RS::LIGHT_PROJECTOR_FILTER_LINEAR_MIPMAPS: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
case RS::LIGHT_PROJECTOR_FILTER_LINEAR_MIPMAPS_ANISOTROPIC: {
- sampler = storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
+ sampler = storage->sampler_rd_get_custom(RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED);
} break;
}
diff --git a/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.h b/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.h
index 7707d772969..d6ab4d1db2e 100644
--- a/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.h
+++ b/servers/rendering/renderer_rd/forward_clustered/render_forward_clustered.h
@@ -129,6 +129,7 @@ class RenderForwardClustered : public RendererSceneRenderRD {
virtual void _base_uniforms_changed() override;
virtual RID _render_buffers_get_normal_texture(RID p_render_buffers) override;
+ bool base_uniform_set_updated = false;
void _update_render_base_uniform_set();
RID _setup_sdfgi_render_pass_uniform_set(RID p_albedo_texture, RID p_emission_texture, RID p_emission_aniso_texture, RID p_geom_facing_texture);
RID _setup_render_pass_uniform_set(RenderListType p_render_list, const RenderDataRD *p_render_data, RID p_radiance_texture, bool p_use_directional_shadow_atlas = false, int p_index = 0);
@@ -603,6 +604,11 @@ protected:
virtual void _render_particle_collider_heightfield(RID p_fb, const Transform3D &p_cam_transform, const CameraMatrix &p_cam_projection, const PagedArray<GeometryInstance *> &p_instances) override;
public:
+ _FORCE_INLINE_ virtual void update_uniform_sets() override {
+ base_uniform_set_updated = true;
+ _update_render_base_uniform_set();
+ }
+
virtual GeometryInstance *geometry_instance_create(RID p_base) override;
virtual void geometry_instance_set_skeleton(GeometryInstance *p_geometry_instance, RID p_skeleton) override;
virtual void geometry_instance_set_material_override(GeometryInstance *p_geometry_instance, RID p_override) override;
diff --git a/servers/rendering/renderer_rd/renderer_compositor_rd.cpp b/servers/rendering/renderer_rd/renderer_compositor_rd.cpp
index 559e6d5ad78..522a8e81129 100644
--- a/servers/rendering/renderer_rd/renderer_compositor_rd.cpp
+++ b/servers/rendering/renderer_rd/renderer_compositor_rd.cpp
@@ -281,12 +281,12 @@ RendererCompositorRD::RendererCompositorRD() {
storage = memnew(RendererStorageRD);
canvas = memnew(RendererCanvasRenderRD(storage));
- uint32_t back_end = GLOBAL_GET("rendering/vulkan/rendering/back_end");
+ back_end = (bool)(int)GLOBAL_GET("rendering/vulkan/rendering/back_end");
uint32_t textures_per_stage = RD::get_singleton()->limit_get(RD::LIMIT_MAX_TEXTURES_PER_SHADER_STAGE);
- if (back_end == 1 || textures_per_stage < 48) {
+ if (back_end || textures_per_stage < 48) {
scene = memnew(RendererSceneRenderImplementation::RenderForwardMobile(storage));
- } else { // back_end == 0
+ } else { // back_end == false
// default to our high end renderer
scene = memnew(RendererSceneRenderImplementation::RenderForwardClustered(storage));
}
diff --git a/servers/rendering/renderer_rd/renderer_compositor_rd.h b/servers/rendering/renderer_rd/renderer_compositor_rd.h
index 0230c46800e..f69e40e0ff3 100644
--- a/servers/rendering/renderer_rd/renderer_compositor_rd.h
+++ b/servers/rendering/renderer_rd/renderer_compositor_rd.h
@@ -116,8 +116,6 @@ public:
_create_func = _create_current;
}
- virtual bool is_low_end() const { return false; }
-
static RendererCompositorRD *singleton;
RendererCompositorRD();
~RendererCompositorRD();
diff --git a/servers/rendering/renderer_rd/renderer_scene_gi_rd.cpp b/servers/rendering/renderer_rd/renderer_scene_gi_rd.cpp
index 807af00c8eb..b6b5c90b39e 100644
--- a/servers/rendering/renderer_rd/renderer_scene_gi_rd.cpp
+++ b/servers/rendering/renderer_rd/renderer_scene_gi_rd.cpp
@@ -3132,8 +3132,8 @@ void RendererSceneGIRD::process_gi(RID p_render_buffers, RID p_normal_roughness_
RD::TextureFormat tf;
tf.format = RD::DATA_FORMAT_R16G16B16A16_SFLOAT;
- tf.width = rb->width;
- tf.height = rb->height;
+ tf.width = rb->internal_width;
+ tf.height = rb->internal_height;
if (half_resolution) {
tf.width >>= 1;
tf.height >>= 1;
@@ -3146,13 +3146,13 @@ void RendererSceneGIRD::process_gi(RID p_render_buffers, RID p_normal_roughness_
PushConstant push_constant;
- push_constant.screen_size[0] = rb->width;
- push_constant.screen_size[1] = rb->height;
+ push_constant.screen_size[0] = rb->internal_width;
+ push_constant.screen_size[1] = rb->internal_height;
push_constant.z_near = p_projection.get_z_near();
push_constant.z_far = p_projection.get_z_far();
push_constant.orthogonal = p_projection.is_orthogonal();
- push_constant.proj_info[0] = -2.0f / (rb->width * p_projection.matrix[0][0]);
- push_constant.proj_info[1] = -2.0f / (rb->height * p_projection.matrix[1][1]);
+ push_constant.proj_info[0] = -2.0f / (rb->internal_width * p_projection.matrix[0][0]);
+ push_constant.proj_info[1] = -2.0f / (rb->internal_height * p_projection.matrix[1][1]);
push_constant.proj_info[2] = (1.0f - p_projection.matrix[0][2]) / p_projection.matrix[0][0];
push_constant.proj_info[3] = (1.0f + p_projection.matrix[1][2]) / p_projection.matrix[1][1];
push_constant.max_voxel_gi_instances = MIN((uint64_t)MAX_VOXEL_GI_INSTANCES, p_voxel_gi_instances.size());
@@ -3344,9 +3344,9 @@ void RendererSceneGIRD::process_gi(RID p_render_buffers, RID p_normal_roughness_
RD::get_singleton()->compute_list_set_push_constant(compute_list, &push_constant, sizeof(PushConstant));
if (rb->gi.using_half_size_gi) {
- RD::get_singleton()->compute_list_dispatch_threads(compute_list, rb->width >> 1, rb->height >> 1, 1);
+ RD::get_singleton()->compute_list_dispatch_threads(compute_list, rb->internal_width >> 1, rb->internal_height >> 1, 1);
} else {
- RD::get_singleton()->compute_list_dispatch_threads(compute_list, rb->width, rb->height, 1);
+ RD::get_singleton()->compute_list_dispatch_threads(compute_list, rb->internal_width, rb->internal_height, 1);
}
//do barrier later to allow overlap
//RD::get_singleton()->compute_list_end(RD::BARRIER_MASK_NO_BARRIER); //no barriers, let other compute, raster and transfer happen at the same time
diff --git a/servers/rendering/renderer_rd/renderer_scene_render_rd.cpp b/servers/rendering/renderer_rd/renderer_scene_render_rd.cpp
index b8e9f40bc49..ca77198629e 100644
--- a/servers/rendering/renderer_rd/renderer_scene_render_rd.cpp
+++ b/servers/rendering/renderer_rd/renderer_scene_render_rd.cpp
@@ -1503,8 +1503,8 @@ void RendererSceneRenderRD::_allocate_blur_textures(RenderBuffers *rb) {
RD::TextureFormat tf;
tf.format = _render_buffers_get_color_format(); // RD::DATA_FORMAT_R16G16B16A16_SFLOAT;
- tf.width = rb->width;
- tf.height = rb->height;
+ tf.width = rb->internal_width;
+ tf.height = rb->internal_height;
tf.texture_type = rb->view_count > 1 ? RD::TEXTURE_TYPE_2D_ARRAY : RD::TEXTURE_TYPE_2D;
tf.array_layers = rb->view_count;
tf.usage_bits = RD::TEXTURE_USAGE_SAMPLING_BIT | RD::TEXTURE_USAGE_CAN_COPY_TO_BIT;
@@ -1515,6 +1515,10 @@ void RendererSceneRenderRD::_allocate_blur_textures(RenderBuffers *rb) {
}
tf.mipmaps = mipmaps_required;
+ rb->sss_texture = RD::get_singleton()->texture_create(tf, RD::TextureView());
+
+ tf.width = rb->internal_width;
+ tf.height = rb->internal_height;
rb->blur[0].texture = RD::get_singleton()->texture_create(tf, RD::TextureView());
//the second one is smaller (only used for separatable part of blur)
tf.width >>= 1;
@@ -1522,8 +1526,8 @@ void RendererSceneRenderRD::_allocate_blur_textures(RenderBuffers *rb) {
tf.mipmaps--;
rb->blur[1].texture = RD::get_singleton()->texture_create(tf, RD::TextureView());
- int base_width = rb->width;
- int base_height = rb->height;
+ int base_width = rb->internal_width;
+ int base_height = rb->internal_height;
for (uint32_t i = 0; i < mipmaps_required; i++) {
RenderBuffers::Blur::Mipmap mm;
@@ -1577,8 +1581,8 @@ void RendererSceneRenderRD::_allocate_blur_textures(RenderBuffers *rb) {
// create 4 weight textures, 2 full size, 2 half size
tf.format = RD::DATA_FORMAT_R16_SFLOAT; // We could probably use DATA_FORMAT_R8_SNORM if we don't pre-multiply by blur_size but that depends on whether we can remove DEPTH_GAP
- tf.width = rb->width;
- tf.height = rb->height;
+ tf.width = rb->internal_width;
+ tf.height = rb->internal_height;
tf.texture_type = rb->view_count > 1 ? RD::TEXTURE_TYPE_2D_ARRAY : RD::TEXTURE_TYPE_2D;
tf.array_layers = rb->view_count;
tf.usage_bits = RD::TEXTURE_USAGE_SAMPLING_BIT | RD::TEXTURE_USAGE_CAN_COPY_TO_BIT | RD::TEXTURE_USAGE_COLOR_ATTACHMENT_BIT;
@@ -1656,8 +1660,8 @@ void RendererSceneRenderRD::_allocate_depth_backbuffer_textures(RenderBuffers *r
void RendererSceneRenderRD::_allocate_luminance_textures(RenderBuffers *rb) {
ERR_FAIL_COND(!rb->luminance.current.is_null());
- int w = rb->width;
- int h = rb->height;
+ int w = rb->internal_width;
+ int h = rb->internal_height;
while (true) {
w = MAX(w / 8, 1);
@@ -1709,9 +1713,26 @@ void RendererSceneRenderRD::_free_render_buffer_data(RenderBuffers *rb) {
rb->texture_fb = RID();
}
- if (rb->texture.is_valid()) {
- RD::get_singleton()->free(rb->texture);
+ if (rb->internal_texture == rb->texture && rb->internal_texture.is_valid()) {
+ RD::get_singleton()->free(rb->internal_texture);
rb->texture = RID();
+ rb->internal_texture = RID();
+ rb->upscale_texture = RID();
+ } else {
+ if (rb->texture.is_valid()) {
+ RD::get_singleton()->free(rb->texture);
+ rb->texture = RID();
+ }
+
+ if (rb->internal_texture.is_valid()) {
+ RD::get_singleton()->free(rb->internal_texture);
+ rb->internal_texture = RID();
+ }
+
+ if (rb->upscale_texture.is_valid()) {
+ RD::get_singleton()->free(rb->upscale_texture);
+ rb->upscale_texture = RID();
+ }
}
if (rb->depth_texture.is_valid()) {
@@ -1729,6 +1750,11 @@ void RendererSceneRenderRD::_free_render_buffer_data(RenderBuffers *rb) {
rb->depth_back_texture = RID();
}
+ if (rb->sss_texture.is_valid()) {
+ RD::get_singleton()->free(rb->sss_texture);
+ rb->sss_texture = RID();
+ }
+
for (int i = 0; i < 2; i++) {
for (int m = 0; m < rb->blur[i].mipmaps.size(); m++) {
// do we free the texture slice here? or is it enough to free the main texture?
@@ -1818,7 +1844,7 @@ void RendererSceneRenderRD::_process_sss(RID p_render_buffers, const CameraMatri
RenderBuffers *rb = render_buffers_owner.get_or_null(p_render_buffers);
ERR_FAIL_COND(!rb);
- bool can_use_effects = rb->width >= 8 && rb->height >= 8;
+ bool can_use_effects = rb->internal_width >= 8 && rb->internal_height >= 8;
if (!can_use_effects) {
//just copy
@@ -1829,18 +1855,18 @@ void RendererSceneRenderRD::_process_sss(RID p_render_buffers, const CameraMatri
_allocate_blur_textures(rb);
}
- storage->get_effects()->sub_surface_scattering(rb->texture, rb->blur[0].mipmaps[0].texture, rb->depth_texture, p_camera, Size2i(rb->width, rb->height), sss_scale, sss_depth_scale, sss_quality);
+ storage->get_effects()->sub_surface_scattering(rb->internal_texture, rb->sss_texture, rb->depth_texture, p_camera, Size2i(rb->internal_width, rb->internal_height), sss_scale, sss_depth_scale, sss_quality);
}
void RendererSceneRenderRD::_process_ssr(RID p_render_buffers, RID p_dest_framebuffer, RID p_normal_buffer, RID p_specular_buffer, RID p_metallic, const Color &p_metallic_mask, RID p_environment, const CameraMatrix &p_projection, bool p_use_additive) {
RenderBuffers *rb = render_buffers_owner.get_or_null(p_render_buffers);
ERR_FAIL_COND(!rb);
- bool can_use_effects = rb->width >= 8 && rb->height >= 8;
+ bool can_use_effects = rb->internal_width >= 8 && rb->internal_height >= 8;
if (!can_use_effects) {
//just copy
- storage->get_effects()->merge_specular(p_dest_framebuffer, p_specular_buffer, p_use_additive ? RID() : rb->texture, RID());
+ storage->get_effects()->merge_specular(p_dest_framebuffer, p_specular_buffer, p_use_additive ? RID() : rb->internal_texture, RID());
return;
}
@@ -1852,8 +1878,8 @@ void RendererSceneRenderRD::_process_ssr(RID p_render_buffers, RID p_dest_frameb
if (rb->ssr.depth_scaled.is_null()) {
RD::TextureFormat tf;
tf.format = RD::DATA_FORMAT_R32_SFLOAT;
- tf.width = rb->width / 2;
- tf.height = rb->height / 2;
+ tf.width = rb->internal_width / 2;
+ tf.height = rb->internal_height / 2;
tf.texture_type = RD::TEXTURE_TYPE_2D;
tf.usage_bits = RD::TEXTURE_USAGE_STORAGE_BIT;
@@ -1867,8 +1893,8 @@ void RendererSceneRenderRD::_process_ssr(RID p_render_buffers, RID p_dest_frameb
if (ssr_roughness_quality != RS::ENV_SSR_ROUGNESS_QUALITY_DISABLED && !rb->ssr.blur_radius[0].is_valid()) {
RD::TextureFormat tf;
tf.format = RD::DATA_FORMAT_R8_UNORM;
- tf.width = rb->width / 2;
- tf.height = rb->height / 2;
+ tf.width = rb->internal_width / 2;
+ tf.height = rb->internal_height / 2;
tf.texture_type = RD::TEXTURE_TYPE_2D;
tf.usage_bits = RD::TEXTURE_USAGE_STORAGE_BIT | RD::TEXTURE_USAGE_SAMPLING_BIT;
@@ -1880,8 +1906,8 @@ void RendererSceneRenderRD::_process_ssr(RID p_render_buffers, RID p_dest_frameb
_allocate_blur_textures(rb);
}
- storage->get_effects()->screen_space_reflection(rb->texture, p_normal_buffer, ssr_roughness_quality, rb->ssr.blur_radius[0], rb->ssr.blur_radius[1], p_metallic, p_metallic_mask, rb->depth_texture, rb->ssr.depth_scaled, rb->ssr.normal_scaled, rb->blur[0].mipmaps[1].texture, rb->blur[1].mipmaps[0].texture, Size2i(rb->width / 2, rb->height / 2), env->ssr_max_steps, env->ssr_fade_in, env->ssr_fade_out, env->ssr_depth_tolerance, p_projection);
- storage->get_effects()->merge_specular(p_dest_framebuffer, p_specular_buffer, p_use_additive ? RID() : rb->texture, rb->blur[0].mipmaps[1].texture);
+ storage->get_effects()->screen_space_reflection(rb->internal_texture, p_normal_buffer, ssr_roughness_quality, rb->ssr.blur_radius[0], rb->ssr.blur_radius[1], p_metallic, p_metallic_mask, rb->depth_texture, rb->ssr.depth_scaled, rb->ssr.normal_scaled, rb->blur[0].mipmaps[1].texture, rb->blur[1].mipmaps[0].texture, Size2i(rb->internal_width / 2, rb->internal_height / 2), env->ssr_max_steps, env->ssr_fade_in, env->ssr_fade_out, env->ssr_depth_tolerance, p_projection);
+ storage->get_effects()->merge_specular(p_dest_framebuffer, p_specular_buffer, p_use_additive ? RID() : rb->internal_texture, rb->blur[0].mipmaps[1].texture);
}
void RendererSceneRenderRD::_process_ssao(RID p_render_buffers, RID p_environment, RID p_normal_buffer, const CameraMatrix &p_projection) {
@@ -1918,15 +1944,15 @@ void RendererSceneRenderRD::_process_ssao(RID p_render_buffers, RID p_environmen
int half_width;
int half_height;
if (ssao_half_size) {
- buffer_width = (rb->width + 3) / 4;
- buffer_height = (rb->height + 3) / 4;
- half_width = (rb->width + 7) / 8;
- half_height = (rb->height + 7) / 8;
+ buffer_width = (rb->internal_width + 3) / 4;
+ buffer_height = (rb->internal_height + 3) / 4;
+ half_width = (rb->internal_width + 7) / 8;
+ half_height = (rb->internal_height + 7) / 8;
} else {
- buffer_width = (rb->width + 1) / 2;
- buffer_height = (rb->height + 1) / 2;
- half_width = (rb->width + 3) / 4;
- half_height = (rb->height + 3) / 4;
+ buffer_width = (rb->internal_width + 1) / 2;
+ buffer_height = (rb->internal_height + 1) / 2;
+ half_width = (rb->internal_width + 3) / 4;
+ half_height = (rb->internal_height + 3) / 4;
}
bool uniform_sets_are_invalid = false;
if (rb->ssao.depth.is_null()) {
@@ -1998,8 +2024,8 @@ void RendererSceneRenderRD::_process_ssao(RID p_render_buffers, RID p_environmen
{
RD::TextureFormat tf;
tf.format = RD::DATA_FORMAT_R8_UNORM;
- tf.width = rb->width;
- tf.height = rb->height;
+ tf.width = rb->internal_width;
+ tf.height = rb->internal_height;
tf.usage_bits = RD::TEXTURE_USAGE_SAMPLING_BIT | RD::TEXTURE_USAGE_STORAGE_BIT;
rb->ssao.ao_final = RD::get_singleton()->texture_create(tf, RD::TextureView());
RD::get_singleton()->set_resource_name(rb->ssao.ao_final, "SSAO Final");
@@ -2022,7 +2048,7 @@ void RendererSceneRenderRD::_process_ssao(RID p_render_buffers, RID p_environmen
settings.blur_passes = ssao_blur_passes;
settings.fadeout_from = ssao_fadeout_from;
settings.fadeout_to = ssao_fadeout_to;
- settings.full_screen_size = Size2i(rb->width, rb->height);
+ settings.full_screen_size = Size2i(rb->internal_width, rb->internal_height);
settings.half_screen_size = Size2i(buffer_width, buffer_height);
settings.quarter_screen_size = Size2i(half_width, half_height);
@@ -2102,9 +2128,9 @@ void RendererSceneRenderRD::_render_buffers_post_process_and_tonemap(const Rende
EffectsRD::BokehBuffers buffers;
- // Textures we use.
- buffers.base_texture_size = Size2i(rb->width, rb->height);
- buffers.base_texture = rb->texture;
+ // Textures we use
+ buffers.base_texture_size = Size2i(rb->internal_width, rb->internal_height);
+ buffers.base_texture = rb->internal_texture;
buffers.depth_texture = rb->depth_texture;
buffers.secondary_texture = rb->blur[0].mipmaps[0].texture;
buffers.half_texture[0] = rb->blur[1].mipmaps[0].texture;
@@ -2143,9 +2169,9 @@ void RendererSceneRenderRD::_render_buffers_post_process_and_tonemap(const Rende
double step = env->auto_exp_speed * time_step;
if (can_use_storage) {
- storage->get_effects()->luminance_reduction(rb->texture, Size2i(rb->width, rb->height), rb->luminance.reduce, rb->luminance.current, env->min_luminance, env->max_luminance, step, set_immediate);
+ storage->get_effects()->luminance_reduction(rb->internal_texture, Size2i(rb->internal_width, rb->internal_height), rb->luminance.reduce, rb->luminance.current, env->min_luminance, env->max_luminance, step, set_immediate);
} else {
- storage->get_effects()->luminance_reduction_raster(rb->texture, Size2i(rb->width, rb->height), rb->luminance.reduce, rb->luminance.fb, rb->luminance.current, env->min_luminance, env->max_luminance, step, set_immediate);
+ storage->get_effects()->luminance_reduction_raster(rb->internal_texture, Size2i(rb->internal_width, rb->internal_height), rb->luminance.reduce, rb->luminance.fb, rb->luminance.current, env->min_luminance, env->max_luminance, step, set_immediate);
}
// Swap final reduce with prev luminance.
SWAP(rb->luminance.current, rb->luminance.reduce.write[rb->luminance.reduce.size() - 1]);
@@ -2188,9 +2214,9 @@ void RendererSceneRenderRD::_render_buffers_post_process_and_tonemap(const Rende
luminance_texture = rb->luminance.current;
}
if (can_use_storage) {
- storage->get_effects()->gaussian_glow(rb->texture, rb->blur[1].mipmaps[i].texture, Size2i(vp_w, vp_h), env->glow_strength, glow_high_quality, true, env->glow_hdr_luminance_cap, env->exposure, env->glow_bloom, env->glow_hdr_bleed_threshold, env->glow_hdr_bleed_scale, luminance_texture, env->auto_exp_scale);
+ storage->get_effects()->gaussian_glow(rb->internal_texture, rb->blur[1].mipmaps[i].texture, Size2i(vp_w, vp_h), env->glow_strength, glow_high_quality, true, env->glow_hdr_luminance_cap, env->exposure, env->glow_bloom, env->glow_hdr_bleed_threshold, env->glow_hdr_bleed_scale, luminance_texture, env->auto_exp_scale);
} else {
- storage->get_effects()->gaussian_glow_raster(rb->texture, rb->blur[1].mipmaps[i].half_fb, rb->blur[1].mipmaps[i].half_texture, rb->blur[1].mipmaps[i].fb, Size2i(vp_w, vp_h), env->glow_strength, glow_high_quality, true, env->glow_hdr_luminance_cap, env->exposure, env->glow_bloom, env->glow_hdr_bleed_threshold, env->glow_hdr_bleed_scale, luminance_texture, env->auto_exp_scale);
+ storage->get_effects()->gaussian_glow_raster(rb->internal_texture, rb->blur[1].mipmaps[i].half_fb, rb->blur[1].mipmaps[i].half_texture, rb->blur[1].mipmaps[i].fb, Size2i(vp_w, vp_h), env->glow_strength, glow_high_quality, true, env->glow_hdr_luminance_cap, env->exposure, env->glow_bloom, env->glow_hdr_bleed_threshold, env->glow_hdr_bleed_scale, luminance_texture, env->auto_exp_scale);
}
} else {
if (can_use_storage) {
@@ -2237,7 +2263,7 @@ void RendererSceneRenderRD::_render_buffers_post_process_and_tonemap(const Rende
}
tonemap.use_debanding = rb->use_debanding;
- tonemap.texture_size = Vector2i(rb->width, rb->height);
+ tonemap.texture_size = Vector2i(rb->internal_width, rb->internal_height);
if (env) {
tonemap.tonemap_mode = env->tone_mapper;
@@ -2268,7 +2294,15 @@ void RendererSceneRenderRD::_render_buffers_post_process_and_tonemap(const Rende
tonemap.luminance_multiplier = _render_buffers_get_luminance_multiplier();
tonemap.view_count = p_render_data->view_count;
- storage->get_effects()->tonemapper(rb->texture, storage->render_target_get_rd_framebuffer(rb->render_target), tonemap);
+ storage->get_effects()->tonemapper(rb->internal_texture, storage->render_target_get_rd_framebuffer(rb->render_target), tonemap);
+
+ RD::get_singleton()->draw_command_end_label();
+ }
+
+ if (can_use_effects && can_use_storage && (rb->internal_width != rb->width || rb->internal_height != rb->height)) {
+ RD::get_singleton()->draw_command_begin_label("FSR Upscale");
+
+ storage->get_effects()->fsr_upscale(rb->internal_texture, rb->upscale_texture, rb->texture, Size2i(rb->internal_width, rb->internal_height), Size2i(rb->width, rb->height), rb->fsr_sharpness);
RD::get_singleton()->draw_command_end_label();
}
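
The upscale pass above only runs when the 3D buffers were rendered below the output resolution. A minimal standalone sketch of the two-pass chain the `fsr_upscale()` call implies, based on the `FSR_UPSCALE_PASS_TYPE_EASU` / `FSR_UPSCALE_PASS_TYPE_RCAS` constants in the new shader; `dispatch_fsr()` and the `Texture` struct are placeholder stand-ins for illustration, not Godot API, and the actual sequencing lives in `EffectsRD::fsr_upscale()`, which is not part of this hunk.

```cpp
// Hedged sketch of the two-pass FSR chain implied by the fsr_upscale() call:
// EASU upscales from the internal render target into the intermediate
// upscale_texture, RCAS then sharpens that result into the final texture.
#include <cstdio>

enum FSRPass { FSR_PASS_EASU = 0, FSR_PASS_RCAS = 1 };

struct Texture { const char *name; int width, height; }; // illustrative stand-in

// Placeholder for a compute dispatch of fsr_upscale.glsl with the given pass.
static void dispatch_fsr(FSRPass pass, const Texture &src, const Texture &dst, float sharpness) {
	printf("%s: %s (%dx%d) -> %s (%dx%d), sharpness %.2f\n",
			pass == FSR_PASS_EASU ? "EASU" : "RCAS",
			src.name, src.width, src.height, dst.name, dst.width, dst.height, sharpness);
}

int main() {
	Texture internal_texture = { "internal_texture", 1280, 720 }; // render resolution
	Texture upscale_texture = { "upscale_texture", 1920, 1080 }; // EASU output
	Texture texture = { "texture", 1920, 1080 }; // final output
	float fsr_sharpness = 0.2f;

	dispatch_fsr(FSR_PASS_EASU, internal_texture, upscale_texture, fsr_sharpness);
	dispatch_fsr(FSR_PASS_RCAS, upscale_texture, texture, fsr_sharpness);
	return 0;
}
```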
@@ -2628,14 +2662,28 @@ bool RendererSceneRenderRD::_render_buffers_can_be_storage() {
return true;
}
-void RendererSceneRenderRD::render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_width, int p_height, RS::ViewportMSAA p_msaa, RenderingServer::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) {
+void RendererSceneRenderRD::render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_internal_width, int p_internal_height, int p_width, int p_height, float p_fsr_sharpness, float p_fsr_mipmap_bias, RS::ViewportMSAA p_msaa, RenderingServer::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) {
ERR_FAIL_COND_MSG(p_view_count == 0, "Must have at least 1 view");
+ if (!_render_buffers_can_be_storage()) {
+ p_internal_height = p_height;
+ p_internal_width = p_width;
+ }
+
+ if (p_width != p_internal_width) {
+ float fsr_mipmap_bias = -log2f(float(p_width) / float(p_internal_width)) + p_fsr_mipmap_bias;
+ storage->sampler_rd_configure_custom(fsr_mipmap_bias);
+ update_uniform_sets();
+ }
+
RenderBuffers *rb = render_buffers_owner.get_or_null(p_render_buffers);
// Should we add an overrule per viewport?
+ rb->internal_width = p_internal_width;
+ rb->internal_height = p_internal_height;
rb->width = p_width;
rb->height = p_height;
+ rb->fsr_sharpness = p_fsr_sharpness;
rb->render_target = p_render_target;
rb->msaa = p_msaa;
rb->screen_space_aa = p_screen_space_aa;
@@ -2657,8 +2705,8 @@ void RendererSceneRenderRD::render_buffers_configure(RID p_render_buffers, RID p
tf.texture_type = RD::TEXTURE_TYPE_2D_ARRAY;
}
tf.format = _render_buffers_get_color_format();
- tf.width = rb->width;
- tf.height = rb->height;
+ tf.width = rb->internal_width; // If set to rb->width, MSAA won't crash.
+ tf.height = rb->internal_height; // If set to rb->height, MSAA won't crash.
tf.array_layers = rb->view_count; // create a layer for every view
tf.usage_bits = RD::TEXTURE_USAGE_SAMPLING_BIT | (_render_buffers_can_be_storage() ? RD::TEXTURE_USAGE_STORAGE_BIT : 0) | RD::TEXTURE_USAGE_COLOR_ATTACHMENT_BIT;
if (rb->msaa != RS::VIEWPORT_MSAA_DISABLED) {
@@ -2666,7 +2714,17 @@ void RendererSceneRenderRD::render_buffers_configure(RID p_render_buffers, RID p
}
tf.usage_bits |= RD::TEXTURE_USAGE_INPUT_ATTACHMENT_BIT; // only needed when using subpasses in the mobile renderer
- rb->texture = RD::get_singleton()->texture_create(tf, RD::TextureView());
+ rb->internal_texture = RD::get_singleton()->texture_create(tf, RD::TextureView());
+
+ if ((p_internal_width != p_width || p_internal_height != p_height)) {
+ tf.width = rb->width;
+ tf.height = rb->height;
+ rb->texture = RD::get_singleton()->texture_create(tf, RD::TextureView());
+ rb->upscale_texture = RD::get_singleton()->texture_create(tf, RD::TextureView());
+ } else {
+ rb->texture = rb->internal_texture;
+ rb->upscale_texture = rb->internal_texture;
+ }
}
{
@@ -2680,8 +2738,8 @@ void RendererSceneRenderRD::render_buffers_configure(RID p_render_buffers, RID p
tf.format = RD::DATA_FORMAT_R32_SFLOAT;
}
- tf.width = rb->width;
- tf.height = rb->height;
+ tf.width = rb->internal_width;
+ tf.height = rb->internal_height;
tf.usage_bits = RD::TEXTURE_USAGE_SAMPLING_BIT;
tf.array_layers = rb->view_count; // create a layer for every view
@@ -2697,16 +2755,16 @@ void RendererSceneRenderRD::render_buffers_configure(RID p_render_buffers, RID p
if (!_render_buffers_can_be_storage()) {
// ONLY USED ON MOBILE RENDERER, ONLY USED FOR POST EFFECTS!
Vector<RID> fb;
- fb.push_back(rb->texture);
+ fb.push_back(rb->internal_texture);
rb->texture_fb = RD::get_singleton()->framebuffer_create(fb, RenderingDevice::INVALID_ID, rb->view_count);
}
RID target_texture = storage->render_target_get_rd_texture(rb->render_target);
- rb->data->configure(rb->texture, rb->depth_texture, target_texture, rb->width, rb->height, p_msaa, p_view_count);
+ rb->data->configure(rb->internal_texture, rb->depth_texture, target_texture, p_internal_width, p_internal_height, p_msaa, p_view_count);
if (is_clustered_enabled()) {
- rb->cluster_builder->setup(Size2i(rb->width, rb->height), max_cluster_elements, rb->depth_texture, storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED), rb->texture);
+ rb->cluster_builder->setup(Size2i(p_internal_width, p_internal_height), max_cluster_elements, rb->depth_texture, storage->sampler_rd_get_default(RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST, RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED), rb->internal_texture);
}
}
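
As a sanity check on the mipmap bias applied in `render_buffers_configure()` above, a small standalone example; the widths below are arbitrary, and the division has to happen in floating point, which is why the call site casts its integer arguments.

```cpp
// Standalone check of the FSR mipmap bias formula used above:
//   bias = -log2(output_width / internal_width) + user_bias
#include <cmath>
#include <cstdio>

static float fsr_mipmap_bias(int output_width, int internal_width, float user_bias) {
	// Floating-point division is required; integer division would truncate
	// (e.g. 1920 / 1280 == 1, silently zeroing the bias).
	return -std::log2(float(output_width) / float(internal_width)) + user_bias;
}

int main() {
	printf("50%% scale: %f\n", fsr_mipmap_bias(1920, 960, 0.0f)); // -1.0
	printf("75%% scale: %f\n", fsr_mipmap_bias(1920, 1440, 0.0f)); // about -0.415
	printf("75%% scale, user bias 0.5: %f\n", fsr_mipmap_bias(1920, 1440, 0.5f));
	return 0;
}
```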
@@ -4704,9 +4762,7 @@ void RendererSceneRenderRD::render_scene(RID p_render_buffers, const CameraData
if (p_render_buffers.is_valid()) {
/*
_debug_draw_cluster(p_render_buffers);
-
RENDER_TIMESTAMP("Tonemap");
-
_render_buffers_post_process_and_tonemap(&render_data);
*/
diff --git a/servers/rendering/renderer_rd/renderer_scene_render_rd.h b/servers/rendering/renderer_rd/renderer_scene_render_rd.h
index 740e0e75ce2..98ab1a2c3c4 100644
--- a/servers/rendering/renderer_rd/renderer_scene_render_rd.h
+++ b/servers/rendering/renderer_rd/renderer_scene_render_rd.h
@@ -456,7 +456,11 @@ private:
struct RenderBuffers {
RenderBufferData *data = nullptr;
- int width = 0, height = 0;
+ int internal_width = 0;
+ int internal_height = 0;
+ int width = 0;
+ int height = 0;
+ float fsr_sharpness = 0.2f;
RS::ViewportMSAA msaa = RS::VIEWPORT_MSAA_DISABLED;
RS::ViewportScreenSpaceAA screen_space_aa = RS::VIEWPORT_SCREEN_SPACE_AA_DISABLED;
bool use_debanding = false;
@@ -466,9 +470,12 @@ private:
uint64_t auto_exposure_version = 1;
- RID texture; //main texture for rendering to, must be filled after done rendering
+ RID sss_texture; //texture for sss. This needs to be a different resolution than blur[0]
+ RID internal_texture; //main texture for rendering to, must be filled after done rendering
+ RID texture; //upscaled version of main texture (This uses the same resource as internal_texture if there is no upscaling)
RID depth_texture; //main depth texture
RID texture_fb; // framebuffer for the main texture, ONLY USED FOR MOBILE RENDERER POST EFFECTS, DO NOT USE FOR RENDERING 3D!!!
+ RID upscale_texture; //used when upscaling internal_texture (This uses the same resource as internal_texture if there is no upscaling)
RendererSceneGIRD::SDFGI *sdfgi = nullptr;
VolumetricFog *volumetric_fog = nullptr;
@@ -1332,7 +1339,7 @@ public:
virtual RD::DataFormat _render_buffers_get_color_format();
virtual bool _render_buffers_can_be_storage();
virtual RID render_buffers_create() override;
- virtual void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_width, int p_height, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) override;
+ virtual void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_internal_width, int p_internal_height, int p_width, int p_height, float p_fsr_sharpness, float p_fsr_mipmap_bias, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) override;
virtual void gi_set_use_half_resolution(bool p_enable) override;
RID render_buffers_get_depth_texture(RID p_render_buffers);
@@ -1363,6 +1370,8 @@ public:
float render_buffers_get_volumetric_fog_end(RID p_render_buffers);
float render_buffers_get_volumetric_fog_detail_spread(RID p_render_buffers);
+ virtual void update_uniform_sets(){};
+
virtual void render_scene(RID p_render_buffers, const CameraData *p_camera_data, const PagedArray<GeometryInstance *> &p_instances, const PagedArray<RID> &p_lights, const PagedArray<RID> &p_reflection_probes, const PagedArray<RID> &p_voxel_gi_instances, const PagedArray<RID> &p_decals, const PagedArray<RID> &p_lightmaps, const PagedArray<RID> &p_fog_volumes, RID p_environment, RID p_camera_effects, RID p_shadow_atlas, RID p_occluder_debug_tex, RID p_reflection_atlas, RID p_reflection_probe, int p_reflection_probe_pass, float p_screen_lod_threshold, const RenderShadowData *p_render_shadows, int p_render_shadow_count, const RenderSDFGIData *p_render_sdfgi_regions, int p_render_sdfgi_region_count, const RenderSDFGIUpdateData *p_sdfgi_update_data = nullptr, RendererScene::RenderInfo *r_render_info = nullptr) override;
virtual void render_material(const Transform3D &p_cam_transform, const CameraMatrix &p_cam_projection, bool p_cam_ortogonal, const PagedArray<GeometryInstance *> &p_instances, RID p_framebuffer, const Rect2i &p_region) override;
diff --git a/servers/rendering/renderer_rd/renderer_storage_rd.cpp b/servers/rendering/renderer_rd/renderer_storage_rd.cpp
index 04753d7a9bf..04acc871b47 100644
--- a/servers/rendering/renderer_rd/renderer_storage_rd.cpp
+++ b/servers/rendering/renderer_rd/renderer_storage_rd.cpp
@@ -1227,6 +1227,100 @@ RendererStorageRD::CanvasTexture::~CanvasTexture() {
clear_sets();
}
+void RendererStorageRD::sampler_rd_configure_custom(float p_mipmap_bias) {
+ for (int i = 1; i < RS::CANVAS_ITEM_TEXTURE_FILTER_MAX; i++) {
+ for (int j = 1; j < RS::CANVAS_ITEM_TEXTURE_REPEAT_MAX; j++) {
+ RD::SamplerState sampler_state;
+ switch (i) {
+ case RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST: {
+ sampler_state.mag_filter = RD::SAMPLER_FILTER_NEAREST;
+ sampler_state.min_filter = RD::SAMPLER_FILTER_NEAREST;
+ sampler_state.max_lod = 0;
+ } break;
+ case RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR: {
+ sampler_state.mag_filter = RD::SAMPLER_FILTER_LINEAR;
+ sampler_state.min_filter = RD::SAMPLER_FILTER_LINEAR;
+ sampler_state.max_lod = 0;
+ } break;
+ case RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS: {
+ sampler_state.mag_filter = RD::SAMPLER_FILTER_NEAREST;
+ sampler_state.min_filter = RD::SAMPLER_FILTER_NEAREST;
+ if (GLOBAL_GET("rendering/textures/default_filters/use_nearest_mipmap_filter")) {
+ sampler_state.mip_filter = RD::SAMPLER_FILTER_NEAREST;
+ } else {
+ sampler_state.mip_filter = RD::SAMPLER_FILTER_LINEAR;
+ }
+ sampler_state.lod_bias = p_mipmap_bias;
+ } break;
+ case RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS: {
+ sampler_state.mag_filter = RD::SAMPLER_FILTER_LINEAR;
+ sampler_state.min_filter = RD::SAMPLER_FILTER_LINEAR;
+ if (GLOBAL_GET("rendering/textures/default_filters/use_nearest_mipmap_filter")) {
+ sampler_state.mip_filter = RD::SAMPLER_FILTER_NEAREST;
+ } else {
+ sampler_state.mip_filter = RD::SAMPLER_FILTER_LINEAR;
+ }
+ sampler_state.lod_bias = p_mipmap_bias;
+
+ } break;
+ case RS::CANVAS_ITEM_TEXTURE_FILTER_NEAREST_WITH_MIPMAPS_ANISOTROPIC: {
+ sampler_state.mag_filter = RD::SAMPLER_FILTER_NEAREST;
+ sampler_state.min_filter = RD::SAMPLER_FILTER_NEAREST;
+ if (GLOBAL_GET("rendering/textures/default_filters/use_nearest_mipmap_filter")) {
+ sampler_state.mip_filter = RD::SAMPLER_FILTER_NEAREST;
+ } else {
+ sampler_state.mip_filter = RD::SAMPLER_FILTER_LINEAR;
+ }
+ sampler_state.lod_bias = p_mipmap_bias;
+ sampler_state.use_anisotropy = true;
+ sampler_state.anisotropy_max = 1 << int(GLOBAL_GET("rendering/textures/default_filters/anisotropic_filtering_level"));
+ } break;
+ case RS::CANVAS_ITEM_TEXTURE_FILTER_LINEAR_WITH_MIPMAPS_ANISOTROPIC: {
+ sampler_state.mag_filter = RD::SAMPLER_FILTER_LINEAR;
+ sampler_state.min_filter = RD::SAMPLER_FILTER_LINEAR;
+ if (GLOBAL_GET("rendering/textures/default_filters/use_nearest_mipmap_filter")) {
+ sampler_state.mip_filter = RD::SAMPLER_FILTER_NEAREST;
+ } else {
+ sampler_state.mip_filter = RD::SAMPLER_FILTER_LINEAR;
+ }
+ sampler_state.lod_bias = p_mipmap_bias;
+ sampler_state.use_anisotropy = true;
+ sampler_state.anisotropy_max = 1 << int(GLOBAL_GET("rendering/textures/default_filters/anisotropic_filtering_level"));
+
+ } break;
+ default: {
+ }
+ }
+ switch (j) {
+ case RS::CANVAS_ITEM_TEXTURE_REPEAT_DISABLED: {
+ sampler_state.repeat_u = RD::SAMPLER_REPEAT_MODE_CLAMP_TO_EDGE;
+ sampler_state.repeat_v = RD::SAMPLER_REPEAT_MODE_CLAMP_TO_EDGE;
+ sampler_state.repeat_w = RD::SAMPLER_REPEAT_MODE_CLAMP_TO_EDGE;
+
+ } break;
+ case RS::CANVAS_ITEM_TEXTURE_REPEAT_ENABLED: {
+ sampler_state.repeat_u = RD::SAMPLER_REPEAT_MODE_REPEAT;
+ sampler_state.repeat_v = RD::SAMPLER_REPEAT_MODE_REPEAT;
+ sampler_state.repeat_w = RD::SAMPLER_REPEAT_MODE_REPEAT;
+ } break;
+ case RS::CANVAS_ITEM_TEXTURE_REPEAT_MIRROR: {
+ sampler_state.repeat_u = RD::SAMPLER_REPEAT_MODE_MIRRORED_REPEAT;
+ sampler_state.repeat_v = RD::SAMPLER_REPEAT_MODE_MIRRORED_REPEAT;
+ sampler_state.repeat_w = RD::SAMPLER_REPEAT_MODE_MIRRORED_REPEAT;
+ } break;
+ default: {
+ }
+ }
+
+ if (custom_rd_samplers[i][j].is_valid()) {
+ RD::get_singleton()->free(custom_rd_samplers[i][j]);
+ }
+
+ custom_rd_samplers[i][j] = RD::get_singleton()->sampler_create(sampler_state);
+ }
+ }
+}
+
RID RendererStorageRD::canvas_texture_allocate() {
return canvas_texture_owner.allocate_rid();
}
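
The custom sampler table built above mirrors the default table but carries the FSR mipmap bias. A hedged, simplified model of how a renderer would be expected to choose between `sampler_rd_get_default()` and `sampler_rd_get_custom()`; the types, array sizes and the selection flag below are illustrative stand-ins, and the actual switch-over in the scene renderer is not part of this diff.

```cpp
// Hedged, simplified model of the default vs. mipmap-biased ("custom")
// sampler tables; RID and the table sizes are stand-ins, not the real types.
#include <cstdio>

using RID = int;

struct SamplerTables {
	RID default_samplers[6][4] = {}; // sized after the filter/repeat enums (assumption)
	RID custom_samplers[6][4] = {};
	bool custom_configured = false; // set once sampler_rd_configure_custom() ran

	RID get_sampler(int filter, int repeat, bool use_custom_bias) const {
		// 3D material rendering would pick the biased sampler while FSR lowers
		// the internal resolution; 2D keeps the defaults so canvas filtering
		// is unaffected.
		if (use_custom_bias && custom_configured) {
			return custom_samplers[filter][repeat];
		}
		return default_samplers[filter][repeat];
	}
};

int main() {
	SamplerTables tables;
	tables.custom_configured = true;
	printf("sampler id: %d\n", tables.get_sampler(3, 1, /*use_custom_bias=*/true));
	return 0;
}
```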
@@ -9774,6 +9868,9 @@ RendererStorageRD::RendererStorageRD() {
}
}
+ //custom sampler
+ sampler_rd_configure_custom(0.0f);
+
//default rd buffers
{
Vector<uint8_t> buffer;
@@ -10104,6 +10201,15 @@ RendererStorageRD::~RendererStorageRD() {
}
}
+ //custom samplers
+ for (int i = 1; i < RS::CANVAS_ITEM_TEXTURE_FILTER_MAX; i++) {
+ for (int j = 0; j < RS::CANVAS_ITEM_TEXTURE_REPEAT_MAX; j++) {
+ if (custom_rd_samplers[i][j].is_valid()) {
+ RD::get_singleton()->free(custom_rd_samplers[i][j]);
+ }
+ }
+ }
+
//def buffers
for (int i = 0; i < DEFAULT_RD_BUFFER_MAX; i++) {
RD::get_singleton()->free(mesh_default_rd_buffers[i]);
diff --git a/servers/rendering/renderer_rd/renderer_storage_rd.h b/servers/rendering/renderer_rd/renderer_storage_rd.h
index 2cd3a01c66f..9a64480c3e1 100644
--- a/servers/rendering/renderer_rd/renderer_storage_rd.h
+++ b/servers/rendering/renderer_rd/renderer_storage_rd.h
@@ -320,6 +320,7 @@ private:
RID default_rd_textures[DEFAULT_RD_TEXTURE_MAX];
RID default_rd_samplers[RS::CANVAS_ITEM_TEXTURE_FILTER_MAX][RS::CANVAS_ITEM_TEXTURE_REPEAT_MAX];
+ RID custom_rd_samplers[RS::CANVAS_ITEM_TEXTURE_FILTER_MAX][RS::CANVAS_ITEM_TEXTURE_REPEAT_MAX];
RID default_rd_storage_buffer;
/* DECAL ATLAS */
@@ -1391,6 +1392,13 @@ public:
_FORCE_INLINE_ RID sampler_rd_get_default(RS::CanvasItemTextureFilter p_filter, RS::CanvasItemTextureRepeat p_repeat) {
return default_rd_samplers[p_filter][p_repeat];
}
+ _FORCE_INLINE_ RID sampler_rd_get_custom(RS::CanvasItemTextureFilter p_filter, RS::CanvasItemTextureRepeat p_repeat) {
+ return custom_rd_samplers[p_filter][p_repeat];
+ }
+
+ void sampler_rd_configure_custom(float mipmap_bias);
+
+ void sampler_rd_set_default(float p_mipmap_bias);
/* CANVAS TEXTURE API */
diff --git a/servers/rendering/renderer_rd/shaders/fsr_upscale.glsl b/servers/rendering/renderer_rd/shaders/fsr_upscale.glsl
new file mode 100644
index 00000000000..4e2ba84033b
--- /dev/null
+++ b/servers/rendering/renderer_rd/shaders/fsr_upscale.glsl
@@ -0,0 +1,173 @@
+/*************************************************************************/
+/* fsr_upscale.glsl */
+/*************************************************************************/
+/* This file is part of: */
+/* GODOT ENGINE */
+/* https://godotengine.org */
+/*************************************************************************/
+/* Copyright (c) 2007-2021 Juan Linietsky, Ariel Manzur. */
+/* Copyright (c) 2014-2021 Godot Engine contributors (cf. AUTHORS.md). */
+/* */
+/* Permission is hereby granted, free of charge, to any person obtaining */
+/* a copy of this software and associated documentation files (the */
+/* "Software"), to deal in the Software without restriction, including */
+/* without limitation the rights to use, copy, modify, merge, publish, */
+/* distribute, sublicense, and/or sell copies of the Software, and to */
+/* permit persons to whom the Software is furnished to do so, subject to */
+/* the following conditions: */
+/* */
+/* The above copyright notice and this permission notice shall be */
+/* included in all copies or substantial portions of the Software. */
+/* */
+/* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, */
+/* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF */
+/* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.*/
+/* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY */
+/* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, */
+/* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE */
+/* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
+/*************************************************************************/
+
+#[compute]
+
+#version 450
+
+#VERSION_DEFINES
+
+#define A_GPU
+#define A_GLSL
+
+#ifdef MODE_FSR_UPSCALE_NORMAL
+
+#define A_HALF
+
+#endif
+
+#include "thirdparty/amd-fsr/ffx_a.h"
+
+layout(local_size_x = 64, local_size_y = 1, local_size_z = 1) in;
+
+layout(rgba16f, set = 1, binding = 0) uniform restrict writeonly image2D fsr_image;
+layout(set = 0, binding = 0) uniform sampler2D source_image;
+
+#define FSR_UPSCALE_PASS_TYPE_EASU 0
+#define FSR_UPSCALE_PASS_TYPE_RCAS 1
+
+layout(push_constant, binding = 1, std430) uniform Params {
+ float resolution_width;
+ float resolution_height;
+ float upscaled_width;
+ float upscaled_height;
+ float sharpness;
+ int pass;
+}
+params;
+
+AU4 Const0, Const1, Const2, Const3;
+
+#ifdef MODE_FSR_UPSCALE_FALLBACK
+
+#define FSR_EASU_F
+AF4 FsrEasuRF(AF2 p) {
+ AF4 res = textureGather(source_image, p, 0);
+ return res;
+}
+AF4 FsrEasuGF(AF2 p) {
+ AF4 res = textureGather(source_image, p, 1);
+ return res;
+}
+AF4 FsrEasuBF(AF2 p) {
+ AF4 res = textureGather(source_image, p, 2);
+ return res;
+}
+
+#define FSR_RCAS_F
+AF4 FsrRcasLoadF(ASU2 p) {
+ return AF4(texelFetch(source_image, ASU2(p), 0));
+}
+void FsrRcasInputF(inout AF1 r, inout AF1 g, inout AF1 b) {}
+
+#else
+
+#define FSR_EASU_H
+AH4 FsrEasuRH(AF2 p) {
+ AH4 res = AH4(textureGather(source_image, p, 0));
+ return res;
+}
+AH4 FsrEasuGH(AF2 p) {
+ AH4 res = AH4(textureGather(source_image, p, 1));
+ return res;
+}
+AH4 FsrEasuBH(AF2 p) {
+ AH4 res = AH4(textureGather(source_image, p, 2));
+ return res;
+}
+
+#define FSR_RCAS_H
+AH4 FsrRcasLoadH(ASW2 p) {
+ return AH4(texelFetch(source_image, ASU2(p), 0));
+}
+void FsrRcasInputH(inout AH1 r, inout AH1 g, inout AH1 b) {}
+
+#endif
+
+#include "thirdparty/amd-fsr/ffx_fsr1.h"
+
+void fsr_easu_pass(AU2 pos) {
+#ifdef MODE_FSR_UPSCALE_NORMAL
+
+ AH3 Gamma2Color = AH3(0, 0, 0);
+ FsrEasuH(Gamma2Color, pos, Const0, Const1, Const2, Const3);
+ imageStore(fsr_image, ASU2(pos), AH4(Gamma2Color, 1));
+
+#else
+
+ AF3 Gamma2Color = AF3(0, 0, 0);
+ FsrEasuF(Gamma2Color, pos, Const0, Const1, Const2, Const3);
+ imageStore(fsr_image, ASU2(pos), AF4(Gamma2Color, 1));
+
+#endif
+}
+
+void fsr_rcas_pass(AU2 pos) {
+#ifdef MODE_FSR_UPSCALE_NORMAL
+
+ AH3 Gamma2Color = AH3(0, 0, 0);
+ FsrRcasH(Gamma2Color.r, Gamma2Color.g, Gamma2Color.b, pos, Const0);
+ imageStore(fsr_image, ASU2(pos), AH4(Gamma2Color, 1));
+
+#else
+
+ AF3 Gamma2Color = AF3(0, 0, 0);
+ FsrRcasF(Gamma2Color.r, Gamma2Color.g, Gamma2Color.b, pos, Const0);
+ imageStore(fsr_image, ASU2(pos), AF4(Gamma2Color, 1));
+
+#endif
+}
+
+void fsr_pass(AU2 pos) {
+ if (params.pass == FSR_UPSCALE_PASS_TYPE_EASU) {
+ fsr_easu_pass(pos);
+ } else if (params.pass == FSR_UPSCALE_PASS_TYPE_RCAS) {
+ fsr_rcas_pass(pos);
+ }
+}
+
+void main() {
+ // Clang does not tolerate unused functions: if ffx_a.h ends up in the binary with these helpers unused, compilation fails, so FSR must be configured here in the shader.
+ if (params.pass == FSR_UPSCALE_PASS_TYPE_EASU) {
+ FsrEasuCon(Const0, Const1, Const2, Const3, params.resolution_width, params.resolution_height, params.resolution_width, params.resolution_height, params.upscaled_width, params.upscaled_height);
+ } else if (params.pass == FSR_UPSCALE_PASS_TYPE_RCAS) {
+ FsrRcasCon(Const0, params.sharpness);
+ }
+
+ AU2 gxy = ARmp8x8(gl_LocalInvocationID.x) + AU2(gl_WorkGroupID.x << 4u, gl_WorkGroupID.y << 4u);
+
+ fsr_pass(gxy);
+ gxy.x += 8u;
+ fsr_pass(gxy);
+ gxy.y += 8u;
+ fsr_pass(gxy);
+ gxy.x -= 8u;
+ fsr_pass(gxy);
+}
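
Each 64-thread workgroup of this shader covers a 16×16 pixel tile: `ARmp8x8` maps the invocation index onto an 8×8 block and the four `fsr_pass()` calls offset it by 8 pixels in X and Y. A standalone sketch of the host-side math this implies, assuming the half-float variant (`MODE_FSR_UPSCALE_NORMAL`) is only picked when the device reports the new `supports_fsr_half_float` capability; the real dispatch code lives in the effects layer and is not shown in this diff.

```cpp
// Hedged host-side sketch for fsr_upscale.glsl: each 64-thread workgroup
// covers a 16x16 pixel tile (4 pixels per invocation), so the group count is
// ceil(size / 16) in each dimension.
#include <cstdio>

struct Dispatch { unsigned x, y, z; };

static Dispatch fsr_group_count(unsigned target_width, unsigned target_height) {
	return { (target_width + 15u) / 16u, (target_height + 15u) / 16u, 1u };
}

static const char *fsr_variant(bool supports_fsr_half_float) {
	// The fallback variant skips A_HALF and works in 32-bit floats.
	return supports_fsr_half_float ? "MODE_FSR_UPSCALE_NORMAL" : "MODE_FSR_UPSCALE_FALLBACK";
}

int main() {
	Dispatch d = fsr_group_count(1920, 1080);
	printf("EASU/RCAS dispatch for 1920x1080: %ux%ux%u groups, variant %s\n",
			d.x, d.y, d.z, fsr_variant(true));
	return 0;
}
```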
diff --git a/servers/rendering/renderer_scene.h b/servers/rendering/renderer_scene.h
index b6e99e4be56..02c845581ca 100644
--- a/servers/rendering/renderer_scene.h
+++ b/servers/rendering/renderer_scene.h
@@ -191,7 +191,7 @@ public:
virtual RID render_buffers_create() = 0;
- virtual void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_width, int p_height, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) = 0;
+ virtual void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_internal_width, int p_internal_height, int p_width, int p_height, float p_fsr_sharpness, float p_fsr_mipmap_bias, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) = 0;
virtual void gi_set_use_half_resolution(bool p_enable) = 0;
diff --git a/servers/rendering/renderer_scene_cull.h b/servers/rendering/renderer_scene_cull.h
index 2bfcfd462c9..e51a1fc02e8 100644
--- a/servers/rendering/renderer_scene_cull.h
+++ b/servers/rendering/renderer_scene_cull.h
@@ -1155,7 +1155,7 @@ public:
/* Render Buffers */
PASS0R(RID, render_buffers_create)
- PASS8(render_buffers_configure, RID, RID, int, int, RS::ViewportMSAA, RS::ViewportScreenSpaceAA, bool, uint32_t)
+ PASS12(render_buffers_configure, RID, RID, int, int, int, int, float, float, RS::ViewportMSAA, RS::ViewportScreenSpaceAA, bool, uint32_t)
PASS1(gi_set_use_half_resolution, bool)
/* Shadow Atlas */
diff --git a/servers/rendering/renderer_scene_render.h b/servers/rendering/renderer_scene_render.h
index 200ddc55d49..0d71ea22daa 100644
--- a/servers/rendering/renderer_scene_render.h
+++ b/servers/rendering/renderer_scene_render.h
@@ -256,7 +256,7 @@ public:
virtual void set_debug_draw_mode(RS::ViewportDebugDraw p_debug_draw) = 0;
virtual RID render_buffers_create() = 0;
- virtual void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_width, int p_height, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) = 0;
+ virtual void render_buffers_configure(RID p_render_buffers, RID p_render_target, int p_internal_width, int p_internal_height, int p_width, int p_height, float p_fsr_sharpness, float p_fsr_mipmap_bias, RS::ViewportMSAA p_msaa, RS::ViewportScreenSpaceAA p_screen_space_aa, bool p_use_debanding, uint32_t p_view_count) = 0;
virtual void gi_set_use_half_resolution(bool p_enable) = 0;
virtual void screen_space_roughness_limiter_set_active(bool p_enable, float p_amount, float p_limit) = 0;
diff --git a/servers/rendering/renderer_viewport.cpp b/servers/rendering/renderer_viewport.cpp
index c3d57a13adf..8a148345690 100644
--- a/servers/rendering/renderer_viewport.cpp
+++ b/servers/rendering/renderer_viewport.cpp
@@ -77,18 +77,69 @@ void RendererViewport::_configure_3d_render_buffers(Viewport *p_viewport) {
RSG::scene->free(p_viewport->render_buffers);
p_viewport->render_buffers = RID();
} else {
- float scale_3d = p_viewport->scale_3d;
- if (Engine::get_singleton()->is_editor_hint()) {
- // Ignore the 3D viewport render scaling inside of the editor.
- // The Half Resolution 3D editor viewport option should be used instead.
- scale_3d = 1.0;
+ float scaling_3d_scale = p_viewport->scaling_3d_scale;
+
+ RS::ViewportScaling3DMode scaling_3d_mode = p_viewport->scaling_3d_mode;
+ bool scaling_enabled = true;
+
+ if ((scaling_3d_mode == RS::VIEWPORT_SCALING_3D_MODE_FSR) && (scaling_3d_scale > 1.0)) {
+ // FSR is an upscaler and is not designed for downsampling a supersampled image.
+ // Warn and fall back to VIEWPORT_SCALING_3D_MODE_BILINEAR.
+ print_error("FSR does not support supersampling. Falling back to bilinear mode.");
+ scaling_3d_mode = RS::VIEWPORT_SCALING_3D_MODE_BILINEAR;
}
- // Clamp 3D rendering resolution to reasonable values supported on most hardware.
- // This prevents freezing the engine or outright crashing on lower-end GPUs.
- const int width = CLAMP(p_viewport->size.width * scale_3d, 1, 16384);
- const int height = CLAMP(p_viewport->size.height * scale_3d, 1, 16384);
- RSG::scene->render_buffers_configure(p_viewport->render_buffers, p_viewport->render_target, width, height, p_viewport->msaa, p_viewport->screen_space_aa, p_viewport->use_debanding, p_viewport->get_view_count());
+ if ((scaling_3d_mode == RS::VIEWPORT_SCALING_3D_MODE_FSR) && !p_viewport->fsr_enabled) {
+ // FSR is not actually available on this renderer.
+ // Warn and fall back to disabled scaling.
+ print_error("FSR is not available on this renderer. Disabling 3D resolution scaling. Use bilinear scaling instead.");
+ scaling_enabled = false;
+ }
+
+ if (scaling_3d_scale == 1.0) {
+ scaling_enabled = false;
+ }
+
+ int width;
+ int height;
+ int render_width;
+ int render_height;
+
+ if (scaling_enabled) {
+ switch (scaling_3d_mode) {
+ case RS::VIEWPORT_SCALING_3D_MODE_BILINEAR:
+ // Clamp 3D rendering resolution to reasonable values supported on most hardware.
+ // This prevents freezing the engine or outright crashing on lower-end GPUs.
+ width = CLAMP(p_viewport->size.width * scaling_3d_scale, 1, 16384);
+ height = CLAMP(p_viewport->size.height * scaling_3d_scale, 1, 16384);
+ render_width = width;
+ render_height = height;
+ break;
+ case RS::VIEWPORT_SCALING_3D_MODE_FSR:
+ width = p_viewport->size.width;
+ height = p_viewport->size.height;
+ render_width = MAX(width * scaling_3d_scale, 1.0); // Render at the reduced internal size; FSR upscales to the full viewport size.
+ render_height = MAX(height * scaling_3d_scale, 1.0);
+ break;
+ default:
+ // Unknown mode; disable scaling.
+ print_error(vformat("Unknown 3D scaling mode: %d. Disabling 3D resolution scaling.", scaling_3d_mode));
+ width = p_viewport->size.width;
+ height = p_viewport->size.height;
+ render_width = width;
+ render_height = height;
+ break;
+ }
+ } else {
+ width = p_viewport->size.width;
+ height = p_viewport->size.height;
+ render_width = width;
+ render_height = height;
+ }
+
+ p_viewport->internal_size = Size2(render_width, render_height);
+
+ RSG::scene->render_buffers_configure(p_viewport->render_buffers, p_viewport->render_target, render_width, render_height, width, height, p_viewport->fsr_sharpness, p_viewport->fsr_mipmap_bias, p_viewport->msaa, p_viewport->screen_space_aa, p_viewport->use_debanding, p_viewport->get_view_count());
}
}
}
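
A standalone worked example of the size selection above for a 1920×1080 viewport: bilinear scaling renders and outputs at the scaled size, while FSR keeps the output at the viewport size and only reduces the internal render size. The helper below is a simplified model of `_configure_3d_render_buffers()`, not the function itself.

```cpp
// Simplified model of the width/height vs. render_width/render_height split.
#include <algorithm>
#include <cstdio>

enum Scaling3DMode { SCALING_3D_MODE_BILINEAR, SCALING_3D_MODE_FSR };

struct Sizes { int width, height, render_width, render_height; };

static Sizes pick_sizes(int vp_w, int vp_h, Scaling3DMode mode, float scale) {
	Sizes s;
	if (mode == SCALING_3D_MODE_BILINEAR) {
		// Both the buffers and the output use the scaled size.
		s.width = std::clamp(int(vp_w * scale), 1, 16384);
		s.height = std::clamp(int(vp_h * scale), 1, 16384);
		s.render_width = s.width;
		s.render_height = s.height;
	} else {
		// FSR: output stays at the viewport size, only rendering is scaled down.
		s.width = vp_w;
		s.height = vp_h;
		s.render_width = std::max(int(vp_w * scale), 1);
		s.render_height = std::max(int(vp_h * scale), 1);
	}
	return s;
}

int main() {
	Sizes b = pick_sizes(1920, 1080, SCALING_3D_MODE_BILINEAR, 0.75f);
	Sizes f = pick_sizes(1920, 1080, SCALING_3D_MODE_FSR, 0.75f);
	printf("bilinear 0.75: output %dx%d, render %dx%d\n", b.width, b.height, b.render_width, b.render_height);
	printf("fsr      0.75: output %dx%d, render %dx%d\n", f.width, f.height, f.render_width, f.render_height);
	return 0;
}
```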
@@ -117,7 +168,7 @@ void RendererViewport::_draw_3d(Viewport *p_viewport) {
}
float screen_lod_threshold = p_viewport->lod_threshold / float(p_viewport->size.width);
- RSG::scene->render_camera(p_viewport->render_buffers, p_viewport->camera, p_viewport->scenario, p_viewport->self, p_viewport->size, screen_lod_threshold, p_viewport->shadow_atlas, xr_interface, &p_viewport->render_info);
+ RSG::scene->render_camera(p_viewport->render_buffers, p_viewport->camera, p_viewport->scenario, p_viewport->self, p_viewport->internal_size, screen_lod_threshold, p_viewport->shadow_atlas, xr_interface, &p_viewport->render_info);
RENDER_TIMESTAMP("size = xr_interface->get_render_target_size();
uint32_t view_count = xr_interface->get_view_count();
- RSG::storage->render_target_set_size(vp->render_target, vp->size.x, vp->size.y, view_count);
+ RSG::storage->render_target_set_size(vp->render_target, vp->internal_size.x, vp->internal_size.y, view_count);
// check for an external texture destination (disabled for now, not yet supported)
// RSG::storage->render_target_set_external_texture(vp->render_target, xr_interface->get_external_texture_for_eye(leftOrMono));
@@ -662,6 +713,8 @@ void RendererViewport::viewport_initialize(RID p_rid) {
viewport->render_target = RSG::storage->render_target_create();
viewport->shadow_atlas = RSG::scene->shadow_atlas_create();
viewport->viewport_render_direct_to_screen = false;
+
+ viewport->fsr_enabled = !RSG::rasterizer->is_low_end() && !viewport->disable_3d;
}
void RendererViewport::viewport_set_use_xr(RID p_viewport, bool p_use_xr) {
@@ -676,18 +729,42 @@ void RendererViewport::viewport_set_use_xr(RID p_viewport, bool p_use_xr) {
_configure_3d_render_buffers(viewport);
}
-void RendererViewport::viewport_set_scale_3d(RID p_viewport, float p_scale_3d) {
+void RendererViewport::viewport_set_scaling_3d_mode(RID p_viewport, RS::ViewportScaling3DMode p_mode) {
+ Viewport *viewport = viewport_owner.get_or_null(p_viewport);
+ ERR_FAIL_COND(!viewport);
+
+ viewport->scaling_3d_mode = p_mode;
+ _configure_3d_render_buffers(viewport);
+}
+
+void RendererViewport::viewport_set_fsr_sharpness(RID p_viewport, float p_sharpness) {
+ Viewport *viewport = viewport_owner.get_or_null(p_viewport);
+ ERR_FAIL_COND(!viewport);
+
+ viewport->fsr_sharpness = p_sharpness;
+ _configure_3d_render_buffers(viewport);
+}
+
+void RendererViewport::viewport_set_fsr_mipmap_bias(RID p_viewport, float p_mipmap_bias) {
+ Viewport *viewport = viewport_owner.get_or_null(p_viewport);
+ ERR_FAIL_COND(!viewport);
+
+ viewport->fsr_mipmap_bias = p_mipmap_bias;
+ _configure_3d_render_buffers(viewport);
+}
+
+void RendererViewport::viewport_set_scaling_3d_scale(RID p_viewport, float p_scaling_3d_scale) {
Viewport *viewport = viewport_owner.get_or_null(p_viewport);
ERR_FAIL_COND(!viewport);
// Clamp to reasonable values that are actually useful.
// Values above 2.0 don't serve a practical purpose since the viewport
// isn't displayed with mipmaps.
- if (viewport->scale_3d == CLAMP(p_scale_3d, 0.1, 2.0)) {
+ if (viewport->scaling_3d_scale == CLAMP(p_scaling_3d_scale, 0.1, 2.0)) {
return;
}
- viewport->scale_3d = CLAMP(p_scale_3d, 0.1, 2.0);
+ viewport->scaling_3d_scale = CLAMP(p_scaling_3d_scale, 0.1, 2.0);
_configure_3d_render_buffers(viewport);
}
@@ -713,6 +790,7 @@ void RendererViewport::viewport_set_size(RID p_viewport, int p_width, int p_heig
ERR_FAIL_COND(!viewport);
viewport->size = Size2(p_width, p_height);
+
uint32_t view_count = viewport->get_view_count();
RSG::storage->render_target_set_size(viewport->render_target, p_width, p_height, view_count);
_configure_3d_render_buffers(viewport);
@@ -765,7 +843,7 @@ void RendererViewport::viewport_attach_to_screen(RID p_viewport, const Rect2 &p_
// if render_direct_to_screen was used, reset size and position
if (RSG::rasterizer->is_low_end() && viewport->viewport_render_direct_to_screen) {
RSG::storage->render_target_set_position(viewport->render_target, 0, 0);
- RSG::storage->render_target_set_size(viewport->render_target, viewport->size.x, viewport->size.y, viewport->get_view_count());
+ RSG::storage->render_target_set_size(viewport->render_target, viewport->internal_size.x, viewport->internal_size.y, viewport->get_view_count());
}
viewport->viewport_to_screen_rect = Rect2();
diff --git a/servers/rendering/renderer_viewport.h b/servers/rendering/renderer_viewport.h
index f6e6cc8e842..5bb4dbbc6f3 100644
--- a/servers/rendering/renderer_viewport.h
+++ b/servers/rendering/renderer_viewport.h
@@ -49,12 +49,16 @@ public:
bool use_xr; /* use xr interface to override camera positioning and projection matrices and control output */
- float scale_3d = 1.0;
-
+ Size2i internal_size;
Size2i size;
RID camera;
RID scenario;
+ RS::ViewportScaling3DMode scaling_3d_mode;
+ float scaling_3d_scale = 1.0;
+ float fsr_sharpness = 0.2f;
+ float fsr_mipmap_bias = 0.0f;
+ bool fsr_enabled;
RS::ViewportUpdateMode update_mode;
RID render_target;
RID render_target_texture;
@@ -207,7 +211,6 @@ public:
void viewport_initialize(RID p_rid);
void viewport_set_use_xr(RID p_viewport, bool p_use_xr);
- void viewport_set_scale_3d(RID p_viewport, float p_scale_3d);
void viewport_set_size(RID p_viewport, int p_width, int p_height);
@@ -216,6 +219,12 @@ public:
void viewport_set_active(RID p_viewport, bool p_active);
void viewport_set_parent_viewport(RID p_viewport, RID p_parent_viewport);
+
+ void viewport_set_scaling_3d_mode(RID p_viewport, RS::ViewportScaling3DMode p_mode);
+ void viewport_set_scaling_3d_scale(RID p_viewport, float p_scaling_3d_scale);
+ void viewport_set_fsr_sharpness(RID p_viewport, float p_sharpness);
+ void viewport_set_fsr_mipmap_bias(RID p_viewport, float p_mipmap_bias);
+
void viewport_set_update_mode(RID p_viewport, RS::ViewportUpdateMode p_mode);
void viewport_set_vflip(RID p_viewport, bool p_enable);
diff --git a/servers/rendering/rendering_device.h b/servers/rendering/rendering_device.h
index 5eb8f1cead5..563a80c12c6 100644
--- a/servers/rendering/rendering_device.h
+++ b/servers/rendering/rendering_device.h
@@ -120,6 +120,7 @@ public:
// features
bool supports_multiview = false; // If true this device supports multiview options
+ bool supports_fsr_half_float = false; // If true this device supports FSR scaling 3D in half float mode, otherwise use the fallback mode
};
typedef String (*ShaderSPIRVGetCacheKeyFunction)(const Capabilities *p_capabilities);
diff --git a/servers/rendering/rendering_server_default.h b/servers/rendering/rendering_server_default.h
index f75bca600e5..b50631bb216 100644
--- a/servers/rendering/rendering_server_default.h
+++ b/servers/rendering/rendering_server_default.h
@@ -528,7 +528,6 @@ public:
FUNCRIDSPLIT(viewport)
FUNC2(viewport_set_use_xr, RID, bool)
- FUNC2(viewport_set_scale_3d, RID, float)
FUNC3(viewport_set_size, RID, int, int)
FUNC2(viewport_set_active, RID, bool)
@@ -539,6 +538,11 @@ public:
FUNC3(viewport_attach_to_screen, RID, const Rect2 &, int)
FUNC2(viewport_set_render_direct_to_screen, RID, bool)
+ FUNC2(viewport_set_scaling_3d_mode, RID, ViewportScaling3DMode)
+ FUNC2(viewport_set_scaling_3d_scale, RID, float)
+ FUNC2(viewport_set_fsr_sharpness, RID, float)
+ FUNC2(viewport_set_fsr_mipmap_bias, RID, float)
+
FUNC2(viewport_set_update_mode, RID, ViewportUpdateMode)
FUNC1RC(RID, viewport_get_texture, RID)
diff --git a/servers/rendering_server.cpp b/servers/rendering_server.cpp
index ae72f47a44f..7a958546b6f 100644
--- a/servers/rendering_server.cpp
+++ b/servers/rendering_server.cpp
@@ -2165,13 +2165,16 @@ void RenderingServer::_bind_methods() {
ClassDB::bind_method(D_METHOD("viewport_create"), &RenderingServer::viewport_create);
ClassDB::bind_method(D_METHOD("viewport_set_use_xr", "viewport", "use_xr"), &RenderingServer::viewport_set_use_xr);
- ClassDB::bind_method(D_METHOD("viewport_set_scale_3d", "viewport", "scale"), &RenderingServer::viewport_set_scale_3d);
ClassDB::bind_method(D_METHOD("viewport_set_size", "viewport", "width", "height"), &RenderingServer::viewport_set_size);
ClassDB::bind_method(D_METHOD("viewport_set_active", "viewport", "active"), &RenderingServer::viewport_set_active);
ClassDB::bind_method(D_METHOD("viewport_set_parent_viewport", "viewport", "parent_viewport"), &RenderingServer::viewport_set_parent_viewport);
ClassDB::bind_method(D_METHOD("viewport_attach_to_screen", "viewport", "rect", "screen"), &RenderingServer::viewport_attach_to_screen, DEFVAL(Rect2()), DEFVAL(DisplayServer::MAIN_WINDOW_ID));
ClassDB::bind_method(D_METHOD("viewport_set_render_direct_to_screen", "viewport", "enabled"), &RenderingServer::viewport_set_render_direct_to_screen);
+ ClassDB::bind_method(D_METHOD("viewport_set_scaling_3d_mode", "viewport", "scaling_3d_mode"), &RenderingServer::viewport_set_scaling_3d_mode);
+ ClassDB::bind_method(D_METHOD("viewport_set_scaling_3d_scale", "viewport", "scale"), &RenderingServer::viewport_set_scaling_3d_scale);
+ ClassDB::bind_method(D_METHOD("viewport_set_fsr_sharpness", "viewport", "sharpness"), &RenderingServer::viewport_set_fsr_sharpness);
+ ClassDB::bind_method(D_METHOD("viewport_set_fsr_mipmap_bias", "viewport", "mipmap_bias"), &RenderingServer::viewport_set_fsr_mipmap_bias);
ClassDB::bind_method(D_METHOD("viewport_set_update_mode", "viewport", "update_mode"), &RenderingServer::viewport_set_update_mode);
ClassDB::bind_method(D_METHOD("viewport_set_clear_mode", "viewport", "clear_mode"), &RenderingServer::viewport_set_clear_mode);
ClassDB::bind_method(D_METHOD("viewport_get_texture", "viewport"), &RenderingServer::viewport_get_texture);
@@ -2213,6 +2216,10 @@ void RenderingServer::_bind_methods() {
ClassDB::bind_method(D_METHOD("viewport_get_measured_render_time_gpu", "viewport"), &RenderingServer::viewport_get_measured_render_time_gpu);
+ BIND_ENUM_CONSTANT(VIEWPORT_SCALING_3D_MODE_BILINEAR);
+ BIND_ENUM_CONSTANT(VIEWPORT_SCALING_3D_MODE_FSR);
+ BIND_ENUM_CONSTANT(VIEWPORT_SCALING_3D_MODE_MAX);
+
BIND_ENUM_CONSTANT(VIEWPORT_UPDATE_DISABLED);
BIND_ENUM_CONSTANT(VIEWPORT_UPDATE_ONCE); //then goes to disabled, must be manually updated
BIND_ENUM_CONSTANT(VIEWPORT_UPDATE_WHEN_VISIBLE); // default
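
For engine- or module-side code that drives these bindings directly, a hedged usage sketch of the new per-viewport calls; the viewport RID is assumed to come from an existing viewport, and the values are illustrative defaults rather than recommendations.

```cpp
// Hedged usage sketch of the RenderingServer scaling API added in this patch;
// `viewport` is assumed to be a valid viewport RID obtained elsewhere.
#include "servers/rendering_server.h"

void enable_fsr_on_viewport(RID viewport) {
	RenderingServer *rs = RenderingServer::get_singleton();
	rs->viewport_set_scaling_3d_mode(viewport, RenderingServer::VIEWPORT_SCALING_3D_MODE_FSR);
	rs->viewport_set_scaling_3d_scale(viewport, 0.75); // render 3D at 75% and upscale
	rs->viewport_set_fsr_sharpness(viewport, 0.2); // matches the fsr_sharpness default in RenderBuffers
	rs->viewport_set_fsr_mipmap_bias(viewport, 0.0); // extra bias added on top of the bias derived from the scale
}
```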
@@ -2837,12 +2844,6 @@ RenderingServer::RenderingServer() {
GLOBAL_DEF("rendering/vulkan/staging_buffer/texture_upload_region_size_px", 64);
GLOBAL_DEF("rendering/vulkan/descriptor_pools/max_descriptors_per_pool", 64);
- GLOBAL_DEF("rendering/3d/viewport/scale", 1.0);
- ProjectSettings::get_singleton()->set_custom_property_info("rendering/3d/viewport/scale",
- PropertyInfo(Variant::FLOAT,
- "rendering/3d/viewport/scale",
- PROPERTY_HINT_RANGE, "0.25,2.0,0.01"));
-
GLOBAL_DEF("rendering/shader_compiler/shader_cache/enabled", true);
GLOBAL_DEF("rendering/shader_compiler/shader_cache/compress", true);
GLOBAL_DEF("rendering/shader_compiler/shader_cache/use_zstd_compression", true);
@@ -2903,6 +2904,29 @@ RenderingServer::RenderingServer() {
ProjectSettings::get_singleton()->set_custom_property_info("rendering/anti_aliasing/screen_space_roughness_limiter/amount", PropertyInfo(Variant::FLOAT, "rendering/anti_aliasing/screen_space_roughness_limiter/amount", PROPERTY_HINT_RANGE, "0.01,4.0,0.01"));
ProjectSettings::get_singleton()->set_custom_property_info("rendering/anti_aliasing/screen_space_roughness_limiter/limit", PropertyInfo(Variant::FLOAT, "rendering/anti_aliasing/screen_space_roughness_limiter/limit", PROPERTY_HINT_RANGE, "0.01,1.0,0.01"));
+ GLOBAL_DEF_RST("rendering/scaling_3d/mode", 0);
+ GLOBAL_DEF_RST("rendering/scaling_3d/scale", 1.0);
+ GLOBAL_DEF_RST("rendering/scaling_3d/fsr_sharpness", 0.2f);
+ GLOBAL_DEF_RST("rendering/scaling_3d/fsr_mipmap_bias", 0.0f);
+ ProjectSettings::get_singleton()->set_custom_property_info("rendering/scaling_3d/mode",
+ PropertyInfo(Variant::INT,
+ "rendering/scaling_3d/mode",
+ PROPERTY_HINT_ENUM, "Bilinear (Fastest),FSR (Fast)"));
+
+ ProjectSettings::get_singleton()->set_custom_property_info("rendering/scaling_3d/scale",
+ PropertyInfo(Variant::FLOAT,
+ "rendering/scaling_3d/scale",
+ PROPERTY_HINT_RANGE, "0.25,2.0,0.01"));
+
+ ProjectSettings::get_singleton()->set_custom_property_info("rendering/scaling_3d/fsr_sharpness",
+ PropertyInfo(Variant::FLOAT,
+ "rendering/scaling_3d/fsr_sharpness",
+ PROPERTY_HINT_RANGE, "0,2,0.1"));
+ ProjectSettings::get_singleton()->set_custom_property_info("rendering/scaling_3d/fsr_mipmap_bias",
+ PropertyInfo(Variant::FLOAT,
+ "rendering/scaling_3d/fsr_mipmap_bias",
+ PROPERTY_HINT_RANGE, "-2,2,0.1"));
+
GLOBAL_DEF("rendering/textures/decals/filter", DECAL_FILTER_LINEAR_MIPMAPS);
ProjectSettings::get_singleton()->set_custom_property_info("rendering/textures/decals/filter", PropertyInfo(Variant::INT, "rendering/textures/decals/filter", PROPERTY_HINT_ENUM, "Nearest (Fast),Nearest+Mipmaps,Linear,Linear+Mipmaps,Linear+Mipmaps Anisotropic (Slow)"));
GLOBAL_DEF("rendering/textures/light_projectors/filter", LIGHT_PROJECTOR_FILTER_LINEAR_MIPMAPS);
diff --git a/servers/rendering_server.h b/servers/rendering_server.h
index 32ec0197ee0..85f92bc0036 100644
--- a/servers/rendering_server.h
+++ b/servers/rendering_server.h
@@ -770,8 +770,13 @@ public:
virtual RID viewport_create() = 0;
+ enum ViewportScaling3DMode {
+ VIEWPORT_SCALING_3D_MODE_BILINEAR,
+ VIEWPORT_SCALING_3D_MODE_FSR,
+ VIEWPORT_SCALING_3D_MODE_MAX
+ };
+
virtual void viewport_set_use_xr(RID p_viewport, bool p_use_xr) = 0;
- virtual void viewport_set_scale_3d(RID p_viewport, float p_scale_3d) = 0;
virtual void viewport_set_size(RID p_viewport, int p_width, int p_height) = 0;
virtual void viewport_set_active(RID p_viewport, bool p_active) = 0;
virtual void viewport_set_parent_viewport(RID p_viewport, RID p_parent_viewport) = 0;
@@ -779,6 +784,11 @@ public:
virtual void viewport_attach_to_screen(RID p_viewport, const Rect2 &p_rect = Rect2(), DisplayServer::WindowID p_screen = DisplayServer::MAIN_WINDOW_ID) = 0;
virtual void viewport_set_render_direct_to_screen(RID p_viewport, bool p_enable) = 0;
+ virtual void viewport_set_scaling_3d_mode(RID p_viewport, ViewportScaling3DMode p_scaling_3d_mode) = 0;
+ virtual void viewport_set_scaling_3d_scale(RID p_viewport, float p_scaling_3d_scale) = 0;
+ virtual void viewport_set_fsr_sharpness(RID p_viewport, float p_fsr_sharpness) = 0;
+ virtual void viewport_set_fsr_mipmap_bias(RID p_viewport, float p_fsr_mipmap_bias) = 0;
+
enum ViewportUpdateMode {
VIEWPORT_UPDATE_DISABLED,
VIEWPORT_UPDATE_ONCE, //then goes to disabled, must be manually updated
@@ -1561,6 +1571,7 @@ VARIANT_ENUM_CAST(RenderingServer::ParticlesEmitFlags);
VARIANT_ENUM_CAST(RenderingServer::ParticlesCollisionType);
VARIANT_ENUM_CAST(RenderingServer::ParticlesCollisionHeightfieldResolution);
VARIANT_ENUM_CAST(RenderingServer::FogVolumeShape);
+VARIANT_ENUM_CAST(RenderingServer::ViewportScaling3DMode);
VARIANT_ENUM_CAST(RenderingServer::ViewportUpdateMode);
VARIANT_ENUM_CAST(RenderingServer::ViewportClearMode);
VARIANT_ENUM_CAST(RenderingServer::ViewportMSAA);
diff --git a/thirdparty/README.md b/thirdparty/README.md
index 6ebcffb7ae2..03e9885bd00 100644
--- a/thirdparty/README.md
+++ b/thirdparty/README.md
@@ -5,6 +5,18 @@ respective folder names. Use two empty lines to separate categories for
readability.
+## amd-fsr
+
+- Upstream: https://github.com/GPUOpen-Effects/FidelityFX-FSR
+- Version: 1.0.2 (a21ffb8f6c13233ba336352bdff293894c706575, 2021)
+- License: MIT
+
+Files extracted from upstream source:
+
+- `ffx_a.h` and `ffx_fsr1.h` from `ffx-fsr`
+- `license.txt`
+
+
## basis_universal
- Upstream: https://github.com/BinomialLLC/basis_universal
diff --git a/thirdparty/amd-fsr/ffx_a.h b/thirdparty/amd-fsr/ffx_a.h
new file mode 100644
index 00000000000..d04bff55cbe
--- /dev/null
+++ b/thirdparty/amd-fsr/ffx_a.h
@@ -0,0 +1,2656 @@
+//==============================================================================================================================
+//
+// [A] SHADER PORTABILITY 1.20210629
+//
+//==============================================================================================================================
+// FidelityFX Super Resolution Sample
+//
+// Copyright (c) 2021 Advanced Micro Devices, Inc. All rights reserved.
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files(the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and / or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions :
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+// THE SOFTWARE.
+//------------------------------------------------------------------------------------------------------------------------------
+// MIT LICENSE
+// ===========
+// Copyright (c) 2014 Michal Drobot (for concepts used in "FLOAT APPROXIMATIONS").
+// -----------
+// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation
+// files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy,
+// modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
+// Software is furnished to do so, subject to the following conditions:
+// -----------
+// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
+// Software.
+// -----------
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+// WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+// COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+// ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+//------------------------------------------------------------------------------------------------------------------------------
+// ABOUT
+// =====
+// Common central point for high-level shading language and C portability for various shader headers.
+//------------------------------------------------------------------------------------------------------------------------------
+// DEFINES
+// =======
+// A_CPU ..... Include the CPU related code.
+// A_GPU ..... Include the GPU related code.
+// A_GLSL .... Using GLSL.
+// A_HLSL .... Using HLSL.
+// A_HLSL_6_2 Using HLSL 6.2 with new 'uint16_t' and related types (requires '-enable-16bit-types').
+// A_NO_16_BIT_CAST Don't use instructions that are not available in SPIR-V (needed for running A_HLSL_6_2 on Vulkan)
+// A_GCC ..... Using a GCC compatible compiler (else assume MSVC compatible compiler by default).
+// =======
+// A_BYTE .... Support 8-bit integer.
+// A_HALF .... Support 16-bit integer and floating point.
+// A_LONG .... Support 64-bit integer.
+// A_DUBL .... Support 64-bit floating point.
+// =======
+// A_WAVE .... Support wave-wide operations.
+//------------------------------------------------------------------------------------------------------------------------------
+// To get #include "ffx_a.h" working in GLSL use '#extension GL_GOOGLE_include_directive:require'.
+//------------------------------------------------------------------------------------------------------------------------------
+// SIMPLIFIED TYPE SYSTEM
+// ======================
+// - All ints will be unsigned with exception of when signed is required.
+// - Type naming simplified and shortened "A<#components>",
+// - H = 16-bit float (half)
+// - F = 32-bit float (float)
+// - D = 64-bit float (double)
+// - P = 1-bit integer (predicate, not using bool because 'B' is used for byte)
+// - B = 8-bit integer (byte)
+// - W = 16-bit integer (word)
+// - U = 32-bit integer (unsigned)
+// - L = 64-bit integer (long)
+// - Using "AS<#components>" for signed when required.
+//------------------------------------------------------------------------------------------------------------------------------
+// TODO
+// ====
+// - Make sure 'ALerp*(a,b,m)' does 'b*m+(-a*m+a)' (2 ops).
+//------------------------------------------------------------------------------------------------------------------------------
+// CHANGE LOG
+// ==========
+// 20200914 - Expanded wave ops and prx code.
+// 20200713 - Added [ZOL] section, fixed serious bugs in sRGB and Rec.709 color conversion code, etc.
+//==============================================================================================================================
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// COMMON
+//==============================================================================================================================
+#define A_2PI 6.28318530718
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+//
+// CPU
+//
+//
+//==============================================================================================================================
+#ifdef A_CPU
+ // Supporting user defined overrides.
+ #ifndef A_RESTRICT
+ #define A_RESTRICT __restrict
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifndef A_STATIC
+ #define A_STATIC static
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ // Same types across CPU and GPU.
+ // Predicate uses 32-bit integer (C friendly bool).
+ typedef uint32_t AP1;
+ typedef float AF1;
+ typedef double AD1;
+ typedef uint8_t AB1;
+ typedef uint16_t AW1;
+ typedef uint32_t AU1;
+ typedef uint64_t AL1;
+ typedef int8_t ASB1;
+ typedef int16_t ASW1;
+ typedef int32_t ASU1;
+ typedef int64_t ASL1;
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AD1_(a) ((AD1)(a))
+ #define AF1_(a) ((AF1)(a))
+ #define AL1_(a) ((AL1)(a))
+ #define AU1_(a) ((AU1)(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASL1_(a) ((ASL1)(a))
+ #define ASU1_(a) ((ASU1)(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AU1 AU1_AF1(AF1 a){union{AF1 f;AU1 u;}bits;bits.f=a;return bits.u;}
+//------------------------------------------------------------------------------------------------------------------------------
+ #define A_TRUE 1
+ #define A_FALSE 0
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+// CPU/GPU PORTING
+//
+//------------------------------------------------------------------------------------------------------------------------------
+// Get CPU and GPU to share all setup code, without duplicate code paths.
+// This uses a lower-case prefix for special vector constructs.
+// - In C restrict pointers are used.
+// - In the shading language, in/inout/out arguments are used.
+// This depends on the ability to access a vector value in both languages via array syntax (aka color[2]).
+//==============================================================================================================================
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// VECTOR ARGUMENT/RETURN/INITIALIZATION PORTABILITY
+//==============================================================================================================================
+ #define retAD2 AD1 *A_RESTRICT
+ #define retAD3 AD1 *A_RESTRICT
+ #define retAD4 AD1 *A_RESTRICT
+ #define retAF2 AF1 *A_RESTRICT
+ #define retAF3 AF1 *A_RESTRICT
+ #define retAF4 AF1 *A_RESTRICT
+ #define retAL2 AL1 *A_RESTRICT
+ #define retAL3 AL1 *A_RESTRICT
+ #define retAL4 AL1 *A_RESTRICT
+ #define retAU2 AU1 *A_RESTRICT
+ #define retAU3 AU1 *A_RESTRICT
+ #define retAU4 AU1 *A_RESTRICT
+//------------------------------------------------------------------------------------------------------------------------------
+ #define inAD2 AD1 *A_RESTRICT
+ #define inAD3 AD1 *A_RESTRICT
+ #define inAD4 AD1 *A_RESTRICT
+ #define inAF2 AF1 *A_RESTRICT
+ #define inAF3 AF1 *A_RESTRICT
+ #define inAF4 AF1 *A_RESTRICT
+ #define inAL2 AL1 *A_RESTRICT
+ #define inAL3 AL1 *A_RESTRICT
+ #define inAL4 AL1 *A_RESTRICT
+ #define inAU2 AU1 *A_RESTRICT
+ #define inAU3 AU1 *A_RESTRICT
+ #define inAU4 AU1 *A_RESTRICT
+//------------------------------------------------------------------------------------------------------------------------------
+ #define inoutAD2 AD1 *A_RESTRICT
+ #define inoutAD3 AD1 *A_RESTRICT
+ #define inoutAD4 AD1 *A_RESTRICT
+ #define inoutAF2 AF1 *A_RESTRICT
+ #define inoutAF3 AF1 *A_RESTRICT
+ #define inoutAF4 AF1 *A_RESTRICT
+ #define inoutAL2 AL1 *A_RESTRICT
+ #define inoutAL3 AL1 *A_RESTRICT
+ #define inoutAL4 AL1 *A_RESTRICT
+ #define inoutAU2 AU1 *A_RESTRICT
+ #define inoutAU3 AU1 *A_RESTRICT
+ #define inoutAU4 AU1 *A_RESTRICT
+//------------------------------------------------------------------------------------------------------------------------------
+ #define outAD2 AD1 *A_RESTRICT
+ #define outAD3 AD1 *A_RESTRICT
+ #define outAD4 AD1 *A_RESTRICT
+ #define outAF2 AF1 *A_RESTRICT
+ #define outAF3 AF1 *A_RESTRICT
+ #define outAF4 AF1 *A_RESTRICT
+ #define outAL2 AL1 *A_RESTRICT
+ #define outAL3 AL1 *A_RESTRICT
+ #define outAL4 AL1 *A_RESTRICT
+ #define outAU2 AU1 *A_RESTRICT
+ #define outAU3 AU1 *A_RESTRICT
+ #define outAU4 AU1 *A_RESTRICT
+//------------------------------------------------------------------------------------------------------------------------------
+ #define varAD2(x) AD1 x[2]
+ #define varAD3(x) AD1 x[3]
+ #define varAD4(x) AD1 x[4]
+ #define varAF2(x) AF1 x[2]
+ #define varAF3(x) AF1 x[3]
+ #define varAF4(x) AF1 x[4]
+ #define varAL2(x) AL1 x[2]
+ #define varAL3(x) AL1 x[3]
+ #define varAL4(x) AL1 x[4]
+ #define varAU2(x) AU1 x[2]
+ #define varAU3(x) AU1 x[3]
+ #define varAU4(x) AU1 x[4]
+//------------------------------------------------------------------------------------------------------------------------------
+ #define initAD2(x,y) {x,y}
+ #define initAD3(x,y,z) {x,y,z}
+ #define initAD4(x,y,z,w) {x,y,z,w}
+ #define initAF2(x,y) {x,y}
+ #define initAF3(x,y,z) {x,y,z}
+ #define initAF4(x,y,z,w) {x,y,z,w}
+ #define initAL2(x,y) {x,y}
+ #define initAL3(x,y,z) {x,y,z}
+ #define initAL4(x,y,z,w) {x,y,z,w}
+ #define initAU2(x,y) {x,y}
+ #define initAU3(x,y,z) {x,y,z}
+ #define initAU4(x,y,z,w) {x,y,z,w}
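+ // Example (editor's illustration, not upstream code): on the CPU these macros expand to plain C,
+ //   varAF2(v)=initAF2(1.0f,2.0f);   // becomes: AF1 v[2]={1.0f,2.0f};
+ // and a hypothetical helper written against them,
+ //   A_STATIC retAF2 ScaleF2(outAF2 d,inAF2 a,AF1 s){d[0]=a[0]*s;d[1]=a[1]*s;return d;}
+ // uses restrict pointers here, while the shading-language side uses in/out arguments instead.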
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// SCALAR RETURN OPS
+//------------------------------------------------------------------------------------------------------------------------------
+// TODO
+// ====
+// - Replace transcendentals with manual versions.
+//==============================================================================================================================
+ #ifdef A_GCC
+ A_STATIC AD1 AAbsD1(AD1 a){return __builtin_fabs(a);}
+ A_STATIC AF1 AAbsF1(AF1 a){return __builtin_fabsf(a);}
+ A_STATIC AU1 AAbsSU1(AU1 a){return AU1_(__builtin_abs(ASU1_(a)));}
+ A_STATIC AL1 AAbsSL1(AL1 a){return AL1_(__builtin_llabs(ASL1_(a)));}
+ #else
+ A_STATIC AD1 AAbsD1(AD1 a){return fabs(a);}
+ A_STATIC AF1 AAbsF1(AF1 a){return fabsf(a);}
+ A_STATIC AU1 AAbsSU1(AU1 a){return AU1_(abs(ASU1_(a)));}
+ A_STATIC AL1 AAbsSL1(AL1 a){return AL1_(labs((long)ASL1_(a)));}
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_GCC
+ A_STATIC AD1 ACosD1(AD1 a){return __builtin_cos(a);}
+ A_STATIC AF1 ACosF1(AF1 a){return __builtin_cosf(a);}
+ #else
+ A_STATIC AD1 ACosD1(AD1 a){return cos(a);}
+ A_STATIC AF1 ACosF1(AF1 a){return cosf(a);}
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 ADotD2(inAD2 a,inAD2 b){return a[0]*b[0]+a[1]*b[1];}
+ A_STATIC AD1 ADotD3(inAD3 a,inAD3 b){return a[0]*b[0]+a[1]*b[1]+a[2]*b[2];}
+ A_STATIC AD1 ADotD4(inAD4 a,inAD4 b){return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]+a[3]*b[3];}
+ A_STATIC AF1 ADotF2(inAF2 a,inAF2 b){return a[0]*b[0]+a[1]*b[1];}
+ A_STATIC AF1 ADotF3(inAF3 a,inAF3 b){return a[0]*b[0]+a[1]*b[1]+a[2]*b[2];}
+ A_STATIC AF1 ADotF4(inAF4 a,inAF4 b){return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]+a[3]*b[3];}
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_GCC
+ A_STATIC AD1 AExp2D1(AD1 a){return __builtin_exp2(a);}
+ A_STATIC AF1 AExp2F1(AF1 a){return __builtin_exp2f(a);}
+ #else
+ A_STATIC AD1 AExp2D1(AD1 a){return exp2(a);}
+ A_STATIC AF1 AExp2F1(AF1 a){return exp2f(a);}
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_GCC
+ A_STATIC AD1 AFloorD1(AD1 a){return __builtin_floor(a);}
+ A_STATIC AF1 AFloorF1(AF1 a){return __builtin_floorf(a);}
+ #else
+ A_STATIC AD1 AFloorD1(AD1 a){return floor(a);}
+ A_STATIC AF1 AFloorF1(AF1 a){return floorf(a);}
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 ALerpD1(AD1 a,AD1 b,AD1 c){return b*c+(-a*c+a);}
+ A_STATIC AF1 ALerpF1(AF1 a,AF1 b,AF1 c){return b*c+(-a*c+a);}
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_GCC
+ A_STATIC AD1 ALog2D1(AD1 a){return __builtin_log2(a);}
+ A_STATIC AF1 ALog2F1(AF1 a){return __builtin_log2f(a);}
+ #else
+ A_STATIC AD1 ALog2D1(AD1 a){return log2(a);}
+ A_STATIC AF1 ALog2F1(AF1 a){return log2f(a);}
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 AMaxD1(AD1 a,AD1 b){return a>b?a:b;}
+ A_STATIC AF1 AMaxF1(AF1 a,AF1 b){return a>b?a:b;}
+ A_STATIC AL1 AMaxL1(AL1 a,AL1 b){return a>b?a:b;}
+ A_STATIC AU1 AMaxU1(AU1 a,AU1 b){return a>b?a:b;}
+//------------------------------------------------------------------------------------------------------------------------------
+ // These follow the convention that A integer types don't have signage, until they are operated on.
+ A_STATIC AL1 AMaxSL1(AL1 a,AL1 b){return (ASL1_(a)>ASL1_(b))?a:b;}
+ A_STATIC AU1 AMaxSU1(AU1 a,AU1 b){return (ASU1_(a)>ASU1_(b))?a:b;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 AMinD1(AD1 a,AD1 b){return a<b?a:b;}
+ A_STATIC AF1 AMinF1(AF1 a,AF1 b){return a<b?a:b;}
+ A_STATIC AL1 AMinL1(AL1 a,AL1 b){return a<b?a:b;}
+ A_STATIC AU1 AMinU1(AU1 a,AU1 b){return a<b?a:b;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AL1 AMinSL1(AL1 a,AL1 b){return (ASL1_(a)<ASL1_(b))?a:b;}
+ A_STATIC AU1 AMinSU1(AU1 a,AU1 b){return (ASU1_(a)<ASU1_(b))?a:b;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 ARcpD1(AD1 a){return 1.0/a;}
+ A_STATIC AF1 ARcpF1(AF1 a){return 1.0f/a;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AL1 AShrSL1(AL1 a,AL1 b){return AL1_(ASL1_(a)>>ASL1_(b));}
+ A_STATIC AU1 AShrSU1(AU1 a,AU1 b){return AU1_(ASU1_(a)>>ASU1_(b));}
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_GCC
+ A_STATIC AD1 ASinD1(AD1 a){return __builtin_sin(a);}
+ A_STATIC AF1 ASinF1(AF1 a){return __builtin_sinf(a);}
+ #else
+ A_STATIC AD1 ASinD1(AD1 a){return sin(a);}
+ A_STATIC AF1 ASinF1(AF1 a){return sinf(a);}
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_GCC
+ A_STATIC AD1 ASqrtD1(AD1 a){return __builtin_sqrt(a);}
+ A_STATIC AF1 ASqrtF1(AF1 a){return __builtin_sqrtf(a);}
+ #else
+ A_STATIC AD1 ASqrtD1(AD1 a){return sqrt(a);}
+ A_STATIC AF1 ASqrtF1(AF1 a){return sqrtf(a);}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// SCALAR RETURN OPS - DEPENDENT
+//==============================================================================================================================
+ A_STATIC AD1 AClampD1(AD1 x,AD1 n,AD1 m){return AMaxD1(n,AMinD1(x,m));}
+ A_STATIC AF1 AClampF1(AF1 x,AF1 n,AF1 m){return AMaxF1(n,AMinF1(x,m));}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 AFractD1(AD1 a){return a-AFloorD1(a);}
+ A_STATIC AF1 AFractF1(AF1 a){return a-AFloorF1(a);}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 APowD1(AD1 a,AD1 b){return AExp2D1(b*ALog2D1(a));}
+ A_STATIC AF1 APowF1(AF1 a,AF1 b){return AExp2F1(b*ALog2F1(a));}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 ARsqD1(AD1 a){return ARcpD1(ASqrtD1(a));}
+ A_STATIC AF1 ARsqF1(AF1 a){return ARcpF1(ASqrtF1(a));}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC AD1 ASatD1(AD1 a){return AMinD1(1.0,AMaxD1(0.0,a));}
+ A_STATIC AF1 ASatF1(AF1 a){return AMinF1(1.0f,AMaxF1(0.0f,a));}
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// VECTOR OPS
+//------------------------------------------------------------------------------------------------------------------------------
+// These are added as needed for production or prototyping, so not necessarily a complete set.
+// They follow a convention of taking in a destination and also returning the destination value to increase utility.
+//==============================================================================================================================
+ A_STATIC retAD2 opAAbsD2(outAD2 d,inAD2 a){d[0]=AAbsD1(a[0]);d[1]=AAbsD1(a[1]);return d;}
+ A_STATIC retAD3 opAAbsD3(outAD3 d,inAD3 a){d[0]=AAbsD1(a[0]);d[1]=AAbsD1(a[1]);d[2]=AAbsD1(a[2]);return d;}
+ A_STATIC retAD4 opAAbsD4(outAD4 d,inAD4 a){d[0]=AAbsD1(a[0]);d[1]=AAbsD1(a[1]);d[2]=AAbsD1(a[2]);d[3]=AAbsD1(a[3]);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opAAbsF2(outAF2 d,inAF2 a){d[0]=AAbsF1(a[0]);d[1]=AAbsF1(a[1]);return d;}
+ A_STATIC retAF3 opAAbsF3(outAF3 d,inAF3 a){d[0]=AAbsF1(a[0]);d[1]=AAbsF1(a[1]);d[2]=AAbsF1(a[2]);return d;}
+ A_STATIC retAF4 opAAbsF4(outAF4 d,inAF4 a){d[0]=AAbsF1(a[0]);d[1]=AAbsF1(a[1]);d[2]=AAbsF1(a[2]);d[3]=AAbsF1(a[3]);return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opAAddD2(outAD2 d,inAD2 a,inAD2 b){d[0]=a[0]+b[0];d[1]=a[1]+b[1];return d;}
+ A_STATIC retAD3 opAAddD3(outAD3 d,inAD3 a,inAD3 b){d[0]=a[0]+b[0];d[1]=a[1]+b[1];d[2]=a[2]+b[2];return d;}
+ A_STATIC retAD4 opAAddD4(outAD4 d,inAD4 a,inAD4 b){d[0]=a[0]+b[0];d[1]=a[1]+b[1];d[2]=a[2]+b[2];d[3]=a[3]+b[3];return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opAAddF2(outAF2 d,inAF2 a,inAF2 b){d[0]=a[0]+b[0];d[1]=a[1]+b[1];return d;}
+ A_STATIC retAF3 opAAddF3(outAF3 d,inAF3 a,inAF3 b){d[0]=a[0]+b[0];d[1]=a[1]+b[1];d[2]=a[2]+b[2];return d;}
+ A_STATIC retAF4 opAAddF4(outAF4 d,inAF4 a,inAF4 b){d[0]=a[0]+b[0];d[1]=a[1]+b[1];d[2]=a[2]+b[2];d[3]=a[3]+b[3];return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opAAddOneD2(outAD2 d,inAD2 a,AD1 b){d[0]=a[0]+b;d[1]=a[1]+b;return d;}
+ A_STATIC retAD3 opAAddOneD3(outAD3 d,inAD3 a,AD1 b){d[0]=a[0]+b;d[1]=a[1]+b;d[2]=a[2]+b;return d;}
+ A_STATIC retAD4 opAAddOneD4(outAD4 d,inAD4 a,AD1 b){d[0]=a[0]+b;d[1]=a[1]+b;d[2]=a[2]+b;d[3]=a[3]+b;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opAAddOneF2(outAF2 d,inAF2 a,AF1 b){d[0]=a[0]+b;d[1]=a[1]+b;return d;}
+ A_STATIC retAF3 opAAddOneF3(outAF3 d,inAF3 a,AF1 b){d[0]=a[0]+b;d[1]=a[1]+b;d[2]=a[2]+b;return d;}
+ A_STATIC retAF4 opAAddOneF4(outAF4 d,inAF4 a,AF1 b){d[0]=a[0]+b;d[1]=a[1]+b;d[2]=a[2]+b;d[3]=a[3]+b;return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opACpyD2(outAD2 d,inAD2 a){d[0]=a[0];d[1]=a[1];return d;}
+ A_STATIC retAD3 opACpyD3(outAD3 d,inAD3 a){d[0]=a[0];d[1]=a[1];d[2]=a[2];return d;}
+ A_STATIC retAD4 opACpyD4(outAD4 d,inAD4 a){d[0]=a[0];d[1]=a[1];d[2]=a[2];d[3]=a[3];return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opACpyF2(outAF2 d,inAF2 a){d[0]=a[0];d[1]=a[1];return d;}
+ A_STATIC retAF3 opACpyF3(outAF3 d,inAF3 a){d[0]=a[0];d[1]=a[1];d[2]=a[2];return d;}
+ A_STATIC retAF4 opACpyF4(outAF4 d,inAF4 a){d[0]=a[0];d[1]=a[1];d[2]=a[2];d[3]=a[3];return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opALerpD2(outAD2 d,inAD2 a,inAD2 b,inAD2 c){d[0]=ALerpD1(a[0],b[0],c[0]);d[1]=ALerpD1(a[1],b[1],c[1]);return d;}
+ A_STATIC retAD3 opALerpD3(outAD3 d,inAD3 a,inAD3 b,inAD3 c){d[0]=ALerpD1(a[0],b[0],c[0]);d[1]=ALerpD1(a[1],b[1],c[1]);d[2]=ALerpD1(a[2],b[2],c[2]);return d;}
+ A_STATIC retAD4 opALerpD4(outAD4 d,inAD4 a,inAD4 b,inAD4 c){d[0]=ALerpD1(a[0],b[0],c[0]);d[1]=ALerpD1(a[1],b[1],c[1]);d[2]=ALerpD1(a[2],b[2],c[2]);d[3]=ALerpD1(a[3],b[3],c[3]);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opALerpF2(outAF2 d,inAF2 a,inAF2 b,inAF2 c){d[0]=ALerpF1(a[0],b[0],c[0]);d[1]=ALerpF1(a[1],b[1],c[1]);return d;}
+ A_STATIC retAF3 opALerpF3(outAF3 d,inAF3 a,inAF3 b,inAF3 c){d[0]=ALerpF1(a[0],b[0],c[0]);d[1]=ALerpF1(a[1],b[1],c[1]);d[2]=ALerpF1(a[2],b[2],c[2]);return d;}
+ A_STATIC retAF4 opALerpF4(outAF4 d,inAF4 a,inAF4 b,inAF4 c){d[0]=ALerpF1(a[0],b[0],c[0]);d[1]=ALerpF1(a[1],b[1],c[1]);d[2]=ALerpF1(a[2],b[2],c[2]);d[3]=ALerpF1(a[3],b[3],c[3]);return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opALerpOneD2(outAD2 d,inAD2 a,inAD2 b,AD1 c){d[0]=ALerpD1(a[0],b[0],c);d[1]=ALerpD1(a[1],b[1],c);return d;}
+ A_STATIC retAD3 opALerpOneD3(outAD3 d,inAD3 a,inAD3 b,AD1 c){d[0]=ALerpD1(a[0],b[0],c);d[1]=ALerpD1(a[1],b[1],c);d[2]=ALerpD1(a[2],b[2],c);return d;}
+ A_STATIC retAD4 opALerpOneD4(outAD4 d,inAD4 a,inAD4 b,AD1 c){d[0]=ALerpD1(a[0],b[0],c);d[1]=ALerpD1(a[1],b[1],c);d[2]=ALerpD1(a[2],b[2],c);d[3]=ALerpD1(a[3],b[3],c);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opALerpOneF2(outAF2 d,inAF2 a,inAF2 b,AF1 c){d[0]=ALerpF1(a[0],b[0],c);d[1]=ALerpF1(a[1],b[1],c);return d;}
+ A_STATIC retAF3 opALerpOneF3(outAF3 d,inAF3 a,inAF3 b,AF1 c){d[0]=ALerpF1(a[0],b[0],c);d[1]=ALerpF1(a[1],b[1],c);d[2]=ALerpF1(a[2],b[2],c);return d;}
+ A_STATIC retAF4 opALerpOneF4(outAF4 d,inAF4 a,inAF4 b,AF1 c){d[0]=ALerpF1(a[0],b[0],c);d[1]=ALerpF1(a[1],b[1],c);d[2]=ALerpF1(a[2],b[2],c);d[3]=ALerpF1(a[3],b[3],c);return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opAMaxD2(outAD2 d,inAD2 a,inAD2 b){d[0]=AMaxD1(a[0],b[0]);d[1]=AMaxD1(a[1],b[1]);return d;}
+ A_STATIC retAD3 opAMaxD3(outAD3 d,inAD3 a,inAD3 b){d[0]=AMaxD1(a[0],b[0]);d[1]=AMaxD1(a[1],b[1]);d[2]=AMaxD1(a[2],b[2]);return d;}
+ A_STATIC retAD4 opAMaxD4(outAD4 d,inAD4 a,inAD4 b){d[0]=AMaxD1(a[0],b[0]);d[1]=AMaxD1(a[1],b[1]);d[2]=AMaxD1(a[2],b[2]);d[3]=AMaxD1(a[3],b[3]);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opAMaxF2(outAF2 d,inAF2 a,inAF2 b){d[0]=AMaxF1(a[0],b[0]);d[1]=AMaxF1(a[1],b[1]);return d;}
+ A_STATIC retAF3 opAMaxF3(outAF3 d,inAF3 a,inAF3 b){d[0]=AMaxF1(a[0],b[0]);d[1]=AMaxF1(a[1],b[1]);d[2]=AMaxF1(a[2],b[2]);return d;}
+ A_STATIC retAF4 opAMaxF4(outAF4 d,inAF4 a,inAF4 b){d[0]=AMaxF1(a[0],b[0]);d[1]=AMaxF1(a[1],b[1]);d[2]=AMaxF1(a[2],b[2]);d[3]=AMaxF1(a[3],b[3]);return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opAMinD2(outAD2 d,inAD2 a,inAD2 b){d[0]=AMinD1(a[0],b[0]);d[1]=AMinD1(a[1],b[1]);return d;}
+ A_STATIC retAD3 opAMinD3(outAD3 d,inAD3 a,inAD3 b){d[0]=AMinD1(a[0],b[0]);d[1]=AMinD1(a[1],b[1]);d[2]=AMinD1(a[2],b[2]);return d;}
+ A_STATIC retAD4 opAMinD4(outAD4 d,inAD4 a,inAD4 b){d[0]=AMinD1(a[0],b[0]);d[1]=AMinD1(a[1],b[1]);d[2]=AMinD1(a[2],b[2]);d[3]=AMinD1(a[3],b[3]);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opAMinF2(outAF2 d,inAF2 a,inAF2 b){d[0]=AMinF1(a[0],b[0]);d[1]=AMinF1(a[1],b[1]);return d;}
+ A_STATIC retAF3 opAMinF3(outAF3 d,inAF3 a,inAF3 b){d[0]=AMinF1(a[0],b[0]);d[1]=AMinF1(a[1],b[1]);d[2]=AMinF1(a[2],b[2]);return d;}
+ A_STATIC retAF4 opAMinF4(outAF4 d,inAF4 a,inAF4 b){d[0]=AMinF1(a[0],b[0]);d[1]=AMinF1(a[1],b[1]);d[2]=AMinF1(a[2],b[2]);d[3]=AMinF1(a[3],b[3]);return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opAMulD2(outAD2 d,inAD2 a,inAD2 b){d[0]=a[0]*b[0];d[1]=a[1]*b[1];return d;}
+ A_STATIC retAD3 opAMulD3(outAD3 d,inAD3 a,inAD3 b){d[0]=a[0]*b[0];d[1]=a[1]*b[1];d[2]=a[2]*b[2];return d;}
+ A_STATIC retAD4 opAMulD4(outAD4 d,inAD4 a,inAD4 b){d[0]=a[0]*b[0];d[1]=a[1]*b[1];d[2]=a[2]*b[2];d[3]=a[3]*b[3];return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opAMulF2(outAF2 d,inAF2 a,inAF2 b){d[0]=a[0]*b[0];d[1]=a[1]*b[1];return d;}
+ A_STATIC retAF3 opAMulF3(outAF3 d,inAF3 a,inAF3 b){d[0]=a[0]*b[0];d[1]=a[1]*b[1];d[2]=a[2]*b[2];return d;}
+ A_STATIC retAF4 opAMulF4(outAF4 d,inAF4 a,inAF4 b){d[0]=a[0]*b[0];d[1]=a[1]*b[1];d[2]=a[2]*b[2];d[3]=a[3]*b[3];return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opAMulOneD2(outAD2 d,inAD2 a,AD1 b){d[0]=a[0]*b;d[1]=a[1]*b;return d;}
+ A_STATIC retAD3 opAMulOneD3(outAD3 d,inAD3 a,AD1 b){d[0]=a[0]*b;d[1]=a[1]*b;d[2]=a[2]*b;return d;}
+ A_STATIC retAD4 opAMulOneD4(outAD4 d,inAD4 a,AD1 b){d[0]=a[0]*b;d[1]=a[1]*b;d[2]=a[2]*b;d[3]=a[3]*b;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opAMulOneF2(outAF2 d,inAF2 a,AF1 b){d[0]=a[0]*b;d[1]=a[1]*b;return d;}
+ A_STATIC retAF3 opAMulOneF3(outAF3 d,inAF3 a,AF1 b){d[0]=a[0]*b;d[1]=a[1]*b;d[2]=a[2]*b;return d;}
+ A_STATIC retAF4 opAMulOneF4(outAF4 d,inAF4 a,AF1 b){d[0]=a[0]*b;d[1]=a[1]*b;d[2]=a[2]*b;d[3]=a[3]*b;return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opANegD2(outAD2 d,inAD2 a){d[0]=-a[0];d[1]=-a[1];return d;}
+ A_STATIC retAD3 opANegD3(outAD3 d,inAD3 a){d[0]=-a[0];d[1]=-a[1];d[2]=-a[2];return d;}
+ A_STATIC retAD4 opANegD4(outAD4 d,inAD4 a){d[0]=-a[0];d[1]=-a[1];d[2]=-a[2];d[3]=-a[3];return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opANegF2(outAF2 d,inAF2 a){d[0]=-a[0];d[1]=-a[1];return d;}
+ A_STATIC retAF3 opANegF3(outAF3 d,inAF3 a){d[0]=-a[0];d[1]=-a[1];d[2]=-a[2];return d;}
+ A_STATIC retAF4 opANegF4(outAF4 d,inAF4 a){d[0]=-a[0];d[1]=-a[1];d[2]=-a[2];d[3]=-a[3];return d;}
+//==============================================================================================================================
+ A_STATIC retAD2 opARcpD2(outAD2 d,inAD2 a){d[0]=ARcpD1(a[0]);d[1]=ARcpD1(a[1]);return d;}
+ A_STATIC retAD3 opARcpD3(outAD3 d,inAD3 a){d[0]=ARcpD1(a[0]);d[1]=ARcpD1(a[1]);d[2]=ARcpD1(a[2]);return d;}
+ A_STATIC retAD4 opARcpD4(outAD4 d,inAD4 a){d[0]=ARcpD1(a[0]);d[1]=ARcpD1(a[1]);d[2]=ARcpD1(a[2]);d[3]=ARcpD1(a[3]);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ A_STATIC retAF2 opARcpF2(outAF2 d,inAF2 a){d[0]=ARcpF1(a[0]);d[1]=ARcpF1(a[1]);return d;}
+ A_STATIC retAF3 opARcpF3(outAF3 d,inAF3 a){d[0]=ARcpF1(a[0]);d[1]=ARcpF1(a[1]);d[2]=ARcpF1(a[2]);return d;}
+ A_STATIC retAF4 opARcpF4(outAF4 d,inAF4 a){d[0]=ARcpF1(a[0]);d[1]=ARcpF1(a[1]);d[2]=ARcpF1(a[2]);d[3]=ARcpF1(a[3]);return d;}
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// HALF FLOAT PACKING
+//==============================================================================================================================
+ // Convert float to half (in lower 16-bits of output).
+ // Same fast technique as documented here: ftp://ftp.fox-toolkit.org/pub/fasthalffloatconversion.pdf
+ // Supports denormals.
+ // Conversion rules are to make computations possibly "safer" on the GPU,
+ // -INF & -NaN -> -65504
+ // +INF & +NaN -> +65504
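+ // The top 9 bits of the input (sign + exponent, 'u>>23') index the 512-entry tables below:
+ // 'base[i]' provides the half's sign/exponent bits and 'shift[i]' is how far the 23-bit
+ // mantissa is shifted down, so the conversion is two table reads, a shift, and an add.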
+ A_STATIC AU1 AU1_AH1_AF1(AF1 f){
+ static AW1 base[512]={
+ 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,
+ 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,
+ 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,
+ 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,
+ 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,
+ 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,
+ 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0001,0x0002,0x0004,0x0008,0x0010,0x0020,0x0040,0x0080,0x0100,
+ 0x0200,0x0400,0x0800,0x0c00,0x1000,0x1400,0x1800,0x1c00,0x2000,0x2400,0x2800,0x2c00,0x3000,0x3400,0x3800,0x3c00,
+ 0x4000,0x4400,0x4800,0x4c00,0x5000,0x5400,0x5800,0x5c00,0x6000,0x6400,0x6800,0x6c00,0x7000,0x7400,0x7800,0x7bff,
+ 0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,
+ 0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,
+ 0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,
+ 0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,
+ 0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,
+ 0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,
+ 0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,0x7bff,
+ 0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,
+ 0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,
+ 0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,
+ 0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,
+ 0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,
+ 0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,
+ 0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8000,0x8001,0x8002,0x8004,0x8008,0x8010,0x8020,0x8040,0x8080,0x8100,
+ 0x8200,0x8400,0x8800,0x8c00,0x9000,0x9400,0x9800,0x9c00,0xa000,0xa400,0xa800,0xac00,0xb000,0xb400,0xb800,0xbc00,
+ 0xc000,0xc400,0xc800,0xcc00,0xd000,0xd400,0xd800,0xdc00,0xe000,0xe400,0xe800,0xec00,0xf000,0xf400,0xf800,0xfbff,
+ 0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,
+ 0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,
+ 0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,
+ 0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,
+ 0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,
+ 0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,
+ 0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff,0xfbff};
+ static AB1 shift[512]={
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x17,0x16,0x15,0x14,0x13,0x12,0x11,0x10,0x0f,
+ 0x0e,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,
+ 0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x17,0x16,0x15,0x14,0x13,0x12,0x11,0x10,0x0f,
+ 0x0e,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,
+ 0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x0d,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,
+ 0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18};
+ union{AF1 f;AU1 u;}bits;bits.f=f;AU1 u=bits.u;AU1 i=u>>23;return (AU1)(base[i])+((u&0x7fffff)>>shift[i]);}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Used to output packed constant.
+ A_STATIC AU1 AU1_AH2_AF2(inAF2 a){return AU1_AH1_AF1(a[0])+(AU1_AH1_AF1(a[1])<<16);}
+#endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+//
+// GLSL
+//
+//
+//==============================================================================================================================
+#if defined(A_GLSL) && defined(A_GPU)
+ #ifndef A_SKIP_EXT
+ #ifdef A_HALF
+ #extension GL_EXT_shader_16bit_storage:require
+ #extension GL_EXT_shader_explicit_arithmetic_types:require
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_LONG
+ #extension GL_ARB_gpu_shader_int64:require
+ #extension GL_NV_shader_atomic_int64:require
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_WAVE
+ #extension GL_KHR_shader_subgroup_arithmetic:require
+ #extension GL_KHR_shader_subgroup_ballot:require
+ #extension GL_KHR_shader_subgroup_quad:require
+ #extension GL_KHR_shader_subgroup_shuffle:require
+ #endif
+ #endif
+//==============================================================================================================================
+ #define AP1 bool
+ #define AP2 bvec2
+ #define AP3 bvec3
+ #define AP4 bvec4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AF1 float
+ #define AF2 vec2
+ #define AF3 vec3
+ #define AF4 vec4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AU1 uint
+ #define AU2 uvec2
+ #define AU3 uvec3
+ #define AU4 uvec4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASU1 int
+ #define ASU2 ivec2
+ #define ASU3 ivec3
+ #define ASU4 ivec4
+//==============================================================================================================================
+ #define AF1_AU1(x) uintBitsToFloat(AU1(x))
+ #define AF2_AU2(x) uintBitsToFloat(AU2(x))
+ #define AF3_AU3(x) uintBitsToFloat(AU3(x))
+ #define AF4_AU4(x) uintBitsToFloat(AU4(x))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AU1_AF1(x) floatBitsToUint(AF1(x))
+ #define AU2_AF2(x) floatBitsToUint(AF2(x))
+ #define AU3_AF3(x) floatBitsToUint(AF3(x))
+ #define AU4_AF4(x) floatBitsToUint(AF4(x))
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AU1_AH1_AF1_x(AF1 a){return packHalf2x16(AF2(a,0.0));}
+ #define AU1_AH1_AF1(a) AU1_AH1_AF1_x(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AU1_AH2_AF2 packHalf2x16
+ #define AU1_AW2Unorm_AF2 packUnorm2x16
+ #define AU1_AB4Unorm_AF4 packUnorm4x8
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AF2_AH2_AU1 unpackHalf2x16
+ #define AF2_AW2Unorm_AU1 unpackUnorm2x16
+ #define AF4_AB4Unorm_AU1 unpackUnorm4x8
+//==============================================================================================================================
+ AF1 AF1_x(AF1 a){return AF1(a);}
+ AF2 AF2_x(AF1 a){return AF2(a,a);}
+ AF3 AF3_x(AF1 a){return AF3(a,a,a);}
+ AF4 AF4_x(AF1 a){return AF4(a,a,a,a);}
+ #define AF1_(a) AF1_x(AF1(a))
+ #define AF2_(a) AF2_x(AF1(a))
+ #define AF3_(a) AF3_x(AF1(a))
+ #define AF4_(a) AF4_x(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AU1_x(AU1 a){return AU1(a);}
+ AU2 AU2_x(AU1 a){return AU2(a,a);}
+ AU3 AU3_x(AU1 a){return AU3(a,a,a);}
+ AU4 AU4_x(AU1 a){return AU4(a,a,a,a);}
+ #define AU1_(a) AU1_x(AU1(a))
+ #define AU2_(a) AU2_x(AU1(a))
+ #define AU3_(a) AU3_x(AU1(a))
+ #define AU4_(a) AU4_x(AU1(a))
+//==============================================================================================================================
+ AU1 AAbsSU1(AU1 a){return AU1(abs(ASU1(a)));}
+ AU2 AAbsSU2(AU2 a){return AU2(abs(ASU2(a)));}
+ AU3 AAbsSU3(AU3 a){return AU3(abs(ASU3(a)));}
+ AU4 AAbsSU4(AU4 a){return AU4(abs(ASU4(a)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 ABfe(AU1 src,AU1 off,AU1 bits){return bitfieldExtract(src,ASU1(off),ASU1(bits));}
+ AU1 ABfi(AU1 src,AU1 ins,AU1 mask){return (ins&mask)|(src&(~mask));}
+ // Proxy for V_BFI_B32 where the 'mask' is set as 'bits', 'mask=(1<<bits)-1', and 'bits' needs to be an immediate.
+ AU1 ABfiM(AU1 src,AU1 ins,AU1 bits){return (ins&AU1((1<<bits)-1))|(src&AU1(~((1<<bits)-1)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AClampF1(AF1 x,AF1 n,AF1 m){return clamp(x,n,m);}
+ AF2 AClampF2(AF2 x,AF2 n,AF2 m){return clamp(x,n,m);}
+ AF3 AClampF3(AF3 x,AF3 n,AF3 m){return clamp(x,n,m);}
+ AF4 AClampF4(AF4 x,AF4 n,AF4 m){return clamp(x,n,m);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AFractF1(AF1 x){return fract(x);}
+ AF2 AFractF2(AF2 x){return fract(x);}
+ AF3 AFractF3(AF3 x){return fract(x);}
+ AF4 AFractF4(AF4 x){return fract(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 ALerpF1(AF1 x,AF1 y,AF1 a){return mix(x,y,a);}
+ AF2 ALerpF2(AF2 x,AF2 y,AF2 a){return mix(x,y,a);}
+ AF3 ALerpF3(AF3 x,AF3 y,AF3 a){return mix(x,y,a);}
+ AF4 ALerpF4(AF4 x,AF4 y,AF4 a){return mix(x,y,a);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AMax3F1(AF1 x,AF1 y,AF1 z){return max(x,max(y,z));}
+ AF2 AMax3F2(AF2 x,AF2 y,AF2 z){return max(x,max(y,z));}
+ AF3 AMax3F3(AF3 x,AF3 y,AF3 z){return max(x,max(y,z));}
+ AF4 AMax3F4(AF4 x,AF4 y,AF4 z){return max(x,max(y,z));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AMaxSU1(AU1 a,AU1 b){return AU1(max(ASU1(a),ASU1(b)));}
+ AU2 AMaxSU2(AU2 a,AU2 b){return AU2(max(ASU2(a),ASU2(b)));}
+ AU3 AMaxSU3(AU3 a,AU3 b){return AU3(max(ASU3(a),ASU3(b)));}
+ AU4 AMaxSU4(AU4 a,AU4 b){return AU4(max(ASU4(a),ASU4(b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AMin3F1(AF1 x,AF1 y,AF1 z){return min(x,min(y,z));}
+ AF2 AMin3F2(AF2 x,AF2 y,AF2 z){return min(x,min(y,z));}
+ AF3 AMin3F3(AF3 x,AF3 y,AF3 z){return min(x,min(y,z));}
+ AF4 AMin3F4(AF4 x,AF4 y,AF4 z){return min(x,min(y,z));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AMinSU1(AU1 a,AU1 b){return AU1(min(ASU1(a),ASU1(b)));}
+ AU2 AMinSU2(AU2 a,AU2 b){return AU2(min(ASU2(a),ASU2(b)));}
+ AU3 AMinSU3(AU3 a,AU3 b){return AU3(min(ASU3(a),ASU3(b)));}
+ AU4 AMinSU4(AU4 a,AU4 b){return AU4(min(ASU4(a),ASU4(b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 ARcpF1(AF1 x){return AF1_(1.0)/x;}
+ AF2 ARcpF2(AF2 x){return AF2_(1.0)/x;}
+ AF3 ARcpF3(AF3 x){return AF3_(1.0)/x;}
+ AF4 ARcpF4(AF4 x){return AF4_(1.0)/x;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 ARsqF1(AF1 x){return AF1_(1.0)/sqrt(x);}
+ AF2 ARsqF2(AF2 x){return AF2_(1.0)/sqrt(x);}
+ AF3 ARsqF3(AF3 x){return AF3_(1.0)/sqrt(x);}
+ AF4 ARsqF4(AF4 x){return AF4_(1.0)/sqrt(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 ASatF1(AF1 x){return clamp(x,AF1_(0.0),AF1_(1.0));}
+ AF2 ASatF2(AF2 x){return clamp(x,AF2_(0.0),AF2_(1.0));}
+ AF3 ASatF3(AF3 x){return clamp(x,AF3_(0.0),AF3_(1.0));}
+ AF4 ASatF4(AF4 x){return clamp(x,AF4_(0.0),AF4_(1.0));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AShrSU1(AU1 a,AU1 b){return AU1(ASU1(a)>>ASU1(b));}
+ AU2 AShrSU2(AU2 a,AU2 b){return AU2(ASU2(a)>>ASU2(b));}
+ AU3 AShrSU3(AU3 a,AU3 b){return AU3(ASU3(a)>>ASU3(b));}
+ AU4 AShrSU4(AU4 a,AU4 b){return AU4(ASU4(a)>>ASU4(b));}
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// GLSL BYTE
+//==============================================================================================================================
+ #ifdef A_BYTE
+ #define AB1 uint8_t
+ #define AB2 u8vec2
+ #define AB3 u8vec3
+ #define AB4 u8vec4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASB1 int8_t
+ #define ASB2 i8vec2
+ #define ASB3 i8vec3
+ #define ASB4 i8vec4
+//------------------------------------------------------------------------------------------------------------------------------
+ AB1 AB1_x(AB1 a){return AB1(a);}
+ AB2 AB2_x(AB1 a){return AB2(a,a);}
+ AB3 AB3_x(AB1 a){return AB3(a,a,a);}
+ AB4 AB4_x(AB1 a){return AB4(a,a,a,a);}
+ #define AB1_(a) AB1_x(AB1(a))
+ #define AB2_(a) AB2_x(AB1(a))
+ #define AB3_(a) AB3_x(AB1(a))
+ #define AB4_(a) AB4_x(AB1(a))
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// GLSL HALF
+//==============================================================================================================================
+ #ifdef A_HALF
+ #define AH1 float16_t
+ #define AH2 f16vec2
+ #define AH3 f16vec3
+ #define AH4 f16vec4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AW1 uint16_t
+ #define AW2 u16vec2
+ #define AW3 u16vec3
+ #define AW4 u16vec4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASW1 int16_t
+ #define ASW2 i16vec2
+ #define ASW3 i16vec3
+ #define ASW4 i16vec4
+//==============================================================================================================================
+ #define AH2_AU1(x) unpackFloat2x16(AU1(x))
+ AH4 AH4_AU2_x(AU2 x){return AH4(unpackFloat2x16(x.x),unpackFloat2x16(x.y));}
+ #define AH4_AU2(x) AH4_AU2_x(AU2(x))
+ #define AW2_AU1(x) unpackUint2x16(AU1(x))
+ #define AW4_AU2(x) unpackUint4x16(pack64(AU2(x)))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AU1_AH2(x) packFloat2x16(AH2(x))
+ AU2 AU2_AH4_x(AH4 x){return AU2(packFloat2x16(x.xy),packFloat2x16(x.zw));}
+ #define AU2_AH4(x) AU2_AH4_x(AH4(x))
+ #define AU1_AW2(x) packUint2x16(AW2(x))
+ #define AU2_AW4(x) unpack32(packUint4x16(AW4(x)))
+//==============================================================================================================================
+ #define AW1_AH1(x) halfBitsToUint16(AH1(x))
+ #define AW2_AH2(x) halfBitsToUint16(AH2(x))
+ #define AW3_AH3(x) halfBitsToUint16(AH3(x))
+ #define AW4_AH4(x) halfBitsToUint16(AH4(x))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AH1_AW1(x) uint16BitsToHalf(AW1(x))
+ #define AH2_AW2(x) uint16BitsToHalf(AW2(x))
+ #define AH3_AW3(x) uint16BitsToHalf(AW3(x))
+ #define AH4_AW4(x) uint16BitsToHalf(AW4(x))
+//==============================================================================================================================
+ AH1 AH1_x(AH1 a){return AH1(a);}
+ AH2 AH2_x(AH1 a){return AH2(a,a);}
+ AH3 AH3_x(AH1 a){return AH3(a,a,a);}
+ AH4 AH4_x(AH1 a){return AH4(a,a,a,a);}
+ #define AH1_(a) AH1_x(AH1(a))
+ #define AH2_(a) AH2_x(AH1(a))
+ #define AH3_(a) AH3_x(AH1(a))
+ #define AH4_(a) AH4_x(AH1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AW1_x(AW1 a){return AW1(a);}
+ AW2 AW2_x(AW1 a){return AW2(a,a);}
+ AW3 AW3_x(AW1 a){return AW3(a,a,a);}
+ AW4 AW4_x(AW1 a){return AW4(a,a,a,a);}
+ #define AW1_(a) AW1_x(AW1(a))
+ #define AW2_(a) AW2_x(AW1(a))
+ #define AW3_(a) AW3_x(AW1(a))
+ #define AW4_(a) AW4_x(AW1(a))
+//==============================================================================================================================
+ AW1 AAbsSW1(AW1 a){return AW1(abs(ASW1(a)));}
+ AW2 AAbsSW2(AW2 a){return AW2(abs(ASW2(a)));}
+ AW3 AAbsSW3(AW3 a){return AW3(abs(ASW3(a)));}
+ AW4 AAbsSW4(AW4 a){return AW4(abs(ASW4(a)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AClampH1(AH1 x,AH1 n,AH1 m){return clamp(x,n,m);}
+ AH2 AClampH2(AH2 x,AH2 n,AH2 m){return clamp(x,n,m);}
+ AH3 AClampH3(AH3 x,AH3 n,AH3 m){return clamp(x,n,m);}
+ AH4 AClampH4(AH4 x,AH4 n,AH4 m){return clamp(x,n,m);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AFractH1(AH1 x){return fract(x);}
+ AH2 AFractH2(AH2 x){return fract(x);}
+ AH3 AFractH3(AH3 x){return fract(x);}
+ AH4 AFractH4(AH4 x){return fract(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ALerpH1(AH1 x,AH1 y,AH1 a){return mix(x,y,a);}
+ AH2 ALerpH2(AH2 x,AH2 y,AH2 a){return mix(x,y,a);}
+ AH3 ALerpH3(AH3 x,AH3 y,AH3 a){return mix(x,y,a);}
+ AH4 ALerpH4(AH4 x,AH4 y,AH4 a){return mix(x,y,a);}
+//------------------------------------------------------------------------------------------------------------------------------
+ // No packed version of max3.
+ AH1 AMax3H1(AH1 x,AH1 y,AH1 z){return max(x,max(y,z));}
+ AH2 AMax3H2(AH2 x,AH2 y,AH2 z){return max(x,max(y,z));}
+ AH3 AMax3H3(AH3 x,AH3 y,AH3 z){return max(x,max(y,z));}
+ AH4 AMax3H4(AH4 x,AH4 y,AH4 z){return max(x,max(y,z));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AMaxSW1(AW1 a,AW1 b){return AW1(max(ASU1(a),ASU1(b)));}
+ AW2 AMaxSW2(AW2 a,AW2 b){return AW2(max(ASU2(a),ASU2(b)));}
+ AW3 AMaxSW3(AW3 a,AW3 b){return AW3(max(ASU3(a),ASU3(b)));}
+ AW4 AMaxSW4(AW4 a,AW4 b){return AW4(max(ASU4(a),ASU4(b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // No packed version of min3.
+ AH1 AMin3H1(AH1 x,AH1 y,AH1 z){return min(x,min(y,z));}
+ AH2 AMin3H2(AH2 x,AH2 y,AH2 z){return min(x,min(y,z));}
+ AH3 AMin3H3(AH3 x,AH3 y,AH3 z){return min(x,min(y,z));}
+ AH4 AMin3H4(AH4 x,AH4 y,AH4 z){return min(x,min(y,z));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AMinSW1(AW1 a,AW1 b){return AW1(min(ASU1(a),ASU1(b)));}
+ AW2 AMinSW2(AW2 a,AW2 b){return AW2(min(ASU2(a),ASU2(b)));}
+ AW3 AMinSW3(AW3 a,AW3 b){return AW3(min(ASU3(a),ASU3(b)));}
+ AW4 AMinSW4(AW4 a,AW4 b){return AW4(min(ASU4(a),ASU4(b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ARcpH1(AH1 x){return AH1_(1.0)/x;}
+ AH2 ARcpH2(AH2 x){return AH2_(1.0)/x;}
+ AH3 ARcpH3(AH3 x){return AH3_(1.0)/x;}
+ AH4 ARcpH4(AH4 x){return AH4_(1.0)/x;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ARsqH1(AH1 x){return AH1_(1.0)/sqrt(x);}
+ AH2 ARsqH2(AH2 x){return AH2_(1.0)/sqrt(x);}
+ AH3 ARsqH3(AH3 x){return AH3_(1.0)/sqrt(x);}
+ AH4 ARsqH4(AH4 x){return AH4_(1.0)/sqrt(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ASatH1(AH1 x){return clamp(x,AH1_(0.0),AH1_(1.0));}
+ AH2 ASatH2(AH2 x){return clamp(x,AH2_(0.0),AH2_(1.0));}
+ AH3 ASatH3(AH3 x){return clamp(x,AH3_(0.0),AH3_(1.0));}
+ AH4 ASatH4(AH4 x){return clamp(x,AH4_(0.0),AH4_(1.0));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AShrSW1(AW1 a,AW1 b){return AW1(ASW1(a)>>ASW1(b));}
+ AW2 AShrSW2(AW2 a,AW2 b){return AW2(ASW2(a)>>ASW2(b));}
+ AW3 AShrSW3(AW3 a,AW3 b){return AW3(ASW3(a)>>ASW3(b));}
+ AW4 AShrSW4(AW4 a,AW4 b){return AW4(ASW4(a)>>ASW4(b));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// GLSL DOUBLE
+//==============================================================================================================================
+ #ifdef A_DUBL
+ #define AD1 double
+ #define AD2 dvec2
+ #define AD3 dvec3
+ #define AD4 dvec4
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 AD1_x(AD1 a){return AD1(a);}
+ AD2 AD2_x(AD1 a){return AD2(a,a);}
+ AD3 AD3_x(AD1 a){return AD3(a,a,a);}
+ AD4 AD4_x(AD1 a){return AD4(a,a,a,a);}
+ #define AD1_(a) AD1_x(AD1(a))
+ #define AD2_(a) AD2_x(AD1(a))
+ #define AD3_(a) AD3_x(AD1(a))
+ #define AD4_(a) AD4_x(AD1(a))
+//==============================================================================================================================
+ AD1 AFractD1(AD1 x){return fract(x);}
+ AD2 AFractD2(AD2 x){return fract(x);}
+ AD3 AFractD3(AD3 x){return fract(x);}
+ AD4 AFractD4(AD4 x){return fract(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 ALerpD1(AD1 x,AD1 y,AD1 a){return mix(x,y,a);}
+ AD2 ALerpD2(AD2 x,AD2 y,AD2 a){return mix(x,y,a);}
+ AD3 ALerpD3(AD3 x,AD3 y,AD3 a){return mix(x,y,a);}
+ AD4 ALerpD4(AD4 x,AD4 y,AD4 a){return mix(x,y,a);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 ARcpD1(AD1 x){return AD1_(1.0)/x;}
+ AD2 ARcpD2(AD2 x){return AD2_(1.0)/x;}
+ AD3 ARcpD3(AD3 x){return AD3_(1.0)/x;}
+ AD4 ARcpD4(AD4 x){return AD4_(1.0)/x;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 ARsqD1(AD1 x){return AD1_(1.0)/sqrt(x);}
+ AD2 ARsqD2(AD2 x){return AD2_(1.0)/sqrt(x);}
+ AD3 ARsqD3(AD3 x){return AD3_(1.0)/sqrt(x);}
+ AD4 ARsqD4(AD4 x){return AD4_(1.0)/sqrt(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 ASatD1(AD1 x){return clamp(x,AD1_(0.0),AD1_(1.0));}
+ AD2 ASatD2(AD2 x){return clamp(x,AD2_(0.0),AD2_(1.0));}
+ AD3 ASatD3(AD3 x){return clamp(x,AD3_(0.0),AD3_(1.0));}
+ AD4 ASatD4(AD4 x){return clamp(x,AD4_(0.0),AD4_(1.0));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// GLSL LONG
+//==============================================================================================================================
+ #ifdef A_LONG
+ #define AL1 uint64_t
+ #define AL2 u64vec2
+ #define AL3 u64vec3
+ #define AL4 u64vec4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASL1 int64_t
+ #define ASL2 i64vec2
+ #define ASL3 i64vec3
+ #define ASL4 i64vec4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AL1_AU2(x) packUint2x32(AU2(x))
+ #define AU2_AL1(x) unpackUint2x32(AL1(x))
+//------------------------------------------------------------------------------------------------------------------------------
+ AL1 AL1_x(AL1 a){return AL1(a);}
+ AL2 AL2_x(AL1 a){return AL2(a,a);}
+ AL3 AL3_x(AL1 a){return AL3(a,a,a);}
+ AL4 AL4_x(AL1 a){return AL4(a,a,a,a);}
+ #define AL1_(a) AL1_x(AL1(a))
+ #define AL2_(a) AL2_x(AL1(a))
+ #define AL3_(a) AL3_x(AL1(a))
+ #define AL4_(a) AL4_x(AL1(a))
+//==============================================================================================================================
+ AL1 AAbsSL1(AL1 a){return AL1(abs(ASL1(a)));}
+ AL2 AAbsSL2(AL2 a){return AL2(abs(ASL2(a)));}
+ AL3 AAbsSL3(AL3 a){return AL3(abs(ASL3(a)));}
+ AL4 AAbsSL4(AL4 a){return AL4(abs(ASL4(a)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AL1 AMaxSL1(AL1 a,AL1 b){return AL1(max(ASL1(a),ASL1(b)));}
+ AL2 AMaxSL2(AL2 a,AL2 b){return AL2(max(ASL2(a),ASL2(b)));}
+ AL3 AMaxSL3(AL3 a,AL3 b){return AL3(max(ASL3(a),ASL3(b)));}
+ AL4 AMaxSL4(AL4 a,AL4 b){return AL4(max(ASL4(a),ASL4(b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AL1 AMinSL1(AL1 a,AL1 b){return AL1(min(ASL1(a),ASL1(b)));}
+ AL2 AMinSL2(AL2 a,AL2 b){return AL2(min(ASL2(a),ASL2(b)));}
+ AL3 AMinSL3(AL3 a,AL3 b){return AL3(min(ASL3(a),ASL3(b)));}
+ AL4 AMinSL4(AL4 a,AL4 b){return AL4(min(ASL4(a),ASL4(b)));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// WAVE OPERATIONS
+//==============================================================================================================================
+ #ifdef A_WAVE
+ // Where 'x' must be a compile time literal.
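+ // Example (editor's note): 'AWaveXorF1(v,1u)' exchanges 'v' between lanes whose indices differ in bit 0.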
+ AF1 AWaveXorF1(AF1 v,AU1 x){return subgroupShuffleXor(v,x);}
+ AF2 AWaveXorF2(AF2 v,AU1 x){return subgroupShuffleXor(v,x);}
+ AF3 AWaveXorF3(AF3 v,AU1 x){return subgroupShuffleXor(v,x);}
+ AF4 AWaveXorF4(AF4 v,AU1 x){return subgroupShuffleXor(v,x);}
+ AU1 AWaveXorU1(AU1 v,AU1 x){return subgroupShuffleXor(v,x);}
+ AU2 AWaveXorU2(AU2 v,AU1 x){return subgroupShuffleXor(v,x);}
+ AU3 AWaveXorU3(AU3 v,AU1 x){return subgroupShuffleXor(v,x);}
+ AU4 AWaveXorU4(AU4 v,AU1 x){return subgroupShuffleXor(v,x);}
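+ // Example (illustrative, assuming lanes 0..3 of the subgroup hold a 2x2 pixel quad):
+ //   AWaveXorF1(v,1u) reads 'v' from the horizontal neighbor, AWaveXorF1(v,2u) from the vertical one,
+ //   so 'v+AWaveXorF1(v,1u)+AWaveXorF1(v,2u)+AWaveXorF1(v,3u)' yields the quad sum in every lane.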
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_HALF
+ AH2 AWaveXorH2(AH2 v,AU1 x){return AH2_AU1(subgroupShuffleXor(AU1_AH2(v),x));}
+ AH4 AWaveXorH4(AH4 v,AU1 x){return AH4_AU2(subgroupShuffleXor(AU2_AH4(v),x));}
+ AW2 AWaveXorW2(AW2 v,AU1 x){return AW2_AU1(subgroupShuffleXor(AU1_AW2(v),x));}
+ AW4 AWaveXorW4(AW4 v,AU1 x){return AW4_AU2(subgroupShuffleXor(AU2_AW4(v),x));}
+ #endif
+ #endif
+//==============================================================================================================================
+#endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+//
+// HLSL
+//
+//
+//==============================================================================================================================
+#if defined(A_HLSL) && defined(A_GPU)
+ #ifdef A_HLSL_6_2
+ #define AP1 bool
+ #define AP2 bool2
+ #define AP3 bool3
+ #define AP4 bool4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AF1 float32_t
+ #define AF2 float32_t2
+ #define AF3 float32_t3
+ #define AF4 float32_t4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AU1 uint32_t
+ #define AU2 uint32_t2
+ #define AU3 uint32_t3
+ #define AU4 uint32_t4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASU1 int32_t
+ #define ASU2 int32_t2
+ #define ASU3 int32_t3
+ #define ASU4 int32_t4
+ #else
+ #define AP1 bool
+ #define AP2 bool2
+ #define AP3 bool3
+ #define AP4 bool4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AF1 float
+ #define AF2 float2
+ #define AF3 float3
+ #define AF4 float4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AU1 uint
+ #define AU2 uint2
+ #define AU3 uint3
+ #define AU4 uint4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASU1 int
+ #define ASU2 int2
+ #define ASU3 int3
+ #define ASU4 int4
+ #endif
+//==============================================================================================================================
+ #define AF1_AU1(x) asfloat(AU1(x))
+ #define AF2_AU2(x) asfloat(AU2(x))
+ #define AF3_AU3(x) asfloat(AU3(x))
+ #define AF4_AU4(x) asfloat(AU4(x))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AU1_AF1(x) asuint(AF1(x))
+ #define AU2_AF2(x) asuint(AF2(x))
+ #define AU3_AF3(x) asuint(AF3(x))
+ #define AU4_AF4(x) asuint(AF4(x))
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AU1_AH1_AF1_x(AF1 a){return f32tof16(a);}
+ #define AU1_AH1_AF1(a) AU1_AH1_AF1_x(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AU1_AH2_AF2_x(AF2 a){return f32tof16(a.x)|(f32tof16(a.y)<<16);}
+ #define AU1_AH2_AF2(a) AU1_AH2_AF2_x(AF2(a))
+ #define AU1_AB4Unorm_AF4(x) D3DCOLORtoUBYTE4(AF4(x))
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 AF2_AH2_AU1_x(AU1 x){return AF2(f16tof32(x&0xFFFF),f16tof32(x>>16));}
+ #define AF2_AH2_AU1(x) AF2_AH2_AU1_x(AU1(x))
+//==============================================================================================================================
+ AF1 AF1_x(AF1 a){return AF1(a);}
+ AF2 AF2_x(AF1 a){return AF2(a,a);}
+ AF3 AF3_x(AF1 a){return AF3(a,a,a);}
+ AF4 AF4_x(AF1 a){return AF4(a,a,a,a);}
+ #define AF1_(a) AF1_x(AF1(a))
+ #define AF2_(a) AF2_x(AF1(a))
+ #define AF3_(a) AF3_x(AF1(a))
+ #define AF4_(a) AF4_x(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AU1_x(AU1 a){return AU1(a);}
+ AU2 AU2_x(AU1 a){return AU2(a,a);}
+ AU3 AU3_x(AU1 a){return AU3(a,a,a);}
+ AU4 AU4_x(AU1 a){return AU4(a,a,a,a);}
+ #define AU1_(a) AU1_x(AU1(a))
+ #define AU2_(a) AU2_x(AU1(a))
+ #define AU3_(a) AU3_x(AU1(a))
+ #define AU4_(a) AU4_x(AU1(a))
+//==============================================================================================================================
+ AU1 AAbsSU1(AU1 a){return AU1(abs(ASU1(a)));}
+ AU2 AAbsSU2(AU2 a){return AU2(abs(ASU2(a)));}
+ AU3 AAbsSU3(AU3 a){return AU3(abs(ASU3(a)));}
+ AU4 AAbsSU4(AU4 a){return AU4(abs(ASU4(a)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 ABfe(AU1 src,AU1 off,AU1 bits){AU1 mask=(1u<<bits)-1;return (src>>off)&mask;}
+ AU1 ABfi(AU1 src,AU1 ins,AU1 mask){return (ins&mask)|(src&(~mask));}
+ AU1 ABfiM(AU1 src,AU1 ins,AU1 bits){AU1 mask=(1u<<bits)-1;return (ins&mask)|(src&(~mask));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AShrSU1(AU1 a,AU1 b){return AU1(ASU1(a)>>ASU1(b));}
+ AU2 AShrSU2(AU2 a,AU2 b){return AU2(ASU2(a)>>ASU2(b));}
+ AU3 AShrSU3(AU3 a,AU3 b){return AU3(ASU3(a)>>ASU3(b));}
+ AU4 AShrSU4(AU4 a,AU4 b){return AU4(ASU4(a)>>ASU4(b));}
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// HLSL BYTE
+//==============================================================================================================================
+ #ifdef A_BYTE
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// HLSL HALF
+//==============================================================================================================================
+ #ifdef A_HALF
+ #ifdef A_HLSL_6_2
+ #define AH1 float16_t
+ #define AH2 float16_t2
+ #define AH3 float16_t3
+ #define AH4 float16_t4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AW1 uint16_t
+ #define AW2 uint16_t2
+ #define AW3 uint16_t3
+ #define AW4 uint16_t4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASW1 int16_t
+ #define ASW2 int16_t2
+ #define ASW3 int16_t3
+ #define ASW4 int16_t4
+ #else
+ #define AH1 min16float
+ #define AH2 min16float2
+ #define AH3 min16float3
+ #define AH4 min16float4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AW1 min16uint
+ #define AW2 min16uint2
+ #define AW3 min16uint3
+ #define AW4 min16uint4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASW1 min16int
+ #define ASW2 min16int2
+ #define ASW3 min16int3
+ #define ASW4 min16int4
+ #endif
+//==============================================================================================================================
+ // Need to use manual unpack to get optimal execution (don't use packed types in buffers directly).
+ // Unpack requires this pattern: https://gpuopen.com/first-steps-implementing-fp16/
+ AH2 AH2_AU1_x(AU1 x){AF2 t=f16tof32(AU2(x&0xFFFF,x>>16));return AH2(t);}
+ AH4 AH4_AU2_x(AU2 x){return AH4(AH2_AU1_x(x.x),AH2_AU1_x(x.y));}
+ AW2 AW2_AU1_x(AU1 x){AU2 t=AU2(x&0xFFFF,x>>16);return AW2(t);}
+ AW4 AW4_AU2_x(AU2 x){return AW4(AW2_AU1_x(x.x),AW2_AU1_x(x.y));}
+ #define AH2_AU1(x) AH2_AU1_x(AU1(x))
+ #define AH4_AU2(x) AH4_AU2_x(AU2(x))
+ #define AW2_AU1(x) AW2_AU1_x(AU1(x))
+ #define AW4_AU2(x) AW4_AU2_x(AU2(x))
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AU1_AH2_x(AH2 x){return f32tof16(x.x)+(f32tof16(x.y)<<16);}
+ AU2 AU2_AH4_x(AH4 x){return AU2(AU1_AH2_x(x.xy),AU1_AH2_x(x.zw));}
+ AU1 AU1_AW2_x(AW2 x){return AU1(x.x)+(AU1(x.y)<<16);}
+ AU2 AU2_AW4_x(AW4 x){return AU2(AU1_AW2_x(x.xy),AU1_AW2_x(x.zw));}
+ #define AU1_AH2(x) AU1_AH2_x(AH2(x))
+ #define AU2_AH4(x) AU2_AH4_x(AH4(x))
+ #define AU1_AW2(x) AU1_AW2_x(AW2(x))
+ #define AU2_AW4(x) AU2_AW4_x(AW4(x))
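+ // Example (illustrative): for a packed half pair 'h' holding {1.0,1.0},
+ //   AU1 word=AU1_AH2(h);    // packs to 0x3c003c00 for storage in a 32-bit buffer word
+ //   AH2 back=AH2_AU1(word); // unpacks again, so ALU work stays in packed 16-bit types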
+//==============================================================================================================================
+ #if defined(A_HLSL_6_2) && !defined(A_NO_16_BIT_CAST)
+ #define AW1_AH1(x) asuint16(x)
+ #define AW2_AH2(x) asuint16(x)
+ #define AW3_AH3(x) asuint16(x)
+ #define AW4_AH4(x) asuint16(x)
+ #else
+ #define AW1_AH1(a) AW1(f32tof16(AF1(a)))
+ #define AW2_AH2(a) AW2(AW1_AH1((a).x),AW1_AH1((a).y))
+ #define AW3_AH3(a) AW3(AW1_AH1((a).x),AW1_AH1((a).y),AW1_AH1((a).z))
+ #define AW4_AH4(a) AW4(AW1_AH1((a).x),AW1_AH1((a).y),AW1_AH1((a).z),AW1_AH1((a).w))
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ #if defined(A_HLSL_6_2) && !defined(A_NO_16_BIT_CAST)
+ #define AH1_AW1(x) asfloat16(x)
+ #define AH2_AW2(x) asfloat16(x)
+ #define AH3_AW3(x) asfloat16(x)
+ #define AH4_AW4(x) asfloat16(x)
+ #else
+ #define AH1_AW1(a) AH1(f16tof32(AU1(a)))
+ #define AH2_AW2(a) AH2(AH1_AW1((a).x),AH1_AW1((a).y))
+ #define AH3_AW3(a) AH3(AH1_AW1((a).x),AH1_AW1((a).y),AH1_AW1((a).z))
+ #define AH4_AW4(a) AH4(AH1_AW1((a).x),AH1_AW1((a).y),AH1_AW1((a).z),AH1_AW1((a).w))
+ #endif
+//==============================================================================================================================
+ AH1 AH1_x(AH1 a){return AH1(a);}
+ AH2 AH2_x(AH1 a){return AH2(a,a);}
+ AH3 AH3_x(AH1 a){return AH3(a,a,a);}
+ AH4 AH4_x(AH1 a){return AH4(a,a,a,a);}
+ #define AH1_(a) AH1_x(AH1(a))
+ #define AH2_(a) AH2_x(AH1(a))
+ #define AH3_(a) AH3_x(AH1(a))
+ #define AH4_(a) AH4_x(AH1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AW1_x(AW1 a){return AW1(a);}
+ AW2 AW2_x(AW1 a){return AW2(a,a);}
+ AW3 AW3_x(AW1 a){return AW3(a,a,a);}
+ AW4 AW4_x(AW1 a){return AW4(a,a,a,a);}
+ #define AW1_(a) AW1_x(AW1(a))
+ #define AW2_(a) AW2_x(AW1(a))
+ #define AW3_(a) AW3_x(AW1(a))
+ #define AW4_(a) AW4_x(AW1(a))
+//==============================================================================================================================
+ AW1 AAbsSW1(AW1 a){return AW1(abs(ASW1(a)));}
+ AW2 AAbsSW2(AW2 a){return AW2(abs(ASW2(a)));}
+ AW3 AAbsSW3(AW3 a){return AW3(abs(ASW3(a)));}
+ AW4 AAbsSW4(AW4 a){return AW4(abs(ASW4(a)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AClampH1(AH1 x,AH1 n,AH1 m){return max(n,min(x,m));}
+ AH2 AClampH2(AH2 x,AH2 n,AH2 m){return max(n,min(x,m));}
+ AH3 AClampH3(AH3 x,AH3 n,AH3 m){return max(n,min(x,m));}
+ AH4 AClampH4(AH4 x,AH4 n,AH4 m){return max(n,min(x,m));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // V_FRACT_F16 (note DX frac() is different).
+ AH1 AFractH1(AH1 x){return x-floor(x);}
+ AH2 AFractH2(AH2 x){return x-floor(x);}
+ AH3 AFractH3(AH3 x){return x-floor(x);}
+ AH4 AFractH4(AH4 x){return x-floor(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ALerpH1(AH1 x,AH1 y,AH1 a){return lerp(x,y,a);}
+ AH2 ALerpH2(AH2 x,AH2 y,AH2 a){return lerp(x,y,a);}
+ AH3 ALerpH3(AH3 x,AH3 y,AH3 a){return lerp(x,y,a);}
+ AH4 ALerpH4(AH4 x,AH4 y,AH4 a){return lerp(x,y,a);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AMax3H1(AH1 x,AH1 y,AH1 z){return max(x,max(y,z));}
+ AH2 AMax3H2(AH2 x,AH2 y,AH2 z){return max(x,max(y,z));}
+ AH3 AMax3H3(AH3 x,AH3 y,AH3 z){return max(x,max(y,z));}
+ AH4 AMax3H4(AH4 x,AH4 y,AH4 z){return max(x,max(y,z));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AMaxSW1(AW1 a,AW1 b){return AW1(max(ASU1(a),ASU1(b)));}
+ AW2 AMaxSW2(AW2 a,AW2 b){return AW2(max(ASU2(a),ASU2(b)));}
+ AW3 AMaxSW3(AW3 a,AW3 b){return AW3(max(ASU3(a),ASU3(b)));}
+ AW4 AMaxSW4(AW4 a,AW4 b){return AW4(max(ASU4(a),ASU4(b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AMin3H1(AH1 x,AH1 y,AH1 z){return min(x,min(y,z));}
+ AH2 AMin3H2(AH2 x,AH2 y,AH2 z){return min(x,min(y,z));}
+ AH3 AMin3H3(AH3 x,AH3 y,AH3 z){return min(x,min(y,z));}
+ AH4 AMin3H4(AH4 x,AH4 y,AH4 z){return min(x,min(y,z));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AMinSW1(AW1 a,AW1 b){return AW1(min(ASU1(a),ASU1(b)));}
+ AW2 AMinSW2(AW2 a,AW2 b){return AW2(min(ASU2(a),ASU2(b)));}
+ AW3 AMinSW3(AW3 a,AW3 b){return AW3(min(ASU3(a),ASU3(b)));}
+ AW4 AMinSW4(AW4 a,AW4 b){return AW4(min(ASU4(a),ASU4(b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ARcpH1(AH1 x){return rcp(x);}
+ AH2 ARcpH2(AH2 x){return rcp(x);}
+ AH3 ARcpH3(AH3 x){return rcp(x);}
+ AH4 ARcpH4(AH4 x){return rcp(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ARsqH1(AH1 x){return rsqrt(x);}
+ AH2 ARsqH2(AH2 x){return rsqrt(x);}
+ AH3 ARsqH3(AH3 x){return rsqrt(x);}
+ AH4 ARsqH4(AH4 x){return rsqrt(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ASatH1(AH1 x){return saturate(x);}
+ AH2 ASatH2(AH2 x){return saturate(x);}
+ AH3 ASatH3(AH3 x){return saturate(x);}
+ AH4 ASatH4(AH4 x){return saturate(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AShrSW1(AW1 a,AW1 b){return AW1(ASW1(a)>>ASW1(b));}
+ AW2 AShrSW2(AW2 a,AW2 b){return AW2(ASW2(a)>>ASW2(b));}
+ AW3 AShrSW3(AW3 a,AW3 b){return AW3(ASW3(a)>>ASW3(b));}
+ AW4 AShrSW4(AW4 a,AW4 b){return AW4(ASW4(a)>>ASW4(b));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// HLSL DOUBLE
+//==============================================================================================================================
+ #ifdef A_DUBL
+ #ifdef A_HLSL_6_2
+ #define AD1 float64_t
+ #define AD2 float64_t2
+ #define AD3 float64_t3
+ #define AD4 float64_t4
+ #else
+ #define AD1 double
+ #define AD2 double2
+ #define AD3 double3
+ #define AD4 double4
+ #endif
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 AD1_x(AD1 a){return AD1(a);}
+ AD2 AD2_x(AD1 a){return AD2(a,a);}
+ AD3 AD3_x(AD1 a){return AD3(a,a,a);}
+ AD4 AD4_x(AD1 a){return AD4(a,a,a,a);}
+ #define AD1_(a) AD1_x(AD1(a))
+ #define AD2_(a) AD2_x(AD1(a))
+ #define AD3_(a) AD3_x(AD1(a))
+ #define AD4_(a) AD4_x(AD1(a))
+//==============================================================================================================================
+ AD1 AFractD1(AD1 a){return a-floor(a);}
+ AD2 AFractD2(AD2 a){return a-floor(a);}
+ AD3 AFractD3(AD3 a){return a-floor(a);}
+ AD4 AFractD4(AD4 a){return a-floor(a);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 ALerpD1(AD1 x,AD1 y,AD1 a){return lerp(x,y,a);}
+ AD2 ALerpD2(AD2 x,AD2 y,AD2 a){return lerp(x,y,a);}
+ AD3 ALerpD3(AD3 x,AD3 y,AD3 a){return lerp(x,y,a);}
+ AD4 ALerpD4(AD4 x,AD4 y,AD4 a){return lerp(x,y,a);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 ARcpD1(AD1 x){return rcp(x);}
+ AD2 ARcpD2(AD2 x){return rcp(x);}
+ AD3 ARcpD3(AD3 x){return rcp(x);}
+ AD4 ARcpD4(AD4 x){return rcp(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 ARsqD1(AD1 x){return rsqrt(x);}
+ AD2 ARsqD2(AD2 x){return rsqrt(x);}
+ AD3 ARsqD3(AD3 x){return rsqrt(x);}
+ AD4 ARsqD4(AD4 x){return rsqrt(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD1 ASatD1(AD1 x){return saturate(x);}
+ AD2 ASatD2(AD2 x){return saturate(x);}
+ AD3 ASatD3(AD3 x){return saturate(x);}
+ AD4 ASatD4(AD4 x){return saturate(x);}
+ #endif
+//==============================================================================================================================
+// HLSL WAVE
+//==============================================================================================================================
+ #ifdef A_WAVE
+ // Where 'x' must be a compile time literal.
+ AF1 AWaveXorF1(AF1 v,AU1 x){return WaveReadLaneAt(v,WaveGetLaneIndex()^x);}
+ AF2 AWaveXorF2(AF2 v,AU1 x){return WaveReadLaneAt(v,WaveGetLaneIndex()^x);}
+ AF3 AWaveXorF3(AF3 v,AU1 x){return WaveReadLaneAt(v,WaveGetLaneIndex()^x);}
+ AF4 AWaveXorF4(AF4 v,AU1 x){return WaveReadLaneAt(v,WaveGetLaneIndex()^x);}
+ AU1 AWaveXorU1(AU1 v,AU1 x){return WaveReadLaneAt(v,WaveGetLaneIndex()^x);}
+ AU2 AWaveXorU2(AU2 v,AU1 x){return WaveReadLaneAt(v,WaveGetLaneIndex()^x);}
+ AU3 AWaveXorU3(AU3 v,AU1 x){return WaveReadLaneAt(v,WaveGetLaneIndex()^x);}
+ AU4 AWaveXorU4(AU4 v,AU1 x){return WaveReadLaneAt(v,WaveGetLaneIndex()^x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_HALF
+ AH2 AWaveXorH2(AH2 v,AU1 x){return AH2_AU1(WaveReadLaneAt(AU1_AH2(v),WaveGetLaneIndex()^x));}
+ AH4 AWaveXorH4(AH4 v,AU1 x){return AH4_AU2(WaveReadLaneAt(AU2_AH4(v),WaveGetLaneIndex()^x));}
+ AW2 AWaveXorW2(AW2 v,AU1 x){return AW2_AU1(WaveReadLaneAt(AU1_AW2(v),WaveGetLaneIndex()^x));}
+  AW4 AWaveXorW4(AW4 v,AU1 x){return AW4_AU2(WaveReadLaneAt(AU2_AW4(v),WaveGetLaneIndex()^x));}
+ #endif
+ #endif
+//==============================================================================================================================
+#endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+//
+// GPU COMMON
+//
+//
+//==============================================================================================================================
+#ifdef A_GPU
+ // Negative and positive infinity.
+ #define A_INFP_F AF1_AU1(0x7f800000u)
+ #define A_INFN_F AF1_AU1(0xff800000u)
+//------------------------------------------------------------------------------------------------------------------------------
+ // Copy sign from 's' to positive 'd'.
+ AF1 ACpySgnF1(AF1 d,AF1 s){return AF1_AU1(AU1_AF1(d)|(AU1_AF1(s)&AU1_(0x80000000u)));}
+ AF2 ACpySgnF2(AF2 d,AF2 s){return AF2_AU2(AU2_AF2(d)|(AU2_AF2(s)&AU2_(0x80000000u)));}
+ AF3 ACpySgnF3(AF3 d,AF3 s){return AF3_AU3(AU3_AF3(d)|(AU3_AF3(s)&AU3_(0x80000000u)));}
+ AF4 ACpySgnF4(AF4 d,AF4 s){return AF4_AU4(AU4_AF4(d)|(AU4_AF4(s)&AU4_(0x80000000u)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Single operation to return a 0 or 1 mask (useful to use in lerp for branch free logic),
+ // m=NaN := 0
+ // m>=0 := 0
+ // m<0 := 1
+ // Uses the following useful floating point logic,
+ // saturate(+a*(-INF)==-INF) := 0
+ // saturate( 0*(-INF)== NaN) := 0
+ // saturate(-a*(-INF)==+INF) := 1
+ AF1 ASignedF1(AF1 m){return ASatF1(m*AF1_(A_INFN_F));}
+ AF2 ASignedF2(AF2 m){return ASatF2(m*AF2_(A_INFN_F));}
+ AF3 ASignedF3(AF3 m){return ASatF3(m*AF3_(A_INFN_F));}
+ AF4 ASignedF4(AF4 m){return ASatF4(m*AF4_(A_INFN_F));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AGtZeroF1(AF1 m){return ASatF1(m*AF1_(A_INFP_F));}
+ AF2 AGtZeroF2(AF2 m){return ASatF2(m*AF2_(A_INFP_F));}
+ AF3 AGtZeroF3(AF3 m){return ASatF3(m*AF3_(A_INFP_F));}
+ AF4 AGtZeroF4(AF4 m){return ASatF4(m*AF4_(A_INFP_F));}
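+ // Worked example (illustrative),
+ //   ASignedF1(-2.0) = saturate(-2.0*(-INF)) = saturate(+INF) = 1.0
+ //   ASignedF1( 3.0) = saturate( 3.0*(-INF)) = saturate(-INF) = 0.0
+ // so the result can be fed straight into a lerp as a branch free "is negative" select weight.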
+//==============================================================================================================================
+ #ifdef A_HALF
+ #ifdef A_HLSL_6_2
+ #define A_INFP_H AH1_AW1((uint16_t)0x7c00u)
+ #define A_INFN_H AH1_AW1((uint16_t)0xfc00u)
+ #else
+ #define A_INFP_H AH1_AW1(0x7c00u)
+ #define A_INFN_H AH1_AW1(0xfc00u)
+ #endif
+
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ACpySgnH1(AH1 d,AH1 s){return AH1_AW1(AW1_AH1(d)|(AW1_AH1(s)&AW1_(0x8000u)));}
+ AH2 ACpySgnH2(AH2 d,AH2 s){return AH2_AW2(AW2_AH2(d)|(AW2_AH2(s)&AW2_(0x8000u)));}
+ AH3 ACpySgnH3(AH3 d,AH3 s){return AH3_AW3(AW3_AH3(d)|(AW3_AH3(s)&AW3_(0x8000u)));}
+ AH4 ACpySgnH4(AH4 d,AH4 s){return AH4_AW4(AW4_AH4(d)|(AW4_AH4(s)&AW4_(0x8000u)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ASignedH1(AH1 m){return ASatH1(m*AH1_(A_INFN_H));}
+ AH2 ASignedH2(AH2 m){return ASatH2(m*AH2_(A_INFN_H));}
+ AH3 ASignedH3(AH3 m){return ASatH3(m*AH3_(A_INFN_H));}
+ AH4 ASignedH4(AH4 m){return ASatH4(m*AH4_(A_INFN_H));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AGtZeroH1(AH1 m){return ASatH1(m*AH1_(A_INFP_H));}
+ AH2 AGtZeroH2(AH2 m){return ASatH2(m*AH2_(A_INFP_H));}
+ AH3 AGtZeroH3(AH3 m){return ASatH3(m*AH3_(A_INFP_H));}
+ AH4 AGtZeroH4(AH4 m){return ASatH4(m*AH4_(A_INFP_H));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// [FIS] FLOAT INTEGER SORTABLE
+//------------------------------------------------------------------------------------------------------------------------------
+// Float to integer sortable.
+// - If sign bit=0, flip the sign bit (positives).
+// - If sign bit=1, flip all bits (negatives).
+// Integer sortable to float.
+// - If sign bit=1, flip the sign bit (positives).
+// - If sign bit=0, flip all bits (negatives).
+// Has nice side effects.
+// - Larger integers are more positive values.
+// - Float zero is mapped to center of integers (so clear to integer zero is a nice default for atomic max usage).
+// Burns 3 ops for conversion {shift,or,xor}.
+//==============================================================================================================================
+ AU1 AFisToU1(AU1 x){return x^(( AShrSU1(x,AU1_(31)))|AU1_(0x80000000));}
+ AU1 AFisFromU1(AU1 x){return x^((~AShrSU1(x,AU1_(31)))|AU1_(0x80000000));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Just adjust high 16-bit value (useful when upper part of 32-bit word is a 16-bit float value).
+ AU1 AFisToHiU1(AU1 x){return x^(( AShrSU1(x,AU1_(15)))|AU1_(0x80000000));}
+ AU1 AFisFromHiU1(AU1 x){return x^((~AShrSU1(x,AU1_(15)))|AU1_(0x80000000));}
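+ // Worked example (illustrative), using the IEEE bit patterns of -1.0 and +1.0,
+ //   AFisToU1(0xbf800000) = 0x407fffff   (sign bit was 1, all bits flipped)
+ //   AFisToU1(0x3f800000) = 0xbf800000   (sign bit was 0, only the sign bit flipped)
+ // so unsigned compares and atomics now order the results exactly like the source floats.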
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_HALF
+ AW1 AFisToW1(AW1 x){return x^(( AShrSW1(x,AW1_(15)))|AW1_(0x8000));}
+ AW1 AFisFromW1(AW1 x){return x^((~AShrSW1(x,AW1_(15)))|AW1_(0x8000));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW2 AFisToW2(AW2 x){return x^(( AShrSW2(x,AW2_(15)))|AW2_(0x8000));}
+ AW2 AFisFromW2(AW2 x){return x^((~AShrSW2(x,AW2_(15)))|AW2_(0x8000));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// [PERM] V_PERM_B32
+//------------------------------------------------------------------------------------------------------------------------------
+// Support for V_PERM_B32 started in the 3rd generation of GCN.
+//------------------------------------------------------------------------------------------------------------------------------
+// yyyyxxxx - The 'i' input.
+// 76543210
+// ========
+// HGFEDCBA - Naming on permutation.
+//------------------------------------------------------------------------------------------------------------------------------
+// TODO
+// ====
+// - Make sure compiler optimizes this.
+//==============================================================================================================================
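+ // Example (illustrative): for i.x=0x44332211 (bytes DCBA) and i.y=0x88776655 (bytes HGFE),
+ //   APerm0E0A(i) = 0x00550011, i.e. bytes {0,E,0,A} exactly as the name spells, high to low.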
+ #ifdef A_HALF
+ AU1 APerm0E0A(AU2 i){return((i.x )&0xffu)|((i.y<<16)&0xff0000u);}
+ AU1 APerm0F0B(AU2 i){return((i.x>> 8)&0xffu)|((i.y<< 8)&0xff0000u);}
+ AU1 APerm0G0C(AU2 i){return((i.x>>16)&0xffu)|((i.y )&0xff0000u);}
+ AU1 APerm0H0D(AU2 i){return((i.x>>24)&0xffu)|((i.y>> 8)&0xff0000u);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 APermHGFA(AU2 i){return((i.x )&0x000000ffu)|(i.y&0xffffff00u);}
+ AU1 APermHGFC(AU2 i){return((i.x>>16)&0x000000ffu)|(i.y&0xffffff00u);}
+ AU1 APermHGAE(AU2 i){return((i.x<< 8)&0x0000ff00u)|(i.y&0xffff00ffu);}
+ AU1 APermHGCE(AU2 i){return((i.x>> 8)&0x0000ff00u)|(i.y&0xffff00ffu);}
+ AU1 APermHAFE(AU2 i){return((i.x<<16)&0x00ff0000u)|(i.y&0xff00ffffu);}
+ AU1 APermHCFE(AU2 i){return((i.x )&0x00ff0000u)|(i.y&0xff00ffffu);}
+ AU1 APermAGFE(AU2 i){return((i.x<<24)&0xff000000u)|(i.y&0x00ffffffu);}
+ AU1 APermCGFE(AU2 i){return((i.x<< 8)&0xff000000u)|(i.y&0x00ffffffu);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 APermGCEA(AU2 i){return((i.x)&0x00ff00ffu)|((i.y<<8)&0xff00ff00u);}
+ AU1 APermGECA(AU2 i){return(((i.x)&0xffu)|((i.x>>8)&0xff00u)|((i.y<<16)&0xff0000u)|((i.y<<8)&0xff000000u));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// [BUC] BYTE UNSIGNED CONVERSION
+//------------------------------------------------------------------------------------------------------------------------------
+// Designed to use the optimal conversion, enables the scaling to possibly be factored into other computation.
+// Works on a range of {0 to A_BUC_<32,16>}, for <32-bit, and 16-bit> respectively.
+//------------------------------------------------------------------------------------------------------------------------------
+// OPCODE NOTES
+// ============
+// GCN does not do UNORM or SNORM for bytes in opcodes.
+// - V_CVT_F32_UBYTE{0,1,2,3} - Unsigned byte to float.
+// - V_CVT_PKACCUM_U8_F32 - Float to unsigned byte (does bit-field insert into 32-bit integer).
+// V_PERM_B32 does byte packing with ability to zero fill bytes as well.
+// - Can pull out byte values from two sources, and zero fill upper 8-bits of packed hi and lo.
+//------------------------------------------------------------------------------------------------------------------------------
+// BYTE : FLOAT - ABuc{0,1,2,3}{To,From}U1() - Designed for V_CVT_F32_UBYTE* and V_CVT_PKACCUM_U8_F32 ops.
+// ==== =====
+// 0 : 0
+// 1 : 1
+// ...
+// 255 : 255
+// : 256 (just outside the encoding range)
+//------------------------------------------------------------------------------------------------------------------------------
+// BYTE : FLOAT - ABuc{0,1,2,3}{To,From}U2() - Designed for 16-bit denormal tricks and V_PERM_B32.
+// ==== =====
+// 0 : 0
+// 1 : 1/512
+// 2 : 1/256
+// ...
+// 64 : 1/8
+// 128 : 1/4
+// 255 : 255/512
+// : 1/2 (just outside the encoding range)
+//------------------------------------------------------------------------------------------------------------------------------
+// OPTIMAL IMPLEMENTATIONS ON AMD ARCHITECTURES
+// ============================================
+// r=ABuc0FromU1(i)
+// V_CVT_F32_UBYTE0 r,i
+// --------------------------------------------
+// r=ABuc0ToU1(d,i)
+// V_CVT_PKACCUM_U8_F32 r,i,0,d
+// --------------------------------------------
+// d=ABuc0FromU2(i)
+// Where 'k0' is an SGPR with 0x0E0A
+// Where 'k1' is an SGPR with {32768.0} packed into the lower 16-bits
+// V_PERM_B32 d,i.x,i.y,k0
+// V_PK_FMA_F16 d,d,k1.x,0
+// --------------------------------------------
+// r=ABuc0ToU2(d,i)
+// Where 'k0' is an SGPR with {1.0/32768.0} packed into the lower 16-bits
+// Where 'k1' is an SGPR with 0x????
+// Where 'k2' is an SGPR with 0x????
+// V_PK_FMA_F16 i,i,k0.x,0
+// V_PERM_B32 r.x,i,i,k1
+// V_PERM_B32 r.y,i,i,k2
+//==============================================================================================================================
+ // Peak range for 32-bit and 16-bit operations.
+ #define A_BUC_32 (255.0)
+ #define A_BUC_16 (255.0/512.0)
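+ // Worked example of the 16-bit denormal trick (illustrative): byte 128 in the low 16 bits is
+ // the FP16 denormal 128*2^-24; multiplying by 32768.0 gives 128/512 = 1/4, matching the table
+ // above, so ABuc0FromU2() below is just a byte permute plus one packed multiply
+ // (V_PERM_B32 + V_PK_FMA_F16 in the note above).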
+//==============================================================================================================================
+ #if 1
+ // Designed to be one V_CVT_PKACCUM_U8_F32.
+ // The extra min is required to pattern match to V_CVT_PKACCUM_U8_F32.
+ AU1 ABuc0ToU1(AU1 d,AF1 i){return (d&0xffffff00u)|((min(AU1(i),255u) )&(0x000000ffu));}
+ AU1 ABuc1ToU1(AU1 d,AF1 i){return (d&0xffff00ffu)|((min(AU1(i),255u)<< 8)&(0x0000ff00u));}
+ AU1 ABuc2ToU1(AU1 d,AF1 i){return (d&0xff00ffffu)|((min(AU1(i),255u)<<16)&(0x00ff0000u));}
+ AU1 ABuc3ToU1(AU1 d,AF1 i){return (d&0x00ffffffu)|((min(AU1(i),255u)<<24)&(0xff000000u));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Designed to be one V_CVT_F32_UBYTE*.
+ AF1 ABuc0FromU1(AU1 i){return AF1((i )&255u);}
+ AF1 ABuc1FromU1(AU1 i){return AF1((i>> 8)&255u);}
+ AF1 ABuc2FromU1(AU1 i){return AF1((i>>16)&255u);}
+ AF1 ABuc3FromU1(AU1 i){return AF1((i>>24)&255u);}
+ #endif
+//==============================================================================================================================
+ #ifdef A_HALF
+ // Takes {x0,x1} and {y0,y1} and builds {{x0,y0},{x1,y1}}.
+ AW2 ABuc01ToW2(AH2 x,AH2 y){x*=AH2_(1.0/32768.0);y*=AH2_(1.0/32768.0);
+ return AW2_AU1(APermGCEA(AU2(AU1_AW2(AW2_AH2(x)),AU1_AW2(AW2_AH2(y)))));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Designed for 3 ops to do SOA to AOS and conversion.
+ AU2 ABuc0ToU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)));
+ return AU2(APermHGFA(AU2(d.x,b)),APermHGFC(AU2(d.y,b)));}
+ AU2 ABuc1ToU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)));
+ return AU2(APermHGAE(AU2(d.x,b)),APermHGCE(AU2(d.y,b)));}
+ AU2 ABuc2ToU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)));
+ return AU2(APermHAFE(AU2(d.x,b)),APermHCFE(AU2(d.y,b)));}
+ AU2 ABuc3ToU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)));
+ return AU2(APermAGFE(AU2(d.x,b)),APermCGFE(AU2(d.y,b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Designed for 2 ops to do both AOS to SOA, and conversion.
+ AH2 ABuc0FromU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0E0A(i)))*AH2_(32768.0);}
+ AH2 ABuc1FromU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0F0B(i)))*AH2_(32768.0);}
+ AH2 ABuc2FromU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0G0C(i)))*AH2_(32768.0);}
+ AH2 ABuc3FromU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0H0D(i)))*AH2_(32768.0);}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// [BSC] BYTE SIGNED CONVERSION
+//------------------------------------------------------------------------------------------------------------------------------
+// Similar to [BUC].
+// Works on a range of {-/+ A_BSC_<32,16>}, for <32-bit, and 16-bit> respectively.
+//------------------------------------------------------------------------------------------------------------------------------
+// ENCODING (without zero-based encoding)
+// ========
+// 0 = unused (can be used to mean something else)
+// 1 = lowest value
+// 128 = exact zero center (the zero-based [Zb] forms below store this as byte 0)
+// 255 = highest value
+//------------------------------------------------------------------------------------------------------------------------------
+// Zero-based [Zb] flips the MSB bit of the byte (making 128 "exact zero" actually zero).
+// This is useful if there is a desire for cleared values to decode as zero.
+//------------------------------------------------------------------------------------------------------------------------------
+// BYTE : FLOAT - ABsc{0,1,2,3}{To,From}U2() - Designed for 16-bit denormal tricks and V_PERM_B32.
+// ==== =====
+// 0 : -128/512 = -1/4 (unused)
+// 1 : -127/512
+// 2 : -126/512
+// ...
+// 128 : 0
+// ...
+// 255 : 127/512
+// : 1/4 (just outside the encoding range)
+//==============================================================================================================================
+ // Peak range for 32-bit and 16-bit operations.
+ #define A_BSC_32 (127.0)
+ #define A_BSC_16 (127.0/512.0)
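+ // Worked example (illustrative): a cleared word decodes to zero only in the zero-based [Zb] forms,
+ //   ABsc0FromZbU1(0u) = AF1(0x00u^0x80u)-128.0 = 0.0, while ABsc0FromU1(0u) = -128.0.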
+//==============================================================================================================================
+ #if 1
+ AU1 ABsc0ToU1(AU1 d,AF1 i){return (d&0xffffff00u)|((min(AU1(i+128.0),255u) )&(0x000000ffu));}
+ AU1 ABsc1ToU1(AU1 d,AF1 i){return (d&0xffff00ffu)|((min(AU1(i+128.0),255u)<< 8)&(0x0000ff00u));}
+ AU1 ABsc2ToU1(AU1 d,AF1 i){return (d&0xff00ffffu)|((min(AU1(i+128.0),255u)<<16)&(0x00ff0000u));}
+ AU1 ABsc3ToU1(AU1 d,AF1 i){return (d&0x00ffffffu)|((min(AU1(i+128.0),255u)<<24)&(0xff000000u));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 ABsc0ToZbU1(AU1 d,AF1 i){return ((d&0xffffff00u)|((min(AU1(trunc(i)+128.0),255u) )&(0x000000ffu)))^0x00000080u;}
+ AU1 ABsc1ToZbU1(AU1 d,AF1 i){return ((d&0xffff00ffu)|((min(AU1(trunc(i)+128.0),255u)<< 8)&(0x0000ff00u)))^0x00008000u;}
+ AU1 ABsc2ToZbU1(AU1 d,AF1 i){return ((d&0xff00ffffu)|((min(AU1(trunc(i)+128.0),255u)<<16)&(0x00ff0000u)))^0x00800000u;}
+ AU1 ABsc3ToZbU1(AU1 d,AF1 i){return ((d&0x00ffffffu)|((min(AU1(trunc(i)+128.0),255u)<<24)&(0xff000000u)))^0x80000000u;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 ABsc0FromU1(AU1 i){return AF1((i )&255u)-128.0;}
+ AF1 ABsc1FromU1(AU1 i){return AF1((i>> 8)&255u)-128.0;}
+ AF1 ABsc2FromU1(AU1 i){return AF1((i>>16)&255u)-128.0;}
+ AF1 ABsc3FromU1(AU1 i){return AF1((i>>24)&255u)-128.0;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 ABsc0FromZbU1(AU1 i){return AF1(((i )&255u)^0x80u)-128.0;}
+ AF1 ABsc1FromZbU1(AU1 i){return AF1(((i>> 8)&255u)^0x80u)-128.0;}
+ AF1 ABsc2FromZbU1(AU1 i){return AF1(((i>>16)&255u)^0x80u)-128.0;}
+ AF1 ABsc3FromZbU1(AU1 i){return AF1(((i>>24)&255u)^0x80u)-128.0;}
+ #endif
+//==============================================================================================================================
+ #ifdef A_HALF
+ // Takes {x0,x1} and {y0,y1} and builds {{x0,y0},{x1,y1}}.
+ AW2 ABsc01ToW2(AH2 x,AH2 y){x=x*AH2_(1.0/32768.0)+AH2_(0.25/32768.0);y=y*AH2_(1.0/32768.0)+AH2_(0.25/32768.0);
+ return AW2_AU1(APermGCEA(AU2(AU1_AW2(AW2_AH2(x)),AU1_AW2(AW2_AH2(y)))));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU2 ABsc0ToU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)+AH2_(0.25/32768.0)));
+ return AU2(APermHGFA(AU2(d.x,b)),APermHGFC(AU2(d.y,b)));}
+ AU2 ABsc1ToU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)+AH2_(0.25/32768.0)));
+ return AU2(APermHGAE(AU2(d.x,b)),APermHGCE(AU2(d.y,b)));}
+ AU2 ABsc2ToU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)+AH2_(0.25/32768.0)));
+ return AU2(APermHAFE(AU2(d.x,b)),APermHCFE(AU2(d.y,b)));}
+ AU2 ABsc3ToU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)+AH2_(0.25/32768.0)));
+ return AU2(APermAGFE(AU2(d.x,b)),APermCGFE(AU2(d.y,b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU2 ABsc0ToZbU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)+AH2_(0.25/32768.0)))^0x00800080u;
+ return AU2(APermHGFA(AU2(d.x,b)),APermHGFC(AU2(d.y,b)));}
+ AU2 ABsc1ToZbU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)+AH2_(0.25/32768.0)))^0x00800080u;
+ return AU2(APermHGAE(AU2(d.x,b)),APermHGCE(AU2(d.y,b)));}
+ AU2 ABsc2ToZbU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)+AH2_(0.25/32768.0)))^0x00800080u;
+ return AU2(APermHAFE(AU2(d.x,b)),APermHCFE(AU2(d.y,b)));}
+ AU2 ABsc3ToZbU2(AU2 d,AH2 i){AU1 b=AU1_AW2(AW2_AH2(i*AH2_(1.0/32768.0)+AH2_(0.25/32768.0)))^0x00800080u;
+ return AU2(APermAGFE(AU2(d.x,b)),APermCGFE(AU2(d.y,b)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH2 ABsc0FromU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0E0A(i)))*AH2_(32768.0)-AH2_(0.25);}
+ AH2 ABsc1FromU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0F0B(i)))*AH2_(32768.0)-AH2_(0.25);}
+ AH2 ABsc2FromU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0G0C(i)))*AH2_(32768.0)-AH2_(0.25);}
+ AH2 ABsc3FromU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0H0D(i)))*AH2_(32768.0)-AH2_(0.25);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH2 ABsc0FromZbU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0E0A(i)^0x00800080u))*AH2_(32768.0)-AH2_(0.25);}
+ AH2 ABsc1FromZbU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0F0B(i)^0x00800080u))*AH2_(32768.0)-AH2_(0.25);}
+ AH2 ABsc2FromZbU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0G0C(i)^0x00800080u))*AH2_(32768.0)-AH2_(0.25);}
+ AH2 ABsc3FromZbU2(AU2 i){return AH2_AW2(AW2_AU1(APerm0H0D(i)^0x00800080u))*AH2_(32768.0)-AH2_(0.25);}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// HALF APPROXIMATIONS
+//------------------------------------------------------------------------------------------------------------------------------
+// These support only positive inputs.
+// Did not see value yet in specialization for range.
+// Using quick testing, ended up mostly getting the same "best" approximation for various ranges.
+// With hardware that can co-execute transcendentals, the value in approximations could be less than expected.
+// However from a latency perspective, if execution of a transcendental is 4 clk, with no packed support, -> 8 clk total.
+// And co-execution would require a compiler interleaving a lot of independent work for packed usage.
+//------------------------------------------------------------------------------------------------------------------------------
+// The one Newton Raphson iteration form of rsq() was skipped (requires 6 ops total).
+// Same with sqrt(), as this could be x*rsq() (7 ops).
+//==============================================================================================================================
+ #ifdef A_HALF
+ // Minimize squared error across full positive range, 2 ops.
+ // The 0x1de2 based approximation maps {0 to 1} input to < 1 output.
+ AH1 APrxLoSqrtH1(AH1 a){return AH1_AW1((AW1_AH1(a)>>AW1_(1))+AW1_(0x1de2));}
+ AH2 APrxLoSqrtH2(AH2 a){return AH2_AW2((AW2_AH2(a)>>AW2_(1))+AW2_(0x1de2));}
+ AH3 APrxLoSqrtH3(AH3 a){return AH3_AW3((AW3_AH3(a)>>AW3_(1))+AW3_(0x1de2));}
+ AH4 APrxLoSqrtH4(AH4 a){return AH4_AW4((AW4_AH4(a)>>AW4_(1))+AW4_(0x1de2));}
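+ // Worked example (illustrative): APrxLoSqrtH1(4.0),
+ //   bits(4.0h)=0x4400, (0x4400>>1)+0x1de2 = 0x3fe2 ~= 1.971, versus the exact sqrt() result 2.0.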
+//------------------------------------------------------------------------------------------------------------------------------
+ // Lower precision estimation, 1 op.
+ // Minimize squared error across {smallest normal to 16384.0}.
+ AH1 APrxLoRcpH1(AH1 a){return AH1_AW1(AW1_(0x7784)-AW1_AH1(a));}
+ AH2 APrxLoRcpH2(AH2 a){return AH2_AW2(AW2_(0x7784)-AW2_AH2(a));}
+ AH3 APrxLoRcpH3(AH3 a){return AH3_AW3(AW3_(0x7784)-AW3_AH3(a));}
+ AH4 APrxLoRcpH4(AH4 a){return AH4_AW4(AW4_(0x7784)-AW4_AH4(a));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Medium precision estimation, one Newton Raphson iteration, 3 ops.
+ AH1 APrxMedRcpH1(AH1 a){AH1 b=AH1_AW1(AW1_(0x778d)-AW1_AH1(a));return b*(-b*a+AH1_(2.0));}
+ AH2 APrxMedRcpH2(AH2 a){AH2 b=AH2_AW2(AW2_(0x778d)-AW2_AH2(a));return b*(-b*a+AH2_(2.0));}
+ AH3 APrxMedRcpH3(AH3 a){AH3 b=AH3_AW3(AW3_(0x778d)-AW3_AH3(a));return b*(-b*a+AH3_(2.0));}
+ AH4 APrxMedRcpH4(AH4 a){AH4 b=AH4_AW4(AW4_(0x778d)-AW4_AH4(a));return b*(-b*a+AH4_(2.0));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Minimize squared error across {smallest normal to 16384.0}, 2 ops.
+ AH1 APrxLoRsqH1(AH1 a){return AH1_AW1(AW1_(0x59a3)-(AW1_AH1(a)>>AW1_(1)));}
+ AH2 APrxLoRsqH2(AH2 a){return AH2_AW2(AW2_(0x59a3)-(AW2_AH2(a)>>AW2_(1)));}
+ AH3 APrxLoRsqH3(AH3 a){return AH3_AW3(AW3_(0x59a3)-(AW3_AH3(a)>>AW3_(1)));}
+ AH4 APrxLoRsqH4(AH4 a){return AH4_AW4(AW4_(0x59a3)-(AW4_AH4(a)>>AW4_(1)));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// FLOAT APPROXIMATIONS
+//------------------------------------------------------------------------------------------------------------------------------
+// Michal Drobot has an excellent presentation on these: "Low Level Optimizations For GCN",
+// - Idea dates back to SGI, then to Quake 3, etc.
+// - https://michaldrobot.files.wordpress.com/2014/05/gcn_alu_opt_digitaldragons2014.pdf
+// - sqrt(x)=rsqrt(x)*x
+// - rcp(x)=rsqrt(x)*rsqrt(x) for positive x
+// - https://github.com/michaldrobot/ShaderFastLibs/blob/master/ShaderFastMathLib.h
+//------------------------------------------------------------------------------------------------------------------------------
+// These below are from perhaps less complete searching for optimal.
+// Used FP16 normal range for testing with +4096 32-bit step size for sampling error.
+// So these match up well with the half approximations.
+//==============================================================================================================================
+ AF1 APrxLoSqrtF1(AF1 a){return AF1_AU1((AU1_AF1(a)>>AU1_(1))+AU1_(0x1fbc4639));}
+ AF1 APrxLoRcpF1(AF1 a){return AF1_AU1(AU1_(0x7ef07ebb)-AU1_AF1(a));}
+ AF1 APrxMedRcpF1(AF1 a){AF1 b=AF1_AU1(AU1_(0x7ef19fff)-AU1_AF1(a));return b*(-b*a+AF1_(2.0));}
+ AF1 APrxLoRsqF1(AF1 a){return AF1_AU1(AU1_(0x5f347d74)-(AU1_AF1(a)>>AU1_(1)));}
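+ // Worked example (illustrative): APrxLoRcpF1(2.0),
+ //   bits(2.0f)=0x40000000, 0x7ef07ebb-0x40000000 = 0x3ef07ebb ~= 0.4697, versus the exact 0.5.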
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 APrxLoSqrtF2(AF2 a){return AF2_AU2((AU2_AF2(a)>>AU2_(1))+AU2_(0x1fbc4639));}
+ AF2 APrxLoRcpF2(AF2 a){return AF2_AU2(AU2_(0x7ef07ebb)-AU2_AF2(a));}
+ AF2 APrxMedRcpF2(AF2 a){AF2 b=AF2_AU2(AU2_(0x7ef19fff)-AU2_AF2(a));return b*(-b*a+AF2_(2.0));}
+ AF2 APrxLoRsqF2(AF2 a){return AF2_AU2(AU2_(0x5f347d74)-(AU2_AF2(a)>>AU2_(1)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF3 APrxLoSqrtF3(AF3 a){return AF3_AU3((AU3_AF3(a)>>AU3_(1))+AU3_(0x1fbc4639));}
+ AF3 APrxLoRcpF3(AF3 a){return AF3_AU3(AU3_(0x7ef07ebb)-AU3_AF3(a));}
+ AF3 APrxMedRcpF3(AF3 a){AF3 b=AF3_AU3(AU3_(0x7ef19fff)-AU3_AF3(a));return b*(-b*a+AF3_(2.0));}
+ AF3 APrxLoRsqF3(AF3 a){return AF3_AU3(AU3_(0x5f347d74)-(AU3_AF3(a)>>AU3_(1)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF4 APrxLoSqrtF4(AF4 a){return AF4_AU4((AU4_AF4(a)>>AU4_(1))+AU4_(0x1fbc4639));}
+ AF4 APrxLoRcpF4(AF4 a){return AF4_AU4(AU4_(0x7ef07ebb)-AU4_AF4(a));}
+ AF4 APrxMedRcpF4(AF4 a){AF4 b=AF4_AU4(AU4_(0x7ef19fff)-AU4_AF4(a));return b*(-b*a+AF4_(2.0));}
+ AF4 APrxLoRsqF4(AF4 a){return AF4_AU4(AU4_(0x5f347d74)-(AU4_AF4(a)>>AU4_(1)));}
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// PQ APPROXIMATIONS
+//------------------------------------------------------------------------------------------------------------------------------
+// PQ is very close to x^(1/8). The functions below use the fast float approximation method to do
+// PQ<~>Gamma2 (4th power and fast 4th root) and PQ<~>Linear (8th power and fast 8th root). Maximum error is ~0.2%.
+//==============================================================================================================================
+// Helpers
+ AF1 Quart(AF1 a) { a = a * a; return a * a; }
+ AF1 Oct(AF1 a) { a = a * a; a = a * a; return a * a; }
+ AF2 Quart(AF2 a) { a = a * a; return a * a; }
+ AF2 Oct(AF2 a) { a = a * a; a = a * a; return a * a; }
+ AF3 Quart(AF3 a) { a = a * a; return a * a; }
+ AF3 Oct(AF3 a) { a = a * a; a = a * a; return a * a; }
+ AF4 Quart(AF4 a) { a = a * a; return a * a; }
+ AF4 Oct(AF4 a) { a = a * a; a = a * a; return a * a; }
+ //------------------------------------------------------------------------------------------------------------------------------
+ AF1 APrxPQToGamma2(AF1 a) { return Quart(a); }
+ AF1 APrxPQToLinear(AF1 a) { return Oct(a); }
+ AF1 APrxLoGamma2ToPQ(AF1 a) { return AF1_AU1((AU1_AF1(a) >> AU1_(2)) + AU1_(0x2F9A4E46)); }
+ AF1 APrxMedGamma2ToPQ(AF1 a) { AF1 b = AF1_AU1((AU1_AF1(a) >> AU1_(2)) + AU1_(0x2F9A4E46)); AF1 b4 = Quart(b); return b - b * (b4 - a) / (AF1_(4.0) * b4); }
+ AF1 APrxHighGamma2ToPQ(AF1 a) { return sqrt(sqrt(a)); }
+ AF1 APrxLoLinearToPQ(AF1 a) { return AF1_AU1((AU1_AF1(a) >> AU1_(3)) + AU1_(0x378D8723)); }
+ AF1 APrxMedLinearToPQ(AF1 a) { AF1 b = AF1_AU1((AU1_AF1(a) >> AU1_(3)) + AU1_(0x378D8723)); AF1 b8 = Oct(b); return b - b * (b8 - a) / (AF1_(8.0) * b8); }
+ AF1 APrxHighLinearToPQ(AF1 a) { return sqrt(sqrt(sqrt(a))); }
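+ // Example (illustrative): Quart() composed with sqrt(sqrt()) is the identity, so
+ //   APrxPQToGamma2(APrxHighGamma2ToPQ(x)) == x up to rounding,
+ // while the Lo and Med forms trade that exactness for the bit-shift approximations above.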
+ //------------------------------------------------------------------------------------------------------------------------------
+ AF2 APrxPQToGamma2(AF2 a) { return Quart(a); }
+ AF2 APrxPQToLinear(AF2 a) { return Oct(a); }
+ AF2 APrxLoGamma2ToPQ(AF2 a) { return AF2_AU2((AU2_AF2(a) >> AU2_(2)) + AU2_(0x2F9A4E46)); }
+ AF2 APrxMedGamma2ToPQ(AF2 a) { AF2 b = AF2_AU2((AU2_AF2(a) >> AU2_(2)) + AU2_(0x2F9A4E46)); AF2 b4 = Quart(b); return b - b * (b4 - a) / (AF1_(4.0) * b4); }
+ AF2 APrxHighGamma2ToPQ(AF2 a) { return sqrt(sqrt(a)); }
+ AF2 APrxLoLinearToPQ(AF2 a) { return AF2_AU2((AU2_AF2(a) >> AU2_(3)) + AU2_(0x378D8723)); }
+ AF2 APrxMedLinearToPQ(AF2 a) { AF2 b = AF2_AU2((AU2_AF2(a) >> AU2_(3)) + AU2_(0x378D8723)); AF2 b8 = Oct(b); return b - b * (b8 - a) / (AF1_(8.0) * b8); }
+ AF2 APrxHighLinearToPQ(AF2 a) { return sqrt(sqrt(sqrt(a))); }
+ //------------------------------------------------------------------------------------------------------------------------------
+ AF3 APrxPQToGamma2(AF3 a) { return Quart(a); }
+ AF3 APrxPQToLinear(AF3 a) { return Oct(a); }
+ AF3 APrxLoGamma2ToPQ(AF3 a) { return AF3_AU3((AU3_AF3(a) >> AU3_(2)) + AU3_(0x2F9A4E46)); }
+ AF3 APrxMedGamma2ToPQ(AF3 a) { AF3 b = AF3_AU3((AU3_AF3(a) >> AU3_(2)) + AU3_(0x2F9A4E46)); AF3 b4 = Quart(b); return b - b * (b4 - a) / (AF1_(4.0) * b4); }
+ AF3 APrxHighGamma2ToPQ(AF3 a) { return sqrt(sqrt(a)); }
+ AF3 APrxLoLinearToPQ(AF3 a) { return AF3_AU3((AU3_AF3(a) >> AU3_(3)) + AU3_(0x378D8723)); }
+ AF3 APrxMedLinearToPQ(AF3 a) { AF3 b = AF3_AU3((AU3_AF3(a) >> AU3_(3)) + AU3_(0x378D8723)); AF3 b8 = Oct(b); return b - b * (b8 - a) / (AF1_(8.0) * b8); }
+ AF3 APrxHighLinearToPQ(AF3 a) { return sqrt(sqrt(sqrt(a))); }
+ //------------------------------------------------------------------------------------------------------------------------------
+ AF4 APrxPQToGamma2(AF4 a) { return Quart(a); }
+ AF4 APrxPQToLinear(AF4 a) { return Oct(a); }
+ AF4 APrxLoGamma2ToPQ(AF4 a) { return AF4_AU4((AU4_AF4(a) >> AU4_(2)) + AU4_(0x2F9A4E46)); }
+ AF4 APrxMedGamma2ToPQ(AF4 a) { AF4 b = AF4_AU4((AU4_AF4(a) >> AU4_(2)) + AU4_(0x2F9A4E46)); AF4 b4 = Quart(b); return b - b * (b4 - a) / (AF1_(4.0) * b4); }
+ AF4 APrxHighGamma2ToPQ(AF4 a) { return sqrt(sqrt(a)); }
+ AF4 APrxLoLinearToPQ(AF4 a) { return AF4_AU4((AU4_AF4(a) >> AU4_(3)) + AU4_(0x378D8723)); }
+ AF4 APrxMedLinearToPQ(AF4 a) { AF4 b = AF4_AU4((AU4_AF4(a) >> AU4_(3)) + AU4_(0x378D8723)); AF4 b8 = Oct(b); return b - b * (b8 - a) / (AF1_(8.0) * b8); }
+ AF4 APrxHighLinearToPQ(AF4 a) { return sqrt(sqrt(sqrt(a))); }
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// PARABOLIC SIN & COS
+//------------------------------------------------------------------------------------------------------------------------------
+// Approximate answers to transcendental questions.
+//------------------------------------------------------------------------------------------------------------------------------
+//==============================================================================================================================
+ #if 1
+ // Valid input range is {-1 to 1} representing {0 to 2 pi}.
+ // Output range is {-1/4 to 1/4} representing {-1 to 1}.
+ AF1 APSinF1(AF1 x){return x*abs(x)-x;} // MAD.
+ AF2 APSinF2(AF2 x){return x*abs(x)-x;}
+ AF1 APCosF1(AF1 x){x=AFractF1(x*AF1_(0.5)+AF1_(0.75));x=x*AF1_(2.0)-AF1_(1.0);return APSinF1(x);} // 3x MAD, FRACT
+ AF2 APCosF2(AF2 x){x=AFractF2(x*AF2_(0.5)+AF2_(0.75));x=x*AF2_(2.0)-AF2_(1.0);return APSinF2(x);}
+ AF2 APSinCosF1(AF1 x){AF1 y=AFractF1(x*AF1_(0.5)+AF1_(0.75));y=y*AF1_(2.0)-AF1_(1.0);return APSinF2(AF2(x,y));}
+ #endif
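+ // Worked example (illustrative): x=-0.5 represents pi/2 on the {-1 to 1} = {0 to 2 pi} mapping,
+ //   APSinF1(-0.5) = -0.5*abs(-0.5)-(-0.5) = 0.25, i.e. +1.0 once the {-1/4 to 1/4} output is scaled.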
+//------------------------------------------------------------------------------------------------------------------------------
+ #ifdef A_HALF
+ // For a packed {sin,cos} pair,
+ // - Native takes 16 clocks and 4 issue slots (no packed transcendentals).
+ // - Parabolic takes 8 clocks and 8 issue slots (only fract is non-packed).
+ AH1 APSinH1(AH1 x){return x*abs(x)-x;}
+ AH2 APSinH2(AH2 x){return x*abs(x)-x;} // AND,FMA
+ AH1 APCosH1(AH1 x){x=AFractH1(x*AH1_(0.5)+AH1_(0.75));x=x*AH1_(2.0)-AH1_(1.0);return APSinH1(x);}
+ AH2 APCosH2(AH2 x){x=AFractH2(x*AH2_(0.5)+AH2_(0.75));x=x*AH2_(2.0)-AH2_(1.0);return APSinH2(x);} // 3x FMA, 2xFRACT, AND
+ AH2 APSinCosH1(AH1 x){AH1 y=AFractH1(x*AH1_(0.5)+AH1_(0.75));y=y*AH1_(2.0)-AH1_(1.0);return APSinH2(AH2(x,y));}
+ #endif
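+ // Example usage (an illustrative sketch; 'theta', an angle in radians within {0 to 2 pi}, is an assumed input
+ // and not part of this header). Remap {0 to 2 pi} onto the {-1 to 1} input domain, then scale the
+ // {-1/4 to 1/4} output by 4,
+ //  AF1 s=AF1_(4.0)*APSinF1(theta*AF1_(0.318309886)-AF1_(1.0)); // s ~= sin(theta)
+ //  AF1 c=AF1_(4.0)*APCosF1(theta*AF1_(0.318309886)-AF1_(1.0)); // c ~= cos(theta)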
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// [ZOL] ZERO ONE LOGIC
+//------------------------------------------------------------------------------------------------------------------------------
+// Conditional-free (branchless) logic designed for easy 16-bit packing, and for backwards porting to 32-bit.
+//------------------------------------------------------------------------------------------------------------------------------
+// 0 := false
+// 1 := true
+//------------------------------------------------------------------------------------------------------------------------------
+// AndNot(x,y) -> !(x&y) .... One op.
+// AndOr(x,y,z) -> (x&y)|z ... One op.
+// GtZero(x) -> x>0.0 ..... One op.
+// Sel(x,y,z) -> x?y:z ..... Two ops, has no precision loss.
+// Signed(x) -> x<0.0 ..... One op.
+// ZeroPass(x,y) -> x?0:y ..... Two ops, 'y' is a pass through safe for aliasing as integer.
+//------------------------------------------------------------------------------------------------------------------------------
+// OPTIMIZATION NOTES
+// ==================
+// - On Vega, to use 2 constants in a packed op, pass them in as one AW2 or one AH2 'k.xy' and use them as 'k.xx' and 'k.yy'.
+// For example 'a.xy*k.xx+k.yy'.
+//==============================================================================================================================
+ #if 1
+ AU1 AZolAndU1(AU1 x,AU1 y){return min(x,y);}
+ AU2 AZolAndU2(AU2 x,AU2 y){return min(x,y);}
+ AU3 AZolAndU3(AU3 x,AU3 y){return min(x,y);}
+ AU4 AZolAndU4(AU4 x,AU4 y){return min(x,y);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AZolNotU1(AU1 x){return x^AU1_(1);}
+ AU2 AZolNotU2(AU2 x){return x^AU2_(1);}
+ AU3 AZolNotU3(AU3 x){return x^AU3_(1);}
+ AU4 AZolNotU4(AU4 x){return x^AU4_(1);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AU1 AZolOrU1(AU1 x,AU1 y){return max(x,y);}
+ AU2 AZolOrU2(AU2 x,AU2 y){return max(x,y);}
+ AU3 AZolOrU3(AU3 x,AU3 y){return max(x,y);}
+ AU4 AZolOrU4(AU4 x,AU4 y){return max(x,y);}
+//==============================================================================================================================
+ AU1 AZolF1ToU1(AF1 x){return AU1(x);}
+ AU2 AZolF2ToU2(AF2 x){return AU2(x);}
+ AU3 AZolF3ToU3(AF3 x){return AU3(x);}
+ AU4 AZolF4ToU4(AF4 x){return AU4(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ // 2 ops, denormals don't work in 32-bit on PC (and if they are enabled, OMOD is disabled).
+ AU1 AZolNotF1ToU1(AF1 x){return AU1(AF1_(1.0)-x);}
+ AU2 AZolNotF2ToU2(AF2 x){return AU2(AF2_(1.0)-x);}
+ AU3 AZolNotF3ToU3(AF3 x){return AU3(AF3_(1.0)-x);}
+ AU4 AZolNotF4ToU4(AF4 x){return AU4(AF4_(1.0)-x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AZolU1ToF1(AU1 x){return AF1(x);}
+ AF2 AZolU2ToF2(AU2 x){return AF2(x);}
+ AF3 AZolU3ToF3(AU3 x){return AF3(x);}
+ AF4 AZolU4ToF4(AU4 x){return AF4(x);}
+//==============================================================================================================================
+ AF1 AZolAndF1(AF1 x,AF1 y){return min(x,y);}
+ AF2 AZolAndF2(AF2 x,AF2 y){return min(x,y);}
+ AF3 AZolAndF3(AF3 x,AF3 y){return min(x,y);}
+ AF4 AZolAndF4(AF4 x,AF4 y){return min(x,y);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 ASolAndNotF1(AF1 x,AF1 y){return (-x)*y+AF1_(1.0);}
+ AF2 ASolAndNotF2(AF2 x,AF2 y){return (-x)*y+AF2_(1.0);}
+ AF3 ASolAndNotF3(AF3 x,AF3 y){return (-x)*y+AF3_(1.0);}
+ AF4 ASolAndNotF4(AF4 x,AF4 y){return (-x)*y+AF4_(1.0);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AZolAndOrF1(AF1 x,AF1 y,AF1 z){return ASatF1(x*y+z);}
+ AF2 AZolAndOrF2(AF2 x,AF2 y,AF2 z){return ASatF2(x*y+z);}
+ AF3 AZolAndOrF3(AF3 x,AF3 y,AF3 z){return ASatF3(x*y+z);}
+ AF4 AZolAndOrF4(AF4 x,AF4 y,AF4 z){return ASatF4(x*y+z);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AZolGtZeroF1(AF1 x){return ASatF1(x*AF1_(A_INFP_F));}
+ AF2 AZolGtZeroF2(AF2 x){return ASatF2(x*AF2_(A_INFP_F));}
+ AF3 AZolGtZeroF3(AF3 x){return ASatF3(x*AF3_(A_INFP_F));}
+ AF4 AZolGtZeroF4(AF4 x){return ASatF4(x*AF4_(A_INFP_F));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AZolNotF1(AF1 x){return AF1_(1.0)-x;}
+ AF2 AZolNotF2(AF2 x){return AF2_(1.0)-x;}
+ AF3 AZolNotF3(AF3 x){return AF3_(1.0)-x;}
+ AF4 AZolNotF4(AF4 x){return AF4_(1.0)-x;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AZolOrF1(AF1 x,AF1 y){return max(x,y);}
+ AF2 AZolOrF2(AF2 x,AF2 y){return max(x,y);}
+ AF3 AZolOrF3(AF3 x,AF3 y){return max(x,y);}
+ AF4 AZolOrF4(AF4 x,AF4 y){return max(x,y);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AZolSelF1(AF1 x,AF1 y,AF1 z){AF1 r=(-x)*z+z;return x*y+r;}
+ AF2 AZolSelF2(AF2 x,AF2 y,AF2 z){AF2 r=(-x)*z+z;return x*y+r;}
+ AF3 AZolSelF3(AF3 x,AF3 y,AF3 z){AF3 r=(-x)*z+z;return x*y+r;}
+ AF4 AZolSelF4(AF4 x,AF4 y,AF4 z){AF4 r=(-x)*z+z;return x*y+r;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AZolSignedF1(AF1 x){return ASatF1(x*AF1_(A_INFN_F));}
+ AF2 AZolSignedF2(AF2 x){return ASatF2(x*AF2_(A_INFN_F));}
+ AF3 AZolSignedF3(AF3 x){return ASatF3(x*AF3_(A_INFN_F));}
+ AF4 AZolSignedF4(AF4 x){return ASatF4(x*AF4_(A_INFN_F));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AZolZeroPassF1(AF1 x,AF1 y){return AF1_AU1((AU1_AF1(x)!=AU1_(0))?AU1_(0):AU1_AF1(y));}
+ AF2 AZolZeroPassF2(AF2 x,AF2 y){return AF2_AU2((AU2_AF2(x)!=AU2_(0))?AU2_(0):AU2_AF2(y));}
+ AF3 AZolZeroPassF3(AF3 x,AF3 y){return AF3_AU3((AU3_AF3(x)!=AU3_(0))?AU3_(0):AU3_AF3(y));}
+ AF4 AZolZeroPassF4(AF4 x,AF4 y){return AF4_AU4((AU4_AF4(x)!=AU4_(0))?AU4_(0):AU4_AF4(y));}
+ #endif
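+ // Example usage (an illustrative sketch; 'x', 'y' and 'z' are assumed AF1 inputs, not part of this header),
+ //  AF1 r=AZolSelF1(AZolGtZeroF1(x),y,z); // Branch-free r = (x>0.0) ? y : z.
+ //  AF1 b=AZolAndF1(AZolGtZeroF1(x),AZolSignedF1(y)); // Branch-free b = ((x>0.0) && (y<0.0)) ? 1.0 : 0.0.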
+//==============================================================================================================================
+ #ifdef A_HALF
+ AW1 AZolAndW1(AW1 x,AW1 y){return min(x,y);}
+ AW2 AZolAndW2(AW2 x,AW2 y){return min(x,y);}
+ AW3 AZolAndW3(AW3 x,AW3 y){return min(x,y);}
+ AW4 AZolAndW4(AW4 x,AW4 y){return min(x,y);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AZolNotW1(AW1 x){return x^AW1_(1);}
+ AW2 AZolNotW2(AW2 x){return x^AW2_(1);}
+ AW3 AZolNotW3(AW3 x){return x^AW3_(1);}
+ AW4 AZolNotW4(AW4 x){return x^AW4_(1);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AW1 AZolOrW1(AW1 x,AW1 y){return max(x,y);}
+ AW2 AZolOrW2(AW2 x,AW2 y){return max(x,y);}
+ AW3 AZolOrW3(AW3 x,AW3 y){return max(x,y);}
+ AW4 AZolOrW4(AW4 x,AW4 y){return max(x,y);}
+//==============================================================================================================================
+ // Uses denormal trick.
+ AW1 AZolH1ToW1(AH1 x){return AW1_AH1(x*AH1_AW1(AW1_(1)));}
+ AW2 AZolH2ToW2(AH2 x){return AW2_AH2(x*AH2_AW2(AW2_(1)));}
+ AW3 AZolH3ToW3(AH3 x){return AW3_AH3(x*AH3_AW3(AW3_(1)));}
+ AW4 AZolH4ToW4(AH4 x){return AW4_AH4(x*AH4_AW4(AW4_(1)));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // AMD arch lacks a packed conversion opcode.
+ AH1 AZolW1ToH1(AW1 x){return AH1_AW1(x*AW1_AH1(AH1_(1.0)));}
+ AH2 AZolW2ToH2(AW2 x){return AH2_AW2(x*AW2_AH2(AH2_(1.0)));}
+ AH3 AZolW1ToH3(AW3 x){return AH3_AW3(x*AW3_AH3(AH3_(1.0)));}
+ AH4 AZolW2ToH4(AW4 x){return AH4_AW4(x*AW4_AH4(AH4_(1.0)));}
+//==============================================================================================================================
+ AH1 AZolAndH1(AH1 x,AH1 y){return min(x,y);}
+ AH2 AZolAndH2(AH2 x,AH2 y){return min(x,y);}
+ AH3 AZolAndH3(AH3 x,AH3 y){return min(x,y);}
+ AH4 AZolAndH4(AH4 x,AH4 y){return min(x,y);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 ASolAndNotH1(AH1 x,AH1 y){return (-x)*y+AH1_(1.0);}
+ AH2 ASolAndNotH2(AH2 x,AH2 y){return (-x)*y+AH2_(1.0);}
+ AH3 ASolAndNotH3(AH3 x,AH3 y){return (-x)*y+AH3_(1.0);}
+ AH4 ASolAndNotH4(AH4 x,AH4 y){return (-x)*y+AH4_(1.0);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AZolAndOrH1(AH1 x,AH1 y,AH1 z){return ASatH1(x*y+z);}
+ AH2 AZolAndOrH2(AH2 x,AH2 y,AH2 z){return ASatH2(x*y+z);}
+ AH3 AZolAndOrH3(AH3 x,AH3 y,AH3 z){return ASatH3(x*y+z);}
+ AH4 AZolAndOrH4(AH4 x,AH4 y,AH4 z){return ASatH4(x*y+z);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AZolGtZeroH1(AH1 x){return ASatH1(x*AH1_(A_INFP_H));}
+ AH2 AZolGtZeroH2(AH2 x){return ASatH2(x*AH2_(A_INFP_H));}
+ AH3 AZolGtZeroH3(AH3 x){return ASatH3(x*AH3_(A_INFP_H));}
+ AH4 AZolGtZeroH4(AH4 x){return ASatH4(x*AH4_(A_INFP_H));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AZolNotH1(AH1 x){return AH1_(1.0)-x;}
+ AH2 AZolNotH2(AH2 x){return AH2_(1.0)-x;}
+ AH3 AZolNotH3(AH3 x){return AH3_(1.0)-x;}
+ AH4 AZolNotH4(AH4 x){return AH4_(1.0)-x;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AZolOrH1(AH1 x,AH1 y){return max(x,y);}
+ AH2 AZolOrH2(AH2 x,AH2 y){return max(x,y);}
+ AH3 AZolOrH3(AH3 x,AH3 y){return max(x,y);}
+ AH4 AZolOrH4(AH4 x,AH4 y){return max(x,y);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AZolSelH1(AH1 x,AH1 y,AH1 z){AH1 r=(-x)*z+z;return x*y+r;}
+ AH2 AZolSelH2(AH2 x,AH2 y,AH2 z){AH2 r=(-x)*z+z;return x*y+r;}
+ AH3 AZolSelH3(AH3 x,AH3 y,AH3 z){AH3 r=(-x)*z+z;return x*y+r;}
+ AH4 AZolSelH4(AH4 x,AH4 y,AH4 z){AH4 r=(-x)*z+z;return x*y+r;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AZolSignedH1(AH1 x){return ASatH1(x*AH1_(A_INFN_H));}
+ AH2 AZolSignedH2(AH2 x){return ASatH2(x*AH2_(A_INFN_H));}
+ AH3 AZolSignedH3(AH3 x){return ASatH3(x*AH3_(A_INFN_H));}
+ AH4 AZolSignedH4(AH4 x){return ASatH4(x*AH4_(A_INFN_H));}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// COLOR CONVERSIONS
+//------------------------------------------------------------------------------------------------------------------------------
+// These are all linear to/from some other space (where 'linear' has been shortened out of the function name).
+// So 'ToGamma' is 'LinearToGamma', and 'FromGamma' is 'LinearFromGamma'.
+// These are branch free implementations.
+// The AToSrgbF1() function is useful for stores for compute shaders for GPUs without hardware linear->sRGB store conversion.
+//------------------------------------------------------------------------------------------------------------------------------
+// TRANSFER FUNCTIONS
+// ==================
+// 709 ..... Rec709 used for some HDTVs
+// Gamma ... Typically 2.2 for some PC displays, or 2.4-2.5 for CRTs, or 2.2 FreeSync2 native
+// Pq ...... PQ native for HDR10
+// Srgb .... The sRGB output, typical of PC displays, useful for 10-bit output, or storing to 8-bit UNORM without SRGB type
+// Two ..... Gamma 2.0, fastest conversion (useful for intermediate pass approximations)
+// Three ... Gamma 3.0, less fast, but good for HDR.
+//------------------------------------------------------------------------------------------------------------------------------
+// KEEPING TO SPEC
+// ===============
+// Both Rec.709 and sRGB have a linear segment which, as spec'ed, would intersect the curved segment 2 times.
+// (a.) For 8-bit sRGB, steps {0 to 10.3} are in the linear region (4% of the encoding range).
+// (b.) For 8-bit 709, steps {0 to 20.7} are in the linear region (8% of the encoding range).
+// There is also a slight step in the transition regions;
+// the limited precision of the coefficients in the spec is the likely cause.
+// The main use case of the sRGB code is to do the linear->sRGB conversion in a compute shader before a store.
+// This works around the lack of hardware support (typically only the ROP does the conversion for free).
+// To "correct" the linear segment would introduce error, because the hardware decode of sRGB->linear is fixed (and free).
+// So this header keeps to the spec.
+// For linear->sRGB transforms, the linear segment in some respects reduces error, because rounding in that region is linear.
+// Rounding in the curved region, in hardware (and in fast software code), introduces error because the rounding is non-linear.
+//------------------------------------------------------------------------------------------------------------------------------
+// FOR PQ
+// ======
+// Both input and output are in {0.0-1.0}, where an output of 1.0 represents 10000.0 cd/m^2.
+// All constants are only specified to FP32 precision.
+// External PQ source reference,
+// - https://github.com/ampas/aces-dev/blob/master/transforms/ctl/utilities/ACESlib.Utilities_Color.a1.0.1.ctl
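+// Example of the normalization: a 100.0 cd/m^2 level corresponds to 100.0/10000.0 = 0.01 on the linear side.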
+//------------------------------------------------------------------------------------------------------------------------------
+// PACKED VERSIONS
+// ===============
+// These are the A*H2() functions.
+// There are no PQ functions, as FP16 seemed not to have enough precision for the conversion.
+// The remaining functions are "good enough" for 8-bit, and maybe 10-bit if not concerned about a few 1-bit errors.
+// Precision is lowest in the 709 conversion, higher in sRGB, higher still in Two and Gamma (when using 2.2 at least).
+//------------------------------------------------------------------------------------------------------------------------------
+// NOTES
+// =====
+// Depending on the use case, it could be faster to do PQ conversions in ALU or via a texture lookup.
+//==============================================================================================================================
+ #if 1
+ AF1 ATo709F1(AF1 c){AF3 j=AF3(0.018*4.5,4.5,0.45);AF2 k=AF2(1.099,-0.099);
+ return clamp(j.x ,c*j.y ,pow(c,j.z )*k.x +k.y );}
+ AF2 ATo709F2(AF2 c){AF3 j=AF3(0.018*4.5,4.5,0.45);AF2 k=AF2(1.099,-0.099);
+ return clamp(j.xx ,c*j.yy ,pow(c,j.zz )*k.xx +k.yy );}
+ AF3 ATo709F3(AF3 c){AF3 j=AF3(0.018*4.5,4.5,0.45);AF2 k=AF2(1.099,-0.099);
+ return clamp(j.xxx,c*j.yyy,pow(c,j.zzz)*k.xxx+k.yyy);}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Note 'rcpX' is '1/x', where the 'x' is what would be used in AFromGamma().
+ AF1 AToGammaF1(AF1 c,AF1 rcpX){return pow(c,AF1_(rcpX));}
+ AF2 AToGammaF2(AF2 c,AF1 rcpX){return pow(c,AF2_(rcpX));}
+ AF3 AToGammaF3(AF3 c,AF1 rcpX){return pow(c,AF3_(rcpX));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AToPqF1(AF1 x){AF1 p=pow(x,AF1_(0.159302));
+ return pow((AF1_(0.835938)+AF1_(18.8516)*p)/(AF1_(1.0)+AF1_(18.6875)*p),AF1_(78.8438));}
+ AF2 AToPqF1(AF2 x){AF2 p=pow(x,AF2_(0.159302));
+ return pow((AF2_(0.835938)+AF2_(18.8516)*p)/(AF2_(1.0)+AF2_(18.6875)*p),AF2_(78.8438));}
+ AF3 AToPqF1(AF3 x){AF3 p=pow(x,AF3_(0.159302));
+ return pow((AF3_(0.835938)+AF3_(18.8516)*p)/(AF3_(1.0)+AF3_(18.6875)*p),AF3_(78.8438));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AToSrgbF1(AF1 c){AF3 j=AF3(0.0031308*12.92,12.92,1.0/2.4);AF2 k=AF2(1.055,-0.055);
+ return clamp(j.x ,c*j.y ,pow(c,j.z )*k.x +k.y );}
+ AF2 AToSrgbF2(AF2 c){AF3 j=AF3(0.0031308*12.92,12.92,1.0/2.4);AF2 k=AF2(1.055,-0.055);
+ return clamp(j.xx ,c*j.yy ,pow(c,j.zz )*k.xx +k.yy );}
+ AF3 AToSrgbF3(AF3 c){AF3 j=AF3(0.0031308*12.92,12.92,1.0/2.4);AF2 k=AF2(1.055,-0.055);
+ return clamp(j.xxx,c*j.yyy,pow(c,j.zzz)*k.xxx+k.yyy);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AToTwoF1(AF1 c){return sqrt(c);}
+ AF2 AToTwoF2(AF2 c){return sqrt(c);}
+ AF3 AToTwoF3(AF3 c){return sqrt(c);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AToThreeF1(AF1 c){return pow(c,AF1_(1.0/3.0));}
+ AF2 AToThreeF2(AF2 c){return pow(c,AF2_(1.0/3.0));}
+ AF3 AToThreeF3(AF3 c){return pow(c,AF3_(1.0/3.0));}
+ #endif
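+ // Example usage (an illustrative, GLSL-flavored sketch; 'img', 'p' and 'linearColor' are assumed to be an
+ // rgba8 image, an ivec2 pixel position and a linear AF3 color, none of which are part of this header),
+ //  imageStore(img,p,AF4(AToSrgbF3(linearColor),AF1_(1.0))); // sRGB-encoded store without an _SRGB view.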
+//==============================================================================================================================
+ #if 1
+ // Unfortunately median won't work here.
+ AF1 AFrom709F1(AF1 c){AF3 j=AF3(0.081/4.5,1.0/4.5,1.0/0.45);AF2 k=AF2(1.0/1.099,0.099/1.099);
+ return AZolSelF1(AZolSignedF1(c-j.x ),c*j.y ,pow(c*k.x +k.y ,j.z ));}
+ AF2 AFrom709F2(AF2 c){AF3 j=AF3(0.081/4.5,1.0/4.5,1.0/0.45);AF2 k=AF2(1.0/1.099,0.099/1.099);
+ return AZolSelF2(AZolSignedF2(c-j.xx ),c*j.yy ,pow(c*k.xx +k.yy ,j.zz ));}
+ AF3 AFrom709F3(AF3 c){AF3 j=AF3(0.081/4.5,1.0/4.5,1.0/0.45);AF2 k=AF2(1.0/1.099,0.099/1.099);
+ return AZolSelF3(AZolSignedF3(c-j.xxx),c*j.yyy,pow(c*k.xxx+k.yyy,j.zzz));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AFromGammaF1(AF1 c,AF1 x){return pow(c,AF1_(x));}
+ AF2 AFromGammaF2(AF2 c,AF1 x){return pow(c,AF2_(x));}
+ AF3 AFromGammaF3(AF3 c,AF1 x){return pow(c,AF3_(x));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AFromPqF1(AF1 x){AF1 p=pow(x,AF1_(0.0126833));
+ return pow(ASatF1(p-AF1_(0.835938))/(AF1_(18.8516)-AF1_(18.6875)*p),AF1_(6.27739));}
+ AF2 AFromPqF1(AF2 x){AF2 p=pow(x,AF2_(0.0126833));
+ return pow(ASatF2(p-AF2_(0.835938))/(AF2_(18.8516)-AF2_(18.6875)*p),AF2_(6.27739));}
+ AF3 AFromPqF1(AF3 x){AF3 p=pow(x,AF3_(0.0126833));
+ return pow(ASatF3(p-AF3_(0.835938))/(AF3_(18.8516)-AF3_(18.6875)*p),AF3_(6.27739));}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Unfortunately median won't work here.
+ AF1 AFromSrgbF1(AF1 c){AF3 j=AF3(0.04045/12.92,1.0/12.92,2.4);AF2 k=AF2(1.0/1.055,0.055/1.055);
+ return AZolSelF1(AZolSignedF1(c-j.x ),c*j.y ,pow(c*k.x +k.y ,j.z ));}
+ AF2 AFromSrgbF2(AF2 c){AF3 j=AF3(0.04045/12.92,1.0/12.92,2.4);AF2 k=AF2(1.0/1.055,0.055/1.055);
+ return AZolSelF2(AZolSignedF2(c-j.xx ),c*j.yy ,pow(c*k.xx +k.yy ,j.zz ));}
+ AF3 AFromSrgbF3(AF3 c){AF3 j=AF3(0.04045/12.92,1.0/12.92,2.4);AF2 k=AF2(1.0/1.055,0.055/1.055);
+ return AZolSelF3(AZolSignedF3(c-j.xxx),c*j.yyy,pow(c*k.xxx+k.yyy,j.zzz));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AFromTwoF1(AF1 c){return c*c;}
+ AF2 AFromTwoF2(AF2 c){return c*c;}
+ AF3 AFromTwoF3(AF3 c){return c*c;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF1 AFromThreeF1(AF1 c){return c*c*c;}
+ AF2 AFromThreeF2(AF2 c){return c*c*c;}
+ AF3 AFromThreeF3(AF3 c){return c*c*c;}
+ #endif
+//==============================================================================================================================
+ #ifdef A_HALF
+ AH1 ATo709H1(AH1 c){AH3 j=AH3(0.018*4.5,4.5,0.45);AH2 k=AH2(1.099,-0.099);
+ return clamp(j.x ,c*j.y ,pow(c,j.z )*k.x +k.y );}
+ AH2 ATo709H2(AH2 c){AH3 j=AH3(0.018*4.5,4.5,0.45);AH2 k=AH2(1.099,-0.099);
+ return clamp(j.xx ,c*j.yy ,pow(c,j.zz )*k.xx +k.yy );}
+ AH3 ATo709H3(AH3 c){AH3 j=AH3(0.018*4.5,4.5,0.45);AH2 k=AH2(1.099,-0.099);
+ return clamp(j.xxx,c*j.yyy,pow(c,j.zzz)*k.xxx+k.yyy);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AToGammaH1(AH1 c,AH1 rcpX){return pow(c,AH1_(rcpX));}
+ AH2 AToGammaH2(AH2 c,AH1 rcpX){return pow(c,AH2_(rcpX));}
+ AH3 AToGammaH3(AH3 c,AH1 rcpX){return pow(c,AH3_(rcpX));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AToSrgbH1(AH1 c){AH3 j=AH3(0.0031308*12.92,12.92,1.0/2.4);AH2 k=AH2(1.055,-0.055);
+ return clamp(j.x ,c*j.y ,pow(c,j.z )*k.x +k.y );}
+ AH2 AToSrgbH2(AH2 c){AH3 j=AH3(0.0031308*12.92,12.92,1.0/2.4);AH2 k=AH2(1.055,-0.055);
+ return clamp(j.xx ,c*j.yy ,pow(c,j.zz )*k.xx +k.yy );}
+ AH3 AToSrgbH3(AH3 c){AH3 j=AH3(0.0031308*12.92,12.92,1.0/2.4);AH2 k=AH2(1.055,-0.055);
+ return clamp(j.xxx,c*j.yyy,pow(c,j.zzz)*k.xxx+k.yyy);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AToTwoH1(AH1 c){return sqrt(c);}
+ AH2 AToTwoH2(AH2 c){return sqrt(c);}
+ AH3 AToTwoH3(AH3 c){return sqrt(c);}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AToThreeF1(AH1 c){return pow(c,AH1_(1.0/3.0));}
+ AH2 AToThreeF2(AH2 c){return pow(c,AH2_(1.0/3.0));}
+ AH3 AToThreeF3(AH3 c){return pow(c,AH3_(1.0/3.0));}
+ #endif
+//==============================================================================================================================
+ #ifdef A_HALF
+ AH1 AFrom709H1(AH1 c){AH3 j=AH3(0.081/4.5,1.0/4.5,1.0/0.45);AH2 k=AH2(1.0/1.099,0.099/1.099);
+ return AZolSelH1(AZolSignedH1(c-j.x ),c*j.y ,pow(c*k.x +k.y ,j.z ));}
+ AH2 AFrom709H2(AH2 c){AH3 j=AH3(0.081/4.5,1.0/4.5,1.0/0.45);AH2 k=AH2(1.0/1.099,0.099/1.099);
+ return AZolSelH2(AZolSignedH2(c-j.xx ),c*j.yy ,pow(c*k.xx +k.yy ,j.zz ));}
+ AH3 AFrom709H3(AH3 c){AH3 j=AH3(0.081/4.5,1.0/4.5,1.0/0.45);AH2 k=AH2(1.0/1.099,0.099/1.099);
+ return AZolSelH3(AZolSignedH3(c-j.xxx),c*j.yyy,pow(c*k.xxx+k.yyy,j.zzz));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AFromGammaH1(AH1 c,AH1 x){return pow(c,AH1_(x));}
+ AH2 AFromGammaH2(AH2 c,AH1 x){return pow(c,AH2_(x));}
+ AH3 AFromGammaH3(AH3 c,AH1 x){return pow(c,AH3_(x));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AHromSrgbF1(AH1 c){AH3 j=AH3(0.04045/12.92,1.0/12.92,2.4);AH2 k=AH2(1.0/1.055,0.055/1.055);
+ return AZolSelH1(AZolSignedH1(c-j.x ),c*j.y ,pow(c*k.x +k.y ,j.z ));}
+ AH2 AHromSrgbF2(AH2 c){AH3 j=AH3(0.04045/12.92,1.0/12.92,2.4);AH2 k=AH2(1.0/1.055,0.055/1.055);
+ return AZolSelH2(AZolSignedH2(c-j.xx ),c*j.yy ,pow(c*k.xx +k.yy ,j.zz ));}
+ AH3 AHromSrgbF3(AH3 c){AH3 j=AH3(0.04045/12.92,1.0/12.92,2.4);AH2 k=AH2(1.0/1.055,0.055/1.055);
+ return AZolSelH3(AZolSignedH3(c-j.xxx),c*j.yyy,pow(c*k.xxx+k.yyy,j.zzz));}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AFromTwoH1(AH1 c){return c*c;}
+ AH2 AFromTwoH2(AH2 c){return c*c;}
+ AH3 AFromTwoH3(AH3 c){return c*c;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AH1 AFromThreeH1(AH1 c){return c*c*c;}
+ AH2 AFromThreeH2(AH2 c){return c*c*c;}
+ AH3 AFromThreeH3(AH3 c){return c*c*c;}
+ #endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// CS REMAP
+//==============================================================================================================================
+ // Simple remap 64x1 to 8x8 with rotated 2x2 pixel quads in quad linear.
+ // 543210
+ // ======
+ // ..xxx.
+ // yy...y
+ AU2 ARmp8x8(AU1 a){return AU2(ABfe(a,1u,3u),ABfiM(ABfe(a,3u,3u),a,1u));}
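+ // Example usage (an illustrative, GLSL-flavored sketch assuming a 'local_size_x=64' compute shader where
+ // each workgroup shades one 8x8 pixel tile; the dispatch layout is an assumption, not part of this header),
+ //  AU2 p=ARmp8x8(gl_LocalInvocationIndex)+AU2(gl_WorkGroupID.xy*8u);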
+//==============================================================================================================================
+ // More complex remap 64x1 to 8x8 which is necessary for 2D wave reductions.
+ // 543210
+ // ======
+ // .xx..x
+ // y..yy.
+ // Details,
+ // LANE TO 8x8 MAPPING
+ // ===================
+ // 00 01 08 09 10 11 18 19
+ // 02 03 0a 0b 12 13 1a 1b
+ // 04 05 0c 0d 14 15 1c 1d
+ // 06 07 0e 0f 16 17 1e 1f
+ // 20 21 28 29 30 31 38 39
+ // 22 23 2a 2b 32 33 3a 3b
+ // 24 25 2c 2d 34 35 3c 3d
+ // 26 27 2e 2f 36 37 3e 3f
+ AU2 ARmpRed8x8(AU1 a){return AU2(ABfiM(ABfe(a,2u,3u),a,1u),ABfiM(ABfe(a,3u,3u),ABfe(a,1u,2u),2u));}
+//==============================================================================================================================
+ #ifdef A_HALF
+ AW2 ARmp8x8H(AU1 a){return AW2(ABfe(a,1u,3u),ABfiM(ABfe(a,3u,3u),a,1u));}
+ AW2 ARmpRed8x8H(AU1 a){return AW2(ABfiM(ABfe(a,2u,3u),a,1u),ABfiM(ABfe(a,3u,3u),ABfe(a,1u,2u),2u));}
+ #endif
+#endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+// REFERENCE
+//
+//------------------------------------------------------------------------------------------------------------------------------
+// IEEE FLOAT RULES
+// ================
+// - saturate(NaN)=0, saturate(-INF)=0, saturate(+INF)=1
+// - {+/-}0 * {+/-}INF = NaN
+// - -INF + (+INF) = NaN
+// - {+/-}0 / {+/-}0 = NaN
+// - {+/-}INF / {+/-}INF = NaN
+// - a<(-0) := sqrt(a) = NaN (a=-0.0 won't NaN)
+// - 0 == -0
+// - 4/0 = +INF
+// - 4/-0 = -INF
+// - 4+INF = +INF
+// - 4-INF = -INF
+// - 4*(+INF) = +INF
+// - 4*(-INF) = -INF
+// - -4*(+INF) = -INF
+// - sqrt(+INF) = +INF
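+// For example, the first two rules are what make the earlier ZOL GtZero/Signed ops work:
+// saturate(x*(+INF)) is 1 for x>0, 0 for x<0 (saturate(-INF)), and 0 for x==0 (0*INF=NaN, saturate(NaN)=0).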
+//------------------------------------------------------------------------------------------------------------------------------
+// FP16 ENCODING
+// =============
+// fedcba9876543210
+// ----------------
+// ......mmmmmmmmmm 10-bit mantissa (encodes 11-bit 0.5 to 1.0 except for denormals)
+// .eeeee.......... 5-bit exponent
+// .00000.......... denormals
+// .00001.......... -14 exponent
+// .11110.......... 15 exponent
+// .111110000000000 infinity
+// .11111nnnnnnnnnn NaN with n!=0
+// s............... sign
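+// For example, 0x3C00 encodes 1.0 (sign=0, exponent=01111 -> 2^0, mantissa=0), and 0x7C00 encodes +INF.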
+//------------------------------------------------------------------------------------------------------------------------------
+// FP16/INT16 ALIASING DENORMAL
+// ============================
+// 11-bit unsigned integers alias with half float denormal/normal values,
+// 1 = 2^(-24) = 1/16777216 ....................... first denormal value
+// 2 = 2^(-23)
+// ...
+// 1023 = 2^(-14)*(1-2^(-10)) = 2^(-14)*(1-1/1024) ... last denormal value
+// 1024 = 2^(-14) = 1/16384 .......................... first normal value that still maps to integers
+// 2047 .............................................. last normal value that still maps to integers
+// Scaling limits,
+// 2^15 = 32768 ...................................... largest power of 2 scaling
+// Largest pow2 conversion mapping is at *32768,
+// 1 : 2^(-9) = 1/512
+// 2 : 1/256
+// 4 : 1/128
+// 8 : 1/64
+// 16 : 1/32
+// 32 : 1/16
+// 64 : 1/8
+// 128 : 1/4
+// 256 : 1/2
+// 512 : 1
+// 1024 : 2
+// 2047 : a little less than 4
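+// Worked example: the integer 512 aliases the denormal 512*2^(-24) = 2^(-15), so multiplying by the largest
+// pow2 scaling (32768 = 2^15) yields exactly 1.0, matching the '512 : 1' row above.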
+//==============================================================================================================================
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+//
+// GPU/CPU PORTABILITY
+//
+//
+//------------------------------------------------------------------------------------------------------------------------------
+// This is the GPU implementation.
+// See the CPU implementation for docs.
+//==============================================================================================================================
+#ifdef A_GPU
+ #define A_TRUE true
+ #define A_FALSE false
+ #define A_STATIC
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// VECTOR ARGUMENT/RETURN/INITIALIZATION PORTABILITY
+//==============================================================================================================================
+ #define retAD2 AD2
+ #define retAD3 AD3
+ #define retAD4 AD4
+ #define retAF2 AF2
+ #define retAF3 AF3
+ #define retAF4 AF4
+ #define retAL2 AL2
+ #define retAL3 AL3
+ #define retAL4 AL4
+ #define retAU2 AU2
+ #define retAU3 AU3
+ #define retAU4 AU4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define inAD2 in AD2
+ #define inAD3 in AD3
+ #define inAD4 in AD4
+ #define inAF2 in AF2
+ #define inAF3 in AF3
+ #define inAF4 in AF4
+ #define inAL2 in AL2
+ #define inAL3 in AL3
+ #define inAL4 in AL4
+ #define inAU2 in AU2
+ #define inAU3 in AU3
+ #define inAU4 in AU4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define inoutAD2 inout AD2
+ #define inoutAD3 inout AD3
+ #define inoutAD4 inout AD4
+ #define inoutAF2 inout AF2
+ #define inoutAF3 inout AF3
+ #define inoutAF4 inout AF4
+ #define inoutAL2 inout AL2
+ #define inoutAL3 inout AL3
+ #define inoutAL4 inout AL4
+ #define inoutAU2 inout AU2
+ #define inoutAU3 inout AU3
+ #define inoutAU4 inout AU4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define outAD2 out AD2
+ #define outAD3 out AD3
+ #define outAD4 out AD4
+ #define outAF2 out AF2
+ #define outAF3 out AF3
+ #define outAF4 out AF4
+ #define outAL2 out AL2
+ #define outAL3 out AL3
+ #define outAL4 out AL4
+ #define outAU2 out AU2
+ #define outAU3 out AU3
+ #define outAU4 out AU4
+//------------------------------------------------------------------------------------------------------------------------------
+ #define varAD2(x) AD2 x
+ #define varAD3(x) AD3 x
+ #define varAD4(x) AD4 x
+ #define varAF2(x) AF2 x
+ #define varAF3(x) AF3 x
+ #define varAF4(x) AF4 x
+ #define varAL2(x) AL2 x
+ #define varAL3(x) AL3 x
+ #define varAL4(x) AL4 x
+ #define varAU2(x) AU2 x
+ #define varAU3(x) AU3 x
+ #define varAU4(x) AU4 x
+//------------------------------------------------------------------------------------------------------------------------------
+ #define initAD2(x,y) AD2(x,y)
+ #define initAD3(x,y,z) AD3(x,y,z)
+ #define initAD4(x,y,z,w) AD4(x,y,z,w)
+ #define initAF2(x,y) AF2(x,y)
+ #define initAF3(x,y,z) AF3(x,y,z)
+ #define initAF4(x,y,z,w) AF4(x,y,z,w)
+ #define initAL2(x,y) AL2(x,y)
+ #define initAL3(x,y,z) AL3(x,y,z)
+ #define initAL4(x,y,z,w) AL4(x,y,z,w)
+ #define initAU2(x,y) AU2(x,y)
+ #define initAU3(x,y,z) AU3(x,y,z)
+ #define initAU4(x,y,z,w) AU4(x,y,z,w)
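+ // Example (an illustrative sketch; 'MyScaleBias2' is a made-up helper, not part of this header),
+ //  retAF2 MyScaleBias2(inAF2 a,AF1 s,AF1 b){return initAF2(a.x*s+b,a.y*s+b);}
+ // On this GPU path the macros expand to 'AF2 MyScaleBias2(in AF2 a,AF1 s,AF1 b){return AF2(a.x*s+b,a.y*s+b);}'.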
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// SCALAR RETURN OPS
+//==============================================================================================================================
+ #define AAbsD1(a) abs(AD1(a))
+ #define AAbsF1(a) abs(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ACosD1(a) cos(AD1(a))
+ #define ACosF1(a) cos(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ADotD2(a,b) dot(AD2(a),AD2(b))
+ #define ADotD3(a,b) dot(AD3(a),AD3(b))
+ #define ADotD4(a,b) dot(AD4(a),AD4(b))
+ #define ADotF2(a,b) dot(AF2(a),AF2(b))
+ #define ADotF3(a,b) dot(AF3(a),AF3(b))
+ #define ADotF4(a,b) dot(AF4(a),AF4(b))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AExp2D1(a) exp2(AD1(a))
+ #define AExp2F1(a) exp2(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AFloorD1(a) floor(AD1(a))
+ #define AFloorF1(a) floor(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ALog2D1(a) log2(AD1(a))
+ #define ALog2F1(a) log2(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AMaxD1(a,b) max(a,b)
+ #define AMaxF1(a,b) max(a,b)
+ #define AMaxL1(a,b) max(a,b)
+ #define AMaxU1(a,b) max(a,b)
+//------------------------------------------------------------------------------------------------------------------------------
+ #define AMinD1(a,b) min(a,b)
+ #define AMinF1(a,b) min(a,b)
+ #define AMinL1(a,b) min(a,b)
+ #define AMinU1(a,b) min(a,b)
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASinD1(a) sin(AD1(a))
+ #define ASinF1(a) sin(AF1(a))
+//------------------------------------------------------------------------------------------------------------------------------
+ #define ASqrtD1(a) sqrt(AD1(a))
+ #define ASqrtF1(a) sqrt(AF1(a))
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// SCALAR RETURN OPS - DEPENDENT
+//==============================================================================================================================
+ #define APowD1(a,b) pow(AD1(a),AF1(b))
+ #define APowF1(a,b) pow(AF1(a),AF1(b))
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// VECTOR OPS
+//------------------------------------------------------------------------------------------------------------------------------
+// These are added as needed for production or prototyping, so not necessarily a complete set.
+// They follow a convention of taking in a destination and also returning the destination value to increase utility.
+//==============================================================================================================================
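+// For example (an illustrative sketch; 'colorA' and 'colorB' are assumed AF3 inputs, not part of this header),
+// returning the destination lets the ops be nested,
+//  varAF3(t);opAMulOneF3(t,opAAddF3(t,colorA,colorB),AF1_(0.5)); // t = (colorA+colorB)*0.5.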
+ #ifdef A_DUBL
+ AD2 opAAbsD2(outAD2 d,inAD2 a){d=abs(a);return d;}
+ AD3 opAAbsD3(outAD3 d,inAD3 a){d=abs(a);return d;}
+ AD4 opAAbsD4(outAD4 d,inAD4 a){d=abs(a);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opAAddD2(outAD2 d,inAD2 a,inAD2 b){d=a+b;return d;}
+ AD3 opAAddD3(outAD3 d,inAD3 a,inAD3 b){d=a+b;return d;}
+ AD4 opAAddD4(outAD4 d,inAD4 a,inAD4 b){d=a+b;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opAAddOneD2(outAD2 d,inAD2 a,AD1 b){d=a+AD2_(b);return d;}
+ AD3 opAAddOneD3(outAD3 d,inAD3 a,AD1 b){d=a+AD3_(b);return d;}
+ AD4 opAAddOneD4(outAD4 d,inAD4 a,AD1 b){d=a+AD4_(b);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opACpyD2(outAD2 d,inAD2 a){d=a;return d;}
+ AD3 opACpyD3(outAD3 d,inAD3 a){d=a;return d;}
+ AD4 opACpyD4(outAD4 d,inAD4 a){d=a;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opALerpD2(outAD2 d,inAD2 a,inAD2 b,inAD2 c){d=ALerpD2(a,b,c);return d;}
+ AD3 opALerpD3(outAD3 d,inAD3 a,inAD3 b,inAD3 c){d=ALerpD3(a,b,c);return d;}
+ AD4 opALerpD4(outAD4 d,inAD4 a,inAD4 b,inAD4 c){d=ALerpD4(a,b,c);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opALerpOneD2(outAD2 d,inAD2 a,inAD2 b,AD1 c){d=ALerpD2(a,b,AD2_(c));return d;}
+ AD3 opALerpOneD3(outAD3 d,inAD3 a,inAD3 b,AD1 c){d=ALerpD3(a,b,AD3_(c));return d;}
+ AD4 opALerpOneD4(outAD4 d,inAD4 a,inAD4 b,AD1 c){d=ALerpD4(a,b,AD4_(c));return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opAMaxD2(outAD2 d,inAD2 a,inAD2 b){d=max(a,b);return d;}
+ AD3 opAMaxD3(outAD3 d,inAD3 a,inAD3 b){d=max(a,b);return d;}
+ AD4 opAMaxD4(outAD4 d,inAD4 a,inAD4 b){d=max(a,b);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opAMinD2(outAD2 d,inAD2 a,inAD2 b){d=min(a,b);return d;}
+ AD3 opAMinD3(outAD3 d,inAD3 a,inAD3 b){d=min(a,b);return d;}
+ AD4 opAMinD4(outAD4 d,inAD4 a,inAD4 b){d=min(a,b);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opAMulD2(outAD2 d,inAD2 a,inAD2 b){d=a*b;return d;}
+ AD3 opAMulD3(outAD3 d,inAD3 a,inAD3 b){d=a*b;return d;}
+ AD4 opAMulD4(outAD4 d,inAD4 a,inAD4 b){d=a*b;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opAMulOneD2(outAD2 d,inAD2 a,AD1 b){d=a*AD2_(b);return d;}
+ AD3 opAMulOneD3(outAD3 d,inAD3 a,AD1 b){d=a*AD3_(b);return d;}
+ AD4 opAMulOneD4(outAD4 d,inAD4 a,AD1 b){d=a*AD4_(b);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opANegD2(outAD2 d,inAD2 a){d=-a;return d;}
+ AD3 opANegD3(outAD3 d,inAD3 a){d=-a;return d;}
+ AD4 opANegD4(outAD4 d,inAD4 a){d=-a;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AD2 opARcpD2(outAD2 d,inAD2 a){d=ARcpD2(a);return d;}
+ AD3 opARcpD3(outAD3 d,inAD3 a){d=ARcpD3(a);return d;}
+ AD4 opARcpD4(outAD4 d,inAD4 a){d=ARcpD4(a);return d;}
+ #endif
+//==============================================================================================================================
+ AF2 opAAbsF2(outAF2 d,inAF2 a){d=abs(a);return d;}
+ AF3 opAAbsF3(outAF3 d,inAF3 a){d=abs(a);return d;}
+ AF4 opAAbsF4(outAF4 d,inAF4 a){d=abs(a);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opAAddF2(outAF2 d,inAF2 a,inAF2 b){d=a+b;return d;}
+ AF3 opAAddF3(outAF3 d,inAF3 a,inAF3 b){d=a+b;return d;}
+ AF4 opAAddF4(outAF4 d,inAF4 a,inAF4 b){d=a+b;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opAAddOneF2(outAF2 d,inAF2 a,AF1 b){d=a+AF2_(b);return d;}
+ AF3 opAAddOneF3(outAF3 d,inAF3 a,AF1 b){d=a+AF3_(b);return d;}
+ AF4 opAAddOneF4(outAF4 d,inAF4 a,AF1 b){d=a+AF4_(b);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opACpyF2(outAF2 d,inAF2 a){d=a;return d;}
+ AF3 opACpyF3(outAF3 d,inAF3 a){d=a;return d;}
+ AF4 opACpyF4(outAF4 d,inAF4 a){d=a;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opALerpF2(outAF2 d,inAF2 a,inAF2 b,inAF2 c){d=ALerpF2(a,b,c);return d;}
+ AF3 opALerpF3(outAF3 d,inAF3 a,inAF3 b,inAF3 c){d=ALerpF3(a,b,c);return d;}
+ AF4 opALerpF4(outAF4 d,inAF4 a,inAF4 b,inAF4 c){d=ALerpF4(a,b,c);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opALerpOneF2(outAF2 d,inAF2 a,inAF2 b,AF1 c){d=ALerpF2(a,b,AF2_(c));return d;}
+ AF3 opALerpOneF3(outAF3 d,inAF3 a,inAF3 b,AF1 c){d=ALerpF3(a,b,AF3_(c));return d;}
+ AF4 opALerpOneF4(outAF4 d,inAF4 a,inAF4 b,AF1 c){d=ALerpF4(a,b,AF4_(c));return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opAMaxF2(outAF2 d,inAF2 a,inAF2 b){d=max(a,b);return d;}
+ AF3 opAMaxF3(outAF3 d,inAF3 a,inAF3 b){d=max(a,b);return d;}
+ AF4 opAMaxF4(outAF4 d,inAF4 a,inAF4 b){d=max(a,b);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opAMinF2(outAF2 d,inAF2 a,inAF2 b){d=min(a,b);return d;}
+ AF3 opAMinF3(outAF3 d,inAF3 a,inAF3 b){d=min(a,b);return d;}
+ AF4 opAMinF4(outAF4 d,inAF4 a,inAF4 b){d=min(a,b);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opAMulF2(outAF2 d,inAF2 a,inAF2 b){d=a*b;return d;}
+ AF3 opAMulF3(outAF3 d,inAF3 a,inAF3 b){d=a*b;return d;}
+ AF4 opAMulF4(outAF4 d,inAF4 a,inAF4 b){d=a*b;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opAMulOneF2(outAF2 d,inAF2 a,AF1 b){d=a*AF2_(b);return d;}
+ AF3 opAMulOneF3(outAF3 d,inAF3 a,AF1 b){d=a*AF3_(b);return d;}
+ AF4 opAMulOneF4(outAF4 d,inAF4 a,AF1 b){d=a*AF4_(b);return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opANegF2(outAF2 d,inAF2 a){d=-a;return d;}
+ AF3 opANegF3(outAF3 d,inAF3 a){d=-a;return d;}
+ AF4 opANegF4(outAF4 d,inAF4 a){d=-a;return d;}
+//------------------------------------------------------------------------------------------------------------------------------
+ AF2 opARcpF2(outAF2 d,inAF2 a){d=ARcpF2(a);return d;}
+ AF3 opARcpF3(outAF3 d,inAF3 a){d=ARcpF3(a);return d;}
+ AF4 opARcpF4(outAF4 d,inAF4 a){d=ARcpF4(a);return d;}
+#endif
diff --git a/thirdparty/amd-fsr/ffx_fsr1.h b/thirdparty/amd-fsr/ffx_fsr1.h
new file mode 100644
index 00000000000..4e0b3d54855
--- /dev/null
+++ b/thirdparty/amd-fsr/ffx_fsr1.h
@@ -0,0 +1,1199 @@
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+//
+// AMD FidelityFX SUPER RESOLUTION [FSR 1] ::: SPATIAL SCALING & EXTRAS - v1.20210629
+//
+//
+//------------------------------------------------------------------------------------------------------------------------------
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//------------------------------------------------------------------------------------------------------------------------------
+// FidelityFX Super Resolution Sample
+//
+// Copyright (c) 2021 Advanced Micro Devices, Inc. All rights reserved.
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files(the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and / or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions :
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+// THE SOFTWARE.
+//------------------------------------------------------------------------------------------------------------------------------
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//------------------------------------------------------------------------------------------------------------------------------
+// ABOUT
+// =====
+// FSR is a collection of algorithms relating to generating a higher resolution image.
+// This specific header focuses on single-image non-temporal image scaling, and related tools.
+//
+// The core functions are EASU and RCAS:
+// [EASU] Edge Adaptive Spatial Upsampling ....... 1x to 4x area range spatial scaling, clamped adaptive elliptical filter.
+// [RCAS] Robust Contrast Adaptive Sharpening .... A non-scaling variation on CAS.
+// RCAS needs to be applied after EASU as a separate pass.
+//
+// Optional utility functions are:
+// [LFGA] Linear Film Grain Applicator ........... Tool to apply film grain after scaling.
+// [SRTM] Simple Reversible Tone-Mapper .......... Linear HDR {0 to FP16_MAX} to {0 to 1} and back.
+// [TEPD] Temporal Energy Preserving Dither ...... Temporally energy preserving dithered {0 to 1} linear to gamma 2.0 conversion.
+// See each individual sub-section for inline documentation.
+//------------------------------------------------------------------------------------------------------------------------------
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//------------------------------------------------------------------------------------------------------------------------------
+// FUNCTION PERMUTATIONS
+// =====================
+// *F() ..... Single item computation with 32-bit.
+// *H() ..... Single item computation with 16-bit, with packing (aka two 16-bit ops in parallel) when possible.
+// *Hx2() ... Processing two items in parallel with 16-bit, easier packing.
+// Not all interfaces in this file have a *Hx2() form.
+//==============================================================================================================================
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+// FSR - [EASU] EDGE ADAPTIVE SPATIAL UPSAMPLING
+//
+//------------------------------------------------------------------------------------------------------------------------------
+// EASU provides high quality spatial-only scaling at relatively low cost.
+// This means EASU is appropriate for laptops and other low-end GPUs.
+// Quality from 1x to 4x area scaling is good.
+//------------------------------------------------------------------------------------------------------------------------------
+// The scaler uses a modified fast approximation to the standard lanczos(size=2) kernel.
+// EASU runs in a single pass, so it applies a directionally and anisotropically adaptive radial lanczos.
+// This is also kept as simple as possible to have minimum runtime.
+//------------------------------------------------------------------------------------------------------------------------------
+// The lanczos filter has negative lobes, so by itself it will introduce ringing.
+// To remove all ringing, the algorithm uses the nearest 2x2 input texels as a neighborhood,
+// and limits output to the minimum and maximum of that neighborhood.
+//------------------------------------------------------------------------------------------------------------------------------
+// Input image requirements:
+//
+// Color needs to be encoded as 3 channels [red, green, blue] (e.g. XYZ not supported)
+// Each channel needs to be in the range [0, 1]
+// Any color primaries are supported
+// Display / tonemapping curve needs to be as if presenting to an sRGB display or similar (e.g. Gamma 2.0)
+// There should be no banding in the input
+// There should be no high amplitude noise in the input
+// There should be no noise in the input that is not at input pixel granularity
+// For performance purposes, use 32bpp formats
+//------------------------------------------------------------------------------------------------------------------------------
+// It is best to apply EASU at the end of the frame, after tonemapping
+// but before film grain or compositing of the UI.
+//------------------------------------------------------------------------------------------------------------------------------
+// Example of including this header for D3D HLSL :
+//
+// #define A_GPU 1
+// #define A_HLSL 1
+// #define A_HALF 1
+// #include "ffx_a.h"
+// #define FSR_EASU_H 1
+// #define FSR_RCAS_H 1
+// //declare input callbacks
+// #include "ffx_fsr1.h"
+//
+// Example of including this header for Vulkan GLSL :
+//
+// #define A_GPU 1
+// #define A_GLSL 1
+// #define A_HALF 1
+// #include "ffx_a.h"
+// #define FSR_EASU_H 1
+// #define FSR_RCAS_H 1
+// //declare input callbacks
+// #include "ffx_fsr1.h"
+//
+// Example of including this header for Vulkan HLSL :
+//
+// #define A_GPU 1
+// #define A_HLSL 1
+// #define A_HLSL_6_2 1
+// #define A_NO_16_BIT_CAST 1
+// #define A_HALF 1
+// #include "ffx_a.h"
+// #define FSR_EASU_H 1
+// #define FSR_RCAS_H 1
+// //declare input callbacks
+// #include "ffx_fsr1.h"
+//
+// Example of declaring the required input callbacks for GLSL :
+// The callbacks need to gather4 for each color channel using the specified texture coordinate 'p'.
+// EASU uses gather4 to reduce position computation logic and to get a free Array-of-Structures to Structure-of-Arrays conversion.
+//
+// AH4 FsrEasuRH(AF2 p){return AH4(textureGather(sampler2D(tex,sam),p,0));}
+// AH4 FsrEasuGH(AF2 p){return AH4(textureGather(sampler2D(tex,sam),p,1));}
+// AH4 FsrEasuBH(AF2 p){return AH4(textureGather(sampler2D(tex,sam),p,2));}
+// ...
+// The FsrEasuCon function needs to be called from the CPU or GPU to set up constants.
+// The difference in viewport and input image size is there to support Dynamic Resolution Scaling.
+// To use FsrEasuCon() on the CPU, define A_CPU before including ffx_a and ffx_fsr1.
+// Including a GPU example here, the 'con0' through 'con3' values would be stored out to a constant buffer.
+// AU4 con0,con1,con2,con3;
+// FsrEasuCon(con0,con1,con2,con3,
+// 1920.0,1080.0, // Viewport size (top left aligned) in the input image which is to be scaled.
+// 3840.0,2160.0, // The size of the input image.
+// 2560.0,1440.0); // The output resolution.
+//==============================================================================================================================
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// CONSTANT SETUP
+//==============================================================================================================================
+// Call to setup required constant values (works on CPU or GPU).
+A_STATIC void FsrEasuCon(
+outAU4 con0,
+outAU4 con1,
+outAU4 con2,
+outAU4 con3,
+// This is the rendered image resolution being upscaled
+AF1 inputViewportInPixelsX,
+AF1 inputViewportInPixelsY,
+// This is the resolution of the resource containing the input image (useful for dynamic resolution)
+AF1 inputSizeInPixelsX,
+AF1 inputSizeInPixelsY,
+// This is the display resolution which the input image gets upscaled to
+AF1 outputSizeInPixelsX,
+AF1 outputSizeInPixelsY){
+ // Output integer position to a pixel position in viewport.
+ con0[0]=AU1_AF1(inputViewportInPixelsX*ARcpF1(outputSizeInPixelsX));
+ con0[1]=AU1_AF1(inputViewportInPixelsY*ARcpF1(outputSizeInPixelsY));
+ con0[2]=AU1_AF1(AF1_(0.5)*inputViewportInPixelsX*ARcpF1(outputSizeInPixelsX)-AF1_(0.5));
+ con0[3]=AU1_AF1(AF1_(0.5)*inputViewportInPixelsY*ARcpF1(outputSizeInPixelsY)-AF1_(0.5));
+ // Viewport pixel position to normalized image space.
+ // This is used to get upper-left of 'F' tap.
+ con1[0]=AU1_AF1(ARcpF1(inputSizeInPixelsX));
+ con1[1]=AU1_AF1(ARcpF1(inputSizeInPixelsY));
+ // Centers of gather4, first offset from upper-left of 'F'.
+ // +---+---+
+ // | | |
+ // +--(0)--+
+ // | b | c |
+ // +---F---+---+---+
+ // | e | f | g | h |
+ // +--(1)--+--(2)--+
+ // | i | j | k | l |
+ // +---+---+---+---+
+ // | n | o |
+ // +--(3)--+
+ // | | |
+ // +---+---+
+ con1[2]=AU1_AF1(AF1_( 1.0)*ARcpF1(inputSizeInPixelsX));
+ con1[3]=AU1_AF1(AF1_(-1.0)*ARcpF1(inputSizeInPixelsY));
+ // These are from (0) instead of 'F'.
+ con2[0]=AU1_AF1(AF1_(-1.0)*ARcpF1(inputSizeInPixelsX));
+ con2[1]=AU1_AF1(AF1_( 2.0)*ARcpF1(inputSizeInPixelsY));
+ con2[2]=AU1_AF1(AF1_( 1.0)*ARcpF1(inputSizeInPixelsX));
+ con2[3]=AU1_AF1(AF1_( 2.0)*ARcpF1(inputSizeInPixelsY));
+ con3[0]=AU1_AF1(AF1_( 0.0)*ARcpF1(inputSizeInPixelsX));
+ con3[1]=AU1_AF1(AF1_( 4.0)*ARcpF1(inputSizeInPixelsY));
+ con3[2]=con3[3]=0;}
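+// As a concrete illustration of the setup above: upscaling a 1920x1080 viewport stored in a
+// 1920x1080 resource to 2560x1440 gives con0 = {0.75, 0.75, -0.125, -0.125} and con1.xy = {1/1920, 1/1080}
+// (each value packed through AU1_AF1), i.e. each output pixel is mapped back into input pixel space
+// and then into normalized input image space for the gathers.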
+
+// Use this variant when the input image sits at an offset within the resource that contains it (useful for dynamic resolution)
+A_STATIC void FsrEasuConOffset(
+ outAU4 con0,
+ outAU4 con1,
+ outAU4 con2,
+ outAU4 con3,
+    // This is the rendered image resolution being upscaled
+ AF1 inputViewportInPixelsX,
+ AF1 inputViewportInPixelsY,
+ // This is the resolution of the resource containing the input image (useful for dynamic resolution)
+ AF1 inputSizeInPixelsX,
+ AF1 inputSizeInPixelsY,
+ // This is the display resolution which the input image gets upscaled to
+ AF1 outputSizeInPixelsX,
+ AF1 outputSizeInPixelsY,
+ // This is the input image offset into the resource containing it (useful for dynamic resolution)
+ AF1 inputOffsetInPixelsX,
+ AF1 inputOffsetInPixelsY) {
+ FsrEasuCon(con0, con1, con2, con3, inputViewportInPixelsX, inputViewportInPixelsY, inputSizeInPixelsX, inputSizeInPixelsY, outputSizeInPixelsX, outputSizeInPixelsY);
+ con0[2] = AU1_AF1(AF1_(0.5) * inputViewportInPixelsX * ARcpF1(outputSizeInPixelsX) - AF1_(0.5) + inputOffsetInPixelsX);
+ con0[3] = AU1_AF1(AF1_(0.5) * inputViewportInPixelsY * ARcpF1(outputSizeInPixelsY) - AF1_(0.5) + inputOffsetInPixelsY);
+}
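+// Hypothetical usage of the offset variant, mirroring the FsrEasuCon() example further up
+// (all values below are for illustration only):
+//  AU4 con0,con1,con2,con3;
+//  FsrEasuConOffset(con0,con1,con2,con3,
+//   1920.0,1080.0, // Viewport size (in pixels) in the input image which is to be scaled.
+//   3840.0,2160.0, // The size of the resource containing the input image.
+//   2560.0,1440.0, // The output resolution.
+//   64.0,32.0);    // Offset of the viewport within the input resource, in pixels.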
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// NON-PACKED 32-BIT VERSION
+//==============================================================================================================================
+#if defined(A_GPU)&&defined(FSR_EASU_F)
+ // Input callback prototypes, need to be implemented by calling shader
+ AF4 FsrEasuRF(AF2 p);
+ AF4 FsrEasuGF(AF2 p);
+ AF4 FsrEasuBF(AF2 p);
+//------------------------------------------------------------------------------------------------------------------------------
+ // Filtering for a given tap for the scalar.
+ void FsrEasuTapF(
+ inout AF3 aC, // Accumulated color, with negative lobe.
+ inout AF1 aW, // Accumulated weight.
+ AF2 off, // Pixel offset from resolve position to tap.
+ AF2 dir, // Gradient direction.
+ AF2 len, // Length.
+ AF1 lob, // Negative lobe strength.
+ AF1 clp, // Clipping point.
+ AF3 c){ // Tap color.
+ // Rotate offset by direction.
+ AF2 v;
+ v.x=(off.x*( dir.x))+(off.y*dir.y);
+ v.y=(off.x*(-dir.y))+(off.y*dir.x);
+ // Anisotropy.
+ v*=len;
+ // Compute distance^2.
+ AF1 d2=v.x*v.x+v.y*v.y;
+    // Limit to the window, since at a corner 2 taps can easily be outside.
+ d2=min(d2,clp);
+    // Approximation of lanczos2 without sin(), rcp(), or sqrt() to get x.
+ // (25/16 * (2/5 * x^2 - 1)^2 - (25/16 - 1)) * (1/4 * x^2 - 1)^2
+ // |_______________________________________| |_______________|
+ // base window
+ // The general form of the 'base' is,
+ // (a*(b*x^2-1)^2-(a-1))
+ // Where 'a=1/(2*b-b^2)' and 'b' moves around the negative lobe.
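+    // As a worked check: the 'base' above uses b = 2/5, so a = 1/(2*(2/5)-(2/5)^2) = 1/(16/25) = 25/16,
+    // which matches the 25/16 constants used below.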
+ AF1 wB=AF1_(2.0/5.0)*d2+AF1_(-1.0);
+ AF1 wA=lob*d2+AF1_(-1.0);
+ wB*=wB;
+ wA*=wA;
+ wB=AF1_(25.0/16.0)*wB+AF1_(-(25.0/16.0-1.0));
+ AF1 w=wB*wA;
+ // Do weighted average.
+ aC+=c*w;aW+=w;}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Accumulate direction and length.
+ void FsrEasuSetF(
+ inout AF2 dir,
+ inout AF1 len,
+ AF2 pp,
+ AP1 biS,AP1 biT,AP1 biU,AP1 biV,
+ AF1 lA,AF1 lB,AF1 lC,AF1 lD,AF1 lE){
+    // Compute bilinear weight; the branches factor out because the predicates are compile-time immediates.
+ // s t
+ // u v
+ AF1 w = AF1_(0.0);
+ if(biS)w=(AF1_(1.0)-pp.x)*(AF1_(1.0)-pp.y);
+ if(biT)w= pp.x *(AF1_(1.0)-pp.y);
+ if(biU)w=(AF1_(1.0)-pp.x)* pp.y ;
+ if(biV)w= pp.x * pp.y ;
+ // Direction is the '+' diff.
+ // a
+ // b c d
+ // e
+ // Then takes magnitude from abs average of both sides of 'c'.
+    // Length maps gradient reversal to 0 and non-reversal smoothly to 1, is then shaped, and the horizontal and vertical terms are summed.
+ AF1 dc=lD-lC;
+ AF1 cb=lC-lB;
+ AF1 lenX=max(abs(dc),abs(cb));
+ lenX=APrxLoRcpF1(lenX);
+ AF1 dirX=lD-lB;
+ dir.x+=dirX*w;
+ lenX=ASatF1(abs(dirX)*lenX);
+ lenX*=lenX;
+ len+=lenX*w;
+ // Repeat for the y axis.
+ AF1 ec=lE-lC;
+ AF1 ca=lC-lA;
+ AF1 lenY=max(abs(ec),abs(ca));
+ lenY=APrxLoRcpF1(lenY);
+ AF1 dirY=lE-lA;
+ dir.y+=dirY*w;
+ lenY=ASatF1(abs(dirY)*lenY);
+ lenY*=lenY;
+ len+=lenY*w;}
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrEasuF(
+ out AF3 pix,
+ AU2 ip, // Integer pixel position in output.
+ AU4 con0, // Constants generated by FsrEasuCon().
+ AU4 con1,
+ AU4 con2,
+ AU4 con3){
+//------------------------------------------------------------------------------------------------------------------------------
+ // Get position of 'f'.
+ AF2 pp=AF2(ip)*AF2_AU2(con0.xy)+AF2_AU2(con0.zw);
+ AF2 fp=floor(pp);
+ pp-=fp;
+//------------------------------------------------------------------------------------------------------------------------------
+ // 12-tap kernel.
+ // b c
+ // e f g h
+ // i j k l
+ // n o
+ // Gather 4 ordering.
+ // a b
+ // r g
+ // For packed FP16, need either {rg} or {ab} so using the following setup for gather in all versions,
+ // a b <- unused (z)
+ // r g
+ // a b a b
+ // r g r g
+ // a b
+ // r g <- unused (z)
+ // Allowing dead-code removal to remove the 'z's.
+ AF2 p0=fp*AF2_AU2(con1.xy)+AF2_AU2(con1.zw);
+ // These are from p0 to avoid pulling two constants on pre-Navi hardware.
+ AF2 p1=p0+AF2_AU2(con2.xy);
+ AF2 p2=p0+AF2_AU2(con2.zw);
+ AF2 p3=p0+AF2_AU2(con3.xy);
+ AF4 bczzR=FsrEasuRF(p0);
+ AF4 bczzG=FsrEasuGF(p0);
+ AF4 bczzB=FsrEasuBF(p0);
+ AF4 ijfeR=FsrEasuRF(p1);
+ AF4 ijfeG=FsrEasuGF(p1);
+ AF4 ijfeB=FsrEasuBF(p1);
+ AF4 klhgR=FsrEasuRF(p2);
+ AF4 klhgG=FsrEasuGF(p2);
+ AF4 klhgB=FsrEasuBF(p2);
+ AF4 zzonR=FsrEasuRF(p3);
+ AF4 zzonG=FsrEasuGF(p3);
+ AF4 zzonB=FsrEasuBF(p3);
+//------------------------------------------------------------------------------------------------------------------------------
+ // Simplest multi-channel approximate luma possible (luma times 2, in 2 FMA/MAD).
+ AF4 bczzL=bczzB*AF4_(0.5)+(bczzR*AF4_(0.5)+bczzG);
+ AF4 ijfeL=ijfeB*AF4_(0.5)+(ijfeR*AF4_(0.5)+ijfeG);
+ AF4 klhgL=klhgB*AF4_(0.5)+(klhgR*AF4_(0.5)+klhgG);
+ AF4 zzonL=zzonB*AF4_(0.5)+(zzonR*AF4_(0.5)+zzonG);
+ // Rename.
+ AF1 bL=bczzL.x;
+ AF1 cL=bczzL.y;
+ AF1 iL=ijfeL.x;
+ AF1 jL=ijfeL.y;
+ AF1 fL=ijfeL.z;
+ AF1 eL=ijfeL.w;
+ AF1 kL=klhgL.x;
+ AF1 lL=klhgL.y;
+ AF1 hL=klhgL.z;
+ AF1 gL=klhgL.w;
+ AF1 oL=zzonL.z;
+ AF1 nL=zzonL.w;
+ // Accumulate for bilinear interpolation.
+ AF2 dir=AF2_(0.0);
+ AF1 len=AF1_(0.0);
+ FsrEasuSetF(dir,len,pp,true, false,false,false,bL,eL,fL,gL,jL);
+ FsrEasuSetF(dir,len,pp,false,true ,false,false,cL,fL,gL,hL,kL);
+ FsrEasuSetF(dir,len,pp,false,false,true ,false,fL,iL,jL,kL,nL);
+ FsrEasuSetF(dir,len,pp,false,false,false,true ,gL,jL,kL,lL,oL);
+//------------------------------------------------------------------------------------------------------------------------------
+ // Normalize with approximation, and cleanup close to zero.
+ AF2 dir2=dir*dir;
+ AF1 dirR=dir2.x+dir2.y;
+    AP1 zro=dirR<AF1_(1.0/32768.0);
+//  output = (w*(n+e+w+s)+m)/(4*w+1)
+// RCAS solves for 'w' by seeing where the signal might clip out of the {0 to 1} input range,
+// 0 == (w*(n+e+w+s)+m)/(4*w+1) -> w = -m/(n+e+w+s)
+// 1 == (w*(n+e+w+s)+m)/(4*w+1) -> w = (1-m)/(n+e+w+s-4*1)
+// Then chooses the 'w' which results in no clipping, limits 'w', and multiplies by the 'sharp' amount.
+// The solution above has issues with MSAA input, as the steps along the gradient cause edge detection issues.
+// So RCAS uses 4x the maximum and 4x the minimum (depending on the equation) in place of the individual taps,
+// and switches from 'm' to either the minimum or maximum (depending on the side) to help with energy conservation.
+// This stabilizes RCAS.
+// RCAS does a simple highpass which is normalized against the local contrast then shaped,
+// 0.25
+// 0.25 -1 0.25
+// 0.25
+// This is used as a noise detection filter, to reduce the effect of RCAS on grain, and focus on real edges.
+//
+// GLSL example for the required callbacks :
+//
+// AH4 FsrRcasLoadH(ASW2 p){return AH4(imageLoad(imgSrc,ASU2(p)));}
+// void FsrRcasInputH(inout AH1 r,inout AH1 g,inout AH1 b)
+// {
+// //do any simple input color conversions here or leave empty if none needed
+// }
+//
+// FsrRcasCon needs to be called from the CPU or GPU to set up constants.
+// Including a GPU example here, the 'con' value would be stored out to a constant buffer.
+//
+// AU4 con;
+// FsrRcasCon(con,
+// 0.0); // The scale is {0.0 := maximum sharpness, to N>0, where N is the number of stops (halving) of the reduction of sharpness}.
+// ---------------
+// RCAS sharpening supports a CAS-like pass-through alpha via,
+// #define FSR_RCAS_PASSTHROUGH_ALPHA 1
+// RCAS also supports a define to enable a more expensive path to avoid some sharpening of noise.
+// It is suggested to apply film grain after RCAS sharpening (and after scaling) instead of using this define,
+// #define FSR_RCAS_DENOISE 1
+//==============================================================================================================================
+// This is set at the limit of providing unnatural results for sharpening.
+#define FSR_RCAS_LIMIT (0.25-(1.0/16.0))
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// CONSTANT SETUP
+//==============================================================================================================================
+// Call to setup required constant values (works on CPU or GPU).
+A_STATIC void FsrRcasCon(
+outAU4 con,
+// The scale is {0.0 := maximum, to N>0, where N is the number of stops (halving) of the reduction of sharpness}.
+AF1 sharpness){
+ // Transform from stops to linear value.
+ sharpness=AExp2F1(-sharpness);
+ varAF2(hSharp)=initAF2(sharpness,sharpness);
+ con[0]=AU1_AF1(sharpness);
+ con[1]=AU1_AH2_AF2(hSharp);
+ con[2]=0;
+ con[3]=0;}
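+// For example, 'sharpness' = 1.0 halves the sharpening strength (AExp2F1(-1.0) = 0.5) and 2.0 quarters
+// it (0.25), while 0.0 keeps the maximum (1.0).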
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// NON-PACKED 32-BIT VERSION
+//==============================================================================================================================
+#if defined(A_GPU)&&defined(FSR_RCAS_F)
+ // Input callback prototypes that need to be implemented by calling shader
+ AF4 FsrRcasLoadF(ASU2 p);
+ void FsrRcasInputF(inout AF1 r,inout AF1 g,inout AF1 b);
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrRcasF(
+ out AF1 pixR, // Output values, non-vector so port between RcasFilter() and RcasFilterH() is easy.
+ out AF1 pixG,
+ out AF1 pixB,
+ #ifdef FSR_RCAS_PASSTHROUGH_ALPHA
+ out AF1 pixA,
+ #endif
+ AU2 ip, // Integer pixel position in output.
+ AU4 con){ // Constant generated by RcasSetup().
+ // Algorithm uses minimal 3x3 pixel neighborhood.
+ // b
+ // d e f
+ // h
+ ASU2 sp=ASU2(ip);
+ AF3 b=FsrRcasLoadF(sp+ASU2( 0,-1)).rgb;
+ AF3 d=FsrRcasLoadF(sp+ASU2(-1, 0)).rgb;
+ #ifdef FSR_RCAS_PASSTHROUGH_ALPHA
+ AF4 ee=FsrRcasLoadF(sp);
+ AF3 e=ee.rgb;pixA=ee.a;
+ #else
+ AF3 e=FsrRcasLoadF(sp).rgb;
+ #endif
+ AF3 f=FsrRcasLoadF(sp+ASU2( 1, 0)).rgb;
+ AF3 h=FsrRcasLoadF(sp+ASU2( 0, 1)).rgb;
+ // Rename (32-bit) or regroup (16-bit).
+ AF1 bR=b.r;
+ AF1 bG=b.g;
+ AF1 bB=b.b;
+ AF1 dR=d.r;
+ AF1 dG=d.g;
+ AF1 dB=d.b;
+ AF1 eR=e.r;
+ AF1 eG=e.g;
+ AF1 eB=e.b;
+ AF1 fR=f.r;
+ AF1 fG=f.g;
+ AF1 fB=f.b;
+ AF1 hR=h.r;
+ AF1 hG=h.g;
+ AF1 hB=h.b;
+ // Run optional input transform.
+ FsrRcasInputF(bR,bG,bB);
+ FsrRcasInputF(dR,dG,dB);
+ FsrRcasInputF(eR,eG,eB);
+ FsrRcasInputF(fR,fG,fB);
+ FsrRcasInputF(hR,hG,hB);
+ // Luma times 2.
+ AF1 bL=bB*AF1_(0.5)+(bR*AF1_(0.5)+bG);
+ AF1 dL=dB*AF1_(0.5)+(dR*AF1_(0.5)+dG);
+ AF1 eL=eB*AF1_(0.5)+(eR*AF1_(0.5)+eG);
+ AF1 fL=fB*AF1_(0.5)+(fR*AF1_(0.5)+fG);
+ AF1 hL=hB*AF1_(0.5)+(hR*AF1_(0.5)+hG);
+ // Noise detection.
+ AF1 nz=AF1_(0.25)*bL+AF1_(0.25)*dL+AF1_(0.25)*fL+AF1_(0.25)*hL-eL;
+ nz=ASatF1(abs(nz)*APrxMedRcpF1(AMax3F1(AMax3F1(bL,dL,eL),fL,hL)-AMin3F1(AMin3F1(bL,dL,eL),fL,hL)));
+ nz=AF1_(-0.5)*nz+AF1_(1.0);
+ // Min and max of ring.
+ AF1 mn4R=min(AMin3F1(bR,dR,fR),hR);
+ AF1 mn4G=min(AMin3F1(bG,dG,fG),hG);
+ AF1 mn4B=min(AMin3F1(bB,dB,fB),hB);
+ AF1 mx4R=max(AMax3F1(bR,dR,fR),hR);
+ AF1 mx4G=max(AMax3F1(bG,dG,fG),hG);
+ AF1 mx4B=max(AMax3F1(bB,dB,fB),hB);
+ // Immediate constants for peak range.
+ AF2 peakC=AF2(1.0,-1.0*4.0);
+ // Limiters, these need to be high precision RCPs.
+ AF1 hitMinR=min(mn4R,eR)*ARcpF1(AF1_(4.0)*mx4R);
+ AF1 hitMinG=min(mn4G,eG)*ARcpF1(AF1_(4.0)*mx4G);
+ AF1 hitMinB=min(mn4B,eB)*ARcpF1(AF1_(4.0)*mx4B);
+ AF1 hitMaxR=(peakC.x-max(mx4R,eR))*ARcpF1(AF1_(4.0)*mn4R+peakC.y);
+ AF1 hitMaxG=(peakC.x-max(mx4G,eG))*ARcpF1(AF1_(4.0)*mn4G+peakC.y);
+ AF1 hitMaxB=(peakC.x-max(mx4B,eB))*ARcpF1(AF1_(4.0)*mn4B+peakC.y);
+ AF1 lobeR=max(-hitMinR,hitMaxR);
+ AF1 lobeG=max(-hitMinG,hitMaxG);
+ AF1 lobeB=max(-hitMinB,hitMaxB);
+ AF1 lobe=max(AF1_(-FSR_RCAS_LIMIT),min(AMax3F1(lobeR,lobeG,lobeB),AF1_(0.0)))*AF1_AU1(con.x);
+ // Apply noise removal.
+ #ifdef FSR_RCAS_DENOISE
+ lobe*=nz;
+ #endif
+ // Resolve, which needs the medium precision rcp approximation to avoid visible tonality changes.
+ AF1 rcpL=APrxMedRcpF1(AF1_(4.0)*lobe+AF1_(1.0));
+ pixR=(lobe*bR+lobe*dR+lobe*hR+lobe*fR+eR)*rcpL;
+ pixG=(lobe*bG+lobe*dG+lobe*hG+lobe*fG+eG)*rcpL;
+ pixB=(lobe*bB+lobe*dB+lobe*hB+lobe*fB+eB)*rcpL;
+ return;}
+#endif
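+// A minimal GLSL usage sketch for the 32-bit path above; 'imgSrc', 'imgDst' and the integer dispatch
+// position 'gxy' are assumed to be provided by the calling shader and are not part of this header:
+//
+// AF4 FsrRcasLoadF(ASU2 p){return imageLoad(imgSrc,p);}
+// void FsrRcasInputF(inout AF1 r,inout AF1 g,inout AF1 b){}
+// ...
+// AF3 c;
+// FsrRcasF(c.r,c.g,c.b,gxy,con);
+// imageStore(imgDst,ASU2(gxy),AF4(c,AF1_(1.0)));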
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// NON-PACKED 16-BIT VERSION
+//==============================================================================================================================
+#if defined(A_GPU)&&defined(A_HALF)&&defined(FSR_RCAS_H)
+ // Input callback prototypes that need to be implemented by calling shader
+ AH4 FsrRcasLoadH(ASW2 p);
+ void FsrRcasInputH(inout AH1 r,inout AH1 g,inout AH1 b);
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrRcasH(
+ out AH1 pixR, // Output values, non-vector so port between RcasFilter() and RcasFilterH() is easy.
+ out AH1 pixG,
+ out AH1 pixB,
+ #ifdef FSR_RCAS_PASSTHROUGH_ALPHA
+ out AH1 pixA,
+ #endif
+ AU2 ip, // Integer pixel position in output.
+ AU4 con){ // Constant generated by RcasSetup().
+ // Sharpening algorithm uses minimal 3x3 pixel neighborhood.
+ // b
+ // d e f
+ // h
+ ASW2 sp=ASW2(ip);
+ AH3 b=FsrRcasLoadH(sp+ASW2( 0,-1)).rgb;
+ AH3 d=FsrRcasLoadH(sp+ASW2(-1, 0)).rgb;
+ #ifdef FSR_RCAS_PASSTHROUGH_ALPHA
+ AH4 ee=FsrRcasLoadH(sp);
+ AH3 e=ee.rgb;pixA=ee.a;
+ #else
+ AH3 e=FsrRcasLoadH(sp).rgb;
+ #endif
+ AH3 f=FsrRcasLoadH(sp+ASW2( 1, 0)).rgb;
+ AH3 h=FsrRcasLoadH(sp+ASW2( 0, 1)).rgb;
+ // Rename (32-bit) or regroup (16-bit).
+ AH1 bR=b.r;
+ AH1 bG=b.g;
+ AH1 bB=b.b;
+ AH1 dR=d.r;
+ AH1 dG=d.g;
+ AH1 dB=d.b;
+ AH1 eR=e.r;
+ AH1 eG=e.g;
+ AH1 eB=e.b;
+ AH1 fR=f.r;
+ AH1 fG=f.g;
+ AH1 fB=f.b;
+ AH1 hR=h.r;
+ AH1 hG=h.g;
+ AH1 hB=h.b;
+ // Run optional input transform.
+ FsrRcasInputH(bR,bG,bB);
+ FsrRcasInputH(dR,dG,dB);
+ FsrRcasInputH(eR,eG,eB);
+ FsrRcasInputH(fR,fG,fB);
+ FsrRcasInputH(hR,hG,hB);
+ // Luma times 2.
+ AH1 bL=bB*AH1_(0.5)+(bR*AH1_(0.5)+bG);
+ AH1 dL=dB*AH1_(0.5)+(dR*AH1_(0.5)+dG);
+ AH1 eL=eB*AH1_(0.5)+(eR*AH1_(0.5)+eG);
+ AH1 fL=fB*AH1_(0.5)+(fR*AH1_(0.5)+fG);
+ AH1 hL=hB*AH1_(0.5)+(hR*AH1_(0.5)+hG);
+ // Noise detection.
+ AH1 nz=AH1_(0.25)*bL+AH1_(0.25)*dL+AH1_(0.25)*fL+AH1_(0.25)*hL-eL;
+ nz=ASatH1(abs(nz)*APrxMedRcpH1(AMax3H1(AMax3H1(bL,dL,eL),fL,hL)-AMin3H1(AMin3H1(bL,dL,eL),fL,hL)));
+ nz=AH1_(-0.5)*nz+AH1_(1.0);
+ // Min and max of ring.
+ AH1 mn4R=min(AMin3H1(bR,dR,fR),hR);
+ AH1 mn4G=min(AMin3H1(bG,dG,fG),hG);
+ AH1 mn4B=min(AMin3H1(bB,dB,fB),hB);
+ AH1 mx4R=max(AMax3H1(bR,dR,fR),hR);
+ AH1 mx4G=max(AMax3H1(bG,dG,fG),hG);
+ AH1 mx4B=max(AMax3H1(bB,dB,fB),hB);
+ // Immediate constants for peak range.
+ AH2 peakC=AH2(1.0,-1.0*4.0);
+ // Limiters, these need to be high precision RCPs.
+ AH1 hitMinR=min(mn4R,eR)*ARcpH1(AH1_(4.0)*mx4R);
+ AH1 hitMinG=min(mn4G,eG)*ARcpH1(AH1_(4.0)*mx4G);
+ AH1 hitMinB=min(mn4B,eB)*ARcpH1(AH1_(4.0)*mx4B);
+ AH1 hitMaxR=(peakC.x-max(mx4R,eR))*ARcpH1(AH1_(4.0)*mn4R+peakC.y);
+ AH1 hitMaxG=(peakC.x-max(mx4G,eG))*ARcpH1(AH1_(4.0)*mn4G+peakC.y);
+ AH1 hitMaxB=(peakC.x-max(mx4B,eB))*ARcpH1(AH1_(4.0)*mn4B+peakC.y);
+ AH1 lobeR=max(-hitMinR,hitMaxR);
+ AH1 lobeG=max(-hitMinG,hitMaxG);
+ AH1 lobeB=max(-hitMinB,hitMaxB);
+ AH1 lobe=max(AH1_(-FSR_RCAS_LIMIT),min(AMax3H1(lobeR,lobeG,lobeB),AH1_(0.0)))*AH2_AU1(con.y).x;
+ // Apply noise removal.
+ #ifdef FSR_RCAS_DENOISE
+ lobe*=nz;
+ #endif
+ // Resolve, which needs the medium precision rcp approximation to avoid visible tonality changes.
+ AH1 rcpL=APrxMedRcpH1(AH1_(4.0)*lobe+AH1_(1.0));
+ pixR=(lobe*bR+lobe*dR+lobe*hR+lobe*fR+eR)*rcpL;
+ pixG=(lobe*bG+lobe*dG+lobe*hG+lobe*fG+eG)*rcpL;
+ pixB=(lobe*bB+lobe*dB+lobe*hB+lobe*fB+eB)*rcpL;}
+#endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+// PACKED 16-BIT VERSION
+//==============================================================================================================================
+#if defined(A_GPU)&&defined(A_HALF)&&defined(FSR_RCAS_HX2)
+ // Input callback prototypes that need to be implemented by the calling shader
+ AH4 FsrRcasLoadHx2(ASW2 p);
+ void FsrRcasInputHx2(inout AH2 r,inout AH2 g,inout AH2 b);
+//------------------------------------------------------------------------------------------------------------------------------
+ // Can be used to convert from packed Structures of Arrays to Arrays of Structures for store.
+ void FsrRcasDepackHx2(out AH4 pix0,out AH4 pix1,AH2 pixR,AH2 pixG,AH2 pixB){
+ #ifdef A_HLSL
+ // Invoke a slower path for DX only, since it won't allow uninitialized values.
+ pix0.a=pix1.a=0.0;
+ #endif
+ pix0.rgb=AH3(pixR.x,pixG.x,pixB.x);
+ pix1.rgb=AH3(pixR.y,pixG.y,pixB.y);}
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrRcasHx2(
+ // Output values are for 2 8x8 tiles in a 16x8 region.
+ // pix.x = left 8x8 tile
+ // pix.y = right 8x8 tile
+ // This enables later processing to easily be packed as well.
+ out AH2 pixR,
+ out AH2 pixG,
+ out AH2 pixB,
+ #ifdef FSR_RCAS_PASSTHROUGH_ALPHA
+ out AH2 pixA,
+ #endif
+ AU2 ip, // Integer pixel position in output.
+ AU4 con){ // Constant generated by RcasSetup().
+    // No scaling; the sharpening algorithm uses a minimal 3x3 pixel neighborhood.
+ ASW2 sp0=ASW2(ip);
+ AH3 b0=FsrRcasLoadHx2(sp0+ASW2( 0,-1)).rgb;
+ AH3 d0=FsrRcasLoadHx2(sp0+ASW2(-1, 0)).rgb;
+ #ifdef FSR_RCAS_PASSTHROUGH_ALPHA
+ AH4 ee0=FsrRcasLoadHx2(sp0);
+ AH3 e0=ee0.rgb;pixA.r=ee0.a;
+ #else
+ AH3 e0=FsrRcasLoadHx2(sp0).rgb;
+ #endif
+ AH3 f0=FsrRcasLoadHx2(sp0+ASW2( 1, 0)).rgb;
+ AH3 h0=FsrRcasLoadHx2(sp0+ASW2( 0, 1)).rgb;
+ ASW2 sp1=sp0+ASW2(8,0);
+ AH3 b1=FsrRcasLoadHx2(sp1+ASW2( 0,-1)).rgb;
+ AH3 d1=FsrRcasLoadHx2(sp1+ASW2(-1, 0)).rgb;
+ #ifdef FSR_RCAS_PASSTHROUGH_ALPHA
+ AH4 ee1=FsrRcasLoadHx2(sp1);
+ AH3 e1=ee1.rgb;pixA.g=ee1.a;
+ #else
+ AH3 e1=FsrRcasLoadHx2(sp1).rgb;
+ #endif
+ AH3 f1=FsrRcasLoadHx2(sp1+ASW2( 1, 0)).rgb;
+ AH3 h1=FsrRcasLoadHx2(sp1+ASW2( 0, 1)).rgb;
+ // Arrays of Structures to Structures of Arrays conversion.
+ AH2 bR=AH2(b0.r,b1.r);
+ AH2 bG=AH2(b0.g,b1.g);
+ AH2 bB=AH2(b0.b,b1.b);
+ AH2 dR=AH2(d0.r,d1.r);
+ AH2 dG=AH2(d0.g,d1.g);
+ AH2 dB=AH2(d0.b,d1.b);
+ AH2 eR=AH2(e0.r,e1.r);
+ AH2 eG=AH2(e0.g,e1.g);
+ AH2 eB=AH2(e0.b,e1.b);
+ AH2 fR=AH2(f0.r,f1.r);
+ AH2 fG=AH2(f0.g,f1.g);
+ AH2 fB=AH2(f0.b,f1.b);
+ AH2 hR=AH2(h0.r,h1.r);
+ AH2 hG=AH2(h0.g,h1.g);
+ AH2 hB=AH2(h0.b,h1.b);
+ // Run optional input transform.
+ FsrRcasInputHx2(bR,bG,bB);
+ FsrRcasInputHx2(dR,dG,dB);
+ FsrRcasInputHx2(eR,eG,eB);
+ FsrRcasInputHx2(fR,fG,fB);
+ FsrRcasInputHx2(hR,hG,hB);
+ // Luma times 2.
+ AH2 bL=bB*AH2_(0.5)+(bR*AH2_(0.5)+bG);
+ AH2 dL=dB*AH2_(0.5)+(dR*AH2_(0.5)+dG);
+ AH2 eL=eB*AH2_(0.5)+(eR*AH2_(0.5)+eG);
+ AH2 fL=fB*AH2_(0.5)+(fR*AH2_(0.5)+fG);
+ AH2 hL=hB*AH2_(0.5)+(hR*AH2_(0.5)+hG);
+ // Noise detection.
+ AH2 nz=AH2_(0.25)*bL+AH2_(0.25)*dL+AH2_(0.25)*fL+AH2_(0.25)*hL-eL;
+ nz=ASatH2(abs(nz)*APrxMedRcpH2(AMax3H2(AMax3H2(bL,dL,eL),fL,hL)-AMin3H2(AMin3H2(bL,dL,eL),fL,hL)));
+ nz=AH2_(-0.5)*nz+AH2_(1.0);
+ // Min and max of ring.
+ AH2 mn4R=min(AMin3H2(bR,dR,fR),hR);
+ AH2 mn4G=min(AMin3H2(bG,dG,fG),hG);
+ AH2 mn4B=min(AMin3H2(bB,dB,fB),hB);
+ AH2 mx4R=max(AMax3H2(bR,dR,fR),hR);
+ AH2 mx4G=max(AMax3H2(bG,dG,fG),hG);
+ AH2 mx4B=max(AMax3H2(bB,dB,fB),hB);
+ // Immediate constants for peak range.
+ AH2 peakC=AH2(1.0,-1.0*4.0);
+ // Limiters, these need to be high precision RCPs.
+ AH2 hitMinR=min(mn4R,eR)*ARcpH2(AH2_(4.0)*mx4R);
+ AH2 hitMinG=min(mn4G,eG)*ARcpH2(AH2_(4.0)*mx4G);
+ AH2 hitMinB=min(mn4B,eB)*ARcpH2(AH2_(4.0)*mx4B);
+ AH2 hitMaxR=(peakC.x-max(mx4R,eR))*ARcpH2(AH2_(4.0)*mn4R+peakC.y);
+ AH2 hitMaxG=(peakC.x-max(mx4G,eG))*ARcpH2(AH2_(4.0)*mn4G+peakC.y);
+ AH2 hitMaxB=(peakC.x-max(mx4B,eB))*ARcpH2(AH2_(4.0)*mn4B+peakC.y);
+ AH2 lobeR=max(-hitMinR,hitMaxR);
+ AH2 lobeG=max(-hitMinG,hitMaxG);
+ AH2 lobeB=max(-hitMinB,hitMaxB);
+ AH2 lobe=max(AH2_(-FSR_RCAS_LIMIT),min(AMax3H2(lobeR,lobeG,lobeB),AH2_(0.0)))*AH2_(AH2_AU1(con.y).x);
+ // Apply noise removal.
+ #ifdef FSR_RCAS_DENOISE
+ lobe*=nz;
+ #endif
+ // Resolve, which needs the medium precision rcp approximation to avoid visible tonality changes.
+ AH2 rcpL=APrxMedRcpH2(AH2_(4.0)*lobe+AH2_(1.0));
+ pixR=(lobe*bR+lobe*dR+lobe*hR+lobe*fR+eR)*rcpL;
+ pixG=(lobe*bG+lobe*dG+lobe*hG+lobe*fG+eG)*rcpL;
+ pixB=(lobe*bB+lobe*dB+lobe*hB+lobe*fB+eB)*rcpL;}
+#endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+// FSR - [LFGA] LINEAR FILM GRAIN APPLICATOR
+//
+//------------------------------------------------------------------------------------------------------------------------------
+// Adding output-resolution film grain after scaling is a good way to mask both rendering and scaling artifacts.
+// Suggest using tiled blue noise as film grain input, with peak noise frequency set for a specific look and feel.
+// The 'Lfga*()' functions provide a convenient way to introduce grain.
+// These functions limit grain based on distance to signal limits.
+// This is done so that the grain is temporally energy preserving, and thus won't modify image tonality.
+// Grain application should be done in a linear colorspace.
+// The grain should be temporally changing, but have a temporal sum per pixel that adds to zero (non-biased).
+//------------------------------------------------------------------------------------------------------------------------------
+// Usage,
+// FsrLfga*(
+// color, // In/out linear colorspace color {0 to 1} ranged.
+// grain, // Per pixel grain texture value {-0.5 to 0.5} ranged, input is 3-channel to support colored grain.
+//  amount); // Amount of grain {0 to 1} ranged.
+//------------------------------------------------------------------------------------------------------------------------------
+// Example if grain texture is monochrome: 'FsrLfgaF(color,AF3_(grain),amount)'
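+// As a numeric illustration: with color = 0.9, grain = 0.5 and amount = 1.0, min(1-0.9,0.9) = 0.1, so
+// FsrLfgaF() outputs 0.9 + 0.5*1.0*0.1 = 0.95; the grain amplitude shrinks near the {0 to 1} signal
+// limits, which is how the energy-preserving limiting described above is enforced.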
+//==============================================================================================================================
+#if defined(A_GPU)
+ // Maximum grain is the minimum distance to the signal limit.
+ void FsrLfgaF(inout AF3 c,AF3 t,AF1 a){c+=(t*AF3_(a))*min(AF3_(1.0)-c,c);}
+#endif
+//==============================================================================================================================
+#if defined(A_GPU)&&defined(A_HALF)
+ // Half precision version (slower).
+ void FsrLfgaH(inout AH3 c,AH3 t,AH1 a){c+=(t*AH3_(a))*min(AH3_(1.0)-c,c);}
+//------------------------------------------------------------------------------------------------------------------------------
+ // Packed half precision version (faster).
+ void FsrLfgaHx2(inout AH2 cR,inout AH2 cG,inout AH2 cB,AH2 tR,AH2 tG,AH2 tB,AH1 a){
+ cR+=(tR*AH2_(a))*min(AH2_(1.0)-cR,cR);cG+=(tG*AH2_(a))*min(AH2_(1.0)-cG,cG);cB+=(tB*AH2_(a))*min(AH2_(1.0)-cB,cB);}
+#endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+// FSR - [SRTM] SIMPLE REVERSIBLE TONE-MAPPER
+//
+//------------------------------------------------------------------------------------------------------------------------------
+// This provides a way to take linear HDR color {0 to FP16_MAX} and convert it into a temporary {0 to 1} ranged post-tonemapped linear.
+// The tonemapper preserves RGB ratio, which helps maintain HDR color bleed during filtering.
+//------------------------------------------------------------------------------------------------------------------------------
+// Reversible tonemapper usage,
+// FsrSrtm*(color); // {0 to FP16_MAX} converted to {0 to 1}.
+// FsrSrtmInv*(color); // {0 to 1} converted into {0 to 32768, output peak safe for FP16}.
+//==============================================================================================================================
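+// Worked example: for c = {4.0, 2.0, 1.0}, AMax3 is 4.0, so FsrSrtmF() yields {0.8, 0.4, 0.2};
+// FsrSrtmInvF() then computes 1/(1.0-0.8) = 5.0 and restores {4.0, 2.0, 1.0}, illustrating the reversibility.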
+#if defined(A_GPU)
+ void FsrSrtmF(inout AF3 c){c*=AF3_(ARcpF1(AMax3F1(c.r,c.g,c.b)+AF1_(1.0)));}
+ // The extra max solves the c=1.0 case (which is a /0).
+ void FsrSrtmInvF(inout AF3 c){c*=AF3_(ARcpF1(max(AF1_(1.0/32768.0),AF1_(1.0)-AMax3F1(c.r,c.g,c.b))));}
+#endif
+//==============================================================================================================================
+#if defined(A_GPU)&&defined(A_HALF)
+ void FsrSrtmH(inout AH3 c){c*=AH3_(ARcpH1(AMax3H1(c.r,c.g,c.b)+AH1_(1.0)));}
+ void FsrSrtmInvH(inout AH3 c){c*=AH3_(ARcpH1(max(AH1_(1.0/32768.0),AH1_(1.0)-AMax3H1(c.r,c.g,c.b))));}
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrSrtmHx2(inout AH2 cR,inout AH2 cG,inout AH2 cB){
+ AH2 rcp=ARcpH2(AMax3H2(cR,cG,cB)+AH2_(1.0));cR*=rcp;cG*=rcp;cB*=rcp;}
+ void FsrSrtmInvHx2(inout AH2 cR,inout AH2 cG,inout AH2 cB){
+ AH2 rcp=ARcpH2(max(AH2_(1.0/32768.0),AH2_(1.0)-AMax3H2(cR,cG,cB)));cR*=rcp;cG*=rcp;cB*=rcp;}
+#endif
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+//_____________________________________________________________/\_______________________________________________________________
+//==============================================================================================================================
+//
+// FSR - [TEPD] TEMPORAL ENERGY PRESERVING DITHER
+//
+//------------------------------------------------------------------------------------------------------------------------------
+// Temporally energy preserving dithered {0 to 1} linear to gamma 2.0 conversion.
+// Gamma 2.0 is used so that the conversion back to linear is just to square the color.
+// The conversion comes in 8-bit and 10-bit modes, designed for output to 8-bit UNORM or 10:10:10:2 respectively.
+// Given good non-biased temporal blue noise as dither input,
+// the output dither will temporally conserve energy.
+// This is done by choosing the linear nearest step point instead of perceptual nearest.
+// See code below for details.
+//------------------------------------------------------------------------------------------------------------------------------
+// DX SPEC RULES FOR FLOAT->UNORM 8-BIT CONVERSION
+// ===============================================
+// - Output is 'uint(floor(saturate(n)*255.0+0.5))'.
+// - Thus rounding is to nearest.
+// - NaN gets converted to zero.
+// - INF is clamped to {0.0 to 1.0}.
+//==============================================================================================================================
+#if defined(A_GPU)
+ // Hand tuned integer position to dither value, with more values than simple checkerboard.
+  // Only 32-bit has enough precision for this computation.
+ // Output is {0 to <1}.
+ AF1 FsrTepdDitF(AU2 p,AU1 f){
+ AF1 x=AF1_(p.x+f);
+ AF1 y=AF1_(p.y);
+ // The 1.61803 golden ratio.
+ AF1 a=AF1_((1.0+sqrt(5.0))/2.0);
+ // Number designed to provide a good visual pattern.
+ AF1 b=AF1_(1.0/3.69);
+ x=x*a+(y*b);
+ return AFractF1(x);}
+//------------------------------------------------------------------------------------------------------------------------------
+ // This version is 8-bit gamma 2.0.
+ // The 'c' input is {0 to 1}.
+ // Output is {0 to 1} ready for image store.
+ void FsrTepdC8F(inout AF3 c,AF1 dit){
+ AF3 n=sqrt(c);
+ n=floor(n*AF3_(255.0))*AF3_(1.0/255.0);
+ AF3 a=n*n;
+ AF3 b=n+AF3_(1.0/255.0);b=b*b;
+ // Ratio of 'a' to 'b' required to produce 'c'.
+ // APrxLoRcpF1() won't work here (at least for very high dynamic ranges).
+ // APrxMedRcpF1() is an IADD,FMA,MUL.
+ AF3 r=(c-b)*APrxMedRcpF3(a-b);
+ // Use the ratio as a cutoff to choose 'a' or 'b'.
+ // AGtZeroF1() is a MUL.
+ c=ASatF3(n+AGtZeroF3(AF3_(dit)-r)*AF3_(1.0/255.0));}
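+  // Note on the energy preservation: with a uniform dither 'dit' in {0 to <1}, the lower step 'a' is chosen
+  // with probability 'r' and the upper step 'b' with probability 1-r, so the expected linear output is
+  // r*a+(1-r)*b = b+(c-b) = c, i.e. the dither is unbiased in linear space.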
+//------------------------------------------------------------------------------------------------------------------------------
+ // This version is 10-bit gamma 2.0.
+ // The 'c' input is {0 to 1}.
+ // Output is {0 to 1} ready for image store.
+ void FsrTepdC10F(inout AF3 c,AF1 dit){
+ AF3 n=sqrt(c);
+ n=floor(n*AF3_(1023.0))*AF3_(1.0/1023.0);
+ AF3 a=n*n;
+ AF3 b=n+AF3_(1.0/1023.0);b=b*b;
+ AF3 r=(c-b)*APrxMedRcpF3(a-b);
+ c=ASatF3(n+AGtZeroF3(AF3_(dit)-r)*AF3_(1.0/1023.0));}
+#endif
+//==============================================================================================================================
+#if defined(A_GPU)&&defined(A_HALF)
+ AH1 FsrTepdDitH(AU2 p,AU1 f){
+ AF1 x=AF1_(p.x+f);
+ AF1 y=AF1_(p.y);
+ AF1 a=AF1_((1.0+sqrt(5.0))/2.0);
+ AF1 b=AF1_(1.0/3.69);
+ x=x*a+(y*b);
+ return AH1(AFractF1(x));}
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrTepdC8H(inout AH3 c,AH1 dit){
+ AH3 n=sqrt(c);
+ n=floor(n*AH3_(255.0))*AH3_(1.0/255.0);
+ AH3 a=n*n;
+ AH3 b=n+AH3_(1.0/255.0);b=b*b;
+ AH3 r=(c-b)*APrxMedRcpH3(a-b);
+ c=ASatH3(n+AGtZeroH3(AH3_(dit)-r)*AH3_(1.0/255.0));}
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrTepdC10H(inout AH3 c,AH1 dit){
+ AH3 n=sqrt(c);
+ n=floor(n*AH3_(1023.0))*AH3_(1.0/1023.0);
+ AH3 a=n*n;
+ AH3 b=n+AH3_(1.0/1023.0);b=b*b;
+ AH3 r=(c-b)*APrxMedRcpH3(a-b);
+ c=ASatH3(n+AGtZeroH3(AH3_(dit)-r)*AH3_(1.0/1023.0));}
+//==============================================================================================================================
+ // This computes dither for positions 'p' and 'p+{8,0}'.
+ AH2 FsrTepdDitHx2(AU2 p,AU1 f){
+ AF2 x;
+ x.x=AF1_(p.x+f);
+ x.y=x.x+AF1_(8.0);
+ AF1 y=AF1_(p.y);
+ AF1 a=AF1_((1.0+sqrt(5.0))/2.0);
+ AF1 b=AF1_(1.0/3.69);
+ x=x*AF2_(a)+AF2_(y*b);
+ return AH2(AFractF2(x));}
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrTepdC8Hx2(inout AH2 cR,inout AH2 cG,inout AH2 cB,AH2 dit){
+ AH2 nR=sqrt(cR);
+ AH2 nG=sqrt(cG);
+ AH2 nB=sqrt(cB);
+ nR=floor(nR*AH2_(255.0))*AH2_(1.0/255.0);
+ nG=floor(nG*AH2_(255.0))*AH2_(1.0/255.0);
+ nB=floor(nB*AH2_(255.0))*AH2_(1.0/255.0);
+ AH2 aR=nR*nR;
+ AH2 aG=nG*nG;
+ AH2 aB=nB*nB;
+ AH2 bR=nR+AH2_(1.0/255.0);bR=bR*bR;
+ AH2 bG=nG+AH2_(1.0/255.0);bG=bG*bG;
+ AH2 bB=nB+AH2_(1.0/255.0);bB=bB*bB;
+ AH2 rR=(cR-bR)*APrxMedRcpH2(aR-bR);
+ AH2 rG=(cG-bG)*APrxMedRcpH2(aG-bG);
+ AH2 rB=(cB-bB)*APrxMedRcpH2(aB-bB);
+ cR=ASatH2(nR+AGtZeroH2(dit-rR)*AH2_(1.0/255.0));
+ cG=ASatH2(nG+AGtZeroH2(dit-rG)*AH2_(1.0/255.0));
+ cB=ASatH2(nB+AGtZeroH2(dit-rB)*AH2_(1.0/255.0));}
+//------------------------------------------------------------------------------------------------------------------------------
+ void FsrTepdC10Hx2(inout AH2 cR,inout AH2 cG,inout AH2 cB,AH2 dit){
+ AH2 nR=sqrt(cR);
+ AH2 nG=sqrt(cG);
+ AH2 nB=sqrt(cB);
+ nR=floor(nR*AH2_(1023.0))*AH2_(1.0/1023.0);
+ nG=floor(nG*AH2_(1023.0))*AH2_(1.0/1023.0);
+ nB=floor(nB*AH2_(1023.0))*AH2_(1.0/1023.0);
+ AH2 aR=nR*nR;
+ AH2 aG=nG*nG;
+ AH2 aB=nB*nB;
+ AH2 bR=nR+AH2_(1.0/1023.0);bR=bR*bR;
+ AH2 bG=nG+AH2_(1.0/1023.0);bG=bG*bG;
+ AH2 bB=nB+AH2_(1.0/1023.0);bB=bB*bB;
+ AH2 rR=(cR-bR)*APrxMedRcpH2(aR-bR);
+ AH2 rG=(cG-bG)*APrxMedRcpH2(aG-bG);
+ AH2 rB=(cB-bB)*APrxMedRcpH2(aB-bB);
+ cR=ASatH2(nR+AGtZeroH2(dit-rR)*AH2_(1.0/1023.0));
+ cG=ASatH2(nG+AGtZeroH2(dit-rG)*AH2_(1.0/1023.0));
+ cB=ASatH2(nB+AGtZeroH2(dit-rB)*AH2_(1.0/1023.0));}
+#endif
diff --git a/thirdparty/amd-fsr/license.txt b/thirdparty/amd-fsr/license.txt
new file mode 100644
index 00000000000..324cba594d1
--- /dev/null
+++ b/thirdparty/amd-fsr/license.txt
@@ -0,0 +1,19 @@
+Copyright (c) 2021 Advanced Micro Devices, Inc. All rights reserved.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.