My take on shaders: Spherical mask post-processing effect

I promised it and now I’m delivering it: the spherical mask as a post-processing effect. As I mentioned in the spherical mask dissolve post, and many other posts before that, I am super lazy. I don’t like having to apply an effect to a bunch of different materials and then assign those materials to a bunch of different objects in the scene. Therefore, after I worked out the spherical mask for object shaders (like the one in the aforementioned post), my next thought was “ok, this is cool, but I want it in screen space”. So I tried to figure out how that would work. I knew I wouldn’t be able to do a bunch of effects, like dissolve, since whatever effect there is gets applied to the generated texture that’s already on the screen, so there’s no geometry and stuff to play with at this point (to put it really roughly).

The first step was to, well, get the world position through a screen-space effect. After a bunch of digging I found that some rendering engines have a “position texture” you can read from to get the world position of each pixel on the screen! Neat, huh? Now guess which popular game engine does *not* have that feature. Yes, Unity does *not* have that feature. The alternative people were suggesting was to get the camera’s depth and reconstruct the world position of each pixel based on that. So I was like “I know how to get the camera’s depth!” The next step was the reconstruction part, for which I ran into this awesome forum thread and just plugged its code into the effect.
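
As a quick aside on the depth part: the camera won’t generate a depth texture unless you ask it to. A minimal sketch of how you’d request it from a script (the controller shown later does exactly this in its OnEnable) could look like the following; the class name here is made up purely for illustration:

using UnityEngine;

//Minimal sketch: ask Unity to render the camera's depth texture so that
//shaders can sample _CameraDepthTexture. Class name is hypothetical.
[RequireComponent (typeof (Camera))]
public class EnableDepthTexture : MonoBehaviour {

	void OnEnable () {
		GetComponent<Camera> ().depthTextureMode = DepthTextureMode.Depth;
	}
}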

Now, I’ll be honest with you. The effect is not always stable, and it seems as if there is some kind of position offset between the editor and a recording or a build. But still, when it works as expected it’s kinda cool, and it introduces some interesting technical elements (which I’m not really suited to explain). A more robust implementation of the same concept has been done by Keijiro Takahashi and can be found in this repository.

As indicated by the featured image, this specific effect works by returning the luminance value of the pixels inside a spherical mask, instead of their actual color value. Of course, this masking can again be used for a bunch of other effects, but since we’re in screen space, we’re mostly restricted to working with the colors of the scene. Even if we wanted to use textures for, say, some displacement, they’d either have to be in screen space (which kinda ruins the illusion) or in world space, which means the texture would have to be tri-planarly mapped in world space by the post-processing effect. It’s definitely doable and actually relies on the same principles, but it’s a bit of a hassle. A good write-up on something like this can be found in this blog post by the Knights of Unity.

Shader

Enough chit-chat, let’s get into the shader code:

Shader "Hidden/SphericalMaskPP"
{
	Properties
	{
		_MainTex ("Texture", 2D) = "white" {}
		_Position("Position", vector) = (0,0,0,0)
		_Radius("Radius", float) = 0.5
		_Softness("Softness", float) = 0.5
	}
	SubShader
	{
		// No culling or depth
		Cull Off ZWrite Off ZTest Always

		Pass
		{
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			
			#include "UnityCG.cginc"

			struct appdata
			{
				float4 vertex : POSITION;
				float2 uv : TEXCOORD0;
			};

			struct v2f
			{
				float2 uv : TEXCOORD0;
				float3 worldDirection : TEXCOORD1;
				float4 vertex : SV_POSITION;
			};
			
			float4x4 _ClipToWorld;

			v2f vert (appdata v)
			{
				v2f o;
				o.vertex = UnityObjectToClipPos(v.vertex);
				o.uv = v.uv;

				float4 clip = float4(o.vertex.xy, 0.0, 1.0);
				o.worldDirection = mul(_ClipToWorld, clip) - _WorldSpaceCameraPos;
				return o;
			}
			
			sampler2D _MainTex;

			sampler2D _CameraDepthTexture;

			float4 _Position;
			half _Radius;
			half _Softness;

			fixed4 frag (v2f i) : SV_Target
			{
				//Get the depth of the camera
				float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv.xy);
				depth = LinearEyeDepth(depth);
			
				//Get the different colors
				fixed4 col = tex2D(_MainTex, i.uv);
				half lum = Luminance(col.rgb);
				fixed4 colGray = fixed4(lum,lum,lum,1);

				//Calculate world position and distance from the spherical mask
				float3 wpos = i.worldDirection * depth + _WorldSpaceCameraPos;
				half d = distance(_Position, wpos);
				half sum = saturate((d - _Radius) / _Softness);
				fixed4 finalColor = lerp(colGray, col, sum);

				return finalColor;
			}
			ENDCG
		}
	}
}

The shader itself is fairly simple, with some tricky and interesting bits here and there. The properties are pretty straightforward, especially if you’ve read the spherical mask dissolve post. “_Position” is a vector representing the world-space position of the spherical mask, “_Radius” is its radius and “_Softness” determines how crisp or soft the sphere’s edges are. Do keep in mind that a negative value in the “_Softness” property will invert the mask, and that can create some interesting effects as well.

In line 32 I add another field to the v2f struct, in order to pass the camera-to-pixel world-space direction from the vertex to the fragment shader. I also declare a 4×4 matrix in line 36, used for the world-space position reconstruction. Its value is assigned via the controller script (which I show below). Then, in lines 44-45 I calculate the world direction, as shown in the shader from the aforementioned Unity forum post. The concept is that after I get the matrix that converts clip-space positions to world-space positions, I multiply it by the pixel’s clip-space position and subtract the camera’s world position from the result, which gives a vector pointing from the camera towards that pixel in world space.
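
If it helps, here are those two lines again, broken into intermediate variables purely for readability; it’s the exact same math as in the shader above:

//Same calculation as lines 44-45, just with intermediate names for clarity
float4 clip = float4(o.vertex.xy, 0.0, 1.0);           //the pixel's clip-space xy on a fixed plane
float3 worldPoint = mul(_ClipToWorld, clip).xyz;       //that clip position brought into world space
o.worldDirection = worldPoint - _WorldSpaceCameraPos;  //vector from the camera towards the pixel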

After that, I redeclare the properties from the properties block, along with “_CameraDepthTexture” which is used to get the camera’s depth, and I move on to the fragment shader. In lines 60-61 I sample the depth texture to get the camera’s linear eye depth. Fun fact: something happens that I did not expect if you replace “LinearEyeDepth(depth)” with “Linear01Depth(depth)”, but I’ll let you discover it on your own. 😉

Lines 64-66 are just to get the camera’s original color and a grayscale version of that using the “Luminance” method. Nothing exciting here.

The magic happens in lines 69-72. Specifically, in line 69 (nice) I calculate the precious world position. This is what it’s all about, really. The high-level explanation of this technique (or at least my take on it) is this: in the vertex shader we got a vector pointing from the camera towards each pixel in world space, called “worldDirection”. The linear eye depth then tells us how far away from the camera the surface behind that pixel actually is. So if you take the “worldDirection” vector (which is deliberately not normalized, since its scale is what makes the multiplication by depth work out) and multiply it by the eye depth, you get the world-space offset of the pixel from the camera. Add that to the camera’s world-space position and you’ve got the pixel’s world-space position.
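
Spelled out as a couple of commented lines (the same thing line 69 does in one go), the reconstruction looks like this:

//World position reconstruction in a nutshell: start at the camera and travel
//along the per-pixel world-space ray, scaled by how far away the surface is
float3 offset = i.worldDirection * depth;      //camera-to-pixel offset in world space
float3 wpos = offset + _WorldSpaceCameraPos;   //the pixel's world-space position
//At depth ~0 this collapses onto the camera itself; the farther the surface,
//the farther along the (unnormalized) direction vector we end up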

After the world position is recreated, we check whether the pixel’s position is within the spherical mask, the exact same way as with the spherical mask dissolve shader. So, if you need any hints about how that works, you can check out that post. Then I just use lerp to mix the two color values according to the sphere mask and return the result. Simple as that.
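
Just to make the masking math concrete, here are the relevant lines from the shader again, with the three possible cases (and the negative-softness inversion mentioned earlier) spelled out in comments:

//sum = saturate((d - _Radius) / _Softness), where d is the pixel's distance from the mask center:
//  d <  _Radius              -> sum = 0 -> fully grayscale (inside the sphere)
//  d >  _Radius + _Softness  -> sum = 1 -> original color (outside the sphere)
//  anything in between       -> a smooth 0-to-1 blend across the soft edge
//With a negative _Softness the quotient flips sign, so the inside saturates
//towards 1 and the outside towards 0 - the mask gets inverted
half sum = saturate((d - _Radius) / _Softness);
fixed4 finalColor = lerp(colGray, col, sum);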

Controller

Since this ain’t your regular old effect next door, the classic custom image effect script won’t cut it. Some calculations involving the camera’s projection and view matrices need to be performed on the script side and, thus, the controller script looks like this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

#if UNITY_EDITOR
using UnityEditor;
#endif

[ExecuteInEditMode]
[ImageEffectAllowedInSceneView]
public class SphericalMaskPPController : MonoBehaviour {

	public Material material;
	public Vector3 spherePosition;

	public float radius = 0.5f;
	public float softness = 0.5f;

	private Camera camera;
	void OnEnable () {
		camera = GetComponent<Camera> ();
		camera.depthTextureMode = DepthTextureMode.Depth;
	}

	void OnRenderImage (RenderTexture src, RenderTexture dest) {
		if (material == null) return;
		//Build the clip-to-world matrix the shader needs for the world position
		//reconstruction (this part is taken from the forum post mentioned earlier)
		var p = GL.GetGPUProjectionMatrix (camera.projectionMatrix, false);
		p[2, 3] = p[3, 2] = 0.0f;
		p[3, 3] = 1.0f;
		var clipToWorld = Matrix4x4.Inverse (p * camera.worldToCameraMatrix) * Matrix4x4.TRS (new Vector3 (0, 0, -p[2, 2]), Quaternion.identity, Vector3.one);
		material.SetMatrix ("_ClipToWorld", clipToWorld);
		//Pass the mask parameters to the shader and apply the effect
		material.SetVector ("_Position", spherePosition);
		material.SetFloat ("_Radius", radius);
		material.SetFloat ("_Softness", softness);
		Graphics.Blit (src, dest, material);
	}

}

#if UNITY_EDITOR
[CustomEditor (typeof (SphericalMaskPPController))]
public class SphericalMaskPPControllerEditor : Editor {
	private void OnSceneGUI () {
		SphericalMaskPPController controller = target as SphericalMaskPPController;
		Vector3 spherePosition = controller.spherePosition;
		EditorGUI.BeginChangeCheck ();
		spherePosition = Handles.DoPositionHandle (spherePosition, Quaternion.identity);
		if (EditorGUI.EndChangeCheck ()) {
			Undo.RecordObject (controller, "Move sphere pos");
			EditorUtility.SetDirty (controller);
			controller.spherePosition = spherePosition;
		}

		Handles.DrawWireDisc (controller.spherePosition, Vector3.up, controller.radius);
		Handles.DrawWireDisc (controller.spherePosition, Vector3.forward, controller.radius);
		Handles.DrawWireDisc (controller.spherePosition, Vector3.right, controller.radius);
	}
}
#endif

The code in “OnRenderImage” is taken as-is from that forum post, and I’d be lying if I said I knew exactly what it does. I just figured that it kinda works, so let’s leave it at that. Besides that, the script applies the effect to the camera and passes the exposed values to the properties of the shader. So, in order to use it, one has to create a material using the shader above, attach the script to the camera and pass the created material to the controller component. There’s also some editor scripting in there to better visualize the position and size of the sphere, as well as to move it around in world space without having to manually input a vector. When you select the camera with this script attached, a second move handle will appear, indicating the position of the spherical mask. Do keep in mind, also, that since the effect is applied in edit mode too, the script overrides the position, radius and softness properties of the material. They could actually be removed from the “Properties” block of the shader, but I decided to keep them for clarity.
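
And since the whole point of exposing these values is to drive them at runtime, here’s a hypothetical usage sketch: a separate component that follows a transform and grows the mask over time. Everything in it except SphericalMaskPPController is made up for the sake of the example:

using UnityEngine;

//Hypothetical example: grows the spherical mask from a point of interest
public class SphericalMaskGrower : MonoBehaviour {

	public SphericalMaskPPController maskController; //the controller sitting on the camera
	public Transform origin;                         //where the effect should start from
	public float growSpeed = 5f;                     //world units per second

	void Update () {
		if (maskController == null || origin == null) return;
		maskController.spherePosition = origin.position;
		maskController.radius += growSpeed * Time.deltaTime;
	}
}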

I think that’s it for now, at least I can’t think of anything else on this effect. So, see you in the next one!




Disclaimer

The code in the shader tutorials is under the CC0 license, so you can freely use it in your commercial or non-commercial projects without the need for any attribution/credits.

These posts will never go behind a paywall. But if you really really super enjoyed this post, consider buying me a coffee, or becoming my patron to get exciting benefits!

Or don't, I'm not the boss of you.
