
Percentage Closer Soft Shadows (combined with jump flood): how to correctly estimate the penumbra size (diagrams inside)

Started by evelyn4you
20 comments, last by JoeJ 2 years, 1 month ago

Hi, I am working on softening my raytraced, pixel-perfect hard shadows.

I don't want to use a sample-and-denoise technique; instead I use a jump flood technique.

Jump flood:
a. for every pixel in the hard shadow texture, the pixel coordinate of the NEAREST lit pixel is calculated.
b. if the distance of a shadow pixel is 1 pixel, it is a border pixel between shadow and light.
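A minimal 1D Python sketch of that jump flood pass (illustrative only; the real version runs as a shader over a 2D texture, and JFA is approximate in rare corner cases):

```python
def jump_flood_nearest_lit(shadow_mask):
    """For each pixel, find the index of the nearest lit pixel.

    shadow_mask: list of bools, True = lit, False = in shadow.
    Returns one nearest-lit index per pixel (None if nothing is lit).
    """
    n = len(shadow_mask)
    # Seed pass: lit pixels point to themselves, shadow pixels are unseeded.
    nearest = [i if lit else None for i, lit in enumerate(shadow_mask)]
    # Start with the largest power-of-two step below n, then halve each pass.
    step = 1
    while step * 2 < n:
        step *= 2
    while step >= 1:
        new = nearest[:]
        for i in range(n):
            # Look at the neighbors 'step' pixels away and keep the closer seed.
            for j in (i - step, i + step):
                if 0 <= j < n and nearest[j] is not None:
                    if new[i] is None or abs(nearest[j] - i) < abs(new[i] - i):
                        new[i] = nearest[j]
        nearest = new
        step //= 2
    return nearest
```

A pixel whose nearest-lit index is 1 away is exactly the border pixel from case b above.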

In the following diagrams you can see how this information is used, via reprojection, to calculate the penumbra size of the shadow pixels:
- for every shadow pixel, the coordinate of the nearest lit pixel is read
- the distance between the WORLD coordinates of these two pixels is calculated
- this world-space distance, which drives the penumbra size, is proportional to the distance in pixels in screen space

With an analytical formula, without blurring pixels, we can get a nice soft shadow with large penumbra sizes.

It works so far, but I cannot correctly estimate the penumbra size, because there are REAL border pixels and FALSE border pixels, which leads to a white corona (soft shadow) around objects.

I did a lot of work to make some diagrams that show exactly what is going on.

Could someone please help me?


evelyn4you said:
I did a lot of work to make some diagrams that show exactly what is going on.

Unfortunately, even after spending a lot of time looking at the pictures, I'm just confused ; )

I think you should post a render too, containing valid and failed cases. Then it might start to make sense…

Hi JoeJ,

a post of a render will unfortunately not show what the problem is. You are a professional, so the first diagram will be clear to you.

1. Raytrace the scene and build a G-buffer.
2. The depth values are used to reconstruct world positions.
3. Casting a ray from each world position in the direction of the directional light gives NOT a typical shadow map but a hard shadow texture with the same resolution as the G-buffer.
4. The dark areas show the HARD shadow result in the “raytraced screen hard shadow texture”.
5. For understanding, I only show the 2D case of the “raytraced screen hard shadow texture”, which is just a line.
6. Via jump flood I can find the border (edge) pixels between HARD shadow and full light.
7. In our case we find only 2 regions, each with two border pixels, which mark the beginning and the end of the “lit area”.
8. By reprojecting these pixels we can find the world coordinates of the border pixels, so we know where in world space the light begins and where it ends.
9. Now we want to soften our shadows.
10. The diagram shows the OK result of the softening process in the lower part of the picture.
11. Only shadow pixels are processed.
12. Every shadow pixel tells me how far it is from the nearest lit pixel, so I can calculate the penumbra value.
13. But in the upper part of the picture the same process produces soft shadows where no soft shadow should be.
14. I know the problem is a depth discontinuity between the world coordinates, but I cannot find a good formula to detect the problematic cases.

E.g. I calculate:

float depth_ratio = scene_pos_Light_pixel.z / scene_pos_Seed_pixel.z;

if ((depth_ratio > 0.99) && (depth_ratio < 1.01))
{
    calculate_soft_shadow_intensity();
}

But depending on the scene, the right ratio thresholds change.
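In sketch form, the per-pixel logic described in steps 11-14 looks roughly like this (a Python sketch of the shader logic; the names and the smoothstep falloff are illustrative, not the actual shader code):

```python
def same_surface(z_light, z_seed, tol=0.01):
    """The fragile gate from above: accept the nearest lit pixel only if
    its view depth is within +/- tol (relative) of the shadow pixel's."""
    ratio = z_light / z_seed
    return 1.0 - tol < ratio < 1.0 + tol

def soft_shadow_intensity(world_dist_to_lit, penumbra_width):
    """Analytic falloff from the world-space distance to the nearest lit
    point: 0 at the lit border, 1 deep inside the umbra (smoothstep)."""
    t = min(max(world_dist_to_lit / penumbra_width, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```

The scene dependence is visible in `same_surface`: a fixed relative tolerance maps to very different world-space gaps depending on the absolute depth.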




OK, I think it's clear now.

So the problem is: for pixels above the yellow one we find the very distant lit point from the blue arrow, while what we want is the shadowed (but occluded) point following the red circle, correct?

Well, the first idea I have is: if the found point is distant and likely not on the same surface, we can assume the relevant but occluded neighborhood is all in shadow. Of course this will give us artifacts similar to any other screenspace technique suffering from missing information.
And because of that, the cost of raytracing isn't well spent. : /
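That fallback idea could be sketched like this in Python (the threshold, names, and linear falloff are made up; the point is only the rejection branch):

```python
def shadow_with_fallback(world_dist_to_lit, screen_dist_world_equiv,
                         penumbra_width, max_gap):
    """If the nearest lit point is much farther in world space than the
    screen-space pixel distance would suggest, the two pixels are likely
    on different surfaces: assume the occluded neighborhood is all in
    shadow instead of producing a bogus penumbra.

    world_dist_to_lit:       world distance to the found lit point
    screen_dist_world_equiv: world distance the pixel distance implies
                             for a continuous surface
    Returns shadow intensity in [0, 1], 1 = fully shadowed.
    """
    if world_dist_to_lit - screen_dist_world_equiv > max_gap:
        return 1.0  # depth discontinuity: treat as fully shadowed
    return min(world_dist_to_lit / penumbra_width, 1.0)
```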

A promising solution to such problems is stochastic depth buffers. Maybe this paper gives some inspiration.

However, my gut feeling is: denoising would be less pain than trying to come up with hacks. AMD has open source on their GPUOpen site.
I mean, even if you could solve this robustly and reliably, there are still some other problems:
1. The hard shadow does not contain any information about the area light we try to approximate.
2. Processing only shadow pixels isn't right. The border between light and shadow should most probably be in 50% shadow, but you'll have it 100% lit, so you'll end up with less shadow than there should be.
EDIT: 3. Distance to the closest lit surface is just a heuristic to guarantee a smooth transition; it does not really approximate general area lights.
You're aware of that, I guess, but those are still arguments. Conventional Monte Carlo + denoising would get those things right.

But it's not that I don't like the idea of blurring shadows in screenspace to fake soft shadows. I'm just no expert on shadows, and can't really help.
The first time I saw the idea was here, in this impressive project. On his YT channel there are more videos, and here we can see some masks / blurring he did:

UE5 also does this, afaik, to turn their almost pixel-perfect virtual shadow maps into faked soft shadows. So that's another resource to check out.

I just briefly ran through it - but I don't think your first image is correct.

A few definitions:

  • I defined the light as the point Light - the light is an area light between LightRadiusA and LightRadiusB (the sphere is just representing the radius - the point is, the original point light is in the center of the area light)
    • Note: You could go for spherical lights, but the geometry would be different and the falloff not ‘linear’, as the penumbra should really be calculated as the ratio of the visible surface of the light from point x (taking occluders into account) over the visible surface of the light from point x (without taking occluders into account).
    • Note: While you could have the original point light at the position of e.g. LightRadiusB, you have to keep in mind that the hard shadow does NOT begin at the start or end of the penumbra region, but at its center - the moment you start taking area lights into account you no longer consider ‘Light’ at all; only LightRadiusA and LightRadiusB are your concern
  • The original hard shadow is between points A and B (HardShadowAB)
  • The soft shadow is between PenA and PenB, with full hard shadow between UmA and UmB … the original hard shadow runs from A to B (in this case from the center of PenumbraA to the center of PenumbraB)

Back to your image - your original hard shadow point would be at (Pbw + Psw) / 2.

Note: Your penumbra math won't be accurate, though - I'd like to explain, but I need to drop off right now (and the math is a bit more complex than fits a 5-minute post). I'll probably get a bit of time during the day to put it together.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Hi JoeJ and Vilem Otte,

many thanks for your comments.

I am aware of the problem that my technique does NOT correctly grow the penumbra region in BOTH directions (toward the shadow side and toward the lit side).

But if you don't have a direct side-by-side comparison, the picture will still be very convincing. I am also aware that I should not just compare the world distance but use the 2-dimensional dot product, so that I get the correct virtual intersection point of the light beam: the formula in diagram 1 assumes that the world surface is parallel to the light's plane, but I didn't want the explanation to get too complicated.

@joej

I checked the code of the YouTuber you linked. In the following GitHub file he does a typical blocker search with 2 nested loops, which costs a lot of performance.

This has the advantage that he does not run into the issues I have.

The softening of the shadow image takes about 0.3-0.4 ms when done at half resolution, which is quite fast.

That's the reason why I thought I should spend some more effort to make the effect game-ready.

But now I think I will fail and will have to go the sampling plus temporal and spatial denoising route.

I have also worked with the code of AMD GPUOpen hybrid raytraced shadows, but integrating the technique into my game engine is a lot of effort, because I work with a C# engine and cannot just bind the C++ source and header files. I must completely understand the code before I can implement it in my own engine.


github.com → danielkrakowiak → Engine1/blob/master/Engine1/Shaders/DistanceToOccluderSearchShader/DistanceToOccluderSearch_cs.hlsl

Never type text into a post after pasting in a link. The editor is too buggy. >: (

That's why I hide my links in my text: select a word, click the link button, then paste the link into the box that opens. It seems this avoids the annoying bug.

evelyn4you said:
I checked the code of the YouTuber you linked. In the following GitHub file he does a typical blocker search with 2 nested loops, which costs a lot of performance.

Ah ok, I see that's not what you're looking for.
But I assume UE5 does no blocker search. I think they only aim for slightly soft shadows, as we get from direct sunlight. For this, some hack should still be good enough, I would hope.
I guess they get a penumbra radius just from the distance to the first occluder taken from the shadow map, possibly refine it using a screenspace trace, then blur using some depth-aware filter.
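That distance-to-occluder guess is essentially the classic PCSS similar-triangles estimate, sketched here in Python (this is the textbook formula, not UE's actual code):

```python
def pcss_penumbra_width(d_receiver, d_blocker, light_size):
    """Classic PCSS estimate: the penumbra widens with the gap between
    the receiver and the blocker, scaled by the (area) light's size.

    d_receiver: distance from light to the shaded point
    d_blocker:  distance from light to the first occluder
    light_size: diameter (or radius, consistently) of the area light
    """
    return (d_receiver - d_blocker) * light_size / d_blocker
```

A receiver right behind its blocker gets a width near zero (hard contact shadow); a distant receiver gets a wide penumbra, which is the soft-sunlight look described above.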

evelyn4you said:
But now I think I will fail and will have to go the sampling plus temporal and spatial denoising route.

Not cool, but that's how ray tracing usually works. It's basically point sampling, so there is no way around taking multiple samples. And if we can distribute those over multiple frames, that's very nice, and some temporal issues are acceptable.

First a warning - this post is going to be image-heavy… Let's start with a description of what we're going to work with:

This is our scene - where:

  • L_o represents the light point (center), which is used as the origin for shadow map creation
  • L represents the actual physical light (it can be of any shape, but I've used a “sphere”)
  • Black segments represent scene data (which are also written into the depth map rendered from origin L_o - i.e. our shadow map)
  • The axes are only there as helpers (defining the shadow map projection)

For starters, let's write down how you can solve hard shadows with ray tracing and with a shadow map; ideally the results of the two will be equivalent:

Where:

  • Red represents the ray tracing solution
    • A ShadowRay from X to the light origin (center)
    • A HitPoint along the way whose distance is smaller than the distance to the light origin - therefore X is in shadow
  • Blue represents the shadow map solution
    • DepthMapSample is the depth value read from the depth map, projected from the light origin, at point X's position
    • DepthX is the actual depth value of point X, computed from the light origin
    • Because DepthMapSample < DepthX, we conclude that X is in shadow
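The two tests above, condensed into a Python sketch (the names and the depth bias are illustrative):

```python
def in_shadow_ray(hit_dist, dist_to_light):
    """Ray traced test: any hit closer than the light means shadow.
    hit_dist is None when the shadow ray hits nothing."""
    return hit_dist is not None and hit_dist < dist_to_light

def in_shadow_map(depth_map_sample, depth_x, bias=1e-4):
    """Shadow map test: a stored depth in front of X means shadow.
    The small bias avoids self-shadowing from depth quantization."""
    return depth_map_sample + bias < depth_x
```

On an ideal (infinite-resolution) shadow map both predicates agree, which is the equivalence claimed above.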

Now, let's go one step further - into the area of penumbra shadows (accurate soft shadows). The analytical solution would be to integrate over the whole hemisphere above X, determining how much of the light is visible and how much is occluded (you can simplify this to integrating only over the solid angle under which the light is seen from X). An easy and simple solution, right? Well… let's see:

Where (in 2D analogy here):

  • Blue beta represents the resulting visible solid angle
  • Red alpha represents the solid angle under which we see the light sphere
  • Now one can determine what fraction of the light is actually visible (~15% in this case); point X will be in penumbra, and the shadow won't have full intensity there

In practice this is impossible to calculate directly, so in the ray tracing world we approximate the integration - e.g. with the Monte Carlo method. By selecting N samples we determine the shadow:

Where the red rays represent the shadow rays used to approximate the integration. In this case 5 rays were cast towards randomly selected points on the light; out of those 5, 4 detected a hit before reaching the light. From these 5 samples we can estimate that about 20% of the light is visible (a good approximation - of course the result depends heavily on the sampling scheme).
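The estimator in that paragraph, as a tiny Python sketch (the light sampling and ray casting are left abstract as a predicate):

```python
def visibility_estimate(light_samples, ray_is_blocked):
    """Monte Carlo visibility: cast one shadow ray per sample point on
    the light and return the fraction of rays that reach it unblocked.

    light_samples:  iterable of sample points on the light
    ray_is_blocked: callable, True if the shadow ray to that sample
                    hits an occluder first
    """
    samples = list(light_samples)
    visible = sum(1 for s in samples if not ray_is_blocked(s))
    return visible / len(samples)
```

With 5 samples of which 4 are blocked, this returns 0.2 - the ~20% visibility from the example above.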

Now, there are other ways to determine penumbra shadows correctly - one of the approaches was presented back in the 2000s; it is close to the analytic solution and yields very good results. The idea is:

Where:

  • Blue axes represent the projection of the light (min and max point)
  • Red axes represent the projection of the occluder
  • The result is then the ratio (area of projected light - area of projected occluders) / area of projected light

This approach is called back projection. The light projection itself is straightforward, but for the occluders we use the values stored in the shadow map (each value written into the shadow map actually represents a rectangle-like occluder which you can back project). The main pitfall of this approach is that you may need to back project a large number of occluders - which can mean a huge search area. There are optimizations (like hierarchical min-max shadow maps) which fix the performance issues for most scenarios (while having some bugs of their own). The solution, though, becomes quite complex - and requires a min-max shadow map (basically 2 separate mip chains for the shadow map). It seems to me that you're trying to implement this.

Another idea (which is what UE uses, I believe) is mixing actual ray tracing of N shadow rays with the back projection approach (especially the part where you use the shadow map as your scene). So what UE really does is: from point X you create N rays towards the light (much like the ray tracing approach), but instead of doing proper ray tracing against the scene you trace against the data in the shadow map (this reminds me of things like soft self-shadowing on height fields). Essentially you determine where your point X lands when projected into the 2D depth map (along with its depth value), and you traverse the 2D height map in the direction of a random sample on your light. If you collide with anything in the shadow map, your current sample is in shadow; if not, it is lit.
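A deliberately simplified 1D Python sketch of such a shadow-map ray march (fixed step count, no hierarchy, no filtering - nothing like production SMRT, just the core loop):

```python
def shadow_map_trace(depth_map, start_texel, start_depth,
                     step_texel, step_depth, num_steps):
    """March from X's projected position towards a sample on the light;
    if the depth stored in the map is ever in front of the ray, X is
    occluded for this sample.

    depth_map:  1D list of depths as seen from the light
    step_texel: texel advance per step along the ray's projection
    step_depth: depth change per step along the ray
    """
    u, d = float(start_texel), float(start_depth)
    for _ in range(num_steps):
        u += step_texel
        d += step_depth
        i = int(round(u))
        if 0 <= i < len(depth_map) and depth_map[i] < d:
            return True   # stored surface is in front of the ray: shadowed
    return False          # reached the light sample unoccluded
```

Averaging this boolean over N light samples gives the noisy-but-plausible penumbra described above; the cost of large penumbrae is visible here too, since a wider cone means more texels per march.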

Ray tracing a depth map reduces to a 2D problem … which is fast. UE calls this SMRT (Shadow Map Ray Tracing), but it does have some limits:

  • If you look at “Penumbra Size Limits" it looks like there is a maximum penumbra. With a growing penumbra, the number of samples within a single shadow map ray cast would grow (a LOT), unless you used some sort of hierarchical optimization (min-max mipmap pyramids for the shadow map would work - but that brings you back to the performance problem of the previous solution) … this also magically solves any banding issues you would otherwise get
  • If you look closely at the produced shadow, it's noisy (banding is not visible, though) - the number of samples has to stay reasonable (not insane like 200); using textures tends to be great at covering this up, though.
  • Inconsistent penumbra - it seems they preferred an incorrect penumbra size in favor of avoiding light leaks (which clearly are going to be a problem - mainly because the shadow map does not hold all the information about the scene you would need!)

I personally have not implemented this one yet.

I have tried to implement back projection (and to some extent it worked - the code is a huge mess, contains multiple bugs and is not properly connected to light size & geometry). It is on my to-do list to give it another try, though (and yes, I create a min-max hierarchy of my shadow maps for this):

Note: I know it is visible that the penumbra of the plant is WRONG next to the column - you can see a sharp edge there. That's the kind of issue you should be looking at.


@Vilem Otte

Wow, many thanks for your valuable post.

Do you have an open source version of your engine?

May I ask about the performance of your solution?
I suppose the generation of the shadow min-max hierarchies takes quite a lot of time.

The big advantage of my implementation is that the raytraced hard shadows are of very high quality even at half resolution, so not only the softening of the shadows but also the raytracing itself is quite fast at half resolution.

Time for raytracing hard shadows for one directional light at 1080p: 0.5 ms; at half resolution about 0.2 ms, on an RTX 3070 (really big scene).

A shadow map technique would also need quite a high resolution shadow map (cascades), at least 4k, for comparable quality, and would NOT be faster.

But I clearly have to admit a BIG DISADVANTAGE of raytraced shadows:
using character animation with high-poly characters slows things down considerably, because the BLAS (bottom level acceleration structure) has to be rebuilt or updated every frame.


This topic is closed to new replies.
