
Volumetric Irradiance Map technique


I've found a technique for GI called Volumetric Irradiance Map. Here is the link to the slides - https://www.gdcvault.com/play/1026469/Scalable-Real-Time-Global-Illumination.

Can somebody please describe to me how it works?


OK, I'm going to say this once, and I will (respectfully) assume that you have little knowledge of this topic, only so I can choose my words carefully. So pay attention :)

These slides show that in today's game engines, Global Illumination (GI) can be done better. Better than what? We need a bit of history to understand what's going on...

A/ The old days

Back in the days of DirectX 8/9 and OGL 3 and earlier, Global Illumination (GI) was done with the help of directional lights and ambient lighting; have a look here:

https://docs.microsoft.com/en-us/windows/win32/direct3d9/lights-and-materials

This old revered system was fine for its time, but had, among others, two major limitations (both visible in the sketch below):
- one problem was you could only use up to 8 lights (I'll leave it as an exercise for you to find out why 8, if you don't know why)
- the other problem was that the light values you could use ranged from 0 to 1 inclusive (or 0 to 255 inclusive); again, this was cool at the time, but it's not enough for today
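For the curious, here is a minimal sketch of what that fixed-function setup looked like in Direct3D 9 (assuming an already-created device pointer, which I'm calling `device`); both limitations show up right in the API:

```cpp
#include <d3d9.h>

void setupFixedFunctionLight(IDirect3DDevice9* device)
{
    D3DLIGHT9 light = {};
    light.Type        = D3DLIGHT_DIRECTIONAL;
    light.Diffuse.r   = 1.0f;   // colour channels live in [0, 1] --
    light.Diffuse.g   = 1.0f;   // no HDR, values get clamped
    light.Diffuse.b   = 1.0f;
    light.Direction.x = 0.0f;
    light.Direction.y = -1.0f;  // pointing straight down
    light.Direction.z = 0.0f;

    device->SetLight(0, &light);   // slot 0; most hardware capped active
    device->LightEnable(0, TRUE);  // lights at 8 (see D3DCAPS9::MaxActiveLights)
    device->SetRenderState(D3DRS_LIGHTING, TRUE);
    device->SetRenderState(D3DRS_AMBIENT, D3DCOLOR_XRGB(32, 32, 32));
}
```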

For this reason, game developers such as John Carmack came up with in-game lightmaps: "a means by which a DOOMed devil could light up a cigarette", the sort of thing Jeremy Clarkson would say. But essentially, for us game developers, lightmaps were a novel way of bypassing the 8-light limit by pre-baking light values into a texture (note that this could be seen as a pre-Cambrian form of deferred lighting).

Upon seeing these results in DOOM/QUAKE, game dev communities and graphics hardware makers realised that more could be done (in software and hardware) to make a game look great, and the crunchy steroid pills were born. As we all went on steroids to move from this rudimentary system to a better one, the angel of light called High Dynamic Range (HDR) was born, and GI was given a fresh lease on life.

B/ Dawn of a new era: Deferred Lighting

As time went on, with newer graphics cards, more CPU power, more memory, more everything... a new technique to render your scene using more than 8 lights became possible. It was called Deferred Lighting (DL).

DL is really based on Deferred Shading (DS), which was introduced by Michael Deering (this is him here: http://michaelfrankdeering.com/blog/hobbies/ , before we knew how to walk).

DS has 2 render passes.

DL has 3 render passes:

- pass 1: render the scene geometry such that only the light IRRADIANCE values (or per-pixel data) are stored in what is known as a G-Buffer (look this up as an exercise)
(the slides you posted make reference to irradiance as well, so this is where it comes from or started)

- pass 2: render the same scene geometry such that the DIFFUSE and OPACITY values are stored in the G-buffer
(this is why pass 2 is called deferred: in the old days, this pass 2 would be done first, then pass 1 after)

- pass 3: the pass 2 values are mixed with pass 1 by reading back the light irradiance values from pass 1 (there's a small sketch of this after the list)
(in DS, this pass 3 happens at the same time as pass 2; there are pros and cons as to which of the two you choose. Find out as an exercise)
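To make that concrete, here is a minimal CPU-side sketch (my own illustration with hypothetical names, not any engine's actual API) of what passes 1 and 2 leave in the G-buffer and how pass 3 mixes it:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// What passes 1 and 2 leave behind per pixel (a simplified, hypothetical layout).
struct GBufferTexel {
    float irradiance[3];  // pass 1: per-pixel light irradiance
    float diffuse[3];     // pass 2: material diffuse colour
    float opacity;        // pass 2: opacity
};

// Pass 3: read back the stored irradiance and mix it with the stored material.
void resolvePass(const std::vector<GBufferTexel>& gbuffer, std::vector<uint32_t>& frame)
{
    auto to8 = [](float v) { return static_cast<uint32_t>(std::min(v, 1.0f) * 255.0f); };
    for (std::size_t i = 0; i < gbuffer.size(); ++i) {
        const GBufferTexel& t = gbuffer[i];
        frame[i] = (to8(t.irradiance[0] * t.diffuse[0] * t.opacity) << 16)
                 | (to8(t.irradiance[1] * t.diffuse[1] * t.opacity) << 8)
                 |  to8(t.irradiance[2] * t.diffuse[2] * t.opacity);
    }
}
```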

I think these folks (https://sites.google.com/site/richgel99/home) were among the first to implement DS in a game. DS/DL can actually be done in DirectX 9 (just google it...)

C/ Here we are

OK so, please accept my apologies if the history/dating of some of the things I've said in point A/ or point B/ may be slightly off, but that's the general gist of things.

The technique presented in the slides you posted is yet another great effort at achieving better ways of processing light in-game. It is also done using deferred methods, but with voxels. IT'S ALWAYS ABOUT HOW TO GET GOOD-LOOKING LIGHT VALUES and HOW TO MIX THEM INTO YOUR SCENE:

The idea here is that light IRRADIANCE values are compartmentalised into 3D spaced areas known as voxels. The voxels themselves are placed at regular distances from each other (because if you keep them too close together, there may not be much difference in the irradiance values). So a voxel (that thing that looks like a cube) in this case represents a volumetric area where the light irradiance samples are the same and can be sampled from to light up the scene in that particular area.

That's why the scene is voxelized (in other words: simplified into cube-like forms), so that light IRRADIANCE values can be sampled within (or, if you like, around) each of these volumes. Believe it or not, that's basically it!
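Here is a minimal sketch (my own illustration, not the authors' code) of sampling such a regular irradiance volume: find the voxel cell containing a point and trilinearly blend the eight surrounding samples.

```cpp
#include <vector>

struct Irradiance { float r, g, b; };

struct IrradianceVolume {
    int nx, ny, nz;                // grid resolution
    float cellSize;                // regular spacing between voxel samples
    std::vector<Irradiance> data;  // nx * ny * nz samples

    Irradiance at(int x, int y, int z) const {
        return data[(z * ny + y) * nx + x];
    }

    // Trilinear lookup; assumes the point lies inside the grid.
    Irradiance sample(float px, float py, float pz) const {
        float gx = px / cellSize, gy = py / cellSize, gz = pz / cellSize;
        int x = (int)gx, y = (int)gy, z = (int)gz;    // lower corner cell
        float fx = gx - x, fy = gy - y, fz = gz - z;  // blend weights
        Irradiance out = {0, 0, 0};
        for (int k = 0; k < 8; ++k) {                 // the 8 surrounding samples
            int dx = k & 1, dy = (k >> 1) & 1, dz = (k >> 2) & 1;
            float w = (dx ? fx : 1 - fx) * (dy ? fy : 1 - fy) * (dz ? fz : 1 - fz);
            Irradiance s = at(x + dx, y + dy, z + dz);
            out.r += w * s.r; out.g += w * s.g; out.b += w * s.b;
        }
        return out;
    }
};
```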

The bulk of the slides is then about how to produce those light values for GI:

- what they are trying to achieve using voxels in DL

- the different techniques they tried to produce realistic irradiance values (using light rays, etc...)

- what problems they had and the solution they offer (storing probes in voxels)

- what new features they were trying to add (handling light leaking into rooms, time of day, etc...)

- how they store results in the volumes

- the performance considerations, data queuing...

- and in their conclusion u can see what they have achieved (the pros/cons, memory footprint, requirements... and future improvements...)

I hope this clarifies it a bit for you :)

That's it … all the best :)

Thank you a lot for your reply, man. I actually have experience with OGL, haha :) So I know what lightmaps, voxels, and the deferred pipeline are :)

What I'm interested in is how this GI technique ACTUALLY works…

What I read is that he takes a bunch of 'probes' (voxels in space) and then casts rays from them (ray marching in a 3D volume???). But if you use this approach you will get noise and flickering. So, can you explain to me in detail how he calculates the irradiance volume values? Thanks.

Ha! OK, here is a bit more meat then :)

Read this paper, it explains in great detail how to do this:

https://research.nvidia.com/publication/interactive-indirect-illumination-using-voxel-cone-tracing

The slides you posted are based on this paper, modified for their own needs.

now remember this terminology, it will help a bit:

  • direct light: light value coming straight from an emitter into the camera/eye
  • indirect light (IL): light from an emitter that has bounced off scene geometry N times before reaching the camera/eye. IL is also what we call Global Illumination
  • ambient occlusion: the amount of indirect light that has been occluded by scene geometry
  • voxel: an area in the 3D scene that -you- create to store direct light data and re-emit this stored light as indirect light data. This is key. The process of creating voxels in your scene is voxelization, and there are two kinds (see the small sketch after this list):
    • opacity (you turn your scene into a tree of voxels, aka a sparse voxel octree, to represent the density of your scene)
    • and emittance (you turn your scene's light data into emitters, so the scene's light irradiance values are stored in 3D volumes such as spheres or your friendly cubes)
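If it helps, here's a tiny sketch of the two voxelization products named above (a hypothetical CPU-side layout; real implementations pack this into GPU 3D textures or buffers):

```cpp
#include <array>
#include <memory>

// One node of a sparse voxel octree: opacity and emittance live side by side.
struct VoxelNode {
    bool  occupied = false;          // opacity: does geometry intersect this cell?
    float emittance[3] = {0, 0, 0};  // emittance: direct light injected into this cell
    std::array<std::unique_ptr<VoxelNode>, 8> children;  // sparse: null where space is empty
};
```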

When your scene is voxelized, then from the eye position you can now shoot cones (not rays, so this is not ray tracing) to do cone tracing and compute lighting for IL bouncing off the geometry once, twice, or … infinitely.
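Here is a minimal CPU sketch of marching one such cone (my own illustration, not the authors' exact code; `sampleRadiance` is a hypothetical stand-in for a mipmapped 3D texture fetch over the voxelized scene):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };  // rgb = radiance, a = occlusion

// Stand-in for a textureLod() fetch from a mipmapped 3D radiance volume;
// a real implementation samples the voxelized scene here.
Vec4 sampleRadiance(Vec3 /*p*/, float /*mipLevel*/)
{
    return {0.2f, 0.2f, 0.2f, 0.3f};  // stub: pretend the scene is a dim grey fog
}

// March one cone from 'origin' along 'dir'; wider cones (larger apertureTan)
// step faster and read coarser mips, which is what replaces noisy sparse rays.
Vec4 traceCone(Vec3 origin, Vec3 dir, float apertureTan, float maxDist, float voxelSize)
{
    Vec4 accum = {0.0f, 0.0f, 0.0f, 0.0f};
    float dist = voxelSize;  // start one voxel out to avoid self-lighting
    while (dist < maxDist && accum.a < 1.0f) {
        float diameter = 2.0f * apertureTan * dist;  // cone footprint at this distance
        float mip = std::log2(std::max(diameter / voxelSize, 1.0f));  // wider -> coarser mip
        Vec3 p = { origin.x + dir.x * dist,
                   origin.y + dir.y * dist,
                   origin.z + dir.z * dist };
        Vec4 s = sampleRadiance(p, mip);
        float w = (1.0f - accum.a) * s.a;  // front-to-back compositing:
        accum.r += w * s.r;                // closer voxels occlude farther ones
        accum.g += w * s.g;
        accum.b += w * s.b;
        accum.a += w;
        dist += diameter * 0.5f;  // step size grows with the footprint
    }
    return accum;
}
```

A diffuse GI pass typically shoots a handful of such cones over the hemisphere of each shaded point and sums them. Because each wide cone averages many voxels per step (via the coarser mips), the result is smooth rather than the noise and flickering you'd get from a few random rays, which answers your earlier concern.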

So there's no need to pre-bake anything, unless you're nostalgic, lol. VXGI, when done well, is fast: 2 to 3 ms even on large scenes, which is why it works for dynamic lighting and on moving objects. Voxelization can be done in approximately 1 ms.

Until then :)

