I've been playing around with reviving radiosity for incremental GI in low-poly scenes, and this sounds very similar (and probably much better). I put a camera on each lightmap texel, rendered the scene, then summed the pixels (roughly) in the render to get the light. I chose the "slow" approach, where lighting took several seconds but ran idly; once the lightmap reached a certain amount of stability, I'd stop the light calculations until the scene changed.
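The per-texel gather described above can be sketched as a cosine-weighted sum over the rendered pixels. This is a minimal illustration, not the actual implementation: the pixel structure (`radiance`, `cosTheta`, `solidAngle`) is hypothetical, and a real version would read these values out of a hemicube render target.

```javascript
// Estimate the light arriving at one lightmap texel from a render
// taken at that texel (camera looking along the surface normal).
// Each pixel carries its linear RGB radiance, the cosine of its angle
// to the normal, and the solid angle it subtends (all assumed inputs).
function gatherIrradiance(pixels) {
  const irradiance = [0, 0, 0];
  for (const p of pixels) {
    const w = p.cosTheta * p.solidAngle; // Lambert's cosine weighting
    irradiance[0] += p.radiance[0] * w;
    irradiance[1] += p.radiance[1] * w;
    irradiance[2] += p.radiance[2] * w;
  }
  return irradiance;
}
```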
It sounds like the advantages here are:
- Optimized sampling, rather than just every lightmap texel. My idea was to tie the lightmap to LOD, but I feel like this is much smarter.
- Optimized light accumulation, dedicating more resolution to high-light areas to reduce noise
- It seems like it has a more advanced "stability" calculation.
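The "stop relighting once the lightmap is stable" idea from the first paragraph can be sketched as a small tracker. This is a hypothetical helper (names and thresholds are illustrative, not from either implementation):

```javascript
// Track whether the lightmap has settled: once the largest per-frame
// change stays under `epsilon` for `framesNeeded` consecutive frames,
// report stable (true) so the expensive light pass can pause until
// the scene changes again.
function makeStabilityTracker(epsilon = 0.01, framesNeeded = 8) {
  let calmFrames = 0;
  return function update(maxDelta) {
    calmFrames = maxDelta < epsilon ? calmFrames + 1 : 0;
    return calmFrames >= framesNeeded; // true => safe to stop relighting
  };
}
```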
Things that are the same:
- Lighting is still incremental: when, e.g., the light direction changes, even with the optimizations there's still some ghost light that slowly moves over, so I'm not sure how this would work in really dynamic situations (car traffic)
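The ghost-light lag above falls out of how incremental GI caches usually work: each frame the new lighting estimate is blended into the cache by a small factor, so after a sudden change the old lighting decays exponentially rather than vanishing. A rough back-of-the-envelope helper (blend factor and fade threshold are illustrative):

```javascript
// Frames until the old lighting's contribution drops below `fraction`,
// for a cache updated as: cache = (1 - blend) * cache + blend * fresh.
// After n frames the stale contribution is (1 - blend)^n.
function framesToFade(blend, fraction = 0.01) {
  return Math.ceil(Math.log(fraction) / Math.log(1 - blend));
}
```

With a 10% per-frame blend, the old light takes dozens of frames to fade below 1%, which is the visible "slowly moving" ghost.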
Things that are different:
- It looks like the light data is cached relative to the current view. I store light for the whole scene, so there's no light fluctuation during camera movement/rotation. I think the tradeoff here is that view-relative caching is probably more optimized (light detail is view invariant); I think that's mostly important for HD-style assets.
Limitations of both, IIUC:
- Reflections, water, etc.: radiosity handles diffuse lighting only. I think you can combine it with other hacks like screen-space reflections, though
cadamsdotcom
Awesome.
> The fact that you can get physically-plausible light bounce and temporal stability all running in real-time on a web page... on a phone... feels like we're actually in the future.
Even as some things about the open web are in trouble, others are thriving! This was such a great in depth read, learned a ton and got to see great graphics and play with lots of knobs. A+ :)
Panzerschrek
Why not use light probes instead? They can be placed statically (or changed only rarely), and usually far fewer probes are required compared to surfels.
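Part of why probes can be so sparse: a shaded point just interpolates between the nearest probes, so probe spacing only needs to track how quickly indirect light varies across space, not surface detail. A 1-D sketch under that assumption (a real implementation would interpolate trilinearly in a 3-D grid and store directional data per probe, e.g. spherical harmonics):

```javascript
// Sample a 1-D row of probes at world position `x`, where probes sit
// `spacing` apart starting at 0. Each probe holds a scalar light value
// here for simplicity; shading blends the two nearest probes linearly.
function sampleProbes(probes, spacing, x) {
  const t = x / spacing;
  const i = Math.min(Math.floor(t), probes.length - 2); // clamp to last cell
  const f = t - i;                                      // fractional position
  return probes[i] * (1 - f) + probes[i + 1] * f;
}
```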
ivanjermakov
Sad to see that most GI techniques require temporal caching and denoising. We might never come back to crisp, noise-free, instant graphics.
altmanaltman
Demo page doesn't seem to be working for me. Getting this error:
Something went wrong
Cannot read properties of null (reading 'isInterleavedBufferAttribute')