Best practices for denoising with upscaled images?

Discussions about MakeHuman, Blender and MPFB. It is ok to ask for general Blender support here, even if it isn't directly related to MakeHuman


Postby talba » Wed Jan 31, 2024 7:47 am

Hi,

I was reading about denoising the other day. Someone mentioned that denoisers work better if they have a lot of data to play with, and therefore it's a good idea to render images at 4x the resolution (2x wider, 2x taller) with 1/4 the samples. Having more pixels lets them do a better job of cutting out the noise. So, on a recent animation, I've rendered each frame at 4x the resolution with 1/4 the noise threshold.

I'm wondering what the best practice is for actually denoising images rendered like this.

My current plan is to take my frames into the compositor, denoise first, and then scale each frame down by half to the intended resolution. Alternatively, I could scale down first and then denoise, but I imagine that won't improve things much over just rendering at the original resolution with the original noise threshold.
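To sanity-check the intuition, here is a toy Monte Carlo model in plain Python (not Blender; the flat pixel value and noise level are made up). If each output pixel of the downscaled image is a 2x2 box average of high-res pixels, it is backed by the same total number of samples as a native-resolution pixel, so on a flat region the noise comes out about the same:

```python
# Toy model: each "sample" is the true pixel value plus Gaussian noise.
# Compare per-pixel noise after:
#   (a) native resolution, N samples per pixel
#   (b) 4x resolution (2x wide, 2x tall), N/4 samples per pixel,
#       then a 2x2 box downscale
import random
import statistics

random.seed(0)
TRUE_VALUE = 0.5   # hypothetical flat pixel intensity
NOISE_STD = 0.2    # made-up per-sample noise level
N = 64             # samples per pixel at native resolution

def render_pixel(samples):
    """Average `samples` noisy samples, roughly what a path tracer does."""
    return statistics.mean(TRUE_VALUE + random.gauss(0, NOISE_STD)
                           for _ in range(samples))

# (a) native resolution
native = [render_pixel(N) for _ in range(4000)]

# (b) 4x the pixels at N/4 samples each, then average every 2x2 block
hires = [render_pixel(N // 4) for _ in range(16000)]
downscaled = [statistics.mean(hires[i:i + 4])
              for i in range(0, len(hires), 4)]

print(statistics.stdev(native))      # noise at native resolution
print(statistics.stdev(downscaled))  # roughly the same
```

On flat areas the two schemes are a wash; the interesting differences show up at edges and fine detail, which this toy model deliberately leaves out.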
talba
 
Posts: 1
Joined: Wed Jan 31, 2024 7:41 am

Re: Best practices for denoising with upscaled images?

Postby RobBaer » Fri Feb 09, 2024 11:43 pm

It seems to me that either of your schemes is doomed. What you are effectively hoping is that you can recover information that is not there. The only way to increase the amount of information is to increase the dimensions without reducing the number of samples. Of course, the effect of the number of samples is not linear, and if you were oversampling to begin with you might end up with a "better product" as judged by your eye. You need to match samples to the complexity of your light bounces.

What you are asking about is equivalent to photography discussions of signal, noise, and resolution. There, "amount of light" is weighed against "ISO setting". Low light and high ISO are two ways to approximate the rendering situation where you are using a low sample count. In the ISO case you amplify the light you do get to make it "brighter", but as a side effect you amplify the noise along with the signal.

When you reduce the dimensions of an image, you are using an algorithm to average neighboring pixels. This can appear to "remove noise" by averaging (blurring) neighboring extremes of pixel intensity, but it also reduces the quality of your image by blurring real differences in pixel intensity. The apparent effect depends on which algorithm you use to reduce the image size, relative to the kind of heterogeneity that exists in nearby pixels to begin with. And, of course, "smart" denoising algorithms can be designed to remove noisy pixels without disturbing pixels that are likely to contain real signal - typically they "understand" the way noise is introduced by the renderer in the first place and can use that information to find it. In summary, good results are situationally dependent, and simple schemes like the ones you propose are unlikely to provide real-world improvements.

Imagine the extreme case where you reduce your image to a single pixel. It would have the average color intensity of your starting image, and it would have zero noise. However, the information in your scene would be gone, because you have reduced the signal along with the noise.
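That averaging trade-off is easy to see in a toy example (plain Python, made-up pixel values): box-downscaling a noisy flat region really does cancel noise, but when a real edge falls inside an averaging window, the same operation invents a gray pixel that was never in the scene.

```python
import statistics

# A 1-D "scanline": flat dark region, hard edge, flat bright region,
# with a little noise sprinkled in.
scanline = [0.0, 0.25, 0.0, 0.25,   # dark side, noisy around 0.125
            0.75, 1.0, 0.75, 1.0]   # bright side, noisy around 0.875

# 2-pixel box downscale: average each pair of neighbors.
half = [statistics.mean(scanline[i:i + 2])
        for i in range(0, len(scanline), 2)]
print(half)  # [0.125, 0.125, 0.875, 0.875] - noise gone, edge preserved

# But if the edge falls inside an averaging window, it gets smeared:
shifted = scanline[1:-1]  # same edge, offset by one pixel
half_shifted = [statistics.mean(shifted[i:i + 2])
                for i in range(0, len(shifted), 2)]
print(half_shifted)  # [0.125, 0.5, 0.875] - a gray pixel that was never there
```

Whether the edge lands on a window boundary is luck, which is one reason the apparent effect depends so much on the resampling algorithm and the image content.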

I would propose that best practice is to: 1) render at the final size you want; 2) use a sample rate that minimizes noise without compromising your patience waiting for it to finish; and 3) use a "smart denoiser" to buy back some of the time you don't want to spend.
RobBaer
 
Posts: 1209
Joined: Sat Jul 13, 2013 3:30 pm
Location: Kirksville, MO USA

Re: Best practices for denoising with upscaled images?

Postby towerprincess » Thu Feb 22, 2024 9:32 am

RobBaer wrote:It seems to me that either of your schemes is doomed. What you are effectively hoping is that you can recover information that is not there. [...] I would propose that best practice is to: 1) render at the final size you want; 2) use a sample rate that minimizes noise without compromising your patience waiting for it to finish; and 3) use a "smart denoiser" to buy back some of the time you don't want to spend.

Thank you for sharing your insights on signal, noise, and resolution, particularly in the context of rendering and image processing. Your analogy to photography and the discussion around signal amplification, noise reduction, and resolution reduction is very apt. It's important to strike a balance between capturing sufficient detail and minimizing noise, and your points about the limitations of simple schemes for noise reduction are well taken.

Rendering and image processing indeed require careful consideration of sampling rates, denoising algorithms, and final output sizes to achieve optimal results. Your proposed best practices of rendering at the final size, optimizing sample rates, and utilizing smart denoisers are valuable guidelines for achieving high-quality outcomes while managing computational resources effectively.

Thank you for sharing your expertise on this topic!
towerprincess
 
Posts: 1
Joined: Thu Feb 22, 2024 9:29 am

Re: Best practices for denoising with upscaled images?

Postby Worker11811 » Sat Feb 24, 2024 9:27 pm

IMO, it's always best to render at your intended final resolution. You can scale up or down by, say, double or half size, but there will always be compromises when you do.

Most of the time, you can double the size of an image (4x the total pixels) and still get decent results, but compared side-by-side with an image rendered at the intended resolution, you will notice a difference.

You can render at a higher resolution and scale down, but why? You would increase the render time by a factor of four only to throw a lot of your work away when you scale down. Lowering the number of samples or your de-noising settings will only degrade the quality of the image, and scaling down won't erase the resulting imperfections. (GIGO = "Garbage In - Garbage Out.")

My suggestion is to render at your intended final size then adjust your sampling and de-noising settings until you get an acceptable image with a render time you can tolerate. Make a test render at whatever settings you think are right then make a few more renders with different settings until you find something you like. I suggest using a "double and half" process.

Make your first test at, for example, 128 samples. Look at the image quality and consider your render time. Make a second test at 64 samples, then compare the two images. If the second doesn't compare favorably with the first, try 96 samples. Do a couple of others at, for example, 72 or 80 samples, then pick the one you like best.

While you're doing this, look at the render times. Maybe the first, at 128 samples, looked good but took too long to render. Maybe the second, at 64 samples, rendered quickly but looked crappy. Gradually home in on a number of samples... 128-64-96-72-80, etc... until you find something that works. Look at the render times... 10 min. - 2 min. - 4 min., etc... until you get an acceptable time. When you finally hit your mark, lock those settings in and use them for your final work.
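The "double and half" procedure above is essentially a bisection on the sample count. As a sketch in plain Python (`render_test` is a hypothetical stand-in for rendering a test frame and judging it by eye):

```python
# Sketch of the "double and half" homing procedure. `render_test` is a
# made-up stand-in: in practice you change the sample count in your
# render settings, render, and judge the image and render time yourself.
def render_test(samples):
    """Hypothetical: returns (acceptable_quality, render_minutes)."""
    return samples >= 80, samples / 12.0  # pretend 80+ samples look fine

def home_in(low, high):
    """Bisect between a too-noisy low and a known-good high sample count."""
    while high - low > 8:           # stop when the bracket is tight enough
        mid = (low + high) // 2     # e.g. 128 -> 96 -> 80 -> 72 ...
        ok, minutes = render_test(mid)
        print(f"{mid} samples: ok={ok}, ~{minutes:.1f} min")
        if ok:
            high = mid              # good enough: try fewer samples
        else:
            low = mid               # too noisy: need more samples
    return high

print(home_in(64, 128))  # settles on 80 with the stand-in above
```

With the made-up quality threshold at 80 samples, the search visits 96, 80, and 72 before settling, which mirrors the 128-64-96-72-80 sequence described above.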

In my experience, many people have a tendency to make oversized images at way too high sample settings. They'll try to make images at 4K resolution with 1,000 samples, then complain that it took ten hours to render one frame. Then, with all that time spent, they'll post it on the internet thinking they have a great picture, but nobody really notices. The reason is that you have virtually no control over how your image will be viewed on the internet.

If the guy who views your picture has a high-rez 4K display then, great! What if the other guy is viewing on his iPhone? The latest iPhone has an approximately 2K-rez screen (2796 x 1290). No matter how you slice it, that guy's iPhone is going to throw away 50% to 75% of the data from your picture. You will have taken four times longer to render the image, and it will take longer for the viewer to download. It will waste bandwidth and storage space on servers.
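The pixel arithmetic is easy to check (assuming a 3840 x 2160 "4K" frame and the screen size quoted above):

```python
# How much of a 4K frame survives on the phone screen quoted above?
four_k = 3840 * 2160          # 8,294,400 pixels rendered
iphone = 2796 * 1290          # 3,606,840 pixels actually displayed
discarded = 1 - iphone / four_k
print(f"{discarded:.0%} of the rendered pixels are thrown away")  # -> 57%
```

So for that viewer, well over half the rendered pixels never reach the screen, squarely in the 50%-75% range above.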

Why would anybody with half a brain do that? Just to satisfy their ego? Just to prove that they have a great, big computer with a super-duper high-rez display? Nobody but nobody will really care in the end. They'll look at your picture for ten seconds then swipe left.

Do yourself a favor. Save yourself a lot of time and trouble. Show people that you know how to make interesting images that download quickly, that don't waste bandwidth and still look good.

Tailor your image size to the display you intend it to be viewed on. Most of the images that I make are approximately 1536 x 1024. IMO, that size produces an image with an aspect ratio and resolution that fits closely with most computer, television, and smart phone displays. It probably won't fit perfectly on every display, but it'll be close enough.

Are you making images to be displayed on smart phones? Television screens? Computer displays? Digital cinema? Maybe just general fooling around on the internet? Decide on your target audience. Make an educated guess at what kind of device your audience will be watching on. Make your images to match. :)
Worker11811
 
Posts: 22
Joined: Tue Apr 08, 2014 12:25 am

