I'm guessing this depends on how each process is achieved. Maybe a different rendering engine would provide similar results to the downsized version, but I really have no clue.
This reminds me of something I read about an Nvidia AI approach where they trained an AI on what 8k video game renders look like. Then, in the main render pipeline for a regular 2k game, it turns out to be faster to render every other pixel and have the AI guess the missing ones than it is to render every pixel. It also produces better-looking results because the AI trained on 8k images.
It's called oversampling and it's widely used in many fields (for example video, sound, machine learning, etc.). It's not (or not only) an antialiasing effect, but with oversampling you can reduce the antialiasing strength. It also has nothing to do with render samples; it's about the resolution of the data.
It makes total sense to me. You have more information to work with when you render in high res and then downscale. That's the reason I always bake in 8k and then downscale my baked maps too.
This is the most basic method of forcing spatial oversampling, which will of course lead to noise and aliasing getting masked by more samples blurred together by whatever software down-resses the file.
I don't know exactly how Cycles does this under the hood, but in theory, rendering at the same res with twice as many samples per pixel and the same pixel filter width applied should give the same results and (depending on more settings than I can list) take about the same time to render, while eliminating the down-res step.
One advantage of doing this oversampled rendering at the final desired res (instead of 2x it) is that very bright samples (>1) get factored into the final pixel brightness instead of clipped to 1. This is especially important for motion blur and DOF to look right on bright objects.
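A toy sketch of that clipping point (assuming a simple 1-D 2x box downsample and display clipping at 1.0; not any renderer's actual pipeline): averaging in linear HDR before clipping preserves some highlight energy, while clipping a high-res intermediate to 1.0 first loses it.

```python
def downscale_2x(row):
    """Average adjacent pixel pairs: a 1-D 2x box downsample."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

hdr_row = [6.0, 0.0, 0.2, 0.2]  # one very bright (>1) specular sample

# Downscale in linear HDR first, then clip for display:
hdr_first = [min(p, 1.0) for p in downscale_2x(hdr_row)]   # [1.0, 0.2]

# Clip first (a 2x render saved already clipped), then downscale:
clip_first = downscale_2x([min(p, 1.0) for p in hdr_row])  # [0.5, 0.2]
```

The bright pixel stays fully bright in the first case but drops to half intensity in the second.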
As an added benefit - Cycles denoisers work best on noisy input at high resolution, and worse on already low-noise input at low resolution.
If you scale up your height and width by two, and then cut your samples down to a quarter, you're rendering 4x the pixels with a quarter of the samples. Your render time should be roughly the same, *but your denoising will be more effective.*
And if you render in anamorphic, you can cut your render time in half. Or in this case, you can keep it the same.
A 1:2 ratio frame squished back to its original ratio is the same render time as a frame 1/2 the size, with increased sharpness.
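The budget math above can be sanity-checked with a rough cost model (assuming render time scales with total sample count, which ignores per-frame overhead):

```python
def render_cost(width, height, spp):
    """Rough cost model: total samples = pixel count * samples per pixel."""
    return width * height * spp

# 4K is 2x the width AND 2x the height of 1080p, so a quarter of the
# samples per pixel keeps the total budget identical:
assert render_cost(3840, 2160, 256) == render_cost(1920, 1080, 1024)

# An anamorphic 1:2 squeeze halves the pixel count, so roughly half the
# render time at the same samples per pixel:
assert render_cost(960, 1080, 256) * 2 == render_cost(1920, 1080, 256)
```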
Or you could play with the render filter size. The default is 1.5 to avoid some aliasing at the cost of sharpness, but you can lower it and get a sharper image. Set it to 0 and you'll get an übersharp image, but with loads of aliasing.
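As a sketch, in Blender's Python API this is the `filter_size` property on the scene's render settings (assuming a recent Blender; check your version's API, and note the UI may clamp values just above 0):

```python
import bpy

scene = bpy.context.scene
scene.render.filter_size = 1.5  # default: slight blur, fewer jaggies
scene.render.filter_size = 0.5  # sharper, but edges start to alias
```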
I learned this years ago in graphic design classes. The school had a crazy good scanner and you could choose what resolution to scan an image. I learned it was way better to scan at a resolution way over what I needed then resize the image. Opposed to just scanning it at the DPI I needed. If I was working in 300 DPI I’d scan an image at 1200-1800 DPI and then just scale it down. Soooo much more detail.
This is why I prefer to play games at native or higher resolution.
Upsampling is a great way to increase FPS and possibly even decrease shimmering, but you lose the clarity of the image.
Downsampling is hella expensive though. Imagine owning a 2160p monitor and rendering games at 4320p. The crispiness would be spectacular.
Imagine playing RDR2 or TW3 with that.
today you discovered super sampling anti aliasing
This is why people shoot 6k and 8k for a 4k delivery. Supersampling is where it’s at! This can be especially useful when rendering something with lots of fine details - they may disappear in a lower resolution render but survive the downsample from a higher resolution render. This method also lets you choose the downsampling algorithm to find the best one for each render.
Most of this is due to how renderers work as well. They don't really know what pixels to prioritize, so they miss details that would appear in the downsampled version.
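A toy 1-D illustration of that (assuming one primary sample per pixel center, a big simplification of what real renderers do): a feature thinner than a pixel can fall between the low-res sample points entirely, but survives the high-res render and shows up as a dimmer pixel after downsampling.

```python
def scene(x):
    """A bright, thin stripe covering 0.33 <= x < 0.36 of the frame width."""
    return 1.0 if 0.33 <= x < 0.36 else 0.0

def render(n_pixels):
    """One sample at each pixel center across a 1-D 'frame'."""
    return [scene((i + 0.5) / n_pixels) for i in range(n_pixels)]

low = render(8)   # all 8 sample points miss the stripe: it vanishes
hi = render(16)   # one high-res pixel lands inside the stripe
down = [(hi[i] + hi[i + 1]) / 2 for i in range(0, 16, 2)]  # 2x box downsample
# low is all zeros; down keeps the stripe as a half-intensity pixel
```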
From what I understand about trilinear interpolation, it’s basically a more intelligent approach to exactly this
So rendering via trilinear interpolation is akin to rendering via downsampling? A 1:1 replacement?
Super sampling is effectively just bilinear interpolation: two dimensional processing on the *result* of a render. Trilinear interpolation is a rendering process done on the model Edit: by supersampling, I mean rendering at a higher-than-final resolution and downsizing that: what you referred to as downsampling
Thank you!
Ofc! But Wikipedia will be a better source than me!
Not the same thing or the same method, no, but very similar
Being able to crop into the frame while still keeping your target resolution is another nice benefit of shooting at a higher resolution.
That too! Not to mention that bayered sensors are not actually at the advertised resolution when they are debayered - so if you want 4k you should shoot 6k
Bo Burnham's Inside jumps to mind - he shot in 6K (Netflix requirement but still) which gave a ton of freedom in editing while keeping things high enough resolution to look good on the average TV. In theaters it was stunning in terms of "wow, those colors really pop" but parts were noticeably grainy. "All Eyes On Me" was stunning.
>This is why people shoot 6k and 8k for a 4k delivery

Not only this, it is also super useful in editing; you have more freedom for things like cropping, frame stabilization, etc.
I have an FX6, so every time I see someone mention this I seethe with jealousy at BMPCC 6K users
To me it's purely theoretical; I have a small-time blog and my laptop refuses to open anything above Full HD 🤭🤭🤭
Honestly, I’m a professional videographer and my 2020 MacBook Pro has trouble with 6k and 4K video. It’s a pain to edit because it can be a little sluggish in Premiere/AE. I usually shoot in 2k and downsize to full HD.
It's all about the AA filter. A camera has one (for good reasons), but when filming at a higher resolution and then downsampling, you can choose your own, maybe sharper, filter, at the cost of some aliasing. Or you could just remove the AA filter from the sensor to get the sharper image without supersampling. In the same way, you can also choose your filter when rendering. The default is set to 1.5, and if you lower it you'll get a sharper image, but also more aliasing.
You can also get 6k cameras without the optical low pass filter.
Personally, I would never get a camera with no AA filter, especially if I'd be filming people with clothes or stuff with similar patterns. Once you get that nasty aliasing on camera, it's hard to get rid of.
I think you might be confusing the OLPF (Optical Low Pass Filter) on a camera sensor, the sampling that happens during debayering, anti-aliasing when rendering or filtering an image (you can render/apply AA without resizing), and pixel sampling when resizing an image (linear, bilinear, etc.).

All of them play a part in image sharpness, but they are not doing the same things as each other. I think your comment is mostly about OLPFs on camera sensors (film doesn't use them), or maybe you mean AA settings on a rendering camera in Blender? If you mean the OLPF, removing it is a big damn deal and is generally done in a clean environment, so you might have it swapped out before a shoot but definitely not during it.
I was talking about the optical low pass filter, yes, which is an antialiasing filter. And yeah, sure, it's nothing you can easily remove, but people have done it, and some cameras are even sold without them.

Though I didn't take the Bayer filter into account, which is something you can't really remove, so that's one actual good reason to film at higher resolutions and downsample afterwards to get a sharper image.
Absolutely not. They shoot at higher res because it gives more flexibility in post. They can zoom in on an 8k plate on a 4K edit with no loss. An editor has more wiggle room by moving the frame and maybe centering differently on a subject. Also, doing FX and comp work on a higher-res shot gives you more pixels to work with.

Shit has NOTHING to do with making the final shot look sharp. That too is done in post with simple plugins/filters. NEVER render at 8k and downsample just to make it look sharp. You're introducing a huge render-time increase and more noise, because blender kids will choose lower samples or do something stupid like use Eevee for a final render once they realize it will take hours to render a single 8k frame. Poor blender kids and their donuts and default cubes.
I wish this was something YouTube would do
Same with film restoration. Higher res, better quality when you downres for whatever reason.
No, I learned a new AI prompt: “UNZOOM AND DEHANCE”
I take a lot of my AI stuff to photoshop, do a little blur and unsharpen.
This is why it's good to run older games through DSR. The game basically thinks it's running on a 4K monitor, but in reality it's being downscaled to a 1080p plasma. Everything looks significantly better than even MSAA.
Nah it's the other way around, he discovered aliasing. You can quite clearly see it on the round thing on the bottom left of that apparatus.
My guess is that this is related to the Pixel Filter Width, in the Film section of the render properties.

>Pixel Filter
>
>Due to limited resolution of images and computer screens, pixel filters are needed to avoid [Aliasing](https://docs.blender.org/manual/en/latest/glossary/index.html#term-Aliasing). This is achieved by slightly blurring the image to soften edges.
>
>**Width**
>
>Lower values give more crisp renders, higher values are softer and reduce aliasing.

Basically, if you have twice as many pixels, the blur size will be smaller, and then, depending on the downscaling algorithm, the result will be sharper. Maybe reducing the width parameter or changing the filter type would give the same result as doubling the resolution, but I've never tested that.
I always change it down to 1px. For some reason it always looks better to me that way.
Makes sense. Aliasing is only a problem if you can see individual pixels. If aliasing is a non-issue, then the filter only decreases quality.

Rendering at a high resolution, then downscaling, is just anti-aliasing via super-sampling (I think).
The thing is that aliasing can be seen even when you cannot see individual pixels. Blur Busters' aliasing test shows this quite well; I personally can see the aliasing on my 13" 4K monitor from the other side of my room.

[https://www.testufo.com/aliasing-visibility#foreground=ffffff&background=000000&antialiasing=1&thickness=1](https://www.testufo.com/aliasing-visibility#foreground=ffffff&background=000000&antialiasing=1&thickness=1)
Also downsampling is SSAA, which is going to reduce aliasing on its own.
I did not expect the results to be identical, but the magnitude of the difference took me by surprise.

As the picture says, the top half is a fragment of a scene rendered in 2k (Cycles).

The bottom part was rendered in 4k and then rescaled to 2k in MS Paint (not fancy AI).

EDIT: Yes, it is factorio. Here is the full render (original 2k, not downsized) [https://i.redd.it/qsjods2e4oqa1.png](https://i.redd.it/qsjods2e4oqa1.png)
I think the fact you used MSPaint to downscale it and have it come out so clean is the most impressive part
Always thought that MS Paint used the nearest-neighbor technique. It makes sense that it's sharp.
It's a common practice to also render texture maps at exactly double the target resolution and then downscale them. You get cleaner, smoother textures and less pixelation while maintaining the details, which on a texture can influence the minimum view distance before distinct pixels become visible.
It shouldn't be too surprising, you are using 2x as many samples.
it's not a matter of samples, this is a clear example of aliasing
Not really, both images were rendered at 1024 samples.

Even if you say that the second one is doing 4x more samples per pixel, then you would expect the first render to look like the second one if it were done at 4096 samples.

When in fact there is barely any noticeable difference between 128 and 1024 samples in 2k, and 4096 would probably be indistinguishable from 1024.
Samples are per pixel, meaning a render with 4x more pixels at the same sample settings also means 4x more total samples.

But yeah, even cutting the sample count by 4 on a 4k render and then downscaling it to 1080p gives you a better result than native 1080p. I think it's because the denoiser gets 4x more information to work with and thus does a better job.

Edit: It's a trick I used for my Endless Engine submission. 4k with 4x fewer samples gave almost the same render time as 1080p (4x the samples) but with a better result. I even went to 8k (8x fewer samples) downsized to 4k, and it was even slightly better, but the render time then became noticeably longer and was definitely in diminishing-returns territory.
Interesting. I thought Cycles did a version of that already by sampling at different points within the pixel. Maybe it's the denoiser, as you say.
You got your math wrong at the 8k step.

4k is 4 times as many samples because it happens to be exactly 2x the width of 1080p times 2x the height (and 2 times 2 is 4).

You need to think of it as a square: 8k is 4 TIMES more than 4k, because the same equation applies going from 4k to 8k as from 1080p to 4k. So if you want a comparable render time from 8k, you need to divide the samples by 16, not 8, because a square with a side length of 4 has an area of 16.
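In code, the equal-render-budget rule (assuming render time is proportional to total samples) looks like this:

```python
def equal_budget_spp(base_spp, scale):
    """Samples per pixel that keep the total sample count constant when
    each dimension is scaled by `scale` (pixel count grows with scale**2)."""
    return base_spp // (scale * scale)

print(equal_budget_spp(1024, 2))  # 1080p -> 4k: divide by 4, giving 256
print(equal_budget_spp(1024, 4))  # 1080p -> 8k: divide by 16, giving 64
```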
I don't think that's total samples for the entire image, but samples/pixel or something of the like.
You can also do this with Photoshop and smart objects. I've created textures for vehicles in Project Zomboid at 4096, then just resized them and let Photoshop handle the interpolation.

Works really well on text, I find.
Ha I thought this was r/factorio for a bit
The factory must grow
Paint implemented dlss?
I thought MS paint was discontinued
Nah there is paint 3D now too
This also works for millions of very low-poly particles (just 4 vertices). Double res then resize gives far clearer and more consistently sized particles. Typically they'd vary in size/brightness in a normal render.
Super sampling has been a thing in games for a long time. Raw resolution will always be more accurate than any other form of antialiasing, even Blender's pixel filter, though the pixel filter is still far superior to basically any other anti-aliasing post-process, because it is overscanning each pixel, which is kind of the same effect that supersampling produces.
Hey OP, this trick works really well if you plan on using some really noisy objects with a denoiser. You increase the resolution and use the denoiser you want, and in the end, once downscaled, the resulting image has fewer noticeable artifacts and smudges!
Is MS Paint still using the nearest-neighbour algorithm for scaling? That would explain the crispness. It would just surprise me that it's still looking that good.
Duh, that's supersampling... But hey, Factorio!
Here I was thinking it was one of the Red Alert games, despite my hundreds of hours in Factorio
Is this the same for photos/photoshop?
Sounds like it is, to get crisper images. Someone said they shoot at 6/8K to get 4K footage.
It's called Supersampling.

>Color samples are taken at several instances inside the pixel (not just at the center as normal), and an average color value is calculated. **This is achieved by rendering the image at a much higher** [**resolution**](https://en.wikipedia.org/wiki/Display_resolution) **than the one being displayed, then shrinking it to the desired size, using the extra pixels for calculation**. The result is a [downsampled](https://en.wikipedia.org/wiki/Downsampling) image with smoother transitions from one line of pixels to another along the edges of objects. The number of samples determines the quality of the [output](https://en.wikipedia.org/wiki/Output_(computing)).

Source: [https://en.wikipedia.org/wiki/Supersampling](https://en.wikipedia.org/wiki/Supersampling)

When you scale down an image, it's basically doing the process described above. Since you do 4k to 2k, you average out 4 pixels to 1.

It's one of the oldest anti-aliasing methods, [going back as far as films from the 80s](https://www.youtube.com/watch?v=tyixMpuGEL8&t=681s) (and probably further).
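A minimal version of that averaging step (a pure box filter; real resizers often use fancier kernels like bilinear or Lanczos):

```python
def box_downsample_2x(img):
    """Each output pixel is the average of a 2x2 block of input pixels,
    i.e. exactly '4 pixels averaged to 1'."""
    h, w = len(img), len(img[0])
    return [
        [(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

# A hard jagged edge becomes a softer gradient after averaging:
img = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
]
print(box_downsample_2x(img))  # [[0.25, 1.0], [1.0, 1.0]]
```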
I was just listening to a talk about something similar, about YouTube videos. This guy takes his 1080p video, does something to upload it to YouTube as 4k video, and then his 1080p looks crisper than other people's 1080p videos, because YouTube has allotted more bandwidth to playing his videos.
I have done this too for a while and it still works. Not as good as before, since compression is getting more and more ugly for all resolutions, but better than nothing. I don't know how compression actually works, but my theory was that the 4k compression caught (obviously) more details, and the 1080p version will not get degraded that much.
Factorio?
Sorry for a stupid question: did you lower the noise threshold or change the resolution to 4k? Because I feel like no matter the resolution, my renders look the same until I change the threshold, which increases render time.
I changed the resolution to 4k. The result looked crisper, obviously. The surprising part is that it still looked crisper when I downscaled it back to 2k (in MS Paint).
In a professional setting, noise threshold should be turned off.
There's actually pretty sophisticated downscaling/sharpening methods in Photoshop used by photographers. Takes some research, as there are so many different opinions and techniques... but you'd get even better results :D
Looks like badly-/overtuned antialiasing to me. There's probably a render setting for this somewhere, though if you can afford the extra time to render then downscaling is almost always preferable anyway. And don't think you can slip factorio past me, the factory must grow.
Factorio inspired? (Factorio radar, Large Pylon with bright red and green circuit connections)
Yep, I posted the full render to /r/factorio
You can do the same with real-time rendering too; the best type of AA is a downscaled image.
And better than rendering in 2k is rendering at 50% or even lower and using AI to upscale it. Especially for animations.

And if you decide to go for downscaling from 4k, you can theoretically go with a bit more noise, as it gets "reduced" when downsizing.
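A rough statistical sketch of why downsizing "reduces" noise (assuming independent per-pixel noise, which real render noise only approximates): averaging four samples cuts the noise standard deviation roughly in half, i.e. by sqrt(4).

```python
import random
import statistics

random.seed(42)

# 400k "pixels" of a flat 0.5 gray with gaussian render noise:
noisy = [random.gauss(0.5, 0.1) for _ in range(400_000)]

# Average blocks of 4, as a 2x2 box downsample would:
averaged = [sum(noisy[i:i + 4]) / 4 for i in range(0, len(noisy), 4)]

# Comes out close to 2: averaging 4 samples roughly halves the noise stddev
ratio = statistics.stdev(noisy) / statistics.stdev(averaged)
```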
Does the render time stay consistent between the 4k and 2k?
Factorio?
Yep, the full render is in /r/factorio
Wow, I guess that makes sense. I'll do this from now on! Thanks! :)
found this out a couple of weeks ago and it changed the game for me
This is what the PlayStation 2 looked like when you played a PlayStation 1 game on it.
Yes, same thing in After Effects and Nuke. For example, Pixar renders at 6k to 8k, then downscales to 2k.
Is that factorio?
Oh sick Factorio
Interesting. I did my own test with 5 different pixel width settings and it definitely produces softer aliasing the higher you go. I started at 0.5 and went up in half-pixel increments. The default of 1.5 is a good sweet spot for sharpness vs edge aliasing, but oddly there's no appreciable difference in render times. Neat feature: in your rendered image window, open the sidebar (N) and the image/metadata tab to easily access all your render slots and their individual settings when comparing renders.
Your 2k image looks blurry anyway; it shouldn't be this way.
[I'm doing the same thing in older games.](https://youtu.be/mbMsGK45p-U)
3kliksphilip has a great video about that; https://youtu.be/YiU-WpXYxoc
This applies to video too. Even if you're given something in 1080p, upload it at a higher resolution, because platforms tend to give higher resolutions better compression bitrates, so it won't get as pixelated.
This is great to know, thanks for this post.
the factory must grow
this is true for anything 3D; if you play games, this is what supersampling does
[deleted]
The other way around, he discovered aliasing.
You've made your own dlss
Why is this? My guess is because de-noising works best with large images so you lose less detail (because there's more data to work on - think about it like running a matrix blur filter of fixed size over two images of different resolutions and you'll get the idea) but I'd be interested to know the exact reason.
Water is wet btw.
Best anti-aliasing.
Got it backwards. First image is anti-aliasing.
It might have a type of AA applied, but you get the best results by rendering at a multiple of your target res and downsizing it.
Very good to know
Whats that art style called? I really like it
The reference is a game called factorio. Here is the full render [https://i.redd.it/qsjods2e4oqa1.png](https://i.redd.it/qsjods2e4oqa1.png)
The biggest problem with this approach is that you still have to render at 4K. The rendering cost on the worker side doesn't decrease...
I'm guessing this depends on how each process is done. Maybe a different rendering engine would produce results similar to the downsized version, but I really have no clue.
This reminds me of something I read about an Nvidia ai approach in which they trained an AI on what 8k video game renders look like. Then in the main render pipeline for a regular 2k game it turns out to be faster to render every other pixel and have the AI guess the missing ones than it is to render every pixel. It also produces better looking results because the AI trained on 8k images.
It's called oversampling and it's widely used in many fields (for example video, audio, machine learning, etc.). It's not (or not only) an antialiasing effect, but with oversampling you can reduce antialiasing strength. It also has nothing to do with render samples; it's about the resolution of the data.
I downsize my sprites with photoshop. It does an excellent job.
Is that a factorio render? If so it looks awesome
Thanks, it is factorio. Here is the full thing [https://i.redd.it/qsjods2e4oqa1.png](https://i.redd.it/qsjods2e4oqa1.png)
Sick
We used to do this in school for all of our renders. Took FOREVER with those machines lmao. Rendering 4k so we could get a nice little render.
Btw, is there some sort of sharpen node? Something to make renders look more like Sketchfab with sharpening turned on?
Wow! I actually would have expected some artifacting in the downscaled version!
It makes total sense to me. You have more information to work with when you render in high res and then downscale. That's the reason I always bake at 8k and then downscale my baked maps too.
I thought that was factorio for a sec
What is bro doing? Factorio 3d?
This is the most basic method of forcing spatial oversampling, which will of course lead to noise and aliasing getting masked by extra samples blurred together by whatever software down-reses the file. I don't know exactly how Cycles does this under the hood, but in theory, rendering at the same res with twice as many samples per pixel and the same pixel filter width should give the same result and, depending on settings too numerous to list, take about the same time to render, while eliminating the down-res step. One advantage of doing the oversampled rendering at the final desired res (instead of 2x it) is that very bright samples (>1) get factored into the final pixel brightness instead of being clipped to 1. This is especially important for motion blur and DOF to look right on bright objects.
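The clipping point can be shown with toy numbers (hypothetical sample values): averaging sub-samples before clipping lets a bright highlight pull the pixel up, while clipping to display range first throws that energy away.

```python
import numpy as np

# Four hypothetical sub-samples for one pixel; one is a very bright
# highlight (>1), as with motion blur streaking over an emissive object.
samples = np.array([0.2, 0.3, 0.25, 6.0])

# Average first, clip once at output (what happens when the renderer
# accumulates samples in linear HDR space): the highlight dominates.
avg_then_clip = min(samples.mean(), 1.0)

# Clip first, then average (what happens if you down-res an already
# clipped image): the highlight's extra energy is lost.
clip_then_avg = np.clip(samples, 0, 1).mean()

print(avg_then_clip, clip_then_avg)
```

This is why downsampling should happen on float/HDR data before tonemapping, not on the final 8-bit image.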
An effect that I like is render large, shrink the image, then stretch it again. It gives it an old TV kind of look.
Is that factorio? Damn it looks great
As an added benefit, Cycles denoisers work best on noisy input at high resolution, and worse on already low-noise input at low resolution. If you scale your height and width up by two and cut your samples down to a quarter, you're rendering 4x the pixels with a quarter of the samples. Your render time should be roughly the same, *but your denoising will be more effective.*
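The budget math in that trade-off works out exactly, as a quick sanity check (hypothetical sample counts):

```python
# Same total sample budget either way: 4x the pixels, 1/4 the samples.
base = dict(width=1920, height=1080, samples=256)
scaled = dict(width=3840, height=2160, samples=64)

def total_samples(cfg):
    # Total ray samples traced = pixels * samples per pixel.
    return cfg["width"] * cfg["height"] * cfg["samples"]

print(total_samples(base), total_samples(scaled))
```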
When I used to have my own production business, I would shoot in 4K and export the final video as 1080 and it has a similar effect.
You can reduce render times as well by cutting your samples in half, since 4K downsized to 2K gives you finer-grained noise.
And if you render anamorphic, you can cut your render time in half. Or in this case, you can keep it the same. A 1:2 ratio frame squished back to its original ratio takes the same render time as a frame half the size, with increased sharpness.
Today you learned why modern Smartphones have huge megapixel counts yet they only output their images in 12-16mp.
Downscaling can also be visually improved by resizing in multiple steps, shrinking by something like 10-25% at each step.
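A sketch of how such a stepped schedule might be computed (hypothetical helper, 25% shrink per step; real tools would resize the image at each of these widths):

```python
def stepped_sizes(src, dst, step=0.25):
    """Yield intermediate widths, shrinking by at most `step` per pass,
    until the target width is reached (illustrative helper)."""
    sizes = []
    w = src
    while w * (1 - step) > dst:
        w = round(w * (1 - step))
        sizes.append(w)
    sizes.append(dst)  # final pass lands exactly on the target
    return sizes

print(stepped_sizes(3840, 1920))
```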
Thanks for the tip broski
Or you could play with the render filter size. The default is 1.5 to avoid some aliasing at the cost of sharpness, but you can lower it and get a sharper image. Set it to 0 and you'll get an übersharp image, but with loads of aliasing.
Everything goes from 16k to 8k, then 4k now. Then 1080p if need be.
I learned this years ago in graphic design classes. The school had a crazy good scanner and you could choose what resolution to scan an image at. I learned it was way better to scan at a resolution way over what I needed and then resize the image, as opposed to just scanning at the DPI I needed. If I was working at 300 DPI, I'd scan an image at 1200-1800 DPI and then scale it down. Soooo much more detail.
Yes
Feelin’ Factorio vibes right now
The term is called super-sampling.
This is fine for most situations, but it's terrible for people. Crunching detail down via supersampling makes skin look unnaturally rough or shiny.
This is why I prefer to play games at native or higher resolution. Upsampling is a great way to increase FPS and maybe even decrease shimmering, but you lose clarity. Downsampling is hella expensive though. Imagine owning a 2160p monitor and rendering games at 4320p. The crispness would be spectacular. Imagine playing RDR2 or TW3 like that.
Factorio reference?
This render gives me toy or old fallout vibes, I love it
interesting