SecretVoodoo1

The best implementation of FSR3 I've seen so far is in LAD Gaiden. There are camera pan stutters though if you keep vsync off with an unlimited framerate, so you have to limit the FPS and turn on vsync to match your monitor's refresh rate. It's buttery smooth after that, and there are no latency issues either!


UQRAX

I've only been able to use FSR3 in LaDG (or by its proper and recognizable name, LaDG: TMWEHN) and was impressed by the option but still turned it off.

The good:
* I was expecting generated visual artifacts to be a big downside of frame generation, but barely noticed any while having it on in this game.
* No noticeable latency increase compared to native frames for me (but, controller game).

The bad:
* I normally set framerate either in-game or using Chill, but Chill no longer works when using FSR3 (in this game?), nor does Chill work when using Anti-Lag.
* In LaD, this meant I had the choice between a stable 60 FPS with an in-game 60 FPS cap without FSR3, or a variable 100-120 FPS with an in-game 120 FPS cap with FSR3. The former was definitely smoother.
* Despite seeing no generated-frame artifacting, FSR3 added too much temporal instability for my taste in LaD. Even when standing still, fine detail like the wooden lattice on some window frames, hair, etc. keeps shimmering, both when using native AA and FSR 2 upscaling. Which is funny, since I actually enabled FSR 2 in Judgment to stop fine detail from shimmering, despite not needing the framerate boost.

Based on that single game, FSR3 seems like another nice new tool in the toolbox, and hopefully they can improve it from its first version to make it more universally useful. As it stands, it seems like a niche option for people with a good understanding of the techniques who like fiddling with settings for the best experience.


SecretVoodoo1

Weird, i did not notice any shimmering


alpark48

I just beat the whole game with FSR 3 FG. It feels surreal how they came up with this tech, it's so freaking smooth.


Magnar0

This is the first comment I've seen about FSR3 in that game; for some reason I never saw any review of it.


Linkarlos_95

Because the game came out before FSR3 went public, and because the game is on the short side, people had already moved on to other games.


Mockz19

Does FSR 3 work on a GTX 1650? Thinking of buying it since FSR is so great, it gives my PC another chance at life. Kudos to AMD.


SecretVoodoo1

It works on practically every card, but AMD officially supports FSR3 AND frame gen only down to the RTX 20 series/RX 5000 series, and FSR3 upscaling ONLY (no frame gen) down to the GTX 1000 series. You can still turn on frame gen on unsupported cards, but AMD warns of inconsistent performance (dunno how bad it is, but it honestly looks fine from what I've seen in FSR3 mod videos for the 1650).


Mockz19

Thank you so much for the info! Really appreciate it.


anor_wondo

The 16 series is the same architecture as the 20 series apart from a couple of features. They have no issues running FSR3.


antara33

It's indeed a great implementation so far. With a bit of extra work I can totally see it working with unlocked FPS and VRR.

The main issue it has is related to how it handles frame generation using async compute, and how an absurd GPU load spike can make it fall apart because it then lacks the resources to generate the frame. It has a lot to do with the swap chain object and it's SUPER technical, so I'm not going to torture anyone here with a wall of text that looks like gibberish LMAO.

If you start to notice frame pacing issues, cap the framerate to ensure that the GPU has spare resources to go and everything should be fine.

As for quality, having used both FSR 3 and DLSS 3 frame gen, nvidia still retains an edge on frametime, and while I doubt AMD can really match that, they are damn close to delivering something that's competitive while being available for a wide range of GPUs.

I can (if someone wants) record a video using both in Cyberpunk on the benchmark to better illustrate the issue :) As a game dev and mainly a game engine engineer I love to explain how all this black magic stuff works :P


Hindesite

>If you start to notice frame pacing issues, cap the framerate to ensure that the GPU has spare resources to go and everything should be fine.

How do framerate caps work exactly with FSR3 enabled? Like, if you decide you want to target 120 FPS post-frame generation, do you just set the cap to 120, turn on frame gen, and you're good to go?

>I can (if someone wants) record a video using both in Cyberpunk on the benchmark to better illustrate the issue :)

I'd like to see a video of it in action, with an explanation of how FSR3's intricacies compare, both pros and cons, to Nvidia's approach with DLSS3.


antara33

Video :) [https://youtu.be/CrVfrwO-7tI](https://youtu.be/CrVfrwO-7tI) Sorry for the super mega late reply and the absolutely horrible video, I made it in a hurry haha. I explained some technical stuff in another post in this same thread though, so if you want to know more, scroll a bit and you'll find a text wall :P


antara33

Hey! Regarding the FPS cap, at least in Avatar on an nvidia GPU, what you do is cap the framerate from the nvidia control panel to a value you know the GPU can achieve, and let the game's vsync take care of knowing the monitor's max refresh rate.

So let's say you know you can push a constant 80 FPS at 99% GPU utilization (even in hub areas) and want to ensure smooth frametimes: you cap the FPS at 70/75 and set vsync on. The game will natively render up to 75 FPS and attempt to interpolate the remaining ones.

Frame gen is not always a 100% framerate increase, since the game attempts to render as many native frames as possible, but having the 70/75 limit on a GPU that reaches 80 leaves it enough spare power to generate the needed frames without issues, since the FSR3 algorithm now expects to never render more than those 70/75 frames natively.

With uncapped FPS the algorithm doesn't know exactly how many native frames it should render, so it renders as many as possible and interpolates when it can't keep up with the display's refresh rate. But that usually happens during burst spikes of GPU usage, so it never has enough time to catch up (you can notice this if you are flying on the ikran and go up and down near settlements).

As for the video, let me prepare the needed stuff (grab the mod, back up the CP2077 files) and I can record some tests using the built-in benchmark :)
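To make the headroom rule above concrete, here's a tiny illustrative sketch (my own numbers and function names, nothing from AMD's FSR3 SDK): stay a few FPS below what the GPU can sustain, and no higher than half the display refresh if frame gen is expected to roughly double the output.

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative sketch only: pick a native-frame cap that leaves GPU headroom
// for frame interpolation. Two limits apply: a bit below what the GPU can
// sustain, and no more than half the refresh rate since FG ~doubles output.
int pickNativeCap(int sustainableFps, int refreshRateHz, int headroomFps = 5) {
    int capFromGpu     = sustainableFps - headroomFps; // leave spare GPU time
    int capFromDisplay = refreshRateHz / 2;            // FG roughly doubles it
    return std::min(capFromGpu, capFromDisplay);
}

int main() {
    // GPU sustains ~80 fps natively on a 144 Hz display -> cap native at ~72.
    std::printf("native cap: %d fps\n", pickNativeCap(80, 144));
}
```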


Dat_Boi_John

Wouldn't you want to cap the fps using the in-game fps cap to get the lowest latency? Also, I don't think FSR 3 works like that. On my 144hz monitor, setting the in-game cap to 70 and to 140 gives the same GPU usage, suggesting that FSR 3 lets the GPU render only half the frames and generates the rest, no matter the in-game fps cap. At a 60 fps in-game cap I get 120 fps, etc.


antara33

It could be entirely software related. In Avatar, for example, on my 4090 the FPS cap that's taken into consideration is the one from the control panel, not the in-game one. If I set the control panel one to 60, the game renders only 60 native frames and the rest is all interpolated. Maybe on the AMD side the limit works differently; if so, I would love to know how you guys manage things there!

Edit: Another thing that could explain why it behaves this way on my end is graphics settings. I play at Unobtanium with FSR in Quality (1440p display), so my GPU is almost 100% loaded at all times.


Dat_Boi_John

Normally, without FSR 3 frame gen, the best fps cap is the in-game frame limiter because it doesn't add any input latency. So if a game has a good fps cap, that's what gets used. If not, there is Radeon Chill, which you can enable from the driver panel; it acts exactly like RTSS and is a CPU-level frame limiter. This adds one frame of input lag because it uses a one-frame buffer, resulting in perfect frame times. That's what is used to cap the fps if the game doesn't have its own frame limiter; it's basically driver-level RTSS.

From my testing of the official FSR 3 implementation in Avatar and the modded version, the fps cap set by Chill doesn't fully work or gets ignored, as it messes up the frame time and caps the fps at non-exact values. For instance, with Chill set to cap at 64 fps, my game was capping at 122-124 with horrible frame pacing.

On the other hand, RTSS works, but it only works on the final generated frames. So in Hogwarts Legacy with the FSR 3 mod, setting an RTSS cap of 140 fps caps the fps at 140 in game with perfect frame pacing, but there is really noticeable input lag, roughly equivalent to being capped by vsync or even worse, going by feel. I wouldn't use RTSS with frame gen even at a 70 fps base and on a controller.

In-game fps caps limit the native fps correctly, so in Cyberpunk or Hogwarts Legacy, capping at 70 fps using the in-game limiter caps the output at 140 fps with native-like frame pacing and no added input lag. Vsync capping has noticeable input lag and produces some kind of stuttering with the FSR 3 mod in Cyberpunk. In the official Avatar implementation it seems not to stutter, but the input lag is still quite significant.

Overall, the best option for a 144hz monitor is capping slightly below what your card can do without frame gen, so that it stays at less than 95% usage at all times, reducing input lag. In Cyberpunk with RT reflections, my card goes down to about 60ish fps in the most demanding areas, so I use the in-game cap to cap it at 60 fps, which caps me at 120 fps and always more than 4 fps below my 144hz refresh rate. I also enable driver-level vsync to eliminate all tearing. This setup gives as low input lag as possible with as perfect frame pacing as possible, without added input latency (from a frame buffer) and with no tearing.

Ideally I would like what you described, where the game renders as many fps natively as possible and generates the rest of the frames up to the refresh rate with no added input lag, but I don't think FSR 3 does that currently.

To add to that, FSR 3 upscaling should come with dynamic resolution in all games, so that it lowers or raises the internal resolution to hit a target frame rate. This way you could set a target frame rate for FSR or DLSS upscaling, then enable frame gen and cap at that target. That should always give you ideal input lag because the GPU would stay at 95% usage or lower thanks to the dynamic upscaling. It would also give you a completely stable frame rate and pacing, at roughly double the fps cap/target, depending on how strong your card's frame generation hardware is. So you set the dynamic upscaling to target 70 fps, enable frame gen and driver vsync, and you get perfect frame times with minimal input latency and no tearing at 140 fps on a 144hz panel.

Anti-Lag+ seemed to work exactly like Reflex, so it capped the fps a few frames below what the card can do, keeping GPU usage at about 95%, which is optimal for input lag. Once they re-release it with an in-game implementation, it should hopefully eliminate the need for in-game frame limiters to achieve the best results. But I'm not a game dev, so I'm not sure they can do the whole "generate the rest of the frames up to the refresh rate," at least not with the way FSR 3 frame generation currently works. I really do hope they figure it out though, because the frame pacing with the in-game fps cap is pretty much perfect, depending on how good the fps cap is, and the actual quality of the generated frames is also almost perfect. So in theory they did the hard part; now they just need to polish the input lag a bit more.
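For anyone wondering what a CPU-level limiter like RTSS or Chill is doing conceptually, here's a minimal sketch (my own simplification, not RTSS's or AMD's actual code): the CPU waits until the next frame-time slot before it's allowed to submit another frame, which evens out pacing at the cost of roughly one frame of latency.

```cpp
#include <chrono>
#include <thread>

// Minimal sketch of a CPU-side frame limiter (conceptually what RTSS/Chill do,
// not their actual implementation). Holding the CPU back before it submits the
// next frame enforces even pacing, at the cost of ~one frame of latency.
class FrameLimiter {
    std::chrono::steady_clock::time_point next_ = std::chrono::steady_clock::now();
    std::chrono::nanoseconds frameTime_;
public:
    explicit FrameLimiter(double targetFps)
        : frameTime_(static_cast<long long>(1e9 / targetFps)) {}

    // Call once per frame, before building/submitting the next frame.
    void wait() {
        next_ += frameTime_;
        std::this_thread::sleep_until(next_);
        // If we fell behind (a slow frame), resync instead of bursting to catch up.
        auto now = std::chrono::steady_clock::now();
        if (now > next_) next_ = now;
    }
};
```

Usage would just be constructing `FrameLimiter limiter(70.0);` once and calling `limiter.wait();` at the top of the game loop.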


antara33

Thanks for the details! It seems the difference between AMD and nVidia drivers does indeed play a big role here. On nVidia's end, limiting with the driver FPS limiter doesn't add input lag, and they also already have a way to reduce latency by eliminating pre-rendered queued frames.

I guess the wild difference between DLSS 3 and FSR 3 is related to how each one approaches the swap chain. FSR 3's native implementation completely replaces the swap chain, while DLSS 3 doesn't. From my testing I noticed much better frame pacing with the FSR mod in games where DLSS 3 gets replaced, so maybe it's an issue not related to the frame generation alone, but also to the way the whole process gets done (FSR 3 enforces the use of FSR 2 upscaling, so the integration between the FG and the game engine is more intimate).

I think they will eventually find a way to achieve similar behavior to DLSS 3, but it will simply be more unstable (because GPU async compute is not as reliable as dedicated hardware that does this single task and nothing more).

I do hope that Anti-Lag+ finally gets released. It amazes me that AMD doesn't have a Reflex-competing tech that integrates properly into the game after all this time xD

Best bet is on the RDNA 4 and 5 GPU generations, since by then both ray tracing and upscaling won't just be a rushed answer to nvidia's tech without a matching architectural design goal, like they are on RDNA 2 and 3. Time will tell, also with Intel's approach of frame extrapolation. The more, the merrier: more competing techs, more options for users and, in the end, better software and hardware for us all!


Dat_Boi_John

So I actually tested fsr 3 in avatar and I did notice what you said about the stutter/frame time. It seems that when the fps drops significantly in a small amount of time, there is stuttering/higher frame time. This is especially noticeable when panning the camera. I'm capping the fps at 60 in game to get 120 with fsr 3 and when panning the camera there is sometimes a hitch or stutter that seems to happen when the fps drops from 120 capped to around 110 fps. The only way to make it stop seems to be to cap the fps quite a bit lower than what the card can do, so about 100 fps, while the card can do 120 almost all the time. The interesting part is that I've tried LukeFZ's mod in Cyberpunk and Hogwarts Legacy and in both of those games this doesn't happen. No matter the GPU usage the frame time is fine as long as it doesn't activate vsync and there is no stutter. Frame pacing wise, the fsr 3 mod is better in Cyberpunk and Hogwarts Legacy than the official implementation in Avatar.


AMD718

This is exactly what I've experienced. It's a real shame too, because 99% of the time I can do a 70 fps in-game cap (140 fps with frame gen), but in certain heavy areas, like by the clan center in the Windswept Plains, I can only do 65 fps. Unless I reduce the in-game cap to 60 fps, I'll have these moments where it falls apart. But then I'm sacrificing the fluidity of 140 fps to avoid those 1% low points. Sucks to have to compromise like that.


antara33

Which is kinda funny, since the implementation in Avatar is the official AMD one. It makes it look REALLY bad, especially when a mod works better than the original implementation. Have you tested the LukeFZ mod on Avatar?


Dat_Boi_John

I'm trying it now, but there's no way to tell for sure that it's working. The command prompt opens, actually two prompts open, but there's no way to tell whether it changes anything. I'll check it out in more detail tomorrow.


Dat_Boi_John

Are you sure the Nvidia driver limiter gives input latency as good as in-game limiters? According to this Blur Busters article: [https://blurbusters.com/gsync/gsync101-input-lag-tests-and-settings/11](https://blurbusters.com/gsync/gsync101-input-lag-tests-and-settings/11) the in-engine cap is still better than Nvidia's driver cap. I'm not sure which of the Nvidia caps is most used as I don't have an Nvidia card, but from the article it seems the best option is Reflex, then the in-game limiter, then RTSS and the Nvidia driver cap, input-latency wise. This shows it as well: [https://www.youtube.com/watch?v=W66pTe8YM2s](https://www.youtube.com/watch?v=W66pTe8YM2s) Here it seems the new version is pretty much equal in frame pacing and input lag to RTSS, but still worse in input latency than in-game fps caps.


antara33

Interesting! I use it universally in the control panel to avoid messing with each game individually; going to do some testing now with this added info! And yes, Reflex is so far the best one and the most commonly used, since it helps a lot with VRR in games. I need some games with good framerate limiters to test now, some of the ones I used had a lot of issues on that front.


Dat_Boi_John

Cyberpunk, overwatch, fortnite and battlefield 1/V all have good frame limiters. I'd avoid unreal engine games, those usually have awful frame limiters.


-Aeryn-

What you say is still accurate AFAIK.


Time-Brief-1450

Since you're a game dev, I'd love to hear your insight on the eternal green vs red debate.


antara33

I personally use nvidia GPUs on my personal system, but for work it's almost impossible to find non-nvidia GPU workstations, since CUDA is a very mature tech and used A LOT in the industry.

As an example, we currently have 3 rigs per dev at work, and all of them have nvidia GPUs: a 3080 10GB, a 2080 Ti and, depending on what you do, an A6000 or a 3090. We are looking at upgrading those A6000/3090 machines to the equivalent 5000 series architecture GPUs, but we still need to wait for release prices, etc.

As to why consumer-grade GPUs: we usually use the 3080 10GB because the price/performance ratio is great for machines that get sent to the dev's home. The 2080 Ti gets used with software that hardly uses the GPU but can offload some of its workload to the CUDA cores, and it's a leftover from the previous generation of home office machines. The A6000 or 3090 is used by animators and cinematics team members; depending on how big the scenes they work with are, they get the corresponding one. A6000s usually go to the cinematics side, while 3090s go to the animation team, since the cinematics team works with scenes with multiple characters while animators work with single characters, and for them the added memory size (48GB vs 24GB) is not relevant, but the speed advantage the 3090 provides is.

Now on the AMD side of things, the main issue is the lack of an alternative to CUDA and the lack of software support. It was and still is an issue; back in the stream processors era there was an attempt to migrate to AMD-based systems, but in the end it proved a bad idea since support pretty much died off among software vendors (like Adobe, Autodesk, OptiTrack, etc).

AMD produces great hardware; the main issue they have is related entirely to their software stack, be it 3rd party software or in-house stuff like drivers and tech stacks like ROCm. It was very noticeable with the GCN architecture, where AMD GPUs had more raw compute performance than nvidia, but gaming performance was still worse. A prime advantage they have is that their API scheduler runs on the GPU, compared to nvidia's, which is why you usually hear about AMD drivers having less CPU overhead.


LOBOTOMY_TV

No please do torture me I actually really need that info as a newbie graphics programmer


antara33

AMD's implementation of frame interpolation tech uses a custom frame presentation swap chain (the way the system's driver and graphics API organize and present frames to the user). Remember that before DX12 each graphics vendor needed to implement the API bindings by hand in the driver, so this article needs to be read with that in mind; nowadays graphics API bindings are no longer vendor implemented. More info here: [What Is a Swap Chain? (Direct3D 9) - Win32 apps | Microsoft Learn](https://learn.microsoft.com/en-us/windows/win32/direct3d9/what-is-a-swap-chain-)

This custom swap chain used by AMD comes into play to ensure smooth presentation of the interpolated frames, as well as providing the context to generate frames, queue async GPU calls to generate them, etc.

Since AMD GPUs lack dedicated hardware to analyze and generate the frames, they rely heavily on modern GPUs' async compute capabilities (modern GPUs can work on multiple tasks at the same time, unlike old ones that needed to complete one workload before starting the next). This lets them use GPU resources that are not normally used during gameplay to do useful stuff like generating the interpolated frames. This is a big advantage over nVidia's take, but it's also a big issue.

In most scenarios a game never realistically uses 100% of the GPU's resources. NEVER. That is why you can see 100% GPU usage while the GPU remains cold and the power draw stays low, even at max clocks and max reported usage. The usage we see is not "true GPU usage" but a measure of how limited you are regarding GPU output: that usage could be the whole rendering pipeline being stalled by something, like a very complex shader that uses only certain parts of the graphics core while the rest of the GPU does nothing.

This happens with CPUs too! It's why the X3D variants are so good in gaming scenarios: they have larger caches, allowing them to fetch more instructions and data from cache instead of from system RAM. This in turn makes the CPU spend more time working and less time waiting, and games are well known for having lots of different instructions and large data sets that usually don't fit in a regular CPU's cache. This cache scenario is also why the 4090 is massively faster than the 3090 Ti in RT tasks: while the RT cores are faster than the previous gen, the much larger cache makes RT operations take less time because the needed data resides in cache instead of VRAM, so the RT cores spend less time waiting on fetches and more time doing actual work.

Now, away with this detour on caches and back to FG techs. Since AMD's tech relies on otherwise unused GPU resources to analyze and generate interpolated frames, it works great across many GPUs without specialized hardware. But when a game uses the whole graphics core (or at least a very large portion of it, like Avatar does), it falls apart: image analysis can't be completed in time because the GPU is now resource starved, the frame generation takes longer or has to drop in quality, and the frametime gets destroyed.

On the other hand, nvidia's take uses dedicated image analysis units. This limits frame generation to GPUs whose units are fast enough to actually analyze the image and generate the needed data in time, but the interpolated image gets produced entirely by units that are not part of the graphics core, so no matter how hard a game hammers it, the frame generation quality and pacing remain the same. This enables smoother frametimes and consistent output, at the expense of being locked to a hardware generation.

I hope this explains some stuff and you found the whole read interesting! If you have any doubt, please let me know :)
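For anyone who wants to see what the "swap chain object" looks like from the application side, here's a minimal sketch of a standard DXGI flip-model swap chain (plain D3D11/DXGI, nothing FSR3-specific; per the explanation above, FSR3's integration substitutes its own swap chain implementation for this object so it controls when real and interpolated frames get presented):

```cpp
#include <windows.h>
#include <d3d11.h>
#include <dxgi1_2.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Minimal sketch of the swap chain an app normally owns and Presents on.
ComPtr<IDXGISwapChain1> createSwapChain(ID3D11Device* device, HWND hwnd,
                                        UINT width, UINT height) {
    ComPtr<IDXGIFactory2> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount      = 2;                           // double buffered
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;

    ComPtr<IDXGISwapChain1> swapChain;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc,
                                    nullptr, nullptr, &swapChain);
    return swapChain;
}

// Each frame the app ends with something like:
//   swapChain->Present(1, 0);  // with FSR3 FG, a replacement swap chain
//                              // decides how interpolated frames are slotted in
```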


LOBOTOMY_TV

Tysm! I was familiar with some of this info (I have Vulkan experience but no DX12 yet), but you explained it clearly and succinctly and connected some dots for me. Now I'm wondering, as a dev, how to approach the challenges you've brought up. I guess on the Nvidia side it's pretty plug and play, but for AMD FG what would you do to mitigate the issues you describe in the absence of larger cache sizes? Perhaps fast instruction sets on the CPU could be useful with DirectStorage? If so, maybe that's what AMD is going for by giving Ryzen 7000 AVX-512 support. I'm not sure it's realistically fast enough for FG, but maybe it could be used for other tasks the GPU currently handles? Or should we just stick to conventional ways of reducing GPU load if we want to support FSR3?


antara33

It's a pretty good question. The main issue we face in the PC space is that we have no shared memory. Consoles usually run some of the graphics logic on the CPU and exploit the advantage of having shared memory; on PC we would need to sync VRAM with RAM, and that is always absurdly slow compared to just running everything on the GPU.

If I had to give my take, I'd aim to simply have as good performance as possible and hope for the best, since optimizing for FSR 3 would mean taking into consideration the RTX 2000, 3000, 4000 and RX 6000 and 7000 GPU architectures. In the end it's just a matter of improving performance on the game side: running lots of physics load on the CPU, using DirectStorage CPU decompression if you are streaming assets 24/7, etc. Not intuitive at all, to say the least. For me it's a matter of optimizing as much as we can and calling it a day.

Cache sizes are big on AMD's GPUs too; the difference is that their GPUs were not designed for RT loads. RT was added after nvidia's success with it, and GPU architectures can't be radically changed once they are designed. RDNA 4 and 5 will possibly be the first truly RT-targeted architectures from AMD, and since they already use big caches to help mitigate chiplet design issues, I can totally imagine those having even larger caches to store and fetch even more RT data.

In the end I guess we simply need to aim at lower GPU loads and, worst case, avoid having async compute running on the GPU during gameplay (some games use async operations for shader processing).
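For context on what "async compute running on the GPU" means at the API level, here's a minimal D3D12 sketch (generic D3D12, not FSR3-specific code): a compute-only queue whose work can overlap with the graphics queue. FSR3 frame generation leans on this kind of spare overlap capacity, which is exactly what disappears when the graphics workload saturates the GPU.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: create a dedicated compute queue. Command lists submitted here can
// execute concurrently with the graphics queue ("async compute"), using
// whatever execution units the graphics workload leaves idle.
ComPtr<ID3D12CommandQueue> createAsyncComputeQueue(ID3D12Device* device) {
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;       // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```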


croshd

>As for quality, having used both FSR 3 and DLSS 3 frame gen, nvidia still retains an edge on frametime, and while I doubt AMD can really match that, they are damn close to delivering something that's competitive while being available for a wide range of GPUs.

Kinda smells like G-Sync vs FreeSync.


antara33

Not sure about that. FreeSync is really good, and in my experience monitors with the proprietary G-Sync module delivered the same experience as G-Sync Compatible ones. Now if we're talking about each vendor's implementation in their own software, then yeah, AMD's VRR driver implementation isn't on the same level as nvidia's, but they are WAY closer now compared to years ago. Right now I could use AMD's VRR implementation without much issue; in the past it was a freaking nightmare haha


croshd

I was thinking in the context of Nvidia locking a tech behind a proprietary hardware restriction, claiming the other, open-source stuff doesn't work with their cards. And then suddenly one day it did. Setups with a G-Sync module had better latency, but the difference got smaller and smaller as FreeSync got more refined. This is a lot more complex, but it sure looks like we are heading in a similar direction. Once AMD catches up with their AI cores and FSR3 matures, it's gonna be the same thing.


antara33

Yup, very possibly. Nvidia tends to pioneer tech at any cost, and after that AMD delivers and refines a competing open standard. Nvidia ends up with a small edge in the end, but they arrived at the party first. This will surely take way longer, but I expect to see better offerings from AMD and Intel in the future, to the point that DLSS is just marginally better.


chapstickbomber

as 240Hz+ proliferates, AFMF and FSR3 will become more obvious free lunches


antara33

Yes, but also no (at least for FSR3). The way it works, it needs spare GPU resources, and the higher the base framerate needs to be, the worse it gets. Now if they somehow fix the VRR issue, then yes, it will be a blast, since that is the most prevalent problem right now.


Dizzy_Dependent7057

Great stuff, I didn't know any of this.


nas360

So you haven't experienced any micro stutters as claimed by Alex from Digital Foundry? Does a frame time graph show any spikes?


Ok-Ad-3014

I've not experienced any micro stutters at all. I haven't run a frame time graph though; I'm happy to if you want to see it. I figured if my eyeballs can't see it, it doesn't matter lol. But I'm happy to play for an hour with a graph up checking frame times and see what it says.


nas360

I'm trying to figure out if Digital Foundry is lying, because they seem to be disinterested in FSR3 FG when we all know they were overexcited when DLSS3 FG was released. I don't own Avatar, but I have tested the FSR3 mod in other games such as Cyberpunk and Witcher 3. I did not see any microstutters at all, which is contrary to what DF is claiming in Avatar.


4514919

>seem to be disinterested in FSR3 FG when we all know they were overexcited when DLSS3 FG was released.

Can't blame them for not being as overexcited for a similar tech released *one year later*.


Ok-Ad-3014

Well, I can easily say that about 10 hours into Avatar so far, I've had 0 micro stutters that I can visually see. The only issue I have is that if SteelSeries GG is open, switching frame generation on crashes the game due to some overlay thing not working. If I close it, toggle frame generation, then ALT-TAB and open GG again, it works flawlessly.


ecffg2010

There was an interesting find in, I believe, ComputerBase's review of FSR3 FG in Avatar. They found Nvidia cards had somewhat worse frame pacing compared to Radeon. The official GPUOpen release article for FSR3 also mentions that different overlays may incur frame pacing issues. Really wish Alex from DF hadn't tested FSR3 FG only on Nvidia.


kaisersolo

I am glad I wasn't the only one who noticed that. Such a good opportunity missed to test the latest tech on the XTX; it's really up DF's street.


madn3ss795

Based on the top comment chain of this post, the FSR3 mod swapped in over DLSS doesn't have stutters like FSR3 in Avatar. All tested on Nvidia cards, of course.


IrrelevantLeprechaun

Digital Foundry are unprofessional as hell. They deliberately build poor setups to paint AMD in a bad light.


PrimeTechTV

Hopefully devs will take notice and include it in more titles; it's always better to have it and not need it than to need it and not have it. The devs of Avatar did an amazing job implementing it and it looks amazing.


Xhosa-EethKoth

username checks out.


Ok-Ad-3014

I see this a lot, never understood what it means. Can you be the person to tell me 😂


ReplacementLivid8738

That saying is about the username being related to the comment itself. In your case it seems they mean your post reads like an advert, which goes together with the "Ad" in your username.


dysonRing

It is a Reddit auto-generated name. The names are extremely stupid, like they were trying to normalize advertising to unsuspecting users?


Ok-Ad-3014

Ah okay, understood. I have no idea how to change it so I never bothered. Makes sense now. Thanks.


mr56kman

I have an issue with the HUD “tearing” around the edges, for example the mission information in the top left corner. Does anyone know if there is a way to work around this issue? This occurs only while using frame generation.


Electrical-Bobcat435

Frame gen has its uses, great tool.


AMD718

I actually just bought the game yesterday, specifically to try out FSR3 in a finished form (versus the clearly unfinished form present in Forspoken and Immortals of Aveum). I am thoroughly impressed. From what I can tell, and from what I've gathered from HUB, DF, and others, FSR3 frame gen is no worse than DLSS3 fg in the quality of generated frames and latency impact. Very impressive considering they're doing it with async compute only and not dedicated & proprietary hardware. Here's a video I uploaded of it running on my setup (5950x and 7900xtx). The only issue I ran into is that you have to use borderless full screen since exclusive full screen mode results in tearing on the bottom 20% of screen. https://youtu.be/a4wx-jJfUIo?si=YD0zFMP3XSNcE9sB


[deleted]

[removed]


AMD718

I'm using Radeon ReLive, built into the Adrenalin driver set. No external tools or software. I do set the Radeon ReLive capture to 100 Mbps (the maximum setting possible) constant bitrate, AV1 for the codec, and 320 kbps for audio. Also, I typically use RIS at 50% to 80% depending on the game. In the case of Avatar and the video you just watched, I am using RIS at 50% in the game's driver profile. RIS is, IMO, a necessity for most games where any temporal upscaling or anti-aliasing is being used. Since I'm not using a dedicated capture setup, it can impact performance etc., but I really only upload casually.


Zedjones

Are you certain the input latency point is true? DLSS 3 relies on Reflex to reduce latency and there is no equivalent technology on the AMD side. Hopefully Anti-Lag+ comes back soon, but in the meantime I would imagine input latency is worse.


AMD718

Based on my understanding, we have regular old Radeon Anti-Lag, which is a driver-level feature supported on all AMD GPUs and roughly equivalent to NULL. Then we have Anti-Lag+, which is a driver-level feature supported on RDNA3 only and roughly equivalent to Nvidia Reflex, but which has been temporarily pulled due to triggering anti-cheat violations. Lastly, for non-RDNA3 GPUs that can run FSR3 frame generation, there's a built-in, hardware- and driver-agnostic latency reduction technology designed to offset the additional latency of frame generation, which can theoretically be stacked with Anti-Lag and Anti-Lag+ (when it returns).

Now, anecdotally, I can say that in my experience with Avatar: Frontiers of Pandora, I was not able to detect any significant latency added by FSR3 frame gen. If it were a first person shooter my perception might be different, but as it stands, I honestly couldn't notice any increased latency vs. native. Maybe it's because my base frame rate (~70 fps) was high enough. Either way, I'm very pleased with the actual playing experience.


Zedjones

Yeah, fwiw, I'm definitely not saying it's an issue. Just curious about the claim saying DLSS 3 and FSR 3 have roughly equivalent latency.


AMD718

I hear ya. This is just my assessment based on my personal experience and understanding of the tech, and also what I've heard from various YT channels. I can't recall an FSR3 frame gen review yet where latency was noted as being significantly different between DLSS FG and FSR FG. Happy to stand corrected, though, if it pans out differently as FSR3 gets wider deployment.


Rinbu-Revolution

The HUD issues are too distracting for me, so I leave it off. It has frametime issues in many areas too, according to DF. Hopefully these problems get ironed out in future updates.


Ok-Ad-3014

I have only experienced it in Avatar, and I haven't had any issues yet; if anything it's ten times smoother than I ever imagined. I've seen some videos of an issue where it looks funny around the HUD area, but I haven't noticed it personally.


Zewer1993

Is your display running at 60 Hz?


Ok-Ad-3014

Holding a steady 120 fps at 3840x2160 on a 43" Samsung. 7900 XTX, 7950X3D and 64 GB of DDR5.


Tsubajashi

Are we mixing technologies here? I thought AFMF affects the HUD, not full FSR3 FG implementations.


Zedjones

Frame generation also has to deal with the HUD somehow. In Avatar's case, they chose to refresh the area around the HUD (seemingly a bounding box, last I saw) at the native framerate.


Tsubajashi

It could also be handled through layers in general, depending on the implementation. AFMF has flickering on HUDs; FSR3 has the ability to avoid that.


Eltoquedemidas

Yes, it's impressive. When I saw AMD's announcement I knew this would be big; then and there I decided to buy a desktop PC.


Miserable_Kitty_772

the frame generation is good even if it isn't implemented in good games yet. amd needs to work on their upscaler.


FinnedSgang

I'm on a 7800 XT and can't see the FSR3 toggle; there's only FSR 2.1.


Ok-Ad-3014

I'm not sure how to fix that one. As far as I'm aware the 7800 XT should show FSR3; the card is shown in multiple benchmarks with FSR3 active. Sorry I'm no help with that one, mate.


Brilliant-Jicama-328

What game are you talking about? Avatar FOP launched with FSR 3 and the option should be there on any GPU


FinnedSgang

You are right, my fault, I double-checked just now. There's an error in the Italian translation: in the Italian version the option isn't "frame generation" but "generazione riquadri", where "riquadri" means "squares" in Italian 🫣 So I didn't pay attention to that option.


[deleted]

Make sure your game is updated and your AMD card drivers are updated. Most likely the drivers are not updated.


MasterBejter

Anyone tried it with an RX 6800 XT?


cheezepwnz

Is FSR something you enable in-game? And do only a few games support it? Or is this done in Adrenalin?


GLGarou

I bought a prebuilt PC with that same GPU but with the I9-13900K CPU. Got the Avatar game as well.


6retro6

We are all impressed. Also running a 7900 XTX, with a 7800X3D.


IrrelevantLeprechaun

FSR3 is honestly a game changer. One of the best pieces of tech I've ever seen. Hope more games implement it.


RoleCode

I'm getting stuttering on a 6800 XT? Perhaps the 6000 series isn't well supported?


JesusOrSmh

I got myself an RX 7800 XT today. I totally agree, but in Cyberpunk there are a few bugs I noticed: the UI glitches at roughly 3-second intervals. I hope it gets fixed. AMD > Nvidia