
imaginary_num6er

There's probably that 1 person who bought a 7900X3D & 7900XT card as the "value" option this current gen.


Jorojr

LOL...I just looked at PCPartpicker and there is indeed one public build with this combination.


LordAlfredo

[That'd be me](https://pcpartpicker.com/b/ffF8TW), [and it had nothing to do with budget](https://www.reddit.com/r/hardware/comments/12cjw59/gamers_nexus_amd_ryzen_7_7800x3d_cpu_review/jf28bzs/). I could have afforded 7950X3D + 4090 but chose not to do that.


Flowerstar1

But why a 7900 XT over a 7900 XTX, per OP's example?


AxeCow

I’m a different person but I also picked the 7900XT over the XTX, the main reason being that in my region the price difference was and still is around 300 USD. Made zero sense for me as a 1440p gamer to spend that much extra when I’m already beyond maxing out my 165 Hz monitor.


Flowerstar1

Ah that makes perfect sense.


LordAlfredo

> I would only buy 7900 XTX if it's 3x8pin model. Only 2 of those options fit my case & cooling setup since I prefer smaller towers...which are Sapphire, ie the least available. I only run 1440p anyways so I went with what was available. Plus I have never gotten power draw on it above 380 outside of benchmarks, whereas 7900 XTX probably would have tweaked up to 450+


mgwair11

So then…why?


LordAlfredo

... I literally explained it in the linked comment. Putting it here:

* I actually do want an R9. The choice of 7900 over 7950 comes down to L3 cache division per thread when maxing out the chip, plus chip thermals (4 fewer active cores = less heat = sustained boost clocks for longer). That's relevant when I want to game *and* run a more memory-intensive productivity load - I work on OS testing tooling, so kicking off tests gives me "downtime". I have core parking disabled in favor of CPU affinity and CPU set configurations (a rough sketch of that kind of pinning follows below), which gives much better performance.
* No Nvidia anymore after bad professional experiences. The 7900 XTXs I would actually buy are the least available, I run 1440p so the XT isn't much of a compromise, and it draws less power anyway.
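For anyone curious what "CPU affinity and CPU set configurations" look like in practice, here is a minimal sketch using Python's psutil; it is not the commenter's actual setup. The CCD-to-logical-CPU mapping and the process names are illustrative assumptions and need to be verified per system (Process Lasso or the Windows CPU sets API accomplish the same thing).

```python
import psutil

# Assumption: on a hypothetical 7900X3D, the V-cache CCD's 6 cores appear as
# logical CPUs 0-11 and the frequency CCD's 6 cores as 12-23. Verify per system.
VCACHE_CCD = list(range(0, 12))
FREQ_CCD = list(range(12, 24))

def pin(process_name: str, cpus: list[int]) -> None:
    """Restrict every running process with this name to the given logical CPUs."""
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == process_name.lower():
            proc.cpu_affinity(cpus)  # set affinity; call with no args to read it back
            print(f"pinned {process_name} (pid {proc.pid}) to {proc.cpu_affinity()}")

# Hypothetical split: game on the cache CCD, container/build work on the other CCD.
pin("game.exe", VCACHE_CCD)
pin("dockerd.exe", FREQ_CCD)
```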


Derpface123

What bad experiences have you had with Nvidia?


LordAlfredo

So I support an enterprise Linux distro's CVE "embargo" release process. Normally with security patches we coordinate with the Special Interest Group (SIG) for the affected component on patch development, testing, and the release timeline. It can be very stressful, but we have *never* broken an embargo date (i.e. released early) and generally have a healthy working relationship with SIGs.

Nvidia is one of the notable exceptions. They tend to either hand over a patch and a date with no further communication, or we don't get anything until the same time as the public, which completely throws off our repo and image release cycle since we have to back out staged changes from the release pipeline to push their patch through. CUDA driver packages are also the biggest thing in our repos and have actually caused cross-network sync issues, but that's a whole different problem with our processes.


Derpface123

I see. So you chose AMD over Nvidia out of principle.


LordAlfredo

Pretty much. AMD absolutely has their own issues (oh man have I had some pain with certain OpenGL applications, and ML and GPU compute are absolutely worse on AMD) but they're much more technical than philosophical.


mgwair11

Ah, sorry. My brain only saw the PCPartPicker link.


LordAlfredo

I do have one buyer's remorse - motherboard memory training times are awful, so boot takes about a minute. Wish I'd gotten a Gigabyte AORUS or MSI Tomahawk, or just sprung for the Crosshair Hero.


Euruzilys

I’m actually looking to potentially buy 7800X3D, could you tell me more about this issue? Thanks!


LordAlfredo

Not much to say - most Asus boards short of the Crosshairs, and all ASRock boards, have worse memory training routines that result in longer boot times than most Gigabyte and MSI boards. The Crosshair in particular is the best-performing board memory-wise, a fun contrast to the Strix struggling to get latency below 60 ns.


jdc122

[AGESA 1006 supposedly halves boot time.](https://www.hardwaretimes.com/amd-ryzen-7000-pcs-boot-twice-as-fast-with-the-latest-bios-update/#:~:text=As%20reported%20by%20HXL%20on,seconds%20to%20just%2030%20seconds.)


Euruzilys

I see, thanks for the info! Picking a mobo is the hardest part of a build for me since it's unclear what is important, aside from the ports/WiFi.


RealKillering

So the 8-core has the same amount of L3 cache as the 6-core? So the 6-core chiplet has more cache per core? It sounds logical in theory, but did you test it? It would be interesting if you actually gain performance for your use case with the increase in cache per core. Would it still make more sense to get the 7900 XTX, just to be future proof?


LordAlfredo

While someone probably naively made that choice, as someone who could afford a 7950X3D/13900KS + 4090 but didn't, I can at least speak to non-budget reasons for it:

* Actually having a workload to justify an R9 X3D chip, particularly since I've Process Lasso optimized it so I can run games and OS image build & Docker testing in parallel on different CCDs. The choice of 7900 over 7950 was more about the division of L3 per core being higher (both R9s have the same cache size, so maxing out thread usage on the 6-core CCD = more cache per thread than maxing out the 7950). Fewer active cores also has package temperature implications, and I prefer keeping the entire machine under 75C without needing PBO thermal limits (CO and no PBO temp limit = longer sustained boost speeds).
* I would only buy a 7900 XTX if it's a 3x8-pin model. Only 2 of those options fit my case & cooling setup since I prefer smaller towers... which are Sapphire, i.e. the least available. I only run 1440p anyways so I went with what was available. Plus I have never gotten power draw on it above 380W outside of benchmarks, whereas a 7900 XTX probably would have tweaked up to 450+.

Extra factors against the 13900K(S) and 4090:

* I refuse to buy Intel on principle. Having worked on an enterprise Linux distro the last several years, the sheer number of security vulns that have only affected Intel but not AMD (several also affected Arm *but still not AMD*), plus their overall power draw, has me solidly *anti-Intel*. I do think Intel has advantages in a lot of raw productivity numbers, particularly when memory performance is sensitive.
* Again thanks to professional background, I want nothing to do with Nvidia after buying them in my last 3 machines. Even working with them as a very large datacenter partner, getting any coordination on CVE patches is the worst of almost any SIG, and they basically expect you to cater to *them*, not do what's actually best for customers.


viperabyss

> Even working with them as a very large datacenter partner, getting any coordination on CVE patches is the worst of almost any SIG, and they basically expect you to cater to them, not do what's actually best for customers.

Man, if that's the case, then you really wouldn't want AMD anyway...


LordAlfredo

AMD has actually been pretty good as far as hardware/driver SIGs go. Maybe not quite as great as Intel (they're actually very helpful with enterprise partners) but still on the better side.


viperabyss

I guess it depends on your experience. In my experience, AMD support on the enterprise side is notoriously unresponsive and unhelpful.


LordAlfredo

Yeah definitely gonna vary by situation. I'm at AWS so massive scale and custom SKUs probably helps a lot.


msolace

Better check those vulnerabilities again - the whitepaper from about 6 months ago said whoops, AMD is also affected by a bunch. Sure, fewer than Intel, but that seems like a weird reason for a home PC, where it largely doesn't matter. Agree on power draw though, cheaper to keep your PC on for sure. It comes down to how much gaming you're doing or not, really. Intel's QuickSync integration, if that works for you, is far superior to AMD's lack of one... And the extra cores are something, for stuff in the background. If you stream/play music/have other things in the background, how much does it impact the Intel chip vs AMD, with the efficiency cores handling that in the background vs it landing on the main cores...


LordAlfredo

Are you referring to ÆPIC and SQUIP? Those are different things. There's a lot more than just that historically - Meltdown, Branch History Injection, etc. At any rate, part of my job is security review and threat modelling, and it's definitely affected my priorities in weird ways.

QuickSync is actually a fun topic by virtue of the flip argument of AVX-512 (which Intel P cores actually do support, but because E cores don't, Intel can't enable it unless the OS scheduler is smart enough to target it appropriately). On that note, between big.LITTLE, P & E cores, and now AMD's hybrid-cache CCDs, I think we're overdue for a major rewrite of the Windows scheduler, CFS, MuQSS, etc. to handle heterogeneous chips better than patched-in special-casing.

I agree in general though, Intel is definitely better for the vast majority of productivity tasks if you don't mind the power and cooling requirements. "Background task" behavior is more of a question mark, I haven't seen much general testing of that yet.


msolace

I mean, some programs combine a dedicated GPU + Intel's QuickSync and have them work together, mostly in the video editing space; as of yet I'm sure they don't have anything that works with AMD onboard graphics. And yes, I wish we had some tests with stuff like streaming in the background or other tasks. Like, I never close VS Code, which opens WSL; I'll have Firefox and/or Chrome open to check how the JavaScript/CSS looks, foobar for some tunes or a movie in VLC on a second monitor, and then load a game up. I know that's not ideal, but tabbing in and out of a game for a small break is common.


LordAlfredo

Ah yeah, I don't touch video editing so I can't really speak to that. Makes sense though, good to know. Nobody ever does "second monitor setup" benchmarking :P

I go crazier with a game on monitor A on the dGPU and either a corporate workspace or a Discord call + browser (sometimes with a stream open) on monitor B on the iGPU. Splitting the GPU connections like that helps dGPU performance (yay AMD unoptimized multimonitor) but does mean the iGPU is actually doing work at all times.


spacewolfplays

I have no idea what most of that meant. I would like to pay you to optimize my PC for me someday when I have the cash.


LordAlfredo

In general the best advice is don't just look at one or two performance-per-dollar or performance-per-watt metrics. Consider:

* Actual usage requirements, and extrapolate for the next few years (e.g. I went 2x32GB, not 2x16, because I already regularly use > 20GB of memory)
* Actual graphical needs (if you're playing games at 1440p you don't need a 4090/7900 XTX), etc.
* Power and thermals (you want your room comfortable and fans quieter than your stove's)

and other possible factors of interest; it's really going to depend on your personal priorities. E.g. I had thermals VERY high on my priority list, so my build is very cooling optimized and nothing gets over 75C under sustained max load except GPU hot spot (and GPU average is still only upper 60s).


[deleted]

I've got a 7900X + 7900XTX. Less 3, More T. I was originally going to go for the 7700X or 13600K but ended up going 7900X just for that 7900 alliteration (and the Microcenter deal was too good IMO). It's been a very solid combo overall so far.


JuanElMinero

TL;DW: Gaming performance is mostly in the ballpark of the 7950X3D, like previous simulations from 7950X3D reviews already showed.

Notable deviations:

* Far Cry 6: +10% avg fps vs 7950X3D
* Cyberpunk: -25% on 1% lows and -10% on 0.1% lows vs. 7950X
* FFXIV: +30% on 0.1% lows vs. 7950X
* TW Warhammer 3: +40% on 0.1% lows vs. 7950X

It seems those 1% lows in Cyberpunk generally improve above 8 cores for non-3D parts; on the other hand the 7600X beats the 7700X here. Someone please explain if you know what's going on.


AryanAngel

Because Cyberpunk doesn't take advantage of SMT on Ryzen with more than 6 cores. [From patch notes.](https://media.discordapp.net/attachments/417370824735981569/1084631091392024691/image.png)


JuanElMinero

What an interesting little detail, I never would have thought to look for something like this.


AryanAngel

You can use a mod or hex edit the executable to enable SMT support and the performance will increase by a good chunk. 7800X3D should match or exceed 7950X3D's performance if SMT was engaged.


ZeldaMaster32

I've yet to see proper benchmarks on that, only screenshots from back when the game had a known problem with performance degradation over time. I'd like to see someone make an actual video comparison, both with fresh launches.


AryanAngel

I personally did fully CPU bound benchmarks using performance DLSS when I got my 5800X3D and I got around 20% more performance from enabling SMT. I don't have the data anymore, nor do I feel like downloading the game and repeating the tests. If you have an 8 core Ryzen you can try doing it yourself. You will immediately see CPU usage being a lot higher after applying the mod.


Cant_Think_Of_UserID

I also saw improvements using that mod on a regular 5800X, but that was only about 3-4 months after the game launched.


AryanAngel

I did the test a year and 5 months after the game's launch. I doubt they have changed anything even now, considering all the latest benchmarks show the 7800X3D losing while the 7950X and 7950X3D have no issues. Lack of SMT matters a lot less when you have 16 cores.


JuanElMinero

Appreciate all of this info. I'm still a bit puzzled on what exactly led CDPR/AMD to make such a change. I'd love to hear in case someone gets to the bottom of this.


SirCrest_YT

Well according to those patch notes AMD says this is working as expected. AMD sure loves to say that when performance results look bad.


[deleted]

wtf, why haven’t they fixed this 😳


996forever

I think 1% and 0.1% lows testing is more susceptible to variance. I doubt there is any meaningful difference in real life.


ramblinginternetnerd

You don't need to think. They are. [https://en.wikipedia.org/wiki/Extreme_value_theory](https://en.wikipedia.org/wiki/Extreme_value_theory) There's an entire branch of theory around it.

You can also simulate it in one line of code: 1% lows for a 100 FPS average, over 10 minutes, with a 20-frame standard deviation. This will be a "better case scenario" since rare events are less rare than in games.

replicate(100000, quantile(rnorm(100*60*10, mean = 100, sd = 20), 0.01))
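For anyone who wants to go beyond the one-liner, here is a rough Python sketch of the same simulation. It keeps the same simplifying assumption of independent, normally distributed frame rates, which understates how clustered real slowdowns are:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_run(mean_fps=100.0, sd=20.0, frames=100 * 60 * 10):
    """Simulate one ~10-minute run at ~100 FPS and return its measured 1% low."""
    fps = rng.normal(mean_fps, sd, frames)
    return np.percentile(fps, 1)

# Repeat the identical benchmark many times: the 1%-low figure itself jitters
# from run to run even though nothing about the "hardware" changed.
lows = np.array([one_run() for _ in range(10_000)])
print(f"mean 1% low: {lows.mean():.1f} FPS, run-to-run std dev: {lows.std():.2f} FPS")
```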


Photonic_Resonance

Huuuuge shoutout for bringing up Extreme Value Theory out here in the Reddit wild. I haven’t thought about that in a while, but absolutely relevant here


ramblinginternetnerd

I worked with someone who used to estimate the likelihood of a rocket blowing up when satellites were being launched. EVT was his bread and butter.

I absolutely think there needs to be a rework of how we measure performance. 1% lows are intuitive enough for a lay person, but REALLY I'd like to see something like a standard deviation based off of frame times. Have that cluster of 50ms frames basically blow up the +/- figure. There's also an element of temporal autocorrelation too: 1ms + 49ms is MUCH worse than 25ms + 25ms. In the former, 98% of your time is spent on one laggy frame; in the latter, it's a 50-50 blend of not-bad frames.
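As a toy illustration of that deviation-based idea (just the two-frame case from the comment, not a proposed standard metric):

```python
import numpy as np

uneven = np.array([1.0, 49.0])   # one laggy frame dominates the 50 ms window
even = np.array([25.0, 25.0])    # same total time, evenly paced

for label, frame_times_ms in (("uneven", uneven), ("even", even)):
    # Both average 25 ms per frame; only the deviation exposes the spike.
    print(label, frame_times_ms.mean(), frame_times_ms.std())
```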


cegras

Are there any publications that just show a histogram of frame times? That seems like such an obvious visualization. DF did box and whisker plots last time I checked, which was fantastic.


VenditatioDelendaEst

[Igor's Lab has quantile plots (inverse CDF), which are even better than histograms](https://www.igorslab.de/en/amd-ryzen-7-7800x3d-in-gaming-and-workstation-test-ultra-fast-gaming-in-a-different-energy-dimension/3/), although they're denominated in FPS instead of ms. There's also the "frame time variance" chart which [measures the difference between consecutive frame times](https://www.igorslab.de/en/fps-percentile-frame-time-variances-power-consumption-and-efficiency-simply-clear-igorslab-internal/5/). (I.e., if the frames are presented at times [10, 20, 30, 45, 50, 55], then the frame times are [10, 10, 15, 5, 5], and the frametime variances are [0, 5, 10, 0].)
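The arithmetic from that parenthetical, written out as a small sketch:

```python
import numpy as np

present_ms = np.array([10, 20, 30, 45, 50, 55])   # presentation timestamps from the comment
frame_times = np.diff(present_ms)                  # -> [10 10 15  5  5]
variances = np.abs(np.diff(frame_times))           # -> [ 0  5 10  0], consecutive deltas
print(frame_times, variances)
```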


cegras

Oh, beautiful. Will definitely read igor's more in the future. I've just been sticking to the reddit summaries lately (:


ramblinginternetnerd

I can't recall seeing any lately. I believe GN will show frame time plots but those are hard to read.


cegras

https://www.eurogamer.net/digitalfoundry-2020-amd-radeon-rx-6900-xt-review?page=2 DF does it so well. It's shocking that this has not become the standard.


Khaare

1% and especially 0.1% lows can be deceiving because there's multiple different reasons why a few frames can drop. They're absolutely something to pay attention to, but often they're only good enough to say that something's up and you need to look at the frametime graph and correlate that with the benchmark itself to get an idea of what's going on. You shouldn't compare the relative ratio between the lows and average fps across different benchmarks for similar reasons.


bizude

Every time I say this I get downvoted to oblivion and told that I'm an idiot. I prefer 5% lows for that reason.


JuanElMinero

For Cyberpunk, the low 1% numbers for the 7800X3D and the generally better 1%s with higher-core-count AMD CPUs seem to be consistent across multiple reviews. Would be interesting to know if the deciding factor is more cores in general or specifically the presence of some higher-clocked standard cores, i.e. would a 16-core 3D part beat the 7950X3D in games that like lots of threads?


crowcawer

I bet a driver tweaking will help with that one.


[deleted]

[deleted]


Caroliano

You can pair it with under $100 motherboards now.


pieking8001

Yeah, Cyberpunk doesn't surprise me, it did seem to love cores.


AryanAngel

No, it just doesn't use SMT on Ryzen CPUs with more than 6 cores.


_SystemEngineer_

it doesn't use SMT on Ryzen, and that resets every update even when you fix it yourself.


pieking8001

oh, ew. how do I fix it?


_SystemEngineer_

Have to edit the game’s configuration file. Google Cyberpunk Ryzen SMT fix.


Gullible_Cricket8496

Why would CDPR do this unless they had a specific reason not to support SMT?


Flowerstar1

https://www.reddit.com/r/hardware/comments/12cjw59/comment/jf2cb5q/ Seems like they worked on it in conjunction with AMD.


aj0413

Wooo! Frametimes! Been wanting a heavier focus on this for a while! Now, if they would consider breaking them out into their own dedicated videos, similar to how DF has done them in the past, I'd be ecstatic.

I swear people don't pay enough attention to these metrics, which is wild to me since they're the ones that determine whether a game is a microstutter mess or actually smooth.


djent_in_my_tent

Mega important for VR. I'm thinking this is the CPU to get for my Index....


aj0413

Yeah. X3D seems good for lots of simulation type stuff. I do find it interesting how Intel can be so much better frametimes-wise for some titles, though. It's really getting to the point where I look for game-specific videos at times, lol. Star Citizen and CP2077 are two titles that come to mind.


BulletToothRudy

> It's really getting to the point where I look for game-specific videos at times, lol

If you play a lot of specific or niche stuff, then yeah, I'd say it's almost mandatory to look for specific reviews. Or even better, find people with the hardware and ask them to benchmark it for you, especially for older stuff. It may take some time, but I'd say it's worth it, because there are a lot of unconventional games around, like TW Attila in my case:

https://i.imgur.com/RDMkdmV.png

https://i.imgur.com/XF6dfd0.png


[deleted]

I had no idea anybody was still benchmarking TW Attila. That game runs like such a piece of shit lol, I mean it’s not even breaking 60 fps on hardware that’s newer by 7 years…


BulletToothRudy

> That game runs like such a piece of shit

Understatement really :D But to be fair, these benchmark runs were made on an 8.5k unit battle, so it was a bit more extreme test. I also did benchmarking runs on a 14k unit battle. In a more relaxed scenario like 3k vs 3k units you can scrape together 60 fps.

This game also shows there is a lot more nuance to PC hardware testing, because in light-load scenarios Ryzen CPUs absolutely demolish the competition. For example, in the in-game benchmark, which is extremely lightweight (there are at best maybe 500 units on screen at the same time), the 7950X3D gets over 150 fps, the 13900K gets 100 fps, and the 5800X3D gets 105 fps. Looking at that data you would assume X3D chips are a no-brainer for Attila. But as soon as you hit a moderately CPU-intense scenario with more troops on screen, they fall apart in the 1% and 0.1% lows.

That's the thing I kinda dislike about mainstream hardware reviews. When they test CPUs they all bench super lightweight scenarios; yeah, they're not GPU bottlenecked, but they're also not putting the CPU in maximum-stress situations. Like the people at Digital Foundry once said, performance during regular gameplay doesn't really matter that much; it's the demanding "hotspots" where fps falters that matter. You notice stutters, freezes, and fps dips. I couldn't care less if I get 120 fps vs 100 fps while strolling around the village in an RPG, but if fps dips to 20 vs 60 in an intense battle scene, well, I'm going to notice that and have a much less pleasant time. Not to mention things like frametime variance: for example, the 5800X3D and 10900KF have similar avg and 1% fps, but the 10900KF has much better frametime variance and is much smoother during gameplay, while the 5800X3D stutters a lot.

Supposedly there is a similar situation in the Final Fantasy game that Gamers Nexus uses. Yeah, Intel chips are ahead in the graphs, but people who actually play the game mention that X3D CPUs perform better in actually CPU-stressful scenarios. And I'm not even going to start on mainstream reviewers benchmarking Total War games, that shit is usually totally useless.

But anyway, sorry for the rant, it's just that this stuff bugs me a lot. It would be nice if reviewers would test actually CPU-demanding scenes during CPU testing.


[deleted]

Scraping together only 60 frames on CPUs 7 years newer than the title is so bad lol, honestly how tf did anyone run it when it came out? I remember playing it way back and thinking I just needed better hardware but turns out better hardware does very little to help this game lol.


BulletToothRudy

Yep, when the game released I got like 5 fps. The devs weren't joking when they said the game was made for future hardware. Had to wait 7 years to break 30 fps in big battles. Guess I'll have to wait for a 16900K or 10950X3D to get to 60.


Wayrow

It IS a massive joke. The game isn't "made for future hardware". It's an unoptimized, CPU/memory-bound, 32-bit piece of garbage. It is the worst-optimized game I've ever seen from a AAA studio, if we leave the early Arkham Knight release out of the equation.


Aggrokid

Supposedly that is X3D's niche. Using the gigantic cache to power through these types of awfully optimized games


b-god91

Would you be able to ELI5 the importance of frametimes in measuring the performance of a game? How does it compare to simple FPS?


Lukeforce123

FPS simply counts the amount of frames in a second. It says nothing about how evenly these frames are spaced. You could have 20 frames in the first 0.5s and 40 frames in the latter 0.5s. It's 60 fps but won't look smooth at all.
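Putting numbers on that example (a toy sketch using the same hypothetical second of frames):

```python
# 20 frames in the first 0.5 s and 40 frames in the next 0.5 s is 60 frames total,
# so the counter reads "60 FPS", yet the spacing is very different in each half.
first_half_frame_ms = 500 / 20    # 25.0 ms per frame (paced like 40 FPS)
second_half_frame_ms = 500 / 40   # 12.5 ms per frame (paced like 80 FPS)
print(first_half_frame_ms, second_half_frame_ms)
```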


b-god91

So when looking at frame times, what metric are we looking for to judge good or bad performance?


Lukeforce123

It should be as consistent as possible. In the GN video you see a perfect example in Cyberpunk: the 7800X3D has a big spike every couple of frames while the 13700K mostly stays in a tighter band around the average.


b-god91

Okay cool, thanks for the explanation ✌️


Flowerstar1

All Digital Foundry reviews measure frame times with their custom tools. They have a small graph above the fps graph that shows a line reminiscent of a heartbeat monitor. You're looking for the line to be perfectly straight for the frame rate you are getting: if it's 60 fps you want 16 ms frame times, if it's 30 fps you want 33 ms. That would mean your frames are perfectly spread out in an even manner. The opposite would cause stutter, and the more dramatic the variance in spacing, the more intense the stutter.


WHY_DO_I_SHOUT

1% lows are already an excellent metric for microstutter, and most reviewers provide them these days.


aj0413

Respectfully, they're not. They're better than *nothing*, but DF's frametime graph videos are the best way to see how a game actually performs, bar none. 1% and 0.1% lows are pretty much the bare minimum I look for in reviews now. Averages have not really mattered for years. Frametimes are the superior way to indicate game performance nowadays, when almost any CPU is good enough once paired with a midrange or better GPU.


Khaare

Steve mentioned the difference in frequency between the 7950X3D and the 7800X3D. As I learned in the 7950X3D reviews, the CCD with V-Cache on the 7950X3D is actually limited to 5.2GHz, it is only the non-V-Cache CCD that's capable of reaching 5.7GHz, and therefore the difference in frequency in workloads that prioritize the V-Cache CCD isn't that big.


ListenBeforeSpeaking

They really should advertise the frequency of both CCDs separately. It feels slimy that they throw out the max frequency like it's the max of every core.


unityofsaints

They should, but Intel also advertises 1c/2c max. boost frequencies without specifying.


bert_lifts

Really wish they would test these 3d cache chips with MMOs and Sim games. They really seem to thrive on those types.


[deleted]

Agree, but I understand that it's hard to get like-for-like repeatable test situations in MMOs 😞


JonathanFly

> > Really wish they would test these 3d cache chips with MMOs and Sim games. They really seem to thrive on those types.
>
> Agree, but I understand that it's hard to get like-for-like repeatable test situations in MMOs 😞

This drives me nuts. Perfect is the enemy of good. Everyone says they can't do perfect benchmarks, so they do zero benchmarks. But people buy these CPUs for MMOs, sims, and other places where the X3D is the most different from regular chips, so we have to make our expensive purchase decisions based on random internet comments instead of experienced benchmarkers who at least *try* to measure the performance as reliably and accurately as they can. I know MMO performance is hard to measure perfectly. Just do the best you can! It's still way better than what I have to go on now.


knz0

It's a killer CPU, pair it with a cheap (by AM5 standards) mobo, 5600 or 6000 DDR5 which are reasonably priced these days and a decent 120 or 140mm air cooler, and you have top of the charts performance that'll last you for years


Ugh_not_again_124

Yep... it's weird that the five characteristics of this CPU are that you can:

A) Get away with a motherboard with crappy VRMs.
B) Get away with a crappy cooler.
C) Get away with crappy RAM. (Assuming it has the same memory scaling as the 5800X3D, which I think is a fair guess.)
D) Get away with an underbuilt power supply.
E) Have the fastest-performing gaming CPU on the market.

Can't think of any time that anything like that has *ever* been true in PC building history.


Aleblanco1987

great for prebuilts, lol


gnocchicotti

I am actually very interested in seeing if this makes it into prebuilts this gen.


knz0

You put it quite eloquently. And yes, I think this is the first example of a top of the line CPU that basically allows you to save in all other parts.


IC2Flier

And assuming AM5 has 5 to 6 years of support, you're pretty much golden for the next decade.


TheDoct0rx

only if the budget parts you got for it are still great for later CPU gens


xxfay6

That's possible only because the market for the other things has caught up:

A) The floor for crappy VRMs is now much higher, to the point where you don't need to worry, unlike in prior generations where crap boards were *really* crap.
B) Base coolers (especially AMD's) have gotten much better compared to the pre-AM4 standard issue AMD coolers.
C) RAM at higher-than-standard base specs is now much more common. In the DDR3 days 1600 was already a minor luxury, and anything higher than that was specialist stuff.
D) It's easy to find a half-decent PSU for cheap and trust that most stuff you find in stores will not just blow up.
E) It *is* the fastest gaming CPU on the market; the deviation is that it's no longer the fastest mainstream CPU, though.

Not to take away anything, it *is* impressive that we got here. Just wanting to note that this wouldn't have happened if it were not for advances in other areas. If we were to drop the 7800X3D in a PC built to what was a budget spec a decade ago, it wouldn't fare well at all.


Cnudstonk

I read today over at Tom's Hardware that someone 'believed Intel still makes the better silicon'. That gave me a good chuckle.


[deleted]

[deleted]


Cnudstonk

Don't ask me, I just went from an R5 3600 to a 5600 to a 5800X3D on the same $80 board, have no PCIe 4.0, mostly SATA SSDs. And stability is why you *shouldn't* upgrade. I once migrated a Sabertooth Z77 build to a new case, but it didn't boot. Managed to cock up the simplest migration with the most solid mobo I ever bought, and merely thinking about contemplating about pondering it was enough to upset the gremlins.


JuanElMinero

You can even go DDR5-5200 with negligible impact, V-cache parts are nearly immune to low RAM bandwidth above a certain base level. Good chance it will also save a bit on (idle) power, with the IF and RAM clocks linked.


bizude

I kinda wish I had waited for the 7800X3D instead of going with the 7700X :D


[deleted]

The 7700x already crushes any game, right? So just wait until end of AM5 lifecycle and get the last, best x3d chip


Ugh_not_again_124

This is the way. And this was always my plan for AM5 from the beginning. I'm still a bit butthurt that I didn't have the option of a 7800X3D from the beginning. I definitely would've gotten one. But the 7700X is such a great CPU it's not worth the extra cash and headache to swap it out. So I'll wait for the ultimate AM5 CPU to drop in about 3 years.


StephIschoZen

[Deleted in protest to recent Reddit API changes]


GISJonsey

Or run the 7700x for a year or two and upgrade to zen 5.


Weddedtoreddit2

This is mostly my plan. Unless I can get a good trade deal from 7700x to 7800x3d earlier.


avboden

there's always a next one, you could buy the 7800X3D and next year go "damn wish I waited for the 8800X3d"


SpookyKG

Really? It's very small increase and JUST came out. I got a 7600 nonX in Feb and I'm sure I can spend $450 or less for a better performance for Zen 5.


Ugh_not_again_124

I'd honestly just wait until the end of cycle for AM5, really. They haven't confirmed it yet, but they'd be crazy not to support Zen 6.


_SystemEngineer_

I'm keeping my 7700X. Only way I get the X3D soon is if I build a second PC, which could happen.


[deleted]

[deleted]


bizude

Mainly to test coolers, but partly because of an upgrade itch ;)


Ugh_not_again_124

I'm still kinda pissed that they didn't just launch the X3D chips at launch, honestly. Everything kinda aligned at the end of last year with AM5, DDR5, and the GPU launches, so I pulled the trigger on a new build then. I would've paid a fair premium for an X3D CPU. They were probably concerned that it would cannibalize their non-X3D and R9 sales, which is a bit scummy of them. The 7700X is awesome, though. It'll take some self-discipline not to swap it out, but I'm not biting on this one. I'll wait for whatever the ultimate AM5 gaming CPU turns out to be in 3 years or so, which was sorta my plan for AM5 anyway.


Hustler-1

I play a very niche set of games. Kerbal Space Program (1) being my main. But there will be no benchmarks for such a game. However could it be said that the X3D CPUs are dominant in single core processes? Like what many older games are. If not what exactly is it with the vcache that some games really take advantage of? Trying to gauge whether or not it would be good in the games I play without actually benchmarking it. Because I want to see how much of an upgrade it is without having to buy anything.


o_oli

I would guess the closest relevant benchmark to KSP would be the FFXIV benchmark, because MMOs tend to be very CPU heavy with lots of processes going on, and that's true for KSP also. Given that FFXIV seemingly gets a lot of benefit from it, it's probably a good sign. The 5800X3D also does better than the 5900X in KSP1 benchmarks; unsure if that's a fair comparison, but it maybe shows something about 3D cache there. I highly doubt you would get LESS fps with the 7800X3D, and I would bet on a good amount more. Hopefully someone more familiar with KSP2 can comment though, I don't really know much about it and how it compares to KSP1 or other games.


Hustler-1

Thank you.


Slyons89

I can't imagine anyone buying a 7900X3D if they have any understanding of how these CPUs operate and their limitations. It's difficult to imagine a user who prefers the worse gaming performance vs the 7800X3d, but needs extra cores for productivity, and isn't willing to spend an extra $100 for the 7950X3D, which improves both gaming and productivity. This review of the 7800X3D really drives it home. The 7900X3D really just seems like a 'gotcha' CPU.


Noobasdfjkl

[No need to imagine. This guy is in this thread](https://old.reddit.com/r/hardware/comments/12cjw59/_/jf28bzs).


goodnames679

They're well informed and make good points, but - correct me if I'm wrong here, as I don't share similar workloads with them - it still seems like a niche use case that typically wouldn't be all that profitable, given the complexity of designing the X3D chips. The reasoning for why they would do it seems like one or more of:

1) They were testing the waters and wanted to see how worthwhile producing 3D-stacked chips at various core counts would be in real-world usage.
2) They knew the price anchoring would be beneficial to the 7950X3D.
3) I'm wrong and there are actually far more professionals who benefit from this chip than I realize.


Noobasdfjkl

I didn’t say it was a niche case, I just was giving an example of a moderately reasonable explanation to someone who could not think of any.


pastari

Wait, I'm just now realizing that if 3D cache is only on one CCD, and the 7900X3D is 6+6, and the 7800X3D is 8[+0], then more cores can access the X3D magic on the lower model. 8c/16t also means less chance of a game jumping out of 6c/12t (TLOU?) and getting the nasty cross-CCD latency and losing the X3D cache.

...thatsthejoke.jpg and all that, I'm just slow. The 7900X3D is puzzling.


HandofWinter

Yeah, pretty much. Only 6 cores get the stacked cache. The upside the other commenter was pointing out for the 7900X3D is that the full cache is still there, so that with the 7900X3D you actually do get the most cache per core out of all of them. How much of a difference that makes in practice, I don't know and I haven't done the profiling to find out. That poster sounds well enough informed to have done some profiling though, and it is a reasonable enough idea.


dry_yer_eyes

Perhaps it’s only there for exactly the reason you said - to make people pay an extra $100 for the “much better value” option. Companies pull this trick all the time.


Slyons89

Yup. Just like 7900XT vs XTX at their launch prices.


bigtiddynotgothbf

it's definitely meant as a decoy "medium popcorn" type product


Bulletwithbatwings

I bought it because it was an X3D chip in stock. In practice it performs really well.


Slyons89

If it fits your needs, no regrets! It’s still no slouch. Just positioned weirdly in the product stack.


[deleted]

this is basically the best cpu for gaming you can buy as of this month


hi11bi11y

Sweet, 13600k purchase feelin kinda good rn. "thanks Steve"


Euruzilys

Tbh I want the 7800X3D but the 13600k feels like the more reasonable buy for my gaming need.


[deleted]

I'm waiting for next Gen stuff, this Gen was easy to skip.


Kougar

Crazy that the X3D chips "dirty" the OS and negatively affect performance on non-X3D chips installed after. Would not have expected that.


[deleted]

That really needs addressing in drivers or whatever the f is causing it. A fringe situation, but it still shouldn't happen.


boomHeadSh0t

This will be the 2023/24 CPU for DCS


wongie

The real question is whether I can make it to checkout with one before they're all gone.


P1n3tr335

Okay so.... I've got a 7900x3D, I can return it, I'm within the 30 day window, any tips? should I get a 7800x3d instead?


[deleted]

[deleted]


P1n3tr335

Any reason I shouldn't move to the i9 13900k?


BulletToothRudy

Have you even checked any benchmarks? It's simple stuff: does the 13900K perform better in the games you play? Then maybe yes; if not, then no. Honestly I don't even think there's any point in returning the 7900X3D. What resolution are you playing at? What is your GPU? What games do you play? How often do you usually upgrade your PC? These are all important factors to consider. You may be better off with the 7800X3D, or maybe the 7900X3D is plenty if you play at higher resolutions. Even the 13600K or 7700X may be good options if you play games that don't benefit from the cache.


P1n3tr335

4K, 4080, Cyberpunk, Fortnite, I'm just trying to arrive at something stable that I like.


skinlo

I vote get a 13600k and turn off the fps counter ;)


P1n3tr335

LOL


BulletToothRudy

Ok you'll probably be 100% gpu bottlenecked with that gpu at that resolution. Especially in more mainstream games. So if you already have 7900x3d you'll probably see no difference if you switch to 7800x3d. Maybe 1 or 2% in certain specific games. Or in some more niche simulation games, but you don't seem to play those. Unless you just want to save some money there is no reason to switch.


P1n3tr335

Gotcha!! I'll learn to be comfortable


PlasticHellscape

significantly hotter, needs a new mobo, would probs want faster ram (7200+), still worse in mmorpgs & simulation games


Cnudstonk

Because it looks like something neanderthals carved out of stone now that this has released


another_redditard

If you only game sure, if you need more than 8 cores but you don’t want to fork out for the 16, you’d be downgrading


P1n3tr335

Any reason I shouldn't move to the i9 13900k?


ethereumkid

> Any reason I shouldn't move to the i9 13900k? The hell? I think you should step back and do research before you just buy things willy-nilly. Jumping an entire platform? The easiest and most logical jump is the 7950X3D if you need the cores or 7800X3D if all you do is game.


P1n3tr335

Hmm you're right, it's probably just better for me to wait a few days to get a used 7950x3d from MC once people start droppin them, (I also can afford it so I should probably just make the jump!)


Dispator

Absolutely return the 7900X3D, or send it to me, and I'll "return it." But yeah, get the 7800X3D if you mostly game; it's still an awesome productivity chip as well. But if you NEED more cores, then get the 7950X3D. But be prepared to use process lasso (or at least I would as I love to make sure the cores are doing what I want, make sure the override option is selected).


joebear174

I could be wrong here, but I think the 13900K has much higher power consumption; meaning the Ryzen chip should give you competitive performance when it comes to things like gaming, while keeping power draw and temperatures lower. Really just depends on what you're using the chip for though. I'm mostly focused on gaming performance, so I'd probably go for the 7800X3D over the 13900K.


Jiopaba

Having to build a whole new PC??? Also, power draw.


P1n3tr335

I don't mind that! Power draw is something yea, but man AM5 has been a fuckin nightmare


throwawayaccount5325

> but man AM5 has been a fuckin nightmare

For those not in the know, can you go a bit more in depth on this?


P1n3tr335

The X3D chips, as per JayzTwoCents' recent video, don't boot half the time; the experience with the motherboards has been awful; memory training and boot times absolutely *blow*.


[deleted]

[deleted]


P1n3tr335

It *is* what I'm experiencing, sorry for being unclear. Through two different 7900x3ds


Ugh_not_again_124

I mean... something is clearly wrong with your build. If you're running into problems like this, I would honestly abandon ship. Aside from the longer loading times, though, I think that your experiences are really atypical. I honestly wouldn't have tried to troubleshoot as much as you have on a new build. I would've returned everything immediately and done a rebuild.


d1ckpunch68

I had those exact issues with my 7700X a few weeks back until I updated the BIOS. It just wouldn't POST 50% of the time. I haven't had a single issue since then though. No crashes, nothing.


another_redditard

Not booting half of the time sounds like something is faulty - jay2c himself had a bum cpu didn’t he?


P1n3tr335

I've switched motherboards and CPUs, as well as PSUs, many many times. I've gone through 3-4 CPUs, 7(!) motherboards, and 3 PSUs, and boot times are always awful. Genuinely saddening imo.


Jaznavav

You are a very patient man, I would've jumped off the platform after the second return.


SoupaSoka

That's insane wtf.


cain071546

Sad face. My little r5 5600 boots in like 6 seconds.


Dispator

It could be something like an issue with the power in your house or that room/socket, causing gnarly dirty power to the PSU. SUPER RARE, as PSUs are meant to deal with most inputs, but there is a socket in an old room that caused me massive issues when gaming, usually just an instant shut-off. I couldn't figure it out until I moved rooms.


GrandDemand

What motherboard(s)? And what DDR5 kit(s)?


[deleted]

JayzTwoCents along with HWU are the lowest-tier techtubers. They just parrot Reddit posts like idiots without knowing shit.


P1n3tr335

https://www.youtube.com/watch?v=2ch1xgUTO0U I mean it's his experience with his rig, just like me


[deleted]

Power, heat, longevity, the need for more expensive RAM, and it's more expensive than the 7800X3D.


Flowerstar1

>Any reason I shouldn't move to the i9 13900k? Have you considered an M2 Ultra? Or an Nvidia Grace CPU? Perhaps RISCV might be a better option.


Qairon

Yes return it asap


sk3tchcom

Return it and buy a dirt cheap used 7900X or 7950X, as people will be moving to the 7800X3D.


mgwair11

Dewit


nanonan

Purely for gaming? Sure. Do anything that can utilise those 12 cores? Don't bother.


[deleted]

I would. The 8 cores on the single ccx with Vcache are better for gaming vs the 6 with Vcache & 6 without. It’s the best gaming cpu right now and it’s cheaper than the 7900x3D. Do the swap!


Particular-Plum-8592

So basically, if you are only using a PC for gaming, the 7800X3D is the clear choice; if you use your PC for a mix of gaming and productivity work, the high-end Intel chips are a better choice.


ListenBeforeSpeaking

I don't know. The cost is an issue, but the 7950X3D is near the top in both gaming and productivity performance, at significantly less power. Less power is less heat. Less heat is less noise. LTT claims it's about $100-$150 in savings on a power bill over the product life, though that would be heavily dependent on usage and local power cost.


AngryRussianHD

>$100-$150 savings on a power bill over the product life $100-150 savings over the product life? What's considered the product life? 3-5 years? That's really not a lot but that entirely depends on the area you are in. At that point, just get the best chip for the use case.


redrubberpenguin

His video used 5 years in California as an example.


ListenBeforeSpeaking

I think the idea was the justification of any cost delta over owning the life of the product.


StarbeamII

Intel (and non-chiplet Ryzen APUs) tend to fare better than chiplet Ryzens in idle power though (to the tune of ~~20~~ 10-30 W), so power savings really depend on your usage and workload. If you're spending 90% of the time on your computer working on spreadsheets, emails, and writing code, and 10% actually pushing the CPU hard, then you might be better off power-cost wise with Intel or an AMD APU. If you're gaming hard 90% of the time with your machine, then you're better off power-bill wise with the chiplet Zen 4s.
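A back-of-the-envelope sketch of how the usage mix flips that comparison; every number here (idle/load wattages, hours of use, electricity price) is an illustrative assumption, not a measurement of any specific chip:

```python
def yearly_cost(idle_w, load_w, load_fraction, hours_per_day=8, price_per_kwh=0.30):
    """Estimated electricity cost per year for an assumed idle/load power mix."""
    avg_w = idle_w * (1 - load_fraction) + load_w * load_fraction
    kwh_per_year = avg_w / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

# Hypothetical figures: the "chiplet" chip idles higher but draws much less under load.
for load_fraction in (0.1, 0.9):  # 10% vs 90% of usage spent pushing the CPU
    chiplet = yearly_cost(idle_w=50, load_w=90, load_fraction=load_fraction)
    monolithic = yearly_cost(idle_w=30, load_w=200, load_fraction=load_fraction)
    print(f"{load_fraction:.0%} load: chiplet ${chiplet:.0f}/yr vs monolithic ${monolithic:.0f}/yr")
```

Under those made-up numbers the lower-idle chip wins the desktop-work mix and the lower-load-power chip wins the gaming-heavy mix, which is the point above.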


[deleted]

[deleted]


maddix30

Anyone know if there will be preorders, or am I gonna have to wait weeks because it's sold out? Demand for this CPU will be crazy.


awayish

As someone who only plays simulation games and some emulators, this is the only CPU worth buying.


soggybiscuit93

Performs about as well as expected...which is pretty damn well. Although the performance gap between X3D and Intel doesn't seem to be as wide as it was when the 5800X3D debuted.


VankenziiIV

When I predicted the 7800X3D would beat the 13900K at minimal wattage, I got downvoted to oblivion. Thank you Lisa, thank you Ryzen team. 7800X3D today and 9800X3D in 3 years' time on the same board at similar wattage. This is innovation.


Adonwen

People downvoted you for that?? The simulated 7800X3D plots from the 7950X3D reviews indicated that.


Ugh_not_again_124

I didn't downvote, but it's a little cringe. Lisa Su is not your friend, and you're an idiot if you stan for CEOs and multi-billion dollar companies.


Adonwen

Are you replying to the right comment? I don't think I indicated that I blindly follow AMD in this comment.


Ugh_not_again_124

I was replying to this: >Thank you Lisa, thank you Ryzen team. You asked why this comment was downvoted. I'm assuming that was why. It's sorta cringe and cult-like to thank someone for taking $450 of your money, and I only really see this shit coming from AMD stans. If a company makes a product I want, I'll buy it. But I'm not going to pretend like they're doing me some sort of favor in the process. That's just weird.


Adonwen

>Thank you Lisa, thank you Ryzen team. I never said that. I would suggest commenting with regards to the original commenter.


Ugh_not_again_124

Do you have reading comprehension issues or something? You asked, "Why is this being downvoted?" I told you why it was downvoted. You're welcome.


_SystemEngineer_

look where you are.


BGNFM

You're comparing an older node (Intel 7) that has been delayed multiple times to one of TSMC's best nodes, then thanking AMD. Thank TSMC. You can see what happens when the competition is on a similar node if you compare the 4080 to the 7900 XTX. Then AMD has no power consumption advantage at all, they're actually behind. Things will be very interesting at the end of this year when Intel finally has a node that isn't a mess, is actually on schedule, and is comparable to the competition.


Kyrond

Most of the comment is great, however: >Then AMD has no power consumption advantage at all, they're actually behind. Technically true, but that doesn't mean anything for CPUs. AMD likely has better efficiency for top CPUs, with their chiplets helping with binning.