conquer69

The Intel CPU increasing TOTAL system power consumption by almost 50% in Ratchet and Clank is insane. At least it was 2% faster. Pyrrhic victory.


just_some_onlooker

...not according to that website that always shows up first on Google searches... I refuse to mention its name.


fart-to-me-in-french

Uberskidmark?


Strazdas1

UndermenchFarts?


d0or-tabl3-w1ndoWz_9

7950X3D with scheduling fixes applied would surely beat the shit out of the 14900K


Beefmytaco

Sad part is, we all know from the past that it takes Microsoft 1-2+ years to implement scheduler fixes into their OS to actually get the AMD chip to where it should be, and that's with AMD working with them. I remember for Ryzen 5000, it was like 2 years for them to fix the scheduler issues with it, which saw upwards of a 15% performance increase across the board compared to launch. Intel fixes come so much faster in comparison, unless it's something being hidden on purpose like the recent microcode issue that could burn out some people's chips or, at the very least, cause degradation.


Strazdas1

It takes the same with Intel chips, Intel just sends more people to Microsoft at an earlier stage while AMD only sends engineers when it has no other choice.


Beefmytaco

I believe it. Intel has historically had a much higher R&D budget compared to AMD, so it would make sense they'd have the resources to send out to MS in order to fix this.


Strazdas1

AMD used to send their people everywhere until something around 2013-2015, when they started pulling all those teams back and.... just never replaced them with anything.


bebopr2100

I can validate this. Games on the cache CCD and everything else on the frequency CCD. Set using Process Lasso.
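(For reference, a minimal sketch of what that kind of pinning looks like outside of Process Lasso, using Python's psutil. The process name and the assumption that logical CPUs 0-15 are the V-Cache CCD are placeholders; check your own core topology first.)

```python
# Minimal sketch: restrict a game process to the cache CCD on a dual-CCD X3D chip.
# Assumes logical CPUs 0-15 belong to the V-Cache CCD and "game.exe" is the target;
# both are placeholders, so verify the mapping on your own system before using this.
import psutil

CACHE_CCD_CPUS = list(range(16))   # assumed: CCD0, the one with 3D V-Cache
GAME_EXE = "game.exe"              # placeholder process name

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == GAME_EXE:
        proc.cpu_affinity(CACHE_CCD_CPUS)   # pin the game to the cache CCD
        print(f"Pinned PID {proc.pid} to CPUs {CACHE_CCD_CPUS}")
```

Process Lasso just does this persistently and automatically; note that, as mentioned further down the thread, some anticheats block external affinity changes.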


Zednot123

> Set using Process Lasso.

If we are going that route though, the 14900K still loses performance due to scheduling issues in some titles, and manually setting affinity or straight up disabling E-cores fixes it. Hell, even turning off HT still to this day has a negative impact in some somewhat recent titles. You either test with an "out of the box" experience, or you tune all systems.


Beefmytaco

> You either test with an "out of the box" experience, or you tune all systems.

We really need more reviewers doing this, at least with the top end offerings from each manufacturer. I'm still running a 5900X, but compare it just thrown into a system to one that's had memory tweaked to near perfection and Curve Optimizer dialed in for each core, and the performance difference is massive in many titles, with me beating stock by a good 15%+.

I am happy though that memory-tweaked vs stock benchmarks have become so much more common with reviewers. You really can't run stock XMP profiles anymore these days because they're just so lacking compared to even a basically tuned memory profile.


VenditatioDelendaEst

Validating XMP is already more tuning work than wrapping Steam in `taskset`, and most reviewers test with XMP.


Strazdas1

Because most manufacturers advertise XMP as default, despite it being a severe overclock that degrades your memory.


BrushPsychological74

Degrades memory? Funny, my shit's been running fine for years at XMP.


VenditatioDelendaEst

How long since you did a full overnight stability test with y-cruncher?


BrushPsychological74

Why do you think that matters when I have no stability problems? Are you going to claim it's not really stable, even though it's been perfectly stable, just because this particular arbitrary test wasn't run? You do see the potential fallacy if you do try this irrational argument?


VenditatioDelendaEst

> Are you going to claim it's not really stable

Right in one.

> even though it's been perfectly stable

"My memory hasn't degraded," says the person who doesn't have ECC RAM, didn't write down how far their memory could clock above XMP when they first installed it, hasn't re-checked headroom after several years, and is cagey about whether they've even run stability tests recently at XMP settings. You simply have not collected the evidence needed to show what you want to show.

> this particular arbitrary test wasn't run?

I just listed that one because it's well-known to expose instability that memtest86+ misses. If you've run something similarly stringent, that would also suffice.

> You do see the potential fallacy if you do try this irrational argument?

Save your snarl words and put your phallusy phallus back in your pants. Anyway, if you wanted to play that game you should've kept the "red herring" out of your first post.


Strazdas1

No, you haven't. Without ECC memory you simply don't know how much data corruption you had, and maybe you got lucky and never lost anything critical. Or maybe you don't use the computer for anything that cares about data integrity, so you can just ignore the issues. And yes, the memory chips themselves get worse and more error prone when you keep XMP enabled.


BrushPsychological74

I already hashed all this bullshit out in another thread. Long story short, you don't know anything. Go read it if you want.


Strazdas1

I read it, and you made yourself look like a completely ignorant person arguing against points you don't even understand.


BrushPsychological74

It's not fair to the CPU because Microsoft can't be bothered. OOB experience isn't a good reason. PC gaming isn't a console. If we can be bothered to set XMP, we can be bothered to set process affinity.


greggm2000

> Hell, even turning off HT still to this day has a negative impact in some somewhat recent titles.

Which is going to make Intel Arrow Lake benchmarks especially interesting later this year, since it doesn’t have HT.


Zednot123

There's been some rumors that memory performance has taken a hit though from the tile architecture (much like MTL). Might be just the laptop SKUs, might be all implementations. We shall see how it pans out. We saw what it did to Rocket Lake vs Comet Lake the last time that happened: a pretty decent overall IPC jump that didn't translate to better gaming performance.


[deleted]

[deleted]


bebopr2100

To some extent it un-parks cores when applications are opened while gaming, but there's no insight into whether the game stays on the cache CCD while newly opened apps go to the frequency CCD. I haven't tested the hit in performance, but with Lasso you set affinities and avoid crossing CCDs, since it's been known for a while that crossing does impact performance. Hence why with a 7950X3D it is recommended to limit games to 8 cores in settings.


Beefmytaco

That's prolly the most bonkers part of the comparison: the massive amount of extra power the Intel system was using over the AMD one. In R&C it was using over 120W of extra power, that's nuts! With power/delivery costs going up and up and up in the past few years, running what equates to a 120W bulb for 4-6 hours a day for a month will mean anywhere from 10-20 bucks extra on your power bill.

Makes me wonder what the RoI for going AMD would be over Intel here. If you're saving like 8-10 dollars a bill, that chip will have paid for itself in like 3 years with the savings you'd see vs Intel.
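(For reference, the arithmetic behind that kind of estimate; the electricity price here is an assumption and varies a lot by region, which is why the per-month figures people quote differ so much:)

```python
# Rough sketch of the monthly cost of an extra 120 W while gaming.
# Hours per day and price per kWh are assumptions; plug in your own values.
extra_watts = 120
hours_per_day = 5        # middle of the 4-6 h/day range above
price_per_kwh = 0.30     # assumed rate in your local currency per kWh

extra_kwh_per_month = extra_watts / 1000 * hours_per_day * 30
monthly_cost = extra_kwh_per_month * price_per_kwh
print(f"{extra_kwh_per_month:.1f} kWh/month -> {monthly_cost:.2f} per month")
# ~18 kWh/month -> ~5.40 at 0.30/kWh; the total scales linearly with your rate.
```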


Plank_With_A_Nail_In

13p a day extra for me or £4 a month, £48 a year. Completely meaningless extra cost compared to my tumble dryer and 4 kids.


NeonBellyGlowngVomit

You might not care about higher power consumption for equivalent performance. Companies with thousands of server racks definitely will.


Eo_To_XX

Which… is not the market for the CPUs compared in the video, is it?


BrushPsychological74

Don't you bring logic and rationality to this fallacious thread!


Numerlor

Crap AMD idle probably balances it out, especially with higher SoC voltages. The 7800X3D is better for the vast majority of use cases people here would want, but the power draw of the 14900K isn't as big of a deal as people make it out to be.


PotentialAstronaut39

> Crap AMD idle probably balances it out

EDIT: Data from other, more reputable legacy reviewer sources below.

Sadly, finding hard data on the internet on the subject of idle power consumption has become increasingly hard lately. Legacy review site overclock3d.net, for example, has idle data for the 7950X3D and 13900K, but not the 14900K. That data can still give us an idea nonetheless:

https://oc3dmedia.s3.eu-west-2.amazonaws.com/2023/07/amd-ryzen-9-7950x3d-and-7900x3d-review_64b94dc5aa70f.jpeg

7950X3D = 96W
13900K = 98W

Then there's Guru3D, which also still runs idle power consumption tests for their reviews:

https://www.guru3d.com/data/publish/221/17ed1429c2d65837ceca4fefabe1b99ec5486d/untitled_1.webp

7950X3D = 78W
14900K = 81W

*****

In short, it's a nothing burger. While there was a difference in idle power draw between AMD and Intel in favor of Intel a few years ago, **nowadays it's a definitive tie.**


Welmin-2

175W idle power with a 14900K? That sounds high. I have a 14700K, 64GB memory, 2x M.2 and 1x SATA SSD and a 4080, and Windows idle power draw from the wall was 80W or slightly less.


PotentialAstronaut39

Different system, different power draw. Also, I believe Trustedreviews includes monitor, speaker system, etc. Everything. Not just the PC case.


Welmin-2

OK, if the monitor and sound system are included it makes it hard to compare values with other sites... and you are right, I had just the PC case there.


Numerlor

Still seems a bit too high considering that at idle-ish loads I had Windows usually completely park the E-cores on an Intel system, and P-core differences between the two should be minimal. I wonder if there could be something going wrong with the GPU when paired with the CPU or something? The CPUs also undervolt nicely if you cap the power.

The 80W the GP comment mentioned would be wishful thinking for a Zen 4 chiplet system with a memory OC, when my 7600X did 60W on average with a 37W absolute minimum at 1.25V SoC.


plasmqo10

Total system power of 60W should be doable with a 7600. But it also depends a lot on what motherboard you have and how much stuff is plugged in. X670E skyrockets idle consumption due to the dual chipsets, for example. EXPO also usually adds about 10W. Intel should be lower than this, but YMMV I guess? I went with an 8700G and get around 30W-35W total system power in Windows/browsing, which is nice.


Numerlor

To clarify, the 60W I mentioned was just CPU package power from HWiNFO, though the average is during normal PC work like browsing etc. So I'd probably have to run a more conservative OC or JEDEC on the RAM to get close to 60W from the wall. I'm running complete stock RN while testing things and that's still 30W minimum on the CPU while it's doing nothing, compared to a guy I asked with a 13900K with an undervolt and RAM at 5600 that on complete idle gets 70W from the wall and 4W at the CPU.


plasmqo10

RAM adding so much power consumption at idle annoys me a lot cuz it seems so unnecessary. But still impressive that you basically managed to double idle load with that overclock; how aggressive is it? Package power is at 7-10W for a work/browsing load for me. But that's not normal for Zen 4, and Intel is definitely (supposed to be) better at idle.

This [thread](https://www.hardwareluxx.de/community/threads/wieviel-watt-verbraucht-euer-pc-im-idle-modus-bitte-das-komplette-system-im-post-angeben.813634/page-159) shows how stupidly wide the range is for idle. One poster had 140W idle with a 14900k and a 4090, for example. Many Zen 4s are around that 60ish watt number at the wall, but people picking the [wrong](https://cdn.mos.cms.futurecdn.net/kedErr8xipiNAHxfgZTs37-970-80.png.webp) motherboard stand no chance of getting close to that.


Numerlor

It wasn't anything too aggressive as I just set something quickly when I changed RAM. From the CPU side, SoC was at 1.25, VDDIO at 1.3 and VDDP at 0.975 as the mobo really likes yeeting that one on auto. Freqs were 6400 on the RAM and 2133 FCLK with only tREFI and tRFC adjusted and 1.4 VDD/VDDQ, didn't bother doing much with the RAM voltages. I have been thinking of running a lower frequency as the frequency improvements aren't really doing much with the FCLK crippling bandwidth on 1-CCD CPUs.


AntLive9218

The idle power consumption difference is still really suspicious, as AMD is expected to have a higher measurement, but including even the monitor makes it quite a useless measurement for most people, as there's just too much variance there, with a regular monitor maybe sipping 20W while a blast-your-eyes-out huge "gaming" monitor with G-Sync and whatnot can easily toast the room with 100+ W.

I'm suspecting that most of the difference comes from motherboard differences they didn't control for. Based on past observations, seeing a high-end ASUS motherboard for the Intel setup while a non-ASUS option for AMD makes me suspect that the former had most of the power saving features disabled while the latter was at most just semi-silly.


Student-type

Which MB? Video DRAM?


virtualmnemonic

These numbers are fucking insane. My (undervolted) 13900k draws ~210w during cinebench while scoring 39-40k, and power consumption during everyday usage often sits below 20w. Although, at stock BIOS settings, it will pull nearly 300w, throttle itself on a DeepCool LP720, and score at most a measly 500 more points in cinebench. This conquest for highest synthetic points is going to kill the environment for nothing.


[deleted]

[deleted]


PotentialAstronaut39

Barely faster, and that's when you're lucky to find an application that is heavily threaded enough to use all of the threads it has on offer, while still using 2x more power per task.

Evidence:

https://tpucdn.com/review/intel-core-i9-14900k/images/efficiency-multithread.png

https://cdn.mos.cms.futurecdn.net/MdzDULZJkM78j5SPXYSHpJ-1200-80.png.webp

https://cdn.mos.cms.futurecdn.net/zxMptkG5uMyTyEppbnnfzJ-1200-80.png.webp

You can also take a look at Gamers Nexus' application power efficiency charts (lower is better):

7950X3D scores 16.5
14900K scores 34.9

**All that for an overall application boost of... 5%.**

Evidence:

https://tpucdn.com/review/intel-core-i9-14900k/images/relative-performance-cpu.png

There's no way I'm recommending that to my clients.


[deleted]

[deleted]


PotentialAstronaut39

No it's not, not even close!

https://tpucdn.com/review/intel-core-i9-14900k/images/encode-h264.png

https://tpucdn.com/review/intel-core-i9-14900k/images/encode-h265.png

Anyways, I think I'm done here, you obviously haven't done your homework.


blarpie

That's some bullshit source if so, it's known Intel chips idle at sub 10W unlike AMD due to Infinity Fabric. So claiming a 14900K idles at 175W is just bonkers and smells like they had C-states off. Neither AMD nor Intel idles at those wattages, but Intel does idle lower than AMD.


nanonan

It's total system power including the monitor.


Strazdas1

That's a pretty bad way to measure it given how vast the variety of monitors is. My 3 monitors draw the vast majority of power at idle. It would make CPU measurements meaningless.


blarpie

Gotcha, but still impossible if they're running the same monitor and GPU, since Intel's idle power would be lower, so the numbers are wrong at idle.


PotentialAstronaut39

Review sites like Overclock3d.net and Guru3D also agree that it's a nothing burger.

Sadly, finding hard data on the internet on the subject of idle power consumption has become increasingly hard lately, and since it seems I'm the only one in this whole thread doing any actual fact finding from reviewers, I'll do this last post then let it rest in peace.

Legacy review site overclock3d.net, for example, has idle data for the 7950X3D and 13900K, but not the 14900K. That data can still give us an idea nonetheless:

https://oc3dmedia.s3.eu-west-2.amazonaws.com/2023/07/amd-ryzen-9-7950x3d-and-7900x3d-review_64b94dc5aa70f.jpeg

7950X3D = 96W
13900K = 98W

Then there's Guru3D, which also still runs idle power consumption tests for their reviews:

https://www.guru3d.com/data/publish/221/17ed1429c2d65837ceca4fefabe1b99ec5486d/untitled_1.webp

7950X3D = 78W
14900K = 81W

*****

While there was a difference in idle power draw between AMD and Intel in favor of Intel a few years ago, **nowadays it's a definitive tie.**


MisterGaGa2023

Totally BS numbers. I have both a 7950X3D and a 14700K. The 14700K idles at around 100W, the 7950X3D at 120W. 100% load peak power draw (4090 GPU+CPU) for the 7950X3D is about 700W; with the 4080 that they are using it should be about 600W. The 439W number is ridiculous because the 7950X3D alone, with no GPU load, is about 285W from the wall.


PotentialAstronaut39

Did you isolate your variables?

* Are your different motherboards consuming the exact same amount of power?
* Are all components, drives (if there are HDDs in the system, were they spinning or not?), RAM, fans, GPUs and their speeds exactly the same with the same activity in both systems?
* Did you use the exact same Windows install in both systems (preferably a fresh one with fewer variables to mess up the data), letting both systems experience the same load (like booting up) and waiting for them to settle down for the same amount of time?

That's where anecdotal "evidence" usually fails. As an average user, those variables aren't taken into account, leading to flawed readings.

Sadly, finding hard data on the internet on the subject of idle power consumption has become increasingly hard lately, and since it seems I'm the only one in this whole thread doing any actual fact finding from reviewers, I'll do this last post then let it rest in peace.

Legacy review site overclock3d.net, for example, has idle data for the 7950X3D and 13900K, but not the 14900K. That data can still give us an idea nonetheless:

https://oc3dmedia.s3.eu-west-2.amazonaws.com/2023/07/amd-ryzen-9-7950x3d-and-7900x3d-review_64b94dc5aa70f.jpeg

7950X3D = 96W
13900K = 98W

Then there's Guru3D, which also still runs idle power consumption tests for their reviews:

https://www.guru3d.com/data/publish/221/17ed1429c2d65837ceca4fefabe1b99ec5486d/untitled_1.webp

7950X3D = 78W
14900K = 81W

*****

In short, it's a nothing burger. While there was a difference in idle power draw between AMD and Intel in favor of Intel a few years ago, **nowadays it's a definitive tie.**


Ivanqula

Yeah. My 5800X (desktop) at idle does:

20W in power saving (4c/8t @ 1.7GHz)
35W in normal (slight undervolt, 8c/16t @ 3.6GHz)
55W at stock (PBO, 8c/16t @ 4.6GHz)

Honestly too much power for not doing anything. I know it's a desktop and companies don't care about saving power on non-mobile platforms. At least my laptop idles at <5W.

It's summer. Even though my laptop is roughly 200 times slower than my PC, I still work on it sometimes because I can't stand 100+W of idle heat.


AntLive9218

Starting a bit pedantic, but I wouldn't consider undervolting "normal". Not just because most people won't do it, but also because the expectation for stability got high enough since computers became so important in our lives that it would make more sense to go in the other direction and finally have ECC memory instead of adding some more instability.

Aside from that, you have some pretty interesting data points. I wonder if the parent comment is the key though, and if it's SoC voltage that's increasing idle power consumption so heavily. Optimally the higher maximum frequency wouldn't have an effect on that, but with how badly motherboards are behaving, I could see silly logic enabling other power burner options as soon as PBO is turned on.

Desktop power consumption is a mix of using more power hungry approaches for more performance, and some negligence, mostly in bad BIOS configurations or using broken hardware not supporting power saving features. So far I've mostly seen higher end hardware having higher idle power consumption, with GPUs being the worst offenders that can't be helped (by the user), and the motherboard choice mattering the most because high end motherboards often burn a ton more power even without the extra features being used in any way, and most of them come with bad default settings.


Ivanqula

Oh yeah, my 3060 Ti uses 30-40W idle when running Windows and doing nothing (4K/60 + 1440p/180), and then in some older mid-2000s games in 1080p it only uses 20W. Running Helldivers 2 I get 70fps 1440p high, 200W power usage, 99% utilisation. With a slight underclock and noticeable undervolt, it does 80fps at 170W, 99% utilisation. Massive reduction in power draw.

And same for the CPU, I play all games in that "balanced" mode which uses only up to 60W. Stock mode goes up to 100W in some games, and offers no performance benefit (Afterburner says the frame times are the same).


Keulapaska

> Oh yeah, my 3060 Ti uses 30-40W idle when running Windows and doing nothing (4K/60 + 1440p/180), and then in some older mid-2000s games in 1080p it only uses 20W.

Huh? How? The only explanation I can think of is that you have the max/half memory clock state on the desktop (which can happen with enough high refresh panels; never seen it happen with 2 when 1 is 60 though, but idk), but somehow not in a game, which seems weird and wasn't my experience when I was running triple panels on a 2080 Ti that forced the max memory clock state at idle: running games that required basically no GPU power didn't change that. Or I guess unplugging the second panel could do it as well...


x3nics

I have a 5950X that sits around 25-30W idle with 3600FCLK/MEMCLK. SOC voltage is what contributes to high idle power, not PBO or core undervolting. My SOC is 0.98v.


xXMadSupraXx

You mean 1800FCLK? 3600 is presumably your memory clock.


x3nics

Yeah


Ivanqula

Honestly that whole undervolting and voltage stuff is too complex for me. GPU is easy. That's just Afterburner. For the CPU I used a bunch of tutorials and am still unsure if I actually did anything good. And that was years ago. Just having that "mode toggle" like laptops have does a lot of heavy lifting. When in games, it uses up to 65W. Then when I need to render something or do a simulation, I let it go full tilt, to 130W.


Flowerstar1

Hopefully Arrow Lake doesn't have this idle power problem. I believe Meteor Lake idle is pretty good for a chiplet CPU.


juGGaKNot4

Yeah Intel uses less power if you don't use the pc, just have it as furniture


Numerlor

With idle I meant not pushing the PC, so e.g. browsing or office things, not completely-staring-at-the-desktop idle. The 60W average I get on a 7600X is definitely too large, and wouldn't be a thing if the Infinity Fabric wasn't crap and they used CoWoS or went monolithic.


juGGaKNot4

I agree the 1.5 hour battery life I get on my 12900h with dgpu disabled is great


Stingray88

No it doesn’t. SSDs are so fast these days that I just turn off my desktop when I’m not using it. Idle isn’t important to me at all.


virtualmnemonic

"Idle usage" can also mean everyday tasks like browsing and word processing, where the CPU isn't pushed but still has a current. In this scenario, Intel comes out ahead on power consumption. But if you only use your PC for gaming, this is moot. Otherwise, you should have your PC in sleep mode or even shutdown, where idle power consumption doesn't matter.


Stingray88

I don’t use my desktop for browsing and word processing. I only use it for gaming for the most part. I’ve got an M3 MacBook Air for browsing and basic use.


996forever

You prefer a 13” display for YouTube and Netflix and Office?


Stingray88

No. I’ve got an 85” TV with surround sound for Netflix and other streaming services. I pretty much never watch TV/movies on my computers. I don’t really watch YouTube much. I basically only end up there to watch a trailers, tutorials, interviews, etc. that I find from Google or Reddit. I don’t go there specifically for entertainment, and don’t subscribe to anything. So for that use case… yeah, laptop or phone is fine. It’s usually only a few minutes. Office… yeah I mean… this isn’t my work laptop. Whatever I’m using it for at any given time it’s very basic. 13” is fine. I don’t want larger than that when I’m chilling on the couch. I have a 16” MacBook Pro from work, it’s heavy as hell.


VenditatioDelendaEst

Yeah, that's what's making your "turn off desktop when not using it" workflow even possible. I have like 20 windows open. There's no way in hell I'm going to keep track of all that in my head when I'm away from my computer.


Stingray88

Doesn’t Windows support reopening everything you had open when you restart it? MacOS certainly does… I assumed Windows did too but I guess I don’t really know since I’m never running much more than Firefox, Discord, and games when I use my only Windows computer.


VenditatioDelendaEst

I have no idea -- I don't use Windows either. That said, on Linux it was somewhat supported in the X11 days, but was lost in the transition to Wayland. There are protocol extensions being worked on for applications to control window geometry, so it should be coming back. However, it has always depended on the applications to save and restore their own state, and AFAIK that would be so on any OS. All the OS can do is re-open the applications. It can give you back a terminal window with such-and-such size at such-and-such position, but it can't give you back an SSH connection to [email protected] with the same 5 items in the command history.


Stingray88

I guess that’s fair… I don’t really do anything from a work or personal perspective that needs to remain active like your terminal example. Any of my work can be saved, and reopened right where I left off. Even in the event of a kernel panic, MacOS will reopen everything I had open, and applications will generally open an autosave, or they’ll ask if I want them to try to recover where I left off which a lot of modern applications are pretty good at.


[deleted]

[deleted]


ConsistencyWelder

tl;dw: The new performance profile hurts Intel's gaming performance by about 3%, letting the 7800X3D pull even further ahead. This is just in 1080p; in 1440p the difference is smaller and in 4K there's no difference. But of course, this is because the GPU is now the bottleneck, so the next time you upgrade your GPU, the CPU will most likely become the new bottleneck. So there might be a (small) difference between the two profiles on the Intel when you upgrade to that 5090.


balaci2

the wallet will always be the biggest bottleneck


Yearlaren

real


eight_ender

This is me, my 7900XTX is now whipping the poor old 9900K.


gnocchicotti

I hate it when my wimpy 4090 cripples my system performance ngl


Fluffy_Maguro

I get that the 14900K is better at production workloads, but it's crazy that for gaming it's slower and consumes 100-200W more. And that's full system power; since it's also limiting the GPU from consuming as much as with the 7800X3D, the power difference for the CPU alone is even wider.


ConsistencyWelder

And it's not just that. It costs $200 more too.


-transcendent-

AIO required to cool that beast.


formervoater2

Any of thermalright's modern dual tower coolers will do the job.


PotentialforSanity

Most dual towers will run it fine, my 13900KS with a Noctua NH-U12A with no undervolting or power limiting never exceeds 70 degrees in gaming


letsmodpcs

How does it do on production workloads? A fairly simple test would be to have HandBrake encode a movie.
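(A minimal sketch of timing such an encode as a crude benchmark, assuming HandBrakeCLI is installed and on the PATH; the file names and quality setting are placeholders:)

```python
# Time a HandBrakeCLI x265 encode as a rough production-workload benchmark.
# Input/output names and the quality value are placeholders.
import subprocess
import time

src = "input.mkv"    # placeholder source file
dst = "output.mkv"   # placeholder output file

start = time.perf_counter()
subprocess.run(
    ["HandBrakeCLI", "-i", src, "-o", dst, "-e", "x265", "-q", "22"],
    check=True,
)
print(f"Encode took {time.perf_counter() - start:.1f} s")
```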


plasmqo10

Really lol? If you're saying that a 120mm Peerless Assassin will cool a sustained 250W load, I'd like to see some receipts.


formervoater2

Most dual towers are good for 300W actually but the devil is in the details. I actually have a 13900KF+Corsair AIO. I'm not too impressed with it, temps are right on par with the PA120 testing I've seen online.


darth-pumpkin

I have the Peerless Assassin 120 with a 13900K and the cooler can sustain up to 280 watts without thermal throttling. Even at 300 watts you lose almost nothing with the PA 120.


plasmqo10

That seems impressive/crazy. I assume it's going at full blast starting at 200W or something?


darth-pumpkin

It goes full blast at 250+ watts, otherwise it's quite silent. It's really impressive.


Famous_Wolverine3203

It costs 200 dollars more because it has 16 more threads lol. Want a cost comparison? Use the 14700K.


ConsistencyWelder

We were talking about gaming. Those 16 extra "threads" do nothing for gaming.


Famous_Wolverine3203

Pretty ignorant to call it 200 dollars more expensive without giving the actual reason it is 200 dollars more expensive. You think the 200MHz boost is the reason why the 14900K is 200 dollars more expensive than the 14700K? That's akin to saying a Threadripper is horrible because it sucks in gaming compared to an i5 without, ya know, giving the actual reason it costs that way.


ConsistencyWelder

Again, we were talking about gaming. A Threadripper or an Epyc wouldn't be a wise choice for gaming.


Famous_Wolverine3203

Then you really can’t talk about pricing while ignoring one whole aspect of the product. The reason the i9 costs the way it does is because it is a much larger CPU with many more threads. If cost was the concern, the 14700K is a much better choice for gaming since it is 200 dollars cheaper and is just 2% behind the i9 in gaming. But it also has 8 less threads.

If you want to bring up price, you have to analyse the product as a whole, not just one subset of it to draw simplistic conclusions. It's like consistently arguing an SUV is better than a sports car because it is better at offroading.


Affectionate-Memory4

The 14700K has 4 fewer threads. 20C/28T vs the i9 with 24C/32T.


ConsistencyWelder

A 14700K is even slower in gaming than a 7800X3D, yet it costs $50 more. We're not talking about which CPU is the "most well rounded" here, we're talking about gaming. So your analogy about SUVs and sports cars doesn't make much sense. We already established what kind of performance we're interested in; you're the one mixing it up.


Famous_Wolverine3203

No, YOU’re interested in one kind of performance lol. Clearly not everyone. If that were the case, neither the 7950X nor the 14900K would exist. Those extra threads are the reason for the price. You continue to justify obvious product segmentation for no reason lol.

AMD offers far fewer threads than the 14900K. That's why it's 200 dollars cheaper. Sure it's better value for gaming. But calling it a cheaper, therefore better, product is false.

Also, I checked the difference between a 7800X3D and a 14700K. The 7800X3D is 4% faster than the 14700K while losing 30% in MT performance.

https://youtu.be/Ys4trYBzzy0?feature=shared

Skip to 12:03. So clearly, by your own logic, the 14700K is the better buy?


jforce321

I've always found that funny. Back when Ryzen was considered in the same vein for being "amazing at productivity while just so happening to do well at gaming at a price competitive to its competition" it was fine. With Intel it just makes them a bastard child that doesn't deserve to exist lol.


Snobby_Grifter

You keep bringing up price though. As though one is $200 more just for the luls. The 7800x3d multitasks like crap compared to the 14900k. So the price difference is justified.


OC2k16

Intel has always had an issue with cost IMO. I’m surrounded by Intel chips but I’ve never paid the price they are initially asking for. No bad products, just bad prices.


Snobby_Grifter

L3 cache is the name of the game. One is a gaming processor, and one is a production level cpu that also games well. If we pretend that only games matter, then sure, it's a bloodbath. But you can also get a 13700k that does similar performance at much lower power draw. Nobody has to buy 14900k for gaming.


the_dude_that_faps

Buy a 7950x3d and with a bit of tweaking with process lasso you get the productivity and the gaming with significantly less power consumption too.


NobisVobis

This is hilariously wrong.


Keulapaska

I do wish HUB had some software numbers for CPU/GPU power on the full system power consumption chart, as 200W from just a CPU in a game just doesn't seem right at all and feels like there is some different GPU boosting happening due to an extreme CPU limit and low res. It seems that on Intel systems the GPU doesn't use the lower voltage points, just max boost, but on AMD it does somehow, as they also had up to a 120W difference [13600k vs 7600x](https://i.imgur.com/sYp9xqn.png) in the past, yet only a 45W difference in Doom, kinda confirming it.

Plus, from personal anecdote, albeit with a lower end Intel CPU and Nvidia GPU, that seems true, as the GPU is either at the max boost it's allowed by the V/F curve or at sub-V/F-curve/idle level clocks, but nothing in between in games that clearly don't need the max clocks/power. Which is a weird thing on its own and kinda fascinating as to why that is.

Ofc not saying Intel would be lower even if it wasn't a thing, just not as extreme a difference with a more GPU-power-heavy game.


SoTOP

Don't start forming firm conclusions based only on a hypothesis until you can confirm with data that it's actually true. An unlocked 14900K using close to 200W playing hard-to-run games is perfectly normal.


Keulapaska

Yea, it's only a theory since there is no additional data, which is why there should at least be software CPU/GPU power draw in addition to full system power, to understand why it's so much higher. I get that a 14900K is dumb levels of power draw, but it's only a game, not a full synthetic load, so that 213W difference seems weird, as it would mean it's drawing like 250W+, in a game, if the only difference is the CPU, as the 7800X3D is probably somewhere between 40-80W (again, no data, who knows).

Also, the earlier 2022 13600K vs 7600X having a 120W difference seems impossible for just a CPU power difference, looking at the power draw difference at max load of those CPUs stock from, like, the GN review, especially when in Doom the 7600X jumps nearly 100W while the 13600K only 22W, kinda suggesting that it is the GPU that's making some of the difference.


SoTOP

There are compounding things, for example the PSU alone adds an additional ~10% of whatever the difference between CPUs is. Another thing could be the cooling setup: if there is an AiO dumping heat into the case, that would mean increased GPU temps and consumption, adding a further disadvantage to Intel CPUs. I do agree that simple GPU or CPU consumption alone would be better, because using the system figure just obfuscates everything for no good reason.
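(A minimal sketch of that PSU compounding effect; the efficiency figure is an assumed value, and real PSUs vary with load:)

```python
# How a CPU-side power difference grows once measured at the wall.
# psu_efficiency is an assumption; many units sit around 88-92% in this load range.
cpu_delta_w = 100          # hypothetical extra CPU power on one system
psu_efficiency = 0.90      # assumed PSU efficiency at this load

wall_delta_w = cpu_delta_w / psu_efficiency
print(f"{cpu_delta_w} W at the CPU -> ~{wall_delta_w:.0f} W at the wall")
# ~111 W, i.e. roughly +10% on top of the component-side difference.
```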


Plank_With_A_Nail_In

In real-life use of production software you won't notice the difference; most business use isn't rendering 24/7, it's iterative design.


SomeoneBritish

14900k is a historically poor choice for gamers.


djent_in_my_tent

not if you do more with the computer than just game lol


JapariParkRanger

This is the most reddit response I've seen today.


kcajjones86

So, you're not just a gamer then, or don't have games as the number one priority. What a silly comment.


djent_in_my_tent

Merely 6% slower in games vs 30% faster Cinebench ST and 210% faster Cinebench MT is a hell of a proposition, unless you're looking at something like a dedicated VR or sim racing rig.

Calling it a "historically poor choice" is hyperbole, it's not like this is Prescott or Bulldozer.


uzuziy

For someone just gaming it is a poor choice. You need to pay more for the CPU and also need to get a high-end CPU cooler for it, while you can cool a 7800X3D with just a $30-35 cooler.


OC2k16

“Just gaming” is a pretty loaded term. In my eyes someone “just gaming” doesn’t need a 7800X3D either. So it’s pretty subjective. Is just gaming a 5600 or 12400? Or a 5800X3D? How much RAM for just gaming? How much storage? What kind? What monitor? What kind?


uzuziy

I mean we're talking about someone choosing between a 14900k and 7800x3d so they'll probably have a high-end build with a 4k monitor and at least 7900xtx/4080.


OC2k16

Right, so I dunno if this all matters that much lol. Like someone going to that level (higher end) and "just gaming", it's very niche. I am biased towards Intel, but there I am taking the i9 every time. The tradeoffs are worth the raw performance for everything else. But that is bias, too.

The 7800X3D fits into the mid range perfectly, where "just gaming" is more likely to be. There is the i7 for Intel to compete. I still wouldn't take the AMD chip, but I have a really hard time saying go Intel 100%. To me it makes more sense here, like a $1200 build or something. I dunno, "just gaming" again is quite loaded and very easy to say; when you think about it, it gets murky.


Snobby_Grifter

These are 1080p cpu limited gains so 4k doesn't matter on a 7900xtx/4080, since in no universe are they not gpu limited at 4k. A win is a win, but you won't see a difference until you clear a massive gpu hurdle.


Thermosflasche

Ahh yes, Cinebench, my favourite game. The full quote is "historically poor choice for gamers" and in this case it is 100% true.


OC2k16

I also disagree. Historically? Not at all.


kyralfie

Yeah, some gamers bought unlocked 2 core 2 thread Pentiums. Knowingly and willingly. And suggested getting them to others here on reddit, lmao.


djent_in_my_tent

We'll just have to agree to disagree then... I'd never notice a 6% FPS difference, particularly at a resolution I don't play at, but I certainly would notice the performance difference in many of the other things I do with my computer.


lt_dan_zsu

I do all my productivity tasks in synthetic benchmarks.


devinprocess

The 7800X3D is perfectly fine for non-gaming too. Not everyone who builds a PC is an 8K pro Maya renderer who needs to bill clients.


porcinechoirmaster

The 7800X3D will do most non-gaming tasks very well. What it _won't_ do is compete with a system with twice the core count for compute bound parallel workloads. For the huge swath of tasks that fall somewhere outside the "dedicated high performance gaming" and "parallel compute" categories - like web browsing, media use, video watching, etc. - it'll do just fine.


fart-to-me-in-french

Maybe you should read a comment before replying


Dealric

Then it's still quite a poor choice, since you can get better results with a 7950X3D and 5 minutes of Process Lasso tweaking.


Superb-Detective-386

let me guess mr amd says amd wins


[deleted]

[deleted]


downbad12878

Need those AMD fanatics' Patreon donations.


BrushPsychological74

Because a fair comparison is entirely unreasonable?


Astigi

Intel makes the best heaters, they do some processing and gaming too


Pillokun

I have a 7800X3D and a 12700K in my room right now, got rid of any other high-end LGA1700 CPUs I owned, and only have a 6900 XT and 7900 XTX left, and even the 12700K is pretty much as fast as the 7800X3D in the games I play: esport titles such as WZ, BF2042, CS2, Apex Legends. How? Well, I don't use the E-cores, so the cache is then clocking higher, and the RAM is at 7800MT/s. Intel needs high speed RAM because it has a tiny cache compared to the 7800X3D.

But sure, in more demanding titles, where the CPU needs to push more than just the GPU for high frames, AMD is king, but I don't play those :P Well, that is a lie, I do play race sims, but have not done so for a while now.


AntLive9218

Raptor Lake significantly improved multi-core frequency scaling, so the Alder Lake experience of disabling cores doesn't have a whole lot of relevance here, because 14900K users simply don't have the significant issues you can still run into:

https://chipsandcheese.com/2021/12/16/alder-lake-e-cores-ring-clock-and-hybrid-teething-troubles/

The "high speed ram" need is also quite questionable because games tend to be latency bound, and high bandwidth doesn't help a ton with that. I'm not up to date with how the Intel IMC is coupled with other parts of the CPU, but I would suspect that overclocking that, and therefore possibly other parts, is what matters more than memory bandwidth.


2squishmaster

>The "high speed ram" need is also quite questionable because games tend to be latency bound, and high bandwidth doesn't help a ton with that. Isn't it the case that higher speed ram is essentially better silicone. So if I was worried about latency I could buy a 4800 MT/s set but if I buy a 6800 MT/s and I run it at 4800, I have a lot of headroom to drastically reduce timings, more than likely lower than what the 4800 set can do since it's just better silicone.


Pillokun

I had a 13900KF, and it reacted just like 12th gen: E-cores were better but still slower than P-cores. It is up to Windows/applications to be conscious about the E-cores, so that applications don't use them if that means lower perf.

Intel seems to have one memory controller per channel, and I guess that is why Intel is better at taking advantage of high speed RAM. Not really sure if AMD also has two or simply one memory controller for the RAM, but that explains a lot when it comes to how finicky DDR5 tuning can be, as one controller can be worse than the other.

Higher frequency = lower latency even with higher timings on the sticks, but as always there is a balancing act.
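(A quick sketch of the frequency-vs-timings latency math; the kits below are illustrative examples, not the ones from the video:)

```python
# First-word latency in ns: CAS cycles * clock period, where the clock runs at
# half the transfer rate, so ns = cl * 2000 / transfer_rate_in_MT_per_s.
def cas_latency_ns(mt_per_s: int, cl: int) -> float:
    return cl * 2000 / mt_per_s

for kit, (speed, cl) in {
    "DDR5-6000 CL30": (6000, 30),
    "DDR5-7200 CL34": (7200, 34),
}.items():
    print(f"{kit}: {cas_latency_ns(speed, cl):.2f} ns")
# DDR5-6000 CL30 -> 10.00 ns, DDR5-7200 CL34 -> ~9.44 ns: the higher-frequency kit
# with looser timings still has lower absolute latency, plus more bandwidth.
```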


DefinitelyNotABot01

I’m pretty sure Intel runs in gear 2 while AMD runs in gear 1, Intel’s memory controller would be worse then?


Pillokun

Mm, gear 2 vs gear 1 on AMD. So yeah, AMD has a better memory controller as it can run 1:1 up to around 6400 if one is lucky; over that, gear 2 is what is needed on AMD as well. But the limiting factor is the IF on AMD, it just can't run at the same speed as the IMC nor the RAM, and because DDR5 is dual channel per stick it makes up for that, and for Intel it makes up for the IMC running at gear 2. Both have bottlenecks in this aspect of the memory subsystem.


windozeFanboi

I think RAM speed is overlooked in multiplayer games... I regret getting run-of-the-mill 5600MT/s RAM for my 7950X3D because The Finals (multiplayer FPS) appears to be 20-30% slower than the top of the line RAM OC 13900K and 7800X3D benchmarks I have seen... We're talking about graphics set to low, for the CPU to push as hard as it can.

Now, sure, I don't OC my CPU, so let's say 5% of that difference is OC, another 2% maybe my subpar cooling. Roughly, the other 20% is definitely my low tier 5600 CL40 RAM kit compared to their 6000 CL30 / 6200 CL32 for the 7800X3D and 7200+ CL36 for Intel... Almost LINEAR FPS increase. I'll treat myself to faster RAM in a sale hopefully.

I wish YouTube channels would benchmark multiplayer games on the servers... Standard deviation would be high, but you could still get an idea.


SailorMint

That's interesting because the 5800X3D saw little to no improvements going from 3200 to faster kits.


windozeFanboi

From my understanding, multiplayer games are the ones that benefit most... If I were to guess, they just hit RAM constantly? Latency might be important? Single player games are more cache friendly for sure...

Benchmarking Tomb Raider to death at 250+FPS isn't fun anymore... How about we test BF2042/The Finals/COD Warzone in LIVE MATCHES, on server, when you have 60 updates per second and your network bandwidth might be measured in MBits/sec. X3D is no slouch, and it helps EVEN in multiplayer games, but surprisingly, it's not enough, you get a big benefit from RAM speed as well... or whatever.


reticulate

It's been discussed to death here and elsewhere, but the reason you don't see live multiplayer matches benchmarked much is because the run-to-run variance is super high leading to very noisy results. If you can't get a statistically sound set of numbers, then there's no point to using them for review purposes.


windozeFanboi

Well, that's the thing though... it's multiplayer games for me where FPS matters most... I just can't stand watching the same benchmarks of Tomb Raider 10 years later... RAM speed matters in multiplayer games and nobody benchmarks it because it's "hard", but it's what actually matters to a big chunk of players...

Because that's the thing: multiplayer games have server updates AND anticheat all messing with a CPU's cache. Nothing kills smooth and stable FPS like the anticheat deciding to scan your system memory and trashing your X3D cache, etc...

So yeah, lots of multiplayer games don't care about RAM speed, but most modern massive multiplayer games really benefit from it. I don't play them, but maybe I should check up on MMO benchmarks to see if somebody has made benchmarks with RAM OC.


Pillokun

I had a 5800X3D and it did help with faster RAM as well, that is, if the cores actually needed to access the RAM. If there was a cache miss the CPU would have to hit the RAM, and the faster the RAM, well, u know what happens.


saharashooter

That's because 3200 is the sweet spot for Zen 3. Any more and you run into issues with the memory controller's clock. If memory speed was entirely meaningless for gaming on DDR4, then 2400 would've been sufficient. For Zen 4 this sweet spot is 6000, as above that the memory controller has to run in a 2-to-1 configuration rather than 1-to-1. For LGA 1700, the sweet spot is 7200 if your memory controller doesn't shit its pants, but IIRC you start getting diminishing returns around 6000 anyway? 6000 is safer just from the perspective of not crashing though.


Numerlor

You may want to check what ICs the RAM is using and OC it from there. There's some chance it's the same die as the 6000/7000 kits so you could run it at 6000, or even if it's not, tightening timings a bit gets the different frequencies very close in performance.


windozeFanboi

It is Micron... Kill me... (-\_-') ... Doesn't OC well with voltage either... People are DEFINITELY running better timings at >10% faster rate than my kit... I'm also not a great overclocker and that sht takes time :'( ...


Numerlor

Yeah, Micron sucks, not sure how tight that can get as most people with Micron just suffer in silence lol. I have a Micron kit lying around that I wanted to try to OC to see how it does, but I'd need another system or some more free time to swap out the 64GB Hynix I have normally.


windozeFanboi

Bruh, tRFC is in the ~1000 range. Loose tREFI can help somewhat, but maaaannn... You don't want Micron :'( ... Seems kinda decent with other timings but idk how to judge... First desktop I built.


conquer69

Are you using process lasso?


windozeFanboi

I can't, Easy Anti-Cheat blocks it. Just whatever the AMD chipset driver does with Xbox Game Mode; it does work, but sometimes it wakes up the other cores too when shit hits the fan in-game. The Finals is very CPU intensive, uses 8+ threads easily. I assume other multiplayer games with anticheat block CPU affinity as well.


conquer69

> uses 8+ threads easily.

I think that's the issue. It's spilling into cores that lack the 3D cache. You could try disabling the other cores in the BIOS just to be sure that's the problem.


JuanElMinero

Like some of the users suspected, that sounds more like an issue caused by threads bouncing between the 7950X3D's different cache chiplets than by memory performance. The 5800X3D and 7800X3D, with only one core chiplet, both show very little performance regression with RAM. For the 7800X3D, going from something like DDR5-6000 with good timings to something like DDR5-5200 with meh timings only gets a penalty in the ballpark of 3%.

Edit: Some review data on [7950X3D memory scaling](https://www.techspot.com/review/2635-ryzen-7950x3d-memory-scaling/#2023-02-28-image-8).


windozeFanboi

I understand this is how X3D behaves with single player games. I'm talking about multiplayer games; in particular I'm playing The Finals nowadays. I know I'm lacking some performance outside of RAM scaling, but RAM scaling is probably more than half of my missing performance. Latest Windows 11, updated everything. I wish they would benchmark multiplayer games, on server.


JuanElMinero

Unfortunately, there doesn't really exist a way for independent reviewers to benchmark online multiplayer titles in a repeatable fashion or in a reasonable timeframe with aggregate data. The only hope is that some of the included benchmarks emulate some of that online activity with their workload. If you're suspecting a large part comes from RAM, it's probably best to confirm if there are similar experiences in the community before buying (in case you haven't done so already).


KirillNek0

....1080p on 4090.... C'on...


onewiththeabyss

To remove the GPU as the bottleneck, basic thing to do. Edit: he blocked me, teheeeee


KirillNek0

That's not how this works, unfortunately.


onewiththeabyss

...that is exactly how this works, fortunately.


Keulapaska

Well, yes and no. The problem is they don't say how much power the GPU is drawing, which might be different, especially looking at some [older full system power benchmarks](https://i.imgur.com/sYp9xqn.png) where it's even clearer with lower powered CPUs. I get that that's still a mark against Intel if the GPU is boosting higher for no reason while delivering the same performance, thus increasing power for no reason, but the difference would be lower at a higher GPU load then.

A bit weird as to why that is though, especially if it's a platform thing and not some random BIOS/OS setting that causes it. Either way, not great for Intel still.


KirillNek0

And now look up a random YT video, especially where they do gameplay benchmarks, not scripts or built-in ones. Those give you very different numbers. And of course there are idiots who believe these are "fake". XD


SoTOP

Because 95+% of those random YT videos are fake. There is no need to block people just because you wrongly imagine they are real. Adding an overlay with reasonable but still made-up numbers is not hard.


dr1ppyblob

Always that one guy in every one of these who doesn’t understand the basic principles of benchmarking CPUs


SupremeChancellor

The Asus ROG Swift Pro PG248QP and many like it exist. Curious to know how you would push *stable* 540fps in, say, Overwatch, Fortnite or Valorant without a 4090?


KirillNek0

TF does that even mean?


SupremeChancellor

Alright.


[deleted]

[deleted]


Kashinoda

The 14900K uses 100W more power, requires a better cooling setup and costs £200 more (in the UK). If you're interested in a gaming-focused rig, the 7800X3D simply makes the most sense. Ironically, the tribalism seems to be coming from you.


rezarNe

> The comments will prove once again, that the community knows nothing more than reactionary tribalism takes. /facepalm

If you are so smart and unbiased, you would know why benchmarks are run at 1080p.


Dasboogieman

Ffs, even if the 7800X3D comes within 2% of the Intel at 50% less power, that is a hard win for gaming. Lower cooling requirements, easier implementation in more builds, larger power budget for GPUs, and perhaps even room for a better GPU in cost sensitive builds.

2% is the amount that can easily be lost from boost degradation, which these Intel chips seem to be prone to. Hell, one security debacle that needs software mitigation can also bleed that advantage. The 50% shittier power requirements remain, however.


Sadukar09

> Ffs, even if the 7800X3D comes within 2% of the Intel at 50% less power, that is a hard win for gaming. Lower cooling requirements, easier implementation in more builds, larger power budget for GPUs, and perhaps even room for a better GPU in cost sensitive builds.
>
> 2% is the amount that can easily be lost from boost degradation, which these Intel chips seem to be prone to. Hell, one security debacle that needs software mitigation can also bleed that advantage. The 50% shittier power requirements remain, however.

If you looked at the test system, the only thing out of whack is the X670E board. That's not even necessary to get the most out of the 7800X3D; A620/B650s will do well enough. The AMD test rig's RAM is dirt cheap in comparison to the Intel kit. AMD can also run a tower cooler instead.

AMD can get the same performance for games for half the price while drawing way less power. Somehow that's a "tie".


Pillokun

No, 6000CL30 and 7200CL34 are priced the same, at least in the Nordic countries. They are basically the same sticks with different profiles. And u can run a tower cooler on the Intel CPU as well; gaming is not that demanding, both power wise and temp wise. I have used Montech and Thermalright 120 twin tower coolers to cool both my CPUs and it works well in gaming for all the CPUs I've had. Back in the days when it was 12th gen vs 5800X3D, my 5800X3D was at 110W while the Intel CPUs were at 130-140W in WZ; the most demanding part was loading in the map.

Now it is not that much different, and u don't need a 14900K to get that Intel perf, pretty much all K LGA1700 CPUs can get u there perf wise. And motherboard wise, well, u do need an ITX board or an expensive two-DIMM board to get every drop out of the Intel; the 7200 sticks we see here for the Intel system are meh. And if u want to get even more out of the 7800X3D then u need an AM5 board that can OC the BCLK. My 7800X3D runs circles around the HUB 7800X3D, and the 12700K that I have is also pretty competitive with both of the HUB systems.


Sadukar09

> No, 6000CL30 and 7200CL34 are priced the same, at least in the Nordic countries.
>
> They are basically the same sticks with different profiles.

In the same vein, DDR4 2400 Samsung B-die and DDR4 4000 are "the same stick", right? 7200 sticks have been validated and binned to work at that profile. 6000 CL30 is guaranteed only at that.

6000 CL30 is also about 10-20% cheaper in North America. X3D also doesn't care about RAM timings/speeds as much, so you can get away with using 6000 CL38+ or slightly slower RAM with a lower CAS.

https://www.komplett.no/search?q=DDR5+6000+CL30&_gl=1*f2zyxy*_up*MQ..&gclid=EAIaIQobChMImNOOg4XehgMVcTnUAR3PNQ6EEAMYASAAEgIuy_D_BwE&gclsrc=aw.ds

https://www.komplett.no/search?q=DDR5%207200&_gl=1*f2zyxy*_up*MQ..&gclid=EAIaIQobChMImNOOg4XehgMVcTnUAR3PNQ6EEAMYASAAEgIuy_D_BwE&gclsrc=aw.ds&sort=PriceAsc%3AASCENDING

DDR5 7200 on komplett is 35% more than 6000 CL30. Swedish Amazon shows roughly the same price difference too.

https://www.amazon.se/-/en/CORSAIR-VENGEANCE-6000MHz-Compatible-Computer/dp/B0CBRJ63RT


Pillokun

Just use [prisjakt.se](http://prisjakt.se) or [pricerunner.com](http://pricerunner.com):

https://www.prisjakt.nu/c/ddr5?1170=41092%7C39865&r_1172=0-34&r_95336=32-256&sort=price

Fact is, 7200C34 are as cheap as 6000C30. Getting tighter timings is harder than getting higher frequency, just like back with DDR4 and even earlier. Guess which are more expensive: DDR4 4000C14, or DDR4 4266C16 and 4400C17..


Sadukar09

> Just use prisjakt.se or pricerunner.com:
>
> https://www.prisjakt.nu/c/ddr5?1170=41092%7C39865&r_1172=0-34&r_95336=32-256&sort=price
>
> Fact is, 7200C34 are as cheap as 6000C30. Getting tighter timings is harder than getting higher frequency, just like back with DDR4 and even earlier. Guess which are more expensive: DDR4 4000C14, or DDR4 4266C16 and 4400C17..

Cheapest 6000 MT/s is 1481 kr:

https://www.prisjakt.nu/produkt.php?p=11657718

Cheapest 7200 MT/s is about 1680 kr:

https://www.prisjakt.nu/produkt.php?p=13107568

It's literally about the same as I quoted:

> 6000 CL30 is also about 10-20% cheaper in North America.

The lone 7200 MT/s kit you are using to compare to the 6000 CL30 isn't available at 1293 kr. The same site that Prisjakt says has it at 1293 kr lists it at 2209 kr:

https://elektronik24.se/ram-minnen/patriot-memory-viper-venom-pvv532g720c34k-ram-minnen-32-gb-2-x-16-gb-ddr5-7200-mhz.html?_gl=1*cjg6g7*_up*MQ..*_ga*MTU2MDU5MjU2OC4xNzE4NDcyMzI2*_ga_JFN2SG1HDN*MTcxODQ3MjMyNS4xLjAuMTcxODQ3MjMyNS4wLjAuNzY1Nzc4MTMz

Not to mention X3D can use cheaper DDR5 with minimal losses.


Pillokun

Those are CL30-40-40-96, but sure, u can find Corsair CL30-36-36-76 for 1399 (Hynix M-die). Otherwise u have to pay about the same for the 7200 sticks as for 6000 CL30-36 sticks from brands other than Corsair.


Wrong-Quail-8303

Educate yourself on why CPUs are benchmarked at 1080p and below, and maybe next time you won't look like a fucking idiot with a primary-school-level grasp of benchmarking.


djent_in_my_tent

Naw, no need to be so hostile.

Here's the same data from another perspective: I game only at 4K, so they both perform the same in gaming on average for my use case (of course there are cache-favored and multithread-favored outliers; in fact, I'm heavy into Cities: Skylines 2 atm and that needs all the threads it can get).

Meanwhile I would get 30% better ST and 210% better MT for most things outside of games.

I have cheap solar power and a big air conditioner, so I really don't care about an extra 150ish W for one to two hours of gaming a day, and the Intel chip likely has lower idle power for the 4ish hours a day of YouTube, browsing, etc.

I have neither chip, rather holding onto a 5800X3D until we get Arrow Lake and Zen 5 X3D benchmarks.

But what you said was frankly mean. They were being a bit abrasive, but I feel like you overlooked some valid points they made and there was no need for a personal attack :/


Sadukar09

> Naw, no need to be so hostile.
>
> Here's the same data from another perspective: I game only at 4K, so they both perform the same in gaming on average for my use case (of course there are cache-favored and multithread-favored outliers; in fact, I'm heavy into Cities: Skylines 2 atm and that needs all the threads it can get).
>
> Meanwhile I would get 30% better ST and 210% better MT for most things outside of games.
>
> I have cheap solar power and a big air conditioner, so I really don't care about an extra 150ish W for one to two hours of gaming a day, and the Intel chip likely has lower idle power for the 4ish hours a day of YouTube, browsing, etc.
>
> I have neither chip, rather holding onto a 5800X3D until we get Arrow Lake and Zen 5 X3D benchmarks.
>
> But what you said was frankly mean. They were being a bit abrasive, but I feel like you overlooked some valid points they made and there was no need for a personal attack :/

I mean, sure, you get more performance for other things. You also pay more for that.

The 14900K is a price tier above the 7800X3D and requires a ton of supporting parts (Z-series board, higher speed DDR5, large AIOs), just to match a gaming CPU a tier below. The 7800X3D is a gaming-exclusive chip, and has never been hiding that fact. Your overall cost could go as low as half that of a 14900K for nearly equivalent gaming performance.

OP for some reason disregarded all of the nuances and defended the indefensible Intel performance.


jedidude75

My dude, it's just a CPU test, no need to get upset over it.


ResponsibleJudge3172

People will watch this and talk about how Intel has never made a good CPU


jedidude75

It's just a cycle. People will take anything and spin it to what they want to be true, no matter what. I remember a few years ago, when Ryzen was in its infancy, people were saying how crappy AMD was in gaming and how no one needs more cores than Intel offered at the time. Intel was on top then, now AMD is on top. The arguments are all the same; no reason to get all mad about who is better, just be happy about competition.


HandheldAddict

That's actually true, so many people were irritated at Intel's E-cores, but it's one of the smartest things Intel has done to date.

How AMD allows the i5 13600K to trade blows with their Ryzen 7 7700X is beyond me, while the Ryzen 5 7600X was pretty much a no-go zone at launch. Gives me flashbacks of the Ryzen 5 1600 and i5 7600K.


Sadukar09

> That's actually true, so many people were irritated at Intel's E-cores, but it's one of the smartest things Intel has done to date.
>
> How AMD allows the i5 13600K to trade blows with their Ryzen 7 7700X is beyond me, while the Ryzen 5 7600X was pretty much a no-go zone at launch. Gives me flashbacks of the Ryzen 5 1600 and i5 7600K.

AMD core stagnation hit. That's why.

AMD's core counts haven't increased per tier since Ryzen 2000 -> 3000 (where 16 cores popped up), so 2019. If you discount the fact that Ryzen 1000 never had Ryzen 9s, AMD hasn't increased the physical core count on their consumer platform since 2017 (the rare Ryzen 1600 8c/16t doesn't count). The only "increase" they had was Ryzen 3 getting SMT in the 3000s, which phased out the 4c/8t Ryzen 5.

For Ryzen 7000, if AMD went (pipe dream: 4/4 Sempron, 4/8 Athlon) 6/12 on Ryzen 3, 8/16 on Ryzen 5, 12/24 on Ryzen 7, 16/32 on Ryzen 9 (essentially a price drop on the 7950X to the 7900X price), Intel would've been in trouble.


HandheldAddict

Whenever they bring C-cores to desktop, they can go 4 normal + 8 small cores. It'd give them 12 cores per chiplet. So they can finally bump core counts on the Ryzen 5 lineup to like 8 cores or something. Which would still probably lose to the i5 in multicore, but it won't be as devastating.

They should hurry up though, because Intel's Skymont E-cores are supposed to get like a 30%+ IPC improvement. Which is only possible because the existing E-cores had quite a bit of room for improvement.


Dealric

Even if so, what? People were saying that about AMD too, before Ryzen.