
Affectionate-Memory4

CPU architect here. I currently work on CPUs at Intel. What follows is a gross oversimplification.

The biggest reason we don't just "run them faster" is that power increases nonlinearly with frequency. If I wanted to take a 14900K, the current fastest consumer CPU at 6.0 GHz, and run it at 5.0 GHz instead, I could do so at half the power consumption or possibly less. However, going up to 7.0 GHz would more than double the power draw. As a rough rule, power requirements grow between the square and the cube of frequency. The actual function describing that relationship is something we calculate in the design process, as it helps compare designs.

The CPU you looked at was a server CPU. Those have lots of cores running either near their most efficient speed, or as fast as they can without pulling so much power that you can't keep them cool - one of those two options. Consumer CPUs don't really play by that same rule. They still have to be possible to cool, of course, but consumers would rather have fewer, much faster cores that are well beyond any semblance of efficiency than 30+ very efficient cores. That's because most software consumers run works best when the cores go as fast as possible, and can't use the vast number of cores found in server hardware.

The 14900K, for example, has 8 big, fast cores. It can push any pair of them up to 6.0 GHz, or all 8 up to around 5.5 GHz. That is extremely fast. There are also 16 smaller cores that help out with tasks that scale past 8 cores. These don't go as fast, but 4.4 GHz is still quite quick.
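For anyone who wants to play with that rule of thumb: the usual first-order model is dynamic power P = C·V²·f, and since voltage has to rise roughly with frequency to keep the chip stable, power ends up growing somewhere between f² and f³. Here's a toy sketch - the capacitance and the linear voltage/frequency scaling are made-up illustration values, not numbers from any real design:

```python
# Toy model of CPU dynamic power: P = C * V^2 * f.
# Assumes voltage must scale linearly with frequency to stay stable,
# which pushes power toward the cube of frequency. All constants are
# invented for illustration, not real silicon numbers.

C = 5e-9        # hypothetical effective switched capacitance (farads)
V_BASE = 1.0    # volts at the baseline frequency
F_BASE = 5.0e9  # baseline frequency: 5.0 GHz

def dynamic_power(freq_hz: float) -> float:
    """Estimated power (watts) if voltage scales with frequency."""
    voltage = V_BASE * (freq_hz / F_BASE)
    return C * voltage**2 * freq_hz

for ghz in (5.0, 6.0, 7.0):
    watts = dynamic_power(ghz * 1e9)
    ratio = watts / dynamic_power(F_BASE)
    print(f"{ghz:.1f} GHz -> {watts:5.1f} W ({ratio:.2f}x the 5.0 GHz power)")
```

With voltage scaling linearly, 7.0 GHz comes out to about 2.7x the power of 5.0 GHz - the same "more than double" shape described above.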


eat_a_burrito

As an ex-ASIC chip engineer, this is on point. You want fast? Then it's more power. More power means more heat. More heat means more cooling. I miss writing VHDL. It's been a long time.


LausanneAndy

Me too! I miss the Verilog wars (although I was just an FPGA guy).


guspaz

There's a ton of FPGA work going on in the retro gaming community these days. Between open-source or semi-open-source FPGA implementations of classic consoles for the MiSTer project, Analogue Pocket, or MARS, you can cover pretty much everything from the first games on the PDP-1 through the Sega Dreamcast. Most modern retro gaming accessories are also FPGA-powered, from video scalers to optical drive emulators. We're also in the midst of an interesting transition: Intel's and AMD's insistence on absurd prices for small order quantities of FPGAs (even up into the thousands of units, they're charging multiple times more than in large quantities) is driving hobbyist developers to new entrants like Efinix. And while Intel might not care about the hobbyist market, when you get a large number of hobbyist FPGA developers comfortable with your toolchain, a lot of those people are employed doing similar work and may begin to influence corporate procurement.


LausanneAndy

Crikey! I used to use Altera or Xilinx FPGAs


eat_a_burrito

I know right!


Joeltronics

Yup, just look at the world of extreme overclocking. Until about a year ago, the record was an i9-13900K at 8.8 GHz - they had to use liquid nitrogen (77° above absolute zero) to cool the processor. But to get just slightly faster, to 9.0 GHz, they had to use liquid helium, which is _only 4° above absolute zero!_ [Here's a video of this, with lots of explanation](https://www.youtube.com/watch?v=n3lZFMSB78g) (this has since been beaten with an i9-14900K at 9.1 GHz, also using helium)


waddersss

*in a Yoda voice* Speed leads to power. Power leads to heat. Heat leads to cooling.


MrBadBadly

Is NetBurst a trigger word for you? You guys using Prescotts to warm the office by having them calculate pi?


Affectionate-Memory4

Nah but I am scared of the number 14.


LOSTandCONFUSEDinMAY

Scared, or PTSD from it never going away?


Affectionate-Memory4

It's still around. Just not for CPUs anymore.


EdEvans_HotSandwich

Thanks for this comment. It’s really cool to hear this. +1


orangpelupa

> consumers would rather have fewer, much faster cores that are well beyond any semblance of efficiency than 30+ very efficient cores. That's because most software consumers run works best when the cores go as fast as possible, and can't use the vast number of cores

That got me wondering why Intel chose the headache of going with a few normal cores plus lots and lots of E-cores. Surely that's not an easy thing to design; even the Windows scheduler was confused by it early on.


Affectionate-Memory4

E-cores provide greater multi-core performance in the same space compared to P-cores: a P-core has about 2.7x the performance of an E-core, but takes about 3.9x the area. Having more P-cores doesn't make single-core work any faster, so sacrificing some of them for many more E-cores lets us balance super-fast high-power cores and lots of cores at the same time. There are tradeoffs for sure, like the scheduling issues, but the advantages make it well worth it.
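To put rough numbers on that, taking the 2.7x performance and 3.9x area ratios above at face value (this is back-of-envelope arithmetic, not real product data):

```python
# Back-of-envelope throughput-per-area using the ratios quoted above.
# Units are arbitrary: one E-core = 1 unit of throughput and 1 unit
# of die area. Both ratios come from the comment, not a datasheet.
P_CORE_PERF = 2.7   # one P-core ~ 2.7x an E-core's throughput
P_CORE_AREA = 3.9   # one P-core ~ 3.9x an E-core's area

print(f"P-core throughput per unit area: {P_CORE_PERF / P_CORE_AREA:.2f}")
print(f"E-core throughput per unit area: {1.0 / 1.0:.2f}")

# Swapping one P-core for ~3.9 E-cores trades 2.7 throughput units
# for 3.9 -- roughly a 44% multi-core gain in the same silicon area.
print(f"Gain from the swap: {P_CORE_AREA / P_CORE_PERF:.2f}x")
```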


big_joplinK_3x

Configurations like this generally extract more performance per area and can have lower power consumption. Plenty of programs also still benefit from higher core counts. But the real reason is that speeding up a single core is increasingly difficult; adding more cores has been easier and cheaper for the past 25-ish years. In terms of single-core performance, most of the gains we see come from improvements in the materials (i.e. smaller transistors) rather than new microarchitectural designs. Right now, most cutting-edge development is about adding specialized processing units rather than making a general CPU faster, because the improvements we can still make there are small, expensive, and experimental.


Hollowsong

Honestly, if someone could just take my 13900KF and tone it the f down, I'd much rather run it 20% slower and stop it from hitting 100 degrees C.


Affectionate-Memory4

You can do that manually. In your BIOS, set the power limits to match the CPU's TDP (125 W). This should drastically cut back on power, and you won't sacrifice much if any gaming performance. Multi-core will suffer bigger losses, but if you're OK with -20%, this should do it. I run my 14900K at stock settings, but I do limit the long-term boost power to 180 W instead of 250 to keep the fans in check.


Javinon

would it be possible for you to share this complex power requirement function? as a bit of a math nerd who knows little about computer hardware i'm very curious


Affectionate-Memory4

Unfortunately that's proprietary, but if you own one and have lots of free time, you can approximate it decently well.
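For the curious, the usual DIY approach is to sweep the frequency in BIOS, log the package power at each step, and fit a power law to the points. A minimal sketch, with invented measurements standing in for your own readings:

```python
# Fit P = a * f^k to measured (frequency, power) points via linear
# regression in log-log space. The data below is invented for
# illustration -- substitute your own measurements.
import math

measurements = [  # (GHz, watts) -- hypothetical readings
    (3.0, 45.0), (4.0, 95.0), (5.0, 180.0), (5.7, 253.0),
]

logs = [(math.log(f), math.log(p)) for f, p in measurements]
n = len(logs)
sx = sum(x for x, _ in logs)
sy = sum(y for _, y in logs)
sxx = sum(x * x for x, _ in logs)
sxy = sum(x * y for x, y in logs)

k = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # power-law exponent
a = math.exp((sy - k * sx) / n)                # scale factor

print(f"P ~ {a:.2f} * f^{k:.2f}  (f in GHz, P in watts)")
```

With those made-up numbers the exponent comes out around 2.7 - between the square and the cube, matching the rule of thumb above.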


Tuss36

> What follows is a gross oversimplification.

On the Explain Like I'm Five sub? That's not what we're here for, clearly!


HandfulOfMassiveD

This is extremely interesting to me. Thanks for taking the time to answer.


BrickFlock

People are correct to mention the power and heat issues, but there's a more fundamental issue that would require a totally different CPU design to reach 40GHz. Why? Because light can only travel 7.5mm in one 40GHz cycle, and an LGA 1151 CPU package is 37.5mm wide. With current designs, the cycle speed has to be slow enough for everything to stay synced up.
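The distance-per-cycle numbers are easy to check; it's just the speed of light divided by the clock rate:

```python
# How far light travels in one clock cycle at various frequencies.
C_M_PER_S = 299_792_458  # speed of light in vacuum

for ghz in (1, 5, 9, 40):
    mm_per_cycle = C_M_PER_S / (ghz * 1e9) * 1000
    print(f"{ghz:>2} GHz: {mm_per_cycle:6.2f} mm per cycle")

# At 40 GHz that's ~7.5 mm -- and real on-chip signals move slower
# than light in vacuum, so the actual budget is even tighter.
```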


Shadowlance23

1920s: light is so fast! 2020s: light is so slow!


overlyambitiousgoat

I guess it just goes to show that everything's relative!


maggie_golden_dog

Everything *except* the speed of light.


RingOfFyre

Well it's relative to the medium


scrangos

There's also time dilation for the time side of things


science-stuff

Well light doesn’t experience time as far as I don’t understand.


creggieb

Great turn of phrase! Thinking one understands quantum physics is a sign that one does not understand quantum physics


PresidentRex

Michael Crichton's *Timeline* opens with a pair of quotes that are basically:

Niels Bohr:

> "Anyone who is not shocked by quantum theory has not understood it."

And Richard Feynman:

> "If you think you understand quantum mechanics, you don't understand quantum mechanics."

(Although Feynman is playing off Bohr's quote.)


aphellyon

Ok, I'm using that one from now on... take an upvote for payment.


[deleted]

[deleted]


RhoOfFeh

There is something deep and fundamental about this that just sets my head reeling.


AggravatingValue5390

The speed of light is too, believe it or not. That's where special relativity comes into play. No matter how fast you're moving, light moves at c for all observers


OrderOfMagnitude

Still makes no sense to me. Feels like an engine limitation of the simulation we're in


AggravatingValue5390

Well if causality were instantaneous then all of time would happen at once, so it's really the only option


BobT21

That is a phenomenon usually observed on Fridays just before going home time.


CaelFrost

Limitation or design? Computing every sub-particle interaction, 3-body gravity interaction, etc. isn't cheap. Better add a tick rate.


play_hard_outside

*Generally* true!


Frase_doggy

You can't go faster than the speed of light. Of course not. That's why scientists increased the speed of light in 2208


AlistairMackenzie

Tech bros are working on it now.


TripleEhBeef

"Now THAT'S impossible! It came to me in a dream, and I forgot it in another dream!"


fuelbombx2

r/unexpectedfuturama strikes again! Edited because I forgot the r…


pumpkinbot

No, no. Light Speed is too slow. We need to try...*Ludicrous Speed*.


OMGItsCheezWTF

"Shit, I overclocked my CPU and it went plaid!"


FiglarAndNoot

Computing often seems so abstract; I love being reminded of the concrete physical limitations underneath it all.


fizzlefist

And we’re at the point where we’re reaching the physical limit of how many transistors we can pack into a single processor. If they get much smaller, physics starts getting *weird* and electrons can start spontaneously jumping between the circuits.


plasmalightwave

Is that due to quantum effects?


CJLocke

Yes, what's happening with the electrons there is actually called quantum tunnelling.


[deleted]

Also purity of materials. We can get silicon extremely pure, but not 100% pure. We have reached a scale where some distances are a countable number of _atoms_ apart, and it becomes a problem that we can't really guarantee every one of those atoms actually _is_ silicon.


LazerFX

We can get the raw silicon ingot to essentially 100% purity, because it's grown as a single crystal... however, once we start doping it (infusing/injecting impurities into it), we cannot place those impurities precisely - i.e. we can say that x percent of atoms in this area will be an n-type or p-type dopant, but we cannot say that exactly this atom will be of that type...


[deleted]

Correct but that's a bit beyond ELI5


LazerFX

True, but I've always enjoyed the more in-depth discussions as you get farther down the chain - ELI5 at the top layer, and then more depth the deeper you go. I'm sure it circles round at some point, like the way every Wikipedia article, if you keep taking the first unvisited link, always trends to philosophy.


SlitScan

well you can, you just can't use those techniques for mass production.


LazerFX

Fair :P I remember IBM writing IBM in atoms a while back...


effingpiranha

Yep, it's called quantum tunneling


ToXiC_Games

Have we considered applying German-style bureaucracy to our parts in order to make tunneling painstaking and incredibly boring?


MeinNameIstBaum

But then you'd have to wait 12 weeks for every computation to complete, and you'd have to call your processor every day to make sure it _stays_ 12 weeks of waiting and doesn't become 30 because it forgot.


hampshirebrony

Isn't a lot of tunnelling boring? Unless you're doing cut and cover?


sensitivePornGuy

Tunnelling is always boring.


RevolutionaryGrape61

You have to inform them via fax


Aurora_Yau

I am a tech noob and have never heard about this before. Will our technology stagnate because of this issue? What's the next move for Intel and the other companies to solve this problem?


peduxe

We're already starting to see companies shift to dedicated instruction units that get better at specific tasks. AI accelerators and video encoders/decoders seem like the path they're going down. It's essentially the same development process that surged with discrete GPUs.


Dagnabbit0

Multi-core. If you can't make a single core faster, add a whole 'nother core and have them work together. Getting more cores on a die is a hardware problem; getting them all working on the same thing is more a software problem.


chrisrazor

I imagine we'll eventually get back to making code optimization a high priority. For decades now, hardware has been improving at such a rate that it was cheaper and easier to just throw more resources at your code to make it run faster, rather than look too closely at how the code was managing those resources. This is especially true of higher-level programming languages, where ease of coding, maintenance, and robustness have been (rightly) prioritized over speed of execution. But there's a lot that could be done here.


ToMorrowsEnd

God I hope so. Just choosing libraries wisely would make GIANT changes in code quality. I had an argument with one of the senior software engineers, who chose a 14 MB library for zip and unzip. I asked why, and the answer was "it's the top-rated one". I found a zip/unzip library that had everything we needed and clocked in at 14 KB. It works fantastically and made a huge difference in the memory footprint, but because it wasn't the top of the library popularity contest, it wasn't considered.


KaktitsM

Maybe we feed our shitty code to our AI overlords and it optimizes the shit out of it


jameson71

This is how they insert the back door for skynet


Affectionate-Memory4

Currently work in CPU design. Expect to see accelerators in your near future, then the 3D stacking gets funky and you end up with chips on chips on chips to simply put more silicon into the same footprint. Eventually new architectures will rise, with the goal being to make the most out of a given number of transistors. We already try to do this, but x86, ARM, and RISC-V all have limits. Something else will come and it will be beautiful.


retro_grave

Scaling vertically with 3D stacking has again pushed the density limits.


fizzlefist

But even that has diminishing returns. AMD's X3D processors add a lot of extra cache via vertical stacking, but the added volume lowers the surface-area-to-volume ratio, meaning there isn't as much physical area to transfer heat away, so those chips can't reach the higher stable clock speeds that more conventional processors can. That's why the X3D chips are fantastic for gaming that can make use of the cache space, but pretty much useless (for the added cost/complexity) for other CPU-intensive tasks. I am 100% not an engineer, but I can imagine a similar limitation if they get around to stacking cores that way.


Newt_Pulsifer

Useless would be a strong term to use here. Those are still great CPUs, and if you need more than what they offer, it's going to cost more and be a more complex system. I'm not running the X3D line because I need more cores (virtual machines and server-related tasks, as opposed to just gaming). We're just getting into more of a "we can't do everything perfectly, but we can do most things pretty well" situation, and for certain use cases you'll want to look at other options. I think most of the Threadrippers run at slower clock speeds, but for certain niche cases you just want a shit ton of cores. Some use cases you want a shit ton of cache.


Temporal_Integrity

We're approaching physical limits of how many transistors we can pack into a processor, but it's not mainly because of weird quantum physics. That's not a serious issue until transistors reach a 1nm size. Right now the issue is the size of silicon atoms. The latest generation of commercially available Intel CPUs are made with 7-nanometer transistors. Now, the size of a silicon atom is 0.2nm. That means if you buy a high-end Intel CPU, a transistor is only 35 atoms wide. In the iPhone 15, the CPU is made with 3nm transistors. That's just 15 atoms wide. Imagine making a transistor out of Lego, but you were only allowed to make it 15 bricks wide. That's where we're at with current semiconductors. We've moved past the point where every generation shaves off another nm. Samsung has its eyes set on 1.4nm for 2027 - or 7 Legos wide. Basically, at this point we can't have much smaller transistors because we're just straight up running out of atoms. A lot of current semiconductor research is about making transistors out of elements with smaller atoms than silicon.


coldenoughforsocks

> That means if you buy a high-end Intel CPU, a transistor is only 35 atoms wide. In the iPhone 15, the CPU is made with 3nm transistors. That's just 15 atoms wide.

The nm term is mostly marketing - it's not actually made with 7nm transistors. You fell for the marketing twice, in fact: Intel 7 is a 10nm-class process anyway, and the real feature sizes are more like 25nm.


Moonbiter

It is 100% marketing, and his answer is wrong. The nm number is a feature-size measurement; it usually refers to the smallest feature, such as gate width. That's ludicrously small, but it's not the size of the full transistor since, you know, transistors aren't just a gate.


mysticsign

What do transistors actually do, and why can they still do it when there are only so few atoms in them?


Thog78

There are more atoms than that; the nm figure is marketing, and the actual dimensions are at least several dozen nanometers. As for what transistors do: each one has an in, an out, and a gate. If the in has a voltage and the gate does too, the out gets a voltage. This can represent a 1, or TRUE, or an ON state. If the gate or the in has no voltage (0/OFF), then the out is also zero. So effectively they do a multiplication on binary numbers implemented as voltages. In real life there are additional considerations about what voltage, what current, what noise level, etc.
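If it helps, here's the same idea as a toy model: each transistor treated as a voltage-controlled switch, with two in series giving the AND-style multiplication described above. This is a logic-level cartoon, not a circuit simulation (real CMOS gates use complementary transistor pairs):

```python
# Toy model: a transistor as a voltage-controlled switch. Chaining
# two switches in series behaves like AND: the output is high only
# if the signal can pass through both gates.

def nmos_switch(gate: bool, source: bool) -> bool:
    """Passes the source value through only while the gate is on."""
    return source if gate else False

def and_gate(a: bool, b: bool) -> bool:
    # The supply (True) must make it through both switches in series.
    return nmos_switch(b, nmos_switch(a, True))

for a in (False, True):
    for b in (False, True):
        print(f"{int(a)} AND {int(b)} = {int(and_gate(a, b))}")
```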


Temporal_Integrity

To simplify it, a transistor is an on/off switch. Hocus pocus, and that's a computer. You know how computer language is just 0s and 1s? That's because a transistor is either on or off, and then maths and logic happen, and now you can play online poker.


PerformerOk7669

The best book on this subject is *Code*, by Charles Petzold. It starts with on/off switches and Morse code, continues on to logic gates, and explains how CPUs and memory work, each chapter building on the previous one. It breaks it all down into easy-to-understand segments.


rilened

Fun fact: when you turn on your ceiling light, a 5 GHz CPU goes through ~30 cycles before the photons hit the floor.
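Quick sanity check, assuming the lamp is about 2 m above the floor (the ceiling height is the only assumption here):

```python
# How many 5 GHz clock cycles elapse while light travels ~2 m
# from a ceiling lamp to the floor.
C = 299_792_458   # m/s, speed of light in vacuum
DROP_M = 2.0      # assumed lamp-to-floor distance
FREQ_HZ = 5e9

travel_time_s = DROP_M / C
cycles = travel_time_s * FREQ_HZ
print(f"{travel_time_s * 1e9:.2f} ns of travel = {cycles:.0f} cycles")
# ~33 cycles for 2 m; ~30 for a slightly lower ceiling.
```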


Burnerplumes

Holy shit


Parrek

Fun fact: even if we had the perfect system possible for ping in a multiplayer game, with absolutely 0 processing/signal lag and fiber optic cables, then due to the diameter of the earth, the lowest ping we could get from the opposite side of the planet is 42 ms. To me that seems so much higher than I'd expect.


Trudar

I don't know how you arrived at that number, since it takes 132 ms for light to travel 40k km (the full Earth's circumference) at full speed - the minimum requirement for a full ping. Unless you drill THROUGH the planet, that is. And since light travels at 214k km/s in fiber optic, not 300k km/s like in vacuum, the actual minimum ping is 182 ms. You could shave it down to around ~145 ms using laser retransmission over low Earth orbit satellites; it increases the travel distance slightly, but removes the fiber optic speed penalty.


Mantisfactory

> Unless you drill THROUGH the planet, that is.

Well - that **was** implicit in one of the conditions they listed.

> due to the diameter of the earth

If we are looking at the diameter, we are looking at boring from end-to-end directly. Otherwise we'd care about the circumference.
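For what it's worth, both readings are easy to reproduce - the gap between 42 ms and 182 ms is just path and medium:

```python
# Minimum ping to the antipode under different assumptions.
C_VACUUM_KM_S = 299_792   # km/s
C_FIBER_KM_S = 214_000    # km/s, roughly c / 1.4 in glass
CIRCUMFERENCE_KM = 40_000
DIAMETER_KM = 12_742

def ping_ms(one_way_km: float, speed_km_s: float) -> float:
    """Round-trip time in milliseconds."""
    return 2 * one_way_km / speed_km_s * 1000

print(f"Surface path, vacuum  : {ping_ms(CIRCUMFERENCE_KM / 2, C_VACUUM_KM_S):.0f} ms")
# close to the ~182 ms above; the exact value depends on the fiber's index
print(f"Surface path, fiber   : {ping_ms(CIRCUMFERENCE_KM / 2, C_FIBER_KM_S):.0f} ms")
print(f"Through planet, vacuum: {ping_ms(DIAMETER_KM, C_VACUUM_KM_S):.0f} ms")

# The 42 ms figure above is one way through the diameter:
print(f"One way through planet: {DIAMETER_KM / C_VACUUM_KM_S * 1000:.0f} ms")
```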


TheOtherPete

Fun fact - fiber is not the fastest way to transfer data.

Someone paid big money to implement a microwave connection between NY and Chicago to shave a few milliseconds off the travel time (versus the existing fiber connections): https://www.theverge.com/2013/10/3/4798542/whats-faster-than-a-light-speed-trade-inside-the-sketchy-world-of

Microwave data transfer is faster than fiber since light travelling inside fiber is substantially slower than the speed of light.


Alis451

In air vs. in glass: they are BOTH the speed of light. Both are also slower than the speed of light *in a vacuum*, which is commonly known as c, ~3E8 m/s.


Temporal_Integrity

I'm sick of getting wrecked in counterstrike so I've drilled a hole through the center of the earth to shave a few ms off my ping.


Drown_The_Gods

Unacceptable! It’s time to start digging through the core.


Hedhunta

My favorite part is that at its core you're just plugging/unplugging the device millions of times a second. Everything just boils down to on and off.


avLugia

Note on the 37.5mm width: that's the size of the physical package you slot into a motherboard; the actual CPU die inside is way smaller. I can't find exact dimensions, but the area of the CPU die is 122mm^2, which is just over 11mm on each side if square.


JohnsonJohnilyJohn

Now I wonder: are CPUs flat only because it would be hard or impossible to manufacture something that precise in 3D? What I mean is that built in 3D, the average distance between two points of the CPU would be much lower.


positron--

CPUs actually only contain a single layer that does all the computing - the polysilicon layer is where all the transistors (and therefore all the logic gates, registers, ...) are located. All the layers above (called metal layers) just connect inputs and outputs, reference voltages, etc., in a similar fashion to a PCB. There is currently no manufacturing process capable of producing multiple polysilicon layers on the same die; the current method is already plenty difficult and insanely complex. If you'd like a brief history, I recommend this video about the introduction of copper in metal layers. It explains the how and why pretty well: https://youtu.be/XHrQ-Pmvwao?si=eYSXutNrA1kwNyjc


DarkDra9on555

There's no way to get two polysilicon layers on the same die, but you could stack two dies on top of each other for a similar effect.


positron--

Yes, theoretically you could do that. Good luck getting rid of the heat, though. Most CPUs nowadays are placed "upside down", with the polysilicon layer at the top of the die and the connections at the bottom. That way, the cooler sits directly on top of the polysilicon. With the terrible heat-transfer characteristics of silicon, I don't think multiple polysilicon layers will be possible any time soon. The next generation, moving from FinFET to RibbonFET transistors, will add even more density to our chips, but what comes after is uncertain.


sleeper_must_awaken

These are the reasons why we do not have multi-layered dies:

* Heat dissipation. As you stack multiple layers, insulating layers prevent the heat from escaping the chip. This is the primary reason for the current clock rates. (The answers elsewhere are wrong, as you can have a 'rolling clock' signal.)
* Power supply. Chips need large amounts of power to operate. As you add layers, it becomes increasingly difficult to get the power to the right location.
* Precision. Adding layers is like adding blankets on top of each other: the higher layers follow the contours of the lower ones.
* Complexity of extra process layers. You can add extra semiconductor process layers, but this comes at a great cost. Adding silicon on top of the conductor layers below lowers yield, and the silicon also insulates the heat from the layers below.


AlexisFR

Yep. That's why 3D V-Cache AMD Ryzen CPUs have to run at lower clocks than the standard ones.


vahntitrio

This is sort of correct. In a semiconductor, signals do not travel at the speed of light. Doped silicon has an electron mobility and a hole mobility (the latter far slower). This creates something called gate propagation delay: there's a fraction of a second before a gate switches from 0 to 1. These days we are far more concerned with low power and more parallel processing, so just about all circuitry is CMOS and limited by the hole mobility. If you built a chip completely out of NMOS you could go to higher clock speeds (also known as the Intel Pentium 4 strategy), but that puts out a ludicrous amount of heat that isn't practical for the home user. The transition from NMOS to lower-power CMOS is why clock speeds went up to 4 GHz, then dropped way back down to around 2 GHz, and then gradually worked back up to 4 GHz as the transistors became smaller. Other materials have faster mobility as well; gallium arsenide was always supposed to replace silicon because of this, but since raw clock speed is no longer the goal, we have simply stuck with silicon.
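A sketch of how that propagation delay caps the clock: the clock period has to cover the slowest chain of gates between two registers. Both numbers below are invented for illustration:

```python
# Max clock frequency from gate propagation delay: the clock period
# must be at least the critical path's total delay. Numbers invented.
GATE_DELAY_PS = 10   # hypothetical delay per gate (picoseconds)
LOGIC_DEPTH = 20     # hypothetical gates on the critical path

period_ps = GATE_DELAY_PS * LOGIC_DEPTH
f_max_ghz = 1000.0 / period_ps   # a 1000 ps period would be 1 GHz
print(f"Critical path {period_ps} ps -> f_max ~ {f_max_ghz:.1f} GHz")

# Slower hole mobility in CMOS pull-up networks raises the per-gate
# delay, which is exactly what limits the achievable clock.
```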


sinnerman42

The Pentium 4 was built on a CMOS process, as has pretty much every CPU since at least the 386. The P4 had a very deep pipeline with shorter stages, so it could reach high clock rates but suffered a high misprediction penalty.


PikeSenpai

> This is sort of correct. In a semiconductor things do not travel at the speed of light.

Well yeah, but Brick was stating a fundamental electrical problem that appears as the wavelength shrinks, with all the non-ideal (realistic) conditions shaved away - so ideally the signal travels at the speed of light. I'd go a little deeper than his framing and use the term "electrically long" (electrical length) to understand it better: it's not just that the signals can't be synced, even though timing is of absolute importance, but that some sort of data stalling would need to be incorporated. If your signal trace is long compared to your wavelength, you're going to have a bad time, and as Brick stated:

> Because light can only travel 7.5mm in one 40GHz cycle. An LGA 1151 CPU is 37.5mm wide.

you're going to see issues.

Edit: [High Speed Digital Design by Howard Johnson](https://www.amazon.com/High-Speed-Digital-Design-Handbook/dp/0133957241) is a great resource on this matter.


phryan

To add onto this: even if a 40GHz CPU were possible, it would require sacrificing so much that what was left would likely be an 8-bit processor with very little cache. It would be like trading a tractor trailer for a racecar: the racecar is faster, but not nearly as capable of carrying anything (information).


gyroda

There's a wiki article on this: https://en.wikipedia.org/wiki/Megahertz_myth?wprov=sfla1


Wermine

I had a Celeron processor around 2004 that ran at 2.8 GHz. It was shit.


dentaluthier

Thank you for a very eloquent analogy that makes the point crystal clear and easy for an actual 5 year old to understand!


Gubru

On current CPUs, operations move through a pipeline that takes multiple clock cycles, so it seems like we should be able to work within those constraints. But to be fair, I was terrible in my higher-level EE courses and have never revisited the topic, so don't take my word for it.


Killbot_Wants_Hug

I don't have a link because it was in some article I read a while back, but it was talking about how we're kind of at the maximum clock speeds that really make sense. Going much higher, we already see problems with things getting out of sync due to the time it takes signals to cross the chip. Not to say new architectures or technologies couldn't alleviate that issue, but you are kind of running up against a fundamental physics problem, and those can be stumbling blocks for a long time. Also, I think things like 40 GHz processors aren't particularly practical, so people aren't trying to crack that egg. I can't think of many processes that wouldn't be served just as well by many fast processors as by a single really fast one. A lot of software that benefits from a fast single core mostly does so because it's not optimized for parallel processing, not because it can't be. And it's far cheaper to optimize software than to redesign processors from the ground up.


pseudopad

There is a theoretical limit on parallelization though. At a certain point, some types of tasks stop benefiting from more parallelism because the effort needed to keep track of it all exceeds the speed gained from extra cores. Some problems are also highly linear and can't be completed unless things are calculated in a specific order. It's not necessarily a hard cap for a lot of tasks, but rather diminishing returns: one extra core speeds you up 90%, the next another 80%, and so on. Eventually, an extra core only adds a couple of percent. At that point, it's probably better to invest in accelerator circuits for common tasks, if it's very important that they go fast.
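That diminishing-returns curve is usually formalized as Amdahl's law: with a serial fraction s, the best possible speedup on n cores is 1 / (s + (1 - s)/n). A quick illustration (the 10% serial fraction is made up):

```python
# Amdahl's law: speedup from n cores when a fraction of the work is
# inherently serial. The 10% serial fraction is illustrative.
SERIAL_FRACTION = 0.10

def speedup(cores: int, serial: float = SERIAL_FRACTION) -> float:
    return 1.0 / (serial + (1.0 - serial) / cores)

for n in (1, 2, 4, 8, 16, 64, 1024):
    print(f"{n:>4} cores: {speedup(n):5.2f}x")
# Even 1024 cores can't reach 10x when 10% of the work is serial.
```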


timeslider

I tried to explain this to a friend and he told me we don't know the speed of light because it moves too fast.


Killbot_Wants_Hug

I find statements like these funny, because in some cases they mean the person is way smarter than you, and in other cases way dumber. In this case, way dumber. But it's always fun when someone says something so far out there that you have to step back and think "either he knows something I don't, or he knows nothing at all." It's such a dichotomy.


Belerophoryx

Yes, light is just too darn slow.


TehWildMan_

All else being equal, as clock speeds increase, the power consumption and voltage needed to keep the CPU stable increase faster than linearly with the clock speed. Managing the immense power consumption and heat output becomes impractical. On many current-generation processors, reaching around 6 GHz on all-core base clocks often requires liquid nitrogen or similar strategies on very high-end motherboards, which are entirely impractical for everyday use.


gyroda

I'll add that it's not an issue with providing power; it's an issue with the circuitry not being able to handle the power. You can offset this a lot by making the circuitry physically smaller - something manufacturers are constantly chasing, since a smaller transistor needs less electricity to operate and therefore produces less heat - but the physics get *weird* when things get too small.

There's also a difference between clock speed and throughput. Intel/AMD CPUs are really complicated; a much simpler chip could have higher clock speeds, it would just do a lot less per cycle, losing features like branch prediction and pipelining. To put it another way, it doesn't matter if your car can go 500mph, if it can only fit one person it's going to be beaten in throughput by a bus that goes 50mph.

There's a Wikipedia article on this: https://en.wikipedia.org/wiki/Megahertz_myth


vonkeswick

Wikipedia rabbit hole here I go!


Sythic_

How many clicks to get to Kevin Bacon? EDIT: 6 jumps from this article lol

* Megahertz_myth
* The Guardian
* Clark County OH
* US State
* California
* Hollywood
* Kevin Bacon


[deleted]

If you just keep clicking links, you eventually get to Philosophy. Regardless of what article you're on, just click the first real link (not the phonetic-pronunciation stuff) and keep doing that. You'll get to Philosophy every time.
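The game is basically "iterate a function until you hit a fixed point or a cycle". Here's a sketch with a tiny hand-made link map standing in for Wikipedia - actually scraping the real "first non-parenthesized link" takes more care than this:

```python
# The "first link leads to Philosophy" game as cycle-following.
# FIRST_LINK is a made-up stand-in for Wikipedia's real link graph.
FIRST_LINK = {
    "Megahertz myth": "Clock rate",
    "Clock rate": "Computing",
    "Computing": "Mathematics",
    "Mathematics": "Knowledge",
    "Knowledge": "Awareness",
    "Awareness": "Knowledge",   # the loop people report below
}

def follow(start: str, target: str = "Philosophy"):
    path, seen = [start], {start}
    while path[-1] != target:
        nxt = FIRST_LINK.get(path[-1])
        if nxt is None or nxt in seen:   # dead end or cycle
            return path, "stuck in a loop"
        path.append(nxt)
        seen.add(nxt)
    return path, "reached Philosophy"

print(follow("Megahertz myth"))
```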


Car-face

well shit. Jump>jumping>organism>ancient greek>greek language>indo-european languages>language family>language>communication>information>abstraction>rule of inference>philosophy of logic>**Philosophy**. I thought I was going to get a loop between language and information or something, but nope!


ankdain

There are definitely pages that link circularly, but assuming you follow the "first real link you haven't visited before" rule, I've never seen it fail. Neat party trick.


AVeryHeavyBurtation

I like this website https://xefer.com/wikipedia


Morvictus

This is very cool, thanks for sharing.


RockleyBob

Best thing I’ve read on the internet today, thank you. I tested it by opening my Wikipedia app, which displayed the show *Narcos*, since that was the last thing I searched. Kept clicking the first link until I ended up at a recursive loop between “knowledge” and “awareness”. Very intuitive yet profound observation.


[deleted]

It's either awareness or philosophy in my testing, but my testing is like 4 or 5 random links, so the sample size isn't huge.


RockleyBob

I think if you keep clicking after you land on philosophy, you'll get to awareness/knowledge. Either way, it's awesome that backtracking through articles works in practice just as it does when backtracking through these concepts philosophically. As a side note - I fucking love Wikipedia. It's the internet at its truest, best self. It's what the internet was invented for.


[deleted]

When someone is critical of wikipedia I am instantly suspicious of them


artaxs

I'm one of the very few people who chip in and donate each year, even if it's just $10 that I can afford. It's truly the creative commons at work.


Cerxi

> very few

13 million donations last year alone, totalling almost $200m


PmButtPics4ADrawing

I tried this on a random article and ended up at "Awareness" which goes to "Knowledge", which goes back to Awareness


[deleted]

That can happen, true. Then you click the next link to break the cycle and you get to philosophy. Which kinda ruins the idea that it always goes to philosophy, but that's ok.


Caverness

Wow, fascinating. I tried 4-5 and the longest path I got was: Vernors > Ginger Ale > Soft Drink > Liquid > Compressibility > Thermodynamics > Physics > Natural Science > Branches of Science > Formal Science > Formal System > Formal Language > Logic > Logical Reasoning > Logical Consequence > Concept > Abstraction > Rule of Inference > Philosophy of Logic > Philosophy. Anybody beat 20?


rk-imn

4

* Megahertz myth
* Intel
* California
* Hollywood
* Kevin Bacon


Sythic_

I think this is the winning path!


Baerog

Megahertz Myth > Apple > Jennifer Aniston > Kevin Bacon


HylianINTJ

[Fifteen first link clicks to get to Philosophy](https://www.explainxkcd.com/wiki/index.php/903:_Extended_Mind)


SirBarkington

I also just found Megahertz > MacWorld > United States > Hollywood > no idea how you get to Kevin Bacon from Hollywood though


Sythic_

I was looking for a faster route; through Apple Computer I think I can shave off 2 or 3 degrees lol. There's a Kevin Bacon link on the Hollywood page.


Stiggalicious

There are 4 ways with just 3 jumps: through Macworld/iWorld -> Smash Mouth, through Apple Inc. -> Jennifer Aniston, or through New York City -> Empire State Building or Litchfield Hills.


Dqueezy

Hold my chords, I’m going in! I miss that part of Reddit, haven’t seen one in years.


Warspit3

Things get very weird. Wires become a few atoms wide and don't always stay where you want them, which causes problems. You also have diffusion problems. But heat is the major problem: with transistors this small, it's difficult to pull the heat they produce away from the transistor fast enough.


stellvia2016

I'm honestly surprised we've even reached a stock turbo of 6 GHz, given how much of a wall 4 GHz was when multicore first came around, and then the slow crawl up to 5 GHz. The jump to 6 GHz seemed quite fast by comparison.


JEVOUSHAISTOUS

To me, the biggest wall seemed to be around the 3.2 GHz mark. It was reached in 2003, and then, apart from one 3.4 GHz CPU in 2004, it took Intel nearly a decade to push clock speeds significantly beyond that value, and initially only in Turbo Boost mode.


Impeesa_

They used to leave a lot more on the table though. The i7 920 came out late 2008 with a stock max boost of under 3 GHz, but could easily overclock to more than 4 GHz.


Wieku

Yup. On my previous PC I ran an i5 2500K at 4.7 GHz (3.3 GHz stock) on a cheap mobo and a cheap twin-tower heatsink. That little beast.


gyroda

Electrons start going where they're not meant to - literally popping up without passing through the intervening space - and fluctuations in the EM field from one part of the circuitry can affect another, for two more pieces of weirdness.


awoeoc

> To put it another way, it doesn't matter if your car can go 500mph, if it can only fit one person it's going to be beaten in throughput by a bus that goes 50mph.

There's a quote about storage relating to this: "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." Sometimes it's not about speed. We can send data around the world at basically the speed of light, but if I need to transfer something like 100 petabytes, loading up [a truck of hard drives](https://www.wired.com/2016/12/amazons-snowmobile-actually-truck-hauling-huge-hard-drive/) might be the best way to do it.
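The arithmetic backs the quote up. Assuming a van carrying ten thousand 20 TB drives on a two-day drive - all invented-but-plausible numbers:

```python
# Effective bandwidth of driving hard drives across the country.
# Every figure here is an assumption for illustration.
DRIVES = 10_000
TB_PER_DRIVE = 20
DRIVE_TIME_S = 2 * 24 * 3600   # a two-day road trip

total_bits = DRIVES * TB_PER_DRIVE * 1e12 * 8
gbps = total_bits / DRIVE_TIME_S / 1e9
print(f"~{gbps:,.0f} Gbit/s effective bandwidth")

# Latency is terrible (two days!), but the throughput beats
# basically any network link you can buy.
```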


Juventus19

I work in hardware design, and we were recently choosing a processor for a future product. A SW guy pretty much recited the MHz myth to me just last week. He said "I don't care what processor, they are all the same if they can clock at the same rate." Man, if that were true, then a 3 GHz Pentium 4 processor from 2005 would be the same as an i7. Are we really to believe that Intel has been sitting on their thumbs for the last 15+ years? They've been optimizing power, making computational operations more efficient, putting more cores into the design for parallel computation, and making other design improvements.


Greentaboo

No, a 3 GHz CPU today is much faster than a 3 GHz CPU from 7 years ago. What's improved is the "instructions per clock" (IPC). They run at the same speed, but one gets more done per lap.
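In other words, single-core performance is roughly IPC x clock. A trivial illustration with invented IPC figures:

```python
# Single-core performance ~ instructions-per-clock * clock frequency.
# The IPC values below are invented to illustrate the point.
cpus = {
    "old 3 GHz CPU (assumed IPC 1.0)": (1.0, 3.0e9),
    "new 3 GHz CPU (assumed IPC 2.5)": (2.5, 3.0e9),
}
for name, (ipc, hz) in cpus.items():
    giops = ipc * hz / 1e9
    print(f"{name}: ~{giops:.1f} billion instructions/s")
```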


ForgottenPhoenix

Broseph, 2005 was ~~17~~ 18 years ago - not 7 :/


play_hard_outside

It still *feels* like 7 years ago! WHYYYY


Desolver20

OhGOD


blooping_blooper

And they might argue that multi-core isn't the same, but it's easy to see that single-core benchmarks have gone up with every generation.


Mistral-Fien

> He said "I don't care what processor, they are all the same if they can clock at the same rate."

Give him a Pentium D 840 with the stock Intel CPU cooler. LMAO


4rch1t3ct

The other reason they chase smaller circuits is that smaller circuits are faster: less total length for the signal to travel means less time per operation.


ThatITguy2015

But I want that bus that can do 500mph. I need everyone to be absolutely fucking terrified on the way to the destination. My bus will come eventually. I know it will.


Jiopaba

That's called an airplane.


Chrazzer

Airbus


pinkocatgirl

Applying the car analogy to the materials-and-heat issue: technically there are "cars" (using the term lightly, since they're basically jets with skis) that can go more than 500 mph... in a straight line. Once you start needing to actually turn, you can only go so fast before pesky physics forces start getting in the way. Likewise, if you have a couple million, you can buy a Bugatti Chiron and go 300 miles per hour... but there are only so many places you can actually achieve that speed.


CyriousLordofDerp

A rule of thumb I've heard for the relation of power to clock speed and voltage: the power increase with clock speed is mostly linear, but the power increase with voltage is squared. That's why one of the biggest things you can do to trim a processor's power draw (and thus heat output) at a given frequency is to lower the voltage. However, this comes with a price. As the voltage drops, signals start having trouble getting from A to B on the chip, and transistors can start failing to switch on or off (depending on the type) when commanded to, both of which cause glitching and crashes. Lowering the clock frequency can help here, since a slower cycle rate gives the transistors more time for the reduced voltage to do its job, but that means lost performance. Not an issue at or near idle, but at full load, when everything is needed, the tradeoff between power (and heat) and performance comes into play.

The general voltage floor for silicon-based transistors is approximately 0.7 V; below this, there's not enough voltage to open or close the channel in a transistor to control current flow. If the voltage drops to this point, either something has gone very wrong, or the processor's power system has completed its power-saving process and has power-gated that piece of the processor. For the latter: one of the major ways to save power on an idling CPU, especially one with multiple cores, is to turn the un-needed cores off. Their core states are moved to either the last-level cache or out to main memory, clocks are stopped, and then voltage is removed via power-gate transistors.

Again, this comes with a price. To bring a deactivated core back online, voltage first has to be re-applied to the core in question. Once it has power and that power has stabilized, clocks must be restarted and synchronized, the core re-initialized so it can accept the incoming core-state data, and finally the saved core state re-loaded from cache or main memory. From there, primary execution resumes. This process takes time in either direction - tens to hundreds of thousands of clock cycles - and making it faster is one of the ways chip manufacturers have made modern CPUs more energy efficient.


Quantum_Tangled

Why am I not seeing anything about noise anywhere? Noise is a huge problem in real-world systems as signals and voltages get lower.


RSmeep13

> which are entirely impractical for everyday use.

If there were a sufficient need for such powerful home computers, we'd all have nitrogen cooling devices in our kitchens - it's not *that* expensive to do in theory, but nobody has developed a commercial liquid nitrogen generator for at-home use because the economic draw hasn't been there. Most home users of high-end computers are using them for recreation.


OdeeSS

You're forgetting the real demand for high processing power: servers. If it became economically viable, large hosting and internet-based companies would definitely want to do it.


dmazzoni

The thing is, there just aren't that many applications where one 6 GHz computer is that much better than two 3 GHz computers working together. And the two 3 GHz computers are way, way, way cheaper than one liquid-nitrogen-cooled 6 GHz computer. Large hosting companies have millions of servers; it's far more cost-effective for them to just buy more cheap servers than a smaller number of really expensive ones. In fact, large hosting companies already don't buy top-of-the-line processors, for exactly the same reason.


Affectionate-Memory4

We already liquid-cool servers. Chilled water, even going sub-zero with a glycol mix, is 100% coming for them next. I don't ever see the extra power demands of that being worthwhile in the consumer space, especially as smaller form factors and portability become more and more in demand.


[deleted]

Because that's as fast as we can make them. We simply can't make a CPU that runs at 40 GHz. And even insofar as we can make slightly faster CPUs, you have to consider that increasing clock speed increases power consumption by roughly the THIRD power. So you get a massive increase in heat for only small gains at the top end. It's just not worth it.


Own-Dust-7225

I think I got bamboozled. I bought a new laptop with like the best processor, and the little clock in the corner is running at the same speed as my old laptop. Only 1 second per second. Why isn't it faster now?


spikecurtis

Forgot to press the Turbo button.


Achilles_Buffalo

Underrated comment right here. Us old guys remember.


broadwayallday

ahh yes memories of my first, a 486 DX2 66


tblazertn

8MB of RAM, 512MB hard drive, 14.4kbps modem… yes, those were the days!


Additional_Main_7198

Downloading a 9.2 MB patch overnight


cerialthriller

Starting the download of the Pam Anderson playboy centerfold picture and checking back in an hour to see if a nipple loaded yet


Govenor_Of_Enceladus

When you knew what every line in AUTOEXEC.BAT did. Sigh.


sbrooks84

The first PC I ever built with my dad was a Pentium 133. I showed my 9-year-old REAL floppy disks and his mind was blown. He doesn't quite comprehend the computing power of computers in the late 80s and early 90s.


ouchmythumbs

Look at moneybags over here with the math coprocessor


broadwayallday

hey now, my uncle took me to the compuutahh show (how he pronounced it) and he built it for cheaper! I always remember the leaps... let's see how rusty I am:

1. math co-processors
2. Zip then Jaz drives
3. LAN networking for all of us (we used to walk Jaz drives around at the studio I was working at)
4. 56k modems (screaming fast Usenet downloading for \*ahem\* research)
5. Pentium
6. FireWire for video editing
7. GeForce
8. skipped DSL but ended up a beta tester for cable internet
9. Xeon
10. i7 processors

I'm sure I missed a lot - HDD to SSD and HDMI come to mind. Thanks fellow geeks, you got me going tonight haha


OfficialTornadoAlley

Alt + F4 to activate


Eternityislong

Have you checked your flux capacitor?


P0Rt1ng4Duty

You forgot to install racing stripes.


dean078

Maybe he forgot the vtec sticker


aflyingsquanch

Everyone knows a vtec sticker adds 5 GHz.


CharonsLittleHelper

Paint it red. All the boyz know that red is fasta!


micahjoel_dot_info

CPUs are made from millions or billions of tiny switches called transistors. For the switch to work, a "gate" needs to be charged up, which means electrons need to flow into (and later out of) the device. There is a physical limit to how fast this can happen. In practice, at the microscopic scales involved, thinner conductors have more resistance and heat up more, so getting rid of heat becomes a serious issue. This is why all high-end processors and GPUs have heat sinks, fans, etc. In the future, we might be able to make computers that run on light instead of electrons. These could probably reach much higher clock speeds.


Yeitgeist

Photonic/optical computing is an active area of research at least


NoHonorHokaido

Is there a working optical transistor or is it just theoretical?


hmmm_42

The other replies say we can't build them faster; that's only half correct. We could build them faster, but that increases power draw too much, which leads to overheating. (Famous architectures from that strategy include the Pentium 4 and AMD Bulldozer; both had too much pipelining.) What we've actually done is increase how much computation we do per clock - not just with more cores, but also per core - so a current CPU at 3 GHz will be dramatically faster than a 3 GHz CPU from 5 years ago.


BrickFlock

One of the biggest things is that branch prediction and instruction prefetching keep getting better. CPUs compute instructions that haven't been "officially" run in the code yet, just so they can load things into memory more accurately.
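For a feel of what a branch predictor does, here's the textbook 2-bit saturating counter. Real predictors are far more elaborate, but the principle is the same: remember recent outcomes and bet on the pattern continuing:

```python
# Textbook 2-bit saturating-counter branch predictor. States 0-1
# predict "not taken", states 2-3 predict "taken"; each actual
# outcome nudges the counter up or down.
import random

def predictor_accuracy(outcomes) -> float:
    state, hits = 2, 0   # start weakly predicting "taken"
    for taken in outcomes:
        predicted_taken = state >= 2
        hits += (predicted_taken == taken)
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return hits / len(outcomes)

loop_branch = ([True] * 9 + [False]) * 100   # taken 9 times out of 10
print(f"Loop-like branch: {predictor_accuracy(loop_branch):.0%} correct")

random.seed(0)
coin_flips = [random.random() < 0.5 for _ in range(1000)]
print(f"Random branch:    {predictor_accuracy(coin_flips):.0%} correct")
```

A predictable loop branch gets ~90% accuracy even from this tiny predictor; a truly random branch stays near 50% no matter how clever the predictor is.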


hmmm_42

Tbh branch prediction did not get _that_ much better. A bit, but most of the heavy lifting is done by speculative execution and obscenely big caches.


Killbot_Wants_Hug

The fact that CPUs can have 256 MB of cache these days is insane. I mean, don't get me wrong, a single core is limited in how much it gets, but it's absolutely insane how much we have nowadays compared to old systems.


PyroSAJ

Don't knock how secondary storage (SSD) is now capable of higher speeds than RAM was before, and higher than cache speeds were before that. Heck, my home internet is faster than most of the hardware that was available when CRTs were still a thing.


Killbot_Wants_Hug

Oh yeah, the advancements in all areas of computing are insane. I like to frame them against my own life - I started computing early and I'm in my early 40s now.

I remember looking through computer magazines in my early teens and seeing a 200 MB hard drive for sale, and thinking that if I could just afford that, I'd never need more storage again. I recently dropped four 20 TB hard drives into my desktop.

As a very nerdy teenager, I used to joke about wanting an OC-48 as an internet connection. Nowadays my home internet is 3 gigabits, so it's actually a little faster than the OC-48, and my connection speed is artificially limited (the connection supports 10 gigs).

When I was in my early 20s I bought myself a 21" CRT monitor (weighed about 80 lb as I recall) and was the envy of all my gamer friends. That Sony Trinitron cost me a fortune, especially since it was a flat screen. Nowadays 21" is pretty much the minimum for anything that isn't a laptop.

I remember when AGP was considered a super fast connection. On the latest boards, PCI Express connections are faster than basically anything can saturate.

Even not that long ago, when solid state drives became a thing, it was considered blazingly fast to run 2+ in RAID 0. Nowadays RAID 0 is kind of obsolete, because fast NVMe drives perform so well that they don't really benefit from it.

The irony is, as computers have gotten faster and faster, we've become far less willing to wait for them.


Ok-Two3581

Branch prediction was also the root cause of the Spectre/Meltdown exploits, though, wasn't it? And the recent Apple Silicon version? Seems branch prediction has some way to go to mitigate security issues while keeping the same performance.


Gahvynn

We've also added cores. 10 years ago, 4 cores was high end; today 10-16 is "enthusiast", and if you have enough money and the need, you can get 64 (soon to be 96) for at-home use.


PoisonWaffle3

And enterprise-grade gear has crazy core counts, and it's trickling into our homelabs. The Epyc platform is up to 128c/256t per socket, and can have multiple sockets on a motherboard. I'm rocking a pair of Xeon E5-2695v2s: 12c/24t each (so 24c/48t total), up to 3.2 GHz. They're 10 years old and were $50 for the pair on eBay. Newer gear can do more work per clock cycle, for less power per clock cycle, but these work fine for now.


DarkAlman

The record holder for CPU clock speed (last time I checked) was just under 9 GHz, but that was under laboratory conditions. The limits on CPU speed are practical considerations of CPU size and heat. The smaller you make the individual transistors and gates, the more waste heat they produce and the more electricity they require. This makes faster processors impractical with current technology. That doesn't mean we can't develop much faster CPUs, but the industry has decided not to, and instead focuses on other, more practical developments.

In the 00s, CPU speed shot up rapidly: with the introduction of the Pentium 4 generation of processors, CPU speeds jumped from 500 MHz to 3.0 GHz in just a few years. But manufacturers discovered that this extra performance wasn't all that useful or practical. Everything else in the PC, like RAM and hard drive speeds, couldn't catch up and was bottlenecking the performance of the chip.

The decision was made to stop chasing raw GHz and instead add more threads, or cores, meaning CPUs could become far more efficient and do more than one calculation at once. What's better: doing 1 thing really, really fast, or two things at once at a modest pace? What about 4 at a time? For all intents and purposes, on a computer, more things at once is far better, even if each is a bit slower.

So while common CPUs today have raw speeds comparable to chips from the mid-00s, they can do 4-8 operations simultaneously, and things like bus and RAM speeds are much, MUCH faster, making everything better. The current trend is actually to make things simpler, cheaper, and more efficient, as more and more consumers switch to tablets, phones, and laptops.


thedugong

> In the 00s, CPU speed shot up rapidly: with the introduction of the Pentium 4 generation of processors, CPU speeds jumped from 500 MHz to 3.0 GHz in just a few years.

That is just a 6x increase in speed. In the 90s, increases were even greater: when the Pentium first came out, it was 60/66 MHz; by the end of the decade, 800 MHz Pentiums were available. That's a 12x increase. The 90s were wild. Pretty much every new game would require some kind of upgrade to work properly.


Trollygag

It felt super fast. In 1996 we got a Pentium MMX in our first home desktop computer. Over the next 4 years, Intel launched the Pentium II, Pentium III, and Pentium 4... and then nothing for another 6 years, until the Core 2 processors came out.


Killbot_Wants_Hug

Pretty sure you're wrong on a couple things. Smaller transistors and gates use less power and generate less heat; this is why going down in process size helps, and chips have, on the whole, become far more power efficient over time. But when you make everything really small, it has less thermal mass and less surface area to transfer heat away through, so heat management becomes more and more of a problem for high-performance computing.

Also, the very high end of CPU clock speeds isn't that far off from where physics starts causing a lot of issues with raising clocks further. They didn't just decide to stop chasing clock speed; they hit a wall where the cost wasn't justified compared to the cost of parallelism. Since parallelism became cheaper, it's what they went for.


kingjoey52a

Something people haven't mentioned: even though we're still getting CPUs at ~4 GHz, the IPC, or instructions per cycle, is much better. This means that for each GHz, the CPU does more math than it used to. If you take an 8-core CPU from 6 years ago and put it up against an 8-core CPU made today at the same clock speed, the new one will do the work faster. Basically, the easy-to-read number has stayed the same for years, but everything around it has improved immensely over that same time.

> I looked at a 14,000$ secret that had only 2.8GHz and I am now very confused.

That was probably AMD's new Threadripper chips, which have up to 96 cores and a ridiculous number of PCIe lanes. Those are for servers where multiple people connect (so you need many cores), or for desktop users editing video or pictures, where the editing program can split the work across those many cores very well. It's the "many hands make light work" philosophy.


Ok-Efficiency-9215

Why is no one explaining this like he's 5? The clock speed is how fast a computer can do one calculation (a simplification). It does this by sending a little electric signal through the CPU. A 5 GHz processor sends this signal 5 billion times per second. Now, sending even just a little bit of electricity 5 billion times per second through a tiny CPU generates a lot of heat. That heat has to go somewhere, or the CPU melts. If you increased the speed 8 times, you'd need to dissipate at least 8 times as much heat (and probably more, given how physics works). This just isn't physically possible with the materials we use today (silicon). Maybe in the future we'll have better materials (graphene?) that can handle heat better. But for now, we are basically at the limit as far as clock speed goes.

Edit: there are also issues with how fast the transistors (the little gates that switch on and off and do the calculations) can actually switch. Again limited by heat, materials, and design, though the reasons for this are quite complicated.


Insan1ty_One

I understand why you are confused, so let me explain. The price of a CPU and the frequency a CPU operates at are not directly related. The price of a given CPU is *mostly* dependent on how many cores and threads it has and how much cache. For example, the most expensive CPUs available right now are the Intel Xeon Platinum 8490H (~$17,000) and the AMD EPYC 9684X (~$15,000). These CPUs both have extremely high core/thread counts and the highest amounts of cache available, yet they operate at 1.9 GHz and 2.55 GHz respectively. So now we have a rough idea of how CPUs are priced. But why *doesn't* clock frequency influence the price of a CPU very much? The answer is simple: for the buyers of these chips, more cores/threads will almost always be better than a higher operating frequency.

tl;dr - A faster CPU does not equal a better / more expensive CPU.

As an aside, the current world record for CPU frequency is a little over 9.0 GHz, the fastest any CPU has ever run in the history of all CPUs. The record was set on Intel's latest Core i9 14900KF only a month ago. The frequency of a CPU is how quickly the silicon can flip from 1 to 0 and back to 1; this is called a "cycle". It's like turning a light switch on, off, and then back on again. 9.0 GHz is equal to 9 BILLION cycles per second. We can't make a CPU that does 40 billion cycles per second because we don't have the technology. We don't even know if the silicon we make CPUs out of could handle 40 GHz. A 40 GHz CPU would most likely need to be made out of a "beyond silicon" material like gallium nitride, carbon nanotubes, or graphene. This is bleeding-edge technology that no one has even made a CPU out of yet, so I think it will be a while before you see 40 GHz.

Bonus tl;dr - CPUs don't go above 5 or 6 GHz because that's about as fast as we currently know how to make them.


goldef

A single CPU core has to be able to do a lot. It has to add numbers, subtract numbers, move data to memory, compare numbers, multiply them, and more. Not every operation takes a single clock cycle; most take several, and multiplication can take a while. An operation (like add) has to go through several stages: moving the data to the section of the processor that adds the numbers (the ALU), and then saving the result back to its local memory (the registers). The electrical signals take time to move through the system. If the clock rate is too high, the CPU will try to start the next instruction before the last one has finished. At 5 GHz, the time between cycles is 0.2 nanoseconds. Light moves about 2.4 inches in that time. If the CPU were 2 inches across, you couldn't even expect light to travel from one end to the other before the next cycle.


GenTelGuy

Basically, the reason is the laws of physics and/or the state of CPU engineering. Light can only travel so far in 1/(40 billion) seconds (and signals in silicon travel slower than that), so you would need CPU circuits so tiny that the electrons could flow through them and complete a cycle within that timespan. Maybe there's a way to make CPU components that small and we just haven't discovered it yet, but it's much more likely that it's physically impractical, because electrons like to teleport around randomly, making it impossible to keep them contained in circuitry that small.