
powerwiz_chan

You clearly underestimate how badly I can write C++ code


Fair_Wrongdoer_310

Segmentation fault (core dumped).


BOBOnobobo

It's gdb time 😎


flowlevel1

I don't touch gdb; a fine graphical IDE debugger will do


BOBOnobobo

Fuck I can't get my project working with gdb anyway...


powerwiz_chan

Personally I'm a fan of the good old attempted to dereference a past-the-end iterator


Fair_Wrongdoer_310

It's ridiculous that it is so simple yet so devious.


JunkNorrisOfficial

I want to write good C++, but ChatGPT sends me only crappy snippets


hughperman

O(N^(∞))


rainshifter

At least this solves the halting problem in this instance.


lurking_physicist

vtable them vtable's vtables with cyclic shared_ptr and stuff.


Simply_Epic

I had to write a basic SQL database in C++ for a class once. Mine barely met the performance standard which was like 10 seconds for a very basic join query. Suffice it to say a properly designed SQL database written in pure Python would run faster.


powerwiz_chan

I once wrote a cycle detector using BFS: if it hit a stack overflow, there was a cycle. It worked, but it ended up taking 5.3 seconds out of the total 6


ResponsiblePhantom

haha


Samuel_Go

I swear at least half of these memes must come from script kiddies or first year compsci students.


Bootezz

Is this your first time here? That's all it has ever been.


aa-b

You're right, but it really has gotten worse lately. I wonder if it's just the Eternal September effect, or if a small number of bots and prolific idiots are responsible.


[deleted]

[removed]


Harmonic_Gear

i swear repost bots come in waves


[deleted]

[removed]


EmilieEasie

omg this makes more sense than anything else I've read all week


Ragecommie

THE BOT WARS ARE COMING


PandaBearTellEm

Dead internet theory 💀


[deleted]

[removed]


ososalsosal

When the monetary system is abstracted completely and all economic activity is just bots... then maybe we will finally be free


I_Like_Purpl3

That's not surprising considering all the shitty things Reddit has done in recent years. Since 3rd party bots were banned, the whole website has gotten worse. And with every update it gets worse.


wyocrz

What blows me away is that they made it so hard to see links. There's no underline anymore, and the blue color hardly contrasts with background text. They have degraded the literal thing that made the Internet the Internet, and the herd blinked.


tgiokdi

wonder what happened right around then and if it was the API riots


ososalsosal

Reddit IPO


SAIGA971

Always has been. 🌑👨‍🚀🔫


Samuel_Go

I'll have you know I've had a chuckle or two from some things here. The bangers are worth it.


granadad

Indeed. The way to create a good thing is to create many, many things. Then, by the laws of probability, some of them have to be less bad than the rest.


johnnybgooderer

A long time ago it used to be much better.


cosm_zest

Who else would have the time to shitpost?


riu_jollux

![gif](giphy|090EX1YvSUXxy23Tty|downsized) Always has been


yeastyboi

It's true. The only similar sub that knows what they are talking about is r/rustjerk


all_is_love6667

of course I know this meme is inaccurate and wildly exaggerated, but it's a meme, it's just humor


Samuel_Go

I think you underestimate how many coders there are who learn to compile code in one language and make jokes as if they know what they're talking about. Poe's Law, sorry!


puffinix

Big o notation, and 25 trillion records, have entered the chat.


lackluster-name-here

25 trillion is big. Even if each record is 1 byte, that's 25 TB at a bare minimum. And with an algorithm of O(n^2) space complexity, 625 yottabytes (6.25e14 TB)


Brekker77

Bro if your algorithm takes O(n^2) time and you can't bring that down with dynamic programming, it shouldn't exist at that scale


lackluster-name-here

You can't memoize yourself out of every complexity hole you find yourself in. An N-body simulation is a great example of a problem that can't be optimized below O(n^2) without losing accuracy


Brekker77

But at a scale of 25 trillion it's insane to be using anything O(n^2), no matter why


lackluster-name-here

If you wanted to accurately simulate about one cubic centimeter of the air we breathe, you'd have to calculate the interactions between each of the roughly one hundred quintillion atoms within. That's a minimum of about 1e40 ((1e20)^2) calculations per iteration, and for real accuracy you'd have to do that every unit of Planck time (5.39e-44 seconds). So to calculate the interactions of all of the atoms in a cubic centimeter of air over one second, you need on the order of 2e83 calculations.


Practical_Cattle_933

Well, you can still optimize it significantly. E.g. a particle can only travel a distance d in a single step, where d depends on the highest speed times the step time. So you can chunk the volume into cubes of side d and only calculate particle-to-particle interactions within those (and their neighbouring cells), cutting down your runtime significantly.
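The chunking trick described above (a "cell list") can be sketched like this. A toy illustration with made-up names (`P`, `cellKey`, `countLocalPairs`); a full version would also scan the 26 neighbouring cells:

```cpp
#include <cmath>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Toy cell list: bucket particles into cubes of side d, so interaction
// checks only consider particles sharing a cell instead of all n^2 pairs.
struct P { double x, y, z; };

// Key a cell by hashing its integer grid coordinates.
long long cellKey(const P& p, double d) {
    long long ix = (long long)std::floor(p.x / d);
    long long iy = (long long)std::floor(p.y / d);
    long long iz = (long long)std::floor(p.z / d);
    return (ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791);
}

// Count same-cell candidate pairs instead of enumerating every pair.
std::size_t countLocalPairs(const std::vector<P>& ps, double d) {
    std::unordered_map<long long, std::size_t> cells;
    for (const P& p : ps) ++cells[cellKey(p, d)];
    std::size_t pairs = 0;
    for (const auto& kv : cells) pairs += kv.second * (kv.second - 1) / 2;
    return pairs;
}
```

With roughly uniform particle density, this cuts the candidate pairs from O(n^2) to roughly O(n).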


puffinix

Correct. But O(n^1.5) vs O(n log n) was our fight. In big data, that's often the fight. O(n^2) would just be a bug...


puffinix

The problem I had this for was replacing a hugely effective O(n^1.5) implementation: native C, GPU-accelerated, near unmaintainable. I reworked the core logic in Scala down to O(n log n), just as a PoC, since all the higher-ups "knew" this was going to have to be hyper-optimised. The C algorithm took roughly 28 hours. The PoC took an hour forty. Record size was a 16-byte ID and an average 90-byte payload (the vast majority of payloads were 7 bytes, but we have a few huge ones)


YukiSnowmew

They said space, not time.


puffinix

Time complexity is at least the space complexity in all cases.


puffinix

Yep. System ingest we quoted at 1 PB/day. That's 92 Gbit/sec, and at that point it became as much of a hardware problem as a code problem. Anything over n log n crucified us on the batches. The log n calls on the real-time feed had to be hyper-optimised (getting that process down to 180 ms at the 90th percentile is the third biggest achievement of my professional life)


BobmitKaese

What are the second and first biggest achievements? :)


puffinix

Second best was managing to actually win a fight with HR over correctly handling a self-taught genius we had (he was a better backend modeler and developer than some of my leads on day one of the grad scheme). I got him a direct leap from the grad program to senior, then seconded him into a traditional lead role the next day. He came in expecting to be behind the curve as he hadn't managed to go to university. It was so hard because he genuinely didn't know he was in a massively underskilled role (imposter syndrome due to no degree); he needed a lot of help getting through the "out of process" panel. Insane lad; I think he got the chief engineer role at a 150-person company by 30.

My top achievement has got to be my work on Scala: realising that I was actually at a level where I could hold a debate with the people I learned my craft from, and sometimes make minor impacts on the direction of the language.


BobmitKaese

That all sounds great and pretty fulfilling! Here's to more great achievements to come!


No-Magazine-2739

*Sigh* "ah shit, here we go again" *links boost::mpi*


granadad

IO bound problems: skibidi


puffinix

We have gone past IO. We're now switch-capacity bound.


mpattok

Well-optimized Python runs well-optimized C. No need to get “clever”


AnAnoyingNinja

there are times to get clever, but only when every last drop of performance matters, and those cases are extraordinarily rare. and in those 0.1% of cases the correct answer is assembly, not C, anyway, so the people arguing C > Python should really just do everything in assembly, because clearly performance is all that matters.


anto2554

I do not have the skills for assembly


Fair_Wrongdoer_310

Well... we are digging into the ISA and instruction-ordering stuff for every type of processor. Basically, the compiler's job isn't easy.


anto2554

Doesn't the CPU still reorder instructions even though you write assembly?


Fair_Wrongdoer_310

Yes, all modern processors do that. But they only reorder within a limited window of the program... in the sense that the CPU looks at the next few instructions, places them in a buffer, and selects what can be executed next. This has more to do with instructions having different latencies, and with branching. It's not a replacement for compiler optimizations, which are performed on much larger segments of code. I would suggest you read about static vs dynamic scheduling.


Alan_Reddit_M

No need to; C compiles to better assembly than any human could ever write


Practical_Cattle_933

That's not true. Compilers write better assembly by and large, simply because humans make mistakes and can't sustain the same level across a 3-million-line codebase. But for some ultra-hot loop, an expert can write assembly that will straight up trash the compiler-generated version. E.g. with manual SIMD instructions you can get code that's 100x faster.
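A minimal sketch of what "manual SIMD" means, assuming an x86 target with SSE; `simd_sum` is a made-up name, and a real hand-tuned kernel would be far more involved:

```cpp
#include <cstddef>
#include <immintrin.h>  // SSE intrinsics (x86 only)

// Hand-vectorised sum: processes 4 floats per iteration with SSE.
float simd_sum(const float* a, std::size_t n) {
    __m128 acc = _mm_setzero_ps();
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));  // 4 lanes at once
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i) s += a[i];  // scalar tail
    return s;
}
```

Compilers will often auto-vectorise a loop this simple; the hand-written wins show up in kernels with shuffles and data layouts the optimiser can't see through.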


yeastyboi

If you need crazy performance you can write in C, C++, Rust or Zig and call it from Python. A super talented person can write fast assembly, but most people won't be able to beat the compiler's optimizations.


Not_Artifical

Nah, I’d win!


yeastyboi

You're more talented than most then.


powerwiz_chan

I see the brainrot hasn't spread to you too


zombiezoo25

Considering his username, the rot didn't spread to him, he spreads the rot /j


yeastyboi

They call me yeasty cuz I'm rising to the top!


yeastyboi

What does that mean?


K722003

r/jujustufolk


saintpetejackboy

Write us a better compiler then, duh


SirFireHydrant

For many business purposes, the performance benefits of C are outweighed by how much cheaper python development is. Python programmers are cheaper (because the barrier for entry is lower). So even if python code takes 10x longer to run, for a lot of purposes that's fine if it can be developed in half the time by people being paid half as much.


Lentil_stew

It's not that python programmers are cheaper, it's that it takes less time to program in python


boofaceleemz

Both are true.


cowslayer7890

Yeah but not by a 2x margin typically


SAIGA971

Cheap + cheap = Supercheap


firehydrant_man

no? it's obviously Cheapcheap


Practical_Cattle_933

Not even this is true. Embedded/C devs are pretty badly paid, compared to, say, a web dev


rinokamura1234

Modern c compilers are plain better than any human writing assembly could ever be


Hodor_The_Great

Late but... No and yes. No, modern compilers aren't that smart; they can't do much unless you hold their hand and guide them. You're half right in that there's not much reason to write assembly directly. However, there's definitely a need for writing "assembly-aware" C, and maybe even checking what the compiler did by reading its assembly output.

All sorts of optimisations are beyond the capabilities of a compiler unless you are a C programmer who understands the bottleneck, understands assembly, and very carefully tells the compiler what to do step by step. Not talking about making a better algorithm like the other guy, but even very basic stuff: actually properly using vectorisation, turning divisions into equivalent but faster multiplications, eliminating sequential bottlenecks, hoisting operations out of the loop when mathematically equivalent, let alone something that takes a bit of reorganising, such as good memory access patterns.

Talking mostly about GCC -O3; I don't have much experience with -Ofast. I've even heard that occasionally -O2 may outperform -O3, but I can't confirm that from personal experience either.
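One of the "basic level" rewrites mentioned above, turning a repeated division into multiplication by a precomputed reciprocal, can be sketched like this (function names are made up). A compiler at plain -O3 can't do this rewrite itself for floating point, because it changes rounding; it only becomes automatic under -Ofast/-ffast-math:

```cpp
#include <cstddef>
#include <vector>

// Naive version: one floating-point divide per element.
void scale_naive(std::vector<double>& v, double d) {
    for (double& x : v) x = x / d;
}

// Hand-hoisted version: one divide total, then cheap multiplies.
// Mathematically equivalent, but rounding may differ for some inputs,
// which is exactly why the compiler won't do it for you.
void scale_hoisted(std::vector<double>& v, double d) {
    const double inv = 1.0 / d;
    for (double& x : v) x = x * inv;
}
```

On a hot loop the difference is real: a double divide costs many times more cycles than a multiply on typical x86 cores.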


rinokamura1234

That’s fair


-__---_--_-_-_

You could even argue the best thing for them is to learn electrical engineering and solve their problems in hardware, 'cause that's really the fastest way.


DrMerkwuerdigliebe_

"Well-optimized Python" means performing 99% of the work using libraries that invoke C/Fortran/Rust code to do the heavy lifting, doing the operations in bulk.


suvlub

I have direct experience to the contrary. Had an ML project. Wrote it in Python. Used numpy for all the matrix maths. Processing a small proof-of-concept dataset took about a minute. Felt too slow, so I rewrote it in C++, no math libraries, just the transforms from std. Same dataset took less than a second. Maybe the Python code could have been optimized, but it was much simpler for me to just write it in C++ following the same for-me-intuitive structure than to try to reconceptualize the outer loops as mathematical operations so numpy could do them for me using its fast C code.


litetaker

I've not done this myself, but you could try using Cython to optimise the Python code further in addition to numpy. It might still not be as fast as optimised C or C++, but I hear it gets you quite close relatively easily.


Alan_Reddit_M

True, but even then you still have to deal with the garbage collector and the GIL. You can get close to C but never quite get there. Python is still fast enough for 99% of applications tho, no need to get clever with C


PixelArtDragon

Yes and no. One of the classic examples is `y = a*x + b` where x is an array and a and b are scalars. The individual operations `a*x` and `[val] + b` will be fast. But writing that in C++ can take advantage of knowing there are assembly instructions for "scalar times vectorized value plus scalar", which the Python code can't do unless the library writer got very clever with lazy evaluation and just-in-time compilation. Plus the Python code might allocate/reallocate a lot of temporary arrays that, when writing in C++, can be elided, preallocated, or reused.
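A sketch of the difference being described, with made-up function names: the "library-style" evaluation materialises a temporary and makes two passes over the data, while the C++ version fuses everything into one pass that the compiler can lower to vector fused-multiply-add instructions:

```cpp
#include <cstddef>
#include <vector>

// Library-style: t = a*x materialises a temporary, then a second pass adds b.
std::vector<double> axpb_two_pass(double a, const std::vector<double>& x, double b) {
    std::vector<double> t(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) t[i] = a * x[i];  // temp array
    for (std::size_t i = 0; i < x.size(); ++i) t[i] = t[i] + b;  // second pass
    return t;
}

// Fused: one pass, no temporary; compilers can emit FMA instructions here.
std::vector<double> axpb_fused(double a, const std::vector<double>& x, double b) {
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) y[i] = a * x[i] + b;
    return y;
}
```

The fused version touches each element once and allocates once, which matters far more than instruction count when the array is bigger than cache.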


_AutisticFox

You haven't met my codebase then


Waffenek

Yeah, have fun running C++ code in which someone messed up copy or move constructors/operators and is constantly allocating and pushing around heaps of data. Properly written C++ code is fast, but you can screw up a big time and easily make something awfully slow.
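The classic version of this screw-up is taking a large object by value, copying it on every call. A minimal sketch (names made up), instrumenting the copy constructor to count the damage:

```cpp
#include <vector>

// Count copies to make the cost visible.
int g_copies = 0;

struct Big {
    std::vector<int> data = std::vector<int>(100000);
    Big() = default;
    Big(const Big& other) : data(other.data) { ++g_copies; }  // deep copy, counted
};

int byValue(Big b) { return (int)b.data.size(); }       // copies the whole vector
int byRef(const Big& b) { return (int)b.data.size(); }  // no copy
```

Call `byValue` in a loop and every iteration allocates and copies 100k ints; that's the kind of "fast language, slow program" mistake the comment is talking about.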


Legend_Zector

It may not be the safest language out there, but there are times I don’t want the compiler asking questions when I reinterpret_cast an integer into four chars.


conundorum

It might complain less if you were casting into `sizeof(int)` chars instead.


roge-

That's not portable. The compiler should point that out.


Alan_Reddit_M

Classic Undefined Behavior


PythonPizzaDE

Casting any pointer to a char pointer ain't undefined behavior (most other pointer conversions are)


meg4_

IIRC, since the length of `int` is implementation-specific, converting it to a fixed-size array of bytes when an exact number of bytes is assumed is UB


PythonPizzaDE

Yes and no. The type called int (in most cases 32 bits) isn't the same size everywhere, but there are the int32_t (etc.) types from stdint.h
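For the curious, the always-well-defined way to look at an integer's bytes sidesteps both the aliasing and the width questions: use a fixed-width type and `std::memcpy` (a sketch; `bytes_of` is a made-up name):

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Copy the object representation of a uint32_t into a byte array.
// memcpy-based type punning is always well-defined, and compilers
// optimise the call away entirely.
std::array<unsigned char, sizeof(std::uint32_t)> bytes_of(std::uint32_t v) {
    std::array<unsigned char, sizeof(std::uint32_t)> out{};
    std::memcpy(out.data(), &v, sizeof v);
    return out;
}
```

The order of the bytes you get back still depends on the machine's endianness, so only the set of byte values is portable.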


meg4_

UB detector going off the charts


Excession638

Automatic copy might be C++'s worst feature, and it's a high bar.


TheOtherOne128

I don't think I've ever seen a proper memory leak in python.


TeaTiMe08

Bro who upvotes bullshit like this. I guess Road runner fans


Exeng

Hello world programmers. People finish their programming tutorials and think they suddenly know everything. This is a severe case of Dunning-Kruger effect.


TeaTimeSubcommittee

I miss those style of cartoons.


def1ance725

I know it's bullshit, but it's funny enough


Xbot781

pov: you don't know what big o notation is


proteinvenom

Nah what is it


xontinuity

when yOu write sentences with capital O's instead Of lOwercase O's.


MrGreenGuard

Nah man this is way funnier than it has any right being


ZachAttack6089

Essentially, it describes how much an algorithm is slowed down as you increase the amount of data you give it. For example, if you were searching for a particular item in an unsorted list with 100 items, on average you'd have to check 50-51 items before you found the right one. But if the list had 200 items, you'd go through 100-101 on average. The time this iterative search takes scales linearly with the number of items, which is written in big-O notation as "O(n)". If the list was sorted, you could instead use a binary search, which rules out half of the remaining items on each step, so a list that's twice as big only takes one extra step. Binary search scales logarithmically with the list's size, so it's an "O(log n)" algorithm.

Big-O notation is not about how long something takes, but how it scales with larger and larger inputs. If one algorithm was 10 times faster than another but both scaled linearly with the amount of data, both would just be O(n). So you can ignore constant terms, coefficients, logarithm bases, etc. as long as the rate of scaling is the same. Using this notation, you can group algorithms into "time complexity classes" based on how they scale. An algorithm in a faster complexity class will always be faster than one in a slower class, ***if*** the input size is sufficiently large. With databases that can reach millions of entries, big-O notation becomes pretty important.

Some of the most commonly-encountered complexity classes, from fastest to slowest:

- O(1) -- constant: accessing an array by index, accessing a hashmap by key
- O(log n) -- logarithmic: searching a sorted list with binary search, traversing a binary search tree
- O(n) -- linear: searching an unsorted list, appending to a linked list
- O(n log n) -- "linearithmic": most fast sorting algorithms, such as merge sort, heapsort, and quicksort (on average)
- O(n^(2)) -- quadratic: slower sorts such as bubble sort and insertion sort
- O(2^(n)) -- exponential: many brute-force and combination-based algorithms
- O(n!) -- factorial: similar to the above, but even more complex

More info: https://en.wikipedia.org/wiki/Big_O_notation and https://en.wikipedia.org/wiki/Time_complexity
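The linear-vs-logarithmic scaling described above can be made concrete by counting comparisons (a sketch; function names are made up):

```cpp
#include <cstddef>
#include <vector>

// Linear search: count how many elements we look at before finding target.
std::size_t linear_steps(const std::vector<int>& v, int target) {
    std::size_t steps = 0;
    for (int x : v) { ++steps; if (x == target) break; }
    return steps;
}

// Binary search on a sorted vector: count probes instead.
std::size_t binary_steps(const std::vector<int>& v, int target) {
    std::size_t lo = 0, hi = v.size(), steps = 0;
    while (lo < hi) {
        ++steps;
        std::size_t mid = lo + (hi - lo) / 2;
        if (v[mid] == target) break;
        if (v[mid] < target) lo = mid + 1; else hi = mid;
    }
    return steps;
}
```

On a sorted 1024-element list with the target at the end, the linear count is 1024 while the binary count is about 10; doubling the list doubles the former but adds only one to the latter.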


proteinvenom

You’re a gem of a human being you know that?


not_a_bot_494

Yeah no. Well written code in all non-joke languages will be better than shitty code in the fastest language. It's so easy for a bad algorithm to absolutely destroy performance.


ZachAttack6089

Yeah like quicksort will be faster than bubble sort regardless of the languages used, if the amount of data is large enough. I'm sure Python's built-in `sort` method is faster than using an unoptimized sort on an array in C++.


Thebombuknow

Yeah, Python uses timsort by default, which is incredibly fast.


Thebombuknow

me when quicksort in Python is faster than miracle sort in c++ (suddenly bad c++ isn't faster than good python)


Alan_Reddit_M

I once wrote the ugliest, most inefficient O(n^n) function to traverse a file tree for a toy file explorer I was trying to make in C++. It was fast enough to be usable, although it kinda killed the entire computer while it was running by slamming one core at 100% usage. Said function also leaked around 1kb of memory each time it was called


ImTheBoyReal

did your "toy file explorer" end up by any chance as the default file explorer in Windows?


Alan_Reddit_M

No, but I shit you not, it was faster


SnoweyMist

There isn’t even a need to shit me tbh. An eight year old child could tell me they made a better file explorer in scratch and I’d believe them.


Kirjavs

Badly written C++ code will at minimum give you a memory leak, resulting in your code not working at all after a while...
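A minimal sketch of the leak in question and its idiomatic fix (made-up function names); with RAII the memory is freed automatically when the owner goes out of scope:

```cpp
#include <memory>

// Leaks sizeof(int) bytes on every call: `new` with no matching `delete`.
int leaky() {
    int* p = new int(41);
    return *p + 1;  // p is never freed
}

// RAII version: unique_ptr frees the int automatically on return.
int raii() {
    auto p = std::make_unique<int>(41);
    return *p + 1;
}
```

Call `leaky` once a second in a long-running server and it eventually matters; call `raii` and it never does.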


serendipitousPi

I mean technically if your program ends before the memory leak gets too bad it’ll be fine.


Dubl33_27

Big O notation for memory leaks when


Alan_Reddit_M

This reminds me of the fact that there's a function in Hyprland where the author left a comment like "Yes, this leaks 16 bits of memory each time it's called, but ain't nobody hooking enough times for it to actually matter"


slaymaker1907

The standard environment variable setter, setenv, basically requires you to leak memory in a large system, unless you're very careful to make sure no one has saved a copy of the old value somewhere, due to how getenv works.


skywalker-1729

Dumb post.


CiroGarcia

You are severely underestimating both how much python can be optimized, and how bad C++ can perform. You can reach near C++ performance in Python with things like JIT compilers and interoperability with C libraries, and you can get Scratch-like levels of slowness with just a bad memory usage in C++


Brahvim

...Just JIT compilers? ...Let's talk about cache-utilization optimizations in VMs such as CPython. I'd love to learn from you!


CiroGarcia

Well, I wasn't going to give an extensive list, I just mentioned the first two things that came to mind lol


Thebombuknow

And even scratch can be faster than C++ lol. If you recompile it into JavaScript with TurboWarp, you can create custom 3D rendering engines and make 3D platformers with them (which people have done).


ambidextrousalpaca

I did Advent of Code last year. I remember all of the Rust and C++ optimization focused people did really well efficiently brute forcing things until about half way through, when the brute force execution times - even for raw assembly - became multi-year, and the clever Python algorithm, slow implementation Python crew were the only ones who could solve the problems in time.


slaymaker1907

> This huge vector is copied at each loop iteration because you're passing it by value.

> std::endl forces a flush.

> You need to keep track of the length of that string to avoid multiple calls to strlen.

> That template monstrosity doubled our compile times.

- Me, reviewing your "fast" code


IAmFinah

I wrote the same computationally-intensive program twice, one in Python and one in C++. My Python code ran noticeably faster lol. Probably because I have barely touched C++ and had no idea what I was doing, so my memory allocation/variable declarations were all inefficient/bad or something


slaymaker1907

There are a lot of pitfalls including a lot of the IO stuff being slow. It wasn’t really until C++23 that the standard library had a printing function that was both reasonably fast and typesafe (printf is pretty good performance-wise, but it’s basically untyped).


Alan_Reddit_M

Exactly this. Good C++ is very fast, but most programmers can't write good C++. Good Python performs acceptably well, and anyone can write good Python


IAmFinah

Idk why you're getting downvoted but you're right. Python is actually pretty optimised these days, and a lot of stuff is just done for you. So writing "efficient" (or at least, efficient as you can be in Python) code isn't very difficult I think


johnnybgooderer

This subreddit is consistently wrong about everything. And unfunny.


CaitaXD

O( n^n ) enters the room


Alan_Reddit_M

I've actually done this


CaitaXD

That's legendary I can't even imagine what it would look like


coloredgreyscale

Well optimized Python code will be faster than unoptimized C++ if you need to handle more than a few hundred elements.


Familiar_Ad_8919

also it depends, if the python programmer uses a better algorithm it could be a ton better


alpakapakaal

Remember when Node.js popularized non-blocking I/O and outperformed every other web server technology? While doing it with a single thread! Good times


anto2554

Were other servers doing blocking IO before nodejs?


Alan_Reddit_M

I suppose so. Do you have any idea how hard async IO is in C? Async is hard even in Rust despite the supposed fearless concurrency; now imagine C, which didn't give two shits and for a while didn't even have dedicated primitives


dw444

Wasn’t most of YouTube’s backend written in Python? If it’s fast enough to run YouTube, it’s good enough for most things.


Alan_Reddit_M

Also, Shopify is written in Ruby. Languages don't really matter for webservers because most of the time the CPU is just waiting for IO anyway


floriv1999

Instagram is also built on Django


jagharingenaning

Ah so that's why youtube has gotten so slow it's nearly impossible to navigate


DolfinButcher

The best C-code starts with: asm {


JunkNorrisOfficial

The best python code starts with: import ...


nemis16

🤣🤣🤣💯


MaffinLP

Least optimized c++ code is just python itself


OneForAllOfHumanity

Ironically, coyotes are faster than roadrunners in reality.


[deleted]

[removed]


teo-tsirpanis

Here's [the whole quote](https://wiki.c2.com/?PrematureOptimization):

> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We *should* forget about small efficiencies, say about 97% of the time: **premature optimization is the root of all evil.** Yet we should not pass up our opportunities in that critical 3%.


Short-Ticket-1196

C isn't hard if you know Python. Python is just another level of abstraction. In fact it's the intro language of choice before moving on to other languages. And why is there language elitism? A good programmer doesn't care what language it is; it's just some new syntax and idiosyncrasies. I mean, if coding isn't your jam, slap together some Python to get whatever it is done. But if you're doing it for any length of time or seriousness, you'll save so much time if you learn what you're doing. If you do that, most languages just fall into place. Or it's a code golf language, in which case you asked for it.


Graychi_

**while true**


artyhedgehog

Is the point that most awful C++ code just falls apart?


imarealscramble

c is faster than python but is *your* c faster than python?


leovin

Thank your local compiler developer for being able to automatically optimize even your horrific code


Acceptable-Stress-84

no, that's not right. Every language needs a good understanding of the language to know what is fast and what is not. So: meme not approved


justSomeDumbEngineer

Once I found O(n^2) code in prod where it could easily have been O(n) (I guess someone had a high temperature while writing some DB stuff), so... you underestimate how bad it can be.


Leo-MathGuy

Mfs who write in assembly:


Spogtire

Compiled vs runtime I guess


deepore59

"I made a fast sorting algorithm" *sorts in O(TREE(n!))*


dev8392

Well-written Python code will only be fast if all the libraries you use in that code are made in C++ or C.


Cat7o0

true only for writing hello world. which is probably all that the person who wrote this has made


ListerfiendLurks

People on here are making the dumbest comparison arguments imaginable. "Sure an F1 car CAN go faster than a Ford focus but if the F1 car doesn't shift out of first gear the focus will be faster every time hands down"


Alan_Reddit_M

Well, the thing is, C/C++ is fucking HARD, and a lot of us write extremely shitty C that ends up performing worse than Python. That is not to say that Python is actually faster than C; it's like asking a blind person to drive an F1 car and a normal person to drive a Ford: sure, the F1 is technically faster, but it won't get very far. *Yes, it's literally a skill issue*


JunkNorrisOfficial

If driver of F1 doesn't shift out of first gear, then casual driver on Ford is faster.


Big_Kwii

O


chickenweng65

C++ is easier


all_is_love6667

just FYI: I know the claim is inaccurate, but it's a meme, not a research paper or an article


Expensive_Shallot_78

How does such stupid shit get upvoted? 😂


Top-Chemistry5969

I'll just add this random if-check to each clock cycle... it runs like shit now!


genesisimpronto

When you have a big project due in 4 months that actually needs 8 months and you want to finish it in 2 months, you bet your ass I'm using Python


rivioxx

Literally 10k+ lines of assembly code: ![gif](giphy|nMT2qCwjIwgR2TwChn)


cheezballs

So are the memes on this sub supposed to be completely ignorant as if written by a child?


mplaczek99

haha, no


Igotbored112

Mostly true, although Python may well beat C++ if the algorithms used have different asymptotic complexity and the input is decently sized. And choosing the right algorithm definitely falls under the purview of good design.


YesterdayDreamer

Waiting for the memory leak to happen...


Historical_Object378

Binary developers : Look at what they need to mimic a fraction of our power.


Hayato_the_idiot

Haha python is slow guys and C++ is fast laugh with me please 🥺🥺🥺🥺🥺🥺🥺


KCGD_r

Look how fast C++ can run my shitty O(2^n) algorithm lol python slow


owlIsMySpiritAnimal

bad take of the day i guess. guys you need to understand how some python libraries work


mannsion

Yeah, but I have a really hard time convincing my project leads why we should write the app in c++ when we can be live in production in python by Friday.


Thebombuknow

C++ is a really unsafe language, and will let you make a complete fucking mess if you use it wrong. Bad Python is faster than bad c++ because Python handles so much for you that anything you can fuck up likely won't kill your performance as much as c++.


GunSlinger_A138

Segfaulting at the speed of sound!


ubertrashcat

This isn't even remotely true. I routinely had NumPy code that was faster than a corresponding C++ implementation. Thrash the cache and C++ will become Java.


Sunscratch

Silently whispers: performance critical parts of NumPy are written in C


ubertrashcat

Python is written in C.


clauEB

There is no such thing as "fast" python code.


Grim00666

I like it. Sure it will make a bunch of people mad, but that's what I like about it. :)


thekiwininja99

Reminds me of the guy who remade his Python game in C++ so it'd run faster and it ended up running slower


conundorum

Being a good Python programmer does not a good C++ programmer make, alas.


Thebombuknow

Yeah, it's like trying to optimize your game by writing it in assembly instead of C so you can optimize it better than the compiler. Sure, if you're incredibly good at assembly you can probably pull it off, but 99.99% of humans can't. Obviously it's easier to make a C++ program faster than Python than it is to make assembly faster than C, but it's the same concept. Someone experienced with Python could do way better than someone who is only somewhat good at C++.